# Basics

The **convolutional** block is one of the basic building blocks of deep learning. In this post we go in-depth with convolution in one dimension and cover the basics of kernels, strides, and padding, explaining each idea visually and verifying it with PyTorch code.

The kernel takes an input and produces an output, which is sometimes referred to as a `feature map`.

The kernel is made up of several parts. This is a very simplified picture of what it has: the weights, bias, stride, and padding are some of them.
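These parts are visible directly on a PyTorch layer. As a quick illustration (the hyperparameter values here are arbitrary), we can inspect them:

```
import torch.nn as nn

# An arbitrary example layer: 1 input channel, 1 output channel, kernel of size 2
m = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=0)
print(m.weight.shape)       # torch.Size([1, 1, 2]) -> (out_channels, in_channels, kernel_size)
print(m.bias.shape)         # torch.Size([1])       -> one bias per output channel
print(m.stride, m.padding)  # (1,) (0,)
```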

# Kernel Size = 1, Stride = 1

Here the size of the kernel is 1. It has a single weight and a single bias.

Input is `[2, 3, 4]`

`Stride is 1`, therefore the kernel moves 1 slot after every operation.

The outputs are:

- 2 * weight + bias
- 3 * weight + bias. The kernel moves 1 slot and operates on 3
- 4 * weight + bias. The kernel moves 1 slot and operates on 4
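The three bullet points above can be checked with a few lines of plain Python. The weight and bias values here are made up just for illustration; a real layer initializes them randomly:

```
# Hypothetical weight and bias values, chosen for illustration only
weight, bias = 0.5, 0.1
inputs = [2.0, 3.0, 4.0]

# kernel_size=1, stride=1: each input element is scaled and shifted independently
outputs = [x * weight + bias for x in inputs]
print(outputs)  # [1.1, 1.6, 2.1]
```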

We implemented this in PyTorch and obtained the same result.

```
import torch
import torch.nn as nn

# Conv1d with a single weight and a single bias
m = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=1, stride=1)
input = torch.tensor([[[2., 3., 4.]]])  # shape (batch, channels, length)
print(input)
output = m(input)
print(output)
# Verify each output element by hand
print(2 * m.weight + m.bias)
print(3 * m.weight + m.bias)
print(4 * m.weight + m.bias)
```

# Kernel Size = 2, Stride = 1

Here the size of the kernel is 2. It has **2 weights** and a bias.

Input is `[2, 3, 4]`

`Step 1`: The weights w0 and w1 operate on inputs 2 and 3. This gives the output 2 * w0 + 3 * w1 + bias.

`Step 2`: The kernel moves 1 slot. The weights w0 and w1 operate on inputs 3 and 4. This gives the output 3 * w0 + 4 * w1 + bias.

We implemented this in PyTorch and obtained the same result.

```
import torch
import torch.nn as nn

m = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=1)
input = torch.tensor([[[2., 3., 4.]]])
# The two weights and the bias
print(m.weight[0][0][0], m.weight[0][0][1], m.bias)
# Verify each output element by hand
print(2 * m.weight[0][0][0] + 3 * m.weight[0][0][1] + m.bias)
print(3 * m.weight[0][0][0] + 4 * m.weight[0][0][1] + m.bias)
output = m(input)
print(output)
```

# Kernel Size = 2, Stride = 2

Here the size of the kernel is 2. It has **2 weights** and a bias.

Input is `[2, 3, 4]`

`Step 1`: The weights w0 and w1 operate on inputs 2 and 3. This gives the output 2 * w0 + 3 * w1 + bias.

`Step 2`: The kernel moves **2** slots. Therefore, the kernel cannot operate on 4, and only a single output is produced.

```
import torch
import torch.nn as nn

m = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=2)
input = torch.tensor([[[2., 3., 4.]]])
print(m.weight[0][0][0], m.weight[0][0][1], m.bias)
# Only one window fits: inputs 2 and 3
print(2 * m.weight[0][0][0] + 3 * m.weight[0][0][1] + m.bias)
output = m(input)
print(output)
```
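The PyTorch documentation gives the output length of `Conv1d` as floor((L_in + 2 * padding - kernel_size) / stride) + 1, which explains why only one output is produced here. A small helper (the function name is ours) makes this concrete:

```
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0):
    # Output length formula from the PyTorch Conv1d documentation
    return (l_in + 2 * padding - kernel_size) // stride + 1

# Input of length 3, kernel_size=2, stride=2: the trailing 4 is never covered
print(conv1d_out_len(3, kernel_size=2, stride=2))  # 1
```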

# Kernel Size = 2, Stride = 2, Padding = 1

Here the size of the kernel is 2. It has **2 weights** and a bias.

With padding = 1, a zero is added on both sides of the input, as you can see in the figure.

Input is `[2, 3, 4]`

`Step 1`: The weights w0 and w1 operate on inputs 0 and 2. This gives the output 0 * w0 + 2 * w1 + bias.

`Step 2`: The kernel moves **2** slots. The weights w0 and w1 operate on inputs 3 and 4. This gives the output 3 * w0 + 4 * w1 + bias.

```
import torch
import torch.nn as nn

m = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=2, padding=1)
input = torch.tensor([[[2., 3., 4.]]])
print(m.weight[0][0][0], m.weight[0][0][1], m.bias)
# Verify both output elements by hand
print(0 * m.weight[0][0][0] + 2 * m.weight[0][0][1] + m.bias)
print(3 * m.weight[0][0][0] + 4 * m.weight[0][0][1] + m.bias)
output = m(input)
print(output)
```
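All of the cases above follow the same sliding-window rule, which we can sketch as a small pure-Python function (a simplified single-channel version; like PyTorch's `Conv1d`, it actually computes cross-correlation). The weight and bias values in the example are made up for illustration:

```
def conv1d(xs, weights, bias, stride=1, padding=0):
    # Pad the input with zeros on both sides
    xs = [0.0] * padding + list(xs) + [0.0] * padding
    k = len(weights)
    out = []
    # Slide the kernel across the padded input, `stride` slots at a time
    for i in range(0, len(xs) - k + 1, stride):
        out.append(sum(w * x for w, x in zip(weights, xs[i:i + k])) + bias)
    return out

# kernel_size=2, stride=2, padding=1 on [2, 3, 4], with made-up w0=1.0, w1=2.0, bias=0.0:
# windows are [0, 2] and [3, 4], exactly as in the steps above
print(conv1d([2, 3, 4], [1.0, 2.0], 0.0, stride=2, padding=1))  # [4.0, 11.0]
```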
