DEV Community

Super Kai (Kazuya Ito)


ConvTranspose2d in PyTorch


ConvTranspose2d() can compute 2D transposed convolution over a 3D or 4D input tensor of one or more elements, returning a 3D or 4D output tensor, as shown below:

*Memos:

  • The 1st argument for initialization is in_channels(Required-Type:int). *It must be 1 <= x.
  • The 2nd argument for initialization is out_channels(Required-Type:int). *It must be 1 <= x.
  • The 3rd argument for initialization is kernel_size(Required-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 4th argument for initialization is stride(Optional-Default:1-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 5th argument for initialization is padding(Optional-Default:0-Type:int or tuple or list of int). *It must be 0 <= x.
  • The 6th argument for initialization is output_padding(Optional-Default:0-Type:int or tuple or list of int). *It must be 0 <= x and smaller than either stride or dilation.
  • The 7th argument for initialization is groups(Optional-Default:1-Type:int). *It must be 1 <= x, and in_channels and out_channels must be divisible by it.
  • The 8th argument for initialization is bias(Optional-Default:True-Type:bool). *My post explains bias argument.
  • The 9th argument for initialization is dilation(Optional-Default:1-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 10th argument for initialization is padding_mode(Optional-Default:'zeros'-Type:str). *Only 'zeros' can be selected.
  • The 11th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 12th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float or complex): *Memos:
    • It must be a 3D or 4D tensor of one or more elements.
    • The number of elements in the 3rd dimension from the end (the channel dimension) must be the same as in_channels.
    • Its device and dtype must be the same as ConvTranspose2d()'s.
    • A complex dtype must be set to dtype of ConvTranspose2d() to use a complex tensor.
    • The output tensor's requires_grad is True even when the input tensor's is False (the default), because the layer's weight and bias require gradients.
  • convtran2d1.device and convtran2d1.dtype don't work because ConvTranspose2d() has no device or dtype attribute.
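Putting the size-related arguments together, each spatial dimension of the output follows the standard transposed-convolution size formula, H_out = (H_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1. A minimal sketch (the helper name out_size is my own) verifying it against an actual layer:

```python
import torch
from torch import nn

def out_size(n, kernel_size, stride=1, padding=0, output_padding=0, dilation=1):
    # Output size of one spatial dimension of ConvTranspose2d:
    # (n - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1
    return (n - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1

convtran2d = nn.ConvTranspose2d(in_channels=1, out_channels=3, kernel_size=3,
                                stride=2, padding=1, output_padding=1)
x = torch.randn(1, 1, 5, 7)  # (N, C, H, W)
y = convtran2d(input=x)
y.shape
# torch.Size([1, 3, 10, 14])

assert y.shape[2] == out_size(5, kernel_size=3, stride=2, padding=1, output_padding=1)
assert y.shape[3] == out_size(7, kernel_size=3, stride=2, padding=1, output_padding=1)
```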
import torch
from torch import nn

tensor1 = torch.tensor([[[8., -3., 0., 1., 5., -2.]]])

tensor1.requires_grad
# False

torch.manual_seed(42)

convtran2d1 = nn.ConvTranspose2d(in_channels=1, out_channels=3, kernel_size=1)
tensor2 = convtran2d1(input=tensor1)
tensor2
# tensor([[[4.0616, -0.7939, 0.5304, 0.9718, 2.7374, -0.3525]],
#         [[3.7071, -1.5641, -0.1265, 0.3527, 2.2695, -1.0849]],
#         [[-0.9656, 0.5223, 0.1165, -0.0188, -0.5598, 0.3870]]],
#        grad_fn=<SqueezeBackward1>)

tensor2.requires_grad
# True

convtran2d1
# ConvTranspose2d(1, 3, kernel_size=(1, 1), stride=(1, 1))

convtran2d1.in_channels
# 1

convtran2d1.out_channels
# 3

convtran2d1.kernel_size
# (1, 1)

convtran2d1.stride
# (1, 1)

convtran2d1.padding
# (0, 0)

convtran2d1.output_padding
# (0, 0)

convtran2d1.groups
# 1

convtran2d1.bias
# Parameter containing:
# tensor([0.5304, -0.1265, 0.1165], requires_grad=True)

convtran2d1.dilation
# (1, 1)

convtran2d1.padding_mode
# 'zeros'

convtran2d1.weight
# Parameter containing:
# tensor([[[0.4414], [0.4792], [-0.1353]]], requires_grad=True)
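Since kernel_size=1 and in_channels=1 here, each output channel is just weight[0][c] * input + bias[c], so the output above can be reproduced by hand. A quick sanity sketch of that relation (not part of the original session):

```python
import torch
from torch import nn

torch.manual_seed(42)

conv = nn.ConvTranspose2d(in_channels=1, out_channels=3, kernel_size=1)
x = torch.tensor([[[8., -3., 0., 1., 5., -2.]]])  # (C=1, H=1, W=6)
y = conv(input=x)                                  # (3, 1, 6)

# With a 1x1 kernel, output channel c is weight[0, c, 0, 0] * input + bias[c].
manual = conv.weight.view(3, 1, 1) * x + conv.bias.view(3, 1, 1)
torch.allclose(y, manual)
# True
```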

torch.manual_seed(42)

convtran2d2 = nn.ConvTranspose2d(in_channels=3, out_channels=3, kernel_size=1)
convtran2d2(input=tensor2)
# tensor([[[3.6068, -1.7503, -0.2893, 0.1977, 2.1458, -1.2633]],
#         [[1.6518, 0.4964, 0.8115, 0.9165, 1.3367, 0.6014]],
#         [[-0.5008, 0.2990, 0.0809, 0.0082, -0.2827, 0.2263]]],
#        grad_fn=<SqueezeBackward1>)

torch.manual_seed(42)

convtran2d = nn.ConvTranspose2d(in_channels=1, out_channels=3, 
             kernel_size=1, stride=1, padding=0, output_padding=0, 
             groups=1, bias=True, dilation=1, padding_mode='zeros', 
             device=None, dtype=None)
convtran2d(input=tensor1)
# tensor([[[4.0616, -0.7939, 0.5304, 0.9718, 2.7374, -0.3525]],
#         [[3.7071, -1.5641, -0.1265, 0.3527, 2.2695, -1.0849]],
#         [[-0.9656, 0.5223, 0.1165, -0.0188, -0.5598, 0.3870]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[8., -3., 0.],
                           [1., 5., -2.]]])
torch.manual_seed(42)

convtran2d = nn.ConvTranspose2d(in_channels=1, out_channels=3,
                                kernel_size=1)
convtran2d(input=my_tensor)
# tensor([[[4.0616, -0.7939, 0.5304], [0.9718, 2.7374, -0.3525]],
#         [[3.7071, -1.5641, -0.1265], [0.3527, 2.2695, -1.0849]],
#         [[-0.9656, 0.5223, 0.1165], [-0.0188, -0.5598, 0.3870]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[8.], [-3.], [0.],
                           [1.], [5.], [-2.]]])
torch.manual_seed(42)

convtran2d = nn.ConvTranspose2d(in_channels=1, out_channels=3, kernel_size=1)
convtran2d(input=my_tensor)
# tensor([[[4.0616], [-0.7939], [0.5304], [0.9718], [2.7374], [-0.3525]],
#         [[3.7071], [-1.5641], [-0.1265], [0.3527], [2.2695], [-1.0849]],
#         [[-0.9656], [0.5223], [0.1165], [-0.0188], [-0.5598], [0.3870]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.]],
                           [[1.], [5.], [-2.]]]])
torch.manual_seed(42)

convtran2d = nn.ConvTranspose2d(in_channels=2, out_channels=3, kernel_size=1)
convtran2d(input=my_tensor)
# tensor([[[[3.7805], [1.0465], [-1.3418]],
#          [[4.0462], [-1.7310], [0.5921]],
#          [[-0.4566], [1.4973], [0.2760]]]],
#        grad_fn=<ConvolutionBackward0>)
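A note on why output_padding exists: with stride > 1, Conv2d maps several input sizes to the same output size (e.g. both 7x7 and 8x8 shrink to 4x4 with kernel_size=3, stride=2, padding=1), so ConvTranspose2d needs output_padding to pick which size to recover. A sketch, assuming those hyperparameters:

```python
import torch
from torch import nn

# Conv2d with stride=2 maps both 7x7 and 8x8 inputs to 4x4:
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=2, padding=1)
conv(torch.randn(1, 1, 7, 7)).shape  # torch.Size([1, 1, 4, 4])
conv(torch.randn(1, 1, 8, 8)).shape  # torch.Size([1, 1, 4, 4])

# ConvTranspose2d uses output_padding to choose which size to go back to:
up0 = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, output_padding=0)
up1 = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, output_padding=1)
x = torch.randn(1, 1, 4, 4)
up0(x).shape  # torch.Size([1, 1, 7, 7])
up1(x).shape  # torch.Size([1, 1, 8, 8])
```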

my_tensor = torch.tensor([[[[8.+0.j], [-3.+0.j], [0.+0.j]],
                           [[1.+0.j], [5.+0.j], [-2.+0.j]]]])
torch.manual_seed(42)

convtran2d = nn.ConvTranspose2d(in_channels=2, out_channels=3,
                                kernel_size=1, dtype=torch.complex64)
convtran2d(input=my_tensor)
# tensor([[[[3.6767+4.2509j], [-2.3031+0.3359j], [0.9887-0.5999j]],
#         [[-0.2947+3.7378j], [3.2290-3.7904j], [-0.7395+0.7656j]],
#         [[-0.0651+1.1254j], [3.3337+0.2761j], [-0.5586-0.1308j]]]],
#        grad_fn=<AddBackward0>)
