Super Kai (Kazuya Ito)

Linear() in PyTorch


Linear() can get the 1D or more D tensor of the zero or more elements computed by affine transformation (y = x @ W.T + b) from the 1D or more D input tensor of zero or more elements as shown below:

*Memos:

  • The 1st argument for initialization is in_features(Required-Type:int). *It must be 0 <= x.
  • The 2nd argument for initialization is out_features(Required-Type:int): *Memos:
    • It must be 0 <= x.
    • 0 is possible but a warning occurs.
  • The 3rd argument for initialization is bias(Optional-Default:True-Type:bool). *If it's False, None is set.
  • The 4th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 5th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float or complex). *A complex dtype must be set to dtype of Linear() to use a complex input tensor.
  • The number of the deepest elements of the input tensor (the size of its last dimension) must be the same as in_features.
  • The output tensor's requires_grad is set to True by Linear() even if the input tensor's requires_grad is False, which is the default.
  • The input tensor's device and dtype must be the same as Linear()'s device and dtype respectively.
  • linear1.device and linear1.dtype don't work, so use linear1.weight.device and linear1.weight.dtype instead, as shown in the sketch after this list.
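Under the hood, Linear() computes the affine transformation y = x @ W.T + b with its weight and bias. A minimal sketch (not part of the original example) verifying this by hand, and getting the device and dtype through weight since linear1.device and linear1.dtype don't work:

import torch
from torch import nn

torch.manual_seed(42)

linear1 = nn.Linear(in_features=6, out_features=3)
tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])

manual = tensor1 @ linear1.weight.T + linear1.bias # y = x @ W.T + b

torch.allclose(manual, linear1(input=tensor1))
# True

linear1.weight.device # Use this instead of linear1.device.
# device(type='cpu')

linear1.weight.dtype # Use this instead of linear1.dtype.
# torch.float32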
import torch
from torch import nn

tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])

tensor1.requires_grad
# False

torch.manual_seed(42)

linear1 = nn.Linear(in_features=6, out_features=3)
tensor2 = linear1(input=tensor1)
tensor2
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<ViewBackward0>)

tensor2.requires_grad
# True

linear1
# Linear(in_features=6, out_features=3, bias=True)

linear1.in_features
# 6

linear1.out_features
# 3

linear1.bias
# Parameter containing:
# tensor([-0.1906, 0.1041, -0.1881], requires_grad=True)

linear1.weight
# Parameter containing:
# tensor([[0.3121, 0.3388, -0.0956, 0.3750, -0.0894, 0.0824],
#         [-0.1988, 0.2398, 0.3599, -0.2995, 0.3548, 0.0764],
#         [0.3016, 0.0553, 0.1969, -0.0576, 0.3147, 0.0603]],
#        requires_grad=True)

torch.manual_seed(42)

linear2 = nn.Linear(in_features=3, out_features=3)
linear2(input=tensor2)
# tensor([-0.8493, 1.5744, 1.2707], grad_fn=<ViewBackward0>)
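Chaining linear1 and linear2 as above can also be expressed with nn.Sequential, which applies the layers in order; a minimal sketch (assuming the same seeds and layers as above):

torch.manual_seed(42)

linear1 = nn.Linear(in_features=6, out_features=3)

torch.manual_seed(42)

linear2 = nn.Linear(in_features=3, out_features=3)

model = nn.Sequential(linear1, linear2)
model(tensor1) # Equivalent to linear2(input=linear1(input=tensor1)).
# tensor([-0.8493, 1.5744, 1.2707], grad_fn=<ViewBackward0>)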

torch.manual_seed(42)

linear = nn.Linear(in_features=6, out_features=3, bias=True,
                   device=None, dtype=None)
linear(input=tensor1)
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<ViewBackward0>)

torch.manual_seed(42)

linear = nn.Linear(in_features=6, out_features=3, bias=False,
                   device=None, dtype=None)
linear(input=tensor1)
# tensor([1.2434, -0.9874, 3.6423], grad_fn=<SqueezeBackward4>)
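With bias=False, the bias attribute is set to None, as noted in the memos above; a quick check on the Linear() just created (a sketch):

linear.bias is None
# True

print(linear.bias)
# None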

my_tensor = torch.tensor([[8., -3., 0.],
                          [1., 5., -2.]])
torch.manual_seed(42)

linear = nn.Linear(in_features=3, out_features=3)
linear(input=my_tensor)
# tensor([[1.6701, 5.1242, -3.1578],
#         [2.6844, 0.1667, 0.5044]], grad_fn=<AddmmBackward0>)

my_tensor = torch.tensor([[[8.], [-3.], [0.]],
                          [[1.], [5.], [-2.]]])
torch.manual_seed(42)

linear = nn.Linear(in_features=1, out_features=3)
linear(input=my_tensor)
# tensor([[[7.0349, 6.4210, -1.6724],
#          [-1.3750, -2.7091, 0.9046],
#          [0.9186, -0.2191, 0.2018]],
#         [[1.6831, 0.6109, -0.0325],
#          [4.7413, 3.9309, -0.9696],
#          [-0.6105, -1.8791, 0.6703]]], grad_fn=<ViewBackward0>)
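Only the size of the last dimension changes, from in_features to out_features; a quick shape check for the 3D example above (a sketch):

my_tensor.shape
# torch.Size([2, 3, 1])

linear(input=my_tensor).shape
# torch.Size([2, 3, 3])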

my_tensor = torch.tensor([[[8.+0.j], [-3.+0.j], [0.+0.j]],
                          [[1.+0.j], [5.+0.j], [-2.+0.j]]])
torch.manual_seed(42)

linear = nn.Linear(in_features=1, out_features=3, dtype=torch.complex64)
linear(input=my_tensor)
# tensor([[[5.6295+7.2273j, -0.9926+6.6153j, -0.8836+1.8015j],
#          [-2.7805-1.9027j, 1.5844-3.4895j, 1.5265-0.4182j],
#          [-0.4869+0.5873j, 0.8815-0.7336j, 0.8692+0.1872j]],
#         [[0.2777+1.4173j, 0.6473+0.1850j, 0.6501+0.3889j],
#          [3.3358+4.7373j, -0.2898+3.8594j, -0.2263+1.1961j],
#          [-2.0159-1.0727j, 1.3501-2.5709j, 1.3074-0.2164j]]],
#        grad_fn=<ViewBackward0>)
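Per the memos above, the input tensor's dtype must match Linear()'s dtype; passing a float tensor to this complex Linear() fails (a sketch; the exact error message depends on the PyTorch version):

float_tensor = torch.tensor([[8.], [-3.], [0.]])

linear(input=float_tensor)
# RuntimeError (dtype mismatch between the float input and the complex weight)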
