Super Kai (Kazuya Ito)

SiLU() and Softplus() in PyTorch

SiLU() can get the 0D or more D tensor of the zero or more values computed by the SiLU function from the 0D or more D tensor of zero or more elements as shown below:

*Memos:

  • The 1st argument for initialization is inplace(Optional-Default:False-Type:bool): *Memos:
    • If it's True, the operation is done in place.
    • Keep it False because setting it to True can be problematic.
  • The 1st argument is input(Required-Type:tensor of float).

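For reference, SiLU(x) = x * sigmoid(x), also known as the swish function. Below is a minimal sketch checking nn.SiLU() against the formula computed by hand (using the same tensor values as the example that follows):

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# Manual SiLU: multiply each element by its sigmoid.
manual = my_tensor * torch.sigmoid(my_tensor)

torch.allclose(nn.SiLU()(my_tensor), manual)
# True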

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

silu = nn.SiLU()
silu(input=my_tensor)
# tensor([7.9973, -0.1423, 0.0000, 0.7311, 4.9665, -0.2384, -0.2689, 3.9281])

silu
# SiLU()

silu.inplace
# False

silu = nn.SiLU(inplace=True)
silu(input=my_tensor)
# tensor([7.9973, -0.1423, 0.0000, 0.7311, 4.9665, -0.2384, -0.2689, 3.9281])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])
silu = nn.SiLU()
silu(input=my_tensor)
# tensor([[7.9973, -0.1423, 0.0000, 0.7311],
#         [4.9665, -0.2384, -0.2689, 3.9281]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])
silu = nn.SiLU()
silu(input=my_tensor)
# tensor([[[7.9973, -0.1423], [0.0000, 0.7311]],
#         [[4.9665, -0.2384], [-0.2689, 3.9281]]])
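SiLU is also available through the functional API. A minimal sketch, assuming torch.nn.functional.silu, which returns the same result as the module without instantiating anything:

import torch
import torch.nn.functional as F

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

F.silu(my_tensor) # same values as nn.SiLU() above
# tensor([7.9973, -0.1423, 0.0000, 0.7311, 4.9665, -0.2384, -0.2689, 3.9281])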

Softplus() can get the 0D or more D tensor of the zero or more values computed by the Softplus function from the 0D or more D tensor of zero or more elements as shown below:

*Memos:

  • The 1st argument for initialization is beta(Optional-Default:1.0-Type:float). *It's applied to the Softplus function.
  • The 2nd argument for initialization is threshold(Optional-Default:20.0-Type:float). *Values above it revert to a linear function.
  • The 1st argument is input(Required-Type:tensor of float).

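For reference, Softplus(x) = (1/beta) * log(1 + exp(beta * x)), and values above the threshold fall back to a linear function. Below is a minimal sketch checking nn.Softplus() with the default beta=1.0 against the formula computed by hand:

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# Manual Softplus with beta=1.0: log(1 + exp(x)), via log1p for numerical stability.
manual = torch.log1p(torch.exp(my_tensor))

torch.allclose(nn.Softplus()(my_tensor), manual)
# True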

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

softplus = nn.Softplus()
softplus(input=my_tensor)
# tensor([8.0003, 0.0486, 0.6931, 1.3133, 5.0067, 0.1269, 0.3133, 4.0181])

softplus
# Softplus(beta=1.0, threshold=20.0)

softplus.beta
# 1.0

softplus.threshold
# 20.0

softplus = nn.Softplus(beta=1.0, threshold=20.0)
softplus(input=my_tensor)
# tensor([8.0003, 0.0486, 0.6931, 1.3133, 5.0067, 0.1269, 0.3133, 4.0181])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., -2., -1., 4.]])
softplus = nn.Softplus()
softplus(input=my_tensor)
# tensor([[8.0003, 0.0486, 0.6931, 1.3133],
#         [5.0067, 0.1269, 0.3133, 4.0181]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., -2.], [-1., 4.]]])
softplus = nn.Softplus()
softplus(input=my_tensor)
# tensor([[[8.0003, 0.0486], [0.6931, 1.3133]],
#         [[5.0067, 0.1269], [0.3133, 4.0181]]])
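Regarding threshold: once an input value times beta exceeds it, Softplus() returns the input unchanged for numerical stability. A minimal sketch with an artificially low threshold (2.0 is just an illustrative value):

import torch
from torch import nn

# Inputs whose value * beta exceeds the threshold of 2.0 pass through as-is.
softplus = nn.Softplus(beta=1.0, threshold=2.0)
softplus(input=torch.tensor([1., 3., 10.]))
# tensor([ 1.3133,  3.0000, 10.0000])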
