Hello friends, Happy new year
So far you have understood the types and shapes of tensors. In this post I will go beyond the basics and teach you some more concepts about tensors.
I have created this as Part 2 because all of the information in one post would have been too lengthy :P
Table of Contents
- Indexing and Slicing of Tensors
- Arithmetic Operations on Tensors
- Basic Functions on the Tensors
- Operations on 2D Tensors
Indexing and Slicing of Tensors
The individual values of a tensor are also tensors. For instance,
import torch
t1 = torch.tensor([1, 2, 3, 4])
print(t1[0])
tensor(1)
Since the original tensor was 1-D, the value is of course a 0-D tensor. Similarly, if you have a 2-D tensor and use only one index, you will get a 1-D tensor.
import torch
t2 = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]])
print(t2[0])
tensor([1, 2, 3, 4])
NOTE: If you want to get the raw Python list from a 1+ D tensor, you can use the Tensor.tolist() method.
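For instance, reusing the t2 tensor from above (just a quick sketch, not part of the original example):
print(t2.tolist())     # [[1, 2, 3, 4], [5, 6, 7, 8]]
print(t2[0].tolist())  # [1, 2, 3, 4]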
Like a normal Python list, you can also update a tensor value by simply accessing the index. For example,
t1[2] = 100
print(t1)
tensor([1, 2, 100, 4])
NOTE: Slicing a tensor returns a new tensor object (a view of the same underlying data), but assigning a value at a particular index overwrites the original tensor in place.
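Here is a minimal sketch of this behaviour, continuing with t1 from above (the variable s is just a throwaway name):
s = t1[1:3]   # slicing returns a new tensor object (a view of the same data)
print(s)      # tensor([  2, 100])
t1[2] = 3     # indexed assignment updates the original tensor in place
print(t1)     # tensor([1, 2, 3, 4])
print(s)      # tensor([2, 3]) -- the slice sees the change too, since it shares the data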
Arithmetic Operations on Tensors
The most basic operations on tensors are vector addition and subtraction. Visit the link in case you want to study the maths behind these vector operations.
Suppose you have two tensors u and v defined as
u = torch.tensor([1.0, 2.0])
v = torch.tensor([3.0, 4.0])
print(u)
print(v)
tensor([1., 2.])
tensor([3., 4.])
Then vector addition and subtraction look like this, respectively:
print(u + v)
print(u - v)
tensor([4., 6.])
tensor([-2., -2.])
Scalar operations are also supported. For example, taking ws = 5 as our scalar quantity:
ws = torch.tensor(5)
print(ws)
tensor(5)
The addition, subtraction, multiplication, and division operations are shown below, respectively:
print(u + v + ws)
print(u + v - ws)
print(u + v * ws)
print(u + (v / ws))
tensor([ 9., 11.])
tensor([-1., 1.])
tensor([16., 22.])
tensor([1.6000, 2.8000])
In PyTorch, there is no need to create a 0-D tensor to perform scalar operations; you can simply use the scalar value directly. For example,
print(v * 5)
tensor([15., 20.])
Element-wise multiplication (note: not the mathematical cross product) is fairly short and easy, using the * symbol. But to perform the dot product, you should use torch.dot(vector1, vector2) or Tensor.dot(vector2)
print(u * v)
print(torch.dot(u, v))
print(u.dot(v))
tensor([3., 8.])
tensor(11.)
tensor(11.)
NOTE: While performing element-wise multiplication or the dot product, the sizes of both tensors should match, otherwise you will get a RuntimeError.
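For example, trying this with vectors of different sizes fails (a and b are just throwaway names):
a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0, 5.0])
# both of the following raise a RuntimeError, since the sizes (2 vs 3) do not match
# print(a * b)
# print(torch.dot(a, b))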
Basic Functions on the Tensors
Torch tensors provide a plethora of functions that you can apply to get the desired results. The first one is Tensor.mean()
x = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float32)
print(x)
print(x.mean())
tensor([1., 2., 3., 4., 5.])
tensor(3.)
NOTE: Since the mean of a tensor is a floating-point value, you can only compute the mean of a floating-point tensor; otherwise you will get a RuntimeError.
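For example (y here is just a throwaway integer tensor):
y = torch.tensor([1, 2, 3, 4, 5])  # dtype defaults to torch.int64
# print(y.mean())        # raises a RuntimeError because the dtype is not floating point
print(y.float().mean())  # tensor(3.) -- cast to float first, then take the mean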
Finding the maximum or minimum value in a tensor can be done using Tensor.max() and Tensor.min() respectively
print(x.max())
print(x.min())
tensor(5.)
tensor(1.)
You can also apply a function to a tensor element-wise. Suppose you have a tensor containing various multiples of pi and you want to apply the sin and cos functions to it. You can use torch.sin(tensor) and torch.cos(tensor)
import numpy as np
x = torch.tensor([0, np.pi / 2, np.pi])
print(x)
print(torch.sin(x))
print(torch.cos(x))
tensor([0.0000, 1.5708, 3.1416])
tensor([ 0.0000e+00, 1.0000e+00, -8.7423e-08])
tensor([ 1.0000e+00, -4.3711e-08, -1.0000e+00])
Sometimes you will need an evenly spaced list of numbers within a range; for that you can use torch.linspace(start, end, steps)
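A quick sketch with a small number of steps, so the even spacing is easy to see:
print(torch.linspace(0, 1, steps=5))
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])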
Let's make it more interactive by plotting sin and cos from -π/2 to π/2.
Making the dataset first,
pi = torch.linspace(-np.pi/2, np.pi/2, steps=1000)
print(pi[:5]) # lower bound
print(pi[-5:]) # upper bound
sined = torch.sin(pi)
cosed = torch.cos(pi)
print(sined[0:5])
print(cosed[0:5])
tensor([-1.5708, -1.5677, -1.5645, -1.5614, -1.5582])
tensor([1.5582, 1.5614, 1.5645, 1.5677, 1.5708])
tensor([-1.0000, -1.0000, -1.0000, -1.0000, -0.9999])
tensor([-4.3711e-08, 3.1447e-03, 6.2894e-03, 9.4340e-03, 1.2579e-02])
Now let's use matplotlib to create the graph
import matplotlib.pyplot as plt
plt.plot(sined, label="sined")
plt.plot(cosed, label="cosed")
plt.legend()
plt.show()
Operations on 2D Tensors
Greyscale images are the best example of 2D tensors. Each pixel you see corresponds to one element of the matrix.
The above images demonstrate how a binary image can be represented as a matrix. In the case of a grayscale image, the value of each element of the matrix would be in the range of 0 to 255.
Just in case you are wondering how an RGB coloured image is represented: it is basically a 3D tensor with 3 matrices, one for each of the R, G, and B channels. For example,
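here is a minimal sketch using a tiny hypothetical 2x2 image (PyTorch conventionally stores images as (channels, height, width)):
img = torch.rand((3, 2, 2))  # 3 channels (R, G, B), each a 2x2 matrix
print(img.ndim)              # 3
print(img.shape)             # torch.Size([3, 2, 2])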
Let's create a random 2D tensor using the torch.rand(*size) method.
NOTE: This is a general method; you can create a random tensor of any dimension with it.
t2 = torch.rand((3, 3))
print(t2)
print(t2.ndim) # dimension of matrix is also called rank of matrix
print(t2.shape)
print(t2.numel())
tensor([[0.0376, 0.4297, 0.2987],
[0.8009, 0.1815, 0.5538],
[0.2482, 0.7099, 0.3132]])
2
torch.Size([3, 3])
9
The Tensor.numel() method used above returns the total number of elements in the tensor.
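As a quick check of both points above (t3 is just a throwaway tensor):
t3 = torch.rand((2, 3, 4))  # a random 3D tensor
print(t3.ndim)              # 3
print(t3.numel())           # 24, i.e. 2 * 3 * 4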
Indexing and slicing work the same way as with a 1D tensor, but there is a minor change. In 1D you used [number_start:number_end] for slicing and [index] for getting the element at a particular index.
Now that you have rows and columns, the syntax is [index_row, index_col] to get the element at a particular row and column of the matrix, and [number_start_row:number_end_row, number_start_col:number_end_col] for slicing.
print(t2[0, 0])
print(t2[1, 1])
tensor(0.0376)
tensor(0.1815)
The above code gives you the very first and the middle element of the matrix, as shown in the following matrix.
Getting an element with indexing will always give you a new 0-D tensor, whereas slicing with ranges keeps the dimensions, as you can see below:
t2[1:2, 1:]
tensor([[0.1815, 0.5538]])
To perform matrix multiplication you can use any of the following: Tensor.matmul(tensor2), torch.matmul(tensor1, tensor2), Tensor.mm(tensor2), or torch.mm(tensor1, tensor2)
x = torch.rand((3, 4))
y = torch.rand((4, 3))
print(x)
print(y)
print(x.matmul(y))
print(torch.mm(x, y))
tensor([[0.6413, 0.1338, 0.5066, 0.1618],
[0.3807, 0.8555, 0.2187, 0.5024],
[0.2771, 0.6381, 0.5671, 0.6934]])
tensor([[0.0263, 0.9461, 0.6314],
[0.9180, 0.1586, 0.2589],
[0.3363, 0.4529, 0.9433],
[0.6760, 0.8877, 0.1171]])
tensor([[0.4195, 1.0009, 0.9363],
[1.2086, 1.0409, 0.7270],
[1.2525, 1.2358, 0.9563]])
tensor([[0.4195, 1.0009, 0.9363],
[1.2086, 1.0409, 0.7270],
[1.2525, 1.2358, 0.9563]])
I hope you liked this post. Please share it with your friends and colleagues and help them learn the concepts of PyTorch. If you have doubts or any ideas, reach out to me via the following sources:
- Email: tbhaxor@gmail.com (Recommended)
- Twitter: @tbhaxor
- GitHub: @tbhaxor
- LinkedIn: @gurkirat--singh
- Instagram: @_tbhaxor_