Nepolean Rameswaran

Notes on Deep Learning: 1

Introduction: I am a Python programmer, and these are my notes as I foray into the world of Deep Learning for the first time. In this post, the first of many in this series, we will cover the basic definitions of Machine Learning and Deep Learning, the main types of algorithms, and the basics of neural networks and tensors.

Prerequisites: Working knowledge of Python. It is good to have experience using the Pandas or NumPy libraries.

Although Deep Learning is a subset of Machine Learning, it is not necessary to know ML to learn DL. In fact, I myself do not know ML.

Source material: I am primarily learning DL through Daniel Bourke's fantastic YouTube tutorial PyTorch for Deep Learning & Machine Learning. I am also taking notes from other sources such as Andrew W. Trask's book Grokking Deep Learning.

Both of these materials use the terms Deep Learning and Machine Learning interchangeably at times.

What is Deep Learning?

We can think of DL as a sophisticated form of ML. This leads us to the question, "What is ML?"

To put it simply, Machine Learning is turning data into numbers and finding patterns in those numbers.

The data can be as simple as a CSV file or as complex as images or videos. ML and DL help us discover insights within large collections of data.

ML vs DL

Should I use an ML algorithm or a DL algorithm?
This question is answered primarily by looking at the problem we are trying to solve and the type of input data we have access to.
ML algorithms are primarily used when the input data is structured, and DL algorithms when it is unstructured.

DL use cases: speech recognition, language translation, recommendation systems like the product or movie recommendations we see at Amazon or Netflix, Computer Vision, NLP, etc.

I stated that DL can be used when the data is unstructured. This raises an interesting question: if DL algorithms can handle unstructured data, shouldn't they be able to handle structured data as well? The answer is yes, but it deserves more than a simple yes or no. Consider an analogy: the DL framework PyTorch replicates many of NumPy's capabilities, so can PyTorch replace NumPy? Technically yes, but NumPy is a much smaller library with a lower memory footprint, and it is efficient at what it does. PyTorch is not really required unless we also need its Deep Learning capabilities.
One of the jobs of an ML/DL engineer is to find the right algorithm for the job.
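The overlap between the two libraries is easy to see in practice. Here is a minimal sketch of the same element-wise arithmetic done in both NumPy and PyTorch, plus the conversion functions that move data between them:

```python
import numpy as np
import torch

# The same element-wise maths in NumPy and in PyTorch
a_np = np.array([1.0, 2.0, 3.0])
a_pt = torch.tensor([1.0, 2.0, 3.0])

print(a_np * 2 + 1)  # [3. 5. 7.]
print(a_pt * 2 + 1)  # tensor([3., 5., 7.])

# Data converts back and forth between the two libraries
b = torch.from_numpy(a_np)  # NumPy array -> PyTorch tensor
c = a_pt.numpy()            # PyTorch tensor -> NumPy array
```

For purely numerical work like this, either library does the job; PyTorch earns its larger footprint only when we need gradients, GPUs, and neural-network layers.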

Given below are some of the most commonly used algorithms:

ML
Random forest
Gradient boosted models
Naive Bayes
Nearest neighbour
Support vector machine

DL
Neural networks
Fully connected neural network
Convolutional neural network
Recurrent neural network
Transformer

The algorithms themselves, whether ML or DL, can be broadly divided into two main categories --
Supervised and Unsupervised learning algorithms.
There are other types, such as Transfer Learning and Reinforcement Learning, which we will look into later.

Should I use a Supervised ML/DL algorithm or an Unsupervised one?

If we already know the type of output that we want, then we should be using a Supervised algorithm.

Does this photo contain the image of a cat? Can I predict the list of movies that I may like based on the movies that I have watched and liked in the past?
In these cases, we know what we want, and so these are cases for Supervised algorithms.

In cases where we have to analyse a dataset without knowing what kind of output to expect, we use an Unsupervised algorithm. These algorithms help us find patterns in the data and cluster it into groups. If there are 10 groupings, each group is labelled from 1 to 10. All forms of unsupervised learning can be viewed as a form of clustering. Once we have this output, we can give the groups meaningful names and then process the data with a Supervised algorithm if required.
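As a small sketch of the clustering idea, here is k-means applied to unlabelled 2-D points. Note that scikit-learn is my choice for illustration, not something the tutorials above prescribe; the algorithm assigns arbitrary group numbers, and it is up to us to give them meaningful names afterwards:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled data: two loose blobs of 2-D points
X = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])

# Ask for 2 clusters; the algorithm finds the groupings on its own
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)  # e.g. [0 0 0 1 1 1] -- group ids, not meaningful names
```

The output labels are just numbers; whether cluster 0 means "cheap products" or "action movies" is an interpretation we supply after the fact.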

Neural Networks and Tensors

Many DL algorithms are based on Neural Networks. So how does a Neural Network based algorithm work?

To put it simply, data is converted into numerical encodings, which are represented using tensors. These numbers are passed into a neural network, which learns the patterns in the data. These patterns are again represented as tensors, and finally the tensors are converted into a human-readable output.
This sounds, minus the technical terms, very much like the simple definition of ML that I mentioned previously.

Machine Learning is turning data into numbers and finding patterns in those numbers.

Note: The patterns are also referred to as "embeddings", "weights", "feature representations" or "feature vectors".

I understand that the definition is still somewhat vague, and the neural network still sounds like a black box in the description above. But let it be a black box for now.
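Even treating the network as a black box, we can see the numbers-in, numbers-out flow in code. This is a minimal, untrained sketch (the layer sizes are arbitrary choices of mine, not from the tutorial): encoded data goes in as a tensor, and a tensor comes out.

```python
import torch
from torch import nn

# A tiny fully connected network: 3 input features -> 4 hidden units -> 1 output
model = nn.Sequential(
    nn.Linear(3, 4),  # hidden layer
    nn.ReLU(),
    nn.Linear(4, 1),  # output layer
)

x = torch.tensor([[0.5, -1.2, 3.0]])  # one sample, already encoded as numbers
out = model(x)                        # the network's (untrained) output tensor

print(out.shape)  # torch.Size([1, 1])
```

The network here has random weights and has learned nothing yet; training, which is where the pattern-finding actually happens, is a topic for a later post.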

Tensor

In many DL frameworks such as PyTorch, the Tensor object is one of the basic building blocks. A tensor is similar to a multi-dimensional array, and it is used to represent numbers. In fact, tensors can represent almost any kind of data, such as images, words, or tables of numbers.

(Image courtesy: Daniel Bourke)

(Image: a neural network with input, hidden, and output layers)

In this pictorial representation, apart from the input and output layers, the network has three hidden layers. There could be just one hidden layer or many more, depending on the complexity of the data being processed.

The code snippet below shows the creation of tensor objects using the PyTorch library.

import torch

# creating scalar, single dimensional and
# multi-dimensional tensor objects

x = torch.tensor(6)                # scalar (0 dimensions)
y = torch.tensor([42, 5])          # vector (1 dimension)
z = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])      # matrix (2 dimensions)
z1 = torch.tensor([[[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]]])   # 3-dimensional tensor


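A quick way to check how many dimensions a tensor has is its ndim and shape attributes. The sketch below mirrors the objects created above:

```python
import torch

x = torch.tensor(6)                       # scalar
y = torch.tensor([42, 5])                 # vector
z = torch.tensor([[1, 2, 3], [4, 5, 6]])  # matrix

print(x.ndim, x.shape)  # 0 torch.Size([])
print(y.ndim, y.shape)  # 1 torch.Size([2])
print(z.ndim, z.shape)  # 2 torch.Size([2, 3])
```

A handy rule of thumb: the number of dimensions equals the number of opening square brackets at the start of the tensor literal.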
