Adam

Posted on • Originally published at builtin.com

PyTorch vs. TensorFlow: Which Framework Is Best?

If you are reading this, you’ve probably already started your journey into deep learning. If you are new to the field: in simple terms, deep learning uses brain-inspired architectures called artificial neural networks to teach computers to solve real-world problems. To help develop these architectures, tech giants like Google, Facebook and Uber have released various frameworks for the Python deep learning environment, making it easier to learn, build and train diverse neural networks. In this article, we’ll take a brief look at the two most widely used and trusted of these frameworks and compare them: PyTorch vs. TensorFlow.

GOOGLE’S TENSORFLOW

TensorFlow is an open-source deep learning framework created by developers at Google and released in 2015. The official research is published in the paper “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.”

TensorFlow is now widely used by companies and startups to automate workflows and develop new systems. It draws its reputation from its distributed training support, scalable production and deployment options, and support for various devices like Android.

FACEBOOK’S PYTORCH

PyTorch is one of the more recent deep learning frameworks, developed by the team at Facebook and open-sourced on GitHub in 2017. You can read more about its development in the research paper "Automatic Differentiation in PyTorch."

PyTorch is gaining popularity for its simplicity, ease of use, dynamic computational graph and efficient memory usage, which we'll discuss in more detail later.

WHAT CAN WE BUILD WITH TENSORFLOW AND PYTORCH?

Initially, neural networks were used to solve simple classification problems like handwritten digit recognition or identifying a car’s registration number using cameras. But thanks to the latest frameworks and NVIDIA’s high-performance graphics processing units (GPUs), we can train neural networks on terabytes of data and solve far more complex problems. A few notable achievements include reaching state-of-the-art performance on the ImageNet dataset using convolutional neural networks implemented in both TensorFlow and PyTorch. The trained model can be used in different applications, such as object detection, image semantic segmentation and more.

Although the architecture of a neural network can be implemented on any of these frameworks, the result will not be the same. The training process has many framework-dependent parameters. For example, if you are training a model in PyTorch, you can accelerate the process on GPUs, which PyTorch drives through CUDA (via its C++ backend). TensorFlow can also access GPUs, but it uses its own built-in GPU acceleration, so the time to train a model will always vary with the framework you choose.
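To make the GPU point concrete, here is a minimal sketch of device selection in PyTorch (assuming the torch package is installed; the tensor shape is an arbitrary example, not from the article):

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a tensor directly on the chosen device; later training ops run there too.
x = torch.randn(2, 3, device=device)
print(device, x.shape)
```

The same code runs unchanged on a CPU-only machine, which is why this availability check is a common idiom in PyTorch scripts.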

TOP TENSORFLOW PROJECTS

Magenta: An open source research project exploring the role of machine learning as a tool in the creative process. (https://magenta.tensorflow.org/)

Sonnet: A library built on top of TensorFlow for building complex neural networks. (https://sonnet.dev/)

Ludwig: A toolbox to train and test deep learning models without the need to write code. (https://uber.github.io/ludwig/)

TOP PYTORCH PROJECTS

CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. (https://stanfordmlgroup.github.io/projects/chexnet/)

Pyro: A universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. (https://pyro.ai/)

Horizon: A platform for applied reinforcement learning (Applied RL) (https://horizonrl.com)

These are just a few of the frameworks and projects built on top of TensorFlow and PyTorch. You can find more on GitHub and on the official TensorFlow and PyTorch websites.

COMPARING PYTORCH AND TENSORFLOW

The key difference between PyTorch and TensorFlow is the way they execute code. Both frameworks work on the fundamental data type tensor. You can think of a tensor as a multi-dimensional array: a scalar is a rank-0 tensor, a vector is rank-1, a matrix is rank-2, and so on.
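To make the tensor idea concrete, here is a small sketch using NumPy (an assumption for illustration; both frameworks mirror this multi-dimensional array model) showing tensors of increasing rank:

```python
import numpy as np

scalar = np.array(5.0)                 # rank-0 tensor: a single number
vector = np.array([1.0, 2.0, 3.0])     # rank-1 tensor: a 1-D array
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # rank-2 tensor: a 2-D array
cube = np.zeros((2, 3, 4))             # rank-3 tensor: a 3-D array

for t in (scalar, vector, matrix, cube):
    print(t.ndim, t.shape)
```

In PyTorch the equivalents are built with `torch.tensor`, and in TensorFlow with `tf.constant`; the rank and shape concepts carry over directly.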

1. MECHANISM: DYNAMIC VS STATIC GRAPH DEFINITION
TensorFlow is a framework composed of two core building blocks:

  1. A library for defining computational graphs and a runtime for executing such graphs on a variety of different hardware.

  2. A computational graph, which has many advantages (but more on that in just a moment).

A computational graph is an abstract way of describing computations as a directed graph. A graph is a data structure consisting of nodes (vertices) connected pairwise by directed edges.

When you run code in TensorFlow, the computation graphs are defined statically. All communication with the outer world is performed via the tf.Session object and tf.placeholder, tensors that will be substituted by external data at runtime.
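This define-then-run workflow can be sketched as a toy model in plain Python (this is not TensorFlow's actual API; the `Node` class and `run` function merely mimic the roles of `tf.placeholder` and `tf.Session.run`):

```python
# Toy static graph: nodes are defined first, then evaluated in a later "run" step.
class Node:
    def __init__(self, op=None, inputs=()):
        self.op, self.inputs = op, inputs

    def __add__(self, other):
        # Building c = a + b records an addition node; nothing is computed yet.
        return Node(op=lambda x, y: x + y, inputs=(self, other))

def run(node, feed):
    """Evaluate a node, substituting placeholder nodes with fed values."""
    if node in feed:                       # a placeholder: look up external data
        return feed[node]
    args = [run(n, feed) for n in node.inputs]
    return node.op(*args)

# Phase 1: define the graph (no computation happens here).
a, b = Node(), Node()
c = a + b

# Phase 2: execute it later, feeding in external data.
print(run(c, {a: 2.0, b: 3.0}))  # -> 5.0
```

Because the full graph exists before execution, a real runtime can inspect it to schedule independent nodes in parallel, which is exactly the advantage discussed next.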

In TensorFlow, the computational graph is generated statically in this way before any code runs. The core advantage of a computational graph is that it allows parallelism and dependency-driven scheduling, which makes training faster and more efficient.

View the rest, with more code examples, on BuiltIn.com: PyTorch vs. TensorFlow: Which Framework Is Best for Your Deep Learning Project?
