
Julien Simon

Originally published at julsimon.Medium

Video: Accelerate PyTorch Transformers with Intel Sapphire Rapids, part 1

In this video, you will learn how to accelerate a PyTorch training job with a cluster of Intel Sapphire Rapids servers running on AWS. We will use the Intel oneAPI Collective Communications Library (CCL) to distribute the job, and the Intel Extension for PyTorch (IPEX) library to automatically put the new CPU instructions to work. As both libraries are already integrated with the Hugging Face transformers library, we will be able to run our sample scripts out of the box without changing a line of code.
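The full walkthrough is in the video, but here is a minimal sketch of what such a setup can look like with the Hugging Face Trainer. It assumes transformers, datasets, intel_extension_for_pytorch, and oneccl_bindings_for_pytorch are installed; the model, dataset, and hyperparameters are illustrative choices, not necessarily the ones used in the video.

```python
# Minimal sketch: CPU training with the Hugging Face Trainer, IPEX, and the
# oneCCL distributed backend. Model/dataset/hyperparameters are illustrative.
import oneccl_bindings_for_pytorch  # registers the "ccl" backend with torch.distributed

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # example model, not the one from the video
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=32,
    num_train_epochs=1,
    bf16=True,          # Sapphire Rapids supports bfloat16
    use_ipex=True,      # let IPEX optimize the model for the new CPU instructions
    ddp_backend="ccl",  # distribute the job with Intel oneCCL
    no_cuda=True,       # CPU-only training
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```

Launched with mpirun (or torchrun) across the Sapphire Rapids nodes, a script like this runs unchanged: the CCL backend takes care of gradient communication and IPEX applies the CPU-side optimizations.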
