DEV Community

Julien Simon

Originally published at julsimon.Medium

Video: Accelerate Transformer inference with Optimum and Intel OpenVINO

In this video, I show you how to accelerate Transformer inference with Optimum, an open-source library by Hugging Face, and Intel OpenVINO.

I start from a Vision Transformer model fine-tuned for image classification and quantize it with OpenVINO. Running benchmarks on an AWS c6i instance (Intel Ice Lake architecture), we speed up the original model by more than 20% and divide its size by almost 4, with just a few lines of simple Python code and only a tiny accuracy drop!
