Tabnine Adds Native Support for Apple Silicon (M1)

Last week we released native support for Apple Silicon (M1), bringing our efficient inference engine to the latest Apple architecture. You can read Apple’s M1 announcement here.

The Tabnine Neural Engine running locally (a.k.a. deep-local) is our own implementation of efficient neural inference using low-level intrinsics. The original version of the engine was based on x86 vector instructions (FMA, SSE/AVX).
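To give a sense of what low-level intrinsics look like in practice, here is a minimal, hypothetical C sketch of the kind of inner loop an inference engine might vectorize on x86 with AVX/FMA intrinsics. The function name and memory layout are illustrative assumptions, not Tabnine's actual code.

```c
#include <immintrin.h>
#include <stddef.h>

// Illustrative sketch: fused multiply-accumulate over 8 floats at a time
// using 256-bit AVX registers and the FMA instruction set.
// acc[i] += a[i] * b[i] for i in [0, n), n assumed to be a multiple of 8.
void fma_avx(const float *a, const float *b, float *acc, size_t n) {
    for (size_t i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(acc + i);
        vc = _mm256_fmadd_ps(va, vb, vc);   // vc = va * vb + vc (fused)
        _mm256_storeu_ps(acc + i, vc);
    }
}
```

Built with `-mavx -mfma` (or `-march=native`), each iteration processes eight single-precision floats in one fused multiply-add.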

With the release of Apple Silicon, we extended the engine to support the vector instructions of the M1 processor.

The M1 processor is based on ARM's 128-bit Neon technology. While Neon registers are not as wide as x86 registers, the overall throughput for (our kind of) vector operations on Neon is superior to that of Intel.
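For comparison, here is the same hypothetical loop written with 128-bit Neon intrinsics, which operate on four floats per register instead of eight. Again, this is an illustrative sketch under the same assumptions, not Tabnine's actual implementation.

```c
#include <arm_neon.h>
#include <stddef.h>

// Illustrative Neon counterpart: fused multiply-accumulate over 4 floats
// at a time using 128-bit Neon registers.
// acc[i] += a[i] * b[i] for i in [0, n), n assumed to be a multiple of 4.
void fma_neon(const float *a, const float *b, float *acc, size_t n) {
    for (size_t i = 0; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        float32x4_t vc = vld1q_f32(acc + i);
        vc = vfmaq_f32(vc, va, vb);        // vc = vc + va * vb (fused)
        vst1q_f32(acc + i, vc);
    }
}
```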

While earlier versions of Tabnine can run on M1 via Rosetta and use Tabnine Cloud, running the engine locally on M1 requires the latest version of Tabnine.

Most official Tabnine plugins have already been updated to support M1.

Note that you need to run the native M1 version of the editor for the engine to correctly detect the M1 processor. See instructions for your IDE below.
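As an aside, a macOS process can check at runtime whether it is running natively or under Rosetta 2 translation via the `sysctl.proc_translated` sysctl. The snippet below is a general-purpose illustration of that check; it is not a description of how Tabnine or your editor performs the detection.

```c
#include <sys/sysctl.h>
#include <stddef.h>
#include <stdio.h>

// Returns 1 if the current process is translated by Rosetta 2,
// 0 if it runs natively, and -1 if the check is unavailable.
static int running_under_rosetta(void) {
    int translated = 0;
    size_t size = sizeof(translated);
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1)
        return -1;  // sysctl not present on this system
    return translated;
}

int main(void) {
    int r = running_under_rosetta();
    if (r == 1)
        printf("Running under Rosetta 2 (translated x86_64)\n");
    else if (r == 0)
        printf("Running natively\n");
    else
        printf("Translation status unavailable\n");
    return 0;
}
```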

