AI in browsers: Comparing TensorFlow, ONNX, and WebDNN for image classification

Written by Zain Sajjad

The web has transformed from the world’s most widely used document platform to its most widely used application platform. In the past few years, we have seen tremendous growth in the field of AI. The web as a platform is making great progress too, allowing developers to ship some excellent experiences that leverage AI advancements. Today, we have devices with great processing power and browsers capable of leveraging it to the full extent.

Tech giants have invested heavily in making it easier for developers to ship AI features with their web apps. Today, we have many libraries to perform complex AI tasks inside the browser. In this article, we will compare three major libraries that allow us to perform image recognition inside the browser.

Three major image classification libraries

Before we dive in, let’s go over the basics of TensorFlow.js, ONNX.js, and WebDNN (if you’re already familiar with these libraries, feel free to scroll to the next section).


TensorFlow

Backed by Google, TensorFlow.js allows users to develop machine learning models in JavaScript and use ML directly in the browser or Node.js. It enables developers to train and execute models in the browser and to retrain the existing model via transfer learning using their data. The recent acquisition of Keras.js has already brought some significant improvements to TensorFlow and is poised to enhance the library’s capabilities further.
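To give a feel for the API, here is a minimal sketch of browser-side image classification with TensorFlow.js. The model URL, the img element ID, and SqueezeNet’s 227×227 input size are assumptions for illustration, not code from the benchmark app:

```js
import * as tf from '@tensorflow/tfjs';

async function classifyWithTfjs() {
  // Load a model converted to the TF.js format (URL is a placeholder).
  const model = await tf.loadLayersModel('/models/squeezenet/model.json');

  // Convert the <img> element into a normalized 1x227x227x3 tensor.
  const img = document.getElementById('wine');
  const input = tf.tidy(() => {
    const pixels = tf.browser.fromPixels(img);
    const resized = tf.image.resizeBilinear(pixels, [227, 227]);
    return resized.toFloat().div(255).expandDims(0);
  });

  // Run inference and read the class scores back to JavaScript.
  const scores = await model.predict(input).data();
  console.log('Top score:', Math.max(...scores));
}
```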

ONNX.js

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. ONNX is developed and supported by a community of partners that includes AWS, Facebook OpenSource, Microsoft, AMD, IBM, and Intel AI. ONNX.js uses a combination of Web Workers and WebAssembly to achieve extraordinary CPU performance.
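A comparable sketch with ONNX.js looks like the following. The model path is a placeholder, and preprocessing the image into a Float32Array is left out for brevity:

```js
import { InferenceSession, Tensor } from 'onnxjs';

async function classifyWithOnnx(floatData) {
  // backendHint can be 'cpu', 'wasm', or 'webgl'.
  const session = new InferenceSession({ backendHint: 'wasm' });
  await session.loadModel('/models/squeezenet.onnx'); // placeholder path

  // SqueezeNet expects a 1x3x224x224 float input.
  const input = new Tensor(floatData, 'float32', [1, 3, 224, 224]);

  // run() resolves to a Map of output names to tensors.
  const outputMap = await session.run([input]);
  const scores = outputMap.values().next().value.data;
  console.log('Top score:', Math.max(...scores));
}
```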

WebDNN

Deep neural networks show great promise when it comes to getting accurate results. In contrast to general-purpose libraries like TensorFlow.js, MIL WebDNN provides an efficient architecture for deep learning applications such as image recognition and language modeling using convolutional and recurrent neural networks. The framework optimizes the trained DNN model to compress the model data and accelerate its execution, and it runs on novel browser technologies such as WebAssembly and WebGPU to achieve zero-overhead execution.
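WebDNN requires the model to be converted offline with its optimizer first; the browser then loads the optimized descriptor. A minimal sketch, where the model directory and image element are placeholders:

```js
import * as WebDNN from 'webdnn';

async function classifyWithWebdnn() {
  // Load a model pre-converted by WebDNN's offline optimizer.
  const runner = await WebDNN.load('/models/squeezenet');

  // Preprocess the image element into the runner's input view.
  const img = document.getElementById('wine');
  runner.getInputViews()[0].set(
    await WebDNN.Image.getImageArray(img, { dstH: 227, dstW: 227 })
  );

  await runner.run();
  const scores = runner.getOutputViews()[0].toActual();
  console.log('Top score:', Math.max(...scores));
}
```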

Comparing performance

To evaluate the performance of all three libraries, we developed a React app that uses the SqueezeNet model for image classification. Let’s take a look at the results.
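The numbers below come from timing each library’s inference call. As a rough illustration (not the exact benchmark harness), the measurement boils down to something like this:

```js
// classify() is any of the library-specific functions sketched above.
async function benchmark(classify, runs = 10) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await classify();
    times.push(performance.now() - start);
  }
  const avg = times.reduce((sum, t) => sum + t, 0) / times.length;
  console.log(`Average inference time: ${avg.toFixed(0)}ms`);
}
```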

Inference on CPU

All three libraries support multiple backends but fall back to the CPU for older browsers. Besides offering WebAssembly and Web Workers as backends, ONNX.js and WebDNN also treat plain JavaScript as a distinct backend. We fed an image of red wine to all three libraries and recorded their verdicts.

When it comes to CPU inference, as shown below, TensorFlow.js leads with a magnificent speed of 1501ms, followed by ONNX.js at 2195ms. Both WebDNN and ONNX.js also have WASM backends, which can be considered CPU backends as well since they don’t use the GPU.

TensorFlow, ONNX, and WebDNN Inference on CPU

Inference on WebAssembly

WASM has emerged as one of the best performance boosters for web apps, and it is now available in all major browsers. WASM enables developers to deliver performant experiences even on devices without a GPU. The image below shows how the libraries judged the red wine using WASM.

TensorFlow, ONNX, and WebDNN Inference on WebAssembly

ONNX.js and WebDNN both scored high here; figures such as 135ms (ONNX.js) and 328ms (WebDNN) aren’t too far from GPU performance. ONNX.js’s speed is due to its wise use of Web Workers to offload many calculations from the main thread.
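For reference, opting into the WASM backend is a small configuration change in both libraries. A sketch, with the model path again a placeholder:

```js
import { InferenceSession } from 'onnxjs';
import * as WebDNN from 'webdnn';

// ONNX.js: pass a backend hint when creating the inference session.
const session = new InferenceSession({ backendHint: 'wasm' });

// WebDNN: give load() an ordered list of preferred backends.
const runner = await WebDNN.load('/models/squeezenet', {
  backendOrder: ['webassembly'],
});
```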

Inference on WebGL

WebGL is based on OpenGL. It provides developers with a great API for performing complex calculations in an optimized way. All three libraries use WebGL as a backend to deliver boosted results.

TensorFlow, ONNX, and WebDNN Inference on WebGL

As shown above, ONNX.js takes the lead here with 48ms, compared to TensorFlow.js’s 69ms. WebDNN isn’t really in this race; its developers may be preparing for WebGL2 or perhaps focusing more on WebMetal.
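Selecting WebGL follows the same pattern as WASM above; a sketch for the two front-runners:

```js
import * as tf from '@tensorflow/tfjs';
import { InferenceSession } from 'onnxjs';

// TensorFlow.js: WebGL is typically the default backend where available,
// but it can be requested explicitly.
await tf.setBackend('webgl');

// ONNX.js: hint the session toward the WebGL backend.
const session = new InferenceSession({ backendHint: 'webgl' });
```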

Note: These results were obtained using Safari on a MacBook Pro (2018), 2.2GHz 6-Core Intel Core i7, 16GB 2400MHz DDR4, Intel UHD Graphics 630 1536MB.

Backends supported

There are four backends available in modern browsers:

  1. WebMetal — Compute on GPU via the WebMetal API. This is the fastest of the four backends, but it is currently supported only in Safari. Apple originally proposed this API as WebGPU in 2017 and renamed it WebMetal in 2019
  2. WebGL — Today, all major browsers ship with support for WebGL. It is up to 100 times faster than the vanilla CPU backend
  3. WebAssembly — A binary instruction format for a stack-based virtual machine, WebAssembly aims to execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms
  4. PlainJS — Compute on CPU in plain ECMAScript. This backend is only for backward compatibility and is not very fast
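Since availability varies by browser, a quick runtime check helps decide which backend to request. Below is a rough feature-detection sketch; the WebMetal check is approximate because the context name changed across experimental Safari builds:

```js
function detectBackends() {
  const canvas = document.createElement('canvas');
  return {
    // WebMetal: Safari-only and experimental; context name is approximate.
    webmetal: !!canvas.getContext('webmetal'),
    webgl: !!(
      canvas.getContext('webgl') || canvas.getContext('experimental-webgl')
    ),
    webassembly:
      typeof WebAssembly === 'object' &&
      typeof WebAssembly.instantiate === 'function',
    plainjs: true, // the ECMAScript fallback is always available
  };
}

console.log(detectBackends());
```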

All three libraries support both the CPU and WebGL backends. WebDNN takes the lead by also letting you leverage the experimental WebMetal backend. ONNX.js, meanwhile, smartly combines WASM and Web Workers to make CPU inference more efficient.

Backends Supported by TensorFlow, ONNX, and WebDNN

Browser support

Supporting all of the major browsers across different operating systems is a major challenge when handling heavy computational tasks. The chart below compares browser support for these libraries.

Browser Support for TensorFlow, ONNX, and WebDNN

Popularity and adoption

Popularity and adoption are also important parameters. The chart below shows the download trend for each of the three major libraries over a six-month period.

Popularity and Adoption of TensorFlow, ONNX, and WebDNN

(Source: npm trends)

As you can see, TensorFlow.js is far ahead in the race for adoption compared to other ML libraries available today. However, ONNX.js and WebDNN are ahead in performance, indicating a promising future for both.

Conclusion

TensorFlow.js, ONNX.js, and WebDNN all have their own advantages, and any one of them can serve as a strong foundation for your next AI-based web app. We found ONNX.js to be the most promising library when it comes to performance, while TensorFlow.js has the highest adoption rate. WebDNN, meanwhile, is focused on leveraging modern hardware and, as a result, has made significant improvements recently.

In addition to the three major libraries we compared in this post, you can also check out the following libraries to perform tasks other than image recognition in browsers:





