This post is a short summary of our recent experiment with WASM & ML.
The goal was to run a simple ML model in WASM and then, possibly, benchmark it against existing JavaScript ML libraries. Our Machine Learning team provided a basic grayscale filter model in three formats: CoreML, TensorFlow, and PyTorch.
We tried PyTorch first as it looked very promising. We found good bindings for Rust - https://github.com/LaurentMazare/tch-rs - that seemed very easy to use. But after some tinkering it turned out that it isn't possible to export the whole model from PyTorch in a form that can be easily imported into Rust. We'd basically have to reimplement the model from scratch before loading the data the ML team provided. That was a bit too much for our experiment, so we moved on.
Next we looked at CoreML, but quickly found that there were virtually no libraries for Rust, or for anything else we could compile to WASM. I'm not sure if that's due to CoreML being made by Apple (and thus being proprietary), or whether the community just isn't big enough and there is no interest in libraries for other languages. If it's the latter, then maybe it's worth taking another look at it in the future.
Our last chance was TensorFlow, and it started out great. Even though it doesn't have first-class support for WebAssembly (it's a work in progress: https://github.com/tensorflow/tfjs/issues/1497), it does have official support for Rust - https://github.com/tensorflow/rust.
We were able to quickly prototype a simple app that loaded the TensorFlow model from a *.pb file, passed it an image as a tensor, and finally saved the output of the neural network to the file system. It worked really well, so we tried to port it to WebAssembly. For Rust that's really easy thanks to wasm-pack (https://github.com/rustwasm/wasm-pack).
Unfortunately, that's when we hit a major blocker. It turns out that TensorFlow depends on a library called "aligned_alloc" which wouldn't compile to WebAssembly, most likely due to some system-dependent functionality that can't be ported to the browser.
My conclusion for now is that porting big libraries to WASM is, more often than not, tricky. Differences in memory allocation create conflicts in low-level code, and some libraries simply won't compile. An alternative would be to write a neural network from scratch, using only basic libraries and language-level constructs. This approach was explored e.g. in https://ngoldbaum.github.io/posts/python-vs-rust-nn. It would most likely compile without issues, but it would take more time to develop.
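To make the trade-off concrete, here is a minimal sketch of what the from-scratch approach looks like: a single fully connected layer with a sigmoid activation, written against only the Rust standard library. Code like this has no system dependencies, so wasm-pack can compile it for the browser without the allocator issues described above; the cost is that every layer type the model needs would have to be written this way.

```rust
// One dense layer with a sigmoid activation, std-only so it compiles
// to WebAssembly without any native dependencies.

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// Forward pass: out[j] = sigmoid(biases[j] + sum_i(input[i] * weights[j][i])).
fn dense_forward(input: &[f32], weights: &[Vec<f32>], biases: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .zip(biases)
        .map(|(row, b)| {
            let sum: f32 = row.iter().zip(input).map(|(w, x)| w * x).sum();
            sigmoid(sum + b)
        })
        .collect()
}

fn main() {
    // Toy 2-input, 2-output layer with hand-picked weights; both weighted
    // sums come out to 0, so both outputs are sigmoid(0) = 0.5.
    let weights = vec![vec![1.0, -1.0], vec![0.5, -0.5]];
    let biases = vec![0.0, 0.0];
    let out = dense_forward(&[1.0, 1.0], &weights, &biases);
    println!("{:?}", out);
}
```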