In this video tutorial we will go over how to do client-side inference in the browser with ONNX Runtime Web. Below is a video on how to understand and use a QuickStart template to start building out a static web app with an open source computer vision model. Additionally, you can find a written step-by-step tutorial in the onnxruntime.ai docs here. Let's learn a bit more about the library, ONNX Runtime (ORT), which lets us run inference in many different languages.
ONNX Runtime Web can execute models using WebGL for GPU processing or WebAssembly (WASM) for CPU processing.
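To make this concrete, here is a minimal sketch of running a model in the browser with ONNX Runtime Web. The model path (`./model.onnx`), input name (`input`), and tensor shape are assumptions for illustration; your model's actual input names and dimensions will differ, and you can inspect them on the created session.

```javascript
// Minimal ONNX Runtime Web inference sketch (assumes the onnxruntime-web
// package is installed and a model file is served alongside the page).
import * as ort from 'onnxruntime-web';

async function runModel() {
  // Create an inference session; 'wasm' uses WebAssembly on the CPU,
  // swap in 'webgl' to run on the GPU instead.
  const session = await ort.InferenceSession.create('./model.onnx', {
    executionProviders: ['wasm'],
  });

  // Hypothetical input: a 1x3x224x224 float32 image tensor filled with zeros.
  const data = new Float32Array(1 * 3 * 224 * 224);
  const inputTensor = new ort.Tensor('float32', data, [1, 3, 224, 224]);

  // 'input' is a placeholder name; check session.inputNames for your model.
  const results = await session.run({ input: inputTensor });
  return results;
}

runModel().then((results) => console.log(results));
```

The execution provider list is the key design choice here: ORT Web falls back through the providers in order, so you can prefer the GPU and still run on CPU-only devices.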
Check out the written tutorial here: ONNX Runtime Web Docs tutorial