Today TensorFlow Lite is available as a library for both iOS and Android, with Swift and Kotlin bindings, and this is great if all you need is to run inference with a model. But what if your pipeline is more complicated, such as running various image processing tasks before or after using the model output? In that case it is more efficient to develop the entire pipeline once in C++ and use it on both iOS and Android.
In this video series we will see how to run inference in C++ using the TensorFlow Lite C API and OpenCV. We'll also see how to use that code later on iOS, Android, and Windows.
In this video we'll see how to develop an ObjectDetector class in C++ that will be used across all platforms.
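To give a feel for what the cross-platform C++ core looks like, here is a minimal sketch of running inference through the TensorFlow Lite C API. This is not the video's actual ObjectDetector implementation; the model path `"model.tflite"` and the float input/output assumption are placeholders, and real code would fill the input buffer with preprocessed image data (e.g. from an OpenCV `cv::Mat`) and parse detections from the output.

```cpp
// Sketch: load a .tflite model and run one inference via the C API.
// Assumes a model with float32 input and output tensors (placeholder).
#include <cstdio>
#include <vector>
#include "tensorflow/lite/c/c_api.h"

int main() {
  // "model.tflite" is a placeholder path, not from the video.
  TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
  if (!model) { std::fprintf(stderr, "failed to load model\n"); return 1; }

  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreterOptionsSetNumThreads(options, 2);
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
  TfLiteInterpreterAllocateTensors(interpreter);

  // Copy preprocessed image data (e.g. from OpenCV) into the input tensor.
  TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
  std::vector<float> input_data(TfLiteTensorByteSize(input) / sizeof(float));
  TfLiteTensorCopyFromBuffer(input, input_data.data(),
                             input_data.size() * sizeof(float));

  TfLiteInterpreterInvoke(interpreter);

  // Read the raw results back out of the first output tensor;
  // an object detector would decode boxes/scores/classes from here.
  const TfLiteTensor* output = TfLiteInterpreterGetOutputTensor(interpreter, 0);
  std::vector<float> output_data(TfLiteTensorByteSize(output) / sizeof(float));
  TfLiteTensorCopyToBuffer(output, output_data.data(),
                           output_data.size() * sizeof(float));

  // The C API requires explicit cleanup in this order.
  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  return 0;
}
```

Because this uses the plain C API rather than the Swift or Kotlin wrappers, the same code compiles unchanged for iOS, Android (via the NDK), and Windows.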
We will also test our detector on Windows.