Let’s talk about machine learning and cryptography. Are they a match?
Let’s imagine that you work in ML.
You trained fantastic ML models that add cat ears =^..^= (nekomimi) to everyone in a video. You decided to make an app out of it! Suddenly, your app became popular, and some people wanted to copy it. So, you’d better protect your ML models from leakage and misuse.
Simplified, it works like this: users upload their videos to your app. Your app sends them to your backend, which generates a video-specific ML model and returns it to the app. Your application then stores and executes it.
Being a 💪 security pro, you understand that ML models need protection. But from a data security perspective, an ML model is… just a file with model data and a procedure/algorithm. So, you adjust your security efforts to protect those tiny ML models—from the point where they are generated to the point where they are used.
You carefully add encryption: the backend encrypts each ML model per user, per video, using ephemeral keys in an HPKE-like scheme. That means every ML model is explicitly encrypted for a specific video by your backend code. This approach is known as application-level encryption (ALE).
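The backend-side sealing step could look roughly like this in Python—a minimal sketch, assuming X25519 + HKDF-SHA256 + ChaCha20-Poly1305 as the HPKE-like primitives and the third-party `cryptography` package. `seal_model` and the `info` label are illustrative names, not the app’s real API:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def seal_model(recipient_pub, model_bytes: bytes):
    """Encrypt one ML model for one device, under a fresh ephemeral key."""
    eph_priv = X25519PrivateKey.generate()      # ephemeral: used exactly once
    shared = eph_priv.exchange(recipient_pub)   # ECDH shared secret
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"nekomimi-model-v1").derive(shared)
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, model_bytes, None)
    eph_pub = eph_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    # Ship the ephemeral public key with the nonce and ciphertext; the
    # ephemeral private key is dropped, so the symmetric key is never reused.
    return eph_pub, nonce, ciphertext
```

The device’s public key would be registered with the backend beforehand; each call derives a brand-new symmetric key, so two encryptions of the same model never share key material.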
Your mobile apps receive the encrypted model and decrypt it just before use. Each ML model is encrypted with a unique, single-use key, which makes attackers’ lives harder.
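On the device side, decryption reverses the same derivation. Again a sketch under the same assumptions (X25519 + HKDF-SHA256 + ChaCha20-Poly1305 via the `cryptography` package, illustrative names); a compact backend-side `seal_model` is included only so the example runs end to end:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def _derive_key(shared: bytes) -> bytes:
    # Both sides must derive the key identically from the ECDH secret.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"nekomimi-model-v1").derive(shared)


def seal_model(recipient_pub, model_bytes: bytes):   # backend side (demo only)
    eph_priv = X25519PrivateKey.generate()
    key = _derive_key(eph_priv.exchange(recipient_pub))
    nonce = os.urandom(12)
    ct = ChaCha20Poly1305(key).encrypt(nonce, model_bytes, None)
    return (eph_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw),
            nonce, ct)


def open_model(device_priv, eph_pub_bytes, nonce, ciphertext):
    """Device side: recompute the one-time key and decrypt the model."""
    eph_pub = X25519PublicKey.from_public_bytes(eph_pub_bytes)
    key = _derive_key(device_priv.exchange(eph_pub))
    # Authenticated decryption: raises InvalidTag if anything was tampered with.
    return ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)
```

In a real app, the plaintext model would go straight into memory for execution, with the device’s long-term private key guarded by Keychain/Keystore.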
Indeed, you don’t want to rely on encryption alone.
So, you add multiple protection measures: Keychain/Keystore on the device, logging and monitoring on the server, and an anti-fraud system that prevents sending ML models to untrusted users.
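The anti-fraud gate on the server could start as something as simple as a rate limiter in front of model delivery. A toy sketch—class name, threshold, and blocking policy are all hypothetical; a real anti-fraud system would weigh many more signals:

```python
import time
from collections import defaultdict


class ModelReleaseGate:
    """Toy server-side gate: refuse to hand ML models to suspicious users."""

    def __init__(self, max_per_hour: int = 20):
        self.max_per_hour = max_per_hour
        self.requests = defaultdict(list)   # user_id -> request timestamps
        self.blocked = set()

    def allow(self, user_id: str, now=None) -> bool:
        if user_id in self.blocked:
            return False
        now = time.time() if now is None else now
        # Keep only requests inside the last hour.
        window = [t for t in self.requests[user_id] if now - t < 3600]
        if len(window) >= self.max_per_hour:
            self.blocked.add(user_id)       # too many model pulls: flag user
            return False
        window.append(now)
        self.requests[user_id] = window
        return True
```

The backend would call `allow()` before sealing and sending a model; a blocked user never receives fresh key material at all.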
Curious to learn the details?
Dive into the full video to learn more about cryptography, cloud storage security, API protection, the anti-fraud system, and more.
Sounds too complicated just to protect a =^..^= cat-ears ML model?
Well, imagine a financial analytics ML model instead. We’ve built such ML-protection technologies several times, and it was exciting every time.