Jen Looper for Microsoft Azure

Ice Cream Or Dalmatian? Who Can Tell?! Building a Machine-Learning Powered PWA

Tricky images courtesy of Karen Zack

Create a PWA for Image Inference Using Vue.js and Custom Vision AI

Tip! Do you want to try this tutorial in a cool interactive environment and earn a badge for completing it? Check out the companion module on Microsoft Learn!

I've been fascinated for a long time - inexplicably so - by a series of Instagram images created by Karen Zack (@teenybiscuit on Insta), who has provided a wonderful service to a world gripped by heavy news recently: the ability to tell parrots from guacamole, dachshunds from bagels, Labradoodles from fried chicken, and much more. Here is a sample of this seminal work:

Kitten Vs. Caramel

Puppy Vs. Bagel

Parrot Vs. Guacamole

Note: you can build your app by following along in Microsoft Learn, where this content is explained in more depth. Take a look! The entire codebase can also be found here.

Working with these kinds of tricky images helps us both to lighten our mood and to discover new ways of testing machine-learning methods for image recognition and classification. In this tutorial, you'll learn how to use a snappy tool for image inference called Custom Vision AI so you don't have to train a model from scratch. You'll build a web app that cycles through these images to determine whether the model can make a correct prediction.

Using a cognitive service that builds on pretrained models is a great way to dip your toe into machine learning. You get to use one of the excellent Azure cognitive services (my favorite cognitive services ML platform), build a completely useless web app (my favorite kind), and have some fun doing it (my favorite activity). Let's get started!

Bonus! We're going to turn this app into a PWA (Progressive Web App). This kind of app works offline and on your mobile phone, even while leveraging an ML model - no API calls outside the app will be made! We might as well learn how to do this, since it's a neat way to build an ML-infused app and a good thing to know how to do.

Scaffold your PWA

First, spin up a plain vanilla Vue.js app. Use the snazzy vue ui or start from the Vue CLI, assuming you have all the necessary requirements installed on your local machine. If you're using the Vue CLI via vue create my-tricky-app, manually select the features you want to install into your app, making sure to select 'PWA':

A basic Vue web site will be created with some extra files that control the behavior of your PWA. Specifically, these files include:

  • registerServiceWorker.js
  • service-worker.js
  • several icons for use on various platforms
  • manifest.json in the /public folder that manages these icons and other important elements (a minimal sketch follows this list)
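
For orientation, here's a minimal sketch of what a manifest.json might contain; the exact fields and values the Vue CLI generates will differ, so treat this as illustrative only:

{
  "name": "my-tricky-app",
  "short_name": "tricky-app",
  "start_url": "index.html",
  "display": "standalone",
  "theme_color": "#4DBA87",
  "icons": [
    {
      "src": "./img/icons/android-chrome-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}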

You can use your browser's Developer Tools > Audits pane to run a Lighthouse audit on your web app. This will reveal interesting data, such as how the app measures up in an offline scenario. Note: you need to run a production build of your app on a web server to get a proper Lighthouse audit (for example, by running npm run build and serving the resulting dist folder with any static file server), so you can come back to this step later for this more detailed information.

Now you're ready to build out the part of the app that runs inference, testing various images against the machine learning model you're going to build in Custom Vision AI. You're going to use the images Karen Zack used to create her Dalmatians vs. Ice Cream collage, which I've separated into 16 images. You can download them here and include them in your local /src/assets/images folder.

For a more detailed explanation of the reasoning behind the following code snippets, please visit the Learn module.

Build Your Interface

Rename the default <HelloWorld/> page to <DetectImage/> and reference it as such in App.vue (a minimal sketch of the updated App.vue follows the template below). Then, create a simple template with a header, image and button, changing the value of the msg prop to What do you see?:

  <template>
    <div class="hello">
      <h1>{{ msg }}</h1>
      <div>
        <img class="image" ref="img" :src="require('../assets/images/' + getImgIndex + '.jpg')" />
      </div>
      <div>
        <button class="button" @click="next()" :disabled="disable">Next</button>
      </div>
      <div
        v-for="pred in predictions"
        :key="pred.index"
      >{{ pred.label }}: {{ pred.probability.toFixed(0) + '%' }}</div>
      <div v-if="!predictions.length">hmm.....</div>
    </div>
  </template>
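
For reference, a minimal App.vue wiring in the renamed component might look like the sketch below; your generated file will have more markup and styles:

<template>
  <div id="app">
    <!-- pass the new msg prop value into the renamed component -->
    <DetectImage msg="What do you see?" />
  </div>
</template>

<script>
import DetectImage from "./components/DetectImage.vue";

export default {
  name: "App",
  components: {
    DetectImage
  }
};
</script>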

Now you need to get your app ready to host a model trained in Custom Vision AI.

Train Your Model

Here's where the fun starts. Go to CustomVision.ai and sign in. Create a project on a resource group available to you (create one if you don't have one handy). Create it as a classification project, since you're performing a binary classification. Select Multiclass, as there's only one tag per image, and choose the General (compact) domain so that you can use your model on the web. Export it for a Basic platform, as you'll use it within a TensorFlow.js-powered context.

create a project

Now you're going to teach the pretrained model a little about ice cream and dalmatians! To do this, you're going to need several images of these things - start with about ten of each class. I searched for 'chocolate chip ice cream' and used a cool extension to scrape the images off the web page to create a training image set.

Note: I trained on only six images per class. This is of course too small a set for accurate transfer learning, but Custom Vision AI still handles it pretty well. For a production-caliber model, you would want a much bigger image set.

Save your ice cream and dalmatian images in two separate folders, one per class (ice cream and dalmatian), on your local machine. In the Custom Vision AI interface, drag and drop your folders, one at a time, into the web page. Tag the dog images dalmatian and the ice cream images ice cream:

tag your images

When your images are uploaded and tagged, you can start the training routine. Select the Train button and watch your model build! When it's done, you'll see its accuracy. Test it against a new image of a cute doggo. How accurate is your model?

Now you can download the model files that were generated and place them in your web app in public/models:

  • cvexport.manifest
  • labels.txt (a plain list of your tags; see the example below)
  • model.json
  • weights.bin
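
For reference, labels.txt is just your tags, one per line. With the two tags from this tutorial it would look something like this (check your own file for the exact order, since it must match the model's output):

dalmatian
ice cream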

Why place these files in /public, rather than in /assets or elsewhere? The public folder in a Vue.js app is a place to store static assets that should not be processed by webpack. The four files produced by Custom Vision AI's build process need to stay untouched and be served as-is by your app without being bundled by webpack. Since everything in /public is served from the site's root at runtime, the code below can load the model simply by requesting models/model.json.

Now you can use these in your web app.

Complete The Web App

You need to install a few libraries via npm to support the use of the machine learning files.

  1. In your package.json file in the root of your web app, add "customvision-tfjs": "^1.0.1", to the dependencies list.
  2. In the same file, also add "raw-loader": "^4.0.0", to the devDependencies list. You need this package to manage reading .txt files in your Vue app.
  3. In the same file, finally add "webpack-cli": "^3.3.10" to the devDependencies list so that the webpack CLI will be usable within the app, which is also necessary for text file parsing (the resulting package.json fragment is shown after this list).
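
After these edits, the relevant parts of your package.json should include entries like these (a fragment, not your whole file):

"dependencies": {
  "customvision-tfjs": "^1.0.1"
},
"devDependencies": {
  "raw-loader": "^4.0.0",
  "webpack-cli": "^3.3.10"
}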

Fun fact! CustomVision-tfjs uses TensorFlow.js under the hood.

In your terminal in VS Code, stop your app if it's currently running (ctrl-c) and install these packages by running npm install. Now you can start building the <script> area of your app.

Under the <template>'s closing tag, create a new <script> tag with the following code:

<script>
import * as cvstfjs from "customvision-tfjs";
import labels from "raw-loader!../../public/models/labels.txt";
export default {
  name: "DetectImage",
  props: {
    msg: String
  },
  data() {
    return {
      labels: labels,
      model: null,
      predictions: [],
      image: 0,
      numImages: 16
    };
  },
  computed: {
    getImgIndex() {
      return this.image.toString();
    },
    disable() {
      return this.image === this.numImages;
    }
  },

  async mounted() {
    this.image++;
    //load up a new model
    this.model = new cvstfjs.ClassificationModel();
    await this.model.loadModelAsync("models/model.json");
    //parse labels
    this.labels = labels.split("\n").map(e => {
      return e.trim();
    });
    //run prediction
    this.predict();
  },

  methods: {
    async predict() {
      //execute inference
      let prediction = await this.model.executeAsync(this.$refs.img);
      let label = prediction[0];
      //build up a predictions object
      this.predictions = label.map((p, i) => {
        return { index: i, label: this.labels[i], probability: p * 100 };
      });
    },

    next() {
      this.image++;
      this.predictions = [];
      setTimeout(this.predict, 500);
    }
  }
};
</script>

Let's walk through this code. First, we import cvstfjs from the npm library we installed earlier, to help manage the Custom Vision models we built.

Then, we load the labels.txt file. This makes use of the raw-loader package. You need to tell webpack how to handle this type of text file, so add a new file called webpack.config.js to your root, if it's not already there, with the following code:

module.exports = {
  module: {
    rules: [
      {
        test: /\.txt$/i,
        use: 'raw-loader',
      },
    ],
  },
};

Your data object stores the variables you will use in the inference methods.

There are also some computed properties. These drive various UI elements, such as the index of the image being shown and whether the Next button should be disabled because there are no more images to show.

In the asynchronous mounted lifecycle hook, you load your model. Models can be large, so it's best to wait until the model, along with the labels file (which must also be parsed), has loaded before starting inference. Finally, when everything is ready, you call predict().

The predict() method is also asynchronous, and uses Custom Vision's npm library to match predictions to labels. After a prediction is made, the Next button can be clicked and prediction can start on the following image. Note: you use setTimeout to delay the prediction until the new image has loaded.
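
For intuition, here is roughly what predict() works with, assuming a two-label model where labels.txt lists dalmatian first (actual numbers will vary):

// executeAsync returns an array whose first element holds the class
// probabilities, in the same order as the entries in labels.txt:
// prediction = [[0.97, 0.03]]
// After the map in predict(), this.predictions looks like:
// [
//   { index: 0, label: "dalmatian", probability: 97 },
//   { index: 1, label: "ice cream", probability: 3 }
// ]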

Once you are satisfied with the performance of your model and its predictions, you can publish your web app to a hosting provider such as Azure websites.

Remember how your app is a PWA? Once your app is built and published, you can switch to 'offline' mode using DevTools and watch how you can continue to use the inference methods. Custom Vision AI also allows you to create hosted endpoints for your models, but calling them requires a network connection and adds latency.
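
For comparison, a call to a hosted Custom Vision prediction endpoint would look something like the sketch below; the endpoint URL, project ID, iteration name, and key are all placeholders you would copy from the Custom Vision portal:

// Sketch only - every value in angle brackets is a placeholder
async function predictRemote(imageUrl) {
  const endpoint =
    "https://<your-resource>.cognitiveservices.azure.com/customvision/v3.0/Prediction/<project-id>/classify/iterations/<iteration-name>/url";
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Prediction-Key": "<your-prediction-key>",
      "Content-Type": "application/json"
    },
    // the /url variant takes a JSON body pointing at a publicly reachable image
    body: JSON.stringify({ Url: imageUrl })
  });
  return response.json(); // includes a predictions array of tags and probabilities
}

The embedded-model approach used in this tutorial avoids this round trip entirely, which is what makes the offline PWA scenario possible.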

Publishing your app to Azure Websites

The absolute easiest way to do this is via a GitHub Action. Follow these instructions to create a workflow and connect the Azure portal to GitHub. Every time a change is made to your app, it will be rebuilt. It's a good way to refresh your models simply with a push to GitHub.

But wait! If you do publish to Azure, you need one more file in your root: a web.config file that will allow .json files to be served. Create this file and add this code to it:

<?xml version="1.0" encoding="utf-8"?>
  <configuration>
    <system.webServer>
      <staticContent>
        <remove fileExtension=".json"/>
        <mimeMap fileExtension=".json" mimeType="application/json"/>
      </staticContent>
    </system.webServer>
</configuration>

Oh, and one more thing! The last change you need to make is to allow the service worker to build properly. Create one more file in your app's root folder called vue.config.js, containing this code:

module.exports = {
    pwa: {
        workboxOptions: {
            exclude: [/\.map$/, /web\.config$/],
        },
    },
};

This file tells the service worker to ignore the web.config file you added earlier, whose existence causes problems for the service-worker build process.

Now you can watch your app working both on and offline when it's published to a web server!

a dalmatian in a web site

Conclusion

In this article, you learned how to build a Vue.js web app powered by a machine learning model that can also work offline, as it's a PWA with embedded model files. Moreover, you learned how to deploy such an app to Azure itself, a true end-to-end solution for your image inference needs. I hope you try CustomVision.ai when looking for a nice solution for image processing, as it's a superb way to handle image inference, which is not easy to build from scratch. Please let me know what you build in the comments below! And if you'd like to watch a video of me explaining some of the elements that went into building this app, check out the video below.

Top comments

Andre Heim

hi Jen, many thanks for this great example. I created the app locally and it works fine. When I upload it to Azure App Service via GitHub Actions, the process runs through without any problem, but the prediction part of the web app does not work and produces the error:

Request to /models/model.json failed with status code 404. Please verify this URL points to the model JSON of the model to load.

It seems as if the model.json file isn't there, but looking with Kudu I can see the folder and the file. Any hints as to what could be wrong?

Jen Looper

hi, I think you figured it out? I think there's a path issue in your /models folder?