Nishka Kotian
Fruit quality detection web app using SashiDo and Teachable Machine

Hello! I recently built a web application that can identify whether a fruit (apple, orange, or banana) is fresh or rotten, and I'd like to share how I went about it. I thought this was an interesting idea with real-life applications: an automated tool that can scan fruits and discard the spoilt ones would be really beneficial in the agriculture industry. I used Google's Teachable Machine to train a machine learning model and SashiDo for storing images. Users can either upload an image or use their webcam to get prediction results.

Here's a short demo video showing how the website works.


Teachable Machine

For classifying fruits, the first step is to generate an ML model. Teachable Machine is a web-based tool that can be used to generate three types of models based on the input type, namely Image, Audio, and Pose. I created an image project and uploaded images of fresh as well as rotten samples of apples, oranges, and bananas, taken from a Kaggle dataset. I resized the images to 224×224 using OpenCV and took only 100 images in each class.

Upload images to train model

There are a few advanced settings for epochs, learning rate, and batch size, but I felt the defaults were good enough for the task. After training, I exported the model and uploaded it. This stores it in the cloud and gives a shareable public link which can then be used in the project.

Exporting model

The next step is to use the model to perform classification. There are two ways of providing input; we shall go through both of them.


SashiDo

SashiDo is a beautiful backend-as-a-service platform with a lot of built-in functions. In this project, I've used only the Files functionality to store images uploaded by users. I agree that this isn't strictly necessary, but it is a great way to obtain more samples from the public and build a better dataset. To connect the application with SashiDo, copy the code from the getting started page in SashiDo's Dashboard into the JavaScript file, and also add the following script.

<script src=></script>

Connect SashiDo to the web app
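For reference, the getting-started snippet is essentially a Parse initialization call. The keys and server URL below are placeholders, not real credentials; copy the actual values from your own SashiDo Dashboard:

```javascript
// Placeholder credentials — replace with the values from SashiDo's getting started page.
Parse.initialize("YOUR_APP_ID", "YOUR_JS_KEY");
Parse.serverURL = "https://pg-app-xxxxxxxx.scalabl.cloud/1/";
```

Once this runs, the `Parse` object in the browser is pointed at your SashiDo app, and file saves later in the code go to its cloud storage.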


I've created two buttons to start/stop the webcam and to upload an image, an input element for file upload, and three empty divs to display the webcam input, the uploaded image, and the output (prediction result). I have used Bootstrap, so in case you're not familiar with it, the class names basically correspond to its various utilities.

<label for="webcam" class="ps-3 pt-3 pb-3">USE WEBCAM:</label>
<button id="webcam" type="button" class="btn btn-outline-primary ms-3" onclick="useWebcam()">Start webcam</button><br />
<label class="p-3" for="fruitimg">UPLOAD IMAGE:</label>
<div class="input-group px-3 pb-3" id="inputimg">
    <input type="file" class="form-control" accept="image/*" id="fruitimg">
    <button class="btn btn-outline-primary" id="loadBtn">Load</button>
</div>
<div id="webcam-container" class="px-3"></div>
<div id="uploadedImage" class="px-3"></div>
<div id="label-container" class="px-3 pt-3"></div>

Website design

Webcam based prediction

Web cam image predicts fresh banana

The model can be used in our JavaScript project easily using the Teachable Machine library for images. To use the library, just add the following scripts at the bottom of the HTML file. Alternatively, you could also install the library from NPM.

<script src=""></script>
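As a reference point, the Teachable Machine export dialog typically provides a TensorFlow.js script and the image library script from a CDN, along the lines of the following (the exact versions may differ from what the export page gives you):

```html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image@0.8/dist/teachablemachine-image.min.js"></script>
```

If you go the NPM route instead, the package is `@teachablemachine/image`, and the same `tmImage` API is available as a module import.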

The following code helps in toggling the webcam button and declares some variables.The URL constant is set to the model link.

const URL = "";

let model, webcam, newlabel, canvas, labelContainer, maxPredictions, camera_on = false, image_upload = false;

function useWebcam() {
    camera_on = !camera_on;

    if (camera_on) {
        init(); // load the model and start the webcam loop
        document.getElementById("webcam").innerHTML = "Close Webcam";
    }
    else {
        stopWebcam();
        document.getElementById("webcam").innerHTML = "Start Webcam";
    }
}

async function stopWebcam() {
    await webcam.stop();
}

Now, we can load the model, perform the prediction, and display the class with the highest probability.

// Load the image model and setup the webcam
async function init() {

    const modelURL = URL + "model.json";
    const metadataURL = URL + "metadata.json";

    // load the model and metadata
    model = await tmImage.load(modelURL, metadataURL);
    maxPredictions = model.getTotalClasses();

    // Convenience function to setup a webcam
    const flip = true; // whether to flip the webcam
    webcam = new tmImage.Webcam(200, 200, flip); // width, height, flip
    await webcam.setup(); // request access to the webcam
    await webcam.play();
    window.requestAnimationFrame(loop);

    // append elements to the DOM
    document.getElementById("webcam-container").appendChild(webcam.canvas);
    newlabel = document.createElement("div");
    labelContainer = document.getElementById("label-container");
    labelContainer.appendChild(newlabel);
}

async function loop() {
    webcam.update(); // update the webcam frame
    await predict(webcam.canvas);
    window.requestAnimationFrame(loop); // keep predicting on every frame
}

async function predict(input) {
    // predict can take in an image, video or canvas html element
    const prediction = await model.predict(input);

    var highestVal = 0.00;
    var bestClass = "";
    result = document.getElementById("label-container");
    for (let i = 0; i < maxPredictions; i++) {
        var classPrediction = prediction[i].probability.toFixed(2);
        if (classPrediction > highestVal) {
            highestVal = classPrediction;
            bestClass = prediction[i].className;
        }
    }

    if (bestClass == "Fresh Banana" || bestClass == "Fresh Apple" || bestClass == "Fresh Orange") {
        newlabel.className = "alert alert-success"; // green alert for fresh fruit
    }
    else {
        newlabel.className = "alert alert-danger"; // red alert for rotten fruit
    }

    newlabel.innerHTML = bestClass;
}
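The class-selection step above can also be written as a small pure helper, which makes it easy to test in isolation. The prediction objects below are made-up sample output, not real model results:

```javascript
// Pick the label with the highest probability from a Teachable Machine
// style prediction array ({ className, probability } objects).
function bestPrediction(predictions) {
    let best = { className: "", probability: 0 };
    for (const p of predictions) {
        if (p.probability > best.probability) {
            best = p;
        }
    }
    return best;
}

// Hypothetical model output for a spoilt banana:
const sample = [
    { className: "Fresh Banana", probability: 0.07 },
    { className: "Rotten Banana", probability: 0.91 },
    { className: "Fresh Apple", probability: 0.02 },
];
console.log(bestPrediction(sample).className); // "Rotten Banana"
```

Keeping the comparison on the raw `probability` (rather than a `toFixed` string) avoids relying on JavaScript's string-to-number coercion.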

Uploaded image based prediction

Uploaded image predicting it as rotten orange

The second way of providing input is by uploading an image. I've used a little bit of jQuery code to do this. Essentially, once a user selects an image file using the input element on the client side and clicks Load, the reference to the file is obtained using a click handler and a new Parse file is created. A Parse file lets us store application files in the cloud that would be too large to store in an object. Next, I created a canvas element to display the saved image and used it to predict the class of the uploaded image.

$(document).ready(function () {
    $("#loadBtn").on("click", async function () {

        labelContainer = document.getElementById("label-container");

        image_upload = !image_upload;

        if (!image_upload) {
            // second click toggles the display off: clear the previous output
            labelContainer.innerHTML = "";
            document.getElementById("uploadedImage").innerHTML = "";
            return;
        }

        const fileUploadControl = $("#fruitimg")[0];
        if (fileUploadControl.files.length > 0) {

            const modelURL = URL + "model.json";
            const metadataURL = URL + "metadata.json";

            // load the model and metadata
            model = await tmImage.load(modelURL, metadataURL);
            maxPredictions = model.getTotalClasses();

            const file = fileUploadControl.files[0];

            const name = "photo.jpg";
            const parseFile = new Parse.File(name, file);

            // save the image to the Parse server
            parseFile.save().then(function () {
                // The file has been saved to the Parse server

                img = new Image(224, 224);
                img.crossOrigin = "Anonymous";
                img.addEventListener("load", getPredictions, false);
                img.src = parseFile.url();

            }, function (error) {
                // The file either could not be read, or could not be saved to Parse.
                labelContainer.innerHTML = "Uploading your image failed!";
            });
        }
        else {
            labelContainer.innerHTML = "Try Again!";
        }
    });
});

In the code below, a canvas is created to display the image, and prediction is done using the same predict function that was used for the webcam.

async function getPredictions() {

    canvas = document.createElement("canvas");
    var context = canvas.getContext("2d");
    canvas.width = 224;
    canvas.height = 224;
    context.drawImage(img, 0, 0, canvas.width, canvas.height); // scale the image to 224x224
    document.getElementById("uploadedImage").appendChild(canvas); // show the saved image

    newlabel = document.createElement("div");
    labelContainer = document.getElementById("label-container");
    labelContainer.appendChild(newlabel);

    await predict(canvas);
}

That's it! Any fruit can now be tested for defects.


I had a lot of fun making this project and learnt a lot doing it. I hadn't used SashiDo or Teachable Machine before, so this was a nice opportunity to learn about them. I hope you enjoyed reading this. I think this is a pretty simple project, so if you have some time and are interested, go ahead and try building it yourself!

GitHub repo

Check out the project here


SashiDo -
Teachable Machine -
Teachable Machine library -
Dataset -
Parse SDK -
Parse File -
