DEV Community

Gourav Singh Rawat

Quick guide to FaceApi Machine learning model for web - ML5.js

What is ML5.js?

ml5.js is machine learning for the web, in your web browser. Through some clever and exciting advancements, the folks building TensorFlow.js figured out that it is possible to use the web browser's built-in graphics processing unit (GPU) to do calculations that would otherwise run very slowly on the central processing unit (CPU). ml5 strives to make all these new developments in machine learning on the web more approachable for everyone.

What I find amazing about ML5.js is that it's really easy for beginners to get started and it also gives a nice idea about running machine learning models.

Getting started

ML5 provides an amazingly easy face API to work with. It wraps face-api.js, which lets you access face and face-landmark detection.

Create a basic html page

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <title>face-api</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.1.9/p5.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.1.9/addons/p5.sound.js"></script>
    <script src="https://cdn.jsdelivr.net/gh/ml5js/Intro-ML-Arts-IMA@ml5-build-10-7-19/ml5_build/ml5.min.js"></script>
    <link rel="stylesheet" type="text/css" href="style.css" />
  </head>
  <body>
    <script src="sketch.js"></script>
  </body>
</html>

We are also importing p5.js, as ML5.js works best with it.
Once we have imported all the necessary things, we can get started with our sketch.js.

So, the basic idea is that we create a video element when the sketch starts, then repeatedly run face detection on it and draw the results on top.

// initialized variables
let faceapi;
let detections = [];

let video;
let canvas;

function setup() {
  canvas = createCanvas(1080, 720); // canvas window
  canvas.id("canvas");

  // getting video of user (VIDEO is a p5.js constant)
  video = createCapture(VIDEO);
  video.id("video");
  video.size(width, height);

  // making face details
  const faceOptions = {
    withLandmarks: true,
    withExpressions: true,
    withDescriptors: true,
    minConfidence: 0.5,
  };

  //Initialize the model:
  faceapi = ml5.faceApi(video, faceOptions, faceReady);
}

We initialised some variables like video and canvas, in which we set up our video element. We created a faceOptions object that tells ML5 which details to return about each detected face (landmarks, expressions, descriptors) and the minimum detection confidence. We used ml5.faceApi() for this project as it is made for detecting faces. Once the model has loaded, the faceReady callback is called.

// model loaded: start detecting faces
function faceReady() {
  faceapi.detect(gotFaces);
}

Above, gotFaces is a callback function, so we'll write another function that runs once faceApi has detection results.
This is the tricky part!

// Got faces:
function gotFaces(error, result) {
  if (error) {
    console.log(error);
    return;
  }

  detections = result; // now all the face data is in detections

  clear(); // draw a transparent background
  drawBoxs(detections); // draw the detection box
  drawLandmarks(detections); // draw all the face points

  faceapi.detect(gotFaces); // call detect again for the next frame
}

Once we get the face details, we store them in the detections variable and clear the previous on-screen renders left over from the drawing step.
Next we need to draw a box and face landmarks over our user's face.

Here we create two functions that draw over the user's face.
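Before drawing, it helps to see the shape of one entry in the detections array. The snippet below is a hypothetical sample I've written by hand, mirroring only the fields this sketch reads; real ml5/face-api.js results contain many more fields, and the underscore-prefixed names may vary between versions.

```javascript
// Hypothetical sample detection (hand-written, not real ml5 output).
// It mirrors only the fields the drawing functions below read.
const sampleDetection = {
  alignedRect: {
    _box: { _x: 100, _y: 80, _width: 200, _height: 220 },
  },
  landmarks: {
    positions: [
      { _x: 150, _y: 120 },
      { _x: 160, _y: 125 },
    ],
  },
};

// This is exactly how the drawing code pulls the box out:
const { _x, _y, _width, _height } = sampleDetection.alignedRect._box;
console.log(_x, _y, _width, _height); // 100 80 200 220
```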

function drawBoxs(detections) {
  if (detections.length > 0) {
    //If at least 1 face is detected:
    for (let f = 0; f < detections.length; f++) {
      let { _x, _y, _width, _height } = detections[f].alignedRect._box;
      stroke(44, 169, 225);
      strokeWeight(1);
      noFill();
      rect(_x, _y, _width, _height);
    }
  }
}

Above, we check the detections array: if at least one face was detected, we draw a box using the coordinates provided by the ML5 library. Remember the clear() in our gotFaces() function? We did that so that once a face is detected we draw a box around it, and a few frames later we clear that box and re-draw it, updating its coordinates.
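To make the clear-then-redraw ordering concrete, here is a tiny mock of one gotFaces() pass, in plain JavaScript. None of these functions are the real ml5 or p5 APIs; they just log the order of operations.

```javascript
// Minimal mock of one gotFaces() pass (illustration only, no ml5 involved).
// Each stand-in just records that it ran, so we can inspect the order.
const log = [];
const clear = () => log.push("clear");
const drawBoxs = () => log.push("drawBoxs");
const drawLandmarks = () => log.push("drawLandmarks");
const detectAgain = () => log.push("detect");

function gotFacesMock(error, result) {
  if (error) return;
  clear();               // wipe the previous frame's box and points
  drawBoxs(result);      // draw at the new coordinates
  drawLandmarks(result);
  detectAgain();         // schedule the next detection
}

gotFacesMock(null, []);
console.log(log.join(" -> ")); // clear -> drawBoxs -> drawLandmarks -> detect
```

If you drew before clearing, every frame's box would pile up on screen instead of following the face.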

Now, to create the face landmarks, we do something similar.

function drawLandmarks(detections) {
  if (detections.length > 0) {
    //If at least 1 face is detected:
    for (let f = 0; f < detections.length; f++) {
      let points = detections[f].landmarks.positions;
      for (let i = 0; i < points.length; i++) {
        stroke(47, 255, 0); // points color
        strokeWeight(5); // points weight
        point(points[i]._x, points[i]._y);
      }
    }
  }
}

Here, the landmark points and their coordinates are returned by the ML5 API.
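As an aside, even if you only had the landmark points, you could derive a bounding box yourself by taking the min/max of their coordinates. The helper below is purely illustrative (it is not an ML5 function), but it uses the same { _x, _y } point shape the API returns.

```javascript
// Illustrative helper (not part of ML5): derive a bounding box from
// landmark points shaped like { _x, _y }, as returned by the API.
function boxFromPoints(points) {
  const xs = points.map((p) => p._x);
  const ys = points.map((p) => p._y);
  const x = Math.min(...xs);
  const y = Math.min(...ys);
  return { x, y, width: Math.max(...xs) - x, height: Math.max(...ys) - y };
}

const demo = boxFromPoints([
  { _x: 10, _y: 20 },
  { _x: 50, _y: 60 },
  { _x: 30, _y: 25 },
]);
console.log(demo); // { x: 10, y: 20, width: 40, height: 40 }
```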

Add some basic CSS to center the canvas.

body {
  background-color: #000;
}

#canvas {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  z-index: 1;
}

#video {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  z-index: 0;
  border: 3px #fff solid;
  border-radius: 10px;
}

And our face detection application is ready!
I hope you liked this basic starter guide. Thanks for reading!
