
Álvaro

A tale of Tensorflow.js - Chapter 2: Models

Hey everyone, it's Álvaro, and today we are going to start chapter 2 of our Tensorflow.js journey.
And today: models!

Today begins the fun part.

If you didn't read the first part, go ahead and start from there.


First of all: I created an HTML boilerplate for the rest of the series here:

GitHub: AlvaroJSnish / A-tale-of-Tensorflow

Repo for our series on tensorflow in https://dev.to

You can fork it and switch to the boilerplate branch.

Now let's start, but we need a bit of theory first:

Machine Learning introduces a new way of thinking and coding.
We are used to making apps where we fetch data and process it through a lot of rules (ifs, elses, conditions, etc.) to get answers about that data.

With ML everything is different. We already know the answers to the questions we have about the data, and we give our machines those answers; their job now is to figure out the rules.

Example: we feed the network pictures of dogs and cats, telling it that every picture of a cat is a cat and every picture of a dog is a dog. Now its job is to figure out why.

In every type of learning (there are 4 major types of learning in ML), there are features and there are labels:

Features: Represent the characteristics of the data. The number of bathrooms in a house, the number of doors in a car, the legs of an animal, etc.
Labels: The answers we want the network to figure out. The price of that house or car, or what animal appears in a picture (there's a tiny sketch right below).
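
Purely as an illustration (the names and numbers here are made up, not part of our exercise), features and labels for the house example could look like this:

// Made-up numbers, just to illustrate features vs labels for the house example
const houseFeatures = [
  [2, 1],  // [bedrooms, bathrooms] of house 1
  [3, 2],  // house 2
  [4, 2],  // house 3
]
const houseLabels = [150000, 230000, 310000]  // the prices we want the network to learn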

But sometimes we cannot train a network with labels, and that leads us to the different learning methods I mentioned:

Supervised learning: When we have our features and our labels.
Unsupervised learning: We have the features, but we don't have the labels.
Semi-supervised learning: We don't have all the labels, but we have all the features.
Reinforcement learning: We won't be playing with this for now, but it's used in scenarios that involve behaviours and actions, for example self-driving cars.

Now, what is a model? A model is, roughly speaking, what we call our neural network.
We'll go deeper into that in the Layers chapter, but a neural network has a set of layers, those layers have neurons, and every neuron is activated by a function to process the inputs and outputs that pass through it.

Let's code

If you downloaded the boilerplate you should already have tensorflow in the dependencies; if not:

npm install @tensorflow/tfjs

Create a js file in the root directory:
index.js

console.log('hi');

Append it to the index.html head:

<script src="index.js"></script>

Let's start defining our features and our labels.
We want to make our network figure out a simple equation:
y = 2x + 3

To do so, we import tf and create our tensors.

import * as tf from '@tensorflow/tfjs'

const x = tf.tensor([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
const y = tf.tensor([5.0, 7.0, 9.0, 11.0, 13.0, 15.0])

x holds our features and y our labels. You can see that the relation between them is y = 2x + 3.
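
If you want to double-check that relation (and the tensors themselves), tensors can print themselves, and you can rebuild y from x with basic math ops. This is purely optional:

// Optional: inspect the tensors and verify the y = 2x + 3 relation
x.print()                // [1, 2, 3, 4, 5, 6]
y.print()                // [5, 7, 9, 11, 13, 15]
x.mul(2).add(3).print()  // same values as y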

Next, let's create our model:

const model = tf.sequential({
  layers: [
    tf.layers.dense({
      units: 1,
      inputShape: [1]
    })
  ]
})

units is the number of neurons the layer has, while inputShape is the shape of each input element we pass to it (here a single number, so [1]).

For now, let's stick with these properties.
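
As a side note, the same model can also be built step by step with model.add; both forms are equivalent, so use whichever you prefer (just don't define the model twice):

// Equivalent way to build the same model, adding layers one by one
// (use this instead of the tf.sequential({ layers: [...] }) version above, not both)
const model = tf.sequential()
model.add(tf.layers.dense({ units: 1, inputShape: [1] }))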

Now, we need to compile and train our model:
We need to choose an optimizer and a loss function. We'll go in depth on these in later chapters; for now, we are going to use sgd (stochastic gradient descent) as the optimizer and mse (mean squared error) as the loss.
https://en.wikipedia.org/wiki/Stochastic_gradient_descent
https://es.wikipedia.org/wiki/Error_cuadr%C3%A1tico_medio
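
If you ever want more control, you can also pass an optimizer instance instead of the 'sgd' string and pick the learning rate yourself. The 0.01 below is an arbitrary value, shown just to illustrate the option:

// Optional: explicit optimizer with a custom learning rate
model.compile({
  optimizer: tf.train.sgd(0.01),  // 0.01 is arbitrary, just for illustration
  loss: 'meanSquaredError'
})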

We'll train it for 500 epochs (500 "loops" over the data), and we'll watch how our loss decreases as the training goes on.

We are going to wrap everything inside a function:

async function main() {
  // compile() configures the model with our optimizer and loss function
  model.compile({
    optimizer: 'sgd',
    loss: 'meanSquaredError'
  });

  // called at the end of every training batch, so we can watch the loss go down
  function onBatchEnd(batch, logs) {
    console.log(`Error: ${logs.loss}`)
  }

  // fit() trains the model: data first, labels next
  await model.fit(x, y, { epochs: 500, verbose: true, callbacks: { onBatchEnd } });
}

Notice how in model.fit we passed our data first and our labels next.
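
Also worth knowing: model.fit resolves to a history object, so instead of logging every batch you can check the loss per epoch once training is done. This is optional, just another way to watch the loss decrease:

// Optional: fit() resolves to a history object with one loss value per epoch
const history = await model.fit(x, y, { epochs: 500 })
console.log(history.history.loss)  // the last values should be close to 0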

Now it's time to make some predictions on y = 2x + 3.
If we predict with, hmmm... x = 10, then y should be 23. Let's try:

async function main() {
  model.compile({
    optimizer: 'sgd',
    loss: 'meanSquaredError'
  });

  function onBatchEnd(batch, logs) {
    console.log(`Error: ${logs.loss}`)
  }

  await model.fit(x, y, { epochs: 500, verbose: true, callbacks: { onBatchEnd } });

  // predict() returns a tensor with the model's output for x = 10
  const prediction = model.predict(tf.tensor([10]));

  console.log(`Prediction: ${prediction}`)
}

main();

I trained it for 1000 epochs and it gave me this result:
Prediction result

Why is it a little bit more than the correct answer, 23? Our network is figuring out the equation y = 2x + 3. It's starting to learn that the first coefficient is a number close to 2 (a bit above or below), and the same with 3. But we have a very, very small amount of data to train with (only 6 examples), and that's not enough to figure out the exact numbers.
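
If you're curious about what the network actually learned, you can peek at the weights of our single dense layer; they should be converging towards the 2 and the 3 of our equation. Again, this is optional:

// Optional: inspect the learned weights of our single dense layer
const [kernel, bias] = model.getWeights()
kernel.print()  // should be getting close to 2
bias.print()    // should be getting close to 3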

But it's a good start for our journey here. We'll go deeper with custom models, using all the properties they have, custom training, etc.

As always, it's been a pleasure; I hope you enjoyed it.
See you in the next chapter!
Álvaro
