Hey there,
So we have entered Coding Mode. All the concepts are covered, and by now you probably have an idea of how they fit together. If not, just go back and read them once more, just like I am doing right now.
Welcome to this Colab where you will train your first Machine Learning model!
The problem we will solve is to convert from Celsius to Fahrenheit, where the approximate formula is:
f = c × 1.8 + 32
Of course, it would be simple enough to create a conventional Python function that directly performs this calculation, but that wouldn't be machine learning.
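For contrast, here is a minimal sketch of what that conventional (non-ML) function might look like; it simply applies the formula above:

def celsius_to_fahrenheit(c):
    # Direct application of the formula: f = c * 1.8 + 32
    return c * 1.8 + 32

print(celsius_to_fahrenheit(38))  # 100.4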
Instead, we will give TensorFlow some sample Celsius values (0, 8, 15, 22, 38) and their corresponding Fahrenheit values (32, 46, 59, 72, 100). Then, we will train a model that figures out the above formula through the training process.
Import dependencies
First, import TensorFlow. Here, we're calling it tf for ease of use. We also tell it to only display errors.
import tensorflow as tf
Next, import NumPy as np. NumPy helps us represent our data as highly performant arrays.
import numpy as np
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
Set up training data
As we saw before, supervised Machine Learning is all about figuring out an algorithm given a set of inputs and outputs. Since the task in this Codelab is to create a model that can give the temperature in Fahrenheit when given the degrees in Celsius, we create two lists celsius_q and fahrenheit_a that we can use to train our model.
celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)
for i, c in enumerate(celsius_q):
    print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_a[i]))
Some Machine Learning terminology
Feature — The input(s) to our model. In this case, a single value — the degrees in Celsius.
Labels — The output our model predicts. In this case, a single value — the degrees in Fahrenheit.
Example — A pair of inputs/outputs used during training. In our case a pair of values from celsius_q and fahrenheit_a at a specific index, such as (22,72).
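To make the terminology concrete, here is a small illustrative snippet (not from the original notebook) that pulls out one training example, the feature/label pair at index 5:

index = 5
feature = celsius_q[index]    # 22.0 degrees Celsius (the input, our feature)
label = fahrenheit_a[index]   # 72.0 degrees Fahrenheit (the desired output, our label)
print("Example:", (feature, label))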
Create the model
Next, create the model. We will use the simplest possible model we can, a Dense network. Since the problem is straightforward, this network will require only a single layer, with a single neuron.
Build a layer
We'll call the layer l0 and create it by instantiating tf.keras.layers.Dense with the following configuration:
input_shape=[1] — This specifies that the input to this layer is a single value. That is, the shape is a one-dimensional array with one member. Since this is the first (and only) layer, that input shape is the input shape of the entire model. The single value is a floating point number, representing degrees Celsius.
units=1 — This specifies the number of neurons in the layer. The number of neurons defines how many internal variables the layer has to try to learn how to solve the problem (more later). Since this is the final layer, it is also the size of the model's output — a single float value representing degrees Fahrenheit. (In a multi-layered network, the size and shape of the layer would need to match the input_shape of the next layer.)
l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
Assemble layers into the model
Once layers are defined, they need to be assembled into a model. The Sequential model definition takes a list of layers as an argument, specifying the calculation order from the input to the output.
This model has just a single layer, l0.
model = tf.keras.Sequential([l0])
Note
You will often see the layers defined inside the model definition, rather than beforehand:
model = tf.keras.Sequential([ tf.keras.layers.Dense(units=1, input_shape=[1]) ])
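If you want to double-check what was built, calling model.summary() (a quick optional check, not part of the original walkthrough) shows the single Dense layer with its two trainable parameters, one weight and one bias:

model.summary()  # Expect one Dense layer with 2 trainable parameters (weight and bias)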
Compile the model, with loss and optimizer functions
Before training, the model has to be compiled. When compiled for training, the model is given:
Loss function — A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the "loss".)
Optimizer function — A way of adjusting internal values in order to reduce the loss.
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
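To make the loss concrete, here is a small hand-computed sketch of mean squared error using NumPy; the prediction values are made up purely for illustration, while the targets come from fahrenheit_a:

predictions = np.array([30.0, 50.0, 60.0])   # hypothetical model outputs
targets = np.array([32.0, 46.0, 59.0])       # the true Fahrenheit values
mse = np.mean((predictions - targets) ** 2)  # ((-2)^2 + 4^2 + 1^2) / 3 = 7.0
print(mse)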