Expanding on the blog I did last week, I wanted to talk more about how you can explore and learn Keras. I found a very simple dataset to explore Keras with, and I followed a few tutorials to get my feet wet with this sort of programming before delving further into machine learning. To start us off, I imported the very basics I needed to run a Keras program. The dataset I was working with was the Pima Indians diabetes dataset; I chose it because it had a clear set of categories I could develop, and they were very short and simple:
- Number of times pregnant
- Plasma glucose concentration at 2 hours in an oral glucose tolerance test
- Diastolic blood pressure (mm Hg)
- Triceps skin fold thickness (mm)
- 2-Hour serum insulin (mu U/ml)
- Body mass index (weight in kg/(height in m)^2)
- Diabetes pedigree function
- Class variable (0 or 1): the y variable, indicating whether a patient had diabetes or not
I imported the basic libraries I needed:
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
I then loaded the dataset I was working with and split it into input features (X) and the output label (y):
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:,0:8]
y = dataset[:,8]
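To see what the slicing above does, here is a minimal sketch using a small synthetic array in place of the CSV file (the real Pima dataset has 768 rows; these four illustrative rows just show the shape):

```python
import numpy as np

# Synthetic stand-in: 4 rows, 9 columns (8 features + 1 label),
# shaped like the Pima Indians diabetes data.
dataset = np.array([
    [6, 148, 72, 35,  0, 33.6, 0.627, 50, 1],
    [1,  85, 66, 29,  0, 26.6, 0.351, 31, 0],
    [8, 183, 64,  0,  0, 23.3, 0.672, 32, 1],
    [1,  89, 66, 23, 94, 28.1, 0.167, 21, 0],
], dtype=float)

X = dataset[:, 0:8]  # all rows, first eight columns = input features
y = dataset[:, 8]    # all rows, last column = diabetes label (0 or 1)

print(X.shape)  # (4, 8)
print(y)        # [1. 0. 1. 0.]
```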
The first hidden layer has 12 nodes and uses the ReLU (Rectified Linear Unit) activation function.
The second hidden layer has 8 nodes and uses the ReLU activation function.
The output layer has one node and uses the sigmoid activation function.
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
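As a sanity check on the layer sizes, each Dense layer has inputs × nodes weights plus one bias per node. A quick sketch of that arithmetic for the three layers described above:

```python
def dense_params(n_inputs, n_nodes):
    # One weight per (input, node) pair, plus one bias per node.
    return n_inputs * n_nodes + n_nodes

print(dense_params(8, 12))   # first hidden layer: 108
print(dense_params(12, 8))   # second hidden layer: 104
print(dense_params(8, 1))    # output layer: 9
```

Calling model.summary() in Keras should report these same per-layer parameter counts.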
loss = binary_crossentropy
* Computes the cross-entropy loss between true labels and predicted labels.
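To build some intuition for what that loss measures, here is a small NumPy sketch of the binary cross-entropy formula, -(y·log(p) + (1-y)·log(1-p)) averaged over samples (the variable names are just for illustration):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions away from exactly 0 and 1 so log() stays finite.
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0])
good   = np.array([0.9, 0.1, 0.8])  # confident and correct -> low loss
bad    = np.array([0.2, 0.9, 0.3])  # confident and wrong  -> high loss
print(binary_crossentropy(y_true, good))  # ~0.14
print(binary_crossentropy(y_true, bad))   # ~1.71
```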
optimizer = adam
* Adam is a popular variant of gradient descent because it tunes itself automatically and gives good results across a wide range of problems.
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Other metrics to look at: https://keras.io/api/metrics/
epochs = how many times the full dataset is passed through the network during training
batch_size = the number of samples propagated through the network before the weights are updated
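With 768 samples and a batch size of 10, each epoch performs ceil(768 / 10) = 77 weight updates, so 150 epochs means 11,550 updates in total. A quick check of that arithmetic:

```python
import math

samples, batch_size, epochs = 768, 10, 150

updates_per_epoch = math.ceil(samples / batch_size)  # last batch is partial (8 samples)
print(updates_per_epoch)           # 77
print(updates_per_epoch * epochs)  # 11550
```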
model.fit(X, y, epochs=150, batch_size=10)
You can evaluate your model on your training dataset using the evaluate() function:
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
768/768 [==============================] - 0s 63us/step - loss: 0.4817 - acc: 0.7708
768/768 [==============================] - 0s 63us/step - loss: 0.4764 - acc: 0.7747
768/768 [==============================] - 0s 63us/step - loss: 0.4737 - acc: 0.7682
768/768 [==============================] - 0s 64us/step - loss: 0.4730 - acc: 0.7747
768/768 [==============================] - 0s 63us/step - loss: 0.4754 - acc: 0.7799
768/768 [==============================] - 0s 38us/step
To make predictions with our trained model, we can use the predict() function:
predictions = model.predict(X)
rounded = [round(x[0]) for x in predictions]
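The rounding step is just a 0.5 threshold on the sigmoid outputs. Here is the same idea in NumPy, with made-up probabilities standing in for model.predict(X) (which returns one probability per row, in a 2D array):

```python
import numpy as np

# Stand-in for model.predict(X): one sigmoid probability per input row.
probabilities = np.array([[0.83], [0.12], [0.55], [0.49]])

# Round each probability to the nearest class (threshold at 0.5).
classes = (probabilities > 0.5).astype(int).flatten()
print(classes)  # [1 0 1 0]
```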
Comparing the first five rounded predictions against the expected labels:
=> 0 (expected 1)
=> 0 (expected 0)
=> 1 (expected 1)
=> 0 (expected 0)
=> 1 (expected 1)
In conclusion, I am still learning and not yet proficient in Keras, and I don't know whether all of my data was perfect, but I wanted to do a rough tutorial on feeding data into Keras to show how easy it is to create models and predictions using this library for machine learning. The main takeaway I want people to get from this short tutorial is that Keras simplifies a lot of the convoluted coding, even within TensorFlow, and allows for a much easier time taking data and creating models from it.