Hey there, hope you are doing well. Recently I started going through the Fast.AI deep learning curriculum, where two brilliant people, Jeremy Howard and Rachel Thomas, teach deep learning. One is a very experienced programmer and the other is a mathematician, so what could be a better combination?
Learning with Fast.AI
The interesting thing about the course is that it takes a top-down approach to teaching: first you code and train models, and only later do you dig into the math and the underlying concepts. According to many learners' experiences this works better than the bottom-up way of teaching (the Andrew Ng style), which is also great. Whatever works for you. I am more of a hands-on guy, so this suits me well.
TESLA or NOT!
Okay, now let's get to the topic. In Lesson 2 you learn how to do image recognition and object identification, and the Fast.AI library makes it very easy to do so in just a few lines.
And as I always say in my YouTube videos, get your hands dirty, because that's how you really learn. So I decided to build a model that detects whether a car is a TESLA or NOT. Obviously, what else would you expect from an Elon admirer?
How to do it?
Now let's go through the lines of code that make it happen, and I will explain what is actually going on at each step.
For some more context, I wrote this on a Google Colab notebook with a GPU instance to train the models faster.
1. Let's install some necessary packages that we need to use in the program.
!pip install fastai
!pip install -Uqq fastbook
!pip install jmd_imagescraper
Now, when you run this in Colab, some of these packages may already be installed, and you may see output like this:
This just says that a package is already installed, so we are on the safe side. Let's move on.
2. Let's Import the Required Packages.
import fastbook
fastbook.setup_book()
from fastbook import *
from fastai.vision.widgets import *
from pathlib import Path
from jmd_imagescraper.core import *
3. Creating the Classifying Categories
classify_car = 'cars', 'tesla car'
tesla_models = 'tesla model x', 'tesla model y', 'tesla model s', 'tesla model 3', 'tesla roadster', 'tesla cybertruck'
path = Path('images')
path.mkdir(exist_ok=True)
- Line 1 : Here I set the two main parent categories into which we will classify the images.
- Line 2 : After scraping images for just 'tesla car' I realized the search term was too ambiguous, so I made this list to also train the model on the specific Tesla models, and it did increase the model's accuracy.
- Line 3 : Here I set the path to a folder named 'images'.
- Line 4 : Here I create that folder with the .mkdir() method; exist_ok=True makes it safe to re-run the cell because it doesn't complain when the folder already exists (there is a tiny sketch of this right below).
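To make the exist_ok behaviour concrete, here is a tiny sketch of my own (not from the original notebook) showing why it matters when you re-run the cell:
from pathlib import Path

demo = Path('images')
demo.mkdir(exist_ok=True)   # creates the folder the first time
demo.mkdir(exist_ok=True)   # second run: nothing happens, no error
# demo.mkdir()              # without exist_ok=True this would raise FileExistsError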
4. Let's scrape and get the image data.
For context, our 'path' variable currently points to the 'images' folder, which is empty for now.
for model in classify_car:
    mpath = (path/model)
    mpath.mkdir(exist_ok=True)
    img = duckduckgo_search(mpath, '', f"{model}", max_results=150)

for tesla_model in tesla_models:
    tpath = (path/'tesla car')
    img = duckduckgo_search(tpath, '', f"{tesla_model}", max_results=150)
- Line 1 : Here, as you can see, I loop over the two categories in the 'classify_car' tuple.
- Line 2 : Here I build the new path for that category.
- Line 3 : Here I create the new folder.
- Line 4 : Here I use the duckduckgo_search() function to download the images.
More about the duckduckgo_search() function:
- First Argument : The path where the images will be downloaded.
- Second Argument : The name of a subfolder to download into; we don't need that because we have already created the folder, so we leave it blank.
- Third Argument : The search term that gets queried on the DuckDuckGo search engine.
- Fourth Argument : The maximum number of images it will download.
So, after the first loop runs you should have two new folders inside 'images', i.e. 'cars' and 'tesla car', with up to 150 images in each.
In the next for loop the same thing happens, except the path stays fixed at 'tesla car' and I pass in the specific models from the 'tesla_models' tuple, so their images get downloaded into that same folder too.
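If you want to confirm the scraping actually worked before moving on, a quick check like this should do. This is my own addition, and it relies on the .ls() helper that fastai patches onto Path (available thanks to the fastbook import):
# Print each category folder and how many images landed inside it
for folder in path.ls():
    print(folder, '->', len(get_image_files(folder)), 'images')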
5. Checking out the images
files = get_image_files(path)
len(files)
- Line 1 : This gets all the images in the 'images' folder.
- Line 2 : This prints the total number of image files you have.
corrupted = verify_images(files)
corrupted
- This checks for corrupted images and lists any that it finds.
corrupted.map(Path.unlink)  # Remove corrupted files
- This deletes all the images that were found to be corrupted, leaving us with only good images ready for training (there is a quick re-count below).
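As a small sanity check of my own (not in the original notebook), you can re-scan the folder to see how many usable images are left after the cleanup:
files = get_image_files(path)   # re-scan after the corrupted files were deleted
print(f'{len(files)} images remain after cleanup')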
6. Getting the data ready to load
data = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 get_y=parent_label,
                 splitter=RandomSplitter(valid_pct=0.2, seed=42),
                 item_tfms=Resize(128))
Here we use DataBlock, a high-level API that prepares the final form of the data before it gets loaded into DataLoaders.
Let's see what's happening here:
- Line 1 : Since our model takes images as input and produces a category as output, we use blocks=(ImageBlock, CategoryBlock).
- Line 2 : We get all the image files.
- Line 3 : y is the output, so we set get_y to parent_label, which uses the name of the folder an image sits in as its label; that is exactly what we want the model to predict for a new image.
- Line 4 : Here we split 20% of the dataset off as a validation set (seed=42 keeps the split reproducible).
- Line 5 : Here we transform the images by resizing them all to 128 x 128 squares so the dataset is uniform (see the quick pipeline check after this list).
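One optional step I find useful here (my own addition, not part of the original notebook): DataBlock has a summary method that dry-runs the pipeline on a single item and prints what happens at each stage, which is handy if the DataLoaders later fail to build:
# Walk through one sample: how it is opened, labelled from its parent folder,
# split into train/valid, and resized to 128 x 128
data.summary(path)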
7. Loading the data
data_load = data.dataloaders(path)
- Here we finally load the data into the DataLoaders.
data_load.valid.show_batch(max_n=10, nrows=2)
- This shows you a few of the images from the validation set.
data = data.new(item_tfms=Resize(128), batch_tfms=aug_transforms())
data_load = data.dataloaders(path)
Here we augment the dataset further with aug_transforms, a utility function that easily creates a list of flip, rotate, zoom, warp, and lighting transforms (an example of how to visualize these follows below).
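If you want to actually see what aug_transforms is doing, a small addition like this shows the same training image several times, each with a different random augmentation applied (unique=True is what tells show_batch to repeat one image):
# Display one image 8 times with different random flips, rotations, zooms, etc.
data_load.train.show_batch(max_n=8, nrows=2, unique=True)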
8. Let's Train the Model
model = cnn_learner(data_load, resnet18, metrics=error_rate)
model.fine_tune(10)
Here we train the model using the cnn_learner factory method, which automatically builds a pretrained model from the given architecture (resnet18 here) with a custom head suited to our data; fine_tune(10) then fine-tunes it on our images for 10 epochs.
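One optional extra that is not part of the original notebook: before committing to 10 epochs, you can run fastai's learning-rate finder, which plots loss against learning rate and suggests a sensible value to pass to fine_tune:
# Plot loss vs. learning rate; pick a value where the loss is still decreasing steeply
model.lr_find()
# model.fine_tune(10, base_lr=1e-3)   # e.g. plug in a value near the suggestion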
You should see something like this once your model finishes training:
Now let's take a look at the confusion matrix to see how our model has performed:
interp = ClassificationInterpretation.from_learner(model)
interp.plot_confusion_matrix()
That looks good!
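Another quick diagnostic worth mentioning (my own addition): ClassificationInterpretation can also show the images the model was most wrong about, which usually reveals mislabeled or junk images from the scraper:
# Show the 5 validation images with the highest loss, with predicted vs. actual labels
interp.plot_top_losses(5, nrows=1)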
Congrats, you have successfully trained the model.
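Since deployment comes up at the end of the post, one hedged suggestion: a fastai Learner can be exported to a single .pkl file and loaded back later with load_learner; the filename here is just an example of mine:
# Serialize the trained Learner (weights + data processing recipe) for deployment
model.export('tesla_or_not.pkl')
# later, in your deployment code:
# learner = load_learner('tesla_or_not.pkl')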
Let's Predict Now - Moment of Truth
tst = (path/'test')
tst.mkdir(exist_ok=True)
download_images(tst, urls=['https://www.businessinsider.in/photo/52674243/with-model-3-demand-surging-tesla-is-bringing-back-a-66000-version-of-its-model-s.jpg'])
image = PILImage.create('/content/images/test/00000000.jpg')
pred, predId, prob = model.predict('/content/images/test/00000000.jpg')
if pred == 'cars':
    print(f'Hey, I am {prob[predId]*100:.04f}% sure that this image is NOT a Tesla!')
else:
    print(f'Hey, I am {prob[predId]*100:.04f}% sure that this image is a Tesla!')
image
You can just change the URL in 'urls=[...]' to the URL of whatever image you want to test on (a sketch for testing with a locally uploaded image follows below).
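If you would rather test with an image from your own machine instead of a URL, here is a sketch using the ipywidgets upload button (the same widget the fastbook notebooks use); the variable names are mine, and the .data attribute assumes the ipywidgets version Colab shipped at the time:
# Cell 1: render an upload button and choose an image file
uploader = widgets.FileUpload()
uploader

# Cell 2: read the uploaded bytes into a PILImage and classify it
img = PILImage.create(uploader.data[-1])
pred, predId, prob = model.predict(img)
print(f'{pred} ({prob[predId]*100:.2f}% confident)')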
This is how it should look:
I will be working on its deployment next, and do ask if you have any questions!