Hey! Welcome back. In my last blog, we found the ideal learning rate for the model using the LRScheduler. Today we are going to train a model with that ideal learning rate (0.0003) and see how it goes. If it goes well, we will build our final model, trained on the whole dataset.

Experiment number 1

I retained the same architecture and changed only the learning rate, from 0.001 to 0.0003. The model looked something like this:
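Here is a minimal sketch of that kind of setup; the frozen MobileNet base, the 10-class softmax head, and the Adam optimizer are my assumptions rather than the exact original code:

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import MobileNet

# Assumed setup: a frozen, ImageNet-pre-trained MobileNet base
# with a small classification head on top
base = MobileNet(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained weights fixed

model = models.Sequential([
    base,
    layers.Dense(10, activation="softmax"),  # assumed: 10 driver classes
])

# The only change from the baseline: learning rate 0.001 -> 0.0003
model.compile(
    optimizer=optimizers.Adam(learning_rate=0.0003),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```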


Hello! In my last blog, I created a baseline model using MobileNet. It had an accuracy of 52.78% on the validation dataset. Today, I tried to tweak a very important hyperparameter of the model.

Faster Experiments

Hyperparameter tuning in TensorFlow is an experimental process. To reduce the duration of each experiment, I decided to temporarily shrink the train and validation datasets: I sliced the file paths and labels down to the first 5,000 samples to decrease the dataset size.
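The idea, as a small sketch with placeholder lists standing in for the real file paths and labels:

```python
SUBSET_SIZE = 5000

# Placeholder data standing in for the full lists of file paths and labels
filepaths = [f"img_{i}.jpg" for i in range(20000)]
labels = [i % 10 for i in range(20000)]

# Slice both lists together so each path stays paired with its label
small_filepaths = filepaths[:SUBSET_SIZE]
small_labels = labels[:SUBSET_SIZE]

print(len(small_filepaths), len(small_labels))  # 5000 5000
```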


Welcome back! In the last blog, we pre-processed our data to bring it into a form that our models understand — numbers. In this blog, I will write about the steps I took for modelling.

Transfer-Learning and MobileNet

Transfer Learning is a process where we use pre-trained models and modify them to suit our needs.

Advantages of Transfer Learning:

  • Comparatively less data is required
  • Pre-trained weights help achieve better accuracy
  • Easier and quicker to train
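To illustrate the last point, the snippet below (a sketch of mine, assuming Keras's MobileNet) loads a pre-trained base and shows that freezing it leaves no weights to train, so only a small new classification head needs training:

```python
from tensorflow.keras.applications import MobileNet

# Load MobileNet with its pre-trained ImageNet weights
base = MobileNet(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=(224, 224, 3))
print(f"Parameters in the base: {base.count_params():,}")
print("Trainable weight tensors:", len(base.trainable_weights))

# Freeze the base: none of its weights will be updated during training,
# which is what makes transfer learning quick and data-efficient
base.trainable = False
print("Trainable weight tensors after freezing:",
      len(base.trainable_weights))  # 0
```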

Modifying our labels

When I tried to train my MobileNet model, I ran into an unexpected error: the model architecture required our labels to be in the form of…


Hey everyone! Welcome back. This is day 2 of my Distracted Driver project. If you haven’t read my first day’s blog, it is here.

In the last blog, I had aligned my images and labels by re-creating parts of TensorFlow’s image_dataset_from_directory().

What is Data pre-processing and why do it?

Our computers don’t understand images. All they understand is numbers. Therefore, we have to convert the images into numbers before we feed them to our models. The steps in data pre-processing change according to the objective of the project.

These are the steps I took for my project:

  • Numerical Encoding (converting images into numbers)
  • Normalizing (turning numerical…
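To make the first two steps concrete, here is a small sketch for a single image; the file name is a placeholder, and I am assuming normalizing means scaling pixel values into the 0-1 range:

```python
import tensorflow as tf

# Numerical encoding: read the file and decode it into a tensor of pixel values
raw = tf.io.read_file("img_0001.jpg")        # placeholder file name
image = tf.io.decode_jpeg(raw, channels=3)   # uint8 values in [0, 255]
image = tf.image.resize(image, [224, 224])   # give every image the same shape

# Normalizing: scale the pixel values from [0, 255] down to [0, 1]
image = image / 255.0
```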


Hey there! Hope you are doing well. I started a project using Kaggle’s Distracted Driver dataset today, and I am going to write down my daily progress here.

I am using a Colab notebook as it is super easy to set up and works just like a Jupyter Notebook.

Getting the Data from Kaggle

I first tried to use the Kaggle API in the notebook as it was an efficient way of getting the data into the Colab environment. This didn’t work due to some path issues, so I downloaded the .zip file to my local computer and then uploaded it to Google Drive.

Getting the Data into folders

Then I needed…


This article is a continuation of my previous article on data modelling and evaluation.

Correction from the previous blog: I tried quite a lot of things to get my model’s score below 100% accuracy, but it didn’t work, so in this article, we will skip the hyperparameter-tuning part and see how to save and reload our model.

Saving our Model

In this project, we are dealing with a very small amount of data and a very small model that takes no more than a few seconds to train. Most of the time, projects will have a lot of data and very complex models…
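One common way to save a Scikit-Learn model is Python's built-in pickle module; here is a minimal sketch, with a tiny stand-in model in place of the project's real one:

```python
import pickle
from sklearn.linear_model import LogisticRegression

# A stand-in model; in the article this would be the trained project model
model = LogisticRegression().fit([[0], [1]], [0, 1])

# Save the trained model to disk
with open("trained_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, reload it and use it exactly like the original
with open("trained_model.pkl", "rb") as f:
    loaded_model = pickle.load(f)

print(loaded_model.predict([[1]]))  # [1]
```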


This article picks up from my previous article on data pre-processing.

Data Modelling

Everything we have done till now has just been a preparation for this step — Data modelling!

In this section, we will experiment with 4 models and choose the best one as our final model. The models we are going to try are:

  1. RandomForestClassifier
  2. LogisticRegression
  3. KNeighborsClassifier
  4. LinearSVC

In this article, we will not get into the theory of these models, but if you want to understand more about how these models work, check out LogisticRegression, RandomForestClassifier, KNeighborsClassifier and LinearSVC.

The really good thing about Scikit-Learn is that all of…
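In particular, every Scikit-Learn estimator exposes the same fit/score interface, so all four candidates can be trained and compared in a single loop. A sketch, using generated placeholder data in place of the project's dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Placeholder data; in the article this would be the pre-processed dataset
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# All four candidates share the same fit/score interface,
# so one loop can train and evaluate every model
models = {
    "RandomForestClassifier": RandomForestClassifier(random_state=42),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "KNeighborsClassifier": KNeighborsClassifier(),
    "LinearSVC": LinearSVC(max_iter=5000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {model.score(X_test, y_test):.4f}")
```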


This is the continuation of my previous article on exploratory data analysis.

Data pre-processing

Data, in order to be modelled, needs to be cleaned and processed.

The most common data pre-processing steps are:

  • Feature Imputing (filling missing values; we don’t have missing values in our dataset)
  • Feature Encoding (converting categorical values into numbers; we have plenty of these)
  • Data splitting (splitting data into train and test splits)

Feature Imputing

In this dataset, we don’t have missing data, but as feature imputing is a very important step in machine learning and needs to be done for the majority of datasets, we will perform it anyway. (This is…
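A sketch of all three steps together, using a tiny made-up dataframe in place of the real dataset:

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split

# Placeholder dataframe; in the article this would be the real dataset
df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "city": ["Delhi", "Mumbai", "Delhi", "Chennai"],
    "target": [0, 1, 0, 1],
})

X = df.drop("target", axis=1)
y = df["target"]

# Feature encoding: turn categorical columns into one-hot numeric columns
X = pd.get_dummies(X)

# Feature imputing: fill any missing values (none here, shown for completeness)
X = pd.DataFrame(SimpleImputer(strategy="most_frequent").fit_transform(X),
                 columns=X.columns)

# Data splitting: hold out a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)
```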


In this article, we will be picking up from where we left off in the previous article. This article will be all about exploring data.

Getting the dataset for the project

The first step is to download the dataset. Data collection is perhaps one of the most difficult challenges faced by anyone using machine learning.

Fortunately for us, we can get the required data from a website named Kaggle.

First, we need to register for Kaggle. When you have done so, you will be redirected to a page like this:


In this article, we will be getting our project environment and workflow set up. For an understanding of what machine learning is, check out my previous article.

Miniconda Installation

First, download the Miniconda installer. While choosing the installer, do so according to your computer’s OS and bit specification (32-bit or 64-bit). Make sure you pick a Python 3.x installer and not a Python 2.7 one.

Next, run the Miniconda installer and click “Next” and “Agree”.

Chiraag K V

Programming enthusiast, bibliophile
