Getting Started with Docker!

In this blog post, let's get our hands on Docker containers, look at the difference between Docker and virtual machines, and see why Docker is more powerful than virtual machines. The internal working of Docker is explained in simple terms for easy understanding.

So, let's get started!

What is Docker?

Let's break it down into simple words: Docker is a platform for developing and deploying applications by isolating them in containers.

What is a Virtual Machine?

A virtual machine is like an emulator: it lets you run an operating system in an app window on your desktop that behaves like a full, separate computer, on which developers can develop and deploy applications.

What is the difference between Virtual Machines and Docker?

As you can see, a virtual machine isolates the entire operating system, whereas a Docker container isolates only the application.

Virtual Machine Architecture

In the above architecture:

  1. Infrastructure – refers to the physical hardware (laptops, desktops, servers)
  2. Host OS – refers to the operating system running on that hardware (Linux, Windows, macOS)
  3. Hypervisor – acts like a managing director: it manages and allocates hardware resources and makes them available to the guest operating systems and their applications
  4. Guest OS – refers to the operating system the developer wishes to run inside the virtual machine (for example, various flavours of Linux)
  5. Bins/Libs – refers to the binaries and libraries associated with each guest operating system, which occupy a lot of space
  6. App1, App2, App3 – refer to the applications running on the different guest operating systems

Docker Architecture

1. Infrastructure, Host OS, Bins/Libs and Apps are the same as in the virtual machine architecture.

2. Docker Daemon – plays a role similar to the hypervisor: it provides the interface to the containers and isolates the applications from the host operating system, without needing a separate guest OS per application.

With this, you should now have a clear picture of the difference between virtual machines and Docker containers.

Let's make it clearer by diving into a simple “hello-world” example:

Running Docker “hello-world”:

First, install Docker Desktop for the operating system you are using.

The basic working of Docker is shown in the diagram above.

After installing it, try running the command below from your favourite command prompt:

$ docker run hello-world

“hello-world” is an official Docker image available on Docker Hub. Running it is the Docker equivalent of running a “hello world” program.

When you run this command, Docker first searches for the image locally. Since the image is not yet available on your local system, Docker pulls it from Docker Hub, runs it, and streams the output to the terminal as follows:

Hello from Docker!
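
If you are curious about what just happened behind the scenes, two optional commands let you verify it: the first lists the image that was downloaded, the second lists the container that ran and exited (the exact output will differ on your machine).

$ docker images hello-world
$ docker ps -a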

With this, you now know what Docker is and how it works internally. In the next tutorial, we will explore the terminology of the Docker world in more detail!

Cheers 🙂

Extracting Feature Vectors using VGG16

In this new and exciting blog post, I'm going to help you extract feature vectors from images and use those feature vectors to build a Random Forest (RF) model.

We are going to use the VGG16 model (you are free to use any pre-trained model that fits your problem statement) to extract the feature vectors of the images.

The extraction part begins with specifying the directory of images, using the VGG16 model to predict the feature vector of each image, and appending the feature vectors to a list.

import os
import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import VGG16, preprocess_input

# the original post does not show how `model` was created; here we assume
# VGG16 without its classification head, used purely as a feature extractor
model = VGG16(weights='imagenet', include_top=False, pooling='avg')

img_path = r'E:\Thesis\Try1\green'
feature_green = []

for each in os.listdir(img_path):
    path = os.path.join(img_path, each)
    img = image.load_img(path, target_size=(224, 224))  # VGG16 expects 224x224 inputs
    img_data = image.img_to_array(img)
    img_data = np.expand_dims(img_data, axis=0)          # add the batch dimension
    img_data = preprocess_input(img_data)                # VGG16-specific preprocessing
    feature = model.predict(img_data)
    feature_green.append(feature)

Since we have three classes of images in total, we need to repeat this for all three classes and write the feature vectors into a dataframe along with their labels, as sketched below.
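
A minimal sketch of that step, assuming the red and yellow images were processed exactly like the green ones into hypothetical lists feature_red and feature_yellow:

import pandas as pd

def to_dataframe(features, label):
    # flatten each predicted feature map into a 1-D row and attach the class label
    df = pd.DataFrame([f.flatten() for f in features])
    df['label'] = label
    return df

# feature_red and feature_yellow are assumed to be built the same way as feature_green
data = pd.concat([to_dataframe(feature_green, 'green'),
                  to_dataframe(feature_red, 'red'),
                  to_dataframe(feature_yellow, 'yellow')],
                 ignore_index=True)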

After that, it's the usual workflow: split the data into train and test sets, build the random forest model, and evaluate its accuracy.
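
And a minimal sketch of the random forest part with scikit-learn, using the data dataframe assumed above:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X = data.drop(columns='label')
y = data['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# a plain random forest; tune n_estimators and other hyperparameters for your data
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print('Test accuracy:', accuracy_score(y_test, rf.predict(X_test)))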

Cheers 🙂

Transfer Learning for Image Classification

In this blog post, we are going to explore how the transfer learning technique helps us overcome the computational challenge of building a neural network from scratch and training it on images.

Generally, it is computationally hard and expensive to train a network on images from scratch, and it usually requires GPU support.

But transfer learning is a technique that makes this training simple, super cool and handy.

Oxford's Visual Geometry Group developed the so-called VGG16 model and trained it on the ImageNet database, which contains millions of labelled images.

Let's dive into transfer learning.

Let's begin by importing the VGG16 model and the Keras layers needed to build the fully connected layers.

from keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten,Input
from keras.models import Sequential, Model
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16

In this example, we are going to classify three classes of traffic light signals: Red, Green and Yellow. The input images are expected to have shape (224, 224, 3), with the RGB colour channels last.

num_classes=3
img_size=224
image_input = Input(shape=(224, 224, 3))
model = VGG16(input_tensor=image_input, include_top=False, weights= 'imagenet')
model.summary()

As you can see, we are not including the top layers (the fully connected layers), and we are using the ImageNet weights for our VGG16 model, which greatly reduces the training computation.
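
As a quick optional check, you can inspect the shape of the features the base model will pass on to our new layers:

# for 224x224 RGB inputs, VGG16 without its top layers outputs 7x7x512 feature maps
print(model.output_shape)   # (None, 7, 7, 512)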

Now, we need to build our own new network layers and append them on top of the base VGG16 model for our classification problem.

top_model = Sequential()
top_model.add(model)
top_model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=model.output_shape[1:],padding='same'))
top_model.add(MaxPooling2D((2,2), padding='same'))
top_model.add(Dropout(0.25))
top_model.add(Conv2D(64, kernel_size= (3,3),activation='relu',padding='same'))
top_model.add(MaxPooling2D((2,2), padding='same'))
top_model.add(Flatten())
top_model.add(Dense(128,activation='relu'))
top_model.add(Dense(num_classes,activation='softmax'))

Since the base VGG16 model is already trained, it is good at extracting patterns, edges and textures from images, so we don't need to train it again. We freeze the base model and train only the newly appended layers.

# freeze all but the last 8 layers, i.e. train only the newly added layers
for layer in top_model.layers[:-8]:
    layer.trainable = False

top_model.summary()
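
To verify that the freezing worked as intended, a quick optional check of each layer's trainable flag can help:

# the wrapped VGG16 base should report trainable=False, the newly added layers True
for layer in top_model.layers:
    print(layer.name, layer.trainable)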

After building the model, it's time to fit the training data and evaluate the model's accuracy on the validation data. We are using “accuracy” as our evaluation metric, “Adam” as the optimiser and “categorical_crossentropy” as the loss function.

top_model.compile(optimizer = 'Adam', loss='categorical_crossentropy', metrics=['accuracy'])
from keras.callbacks import EarlyStopping

earlystop = EarlyStopping(monitor='val_acc', min_delta=0.0001, patience=3,
                          verbose=1, mode='auto')
callbacks_list = [earlystop]

batch_size=32

data_generator = ImageDataGenerator(rescale=1./255)

data_generator_with_aug = ImageDataGenerator(horizontal_flip=True,
                                            rescale=1./255,
                                            width_shift_range=0.2,
                                            height_shift_range=0.2)

train_generator = data_generator_with_aug.flow_from_directory(r'E:\Thesis\Try1\Train',
                                                             batch_size=batch_size,
                                                             target_size=(img_size,img_size),
                                                             class_mode='categorical')

validation_generator = data_generator.flow_from_directory(r'E:\Thesis\Try1\Test',
                                                         batch_size=batch_size,
                                                         target_size=(img_size,img_size),
                                                         class_mode='categorical')

history = top_model.fit_generator(train_generator,
                   steps_per_epoch=10,
                   epochs=50,
                   validation_data = validation_generator,
                   validation_steps = 1,
                   callbacks=callbacks_list)

“EarlyStopping” is a callback function used to reduce overfitting by monitoring a validation metric – here the validation accuracy (“val_acc”). If the monitored metric does not improve for 3 consecutive epochs (patience=3), EarlyStopping stops the training process.
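
Once training finishes, the history object returned by fit_generator can be used to see how the accuracy evolved over the epochs. A minimal sketch (the exact key names, 'acc'/'val_acc' versus 'accuracy'/'val_accuracy', depend on your Keras version):

import matplotlib.pyplot as plt

# plot training vs. validation accuracy per epoch
plt.plot(history.history['acc'], label='train accuracy')
plt.plot(history.history['val_acc'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()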

Hope you guys have learnt something about transfer learning.

Cheers 🙂