
Image Classification over CIFAR-10 Dataset Using VGG19, ResNet18, and an n-layer MLP | Sample Paper

Experimental Settings

  • Supervised Learning: Since we have labels for all images, we will use supervised learning to solve this task. Here, we will train on all 49,000 training samples together with their labels.

  • Semi-supervised Learning: This comes in handy when the dataset is only partially labelled, or when we want to exploit a pool of unlabeled data during training. To simulate this setting, we will split the CIFAR-10 training data into two parts: 5,000 images that keep their labels and the remaining 44,000 images that are treated as unlabeled. We shall use self-training as a wrapper around the supervised learning technique (the split is sketched below).
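A minimal sketch of this split, assuming PyTorch/torchvision. The 5,000/44,000 labeled/unlabeled numbers come from the setting above; the 1,000-image validation split (which accounts for the 49,000 training samples out of CIFAR-10's 50,000) and the fixed seed are illustrative assumptions.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())

g = torch.Generator().manual_seed(0)           # fixed seed for a reproducible split
perm = torch.randperm(len(train_set), generator=g)

val_idx       = perm[:1000]                    # 1,000 images held out for validation
labeled_idx   = perm[1000:6000]                # 5,000 images that keep their labels
unlabeled_idx = perm[6000:]                    # 44,000 images treated as unlabeled

val_set       = Subset(train_set, val_idx.tolist())
labeled_set   = Subset(train_set, labeled_idx.tolist())
unlabeled_set = Subset(train_set, unlabeled_idx.tolist())  # labels ignored in training
```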

What is Self Training?

Self-training is a technique that can wrap any supervised learning algorithm to train it in a semi-supervised fashion. The algorithm first uses the labeled data to train the model. After training it for a few iterations, part of the unlabeled data is pseudo-labeled using the trained model and added to the set of labeled training data for the next iteration.
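A minimal sketch of this loop, reusing the datasets from the split above. Here `train_supervised` is a hypothetical helper for ordinary supervised training, and the confidence threshold and number of rounds are illustrative assumptions; for simplicity, confident samples are not removed from the unlabeled pool, which a fuller implementation would do.

```python
import torch
from torch.utils.data import DataLoader, ConcatDataset

def self_train(model, labeled_set, unlabeled_set, rounds=5, threshold=0.95, device="cpu"):
    for _ in range(rounds):
        train_supervised(model, labeled_set)              # step 1: train on labeled data
        model.eval()
        pseudo = []                                       # a plain list works as a map-style dataset
        with torch.no_grad():
            for x, _ in DataLoader(unlabeled_set, batch_size=256):  # true labels ignored
                probs = torch.softmax(model(x.to(device)), dim=1)
                conf, pred = probs.max(dim=1)
                for xi, ci, pi in zip(x, conf.cpu(), pred.cpu()):
                    if ci >= threshold:                   # step 2: keep confident predictions
                        pseudo.append((xi, int(pi)))
        if pseudo:                                        # step 3: grow the labeled pool
            labeled_set = ConcatDataset([labeled_set, pseudo])
    return model
```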


Implementation Details

Libraries

This time we give you the freedom to choose between PyTorch and TensorFlow. Go through these links to acquaint yourself with the frameworks and decide on your choice. Make sure you follow more or less the same format when coding in either of the two deep learning libraries.


Data Augmentation

Data augmentation is the process of transforming the data before feeding it to the model. It increases the diversity of the training data, which helps the model generalize and results in higher accuracy. We will primarily apply the following three modifications to the dataset (a transform pipeline is sketched after the list).

  • Random Crop: Add sufficient padding to the original image and then randomly crop to create a new image.

  • Random Horizontal Flip: The original image is randomly flipped horizontally to produce a new image.

  • Resizing the Original Image: The original image is resized to a different shape. Some deep neural networks require inputs of a minimum size because of the multiple down-sampling layers in the architecture.
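A minimal torchvision pipeline covering these three transformations. The padding size, the 224×224 target (a common input size for ImageNet-style VGG19/ResNet18), and the normalization statistics are illustrative assumptions, not requirements.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),        # pad, then randomly crop back to 32x32
    transforms.RandomHorizontalFlip(p=0.5),      # flip half the images horizontally
    transforms.Resize(224),                      # up-sample for deeper architectures
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),   # commonly quoted CIFAR-10 channel means
                         (0.2470, 0.2435, 0.2616)),  # and channel standard deviations
])
```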


Hyperparameter Optimization

The accuracy of any particular model depends on the careful selection of the optimizer and the parameters it depends on, such as the learning rate, momentum for Stochastic Gradient Descent, weight decay, and batch size.


To find the optimum values of these hyperparameters, vary each of them over a certain range and track the accuracy of the model on the validation dataset.
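One simple way to do this is a grid search, sketched below. `train_and_validate` is a hypothetical helper that trains a model with the given configuration and returns validation accuracy; the value grids are illustrative assumptions.

```python
import itertools

grid = {
    "lr":           [0.1, 0.01, 0.001],
    "momentum":     [0.0, 0.9],
    "weight_decay": [0.0, 5e-4],
    "batch_size":   [64, 128],
}

best_acc, best_cfg = 0.0, None
for values in itertools.product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    acc = train_and_validate(**cfg)           # accuracy on the held-out validation set
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg
print(f"best: {best_cfg} -> {best_acc:.3f}")
```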


Deliverables

  • Perform image classification over the CIFAR-10 dataset using VGG19, ResNet18, and an n-layer MLP. Compare their performances by reporting the confusion matrices.

  • Compare the model performance under the two settings (described above) and report your observations. Note: do include the optimum values of the hyperparameters and show plots justifying your choice.

  • Include the training plots (train loss vs. epochs) for each of the trained models; a sketch for producing the confusion matrix and this plot follows the list.
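A minimal sketch for these deliverables, assuming `model`, `test_loader`, and a per-epoch `train_losses` list come from your own training code.

```python
import torch
import matplotlib.pyplot as plt

def confusion_matrix(model, test_loader, num_classes=10, device="cpu"):
    cm = torch.zeros(num_classes, num_classes, dtype=torch.long)
    model.eval()
    with torch.no_grad():
        for x, y in test_loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            for t, p in zip(y, pred):            # rows: true class, columns: predicted class
                cm[t, p] += 1
    return cm

def plot_train_loss(train_losses):
    plt.plot(range(1, len(train_losses) + 1), train_losses)
    plt.xlabel("Epochs")
    plt.ylabel("Train loss")
    plt.title("Train loss vs Epochs")
    plt.show()
```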



If you need any help related to Deep Learning, or need a solution to the above problem, we are ready to help you: comment in the comment section below and get instant help at an affordable price:


realcode4you@gmail.com
