My colleague Sanchit Singh and I had the honor of presenting at WeAreDevelopers Congress Vienna 2019. We wanted to support the whole talk with (ideally) real-world examples, so we decided to show most of the ideas on exaggerated use cases in Google Colab, with training right on the stage. The code can be found here.

Our stage. Source: own work

The video of the talk has not been published yet, so for now, what follows is a short summary of what we presented.

Why use transfer learning?

  • To reuse a big model (trained once on millions of samples)
  • To use Deep Learning even on normally too small datasets
  • To increase accuracy by leveraging the extra information from different data
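Conceptually, transfer learning keeps the pretrained body of the network and trains only a new head on the small dataset. Here is a minimal, library-free sketch of that idea (the `Layer` class and layer names are illustrative, not the fastai API):

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    trainable: bool = True

def freeze_body(layers):
    """Freeze all pretrained layers except the newly added head."""
    for layer in layers[:-1]:  # everything but the last layer
        layer.trainable = False
    return layers

# a pretend ResNet: pretrained body plus a fresh classification head
net = [Layer("conv1"), Layer("block1"), Layer("block2"), Layer("head")]
net = freeze_body(net)

trainable = [layer.name for layer in net if layer.trainable]
print(trainable)  # → ['head']
```

Only the head sees gradient updates, so the millions of pretrained body weights are reused as-is, which is exactly what makes training on a few hundred images feasible.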

In this article, we will work with the ResNet50 [1] architecture pre-trained on the ImageNet [2] dataset. All code uses the fastai library, which is included in the Colab environment.

Example images from ImageNet.
Source: ImageNet

Code to create ResNet50 with weights pre-trained on ImageNet:

from fastai.vision import *

def get_resnet(data) -> Learner:
    # ResNet50 body with ImageNet weights; cnn_learner attaches
    # a fresh head sized for the classes in `data`
    arch = models.resnet50
    learner = cnn_learner(data, arch, metrics=[accuracy])

    return learner

Classifying cats and dogs

The most common example of them all. The train set has 363 images, the validation set 90, and the test set 463. A test set this large compared to the validation or even the train set is really important with so little data; otherwise we may see totally different results at inference time. Let's train one cycle using the one-cycle policy [3]:

def train(model: Learner, cyc_len=1) -> Learner:
    # stop training early once the validation loss stops improving
    callbacks = [EarlyStoppingCallback(model, monitor='valid_loss', min_delta=0.01, patience=2)]
    model.fit_one_cycle(cyc_len=cyc_len, max_lr=1e-3, callbacks=callbacks)

    return model

model = get_resnet(train_data)
model = train(model)
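The one-cycle policy behind fit_one_cycle ramps the learning rate up to max_lr and then anneals it back down. A simplified sketch of such a schedule (linear warm-up followed by cosine annealing; fastai's exact shape and parameter names differ):

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-3, pct_warmup=0.3, div=25):
    """Learning rate at `step`: linear ramp to max_lr, then cosine decay."""
    start_lr = max_lr / div
    warmup_steps = int(total_steps * pct_warmup)
    if step < warmup_steps:
        # linear warm-up from start_lr to max_lr
        return start_lr + (max_lr - start_lr) * step / warmup_steps
    # cosine annealing from max_lr back towards zero
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * (1 + math.cos(math.pi * progress)) / 2

schedule = [one_cycle_lr(step, 100) for step in range(100)]
```

The brief warm-up lets the network settle before the large learning rates hit, and the long annealing tail helps it converge to a good minimum in a single cycle.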

For better evaluation, we will use the confusion matrices of the validation and test sets.

from copy import deepcopy
from IPython.display import display, HTML

def eval(model: Learner, data, name=''):
    # evaluate on a copy so swapping the data does not mutate the trained learner
    model_copy = deepcopy(model)
    model_copy.data = data

    _, acc = model_copy.validate(data.valid_dl, metrics=[accuracy])
    interp = ClassificationInterpretation.from_learner(model_copy)

    interp.plot_confusion_matrix(figsize=(12, 12), dpi=60, return_fig=True)

    display(HTML(f'<h1>{name} accuracy: {float(acc)*100:3.4} %</h1>'))
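Under the hood, a confusion matrix simply counts how often each true class was predicted as each class. A minimal hand-rolled version for a two-class cats/dogs problem (plain Python, not fastai's implementation):

```python
def confusion_matrix(y_true, y_pred, n_classes=2):
    """matrix[i][j] = number of samples of true class i predicted as class j."""
    matrix = [[0] * n_classes for _ in range(n_classes)]
    for true, pred in zip(y_true, y_pred):
        matrix[true][pred] += 1
    return matrix

# 0 = cat, 1 = dog; two of the six predictions are wrong
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0]
cm = confusion_matrix(y_true, y_pred)   # [[2, 1], [1, 2]]

# accuracy is the diagonal (correct predictions) over all samples
accuracy = sum(cm[i][i] for i in range(2)) / len(y_true)
```

The diagonal holds the correct predictions, so unlike a single accuracy number, the matrix shows which classes get confused with which.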

eval(model, train_data, 'Val')
Validation accuracy: 98.89 %

That is a great result, but it was to be expected: different breeds of cats and dogs are already present in the ImageNet dataset. What is the accuracy on the test set?

eval(model, test_data, 'Test')
Test accuracy: 81.64 %

That is significantly worse than on the validation set: we are overfitting to the training data.

Want more? Please watch the recording from the conference, or run the code in Google Colab.

References

  1. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015. Deep Residual Learning for Image Recognition. arXiv:1512.03385 

  2. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. image-net.org 

  3. Leslie N. Smith. 2018. A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. arXiv:1803.09820