# Image captioning


## Overview

Architecture of the model (diagram included in the repository): a ResNet-152 image encoder feeding a Transformer.

The model is trained on the Flickr30k dataset.

The model has 53.1M trainable parameters and 58.1M non-trainable parameters, for a total of 111.2M parameters. You can change the number of trainable parameters via the d_model argument.

d_model is the embedding size for the text, the projected feature-map size for the image, and the hidden size of the Transformer.

It took 19 epochs of training to reach a validation cross-entropy loss of 2.31 with the following hyperparameters:

d_model: 512
dropout_rate: 0.1
gamma: 0.95
lr_start: 5.0e-05
num_heads: 8
vocab_size: 7736
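One plausible reading of these hyperparameters (an assumption, not the repository's confirmed setup) is Adam starting at lr_start with gamma as the per-epoch decay factor of an exponential learning-rate schedule:

```python
import torch

# Hypothetical training setup: `gamma` is assumed to be the decay factor
# of an ExponentialLR schedule applied once per epoch, starting at lr_start.
model = torch.nn.Linear(512, 7736)  # stand-in for the captioning model
optimizer = torch.optim.Adam(model.parameters(), lr=5.0e-05)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(19):
    # ... one epoch of batches: forward, criterion, backward, optimizer.step() ...
    scheduler.step()  # lr after epoch k is lr_start * gamma**(k + 1)
```

Under this reading, by epoch 19 the learning rate has decayed to roughly 0.95^19 ≈ 0.38 of its starting value.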


You can achieve better results by tuning the hyperparameters, especially d_model and num_heads, at the cost of longer training.

You can also look at extra_research for the results of a model with d_model=128 and num_heads=4 trained with the same hyperparameters.

## How to use

1. Python 3.11.x with the requirements installed: `pip install -r requirements.txt`

2. To train the model on your own dataset, follow this notebook: training.ipynb

3. To run the pretrained model on a single image from a path or URL, follow this notebook: inference.ipynb

## References

Flickr30k dataset

Attention is All You Need

A detailed guide to PyTorch’s nn.Transformer() module

## About

Image captioning using ResNet-152 and Transformer
