- The model is trained on the Flickr30k dataset.
- A pretrained ResNet-152 model is used as the encoder.
- A Transformer module is used as the decoder.
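For orientation, here is a minimal sketch of how such an encoder-decoder pairing is typically wired up in PyTorch. The class name, decoder depth, and freezing of the CNN backbone are illustrative assumptions, not this repo's exact implementation (positional encodings are omitted for brevity):

```python
# Illustrative sketch only; the repo's actual classes and layer counts may differ.
import torch
import torch.nn as nn
from torchvision.models import resnet152, ResNet152_Weights

class CaptioningModel(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 512, num_heads: int = 8):
        super().__init__()
        backbone = resnet152(weights=ResNet152_Weights.DEFAULT)
        # Drop the avgpool and fc head; keep the convolutional feature extractor.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.encoder.parameters():   # assumed: pretrained CNN is frozen
            p.requires_grad = False
        self.proj = nn.Linear(2048, d_model)  # map CNN channels to d_model
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, num_heads, batch_first=True),
            num_layers=6,  # assumed depth
        )
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        feats = self.encoder(images)                        # (B, 2048, 7, 7)
        memory = self.proj(feats.flatten(2).transpose(1, 2))  # (B, 49, d_model)
        tgt = self.embed(tokens)                            # (B, T, d_model)
        mask = nn.Transformer.generate_square_subsequent_mask(
            tokens.size(1)
        ).to(tokens.device)
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.head(out)                               # (B, T, vocab_size)
```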
The model has 53.1 M trainable parameters and 58.1 M non-trainable parameters, for a total of 111.2 M parameters. You can change the number of trainable parameters by changing the `d_model` argument.
`d_model` is the embedding size for the text, the feature map size for the image, and the hidden size of the Transformer.
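To verify the trainable/non-trainable split for a given `d_model`, a quick check like this works for any PyTorch module (`model` stands in for the instantiated captioning model):

```python
# Count trainable vs. frozen parameters of an nn.Module.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"trainable: {trainable / 1e6:.1f} M, non-trainable: {frozen / 1e6:.1f} M")
```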
It took 19 epochs of training to reach a validation cross-entropy loss of 2.31 with the following hyperparameters:
```yaml
d_model: 512
dropout_rate: 0.1
gamma: 0.95
lr_start: 5.0e-05
num_heads: 8
vocab_size: 7736
```

You can achieve better results by changing the hyperparameters, especially `d_model` and `num_heads`, but training will take more time. You can also look at `extra_research` to see the results of the model with `d_model=128` and `num_heads=4` trained with the same hyperparameters.
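The `lr_start` and `gamma` values above are consistent with an exponentially decaying learning-rate schedule. A sketch of how such a schedule could be set up, assuming an Adam optimizer (the actual optimizer choice is not stated here):

```python
import torch

# Assumption: Adam + per-epoch exponential decay; the repo may differ.
optimizer = torch.optim.Adam(model.parameters(), lr=5.0e-05)  # lr_start
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for epoch in range(19):
    # ... run one training epoch (forward, loss, backward, optimizer.step()) ...
    scheduler.step()  # learning rate is multiplied by gamma after each epoch
```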
- Python 3.11.x with the requirements installed: `pip install -r requirements.txt`
- To train the model on your own dataset, follow this notebook: training.ipynb
- To run the pretrained model on any single image from a path or URL, follow this notebook: inference.ipynb
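As a reference for the "path or URL" input handling, a helper along these lines could load the image in either case (the notebook's actual helper may differ):

```python
# Sketch: load an image from a local path or a URL for inference.
import io
import requests
from PIL import Image

def load_image(source: str) -> Image.Image:
    if source.startswith(("http://", "https://")):
        resp = requests.get(source, timeout=10)
        resp.raise_for_status()
        return Image.open(io.BytesIO(resp.content)).convert("RGB")
    return Image.open(source).convert("RGB")
```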