I completed my graduation project based on this repository. Here are the details of my work, and this is the link to the original project.
This repository contains the code for the paper *Learning Binary Code for Personalized Fashion Recommendation*.
- pytorch
- torchvision
- PIL
- numpy
- pandas
- tqdm: A Fast, Extensible Progress Bar for Python and CLI
- lmdb: A universal Python binding for the LMDB 'Lightning' Database.
- yaml: PyYAML is a full-featured YAML framework for the Python programming language.
- visdom: To start a visdom server, run `python -m visdom.server`
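A quick way to check that the packages listed above are importable is a small stdlib sketch; note the import names below are assumptions (e.g. the Pillow package is imported as `PIL`):

```python
# Check which of the listed dependencies are importable, without
# actually importing them (find_spec only locates the module).
import importlib.util

def missing_packages(names):
    """Return the subset of module names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    deps = ["torch", "torchvision", "PIL", "numpy",
            "pandas", "tqdm", "lmdb", "yaml", "visdom"]
    print(missing_packages(deps))
```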
I upgraded PyTorch to version 1.2.0; the package dependencies are resolved automatically by conda.
The last 4 packages can be installed via conda:

```
conda install python-lmdb pyyaml visdom tqdm -c conda-forge
```

Alternatively, create a dedicated conda environment and install everything from `requirements.txt` (note the quotes around the version spec, so the shell does not treat `>=` as a redirection):

```
conda create -n FashionHashNet "python>=3.8" anaconda
conda activate FashionHashNet
pip install -r requirements.txt
```

or:

```
conda install --file requirements.txt
```

Remember to add this project to the `PYTHONPATH` environment variable if you plan to run the experiments from the terminal:
```
export PYTHONPATH=$PYTHONPATH:/path/to/project/folder
```
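At startup the interpreter prepends each `os.pathsep`-separated entry of `PYTHONPATH` to `sys.path`, which is why the export above makes `polyvore` importable. A minimal stdlib illustration of that splitting:

```python
# PYTHONPATH is a single string of directories separated by
# os.pathsep (":" on Linux/macOS); each entry becomes a sys.path entry.
import os

def pythonpath_entries(value):
    """Split a PYTHONPATH-style string into individual directories."""
    return [p for p in value.split(os.pathsep) if p]
```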
To build and run the Docker image:

```
docker build -t fashion-hash-net:v1 .
docker run -itd --shm-size=256M --name fashion-hash-net fashion-hash-net:v1
```

If you're trying to train networks with this image, you may run into an error like:
```
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
```
This is because Docker defaults to 64 MB of shared memory no matter how much memory the container is allocated. You can override this by passing e.g. `--shm-size=32g` as an argument to `docker run` before the Docker image name.
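A back-of-envelope estimate shows why 64 MB is too little: DataLoader workers pass batches through shared memory, and even a single batch of 291x291 RGB float32 tensors is close to that limit (the batch size of 64 here is an assumption for illustration, not a value from this repo's configs):

```python
# Rough shared-memory footprint of one DataLoader batch.
# 291x291 RGB images stored as float32 (4 bytes per value);
# batch size 64 is an illustrative assumption.
def batch_bytes(height, width, channels, batch_size, dtype_bytes=4):
    """Bytes occupied by one batch of dense image tensors."""
    return height * width * channels * dtype_bytes * batch_size

if __name__ == "__main__":
    mb = batch_bytes(291, 291, 3, batch_size=64) / 2**20
    print(f"{mb:.1f} MiB per batch")  # roughly 62 MiB, near the 64 MB default
```

With several workers each holding a batch in flight, the requirement multiplies, hence the `--shm-size` override.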
The main script `scripts/run.py` currently supports the following functions:
```python
ACTION_FUNS = {
    # train models
    "train": train,
    # running the FITB task
    "fitb": fitb,
    # evaluate pairwise accuracy
    "evaluate-accuracy": evalute_accuracy,
    # evaluate NDCG and AUC
    "evaluate-rank": evalute_rank,
    # compute the binary codes
    "extract-features": extract_features,
}
```

There are three main modules in `polyvore`:
- `polyvore.data`: module for the polyvore dataset
- `polyvore.model`: module for the fashion hash net
- `polyvore.solver`: module for training
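The `ACTION_FUNS` mapping above suggests a simple dictionary dispatch: the CLI action name is looked up and the matching function is called with the parsed configuration. A minimal sketch of that pattern (the action bodies here are placeholders, not the repository's code):

```python
# Dictionary-based action dispatch, as hinted by ACTION_FUNS in
# scripts/run.py. The implementations below are placeholders.
def train(cfg):
    return f"training with {cfg}"

def fitb(cfg):
    return f"fitb with {cfg}"

ACTION_FUNS = {"train": train, "fitb": fitb}

def run(action, cfg):
    """Look up the requested action and invoke it with the config path."""
    return ACTION_FUNS[action](cfg)
```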
For configurations, see `polyvore.param`; some examples are given in the `cfg` folder. The configuration files are written in YAML format.
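As a hypothetical illustration of the file style, a fragment like the following could appear in a `cfg` file; only the `use_lmdb` key is confirmed by this README, the other keys are invented for illustration:

```yaml
# Illustrative fragment only; consult polyvore.param and the cfg/
# folder for the real keys. Only use_lmdb appears in this README.
data:
  use_lmdb: true
  image_dir: data/polyvore/images
```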
To train FHN-T3 with both visual and semantic features, run the following script:
Start the visdom server before starting the training:

```
python -m visdom.server > visdom.log 2>&1 &
```

Then run:

```
scripts/run.py train --cfg ./cfg/train/FHN_VSE_T3_630.yaml
```

or, in the background with logging:

```
python scripts/run.py train --cfg ./cfg/train/FHN_T3_53.yaml > train_53.log 2>&1 &
```

To evaluate the accuracy of positive-negative pairs:
```
scripts/run.py evaluate-accuracy --cfg ./cfg/evaluate-accuracy/FHN_VSE_T3_630.yaml
```

or:

```
python scripts/run.py evaluate-accuracy --cfg ./cfg/evaluate-accuracy/FHN_T3_53.yaml > evaluate_accuracy_53.log 2>&1 &
```

To evaluate the rank quality:
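Pairwise accuracy here means the fraction of positive-negative outfit pairs in which the positive outfit receives the higher score. A hedged stdlib sketch of that metric (the repository's own implementation may differ):

```python
# Pairwise accuracy: fraction of (positive, negative) score pairs
# where the positive outfit is ranked above the negative one.
def pair_accuracy(pos_scores, neg_scores):
    """pos_scores[i] and neg_scores[i] belong to the same pair."""
    correct = sum(p > n for p, n in zip(pos_scores, neg_scores))
    return correct / len(pos_scores)
```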
```
scripts/run.py evaluate-rank --cfg ./cfg/evaluate-rank/FHN_VSE_T3_630.yaml
```

or:

```
python scripts/run.py evaluate-rank --cfg ./cfg/evaluate-rank/FHN_T3_53.yaml > evaluate_rank_53.log 2>&1 &
```

To evaluate the FITB task:
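The `evaluate-rank` action reports NDCG and AUC. AUC has a standard pairwise interpretation: the probability that a randomly chosen positive outfit scores higher than a randomly chosen negative one. A minimal sketch of that definition (not the repository's code):

```python
# AUC as the probability that a random positive score beats a random
# negative score, counting ties as half a win.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```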
```
scripts/run.py fitb --cfg ./cfg/fitb/FHN_VSE_T3_630.yaml
```

or:

```
python scripts/run.py fitb --cfg ./cfg/fitb/FHN_T3_53.yaml > fitb_53.log 2>&1 &
```
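In the FITB (fill-in-the-blank) task, each question removes one item from an outfit and the model must pick the correct replacement from a set of candidates; accuracy is how often the highest-scoring candidate is the true item. A hedged sketch of that scoring logic (the candidate scores would come from the trained model):

```python
# FITB evaluation sketch: for each question, the predicted answer is
# the argmax over candidate scores; accuracy counts exact matches.
def fitb_accuracy(question_scores, answer_indices):
    """question_scores: one list of candidate scores per question."""
    hits = sum(
        max(range(len(scores)), key=scores.__getitem__) == ans
        for scores, ans in zip(question_scores, answer_indices)
    )
    return hits / len(question_scores)
```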
To prepare the dataset:

1. Download the data from OneDrive and put the `polyvore` folder under `data`;
2. Unzip `polyvore/images/291x291.tar.gz`;
3. Use `scripts/build_polyvore.py` to convert the images and save them in `data/polyvore/lmdb`:

```
python scripts/build_polyvore.py data/polyvore/images/291x291 data/polyvore/images/lmdb
```

The `lmdb` format can accelerate the loading of images and is set as the default in the configuration. If you don't want to use the `lmdb` format, change the setting to `use_lmdb: false` in the `yaml` files.
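A data pipeline honoring this flag might select its image backend as sketched below; the backend names and config structure are assumptions for illustration, not the repository's API:

```python
# Sketch of selecting the image backend from the yaml-derived config
# dict; "lmdb" is the default when the key is absent, matching the
# README's statement that lmdb is the default.
def pick_backend(config):
    """Return which image-loading backend to use."""
    return "lmdb" if config.get("use_lmdb", True) else "files"
```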
See `data/README.md` for details.
```
@inproceedings{Lu:2019tk,
    author = {Lu, Zhi and Hu, Yang and Jiang, Yunchao and Chen, Yan and Zeng, Bing},
    title = {{Learning Binary Code for Personalized Fashion Recommendation}},
    booktitle = {CVPR},
    year = {2019}
}
```

Email: zhilu@std.uestc.edu.cn