Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed
Yifan Wang*, Xingyi He*, Sida Peng, Dongli Tan, Xiaowei Zhou
CVPR 2024 Highlight
realtime_demo.mp4
- Inference code and pretrained models
- Code for reproducing the test-set results
- Add FlashAttention and torch.compile options for better performance
- Jupyter notebook demo for matching a pair of images
- Training code
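For context on what the pair-matching demo produces: LoFTR-style semi-dense matchers score all coarse feature pairs by descriptor similarity and keep mutual nearest neighbors as matches. The sketch below illustrates only that selection step in plain numpy; the function and variable names are illustrative, not the repo's API.

```python
import numpy as np

def mutual_nearest_matches(desc0, desc1):
    """Keep (i, j) pairs that are mutual nearest neighbors in the
    similarity matrix -- the core of LoFTR-style coarse matching.
    desc0: (N, D) descriptors from image 0; desc1: (M, D) from image 1."""
    # L2-normalize so the dot product is cosine similarity
    d0 = desc0 / np.linalg.norm(desc0, axis=1, keepdims=True)
    d1 = desc1 / np.linalg.norm(desc1, axis=1, keepdims=True)
    sim = d0 @ d1.T                # (N, M) similarity matrix
    nn01 = sim.argmax(axis=1)      # best j in image 1 for each i
    nn10 = sim.argmax(axis=0)      # best i in image 0 for each j
    i = np.arange(sim.shape[0])
    mutual = nn10[nn01[i]] == i    # keep only pairs that agree both ways
    return np.stack([i[mutual], nn01[i[mutual]]], axis=1)

# sanity check on synthetic descriptors: a noisy permuted copy
rng = np.random.default_rng(0)
desc1 = rng.normal(size=(6, 8))
perm = rng.permutation(6)
desc0 = desc1[perm] + 0.01 * rng.normal(size=(6, 8))
matches = mutual_nearest_matches(desc0, desc1)
print(matches)
```

With near-duplicate descriptors the recovered matches reproduce the permutation; real matchers additionally threshold the (softmax-normalized) scores before the mutual-NN check.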
python3 -m venv venv
source venv/bin/activate
pip install torch==2.0.0+cu118 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

The test and training datasets can be downloaded via the download links provided by LoFTR.
We provide our pretrained model via the download link.
Quickstart
- Put the model into weights/eloftr_outdoor.ckpt
- Run the sample in notebooks/demo_single_pair.ipynb
- MC Additions: run
  python match_and_compute_homography.py assets/thermal_visible/ visible-frame-007900.jpg thermal-frame-007900.jpg .
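A homography can be estimated from the matched keypoints via the standard Direct Linear Transform (DLT). The sketch below shows that textbook computation with numpy on synthetic correspondences; it is an illustration of the underlying math, not the repo's match_and_compute_homography.py, and in practice one would wrap it in RANSAC to reject outlier matches.

```python
import numpy as np

def estimate_homography(pts0, pts1):
    """DLT: estimate H such that pts1 ~ H @ pts0 (homogeneous, up to
    scale) from >= 4 non-degenerate point correspondences."""
    A = []
    for (x, y), (u, v) in zip(pts0, pts1):
        # each correspondence contributes two linear constraints on H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# synthetic check: warp points with a known homography, then recover it
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, -3.0],
                   [0.001, 0.002, 1.0]])
pts0 = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0], [5.0, 3.0]])
h = (H_true @ np.column_stack([pts0, np.ones(len(pts0))]).T).T
pts1 = h[:, :2] / h[:, 2:3]
H_est = estimate_homography(pts0, pts1)
print(np.round(H_est, 3))
```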
First set up the testing subsets of ScanNet and MegaDepth by creating symlinks from the previously downloaded datasets to data/{dataset}/test:
# set up symlinks
ln -s /path/to/scannet-1500-testset/* /path/to/EfficientLoFTR/data/scannet/test
ln -s /path/to/megadepth-1500-testset/* /path/to/EfficientLoFTR/data/megadepth/test

conda activate eloftr
bash scripts/reproduce_test/indoor_full_time.sh
bash scripts/reproduce_test/indoor_opt_time.sh

conda activate eloftr
bash scripts/reproduce_test/outdoor_full_auc.sh
bash scripts/reproduce_test/outdoor_opt_auc.sh
bash scripts/reproduce_test/indoor_full_auc.sh
bash scripts/reproduce_test/indoor_opt_auc.sh

conda env create -f environment_training.yaml  # uses a different PyTorch version, which may differ slightly from the inference environment
pip install -r requirements.txt
conda activate eloftr_training
bash scripts/reproduce_train/eloftr_outdoor.sh eloftr_outdoor

If you find this code useful for your research, please use the following BibTeX entry.
@inproceedings{wang2024eloftr,
title={{Efficient LoFTR}: Semi-Dense Local Feature Matching with Sparse-Like Speed},
author={Wang, Yifan and He, Xingyi and Peng, Sida and Tan, Dongli and Zhou, Xiaowei},
booktitle={CVPR},
year={2024}
}