
Commit 272fcf3

Update README.md
Verify that the code works well in pytorch 1.7.0
1 parent f9002d5 commit 272fcf3

File tree

1 file changed (+4, -2 lines changed)


README.md

Lines changed: 4 additions & 2 deletions
@@ -12,7 +12,7 @@
 
 ## Environment
 - python 3.7.7
-- pytorch 1.4.0 (>=1.2.0, 1.6.0 works too)
+- pytorch 1.4.0 (>=1.2.0, 1.7.0 works too)
 - opencv 4.2.0.34 (others work too)
 
 ## Visualization
@@ -78,6 +78,8 @@ to **Non-Local_pytorch_0.3.1/**.
 
 10. The code also works well in **pytorch 1.6.0**. Add **demo_MNIST_AMP_train_with_single_gpu.py** with Automatic Mixed Precision (FP16) training, supported by **pytorch 1.6.0**. It can reduce GPU memory usage during training. What's more, if you use a GPU with tensor cores (e.g. 2080Ti), training speed can be increased. More details (such as how to train with multiple GPUs) can be found [here](https://pytorch.org/docs/stable/notes/amp_examples.html#typical-mixed-precision-training).
 
+11. Verified that the code works well in **pytorch 1.7.0**.
+
 ## Todo
 - Experiments on Charades dataset.
 - Experiments on COCO dataset.
@@ -87,4 +89,4 @@ to **Non-Local_pytorch_0.3.1/**.
 1. [**Non-local ResNet-50 TSM**](https://github.com/MIT-HAN-LAB/temporal-shift-module)
 ([**Paper**](https://arxiv.org/abs/1811.08383)) on Kinetics dataset. They report that their model achieves a good performance
 of **75.6% on Kinetics**, which is even higher than Non-local ResNet-50 I3D
-([**Here**](https://github.com/AlexHex7/Non-local_pytorch/issues/23)).
+([**Here**](https://github.com/AlexHex7/Non-local_pytorch/issues/23)).
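
Item 10 in the diff above refers to PyTorch's Automatic Mixed Precision training. For reference, below is a minimal sketch of the typical AMP recipe that the linked PyTorch docs describe (`torch.cuda.amp.autocast` + `GradScaler`, available since pytorch 1.6.0). The toy model, random batch, and hyperparameters are illustrative stand-ins, not the repository's **demo_MNIST_AMP_train_with_single_gpu.py**; a CUDA device is assumed.

```python
import torch
import torch.nn as nn

device = torch.device('cuda')  # AMP via torch.cuda.amp assumes a CUDA device

# Toy MNIST-sized classifier as a placeholder for the repo's real model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 gradient underflow

for step in range(100):
    # Dummy batch standing in for a real DataLoader.
    images = torch.randn(32, 1, 28, 28, device=device)
    labels = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():       # forward pass runs in mixed precision
        outputs = model(images)
        loss = criterion(outputs, labels)
    scaler.scale(loss).backward()         # backward pass on the scaled loss
    scaler.step(optimizer)                # unscales gradients, then calls optimizer.step()
    scaler.update()                       # adjusts the loss scale for the next iteration
```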
