diff --git a/README.md b/README.md
index 05f46613..5a062ecf 100644
--- a/README.md
+++ b/README.md
@@ -14,12 +14,12 @@ We introduce existing datasets for Human Trajectory Prediction (HTP) task, and a
|  | [GC](datasets/GC) | Grand Central Train Station Dataset: 1 scene of 33:20 minutes of crowd trajectories #Traj:[Peds=12,684] Coord=image-2D FPS=25 | [dropbox](https://www.dropbox.com/s/7y90xsxq0l0yv8d/cvpr2015_pedestrianWalkingPathDataset.rar) [paper](http://openaccess.thecvf.com/content_cvpr_2015/html/Yi_Understanding_Pedestrian_Behaviors_2015_CVPR_paper.html) |
|  | [HERMES](datasets/HERMES) | Controlled Experiments of Pedestrian Dynamics (Unidirectional and bidirectional flows) #Traj:[?] Coord=world-2D FPS=16 | [website](https://www.fz-juelich.de/ias/ias-7/EN/AboutUs/Projects/Hermes/_node.html) [data](https://www.fz-juelich.de/ias/ias-7/EN/Research/Pedestrian_Dynamics-Empiricism/_node.html) |
|  | [Waymo](datasets/Waymo) | High-resolution sensor data collected by Waymo self-driving cars #Traj:[?] Coord=2D and 3D FPS=? | [website](https://waymo.com/open/) [github](https://github.com/waymo-research/waymo-open-dataset) |
-|  | [KITTI](datasets/KITTI) | 6 hours of traffic scenarios. various sensors #Traj:[?] Coord=image-3D + Calib FPS=10 | [website](http://www.cvlibs.net/datasets/kitti/) |
+|  | [KITTI](datasets/KITTI) | 6 hours of traffic scenarios recorded with various sensors #Traj:[?] Coord=image-3D + Calib FPS=10 | [website](http://www.cvlibs.net/datasets/kitti/) |
|  | [inD](datasets/InD) | Naturalistic Trajectories of Vehicles and Vulnerable Road Users Recorded at German Intersections #Traj:[Total=11,500] Coord=world-2D FPS=25 | [website](https://www.ind-dataset.com/) [paper](https://arxiv.org/pdf/1911.07602.pdf) |
|  | [L-CAS](datasets/L-CAS) | Multisensor People Dataset Collected by a Pioneer 3-AT robot #Traj:[?] Coord=? FPS=? | [website](https://lcas.lincoln.ac.uk/wp/research/data-sets-software/l-cas-multisensor-people-dataset/) |
|  | [Edinburgh](datasets/Edinburgh) | People walking through the Informatics Forum (University of Edinburgh) #Traj:[peds=92,000+] FPS=? | [website](http://homepages.inf.ed.ac.uk/rbf/FORUMTRACKING/) |
|  | [Town Center](datasets/Town-Center) | CCTV video of pedestrians in a busy downtown area in Oxford #Traj:[peds=2,200] Coord=? FPS=? | [website](https://megapixels.cc/datasets/oxford_town_centre/) |
-|  | [Wild Track](datasets/Wild-Track) | surveillance video dataset of students recorded outside the ETH university main building in Zurich. #Traj:[peds=1,200] | [website](https://megapixels.cc/wildtrack/) |
+|  | [Wild Track](datasets/Wild-Track) | Surveillance video dataset of students recorded outside the ETH university main building in Zurich. #Traj:[peds=1,200] | [website](https://megapixels.cc/wildtrack/) |
|  | [ATC](datasets/ATC) | 92 days of pedestrian trajectories in a shopping center in Osaka, Japan #Traj:[?] Coord=world-2D + Range data | [website](https://irc.atr.jp/crest2010_HRI/ATC_dataset) |
|  | [VIRAT](datasets/VIRAT) | Natural scenes showing people performing normal actions #Traj:[?] Coord=? FPS=? | [website](http://viratdata.org/) |
|  | [Forking Paths Garden](datasets/Forking-Paths-Garden) | **Multi-modal** _Synthetic_ dataset, created in [CARLA](https://carla.org) (3D simulator) based on real world trajectory data, extrapolated by human annotators #Traj:[?] | [website](https://next.cs.cmu.edu/multiverse/index.html) [github](https://github.com/JunweiLiang/Multiverse) [paper](https://arxiv.org/abs/1912.06445) |
@@ -30,7 +30,7 @@ We introduce existing datasets for Human Trajectory Prediction (HTP) task, and a
|  | [City Scapes](datasets/City-Scapes) | 25,000 annotated images (Semantic/ Instance-wise/ Dense pixel annotations) #Traj:[?] | [website](https://www.cityscapes-dataset.com/dataset-overview/) |
|  | [Argoverse](datasets/Argoverse) | 320 hours of self-driving data #Traj:[objects=11,052] Coord=3D FPS=10 | [website](https://www.argoverse.org) |
|  | [Ko-PER](datasets/Ko-PER) | Trajectories of People and vehicles at Urban Intersections (Laserscanner + Video) #Traj:[peds=350] Coord=world-2D | [paper](https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.110/Bilder/Forschung/Datensaetze/20141010_DatasetDocumentation.pdf) |
-|  | [TRAF](datasets/TRAF) | small dataset of dense and heterogeneous traffic videos in India (22 footages) #Traj:[Cars=33 Bikes=20 Peds=11] Coord=image-2D FPS=10 | [website](https://gamma.umd.edu/researchdirections/autonomousdriving/trafdataset/) [gDrive](https://drive.google.com/drive/folders/1zKaeboslkqoLdTJbRMyQ0Y9JL3007LRr) [paper](https://arxiv.org/pdf/1812.04767.pdf) |
+|  | [TRAF](datasets/TRAF) | Small dataset of dense and heterogeneous traffic videos in India (22 clips) #Traj:[Cars=33 Bikes=20 Peds=11] Coord=image-2D FPS=10 | [website](https://gamma.umd.edu/researchdirections/autonomousdriving/trafdataset/) [gDrive](https://drive.google.com/drive/folders/1zKaeboslkqoLdTJbRMyQ0Y9JL3007LRr) [paper](https://arxiv.org/pdf/1812.04767.pdf) |
|  | [ETH-Person](datasets/ETH-Person) | Multi-Person Data Collected from Mobile Platforms | [website](https://data.vision.ee.ethz.ch/cvl/aess/) |
@@ -105,7 +105,7 @@ Final Displacement Error (FDE) measures the distance between final predicted pos
- Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks, Gupta et al. CVPR 2018. [paper]()
- Social Ways: Learning Multi-Modal Distributions of Pedestrian Trajectories with GANs, Amirian et al. CVPR 2019. [paper](), [code]()
-->
-**References**: an *awsome* list of trajectory prediction references can be found [here](https://github.com/jiachenli94/Awesome-Interaction-aware-Trajectory-Prediction)
+**References**: An *awesome* list of trajectory prediction references can be found [here](https://github.com/jiachenli94/Awesome-Interaction-aware-Trajectory-Prediction).