This repository is an Isaac Lab extension for legged robot reinforcement learning. It lets you develop in an isolated environment, outside of the core Isaac Lab repository. The RL algorithms are based on a forked RSL-RL library.
This project was originally developed by zitongbai.
Key Features:
- DeepMimic for humanoid robots, including Unitree G1.
- Adversarial Motion Priors (AMP) for humanoid robots, including Atom01 and Unitree G1.
- We suggest retargeting human motion data with GMR.
- Adversarial Motion Priors for Unitree G1 (demo video: rl-video-step-0.mp4)
- 2026/01/06: Add the Atom01 open-source robot from Roboparty (two versions: short and long base link).
- 2025/12/16: Tested with Isaac Lab 2.3.1 and RSL-RL 3.2.0.
- 2025/12/05: Use git lfs to store large files, including motion data and robot models.
- 2025/11/23: Add symmetry data augmentation to AMP training.
- 2025/11/22: New implementation of AMP.
- 2025/11/19: Add DeepMimic for G1.
- 2025/10/14: Update to support rsl_rl v3.1.1. Only walking on flat terrain is supported for now.
- 2025/08/24: Support using multi-step observations and motion data in AMP training.
- 2025/08/22: Compatible with Isaac Lab 2.2.0.
- 2025/08/21: Add support for retargeting human motion data with GMR.
- Isaac Lab: Ensure you have installed Isaac Lab v2.3.1. Follow the official guide.
- Git LFS: Required for downloading large model files.
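If Git LFS is not yet set up on your machine, a typical installation looks like this (shown for Debian/Ubuntu; adjust for your package manager):

```bash
# install the Git LFS package (Debian/Ubuntu example)
sudo apt-get install git-lfs
# register the LFS filters with Git (run once per machine)
git lfs install
```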
- Clone the Repository

  Clone this repository outside your existing `IsaacLab` directory to maintain isolation.

  ```bash
  # Option 1: HTTPS
  git clone https://github.com/zerojuhao/legged_lab
  # Option 2: SSH
  git clone git@github.com:zerojuhao/legged_lab.git
  cd legged_lab
  ```
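If the motion data or robot model files appear as small LFS pointer stubs after cloning, fetching the LFS objects explicitly usually resolves it:

```bash
# download the actual Git LFS objects referenced by this checkout
git lfs pull
```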
- Install the Package

  Use the Python interpreter associated with your Isaac Lab installation.

  ```bash
  python -m pip install -e source/legged_lab
  ```
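As a quick sanity check, you can confirm the editable install resolves (this assumes the package is importable under the name `legged_lab`):

```bash
# the printed path should point into source/legged_lab
python -c "import legged_lab; print(legged_lab.__file__)"
```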
- Install RSL-RL (Forked Version)

  We use a customized version of `rsl_rl` to support advanced features like AMP.

  ```bash
  # Clone outside of the IsaacLab and legged_lab directories
  git clone -b feature/amp https://github.com/zitongbai/rsl_rl.git
  cd rsl_rl
  python -m pip install -e .
  ```
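To verify that Python picks up the forked checkout rather than a previously installed `rsl_rl` (a simple check, not an official step from this repo):

```bash
# the printed path should point into the rsl_rl checkout you just cloned
python -c "import rsl_rl; print(rsl_rl.__file__)"
```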
We provide some off-the-shelf motion data in the `source/legged_lab/legged_lab/data/MotionData` folder for testing. If you want to add more motion data, follow the steps below.
- Retarget human motion data to the robot model; we recommend using GMR for this.
- Put the retargeted motion data in the `temp/gmr_data` folder.
- Use a helper script to convert the motion data to the required format:

  ```bash
  python scripts/tools/retarget/dataset_retarget.py \
      --robot g1 \
      --input_dir temp/gmr_data/ \
      --output_dir temp/lab_data/ \
      --config_file scripts/tools/retarget/config/g1_29dof.yaml \
      --loop clamp
  ```
- Move the converted data from `temp/lab_data` to `source/legged_lab/legged_lab/data/MotionData` (see the snippet below), and set the `MotionDataCfg` in the config file, e.g., `source/legged_lab/legged_lab/tasks/locomotion/amp/config/g1/g1_amp_env_cfg.py`.
Please refer to the comments in the script for more details about the arguments, and to `scripts/tools/retarget/gmr_to_lab.py` for the data format used in this repository.
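One way to do the move is a plain shell command (a sketch; adjust the paths if your layout differs):

```bash
# move every converted clip into the package's motion data folder
mv temp/lab_data/* source/legged_lab/legged_lab/data/MotionData/
```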
Train
To train the DeepMimic algorithm, you can run the following command:
```bash
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-Deepmimic-G1-v0 --headless --max_iterations 50000
```

The `max_iterations` value can be adjusted based on your needs. For more details about the arguments, run `python scripts/rsl_rl/train.py -h`.
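To monitor training progress, you can point TensorBoard at the log directory (assuming the default rsl_rl logger, which writes TensorBoard event files under `logs/rsl_rl`):

```bash
# serve the training curves at http://localhost:6006
tensorboard --logdir logs/rsl_rl
```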
Play
You can play the trained model in headless mode and record a video:

```bash
# replace the checkpoint path with the path to your trained model
python scripts/rsl_rl/play.py --task LeggedLab-Isaac-Deepmimic-G1-v0 --headless --num_envs 64 --video --checkpoint logs/rsl_rl/experiment_name/run_name/model_xxx.pt
```

Train
To train the AMP algorithm, run one of the following commands:
```bash
# Atom01
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-Flat-Atom01-v0 --headless --num_envs 8192
# Unitree G1
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-G1-v0 --headless --max_iterations 50000
```

If you want to train on a non-default GPU, pass additional arguments:

```bash
# replace `x` with the gpu id you want to use
python scripts/rsl_rl/train.py --task LeggedLab-Isaac-AMP-G1-v0 --headless --max_iterations 50000 --device cuda:x agent.device=cuda:x
```

For more details about the arguments, run `python scripts/rsl_rl/train.py -h`.
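To see which GPU ids are available, standard NVIDIA tooling works (not specific to this repository):

```bash
# list the available GPUs and their ids
nvidia-smi --list-gpus
```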
Play
You can play the trained model in headless mode and record a video:

```bash
# replace the checkpoint path with the path to your trained model
python scripts/rsl_rl/play.py --task LeggedLab-Isaac-AMP-Flat-Atom01-v0 --headless --num_envs 64 --video --checkpoint logs/rsl_rl/experiment_name/run_name/model_xxx.pt
python scripts/rsl_rl/play.py --task LeggedLab-Isaac-AMP-G1-v0 --headless --num_envs 64 --video --checkpoint logs/rsl_rl/experiment_name/run_name/model_xxx.pt
```

The video will be saved in the `logs/rsl_rl/experiment_name/run_name/videos/play` directory.
To check sim-to-sim transfer using MuJoCo, you can run:

```bash
python scripts/atom01_long_base_link_lab_to_mujoco.py
```

TODO

- Add more legged robots, such as Unitree H1
- Self-contact penalty in AMP
- Asymmetric Actor-Critic in AMP
- Symmetric Reward
- Sim2sim in MuJoCo (Atom01 supported)
- Add support for image observations
- Walk in rough terrain with AMP
We would like to express our gratitude to the following open-source projects:
- legged_lab - The foundation of this project.
- Isaac Lab - The simulation framework this project is built on.
- RSL-RL - Reinforcement learning algorithms for legged robots.
- AMP_for_hardware - Inspiration for AMP implementation.
- GMR - Excellent motion retargeting library.
- MimicKit - Reference for imitation learning.