Template repository for building Embodied AI task environments using EmbodiChain.

Users can fork this repository to build their own task environments with EmbodiChain. After forking, replace the `embodichain_task_template` package with your own task package and update the project information in `pyproject.toml` accordingly.
```
embodichain_task_template/
├── README.md
├── pyproject.toml                  # Project configuration and dependencies
├── configs/                        # Task configuration files
│   └── demo/                       # One folder per task
│       └── dummy.json              # Gym config and action config for the task
└── embodichain_task_template/      # Task implementations
    ├── __init__.py
    └── tasks/
        ├── __init__.py             # Tasks are registered in __init__.py
        └── dummy_task.py           # Task implementation
```
Install EmbodiChain in development mode to use the latest features.
```bash
git clone https://github.com/DexForce/EmbodiChain.git
cd EmbodiChain
pip install -e . --extra-index-url http://pyp.open3dv.site:2345/simple/ --trusted-host pyp.open3dv.site
```

Then install your task package in development mode.
```bash
# Install in development mode
cd {your_task_package_name}
pip install -e .
```

To create your own task:

- Create a new task environment class in `tasks/{task_name}.py` that inherits from `EmbodiedEnv`.
- Create a configuration file in `configs/{task_name}/xxx.json` that defines the environment and robot setup.
- Implement the `create_demo_action_list()` method in your task environment to generate demonstration actions based on the task requirements.
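The steps above can be sketched as follows. This is a minimal, hypothetical illustration: the `EmbodiedEnv` class below is a local stand-in for EmbodiChain's real base class, and the action-dict keys (`name`, `ee_pose`, `gripper`) are invented for the example — consult `dummy_task.py` for the actual API.

```python
# Hypothetical sketch of a task environment. NOTE: this EmbodiedEnv is a
# local stand-in for EmbodiChain's real base class, and the action format
# below is invented for illustration -- see dummy_task.py for the real API.
class EmbodiedEnv:
    """Stand-in base class (the real one ships with EmbodiChain)."""

    def __init__(self, config=None):
        self.config = config or {}


class PickCubeTask(EmbodiedEnv):
    """Hypothetical task: move the end effector over a cube and grasp it."""

    def create_demo_action_list(self):
        # Each entry is one demonstration step; keys are illustrative only.
        return [
            {"name": "reach", "ee_pose": [0.4, 0.0, 0.3, 0, 0, 0, 1]},
            {"name": "grasp", "gripper": "close"},
            {"name": "lift", "ee_pose": [0.4, 0.0, 0.5, 0, 0, 0, 1]},
        ]


task = PickCubeTask({"robot": "dummy"})
actions = task.create_demo_action_list()
print([a["name"] for a in actions])  # → ['reach', 'grasp', 'lift']
```

In the real package, the class would live in `tasks/{task_name}.py`, be exported from `tasks/__init__.py`, and read its setup from the matching JSON file under `configs/`.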
If you are implementing a digital twin of a real-world task (e.g., a task in Table-30), it is recommended to follow the steps below to ensure the accuracy of the simulation environment:
- Use sim-ready assets to construct the simulation environment.
> [!NOTE]
> Currently, a sim-ready asset should have at least the following properties:
>
> - Accurate geometry and dimensions.
> - Correct coordinate system and origin point.
> - A reasonable vertex count (not so high that real-time simulation suffers, and not so low that important details are lost).
> - Properly defined visual materials and textures (if necessary for the task).
>
> We may use the USD format for sim-ready assets in the future, which can provide more standardized and comprehensive support for the above properties.
- Replay real demonstration data in the simulation environment to check the feasibility and accuracy of the environment setup. You can use the utilities provided in this repository to facilitate the replay process.
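As a rough sketch of what such a replay check looks like in principle (the environment and the recorded-action format here are stubs invented for illustration; prefer the repository's own replay utilities in practice):

```python
# Hypothetical replay loop: step recorded demonstration actions through the
# environment one by one. The env here is a stub; in practice you would
# launch the real environment via scripts/run_env.py and use the provided
# replay utilities.
class StubEnv:
    def __init__(self):
        self.steps = 0

    def step(self, action):
        self.steps += 1
        # A real env would return (obs, reward, terminated, truncated, info).
        return {}, 0.0, False, False, {"applied": action}


# Stand-in for demonstration data loaded from disk; the format is illustrative.
recorded_actions = [{"qpos": [0.0] * 7}, {"qpos": [0.1] * 7}]

env = StubEnv()
for action in recorded_actions:
    obs, reward, terminated, truncated, info = env.step(action)

print(env.steps)  # → 2
```

If the replayed trajectory diverges from the real-world recording (e.g. objects end up in different poses), that points to inaccuracies in the asset geometry, coordinate frames, or physics setup.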
```bash
# Launch the environment in data generation mode.
python scripts/run_env.py \
    --gym_config configs/demo/dummy.json \
    ...

# Launch the environment in preview mode.
python scripts/run_env.py \
    --gym_config configs/demo/dummy.json \
    --preview \
    ...
```

The following command-line arguments are commonly used when running the environment:

- `--enable_rt`: Enable the ray tracing rendering backend (recommended for most cases).
- `--headless`: Run the environment in headless mode (required on servers without a display).
- `--filter_dataset_saving`: Skip saving the dataset for episodes; used for debugging and testing.
Run the env in preview mode, and execute the following code snippet in the Python console to control the camera pose with keyboard input. Once you are satisfied with the camera pose, press `p` to print the pose in the console, then copy the printed pose into your config file.
```python
from embodichain.lab.sim.utility.keyboard_utils import run_keyboard_control_for_camera

run_keyboard_control_for_camera(cam_uid, vis_pose=True)
```

Run the env in preview mode, and execute the following code snippet in the Python console to control the light conditions with keyboard input. Once you are satisfied with the light conditions, press `p` to print the light configuration in the console, then copy the printed configuration into your config file.
```python
from embodichain.lab.sim.utility.keyboard_utils import run_keyboard_control_for_light

run_keyboard_control_for_light(light_uid, vis_config=True)
```

Run the env in preview mode, and execute the following code snippet in the Python console to control the robot with a gizmo. This is useful for checking the feasibility of a task. Press `p` to print the current robot state (joint positions, end-effector pose, etc.) in the console, which can be used as a reference for generating demonstration actions.
```python
from embodichain.lab.sim.utility.gizmo_utils import run_gizmo_robot_control_loop

robot = env.get_wrapper_attr("robot")
run_gizmo_robot_control_loop(robot, control_part=part)
```