Dear SEM Author,
In the original paper, the SEM-GD model achieved an 80% success rate on the single task stack_blocks_three when evaluated under the multi-task mode. I replicated this task using the same configuration as described in the paper, but obtained a 0% success rate. Could there be an issue with my configuration, or is it necessary to run 1600 episodes for each task?
My dataset: I used RoboTwin 2.0 to collect the same 16 tasks as in the original paper (100 episodes per task), 1600 episodes in total (about 240 GB compressed as HDF5), and then converted them from HDF5 to LMDB format (about 40 GB).
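For reference, the conversion I mean is essentially a packing loop like the sketch below. This is only an illustrative outline; the dataset field names and the `<task>/<episode>` key layout are placeholders, not the exact schema of the converter I used.

```python
import pickle

import h5py
import lmdb


def pack_episode(txn, task_name, episode_idx, h5_path):
    """Serialize one HDF5 episode into a single LMDB record."""
    with h5py.File(h5_path, "r") as f:
        # Dataset names below are placeholders for whatever the episode file stores.
        obs = f["observations"][:]
        actions = f["actions"][:]
    key = f"{task_name}/{episode_idx:06d}".encode("utf-8")
    txn.put(key, pickle.dumps({"observations": obs, "actions": actions}))


def convert(h5_files, task_name, lmdb_path, map_size=1 << 40):
    """Pack a list of per-episode HDF5 files into one LMDB environment."""
    env = lmdb.open(lmdb_path, map_size=map_size)
    with env.begin(write=True) as txn:
        for i, h5_path in enumerate(h5_files):
            pack_episode(txn, task_name, i, h5_path)
    env.close()
```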
My config and parameters:

config = dict(
    hist_steps=1,
    pred_steps=64,
    chunk_size=8,
    embed_dims=256,
    with_depth=True,
    with_depth_loss=True,
    min_depth=0.01,
    max_depth=1.2,
    num_depth=128,
    batch_size=4,
    max_step=160000,  # multi-task step=16k, single-task step=8k
    step_log_freq=25,
    save_step_freq=10000,
    num_workers=8,
    lr=1e-4,
    # checkpoint="/home/zts/RoboOrchardLab_pre_challenge_cup_final/projects/sem/robotwin_0/ckpt/groundingdino_swint_ogc_mmdet-822d7e9d-rename.pth",
    checkpoint="/home/zts/SEM/RoboOrchardLab/dino-swin-pt/swin_base_patch4_window7_224_22k.pth",
    bert_checkpoint="/home/zts/RoboOrchardLab_pre_challenge_cup_final/projects/sem/robotwin_0/ckpt/bert-base-uncased",
    data_path="/media/zts/新加卷1/episode_1600_datasets_lmdb/lmdb",  # 1600 episodes across 16 tasks
    urdf="/home/zts/RoboOrchardLab_pre_challenge_cup_final/projects/sem/RoboTwin/assets/embodiments/aloha-agilex-1/urdf/arx5_description_isaac.urdf",
    multi_task=True,
    task_names=["stack_blocks_three"],
)
Other model configurations have not been modified!
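As a sanity check on the converted data (again only a sketch, assuming the illustrative `<task>/<episode>` key layout from the conversion outline above rather than the real SEM LMDB schema), the per-task episode counts can be verified like this:

```python
from collections import Counter

import lmdb


def count_records(lmdb_path):
    """Count LMDB records per task prefix, assuming '<task>/<episode>' keys."""
    env = lmdb.open(lmdb_path, readonly=True, lock=False)
    counts = Counter()
    with env.begin() as txn:
        for key, _ in txn.cursor():
            counts[key.decode("utf-8").split("/")[0]] += 1
    env.close()
    return counts


print(count_records("/media/zts/新加卷1/episode_1600_datasets_lmdb/lmdb"))
```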
Graphics card: RTX 4060 Ti, with CUDA 12.1 and torch 2.4.1, consistent with RoboTwin.
Looking forward to your reply, thank you!