Paper collection on building and evaluating language model agents via executable language grounding
Updated Apr 29, 2024
[IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
Official implementation of Matcha-agent (https://arxiv.org/abs/2303.08268)
A hierarchical Large Language Model (LLM) framework for real-time multi-robot task allocation and target tracking under unknown hazards.
Official code repository for CurricuLLM: Automatic Task Curricula Design for Learning Complex Robot Skills using Large Language Models
A repository containing a demo for a dancing robot.
Embodied validation suite for the Spiral-Time Governor (STG), demonstrating deterministic safety filtering for LLM-controlled quadruped locomotion in MuJoCo. Includes synthetic and physics-based experiments, reproducible pipelines, and figure generation for the STG v2.2 paper.