OpenDCAI/OpenWorldLib


Welcome to our open-source world model project!



English | δΈ­ζ–‡

Extension repos: [3D generation] | [VLA] | [simulator]


Integrated methods include: Matrix-Game-2, Hunyuan-GameCraft, Hunyuan-Worldplay, Lingbot-World, YUME-1.5, FlashWorld, Wan-2.2-IT2V, WoW, Cosmos-Predict-2.5, Pi3, Libero, and Ai2-THOR.

We define a world model as a model or framework centered on perception, equipped with interaction and long-term memory capabilities, for understanding and predicting the complex world. Accordingly, 🎓 Multimodal Understanding, 🤖 Visual Action Prediction, and 🖼️ Visual Generation are all sub-tasks that a world model needs to accomplish.

We warmly welcome researchers to share their views on this framework or thoughts on world models in the Issues section. We also hope that you can submit valuable world-model-related methods to our framework via Pull Requests, or document and submit them to [awesome_world_models]. Feel free to give our repo a star 🌟 to follow the latest progress of OpenWorldLib!

Important Docs

The following three documents are essential to this project (click to navigate):

  • docs/planning.md: This document tracks the short-term optimization goals and future development plans for OpenWorldLib.
  • docs/awesome_world_models.md: This document records cutting-edge research, related surveys, and open-source projects on world models.
  • docs/installation.md: This document provides installation instructions for different methods in OpenWorldLib.

Features

Project Goals

The main goals of OpenWorldLib include:

  • Establishing a unified and standardized world model framework to make the invocation of existing world-model-related code more consistent and well-structured;
  • Integrating open-source world model research outcomes and systematically curating related papers for researchers' reference and use.

Supported Tasks

OpenWorldLib covers the following research directions related to world models. We sincerely thank all the excellent methods included in this framework for their significant contributions to world modeling:

| Task Category | Sub-task | Representative Methods/Models |
| --- | --- | --- |
| Video Generation | Navigation Generation | lingbot, matrix-game, hunyuan-worldplay, genie3, etc. |
| Video Generation | Long Video Generation | sora-2, veo-3, wan, etc. |
| 3D Scene Generation | 3D Scene Generation | flash-world, vggt, etc. |
| Reasoning | VQA (Visual Question Answering) | spatialVLM, omnivinci, and other VLMs with world understanding |
| Reasoning | VLA (Vision-Language-Action) | pi-0, pi-0.5, giga-brain, etc. |

Getting Started

Installation

First, create a conda environment:

conda create -n "openworldlib" python=3.10 -y
conda activate "openworldlib"

Then install using the provided script:

cd OpenWorldLib
bash scripts/setup/default_install.sh

Some methods have special installation requirements. All installation scripts are located in ./scripts/setup.

📖 For the full installation guide, please refer to docs/installation.md

Quickstart

After installing the base environment, you can test matrix-game-2 generation and multi-turn interaction with the following commands:

cd OpenWorldLib
bash scripts/test_inference/test_nav_video_gen.sh matrix-game-2
bash scripts/test_stream/test_nav_video_gen.sh matrix-game-2

Scripts for other methods can be found under scripts/test_inference and scripts/test_stream. Currently, we are primarily using GPUs with 80GB and 141GB of VRAM for testing. In the future, we will test more models and provide updates in the ./docs/installation.md file.

Structure

To help developers and users better understand OpenWorldLib, we provide details about our codebase. The framework structure is as follows:

OpenWorldLib
β”œβ”€ assets
β”œβ”€ data                                # Test data
β”‚  β”œβ”€ benchmarks
β”‚  β”‚  └─ reasoning
β”‚  β”œβ”€ test_case
β”‚  └─ ...
β”œβ”€ docs                                # Documentation
β”œβ”€ examples                            # Benchmark examples
β”œβ”€ scripts                             # All key test scripts
β”œβ”€ src
β”‚  └─ openworldlib                        # Main source path
β”‚     β”œβ”€ base_models                   # Base models
β”‚     β”‚  β”œβ”€ diffusion_model
β”‚     β”‚  β”‚  β”œβ”€ image
β”‚     β”‚  β”‚  β”œβ”€ video
β”‚     β”‚  β”‚  └─ ...
β”‚     β”‚  β”œβ”€ llm_mllm_core
β”‚     β”‚  β”‚  β”œβ”€ llm
β”‚     β”‚  β”‚  β”œβ”€ mllm
β”‚     β”‚  β”‚  └─ ...
β”‚     β”‚  β”œβ”€ perception_core
β”‚     β”‚  β”‚  β”œβ”€ detection
β”‚     β”‚  β”‚  β”œβ”€ general_perception
β”‚     β”‚  β”‚  └─ ...
β”‚     β”‚  └─ three_dimensions
β”‚     β”‚     β”œβ”€ depth
β”‚     β”‚     β”œβ”€ general_3d
β”‚     β”‚     └─ ...
β”‚     β”œβ”€ memories                      # Memory module
β”‚     β”‚  β”œβ”€ reasoning
β”‚     β”‚  └─ visual_synthesis
β”‚     β”œβ”€ operators                     # Input & interaction signal processing
β”‚     β”œβ”€ pipelines                     # All runtime pipelines
β”‚     β”œβ”€ reasoning                     # Reasoning module
β”‚     β”‚  β”œβ”€ audio_reasoning
β”‚     β”‚  β”œβ”€ general_reasoning
β”‚     β”‚  └─ spatial_reasoning
β”‚     β”œβ”€ representations               # Representation module
β”‚     β”‚  β”œβ”€ point_clouds_generation
β”‚     β”‚  └─ simulation_environment
β”‚     └─ synthesis                     # Generation module
β”‚        β”œβ”€ audio_generation
β”‚        β”œβ”€ visual_generation
β”‚        └─ vla_generation
β”œβ”€ submodules                          # Auxiliary installs (e.g., diff-gaussian-raster)
β”œβ”€ test                                # All test code
β”œβ”€ test_stream                         # All interactive test code
└─ tools                               # Utilities
   β”œβ”€ installing
   └─ vibe_code

When using OpenWorldLib, users typically call the pipeline class directly, which handles weight loading, environment initialization, and other tasks. Users interact with the operator class, and leverage the synthesis, reasoning, and representation classes for generation. In multi-turn interactions, the memory class is used to maintain the running context.
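The interaction pattern above can be sketched with a few placeholder classes. This is a minimal illustration of the described flow (operator preprocesses the signal, the pipeline routes each turn, the memory maintains the running context); the class and method names here are illustrative assumptions, not the actual OpenWorldLib API.

```python
# Illustrative sketch of the pipeline/operator/memory interaction pattern.
# All names are placeholders, NOT the real OpenWorldLib API.

class Memory:
    """Maintains the running context across multi-turn interactions."""
    def __init__(self):
        self.turns = []

    def add(self, signal, output):
        self.turns.append((signal, output))

class Operator:
    """Normalizes raw input and interaction signals for the pipeline."""
    def process(self, raw_signal):
        return raw_signal.strip().lower()

class Pipeline:
    """Entry point: a real pipeline would also load weights and
    initialize the environment in __init__."""
    def __init__(self):
        self.operator = Operator()
        self.memory = Memory()

    def run_turn(self, raw_signal):
        signal = self.operator.process(raw_signal)
        # A real pipeline would dispatch to the synthesis / reasoning /
        # representation modules here; we fake a generation step.
        output = f"generated({signal}, context={len(self.memory.turns)})"
        self.memory.add(signal, output)
        return output

pipeline = Pipeline()
print(pipeline.run_turn("Move Forward"))  # first turn, empty context
print(pipeline.run_turn("Turn Left"))     # second turn sees one prior turn
```

The key design point this sketch mirrors is that users only call the pipeline; the operator, generation modules, and memory are composed inside it.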

Planning

  • We document the latest cutting-edge world model research in docs/awesome_world_models.md, and welcome contributions of valuable research.
  • We document our upcoming training and optimization plans in docs/planning.md.

For Developers

We welcome all developers to contribute and help improve OpenWorldLib as a unified world model repository. We recommend using Vibe Coding for quick code contributions; related prompts can be found under tools/vibe_code/prompts. You are also encouraged to add high-quality world model works to docs/planning.md and docs/awesome_world_models.md. We look forward to your contributions!

Related documents: [Project Overview] | [Development Guide] | [Task Assignment Details] | [Code Submission Guidelines]

Acknowledgment

This project is an extension of DataFlow and DataFlow-MM for world model tasks. We are also actively collaborating with RayOrch, Paper2Any, and other projects.

Citation

If OpenWorldLib has been helpful to you, please consider giving our repo a star 🌟 and citing the related papers:

@misc{dataflow-team-openworldlib,
  author = {{OpenDCAI}},
  title = {OpenWorldLib: A Unified Codebase for Advanced World Models},
  year = {2026},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/OpenDCAI/OpenWorldLib}}
}

@article{zeng2026research,
  title={Research on World Models Is Not Merely Injecting World Knowledge into Specific Tasks},
  author={Zeng, Bohan and Zhu, Kaixin and Hua, Daili and Li, Bozhou and Tong, Chengzhuo and Wang, Yuran and Huang, Xinyi and Dai, Yifan and Zhang, Zixiang and Yang, Yifan and others},
  journal={arXiv preprint arXiv:2602.01630},
  year={2026}
}

To further elaborate on our framework's design philosophy and our understanding of world models, we will release a technical report for OpenWorldLib. We hope our work is helpful to you!