docs: add PaddlePaddle quickstart and load example #415
Conversation
Summary of Changes

Hello @SigureMo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates PaddlePaddle into the TVM FFI ecosystem by updating documentation and providing a new quickstart example. The integration leverages the full TVM FFI support introduced in PaddlePaddle 3.3.0, which includes enhanced DLPack capabilities and exchange protocols. The changes demonstrate and document how TVM FFI can seamlessly interoperate with PaddlePaddle, expanding the range of supported ML frameworks.
Code Review
This pull request does a great job of adding documentation and an example for PaddlePaddle interoperability with TVM FFI. The changes are clear and the new example is helpful. My review includes several suggestions to improve consistency across the documentation by alphabetically sorting lists of frameworks, which will enhance readability and maintainability. I've also included a minor style suggestion for the new Python example file to adhere to best practices.
```diff
  - **Kernel libraries** - ship one wheel to support multiple frameworks, Python versions, and different languages. [[FlashInfer](https://docs.flashinfer.ai/)]
  - **Kernel DSLs** - reusable open ABI for JIT and AOT kernel exposure frameworks and runtimes. [[TileLang](https://tilelang.com/)][[cuteDSL](https://docs.nvidia.com/cutlass/latest/media/docs/pythonDSL/cute_dsl_general/compile_with_tvm_ffi.html)]
- - **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)][[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)]
+ - **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)][[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[PaddlePaddle](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-paddle)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)]
```
For better readability and maintainability, please consider sorting the list of frameworks alphabetically.
```diff
- - **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)][[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[PaddlePaddle](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-paddle)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)]
+ - **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)][[PaddlePaddle](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-paddle)][[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)]
```
```diff
  At runtime, TVM-FFI often needs to accept tensors from many sources:

- * Frameworks (e.g. PyTorch, JAX) via :py:meth:`array_api.array.__dlpack__`;
+ * Frameworks (e.g. PyTorch, JAX, PaddlePaddle) via :py:meth:`array_api.array.__dlpack__`;
```
For consistency and better readability, it would be great to sort the list of frameworks alphabetically.
```diff
- * Frameworks (e.g. PyTorch, JAX, PaddlePaddle) via :py:meth:`array_api.array.__dlpack__`;
+ * Frameworks (e.g. JAX, PaddlePaddle, PyTorch) via :py:meth:`array_api.array.__dlpack__`;
```
```diff
  On the Python side, :py:class:`tvm_ffi.Tensor` is a managed n-dimensional array that:

- * can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`PyTorch <ship-to-pytorch>`, :ref:`JAX <ship-to-jax>`, :ref:`NumPy/CuPy <ship-to-numpy>`;
+ * can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`PyTorch <ship-to-pytorch>`, :ref:`JAX <ship-to-jax>`, :ref:`PaddlePaddle <ship-to-paddle>`, :ref:`NumPy/CuPy <ship-to-numpy>`;
```
For consistency and better readability, please sort the list of frameworks alphabetically.
```diff
- * can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`PyTorch <ship-to-pytorch>`, :ref:`JAX <ship-to-jax>`, :ref:`PaddlePaddle <ship-to-paddle>`, :ref:`NumPy/CuPy <ship-to-numpy>`;
+ * can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`JAX <ship-to-jax>`, :ref:`NumPy/CuPy <ship-to-numpy>`, :ref:`PaddlePaddle <ship-to-paddle>`, :ref:`PyTorch <ship-to-pytorch>`;
```
```diff
  We can build a single shared library that works across:

- - **ML frameworks**, e.g. PyTorch, JAX, NumPy, CuPy, and others;
+ - **ML frameworks**, e.g. PyTorch, JAX, PaddlePaddle, NumPy, CuPy, and others;
```
```diff
  - Python: 3.9 or newer
  - Compiler: C++17-capable toolchain (GCC/Clang/MSVC)
- - Optional ML frameworks for testing: NumPy, PyTorch, JAX, CuPy
+ - Optional ML frameworks for testing: NumPy, PyTorch, JAX, CuPy, PaddlePaddle
```
```diff
  - NumPy, CuPy,
- - PyTorch, JAX, or
+ - PyTorch, JAX, PaddlePaddle, or
```
```diff
  - Python version/ABI. They are not compiled or linked with Python and depend only on TVM-FFI's stable C ABI;
  - Languages, including C++, Python, Rust, or any other language that can interop with the C ABI;
- - ML frameworks, such as PyTorch, JAX, NumPy, CuPy, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.
+ - ML frameworks, such as PyTorch, JAX, PaddlePaddle, NumPy, CuPy, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.
```
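The "stable C ABI" point in the first bullet above is what decouples the compiled module from the Python version: the library exports plain C symbols, and any language with C interop can call them. A minimal sketch of that idea, using libc via `ctypes` as a stand-in (no actual tvm_ffi-built library or its symbols are assumed here):

```python
import ctypes
import ctypes.util

# Load a shared library purely through its C ABI. libc stands in
# for a kernel library compiled against TVM-FFI's stable C ABI.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature, then call across the language boundary.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
print(libc.abs(-7))  # 7
```

The same mechanism is why a TVM-FFI module needs no recompilation per Python version: nothing in the shared library links against the Python runtime.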
For consistency and better readability, please sort the list of frameworks alphabetically.
```diff
- - ML frameworks, such as PyTorch, JAX, PaddlePaddle, NumPy, CuPy, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.
+ - ML frameworks, such as CuPy, JAX, NumPy, PaddlePaddle, PyTorch, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.
```
```diff
  **Cross-language.** TVM-FFI implements this calling convention in multiple languages (C, C++, Python, Rust, ...), enabling code written in one language - or generated by a DSL targeting the ABI - to be called from another language.

- **Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (NumPy, PyTorch, TensorFlow, CuPy, JAX, and others).
+ **Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (NumPy, PyTorch, TensorFlow, CuPy, JAX, PaddlePaddle, and others).
```
For consistency and better readability, please sort the list of frameworks alphabetically.
```diff
- **Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (NumPy, PyTorch, TensorFlow, CuPy, JAX, PaddlePaddle, and others).
+ **Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (CuPy, JAX, NumPy, PaddlePaddle, PyTorch, TensorFlow, and others).
```
```diff
  import tvm_ffi
  mod = tvm_ffi.load_module("build/add_one_cuda.so")

+ import paddle
```
According to PEP 8, imports should be grouped at the top of the file. While this is a small example, it's good practice to follow this convention. Please move `import paddle` to the top and group it with the other imports.
```diff
- import tvm_ffi
- mod = tvm_ffi.load_module("build/add_one_cuda.so")
- import paddle
+ import paddle
+ import tvm_ffi
+ mod = tvm_ffi.load_module("build/add_one_cuda.so")
```
Pull request overview
This PR adds documentation and example code demonstrating PaddlePaddle's TVM FFI interoperability introduced in PaddlePaddle 3.3.0, which includes full DLPack support and C DLPack exchange API.
Changes:
- Added new Python example `load_paddle.py` demonstrating PaddlePaddle tensor interop with TVM-FFI
- Updated quickstart documentation to include PaddlePaddle alongside PyTorch, JAX, NumPy, and CuPy
- Added PaddlePaddle to framework lists across all relevant documentation files
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| examples/quickstart/load/load_paddle.py | New example demonstrating how to load and use TVM-FFI compiled functions with PaddlePaddle tensors |
| examples/quickstart/run_all_cuda.sh | Added invocation of the new PaddlePaddle example to the CUDA test script |
| examples/quickstart/README.md | Added PaddlePaddle example to the list of framework loading examples |
| docs/get_started/quickstart.rst | Added new PaddlePaddle section with version requirement and example code |
| docs/get_started/stable_c_abi.rst | Added PaddlePaddle to the list of supported DLPack frameworks |
| docs/concepts/tensor.rst | Added PaddlePaddle references in tensor interop documentation |
| README.md | Added PaddlePaddle link to the frameworks and runtimes bullet point |
```diff
  mod = tvm_ffi.load_module("build/add_one_cuda.so")

  import paddle
+ x = paddle.tensor([1, 2, 3, 4, 5], dtype=paddle.float32, device="cuda")
```
**Copilot** (AI) · Jan 17, 2026
The PaddlePaddle API usage is incorrect. The function should be `paddle.to_tensor()` instead of `paddle.tensor()`, and the parameter for specifying the device should be `place` instead of `device`. The correct usage is: `x = paddle.to_tensor([1, 2, 3, 4, 5], dtype=paddle.float32, place="cuda")`
```diff
- x = paddle.tensor([1, 2, 3, 4, 5], dtype=paddle.float32, device="cuda")
+ x = paddle.to_tensor([1, 2, 3, 4, 5], dtype=paddle.float32, place="cuda")
```
That's exciting news :)
PaddlePaddle's recently released version 3.3.0 includes full TVM FFI support:
- C DLPack exchange API (`__c_dlpack_exchange_api__` for tvm_ffi 0.1.0-0.1.4 / `__dlpack_c_exchange_api__` for tvm_ffi 0.1.5+)
- Data type protocol (`__dlpack_data_type__`) and device exchange protocol (`__dlpack_device__`)

This PR documents and demonstrates PaddlePaddle's TVM FFI interoperability introduced in PaddlePaddle 3.3.0.
The new example runs as expected: