
Conversation

@SigureMo
Contributor

PaddlePaddle's recently released version 3.3.0 includes full TVM FFI support:

  • DLPack 1.2/1.3 support and many DLPack implementation fixes
  • C DLPack exchange API (__c_dlpack_exchange_api__ for tvm_ffi 0.1.0-0.1.4 / __dlpack_c_exchange_api__ for tvm_ffi 0.1.5+)
  • DataType exchange protocol (__dlpack_data_type__) and Device exchange protocol (__dlpack_device__)

This PR documents and demonstrates PaddlePaddle's TVM FFI interoperability introduced in PaddlePaddle 3.3.0.

The new example runs as expected:

[screenshot: the new example running successfully]

Copilot AI review requested due to automatic review settings January 17, 2026 11:39
@gemini-code-assist
Contributor

Summary of Changes

Hello @SigureMo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates PaddlePaddle into the TVM FFI ecosystem by updating documentation and providing a new quickstart example. This integration leverages the full TVM FFI support introduced in PaddlePaddle 3.3.0, which includes enhanced DLPack capabilities and exchange protocols. The changes aim to demonstrate and document how TVM FFI can seamlessly interoperate with PaddlePaddle, expanding the range of ML frameworks supported.

Highlights

  • PaddlePaddle TVM FFI Support: PaddlePaddle version 3.3.0 now includes full TVM FFI support, encompassing DLPack 1.2/1.3, C DLPack exchange APIs (__c_dlpack_exchange_api__ / __dlpack_c_exchange_api__), and DataType/Device exchange protocols.
  • Documentation Updates: The documentation has been updated across README.md, docs/concepts/tensor.rst, docs/get_started/quickstart.rst, and docs/get_started/stable_c_abi.rst to reflect PaddlePaddle's integration and support for TVM FFI and DLPack.
  • New Quickstart Example: A new quickstart example (examples/quickstart/load/load_paddle.py) has been added, demonstrating how to load and run a TVM FFI module with PaddlePaddle tensors, along with updates to examples/quickstart/README.md and examples/quickstart/run_all_cuda.sh to include this example.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request does a great job of adding documentation and an example for PaddlePaddle interoperability with TVM FFI. The changes are clear and the new example is helpful. My review includes several suggestions to improve consistency across the documentation by alphabetically sorting lists of frameworks, which will enhance readability and maintainability. I've also included a minor style suggestion for the new Python example file to adhere to best practices.

- **Kernel libraries** - ship one wheel to support multiple frameworks, Python versions, and different languages. [[FlashInfer](https://docs.flashinfer.ai/)]
- **Kernel DSLs** - reusable open ABI for JIT and AOT kernel exposure frameworks and runtimes. [[TileLang](https://tilelang.com/)][[cuteDSL](https://docs.nvidia.com/cutlass/latest/media/docs/pythonDSL/cute_dsl_general/compile_with_tvm_ffi.html)]
- **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)][[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)]
- **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)][[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[PaddlePaddle](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-paddle)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)]


Severity: medium

For better readability and maintainability, please consider sorting the list of frameworks alphabetically.

Suggested change
- **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)][[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[PaddlePaddle](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-paddle)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)]
- **Frameworks and runtimes** - a uniform extension point for ABI-compliant libraries and DSLs. [[JAX](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-jax)][[NumPy/CuPy](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-numpy)][[PaddlePaddle](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-paddle)][[PyTorch](https://tvm.apache.org/ffi/get_started/quickstart.html#ship-to-pytorch)]

At runtime, TVM-FFI often needs to accept tensors from many sources:

* Frameworks (e.g. PyTorch, JAX) via :py:meth:`array_api.array.__dlpack__`;
* Frameworks (e.g. PyTorch, JAX, PaddlePaddle) via :py:meth:`array_api.array.__dlpack__`;


Severity: medium

For consistency and better readability, it would be great to sort the list of frameworks alphabetically.

Suggested change
* Frameworks (e.g. PyTorch, JAX, PaddlePaddle) via :py:meth:`array_api.array.__dlpack__`;
* Frameworks (e.g. JAX, PaddlePaddle, PyTorch) via :py:meth:`array_api.array.__dlpack__`;
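The :py:meth:`array_api.array.__dlpack__` hook referenced in the hunk above can be exercised directly. A minimal sketch, using NumPy as a conforming producer (PaddlePaddle 3.3.0+ exposes the same method):

```python
# Producing a DLPack capsule via the __dlpack__ hook, with NumPy as a
# conforming producer (PaddlePaddle 3.3.0+ exposes the same method).
import numpy as np

x = np.zeros(4, dtype=np.float32)

# __dlpack__ returns an opaque PyCapsule wrapping a DLManagedTensor;
# consumers such as tvm_ffi.from_dlpack unwrap it internally.
capsule = x.__dlpack__()
print(type(capsule).__name__)  # PyCapsule
```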

On the Python side, :py:class:`tvm_ffi.Tensor` is a managed n-dimensional array that:

* can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`PyTorch <ship-to-pytorch>`, :ref:`JAX <ship-to-jax>`, :ref:`NumPy/CuPy <ship-to-numpy>`;
* can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`PyTorch <ship-to-pytorch>`, :ref:`JAX <ship-to-jax>`, :ref:`PaddlePaddle <ship-to-paddle>`, :ref:`NumPy/CuPy <ship-to-numpy>`;


Severity: medium

For consistency and better readability, please sort the list of frameworks alphabetically.

Suggested change
* can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`PyTorch <ship-to-pytorch>`, :ref:`JAX <ship-to-jax>`, :ref:`PaddlePaddle <ship-to-paddle>`, :ref:`NumPy/CuPy <ship-to-numpy>`;
* can be created via :py:func:`tvm_ffi.from_dlpack(ext_tensor, ...) <tvm_ffi.from_dlpack>` to import tensors from external frameworks, e.g., :ref:`JAX <ship-to-jax>`, :ref:`NumPy/CuPy <ship-to-numpy>`, :ref:`PaddlePaddle <ship-to-paddle>`, :ref:`PyTorch <ship-to-pytorch>`;

We can build a single shared library that works across:

- **ML frameworks**, e.g. PyTorch, JAX, NumPy, CuPy, and others;
- **ML frameworks**, e.g. PyTorch, JAX, PaddlePaddle, NumPy, CuPy, and others;


Severity: medium

For consistency and better readability, please sort the list of frameworks alphabetically.

Suggested change
- **ML frameworks**, e.g. PyTorch, JAX, PaddlePaddle, NumPy, CuPy, and others;
- **ML frameworks**, e.g. CuPy, JAX, NumPy, PaddlePaddle, PyTorch, and others;

- Python: 3.9 or newer
- Compiler: C++17-capable toolchain (GCC/Clang/MSVC)
- Optional ML frameworks for testing: NumPy, PyTorch, JAX, CuPy
- Optional ML frameworks for testing: NumPy, PyTorch, JAX, CuPy, PaddlePaddle


Severity: medium

For consistency and better readability, please sort the list of frameworks alphabetically.

Suggested change
- Optional ML frameworks for testing: NumPy, PyTorch, JAX, CuPy, PaddlePaddle
- Optional ML frameworks for testing: CuPy, JAX, NumPy, PaddlePaddle, PyTorch


- NumPy, CuPy,
- PyTorch, JAX, or
- PyTorch, JAX, PaddlePaddle, or


Severity: medium

For consistency and better readability, please sort the list of frameworks alphabetically.

Suggested change
- PyTorch, JAX, PaddlePaddle, or
- JAX, PaddlePaddle, PyTorch, or

- Python version/ABI. They are not compiled or linked with Python and depend only on TVM-FFI's stable C ABI;
- Languages, including C++, Python, Rust, or any other language that can interop with the C ABI;
- ML frameworks, such as PyTorch, JAX, NumPy, CuPy, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.
- ML frameworks, such as PyTorch, JAX, PaddlePaddle, NumPy, CuPy, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.


Severity: medium

For consistency and better readability, please sort the list of frameworks alphabetically.

Suggested change
- ML frameworks, such as PyTorch, JAX, PaddlePaddle, NumPy, CuPy, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.
- ML frameworks, such as CuPy, JAX, NumPy, PaddlePaddle, PyTorch, or any array library that implements the standard :external+data-api:doc:`DLPack protocol <design_topics/data_interchange>`.

**Cross-language.** TVM-FFI implements this calling convention in multiple languages (C, C++, Python, Rust, ...), enabling code written in one language - or generated by a DSL targeting the ABI - to be called from another language.

**Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (NumPy, PyTorch, TensorFlow, CuPy, JAX, and others).
**Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (NumPy, PyTorch, TensorFlow, CuPy, JAX, PaddlePaddle, and others).


Severity: medium

For consistency and better readability, please sort the list of frameworks alphabetically.

Suggested change
**Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (NumPy, PyTorch, TensorFlow, CuPy, JAX, PaddlePaddle, and others).
**Cross-framework.** TVM-FFI uses standard data structures such as :external+data-api:doc:`DLPack tensors <design_topics/data_interchange>` to represent arrays, so compiled functions can be used from any array framework that implements the DLPack protocol (CuPy, JAX, NumPy, PaddlePaddle, PyTorch, TensorFlow, and others).

Comment on lines +22 to +25
import tvm_ffi
mod = tvm_ffi.load_module("build/add_one_cuda.so")

import paddle


Severity: medium

According to PEP 8, imports should be grouped at the top of the file. While this is a small example, it's a good practice to follow this convention. Please move import paddle to the top and group it with other imports.

Suggested change
import tvm_ffi
mod = tvm_ffi.load_module("build/add_one_cuda.so")
import paddle
import paddle
import tvm_ffi
mod = tvm_ffi.load_module("build/add_one_cuda.so")


Copilot AI left a comment


Pull request overview

This PR adds documentation and example code demonstrating PaddlePaddle's TVM FFI interoperability introduced in PaddlePaddle 3.3.0, which includes full DLPack support and C DLPack exchange API.

Changes:

  • Added new Python example load_paddle.py demonstrating PaddlePaddle tensor interop with TVM-FFI
  • Updated quickstart documentation to include PaddlePaddle alongside PyTorch, JAX, NumPy, and CuPy
  • Added PaddlePaddle to framework lists across all relevant documentation files

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.

Summary per file:

  • examples/quickstart/load/load_paddle.py: New example demonstrating how to load and use TVM-FFI compiled functions with PaddlePaddle tensors
  • examples/quickstart/run_all_cuda.sh: Added invocation of the new PaddlePaddle example to the CUDA test script
  • examples/quickstart/README.md: Added the PaddlePaddle example to the list of framework loading examples
  • docs/get_started/quickstart.rst: Added a new PaddlePaddle section with version requirement and example code
  • docs/get_started/stable_c_abi.rst: Added PaddlePaddle to the list of supported DLPack frameworks
  • docs/concepts/tensor.rst: Added PaddlePaddle references in tensor interop documentation
  • README.md: Added the PaddlePaddle link to the frameworks and runtimes bullet point


mod = tvm_ffi.load_module("build/add_one_cuda.so")

import paddle
x = paddle.tensor([1, 2, 3, 4, 5], dtype=paddle.float32, device="cuda")

Copilot AI Jan 17, 2026


The PaddlePaddle API usage is incorrect. The function should be paddle.to_tensor() instead of paddle.tensor(), and the parameter for specifying the device should be place instead of device. The correct usage is: x = paddle.to_tensor([1, 2, 3, 4, 5], dtype=paddle.float32, place="cuda")

Suggested change
x = paddle.tensor([1, 2, 3, 4, 5], dtype=paddle.float32, device="cuda")
x = paddle.to_tensor([1, 2, 3, 4, 5], dtype=paddle.float32, place="cuda")
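Putting that fix together, here is a hedged sketch of the corrected input construction. paddle.to_tensor and its place argument are the documented PaddlePaddle APIs; everything else is an assumption of this sketch, which uses the CPU place so it runs anywhere (the PR's example targets CUDA, typically via paddle.CUDAPlace(0)) and falls back to NumPy when PaddlePaddle is not installed:

```python
# Hedged sketch of the corrected snippet from the review above.
# Assumptions: paddle.to_tensor/place are the standard PaddlePaddle APIs;
# CPUPlace is used so the sketch runs anywhere (the PR's example targets
# CUDA); NumPy is a fallback when PaddlePaddle is not installed.
try:
    import paddle
except ImportError:
    paddle = None

def make_input():
    """Return the example tensor [1, 2, 3, 4, 5] as float32."""
    if paddle is not None:
        # to_tensor (not tensor); the device argument is `place` (not `device`).
        return paddle.to_tensor([1, 2, 3, 4, 5], dtype="float32",
                                place=paddle.CPUPlace())
    import numpy as np
    return np.asarray([1, 2, 3, 4, 5], dtype=np.float32)

x = make_input()
```

The loaded module's entry point would then be invoked on x exactly as in the PR's quickstart example.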

@junrushao
Member

That's exciting news :)

@junrushao junrushao merged commit 2702f2b into apache:main Jan 18, 2026
13 of 14 checks passed
@SigureMo SigureMo deleted the docs/add-paddlepaddle-example branch January 18, 2026 04:11