Conversation

@msaroufim
Member

Install CUTLASS C++ headers to /opt/cutlass so users can #include <cutlass/...> and #include <cute/...> in their submissions. Also adds a test script to validate the setup before deploying, and documents how to add new C++ deps.

Copilot AI review requested due to automatic review settings (February 9, 2026 03:02)
@github-actions
github-actions bot commented Feb 9, 2026

Coverage report (per-file table omitted; touched file: src/libkernelbot/utils.py). Generated by python-coverage-comment-action.

Copilot AI left a comment (Contributor)

Pull request overview

Adds NVIDIA CUTLASS v4.3.5 header-only dependency to the Modal CUDA runner image and introduces a Modal test script to validate that CUDA compilation and PyTorch inline extensions can include CUTLASS/CuTe headers.

Changes:

  • Install CUTLASS headers into /opt/cutlass in the production runner image and set related env vars.
  • Add a Modal-based validation script that compiles a small nvcc program and a PyTorch load_inline extension.
  • Document a workflow for adding future C++ header dependencies to the runner image.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

File Description
src/runners/test_cutlass_image.py Adds a Modal script that builds a test CUDA image, installs CUTLASS headers, and validates include/compile behavior.
src/runners/modal_runner.py Installs CUTLASS v4.3.5 headers in the production image and documents how to add more C++ deps.


)
# CUTLASS C++ headers for #include <cutlass/...>
.run_commands(
"git clone --depth 1 --branch v4.3.5 https://github.com/NVIDIA/cutlass.git /opt/cutlass",

Copilot AI commented Feb 9, 2026

Cloning CUTLASS by tag at build time is a supply-chain risk because tags can be moved/retagged upstream. Consider pinning to an exact commit SHA (and ideally verifying it), e.g., by fetching the repo and checking out a known commit, to make the image build reproducible and tamper-resistant.

Suggested change
"git clone --depth 1 --branch v4.3.5 https://github.com/NVIDIA/cutlass.git /opt/cutlass",
# Pin CUTLASS to an exact commit SHA for reproducible, tamper-resistant builds.
# This SHA corresponds to the v4.3.5 release tag.
"git init /opt/cutlass && "
"cd /opt/cutlass && "
"git remote add origin https://github.com/NVIDIA/cutlass.git && "
"git fetch --depth 1 origin 9f2b20cdb57ee97dc0b1819b7358126d1c44cc97 && "
"git checkout --detach FETCH_HEAD",

Comment on lines 30 to 32
.run_commands(
"git clone --depth 1 --branch v4.3.5 https://github.com/NVIDIA/cutlass.git /opt/cutlass",
)

Copilot AI commented Feb 9, 2026


Same supply-chain concern as production: cloning by tag is not fully reproducible and can be retagged upstream. Pin to an immutable commit SHA (and/or validate the expected commit) so the pre-deploy validation script tests the exact dependency revision intended for production.

Comment on lines +76 to +79
.env({
"CUTLASS_PATH": "/opt/cutlass",
"CPLUS_INCLUDE_PATH": "/opt/cutlass/include:/opt/cutlass/tools/util/include",
})

Copilot AI commented Feb 9, 2026


Setting CPLUS_INCLUDE_PATH to a fixed value replaces any existing include paths that may already be configured in the base image (or by future dependencies). Prefer prepending/appending to the existing value (while handling the empty/unset case) to avoid breaking other C++ builds that rely on CPLUS_INCLUDE_PATH.
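One way to implement the prepend-with-fallback behavior is to compute the value in Python when defining the image, rather than relying on shell-style `${VAR:-}` expansion (which `.env` may not perform, since it sets literal strings). A minimal sketch; the helper name is illustrative, not from the PR:

```python
import os

def prepend_include_paths(new_paths, existing=None):
    """Prepend new include directories to an existing CPLUS_INCLUDE_PATH
    value, handling the unset/empty case without a trailing colon."""
    if existing is None:
        existing = os.environ.get("CPLUS_INCLUDE_PATH", "")
    parts = list(new_paths) + [p for p in existing.split(":") if p]
    return ":".join(parts)

# Example: build the value to pass to the image's .env({...}) call.
cutlass_includes = ["/opt/cutlass/include", "/opt/cutlass/tools/util/include"]
print(prepend_include_paths(cutlass_includes, existing=""))
# /opt/cutlass/include:/opt/cutlass/tools/util/include
```

Computing the string at image-definition time keeps any include paths already set in the base image, and avoids a dangling colon when the variable is unset.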

)
.env({
"CUTLASS_PATH": "/opt/cutlass",
"CPLUS_INCLUDE_PATH": "/opt/cutlass/include:/opt/cutlass/tools/util/include",

Copilot AI commented Feb 9, 2026


The test image also overwrites CPLUS_INCLUDE_PATH. To make this script robust (and to mirror best practice for the production image), consider prepending/appending to any existing value so this test doesn’t accidentally mask include-path behavior changes from other packages.

Suggested change
"CPLUS_INCLUDE_PATH": "/opt/cutlass/include:/opt/cutlass/tools/util/include",
"CPLUS_INCLUDE_PATH": "/opt/cutlass/include:/opt/cutlass/tools/util/include:${CPLUS_INCLUDE_PATH:-}",


app = modal.App("test-cutlass-image")

cuda_version = "13.1.0"

Copilot AI commented Feb 9, 2026


The base image is CUDA 13.1 (nvidia/cuda:13.1.0-...), but PyTorch below is installed from the cu130 index (CUDA 13.0). This mismatch can cause runtime/library or extension build/link issues (especially for load_inline). Align the CUDA toolkit version in the base image with the PyTorch wheel CUDA version (or vice versa) to ensure the test is validating the same CUDA stack users will run.

Suggested change
cuda_version = "13.1.0"
cuda_version = "13.0.0"
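A cheap guard against this kind of drift is to compare the base-image CUDA tag against the wheel-index suffix at script load time. A sketch (the helper name and version strings are illustrative):

```python
def cuda_wheel_index_matches(image_tag, wheel_index):
    """Check that a CUDA base-image version (e.g. '13.1.0') and a PyTorch
    wheel index suffix (e.g. 'cu130') refer to the same CUDA major.minor."""
    major, minor = image_tag.split(".")[:2]
    return wheel_index == f"cu{major}{minor}"

print(cuda_wheel_index_matches("13.1.0", "cu130"))  # False: image is 13.1, wheel is 13.0
print(cuda_wheel_index_matches("13.0.0", "cu130"))  # True
```

Asserting this once near the top of the test script turns a silent toolkit/wheel mismatch into an immediate, readable failure.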

Comment on lines 25 to 28
.uv_pip_install(
"torch==2.9.1",
index_url="https://download.pytorch.org/whl/cu130",
)

Copilot AI commented Feb 9, 2026


This uses the cu130 wheel index while the image is CUDA 13.1 (see above). If the intention is to validate CUTLASS on the same CUDA version as the toolchain in the image, update either the base image CUDA tag or the PyTorch wheel index/version so they match.

Copilot uses AI. Check for mistakes.
Comment on lines 88 to 96
compile_cmd = [
"nvcc",
cu_file,
"-o", binary,
"-I", f"{cutlass_path}/include",
"-I", f"{cutlass_path}/tools/util/include",
"-std=c++17",
"-arch=sm_75",
]

Copilot AI commented Feb 9, 2026


The script comment + image env setup suggests CPLUS_INCLUDE_PATH should make CUTLASS headers discoverable without -I flags, but the nvcc compilation test always passes explicit -I include paths. If you want this test to validate the env-based include behavior, add a compilation attempt that omits -I (or make that the primary path), so failures in CPLUS_INCLUDE_PATH wiring are caught.
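One way to cover both paths is to make the explicit `-I` flags optional when building the command, and run the env-only variant as the primary test so a broken CPLUS_INCLUDE_PATH fails loudly. A sketch (the helper and file names are illustrative, not from the PR):

```python
def build_nvcc_cmd(cu_file, binary, include_dirs=None, arch="sm_75"):
    """Build an nvcc command line; when include_dirs is None, rely solely
    on CPLUS_INCLUDE_PATH from the image environment."""
    cmd = ["nvcc", cu_file, "-o", binary, "-std=c++17", f"-arch={arch}"]
    for d in include_dirs or []:
        cmd += ["-I", d]
    return cmd

# Primary test: no -I flags, so the CPLUS_INCLUDE_PATH wiring is exercised.
env_only = build_nvcc_cmd("test_cutlass.cu", "test_bin")
# Diagnostic fallback: explicit paths separate env problems from header problems.
explicit = build_nvcc_cmd(
    "test_cutlass.cu", "test_bin",
    include_dirs=["/opt/cutlass/include", "/opt/cutlass/tools/util/include"],
)
assert "-I" not in env_only
```

Running `env_only` first (e.g. via `subprocess.run`) and falling back to `explicit` only for diagnostics distinguishes "env vars not wired" from "headers missing".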

@msaroufim msaroufim merged commit 3d766cf into main Feb 9, 2026
5 checks passed