Conversation

Collaborator

@nzmora-nvidia nzmora-nvidia commented Dec 26, 2025

Add a transform to replace torch.ops.auto_deploy.torch_quant_nvfp4_moe
with the optimized torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused.

The fused op currently generates wrong results when the number of rows in the MoE FC1 weights is not divisible by 128,
so torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused is not set as the default FP4 MoE implementation (i.e., the transform is disabled by default).
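
As a rough illustration of the guard this limitation implies (the constant name comes from custom_ops/quant.py; where such a check would live is an assumption, not part of this PR):

```python
TRTLLM_NVFP4_ROW_SIZE = 128  # row alignment the fused NVFP4 kernel path expects


def can_use_fused_nvfp4_moe(fc1_num_rows: int) -> bool:
    # Hypothetical helper: fall back to the unfused op when FC1 rows are not 128-aligned,
    # since the fused op is known to be incorrect in that case.
    return fc1_num_rows % TRTLLM_NVFP4_ROW_SIZE == 0
```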

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for NVFP4 Mixture of Experts quantization with FP4 weight compression for improved model efficiency.
    • Introduced configuration option to enable NVFP4 MoE fusion transformations in deployment pipeline.
  • Improvements

    • Enhanced weight scaling and padding logic for NVFP4 quantization handling.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
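
For example, one might combine the flags documented above as follows (purely illustrative; the stage name is reused from the examples):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"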

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@nzmora-nvidia nzmora-nvidia requested a review from a team as a code owner December 26, 2025 00:42
@nzmora-nvidia nzmora-nvidia enabled auto-merge (squash) December 26, 2025 00:43
Contributor

coderabbitai bot commented Dec 26, 2025

📝 Walkthrough

This PR introduces NVFP4 (FP4 quantization) MoE fusion support, adding configuration entries, quantization constants, custom operator implementations with padding and quantization logic, transform-based weight stacking, and comprehensive test coverage for the new MoE fusion pathway.

Changes

Cohort / File(s) Change Summary
Configuration
tensorrt_llm/_torch/auto_deploy/config/default.yaml
Added fuse_nvfp4_moe transform entry with stage: post_load_fusion and enabled: false.
Quantization Constants
tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
Added two new NVFP4 geometry constants: TRTLLM_NVFP4_ROW_SIZE = 128 and TRTLLM_NVFP4_COLUMN_SIZE = 4.
MoE Custom Operator
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
Extended with FP4/FP8 quantization paths including validation, padding logic for inter_size and hidden_size alignment, FP4 block-scale tensor handling, weight dequantization workflows, and updated fused_moe kernel invocation with input_blockscale parameter.
Transform Pipeline
tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
Introduced _stack_nvfp4_moe_weights() helper function for NVFP4 weight materialization and stacking, _prepare_args_cutlass_format_nvfp4() for argument assembly, and new FuseNVFP4Moe transform class to register and apply NVFP4 MoE weight fusion during graph optimization.
Quantization Transform
tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py
Enhanced NVFP4LinearQuantizationFromConfig with _pad_m_n() helper to compute padded dimensions; updated default_scales() to allocate weight_scale as 2D tensor with padded shape; modified load_hook() to store swizzled scales in padded 2D form.
Test Suite
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
Significant refactor: introduced per-expert FP4 quantization workflows, new _get_test_data() and _quantize_weights() helpers, split test generation into structured phases (data generation → quantization → routing → fused output → reference computation), added per-expert dequantization in compute_ref_output(), and introduced gated/non-gated MLP branching with configuration-specific skip logic.

Sequence Diagram(s)

sequenceDiagram
    actor Input as Quantized Weights<br/>(NVFP4)
    participant QuantTransform as NVFP4Quantization<br/>Transform
    participant FuseMoE as FuseNVFP4Moe<br/>Transform
    participant WeightStack as Weight Stacking<br/>(_stack_nvfp4_moe_weights)
    participant Op as trtllm_moe_fused<br/>Custom Operator
    actor Output as Fused MoE<br/>Output

    Input->>QuantTransform: Apply quantization<br/>(compute padded weight_scale)
    QuantTransform->>QuantTransform: Allocate weight_scale<br/>(padded_m, padded_n)
    QuantTransform->>FuseMoE: Quantized weights + scales

    FuseMoE->>WeightStack: Locate NVFP4 MoE nodes
    WeightStack->>WeightStack: Stack per-expert weights<br/>& blockscales
    WeightStack->>WeightStack: Register as parameters<br/>& validate padding
    WeightStack->>Op: Invoke fused operator<br/>with stacked weights,<br/>blockscales, input_blockscale

    Op->>Op: FP4 dequantization<br/>(per-expert)
    Op->>Op: Apply gating function
    Op->>Op: Fused MoE computation
    Op->>Output: Return fused output

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested reviewers

  • suyoggupta
  • QiJune
  • liji-nv

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 22.73% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description check ⚠️ Warning The PR description provides only a brief summary without addressing required template sections like Description, Test Coverage, and incomplete PR Checklist items. Complete the Description section explaining the issue and solution. Fill in the Test Coverage section listing relevant tests. Ensure all PR Checklist items are properly addressed or marked.
✅ Passed checks (1 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly summarizes the main change: adding an auto-deploy transform for cutlass FP4 MoE kernels, which directly aligns with the changeset.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (6)
tensorrt_llm/_torch/auto_deploy/config/default.yaml (1)

129-131: Consider adding backend field for consistency with other MoE transforms.

The fuse_fp8_moe transform (line 125-128) specifies backend: trtllm, but fuse_nvfp4_moe omits this field. While the NVFP4 path currently only supports TRT-LLM, adding the field would maintain consistency and future-proof the configuration.

🔎 Suggested addition
   fuse_nvfp4_moe:
     stage: post_load_fusion
     enabled: false
+    backend: trtllm
tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py (1)

339-346: Remove commented-out code artifacts.

Lines 342-344 contain commented-out alternative implementations that appear to be leftover from development. These should be removed before merging to keep the codebase clean.

🔎 Suggested cleanup
         return {
             "input_scale": torch.tensor(1.0 / 6.0),
             "weight_scale": torch.empty((padded_m, padded_n), dtype=torch.uint8),
-            # "weight_scale": torch.empty((m, n), dtype=torch.uint8),
-            # "weight_scale": torch.empty(padded_m * padded_n, dtype=torch.float8_e4m3fn),
-            # "weight_scale": torch.empty(padded_m * padded_n, dtype=torch.uint8),
             "alpha": torch.tensor(1.0 / 6.0),
         }
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py (1)

622-625: Remove unused is_gated_mlp parameter.

Static analysis correctly identifies that is_gated_mlp is passed to _quantize_weights but never used within the function body.

🔎 Suggested fix
-    def _quantize_weights(fc1_weights, fc2_weights, is_gated_mlp):
+    def _quantize_weights(fc1_weights, fc2_weights):
         def round_up(x, y):
             return math.ceil(x / y) * y

And update the call site at line 757:

-    ) = _quantize_weights(fc1_expert_weights, fc2_expert_weights, is_gated_mlp)
+    ) = _quantize_weights(fc1_expert_weights, fc2_expert_weights)
tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py (3)

1577-1619: Consider extracting shared helpers to reduce duplication.

The helper functions _register_parameter, get_param_or_buffer, and _stack are nearly identical to those in _stack_fp8_moe_weights (lines 1276-1314). While acceptable for an initial implementation, consider extracting these to module-level helpers in a future refactor to reduce maintenance burden.
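
A possible shape for such a shared helper, purely as a sketch (the existing helpers' exact signatures are not shown in this diff):

```python
from typing import List

import torch


def _stack(tensors: List[torch.Tensor], dim: int = 0) -> torch.Tensor:
    # Stack per-expert tensors into a single [num_experts, ...] parameter buffer.
    return torch.stack(tensors, dim=dim).contiguous()
```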


1698-1701: Prefix unused unpacked variables with underscore.

Static analysis correctly identifies that w3_weight_scale and w3_alpha are unpacked but never used. This appears intentional since the NVFP4 path concatenates w1 and w3 weights for gated MLP, handling scales differently than FP8. Prefix with underscore to silence the warning and document the intentional omission.

🔎 Suggested fix
             w1_weight_scale,
             w2_weight_scale,
-            w3_weight_scale,
+            _w3_weight_scale,
             w1_alpha,
             w2_alpha,
-            w3_alpha,
+            _w3_alpha,
             is_gated_mlp,
         ) = _extract_op_args(node)

1724-1751: Remove commented-out code blocks.

Lines 1724-1745 and 1750 contain commented-out assertions and w3 handling code. If this code is not needed for the current implementation, it should be removed. If it's a placeholder for future work, consider adding a TODO comment explaining the intent.

🔎 Suggested cleanup
         w3_input_scale_stacked = (
             _stack(w3_input_scale, dim=0)
             if w3_input_scale
             else torch.empty(
                 0, device=w1_input_scale_stacked.device, dtype=w1_input_scale_stacked.dtype
             )
         )
-        # assert torch.all(w1_input_scale_stacked[0] == w1_input_scale_stacked), (
-        #     "All w1 scales should have the same value."
-        # )
-        # assert torch.all(w2_input_scale_stacked[0] == w2_input_scale_stacked), (
-        #     "All w2 scales should have the same value."
-        # )

         w1_weight_blockscale_fp8_stacked = _stack(w1_weight_scale, dim=0).to(torch.float8_e4m3fn)
         w2_weight_blockscale_fp8_stacked = _stack(w2_weight_scale, dim=0).to(torch.float8_e4m3fn)
-        # w3_weight_blockscale_fp8_stacked = (
-        #     (
-        #         _stack(w3_weight_scale, dim=0)
-        #         if w3_weight_scale
-        #         else torch.empty(
-        #             0,
-        #             device=w1_weight_blockscale_fp8_stacked.device,
-        #             dtype=w1_weight_blockscale_fp8_stacked.dtype,
-        #         )
-        #     )
-        #     .to(torch.float8_e4m3fn)
-        #     .contiguous()
-        # )

-        ###
         w1_alpha_stacked = _stack(w1_alpha, dim=0)
         w2_alpha_stacked = _stack(w2_alpha, dim=0)
-        # w3_alpha_stacked = _stack(w3_alpha, dim=0)
-        ###
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 13ffe52 and 78a1ca3.

📒 Files selected for processing (6)
  • tensorrt_llm/_torch/auto_deploy/config/default.yaml
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: some_file.py
Python classes should use PascalCase naming: class SomeClass
Python functions and methods should use snake_case naming: def my_awesome_function():
Python local variables should use snake_case naming: my_variable = ...
Python variable names that start with a number should be prefixed with 'k': k_99th_percentile = ...
Python global variables should use upper snake_case with prefix 'G': G_MY_GLOBAL = ...
Python constants should use upper snake_case naming: MY_CONSTANT = ...
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic

Files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
**/*.{cpp,h,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification

Files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
🧠 Learnings (21)
📓 Common learnings
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/config/default.yaml
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/config/default.yaml
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-09-29T15:14:28.503Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 8063
File: tensorrt_llm/lora_manager.py:1080-1112
Timestamp: 2025-09-29T15:14:28.503Z
Learning: In tensorrt_llm/lora_manager.py, when calculating part_sizes for attn_qkv fused LoRA modules, the sizes are correctly multiplied by tp_size because model_config.num_heads and model_config.num_kv_heads are already divided by tp_size (per-TP-rank values), so multiplication is needed to get the original full concatenated dimension size. The interleave_fused_lora_weights_for_tp function provides proper validation with asserts for total size and TP divisibility.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
📚 Learning: 2025-11-14T11:22:03.729Z
Learnt from: nzmora-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 9163
File: tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py:107-113
Timestamp: 2025-11-14T11:22:03.729Z
Learning: In TensorRT-LLM AutoDeploy custom ops, when adding hardware capability checks to select between kernel implementations (e.g., cuBLAS vs. CUDA kernel), use descriptive variable names that identify the specific GPU architectures or families being targeted (e.g., `is_blackwell_geforce_or_ada`) rather than generic names like `enable_cuda_core`. This makes it clear that the code is selecting an implementation path based on hardware capabilities, not enabling/disabling hardware features.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
📚 Learning: 2025-09-29T15:14:28.503Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 8063
File: tensorrt_llm/lora_manager.py:1080-1112
Timestamp: 2025-09-29T15:14:28.503Z
Learning: In tensorrt_llm/lora_manager.py, when calculating part_sizes for attn_qkv fused LoRA modules, the sizes are correctly multiplied by tp_size because model_config.num_heads and model_config.num_kv_heads are already divided by tp_size (per-TP-rank values), so multiplication is needed to get the original full concatenated dimension size. The interleave_fused_lora_weights_for_tp function provides proper validation.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
📚 Learning: 2025-10-20T17:07:18.745Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py:98-116
Timestamp: 2025-10-20T17:07:18.745Z
Learning: In NemotronH models (tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py), the gate (self.gate) returns topk_indices and topk_weights that are already in the correct shape to be passed directly to torch_ops.auto_deploy.torch_moe without needing to reshape them when hidden_states is flattened.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
📚 Learning: 2025-10-20T17:09:21.560Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/transform/library/rms_norm.py:180-182
Timestamp: 2025-10-20T17:09:21.560Z
Learning: In tensorrt_llm/_torch/auto_deploy/transform/library/rms_norm.py, the _gated_rmsnorm_replacement function does not need to cast the output of torch.ops.auto_deploy.torch_rmsnorm_gated back to the input dtype, even though the custom op returns fp32. The dtype handling is managed elsewhere or the fp32 output is acceptable for downstream consumers.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
📚 Learning: 2025-08-09T02:04:49.623Z
Learnt from: Fridah-nv
Repo: NVIDIA/TensorRT-LLM PR: 6760
File: tensorrt_llm/_torch/auto_deploy/models/quant_config_reader.py:81-98
Timestamp: 2025-08-09T02:04:49.623Z
Learning: In TensorRT-LLM's auto_deploy module, torch.dtype values in configuration dictionaries must be stored as string representations (e.g., "float16" instead of torch.float16) because OmegaConf.merge does not support torch.dtype types. These string representations are converted to actual torch.dtype objects in downstream code.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-12-19T06:31:54.973Z
Learnt from: nvyocox
Repo: NVIDIA/TensorRT-LLM PR: 10117
File: tensorrt_llm/_torch/auto_deploy/transform/library/fuse_rope_attention.py:336-339
Timestamp: 2025-12-19T06:31:54.973Z
Learning: In tensorrt_llm/_torch/auto_deploy/transform/library/fuse_rope_attention.py, the cast to torch.float16 for qkv_node before creating the AttentionPlugin is intentional and required because DriveOS LLM expects float16 dtype specifically. This should not be changed to preserve original dtype or made configurable for bfloat16 models in the DriveOS LLM ONNX export path.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-10-20T16:54:09.824Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/custom_ops/rms_norm.py:6-6
Timestamp: 2025-10-20T16:54:09.824Z
Learning: In tensorrt_llm/_torch/auto_deploy/custom_ops/rms_norm.py, the import `from ...modules.mamba.layernorm_gated import _layer_norm_fwd` is correct and should not be changed to modules.fla.layernorm_gated. The _layer_norm_fwd function exists in both modules/mamba/layernorm_gated.py and modules/fla/layernorm_gated.py, but the mamba version is the intended implementation for this use case.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
🧬 Code graph analysis (3)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py (2)
cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h (1)
  • ActivationType (28-41)
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py (1)
  • trtllm_quant_nvfp4_moe_fused (221-358)
tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py (1)
tensorrt_llm/_torch/auto_deploy/utils/node_utils.py (1)
  • extract_op_args (557-594)
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py (1)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (12)
  • _ (262-315)
  • _ (397-405)
  • _ (631-641)
  • _ (681-691)
  • _ (975-987)
  • _ (1165-1192)
  • _ (1225-1235)
  • _ (1315-1325)
  • _ (1425-1441)
  • _ (1528-1536)
  • _ (1605-1608)
  • _ (1641-1652)
🪛 Ruff (0.14.10)
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py

622-622: Unused function argument: is_gated_mlp

(ARG001)

tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py

1698-1698: Unpacked variable w3_weight_scale is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1701-1701: Unpacked variable w3_alpha is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1787-1787: Unused method argument: cm

(ARG002)


1788-1788: Unused method argument: factory

(ARG002)


1789-1789: Unused method argument: shared_config

(ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (12)
tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py (1)

14-16: LGTM - NVFP4 geometry constants are well-defined.

The new constants TRTLLM_NVFP4_ROW_SIZE (128) and TRTLLM_NVFP4_COLUMN_SIZE (4) correctly define the padding requirements for the FP4 GEMM plugin, complementing the existing TRTLLM_NVFP4_SCALING_VECTOR_SIZE.

tensorrt_llm/_torch/auto_deploy/transform/library/quantization.py (2)

323-329: LGTM - Padding helper is correctly implemented.

The _pad_m_n method properly computes padded dimensions using math.ceil to round up to the nearest multiple of the NVFP4 row (128) and column (4) sizes, aligning with the FP4 GEMM plugin requirements referenced in the docstring.
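
A minimal sketch of the rounding this describes, assuming the constants from custom_ops/quant.py (the actual method may differ in signature and placement):

```python
import math

TRTLLM_NVFP4_ROW_SIZE = 128
TRTLLM_NVFP4_COLUMN_SIZE = 4


def _pad_m_n(m: int, n: int) -> tuple:
    # Round each dimension up to the alignment the FP4 GEMM plugin expects.
    padded_m = math.ceil(m / TRTLLM_NVFP4_ROW_SIZE) * TRTLLM_NVFP4_ROW_SIZE
    padded_n = math.ceil(n / TRTLLM_NVFP4_COLUMN_SIZE) * TRTLLM_NVFP4_COLUMN_SIZE
    return padded_m, padded_n
```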


389-402: LGTM - Weight scale reshaping correctly handles NVFP4 swizzled layout.

The load_hook properly:

  1. Swizzles the weight_scale using block_scale_interleave
  2. Computes padded dimensions using _pad_m_n
  3. Reshapes the swizzled scale to the 2D padded shape

This aligns with the FP4 GEMM plugin's expectation for pre-swizzled block scale factors.

tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py (3)

506-511: LGTM - Dequantization correctly uses NVFP4 row size constant.

The convert_swizzled_to_linear function properly uses TRTLLM_NVFP4_ROW_SIZE (128) for tile size calculation, aligning with the swizzled layout requirements.


585-587: Known limitation documented with skip condition.

The skip condition for Relu2 with intermediate_size=1856 aligns with the PR description stating the fused implementation produces incorrect results when rows are not divisible by 128 (1856 % 128 = 64). Good practice to document known limitations in tests.
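
A hypothetical, self-contained version of such a skip (parameter names and values are illustrative; the real test covers additional configurations):

```python
import pytest


# 1856 % 128 == 64, the known-bad FC1 row count from the PR description.
@pytest.mark.parametrize("intermediate_size", [1024, 1856])
def test_fused_nvfp4_moe_intermediate_sizes(intermediate_size):
    if intermediate_size % 128 != 0:
        pytest.skip("fused NVFP4 MoE is incorrect when FC1 rows are not divisible by 128")
    # fused-vs-reference comparison omitted in this sketch
```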


786-792: Consider tightening tolerances or documenting why 20% is acceptable.

The test uses rtol=2e-1, atol=2e-1 (20% tolerance), which is quite relaxed. While FP4 quantization inherently loses precision, documenting the expected precision loss or comparing against other FP4 tests would help future maintainers understand if this tolerance is appropriate.

tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py (1)

1777-1800: LGTM - FuseNVFP4Moe transform correctly implements the BaseTransform interface.

The transform properly:

  1. Registers with TransformRegistry as "fuse_nvfp4_moe"
  2. Wraps the weight stacking in cuda_memory_tracker
  3. Returns appropriate TransformInfo with skipped/match counts

The unused cm, factory, and shared_config parameters are required by the BaseTransform._apply interface signature, so the static analysis warnings can be safely ignored.

tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py (5)

17-25: LGTM - Required imports for NVFP4 quantization path.

The imports of math and NVFP4 geometry constants are necessary for the padding calculations and validation logic added below.


265-280: Good validation of NVFP4 block scale dimensions.

The assertions correctly verify that block scale tensors have the expected 3D shape and that their dimensions are properly aligned to the NVFP4 row (128) and column (4) sizes. This catches configuration errors early.
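
An illustrative version of this kind of early validation (the helper and tensor names are assumptions, not the op's actual code):

```python
import torch

TRTLLM_NVFP4_ROW_SIZE = 128
TRTLLM_NVFP4_COLUMN_SIZE = 4


def check_blockscale_shape(blockscale: torch.Tensor) -> None:
    # Expect [num_experts, rows, cols] with 128/4-aligned trailing dimensions.
    assert blockscale.dim() == 3, "expected 3D block-scale tensor"
    _, rows, cols = blockscale.shape
    assert rows % TRTLLM_NVFP4_ROW_SIZE == 0, "rows must be a multiple of 128"
    assert cols % TRTLLM_NVFP4_COLUMN_SIZE == 0, "cols must be a multiple of 4"
```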


335-342: Correct quant_scales construction for NVFP4 MoE.

The quant_scales list correctly:

  1. Uses .view(torch.int32) for blockscale tensors (lines 337, 340), matching the cpp code expectation
  2. Includes all required scales: fc1_act_global_scale, fc1_weight_blockscale, fc1_alpha, fc2_act_global_scale, fc2_weight_blockscale, fc2_alpha

The comment referencing the cpp source (line 334) is helpful for maintainability.
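
For reference, a toy illustration of the ordering and int32 reinterpretation described above (shapes and the uint8 stand-in are assumptions; the real block scales are float8_e4m3fn tensors produced by the transform):

```python
import torch

num_experts = 8
fc1_act_global_scale = torch.tensor(1.0)
fc2_act_global_scale = torch.tensor(1.0)
fc1_alpha = torch.ones(num_experts)
fc2_alpha = torch.ones(num_experts)
# uint8 used here as a 1-byte stand-in for float8_e4m3fn block scales.
fc1_weight_blockscale = torch.zeros(num_experts, 256, 4, dtype=torch.uint8)
fc2_weight_blockscale = torch.zeros(num_experts, 128, 4, dtype=torch.uint8)

quant_scales = [
    fc1_act_global_scale,
    fc1_weight_blockscale.view(torch.int32),  # reinterpret packed scales as int32
    fc1_alpha,
    fc2_act_global_scale,
    fc2_weight_blockscale.view(torch.int32),
    fc2_alpha,
]
```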


344-358: LGTM - fused_moe call with correct NVFP4 parameters.

The call correctly:

  1. Uses x_q_fp4 (quantized input) instead of raw x
  2. Casts selected_experts to int and routing_weights to float32
  3. Uses .view(torch.long) for weight tensors (lines 348, 350) for 64-bit addressing
  4. Passes input_sf=input_blockscale for the FP4 quantization path
  5. Uses the converted act_fn (Swiglu for gated MLP)

304-316: The padding slice assignment is necessary, not redundant.

Line 315 requires the slice [:, :fc1_inter_size, :] because fc1_padded is allocated with shape [E, fc1_inter_size_padded, hidden_size_padded // FP4_PER_UINT8], which has a larger second dimension than the source tensor fc1_expert_weights_fp4 [E, fc1_inter_size, ...]. The slice correctly places the original data in the first fc1_inter_size rows while the remaining padded rows remain zero-initialized. The logic is correct.
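
A small, self-contained sketch of that zero-pad-then-slice pattern (sizes and names follow the comment above but are illustrative; FP4_PER_UINT8 reflects two packed FP4 values per byte):

```python
import torch

E, fc1_inter_size, hidden_size = 8, 1856, 2048
fc1_inter_size_padded, hidden_size_padded = 1920, 2048  # illustrative padded sizes
FP4_PER_UINT8 = 2  # two packed FP4 values per uint8 byte

# Packed FP4 source weights, as in the comment above.
fc1_expert_weights_fp4 = torch.randint(
    0, 256, (E, fc1_inter_size, hidden_size // FP4_PER_UINT8), dtype=torch.uint8
)

# Zero-filled padded buffer; the slice places the real rows first,
# and the remaining padded rows stay zero-initialized.
fc1_padded = torch.zeros(
    E, fc1_inter_size_padded, hidden_size_padded // FP4_PER_UINT8, dtype=torch.uint8
)
fc1_padded[:, :fc1_inter_size, :] = fc1_expert_weights_fp4
```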

@nzmora-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #29982 [ run ] triggered by Bot. Commit: 78a1ca3

@tensorrt-cicd
Collaborator

PR_Github #29982 [ run ] completed with state SUCCESS. Commit: 78a1ca3
/LLM/main/L0_MergeRequest_PR pipeline #23063 completed with status: 'SUCCESS'

Collaborator

@galagam galagam left a comment

Some general comments:

  1. In a previous PR you changed the terminology from w1, w2, w3 to fc1 expert weights and fc2 expert weights, but here we're still using the old terminology. I find it confusing. It warrants at least a comment.
  2. PR description states "Currently generates the wrong results when the number of rows in MoE FC1 weights is not divisible by 128"
    Better check for this condition and assert. Easier to figure out what is happening when an assertion fails instead of debugging accuracy issues.
    Even better if we also have a unit test for this scenario, skipped and referencing a bug ID.

@nzmora-nvidia
Collaborator Author

Some general comments:

  1. In a previous PR you changed the terminology from w1, w2, w3 to fc1 expert weights and fc2 expert weights, but here we're still using the old terminology. I find it confusing. It warrants at least a comment.
  2. PR description states "Currently generates the wrong results when the number of rows in MoE FC1 weights is not divisible by 128"
    Better check for this condition and assert. Easier to figure out what is happening when an assertion fails instead of debugging accuracy issues.
    Even better if we also have a unit test for this scenario, skipped and referencing a bug ID.
  1. I think you misunderstood the code - Let me know which code you're looking at. The operator is trtllm_quant_nvfp4_moe_fused and it's using fc1, fc2 (I created this interface in a previous PR and this PR changes it).
  2. Yeah, maybe the assert is a good idea. There is a UT with the scenario and it is skipped (tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py). There's no bug id because I don't consider the task complete without this configuration working. Nonetheless, if I merge this PR in its current state then I will open a bug.

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
@nzmora-nvidia nzmora-nvidia force-pushed the user/nzmora/nvfp4_transform_2_rebased branch from 78a1ca3 to 15618ee Compare December 29, 2025 17:45
@nzmora-nvidia
Collaborator Author

For (2): opened a bug and added the remarks and the assert - thanks!

@nzmora-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #30089 [ run ] triggered by Bot. Commit: 15618ee

@tensorrt-cicd
Collaborator

PR_Github #30089 [ run ] completed with state SUCCESS. Commit: 15618ee
/LLM/main/L0_MergeRequest_PR pipeline #23151 completed with status: 'SUCCESS'

@nzmora-nvidia nzmora-nvidia merged commit 966231d into NVIDIA:main Dec 29, 2025
7 checks passed