
Torchscript deprecation #8777

Open
aymuos15 wants to merge 9 commits into Project-MONAI:dev from aymuos15:torchscript_deprecation

Conversation

@aymuos15
Contributor

This is an initial attempt to address #8632.

I can follow up with the tutorials once/if this is merged.

…tils

Introduce save_exported_program() and load_exported_program() in a new
monai/data/export_utils.py module for the torch.export-based .pt2
serialisation format.  Mark the legacy save_net_with_metadata() and
load_net_with_metadata() as deprecated.

Also adds ExportMetadataKeys enum and inlines warn_deprecated() into
deprecate_utils to remove a circular-import hazard.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
Introduce convert_to_export() as the torch.export replacement for
convert_to_torchscript(), with a _recursive_to() helper for moving
ExportedProgram state dicts across devices.  Mark
convert_to_torchscript() as deprecated and update convert_to_trt()
to use the dynamo path gated behind pytorch_after().

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
Add export_checkpoint() as the torch.export replacement for the
deprecated ckpt_export() command.  Update load() to support .pt2 files
via load_exported_module().  Register the new command in __main__.py
and wire up the TRT save wrapper.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
Replace @torch.jit.script, torch.jit.is_scripting() guards,
@torch.jit.unused, and @torch.jit.export annotations with plain
Python across all network modules.  Convert torch.jit-unfriendly
patterns (ModuleList iteration, Python containers) back to idiomatic
code.

Also adds .pt2 loading support to MMARS, removes outdated TorchScript
notes from MetaTensor, and refactors _compute_acr_mask in the coil
sensitivity model.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
Replace torch.arange().tolist() with list(range()) in dice, focal,
and tversky losses to avoid graph breaks under torch.export tracing.
Move class_weight validation to __init__ in hausdorff_loss.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
Add new test suites for export_utils, convert_to_export, and the
export_checkpoint bundle CLI.  Migrate ~60 existing test files from
test_script (torch.jit.script) to test_export (torch.export.export),
removing TorchScript-specific skip decorators where no longer needed.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
Update the Model Bundle specification and modules documentation to
describe the new torch.export-based workflow and .pt2 file format.

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
@coderabbitai
Contributor

coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Replaces TorchScript-centric references and workflows with a torch.export exported-program (.pt2) workflow across docs, bundle tooling, utilities, networks, and tests. Adds monai.data.export_utils (save_exported_program, load_exported_program), convert_to_export, and export_checkpoint; re-exports export helpers in package inits. Deprecates TorchScript helpers (save_net_with_metadata/load_net_with_metadata) with warnings. Updates MMAR loading for .pt2/.ts, adjusts network code to be export-friendly (removing scripting guards and converting control flow), adds export verification paths, and bulk-updates tests to use export-based save/load utilities.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Description check — ⚠️ Warning. PR description is minimal; it mentions issue #8632 but lacks required sections like detailed description, types of changes, testing status, and documentation updates from the template. Resolution: complete the PR description template with explicit sections covering what changed, why, test results, and documentation updates, referencing the extensive scope of changes across 70+ files.
  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 49.19%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive. The title 'Torchscript deprecation' is vague and generic; it doesn't convey the specific, substantial changes in this PR (transition from TorchScript to torch.export APIs). Resolution: use a more specific title like 'Migrate from TorchScript to torch.export APIs' or 'Add torch.export support and deprecate TorchScript utilities' to better communicate the scope.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai (coderabbitai bot) left a comment


Actionable comments posted: 20

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
monai/networks/blocks/feature_pyramid_network.py (1)

206-218: ⚠️ Potential issue | 🟡 Minor

Update docstrings to explicitly mention torch.export compatibility.

Both get_result_from_inner_blocks and get_result_from_layer_blocks use O(n) iteration to avoid dynamic module indexing, which is required because torch.export does not support runtime-varying module indices. The docstrings mention "compatibility" but should explicitly state this is for torch.export support, making the design intent clear to future maintainers.

Also applies to: get_result_from_layer_blocks (lines 220–232)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/blocks/feature_pyramid_network.py` around lines 206 - 218,
Update the docstrings for get_result_from_inner_blocks and
get_result_from_layer_blocks to explicitly state that the O(n) iteration (rather
than dynamic module indexing) is used to maintain compatibility with
torch.export, because torch.export does not support runtime-varying module
indices; reference the function names in the docstrings and mention torch.export
and "runtime-varying module indices" so future maintainers understand the design
intent and exportability constraint.
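The O(n) pattern the finding refers to can be sketched in isolation (hypothetical standalone version; the real methods live on the FPN class): select the idx-th module by counting during iteration instead of `blocks[idx]`, because torch.export cannot trace module indexing with a runtime-varying index.

```python
import torch


def get_result_from_blocks(blocks: torch.nn.ModuleList, x: torch.Tensor, idx: int) -> torch.Tensor:
    # Equivalent to blocks[idx](x), but written as a counted loop so the
    # module selection is resolvable at trace time under torch.export.
    num_blocks = len(blocks)
    if idx < 0:
        idx += num_blocks
    out = x
    for i, module in enumerate(blocks):
        if i == idx:
            out = module(x)
    return out
```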
monai/networks/layers/factories.py (1)

260-265: ⚠️ Potential issue | 🟡 Minor

Scope the custom-autograd note to the Apex path.

This now reads as if every return path uses a custom autograd implementation, but this factory also returns plain nn.InstanceNorm{1,2,3}d classes. Please narrow the note to InstanceNorm3dNVFuser and state the fallback behavior explicitly.

📝 Suggested docstring tweak
-    This layer is based on a customized autograd function.
+    When available, `apex.normalization.InstanceNorm3dNVFuser` is based on a custom autograd function.
+    Otherwise, this factory falls back to the standard PyTorch `nn.InstanceNorm{1,2,3}d` layers.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/layers/factories.py` around lines 260 - 265, Update the
docstring so the note about a "custom autograd function" is explicitly tied to
the Apex implementation (InstanceNorm3dNVFuser) and not implied for all return
paths; mention that the factory will return the Apex class InstanceNorm3dNVFuser
when available (CUDA, non-Windows) and otherwise falls back to the plain PyTorch
classes nn.InstanceNorm1d / nn.InstanceNorm2d / nn.InstanceNorm3d as
appropriate, and document the return value and conditions briefly (e.g., when
apex is missing or unsupported platform, the fallback to nn.InstanceNormXd is
used).
tests/networks/blocks/test_selfattention.py (1)

224-243: ⚠️ Potential issue | 🔴 Critical

Bug: use_combined_linear is undefined.

Line 233 references use_combined_linear but the parameterized decorator on line 224 only provides include_fc. This will cause a NameError at runtime.

🐛 Proposed fix
-    @parameterized.expand([[True], [False]])
-    def test_no_extra_weights_if_no_fc(self, include_fc):
+    @parameterized.expand([[True, True], [True, False], [False, True], [False, False]])
+    def test_no_extra_weights_if_no_fc(self, include_fc, use_combined_linear):

Or if use_combined_linear should be constant:

     @parameterized.expand([[True], [False]])
     def test_no_extra_weights_if_no_fc(self, include_fc):
+        use_combined_linear = True  # or False, depending on intent
         input_param = {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/blocks/test_selfattention.py` around lines 224 - 243, The test
references an undefined variable use_combined_linear in
test_no_extra_weights_if_no_fc causing a NameError; fix by either extending the
parameterized.expand to include use_combined_linear as a second parameter (e.g.,
parameterize over [[True, True],[True, False],[False, True],[False, False]] and
update the test signature to def test_no_extra_weights_if_no_fc(self,
include_fc, use_combined_linear)) or, if use_combined_linear should be constant,
set a local value before creating net (e.g., use_combined_linear = False) so
SABlock(**input_param) receives a defined boolean and the assertions on
net.out_proj remain valid.
docs/source/modules.md (1)

240-264: ⚠️ Potential issue | 🟡 Minor

Bundle structure example still shows .ts file.

Line 240 correctly references the new .pt2 export format, but the bundle structure example at line 259 still lists *model.ts. Consider updating to *model.pt2 for consistency.

📝 Suggested fix
   ┣━ models
   ┃  ┣━ model.pt
-  ┃  ┣━ *model.ts
+  ┃  ┣━ *model.pt2
   ┃  ┗━ *model.onnx
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/source/modules.md` around lines 240 - 264, The bundle example still
references the legacy filename pattern "*model.ts"; update the example in the
"bundle structure example" block to use the new export format "*model.pt2" (so
replace "*model.ts" with "*model.pt2") to match the earlier mention of `.pt2`
and keep the examples consistent with the `torch.export` format.
🧹 Nitpick comments (38)
monai/apps/mmars/mmars.py (1)

235-241: Consider warning when map_location is ignored for .pt2 files.

For consistency with the pretrained and weights_only warnings, consider warning when map_location is provided but ignored.

♻️ Suggested addition
     if _model_file.name.endswith(".pt2"):
         if not pretrained:
             warnings.warn("Loading an ExportedProgram, 'pretrained' option ignored.", stacklevel=2)
         if weights_only:
             warnings.warn("Loading an ExportedProgram, 'weights_only' option ignored.", stacklevel=2)
+        if map_location is not None:
+            warnings.warn("Loading an ExportedProgram, 'map_location' option ignored.", stacklevel=2)
         return torch.export.load(str(_model_file))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/apps/mmars/mmars.py` around lines 235 - 241, The .pt2 branch in
mmars.py (the block checking _model_file.name.endswith(".pt2")) currently warns
about ignored pretrained and weights_only but not about ignored map_location;
update that branch to detect when the incoming parameter map_location is not
None (or not the default) and issue a warnings.warn similar to the others (e.g.,
"Loading an ExportedProgram, 'map_location' option ignored.", stacklevel=2)
before calling torch.export.load(str(_model_file)), referencing the existing
_model_file and torch.export.load usage so the new check sits alongside the
pretrained and weights_only warnings.
monai/networks/blocks/feature_pyramid_network.py (1)

209-209: Clarify "compatibility" in comments.

The updated comments say "for compatibility" but don't specify with what. Based on the PR context (torch.export migration), these should explicitly mention torch.export or the specific constraint.

📝 Suggested comment clarification
-        but module indexing with a variable is used for compatibility
+        but module indexing with a variable is used for torch.export compatibility

Apply to both lines 209 and 223.

Also applies to: 223-223

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/blocks/feature_pyramid_network.py` at line 209, Update the
vague "for compatibility" comments in FeaturePyramidNetwork where module
indexing with a variable is used (specifically around the lateral_convs and
output_convs access) to explicitly state that this form is required for
torch.export (and the related torch.jit/torch.fx export constraints) so static
exporters can trace the module list; e.g., mention "required for torch.export
migration / torch.jit export compatibility" and briefly note why (exporters
can't handle direct list/dict module attribute dispatch), applying the change to
both comment occurrences that reference compatibility for module indexing.
monai/apps/reconstruction/networks/nets/coil_sensitivity_model.py (1)

88-120: Add explicit edge-case tests for _compute_acr_mask

Please ensure tests cover odd/even sampling lengths and the “center not sampled” case, since this helper now drives masking behavior directly.

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/apps/reconstruction/networks/nets/coil_sensitivity_model.py` around
lines 88 - 120, Add unit tests for _compute_acr_mask covering odd and even
sampling lengths and the case where the center sample is not present: construct
input masks of shape (..., S, 1) with known contiguous fully-sampled center
regions (for both odd S and even S) and assert the returned tensor from
coil_sensitivity_model._compute_acr_mask has the correct boolean True span and
broadcastable shape; also add a test where the center index is False and assert
the ACR mask contains no True entries (or otherwise matches the expected
behavior), and include variations with different leading batch dimensions to
ensure correct reshaping/broadcasting.
monai/networks/nets/dynunet.py (1)

30-31: Clarify the recursion rationale.

After the TorchScript wording was removed, this reads like Python itself forbids iterating layer/output lists. Reframe it as a topology/readability choice instead.

✏️ Suggested wording
-    structure. The recursive class design avoids restrictions on looping over lists of layers and accumulating
-    lists of output tensors which must be indexed. The `heads` list is
+    structure. The recursive class design mirrors the skip-path hierarchy directly and avoids manual
+    indexing across layer and output lists. The `heads` list is
As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dynunet.py` around lines 30 - 31, The docstring text that
implies Python forbids iterating layer/output lists should be rewritten to state
that using a recursive class design (mentioning the "recursive class design"
phrase and the "heads" list) is a topology/readability choice to simplify
traversal and avoid manual indexing, not a language restriction; update the
paragraph around those phrases to the suggested wording. Also ensure every
public definition in this file (classes and functions in dynunet.py, e.g., the
DynUNet class and its key methods) has a Google-style docstring that documents
Args, Returns, and Raises where applicable, describing each parameter, return
value, and possible exceptions.
monai/utils/enums.py (1)

439-441: Consider adding a brief docstring for ExportMetadataKeys.

Alias lacks documentation. Users inspecting it won't see it's an alias or its purpose.

📝 Proposed docstring
 # ExportMetadataKeys shares the same members as JITMetadataKeys; alias to avoid duplication.
+"""
+Metadata keys for exported models (.pt2). Alias for :class:`JITMetadataKeys`.
+
+Members:
+    NAME: model name
+    TIMESTAMP: export timestamp
+    VERSION: model version
+    DESCRIPTION: model description
+"""
 ExportMetadataKeys = JITMetadataKeys
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/utils/enums.py` around lines 439 - 441, Add a short docstring above the
ExportMetadataKeys alias explaining that it is simply an alias of
JITMetadataKeys, why the alias exists (to expose the same members under a
different public name), and any intended usage or backward-compatibility note;
update the ExportMetadataKeys declaration (the alias of JITMetadataKeys) to
include that docstring so users inspecting ExportMetadataKeys see its purpose
and relation to JITMetadataKeys.
tests/networks/nets/test_vqvae.py (1)

128-145: Replace the commented-out test with an explicit skipped test.

This block is dead code right now. Prefer a real test method with @unittest.skip(...) so it stays visible in reports and can be re-enabled cleanly later.

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_vqvae.py` around lines 128 - 145, Replace the
commented-out block by adding a real test function named test_export that is
decorated with `@unittest.skip`("torch.export does not support activation
checkpoint") and contains the same body as the removed code: construct the VQVAE
instance (referencing class VQVAE and parameters shown), create test_data via
torch.randn, and call test_export_save(net, test_data) so the test is visible
and can be re-enabled later.
monai/networks/nets/dints.py (1)

40-50: Expand the new abstract-method docstrings to full Google style.

The new interface definitions are documented, but they don’t yet specify arguments, return type semantics, and raised exceptions.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 40 - 50, Update the docstrings for
the abstract interfaces CellInterface.forward and StemInterface.forward to full
Google-style: add an "Args" section documenting parameter names and types (e.g.,
x: torch.Tensor, weight: Optional[torch.Tensor] for CellInterface), a "Returns"
section describing the returned torch.Tensor and its shape/semantics, and a
"Raises" section stating that NotImplementedError is raised by the base
implementation; keep concise language and match the types used in the function
signatures so readers know expected inputs/outputs and error behavior.
tests/losses/image_dissimilarity/test_local_normalized_cross_correlation_loss.py (1)

157-160: Add a Google-style docstring to test_export.

The new definition should include a docstring per repo standards.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@tests/losses/image_dissimilarity/test_local_normalized_cross_correlation_loss.py`
around lines 157 - 160, Add a Google-style docstring to the test_export function
describing its purpose and parameters: state that test_export loads input_param
and input_data from TEST_CASES[0], constructs a
LocalNormalizedCrossCorrelationLoss with input_param, and calls test_export_save
with pred and target from input_data; include Args describing input_param
(dict), input_data (dict) and any implicit test behavior, a Returns section
noting None, and a Raises section if the test expects exceptions (or state None)
following the repo's Google-style docstring format.
tests/losses/test_focal_loss.py (1)

379-384: Add a Google-style docstring to test_export.

The modified test method currently has no docstring.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_focal_loss.py` around lines 379 - 384, The test_export
method lacks a Google-style docstring; add a concise docstring above def
test_export describing the purpose (verifies FocalLoss exports correctly for
both softmax and non-softmax modes), the inputs it uses (iterates use_softmax
over True/False and constructs test_input as a torch.ones tensor of shape
(2,2,8,8)), what it calls (test_export_save to export and validate), and note
there is no return value but an assertion may be raised on failure (e.g.,
AssertionError). Ensure the docstring follows Google style with an opening
summary line and an Args section listing use_softmax and test_input, and a
Raises section mentioning AssertionError.
tests/networks/nets/test_patch_gan_dicriminator.py (1)

127-142: Add Google-style docstrings to both test_export methods.

Both modified definitions are missing docstrings.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 159-176

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_patch_gan_dicriminator.py` around lines 127 - 142,
Add a Google-style docstring to the test_export function(s) that clearly
documents the purpose of the test, the parameters used (e.g., net:
PatchDiscriminator instance with given constructor args, i: input tensor created
via torch.rand), what the test asserts or returns (calls test_export_save to
validate export), and any exceptions that may be raised; update both occurrences
of the test_export definition so the docstring follows Google-style sections
(Args, Returns, Raises) and references the helper test_export_save and
PatchDiscriminator to make intent and inputs explicit.
tests/networks/nets/test_basic_unetplusplus.py (1)

102-106: Add a Google-style docstring to test_export.

The modified method has no docstring.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_basic_unetplusplus.py` around lines 102 - 106, Add a
Google-style docstring to the test_export function: describe the purpose
("Verify model export of BasicUNetPlusPlus with sample input"), document
parameters/variables (net: BasicUNetPlusPlus instance, test_data: torch.Tensor
sample input), state the return value (None) and note any exceptions that could
be raised during export (e.g., RuntimeError from torch or export helper). Update
the docstring at the top of test_export so it follows Google style sections
Args, Returns, and Raises and references the net and test_data identifiers.
tests/networks/nets/test_efficientnet.py (1)

373-378: Please add a docstring to test_export.

The modified definition is missing the required Google-style docstring.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_efficientnet.py` around lines 373 - 378, Add a
Google-style docstring to the test_export function describing the purpose of the
test, the local variables used (net: the EfficientNetBN instance configured for
export; test_data: the input tensor used for export validation), and what is
being validated (that test_export_save successfully exports the model); include
Args and Raises sections even if Raises indicates no exceptions expected, and a
short Returns section if applicable (typically None for tests). Ensure the
docstring appears immediately under def test_export(self): and follows Google
docstring formatting conventions.
tests/networks/nets/test_autoencoder.py (1)

84-87: Add a Google-style docstring to test_export.

The modified test definition is missing a docstring per repo rule.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_autoencoder.py` around lines 84 - 87, Add a
Google-style docstring to the test_export function describing the purpose of the
test and its parameters: document the "net" (AutoEncoder instance), "test_data"
(torch.Tensor of shape [2,1,32,32]), and that the function invokes
test_export_save to verify model export; include sections Args and Raises (if
any exceptions are expected) and a short Returns line (typically None for
tests). Locate the test_export definition in the file and insert the docstring
immediately under def test_export(self): using Google-style sections (Args,
Returns, Raises) and concise descriptions referencing net and test_data.
monai/data/__init__.py (1)

79-80: Consider alphabetical import ordering.

The .export_utils import should come before .torchscript_utils for alphabetical consistency.

Proposed fix
-from .torchscript_utils import load_net_with_metadata, save_net_with_metadata
 from .export_utils import load_exported_program, save_exported_program
+from .torchscript_utils import load_net_with_metadata, save_net_with_metadata
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/data/__init__.py` around lines 79 - 80, The import order is not
alphabetized; swap the two imports so .export_utils is imported before
.torchscript_utils. Specifically reorder the lines importing
load_exported_program and save_exported_program to precede the lines importing
load_net_with_metadata and save_net_with_metadata so the file imports
load_exported_program, save_exported_program, then load_net_with_metadata,
save_net_with_metadata.
tests/networks/nets/test_highresnet.py (1)

60-60: Unused variable expected_shape.

Per static analysis: prefix with underscore to indicate intentional discard.

♻️ Suggested fix
-        input_param, input_shape, expected_shape = TEST_CASE_1
+        input_param, input_shape, _ = TEST_CASE_1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_highresnet.py` at line 60, The variable
expected_shape from the unpack of TEST_CASE_1 in the
tests/networks/nets/test_highresnet.py is unused; update the unpack to use a
discard name (e.g., _expected_shape) so static analysis knows it is
intentional—locate the assignment "input_param, input_shape, expected_shape =
TEST_CASE_1" and change expected_shape to _expected_shape (also update any local
references if they exist, otherwise leave only the renamed unused variable).
tests/losses/test_tversky_loss.py (1)

191-194: Add a short docstring to test_export.

This updated test method should include a docstring for consistency with repo rules.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_tversky_loss.py` around lines 191 - 194, Add a short
Google-style docstring to the test_export test function explaining its purpose:
that it verifies TverskyLoss can be exported/saved via test_export_save using a
sample input; mention the input shape and that it asserts successful export (no
return/exceptions). Update the function named test_export in
tests/losses/test_tversky_loss.py (which instantiates TverskyLoss and calls
test_export_save) to include this concise docstring above the body.
tests/networks/nets/test_segresnet.py (2)

80-85: Add short docstrings to both test_export methods.

Both updated test definitions should include docstrings for guideline compliance.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 95-99

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_segresnet.py` around lines 80 - 85, The two test
functions named test_export (one at the top of the file and the other around
lines 95-99) lack docstrings; add concise Google-style docstrings to each
test_export that describe the purpose of the test, the parameters/fixtures used
(e.g., input_param, input_shape, test_data), the expected behavior
(expected_shape or successful export), and any exceptions that could be raised;
place the docstring immediately below the def line in triple quotes and keep it
short and focused so the tests remain guideline-compliant.

81-81: Drop unused expected_shape in tuple unpacking.

expected_shape is unpacked but unused in both export tests.

♻️ Proposed cleanup
-        input_param, input_shape, expected_shape = TEST_CASE_SEGRESNET[0]
+        input_param, input_shape, _ = TEST_CASE_SEGRESNET[0]
@@
-        input_param, input_shape, expected_shape = TEST_CASE_SEGRESNET_VAE[0]
+        input_param, input_shape, _ = TEST_CASE_SEGRESNET_VAE[0]

Also applies to: 96-96

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_segresnet.py` at line 81, The tuple unpacking
currently does "input_param, input_shape, expected_shape =
TEST_CASE_SEGRESNET[0]" but expected_shape is never used in the export tests;
change the unpacking to only extract the used values (e.g., "input_param,
input_shape = TEST_CASE_SEGRESNET[0]") in both export tests that reference
TEST_CASE_SEGRESNET (the occurrences around the current line and the other
similar occurrence noted at line 96) so the unused variable is removed.
tests/networks/nets/test_discriminator.py (1)

52-55: Add a short docstring to test_export.

Please add a docstring to this updated test definition.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_discriminator.py` around lines 52 - 55, Add a short
Google-style docstring to the test_export function that briefly describes the
test purpose (verifies that a Discriminator with in_shape=(1,64,64),
channels=(2,4), strides=(2,2), num_res_units=0 can be exported), documents the
parameters/fixtures used (the net variable and test_data tensor), and notes that
it asserts successful export via test_export_save; reference the Discriminator
construction and the call to test_export_save in the docstring and include any
expected exceptions or failures that may be raised during export per the project
docstring style.
tests/networks/nets/dints/test_dints_mixop.py (1)

78-80: Add a short docstring to test_export.

The test rename is fine; please also document the test method.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/dints/test_dints_mixop.py` around lines 78 - 80, Add a
short Google-style docstring to the test_export function describing the purpose
of the test, its parameters and expected behavior; place it immediately under
the def test_export(...) line and mention the parameters input_param, ops,
weight, input_shape and expected_shape, describe that it constructs a MixedOp
(using Cell.OPS3D) and calls test_export_save with a random tensor and weight,
and note the expected outcome (e.g., that export succeeds or the output shape
matches expected_shape). Ensure the docstring follows Google-style sections for
Args and Returns (if any) and keeps it concise.
tests/networks/nets/test_basic_unet.py (1)

95-98: Add a short docstring to test_export.

Please add a Google-style docstring to this updated test definition.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_basic_unet.py` around lines 95 - 98, Add a
Google-style docstring to the test_export function describing the purpose of the
test, the parameters used, and any expected behavior; specifically mention the
local variables net (BasicUNet instance) and test_data (a torch.Tensor of shape
[16, 1, 32, 32]), that the test calls test_export_save(net, test_data), and note
that no return value is expected and no exceptions should be raised under normal
conditions. Place the docstring immediately under the def test_export(self):
line in the test_export function.
tests/networks/blocks/test_subpixel_upsample.py (1)

78-82: Add a short docstring to test_export.

This modified test method should include a docstring to satisfy repo rules.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/blocks/test_subpixel_upsample.py` around lines 78 - 82, Add a
short Google-style docstring to the test_export function describing the purpose
of the test, inputs, and behavior; update the test_export definition (which
instantiates SubpixelUpsample, creates test_data via torch.randn, and calls
test_export_save) to include a one- or two-line docstring that states it
verifies exporting the SubpixelUpsample network using TEST_CASE_SUBPIXEL inputs
and test_export_save, following repository docstring conventions.
tests/losses/test_dice_loss.py (1)

220-223: Add a short docstring to test_export.

The renamed test method is clear, but it is still missing a docstring per repo guidance.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_dice_loss.py` around lines 220 - 223, Add a short
Google-style docstring to the test_export function describing what the test
verifies (that DiceLoss exports correctly and the exported module produces
expected outputs), document the input arguments used (test_input tensor) and
that no return value is expected; update the docstring inside the test_export
function (which constructs DiceLoss and calls test_export_save) to follow the
repo's Google-style sections (Args, Returns) even if Returns is None.
tests/networks/nets/test_net_adapter.py (1)

57-57: Mark the unused parameter explicitly.

Line 57 keeps expected_shape unused; rename it to _expected_shape to make intent clear.

Suggested fix
-    def test_export(self, input_param, input_shape, expected_shape):
+    def test_export(self, input_param, input_shape, _expected_shape):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_net_adapter.py` at line 57, The test_export function
has an unused parameter expected_shape; rename it to _expected_shape in the
test_export signature to mark it as intentionally unused (update the def
test_export(self, input_param, input_shape, expected_shape) to def
test_export(self, input_param, input_shape, _expected_shape) so linters and
readers know it's unused).
tests/apps/detection/networks/test_retinanet_detector.py (1)

92-93: Expand export coverage beyond one case.

Line 92 restricts export testing to TEST_CASE_1; add at least one 2D case so export regressions are caught across dimensions.

As per coding guidelines, "Ensure new or modified definitions will be covered by existing or new unit tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/apps/detection/networks/test_retinanet_detector.py` around lines 92 -
93, Update the test export cases to include at least one 2D scenario in addition
to TEST_CASE_1 by expanding the TEST_CASES_EXPORT list; locate the
TEST_CASES_EXPORT definition in
tests/apps/detection/networks/test_retinanet_detector.py and add a reference to
an existing or new 2D test case (e.g., TEST_CASE_2_2D or similar) so the export
tests run across both 2D and the existing case, ensuring the test suite covers
multiple dimensions.
tests/losses/test_generalized_wasserstein_dice_loss.py (1)

306-315: Add a docstring to test_export.

This new/renamed definition is missing a docstring per the Python guideline used for this review.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_generalized_wasserstein_dice_loss.py` around lines 306 -
315, Add a Google-style docstring to the test_export function describing the
purpose (verifies loss export with near-perfect predictions), the parameters
used (pred_very_good and target shapes/meaning), and any expectations (no return
value; should not raise exceptions). Locate the test_export definition in
tests/losses/test_generalized_wasserstein_dice_loss.py and add a short
triple-quoted docstring above the function that documents the inputs, behavior,
and that the test asserts successful export without errors.
tests/networks/test_convert_to_export.py (2)

69-70: Use more descriptive names than D/S.

Please rename these to clearer identifiers (for example, dynamic_dim and static_dim) to keep variable names informative.

As per coding guidelines, "Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/test_convert_to_export.py` around lines 69 - 70, Rename the
terse constants D and S to descriptive PEP8-compliant names (e.g., dynamic_dim
and static_dim) in the test where they are defined and update every use of
torch.export.Dim.DYNAMIC and torch.export.Dim.STATIC assigned to D/S to use the
new dynamic_dim/static_dim identifiers; ensure any assertions, function calls,
or helper usages that referenced D or S now reference dynamic_dim or static_dim
so the test remains consistent and readable (search for the symbols D and S in
test_convert_to_export.py and replace them).

24-24: Add a class docstring for TestConvertToExport.

The class is new and should include a docstring to satisfy the project’s Python definition documentation rule.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/test_convert_to_export.py` at line 24, Add a Google-style
class docstring to TestConvertToExport that briefly describes the test class
purpose and the behavior it validates; include an "Attributes" section if the
class defines any instance attributes and document them, and add any "Raises" or
"Returns" sections only if applicable to helper methods defined on the class.
Locate the class definition TestConvertToExport and insert a concise
triple-quoted docstring immediately under the class line following the project's
docstring conventions (summary line, blank line, then sections like
Attributes/Methods/Raises as needed).
tests/networks/nets/test_segresnet_ds.py (1)

138-142: Add a docstring to test_export.

The renamed test method is missing a docstring; adding one keeps this definition aligned with the repository guideline for Python definitions.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_segresnet_ds.py` around lines 138 - 142, Add a
Google-style docstring to the test_export function that briefly describes the
test purpose (exporting a SegResNetDS model), documents the parameters/fixtures
used (input_param, input_shape, test_data) and any expected behavior or
exceptions, and mentions that it uses SegResNetDS and test_export_save; update
the docstring in the test_export definition so it follows the repository's
docstring style guidelines.
tests/networks/nets/test_dynunet.py (1)

137-141: Add a docstring to test_export.

The renamed test definition should include a docstring to match the documented Python standards for this repository.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_dynunet.py` around lines 137 - 141, Add a
Google-style docstring to the test_export function describing the test purpose
and the key variables/steps: state that it constructs a DynUNet from
TEST_CASE_DYNUNET_2D, creates random test_data with torch.randn using
input_shape, and calls test_export_save(net, test_data); include an Args section
noting input_param and input_shape (where they come from), and a short Raises
section if test_export_save can raise exceptions, or a Returns section
indicating None (test). Ensure the docstring appears immediately under the def
test_export(self): header.
tests/networks/nets/test_resnet.py (1)

314-317: Add a docstring to test_export.

This renamed definition should include a docstring to align with the Python documentation standard used for this review.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_resnet.py` around lines 314 - 317, Add a
Google-style docstring to the test_export function: above the body of
test_export, insert a triple-quoted docstring that briefly describes the test
purpose and documents parameters model, input_param, input_shape, and
expected_shape (their types and roles), notes that the function constructs net =
model(**input_param), generates test_data via torch.randn(input_shape), calls
test_export_save(net, test_data), and returns None; also mention any expected
exceptions (if none, state that none are raised). Ensure the docstring follows
the Google-style sections (Args, Returns, Raises) and references the key symbols
test_export, model, input_param, input_shape, expected_shape, test_export_save,
net, and test_data.
tests/test_utils.py (1)

777-800: Docstring missing parameter documentation.

Per coding guidelines, docstrings should describe each parameter, return value, and raised exception.

Suggested docstring improvement
 def test_export_save(net, *inputs, dynamic_shapes=None, rtol=1e-4, atol=0.0):
     """
     Test the ability to save ``net`` as a ``torch.export`` ``.pt2`` object, reload it, and apply inference.
     The value ``inputs`` is forward-passed through the original and loaded copy of the network and their
     results returned. The forward pass for both is done without gradient accumulation.
+
+    Args:
+        net: The neural network module to export.
+        *inputs: Input tensors to forward-pass through the network.
+        dynamic_shapes: Optional dynamic shape specifications for export.
+        rtol: Relative tolerance for output verification.
+        atol: Absolute tolerance for output verification.
     """

As per coding guidelines: "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_utils.py` around lines 777 - 800, The test_export_save function's
docstring lacks Google-style parameter, return, and raises documentation; update
the docstring for test_export_save to include an "Args" section describing net
(torch.Module), inputs (variadic input tensors/structures), dynamic_shapes
(optional dict/flag), rtol (float), and atol (float), a "Returns" section noting
it returns None, and a "Raises" section documenting exceptions that may
propagate from convert_to_export or tempfile.TemporaryDirectory usage (e.g.,
RuntimeError or export/IO errors); ensure names match the function signature and
mention convert_to_export where relevant.
tests/networks/nets/test_varautoencoder.py (1)

118-123: Add a docstring to test_export.

The new test method is missing a docstring. Please add a short Google-style docstring for intent and expected behavior.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_varautoencoder.py` around lines 118 - 123, Add a
Google-style docstring to the test_export function describing its intent and
expected behavior: state that it constructs a VarAutoEncoder instance
(parameters: spatial_dims, in_shape, out_channels, latent_size, channels,
strides), generates random test_data, calls test_export_save to export and
validate the model, and asserts outputs are numerically close within rtol and
atol; include Args for net and test_data, and a Returns/Raises section
indicating the test will raise an assertion failure if exported outputs do not
match within tolerances.
tests/networks/nets/test_unet.py (1)

175-180: Add docstrings for the updated export tests.

Both modified test methods are missing docstrings.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 182-194

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_unet.py` around lines 175 - 180, Add Google-style
docstrings to the updated test methods (e.g., test_export and the other export
test that calls test_export_save) describing the purpose of the test,
parameters/variables used (like net: UNet instance and test_data tensor), any
return value (typically None for tests), and possible exceptions; locate the
functions by name (test_export, test_export_save, UNet) and insert concise
docstrings that explain inputs, expected behavior, and that no value is
returned, following the project's Google-style section headings (Args, Returns,
Raises).
tests/networks/blocks/test_fpn_block.py (1)

58-64: Add docstrings to the updated export tests.

Both modified test_export methods are missing docstrings.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 78-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/blocks/test_fpn_block.py` around lines 58 - 64, Add
Google-style docstrings to the updated test_export functions so each test has a
clear description and documents parameters/returns/exceptions: update the
test_export function (and the other test_export variant at the later lines) to
include a docstring that briefly describes the test purpose, documents the input
parameters (input_param, input_shape, expected_shape or the OrderedDict data),
and notes any expected outcome or raised exceptions; place the docstrings
directly above the test_export definitions that construct FeaturePyramidNetwork
and call test_export_save so the test intent and inputs are clear.
tests/networks/blocks/test_dynunet_block.py (1)

86-93: Add docstrings to the modified test methods.

Both new test_export methods are missing docstrings.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 103-110

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/blocks/test_dynunet_block.py` around lines 86 - 93, Add
Google-style docstrings to the test_export test method (and the other modified
test_export at lines 103-110) describing the purpose of the test, the inputs
used (e.g., TEST_CASE_RES_BASIC_BLOCK, input_param, input_shape, test_data),
what is asserted or expected (export/serialization via test_export_save), and
any exceptions that could be raised; update the docstrings for tests that
instantiate UnetResBlock and UnetBasicBlock and call test_export_save so they
include sections for Args (input_param, input_shape), Returns (None), and Raises
(if applicable) to conform to the project’s docstring guidelines.
tests/data/test_export_utils.py (1)

28-29: Document TestModule.forward.

forward is missing a docstring; add a concise Google-style docstring.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/data/test_export_utils.py` around lines 28 - 29, Add a concise
Google-style docstring to TestModule.forward that documents the parameter and
return value: describe the input parameter "x" (type and meaning) under Args and
the returned value under Returns (e.g., the input incremented by 10), and note
that it does not raise exceptions; update the forward method's docstring
accordingly to follow Google-style sections (Args, Returns).
tests/networks/nets/test_vit.py (1)

102-106: Add a docstring to test_export.

Please add a short Google-style docstring to the modified test method.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_vit.py` around lines 102 - 106, The test_export
function lacks a Google-style docstring; add a short docstring above def
test_export describing the purpose of the test and documenting parameters:
input_param (dict of ViT init args), input_shape (tuple for test input tensor
shape), and the implicit behavior (runs model.eval() and calls test_export_save
with random input), and note that no return value is expected; place the
docstring in Google-style sections Args: and Returns: to satisfy the project
docstring guideline for test_export.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 40e30318-e5d0-4a06-9f6c-55a8eed25a7e

📥 Commits

Reviewing files that changed from the base of the PR and between daaedaa and 2e8a7ab.

📒 Files selected for processing (97)
  • docs/source/mb_specification.rst
  • docs/source/modules.md
  • monai/apps/detection/networks/retinanet_network.py
  • monai/apps/detection/utils/anchor_utils.py
  • monai/apps/mmars/mmars.py
  • monai/apps/reconstruction/networks/nets/coil_sensitivity_model.py
  • monai/bundle/__init__.py
  • monai/bundle/__main__.py
  • monai/bundle/scripts.py
  • monai/data/__init__.py
  • monai/data/export_utils.py
  • monai/data/meta_tensor.py
  • monai/data/torchscript_utils.py
  • monai/losses/dice.py
  • monai/losses/focal_loss.py
  • monai/losses/hausdorff_loss.py
  • monai/losses/tversky.py
  • monai/networks/__init__.py
  • monai/networks/blocks/feature_pyramid_network.py
  • monai/networks/blocks/selfattention.py
  • monai/networks/blocks/squeeze_and_excitation.py
  • monai/networks/layers/factories.py
  • monai/networks/layers/simplelayers.py
  • monai/networks/layers/vector_quantizer.py
  • monai/networks/nets/basic_unet.py
  • monai/networks/nets/dints.py
  • monai/networks/nets/dynunet.py
  • monai/networks/nets/netadapter.py
  • monai/networks/nets/swin_unetr.py
  • monai/networks/nets/vit.py
  • monai/networks/utils.py
  • monai/utils/__init__.py
  • monai/utils/deprecate_utils.py
  • monai/utils/enums.py
  • tests/apps/detection/networks/test_retinanet.py
  • tests/apps/detection/networks/test_retinanet_detector.py
  • tests/apps/detection/utils/test_anchor_box.py
  • tests/bundle/test_bundle_export_checkpoint.py
  • tests/data/meta_tensor/test_meta_tensor.py
  • tests/data/test_export_utils.py
  • tests/integration/test_retinanet_predict_utils.py
  • tests/losses/image_dissimilarity/test_local_normalized_cross_correlation_loss.py
  • tests/losses/test_dice_ce_loss.py
  • tests/losses/test_dice_focal_loss.py
  • tests/losses/test_dice_loss.py
  • tests/losses/test_ds_loss.py
  • tests/losses/test_focal_loss.py
  • tests/losses/test_generalized_dice_focal_loss.py
  • tests/losses/test_generalized_dice_loss.py
  • tests/losses/test_generalized_wasserstein_dice_loss.py
  • tests/losses/test_masked_loss.py
  • tests/losses/test_multi_scale.py
  • tests/losses/test_spectral_loss.py
  • tests/losses/test_ssim_loss.py
  • tests/losses/test_tversky_loss.py
  • tests/networks/blocks/test_dynunet_block.py
  • tests/networks/blocks/test_fpn_block.py
  • tests/networks/blocks/test_se_block.py
  • tests/networks/blocks/test_se_blocks.py
  • tests/networks/blocks/test_selfattention.py
  • tests/networks/blocks/test_subpixel_upsample.py
  • tests/networks/blocks/test_unetr_block.py
  • tests/networks/nets/dints/test_dints_mixop.py
  • tests/networks/nets/regunet/test_localnet.py
  • tests/networks/nets/regunet/test_regunet.py
  • tests/networks/nets/test_ahnet.py
  • tests/networks/nets/test_autoencoder.py
  • tests/networks/nets/test_basic_unet.py
  • tests/networks/nets/test_basic_unetplusplus.py
  • tests/networks/nets/test_daf3d.py
  • tests/networks/nets/test_densenet.py
  • tests/networks/nets/test_dints_network.py
  • tests/networks/nets/test_discriminator.py
  • tests/networks/nets/test_dynunet.py
  • tests/networks/nets/test_efficientnet.py
  • tests/networks/nets/test_generator.py
  • tests/networks/nets/test_globalnet.py
  • tests/networks/nets/test_highresnet.py
  • tests/networks/nets/test_hovernet.py
  • tests/networks/nets/test_milmodel.py
  • tests/networks/nets/test_net_adapter.py
  • tests/networks/nets/test_patch_gan_dicriminator.py
  • tests/networks/nets/test_quicknat.py
  • tests/networks/nets/test_resnet.py
  • tests/networks/nets/test_segresnet.py
  • tests/networks/nets/test_segresnet_ds.py
  • tests/networks/nets/test_senet.py
  • tests/networks/nets/test_unet.py
  • tests/networks/nets/test_unetr.py
  • tests/networks/nets/test_varautoencoder.py
  • tests/networks/nets/test_vit.py
  • tests/networks/nets/test_vnet.py
  • tests/networks/nets/test_voxelmorph.py
  • tests/networks/nets/test_vqvae.py
  • tests/networks/test_convert_to_export.py
  • tests/networks/test_varnet.py
  • tests/test_utils.py
💤 Files with no reviewable changes (1)
  • monai/data/meta_tensor.py

Comment on lines +106 to +107
m = mask.reshape(-1)[:s_len].bool()


⚠️ Potential issue | 🟡 Minor

ACR derivation currently depends on the first flattened slice

mask.reshape(-1)[:s_len] takes only the first sampling vector. If masks differ across leading dims, ACR is computed from one slice but applied to all data at Line 135.

Proposed fix
-        m = mask.reshape(-1)[:s_len].bool()
+        # Collapse non-sampling dims and keep only frequencies sampled across all of them.
+        m = mask.reshape(-1, s_len).all(dim=0)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/apps/reconstruction/networks/nets/coil_sensitivity_model.py` around
lines 106 - 107, The current computation uses m =
mask.reshape(-1)[:s_len].bool() which picks only the first flattened sampling
vector, so ACR (derived from m and s_len) is computed from one slice but later
applied to all data (variable ACR usage near the ACR derivation and
application). Fix by reshaping mask into chunks matching s_len (e.g.,
mask.reshape(-1, s_len) or mask.view(-1, s_len)) and compute m (and the
resulting ACR) per row/leading-dim instead of a single vector, then broadcast or
index the per-slice ACR to match the data when it is applied; update any uses of
m, s_len, and ACR to operate on per-slice masks rather than the single first
slice.
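The difference between the two derivations can be shown with a small NumPy sketch (NumPy stands in for torch here, and the mask shape is invented for illustration):

```python
import numpy as np

s_len = 6
# Two leading slices with different sampling patterns.
mask = np.array([[1, 1, 0, 0, 1, 1],
                 [1, 0, 1, 0, 1, 1]], dtype=bool)

# Current behaviour: only the first flattened slice is consulted.
m_first = mask.reshape(-1)[:s_len]

# Proposed behaviour: keep only frequencies sampled in every slice.
m_all = mask.reshape(-1, s_len).all(axis=0)

print(m_first.tolist())  # [True, True, False, False, True, True]
print(m_all.tolist())    # [True, False, False, False, True, True]
```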

Comment on lines 730 to 731
if device is None:
device = "cuda:0" if is_available() else "cpu"

⚠️ Potential issue | 🟠 Major

device is ignored for exported loads.

Lines 730-731 compute a target device, but the .pt2 branch returns load_exported_program(...) without using it. That makes load()'s placement semantics diverge only for exported modules.

Also applies to: 756-758

Comment on lines 56 to +58
net = DAF3D(in_channels=1, out_channels=1)
test_data = torch.randn(16, 1, 32, 32)
test_script_save(net, test_data)
test_export_save(net, test_data)

⚠️ Potential issue | 🟡 Minor

Fix export test input rank for DAF3D.

DAF3D is exercised with 5D tensors in this file, but this export test uses a 4D tensor. It will fail when the skip is removed.

Proposed fix
 def test_export(self):
     net = DAF3D(in_channels=1, out_channels=1)
-    test_data = torch.randn(16, 1, 32, 32)
+    test_data = torch.randn(1, 1, 32, 32, 64)
     test_export_save(net, test_data)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_daf3d.py` around lines 56 - 58, The test uses DAF3D
(which expects 5D inputs) but constructs a 4D tensor; update the test to pass a
5D tensor to test_export_save. Locate the DAF3D instantiation and replace the
test_data creation used by test_export_save so it creates a torch.randn with
five dims (batch, channels, depth, height, width) — e.g., torch.randn(16, 1, D,
32, 32) — ensuring the depth (D) is a small positive integer so the export test
exercises a proper 5D input.

Comment on lines +158 to +170
def test_export(self, dints_grid_params, dints_params, input_shape, _):
num_blocks = dints_grid_params["num_blocks"]
num_depths = dints_grid_params["num_depths"]
_cell = Cell(1, 1, 0, spatial_dims=dints_grid_params["spatial_dims"])
num_cell_ops = len(_cell.OPS)
arch_code_a = np.ones((num_blocks, 3 * num_depths - 2))
arch_code_c = np.random.randint(num_cell_ops, size=(num_blocks, 3 * num_depths - 2))
dints_grid_params["arch_code"] = [arch_code_a, arch_code_c]
dints_grid_params["device"] = "cpu"
grid = TopologyInstance(**dints_grid_params)
dints_params["dints_space"] = grid
net = DiNTS(**dints_params).to(dints_grid_params["device"])
test_script_save(net, torch.randn(input_shape).to(dints_grid_params["device"]))
dints_params["node_a"] = torch.ones((num_blocks + 1, num_depths))
net = DiNTS(**dints_params).to("cpu")

⚠️ Potential issue | 🟠 Major

Copy the case dictionaries before mutating them.

Lines 165-169 mutate objects taken from the module-level TEST_CASES_* tables. Those same dicts are reused by the other parameterized tests in this file, so arch_code, node_a, and the forced "cpu" device can leak across tests and make the suite order-dependent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_dints_network.py` around lines 158 - 170, The
test_export function mutates shared parameter dicts dints_grid_params and
dints_params which can leak state across tests; fix by making local deep copies
at the start (e.g., use copy.deepcopy) and operate on those copies before
creating TopologyInstance and DiNTS so modifications to fields like "arch_code",
"node_a", and "device" do not alter the module-level TEST_CASES_* dicts; update
references to use the copied variables when constructing
TopologyInstance(**dints_grid_params_copy) and DiNTS(**dints_params_copy).
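
A minimal, hedged sketch of the isolation pattern the prompt asks for — the table name and keys here are illustrative stand-ins for the module-level TEST_CASES_* dicts:

```python
import copy

# Illustrative stand-in for a module-level parameter table shared across tests.
TEST_CASES_GRID = [{"num_blocks": 6, "num_depths": 3, "spatial_dims": 3}]

def run_export_case(case):
    # Deep-copy first so mutations never leak back into the shared table.
    params = copy.deepcopy(case)
    params["device"] = "cpu"
    params["arch_code"] = [[1, 1], [0, 2]]
    return params

mutated = run_export_case(TEST_CASES_GRID[0])
print("device" in TEST_CASES_GRID[0])  # False — the shared dict is untouched
```

Each test then constructs `TopologyInstance`/`DiNTS` from its own copy, so suite order no longer matters.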

Comment on lines +181 to +185
def test_export(self):
    for padding_flag in [True, False]:
        net = HoVerNet(mode=HoVerNet.Mode.FAST, decoder_padding=padding_flag)
    test_data = torch.randn(1, 3, 256, 256)
    test_script_save(net, test_data)
    test_export_save(net, test_data)
⚠️ Potential issue | 🟡 Minor

test_export currently validates only one padding mode.

test_export_save(...) runs once after the loop, so only the last padding_flag is covered.

🐛 Proposed fix
 def test_export(self):
-    for padding_flag in [True, False]:
-        net = HoVerNet(mode=HoVerNet.Mode.FAST, decoder_padding=padding_flag)
-    test_data = torch.randn(1, 3, 256, 256)
-    test_export_save(net, test_data)
+    test_data = torch.randn(1, 3, 256, 256)
+    for padding_flag in [True, False]:
+        net = HoVerNet(mode=HoVerNet.Mode.FAST, decoder_padding=padding_flag)
+        test_export_save(net, test_data)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_hovernet.py` around lines 181 - 185, The test_export
function only calls test_export_save once after looping over padding_flag, so
only the final decoder_padding value is actually tested; update test_export to
invoke test_export_save inside the loop for each padding_flag (i.e., for each
HoVerNet instantiation), ensuring both decoder_padding=True and False are passed
to test_export_save using the HoVerNet(...) created in that iteration (refer to
test_export, HoVerNet, HoVerNet.Mode.FAST, decoder_padding, and
test_export_save).

Comment on lines +84 to +88
def test_export(self):
    input_param, input_shape, expected_shape = TEST_CASE_MILMODEL[0]
    net = MILModel(**input_param)
    test_data = torch.randn(input_shape, dtype=torch.float)
    test_script_save(net, test_data)
    test_export_save(net, test_data)
⚠️ Potential issue | 🟡 Minor

Unused variable expected_shape.

Prefix with underscore to indicate it's intentionally unused.

Proposed fix
-        input_param, input_shape, expected_shape = TEST_CASE_MILMODEL[0]
+        input_param, input_shape, _expected_shape = TEST_CASE_MILMODEL[0]
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 85-85: Unpacked variable expected_shape is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/nets/test_milmodel.py` around lines 84 - 88, In the
test_export test function, the unpacked variable expected_shape from
TEST_CASE_MILMODEL is unused; change the unpacking to prefix it with an
underscore (e.g., _expected_shape) so the linter knows it is intentionally
unused. Update the tuple unpacking in test_export (where TEST_CASE_MILMODEL[0]
is assigned to input_param, input_shape, expected_shape) to use input_param,
input_shape, _expected_shape to silence the warning without changing behavior.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (7)
tests/bundle/test_bundle_export_checkpoint.py (2)

99-106: Create models/ explicitly before writing checkpoint.

This removes reliance on save_state creating parent dirs implicitly.

Proposed patch
             ckpt_file = os.path.join(tempdir, "models", "model.pt")
             pt2_file = os.path.join(tempdir, "models", "model.pt2")
+            os.makedirs(os.path.dirname(ckpt_file), exist_ok=True)
@@
             save_state(src=net if key_in_ckpt == "" else {key_in_ckpt: net}, path=ckpt_file)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/bundle/test_bundle_export_checkpoint.py` around lines 99 - 106, The
test currently writes a checkpoint to ckpt_file under a "models" subdirectory
without ensuring that directory exists; before calling save_state (which writes
to ckpt_file) create the directory explicitly (e.g., mkdir parents=True) for the
models folder under tempdir to avoid relying on save_state to create parent
dirs—add this directory creation right after defining ckpt_file/pt2_file and
before parser/export/read/get_parsed_content and the save_state call so the
directory exists when save_state(net or {key_in_ckpt: net}, path=ckpt_file)
runs.

35-125: Add Google-style docstrings for class and methods.

setUp, tearDown, and tests need structured Args/Returns/Raises sections to match repo standards.

As per coding guidelines "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/bundle/test_bundle_export_checkpoint.py` around lines 35 - 125, Add
Google-style docstrings to the TestExportCheckpoint class and its methods setUp,
tearDown, test_export, and test_default_value: for the class provide a short
description of the test purpose; for setUp/tearDown describe state setup/cleanup
and any environment variables modified (CUDA_VISIBLE_DEVICES) with no return
value; for test_export and test_default_value describe the test scenario,
key_in_ckpt parameter (argument provided by parameterized.expand), the files
created/checked (pt2_file, ckpt_file, def_args_file, config_file), expected
assertions, and note that they raise AssertionError on failure; ensure each
docstring uses Google-style sections (Args/Returns/Raises) and mentions the
relevant symbols (TestExportCheckpoint, setUp, tearDown, test_export,
test_default_value) so reviewers can locate them.
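
A hedged sketch of the Google-style layout the guideline refers to — the class and method names mirror the test file, but the wording is illustrative, not the actual docstrings:

```python
class TestExportCheckpoint:  # skeleton only, not the real test class
    def test_export(self, key_in_ckpt: str = "model") -> None:
        """Export a bundle checkpoint and check the resulting .pt2 artifact.

        Args:
            key_in_ckpt: Key under which the network weights are stored in the
                checkpoint; an empty string means the state dict is top-level.

        Returns:
            None.

        Raises:
            AssertionError: If the exported file is missing or its metadata
                does not match the source config.
        """
        return None

doc = TestExportCheckpoint.test_export.__doc__
print(all(s in doc for s in ("Args:", "Returns:", "Raises:")))  # True
```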
tests/data/test_export_utils.py (2)

64-65: Use explicit key assertion + pop for timestamp normalization.

del is brittle and gives less clear failures. Assert key existence, then remove with pop.

Proposed patch
-        del meta[ExportMetadataKeys.TIMESTAMP.value]
+        self.assertIn(ExportMetadataKeys.TIMESTAMP.value, meta)
+        meta.pop(ExportMetadataKeys.TIMESTAMP.value, None)
@@
-        del meta[ExportMetadataKeys.TIMESTAMP.value]
+        self.assertIn(ExportMetadataKeys.TIMESTAMP.value, meta)
+        meta.pop(ExportMetadataKeys.TIMESTAMP.value, None)

Also applies to: 81-85

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/data/test_export_utils.py` around lines 64 - 65, Replace the brittle
del-based timestamp normalization with an explicit assertion that the timestamp
key exists and removal via pop: in tests/data/test_export_utils.py locate the
two places using "del meta[ExportMetadataKeys.TIMESTAMP.value]" (one at the
shown block and the similar block around lines 81-85), change to
assertIn(ExportMetadataKeys.TIMESTAMP.value, meta) and then call
meta.pop(ExportMetadataKeys.TIMESTAMP.value) before asserting meta ==
get_config_values(); this will give clearer failures and safely normalize the
timestamp key.

25-100: Bring definition docstrings to project-required Google style.

Definitions are documented, but they don’t include structured Args/Returns/Raises sections required by the repo rule.

As per coding guidelines "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/data/test_export_utils.py` around lines 25 - 100, The test definitions
(TestModule and the test methods in TestExportUtils such as
test_save_exported_program, test_save_exported_program_ext,
test_save_with_metadata, test_load_exported_program, test_load_with_metadata,
test_save_load_more_extra_files) lack Google-style docstrings with structured
Args/Returns/Raises sections; update each definition to include a Google-style
docstring that documents parameters (e.g., describe the args for
TestModule.forward and any test inputs), return values (if any), and any
exceptions raised so they conform to the project docstring rule (include Args:,
Returns:, Raises: blocks as appropriate).
tests/networks/test_convert_to_export.py (2)

31-41: Assert saved artifact exists in test_basic_export.

You pass filename_or_obj; add a file-existence assertion so save-path regressions are caught.

Proposed patch
         with tempfile.TemporaryDirectory() as tempdir:
+            out_path = os.path.join(tempdir, "model.pt2")
             exported = convert_to_export(
                 model=model,
-                filename_or_obj=os.path.join(tempdir, "model.pt2"),
+                filename_or_obj=out_path,
                 verify=True,
                 inputs=[torch.randn((16, 1, 32, 32), requires_grad=False)],
                 device="cpu",
                 rtol=1e-3,
                 atol=1e-4,
             )
             self.assertIsInstance(exported, torch.export.ExportedProgram)
+            self.assertTrue(os.path.exists(out_path))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/test_convert_to_export.py` around lines 31 - 41, The test
calls convert_to_export with a filename_or_obj (os.path.join(tempdir,
"model.pt2")) but never asserts the file was actually written; add an assertion
after export to verify the saved artifact exists (e.g., check os.path.exists or
Path.exists on the same path used for filename_or_obj) in the test
(tests/networks/test_convert_to_export.py / test_basic_export) alongside the
existing self.assertIsInstance(exported, torch.export.ExportedProgram) to catch
save-path regressions.

25-80: Align test/class docstrings with required Google style.

Current docstrings are brief and don’t include structured Args/Returns/Raises sections.

As per coding guidelines "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/networks/test_convert_to_export.py` around lines 25 - 80, Update the
class and each test method docstrings in TestConvertToExport to Google-style
docstrings: for the class TestConvertToExport add a short description and an
Args section if any class-level attributes exist; for each test method
(test_basic_export, test_export_without_save, test_missing_inputs_raises,
test_export_with_dynamic_shapes) replace the one-line string with a multi-line
Google-style docstring that includes a short summary, an Args section
documenting inputs used (e.g., model, filename_or_obj, inputs, dynamic_shapes),
a Returns section if applicable (e.g., exported torch.export.ExportedProgram),
and a Raises section for expected exceptions (e.g., ValueError in
test_missing_inputs_raises); ensure formatting matches other project tests
(triple-quoted docstrings, sections titled Args:, Returns:, Raises:) and update
any references to convert_to_export or behavior described so the docstrings
accurately reflect what each test asserts.
monai/networks/nets/dints.py (1)

40-50: Add complete Google-style docstrings for modified definitions.

These updated definitions are still missing full Args, Returns, and Raises sections (e.g., Line 42, Line 49, Line 631).

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

Also applies to: 54-54, 633-633

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 40 - 50, The abstract methods
CellInterface.forward and StemInterface.forward lack full Google-style
docstrings; update each docstring to include Args (describe parameter names and
types: x: torch.Tensor, weight: torch.Tensor | None for CellInterface.forward),
Returns (torch.Tensor) and Raises (e.g., NotImplementedError when not
implemented) sections, following the Google docstring format used elsewhere in
the project; ensure you update any other similar missing docstrings flagged
(around the same classes/methods near the other mentions) to match style and
content.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: bd7bb7a4-bb9d-4a86-90ee-f9722e0f524a

📥 Commits

Reviewing files that changed from the base of the PR and between 2e8a7ab and d5cca7a.

📒 Files selected for processing (8)
  • monai/data/__init__.py
  • monai/networks/nets/dints.py
  • tests/bundle/test_bundle_export_checkpoint.py
  • tests/data/test_export_utils.py
  • tests/networks/nets/test_ahnet.py
  • tests/networks/nets/test_efficientnet.py
  • tests/networks/nets/test_senet.py
  • tests/networks/test_convert_to_export.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • monai/data/__init__.py
  • tests/networks/nets/test_efficientnet.py

Comment on lines +376 to +380
# Pre-compute node activation flags as Python booleans for torch.export compatibility.
# NOTE: node_a must not be mutated after construction.
self._node_flags: list[list[bool]] = [
    [bool(self.node_a[b, d]) for d in range(self.num_depths)] for b in range(self.node_a.shape[0])
]
⚠️ Potential issue | 🟠 Major

Validate node_a shape and snapshot before caching _node_flags.

Line 379 trusts node_a.shape[0] and caches flags without validating expected shape (num_blocks + 1, num_depths). Invalid input fails later in forward with index errors.

Proposed fix
         if node_a is None:
-            self.node_a = torch.ones((self.num_blocks + 1, self.num_depths))
+            node_a_t = torch.ones((self.num_blocks + 1, self.num_depths), dtype=torch.bool)
         else:
-            self.node_a = node_a
+            node_a_t = torch.as_tensor(node_a)
+            expected_shape = (self.num_blocks + 1, self.num_depths)
+            if tuple(node_a_t.shape) != expected_shape:
+                raise ValueError(f"node_a must have shape {expected_shape}, got {tuple(node_a_t.shape)}")
+        self.node_a = node_a_t.clone()
-        self._node_flags: list[list[bool]] = [
-            [bool(self.node_a[b, d]) for d in range(self.num_depths)] for b in range(self.node_a.shape[0])
-        ]
+        self._node_flags: list[list[bool]] = self.node_a.to(dtype=torch.bool).cpu().tolist()

As per coding guidelines, "Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 376 - 380, Validate that
self.node_a has shape (self.num_blocks + 1, self.num_depths) before building
self._node_flags and snapshot node_a to prevent later mutation: check
node_a.dim() and that node_a.shape[0] == self.num_blocks + 1 and node_a.shape[1]
== self.num_depths, raise a clear ValueError if not; then create an immutable
snapshot by using a detached/cpu clone of node_a (e.g., clone().detach()) and
build self._node_flags from that snapshot so the boolean list reflects the
frozen state of node_a for torch.export compatibility.

Comment on lines +672 to +678
# Pre-compute activation flags as plain Python booleans so that the
# control flow in forward() is static and compatible with torch.export.
# NOTE: arch_code_a must not be mutated after construction; this class
# is only used at inference/re-training time, not during architecture search.
self._active_flags: list[list[bool]] = [
    [bool(self.arch_code_a[b, r]) for r in range(self.arch_code_a.shape[1])] for b in range(self.num_blocks)
]
⚠️ Potential issue | 🟡 Minor

Use an explicit binary mask for _active_flags to match cell_tree creation.

cell_tree is built with self.arch_code_a[blk_idx, res_idx] == 1, but Line 677 uses bool(...). Non-binary nonzero values can make active=True for paths that were never instantiated.

Proposed fix
-        self._active_flags: list[list[bool]] = [
-            [bool(self.arch_code_a[b, r]) for r in range(self.arch_code_a.shape[1])] for b in range(self.num_blocks)
-        ]
+        if not torch.all((self.arch_code_a == 0) | (self.arch_code_a == 1)):
+            raise ValueError("arch_code_a must be binary (0/1) for TopologyInstance.")
+        self._active_flags: list[list[bool]] = (self.arch_code_a == 1).to(torch.bool).cpu().tolist()

As per coding guidelines, "Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these."

Also applies to: 689-690

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 672 - 678, The current
construction of _active_flags uses bool(self.arch_code_a[b, r]) which treats any
nonzero value as True and can enable paths that weren't explicitly instantiated;
change the mask creation to an explicit binary comparison (e.g.,
self.arch_code_a[b, r] == 1 or self.arch_code_a.eq(1)) so _active_flags only
marks entries equal to 1, and apply the same explicit ==1 check wherever a
similar booleanization is used for arch_code_a (notably the place that builds
cell_tree and the later construction around the same logic) to keep semantics
consistent with cell_tree.
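
The semantic difference is easy to reproduce with plain Python values — here a list row stands in for a row of arch_code_a; tensor elements behave the same way under bool():

```python
# Entries should be 0/1, but a stray 2 can appear in a non-binary arch code.
row = [0, 1, 2]

truthy = [bool(v) for v in row]   # any nonzero value is treated as active
binary = [v == 1 for v in row]    # only an exact 1 is treated as active

print(truthy)  # [False, True, True]  — the 2 would wrongly activate a path
print(binary)  # [False, True, False] — matches how cell_tree is keyed
```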

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
tests/losses/test_multi_scale.py (1)

79-79: Add a docstring to test_export.

The modified definition is missing a Google-style docstring.

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_multi_scale.py` at line 79, Add a Google-style docstring to
the test_export function: insert a triple-quoted docstring immediately under the
def test_export(self): line that briefly describes the test's purpose, documents
any parameters (e.g., self) if desired, states what it returns (typically None
for tests), and notes any exceptions it may raise; ensure the sections follow
Google style (Args, Returns, Raises) and are concise so the test has a proper
descriptive docstring.
monai/networks/utils.py (2)

890-911: Add Raises section to docstring.

The function raises ValueError when inputs is None. Per coding guidelines, exceptions should be documented.

Proposed addition
     Returns:
         A :class:`torch.export.ExportedProgram` representing the exported model.
+
+    Raises:
+        ValueError: If ``inputs`` is None.
     """

As per coding guidelines, "Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/utils.py` around lines 890 - 911, Update the function
docstring (the export utility that accepts model, filename_or_obj, verify,
inputs, etc.) to include a Google-style "Raises" section documenting that a
ValueError is raised when the inputs parameter is None (e.g., "Raises:
ValueError: If inputs is None when verification is requested" or similar),
ensuring the new section appears with the other docstring sections and follows
the same formatting as the existing Args/Returns blocks.

798-804: Consider handling dict types for completeness.

If nested dicts containing tensors are passed, they won't be moved to the target device.

Proposed enhancement
 def _recursive_to(x, device):
     """Recursively move tensors (and nested tuples/lists of tensors) to *device*."""
     if isinstance(x, torch.Tensor):
         return x.to(device)
     if isinstance(x, (tuple, list)):
         return type(x)(_recursive_to(i, device) for i in x)
+    if isinstance(x, dict):
+        return {k: _recursive_to(v, device) for k, v in x.items()}
     return x
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/utils.py` around lines 798 - 804, The helper _recursive_to
currently handles torch.Tensor and tuple/list but skips dicts; update
_recursive_to to also detect dict/mapping types (e.g., isinstance(x, dict)) and
recursively move each value to the device while preserving the mapping type and
keys (e.g., build the same dict-like type from (key, _recursive_to(value,
device)) pairs). Keep the existing branches for torch.Tensor and tuple/list and
add the dict branch before the final return so nested dictionaries of tensors
are moved as well.
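
A self-contained sketch of the helper with the proposed dict branch. To keep it runnable without torch, a small FakeTensor stub stands in for torch.Tensor — the real helper would check isinstance(x, torch.Tensor) instead of hasattr:

```python
def _recursive_to(x, device):
    """Recursively move tensors (and nested tuples/lists/dicts of them) to *device*."""
    if hasattr(x, "to"):  # stand-in for isinstance(x, torch.Tensor)
        return x.to(device)
    if isinstance(x, (tuple, list)):
        return type(x)(_recursive_to(i, device) for i in x)
    if isinstance(x, dict):  # proposed addition: handle nested mappings too
        return {k: _recursive_to(v, device) for k, v in x.items()}
    return x

class FakeTensor:
    """Minimal stand-in that records the device it was moved to."""
    def __init__(self):
        self.device = "cpu"
    def to(self, device):
        self.device = device
        return self

state = {"weights": [FakeTensor(), (FakeTensor(),)], "step": 3}
moved = _recursive_to(state, "cuda:0")
print(moved["weights"][1][0].device)  # cuda:0 — nested tuple inside a dict is covered
```

Non-tensor leaves such as the `"step"` counter pass through unchanged, matching the existing fall-through `return x`.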
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/networks/utils.py`:
- Around line 1174-1177: The loop comparing torch_out and trt_out can silently
skip mismatched-length outputs because zip truncates; update the verification in
the block that iterates over torch_out and trt_out to first ensure the two
sequences have equal length (e.g., assert len(torch_out) == len(trt_out) and
raise a clear error) or use Python's zip(..., strict=True) so a mismatch raises
immediately, then keep using torch.testing.assert_close for element-wise
comparison; reference the variables torch_out, trt_out and the assert_close call
to locate the change.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1f47888d-325e-465b-865f-4e2cb6dcb525

📥 Commits

Reviewing files that changed from the base of the PR and between d5cca7a and 9f4bac0.

📒 Files selected for processing (3)
  • monai/networks/utils.py
  • tests/data/meta_tensor/test_meta_tensor.py
  • tests/losses/test_multi_scale.py

@aymuos15 aymuos15 force-pushed the torchscript_deprecation branch from 9f4bac0 to 704fb03 Compare March 17, 2026 11:00
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (5)
monai/networks/nets/dints.py (2)

672-678: ⚠️ Potential issue | 🟠 Major

Use explicit == 1 masking for _active_flags to match cell_tree construction.

cell_tree is created only for arch_code_a == 1 (Line 620), but Line 677 uses bool(...). Any nonzero non-1 value can mark a path active and then fail at Line 691 with missing key lookup.

Suggested fix
         # Pre-compute activation flags as plain Python booleans so that the
         # control flow in forward() is static and compatible with torch.export.
         # NOTE: arch_code_a must not be mutated after construction; this class
         # is only used at inference/re-training time, not during architecture search.
-        self._active_flags: list[list[bool]] = [
-            [bool(self.arch_code_a[b, r]) for r in range(self.arch_code_a.shape[1])] for b in range(self.num_blocks)
-        ]
+        if not torch.all((self.arch_code_a == 0) | (self.arch_code_a == 1)):
+            raise ValueError("arch_code_a must be binary (0/1) for TopologyInstance.")
+        self._active_flags: list[list[bool]] = self.arch_code_a.eq(1).to(torch.bool).cpu().tolist()

As per coding guidelines, "Examine code for logical errors or inconsistencies, and suggest what may be changed to address these."

Also applies to: 689-690

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 672 - 678, The pre-computed
_active_flags uses bool(self.arch_code_a[...]) which treats any non-zero value
as active but cell_tree and downstream lookups only contain entries for
arch_code_a == 1; change construction of _active_flags to test equality to 1
(e.g., self.arch_code_a[b, r] == 1) so only cells with value exactly 1 are
marked active, and make the same equality-to-1 change where _active_flags is
consulted later (the code around the forward()/lines handling the active flag
checks) to avoid missing-key failures with non-1 values; reference symbols:
_active_flags, arch_code_a, cell_tree, forward.
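The difference between `bool(...)` and equality-to-1 masking is easy to see in a plain-Python sketch (the nested lists and the stale value `2` below are hypothetical stand-ins for `arch_code_a` entries):

```python
# Hypothetical stand-in for arch_code_a; 2 is a stale, non-binary value.
arch_code_a = [[0, 2, 1], [1, 0, 0]]

# bool() marks ANY nonzero entry as active, including the invalid 2 ...
flags_bool = [[bool(v) for v in row] for row in arch_code_a]

# ... while equality-to-1 marks only entries that cell_tree actually contains.
flags_eq1 = [[v == 1 for v in row] for row in arch_code_a]

print(flags_bool[0])  # [False, True, True]  (the 2 is wrongly active)
print(flags_eq1[0])   # [False, False, True]
```

With `bool(...)`, the stale `2` would later trigger a missing-key lookup in `cell_tree`; with `== 1` it is simply treated as inactive.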

376-380: ⚠️ Potential issue | 🟠 Major

Validate and snapshot node_a before computing _node_flags.

Line 379 assumes a valid shape and that the state never changes. A bad shape or later mutation can break gating in forward (Line 513) with index or runtime errors.

Suggested fix
         if node_a is None:
-            self.node_a = torch.ones((self.num_blocks + 1, self.num_depths))
+            node_a_t = torch.ones((self.num_blocks + 1, self.num_depths), dtype=torch.bool)
         else:
-            self.node_a = node_a
+            node_a_t = torch.as_tensor(node_a)
+        expected_shape = (self.num_blocks + 1, self.num_depths)
+        if tuple(node_a_t.shape) != expected_shape:
+            raise ValueError(f"node_a must have shape {expected_shape}, got {tuple(node_a_t.shape)}")
+        self.node_a = node_a_t.clone().detach()
         # Pre-compute node activation flags as Python booleans for torch.export compatibility.
         # NOTE: node_a must not be mutated after construction.
-        self._node_flags: list[list[bool]] = [
-            [bool(self.node_a[b, d]) for d in range(self.num_depths)] for b in range(self.node_a.shape[0])
-        ]
+        self._node_flags: list[list[bool]] = self.node_a.to(dtype=torch.bool).cpu().tolist()

As per coding guidelines, "Examine code for logical errors or inconsistencies, and suggest what may be changed to address these."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 376 - 380, The code assumes
self.node_a has the correct shape and won't change when building
self._node_flags; validate that self.node_a is a tensor with shape [B,
num_depths] (i.e., node_a.shape[1] == self.num_depths) and then take an
immutable snapshot (e.g., clone or convert to a Python nested tuple/list from
the snapshot) before computing self._node_flags so later mutations can't change
gating; update the construction near the current _node_flags creation to first
validate shape and snapshot node_a, then compute the list of booleans from that
snapshot, and ensure forward uses the snapshot (or the boolean flags) rather
than the original mutable self.node_a.
tests/losses/test_multi_scale.py (1)

79-82: ⚠️ Potential issue | 🟡 Minor

Drop the unused expected_val binding.

test_export() no longer uses the third tuple item, so this still trips RUF059.

Suggested fix
-        input_param, input_data, expected_val = TEST_CASES[0]
+        input_param, input_data, _expected_val = TEST_CASES[0]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/losses/test_multi_scale.py` around lines 79 - 82, The test_export
function binds an unused variable expected_val from TEST_CASES which triggers a
lint warning; update the test to only unpack the two needed items by changing
the tuple unpack in test_export to "input_param, input_data = TEST_CASES[0]" and
keep the rest (instantiating MultiScaleLoss and calling test_export_save)
unchanged so symbols referenced are test_export, TEST_CASES, MultiScaleLoss, and
test_export_save.
monai/bundle/scripts.py (1)

759-761: ⚠️ Potential issue | 🟠 Major

device is ignored on the exported-module path.

This branch just returns load_exported_program(...), so load(..., device="cuda:0", load_exported_module=True) behaves differently from the weights and TorchScript paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/bundle/scripts.py` around lines 759 - 761, The exported-module branch
in load ignores the device parameter because it simply returns
load_exported_program(full_path, more_extra_files=config_files or ()), so calls
like load(..., device="cuda:0", load_exported_module=True) don't honor device;
update the load logic to pass the device through to the exported-module loader
(e.g., add a device argument to load_exported_program or call a wrapper that
moves the loaded program to the requested device), referencing
load_exported_program, load, load_exported_module, full_path, config_files and
ensuring the device is applied the same way as in the weights and TorchScript
paths.
tests/apps/detection/networks/test_retinanet.py (1)

145-145: ⚠️ Potential issue | 🟡 Minor

Narrow overly broad BaseException handlers.

Line 145 and Line 179 should not catch BaseException; doing so can suppress keyboard interrupts and exit signals.

Proposed fix
-        except BaseException:
+        except (ValueError, IndexError):
             idx = 0
@@
-        except BaseException:
+        except (ValueError, IndexError):
             idx = 0
#!/bin/bash
# Verify no broad BaseException catches remain in this file.
rg -nP 'except\s+BaseException' tests/apps/detection/networks/test_retinanet.py

Expected result: no matches.

Also applies to: 179-179

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/apps/detection/networks/test_retinanet.py` at line 145, Replace the
overly broad "except BaseException:" handlers in
tests/apps/detection/networks/test_retinanet.py with a narrower catch (e.g.,
"except Exception:" or a specific exception type relevant to the failing
operation) so interrupts and system-exit signals are not suppressed; locate the
two occurrences of "except BaseException" and update them to "except Exception"
(or a more specific exception if you can determine it from the surrounding code
in the test function names or helper calls) and run the provided ripgrep check
(rg -nP 'except\s+BaseException'
tests/apps/detection/networks/test_retinanet.py) to confirm no matches remain.
🧹 Nitpick comments (4)
monai/losses/focal_loss.py (1)

196-196: Sync Raises docs with validation location.

forward() still documents weight < 0 errors, but validation now happens in __init__. Update both docstrings accordingly.

As per coding guidelines, "Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of Google-style docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/losses/focal_loss.py` at line 196, The forward() docstring still claims
it raises errors for invalid weight values even though weight validation was
moved to __init__; update the docstrings so that __init__ (in class FocalLoss)
documents the ValueError for negative weight values and forward() no longer
lists weight validation under Raises—only keep raises that actually occur during
forward execution; specifically edit the __init__ docstring to mention "Raises:
ValueError: if weight < 0" and remove that clause from the forward() docstring
in focal_loss.py.
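A minimal sketch of the suggested layout — the weight check and its Raises entry living together in `__init__`, with `forward()` documenting only what it actually raises. The class and messages below are hypothetical, not MONAI's actual FocalLoss:

```python
class FocalLossSketch:
    """Hypothetical loss illustrating where the Raises entry should live."""

    def __init__(self, weight=None):
        """Initialize the loss.

        Args:
            weight: Optional per-class weights.

        Raises:
            ValueError: If ``weight`` contains negative values.
        """
        if weight is not None and min(weight) < 0:
            raise ValueError("`weight` values should be no less than 0.")
        self.weight = weight

    def forward(self, pred, target):
        """Compute the loss; no weight validation (and no Raises entry) here."""
        return sum((p - t) ** 2 for p, t in zip(pred, target))
```

The point is simply that the docstring's Raises section should sit next to the code that actually raises.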
monai/networks/nets/dints.py (1)

40-50: Add Google-style Args/Returns/Raises to interface forward docstrings.

These methods now explicitly raise NotImplementedError, so the docstrings should document that in a Raises: section.

Suggested docstring update
 class CellInterface(torch.nn.Module):
     """Abstract interface for Cell modules used in DiNTS."""
 
     def forward(self, x: torch.Tensor, weight: torch.Tensor | None) -> torch.Tensor:  # type: ignore
+        """Run a cell forward pass.
+
+        Args:
+            x: Input tensor.
+            weight: Optional operation weights.
+
+        Returns:
+            Output tensor.
+
+        Raises:
+            NotImplementedError: Always, in the interface base class.
+        """
         raise NotImplementedError
@@
 class StemInterface(torch.nn.Module):
     """Abstract interface for Stem modules used in DiNTS."""
 
     def forward(self, x: torch.Tensor) -> torch.Tensor:  # type: ignore
+        """Run a stem forward pass.
+
+        Args:
+            x: Input tensor.
+
+        Returns:
+            Output tensor.
+
+        Raises:
+            NotImplementedError: Always, in the interface base class.
+        """
         raise NotImplementedError

As per coding guidelines, "Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of Google-style docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/nets/dints.py` around lines 40 - 50, Update the docstrings for
the abstract forward methods to follow Google-style by adding Args, Returns, and
Raises sections: for CellInterface.forward (method name forward in class
CellInterface) add an Args: x (torch.Tensor), weight (torch.Tensor | None)
description; Returns: torch.Tensor; Raises: NotImplementedError when not
implemented; and for StemInterface.forward (method name forward in class
StemInterface) add an Args: x (torch.Tensor) description; Returns: torch.Tensor;
Raises: NotImplementedError when not implemented. Ensure the docstring wording
is concise and uses exact type names as in the signatures.
tests/bundle/test_bundle_export_checkpoint.py (2)

109-117: test_default_value does not cover omitted --key_in_ckpt.

Lines 109-117 always include --key_in_ckpt, so the omitted-argument path is never actually exercised.

Proposed test adjustment
-    @parameterized.expand([TEST_CASE_1, TEST_CASE_2])
+    @parameterized.expand([[None], TEST_CASE_2])
     def test_default_value(self, key_in_ckpt):
@@
-            cmd = [
+            cmd = [
                 "coverage",
                 "run",
                 "-m",
                 "monai.bundle",
                 "export_checkpoint",
-                "--key_in_ckpt",
-                key_in_ckpt,
                 "--config_file",
                 config_file,
                 "--bundle_root",
                 tempdir,
                 "--input_shape",
                 "[1, 1, 96, 96, 96]",
             ]
+            if key_in_ckpt is not None:
+                cmd.extend(["--key_in_ckpt", key_in_ckpt])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/bundle/test_bundle_export_checkpoint.py` around lines 109 - 117, Update
the test to actually exercise the omitted-argument path by removing the
"--key_in_ckpt" flag from the command constructed in test_default_value (or
adding a new sub-case) so the code path where key_in_ckpt is not provided is
executed; locate the cmd list constructed in
tests/bundle/test_bundle_export_checkpoint.py (the variable named cmd that
currently contains "--key_in_ckpt", key_in_ckpt) and create a variant that does
not append those two entries, then invoke the CLI
(monai.bundle.export_checkpoint) with that cmd so the default behavior is
asserted.

35-46: Add docstrings to the new test definitions.

setUp, tearDown, test_export, and test_default_value are new/modified definitions without docstrings. Add concise Google-style docstrings.

As per coding guidelines, **/*.py: "Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of Google-style docstrings."

Also applies to: 93-94

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/bundle/test_bundle_export_checkpoint.py` around lines 35 - 46, Add
concise Google-style docstrings to the new/modified test definitions: setUp,
tearDown, test_export, and test_default_value. For each function include a
one-line summary and brief sections for Args (describe any parameters such as
key_in_ckpt for test_export), Returns (if applicable, typically None for tests),
and Raises (mention raised exceptions if the test expects any), following
Google-style formatting; attach these to the corresponding definitions (e.g.,
def setUp(self):, def tearDown(self):, def test_export(self, key_in_ckpt):, def
test_default_value(self):) so they satisfy the project docstring guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/bundle/scripts.py`:
- Around line 1912-1920: Update _save_trt_model to conditionally save dynamo
ExportedProgram vs JIT modules: detect if trt_obj is an instance of
torch.export.ExportedProgram and call
torch.export.save_exported_program(trt_obj, filepath, ...), otherwise call
save_net_with_metadata(trt_obj, filepath, ...). Keep the warnings.catch_warnings
wrapper for the save_net_with_metadata branch and preserve existing kwargs and
flags (include_config_vals=False, append_timestamp=False) for the JIT path;
ensure imports/qualifiers reference torch.export.save_exported_program and
torch.export.ExportedProgram where needed.

In `@monai/losses/dice.py`:
- Around line 126-130: Update the DiceLoss.__init__ docstring Raises section to
document the two new ValueError conditions for the weight parameter: (1) raising
ValueError when weight is empty (weight.numel() == 0) and (2) raising ValueError
when any weight value is negative (weight.min() < 0); mention the parameter name
"weight" and include short descriptions of both error cases so users know why
the constructor may raise these exceptions.

In `@monai/networks/utils.py`:
- Around line 933-945: The current verify path uses exported.module() (in
variable loaded_module) even when a filename_or_obj was provided, so replace the
in-memory check with an actual reload from the saved archive: when
filename_or_obj is set, call torch.export.load on that filename/object with
map_location set to CPU to avoid device placement issues, then move the reloaded
module to device before running the comparison; keep using set_determinism and
ensure_tuple around model(*verify_args) and the reloaded_module(*verify_args) to
preserve the existing deterministic verification flow.

In `@tests/apps/detection/networks/test_retinanet.py`:
- Line 182: Update the stale inline comment inside the test_onnx block in
tests/apps/detection/networks/test_retinanet.py: replace the incorrect reference
to "torch.export" with a clear comment that mentions ONNX export (e.g., "test
whether support ONNX export" or similar) so the comment matches the test_onnx
function behavior and avoids confusion.

---

Duplicate comments:
In `@monai/bundle/scripts.py`:
- Around line 759-761: The exported-module branch in load ignores the device
parameter because it simply returns load_exported_program(full_path,
more_extra_files=config_files or ()), so calls like load(..., device="cuda:0",
load_exported_module=True) don't honor device; update the load logic to pass the
device through to the exported-module loader (e.g., add a device argument to
load_exported_program or call a wrapper that moves the loaded program to the
requested device), referencing load_exported_program, load,
load_exported_module, full_path, config_files and ensuring the device is applied
the same way as in the weights and TorchScript paths.

In `@monai/networks/nets/dints.py`:
- Around line 672-678: The pre-computed _active_flags uses
bool(self.arch_code_a[...]) which treats any non-zero value as active but
cell_tree and downstream lookups only contain entries for arch_code_a == 1;
change construction of _active_flags to test equality to 1 (e.g.,
self.arch_code_a[b, r] == 1) so only cells with value exactly 1 are marked
active, and make the same equality-to-1 change where _active_flags is consulted
later (the code around the forward()/lines handling the active flag checks) to
avoid missing-key failures with non-1 values; reference symbols: _active_flags,
arch_code_a, cell_tree, forward.
- Around line 376-380: The code assumes self.node_a has the correct shape and
won't change when building self._node_flags; validate that self.node_a is a
tensor with shape [B, num_depths] (i.e., node_a.shape[1] == self.num_depths) and
then take an immutable snapshot (e.g., clone or convert to a Python nested
tuple/list from the snapshot) before computing self._node_flags so later
mutations can't change gating; update the construction near the current
_node_flags creation to first validate shape and snapshot node_a, then compute
the list of booleans from that snapshot, and ensure forward uses the snapshot
(or the boolean flags) rather than the original mutable self.node_a.

In `@tests/apps/detection/networks/test_retinanet.py`:
- Line 145: Replace the overly broad "except BaseException:" handlers in
tests/apps/detection/networks/test_retinanet.py with a narrower catch (e.g.,
"except Exception:" or a specific exception type relevant to the failing
operation) so interrupts and system-exit signals are not suppressed; locate the
two occurrences of "except BaseException" and update them to "except Exception"
(or a more specific exception if you can determine it from the surrounding code
in the test function names or helper calls) and run the provided ripgrep check
(rg -nP 'except\s+BaseException'
tests/apps/detection/networks/test_retinanet.py) to confirm no matches remain.

In `@tests/losses/test_multi_scale.py`:
- Around line 79-82: The test_export function binds an unused variable
expected_val from TEST_CASES which triggers a lint warning; update the test to
only unpack the two needed items by changing the tuple unpack in test_export to
"input_param, input_data = TEST_CASES[0]" and keep the rest (instantiating
MultiScaleLoss and calling test_export_save) unchanged so symbols referenced are
test_export, TEST_CASES, MultiScaleLoss, and test_export_save.

---

Nitpick comments:
In `@monai/losses/focal_loss.py`:
- Line 196: The forward() docstring still claims it raises errors for invalid
weight values even though weight validation was moved to __init__; update the
docstrings so that __init__ (in class FocalLoss) documents the ValueError for
negative weight values and forward() no longer lists weight validation under
Raises—only keep raises that actually occur during forward execution;
specifically edit the __init__ docstring to mention "Raises: ValueError: if
weight < 0" and remove that clause from the forward() docstring in
focal_loss.py.

In `@monai/networks/nets/dints.py`:
- Around line 40-50: Update the docstrings for the abstract forward methods to
follow Google-style by adding Args, Returns, and Raises sections: for
CellInterface.forward (method name forward in class CellInterface) add an Args:
x (torch.Tensor), weight (torch.Tensor | None) description; Returns:
torch.Tensor; Raises: NotImplementedError when not implemented; and for
StemInterface.forward (method name forward in class StemInterface) add an Args:
x (torch.Tensor) description; Returns: torch.Tensor; Raises: NotImplementedError
when not implemented. Ensure the docstring wording is concise and uses exact
type names as in the signatures.

In `@tests/bundle/test_bundle_export_checkpoint.py`:
- Around line 109-117: Update the test to actually exercise the omitted-argument
path by removing the "--key_in_ckpt" flag from the command constructed in
test_default_value (or adding a new sub-case) so the code path where key_in_ckpt
is not provided is executed; locate the cmd list constructed in
tests/bundle/test_bundle_export_checkpoint.py (the variable named cmd that
currently contains "--key_in_ckpt", key_in_ckpt) and create a variant that does
not append those two entries, then invoke the CLI
(monai.bundle.export_checkpoint) with that cmd so the default behavior is
asserted.
- Around line 35-46: Add concise Google-style docstrings to the new/modified
test definitions: setUp, tearDown, test_export, and test_default_value. For each
function include a one-line summary and brief sections for Args (describe any
parameters such as key_in_ckpt for test_export), Returns (if applicable,
typically None for tests), and Raises (mention raised exceptions if the test
expects any), following Google-style formatting; attach these to the
corresponding definitions (e.g., def setUp(self):, def tearDown(self):, def
test_export(self, key_in_ckpt):, def test_default_value(self):) so they satisfy
the project docstring guidelines.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 41c5bfa1-7380-4322-a0f9-489ea14b42ff

📥 Commits

Reviewing files that changed from the base of the PR and between 9f4bac0 and 59f8a7b.

📒 Files selected for processing (22)
  • docs/source/mb_specification.rst
  • monai/apps/mmars/mmars.py
  • monai/bundle/scripts.py
  • monai/data/__init__.py
  • monai/data/export_utils.py
  • monai/losses/dice.py
  • monai/losses/focal_loss.py
  • monai/networks/nets/dints.py
  • monai/networks/utils.py
  • monai/utils/deprecate_utils.py
  • tests/apps/detection/networks/test_retinanet.py
  • tests/bundle/test_bundle_export_checkpoint.py
  • tests/data/meta_tensor/test_meta_tensor.py
  • tests/data/test_export_utils.py
  • tests/losses/image_dissimilarity/test_local_normalized_cross_correlation_loss.py
  • tests/losses/test_multi_scale.py
  • tests/min_tests.py
  • tests/networks/nets/test_ahnet.py
  • tests/networks/nets/test_efficientnet.py
  • tests/networks/nets/test_senet.py
  • tests/networks/test_convert_to_export.py
  • tests/networks/test_varnet.py
🚧 Files skipped from review as they are similar to previous changes (7)
  • tests/networks/nets/test_ahnet.py
  • monai/apps/mmars/mmars.py
  • tests/losses/image_dissimilarity/test_local_normalized_cross_correlation_loss.py
  • tests/data/test_export_utils.py
  • tests/networks/test_convert_to_export.py
  • tests/data/meta_tensor/test_meta_tensor.py
  • tests/networks/nets/test_efficientnet.py

Comment on lines +126 to +130
if weight is not None:
    if weight.numel() == 0:
        raise ValueError("`weight` must not be empty.")
    if weight.min() < 0:
        raise ValueError("the value/values of the `weight` should be no less than 0.")

⚠️ Potential issue | 🟡 Minor

Document the new weight validation errors in the constructor docstring.

This block now raises ValueError for empty and negative weight, but DiceLoss.__init__ Raises section doesn’t describe these cases.

📝 Suggested docstring update
         Raises:
             TypeError: When ``other_act`` is not an ``Optional[Callable]``.
             ValueError: When more than 1 of [``sigmoid=True``, ``softmax=True``, ``other_act is not None``].
                 Incompatible values.
+            ValueError: When ``weight`` is empty.
+            ValueError: When ``weight`` contains negative values.

As per coding guidelines, "Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of Google-style docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/losses/dice.py` around lines 126 - 130, Update the DiceLoss.__init__
docstring Raises section to document the two new ValueError conditions for the
weight parameter: (1) raising ValueError when weight is empty (weight.numel() ==
0) and (2) raising ValueError when any weight value is negative (weight.min() <
0); mention the parameter name "weight" and include short descriptions of both
error cases so users know why the constructor may raise these exceptions.

Comment on lines +933 to +945
# Always verify against the in-memory export to avoid device placement
# issues that can occur when reloading from file (torch.export.load does
# not support map_location).
loaded_module = exported.module()
loaded_module.to(device)
model.to(device)

with torch.no_grad():
    set_determinism(seed=0)
    torch_out = ensure_tuple(model(*verify_args))
    set_determinism(seed=0)
    export_out = ensure_tuple(loaded_module(*verify_args))
    set_determinism(seed=None)

⚠️ Potential issue | 🟠 Major

verify=True skips the actual round-trip load.

When filename_or_obj is set, this branch still compares against exported.module() instead of reloading the saved .pt2. That weakens both the public verify contract and the new test_export_save() coverage, because a broken archive can still pass here. If map_location is the blocker, do this check on CPU instead of falling back to the in-memory object.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/utils.py` around lines 933 - 945, The current verify path uses
exported.module() (in variable loaded_module) even when a filename_or_obj was
provided, so replace the in-memory check with an actual reload from the saved
archive: when filename_or_obj is set, call torch.export.load on that
filename/object with map_location set to CPU to avoid device placement issues,
then move the reloaded module to device before running the comparison; keep
using set_determinism and ensure_tuple around model(*verify_args) and the
reloaded_module(*verify_args) to preserve the existing deterministic
verification flow.

@aymuos15 aymuos15 force-pushed the torchscript_deprecation branch from 59f8a7b to eb0e9f2 on March 17, 2026 at 12:35
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
monai/networks/utils.py (1)

899-900: ⚠️ Potential issue | 🟠 Major

verify=True still bypasses saved-file validation.

Line 899 documents saved-model verification, but Lines 933-937 always verify against the in-memory exported.module(), so a corrupted .pt2 on disk can still pass.

Proposed fix
-        # Always verify against the in-memory export to avoid device placement
-        # issues that can occur when reloading from file (torch.export.load does
-        # not support map_location).
-        loaded_module = exported.module()
-        loaded_module.to(device)
-        model.to(device)
+        verify_device = torch.device(device)
+        if filename_or_obj is not None:
+            loaded_module = torch.export.load(filename_or_obj).module()
+            # torch.export.load has no map_location; verify persisted artifact on CPU.
+            verify_device = torch.device("cpu")
+        else:
+            loaded_module = exported.module()
+
+        verify_args = tuple(_recursive_to(i, verify_device) for i in inputs)
+        loaded_module.to(verify_device)
+        model.to(verify_device)
#!/bin/bash
# Verify that convert_to_export verify-path does not currently reload from file.
rg -n 'def convert_to_export|if verify:|filename_or_obj is not None|loaded_module = exported.module|torch\.export\.load' monai/networks/utils.py -C2

Also applies to: 933-937

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/networks/utils.py` around lines 899 - 900, convert_to_export's verify
path currently validates the in-memory exported.module() even when
filename_or_obj is provided, so corrupted files on disk can pass; update
convert_to_export to, when verify is True and filename_or_obj is not None, load
the saved export from disk (use torch.export.load or the existing load path that
handles file-like and path inputs) into a loaded_module and run the verification
against loaded_module instead of exported.module(); otherwise (no
filename_or_obj) continue verifying exported.module(); ensure you reference and
reuse the existing filename_or_obj handling, the loaded_module variable, and
keep proper error handling/cleanup around the load operation.
🧹 Nitpick comments (1)
monai/bundle/scripts.py (1)

1631-1662: Add a Raises: section to export_checkpoint docstring.

This function raises FileNotFoundError and ValueError, but the docstring does not document raised exceptions.

As per coding guidelines, "Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of Google-style docstrings."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/bundle/scripts.py` around lines 1631 - 1662, Add a "Raises:" section to
the export_checkpoint docstring that documents the exceptions this function can
throw: list FileNotFoundError with a short note (e.g., when ckpt_file,
meta_file, config_file or args_file paths are missing) and ValueError with a
short note (e.g., when required inputs are invalid or conflicting such as
missing config when args_file is None, invalid input_shape parsing, or
converter-related validation errors). Update the export_checkpoint docstring
(function name export_checkpoint) to include these two entries so callers know
when FileNotFoundError and ValueError may be raised.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/data/export_utils.py`:
- Line 27: The __all__ export list is unsorted which triggers Ruff RUF022;
update the __all__ variable to list identifiers in alphabetical order (e.g.,
["load_exported_program", "save_exported_program"]) so that the names exported
by the module (save_exported_program, load_exported_program) are sorted and
linting passes. Ensure you only change the order of the strings in __all__ and
do not modify function names or other code.

In `@monai/networks/utils.py`:
- Around line 798-804: _recursive_to currently only handles torch.Tensor and
tuple/list, so dict/mapping inputs leave tensor values on the original device;
update _recursive_to to detect Mapping (e.g., collections.abc.Mapping) and
recursively apply _recursive_to to each value while preserving the mapping type
(construct the same mapping class or use type(x)((k, _recursive_to(v, device))
for k,v in x.items())), ensuring nested tuples/lists/dicts are handled; keep
existing behavior for tensors, tuples, lists and other types and use the same
function name _recursive_to for recursion.
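A hedged sketch of the mapping-aware helper described above (the body is illustrative, not MONAI's actual `_recursive_to`; rebuilding via `type(x)(...)` assumes plain dict/list/tuple subclasses, not namedtuples):

```python
from collections.abc import Mapping

import torch


def _recursive_to(x, device):
    """Move tensors nested inside tuples/lists/mappings to `device`."""
    if isinstance(x, torch.Tensor):
        return x.to(device)
    if isinstance(x, Mapping):
        # Preserve the mapping type while recursing into each value.
        return type(x)({k: _recursive_to(v, device) for k, v in x.items()})
    if isinstance(x, (tuple, list)):
        return type(x)(_recursive_to(v, device) for v in x)
    return x


batch = {"image": torch.zeros(2, 3), "meta": [torch.ones(1), "case-0"]}
moved = _recursive_to(batch, torch.device("cpu"))
```

Non-tensor leaves (like the string above) pass through unchanged, and nested lists inside dicts are now handled instead of being left on the original device.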

---

Duplicate comments:
In `@monai/networks/utils.py`:
- Around line 899-900: convert_to_export's verify path currently validates the
in-memory exported.module() even when filename_or_obj is provided, so corrupted
files on disk can pass; update convert_to_export to, when verify is True and
filename_or_obj is not None, load the saved export from disk (use
torch.export.load or the existing load path that handles file-like and path
inputs) into a loaded_module and run the verification against loaded_module
instead of exported.module(); otherwise (no filename_or_obj) continue verifying
exported.module(); ensure you reference and reuse the existing filename_or_obj
handling, the loaded_module variable, and keep proper error handling/cleanup
around the load operation.

---

Nitpick comments:
In `@monai/bundle/scripts.py`:
- Around lines 1631-1662: Add a "Raises:" section to the `export_checkpoint`
docstring documenting the exceptions the function can throw: list
`FileNotFoundError` with a short note (e.g., when the `ckpt_file`, `meta_file`,
`config_file`, or `args_file` path is missing) and `ValueError` with a short
note (e.g., when required inputs are invalid or conflicting, such as a missing
config when `args_file` is None, an unparsable `input_shape`, or
converter-related validation errors). With these two entries, callers know
when each exception may be raised.
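A sketch of what the requested docstring addition could look like; the signature and wording below are illustrative, not MONAI's final text:

```python
def export_checkpoint(ckpt_file, meta_file=None, config_file=None, args_file=None):
    """Export a bundle checkpoint to the torch.export ``.pt2`` format.

    Raises:
        FileNotFoundError: if ``ckpt_file``, ``meta_file``, ``config_file``,
            or ``args_file`` points to a missing path.
        ValueError: if required inputs are invalid or conflicting, e.g. no
            config is given when ``args_file`` is ``None``, ``input_shape``
            cannot be parsed, or converter validation fails.
    """
    ...  # body elided; only the docstring shape is the point here
```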


📥 Commits

Reviewing files that changed from the base of the PR and between 59f8a7b and 47ee9ba.

📒 Files selected for processing (23)
  • docs/source/mb_specification.rst
  • monai/apps/mmars/mmars.py
  • monai/bundle/scripts.py
  • monai/data/__init__.py
  • monai/data/export_utils.py
  • monai/losses/dice.py
  • monai/losses/focal_loss.py
  • monai/networks/nets/dints.py
  • monai/networks/utils.py
  • monai/utils/deprecate_utils.py
  • tests/apps/detection/networks/test_retinanet.py
  • tests/bundle/test_bundle_export_checkpoint.py
  • tests/data/meta_tensor/test_meta_tensor.py
  • tests/data/test_export_utils.py
  • tests/losses/image_dissimilarity/test_local_normalized_cross_correlation_loss.py
  • tests/losses/test_multi_scale.py
  • tests/min_tests.py
  • tests/networks/nets/test_ahnet.py
  • tests/networks/nets/test_efficientnet.py
  • tests/networks/nets/test_senet.py
  • tests/networks/test_convert_to_export.py
  • tests/networks/test_varnet.py
  • tests/test_utils.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • monai/apps/mmars/mmars.py
  • monai/losses/dice.py
  • tests/data/meta_tensor/test_meta_tensor.py
  • monai/networks/nets/dints.py
  • monai/losses/focal_loss.py
  • tests/bundle/test_bundle_export_checkpoint.py

@aymuos15 aymuos15 force-pushed the torchscript_deprecation branch from 47ee9ba to 1a246d5 on March 17, 2026, 13:58
CI fixes:
- Apply black --skip-magic-trailing-comma formatting
- Fix isort ordering in monai/data/__init__.py and test files
- Rename D/S to dynamic/static (ruff N806)
- Skip torch.export tests on PyTorch < 2.6 (test_export_save early return)
- Skip TestConvertToExport class on PyTorch < 2.6
- Skip test_export_with_dynamic_shapes on PyTorch < 2.9
- Revert incompatible networks to test_script_save (regunet, basic_unet,
  fpn, globalnet, voxelmorph, basic_unetplusplus, wasserstein_dice, masked_loss)
- Switch test_varnet to test_export_save (CoilSensitivityModel changes
  broke TorchScript compat)
- Add test_bundle_export_checkpoint to min_tests skip list
- Merge dev into torchscript_deprecation

CodeRabbit review fixes:
- Fix grammar in mb_specification.rst
- Document map_location ignored for .pt2 in mmars.py
- Make load_ts_module and load_exported_module mutually exclusive
- Handle meta_file as Sequence in export_checkpoint via ensure_tuple
- Add output length check before zip in convert_to_export verify
- Add output length check before zip in convert_to_trt verify
- Guard METADATA_FILENAME from being overwritten in save_exported_program
- Guard empty weight tensors in DiceLoss and FocalLoss
- Suppress deprecation warning in convert_to_trt fallback path
- Use save_exported_program for dynamo objects in _save_trt_model
- Sort __all__ in export_utils.py (RUF022)
- Add dict support to _recursive_to helper
- Fix stale comment in test_retinanet ONNX test
- Rename unused expected_shape param in test_varnet

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
@aymuos15 aymuos15 force-pushed the torchscript_deprecation branch from 933c12b to cbc0346 on March 17, 2026, 14:34