
Conversation

@virginiafdez
Contributor

Fixes #8627.
Moves the perceptual loss backbone code to the MONAI repository https://github.com/Project-MONAI/perceptual-models and the checkpoints to Hugging Face.

Description

This PR changes and simplifies the torch.hub loading process and retrieves the models from the Hugging Face Hub.
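
For reference, the new loading path looks roughly like this (a sketch only; the model name is one of the entries hosted in the new repository, and the call needs network access on first use):

    import torch

    # models are now served from the Project-MONAI hub repository instead of Google Drive
    backbone = torch.hub.load(
        "Project-MONAI/perceptual-models:main",
        "medicalnet_resnet10_23datasets",
        trust_repo=True,  # the repository is hard-coded and trusted
    )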

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • [N/A] New tests added to cover the changes.
  • [N/A] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Virginia Fernandez added 3 commits November 26, 2025 10:11
…um-number-of-downloads' of github.com:virginiafdez/MONAI into 8627-perceptual-loss-errors-out-after-hitting-the-maximum-number-of-downloads
@coderabbitai
Contributor

coderabbitai bot commented Dec 2, 2025

Walkthrough

Perceptual loss backbones now load from Project-MONAI/perceptual-models via torch.hub. In summary:

  • Added the HF_MONAI_MODELS constant and renamed/expanded the enum to PerceptualNetworkType with new members (radimagenet_resnet50, medicalnet_resnet10_23datasets, medicalnet_resnet50_23datasets, resnet50).
  • PerceptualLoss, MedicalNetPerceptualSimilarity, and RadImageNetPerceptualSimilarity accept a new cache_dir parameter and propagate it to constructors and torch.hub.load calls.
  • MedicalNet models now enforce spatial_dims==3 and is_fake_3d==False; channel_wise is restricted to MedicalNet (non-MedicalNet with channel_wise raises).
  • Model-name inputs are validated against HF_MONAI_MODELS.
  • The axis/slicing bug in channel-wise feature differencing is fixed (slice end index corrected).
  • Docstrings and comments updated to reflect the new hub sources and cache_dir behavior.
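
As a usage illustration of the changes above, a minimal sketch (the cache path is an arbitrary example, and constructing the loss downloads the backbone on first use):

    from monai.losses import PerceptualLoss

    # MedicalNet backbones run natively in 3D, so is_fake_3d must be False and
    # channel_wise is supported; cache_dir is forwarded to the torch.hub download
    loss_fn = PerceptualLoss(
        spatial_dims=3,
        network_type="medicalnet_resnet10_23datasets",
        is_fake_3d=False,
        channel_wise=True,
        cache_dir="/tmp/monai_hub_cache",  # illustrative path, not a MONAI default
    )
    # loss = loss_fn(prediction, target)  # tensors shaped (B, C, D, H, W)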

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Areas needing extra attention:

  • MedicalNet validation: enforcement of spatial_dims==3, is_fake_3d==False, and restriction of channel_wise to MedicalNet
  • cache_dir propagation through PerceptualLoss → MedicalNetPerceptualSimilarity / RadImageNetPerceptualSimilarity → torch.hub.load
  • torch.hub.load targets changed to Project-MONAI/perceptual-models and model-name validation against HF_MONAI_MODELS
  • Channel-wise computation fix: corrected slice indexing for feats_diff
  • Public API changes: new cache_dir parameters and renamed enum PerceptualNetworkType (check imports and backward-compatibility)

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 25.00%, which is insufficient; the required threshold is 80.00%. Resolution: you can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Title check: ✅ Passed. Title clearly identifies the bug fix (issue #8627) by referencing the specific problem with perceptual loss download limits.
  • Description check: ✅ Passed. Description links to the issue, explains the solution (moving to the MONAI repo and HuggingFace), and documents the testing and documentation updates performed.
  • Linked Issues check: ✅ Passed. Changes fully address issue #8627 by migrating models from Google Drive to HuggingFace and updating the torch.hub loading process to prevent download quota failures.
  • Out of Scope Changes check: ✅ Passed. All changes in perceptual.py directly support the migration objective: new model sources, cache_dir propagation, validation enums, and strict MedicalNet handling are all within scope.


Virginia Fernandez added 3 commits December 5, 2025 16:51
…um-number-of-downloads' of github.com:virginiafdez/MONAI into 8627-perceptual-loss-errors-out-after-hitting-the-maximum-number-of-downloads
@virginiafdez virginiafdez marked this pull request as ready for review December 5, 2025 17:02
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)

290-293: Indexing bug in channel-wise slicing.

l_idx : i + r_idx should be l_idx : r_idx. Current code produces incorrect slice ranges for i > 0.

             for i in range(input.shape[1]):
                 l_idx = i * feats_per_ch
                 r_idx = (i + 1) * feats_per_ch
-                results[:, i, ...] = feats_diff[:, l_idx : i + r_idx, ...].sum(dim=1)
+                results[:, i, ...] = feats_diff[:, l_idx:r_idx, ...].sum(dim=1)
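
A quick standalone check of the two bounds (with an illustrative feats_per_ch = 4) confirms the diagnosis:

    feats_per_ch = 4  # illustrative value only
    for i in range(3):
        l_idx, r_idx = i * feats_per_ch, (i + 1) * feats_per_ch
        print(i, (l_idx, i + r_idx), (l_idx, r_idx))
    # i=0: (0, 4) vs (0, 4)   -> identical, so the bug is invisible for single-channel inputs
    # i=1: (4, 9) vs (4, 8)   -> one feature from the next channel leaks into the sum
    # i=2: (8, 14) vs (8, 12) -> two features leak; the error grows with i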
🧹 Nitpick comments (3)
monai/losses/perceptual.py (3)

105-106: Add stacklevel=2 to warning.

Ensures warning points to caller's code, not this internal line.

             if not channel_wise:
-                warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.")
+                warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2)
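
For context, a small generic illustration of what stacklevel=2 changes (plain Python, not MONAI code):

    import warnings

    def medicalnet_check():
        # with stacklevel=2 the warning is attributed to the caller of this function,
        # which is usually the line the user actually needs to change
        warnings.warn("MedicalNet networks support channel-wise loss.", stacklevel=2)

    medicalnet_check()  # the emitted UserWarning points at this call site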

219-225: Document cache_dir parameter in docstring.

The cache_dir parameter (line 232) is missing from the Args section.

         channel_wise: if True, the loss is returned per channel. Otherwise the loss is averaged over the channels.
                 Defaults to ``False``.
+        cache_dir: path to cache directory to save the pretrained network weights.
     """

324-328: Document cache_dir parameter in docstring.

The cache_dir parameter is missing from the Args section.

         verbose: if false, mute messages from torch Hub load function.
+        cache_dir: path to cache directory to save the pretrained network weights.
     """
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 15fd428 and c99e16e.

📒 Files selected for processing (1)
  • monai/losses/perceptual.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
  • StrEnum (68-90)
monai/utils/module.py (1)
  • optional_import (315-445)
🪛 Ruff (0.14.7)
monai/losses/perceptual.py

102-104: Avoid specifying long messages outside the exception class

(TRY003)


106-106: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


235-235: Unused lambda argument: a

(ARG005)


235-235: Unused lambda argument: b

(ARG005)


235-235: Unused lambda argument: c

(ARG005)


237-239: Avoid specifying long messages outside the exception class

(TRY003)


333-335: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: build-docs
  • GitHub Check: packaging
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: quick-py3 (windows-latest)
🔇 Additional comments (1)
monai/losses/perceptual.py (1)

23-28: LGTM!

Immutable tuple for allowed model names addresses the security concern from prior review.

ericspod and others added 3 commits December 5, 2025 17:32
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
monai/losses/perceptual.py (2)

112-116: Incorrect error message.

Message references "Adversarial Loss" but this is PerceptualLoss.

         if network_type.lower() not in list(PercetualNetworkType):
             raise ValueError(
-                "Unrecognised criterion entered for Adversarial Loss. Must be one in: %s"
+                "Unrecognised criterion entered for Perceptual Loss. Must be one in: %s"
                 % ", ".join(PercetualNetworkType)
             )

291-293: Off-by-one slicing bug.

The slice l_idx : i + r_idx should be l_idx : r_idx. The i + causes incorrect indices for channels after the first.

                 l_idx = i * feats_per_ch
                 r_idx = (i + 1) * feats_per_ch
-                results[:, i, ...] = feats_diff[:, l_idx : i + r_idx, ...].sum(dim=1)
+                results[:, i, ...] = feats_diff[:, l_idx:r_idx, ...].sum(dim=1)
♻️ Duplicate comments (1)
monai/losses/perceptual.py (1)

133-134: Missing cache_dir propagation.

RadImageNetPerceptualSimilarity accepts cache_dir but it's not passed here.

         elif "radimagenet_" in network_type:
-            self.perceptual_function = RadImageNetPerceptualSimilarity(net=network_type, verbose=False)
+            self.perceptual_function = RadImageNetPerceptualSimilarity(net=network_type, verbose=False, cache_dir=cache_dir)
🧹 Nitpick comments (4)
monai/losses/perceptual.py (4)

105-106: Add stacklevel=2 to warning.

Without explicit stacklevel, the warning will point to this line instead of the caller.

         if not channel_wise:
-            warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.")
+            warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2)

120-123: Add stacklevel=2 to warning.

             warnings.warn(
-                f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls."
+                f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls.",
+                stacklevel=2,
             )

219-225: Missing cache_dir in docstring.

Per coding guidelines, all parameters should be documented.

Add to Args section:

        cache_dir: path to cache directory to save the pretrained network weights.

324-328: Missing cache_dir in docstring.

Add to Args section:

        cache_dir: path to cache directory to save the pretrained network weights.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between c99e16e and 2156b84.

📒 Files selected for processing (1)
  • monai/losses/perceptual.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/perceptual.py
🪛 Ruff (0.14.7)
monai/losses/perceptual.py

102-104: Avoid specifying long messages outside the exception class

(TRY003)


106-106: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


235-235: Unused lambda argument: a

(ARG005)


235-235: Unused lambda argument: b

(ARG005)


235-235: Unused lambda argument: c

(ARG005)


237-239: Avoid specifying long messages outside the exception class

(TRY003)


333-335: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: packaging
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: build-docs
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-py3 (3.11)
🔇 Additional comments (4)
monai/losses/perceptual.py (4)

23-28: LGTM!

Valid model names centralized as a constant.


34-44: LGTM!

New network types properly added to the enum.


235-244: LGTM!

Model validation and hub loading properly configured with trust_repo=True and explicit branch. The lambda disabling fork validation is an intentional workaround.


330-337: LGTM!

Consistent with MedicalNetPerceptualSimilarity: validates model names, uses :main branch, and includes trust_repo=True.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Virginia Fernandez <61539159+virginiafdez@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)

118-123: Add stacklevel to cache_dir warning.

         warnings.warn(
-            f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls."
+            f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls.",
+            stacklevel=2
         )
🧹 Nitpick comments (1)
monai/losses/perceptual.py (1)

23-28: Consider frozenset for O(1) membership checks.

While the tuple works fine for 3 items, a frozenset would provide O(1) lookups and better communicate intent for membership validation.

-HF_MONAI_MODELS = (
+HF_MONAI_MODELS = frozenset((
     "medicalnet_resnet10_23datasets",
     "medicalnet_resnet50_23datasets",
     "radimagenet_resnet50",
-)
+))
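
For illustration, membership checks read the same with either container; the frozenset additionally guarantees the whitelist cannot be mutated (the error wording below is only an example):

    HF_MONAI_MODELS = frozenset(
        ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
    )

    net = "medicalnet_resnet10_23datasets"
    if net not in HF_MONAI_MODELS:  # O(1) average-case lookup
        raise ValueError(f"Invalid model name '{net}'. Must be one of: {', '.join(sorted(HF_MONAI_MODELS))}.")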
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 2156b84 and e2b982e.

📒 Files selected for processing (1)
  • monai/losses/perceptual.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
  • StrEnum (68-90)
monai/utils/module.py (1)
  • optional_import (315-445)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py

102-104: Avoid specifying long messages outside the exception class

(TRY003)


106-106: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


235-235: Unused lambda argument: a

(ARG005)


235-235: Unused lambda argument: b

(ARG005)


235-235: Unused lambda argument: c

(ARG005)


237-239: Avoid specifying long messages outside the exception class

(TRY003)


333-335: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: build-docs
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: packaging
  • GitHub Check: quick-py3 (macOS-latest)
🔇 Additional comments (4)
monai/losses/perceptual.py (4)

236-239: Validation against HF_MONAI_MODELS is correct.

Good explicit validation with helpful error message.


241-243: Model loading looks correct.

trust_repo=True is appropriate for the hardcoded Project-MONAI repository, and cache_dir is properly propagated.


332-337: Validation and loading are correct.

Good validation against HF_MONAI_MODELS and proper cache_dir propagation. Note that this class doesn't disable fork validation (unlike MedicalNetPerceptualSimilarity line 235), which is the preferred approach.


143-144: No action required. The LPIPS library does not expose a cache_dir parameter in its constructor. The original lpips package automatically caches pretrained model files under a default directory (typically ~/.lpips). The instantiation at line 144 is correct as written and is consistent with the LPIPS library's design.

Likely an incorrect or invalid review comment.

Virginia Fernandez added 3 commits December 9, 2025 16:04
…um-number-of-downloads' of github.com:virginiafdez/MONAI into 8627-perceptual-loss-errors-out-after-hitting-the-maximum-number-of-downloads
….ac.uk>

I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: b1e4a50
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: fa0639b
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: 5594bfe
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: c99e16e
I, Virginia Fernandez <virginia.fernandez@kcl.ac.uk>, hereby add my Signed-off-by to this commit: e3be8de

Signed-off-by: Virginia Fernandez <virginia.fernandez@kcl.ac.uk>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
monai/losses/perceptual.py (3)

235-235: Lambda monkey-patch bypasses security validation.

Disabling _validate_not_a_forked_repo modifies global state and circumvents torch.hub's fork validation. Consider documenting rationale or using trust_repo parameter instead.
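
For reference, the flagged pattern and the alternative suggested above look roughly like this (a sketch, not the exact repository code):

    import torch

    # flagged pattern: replaces a private torch.hub helper for the whole process,
    # silently skipping the fork check for every subsequent torch.hub call
    torch.hub._validate_not_a_forked_repo = lambda a, b, c: True

    # suggested alternative: keep trust handling explicit and per call
    model = torch.hub.load(
        "Project-MONAI/perceptual-models:main",
        "medicalnet_resnet10_23datasets",
        trust_repo=True,
    )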


35-35: Class name typo still present.

Past review marked this as addressed, but PercetualNetworkType should be PerceptualNetworkType. This is a public API class. Fix requires updating all references.

-class PercetualNetworkType(StrEnum):
+class PerceptualNetworkType(StrEnum):

106-106: Add stacklevel to warning.

-            warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.")
+            warnings.warn("MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.", stacklevel=2)
🧹 Nitpick comments (1)
monai/losses/perceptual.py (1)

121-123: Add stacklevel to warning for consistency.

-            warnings.warn(
-                f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls."
-            )
+            warnings.warn(
+                f"Setting cache_dir to {cache_dir}, this may change the default cache dir for all torch.hub calls.",
+                stacklevel=2
+            )
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between e2b982e and b02053b.

📒 Files selected for processing (1)
  • monai/losses/perceptual.py (8 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
  • StrEnum (68-90)
monai/utils/module.py (1)
  • optional_import (315-445)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py

102-104: Avoid specifying long messages outside the exception class

(TRY003)


106-106: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)


235-235: Unused lambda argument: a

(ARG005)


235-235: Unused lambda argument: b

(ARG005)


235-235: Unused lambda argument: c

(ARG005)


237-239: Avoid specifying long messages outside the exception class

(TRY003)


333-335: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: packaging
  • GitHub Check: build-docs
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-os (macOS-latest)
🔇 Additional comments (6)
monai/losses/perceptual.py (6)

23-28: LGTM.

Model name whitelist is correctly defined and serves the security validation purpose mentioned in past reviews.


128-144: LGTM.

cache_dir correctly propagated to MedicalNet (line 131) and RadImageNet (line 134) constructors.


236-239: LGTM.

Model name validation against HF_MONAI_MODELS whitelist is correctly implemented with clear error message.


241-244: LGTM.

torch.hub.load correctly specifies :main branch, includes trust_repo=True, and propagates cache_dir.


286-293: LGTM.

Channel-wise slice indexing correctly uses l_idx:r_idx to extract features per input channel.


330-337: LGTM.

cache_dir parameter properly propagated (line 330, 336). Model name validation (lines 332-335) and torch.hub.load call (lines 336-337) correctly implemented with :main branch and trust_repo=True.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between b02053b and 6dfc209.

📒 Files selected for processing (1)
  • monai/losses/perceptual.py (9 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/enums.py (1)
  • StrEnum (68-90)
monai/utils/module.py (1)
  • optional_import (315-445)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py

102-104: Avoid specifying long messages outside the exception class

(TRY003)


110-110: Avoid specifying long messages outside the exception class

(TRY003)


236-238: Avoid specifying long messages outside the exception class

(TRY003)


332-334: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: build-docs
  • GitHub Check: packaging
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-pytorch (2.7.1)
🔇 Additional comments (7)
monai/losses/perceptual.py (7)

23-28: LGTM! Constant provides security validation.

Frozenset prevents modification and ensures only trusted model names are downloaded from the hub.


35-44: LGTM! Enum properly renamed and expanded.

Typo fixed (PercetualNetworkType → PerceptualNetworkType) and new network types added to match available models.


112-116: LGTM! Validation logic is correct.

Properly validates network_type against the enum members.


128-144: LGTM! cache_dir properly propagated where supported.

MedicalNet and RadImageNet paths receive cache_dir; TorchVision and LPIPS paths correctly omit it as those classes don't support the parameter.


228-243: LGTM! Constructor properly implements validation and hub loading.

Model name validated against HF_MONAI_MODELS, cache_dir propagated to torch.hub.load, and trust_repo=True set appropriately.


289-292: LGTM! Critical bug fix in channel-wise slicing.

Corrected from l_idx : i + r_idx to l_idx : r_idx. Since r_idx = (i + 1) * feats_per_ch, the original code incorrectly added i again.


329-336: LGTM! Constructor matches MedicalNet pattern.

Validates model name, propagates cache_dir, and uses trust_repo=True with explicit branch specifier.

Signed-off-by: Virginia Fernandez <virginia.fernandez@kcl.ac.uk>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
monai/losses/perceptual.py (1)

280-289: Channel-wise MedicalNet path: device/dtype mismatch risk; slice fix is correct.

The new slice feats_diff[:, l_idx:r_idx, ...] is the right fix for grouping features per input channel. However, results is currently allocated on the default CPU device and default dtype, which will break for GPU inputs or non-default dtypes.

Consider:

-        feats_diff: torch.Tensor = (feats_input - feats_target) ** 2
-        if self.channel_wise:
-            results = torch.zeros(
-                feats_diff.shape[0], input.shape[1], feats_diff.shape[2], feats_diff.shape[3], feats_diff.shape[4]
-            )
+        feats_diff: torch.Tensor = (feats_input - feats_target) ** 2
+        if self.channel_wise:
+            results = torch.zeros(
+                feats_diff.shape[0],
+                input.shape[1],
+                feats_diff.shape[2],
+                feats_diff.shape[3],
+                feats_diff.shape[4],
+                dtype=feats_diff.dtype,
+                device=feats_diff.device,
+            )
             for i in range(input.shape[1]):
                 l_idx = i * feats_per_ch
                 r_idx = (i + 1) * feats_per_ch
                 results[:, i, ...] = feats_diff[:, l_idx:r_idx, ...].sum(dim=1)
♻️ Duplicate comments (1)
monai/losses/perceptual.py (1)

94-104: Fix typo in MedicalNet warning message (already flagged earlier).

The warning string still has the "supp, ort" typo; it should read “support”.

         if "medicalnet_" in network_type:
@@
-            if not channel_wise:
-                warnings.warn(
-                    "MedicalNet networks supp, ort channel-wise loss. Consider setting channel_wise=True.", stacklevel=2
-                )
+            if not channel_wise:
+                warnings.warn(
+                    "MedicalNet networks support channel-wise loss. Consider setting channel_wise=True.",
+                    stacklevel=2,
+                )
🧹 Nitpick comments (2)
monai/losses/perceptual.py (2)

31-40: Enum-backed network_type default looks good; consider reflecting it in the type hint.

Using PerceptualNetworkType as the default is nice for discoverability. For stricter typing, you might consider allowing the enum explicitly in the annotation (without changing runtime behavior), e.g.:

-        network_type: str = PerceptualNetworkType.alex,
+        network_type: str | PerceptualNetworkType = PerceptualNetworkType.alex,

This remains backwards compatible because StrEnum inherits from str.

Also applies to: 77-88
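
A tiny check of the backwards-compatibility claim (a sketch; only a subset of members is shown):

    from monai.utils.enums import StrEnum

    class PerceptualNetworkType(StrEnum):
        alex = "alex"
        medicalnet_resnet10_23datasets = "medicalnet_resnet10_23datasets"

    # members are str subclasses, so existing plain-string call sites keep working
    assert PerceptualNetworkType.alex == "alex"
    assert "alex" in list(PerceptualNetworkType)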


212-224: Document the new cache_dir parameter in class docstrings.

cache_dir was added to MedicalNetPerceptualSimilarity and RadImageNetPerceptualSimilarity but isn’t mentioned in their docstrings; this breaks the “all parameters documented” guideline and can confuse users.

Suggested docstring updates:

 class MedicalNetPerceptualSimilarity(nn.Module):
@@
     Args:
         net: {``"medicalnet_resnet10_23datasets"``, ``"medicalnet_resnet50_23datasets"``}
             Specifies the network architecture to use. Defaults to ``"medicalnet_resnet10_23datasets"``.
         verbose: if false, mute messages from torch Hub load function.
         channel_wise: if True, the loss is returned per channel. Otherwise the loss is averaged over the channels.
-                Defaults to ``False``.
+            Defaults to ``False``.
+        cache_dir: optional path to a cache directory used by ``torch.hub`` for downloading and caching weights.
@@
-    Args:
-        net: {``"radimagenet_resnet50"``}
-            Specifies the network architecture to use. Defaults to ``"radimagenet_resnet50"``.
-        verbose: if false, mute messages from torch Hub load function.
-    """
+    Args:
+        net: {``"radimagenet_resnet50"``}
+            Specifies the network architecture to use. Defaults to ``"radimagenet_resnet50"``.
+        verbose: if false, mute messages from torch Hub load function.
+        cache_dir: optional path to a cache directory used by ``torch.hub`` for downloading and caching weights.
+    """

Also applies to: 227-231, 313-323, 325-325

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 6dfc209 and d258390.

📒 Files selected for processing (1)
  • monai/losses/perceptual.py (9 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/losses/perceptual.py
🧬 Code graph analysis (1)
monai/losses/perceptual.py (2)
monai/utils/module.py (1)
  • optional_import (315-445)
monai/utils/enums.py (1)
  • StrEnum (68-90)
🪛 Ruff (0.14.8)
monai/losses/perceptual.py

97-99: Avoid specifying long messages outside the exception class

(TRY003)


107-107: Avoid specifying long messages outside the exception class

(TRY003)


235-235: Avoid specifying long messages outside the exception class

(TRY003)


328-328: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (2)
monai/losses/perceptual.py (2)

31-40: PerceptualNetworkType enum definition itself looks solid.

Enum values are consistent with the supported backends and the docstring is now spelled correctly; no additional changes needed here.


131-133: RadImageNet hub loading and cache_dir propagation look correct.

RadImageNetPerceptualSimilarity now loads from Project-MONAI/perceptual-models:main with trust_repo=True and forwards cache_dir into torch.hub.load, matching the intended repository and allowing callers to control cache location.

Also applies to: 317-331

Comment on lines +22 to +25
# Valid model name to download from the repository
HF_MONAI_MODELS = frozenset(
("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
)
Contributor


⚠️ Potential issue | 🟠 Major

Restrict model validation per family and guard 3D path to MedicalNet only.

As written, HF_MONAI_MODELS is shared by both MedicalNet and RadImageNet, and the 3D branch in PerceptualLoss always instantiates MedicalNetPerceptualSimilarity when spatial_dims == 3 and is_fake_3d is False, regardless of network_type. This leads to:

  • network_type="radimagenet_resnet50" with spatial_dims=3 and is_fake_3d=False being passed into MedicalNetPerceptualSimilarity, which will attempt to run a 2D RadImageNet backbone in a 3D MedicalNet path (shape/device errors at runtime instead of a clean validation failure).
  • MedicalNetPerceptualSimilarity and RadImageNetPerceptualSimilarity both accepting each other’s model names because they both use HF_MONAI_MODELS directly.

Recommend:

  1. Split the valid model-name set into MedicalNet-specific and RadImageNet-specific subsets and validate each class against its own subset.
  2. In the 3D branch, require that network_type is a MedicalNet variant before constructing MedicalNetPerceptualSimilarity, otherwise raise a clear ValueError.

Example patch:

@@
-# Valid model name to download from the repository
-HF_MONAI_MODELS = frozenset(
-    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
-)
+# Valid model names to download from the repository
+HF_MONAI_MODELS = frozenset(
+    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets", "radimagenet_resnet50")
+)
+HF_MONAI_MEDICALNET_MODELS = frozenset(
+    ("medicalnet_resnet10_23datasets", "medicalnet_resnet50_23datasets")
+)
+HF_MONAI_RADIMAGENET_MODELS = frozenset(("radimagenet_resnet50",))
@@
-        # If spatial_dims is 3, only MedicalNet supports 3D models, otherwise, spatial_dims=2 and fake_3D must be used.
-        if spatial_dims == 3 and is_fake_3d is False:
-            self.perceptual_function = MedicalNetPerceptualSimilarity(
-                net=network_type, verbose=False, channel_wise=channel_wise, cache_dir=cache_dir
-            )
+        # If spatial_dims is 3, only MedicalNet supports 3D models; other networks must use the fake 3D path.
+        if spatial_dims == 3 and is_fake_3d is False:
+            if "medicalnet_" not in network_type:
+                raise ValueError(
+                    "Only MedicalNet networks support 3D perceptual loss with is_fake_3d=False; "
+                    f"got network_type={network_type!r}."
+                )
+            self.perceptual_function = MedicalNetPerceptualSimilarity(
+                net=network_type, verbose=False, channel_wise=channel_wise, cache_dir=cache_dir
+            )
@@
-        if net not in HF_MONAI_MODELS:
-            raise ValueError(f"Invalid download model name '{net}'. Must be one of: {', '.join(HF_MONAI_MODELS)}.")
+        if net not in HF_MONAI_MEDICALNET_MODELS:
+            raise ValueError(
+                f"Invalid MedicalNet model name '{net}'. Must be one of: {', '.join(HF_MONAI_MEDICALNET_MODELS)}."
+            )
@@
-        if net not in HF_MONAI_MODELS:
-            raise ValueError(f"Invalid download model name '{net}'. Must be one of: {', '.join(HF_MONAI_MODELS)}.")
+        if net not in HF_MONAI_RADIMAGENET_MODELS:
+            raise ValueError(
+                f"Invalid RadImageNet model name '{net}'. Must be one of: {', '.join(HF_MONAI_RADIMAGENET_MODELS)}."
+            )

Also applies to: 94-107, 125-133, 234-239, 325-331

Comment on lines +109 to 113
if network_type.lower() not in list(PerceptualNetworkType):
raise ValueError(
"Unrecognised criterion entered for Adversarial Loss. Must be one in: %s"
% ", ".join(PercetualNetworkType)
% ", ".join(PerceptualNetworkType)
)
Contributor


⚠️ Potential issue | 🟡 Minor

Error message still mentions “Adversarial Loss”.

The validation is for PerceptualLoss.network_type, but the message references “Adversarial Loss”, which is confusing for users.

-        if network_type.lower() not in list(PerceptualNetworkType):
-            raise ValueError(
-                "Unrecognised criterion entered for Adversarial Loss. Must be one in: %s"
-                % ", ".join(PerceptualNetworkType)
-            )
+        if network_type.lower() not in list(PerceptualNetworkType):
+            raise ValueError(
+                "Unrecognised network_type for PerceptualLoss. Must be one of: %s"
+                % ", ".join(PerceptualNetworkType)
+            )
🤖 Prompt for AI Agents
In monai/losses/perceptual.py around lines 109-113, the ValueError message
incorrectly references "Adversarial Loss" while validating
PerceptualLoss.network_type; update the error text to mention "Perceptual Loss"
(or "PerceptualNetworkType") and list the valid network types (use the
appropriate enum names/values from PerceptualNetworkType) so the message reads
something like: "Unrecognised network_type for Perceptual Loss. Must be one of:
<allowed types>".



Development

Successfully merging this pull request may close these issues.

Perceptual Loss errors out after hitting the maximum number of downloads
