
Conversation

@sudomakeinstall

Fixes #8619.

Description

As detailed in #8619, the convert_to_channel_last method of the ImageWriter class implicitly converts data from a MetaTensor to an ndarray where it calls np.squeeze. This PR addresses the issue by calling the squeeze method on data rather than np.squeeze. A unit test has been added to cover this change.
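For illustration, a minimal sketch of the behaviour this change targets, mirroring the new unit test (the shape, metadata key, and import path are illustrative assumptions):

```python
import numpy as np
from monai.data import ITKWriter, MetaTensor  # import path assumed

# a channel-last MetaTensor with a trailing singleton channel dimension
data = MetaTensor(np.arange(48).reshape(3, 4, 4, 1), meta={"test_key": "test_value"})

writer = ITKWriter()
writer.set_data_array(data, channel_dim=-1, squeeze_end_dims=True)

# with this fix, the writer's internal array stays a MetaTensor and keeps its
# metadata, instead of being silently converted to a plain np.ndarray
assert isinstance(writer.data_obj, MetaTensor)
assert writer.data_obj.meta.get("test_key") == "test_value"
```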

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

Notes for reviewers/maintainers:

  • When running ./runtests.sh -f -u --net --coverage, I get several flake8 linting errors in files not touched by this commit. These errors pre-date this PR and are not fixed by the --autofix flag. Since they appear unrelated to this PR, I have left them as is, but I have also not checked the corresponding box above because not all tests passed. Reproduced here for reference:
flake8
7.3.0 (flake8-bugbear: 24.2.6, flake8-comprehensions: 3.17.0, mccabe: 0.7.0, pep8-naming: 0.15.1, pycodestyle: 2.14.0, pyflakes: 3.4.0) CPython 3.12.3 on Linux
/home/davis/Developer/MONAI/monai/apps/auto3dseg/bundle_gen.py:399:69: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/apps/detection/networks/retinanet_detector.py:790:102: E231 missing whitespace after ','
/home/davis/Developer/MONAI/monai/apps/detection/utils/anchor_utils.py:177:44: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/apps/detection/utils/anchor_utils.py:183:44: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/apps/detection/utils/detector_utils.py:94:88: E225 missing whitespace around operator
/home/davis/Developer/MONAI/monai/apps/detection/utils/detector_utils.py:98:75: E225 missing whitespace around operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:288:58: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:369:69: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:538:58: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:916:47: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/auto3dseg/analyzer.py:918:47: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/data/wsi_reader.py:213:80: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/losses/unified_focal_loss.py:220:91: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/metrics/meandice.py:163:44: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/networks/blocks/patchembedding.py:105:30: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/networks/layers/factories.py:98:51: E225 missing whitespace around operator
/home/davis/Developer/MONAI/monai/transforms/croppad/array.py:293:87: E226 missing whitespace around arithmetic operator
/home/davis/Developer/MONAI/monai/transforms/regularization/array.py:44:72: E251 unexpected spaces around keyword / parameter equals
/home/davis/Developer/MONAI/monai/transforms/regularization/array.py:44:74: E202 whitespace before '}'
/home/davis/Developer/MONAI/monai/transforms/regularization/array.py:44:74: E251 unexpected spaces around keyword / parameter equals
1     E202 whitespace before '}'
3     E225 missing whitespace around operator
13    E226 missing whitespace around arithmetic operator
1     E231 missing whitespace after ','
2     E251 unexpected spaces around keyword / parameter equals
20
Check failed!
Please run auto style fixes: ./runtests.sh --autofix
  • When running ./runtests.sh --quick --unittests --disttests, I get an error relating to tests/networks/test_bundle_onnx_export.py. I get the same error when running python -m tests.networks.test_onnx_export on the dev branch. Since this error appears unrelated to this PR, I have left it as is, and I did not check the box above since not all tests passed. Reproduced here for reference:
======================================================================
ERROR: test_onnx_export_1_False (tests.networks.test_bundle_onnx_export.TestONNXExport.test_onnx_export_1_False)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/davis/Developer/MONAI/tests/test_utils.py", line 840, in command_line_tests
    normal_out = subprocess.run(cmd, env=test_env, check=True, capture_output=True)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python', '-m', 'monai.bundle', 'onnx_export', 'network_def', '--filepath', '/tmp/tmperf9v3hy/model.onnx', '--meta_file', '/home/davis/Developer/MONAI/tests/testing_data/metadata.json', '--config_file', "['/home/davis/Developer/MONAI/tests/testing_data/inference.json','/tmp/tmperf9v3hy/def_args.yaml']", '--ckpt_file', '/tmp/tmperf9v3hy/model.pt', '--args_file', '/tmp/tmperf9v3hy/def_args.yaml', '--input_shape', '[1, 1, 96, 96, 96]', '--use_trace', 'False']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/parameterized/parameterized.py", line 620, in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/tests/networks/test_bundle_onnx_export.py", line 65, in test_onnx_export
    command_line_tests(cmd)
  File "/home/davis/Developer/MONAI/tests/test_utils.py", line 846, in command_line_tests
    raise RuntimeError(f"subprocess call error {e.returncode}: {errors}, {output}") from e
RuntimeError: subprocess call error 1: b'Traceback (most recent call last):
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 118, in __call__
    exported_program = self._capture(model, args, kwargs, dynamic_shapes)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 210, in _capture
    return torch.export.export(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/export/__init__.py", line 270, in export
    raise ValueError(
ValueError: Exporting a ScriptModule is not supported. Maybe try converting your ScriptModule to an ExportedProgram using `TS2EPConverter(mod, args, kwargs).convert()` instead.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/davis/Developer/MONAI/monai/bundle/__main__.py", line 31, in <module>
    fire.Fire()
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
                                ^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/monai/bundle/scripts.py", line 1426, in onnx_export
    _export(
  File "/home/davis/Developer/MONAI/monai/bundle/scripts.py", line 1302, in _export
    net = converter(model=net, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/monai/networks/utils.py", line 735, in convert_to_onnx
    torch.onnx.export(
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/__init__.py", line 296, in export
    return _compat.export_compat(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_compat.py", line 143, in export_compat
    onnx_program = _core.export(
                   ^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_flags.py", line 23, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1385, in export
    raise _errors.TorchExportError(
torch.onnx._internal.exporter._errors.TorchExportError: Failed to export the model with torch.export. \x1b[96mThis is step 1/3\x1b[0m of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and submit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the \x1b[96m*torch.export*\x1b[0m component and attach the full error stack as well as reproduction scripts.

## Exception summary

<class \'ValueError\'>: Exporting a ScriptModule is not supported. Maybe try converting your ScriptModule to an ExportedProgram using `TS2EPConverter(mod, args, kwargs).convert()` instead.

(Refer to the full stack trace above for more information.)
', b"2025-12-04 09:34:48,922 - INFO - --- input summary of monai.bundle.scripts.onnx_export ---
2025-12-04 09:34:48,922 - INFO - > meta_file: '/home/davis/Developer/MONAI/tests/testing_data/metadata.json'
2025-12-04 09:34:48,922 - INFO - > net_id: 'network_def'
2025-12-04 09:34:48,922 - INFO - > filepath: '/tmp/tmperf9v3hy/model.onnx'
2025-12-04 09:34:48,922 - INFO - > config_file: ['/home/davis/Developer/MONAI/tests/testing_data/inference.json',
 '/tmp/tmperf9v3hy/def_args.yaml']
2025-12-04 09:34:48,922 - INFO - > ckpt_file: '/tmp/tmperf9v3hy/model.pt'
2025-12-04 09:34:48,922 - INFO - > use_trace: False
2025-12-04 09:34:48,923 - INFO - > input_shape: [1, 1, 96, 96, 96]
2025-12-04 09:34:48,923 - INFO - ---


torch_versioned_kwargs={}
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=False)`... \xe2\x9d\x8c
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=True)`...
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=True)`... \xe2\x9d\x8c
  • When running make html, I get an error related to indentation in meta_tensor.py. Since this error appears unrelated to this PR, I have left it as is, and I did not check the box above since the documentation did not build. Reproduced here for reference:
/home/davis/Developer/MONAI/monai/data/meta_tensor.py:docstring of monai.data.MetaTensor.nonzero_static:24: ERROR: Unexpected indentation. [docutils]
/home/davis/Developer/MONAI/monai/data/meta_tensor.py:docstring of monai.data.MetaTensor.nonzero_static:33: ERROR: Unexpected indentation. [docutils]

Extension error (sphinx.ext.linkcode)!

Versions
========

* Platform:         linux; (Linux-6.9.3-76060903-generic-x86_64-with-glibc2.39)
* Python version:   3.12.3 (CPython)
* Sphinx version:   9.0.4
* Docutils version: 0.22.3
* Jinja2 version:   3.1.6
* Pygments version: 2.19.2

Last Messages
=============

    config_syntax


    reading sources... [ 48%]
    contrib


    reading sources... [ 50%]
    data

Loaded Extensions
=================

* sphinx.ext.mathjax (9.0.4)
* alabaster (1.0.0)
* sphinxcontrib.applehelp (2.0.0)
* sphinxcontrib.devhelp (2.0.0)
* sphinxcontrib.htmlhelp (2.1.0)
* sphinxcontrib.serializinghtml (2.0.0)
* sphinxcontrib.qthelp (2.0.0)
* recommonmark (0.6.0)
* sphinx.ext.intersphinx (9.0.4)
* sphinx.ext.autodoc (9.0.4)
* sphinx.ext.napoleon (9.0.4)
* sphinx.ext.linkcode (9.0.4)
* sphinx.ext.autosectionlabel (9.0.4)
* sphinx.ext.autosummary (9.0.4)
* sphinx_autodoc_typehints (unknown version)
* pydata_sphinx_theme (unknown version)

Traceback
=========

      File "/home/davis/Developer/MONAI/.venv/lib/python3.12/site-packages/sphinx/events.py", line 452, in emit
        raise ExtensionError(
    sphinx.errors.ExtensionError: Handler <function doctree_read at 0x7beb85b16980> for event 'doctree-read' threw an exception (exception: Empty module name)


The full traceback has been saved in:
/tmp/sphinx-err-s5bsuit5.log

To report this error to the developers, please open an issue at <https://github.com/sphinx-doc/sphinx/issues/>. Thanks!
Please also report this if it was a user error, so that a better error message can be provided next time.
make: *** [Makefile:24: html] Error 2
  • Since this PR is a bugfix and brings actual behavior in line with the documented behavior, I have not updated the in-line docstrings. However, I am of course open to feedback as to the best practice here.

Thank you to the maintainers for their work on this invaluable project and thank you in advance for considering this PR!

Signed-off-by: Davis Vigneault <davis.vigneault@gmail.com>
@coderabbitai
Contributor

coderabbitai bot commented Dec 4, 2025

Walkthrough

Two files changed. In monai/data/image_writer.py, np.squeeze(data, -1) was replaced with the array method data.squeeze(-1) at two identical call sites in convert_to_channel_last. In tests/data/test_itk_writer.py, the tests were updated to use Torch tensors and to import MetaTensor; a new test, test_metatensor_preserved, was added to assert that passing a MetaTensor with channel_dim=-1 and squeeze_end_dims=True leaves the internal data_obj as a MetaTensor and preserves its metadata.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

  • Verify data.squeeze(-1) matches np.squeeze(data, -1) semantics for NumPy and Torch arrays at both call sites (see the quick check sketched after this list).
  • Check the new test imports MetaTensor correctly and that tensor creation uses torch as intended.
  • Ensure test_metatensor_preserved correctly asserts type and metadata are preserved.
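To make the first bullet concrete, here is a quick, self-contained check (not part of the PR) of the squeeze semantics in question; the shapes are arbitrary examples:

```python
import numpy as np
import torch

np_arr = np.zeros((3, 4, 4, 1))
th_arr = torch.zeros(3, 4, 4, 1)

# shape semantics agree when the trailing dimension is a singleton
assert np.squeeze(np_arr, -1).shape == np_arr.squeeze(-1).shape == (3, 4, 4)
assert tuple(th_arr.squeeze(-1).shape) == (3, 4, 4)

# the method form keeps each input in its native type, which is the point of the PR
assert isinstance(np_arr.squeeze(-1), np.ndarray)
assert isinstance(th_arr.squeeze(-1), torch.Tensor)

# one difference to keep in mind: for a non-singleton trailing dimension,
# ndarray.squeeze(-1) raises ValueError while torch's Tensor.squeeze(-1) is a
# silent no-op; the writer only squeezes trailing dims of size 1, so this path
# should not be hit (an assumption based on the PR description).
```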

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): Title accurately summarizes the main change: preventing implicit MetaTensor-to-NumPy conversion in ImageWriter.convert_to_channel_last by replacing np.squeeze with data.squeeze.
  • Description check (✅ Passed): Description covers the issue (fixes #8619), explains the solution (replacing np.squeeze with data.squeeze), documents the test addition, and transparently reports unrelated pre-existing CI/test failures.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/data/test_itk_writer.py (1)

67-72: Good regression test for MetaTensor preservation; consider using a tensor backend

The test cleanly exercises the MetaTensor + ITKWriter.set_data_array path with channel_dim=-1 and squeeze_end_dims=True and verifies both type and metadata preservation, which is exactly what this PR is fixing. If you want to match typical runtime usage more closely, you could optionally initialise with a torch tensor instead of a NumPy array, for example:

-        data = MetaTensor(np.arange(48).reshape(3, 4, 4, 1), meta={"test_key": "test_value"})
+        data = MetaTensor(torch.arange(48).reshape(3, 4, 4, 1), meta={"test_key": "test_value"})

but the current version is functionally fine.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 15fd428 and 1283125.

📒 Files selected for processing (2)
  • monai/data/image_writer.py (1 hunks)
  • tests/data/test_itk_writer.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • tests/data/test_itk_writer.py
  • monai/data/image_writer.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: build-docs
  • GitHub Check: packaging
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-os (windows-latest)
🔇 Additional comments (2)
tests/data/test_itk_writer.py (1)

21-21: MetaTensor import aligns with new test usage

Importing MetaTensor here is necessary for the new test and matches existing import style; no changes needed.

monai/data/image_writer.py (1)

326-328: Squeeze change correctly preserves MetaTensor and other tensor types

Switching from np.squeeze(data, -1) to data.squeeze(-1) keeps data in its native type (including MetaTensor) while still removing trailing singleton dimensions, avoiding the previous implicit conversion to np.ndarray. Loop semantics and overall convert_to_channel_last behaviour remain intact, so this looks good and is well covered by the new MetaTensor test.
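As a small illustration of that point (not from the PR; the shape and metadata key are arbitrary, and metadata propagation through .squeeze is exactly the behaviour the new test exercises):

```python
import numpy as np
from monai.data import MetaTensor

mt = MetaTensor(np.zeros((3, 4, 4, 1)), meta={"test_key": "test_value"})

out = mt.squeeze(-1)  # the form used after this change
assert isinstance(out, MetaTensor)               # native type preserved
assert out.meta.get("test_key") == "test_value"  # metadata preserved
assert tuple(out.shape) == (3, 4, 4)             # trailing singleton removed
```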

Signed-off-by: Davis Vigneault <davis.vigneault@gmail.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 1283125 and ced515b.

📒 Files selected for processing (1)
  • tests/data/test_itk_writer.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions, describing each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • tests/data/test_itk_writer.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: build-docs
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: packaging
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-py3 (3.11)
🔇 Additional comments (2)
tests/data/test_itk_writer.py (2)

21-21: LGTM!

MetaTensor import is necessary for the new test and follows the existing import pattern.


48-48: Clarify the rationale for changing to torch.arange.

The change from np.arange to torch.arange in test_rgb isn't directly related to MetaTensor preservation. Is this intentional alignment with torch usage, or should it remain as NumPy?

Comment on lines +67 to +72
def test_metatensor_preserved(self):
    data = MetaTensor(np.arange(48).reshape(3, 4, 4, 1), meta={"test_key": "test_value"})
    writer = ITKWriter()
    writer.set_data_array(data, channel_dim=-1, squeeze_end_dims=True)
    self.assertIsInstance(writer.data_obj, MetaTensor)
    self.assertEqual(writer.data_obj.meta.get("test_key"), "test_value")
Contributor


🛠️ Refactor suggestion | 🟠 Major

Add docstring and consider additional test coverage.

Missing docstring violates coding guidelines. Additionally, consider testing more scenarios:

  • Different channel_dim values (e.g., 0, 1)
  • squeeze_end_dims=False case
  • Verifying the squeezed shape

Apply this diff to add a docstring:

     def test_metatensor_preserved(self):
+        """Test that MetaTensor and its metadata are preserved when squeeze_end_dims=True."""
         data = MetaTensor(np.arange(48).reshape(3, 4, 4, 1), meta={"test_key": "test_value"})

Consider expanding the test:

def test_metatensor_preserved(self):
    """Test that MetaTensor and its metadata are preserved when squeeze_end_dims=True."""
    data = MetaTensor(np.arange(48).reshape(3, 4, 4, 1), meta={"test_key": "test_value"})
    writer = ITKWriter()
    writer.set_data_array(data, channel_dim=-1, squeeze_end_dims=True)
    self.assertIsInstance(writer.data_obj, MetaTensor)
    self.assertEqual(writer.data_obj.meta.get("test_key"), "test_value")
    # Verify shape after squeeze
    self.assertEqual(writer.data_obj.shape, (3, 4, 4))
🤖 Prompt for AI Agents
In tests/data/test_itk_writer.py around lines 67 to 72, add a docstring to the
test_metatensor_preserved method and expand assertions to cover more scenarios:
keep the existing MetaTensor and metadata checks, assert the shape after squeeze
for the squeeze_end_dims=True case, add at least one additional case testing a
different channel_dim (e.g., 0 or 1) and assert expected shape/metadata, and add
a case with squeeze_end_dims=False to verify the shape remains unchanged;
implement these as additional assertions or small helper calls within the same
test (or split into small focused tests) so the test verifies preservation of
MetaTensor, metadata, and correct squeezing behavior for multiple parameter
combinations.

@sudomakeinstall
Author

FYI, the two failing checks both failed because the CI runner ran out of disk space.

premerge / build-docs:

Error: No space left on device : '/home/runner/actions-runner/cached/_diag/pages/32e19e11-3e44-443a-90c8-08ee0934371d_1af45fa4-32a3-4cdb-ada0-4b63109d0edd_1.log'

premerge / packaging:

  Attempting uninstall: numpy
    Found existing installation: numpy 2.0.2
    Uninstalling numpy-2.0.2:
      Successfully uninstalled numpy-2.0.2
  Attempting uninstall: filelock
    Found existing installation: filelock 3.19.1
    Uninstalling filelock-3.19.1:
      Successfully uninstalled filelock-3.19.1
ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device

These seem to be unrelated to the PR. This PR is ready for review.


Development

Successfully merging this pull request may close these issues:

  • ImageWriter.convert_to_channel_last implicitly converts from MetaTensor to Numpy Array
