SAM-MedUI is a clinician-friendly interactive tool for myocardial scar segmentation from Late Gadolinium Enhancement Cardiac MRI (LGE-CMR). It combines a fine-tuned Segment Anything Model (SAM) with YOLO-based automatic detection and an intuitive GUI, enabling fast, accurate, and reproducible scar quantification without requiring any coding knowledge.
Features • Installation • Quick Start • Usage • Architecture • Citation
| Feature | Description |
|---|---|
| Clinical-Grade Interface | Built specifically for clinicians with intuitive controls and real-time feedback |
| Multiple Input Formats | Native support for DICOM, NIfTI (3D volumes), JPEG, PNG, and BMP |
| Flexible Prompting | Point-based, bounding box, and automatic YOLO detection |
| Real-time Refinement | Morphological operations, confidence adjustment, and undo |
| Quantitative Analysis | Automatic scar mass calculation from pixel spacing and slice thickness in DICOM/NIfTI metadata |
| Runs on CPU | GUI inference works on any laptop; no GPU required |
- Point Prompts: Left-click to add foreground points (green)
- Bounding Box Prompts: Click and drag to define regions of interest with adjustable handles
- Auto-Detection: YOLO-based automatic cardiac region detection reduces manual prompting
- DICOM: Full metadata extraction (patient ID, pixel spacing, slice thickness)
- NIfTI: 3D volume support with automatic slice extraction and navigation
- Standard Formats: JPEG, PNG, BMP for preprocessed images
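Loading a slice from a 3D NIfTI volume reduces to indexing the voxel array along the slice axis. A minimal sketch of the idea (a random NumPy array stands in for `nibabel`'s `get_fdata()` output here, and `extract_slice` is our illustrative helper, not the GUI's actual function):

```python
import numpy as np

def extract_slice(volume: np.ndarray, index: int) -> np.ndarray:
    """Pull one slice from a (H, W, D) volume and rescale it to 8-bit."""
    sl = volume[:, :, index].astype(np.float32)
    lo, hi = sl.min(), sl.max()
    if hi > lo:
        sl = (sl - lo) / (hi - lo)  # min-max normalize to [0, 1]
    else:
        sl = np.zeros_like(sl)      # flat slice: avoid divide-by-zero
    return (sl * 255).astype(np.uint8)

# Simulated volume standing in for nib.load(path).get_fdata()
volume = np.random.rand(64, 64, 10)
slice_img = extract_slice(volume, 4)
print(slice_img.shape, slice_img.dtype)
```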
- Mask Overlay: Alpha-blended segmentation visualization
- Gamma Correction: Adjustable contrast (0.2–1.7) for enhanced visibility
- Zoom & Pan: 0.5x to 5.0x magnification with smooth navigation
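Gamma correction for contrast adjustment can be sketched in a few lines of NumPy (the `apply_gamma` helper below is ours, for illustration only; it is not the GUI's implementation):

```python
import numpy as np

def apply_gamma(image: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an 8-bit grayscale image.

    gamma < 1.0 brightens the image, gamma > 1.0 darkens it,
    matching the GUI's adjustable 0.2-1.7 range.
    """
    normalized = image.astype(np.float32) / 255.0
    corrected = np.power(normalized, gamma)
    return (corrected * 255.0).astype(np.uint8)

# A mid-gray (64) pixel is brightened with gamma = 0.5
frame = np.full((2, 2), 64, dtype=np.uint8)
print(apply_gamma(frame, 0.5)[0, 0])
```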
- Morphological Operations: Expand/shrink masks with configurable iterations
- Confidence Threshold: Dynamic adjustment (0.3–0.99) with real-time preview
- Undo: Up to 10 levels of operation history
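The expand/shrink refinement is, conceptually, binary dilation or erosion repeated for a configurable number of iterations. A minimal NumPy sketch of dilation with a 3x3 cross (our own illustrative helper, not the app's actual code, which may use OpenCV or SciPy):

```python
import numpy as np

def expand_mask(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Binary dilation with a 3x3 cross: a pixel becomes foreground
    if it or any 4-connected neighbour is foreground."""
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1)
        out = (padded[1:-1, 1:-1]            # centre
               | padded[:-2, 1:-1]           # up
               | padded[2:, 1:-1]            # down
               | padded[1:-1, :-2]           # left
               | padded[1:-1, 2:])           # right
    return out

seed = np.zeros((5, 5), dtype=bool)
seed[2, 2] = True
print(expand_mask(seed, 1).sum())  # 5: centre plus 4 neighbours
```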
- Thumbnail Gallery: Patient-centric navigation with multi-slice support
- Batch Save: Export all masks with a single click
- CSV Export: Quantitative results including patient ID, slice, and scar mass
- Prompt Storage: JSON-based prompt saving for reproducibility
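A saved prompt record might look like the following. The field names and layout below are illustrative guesses only, not the tool's documented schema:

```python
import json

# Hypothetical prompt record; all keys are our own illustration
prompt_record = {
    "image": "patient_001_slice_04.dcm",
    "points": [{"x": 128, "y": 96, "label": "foreground"}],
    "boxes": [{"x1": 80, "y1": 60, "x2": 180, "y2": 150}],
}
print(json.dumps(prompt_record, indent=2))
```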
- Python 3.9 or higher
- pip package manager
- Git
```bash
git clone https://github.com/Danialmoa/SAM-MedUI.git
cd SAM-MedUI
pip install -r requirements.txt
```

Model weights are downloaded automatically on first launch from 🤗 Hugging Face.
```bash
cd GUI
python main.py
```

To download weights manually instead:
```bash
pip install huggingface_hub
huggingface-cli download AidaAIDL/SAM_MEDUI --local-dir checkpoints/
```

| File | Description |
|---|---|
| `best_model.pth` | Fine-tuned SAM for cardiac scar segmentation |
| `yolo_best.pt` | YOLO detection model for automatic ROI detection |
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 8 GB | 16 GB |
| GPU (Training) | CUDA-capable GPU | NVIDIA GPU with 8+ GB VRAM |
| GPU (Inference) | Not required | Optional for faster inference |
| Storage | 2 GB for models | 5+ GB with datasets |
```bash
cd GUI
python main.py
```

1. Load Images → Click `Load Folder` or `Load Files` to import medical images
2. Add Prompts → Click for points, drag for bounding boxes, or use `Auto-Detect`
3. Generate → Click `Generate Segmentation` to create the mask
4. Refine → Adjust threshold or use `Expand`/`Shrink` for fine-tuning
5. Save → Export with `Save Mask`, `Save All Masks`, or `Export Results`
| Shortcut | Action |
|---|---|
| `←` / `→` | Navigate between images |
| `Ctrl+Z` | Undo last operation |
| `Ctrl++` / `Ctrl+-` | Zoom in / out |
| `Ctrl+0` | Reset zoom |
| `Ctrl+Arrow keys` | Pan view |
| Hold `Z` | Temporarily hide mask & prompts |
| Mode | How to Use | Best For |
|---|---|---|
| Point (Foreground) | Left-click on target region | Precise selection of scar tissue |
| Bounding Box | Click and drag rectangle | Defining region of interest |
| Auto-Detect | Click the Auto-Detect button | Quick initial detection |
- Save Mask: Export current segmentation as PNG
- Save All Masks: Batch export all processed images
- Export Results: Generate CSV with quantitative metrics:
- Patient ID
- Image/slice name
- Scar mass (calculated from pixel spacing and slice thickness)
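Conceptually, scar mass follows from the voxel volume (pixel spacing times slice thickness) multiplied by a myocardial density assumption; 1.05 g/mL is a commonly used literature value, though we have not verified it is the constant this app uses. A hedged sketch:

```python
import numpy as np

MYOCARDIAL_DENSITY_G_PER_ML = 1.05  # common literature value (assumption)

def scar_mass_grams(mask, pixel_spacing_mm, slice_thickness_mm):
    """Mass in grams from a binary scar mask and DICOM geometry."""
    voxel_volume_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
    return volume_ml * MYOCARDIAL_DENSITY_G_PER_ML

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True  # 16 scar pixels
print(round(scar_mass_grams(mask, (1.25, 1.25), 8.0), 4))  # 0.21 g
```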
```
SAM-MedUI/
├── GUI/                        # Main Application
│   ├── main.py                 # GUI entry point and main window
│   ├── model_handler.py        # SAM & YOLO inference logic
│   ├── canvas_view.py          # Image display and annotation
│   ├── thumbnail_gallery.py    # Patient navigation and thumbnails
│   └── download_weights.py     # Auto-download weights from HuggingFace
│
├── SAM_finetune/               # Training Pipeline
│   ├── models/
│   │   ├── sam_model.py        # SAM wrapper with fine-tuning support
│   │   ├── dataset.py          # Medical imaging dataset loader
│   │   ├── loss.py             # Combined loss function
│   │   └── prompt_generator.py # Bounding box & point generation
│   │
│   ├── train/
│   │   └── trainer.py          # Training loop with W&B logging
│   │
│   └── utils/
│       ├── config.py           # Configuration dataclasses
│       ├── logger_func.py      # Logging setup with rotation
│       ├── preprocessing.py    # Image preprocessing utilities
│       ├── z_score_norm.py     # Percentile normalization
│       └── visualize.py        # Visualization helpers
│
├── checkpoints/                # Model weights (auto-downloaded)
├── logs/                       # Application logs
├── requirements.txt            # Python dependencies
├── setup.py                    # Package setup
└── README.md
```
```
┌───────────────┐     ┌───────────────┐     ┌───────────────┐     ┌───────────────┐
│  Load Image   │────▶│  Add Prompts  │────▶│  SAM Forward  │────▶│  Apply Mask   │
│ (DICOM/NIfTI) │     │ (Points/BBox) │     │     Pass      │     │   Threshold   │
└───────────────┘     └───────────────┘     └───────────────┘     └───────────────┘
                              ▲                                           │
                              │                                           ▼
                      ┌───────────────┐                           ┌───────────────┐
                      │  YOLO Auto-   │                           │ Morphological │
                      │   Detection   │                           │  Refinement   │
                      └───────────────┘                           └───────────────┘
```
The fine-tuning pipeline includes:
- Medical-specific augmentations via TorchIO (elastic deformation, motion artifacts, bias field)
- Combined loss function: Dice + BCE + Soft BCE + KL Divergence + Diversity Loss
- Experiment tracking with Weights & Biases
- Learning rate scheduling with Cosine Annealing
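A minimal NumPy sketch of the Dice + BCE portion of the combined loss, for intuition only; the repository's `loss.py` additionally includes soft-BCE, KL-divergence, and diversity terms, and the real implementation is in PyTorch:

```python
import numpy as np

def dice_bce_loss(pred, target, eps=1e-6):
    """pred: predicted probabilities in (0, 1); target: binary mask."""
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    # Dice loss: 1 minus the soft overlap coefficient
    inter = (pred * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    # Binary cross-entropy, averaged over pixels
    bce = -np.mean(target * np.log(pred + eps)
                   + (1 - target) * np.log(1 - pred + eps))
    return dice + bce

target = np.array([[1, 0], [0, 1]])
perfect = np.array([[0.999, 0.001], [0.001, 0.999]])
print(dice_bce_loss(perfect, target) < 0.01)  # near-perfect prediction, tiny loss
```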
To fine-tune SAM on your own cardiac MRI dataset:
```python
from SAM_finetune.utils.config import SAMFinetuneConfig, SAMDatasetConfig
from SAM_finetune.models.dataset import SAMDataset
from SAM_finetune.train.trainer import SAMTrainer

# Configure dataset
dataset_config = SAMDatasetConfig(
    dataset_path="path/to/dataset",
    point_prompt=True,
    box_prompt=True,
    number_of_prompts=2,
)

# Create dataset
train_dataset = SAMDataset(config=dataset_config)

# Configure training
train_config = SAMFinetuneConfig(
    sam_path="pretrained_models/sam_vit_b_01ec64.pth",
    learning_rate=1e-4,
    num_epochs=100,
    batch_size=4,
)

# Start training
trainer = SAMTrainer(config=train_config, train_dataset=train_dataset)
trainer.train()
```

If you use SAM-MedUI in your research, please cite:
```bibtex
@article{moafi2026sammedui,
  title={Interactive Deep Learning for Myocardial Scar Segmentation Using Cardiovascular Magnetic Resonance},
  author={Moafi, Aida and Moafi, Danial and Shergil, Simran and Mirkes, Evgeny M. and Adlam, David and Samani, Nilesh J. and McCann, Gerry P. and Ghazi, Mostafa Mehdipour and Arnold, J. Ranjit},
  journal={Journal of Cardiovascular Magnetic Resonance},
  year={2026},
  publisher={Elsevier},
  url={https://www.sciencedirect.com/science/article/pii/S1097664726000384}
}
```
Aida Moafi¹, Danial Moafi², Simran Shergil¹, Evgeny M. Mirkes³, David Adlam¹⁵, Nilesh J. Samani¹⁵, Gerry P. McCann¹⁵, Mostafa Mehdipour Ghazi⁴*, J. Ranjit Arnold¹*
* Joint senior authorship
¹ Department of Cardiovascular Sciences, University of Leicester, NIHR Leicester Biomedical Research Centre and BHF Centre of Research Excellence, Glenfield Hospital, Leicester, UK

² Department of Information Engineering and Mathematics, University of Siena, Siena, Italy

³ Department of Mathematics, University of Leicester, Leicester, UK

⁴ Pioneer Centre for AI, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark

⁵ Centre for Digital Health and Precision Medicine, University of Leicester
We gratefully acknowledge the following projects:
- Segment Anything (SAM) by Meta AI Research
- Ultralytics YOLO for object detection
- TorchIO for medical image augmentation
For questions or collaborations:
- Aida Moafi: am1392@leicester.ac.uk
- Danial Moafi: d.moafi@student.unisi.it
This project is licensed under the MIT License. See the LICENSE file for details.
Made with care for the medical imaging community

