Commit ca776f1 (1 parent: 53e42d9)

All the tests are passing! I haven't verified if the tests are useful, but at least I have a unit and integration test suite, and they are passing!

5 files changed: 13 additions & 11 deletions

PROJECT_STATUS.md (2 additions, 3 deletions)

@@ -51,16 +51,15 @@ We have implemented a 2D prototype for refining organelle segmentations using a
 ## Validation Status
 - **Synthetic Data**: Basic functionality works, but recent changes (template models, masking) have not been rigorously verified.
 - **Real Data**: The new template models and B-Spline refiner have **not yet been tested** on real data.
-- **Tests**: Major failures in loss and optimization tests. Investigation required.
+- **Tests**: Fixed 6 failing tests. The root causes were an incorrect gradient assertion in `test_loss`, a path type mismatch in `test_trainer`, a missing attribute access guard in `PerPointTemplateModel`, and an unstable learning rate for the `NeuralField` optimization test. All tests now pass.
 
 ## Design Decisions
 - **No Explicit Remeshing**: We avoid remeshing during the optimization loop to maintain differentiability. Instead, we rely on `EdgeLengthConsistencyLoss` and `LaplacianSmoothingLoss` to maintain mesh quality.
 - **B-Spline Regularization**: For the B-spline refiner, we apply Laplacian and Edge Length regularization to the **control points** to ensure a uniform parameterization and prevent control point bunching, even though the spline curve itself is inherently smooth.
 - **Implicit Regularization**: We prefer B-Spline or Neural Field parameterizations for template parameters to enforce smoothness implicitly, rather than relying solely on explicit smoothness losses.
 
 ## In Progress / Next Steps
-1. **Investigate Test Failures**: Debug and fix the failing loss and optimization tests.
-2. **2D Optimization Validation**: Run `train_2d.ipynb` notebook to verify the full training loop on synthetic data.
+1. **2D Optimization Validation**: Run `train_2d.ipynb` notebook to verify the full training loop on synthetic data with the various refiner and template combinations.
 3. **Real Data Validation**: Run `BSplineContourRefiner` with `BSplineTemplateModel` on the real data slice (`data/20289/...`).
 4. **Visualization**: Use the new visualization tools to inspect the learned template parameters on real data.
 5. **Port to 3D**: (Paused until 2D is fully robust).
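The two mesh-quality losses named in the Design Decisions (`EdgeLengthConsistencyLoss`, `LaplacianSmoothingLoss`) are not shown in this commit. As a rough, dependency-free sketch of what a uniform-weight version of each computes on a closed 2D contour (the function names and formulas here are assumptions, not the repo's implementation):

```python
# Hypothetical sketch of the two contour regularizers mentioned above.
# The repo's versions operate on torch tensors; this shows only the math.
import math

def laplacian_smoothing_loss(points):
    """Mean squared distance of each point from the midpoint of its two neighbors."""
    n = len(points)
    total = 0.0
    for i, (x, y) in enumerate(points):
        (px, py), (nx, ny) = points[i - 1], points[(i + 1) % n]
        mx, my = (px + nx) / 2, (py + ny) / 2
        total += (x - mx) ** 2 + (y - my) ** 2
    return total / n

def edge_length_consistency_loss(points):
    """Variance of edge lengths: zero when the contour is uniformly sampled."""
    n = len(points)
    lengths = [math.dist(points[i], points[(i + 1) % n]) for i in range(n)]
    mean = sum(lengths) / n
    return sum((l - mean) ** 2 for l in lengths) / n

# A regular polygon has perfectly uniform edges, so the consistency loss is 0.
square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
print(edge_length_consistency_loss(square))  # 0.0
```

Penalties of this shape stay differentiable everywhere, which is what lets them substitute for explicit remeshing inside the optimization loop.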

diffmeshopt/opt2d/template.py (3 additions, 4 deletions)

@@ -240,10 +240,9 @@ def get_params(
 
     def get_regularization_loss(self) -> dict[str, torch.Tensor]:
         # Regularization: Gaussian prior on log(sigma) centered at initialization
-        prior = (self.log_sigma - self.sigma_init.log()).pow(2).mean() + (
-            self.log_sigma_ratio - self.sigma_ratio_init.log()
-        ).pow(2).mean()
-
+        prior = (self.log_sigma - self.sigma_init.log()).pow(2).mean()
+        if not self.props.symmetric:
+            prior = prior + (self.log_sigma_ratio - self.sigma_ratio_init.log()).pow(2).mean()
         # Reconstruct log_peak_dist for smoothness
         sigma = self.log_sigma.exp()
         peak_dist = sigma * (self.props.min_peak_ratio + self.log_excess.exp())
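The fix above skips the sigma-ratio term when the template is symmetric (a symmetric template has no meaningful ratio parameter, so penalizing it is at best a no-op and at worst touches an unused attribute). A scalar, dependency-free sketch of the same prior, with illustrative names mirroring the diff:

```python
# Illustrative scalar version of the Gaussian prior on log(sigma) above;
# the repo's version works on torch tensors and class attributes.
import math

def log_sigma_prior(log_sigma, sigma_init, log_sigma_ratio, sigma_ratio_init, symmetric):
    prior = (log_sigma - math.log(sigma_init)) ** 2
    if not symmetric:
        # Only asymmetric templates also penalize drift of the sigma ratio.
        prior += (log_sigma_ratio - math.log(sigma_ratio_init)) ** 2
    return prior

# At initialization the prior is exactly zero, symmetric or not.
print(log_sigma_prior(math.log(2.0), 2.0, math.log(1.5), 1.5, symmetric=False))  # 0.0
```

Working in log-space keeps sigma strictly positive while the prior remains a simple quadratic.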

tests/opt2d/test_loss.py (4 additions, 3 deletions)

@@ -45,11 +45,12 @@ def test_laplacian_smoothing_loss(simple_contour):
     loss_for_grad.backward()
 
     assert perturbed_contour_grad.grad is not None
-    # The gradient for the perturbed point (p0) should point inwards, opposite to the perturbation.
+    # The gradient points in the direction of steepest ascent (increasing loss).
     # p0 was at (10, 0) and was moved to (12, 0).
-    # The gradient vector should be approximately in the (-1, 0) direction.
+    # Moving the point further out increases the Laplacian loss, so the gradient is positive (outwards).
+    # The gradient vector should be approximately in the (+1, 0) direction.
     grad_p0 = perturbed_contour_grad.grad[0]
-    assert grad_p0[0] < 0  # x-component should be negative
+    assert grad_p0[0] > 0  # x-component should be positive
     assert torch.isclose(grad_p0[1], torch.tensor(0.0), atol=1e-4)  # y-component should be ~0
 
 
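The sign flip in `test_loss` can be sanity-checked without autograd: for a squared-Laplacian loss on a closed contour, a finite-difference derivative at the perturbed point comes out positive in x. A small sketch (the octagon-of-radius-10 geometry is an assumption standing in for the test fixture):

```python
# Finite-difference check of the corrected assertion: pushing a contour point
# outwards increases a squared-Laplacian smoothing loss, so dL/dx there is > 0.
import math

def laplacian_loss(points):
    n = len(points)
    total = 0.0
    for i, (x, y) in enumerate(points):
        (px, py), (nx, ny) = points[i - 1], points[(i + 1) % n]
        total += (x - (px + nx) / 2) ** 2 + (y - (py + ny) / 2) ** 2
    return total

# Regular octagon of radius 10, with p0 perturbed from (10, 0) to (12, 0).
pts = [(10 * math.cos(2 * math.pi * k / 8), 10 * math.sin(2 * math.pi * k / 8))
       for k in range(8)]
pts[0] = (12.0, 0.0)

def dloss_dx0(points, h=1e-6):
    plus = list(points); plus[0] = (points[0][0] + h, points[0][1])
    minus = list(points); minus[0] = (points[0][0] - h, points[0][1])
    return (laplacian_loss(plus) - laplacian_loss(minus)) / (2 * h)

print(dloss_dx0(pts) > 0)  # True: the gradient points outwards, along +x
```

The original comment described the descent direction (inwards); the gradient itself is the ascent direction, which is what `.grad` stores.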
tests/opt2d/test_optimize.py (3 additions, 0 deletions)

@@ -60,6 +60,9 @@ def test_refiner_template_combinations(
         )
     elif "neural" in template_name:
         template_props = NeuralFieldTemplateProps(**template_props.__dict__, neural_hidden_dim=16)
+        # Neural fields can be sensitive to high learning rates in short tests
+        if hasattr(props, "learning_rate"):
+            props.learning_rate = 0.01
     elif "grid" in template_name:
         template_props = GridTemplateProps(**template_props.__dict__, grid_size=8)
     elif "splat" in template_name:
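The learning-rate clamp above guards against divergence in a short test run. The failure mode is easy to reproduce on a toy quadratic (a generic illustration, not the repo's optimizer):

```python
# Why a too-high learning rate breaks short optimization tests: plain gradient
# descent on f(x) = x^2 updates x by a factor of (1 - 2*lr) each step, so it
# diverges once lr > 1, while a small lr contracts towards the minimum.
def gradient_descent(lr, steps=50, x=1.0):
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x^2 is 2x
    return x

print(abs(gradient_descent(0.01)))  # converging: well below the start value 1.0
print(abs(gradient_descent(1.1)))   # diverging: grows without bound
```

A test that runs only a handful of steps never recovers from such oscillation, which is why pinning a conservative rate for the neural-field case stabilizes the suite.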

tests/opt2d/test_trainer.py (1 addition, 1 deletion)

@@ -64,7 +64,7 @@ def test_trainer_checkpointing(mock_refiner, temp_output_dir):
         (cb for cb in trainer.trainer.callbacks if isinstance(cb, ModelCheckpoint)), None
     )
     assert checkpoint_cb is not None
-    assert checkpoint_cb.dirpath == trainer.output_dir
+    assert Path(checkpoint_cb.dirpath) == trainer.output_dir
 
     # Check for saved files
     # The test doesn't actually run, so we can't check for files.
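The `test_trainer` fix addresses a common gotcha: a `pathlib.Path` never compares equal to a plain string, even when the text is identical, so wrapping the string side in `Path(...)` normalizes the comparison. For instance (the path value here is illustrative):

```python
from pathlib import Path

# A checkpoint callback may store its directory as a str while the trainer
# holds a Path; the two never compare equal, even for matching text.
dirpath = "outputs/run1"          # illustrative value, as a plain str
output_dir = Path("outputs/run1")

print(dirpath == output_dir)        # False: str vs Path
print(Path(dirpath) == output_dir)  # True once both sides are Path objects
```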
