Fix operator precedence bug in lvlb_weights and mutable default arguments #108
Open
Mr-Neutr0n wants to merge 1 commit into AILab-CVC:main from
Conversation
…ents

- Fix operator precedence in x0-parameterization `lvlb_weights` calculation: `(2. * 1 - torch.Tensor(alphas_cumprod))` incorrectly evaluates as `(2.0 - tensor)` due to multiplication binding tighter than subtraction. Changed to `(2. * (1 - torch.Tensor(alphas_cumprod)))` for correct scaling.
- Fix NaN assertion using `.all()` instead of `.any()`: the original check `assert not isnan(...).all()` only fails when every element is NaN, silently allowing partial NaN corruption. Changed to `.any()` to catch any NaN values.
- Replace mutable default arguments (`ignore_keys=[]` and `ignore_keys=list()`) with `None` + guard clause in `DDPM.__init__`, `DDPM.init_from_ckpt`, and `AutoencoderKL.init_from_ckpt` to prevent shared state across calls.
Summary
Operator precedence fix in `lvdm/models/ddpm3d.py`: the x0-parameterization branch of `lvlb_weights` computes `(2. * 1 - torch.Tensor(alphas_cumprod))`, which due to Python's operator precedence evaluates as `(2.0 - tensor)` instead of the intended `(2.0 * (1 - tensor))`. This produces incorrect loss weighting. Fixed by adding explicit parentheses.

NaN assertion fix in `lvdm/models/ddpm3d.py`: `assert not torch.isnan(self.lvlb_weights).all()` only triggers when every element is NaN, silently allowing partial NaN corruption. Changed to `.any()` so any NaN value is caught.

Mutable default argument fix in `DDPM.__init__` (`ignore_keys=[]`), `DDPM.init_from_ckpt` (`ignore_keys=list()`), and `AutoencoderKL.init_from_ckpt` (`ignore_keys=list()`): mutable defaults are shared across all calls, which can lead to subtle bugs if the list is ever modified. Replaced with `None` and a guard clause.

Files changed

- lvdm/models/ddpm3d.py
- lvdm/models/autoencoder.py

Test plan

- `lvlb_weights` values are numerically correct after the parenthesization fix
- `lvlb_weights` …
- `ignore_keys` default behavior is unchanged (empty list when not provided)
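The precedence pitfall behind the `lvlb_weights` fix can be reproduced with plain floats; a minimal sketch, where the scalar `alphas_cumprod = 0.5` is an illustrative stand-in for the real tensor:

```python
# Illustrative stand-in value; the real code operates on a tensor.
alphas_cumprod = 0.5

# `*` binds tighter than `-`, so this parses as (2. * 1) - 0.5 = 1.5
buggy = 2. * 1 - alphas_cumprod

# Explicit parentheses give the intended 2. * (1 - 0.5) = 1.0
fixed = 2. * (1 - alphas_cumprod)

print(buggy)  # 1.5
print(fixed)  # 1.0
```

The same parse applies element-wise to a tensor, which is why the buggy form silently produces wrong loss weights rather than an error.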
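The `.all()` vs `.any()` distinction can be shown without torch; a sketch using a plain Python list as a stand-in for the `lvlb_weights` tensor:

```python
import math

# Toy stand-in for lvlb_weights: one corrupted entry among valid ones.
weights = [0.1, float("nan"), 0.3]

all_nan = all(math.isnan(w) for w in weights)  # False: not every entry is NaN
any_nan = any(math.isnan(w) for w in weights)  # True: at least one entry is NaN

# The original check (`not ...all()`) passes despite the corruption...
assert not all_nan
# ...while the corrected check (`...any()`) detects it.
assert any_nan
```

With torch the shape is the same: `torch.isnan(t).all()` is only true for a fully-NaN tensor, while `torch.isnan(t).any()` flags a single bad element.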
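The mutable-default hazard behind the `ignore_keys` change, sketched with a hypothetical `load_model` function standing in for `init_from_ckpt`:

```python
# `load_model_*` and the appended key are illustrative, not the repo's code.

def load_model_buggy(path, ignore_keys=[]):
    # The default list is created once and shared by every call.
    ignore_keys.append("state_dict")
    return ignore_keys

def load_model_fixed(path, ignore_keys=None):
    # None + guard clause: a fresh list per call, as in the PR.
    if ignore_keys is None:
        ignore_keys = []
    ignore_keys.append("state_dict")
    return ignore_keys

load_model_buggy("a.ckpt")
print(load_model_buggy("b.ckpt"))  # ['state_dict', 'state_dict'] — leaked state
load_model_fixed("a.ckpt")
print(load_model_fixed("b.ckpt"))  # ['state_dict'] — calls stay independent
```

Callers that pass their own list are unaffected, which is why the default-behavior test in the plan above only needs to cover the no-argument case.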