Hello, as far as I understand the code, during the DoppelGANger training loop the outputs of the generator's forward pass are not detached before being passed to the discriminator, so gradients from the discriminator loss also flow back into the generator. However, the generator and discriminator should be trained separately according to the DoppelGANger architecture.
NetShare/netshare/models/doppelganger_torch/doppelganger.py
Lines 526 to 532 in af02603
fake_attribute, _, fake_feature = self.generator(
    real_attribute_noise,
    addi_attribute_noise,
    feature_input_noise,
    h0,
    c0
)
Proposed fix:
with torch.no_grad():
    fake_attribute, _, fake_feature = self.generator(
        real_attribute_noise,
        addi_attribute_noise,
        feature_input_noise,
        h0,
        c0,
    )
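One caveat: if these same outputs are later reused to compute the generator loss, wrapping the forward pass in torch.no_grad() would strip away the graph the generator update needs. The more common GAN idiom is to keep the original outputs for the generator step and pass detached copies to the discriminator, roughly like this (a sketch only; the discriminator call signature is illustrative, not copied from NetShare):

# Forward pass once; keep the graph for the later generator update.
fake_attribute, _, fake_feature = self.generator(
    real_attribute_noise,
    addi_attribute_noise,
    feature_input_noise,
    h0,
    c0,
)

# Discriminator step: feed detached copies so the discriminator loss
# cannot backpropagate into the generator's parameters.
dis_fake = self.discriminator(fake_feature.detach(), fake_attribute.detach())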