Conversation
…nd debug message improvements. (cherry picked from commit a455316)
alf/algorithms/merlin_algorithm.py
Outdated
    if output_activation is None:
        output_activation = alf.math.identity
Seems unnecessary. Can provide alf.math.identity as argument.
Good point. Removed.
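For illustration, a minimal sketch of the suggestion, with assumed names: instead of accepting None and substituting it, pass the identity function as the default argument value so the None check is unnecessary. `identity` here stands in for alf.math.identity, and `build_decoder` is not the actual MerlinAlgorithm signature.

    def identity(x):
        return x

    def build_decoder(output_activation=identity):
        # output_activation is always callable here; no None handling needed.
        return output_activation

    assert build_decoder()(3.0) == 3.0       # default: identity
    assert build_decoder(abs)(-3.0) == 3.0   # caller-supplied activation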
alf/utils/data_buffer_test.py
Outdated
    # ox = (x * torch.arange(
    #     batch_size, dtype=torch.float32, requires_grad=True,
    #     device="cpu").unsqueeze(1) * torch.arange(
    #     dim, dtype=torch.float32, requires_grad=True,
    #     device="cpu").unsqueeze(0))
    if batch_size > 1 and x.ndim > 0 and batch_size == x.shape[0]:
        a = x
    else:
        a = x * torch.ones(batch_size, dtype=torch.float32, device="cpu")
    if batch_size > 1 and t.ndim > 0 and batch_size == t.shape[0]:
        pass
    else:
        t = t * torch.ones(batch_size, dtype=torch.int32, device="cpu")
    ox = a.unsqueeze(1).clone().requires_grad_(True)
what is the purpose of this change?
This is needed because we allow the x and t inputs to be scalars, which will be expanded to be consistent with batch_size. Made the code easier to read and added comments.
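For illustration, a standalone sketch of that expansion; the helper name and the final assert are illustrative, not part of data_buffer_test.py.

    import torch

    def expand_inputs(x, t, batch_size):
        # x and t may be Python scalars; broadcast them to (batch_size,)
        # tensors so the rest of the test can treat them uniformly.
        x = torch.as_tensor(x, dtype=torch.float32)
        t = torch.as_tensor(t, dtype=torch.int32)
        if not (batch_size > 1 and x.ndim > 0 and x.shape[0] == batch_size):
            x = x * torch.ones(batch_size, dtype=torch.float32)
        if not (batch_size > 1 and t.ndim > 0 and t.shape[0] == batch_size):
            t = t * torch.ones(batch_size, dtype=torch.int32)
        # (batch_size, 1) float tensor with gradients, like `ox` in the diff.
        ox = x.unsqueeze(1).clone().requires_grad_(True)
        return ox, t

    ox, t = expand_inputs(x=2.0, t=0, batch_size=8)
    assert ox.shape == (8, 1) and t.shape == (8,)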
alf/config_util.py
Outdated
    # Most of the times, for command line flags, this warning is a false alarm.
    # This can be useful in other failures, e.g. when the Config has already been used,
    # before configuring its value.
    logging.warning("pre_config potential error: %s", e)
This warning is hard to understand. It's better to identify the case where the Config has already been used. Perhaps throw a different type of Exception when the config has already been used in config1()?
Good point. Logging error in config1.
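For illustration, a minimal sketch of the behavior being discussed, using toy stand-ins (`_Config`, `_CONFIGS`) rather than ALF's actual config machinery: once a config value has been read, a later config1() call logs an error instead of leaving only the vague pre_config warning.

    import logging

    class _Config(object):
        # Toy stand-in for an ALF config entry; not ALF's actual class.
        def __init__(self):
            self._value = None
            self._used = False

        def get(self):
            self._used = True
            return self._value

    _CONFIGS = {}

    def config1(name, value):
        # If the config value has already been read, setting it now cannot
        # take effect, so report it loudly rather than relying on the vague
        # pre_config warning above.
        cfg = _CONFIGS.setdefault(name, _Config())
        if cfg._used:
            logging.error(
                "Config '%s' has already been used before being configured; "
                "the new value %r will not affect earlier uses.", name, value)
        cfg._value = value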
le-horizon left a comment:
Thanks for the comments. Done all, PTAL.
Le
adding various asserts, debug messages, and small improvements