Add grace to train #663
Conversation
`janus train` now takes the architecture as an argument. The architecture is matched to the appropriate runner.
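The architecture-to-runner matching could be sketched roughly as below; this is a minimal illustration, not the actual janus-core implementation, and the runner names and config shape are hypothetical:

```python
from typing import Callable


def run_mace(config: dict) -> str:
    """Placeholder for the MACE training runner (hypothetical)."""
    return f"training mace with {config}"


def run_grace(config: dict) -> str:
    """Placeholder for the GRACE training runner (hypothetical)."""
    return f"training grace with {config}"


# Map each supported architecture name to its runner.
RUNNERS: dict[str, Callable[[dict], str]] = {
    "mace": run_mace,
    "grace": run_grace,
}


def train(arch: str, config: dict) -> str:
    """Dispatch training to the runner matching the requested architecture."""
    try:
        runner = RUNNERS[arch]
    except KeyError:
        raise ValueError(f"Unsupported architecture: {arch!r}") from None
    return runner(config)
```

Unknown architectures raise a `ValueError` rather than failing deep inside a runner.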
MACE and NequIP now both output to `./janus_results` by default.
Co-authored-by: Jacob Wilkins <46597752+oerc0122@users.noreply.github.com>
This file would not be required if we upgraded to tensorpotential 0.5.5, since that adds extxyz support.
The file is small, though, and the upgrade would also require various modifications to the xyz files we have.
We definitely should upgrade at some point, but it conflicts with basically everything else via PyTorch/CUDA version clashes (see ICAMS/grace-tensorpotential#23), if I remember correctly.
This may be more tractable once we cut out some of the unsupported MLIPs from our extras
Grace also seems quite slow (60 s for one fine-tuning epoch) and memory-hungry (13 GB).