Dear Authors,
I am developing a model that uses your dataset with docking energies. We noticed that the docking energies from AutoDock Vina can fluctuate significantly depending on the optimized input molecule geometry, the random seed, and the hyperparameters chosen for the docking process. This makes it hard for us to reproduce the set of rewards/energies in your dataset.
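For example, two Vina runs that differ only in the random seed can already give noticeably different binding energies (file names and box parameters below are placeholders, not taken from your setup):

```shell
# Same receptor/ligand/box, only the seed differs between the two runs
vina --receptor receptor.pdbqt --ligand ligand.pdbqt \
     --center_x 0 --center_y 0 --center_z 0 \
     --size_x 20 --size_y 20 --size_z 20 \
     --exhaustiveness 8 --seed 1 --out pose_seed1.pdbqt
vina --receptor receptor.pdbqt --ligand ligand.pdbqt \
     --center_x 0 --center_y 0 --center_z 0 \
     --size_x 20 --size_y 20 --size_z 20 \
     --exhaustiveness 8 --seed 2 --out pose_seed2.pdbqt
```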
Since the verification method can be unstable, we would instead like to study your model further by examining some of the top candidates generated by the GFlowNet trained on this dataset.
In the paper, you described that $10^{6}$ candidates were generated and the top-1000 were analyzed in terms of their average reward.
Could you kindly provide either the full set of $10^{6}$ generated candidates or just the top-1000, along with their scores?
Thank you very much!
Haote