Hello, we are using adm_in256_stats.npz to reproduce the precision and recall metrics of MAR-L, and we get 0.51 precision and 0.60 recall, which differ from the 0.81 and 0.60 reported in the paper. Could you describe how the precision and recall results reported in the paper were calculated?
The following is our calculation code, with `prc` set to `True`:

metrics_dict = torch_fidelity.calculate_metrics(
    input1=save_folder,
    input2=input2,
    fid_statistics_file=fid_statistics_file,
    cuda=True,
    isc=True,
    fid=True,
    kid=False,
    prc=True,
    verbose=False,
)
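For reference, torch-fidelity's `prc` flag implements the improved precision and recall of Kynkäänniemi et al. (2019), so one possible source of the mismatch is a different feature extractor or implementation on the paper's side (e.g. the ADM evaluation suite). Below is a minimal NumPy sketch of that k-NN-manifold definition to make the computation concrete; the function names and the `k=3` default are my own for illustration, not torch-fidelity's API:

```python
import numpy as np

def knn_radii(feats, k=3):
    """Radius of each point = distance to its k-th nearest neighbor
    within the same set (column 0 of the sorted distances is the
    zero self-distance, so index k skips it for k >= 1)."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]

def manifold_coverage(queries, refs, radii):
    """Fraction of query points falling inside at least one reference
    hypersphere (center = reference point, radius = its k-NN radius)."""
    d = np.linalg.norm(queries[:, None, :] - refs[None, :, :], axis=-1)
    return float(np.mean(np.any(d <= radii[None, :], axis=1)))

def precision_recall(real_feats, fake_feats, k=3):
    # Precision: how many generated samples land on the real manifold.
    precision = manifold_coverage(fake_feats, real_feats,
                                  knn_radii(real_feats, k))
    # Recall: how many real samples land on the generated manifold.
    recall = manifold_coverage(real_feats, fake_feats,
                               knn_radii(fake_feats, k))
    return precision, recall
```

Running this on the VGG-16 (torch-fidelity) versus Inception features used by the paper's evaluation code could easily produce precision gaps of the size you observe, even with identical samples.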