Still cannot reproduce the results using the released model #9
Comments
Can you tell me the number of files in |
The number of files in |
@ToBeCodeCreater On my side, I have 6,557 test samples and generated results from running test.py. I know the testing stage takes some time, but can you try it again? Sorry for the inconvenience, but I have set up the repository from scratch in two clean containers and got numbers similar to those in the paper.
@jihoonerd I have tested the released model three times and got similar results. I'm wondering what problem is causing this. The size of the testing data in |
I find that a model trained using this repository performs well on the testing dataset (I got numbers similar to those in the paper), but the results of the released model are very poor. Very confusing.
I also got similar results when I used the pretrained weight (for HumanML3D). I would be thankful if the authors could check whether the pretrained weight has a problem.
Similar to #5, I still cannot reproduce the results using the released model, and the results I got were extremely poor:
{"r_precision": {"top-1": 0.06051829268292683, "top-2": 0.1298780487804878, "top-3": 0.19603658536585367}, "fid": 1481.7516534444785, "clip_score": {"clip_score": 0.14643903637110403}, "mid": -53.25080871582031}
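For anyone comparing runs, the metrics line above can be loaded and inspected directly with the standard library; this is just a convenience sketch for reading the output pasted in this issue, not part of the repository's tooling:

```python
import json

# Metrics string copied verbatim from the test run reported above.
report = (
    '{"r_precision": {"top-1": 0.06051829268292683, '
    '"top-2": 0.1298780487804878, "top-3": 0.19603658536585367}, '
    '"fid": 1481.7516534444785, '
    '"clip_score": {"clip_score": 0.14643903637110403}, '
    '"mid": -53.25080871582031}'
)

metrics = json.loads(report)

# Print the metrics in a readable form for side-by-side comparison.
for k in ("top-1", "top-2", "top-3"):
    print(f"R-precision {k}: {metrics['r_precision'][k]:.4f}")
print(f"FID: {metrics['fid']:.2f}")
print(f"CLIP score: {metrics['clip_score']['clip_score']:.4f}")
```

An FID above 1000 is far outside the range reported in the paper, which is what makes these numbers look like a broken checkpoint rather than normal run-to-run variance.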
I installed pytorch-lightning and transformers at the correct versions (1.8.6 and 4.19.2). I tested the released model on a GTX 1080Ti using the command
python test.py model=diffusion_hml3d.yaml datamodule=humanml3d.yaml ckpt_path=pretrained/flame_hml3d_bc.ckpt
My Python environment is as follows:
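Since version mismatches in pytorch-lightning or transformers are a common cause of silent checkpoint-loading differences, it may help to verify the pins before running test.py. A minimal stdlib-only sketch (the helper names `parse_version` and `check_version` are hypothetical, not part of the FLAME repository):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like '1.8.6' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def check_version(installed: str, required: str) -> bool:
    """Return True only if the installed version exactly matches the pin."""
    return parse_version(installed) == parse_version(required)

# Pins taken from this issue thread.
print(check_version("1.8.6", "1.8.6"))    # pytorch-lightning -> True
print(check_version("4.19.2", "4.19.2"))  # transformers -> True
print(check_version("1.9.0", "1.8.6"))    # a mismatched install -> False
```

In practice you would feed `check_version` the value of `pytorch_lightning.__version__` and `transformers.__version__` from the environment that runs test.py.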