
Some questions about the data augmentation step #10

Open
Vote2Cap-DETR opened this issue Feb 6, 2021 · 3 comments

Comments

@Vote2Cap-DETR

No description provided.

@Vote2Cap-DETR (Author) commented Feb 6, 2021

In my experiments, the data augmentation step seems to have a negative effect on the accuracy in the ModelNet40 experiment.
Have you encountered the same problem?

@Vote2Cap-DETR changed the title from "Could you provide the pre-trained weights on ModelNet40? I have followed your details in the paper, but only got around 89% accuracy. Additionally, The data augmentation" to "Some questions about the data augmentation step" on Feb 6, 2021
@MenghaoGuo (Owner) commented
Hi,
I have not encountered the same problem. Could you tell me which data augmentation causes the performance drop in your experiments?

@Vote2Cap-DETR (Author) commented

I perform the same augmentations as mentioned in your paper: translation in [-0.2, 0.2], scaling in [0.67, 1.5], and random input dropout. (Does random input dropout mean sampling the input points uniformly?)
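
For anyone landing here with the same question, below is a minimal sketch of what these augmentations commonly look like in ModelNet40 training code (for example, the PointNet++-style data utilities). The function names and the replace-with-first-point dropout scheme are assumptions about a typical pipeline, not necessarily what this repository or the paper uses.

```python
import numpy as np

def translate_pointcloud(points, shift_range=0.2):
    # Random per-cloud translation, one offset per axis drawn from
    # [-shift_range, shift_range] (the [-0.2, 0.2] range mentioned above).
    shift = np.random.uniform(-shift_range, shift_range, size=(1, 3))
    return points + shift

def scale_pointcloud(points, lo=0.67, hi=1.5):
    # Random scaling with a single factor drawn from [lo, hi]
    # (the [0.67, 1.5] range mentioned above).
    scale = np.random.uniform(lo, hi)
    return points * scale

def random_input_dropout(points, max_dropout_ratio=0.875):
    # "Random input dropout" as implemented in the PointNet++ code base:
    # a random fraction of points is overwritten with the first point, so
    # the tensor keeps a fixed size. Whether this repository uses the same
    # scheme is an assumption.
    dropout_ratio = np.random.uniform(0, max_dropout_ratio)
    drop_idx = np.where(np.random.uniform(size=points.shape[0]) <= dropout_ratio)[0]
    if drop_idx.size > 0:
        points = points.copy()
        points[drop_idx] = points[0]
    return points

def augment(points):
    # points: (N, 3) array of xyz coordinates.
    return random_input_dropout(scale_pointcloud(translate_pointcloud(points)))
```

Note that with this dropout scheme the dropped points are duplicated rather than removed, which is not the same as uniformly subsampling the input points; if the two are being conflated, that could be one source of the accuracy difference being discussed.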
