
Frank-Wolfe-L1 Attack #1

Open

tanimutomo opened this issue Oct 8, 2019 · 3 comments

Comments

@tanimutomo

Hi, thanks for releasing the code.
When I used FrankWolfeAttack in advex-uar/advex_uar/attacks/fw_attack.py to evaluate UAR for a model trained with PGD-Linf adversarial training, the accuracy at every eps was lower than 10%.
In addition to this evaluation problem, when training with FrankWolfeAttack, the accuracy on the natural training data does not increase (it stays below 10%).
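
For context, this is my understanding of the Frank-Wolfe L1 attack as a minimal sketch (the function and parameter names are my own, and fw_attack.py may differ in details such as the step-size schedule or clipping):

```python
import torch
import torch.nn.functional as F

def fw_l1_attack(model, x0, y, eps, num_steps=20):
    """Minimal Frank-Wolfe sketch: maximize the loss over the L1 ball
    ||x - x0||_1 <= eps (hypothetical names, for illustration only)."""
    x = x0.clone()
    for t in range(num_steps):
        x = x.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)

        # Linear maximization oracle over the L1 ball centered at x0:
        # the maximizing vertex puts the whole eps budget on the single
        # coordinate with the largest gradient magnitude (per example).
        g = grad.view(grad.size(0), -1)
        idx = g.abs().argmax(dim=1, keepdim=True)
        v = torch.zeros_like(g)
        v.scatter_(1, idx, eps * g.gather(1, idx).sign())
        s = x0 + v.view_as(x0)

        gamma = 2.0 / (t + 2)  # classic Frank-Wolfe step size
        x = (x + gamma * (s - x)).clamp(0, 1)  # assumes inputs in [0, 1]
    return x.detach()
```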

Could you give me any advice on dealing with this attack method?

@ddkang
Owner

ddkang commented Oct 9, 2019

Hi Tomoki, thanks for your interest in our work.

Could you tell us the exact model, command line parameters, and environment that you are using?

@tanimutomo
Author

tanimutomo commented Oct 9, 2019

Thanks for the reply.

I re-implemented the training and testing code that computes the UAR scores myself, basing it heavily on your code.

So the code used in my experiments is not exactly the same as this repo's.

My training and testing code reproduced almost the same UAR scores as your paper, except for the FW-L1 attack.
The accuracies under the FW-L1 attack are around 3%-10% across the various eps values.
The details of the experiment are below.

In addition, when I trained the model (ResNet56) using the same settings as your paper, all accuracies (on both the training and validation sets) stayed at about 10% through all epochs.

Are there specific techniques for dealing with the FW-L1 attack?

Thanks.

Code

However, the following components are exactly the same as your code:

  • dataset
  • model (ResNet56)

Other code (e.g., the trainer and the testing code) I implemented myself.

Params

  • Dataset: CIFAR10
  • Model: ResNet56
  • Epochs: 200
  • Optimizer: SGD (lr = 0.1, scheduler([100, 150], gamma = 0.1))
  • Attack: PGD-Linf (eps = 32.0, step_size = eps / sqrt(num_of_steps), num_of_steps = 10); see the sketch below
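
For concreteness, the attack above corresponds to roughly the following minimal sketch (my own names; I assume eps = 32.0 is on the 0-255 pixel scale, i.e. 32/255 for inputs in [0, 1]):

```python
import math
import torch
import torch.nn.functional as F

def pgd_linf(model, x0, y, eps=32.0 / 255, num_steps=10):
    """Minimal PGD-Linf sketch with step_size = eps / sqrt(num_of_steps);
    assumes inputs in [0, 1] (eps rescaled from the 0-255 pixel scale)."""
    step_size = eps / math.sqrt(num_steps)
    x = (x0 + torch.empty_like(x0).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(num_steps):
        x = x.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x = x + step_size * grad.sign()
        x = x0 + (x - x0).clamp(-eps, eps)  # project onto the Linf ball
        x = x.clamp(0, 1)                   # stay in the valid image range
    return x.detach()
```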

Environment

  • python 3.7
  • torch 1.1.0
  • torchvision 0.3.0

@ddkang
Owner

ddkang commented Oct 9, 2019

Many subtle issues can arise with adversarial training and attacks, and it's difficult to debug without seeing the code.

Have you tried using our PGD-Linf trained model with our FW attack code?
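
That is, a sanity check along these lines, with our released PGD-Linf checkpoint loaded into `model` and the FrankWolfeAttack from fw_attack.py wrapped as the `attack` callable (the `attack(model, x, y)` interface below is illustrative, not the class's actual signature):

```python
import torch

def adv_accuracy(model, loader, attack, device="cuda"):
    """Accuracy of `model` under `attack`, where `attack(model, x, y)`
    returns adversarial examples (hypothetical wrapper interface)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.size(0)
    return correct / total
```

If that reproduces the paper's numbers, the difference is likely in your re-implementation rather than in the attack itself.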
