Frank-Wolfe-L1 Attack #1
Hi Tomoki, thanks for the interest in our work. Could you tell us the exact model, command-line parameters, and environment that you are using?
Thanks for replying. I re-implemented the training and testing code that computes UAR scores myself, based heavily on your code, so the code used in my experiments is not exactly the same as this repo. My training and testing code reproduced roughly the same UAR scores as your paper, except for the FW-L1 attack. In addition, when I trained the model (ResNet56) with the same settings as your paper, all accuracies (on both the training and validation sets) stayed at 10% for all epochs. Are there specific techniques for dealing with the FW-L1 attack? Thanks.

Codes
But the following code is exactly the same as yours.
Params
Environment
There can be many subtle issues with adversarial training and attacks, and it's difficult to debug without seeing code. Have you tried using our PGD-Linf trained model with our FW attack code?
Hi, thanks for releasing the codes.

When I used FrankWolfeAttack in advex-uar/advex_uar/attacks/fw_attack.py to evaluate UAR for a model trained with PGD-Linf adversarial training, the accuracy for every eps was below 10%. In addition to this evaluation problem, when training with FrankWolfeAttack, the accuracy on the natural training data does not increase (it stays below 10%). Could you give me some advice on dealing with this attack method?
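For context on what is being debugged: the step that distinguishes a Frank-Wolfe L1 attack from PGD-style attacks is the linear maximization oracle over the L1 ball, which moves all of the eps budget onto a single coordinate per iteration. The sketch below is a minimal illustration of that standard step under stated assumptions; it is not the implementation in fw_attack.py, and the function name and signature are hypothetical.

```python
import numpy as np

def fw_l1_attack_step(x, x0, grad, eps, step):
    """One Frank-Wolfe step for maximizing a loss over the L1 ball
    {delta : ||delta||_1 <= eps} centered at the clean input x0.

    The linear maximization oracle over an L1 ball is sparse: it puts
    the entire eps budget on the coordinate with the largest gradient
    magnitude, signed to increase the loss.
    """
    g = grad.ravel()
    i = np.argmax(np.abs(g))          # coordinate of steepest ascent
    v = np.zeros_like(g)
    v[i] = eps * np.sign(g[i])        # vertex of the L1 ball
    v = x0.ravel() + v                # vertex mapped into input space
    # Convex combination keeps the iterate inside the feasible set,
    # so no projection step is needed (unlike PGD).
    return (1.0 - step) * x.ravel() + step * v
```

Because each step is a convex combination of feasible points, the perturbation stays within the L1 ball by construction; a collapse to 10% accuracy (random chance on CIFAR-10) in both evaluation and training usually points to an issue elsewhere, e.g. the step-size schedule or gradient sign.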