Hi, thanks for the great work!
Do you plan to add support for defining a mask for adversarial attacks (e.g., PGD, FGSM) in the future?
I'm thinking of a mask that specifies which parts of the input the adversarial perturbation should be applied to.
This would make it possible to perturb only certain regions of the input while leaving the rest unchanged, so adversarial examples could be generated much more flexibly.
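To make the request concrete, here is a rough sketch of what I mean for PGD in PyTorch. The function name `masked_pgd`, the `mask` argument, and the hyperparameter defaults are purely illustrative, not part of your existing API; the mask is assumed to be a binary tensor broadcastable to the input, with 1 = perturb and 0 = keep unchanged:

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, x, y, mask, eps=8/255, alpha=2/255, steps=10):
    """PGD where the perturbation is applied only where mask == 1.

    x:    input batch in [0, 1]
    y:    ground-truth labels
    mask: binary tensor broadcastable to x (1 = perturb, 0 = leave unchanged)
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient ascent step, restricted to the masked region
        x_adv = x_adv.detach() + alpha * grad.sign() * mask
        # Project the perturbation back into the eps-ball (masked),
        # then clamp to the valid input range
        delta = torch.clamp(x_adv - x, -eps, eps) * mask
        x_adv = torch.clamp(x + delta, 0, 1).detach()
    return x_adv
```

The same idea would apply to FGSM with a single step. An API-level option (e.g., passing a mask to the attack constructor) would cover both cases.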
Thanks a lot!