
Mask for Adversarial Attacks #941


@reheinrich

Hi, thanks for the great work!

Do you plan to add the possibility of defining a mask for adversarial attacks (e.g., PGD, FGSM) in the future?

I'm thinking of a mask that defines which parts of the input the adversarial perturbations are applied to.

This would make it possible to perturb only certain parts of the input while leaving the rest unchanged.

That way, adversarial examples could be generated much more flexibly.
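For illustration, here is a minimal sketch of what I have in mind, assuming a PyTorch-style setup (the `masked_pgd` function and the binary `mask` tensor are hypothetical, not existing API):

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, x, y, mask, eps=8/255, alpha=2/255, steps=10):
    """PGD attack where the perturbation is restricted by a binary mask:
    only positions with mask == 1 are perturbed, the rest stays untouched."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # take a signed gradient step, zeroed outside the masked region
            x_adv = x_adv + alpha * grad.sign() * mask
            # project back into the eps-ball around x and the valid [0, 1] range
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

# e.g., perturb only the top-left 8x8 patch of 32x32 images
mask = torch.zeros(1, 1, 32, 32)
mask[..., :8, :8] = 1.0
```

Here `mask` would be broadcastable to the input shape, with ones where perturbation is allowed; since the gradient step is zeroed outside the mask, the unmasked pixels remain exactly equal to the original input.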

Thanks a lot!
