
Adding "Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation"  #694

@ieee8023

Description


🚀 Feature

(I am interested in coding this, just need some guidance for a pull request)

Integrate an implementation of the "Latent Shift" method into Captum. Paper: https://openreview.net/forum?id=rnunjvgxAMt

You can look at some example GIFs and 2D attribution maps here: https://mlmed.org/gifsplanation/
You can try a demo here too! https://colab.research.google.com/github/mlmed/gifsplanation/blob/main/demo.ipynb

Motivation

I am the author of this work and I would like to enable more people to use it. I have the current source code here: https://github.com/mlmed/gifsplanation

The approach uses a trained encoder/decoder pair together with a classifier to generate counterfactual explanations (and 2D attribution maps). I imagine this would integrate well into your library because it works with any black-box encoder/decoder and classifier.
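For concreteness, the core computation looks roughly like this (a simplified sketch, assuming generic `encode`/`decode`/`classifier` callables; names are illustrative, not the repo's actual API):

```python
import torch

def latent_shift_frames(x, encode, decode, classifier, lambdas):
    """Sketch of the core idea: shift the latent code along the gradient of the
    classifier's prediction and decode each shifted latent into a frame."""
    z = encode(x).detach().requires_grad_(True)   # latent representation of the input
    pred = classifier(decode(z))                  # prediction on the reconstruction
    grad = torch.autograd.grad(pred.sum(), z)[0]  # d prediction / d latent
    # Sweeping lambda yields a sequence of counterfactual frames (the gif).
    return [decode(z - lam * grad).detach() for lam in lambdas]
```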

Pitch

  1. Are you interested in accepting a pull request for this?
  2. How should it fit into the library? (Could you provide some method stubs that I can implement?)

The main challenge I see is how best to pass the autoencoder in, since it needs to be an extra argument alongside the classifier. If you don't already have an idea for the best interface, I can brainstorm one.
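Just to make the discussion concrete, here is one possible shape for the interface (purely hypothetical; the class name, constructor arguments, and `attribute` signature are placeholders for discussion, not existing Captum API):

```python
from typing import Callable, Optional, Sequence

import torch
from captum.attr import Attribution

class LatentShift(Attribution):
    """Hypothetical stub: the autoencoder is supplied at construction time,
    alongside the usual forward_func (the classifier)."""

    def __init__(self, forward_func: Callable,
                 encode: Callable,   # input -> latent
                 decode: Callable):  # latent -> input
        super().__init__(forward_func)
        self.encode = encode
        self.decode = decode

    def attribute(self, inputs: torch.Tensor,
                  target: Optional[int] = None,
                  lambdas: Sequence[float] = (0.0, -10.0, -100.0)):
        # Would return a 2D attribution map (and optionally the counterfactual frames).
        raise NotImplementedError
```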

Here is an overview of how the method works:
(Figure: gif-overview-fig2 — overview of the Latent Shift method)

And here is how 2D attribution maps are constructed:

(Figure: 2d-attribution-map — construction of a 2D attribution map)
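As a rough illustration (the exact reduction used in the paper/repo may differ), the frames from the lambda sweep can be collapsed into a single 2D map by measuring how much each pixel changes relative to the unshifted reconstruction:

```python
import torch

def frames_to_attribution_map(frames: torch.Tensor) -> torch.Tensor:
    """frames: (num_lambdas, C, H, W) stack of decoded images from the lambda sweep,
    with frames[0] the unshifted (lambda = 0) reconstruction. Returns an (H, W) map."""
    diffs = (frames - frames[:1]).abs()           # per-pixel change vs. the reconstruction
    return diffs.max(dim=0).values.mean(dim=0)    # strongest change over lambdas, averaged over channels
```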
