Project title: Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Progressive Exaggeration on Chest X-rays
Background: The goal of a prediction explanation method is to
identify the features that are relevant to a neural network's
prediction and convey that information to the user.
The method shown
below, called Latent Shift, identifies which pixels change when the
image is simulated to have more or less of a specific pathology. Each
frame of the video is such a simulation: the video starts with the
pathology removed, progressively adds it back, then reverses and
loops.
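The core idea can be sketched numerically. In the sketch below, a toy linear encoder/decoder and a sigmoid classifier stand in for the paper's networks (all weights and sizes here are hypothetical, not the actual models): the input is encoded, the latent code is shifted along the gradient of the classifier's output with respect to the latent, and each shifted code is decoded to produce one frame.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_lat = 16, 4  # toy image and latent dimensions (illustrative only)

# Hypothetical linear autoencoder and linear-sigmoid classifier.
W_enc = rng.normal(size=(d_lat, d_img)) * 0.1  # encoder weights
W_dec = rng.normal(size=(d_img, d_lat)) * 0.1  # decoder weights
w_clf = rng.normal(size=d_img)                 # classifier weights

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def classifier(x):
    # Sigmoid probability that the pathology is present.
    return 1.0 / (1.0 + np.exp(-w_clf @ x))

def latent_shift_frames(x, lambdas):
    """One decoded image per lambda: x_lam = D(E(x) + lam * dF/dz)."""
    z = encode(x)
    p = classifier(decode(z))
    # Analytic gradient of classifier(decode(z)) w.r.t. z for the
    # linear decoder + sigmoid classifier above.
    grad = p * (1.0 - p) * (W_dec.T @ w_clf)
    return [decode(z + lam * grad) for lam in lambdas]

x = rng.normal(size=d_img)
lambdas = np.linspace(-50.0, 50.0, 9)  # negative = remove, positive = add
frames = latent_shift_frames(x, lambdas)
probs = [classifier(f) for f in frames]
# Moving along the gradient direction should not decrease the score.
assert all(a <= b for a, b in zip(probs, probs[1:]))
```

In the paper the gradient is obtained by automatic differentiation through the decoder and classifier rather than analytically; the per-pixel explanation comes from comparing frames at different lambda values.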
The animation displays the method applied to 3 classifiers trained on
different datasets. Variation in the concept each classifier has
learned is visible in the explanations. The same autoencoder is used
for all 3 visualizations, so the only variable is the classifier. The
examples are cherry-picked.