Project title: Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Progressive Exaggeration on Chest X-rays

Background: The goal of a prediction explanation method is to identify the features relevant to a neural network's prediction and convey them to the user.

The method shown below is called Latent Shift. It visualizes which pixels change when the image is simulated to contain more or less of a specific pathology. Each frame of the video is one such simulated image: the loop starts with the pathology removed, progressively adds it back, then reverses and repeats.
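The simulation described above can be sketched as follows. This is a minimal illustration, not the reference implementation: it assumes an autoencoder (`encoder`, `decoder`) and a pathology `classifier`, shifts the latent code along the gradient of the classifier's output, and decodes one frame per shift amount. The function name, the toy models, and the sign convention for the shift are all assumptions for illustration.

```python
import torch

def latent_shift_frames(x, encoder, decoder, classifier, lambdas):
    """Return one simulated image per lambda value.

    Negative lambdas are intended to remove the pathology,
    positive lambdas to exaggerate it (sign convention assumed).
    """
    z = encoder(x)                        # encode the input into latent space
    z = z.detach().requires_grad_(True)
    y = classifier(decoder(z))            # classifier score on the reconstruction
    # Gradient of the prediction with respect to the latent code
    grad, = torch.autograd.grad(y.sum(), z)
    frames = []
    with torch.no_grad():
        for lam in lambdas:
            # Shift the latent along the gradient and decode a frame
            frames.append(decoder(z + lam * grad))
    return frames

# Toy stand-ins to show the call pattern (not real X-ray models)
encoder = torch.nn.Linear(16, 4)
decoder = torch.nn.Linear(4, 16)
classifier = torch.nn.Linear(16, 1)

x = torch.randn(1, 16)
lambdas = [-2.0, -1.0, 0.0, 1.0, 2.0]
frames = latent_shift_frames(x, encoder, decoder, classifier, lambdas)
print(len(frames))  # one decoded frame per lambda
```

Playing the decoded frames in sequence, then in reverse, produces the looping GIF shown in the demo; differencing the extreme frames gives the 2D map.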

This demonstration applies the method to 3 classifiers trained on different datasets; differences in the concept each classifier has learned are visible in the resulting explanations. The same autoencoder is used for all 3 visualizations, so the only variable is the classifier. The examples are cherry-picked.

This work is under review at MIDL 2021.

Source code

View list of true positives and false positives here.


For each input image, the demo shows the Latent Shift 2D map and the Latent Shift GIF from three classifiers:

- TorchXRayVision DenseNet121-all: trained on the PadChest, NIH, CheXpert, and MIMIC-CXR datasets
- TorchXRayVision DenseNet121-mimic_ch: trained on the MIMIC-CXR dataset
- JF Healthcare DenseNet121: trained on CheXpert data for the CheXpert challenge

Predictions are visualized for the following pathologies: Cardiomegaly, Effusion, Atelectasis, Consolidation, Mass, Pneumothorax, Infiltration, Edema, Emphysema, Fibrosis, Pneumonia, Pleural_Thickening, Hernia, and Lung Opacity. Not every classifier produces a visualization for every pathology.

Joseph Paul Cohen 2021