Interpretable Machine Learning for Computer Vision
Recent progress on visualization, interpretation, and explanation methodologies for analyzing both the data and the models in computer vision.
computer-vision interpretability cvpr-2020 article video

Complex machine learning models such as deep convolutional neural networks and recurrent neural networks have recently made great progress in a wide range of computer vision applications, such as object/scene recognition, image captioning, and visual question answering. But they are often perceived as black boxes. As models grow deeper in pursuit of better recognition accuracy, it becomes even harder to understand what predictions a model makes and why.
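One common way to peek inside such a black box is a gradient-based saliency map: the absolute gradient of the predicted class score with respect to the input pixels highlights which pixels most influence the prediction. The sketch below is a minimal, hypothetical illustration using a toy linear "model" with NumPy (so the gradient can be computed by hand); the names `predict` and `saliency` are our own, and a real workflow would use autograd in PyTorch or TensorFlow on a trained CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a linear classifier over flattened 8x8 grayscale images.
# (Stand-in for a trained CNN; weights here are random for illustration.)
W = rng.normal(size=(10, 64))  # 10 class-score rows
b = np.zeros(10)

def predict(x):
    """Return the 10 class scores for a flattened image x."""
    return W @ x + b

def saliency(x):
    """Vanilla-gradient saliency: |d score_c / d x| for the top class c.

    For this linear model the gradient of class c's score with respect
    to the input is simply row c of W, so no autograd is needed.
    """
    c = int(np.argmax(predict(x)))
    grad = W[c]                      # d(score_c) / d(x)
    return np.abs(grad).reshape(8, 8)

x = rng.normal(size=64)              # a fake "image"
smap = saliency(x)
print(smap.shape)                    # (8, 8)
```

With a deep network the same idea applies unchanged; only the gradient computation moves to the framework (e.g. a backward pass from the top logit to the input tensor).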

Previous Interpretable Machine Learning Tutorials


Similar projects
FlashTorch
Visualization toolkit for neural networks in PyTorch
Integrated Gradients in TensorFlow 2
In this tutorial, you will walk through an implementation of IG step-by-step in TensorFlow 2 to understand the pixel feature importances of an image ...
Face Mask Detector
A simple Streamlit frontend for face mask detection in images using a pre-trained Keras CNN model + OpenCV and model interpretability.
GANSpace: Discovering Interpretable GAN Controls
This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis.