You can apply feature visualization techniques (such as saliency maps and activation maximization) to your model with just a few lines of code. The library works with the pre-trained models that ship with torchvision and integrates seamlessly with custom models built in PyTorch.
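Under the hood, a gradient-based saliency map backpropagates a class score to the input pixels and visualizes the gradient magnitudes. Below is a minimal hand-rolled sketch in plain PyTorch using a torchvision ResNet, not this project's own API; the image path `input.jpg` is a placeholder.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pre-trained model from torchvision and switch to inference mode.
model = models.resnet18(pretrained=True).eval()

# Standard ImageNet preprocessing; 'input.jpg' is a placeholder path.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open('input.jpg')).unsqueeze(0)
image.requires_grad_()  # track gradients w.r.t. the input pixels

# Backpropagate the top predicted class score to the input.
scores = model(image)
scores[0, scores.argmax()].backward()

# Saliency = per-pixel gradient magnitude, collapsed over colour channels.
saliency, _ = image.grad.abs().max(dim=1)
print(saliency.shape)  # torch.Size([1, 224, 224])
```

Activation maximization works in the opposite direction: instead of attributing a prediction to an input, it synthesizes an input that maximizes a chosen activation via gradient ascent. A sketch under the same assumptions (the target class index 0 is arbitrary):

```python
# Start from random noise and ascend the gradient of a target logit.
synth = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([synth], lr=0.1)

for _ in range(30):
    optimizer.zero_grad()
    loss = -model(synth)[0, 0]  # minimizing the negative logit maximizes it
    loss.backward()
    optimizer.step()
```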


Authors
Misa Ogura | Research Software Engineer | Published Scientist | Co-founder of @womendrivendev
Similar projects
Interpretable Machine Learning for Computer Vision
Recent progress on visualization, interpretation, and explanation methodologies for analyzing both the data and the models in computer vision.
GANSpace: Discovering Interpretable GAN Controls
This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis.
What does a CNN see?
A clean notebook showcasing @TensorFlow 2.0: an example of end-to-end deep learning with interpretability.
Integrated Gradients in TensorFlow 2
In this tutorial, you will walk step-by-step through an implementation of IG in TensorFlow 2 to understand the pixel feature importances of an image ...