Integrated Gradients for Interpretability
Integrated Gradients is a technique for attributing a classification model's prediction to its input features.
Tags: interpretability, gradients, convolutional-neural-networks, deep-learning, tutorial, research-paper, arxiv:1703.01365
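
The project's code isn't reproduced on this page, but the core of the technique fits in a few lines: attributions are (x - baseline) scaled by the average gradient of the target-class score along the straight-line path from a baseline to the input. Below is a minimal sketch in TensorFlow, assuming a Keras image classifier and inputs of shape (1, H, W, C); the function name, step count, and black-image baseline are illustrative choices, not the author's exact implementation.

```python
import tensorflow as tf

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate Integrated Gradients with a Riemann sum over `steps`
    points on the straight-line path from `baseline` to the input `x`."""
    # Interpolation coefficients alpha in [0, 1]; the extra axes broadcast
    # against image tensors of shape (1, H, W, C).
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline + alphas * (x - baseline)

    # Gradient of the target-class score at every interpolated point.
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, target_class]
    grads = tape.gradient(scores, interpolated)

    # Trapezoidal average of the path gradients, scaled by (x - baseline)
    # to yield one attribution per input feature (pixel/channel).
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x[0] - baseline[0]) * avg_grads

# Hypothetical usage with a Keras image classifier `model`:
# img: one preprocessed float32 image of shape (1, H, W, C)
# baseline = tf.zeros_like(img)  # black image, a common baseline choice
# attribs = integrated_gradients(model, img, baseline, target_class=281)
```

Increasing `steps` tightens the Riemann-sum approximation of the path integral at the cost of proportionally more forward/backward passes.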

Similar projects
Integrated Gradients
This tutorial walks you through an implementation of Integrated Gradients, an ML interpretability technique described in Axiomatic Attribution for Deep Networks.
Principles and Practice of Explainable Machine Learning
A survey that helps industry practitioners better understand the field of explainable machine learning and apply the right tools.
ExplainX
ExplainX is an explainable AI framework for data scientists to explain any black-box model behavior to business stakeholders.
Opening Up the Black Box: Model Understanding w/ Captum & PyTorch
A look at using Captum for model interpretability with PyTorch.
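
For the Captum route in the last entry above, the library ships Integrated Gradients out of the box. A minimal sketch follows, assuming a pretrained torchvision classifier; the ResNet model, target class, and step count are illustrative and not taken from the linked post.

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients

# Illustrative model choice; any torch.nn.Module classifier works.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
ig = IntegratedGradients(model)

# A random tensor stands in for a real preprocessed image, shape (N, C, H, W).
x = torch.randn(1, 3, 224, 224)
baseline = torch.zeros_like(x)  # black-image baseline

# Attribute the class-281 score to input pixels; delta measures how far
# the Riemann-sum approximation is from the exact path integral.
attributions, delta = ig.attribute(
    x, baselines=baseline, target=281, n_steps=50,
    return_convergence_delta=True,
)
print(attributions.shape, delta)
```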