This repository contains tools to interpret and explain machine learning models using Integrated Gradients and Expected Gradients. In addition, it contains code to explain interactions in deep networks using Integrated Hessians and Expected Hessians - methods that we introduced in our most recent paper: Explaining Explanations: Axiomatic Feature Interactions for Deep Networks.
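To make the core idea concrete, here is a minimal sketch of how Integrated Gradients can be approximated for a Keras model. It is an illustration only, not this repository's API: the function name, the toy model, the zero baseline, and the number of interpolation steps are all assumptions chosen for the example.

```python
# Minimal Integrated Gradients sketch (illustrative; not this repository's API).
import tensorflow as tf

def integrated_gradients(model, inputs, baseline, steps=50):
    """Approximate attributions by integrating gradients along the straight-line
    path from `baseline` to `inputs` (trapezoidal sum over `steps` segments)."""
    alphas = tf.linspace(0.0, 1.0, steps + 1)                # interpolation coefficients
    delta = inputs - baseline                                # (1, features)
    interpolated = baseline + alphas[:, tf.newaxis] * delta  # (steps + 1, features)

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)                          # (steps + 1, 1)
    grads = tape.gradient(preds, interpolated)               # (steps + 1, features)

    # Trapezoidal rule over the path, then scale by the input difference.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return delta * avg_grads                                 # per-feature attributions

# Toy usage on a 4-feature regression model with a zero baseline (assumptions).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
x = tf.constant([[0.5, -1.0, 2.0, 0.1]])
print(integrated_gradients(model, x, tf.zeros_like(x)))
```

Expected Gradients replace the single fixed baseline with baselines sampled from the training data, and Integrated and Expected Hessians apply the same path-integration idea to second derivatives in order to attribute pairwise feature interactions.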

Authors
Explainable AI for Biology and Precision Medicine
MD/PhD student at the University of Washington, interested in robust and explainable machine learning models for medical applications.
Similar projects
Integrated Gradients
This tutorial walks you through an implementation of Integrated Gradients, an ML interpretability technique described in Axiomatic Attribution for Deep ...
Interpretability in ML: A Broad Overview
An overview of the sub-field of machine learning interpretability, with example models and graphics.
FlashTorch
Visualization toolkit for neural networks in PyTorch
BERTology Meets Biology
Interpreting Attention in Protein Language Models.