SHAP: SHapley Additive exPlanations
A game theoretic approach to explain the output of any machine learning model.
interpretability shap explainability gradient-boosting shapley code library

SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations).
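The classic Shapley value that SHAP builds on can be illustrated with a toy computation: each "player" (in SHAP, a feature) is credited with its marginal contribution averaged over all orderings. The sketch below is a minimal illustration of that idea, not SHAP's own implementation; the two-player game and its payoffs are invented for the example.

```python
from math import factorial
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution v(S + {p}) - v(S) over all player orderings."""
    contrib = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            contrib[p] += v(with_p) - v(coalition)
            coalition = with_p
    n_orderings = factorial(len(players))
    return {p: c / n_orderings for p, c in contrib.items()}

# Hypothetical two-player game (payoffs made up for illustration):
payoff = {frozenset(): 0, frozenset({"a"}): 10,
          frozenset({"b"}): 20, frozenset({"a", "b"}): 50}
phi = shapley_values(["a", "b"], payoff.__getitem__)
# Efficiency property: the values sum to v(all players) - v(empty set).
```

Here `phi["a"]` is 20 and `phi["b"]` is 30, which sum to the full coalition's payoff of 50; SHAP applies the same credit-allocation scheme to model predictions, with efficient estimators in place of this brute-force enumeration.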


Similar projects
Interpretable Machine Learning
A guide for making black box models explainable.
InterpretML
Fit interpretable machine learning models. Explain blackbox machine learning.
FlashTorch
Visualization toolkit for neural networks in PyTorch
SuperGlue: Learning Feature Matching with Graph Neural Networks
SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points.