Interpretable Machine Learning
Extracting human-understandable insights from any machine learning model.
interpretability permutation-importance partial-dependence-plots shap-values article tutorial

Machine learning doesn't have to be a black box anymore. What use is a good model if we cannot explain its results to others? Interpretability is as important as building the model itself: for machine learning systems to gain wider acceptance, they must be able to provide satisfactory explanations for their decisions. As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough."

Some of the benefits that interpretability brings along are:

  • Reliability
  • Debugging
  • Informing feature engineering
  • Directing future data collection
  • Informing human decision-making
  • Building trust
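
The tutorial covers the three model-agnostic techniques named in the tags above: permutation importance, partial dependence plots, and SHAP values. The sketch below shows how they might be applied in practice. It assumes scikit-learn and the shap library; the breast-cancer dataset and random forest are placeholder choices for illustration, not the tutorial's exact setup:

    import matplotlib.pyplot as plt
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay, permutation_importance
    from sklearn.model_selection import train_test_split

    # Placeholder data and model: any fitted estimator works the same way.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Permutation importance: shuffle one feature at a time and measure
    # how much the held-out score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")

    # Partial dependence plot: the marginal effect of one feature
    # on the model's predictions.
    PartialDependenceDisplay.from_estimator(model, X_test, features=["mean radius"])
    plt.show()

    # SHAP values: per-prediction feature attributions based on Shapley values.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)

Permutation importance and partial dependence ship with scikit-learn's inspection module; SHAP requires the separate shap package (pip install shap).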
