ELI5
A library for debugging/inspecting machine learning classifiers and explaining their predictions.
Tags: interpretability · eli5 · debugging · inspection · explain · code · library

It provides support for the following machine learning frameworks and packages:

• scikit-learn. ELI5 can explain weights and predictions of scikit-learn linear classifiers and regressors, print decision trees as text or as SVG, show feature importances, and explain predictions of decision trees and tree-based ensembles. ELI5 understands text processing utilities from scikit-learn and can highlight text data accordingly. Pipeline and FeatureUnion are supported. It can also debug scikit-learn pipelines which contain HashingVectorizer, by undoing the hashing. A minimal usage sketch follows this list.
• Keras - explain predictions of image classifiers via Grad-CAM visualizations.
• xgboost - show feature importances and explain predictions of XGBClassifier, XGBRegressor and xgboost.Booster.
• LightGBM - show feature importances and explain predictions of LGBMClassifier and LGBMRegressor.
• CatBoost - show feature importances of CatBoostClassifier, CatBoostRegressor and catboost.CatBoost.
• lightning - explain weights and predictions of lightning classifiers and regressors.
• sklearn-crfsuite. ELI5 can inspect the weights of sklearn_crfsuite.CRF models.
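As a minimal sketch of the scikit-learn workflow: explain_weights gives a global view of a fitted model and explain_prediction explains a single example. The dataset, pipeline, and variable names below are illustrative only; format_as_text is used so the output prints in a plain console (in a notebook, show_weights/show_prediction render the same explanations as HTML).

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

import eli5

# Illustrative data: a two-class subset of 20 newsgroups
train = fetch_20newsgroups(subset='train',
                           categories=['alt.atheism', 'sci.space'])

vec = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vec.fit_transform(train.data), train.target)

# Global explanation: top per-class feature weights of the linear model
print(eli5.format_as_text(
    eli5.explain_weights(clf, vec=vec, top=10,
                         target_names=train.target_names)))

# Local explanation: how each token contributes to one prediction
print(eli5.format_as_text(
    eli5.explain_prediction(clf, train.data[0], vec=vec,
                            target_names=train.target_names)))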

ELI5 also implements several algorithms for inspecting black-box models (see Inspecting Black-Box Estimators):

• TextExplainer can explain predictions of any text classifier using the LIME algorithm (Ribeiro et al., 2016). There are utilities for using LIME with non-text data and arbitrary black-box classifiers as well, but this feature is currently experimental. A sketch follows this list.
• The permutation importance method can be used to compute feature importances for black-box estimators; it is also sketched below.
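A sketch of the TextExplainer workflow: fit it on one document plus the black-box model's predict_proba, and it trains a white-box approximation on perturbed versions of that text (the LIME idea). The pipeline and texts below are toy stand-ins, not part of ELI5.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

import eli5
from eli5.lime import TextExplainer

# Toy stand-in for an arbitrary black-box text classifier:
# TextExplainer only needs a predict_proba callable.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipe.fit(["good movie", "great film", "bad movie", "awful film"],
         [1, 1, 0, 0])

te = TextExplainer(random_state=42)
te.fit("a truly great movie", pipe.predict_proba)  # samples perturbed texts

# The explanation comes from the fitted white-box model; te.metrics_
# reports how well it approximates the black box around this document
print(eli5.format_as_text(te.explain_prediction()))
print(te.metrics_)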
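A sketch of permutation importance: wrap any fitted estimator in PermutationImportance, fit it on held-out data, and each feature's importance is the score drop observed when that feature's column is shuffled. The dataset and split here are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

import eli5
from eli5.sklearn import PermutationImportance

data = load_iris()
X_train, X_val, y_train, y_val = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Fit on held-out data so importances reflect generalization,
# not what the forest memorized during training
perm = PermutationImportance(model, random_state=1).fit(X_val, y_val)
print(eli5.format_as_text(
    eli5.explain_weights(perm, feature_names=data.feature_names)))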


Similar projects
Interpretable Machine Learning
Extracting human-understandable insights from any machine learning model.
A Survey of the State of Explainable AI for NLP
Overview of the operations and explainability techniques currently available for generating explanations for NLP model predictions.
Lucent
Lucid library adapted for PyTorch.
Explainable ML Monitoring
A video overview of some of the risks of AI, the need for explainable monitoring, and what exactly "explainable monitoring" means.