Lime: Local Interpretable Model-Agnostic Explanations
Explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
Tags: interpretability, lime, code, paper, video, arxiv:1602.04938, library, research

This project is about explaining what machine learning classifiers (or models) are doing. At the moment, it supports explaining individual predictions for text classifiers, for classifiers that act on tabular data (numpy arrays of numerical or categorical features), and for image classifiers, via a package called lime (short for local interpretable model-agnostic explanations).
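As a rough sketch of how the lime package is typically used on tabular data (the scikit-learn random forest, the iris dataset, and the parameter values below are purely illustrative assumptions, not taken from the project description), explaining a single prediction might look like this:

```python
# Minimal sketch: explain one tabular prediction with lime.
# The classifier and dataset here are stand-ins chosen for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(iris.data, iris.target)

# The explainer perturbs the instance locally and fits an interpretable
# (sparse linear) model to the classifier's outputs on those perturbations.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)

explanation = explainer.explain_instance(
    iris.data[0], clf.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature contributions to the prediction
```

Text and image classifiers follow the same pattern with LimeTextExplainer and LimeImageExplainer, respectively: pass the instance and the model's prediction function, and get back a locally faithful, interpretable explanation.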


Similar projects
InterpretML
Fit interpretable machine learning models. Explain blackbox machine learning.
How to Explain the Prediction of a Machine Learning Model?
Model interpretability, covering two aspects: (i) interpretable models with model-specific interpretation methods and (ii) approaches to explaining black-box ...
Rusklainer
Identification of features contributing to the rupture risk prediction of intracranial aneurysms using the LIME explainer.
Interpretable Machine Learning
A guide for making black box models explainable.