InterpretML is an open-source Python package for training interpretable machine learning models and explaining blackbox systems. Interpretability is essential for:
• Model debugging - Why did my model make this mistake?
• Detecting bias - Does my model discriminate?
• Human-AI cooperation - How can I understand and trust the model's decisions?
• Regulatory compliance - Does my model satisfy legal requirements?
• High-risk applications - Healthcare, finance, judicial, ...


Similar projects
Rusklainer
Identifies features that contribute to rupture-risk prediction for intracranial aneurysms, using the LIME explainer.
Interpretable Machine Learning
A guide for making black box models explainable.
Fairness and Machine Learning
This book gives a perspective on machine learning that treats fairness as a central concern rather than an afterthought.
Explainable Deep Learning: A Field Guide for the Uninitiated
A field guide to deep learning explainability for newcomers to the area.