• Multi-modal: Supports interpretability of models across modalities including vision, text, and more.
  • Built on PyTorch: Supports most types of PyTorch models and can be used with minimal modification to the original neural network (see the sketch after this list).
  • Extensible: Open source, generic library for interpretability research. Easily implement and benchmark new algorithms.
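
To illustrate the "minimal modification" point, here is a minimal sketch of attributing a prediction with Captum's IntegratedGradients. The ToyModel network, its sizes, and the zero baseline are illustrative stand-ins for whatever model and reference input you already have; the Captum calls follow the library's standard attribution API.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Illustrative stand-in for an existing, unmodified PyTorch model.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 2))

    def forward(self, x):
        return self.net(x)

model = ToyModel()
model.eval()

inputs = torch.rand(1, 3, requires_grad=True)
baseline = torch.zeros(1, 3)  # reference point the attributions are measured against

# Wrap the model as-is and compute per-feature attributions for class index 0.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baseline, target=0, return_convergence_delta=True
)
print(attributions)
print("Convergence delta:", delta.item())
```

The same pattern applies to vision or text models: the model itself is untouched, and the attribution algorithm is swapped by wrapping the model with a different Captum class.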

