Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.
Goals
- Provide high-quality reference implementations of black-box ML model explanation and interpretation algorithms
- Define a consistent API for interpretable ML methods
- Support multiple use cases (e.g. tabular, text and image data; classification and regression)
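The black-box, instance-based design means an explainer needs access only to a model's prediction function, never to its internals. As a minimal sketch of the consistent fit/explain pattern the goals describe (the class and prediction function below are illustrative stand-ins, not Alibi's actual implementation):

```python
# Toy illustration of the black-box explainer pattern: the explainer
# receives only a prediction function, not the model itself.
# All names here are illustrative stand-ins, not Alibi classes.

def predict_fn(x):
    # Stand-in black-box model: classify by the sign of the first feature.
    return [1 if row[0] > 0 else 0 for row in x]

class ToyExplainer:
    """Minimal fit/explain interface in the spirit of a consistent API."""

    def __init__(self, predictor):
        self.predictor = predictor

    def fit(self, X_train):
        # Record simple statistics from the training data that an
        # explainer would typically use to construct explanations.
        self.n_features = len(X_train[0])
        return self

    def explain(self, x):
        # Return the black-box prediction plus a trivial explanation payload.
        return {"prediction": self.predictor([x])[0],
                "features": self.n_features}

explainer = ToyExplainer(predict_fn).fit([[0.5, 1.0], [-0.3, 2.0]])
explanation = explainer.explain([0.7, 0.1])
```

The same fit/explain shape applies whether the data is tabular, text, or images, which is what lets one API serve all the use cases listed above.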
Examples
- Anchor explanations for income prediction
- Anchor explanations on the Iris dataset
- Anchor explanations for movie sentiment
- Anchor explanations for ImageNet
- Anchor explanations for fashion MNIST
- Contrastive Explanations Method (CEM) applied to MNIST
- Contrastive Explanations Method (CEM) applied to Iris dataset
- Counterfactual instances on MNIST
- Counterfactuals guided by prototypes on MNIST
- Counterfactuals guided by prototypes on Boston housing dataset
- Counterfactual explanations with one-hot encoded categorical variables
- Counterfactual explanations with ordinally encoded categorical variables
- Kernel SHAP explanation for SVM models
- Kernel SHAP explanation for multinomial logistic regression models
- Handling categorical variables with KernelSHAP
- KernelSHAP: combining preprocessor and predictor
- Linearity measure applied to Iris
- Linearity measure applied to fashion MNIST
- Trust Scores applied to Iris
- Trust Scores applied to MNIST