Alibi is an open-source Python library for machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
- Accumulated Local Effects for predicting house prices
- Accumulated Local Effects for classifying flowers
- Anchor explanations for income prediction
- Anchor explanations on the Iris dataset
- Anchor explanations for movie sentiment
- Anchor explanations for ImageNet
- Anchor explanations for Fashion MNIST
- Contrastive Explanation Method (CEM) applied to MNIST
- Contrastive Explanation Method (CEM) applied to the Iris dataset
- Counterfactual instances on MNIST
- Counterfactuals guided by prototypes on MNIST
- Counterfactuals guided by prototypes on the Boston housing dataset
- Counterfactual explanations with one-hot encoded categorical variables
- Counterfactual explanations with ordinally encoded categorical variables
- Kernel SHAP explanation for SVM models
- Kernel SHAP explanation for multinomial logistic regression models
- Handling categorical variables with Kernel SHAP
- Kernel SHAP: combining preprocessor and predictor
- Linearity measure applied to Iris
- Linearity measure applied to Fashion MNIST
- Trust Scores applied to Iris
- Trust Scores applied to MNIST
- Explaining Tree Models with Interventional Feature Perturbation Tree SHAP
- Explaining Tree Models with Path-Dependent Feature Perturbation Tree SHAP
- Integrated gradients for a ResNet model trained on the ImageNet dataset
- Integrated gradients for MNIST
- Integrated gradients for text classification on the IMDB dataset
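The first two examples use Accumulated Local Effects (ALE). As a rough illustration of the idea behind that method (not the alibi API — the function name `ale_1d` and its signature are hypothetical), here is a numpy-only sketch of first-order ALE for a single numeric feature:

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """First-order ALE of one feature: accumulate the mean local prediction
    difference across quantile bins, then center the curve around zero."""
    grid = np.quantile(X[:, feature], np.linspace(0.0, 1.0, n_bins + 1))
    effects = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        in_bin = (X[:, feature] >= lo) & (X[:, feature] <= hi)
        if not in_bin.any():            # empty bin: no local effect to measure
            effects.append(0.0)
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature] = lo           # push each instance to the bin edges,
        X_hi[:, feature] = hi           # keeping all other features fixed
        effects.append(np.mean(predict(X_hi) - predict(X_lo)))
    ale = np.cumsum(effects)            # accumulate the per-bin effects
    return grid, ale - ale.mean()       # center so the ALE averages to zero
```

For a linear model the resulting ALE curve is a straight line whose slope is the feature's coefficient, which makes the sketch easy to sanity-check; the library's implementation additionally handles bin boundaries, confidence intervals and multi-output models.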