Local Interpretable Model-Agnostic Explanations (LIME)
Explain a prediction by replacing the complex model with a locally interpretable surrogate model.
- It generates a new dataset of perturbed samples, labeled with the corresponding predictions of the black-box model
- On this new dataset, LIME trains an interpretable model, where each sample is weighted by its proximity to the instance of interest
LIME can explain any black-box classifier with two or more classes
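The steps above can be sketched in plain NumPy. This is a minimal illustration, not the official `lime` library: the black-box function, the Gaussian perturbation scale, and the exponential proximity kernel are all assumptions chosen for the example, and the surrogate is a weighted linear model fit in closed form.

```python
import numpy as np

# Hypothetical black box: a nonlinear function returning P(class 1).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(X[:, 0] * X[:, 1] + np.sin(X[:, 2]))))

def lime_explain(instance, predict_fn, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around `instance`."""
    rng = np.random.default_rng(seed)
    d = instance.shape[0]
    # 1. Perturb: sample a new dataset around the instance of interest.
    Z = instance + rng.normal(scale=1.0, size=(n_samples, d))
    # 2. Label the perturbed samples with the black-box predictions.
    y = predict_fn(Z)
    # 3. Weight each sample by its proximity to the instance
    #    (exponential kernel on Euclidean distance).
    dist = np.linalg.norm(Z - instance, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable (linear) surrogate via weighted least squares,
    #    solved by rescaling rows with sqrt(weight).
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

x = np.array([0.5, -1.0, 0.2])
local_weights = lime_explain(x, black_box)
```

The returned coefficients approximate the black box's local behavior around `x`: a larger magnitude means the feature matters more for this particular prediction, and the sign gives the direction of its local effect.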

