lime (0.5.0)

Local Interpretable Model-Agnostic Explanations.

https://lime.data-imaginist.com
http://cran.r-project.org/web/packages/lime

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used to explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arXiv:1602.04938>.
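
As a quick illustration, here is a minimal usage sketch. The caret/random-forest setup is an assumption chosen for brevity; any model lime supports will do:

    library(caret)
    library(lime)

    # Hold out a few observations to explain later
    iris_test  <- iris[1:5, 1:4]
    iris_train <- iris[-(1:5), 1:4]
    iris_lab   <- iris[[5]][-(1:5)]

    # Fit a black box model (a random forest via caret, as an example)
    model <- train(iris_train, iris_lab, method = 'rf')

    # Create an explainer from the training data and the model
    explainer <- lime(iris_train, model)

    # Explain the held-out predictions: top label, 2 features per case
    explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)
    plot_features(explanation)

Behind the scenes, explain() perturbs each observation, weights the perturbations by their proximity to it, and fits a simple interpretable model locally; that model's feature weights are what plot_features() displays.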

Maintainer: Thomas Lin Pedersen
Author(s): Thomas Lin Pedersen [cre, aut] (<https://orcid.org/0000-0002-5147-4711>), Michaël Benesty [aut]

License: MIT + file LICENSE

Uses: assertthat, ggplot2, glmnet, gower, htmlwidgets, Matrix, Rcpp, shiny, shinythemes, stringi, MASS, testthat, knitr, mlr, h2o, xgboost, rmarkdown, covr, ranger, text2vec, magick, sessioninfo, keras
Reverse suggests: iml

Released 5 months ago.