iml (0.8.0)

Interpretable Machine Learning.

https://github.com/christophM/iml
http://cran.r-project.org/web/packages/iml
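
Installation (a standard sketch: the stable release from CRAN, or the development version from the GitHub repository above; devtools is listed among the package's dependencies):

    install.packages("iml")
    # development version:
    devtools::install_github("christophM/iml")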

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), accumulated local effects plots described by Apley (2018), partial dependence plots described by Friedman (2001), individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al. (2008), and tree surrogate models.
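A minimal usage sketch of the model-agnostic workflow (mirroring the package's introductory vignette; the Predictor, FeatureImp, FeatureEffect, and Shapley R6 classes and their arguments are assumed from the package documentation):

    library(iml)
    library(randomForest)
    data("Boston", package = "MASS")

    # Fit any model; iml only needs a predict function.
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap the model and data in a Predictor object.
    X <- Boston[, names(Boston) != "medv"]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al. 2018).
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Accumulated local effects plot for one feature (Apley 2018).
    ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
    plot(ale)

    # Shapley values for a single observation (Strumbelj et al. 2014).
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)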

Maintainer: Christoph Molnar
Author(s): Christoph Molnar [aut, cre]

License: MIT + file LICENSE

Uses: checkmate, data.table, foreach, Formula, ggplot2, glmnet, Metrics, partykit, prediction, R6, yaImpute, caret, e1071, randomForest, rpart, MASS, testthat, devtools, doParallel, knitr, mlr, rmarkdown, covr, ranger, gower, ALEPlot

Released about 1 year ago.