iml (0.10.0)

Interpretable Machine Learning.

https://github.com/christophM/iml
http://cran.r-project.org/web/packages/iml

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), accumulated local effects (ALE) plots described by Apley (2018), partial dependence plots described by Friedman (2001), individual conditional expectation ('ICE') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al. (2008), and tree surrogate models.
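
A minimal usage sketch, assuming the randomForest and MASS packages from the "Uses" list below and iml's R6 interface (Predictor, FeatureImp, FeatureEffect, Shapley); consult the package documentation for the exact signatures in this release:

    library("iml")
    library("randomForest")

    # Fit a model on the Boston housing data shipped with MASS
    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap the fitted model and its data in a Predictor object;
    # all iml methods operate on this wrapper
    X <- Boston[, names(Boston) != "medv"]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al. 2018)
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Accumulated local effects for a single feature (Apley 2018)
    ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
    plot(ale)

    # Shapley values explaining one individual prediction (Strumbelj et al. 2014)
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)

Because every method takes the same Predictor wrapper, the underlying model (random forest here) can be swapped for any other fitted model without changing the analysis code.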

Maintainer: Christoph Molnar
Author(s): Christoph Molnar [aut, cre], Patrick Schratz [aut] (<https://orcid.org/0000-0003-0748-6624>)

License: MIT + file LICENSE

Uses: checkmate, data.table, Formula, future, future.apply, ggplot2, gridExtra, Metrics, prediction, R6, caret, e1071, party, randomForest, rpart, yaImpute, glmnet, MASS, testthat, partykit, knitr, mlr, h2o, rmarkdown, covr, ranger, gower, keras, ALEPlot, future.callr, bench, mlr3, patchwork

Released 11 days ago.