iml (0.7.1)


Interpretable Machine Learning.

https://github.com/christophM/iml
http://cran.r-project.org/web/packages/iml

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), accumulated local effects (ALE) plots described by Apley (2018), partial dependence plots described by Friedman (2001), individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al. (2008), and tree surrogate models.
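
For orientation, a minimal usage sketch follows, based on the R6 API (Predictor, FeatureImp, FeatureEffect, Shapley) documented for the package; the randomForest model and the MASS Boston data are illustrative choices taken from the dependencies listed under Uses below, not the only supported workflow:

    library("iml")
    library("randomForest")

    # Fit any model; iml is model-agnostic.
    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap the model and data in a Predictor object, iml's common entry point.
    X <- Boston[, names(Boston) != "medv"]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al. 2018).
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Accumulated local effects for a single feature (Apley 2018).
    ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
    plot(ale)

    # Shapley values explaining one prediction (Strumbelj et al. 2014).
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)

Every method takes the same Predictor object, so a model fitted with any package (caret, mlr, ranger, ...) can be analyzed through one interface.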

Maintainer: Christoph Molnar
Author(s): Christoph Molnar [aut, cre]

License: MIT + file LICENSE

Uses: checkmate, data.table, foreach, ggplot2, glmnet, Metrics, partykit, R6, yaImpute, caret, e1071, randomForest, rpart, MASS, testthat, devtools, doParallel, knitr, mlr, rmarkdown, covr, ranger, gower, ALEPlot

Released 2 months ago.


6 previous versions


