iml (0.2.1)

Interpretable Machine Learning.

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance as described by Fisher et al. (2018), partial dependence plots as described by Friedman (2001), individual conditional expectation ('ice') plots as described by Goldstein et al. (2013), local interpretable models (a variant of 'lime') as described by Ribeiro et al. (2016), Shapley values as described by Strumbelj et al. (2014), and tree surrogate models.
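As a rough illustration of the methods listed above, here is a minimal sketch of computing permutation feature importance. Note that it uses the R6-style interface (Predictor$new, FeatureImp$new) known from later iml releases; the function names in version 0.2.1 may differ, so treat this as an assumed, not authoritative, example of the 0.2.1 API. It relies on randomForest and MASS, both listed under Uses below.

    # Sketch: permutation feature importance (Fisher et al., 2018)
    # on a random forest. ASSUMPTION: R6 interface from later iml
    # releases (Predictor$new / FeatureImp$new); 0.2.x may differ.
    library(iml)
    library(randomForest)

    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap the fitted model and its data in a Predictor object
    X <- Boston[, setdiff(names(Boston), "medv")]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation importance, measured as increase in MAE
    # after shuffling each feature
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

The same Predictor object can then be passed to the other methods (partial dependence, ice, Shapley values, tree surrogates) without retraining the model.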

Maintainer: Christoph Molnar
Author(s): Christoph Molnar [aut, cre]

License: MIT + file LICENSE

Uses: checkmate, data.table, dplyr, ggplot2, glmnet, Metrics, partykit, R6, tidyr, caret, e1071, randomForest, rpart, MASS, testthat, mlr, gower, lime

Released 8 days ago.



