Explainable decision forest: Transforming a decision forest into an interpretable tree

    Research output: Contribution to journal › Article › peer-review

    172 Scopus citations

    Abstract

    Decision forests are considered the best practice in many machine learning challenges, mainly due to their superior predictive performance. However, simple models such as decision trees may be preferred over decision forests in cases in which the generated predictions must be efficient or interpretable (e.g., in insurance or health-related use cases). This paper presents a novel method for transforming a decision forest into an interpretable decision tree, which aims to preserve the predictive performance of decision forests while enabling efficient classifications that can be understood by humans. This is done by creating a set of rule conjunctions that represent the original decision forest; the conjunctions are then hierarchically organized to form a new decision tree. We evaluate the proposed method on 33 UCI datasets and show that the resulting model typically approximates the ROC AUC achieved by a random forest while providing an interpretable decision path for each classification.
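    The first step described in the abstract — representing the forest as a set of rule conjunctions — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes each tree has already been flattened into (conjunction, label) leaf rules, where a conjunction maps a feature name to a hypothetical (low, high) interval, and it combines trees by intersecting their conjunctions, discarding unsatisfiable combinations and labeling the rest by majority vote.

    ```python
    from itertools import product

    INF = float("inf")

    def merge(c1, c2):
        """Intersect two conjunctions (feature -> (low, high) intervals).
        Returns None if the combined conjunction is unsatisfiable."""
        out = dict(c1)
        for feat, (lo, hi) in c2.items():
            prev_lo, prev_hi = out.get(feat, (-INF, INF))
            new_lo, new_hi = max(prev_lo, lo), min(prev_hi, hi)
            if new_lo >= new_hi:  # empty interval: contradiction
                return None
            out[feat] = (new_lo, new_hi)
        return out

    def forest_to_conjunctions(trees):
        """trees: list of trees, each a list of (conjunction, label) leaf rules.
        Cross one leaf rule per tree, keep only satisfiable intersections,
        and assign each surviving conjunction a majority-vote label."""
        result = []
        for combo in product(*trees):
            merged, satisfiable = {}, True
            for conj, _ in combo:
                merged = merge(merged, conj)
                if merged is None:
                    satisfiable = False
                    break
            if satisfiable:
                labels = [label for _, label in combo]
                result.append((merged, max(set(labels), key=labels.count)))
        return result
    ```

    For example, crossing two single-feature stumps that split on x at 5 and at 3 yields three satisfiable regions (x < 3, 3 < x < 5, and x > 5); the combination "x > 5 and x < 3" is contradictory and is pruned. In the paper the surviving conjunctions are then organized hierarchically into a single decision tree, a step not shown here.
    
    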

    Original language: English
    Pages (from-to): 124-138
    Number of pages: 15
    Journal: Information Fusion
    Volume: 61
    DOIs
    State: Published - 1 Sep 2020

    Keywords

    • Classification trees
    • Decision forest
    • Ensemble learning

    ASJC Scopus subject areas

    • Software
    • Signal Processing
    • Information Systems
    • Hardware and Architecture
