eXplainable Random Forest

Guy Amit, Shlomit Gur

Research output: Contribution to journal › Conference article › peer-review

Abstract

Advanced machine learning models have become widely adopted across domains due to their exceptional performance. However, their complexity often makes them difficult to interpret, which can be a significant limitation in high-stakes decision-making scenarios where explainability is crucial. In this study, we propose eXplainable Random Forest (XRF), an extension of the Random Forest model that, crucially, takes into consideration during training explainability constraints stemming from the users' view of the problem and its feature space. While numerous methods have been suggested for explaining machine learning models, these methods are often applicable only after the model has been trained. Furthermore, the explanations they provide may involve features that are not human-understandable, which in turn may hinder the user's comprehension of the model's reasoning. Our proposed method addresses both of these limitations. We systematically apply our method to six public benchmark datasets and demonstrate that XRF models balance the trade-off between model performance and the users' explainability constraints.
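The abstract does not specify how XRF encodes explainability constraints, so the following is only a hypothetical illustration, not the authors' algorithm: one simple way to impose a user-side explainability constraint on a Random Forest is to restrict training to features the user deems human-understandable, then compare performance against an unconstrained model. The dataset and the choice of "understandable" columns below are assumptions for demonstration.

```python
# Hypothetical illustration (NOT the XRF method from the paper): train a
# Random Forest restricted to user-approved, human-understandable features
# and compare its accuracy to an unconstrained model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Assumed user constraint: only these columns count as understandable.
understandable = ["mean radius", "mean texture", "mean smoothness"]

full = RandomForestClassifier(n_estimators=100, random_state=0)
full.fit(X_tr, y_tr)

constrained = RandomForestClassifier(n_estimators=100, random_state=0)
constrained.fit(X_tr[understandable], y_tr)

print(f"unconstrained accuracy: {full.score(X_te, y_te):.3f}")
print(f"constrained accuracy:   {constrained.score(X_te[understandable], y_te):.3f}")
```

In this sketch the constraint is a hard feature filter applied before training; the paper's contribution is presumably a softer mechanism integrated into the training procedure itself, which this example does not reproduce.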

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3765
State: Published - 1 Jan 2024
Externally published: Yes
Event: 2024 Workshop on Embracing Human-Aware AI in Industry 5.0, HAII5.0 2024 - Santiago de Compostela, Spain
Duration: 19 Oct 2024 → …

Keywords

  • Explainability
  • Machine Learning
  • Random Forest

ASJC Scopus subject areas

  • General Computer Science
