Interpretable AI for bio-medical applications

Anoop Sathyan, Abraham Itzhak Weinberg, Kelly Cohen

Research output: Contribution to journal › Article › peer-review


Abstract

This paper presents the use of two popular explainability tools, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset and classifies the masses found in patients as benign or malignant based on 30 features that describe each mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide further insight into the relationship between the input features and the predictions, and the SHAP methodology additionally provides a more holistic view of the effect of the inputs on the output predictions. The results also highlight the commonalities between the insights gained using LIME and SHAP. Although this paper focuses on a deep neural network trained on the UCI Breast Cancer Wisconsin dataset, the methodology can be applied to other neural network architectures and other applications. The deep neural network trained in this work achieves a high level of accuracy, and analyzing it with LIME and SHAP adds the much-desired benefit of providing explanations for the recommendations made by the trained model.
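As a minimal sketch of the workflow described in the abstract, the Python snippet below trains a small neural network on the scikit-learn copy of the Breast Cancer Wisconsin (Diagnostic) dataset and explains its predictions with the lime and shap packages. The network architecture, hyperparameters, and explainer settings shown here are illustrative assumptions and are not taken from the paper.

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# UCI Breast Cancer Wisconsin (Diagnostic): 30 features per mass, benign/malignant labels.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Stand-in for the paper's deep neural network: a small multilayer perceptron (assumed sizes).
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train_s, y_train)
print("Test accuracy:", model.score(X_test_s, y_test))

# LIME: fit a local surrogate model around one test instance to explain that prediction.
lime_explainer = LimeTabularExplainer(
    X_train_s, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test_s[0], model.predict_proba, num_features=10)
print(lime_exp.as_list())

# SHAP: model-agnostic KernelExplainer over a small background sample of the training data.
background = shap.sample(X_train_s, 100)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test_s[:5])
# Depending on the shap version, shap_values is a list of per-class arrays or a
# (samples, features, classes) array; either way, the malignant-class attributions
# can be summarized to get the more holistic view mentioned in the abstract.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(vals, X_test_s[:5], feature_names=list(data.feature_names))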

Original language: English
Article number: 18
Journal: Complex Engineering Systems
Volume: 2
Issue number: 4
DOIs
State: Published - 1 Dec 2022
Externally published: Yes

Keywords

  • Explainable AI
  • LIME
  • neural networks
  • SHAP

ASJC Scopus subject areas

  • Control and Systems Engineering
