Empowering Interpretable, Explainable Machine Learning Using Bayesian Network Classifiers

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Even before the deep learning era, the machine learning (ML) community commonly believed that while decision trees, neural networks (NNs), support vector machines, and ensemble (bagging and boosting) methods are the ultimate tools for highly accurate classification, graphical models and their flagship, Bayesian networks (BNs), are appropriate only for knowledge representation. This chapter challenges the belief that the unsupervised graphical model is inferior to the supervised classifier and provides evidence to the contrary. Moreover, it demonstrates how the knowledge representation capability of graphical models promotes a level of interpretability and explainability not found in conventional ML classifiers. The chapter further challenges the ML community to redirect even 1% of the effort currently invested in increasing the accuracy of deep and non-deep ML classifiers, and in equipping them with means of visualization and interpretation, toward developing BN learning algorithms that would allow graphical models to complement and integrate with these classifiers and thereby foster interpretability and explainability. One example could be to exploit the natural interpretability provided by conditional (in)dependencies among the nodes and causal pathways in the BN classifier to visualize, interpret, and explain deep NN results, as well as the important interactions among network units, layers, and activations that may be responsible for correct and incorrect classification decisions made by the network. Another example could be the development of graphical user interface tools that encourage, promote, and support human–machine interaction, in which users' inquiries help manipulate and extend the learned BN model to better address these and further inquiries, while the tools inspire users' curiosity to investigate the model further and enrich their understanding of the domain. Such efforts would further contribute to the ML community's attempts not only to increase its impact on advancing and supporting the many fields that strive for innovation, but also to address growing criticism concerning the lack of explainability, transparency, and accountability in AI, criticism that may undermine and hinder the tremendous societal benefits that ML can bring.
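As a rough illustration of the kind of interpretability the abstract refers to, the sketch below (not taken from the chapter) uses the open-source pgmpy library to fit a small discrete BN and read conditional (in)dependencies directly off its graph. The network structure, the variable names (Smoker, Cough, Fatigue, Disease), and the toy data are all illustrative assumptions, not material from the chapter.

```python
# A minimal sketch, assuming a hand-specified BN over hypothetical
# medical variables. Requires: pip install pgmpy pandas
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical toy data: one risk factor, two symptoms, one class label.
data = pd.DataFrame({
    "Smoker":  [1, 1, 0, 0, 1, 0, 1, 0],
    "Cough":   [1, 1, 0, 1, 0, 0, 1, 0],
    "Fatigue": [1, 0, 0, 1, 1, 0, 1, 0],
    "Disease": [1, 1, 0, 1, 1, 0, 0, 0],
})

# An assumed structure: each edge is itself a readable probabilistic
# (and, if defensible, causal) claim about the domain.
model = BayesianNetwork([
    ("Smoker", "Disease"),
    ("Disease", "Cough"),
    ("Disease", "Fatigue"),
])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Conditional (in)dependencies can be read directly off the graph,
# e.g., Cough is independent of Smoker and Fatigue given Disease.
print(model.local_independencies("Cough"))

# Classification is posterior inference over the class node; the
# evidence propagating through the graph explains the prediction.
posterior = VariableElimination(model).query(
    variables=["Disease"], evidence={"Smoker": 1, "Cough": 1}
)
print(posterior)
```

Unlike the weight matrices of an NN, the edges and conditional probability tables here are the model itself, so the same objects that produce the prediction also carry its explanation.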

Original language: English
Title of host publication: Machine Learning for Data Science Handbook
Subtitle of host publication: Data Mining and Knowledge Discovery Handbook, Third Edition
Publisher: Springer International Publishing
Pages: 111–142
Number of pages: 32
ISBN (Electronic): 9783031246289
ISBN (Print): 9783031246272
DOIs
State: Published - 1 Jan 2023

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics
