Bridging Online Learning and Explainability in Image Classification

  • N. L. Adarsh
  • P. V. Arun
  • B. Krishna Mohan

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Explainability sheds light on the decision-making process of an opaque machine learning model, helping humans understand which features of an input sample contributed to the model's decision or prediction. Techniques such as Grad-CAM (Gradient-weighted Class Activation Mapping) have proven effective on static models but have not been extensively tested in online learning, where models evolve over incoming data. In this chapter, we aim to bridge this gap by investigating how explainability can be preserved and adapted in dynamic image classification networks, using the well-known Indian Pines hyperspectral dataset. We propose a framework that updates the model incrementally and generates explanations that maintain transparency even as the model adapts to new distributions or concept drift. The experimental results reveal the trade-off between model adaptability and explainability, and we show that post-hoc methods such as Grad-CAM can remain effective in online settings. This work contributes to the growing literature at the intersection of explainable AI (XAI) and online learning.
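To illustrate the idea the abstract describes, the sketch below applies a Grad-CAM-style weighting before and after one incremental (online) update on a toy model. Everything here is a hypothetical stand-in, not the chapter's actual framework: the "network" is just K feature maps pooled by global average pooling into a linear classifier, for which the Grad-CAM channel weights have a closed form (the gradient of the class score with respect to each map is constant), and the online step is a single SGD update on a squared-error surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): K feature maps of size H x W feed a
# GAP + linear classifier with C classes.
K, H, W, C = 4, 8, 8, 3
A = rng.random((K, H, W))          # feature maps for one input sample
w = rng.standard_normal((C, K))    # per-class weights on pooled features

def grad_cam(w, A, c):
    # For y_c = w[c] @ GAP(A), the gradient dy_c/dA_k[i, j] is w[c, k]/(H*W),
    # so the Grad-CAM channel weights alpha_k reduce to w[c, k]/(H*W).
    alpha = w[c] / (A.shape[1] * A.shape[2])
    # Heatmap = ReLU(sum_k alpha_k * A_k), as in Grad-CAM.
    return np.maximum(np.tensordot(alpha, A, axes=1), 0.0)

def online_step(w, A, y_true, lr=0.1):
    # One incremental SGD update on a squared-error surrogate loss,
    # mimicking an online-learning update on a newly arrived sample.
    pooled = A.mean(axis=(1, 2))           # global average pooling per map
    scores = w @ pooled
    target = np.eye(w.shape[0])[y_true]
    return w - lr * np.outer(scores - target, pooled)

cam_before = grad_cam(w, A, c=0)   # explanation for the current model
w = online_step(w, A, y_true=0)    # model adapts to the incoming sample
cam_after = grad_cam(w, A, c=0)    # explanation regenerated after the update
```

Because the explanation is post-hoc, it can simply be recomputed after every update; comparing `cam_before` and `cam_after` is one way to observe how the explanation shifts as the model adapts.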

Original language: English
Title of host publication: Explainable AI for Earth Observation Data Analysis
Subtitle of host publication: Applications, Opportunities, and Challenges
Publisher: CRC Press
Pages: 191-203
Number of pages: 13
ISBN (Electronic): 9781040436332
ISBN (Print): 9781032980966
DOIs
State: Published - 1 Jan 2025
Externally published: Yes

ASJC Scopus subject areas

  • General Earth and Planetary Sciences
  • General Environmental Science
  • General Energy
  • General Engineering
  • General Computer Science

