Explainability in Deep Learning Segmentation Models for Breast Cancer by Analogy with Texture Analysis

Md. Masum Billah, Pragati Manandhar, Sarosh Krishan, Alejandro Cedillo, Hergys Rexha, Sébastien Lafond, Kurt K Benke, Sepinoud Azimi, Janan Arslan

Research output: Chapter in Book/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Despite their predictive capabilities and rapid advancement, the black-box nature of Artificial Intelligence (AI) models, particularly in healthcare, has sparked debate regarding their trustworthiness and accountability. In response, the field of Explainable AI (XAI) has emerged, aiming to create transparent AI technologies. We present a novel approach to enhance AI interpretability by leveraging texture analysis, with a focus on cancer datasets. By examining specific texture features extracted from medical images and their correlations with prediction outcomes, our proposed methodology aims to elucidate the underlying mechanics of AI, improve AI trustworthiness, and facilitate human understanding. The code is available at https://github.com/xrai-lib/xai-texture.
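The abstract describes relating texture features extracted from medical images to a model's prediction outcomes. The following is a minimal sketch of one way such an analysis could be set up, using grey-level co-occurrence matrix (GLCM) features and a per-image Dice score as the outcome; the feature choices, function names, and Dice-based outcome are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch only: correlate GLCM texture features of 8-bit grayscale
# images with per-image segmentation quality. Not the authors' pipeline.
import numpy as np
from scipy.stats import pearsonr
from skimage.feature import graycomatrix, graycoprops


def glcm_features(image: np.ndarray) -> dict:
    """Compute a few standard GLCM texture features for an 8-bit grayscale image."""
    glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {name: graycoprops(glcm, name)[0, 0]
            for name in ("contrast", "homogeneity", "energy", "correlation")}


def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between a binary predicted mask and the ground-truth mask."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)


def texture_vs_outcome(images, pred_masks, true_masks, feature="contrast"):
    """Correlate one texture feature with segmentation quality across images."""
    feats = [glcm_features(img)[feature] for img in images]
    scores = [dice(p, t) for p, t in zip(pred_masks, true_masks)]
    r, p_value = pearsonr(feats, scores)
    return r, p_value
```

A strong correlation between a given texture feature and segmentation quality would suggest that feature as a human-interpretable factor in the model's behaviour, which is the kind of link the paper's texture-analysis approach targets.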
Original language: English
Title of host publication: Medical Imaging with Deep Learning (MIDL 2024)
Place of publication: Paris, France
Publication status: Published - 1 Jul 2024
MoE publication type: A4 Article in a conference publication
Event: Medical Imaging with Deep Learning - Paris
Duration: 3 Jul 2024 → …

Publication series

Name: Proceedings of Machine Learning Research
ISSN (electronic): 2640-3498

Conference

Conference: Medical Imaging with Deep Learning
Abbreviated title: MIDL
City: Paris
Period: 03/07/24 → …

Keywords

  • Artificial Intelligence
  • Cancer Diagnosis
  • Explainable AI
  • Texture Analysis
  • Medical Imaging
