Explainable Deep Learning Models in Medical Image Analysis.
TL;DR
A review of the current applications of explainable deep learning for different medical imaging tasks is presented in this paper, where various approaches, challenges for clinical deployment, and the areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.
Abstract
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that influence a model's decision the most. The majority of literature reviews in this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning for different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and the areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.
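One of the simplest ways to "show the features that influence the decision of a model the most", as described above, is occlusion sensitivity: mask a patch of the input and measure how much the model's score drops. The toy linear "model" below is a hypothetical stand-in, not any method from the paper; it is a minimal sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # toy linear "model": score = sum(W * image)

def score(img):
    return float(np.sum(W * img))

def occlusion_map(img, patch=2):
    """Slide a zero-mask over the image; large score drops mark influential regions."""
    base = score(img)
    heat = np.zeros_like(img)
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            heat[i:i + patch, j:j + patch] = base - score(occluded)
    return heat

img = rng.normal(size=(8, 8))
heat = occlusion_map(img)
```

For a real CNN the same loop applies, with `score` replaced by the class logit of interest; the resulting heat map is typically overlaid on the input image.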
Citations
Journal Article
Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022)
Hui Wen Loh, Chui Ping Ooi, Silvia Seoni, Prabal Datta Barua, Filippo Molinari, U. Rajendra Acharya, et al.
TL;DR: In this paper, a review of 99 Q1 articles covering explainable artificial intelligence (XAI) techniques is presented, including SHAP, LIME, GradCAM, LRP, Fuzzy classifier, EBM, CBR, and others.
Journal Article
A Survey on Deep Learning for Neuroimaging-Based Brain Disorder Analysis.
TL;DR: Deep learning has been used for the analysis of neuroimages, such as structural magnetic resonance imaging (MRI), functional MRI, and positron emission tomography (PET), and it has achieved significant performance improvements over traditional machine learning in the computer-aided diagnosis of brain disorders.
Journal Article
A Review on Explainability in Multimodal Deep Neural Nets
TL;DR: A comprehensive survey and commentary on the explainability in multimodal deep neural networks, especially for the vision and language tasks, is presented in this article, including the significance, datasets, fundamental building blocks of the methods and techniques, challenges, applications, and future trends in this domain.
Journal Article
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging
TL;DR: A review of the deep learning explanation literature focused on cancer detection using MR images is presented and the gap between what clinicians deem explainable and what current methods provide is discussed and future suggestions to close this gap are provided.
Journal Article
Explainable artificial intelligence: a comprehensive review
TL;DR: A review of explainable artificial intelligence (XAI) can be found in this article, where the authors analyze and review various XAI methods, which are grouped into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability.
References
Journal Article
Visualizing Data using t-SNE
TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. It is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
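As a concrete sketch of the embedding described above, the snippet below uses scikit-learn's `TSNE` (a later reimplementation, not the authors' original code) to map points from two well-separated 10-dimensional Gaussian clusters into 2-D.

```python
import numpy as np
from sklearn.manifold import TSNE

# Two well-separated clusters in 10 dimensions (synthetic example data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 10)),
               rng.normal(8, 1, size=(50, 10))])

# Embed into a 2-D map; perplexity balances local vs. global structure.
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)
```

Each row of `emb` is the 2-D map location assigned to the corresponding datapoint; in practice the result is scatter-plotted and colored by class label.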
Journal Article
Principal component analysis
TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful to separate systematic variation from noise and to define a space of reduced dimensions that preserves the systematic variation.
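A minimal PCA sketch via the singular value decomposition (an illustrative construction, not code from the cited paper): the synthetic data has one strong systematic direction plus small isotropic noise, so the first principal component should capture nearly all of the variance.

```python
import numpy as np

# Synthetic data: one latent factor t spread along a fixed direction, plus noise.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
direction = np.array([[3.0, 1.0, 0.0, 0.0]])
X = t @ direction + 0.1 * rng.normal(size=(200, 4))

Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance ratio per component
scores = Xc @ Vt[:2].T                   # project onto the first two PCs
```

Keeping only the leading components defines the reduced space; the discarded trailing components mostly carry the noise.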
Journal Article
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, Wojciech Samek
TL;DR: This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that visualizes the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
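A minimal sketch of layer-wise relevance propagation with the epsilon rule for a tiny two-layer ReLU network. The network, weights, and bias-free setup are illustrative assumptions, not the authors' released code; the point is the backward pass that redistributes the output score onto the inputs while (approximately) conserving total relevance.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # input (4) -> hidden (6)
W2 = rng.normal(size=(6, 1))   # hidden (6) -> output (1)
x = rng.normal(size=4)

# Forward pass (biases omitted for simplicity).
z1 = x @ W1
a1 = np.maximum(0.0, z1)       # ReLU activations
z2 = a1 @ W2                   # network output score

# Backward relevance pass, epsilon rule:
#   R_j = a_j * sum_k  w_jk * R_k / (z_k + eps * sign(z_k))
eps = 1e-9
R2 = z2                                            # start from the output score
R1 = a1 * (W2 @ (R2 / (z2 + eps * np.sign(z2))))   # hidden-layer relevances
R0 = x  * (W1 @ (R1 / (z1 + eps * np.sign(z1))))   # pixel-wise relevances
```

With zero biases and a tiny epsilon, the relevances are conserved layer by layer, so `R0` sums back to the output score; for an image classifier, `R0` reshaped to the image grid is the relevance heat map.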
Journal Article
A survey of decision tree classifier methodology
S. R. Safavian, David A. Landgrebe
TL;DR: The subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed, and the relation between decision trees and neural networks (NN) is also discussed.
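As one concrete instance of the decision tree methodology surveyed above (using scikit-learn rather than anything from the cited paper), the sketch below fits a shallow tree to toy data whose label depends only on feature 0; the internal-node feature selection should therefore concentrate on that feature.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: the label is fully determined by the sign of feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

# A shallow tree keeps the structure small enough to inspect directly.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
importances = tree.feature_importances_   # which features the splits use
```

Because every split can be read off the fitted tree (feature, threshold, leaf class), decision trees are often cited, as here, as an interpretable reference point for black-box models.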
Journal Article
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
TL;DR: In this paper, a taxonomy of recent contributions related to explainability of different machine learning models, including those aimed at explaining Deep Learning methods, is presented, and a second dedicated taxonomy is built and examined in detail.