Open Access · Journal Article · DOI

Explainable Deep Learning Models in Medical Image Analysis.

TLDR
A review of the current applications of explainable deep learning for different medical imaging tasks is presented in this paper, where the various approaches, challenges for clinical deployment, and areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.
Abstract
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that most influence a model's decision. The majority of literature reviews in this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning for different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.

Citations
Journal Article · DOI

Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022)

TL;DR: In this paper, a review of 99 Q1 articles covering explainable artificial intelligence (XAI) techniques is presented, including SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, and others.
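Of the XAI techniques this review names, GradCAM is perhaps the most common in medical imaging. Its core computation can be sketched in a few lines; the arrays below are synthetic stand-ins for the last convolutional layer's activations and gradients of a trained CNN, not output from any of the cited systems:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Core GradCAM step: weight each feature map by its spatially
    averaged gradient, sum over channels, and apply ReLU."""
    # feature_maps, gradients: (channels, H, W) from the last conv layer
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    return np.maximum(cam, 0)                          # keep positive evidence only

# synthetic activations standing in for a trained CNN
rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # -> (7, 7)
```

In practice the resulting heatmap is upsampled to the input resolution and overlaid on the image to indicate the regions that most influenced the prediction.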
Journal Article · DOI

A Survey on Deep Learning for Neuroimaging-Based Brain Disorder Analysis.

TL;DR: Deep learning has been used for the analysis of neuroimages, such as structural magnetic resonance imaging (MRI), functional MRI, and positron emission tomography (PET), and has achieved significant performance improvements over traditional machine learning in the computer-aided diagnosis of brain disorders.
Journal Article · DOI

A Review on Explainability in Multimodal Deep Neural Nets

TL;DR: A comprehensive survey and commentary on the explainability in multimodal deep neural networks, especially for the vision and language tasks, is presented in this article, including the significance, datasets, fundamental building blocks of the methods and techniques, challenges, applications, and future trends in this domain.
Journal Article · DOI

A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging

TL;DR: A review of the deep learning explanation literature focused on cancer detection using MR images is presented and the gap between what clinicians deem explainable and what current methods provide is discussed and future suggestions to close this gap are provided.
Journal Article · DOI

Explainable artificial intelligence: a comprehensive review

TL;DR: A review of explainable artificial intelligence (XAI) can be found in this article, where the authors analyze and review various XAI methods, which are grouped into (i) pre-modeling explainability, (ii) interpretable model, and (iii) post-model explainability.
References
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map, a variation of Stochastic Neighbor Embedding that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
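The crowding-reduction property described above comes from t-SNE's use of a heavy-tailed Student-t kernel (one degree of freedom) for affinities in the low-dimensional map, in place of SNE's Gaussian. A minimal NumPy sketch of that low-dimensional affinity computation, on illustrative map coordinates:

```python
import numpy as np

def student_t_affinities(Y):
    """Low-dimensional affinities used by t-SNE: a Student-t kernel with
    one degree of freedom turns pairwise map distances into a joint
    probability distribution whose heavy tails relieve crowding."""
    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)  # pairwise squared distances
    num = 1.0 / (1.0 + d2)                                 # Student-t kernel
    np.fill_diagonal(num, 0.0)                             # no self-affinity
    return num / num.sum()                                 # normalise to probabilities

Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # toy 2-D map points
Q = student_t_affinities(Y)
print(round(Q.sum(), 6))  # -> 1.0
```

The full algorithm then minimises the KL divergence between these map affinities and Gaussian affinities computed in the high-dimensional space.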
Journal Article · DOI

Principal component analysis

TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful to separate systematic variation from noise and to define a space of reduced dimensions that preserves the systematic variation.
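The dimension reduction PCA performs can be sketched via an SVD of the mean-centred data matrix; the random data below is purely illustrative:

```python
import numpy as np

def pca(X, k):
    """PCA by SVD of the mean-centred data matrix: the top-k right
    singular vectors span the reduced space that captures the most
    variance (the systematic variation, if noise is isotropic)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                    # projections onto top-k components
    var_ratio = (S[:k] ** 2).sum() / (S ** 2).sum()  # variance retained
    return scores, var_ratio

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))              # 50 samples, 5 features
scores, ratio = pca(X, 2)
print(scores.shape)  # -> (50, 2)
```

Plotting the scores of the first two components is a common way to visualise high-dimensional feature embeddings, complementary to t-SNE.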
Journal Article · DOI

On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.

TL;DR: This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that visualizes the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
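The pixel-wise decomposition above redistributes a prediction's relevance backwards, layer by layer, such that relevance is (approximately) conserved. A minimal sketch of the stabilised LRP rule for a single dense layer, on toy activations and weights:

```python
import numpy as np

def lrp_dense(a, W, R_out, eps=1e-6):
    """LRP epsilon-rule for one dense layer: redistribute the output
    relevance R_out back onto the inputs in proportion to each input's
    contribution a_i * w_ij to the pre-activation z_j."""
    z = a @ W
    z = z + eps * np.sign(z)   # small stabiliser avoids division by ~0
    s = R_out / z              # relevance per unit of pre-activation
    return a * (W @ s)         # input relevances

a = np.array([1.0, 2.0, 0.5])                          # input activations
W = np.array([[0.5, -0.2], [0.1, 0.4], [0.3, 0.3]])    # toy weights
R_out = np.array([1.0, 0.0])                           # explain the first output only
R_in = lrp_dense(a, W, R_out)
print(round(float(R_in.sum()), 4))  # -> 1.0 (relevance approximately conserved)
```

Applying this rule through every layer down to the input yields the pixel-wise relevance heatmap the paper describes.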
Journal Article · DOI

A survey of decision tree classifier methodology

TL;DR: The subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed, along with the relation between decision trees and neural networks (NN).
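The feature-selection step performed at each internal node of a decision tree can be sketched as an exhaustive threshold search; Gini impurity is used here as the split criterion, and the one-dimensional data is purely illustrative:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on one feature that minimises the weighted Gini
    impurity of the two child nodes -- the per-node feature selection
    step of decision tree induction."""
    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    best_t, best_score = None, np.inf
    for t in np.unique(x)[:-1]:            # candidate thresholds
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])  # one feature
y = np.array([0, 0, 0, 1, 1, 1])                 # class labels
t, score = best_split(x, y)
print(t, score)  # -> 3.0 0.0 (a perfect split)
```

Because each prediction is a path of such human-readable threshold tests, decision trees are often cited as the interpretable baseline against which deep models are compared.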
Trending Questions (1)
What are the ethical implications of deep learning models for image recognition?

The paper does not specifically discuss the ethical implications of deep learning models for image recognition; it focuses on the need for explainable deep learning models in medical image analysis.