
Avanti Shrikumar

Researcher at Stanford University

Publications - 46
Citations - 6442

Avanti Shrikumar is an academic researcher from Stanford University. The author has contributed to research in topics including deep learning and artificial neural networks, has an h-index of 16, and has co-authored 42 publications receiving 4317 citations. Previous affiliations of Avanti Shrikumar include the Massachusetts Institute of Technology.

Papers
Proceedings Article

Learning important features through propagating activation differences

TL;DR: Presents DeepLIFT (Deep Learning Important FeaTures), a method that decomposes the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.
Journal Article (DOI)

Opportunities and obstacles for deep learning in biology and medicine.

TL;DR: Finds that deep learning has yet to revolutionize biomedicine or definitively resolve any of the field's most pressing challenges, but that promising advances have been made over the prior state of the art.
Posted Content

Learning Important Features Through Propagating Activation Differences

TL;DR: DeepLIFT decomposes the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.
Journal Article (DOI)

Dynamic and Coordinated Epigenetic Regulation of Developmental Transitions in the Cardiac Lineage

TL;DR: Discovers a novel preactivation chromatin pattern at the promoters of genes associated with heart development and cardiac function, forming a basis for understanding developmentally regulated chromatin transitions during lineage commitment and the molecular etiology of congenital heart disease.
Posted Content

Not Just a Black Box: Learning Important Features Through Propagating Activation Differences

TL;DR: Presents DeepLIFT (Learning Important FeaTures), an efficient and effective method for computing importance scores in a neural network that compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference.
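The difference-from-reference idea behind DeepLIFT can be illustrated for the simplest case, a single linear layer, where each feature's contribution is its weight times its deviation from the reference input. This is a minimal sketch, not the paper's implementation; the variable names and the all-zeros baseline are illustrative assumptions, and the full method propagates such contributions through every layer of the network.

```python
import numpy as np

def linear_contributions(w, x, x_ref):
    """DeepLIFT-style contributions for a linear model:
    contribution of feature i is w_i * (x_i - x_ref_i)."""
    return w * (x - x_ref)

# Illustrative weights, input, and reference (all names are hypothetical).
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 1.0, 2.0])
x_ref = np.zeros(3)  # reference input, e.g. an all-zeros baseline

contribs = linear_contributions(w, x, x_ref)

# Completeness check: for a linear model, the contributions sum exactly
# to the change in output relative to the reference, f(x) - f(x_ref).
delta_out = w @ x - w @ x_ref
assert np.isclose(contribs.sum(), delta_out)
```

The completeness property shown in the assertion (contributions summing to the output difference from the reference) is what distinguishes this family of attribution methods from raw gradient saliency, which need not account for the full output change.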