Yanjun Qi
Researcher at University of Virginia
Publications - 236
Citations - 10043
Yanjun Qi is an academic researcher at the University of Virginia. The author has contributed to research topics including Chemistry and Medicine, has an h-index of 35, and has co-authored 144 publications receiving 7,265 citations. Previous affiliations of Yanjun Qi include Carnegie Mellon University and Princeton University.
Papers
Journal Article (DOI)
Opportunities and obstacles for deep learning in biology and medicine.
Travers Ching, Daniel Himmelstein, Brett K. Beaulieu-Jones, Alexandr A. Kalinin, Brian T. Do, Gregory P. Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M. Hoffman, Wei Xie, Gail L. Rosen, Benjamin J. Lengerich, Johnny Israeli, Jack Lanchantin, Stephen Woloszynek, Anne E. Carpenter, Avanti Shrikumar, Jinbo Xu, Evan M. Cofer, Christopher A. Lavender, Srinivas C. Turaga, Amr Alexandari, Zhiyong Lu, David J. Harris, Dave DeCaprio, Yanjun Qi, Anshul Kundaje, Yifan Peng, Laura K. Wiley, Marwin H. S. Segler, Simina M. Boca, S. Joshua Swamidass, Austin Huang, Anthony Gitter, Casey S. Greene +38 more
TL;DR: The survey finds that deep learning has yet to revolutionize biomedicine or definitively resolve any of the field's most pressing challenges, but promising advances have been made over the prior state of the art.
Proceedings Article (DOI)
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks.
Weilin Xu, David Evans, Yanjun Qi +2 more
Abstract: Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
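The detection procedure described in the abstract can be sketched directly. The following is a minimal illustration, not the authors' released implementation: reduce_bit_depth and median_smooth stand in for the two squeezers named above, while detect_adversarial, model_predict, and threshold are assumed names; inputs are assumed to be float images in [0, 1] of shape (N, H, W, C), and predictions are compared by L1 distance.

import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Squeeze pixel values (assumed in [0, 1]) to a smaller color bit depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=2):
    """Spatial smoothing: median filter over the height/width axes only."""
    return median_filter(x, size=(1, size, size, 1))

def detect_adversarial(model_predict, x, threshold=1.0):
    """Flag inputs whose predictions shift too much under squeezing.

    model_predict returns class probabilities of shape (N, num_classes);
    threshold is the maximum allowed L1 distance between the prediction on
    the original input and the prediction on a squeezed input.
    """
    p_orig = model_predict(x)
    # Joint detection: take the larger deviation across the two squeezers.
    score = np.maximum(
        np.abs(p_orig - model_predict(reduce_bit_depth(x))).sum(axis=1),
        np.abs(p_orig - model_predict(median_smooth(x))).sum(axis=1),
    )
    return score > threshold  # True = likely adversarial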
Proceedings Article (DOI)
Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
TL;DR: DeepWordBug generates small text perturbations in a black-box setting that force a deep-learning classifier to misclassify a text input, using scoring strategies to find the most important words to modify.
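A rough sketch of that black-box idea follows, under assumptions rather than the paper's exact scoring functions or transformations: word importance is approximated by the confidence drop when a word is removed, and the top-scoring words receive a small character-level swap. predict_proba, label, and budget are hypothetical names for the target classifier's probability function, the class being attacked, and the number of words to modify.

import random

def score_words(predict_proba, words, label):
    """Black-box word importance: confidence drop when each word is removed."""
    base = predict_proba(" ".join(words))[label]
    return [base - predict_proba(" ".join(words[:i] + words[i + 1:]))[label]
            for i in range(len(words))]

def perturb_word(word):
    """Tiny character-level edit: swap two adjacent characters."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def attack(predict_proba, text, label, budget=3):
    """Perturb the `budget` highest-scoring words in the input text."""
    words = text.split()
    scores = score_words(predict_proba, words, label)
    for i in sorted(range(len(words)), key=lambda j: scores[j], reverse=True)[:budget]:
        words[i] = perturb_word(words[i])
    return " ".join(words)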
Book Chapter (DOI)
Random Forest for Bioinformatics
TL;DR: The Random Forest technique, which builds an ensemble of decision trees and incorporates feature selection and feature interactions naturally in the learning process, is a popular choice because it is nonparametric, interpretable, and efficient, and achieves high prediction accuracy for many types of data.
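The properties listed in the summary (nonparametric, built-in feature selection, strong out-of-the-box accuracy) are easy to demonstrate with scikit-learn; the dataset and hyperparameters below are illustrative choices, not taken from the chapter.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Tabular biomedical-style data: 30 numeric features per sample, binary diagnosis label.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# The ensemble's impurity-based importances provide the built-in feature
# selection mentioned above; print the five most informative features.
ranked = sorted(zip(data.feature_names, clf.feature_importances_),
                key=lambda t: t[1], reverse=True)
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")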