
Yu Liu

Researcher at Hefei University of Technology

Publications -  88
Citations -  8101

Yu Liu is an academic researcher from Hefei University of Technology. The author has contributed to research in topics including image fusion and convolutional neural networks. The author has an h-index of 21, has co-authored 83 publications, and has received 4,133 citations. Previous affiliations of Yu Liu include the University of Science and Technology of China.

Papers
Journal Article

Deep learning in remote sensing applications: A meta-analysis and review

TL;DR: This review covers nearly every application and technology in the field of remote sensing, ranging from preprocessing to mapping, and presents conclusions on current state-of-the-art methods, a critical discussion of open challenges, and directions for future research.
Journal Article

A general framework for image fusion based on multi-scale transform and sparse representation

TL;DR: A general image fusion framework combining multi-scale transform (MST) and sparse representation (SR) is presented to simultaneously overcome the inherent defects of both MST- and SR-based fusion methods; experimental results demonstrate that the proposed framework achieves state-of-the-art performance.
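A minimal Python sketch of the underlying idea, with simplifying assumptions: the multi-scale transform is reduced to a single Gaussian base/detail split, and the paper's SR-based low-pass fusion is replaced by plain averaging, so this only illustrates the structure of the framework rather than reproducing the method.

```python
# Simplified two-scale image fusion sketch (illustrative, not the paper's method).
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_two_scale(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Fuse two grayscale float images in [0, 1] of identical shape."""
    # Decompose each source into a low-pass base layer and a high-pass detail layer.
    base_a, base_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
    detail_a, detail_b = img_a - base_a, img_b - base_b

    # Low-pass fusion: plain averaging stands in for the paper's sparse-coding rule.
    fused_base = 0.5 * (base_a + base_b)

    # High-pass fusion: max-absolute rule, a common choice for detail bands.
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)

    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```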
Journal Article

Multi-focus image fusion with a deep convolutional neural network

TL;DR: A new multi-focus image fusion method is proposed that learns a direct mapping between the source images and a focus map, using a deep convolutional neural network trained on high-quality image patches and their blurred versions to encode the mapping.
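A minimal sketch of the idea in PyTorch, under assumptions not taken from the paper: a small network (here a hypothetical FocusNet, not the paper's architecture) scores which of two stacked source patches is in focus, and those per-patch scores can be assembled into a focus map. Training pairs can be synthesized in the same spirit by blurring sharp patches and labeling which side of each pair is sharp.

```python
# Illustrative focus-classification CNN for multi-focus fusion (not the paper's network).
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The two source patches are stacked as a 2-channel input.
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # score: how likely patch A is the focused one

    def forward(self, patch_a: torch.Tensor, patch_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([patch_a, patch_b], dim=1)            # (N, 2, H, W)
        score = self.classifier(self.features(x).flatten(1))
        return torch.sigmoid(score)                          # (N, 1) focus probability
```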
Journal Article

Image Fusion With Convolutional Sparse Representation

TL;DR: A recently emerged signal decomposition model known as convolutional sparse representation (CSR) is introduced into image fusion, motivated by the observation that the CSR model can effectively overcome the drawbacks of conventional sparse-representation-based fusion methods.
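A sketch of one plausible fusion rule on convolutional sparse coefficient maps, assuming the CSR coefficients (shape n_filters x H x W) for each source have already been computed by an external CSR solver; the activity-based choose-max rule shown here is a common choice and is not guaranteed to match the paper's exact rule.

```python
# Activity-based fusion of precomputed convolutional sparse coefficient maps.
import numpy as np

def fuse_csr_coefficients(coeffs_a: np.ndarray, coeffs_b: np.ndarray) -> np.ndarray:
    """Fuse two coefficient-map stacks of identical shape (n_filters, H, W)."""
    # Activity level: L1 norm of the coefficient vector at each spatial location.
    activity_a = np.abs(coeffs_a).sum(axis=0)   # (H, W)
    activity_b = np.abs(coeffs_b).sum(axis=0)   # (H, W)

    # Per-pixel choose-max: keep the whole coefficient vector of the more active source.
    mask = (activity_a >= activity_b)[np.newaxis, ...]  # broadcast over the filter axis
    return np.where(mask, coeffs_a, coeffs_b)

# Reconstructing the fused image would then convolve the fused maps with the
# corresponding dictionary filters and sum them (the CSR synthesis step, omitted here).
```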
Journal Article

IFCNN: A general image fusion framework based on convolutional neural network

TL;DR: The experimental results show that the proposed model demonstrates better generalization ability than existing image fusion models when fusing various types of images, such as multi-focus, infrared-visual, multi-modal medical, and multi-exposure images.
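A schematic PyTorch sketch of an IFCNN-style pipeline: per-source feature extraction with shared convolutional layers, an elementwise fusion rule (max or mean) on the feature maps, and convolutional reconstruction. Layer sizes and names here are illustrative assumptions, not the released model.

```python
# Schematic CNN fusion pipeline: shared extraction, elementwise fusion, reconstruction.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, fusion_rule: str = "max"):
        super().__init__()
        self.fusion_rule = fusion_rule
        # Shared feature extractor applied to every source image.
        self.extract = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Reconstruction from fused features back to a single image.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, sources: list[torch.Tensor]) -> torch.Tensor:
        feats = torch.stack([self.extract(s) for s in sources], dim=0)  # (S, N, C, H, W)
        if self.fusion_rule == "max":       # elementwise maximum across sources
            fused = feats.max(dim=0).values
        else:                               # elementwise mean as an alternative rule
            fused = feats.mean(dim=0)
        return self.reconstruct(fused)
```

The elementwise rule is what lets a single trained network accept an arbitrary number of source images of different modalities, which is the generalization property the summary describes.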