
Junping Zhang

Researcher at Fudan University

Publications: 139
Citations: 5504

Junping Zhang is an academic researcher at Fudan University. The author has contributed to research in the topics of computer science and nonlinear dimensionality reduction, has an h-index of 28, and has co-authored 113 publications receiving 3,953 citations. Previous affiliations of Junping Zhang include the Chinese Academy of Sciences.

Papers
Journal ArticleDOI

Data-Driven Intelligent Transportation Systems: A Survey

TL;DR: A survey on the development of data-driven intelligent transportation systems (D2ITS) is provided, discussing the functionality of its key components and some deployment issues associated with D2ITS. Future research directions for D2ITS are also presented.
Journal ArticleDOI

Hallucinating face by position-patch

TL;DR: Experiments show that the proposed method, even without residue compensation, generates higher-quality images and requires less computation time than some recent face image super-resolution (hallucination) techniques.
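The summary describes patch-wise face hallucination in which each patch is reconstructed from training patches at the same position. The sketch below is an assumption about that idea, not the paper's exact formulation: a low-resolution input patch is expressed as a regularized least-squares combination of same-position low-resolution training patches, and the weights are reused on the aligned high-resolution patches.

```python
# Hedged sketch of position-patch reconstruction; the ridge term and the
# least-squares weights are illustrative assumptions, not the paper's code.
import numpy as np

def hallucinate_patch(lr_patch, lr_train_patches, hr_train_patches, lam=1e-3):
    """lr_patch: (d,), lr_train_patches: (n, d), hr_train_patches: (n, D)."""
    L = lr_train_patches.T                                   # (d, n)
    # Weights that best reconstruct the input patch from same-position patches.
    w = np.linalg.solve(L.T @ L + lam * np.eye(L.shape[1]), L.T @ lr_patch)
    # Reuse the same weights on the high-resolution training patches.
    return hr_train_patches.T @ w                            # (D,)
```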
Journal ArticleDOI

Visual Traffic Jam Analysis Based on Trajectory Data

TL;DR: An interactive system for visual analysis of urban traffic congestion based on GPS trajectories is presented; it provides multiple views for visually exploring and analyzing the traffic conditions of a large city as a whole, at the level of congestion propagation graphs, and at the level of individual road segments.
Journal ArticleDOI

Human Identification Using Temporal Information Preserving Gait Template

TL;DR: A novel temporal template, named Chrono-Gait Image (CGI), is developed, together with CGI-based real and synthetic temporal-information-preserving templates; the approach achieves competitive gait-recognition performance with robustness and efficiency.
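The summary says the template preserves temporal information within a single image. A minimal sketch of that idea follows, under the assumption that each frame's silhouette contour is tinted with a colour that varies with its phase in the gait period and the coloured contours are aggregated into one multi-channel template; the contour extraction and colour mapping here are illustrative, not the paper's exact encoding.

```python
# Hedged sketch of a temporal-information-preserving gait template.
import numpy as np

def chrono_gait_template(silhouettes):
    """silhouettes: (T, H, W) binary silhouettes covering one gait period."""
    silhouettes = silhouettes.astype(bool)
    T, H, W = silhouettes.shape
    template = np.zeros((H, W, 3), dtype=np.float64)
    for t, sil in enumerate(silhouettes):
        # Crude 1-pixel contour: silhouette minus its 4-neighbour erosion.
        eroded = np.zeros_like(sil)
        eroded[1:-1, 1:-1] = (sil[1:-1, 1:-1] & sil[:-2, 1:-1] & sil[2:, 1:-1]
                              & sil[1:-1, :-2] & sil[1:-1, 2:])
        contour = sil & ~eroded
        phase = t / max(T - 1, 1)
        color = np.array([1.0 - phase, phase, 0.5])   # hypothetical mapping
        # Keep, per pixel, the strongest coloured contour seen so far.
        template = np.maximum(template, contour[..., None] * color)
    return template
```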
Journal ArticleDOI

GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition

TL;DR: The authors propose a new network, GaitSet, that learns identity information from a set of independent frames; the representation is immune to frame permutation and can naturally integrate frames from different videos filmed under different scenarios, such as diverse viewing angles and different clothing/carrying conditions.
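The permutation-immune, set-based idea described above can be illustrated with a small sketch: encode each silhouette frame independently, then aggregate frame features with an order-insensitive pooling (max over the frame dimension). This is a minimal sketch of that idea, not the authors' released GaitSet implementation; the backbone, feature size, and pooling choice are assumptions.

```python
# Hedged sketch of set-based gait recognition: per-frame encoding followed by
# permutation-invariant pooling, so frame order and frame source do not matter.
import torch
import torch.nn as nn

class SetGaitSketch(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Small CNN applied to each silhouette frame independently.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, silhouettes):
        # silhouettes: (batch, num_frames, 1, H, W); frame order is irrelevant.
        b, t, c, h, w = silhouettes.shape
        frames = silhouettes.view(b * t, c, h, w)
        feats = self.frame_encoder(frames).flatten(1)      # (b*t, 64)
        feats = self.proj(feats).view(b, t, -1)            # (b, t, feat_dim)
        # Permutation-invariant aggregation over the frame set.
        set_feat, _ = feats.max(dim=1)                     # (b, feat_dim)
        return set_feat

# Usage: frames from different videos/views can be mixed freely in one set.
model = SetGaitSketch()
x = torch.rand(4, 30, 1, 64, 44)   # 4 subjects, 30 silhouette frames each
embedding = model(x)               # (4, 128) identity embedding
```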