Institution

University of Science and Technology of China

Education · Hefei, China
About: University of Science and Technology of China is an education organization based in Hefei, China. It is known for research contributions in the topics: Catalysis & Computer science. The organization has 73,442 authors who have published 101,099 publications receiving 2,412,680 citations. The organization is also known as: USTC & University of Science & Technology of China.


Papers
Journal Article · DOI
TL;DR: This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.
Abstract: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.

26,458 citations
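The RPN described in the abstract above is essentially a small convolutional head on top of the shared feature map: a 3×3 convolution followed by two sibling 1×1 convolutions that emit, at every spatial position, an objectness score and four box-regression offsets for each of k anchors. Below is a minimal sketch, assuming PyTorch; the channel counts and anchor count are illustrative rather than the paper's exact configuration.

```python
# Minimal sketch of a Region Proposal Network (RPN) head.
# Hypothetical PyTorch code; layer sizes are illustrative.
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    def __init__(self, in_channels=512, num_anchors=9):
        super().__init__()
        # 3x3 conv slides over the shared full-image feature map.
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        # Sibling 1x1 convs: per-anchor objectness score ...
        self.cls_logits = nn.Conv2d(512, num_anchors, kernel_size=1)
        # ... and per-anchor box-regression deltas (dx, dy, dw, dh).
        self.bbox_deltas = nn.Conv2d(512, num_anchors * 4, kernel_size=1)

    def forward(self, feature_map):
        x = torch.relu(self.conv(feature_map))
        return self.cls_logits(x), self.bbox_deltas(x)

# Example: a VGG-16-like feature map (stride 16) of a 608x1008 image.
features = torch.randn(1, 512, 38, 63)
scores, deltas = RPNHead()(features)
print(scores.shape, deltas.shape)  # (1, 9, 38, 63) and (1, 36, 38, 63)
```

In the full detector these outputs are decoded against the anchor boxes, filtered by non-maximum suppression, and the surviving proposals are passed to the Fast R-CNN head that shares the same feature map.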

Posted Content
TL;DR: Faster R-CNN, as discussed by the authors, proposes a Region Proposal Network (RPN) to generate high-quality region proposals, which are used by Fast R-CNN for detection.
Abstract: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.

23,183 citations

Journal Article · DOI
Georges Aad, T. Abajyan, Brad Abbott, Jalal Abdallah +2,964 more · Institutions (200)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed signal has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10⁻⁹.

9,282 citations
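The quoted significance and background fluctuation probability are two views of the same quantity under the usual one-sided Gaussian convention used in particle physics. A quick sanity check, assuming SciPy is available (this is not code from the paper; the small gap between ~1.8×10⁻⁹ and the quoted 1.7×10⁻⁹ is consistent with the significance being rounded to 5.9):

```python
# Convert between a Gaussian significance (in sigma) and the
# one-sided tail probability, and back. Requires SciPy.
from scipy.stats import norm

significance = 5.9                     # standard deviations
p_value = norm.sf(significance)        # one-sided tail probability
print(f"{significance} sigma -> p = {p_value:.2e}")   # ~1.8e-09

# Inverse direction, starting from the quoted fluctuation probability.
print(f"p = 1.7e-9 -> {norm.isf(1.7e-9):.2f} sigma")  # ~5.91 sigma
```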

Journal Article · DOI
TL;DR: In this article, field-effect transistors are fabricated from few-layer black phosphorus crystals with thickness down to a few nanometres, achieving reliable performance at room temperature and demonstrating the potential of black phosphorus as a new two-dimensional material for nanoelectronic devices.
Abstract: Two-dimensional crystals have emerged as a class of materials that may impact future electronic technologies. Experimentally identifying and characterizing new functional two-dimensional materials is challenging, but also potentially rewarding. Here, we fabricate field-effect transistors based on few-layer black phosphorus crystals with thickness down to a few nanometres. Reliable transistor performance is achieved at room temperature in samples thinner than 7.5 nm, with drain current modulation on the order of 10⁵ and well-developed current saturation in the I-V characteristics. The charge-carrier mobility is found to be thickness-dependent, with the highest values up to ∼1,000 cm² V⁻¹ s⁻¹ obtained for a thickness of ∼10 nm. Our results demonstrate the potential of black phosphorus thin crystals as a new two-dimensional material for applications in nanoelectronic devices.

6,924 citations
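For context on the mobility figure in the abstract above, the field-effect mobility of such transistors is commonly extracted in the linear regime from the transconductance, μ = (L / (W · C_ox · V_ds)) · (dI_d/dV_g). The sketch below is a hypothetical worked example with made-up device parameters chosen to land near 1,000 cm² V⁻¹ s⁻¹; none of these numbers are taken from the paper.

```python
# Textbook linear-regime field-effect mobility extraction:
#   mu = (L / (W * C_ox * V_ds)) * g_m,  with g_m = dI_d / dV_g.
# All device parameters below are illustrative, not from the paper.
EPS0, EPS_SIO2, T_OX = 8.854e-12, 3.9, 300e-9   # ~300 nm SiO2 back gate

L = 1.0e-6                       # channel length, m
W = 1.0e-6                       # channel width, m
C_ox = EPS0 * EPS_SIO2 / T_OX    # gate capacitance per area, ~1.15e-7 F/m^2
V_ds = 0.1                       # drain-source bias, V
g_m = 1.15e-9                    # transconductance dI_d/dV_g, S (made up)

mu = (L / (W * C_ox * V_ds)) * g_m        # m^2 / (V s)
print(f"{mu * 1e4:.0f} cm^2 V^-1 s^-1")   # ~1000 cm^2 V^-1 s^-1
```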

Journal Article · DOI
TL;DR: This work equips the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the fixed-size input requirement, and develops a new network structure, called SPP-net, which can generate a fixed-length representation regardless of image size/scale.
Abstract: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224×224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102× faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.

5,919 citations
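Spatial pyramid pooling, as described above, replaces the single fixed-grid pooling before the fully connected layers with pooling over a pyramid of bin grids, so feature maps of any size collapse to the same output length. Below is a minimal sketch, assuming PyTorch; the pyramid levels are illustrative rather than the paper's exact configuration.

```python
# Minimal sketch of spatial pyramid pooling (hypothetical PyTorch code).
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool an (N, C, H, W) feature map into a fixed-length vector.

    Each level divides the map into level x level bins and max-pools
    within every bin, so the output length is C * sum(l * l for l in
    levels) regardless of the input H and W.
    """
    n = feature_map.shape[0]
    pooled = [
        F.adaptive_max_pool2d(feature_map, output_size=level).reshape(n, -1)
        for level in levels
    ]
    return torch.cat(pooled, dim=1)

# Two inputs of different spatial size yield the same-length vector.
for h, w in [(13, 13), (20, 31)]:
    x = torch.randn(1, 256, h, w)
    print(spatial_pyramid_pool(x).shape)  # torch.Size([1, 5376]) both times
```

The same pooling can be applied to an arbitrary sub-window of the feature map, which is what lets the detector reuse one full-image forward pass for every region proposal.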


Authors

Showing all 74,311 results

Name             H-index   Papers   Citations
Yi Cui           220       1,015    199,725
Yi Chen          217       4,342    293,080
Younan Xia       216       943      175,757
Jing Wang        184       4,046    202,769
H. S. Chen       179       2,401    178,529
Yang Yang        171       2,644    153,049
Gang Chen        167       3,372    149,819
Yang Yang        164       2,704    144,071
Hua Zhang        163       1,503    116,769
Alvio Renzini    162       908      95,452
Wei Li           158       1,855    124,748
Leif Groop       158       919      136,056
Xiang Zhang      154       1,733    117,576
Rui Zhang        151       2,625    107,917
Zhenwei Yang     150       956      109,344
Network Information
Related Institutions (5)
Chinese Academy of Sciences
634.8K papers, 14.8M citations

96% related

Tsinghua University
200.5K papers, 4.5M citations

96% related

École Polytechnique Fédérale de Lausanne
98.2K papers, 4.3M citations

93% related

Peking University
181K papers, 4.1M citations

93% related

Zhejiang University
183.2K papers, 3.4M citations

92% related

Performance
Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    328
2022    1,954
2021    10,997
2020    10,179
2019    9,475
2018    7,893