
Vivienne Sze

Researcher at Massachusetts Institute of Technology

Publications: 155
Citations: 14,471

Vivienne Sze is an academic researcher at the Massachusetts Institute of Technology. She has contributed to research on topics including context-adaptive binary arithmetic coding and energy consumption. She has an h-index of 39 and has co-authored 144 publications receiving 10,365 citations. Her previous affiliations include Texas Instruments.

Papers
Journal ArticleDOI

Efficient Processing of Deep Neural Networks: A Tutorial and Survey

TL;DR: In this paper, the authors provide a comprehensive tutorial and survey of recent advances toward enabling efficient processing of DNNs. They discuss the various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of deep neural networks, either solely via hardware design changes or via joint hardware and DNN algorithm changes.
Journal ArticleDOI

Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks

TL;DR: Eyeriss is an accelerator for state-of-the-art deep convolutional neural networks (CNNs). By reconfiguring its architecture, it optimizes for the energy efficiency of the entire system, including both the accelerator chip and off-chip DRAM.
Journal ArticleDOI

Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks

TL;DR: A novel dataflow, called row-stationary (RS), is presented that minimizes data-movement energy consumption on a spatial architecture. It can adapt to different CNN shape configurations and reduces all types of data movement by maximally exploiting processing-engine local storage, direct inter-PE communication, and spatial parallelism.
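To make the reuse idea behind a row-stationary dataflow concrete, here is a minimal software sketch (hypothetical; not the paper's hardware implementation). Each processing engine (PE) keeps one filter row stationary in local storage and slides one input row past it, so filter weights are fetched once rather than once per output; partial-sum rows from vertically adjacent PEs are then accumulated to form the 2-D convolution output.

```python
# Sketch of the row-stationary 1-D convolution primitive (illustrative only).
# A single PE holds filter_row resident and reuses it across every sliding
# position of input_row, producing one row of partial sums.

def pe_1d_conv(filter_row, input_row):
    """One PE: the filter row stays stationary; each step reuses it."""
    k = len(filter_row)
    return [
        sum(filter_row[i] * input_row[j + i] for i in range(k))
        for j in range(len(input_row) - k + 1)
    ]

def conv2d_rows(filt, image):
    """Accumulate 1-D partial-sum rows across PEs into a 2-D output."""
    out_h = len(image) - len(filt) + 1
    out = []
    for r in range(out_h):
        # Each filter row maps to a PE; their 1-D results are summed
        # column-wise (modeling inter-PE partial-sum accumulation).
        rows = [pe_1d_conv(filt[fr], image[r + fr]) for fr in range(len(filt))]
        out.append([sum(col) for col in zip(*rows)])
    return out
```

In hardware, the point of this mapping is that `filter_row` never leaves the PE's register file while an entire input row streams through, which is where the data-movement energy savings come from.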
Posted Content

Efficient Processing of Deep Neural Networks: A Tutorial and Survey

TL;DR: In this article, the authors provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs, and discuss various hardware platforms and architectures that support deep neural networks.
Journal ArticleDOI

Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

TL;DR: Eyeriss v2 is a DNN accelerator architecture designed for running compact and sparse DNNs. It can process sparse data directly in the compressed domain, for both weights and activations, and is therefore able to improve both processing speed and energy efficiency on sparse models.
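To illustrate the general principle of computing directly on compressed sparse data (this is a hypothetical software sketch, not Eyeriss v2's actual hardware scheme), the example below stores weights in CSR form and performs multiply-accumulates only for nonzero weight/activation pairs, so sparsity reduces both storage and arithmetic work.

```python
# Illustrative CSR-based sparse matrix-vector product: zeros are never
# stored in `values`, and zero activations are skipped at compute time.

def to_csr(matrix):
    """Compress a dense weight matrix into CSR (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, activations):
    """Multiply compressed weights by an activation vector, skipping zeros."""
    out = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            a = activations[col_idx[k]]
            if a != 0:  # exploit activation sparsity as well
                acc += values[k] * a
        out.append(acc)
    return out
```

The hardware analogue is that the accelerator's datapath consumes the compressed stream directly, never expanding it back to dense form, which is what lets sparse models run faster and at lower energy.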