Institution

University of Massachusetts Lowell

Education · Lowell, Massachusetts, United States
About: University of Massachusetts Lowell is an education organization based in Lowell, Massachusetts, United States. It is known for research contributions in the topics of Population and Poison control. The organization has 5533 authors who have published 12640 publications receiving 306181 citations. The organization is also known as: UMass Lowell and UML.


Papers
Proceedings ArticleDOI
07 Jun 2015
TL;DR: A novel recurrent convolutional architecture for large-scale visual learning that is end-to-end trainable; such models are shown to have distinct advantages over state-of-the-art models for recognition or generation that are separately defined and/or optimized.
Abstract: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics, yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.

4,206 citations
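
The architecture this abstract describes pairs a per-frame convolutional encoder with a recurrent network over the resulting feature sequence, trained jointly. Below is a minimal sketch of that idea, assuming PyTorch and a torchvision ResNet-18 as the per-frame encoder; the backbone, layer sizes, and classification head are illustrative assumptions, not the authors' implementation.

```python
# Minimal CNN+RNN ("recurrent convolutional") sketch, assuming PyTorch.
import torch
import torch.nn as nn
import torchvision.models as models

class RecurrentConvNet(nn.Module):
    def __init__(self, hidden_size=256, num_classes=101):
        super().__init__()
        backbone = models.resnet18(weights=None)     # per-frame visual encoder
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                  # keep pooled visual features
        self.cnn = backbone                          # spatial "layers"
        self.rnn = nn.LSTM(feat_dim, hidden_size, batch_first=True)  # temporal "layers"
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):                       # frames: (batch, time, C, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))       # encode every frame
        feats = feats.view(b, t, -1)                 # restore the time axis
        out, _ = self.rnn(feats)                     # model temporal dynamics
        return self.head(out[:, -1])                 # predict from the last state

model = RecurrentConvNet()
logits = model(torch.randn(2, 8, 3, 224, 224))       # two clips of eight frames
```

Gradients flow through both the LSTM and the convnet, so temporal dynamics and perceptual representations are learned jointly; and because the LSTM consumes whatever sequence length it is given, the same module handles variable-length inputs, the property the abstract highlights.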

Posted Content (preprint version of the paper above; abstract identical)

3,935 citations

Journal ArticleDOI
01 Jul 1992 · Geology
TL;DR: A-type granitoids can be divided into two chemical groups: A1, with element ratios similar to those of oceanic-island basalts, and A2, with ratios ranging from those of continental crust to those of island-arc basalts; the two types have very different sources and tectonic settings.
Abstract: The A-type granitoids can be divided into two chemical groups. The first group (A1) is characterized by element ratios similar to those observed for oceanic-island basalts. The second group (A2) is characterized by ratios that vary from those observed for continental crust to those observed for island-arc basalts. It is proposed that these two types have very different sources and tectonic settings. The A1 group represents differentiates of magmas derived from sources like those of oceanic-island basalts but emplaced in continental rifts or during intraplate magmatism. The A2 group represents magmas derived from continental crust or underplated crust that has been through a cycle of continent-continent collision or island-arc magmatism.

2,043 citations

Proceedings Article
10 Dec 2014
TL;DR: This work proposes a new CNN architecture that introduces an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant, and shows that a domain confusion metric can be used for model selection to determine the dimension of the adaptation layer and its best position in the CNN architecture.
Abstract: Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.

2,036 citations
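
The abstract's two ingredients are a narrow adaptation layer and a confusion loss that penalizes mismatch between source and target features. Here is a hedged sketch of that idea, assuming precomputed backbone features and a linear-kernel maximum mean discrepancy (MMD) as the confusion term; the dimensions, the 0.25 weight, and the mmd() function are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of "adaptation layer + domain confusion loss", assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd(source, target):
    # Linear-kernel MMD between two feature batches: squared distance
    # between their mean embeddings.
    return (source.mean(dim=0) - target.mean(dim=0)).pow(2).sum()

class AdaptNet(nn.Module):
    def __init__(self, feat_dim=4096, adapt_dim=256, num_classes=31):
        super().__init__()
        self.adapt = nn.Linear(feat_dim, adapt_dim)   # the adaptation layer
        self.classifier = nn.Linear(adapt_dim, num_classes)

    def forward(self, x):
        z = F.relu(self.adapt(x))                     # domain-aligned features
        return self.classifier(z), z

model = AdaptNet()
src, tgt = torch.randn(32, 4096), torch.randn(32, 4096)  # toy backbone features
labels = torch.randint(0, 31, (32,))                     # source labels only

logits, z_src = model(src)
_, z_tgt = model(tgt)                                    # target is unlabeled
loss = F.cross_entropy(logits, labels) + 0.25 * mmd(z_src, z_tgt)
loss.backward()   # both terms shape the shared adaptation layer
```

Per the abstract, the same confusion metric, evaluated at candidate layers, can also guide how wide to make the adaptation layer and where to place it; in this sketch it simply enters the training objective as a regularizer alongside the classification loss.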


Authors


Name | H-index | Papers | Citations
David L. Kaplan | 177 | 1,944 | 146,082
Yang Yang | 171 | 2,644 | 153,049
Krzysztof Matyjaszewski | 169 | 1,431 | 128,585
Yi Yang | 143 | 2,456 | 92,268
Ernst J. Schaefer | 131 | 605 | 89,168
Jose M. Ordovas | 123 | 1,024 | 70,978
Michael R. Hamblin | 117 | 899 | 59,533
Mike Clarke | 113 | 1,037 | 164,328
Katherine L. Tucker | 106 | 683 | 39,404
Charles T. Driscoll | 97 | 554 | 37,355
Louise Ryan | 88 | 492 | 26,849
Zhongping Chen | 81 | 742 | 24,249
Kate Saenko | 80 | 287 | 39,066
Richard A. Gross | 79 | 402 | 22,225
Dong-Yu Kim | 70 | 342 | 20,340
Network Information
Related Institutions (5)

Pennsylvania State University · 196.8K papers, 8.3M citations · 92% related
University of Maryland, College Park · 155.9K papers, 7.2M citations · 91% related
Rutgers University · 159.4K papers, 6.7M citations · 91% related
University of Texas at Austin · 206.2K papers, 9M citations · 91% related
Case Western Reserve University · 106.5K papers, 5M citations · 91% related

Performance Metrics

No. of papers from the institution in previous years:

Year | Papers
2023 | 37
2022 | 111
2021 | 925
2020 | 834
2019 | 830
2018 | 658