Jason J. Corso
Researcher at University of Michigan
Publications - 269
Citations - 12815
Jason J. Corso is an academic researcher at the University of Michigan. He has contributed to research in topics including segmentation and image segmentation, has an h-index of 41, and has co-authored 265 publications receiving 9,871 citations. His previous affiliations include Johns Hopkins University and the State University of New York system.
Papers
Journal Article
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
Bjoern H. Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, N. Porz, Johannes Slotboom, Roland Wiest, Levente Lanczi, Elizabeth R. Gerstner, Marc-André Weber, Tal Arbel, Brian B. Avants, Nicholas Ayache, Patricia Buendia, D. Louis Collins, Nicolas Cordier, Jason J. Corso, Antonio Criminisi, Tilak Das, Hervé Delingette, Çağatay Demiralp, Christopher R. Durst, Michel Dojat, Senan Doyle, Joana Festa, Florence Forbes, Ezequiel Geremia, Ben Glocker, Polina Golland, Xiaotao Guo, Andac Hamamci, Khan M. Iftekharuddin, Raj Jena, Nigel M. John, Ender Konukoglu, Danial Lashkari, José Mariz, Raphael Meier, Sérgio Pereira, Doina Precup, Stephen J. Price, Tammy Riklin Raviv, Syed M. S. Reza, Michael Ryan, Duygu Sarikaya, Lawrence H. Schwartz, Hoo-Chang Shin, Jamie Shotton, Carlos A. Silva, Nuno Sousa, Nagesh K. Subbanna, Gábor Székely, Thomas J. Taylor, Owen M. Thomas, Nicholas J. Tustison, Gozde Unal, Flor Vasseur, Max Wintermark, Dong Hye Ye, Liang Zhao, Binsheng Zhao, Darko Zikic, Marcel Prastawa, Mauricio Reyes, Koen Van Leemput, +67 more
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Proceedings Article
Action bank: A high-level representation of activity in video
TL;DR: Action bank is a high-level video representation composed of many individual action detectors sampled broadly in both semantic space and viewpoint space. It is constructed to be semantically rich and, even when paired with simple linear SVM classifiers, achieves highly discriminative performance.
Journal Article
Unified Vision-Language Pre-Training for Image Captioning and VQA
TL;DR: VLP is the first reported model to achieve state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0.
Journal Article
Efficient Multilevel Brain Tumor Segmentation With Integrated Bayesian Model Classification
TL;DR: A Bayesian formulation for incorporating soft model assignments into the calculation of affinities is presented. The resulting soft model assignments are integrated into the multilevel segmentation-by-weighted-aggregation algorithm and applied to the task of detecting and segmenting brain tumor and edema in multichannel magnetic resonance (MR) volumes.
Proceedings Article
Jointly modeling deep video and compositional text to bridge vision and language in a unified framework
TL;DR: The results show that the approach outperforms SVM, CRF, and CCA baselines in predicting Subject-Verb-Object triplets and in natural sentence generation, and outperforms CCA in video retrieval and language retrieval tasks.