Sharada P. Mohanty
Researcher at École Polytechnique Fédérale de Lausanne
Publications - 43
Citations - 2804
Sharada P. Mohanty is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research on topics including Reinforcement learning and Competition (economics). The author has an h-index of 10 and has co-authored 40 publications receiving 1,555 citations. Previous affiliations of Sharada P. Mohanty include the International Institute of Information Technology.
Papers
Journal ArticleDOI
Using Deep Learning for Image-Based Plant Disease Detection
TL;DR: In this article, a deep convolutional neural network was used to identify 14 crop species and 26 diseases (or absence thereof) using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions.
Journal ArticleDOI
Critical dynamics in population vaccinating behavior
A. Demetri Pananos, Thomas M. Bury, Clara Wang, Justin Schonfeld, Sharada P. Mohanty, Brendan Nyhan, Marcel Salathé, Chris T. Bauch +7 more
TL;DR: Twitter and Google search data about measles from California and the United States, collected before and after the 2014–2015 Disneyland (California) measles outbreak, support the hypothesis that population vaccinating behavior near the disease elimination threshold is a critical phenomenon.
Posted Content
The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors.
William H. Guss, Mario Ynocente Castro, Sam Devlin, Brandon Houghton, Noboru Sean Kuno, Crissman Loomis, Stephanie Milani, Sharada P. Mohanty, Keisuke Nakata, Ruslan Salakhutdinov, John Schulman, Shinya Shiroshita, Nicholay Topin, Avinash Ummadisingu, Oriol Vinyals +14 more
TL;DR: The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors is introduced, to foster the development of algorithms which can efficiently leverage human demonstrations to drastically reduce the number of samples needed to solve complex, hierarchical, and sparse environments.
Book ChapterDOI
Learning to Run Challenge Solutions: Adapting Reinforcement Learning Methods for Neuromusculoskeletal Environments
Łukasz Kidziński, Sharada P. Mohanty, Carmichael F. Ong, Zhewei Huang, Shuchang Zhou, Anton Pechenko, Adam Stelmaszczyk, Piotr Jarosik, Mikhail Pavlov, Sergey Kolesnikov, Sergey M. Plis, Zhibo Chen, Zhizheng Zhang, Jiale Chen, Jun Shi, Zhuobin Zheng, Chun Yuan, Zhihui Lin, Henryk Michalewski, Piotr Milos, Blazej Osinski, Andrew Melnik, Malte Schilling, Helge Ritter, Sean F. Carroll, Jennifer L. Hicks, Sergey Levine, Marcel Salathé, Scott L. Delp +28 more
TL;DR: This work presents eight solutions that used deep reinforcement learning approaches, based on algorithms such as Deep Deterministic Policy Gradient, Proximal Policy Optimization, and Trust Region Policy Optimization, to make a simulated musculoskeletal model run as fast as possible through an obstacle course.
Book ChapterDOI
Adversarial Vision Challenge
Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Sharada P. Mohanty, Florian Laurent, Marcel Salathé, Matthias Bethge, Yaodong Yu, Hongyang Zhang, Susu Xu, Hongbao Zhang, Pengtao Xie, Eric P. Xing, Thomas Brunner, Frederik Diehl, Jérôme Rony, Luiz G. Hafemann, Shuyu Cheng, Yinpeng Dong, Xuefei Ning, Wenshuo Li, Yu Wang +23 more
TL;DR: This chapter describes the organisation and structure of the challenge as well as the solutions developed by the top-ranking teams.