
Sonia Chernova

Researcher at Georgia Institute of Technology

Publications: 179
Citations: 7737

Sonia Chernova is an academic researcher from Georgia Institute of Technology. The author has contributed to research in topics: Robot & Task (project management). The author has an h-index of 31 and has co-authored 163 publications receiving 5997 citations. Previous affiliations of Sonia Chernova include Massachusetts Institute of Technology and Carnegie Mellon University.

Papers
Journal ArticleDOI

A survey of robot learning from demonstration

TL;DR: A comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state-to-action mappings; the survey analyzes and categorizes the multiple ways in which examples are gathered, as well as the various techniques for policy derivation.
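One way to derive a policy from example state-to-action mappings is to treat the demonstrations as labelled training data and fit a classifier from states to actions (behavior cloning). The sketch below illustrates that idea only; the classifier choice, state dimensionality, and action labels are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch: deriving a policy from demonstrated state-action pairs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical demonstration data: each row is a 2-D state, each entry of
# demo_actions is the teacher's discrete action in that state.
demo_states = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2], [0.9, 0.7]])
demo_actions = np.array([0, 1, 2, 2])

# The policy is a mapping from states to actions learned from the examples.
policy = KNeighborsClassifier(n_neighbors=1).fit(demo_states, demo_actions)

# The learned policy generalizes the demonstrations to unseen states.
print(policy.predict(np.array([[0.35, 0.55]])))
```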
Journal ArticleDOI

Recent Advances in Robot Learning from Demonstration

TL;DR: In the context of robotics and automation, learning from demonstration (LfD) is the paradigm in which robots acquire new skills by learning to imitate an expert.
Journal ArticleDOI

Interactive policy learning through confidence-based autonomy

TL;DR: The algorithm selects demonstrations based on a measure of action-selection confidence, and results show that with Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher.
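A minimal sketch of the Confident Execution idea described above: the agent acts autonomously when its action-selection confidence clears a threshold and requests a demonstration otherwise. The classifier, the fixed threshold, and the stand-in teacher are illustrative assumptions; the paper's threshold handling is more involved than shown here.

```python
# Sketch of confidence-gated execution, assuming a probabilistic policy.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONF_THRESHOLD = 0.8  # assumed fixed threshold for illustration

def confident_execution_step(policy, state, request_demo):
    """Execute autonomously if confident; otherwise ask the teacher to demonstrate."""
    probs = policy.predict_proba(state.reshape(1, -1))[0]
    if probs.max() >= CONF_THRESHOLD:
        return int(np.argmax(probs))   # high confidence: act autonomously
    return request_demo(state)         # low confidence: request a demonstration

# Toy usage: fit a policy on a few labelled states, then take one step.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([0, 1, 0, 1])
policy = LogisticRegression().fit(X, y)
teacher = lambda s: 0  # stand-in for a human-provided demonstration
print(confident_execution_step(policy, np.array([0.5, 0.5]), teacher))
```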
Book

Robot Learning from Human Teachers

TL;DR: This book introduces the field with a focus on the unique technical challenges of designing robots that learn from naive human teachers, and provides best practices for the evaluation of LfD systems.
Proceedings Article

Reinforcement learning from demonstration through shaping

TL;DR: This paper investigates the intersection of reinforcement learning and expert demonstrations, retaining the theoretical guarantees provided by reinforcement learning while using expert demonstrations to speed up learning by biasing exploration through a process called reward shaping.
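One common way to bias exploration with demonstrations is potential-based reward shaping, where a potential function that is larger near demonstrated behavior is added to the environment reward. The sketch below uses a state-only Gaussian-similarity potential; the potential form, constants, and demonstration data are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of potential-based reward shaping biased toward demonstrated states:
# shaped reward = r + gamma * phi(s') - phi(s), which preserves optimal policies.
import numpy as np

GAMMA = 0.99
demo_states = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.8]])  # hypothetical demo

def potential(state, sigma=0.25):
    """Potential grows with similarity to the closest demonstrated state."""
    d2 = np.sum((demo_states - state) ** 2, axis=1)
    return np.exp(-d2.min() / (2 * sigma ** 2))

def shaped_reward(reward, state, next_state):
    """Environment reward plus the potential-based shaping term."""
    return reward + GAMMA * potential(next_state) - potential(state)

# Moving toward a demonstrated state yields a positive shaping bonus.
print(shaped_reward(0.0, np.array([0.0, 0.0]), np.array([0.2, 0.2])))
```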