
Stefan Kopp

Researcher at Bielefeld University

Publications: 291
Citations: 7421

Stefan Kopp is an academic researcher at Bielefeld University. His research focuses on the topics of gesture and gesture recognition. He has an h-index of 41 and has co-authored 276 publications receiving 6423 citations.

Papers
Book Chapter

Towards a common framework for multimodal generation: the behavior markup language

TL;DR: Describes an international effort to unify a multimodal behavior generation framework for Embodied Conversational Agents (ECAs), proposing a model whose stages represent intent planning, behavior planning, and behavior realization.
Book Chapter

A conversational agent as museum guide: design and evaluation of a real-world application

TL;DR: Results indicate that Max engages people in interactions where they are likely to use human-like communication strategies, suggesting the attribution of sociality to the agent.
Journal Article

Towards a common framework for multimodal generation: The behavior markup language

TL;DR: In this article, the authors propose a three-stage model called SAIBA, where the stages represent intent planning, behavior planning and behavior realization, and a Function Markup Language (FML), describing intent without referring to physical behavior, mediates between the first two stages.
Journal Article

Guest Editorial: Gesture and speech in interaction: An overview

TL;DR: Provides an overview of the current understanding of manual and head gesture form and function, of the principal functional interactions between gesture and speech in aiding communication, transporting meaning, and producing speech, and of research on temporal speech-gesture synchrony.
Journal Article

To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability

TL;DR: When the robot used co-verbal gestures during interaction, it was anthropomorphized more; participants perceived it as more likable, reported greater shared reality with it, and showed stronger future contact intentions than when the robot gave instructions without gestures.