Embodied gesture learning from one-shot

Maria E. Cabrera, Juan P. Wachs

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

This paper discusses the problem of one-shot gesture recognition. This is relevant to the field of human-robot interaction, where the user's intentions are indicated through spontaneous gesturing (one shot) to the robot. The novelty of this work consists of learning the process that leads to the creation of a gesture, rather than the gesture itself. In our case, the context involves the way in which humans produce gestures - the kinematic and anthropometric characteristics and the users' proxemics (the use of the space around them). In the method presented, the strategy is to generate a dataset of realistic samples based on biomechanical features extracted from a single gesture sample. These features, called 'the gist of a gesture', are considered to represent what humans remember when seeing a gesture and the cognitive process involved when trying to replicate it. By adding meaningful variability to these features, a large training dataset is created while preserving the fundamental structure of the original gesture. Having a large dataset of realistic samples enables training classifiers for future recognition. Three classifiers were trained and tested using a subset of the ChaLearn dataset; all three showed similar performance, with recognition rates around 80%. Our classification results show the feasibility and adaptability of the presented technique regardless of the classifier.
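The core idea of the abstract - expanding a single gesture sample into a large training set by adding meaningful variability while preserving its fundamental structure - can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `generate_gesture_dataset` and the specific perturbations (a global amplitude scaling standing in for anthropometric variation, plus Gaussian jitter standing in for kinematic variation) are assumptions for demonstration, not the biomechanical model described in the paper.

```python
import numpy as np

def generate_gesture_dataset(sample, n_samples=100, noise_scale=0.02, seed=0):
    """Generate synthetic variations of a single gesture trajectory.

    sample: (T, D) array, e.g. T time steps of D joint coordinates.
    Returns an (n_samples, T, D) array of perturbed copies.

    Hypothetical stand-in for the paper's method: each copy gets a
    random global amplitude scaling (anthropometric-like variability)
    and per-point Gaussian jitter (kinematic-like variability), so the
    overall shape of the original gesture is preserved.
    """
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    out = np.empty((n_samples,) + sample.shape)
    for i in range(n_samples):
        scale = 1.0 + rng.normal(0.0, 0.05)                  # body-size-like scaling
        jitter = rng.normal(0.0, noise_scale, sample.shape)  # motion noise
        out[i] = scale * sample + jitter
    return out
```

For example, a single 50-frame 2-D trajectory yields a `(100, 50, 2)` training array whose per-frame mean stays close to the original, reflecting the "preserve the fundamental structure" requirement; the resulting set could then feed any off-the-shelf classifier, consistent with the classifier-agnostic results reported.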

Original language: English
Title of host publication: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016
Publisher: Institute of Electrical and Electronics Engineers
Pages: 1092-1097
Number of pages: 6
ISBN (Electronic): 9781509039296
DOIs
State: Published - 15 Nov 2016
Externally published: Yes
Event: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016 - New York, United States
Duration: 26 Aug 2016 to 31 Aug 2016

Publication series

Name: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016

Conference

Conference: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016
Country/Territory: United States
City: New York
Period: 26/08/16 to 31/08/16

ASJC Scopus subject areas

  • Artificial Intelligence
  • Social Psychology
  • Human-Computer Interaction
