ZSGL: Zero shot gestural learning

  • Naveen Madapana
  • Juan Wachs

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

17 Scopus citations

Abstract

Gesture recognition systems enable humans to interact with machines in an intuitive and natural way. Humans tend to create gestures on the fly, and conventional systems lack the adaptability to learn new gestures beyond the training stage. This problem can be best addressed using Zero Shot Learning (ZSL), a machine learning paradigm that aims to recognize unseen objects from just a description of them. ZSL for gestures has hardly been addressed in computer vision research due to the inherent ambiguity and contextual dependency associated with gestures. This work proposes an approach for Zero Shot Gestural Learning (ZSGL) by leveraging the semantic information embedded in gestures. First, a human-factors-based approach is followed to generate semantic descriptors for gestures that generalize to the existing gesture classes. Second, we assess the performance of various existing state-of-the-art ZSL algorithms on gestures using two standard datasets: MSRC-12 and CGD2011. The obtained results (26.35% unseen-class accuracy) parallel the benchmark accuracies of attribute-based object recognition and justify our claim that ZSL is a desirable paradigm for gesture-based systems.
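The attribute-based zero-shot setup described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the attribute vectors, the synthetic features, and the use of ridge regression as the attribute predictor are all assumptions chosen for illustration. The core idea it demonstrates is the one the abstract names: learn a mapping from gesture features to semantic descriptors on seen classes, then classify unseen classes by matching predicted descriptors to class descriptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): 4 seen and 2 unseen gesture classes,
# each described by 6 binary semantic attributes (e.g. "one-handed",
# "circular motion", ...).
seen_attrs = np.array([
    [1, 0, 0, 1, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 1],
], dtype=float)
unseen_attrs = np.array([
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 1],
], dtype=float)

def sample(attrs, n):
    """Synthetic gesture features: class-attribute vector plus noise,
    standing in for real skeletal features from MSRC-12 / CGD2011."""
    X = np.repeat(attrs, n, axis=0)
    X = X + 0.1 * rng.standard_normal(X.shape)
    y = np.repeat(np.arange(len(attrs)), n)
    return X, y

X_train, y_train = sample(seen_attrs, 50)

# Attribute predictor: ridge regression from features to attribute space,
# fit only on samples of the seen classes.
A_train = seen_attrs[y_train]
W = np.linalg.solve(X_train.T @ X_train + 1e-3 * np.eye(6), X_train.T @ A_train)

# Zero-shot inference: predict attributes for unseen-class samples, then
# assign the unseen class whose descriptor is nearest in cosine similarity.
X_test, y_test = sample(unseen_attrs, 50)
A_pred = X_test @ W

def normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

scores = normalize(A_pred) @ normalize(unseen_attrs).T
pred = scores.argmax(axis=1)
acc = (pred == y_test).mean()
print(f"unseen-class accuracy: {acc:.2f}")
```

With these well-separated toy descriptors the unseen classes are recovered almost perfectly; on real gesture data, where descriptors overlap and features are noisy, accuracies in the range the abstract reports are typical for this family of methods.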

Original language: English
Title of host publication: ICMI 2017 - Proceedings of the 19th ACM International Conference on Multimodal Interaction
Editors: Edward Lank, Eve Hoggan, Sriram Subramanian, Alessandro Vinciarelli, Stephen A. Brewster
Publisher: Association for Computing Machinery, Inc
Pages: 331-335
Number of pages: 5
ISBN (Electronic): 9781450355438
DOIs
State: Published - 3 Nov 2017
Externally published: Yes
Event: 19th ACM International Conference on Multimodal Interaction, ICMI 2017 - Glasgow, United Kingdom
Duration: 13 Nov 2017 – 17 Nov 2017

Publication series

Name: ICMI 2017 - Proceedings of the 19th ACM International Conference on Multimodal Interaction
Volume: 2017-January

Conference

Conference: 19th ACM International Conference on Multimodal Interaction, ICMI 2017
Country/Territory: United Kingdom
City: Glasgow
Period: 13/11/17 – 17/11/17

Keywords

  • Attribute based classification
  • Gesture recognition
  • Semantic descriptions
  • Transfer learning
  • Zero shot learning

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
