TY - GEN
T1 - MAGIC
T2 - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
AU - Rojas-Muñoz, Edgar
AU - Wachs, Juan P.
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5/1
Y1 - 2019/5/1
N2 - Gestures play a fundamental role in instructional processes between agents. However, effectively transferring this non-verbal information becomes complex when the agents are not physically co-located. Recently, remote collaboration systems that transfer gestural information have been developed. Nonetheless, these systems relegate gestures to an illustrative role: only a representation of the gesture is transmitted. We argue that further comparisons between the gestures can provide information about how well the tasks are being understood and performed. While gesture comparison frameworks exist, they rely only on gestures' appearance, leaving semantic and pragmatic aspects aside. This work introduces the Multi-Agent Gestural Instructions Comparer (MAGIC), an architecture that represents and compares gestures at the morphological, semantic, and pragmatic levels. MAGIC abstracts gestures via a three-stage pipeline based on taxonomy classification, a dynamic semantics framework, and constituency parsing; and utilizes a comparison scheme based on subtree intersections to describe gesture similarity. This work shows the feasibility of the framework by assessing MAGIC's gesture matching accuracy against other gesture comparison frameworks during a mentor-mentee remote collaborative physical task scenario.
AB - Gestures play a fundamental role in instructional processes between agents. However, effectively transferring this non-verbal information becomes complex when the agents are not physically co-located. Recently, remote collaboration systems that transfer gestural information have been developed. Nonetheless, these systems relegate gestures to an illustrative role: only a representation of the gesture is transmitted. We argue that further comparisons between the gestures can provide information about how well the tasks are being understood and performed. While gesture comparison frameworks exist, they rely only on gestures' appearance, leaving semantic and pragmatic aspects aside. This work introduces the Multi-Agent Gestural Instructions Comparer (MAGIC), an architecture that represents and compares gestures at the morphological, semantic, and pragmatic levels. MAGIC abstracts gestures via a three-stage pipeline based on taxonomy classification, a dynamic semantics framework, and constituency parsing; and utilizes a comparison scheme based on subtree intersections to describe gesture similarity. This work shows the feasibility of the framework by assessing MAGIC's gesture matching accuracy against other gesture comparison frameworks during a mentor-mentee remote collaborative physical task scenario.
UR - http://www.scopus.com/inward/record.url?scp=85070459008&partnerID=8YFLogxK
U2 - 10.1109/FG.2019.8756534
DO - 10.1109/FG.2019.8756534
M3 - Conference contribution
AN - SCOPUS:85070459008
T3 - Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
BT - Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
PB - Institute of Electrical and Electronics Engineers
Y2 - 14 May 2019 through 18 May 2019
ER -