TY - GEN
T1 - GREG
T2 - 27th International Conference on Human-Computer Interaction, HCII 2025
AU - Vidra, Idan
AU - Yehezkel, Aviv
AU - Lumnitz, Udi
AU - Erel, Hadas
AU - Zuckerman, Oren
AU - Shamir, Ariel
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - This paper presents a Recurrent Neural Network (RNN)-based method for generating a large variety of social gestures for non-humanoid robotic devices from a small sample of animator-generated gestures. Currently, the creation of robotic gestures relies on human animators, resulting in a limited number of manually designed, low-diversity gestures that are time-consuming and costly to produce. Generating a diverse range of gestures for non-humanoid robots is essential, as robots will be integrated into everyday life situations in the near future. Training an RNN model for robot gesture generation introduces challenges, such as the lack of relevant datasets for non-humanoid gestures and the absence of a direct mapping between human and robot movements. In this work, we propose a novel approach that leverages RNNs and Transfer Learning to automatically generate movement coordinates for non-verbal robotic gestures. Starting from an RNN model pre-trained for textual sentiment detection, we use Transfer Learning to learn the abstract “language” of robot non-verbal gestures and adopt a text-generation approach that creates movement embeddings using a word2vec methodology. The model is trained on two emotion categories, “happy” and “sad,” each with its respective animations, and predicts the robot’s next movement based on the previous ones. We conducted an on-screen user study with 66 Amazon Mechanical Turk participants, who evaluated videos of the generated gesture animations against both a random-next-frame baseline and the animator-created animations. The results indicate that our proposed model successfully generates diverse and expressive robot gestures.
AB - This paper presents a Recurrent Neural Network (RNN)-based method for generating a large variety of social gestures for non-humanoid robotic devices from a small sample of animator-generated gestures. Currently, the creation of robotic gestures relies on human animators, resulting in a limited number of manually designed, low-diversity gestures that are time-consuming and costly to produce. Generating a diverse range of gestures for non-humanoid robots is essential, as robots will be integrated into everyday life situations in the near future. Training an RNN model for robot gesture generation introduces challenges, such as the lack of relevant datasets for non-humanoid gestures and the absence of a direct mapping between human and robot movements. In this work, we propose a novel approach that leverages RNNs and Transfer Learning to automatically generate movement coordinates for non-verbal robotic gestures. Starting from an RNN model pre-trained for textual sentiment detection, we use Transfer Learning to learn the abstract “language” of robot non-verbal gestures and adopt a text-generation approach that creates movement embeddings using a word2vec methodology. The model is trained on two emotion categories, “happy” and “sad,” each with its respective animations, and predicts the robot’s next movement based on the previous ones. We conducted an on-screen user study with 66 Amazon Mechanical Turk participants, who evaluated videos of the generated gesture animations against both a random-next-frame baseline and the animator-created animations. The results indicate that our proposed model successfully generates diverse and expressive robot gestures.
KW - Human-Robot Interaction
KW - Neural networks
KW - Robotic social gestures
KW - Transfer learning
UR - https://www.scopus.com/pages/publications/105008645927
U2 - 10.1007/978-3-031-94168-9_19
DO - 10.1007/978-3-031-94168-9_19
M3 - Conference contribution
AN - SCOPUS:105008645927
SN - 9783031941672
T3 - Communications in Computer and Information Science
SP - 199
EP - 209
BT - HCI International 2025 Posters - 27th International Conference on Human-Computer Interaction, HCII 2025, Proceedings
A2 - Stephanidis, Constantine
A2 - Antona, Margherita
A2 - Ntoa, Stavroula
A2 - Salvendy, Gavriel
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 22 June 2025 through 27 June 2025
ER -