Abstract
This article introduces the Turn-Taking Spiking Neural Network (TTSNet), a cognitive model for early prediction of a human or agent's turn-taking intentions. The TTSNet framework relies on implicit and explicit multimodal communication cues (physical, neurological and physiological) to predict when a turn-taking event will occur in a robust and unambiguous fashion. To test the proposed theories, the TTSNet framework was implemented on a robotic nurse assistant, which predicts a surgeon's turn-taking intentions and delivers surgical instruments accordingly. Experiments were conducted to evaluate TTSNet's performance in early turn-taking prediction. It reached an F1 score of 0.683 given 10% of the completed action, 0.852 at 50%, and 0.894 at 100% of the completed action. This performance outperformed multiple state-of-the-art algorithms and surpassed human performance when only limited partial observation was available (<40%). Such early turn-taking prediction capability would allow robots to perform collaborative actions proactively, facilitating collaboration and increasing team efficiency.
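The abstract describes accumulating multimodal cue evidence over time so that a prediction can be issued before the action completes. The following is a minimal illustrative sketch only, not the TTSNet architecture: a single leaky integrate-and-fire (LIF) unit that integrates weighted activations from hypothetical cue channels and fires an early turn-taking prediction once its potential crosses a threshold. All channel names, weights, and parameter values are assumptions for illustration.

```python
# Illustrative sketch, not the paper's model: one leaky integrate-and-fire unit
# accumulating weighted evidence from hypothetical multimodal cue channels
# (e.g. gaze, gesture, tool pose, physiological signal) and firing an early
# turn-taking prediction when its membrane potential crosses a threshold.
import numpy as np

def lif_turn_prediction(cue_stream, weights, tau=0.8, threshold=1.0):
    """Return the first timestep at which the unit fires, or None.

    cue_stream : (T, C) array of per-frame cue activations in [0, 1].
    weights    : (C,) array of synaptic weights, one per cue channel.
    tau        : leak factor (fraction of potential retained each step).
    threshold  : firing threshold of the membrane potential.
    """
    v = 0.0  # membrane potential
    for t, cues in enumerate(cue_stream):
        v = tau * v + float(np.dot(weights, cues))  # leak, then integrate input
        if v >= threshold:
            return t  # early prediction: turn-taking event expected
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, C = 50, 4                       # 50 frames, 4 hypothetical cue channels
    cues = rng.uniform(0.0, 0.2, (T, C))
    cues[30:, 0] = 0.9                 # cue 0 (e.g. a reach gesture) ramps up late
    w = np.array([0.5, 0.2, 0.2, 0.1])
    print("fired at frame:", lif_turn_prediction(cues, w))
```

In this toy setting the unit fires shortly after the dominant cue ramps up, i.e. before the full action sequence has been observed, which is the intuition behind evaluating prediction quality at 10%, 50%, and 100% of the completed action.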
Original language | English
---|---
Pages (from-to) | 1619-1643
Number of pages | 25
Journal | International Journal of Robotics Research
Volume | 38
Issue number | 14
DOIs |
State | Published - 1 Dec 2019
Externally published | Yes
Keywords
- Cognitive human–robot interaction
- cognitive robotics
- gesture
- human-centered and life-like robotics
- learning and adaptive systems
- medical robots and systems
- posture
- social spaces and facial expressions
ASJC Scopus subject areas
- Software
- Modeling and Simulation
- Mechanical Engineering
- Electrical and Electronic Engineering
- Artificial Intelligence
- Applied Mathematics