TY - GEN
T1 - Integrated vision-based robotic arm interface for operators with upper limb mobility impairments
AU - Jiang, Hairong
AU - Wachs, Juan P.
AU - Duerstock, Bradley S.
PY - 2013/12/31
Y1 - 2013/12/31
N2 - An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to form an efficient, hands-free WMRM controller. In this test system, two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures as commands to control the WMRM and to locate the operator's face for object positioning. The other sensor was used to automatically recognize the daily living objects from which test subjects could select. The gesture recognition interface incorporated hand detection, tracking, and recognition algorithms, achieving a high recognition accuracy of 97.5% for an eight-gesture lexicon. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and its results were sent as commands for 'coarse positioning' of the robotic arm near the selected daily living object. Automatic face detection was also provided as a shortcut for subjects to bring objects to the face using the WMRM. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.
KW - gesture recognition
KW - object recognition
KW - spinal cord injuries
KW - wheelchair-mounted robotic arm
UR - http://www.scopus.com/inward/record.url?scp=84891131067&partnerID=8YFLogxK
U2 - 10.1109/ICORR.2013.6650447
DO - 10.1109/ICORR.2013.6650447
M3 - Conference contribution
C2 - 24187264
AN - SCOPUS:84891131067
SN - 9781467360241
T3 - IEEE International Conference on Rehabilitation Robotics
BT - 2013 IEEE 13th International Conference on Rehabilitation Robotics, ICORR 2013
T2 - 2013 IEEE 13th International Conference on Rehabilitation Robotics, ICORR 2013
Y2 - 24 June 2013 through 26 June 2013
ER -