TY - GEN
T1 - Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation
AU - Rahman, Md Masudur
AU - Sanchez-Tamayo, Natalia
AU - Gonzalez, Glebys
AU - Agarwal, Mridul
AU - Aggarwal, Vaneet
AU - Voyles, Richard M.
AU - Xue, Yexiang
AU - Wachs, Juan
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10/1
Y1 - 2019/10/1
N2 - In the future, deployable, teleoperated surgical robots could save the lives of critically injured patients in battlefield environments. These robotic systems will need autonomous capabilities to take over during communication delays and unexpected environmental conditions in critical phases of a procedure. Understanding and predicting the next surgical actions (referred to as 'surgemes') is essential for autonomous surgery. Most approaches to surgeme recognition cannot cope with the high variability associated with austere environments and therefore do not 'transfer' well to field robotics. We propose a methodology that uses compact image representations combined with kinematic features for surgeme recognition on the DESK dataset. This dataset provides samples of surgical procedures performed on different robotic platforms with high variability in the setup. We performed surgeme classification in two setups: 1) no transfer and 2) transfer from a simulated scenario to two real deployable robots. The results were then compared with recognition accuracies obtained using only kinematic data under the same experimental setup. The results show that our approach improves recognition performance over kinematic data alone across different domains. The proposed approach produced a transfer accuracy gain of up to 20% between the simulated and the real robot, and of up to 31% between the simulated robot and a different robot. A transfer accuracy gain was observed in all cases, even those already above 90%.
AB - In the future, deployable, teleoperated surgical robots could save the lives of critically injured patients in battlefield environments. These robotic systems will need autonomous capabilities to take over during communication delays and unexpected environmental conditions in critical phases of a procedure. Understanding and predicting the next surgical actions (referred to as 'surgemes') is essential for autonomous surgery. Most approaches to surgeme recognition cannot cope with the high variability associated with austere environments and therefore do not 'transfer' well to field robotics. We propose a methodology that uses compact image representations combined with kinematic features for surgeme recognition on the DESK dataset. This dataset provides samples of surgical procedures performed on different robotic platforms with high variability in the setup. We performed surgeme classification in two setups: 1) no transfer and 2) transfer from a simulated scenario to two real deployable robots. The results were then compared with recognition accuracies obtained using only kinematic data under the same experimental setup. The results show that our approach improves recognition performance over kinematic data alone across different domains. The proposed approach produced a transfer accuracy gain of up to 20% between the simulated and the real robot, and of up to 31% between the simulated robot and a different robot. A transfer accuracy gain was observed in all cases, even those already above 90%.
UR - http://www.scopus.com/inward/record.url?scp=85078835031&partnerID=8YFLogxK
U2 - 10.1109/RO-MAN46459.2019.8956396
DO - 10.1109/RO-MAN46459.2019.8956396
M3 - Conference contribution
AN - SCOPUS:85078835031
T3 - 2019 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019
BT - 2019 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019
PB - Institute of Electrical and Electronics Engineers
T2 - 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019
Y2 - 14 October 2019 through 18 October 2019
ER -