Abstract
Teleoperated surgical robots can provide immediate medical assistance in austere and hostile environments. However, such scenarios are time-sensitive and require high-bandwidth, low-latency communication links that may be unavailable. The system presented in this paper retains a standard surgical teleoperation interface, providing surgeons with the environment in which they are trained. In our semi-autonomous robotic framework, high-level instructions are inferred from the surgeon’s actions and then executed semi-autonomously on the robot. The framework consists of two main modules: (i) a Recognition Module, which recognises atomic sub-tasks (i.e., surgemes) performed at the operator end, and (ii) an Execution Module, which executes the identified surgemes at the robot end using task-contextual information. The peg transfer task was selected for this paper because of its importance in laparoscopic surgical training. Experiments were performed on the DESK surgical dataset to demonstrate the framework’s effectiveness using two metrics: user intervention (measured as degree of autonomy) and success rate of surgeme execution. We achieved an average accuracy of 91.5% for surgeme recognition and an 86% success rate during surgeme execution. Furthermore, we obtained an average success rate of 53.9% for the overall task using a model-based approach, with a degree of autonomy of 99.33%.
Original language | English |
---|---|
Pages (from-to) | 376-383 |
Number of pages | 8 |
Journal | Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization |
Volume | 9 |
Issue number | 4 |
DOIs | |
State | Published - 1 Jan 2021 |
Externally published | Yes |
Keywords
- teleoperated robotic surgery
- surgical activity recognition
- surgical vision and perception
ASJC Scopus subject areas
- Computational Mechanics
- Biomedical Engineering
- Radiology, Nuclear Medicine and Imaging
- Computer Science Applications