Autonomy levels for surgery robots and their consequences for responsibility
Eduard Fosch-Villaronga, Pranav Khanna, Hadassah Drukarch, and Bart Custers from the eLaw Center for Law and Digital Technologies recently published a short piece in Nature Machine Intelligence on the legal and regulatory implications of surgery automation.
Entitled ‘A human in the loop in surgery automation,’ their contribution focuses on how the complex interplay between increasingly autonomous surgical robots, medical practitioners and support staff will soon complicate the understanding of how to allocate responsibility if something goes wrong.
Understanding the exact roles of machines and humans in highly autonomous robotic surgeries is essential to map liability and to avoid ascribing or extending responsibility to the surgical robot itself, which parts of the literature have repeatedly put forward as a legitimate course of action in complex robotic ecosystems.
The autonomy levels of medical robots
Their work builds on the contribution of Yang and colleagues (2017) in Science Robotics, which highlights considerations for the increasing levels of autonomy of medical robots. Yang and colleagues (2017) anticipate a state in which surgeries become fully automated (autonomy level 5, “full autonomy”), noting that “No human needs to be in the loop, and the robot can perform an entire surgery.”
Although a significant step in investigating different levels of automation outside the automotive industry, the model of Yang et al. (2017) needs further detail on how it applies to specific types of medical robots, including surgery robots, rehabilitation robots, and socially assistive robots.
Fosch-Villaronga and colleagues take a step in this direction: they argue that the embodiment and characteristics of medical robots demand a domain-specific concretization, and they propose tailoring the model to surgery automation for further discussion.
The role of autonomy in highly automated surgeries
Yang and colleagues worry that “With decreasing human oversight and increasing robotic perception, decision-making, and action (the traditional “sense-think-act paradigm”), the risk of malfunction that can cause patient harm will increase.”
The more autonomous surgical robots become, the less active the human surgeon’s role will be. However, Fosch-Villaronga and colleagues argue that what decreases in surgery automation is not human oversight, if oversight is understood as supervising a person or their work, especially in an official capacity.
With progressive robot autonomy, what decreases is the human surgeon’s active performance, while, in parallel, oversight increases.
A human in the loop in surgery automation
Since autonomous robotic platforms rely heavily on sensory data, the medical support staff’s role remains integral and crucial for many functions, ranging from patient positioning to port placement. Humans will therefore not be eliminated entirely but will continue to participate actively even in highly automated surgical procedures, whether in performance, oversight, or support. Moreover, the companies developing this technology will play an increasingly important role too.
The technically complex interplay between the surgical robot and the medical practitioner often determines the surgery outcome. In this sense, while an action that caused patient harm may have been performed by the robot acting autonomously, the doctor or the support staff may be responsible for selecting the task to be executed and configuring the robot.
From a legal and ethical standpoint, surgical robot autonomy complicates the allocation of responsibility if something goes wrong
A demand for clarity on responsibility allocation in highly automated environments
With this article, Fosch-Villaronga and colleagues aim to raise awareness within the community of the role of autonomy in highly automated surgeries (including the role of companies and robot developers) to prevent robotic surgeries for which no human bears responsibility. Given the absence of legal frameworks regulating the surgical robot ecosystem, they also hope that their efforts may benefit policymakers working towards an optimal regulatory framing for robots and artificial intelligence.
Nature Machine Intelligence
Nature Machine Intelligence is part of Nature and is interested in the best research from across the fields of artificial intelligence, machine learning and robotics. Nat Mach Intell publishes high-quality original research and reviews on a wide range of topics in machine learning, robotics and AI. The journal also explores and discusses the significant impact that these fields are beginning to have on other scientific disciplines, as well as on many aspects of society and industry. There are countless opportunities where machine intelligence can augment human capabilities and knowledge in fields such as scientific discovery, healthcare, medical diagnostics, and safe and sustainable cities, transport and agriculture. At the same time, many important ethical, social and legal questions arise, especially given the fast pace of developments. Nature Machine Intelligence provides a platform to discuss these wide implications — encouraging a cross-disciplinary dialogue — with Comments, News Features, News & Views articles and Correspondence.
Quote
Fosch-Villaronga, E., Khanna, P., Drukarch, H., & Custers, B. H. M. (2021). A human in the loop in surgery automation. Nature Machine Intelligence, 1–2.
SAILS Project
This article is part of the SAILS Project at Leiden University. SAILS stands for Society (Social & Behavioural Sciences, Humanities, Law, Archaeology, Governance & Global Affairs) Artificial Intelligence and Life Sciences and is one of Leiden University’s interdisciplinary programmes.