Introducing the future of robot regulation

Feb 1, 2021

Written by Eduard Fosch-Villaronga and Hadassah Drukarch

“The art of progress is to preserve order amid change, and to preserve change amid order” — Alfred North Whitehead.

New technologies are a representation of the progress of science. They offer possibilities that were until recently unimaginable and can solve problems faster, better, and more innovatively than ever before. While such new technologies may bring hope and change, they inevitably disrupt how we conceive reality, leading us to question and challenge existing norms and pushing us towards an increasingly loud call for legal change. While technology's pace dramatically accelerates, however, legal responsiveness does not always keep up.

Our current legal system works in horror vacui mode. We regulate almost everything from the day we are born until the day we die, and even far beyond. As such, we strive to prevent legal lacunae and to ensure legal certainty in all cases, so that individuals know what boundaries they must respect and what happens if someone violates them. Consequently, our legal system has produced many laws covering a wide range of phenomena and developments, including newly developed technologies such as robot technologies. It therefore makes sense that regulators first turn to existing legal frameworks to assess whether they already address an issue before developing new law. Still, regulators may find that technological developments call existing laws into question. When such challenges arise around robot technologies, developers, or even regulators, may seek an alternative response.

An artistic example of the Latin term horror vacui. Jean-Honoré Fragonard, Les Hasards Heureux de L’Escarpolette (“The Swing”), 1767. Courtesy of the Wallace Collection.

Robot regulatory initiatives: a legal ‘tragedy of the commons’

That the speed of technological change challenges regulation is not new. It does, however, push us to rethink how we can best identify the need to regulate new technologies. While legislation frames the rules of power and society’s conduct by establishing rights and obligations for the subjects within a legal system, it evolves as society changes. Technological development, in turn, embodies scientific progress, which challenges many areas of law and the boundaries of legislative application. Although both technology and regulation evolve, they do not always do so at the same pace or in the same direction.

Regulation is often very complicated, calling for a delicate interplay between various constraints, including the plurality and de-centeredness of our legal systems, the unclear fit between the regulated reality and the new development, and the unforeseeable impacts of such emerging technologies. The foolish race to regulate AI and robotics first is leading to numerous resolutions and regulatory initiatives from European institutions and across the wider global landscape.

As regulators fail to keep up with the fast pace of technological innovation, private actors (e.g., ISO, BSI, IEEE) have increasingly taken the lead in developing standards that aim to mitigate the ethical and legal risks and concerns posed by robotics, implying a shift away from centralized, public rule-making. Nevertheless, the speed at which robotics advances and the absence of a formal communication process between robot developers and regulators from which policies could learn significantly hinder the understanding, among both private and public policymakers, of the need for technology-specific, long-lasting, and well-fitting benchmarks. Moreover, harmonization of existing standards at the European level is also significantly lacking. A recent open consultation launched by the European Commission acknowledges that current European Harmonized Standards do not cover areas such as automated vehicles, additive manufacturing, collaborative robots/systems, or robots outside the industrial environment, among others.

In this context of missing communication channels between technology developers and regulators or policymakers, multiple regulatory bodies with mismatched interests, and no clear picture of the exact gaps and inconsistencies in existing robot regulatory frameworks, neither regulators nor robot developers and addressees seem to know exactly what needs to be done, while users’ rights may be at stake either way.

The failure of existing proposals

Top-down approaches to robot compliance are common and have been used in the past to address the legal and ethical aspects of robot technologies. A top-down approach presupposes an existing regulation that can, one way or another, be applied to a particular case, to a concrete robot. Directing producers to the relevant legislation may ease compliance processes and improve legal certainty. However, it does not guarantee that the legislation itself is adequate and up to date with new technological developments in the field of robotics.

In light of all the issues robot technology introduces, part of the literature stresses the need for an issue manager, and a proposal has been made to create so-called “Governance Coordinating Committees” (GCCs) for the governance of emerging technologies such as AI. Moreover, the European Parliament proposed creating a European Agency for Robotics and Artificial Intelligence in early 2017, and Schatz put forward the creation of an emerging-technology policy lab within the U.S. General Services Administration in 2018.

However, the absence of a stepback mechanism that can coordinate and align robot and regulatory development in robot governance has long been overlooked and has only recently been raised in the literature.

Hybridism as a solution

Acknowledging this absence, Fosch-Villaronga and Heldeweg built on and complemented these initiatives by proposing a modus operandi for such managers: a governance process that can serve as a backbone for coordinating and aligning robot and regulatory developers. This theoretical model envisions an iterative, hybrid regulatory process for robot governance. It combines a top-down approach, which presupposes the existence of a normative framework that frames technology development, with a bottom-up approach that leaves room for improvement by allowing that development to generate knowledge about the adequacy of the framework.

This process includes:

  • a Technology Impact Assessment (in this case a Robot Impact Assessment, ROBIA), linked to
  • a Shared Data Repository (SDR), linked to policy- and standard-making activities that update the current frameworks, and
  • a Regulatory Impact Assessment.

Robot Impact Assessment (ROBIA)

The process starts with a comprehensive assessment of a new robot development or use and its compliance with the current regulatory system (ROBIA). Such an evaluation may forecast the full spectrum of possible consequences of a technological advance given the existing legal framework. A ROBIA could be conducted at the simulator, test bed, and living lab levels to gather different knowledge depending on the technology readiness level.

Shared Data Repository

Documenting and formalizing these processes (as lessons learned) would ground the regulatory framework in knowledge and understanding of the characteristics and regulatory needs of new robot technologies. This knowledge could be highly valuable for future developers. Within this context, ROBIAs could serve as accountability tools and, at the same time, as data generators for policy purposes. In other words, the knowledge extracted from these assessments, matching robots against legislative affordances and limitations, could be collected and saved in an SDR, creating an interactive learning process.

The data saved in such a shared database could include ROBIAs, permission processes, related robot legislation, and ethical committees’ decisions. Especially with regard to the latter, collecting such information and compiling barriers, constraints, limitations, and frustrations of any development could improve the transparency of the ethical committees’ decision-making process and contribute to legislation’s learning process.
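
To make the repository idea more concrete, here is a minimal sketch, in Python, of how such SDR entries might be structured. The record types, field names, and method (RobiaEntry, EthicsDecision, SharedDataRepository, gaps_for_standard) are purely illustrative assumptions of ours, not part of any LIAISON or COVR specification.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative sketch only: a hypothetical schema for a Shared Data Repository (SDR).

@dataclass
class RobiaEntry:
    """Outcome of one Robot Impact Assessment (ROBIA) for a robot or use case."""
    robot_id: str                 # identifier of the robot or use case under assessment
    setting: str                  # "simulator", "test bed", or "living lab"
    applicable_rules: List[str]   # standards/directives consulted, e.g. "ISO 13482:2014"
    identified_gaps: List[str]    # issues the current framework does not seem to cover
    assessed_on: date = field(default_factory=date.today)

@dataclass
class EthicsDecision:
    """An ethics committee decision, with the constraints it attaches."""
    robot_id: str
    approved: bool
    constraints: List[str]        # barriers, limitations, conditions imposed

@dataclass
class SharedDataRepository:
    """Collects assessments and decisions so policy and standard makers can learn from them."""
    robias: List[RobiaEntry] = field(default_factory=list)
    ethics_decisions: List[EthicsDecision] = field(default_factory=list)

    def gaps_for_standard(self, standard: str) -> List[str]:
        """All regulatory gaps reported by ROBIAs that consulted the given standard."""
        return [gap
                for entry in self.robias
                if standard in entry.applicable_rules
                for gap in entry.identified_gaps]
```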

Regulatory Impact Assessment

The information gathered in the SDR could be channeled to relevant policy and standard makers, who often have the obligation to revise their regulatory efforts from time to time.

An ex-post legislative evaluation mechanism could close and restart the iterative learning governance process, allowing the regulator to extract the collected knowledge and evidence from the SDR and decide, on the basis of that information, whether regulatory action is needed. While the regulator’s eventual policy strategy depends on many variables (e.g., uncertainty, the nature of the interest, or the context or scale of the development or use), these decisions could provide a common, safe baseline to which all researchers, companies, and technology developers within the ecosystem should adhere.
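
Continuing the illustrative sketch above, such an ex-post evaluation step could be as simple as counting how often the same gap recurs for a given standard and flagging it once it crosses a threshold. The function name, the threshold, and the flagging logic below are again hypothetical, intended only to show how evidence stored in an SDR could trigger a review.

```python
from collections import Counter
from typing import List

def flag_for_review(sdr: "SharedDataRepository", standard: str, threshold: int = 3) -> List[str]:
    """Return gaps reported at least `threshold` times against `standard`.

    A purely illustrative ex-post check: gaps that keep recurring in the SDR are
    surfaced to policy and standard makers as candidates for regulatory action.
    """
    counts = Counter(sdr.gaps_for_standard(standard))
    return [gap for gap, times in counts.items() if times >= threshold]

# Illustrative use, assuming `sdr` is the SharedDataRepository sketched above:
# flag_for_review(sdr, "ISO 13482:2014")  # recurring gaps reported around personal care robots
```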

‘This model envisions an iterative regulatory process for robot governance, and includes a Technology Impact Assessment (in this case a Robot Impact Assessment, ROBIA), a Shared Data Repository (SDR), and a Regulatory Impact Assessment.’

Putting this into practice: The LIAISON Research Project

The LIAISON research project contributes to this approach by conceiving an effective way to extract compliance and technical knowledge from compliance tools (tools that help developers comply with the applicable legislation) and to direct it to policymakers, helping them work out an optimal regulatory framing (whether that means changing, revising, or reinterpreting the rules) for existing and emerging robot technologies.

LIAISON is part of the H2020 COVR project. COVR stands for “being safe around collaborative and versatile robots in shared spaces” and aims to significantly reduce the complexity of safety certification for collaborative robots (cobots). To this end, the project has developed the COVR Toolkit, an online compliance tool that guides developers through their legal compliance process, from finding the relevant standards, directives, and protocols to performing a risk assessment. However, robot regulatory frameworks do not always frame technology development accurately.

To understand the inconsistencies, dissonances, and inaccuracies of existing frameworks, LIAISON focuses on personal care robots, rehabilitation robots, and agricultural robots. Accordingly, we look particularly at the following standards that frame such developments:

  • ISO 13482:2014 on personal care robots;
  • IEC 80601-2-78:2019 on rehabilitation robots; and
  • ISO 18497:2018 on agricultural machinery and tractors.

Seeing regulation (broadly understood) as a tool to advance social goals, and therefore subject to adjustment towards this end, the project will discuss in detail different regulatory approaches that use iterative governance processes for robot governance. For that purpose, LIAISON aims to engage with representatives from industry, standardization organizations, and policymakers (e.g., through exploratory meetings and questionnaires) to present compliance tools as a potential source of information for policy action and to understand what information would be useful to them. Applying such a novel, interdisciplinary methodology is instrumental in identifying unregulated and underestimated challenges (e.g., the safety of systems that integrate and adapt over time, cyber-physical safety, psychological harm) that regulations should cover, and in gauging relevant stakeholders’ response to, support for, and perceived need for the introduction of the LIAISON model.

Following the ideal that lawmaking ‘needs to become more proactive, dynamic, and responsive’, LIAISON proposes formalizing a communication process between robot developers and regulators from which policies could learn, thereby channeling robot policy development from a bottom-up perspective towards a hybrid top-down/bottom-up approach. This is novel: most approaches have been solely top-down, disregarding the rich field knowledge that could help identify gaps and inconsistencies in the frameworks governing the technology.

The process envisioned above will clarify which regulatory actions policymakers need to take to provide compliance guidance, explain unclear concepts or uncertain domains of applicability to improve legal certainty, and inform future regulatory developments for robot technology use and development at the European, national, regional, or municipal level.

In this regard, LIAISON takes the lead in tackling this regulatory challenge, linking robot development and policymaking to reduce the complexity of robot legal compliance. As such, LIAISON aligns with the overall H2020 COVR goal of reducing the complexity of safety certification for robots: it provides policy and standard makers with the knowledge they need about legal inconsistencies, new categories, or new safety requirements (including psychological ones) to update existing frameworks, and it adds a link to public and private policymakers to complete the cobot value chain. In this way, LIAISON acts as a crucial stepback mechanism to help align robot and regulatory development and to improve robot technology’s overall safety and ease of market entrance.

‘LIAISON is a crucial stepback mechanism to help align robot and regulatory development and improve robot technology’s overall safety and ease of market entrance.’

Conclusion

We believe that the regulatory cycle is only truly closed when it starts again, or allows itself to be restarted, in response to new challenges and technologies. LIAISON tests this theoretical model of a dynamic, iterative regulatory process in practice, aiming to channel robot policy development from a bottom-up perspective towards a combined top-down/bottom-up model, while leaving the door open for future modifications.

In the long term, the expected project results will complement existing knowledge on the ethical, legal, and societal (ELS) aspects of robotics by clarifying how to address pressing but still unaddressed safety challenges raised by robots, and they will represent a practical, valuable tool to advance social goals in a robotized workplace. Overall, advances in the legal oversight of robot safety will provide a solid basis for designing safer robots, safeguarding users’ rights, and improving the overall safety, quality, and efficiency of the services robots deliver.

The LIAISON Research Project is part of the H2020 COVR project, a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 779966. If you want to contribute to the project, please get in touch with us through the contact information displayed on this page.

Dr. E. Fosch-Villaronga is an Assistant Professor at the eLaw Center for Law and Digital Technologies at Leiden University (NL).
