Given the increasing impact of AI on human lives, policymakers are asking an important question: How can we ensure that currently deployed AI systems actually comply with existing regulation?
To answer that question, let us wind the clock back a few years, to when AI technologies started to escape from labs and conquer human societies. They took regulators and policymakers across the globe by surprise. It was as though science fiction had suddenly become very real and nobody quite knew what to do about it.
Thus, for a while, our journey with our new AI companions unfolded in a near regulatory vacuum. Then humanity recovered from this initial shock and engaged in frantic rule-making, determined to get a grip on AI innovation. What used to be an AI regulatory void is now filled with a bewildering variety of rules produced by a multitude of state and non-state actors.
This is a lot of progress. And yet, puzzlingly, yesterday’s Wild West and today’s crowded AI regulatory landscape have one thing in common: Neither is very effective in influencing behavior.
So how did we get here, and how do we come up with rules that make a difference?
1 Which Rule, Why, and How Now?
As the recently released Feasibility Study of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) points out, there are many rules that apply to some aspect of AI. However, the binding hard law instruments among them typically pre-date the advent of AI and hence address the challenges raised by AI systems only tangentially and indirectly.
Most rules specifically designed for AI, on the other hand, are very high-level, non-binding soft law instruments that leave much room for diverging interpretation and implementation, inevitably leading to fragmentation and clashes. Not least because of the strongly differing interests of the stakeholders involved in formulating these rules, there is so far very little international consensus on any tangible detail.
In other words, the uncertainty of not knowing what rules to expect has given way to a different kind of uncertainty: that of not knowing which rules are really right, which of them should be complied with, and how exactly.
From the perspective of the addressees (individuals, organizations, and even states), there is little practical difference between these two settings. There are simply too many options to choose from and too much uncertainty attached to any given choice.
2 Less Is More
Before going into how to dial down this uncertainty, note that compliance has two sides: We are, of course, expected to abide by existing regulations and policies. But, perhaps less obviously, regulators and policymakers are supposed to act on behalf of society, with the public interest at heart. That is, they owe it to society to design AI rules that are not only right for a given context but also clear enough to effectively guide addressees towards the expected behavior.
It follows that we cannot reasonably expect compliance unless our rules meet these high standards, so here are a few thoughts on how to improve the status quo:
1. Appropriate and flexible rules: What is right depends on a combination of technical, economic, social, cultural, moral, legal, and governance factors, and is seldom easy to agree on.
Generally speaking, the purpose of regulation is to incentivize socially rather than individually optimal outcomes. Modern regulatory best practices rely on broad international and multi-stakeholder coordination. Such a participatory approach—at least in principle—ensures that rules are not only informed by the full range of relevant expertise but also widely accepted.
An additional challenge—particularly in the AI context—is the need for regulatory frameworks and processes to flexibly adapt to rapidly changing technological advances.
2. Clear, consolidated, and consistent AI regulatory frameworks: As for clarity, there are two main avenues for improvement:
First, having reached the limits of what can be done with vague, general principles, we need to start elaborating these provisions, giving them more concrete and hence readily implementable content. The focus should shift from rule-making to practical implementation.
Second, the existing array of AI-related rules spawns confusion and imposes an excessive regulatory burden, especially in the cross-border context. Given the global reach of AI technologies, consolidating them into consistent domestic and international AI regulatory frameworks, made up of a combination of hard and soft law instruments, would go a long way towards enhancing clarity. Fewer, coordinated rules with wider reach are easier to identify and comply with than many different, likely competing local rules.
3. Incorporating bottom-up soft law self-regulatory initiatives: Innovative self-regulatory best practices designed by AI developers and other stakeholders are vital parts of the soft law elements of regulatory frameworks. Many AI developers are keen to further responsible AI development and adoption, and to inform regulatory dialogues, by developing such best practices.
At Soul Machines, we are putting a lot of thought into creating agile corporate processes based on core AI ethics principles and integrating them into both our high-level policies and our lower-level implementing processes. Our aim is to establish specific, readily implementable criteria for, e.g., robustness, transparency, and accountability across the entire lifecycle of our products. In doing so, we hope to provide clear guidance to our employees, enable due scrutiny by both regulators and society (to whom we are first and foremost morally accountable), and contribute to the creation of emerging standards.
For instance, we translate the accountability principle into concrete company practices and product features by providing a web tool where users can report objectionable content produced by our Digital Humans; a minimal sketch of such a reporting flow follows below. This feedback enables us to detect and correct anomalies in our products.
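To make this concrete, here is a minimal sketch of what the backend of such a reporting tool could look like. It is illustrative only: the route, field names, and in-memory review queue are assumptions made for the example, not Soul Machines' actual implementation.

```typescript
// Minimal sketch of a content-report endpoint (illustrative assumptions only).
import express from "express";

interface ContentReport {
  sessionId: string;   // which Digital Human session produced the content
  utterance: string;   // the objectionable output being reported
  category: string;    // e.g. "offensive", "inaccurate", "privacy"
  comment?: string;    // optional free-text context from the user
}

const app = express();
app.use(express.json());

// A real system would persist reports to a database and notify reviewers;
// here we simply keep an in-memory queue awaiting human review.
const reviewQueue: ContentReport[] = [];

app.post("/api/report", (req, res) => {
  const { sessionId, utterance, category, comment } = req.body ?? {};
  if (!sessionId || !utterance || !category) {
    return res
      .status(400)
      .json({ error: "sessionId, utterance and category are required" });
  }
  reviewQueue.push({ sessionId, utterance, category, comment });
  // Acknowledge receipt so the user knows the report was logged.
  return res.status(202).json({ status: "received", queued: reviewQueue.length });
});

app.listen(3000, () => console.log("Report endpoint listening on :3000"));
```

The key design point is the accountability loop: every report is acknowledged and queued for human review, so objectionable output traces back to the session that produced it and can be acted on.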
There is a lot of work ahead of us and all stakeholders have a vital role to play—but together we can do it.