The EU is trying to regulate AI. The European Commission published a proposal for an Artificial Intelligence Act (AIA) in April 2021.
The Act will, if adopted, establish common harmonised rules for European AI systems. The rules will likely ban a small number of unacceptable artificial intelligence practices, establish market entry requirements for what are called “high-risk” AI systems, set harmonised transparency rules for e.g. customer-facing chatbots, and establish a pan-European market monitoring, market surveillance and governance regime.
The battle of two AIs
The artificial intelligence system is the linchpin concept in the Act. Whether the Act works depends on the EU getting the definition of AI right. And now, there is a battle over what AI is.
The original April 2021 Commission proposal framed artificial intelligence systems as “software that is developed” with “artificial intelligence techniques” such as “machine learning”, “logic- and knowledge-based approaches” and “statistical approaches”.
This AI is framed as static code. It does not learn, infer, reason, or model. The AI fairy dust stays inside laboratories.
A November 2021 Presidency compromise proposal put forward another AI. The mundane AI-as-sophisticated-software frame was jettisoned. Now AI systems are hyped up: they “infer” “how to achieve an objective” “using learning, reasoning or modelling implemented with” the AI techniques. In short, the systems are dynamic and have acquired human-like cognitive capabilities. The AI fairy dust is sprinkled all over the systems.
What is at stake in the battle of AIs?
While researchers and industry actors have engaged in a quarrel over whether the proposals really capture the “essence” of AI, the real stakes of the battle lie elsewhere. The fight is over risks and compliance costs.
How AI is defined will determine which AI systems will be subject to the AIA rules. Adopting the original Commission proposal would entail that a relatively large number of high-risk AI systems would be regulated. The Presidency compromise would limit the coverage to a select few of the most sophisticated and risky systems, if any.
Industry actors have argued that the AIA, if adopted, will impose excessive compliance costs on the industry, stifle innovation and create a competitive disadvantage for European companies. Limiting the scope of the Act to “real AI” would, thus, make sense.
NGOs and academic actors have, in turn, argued that even relatively unsophisticated AI systems may cause a lot of havoc and should be regulated.
The risk-averse seem to be losing. It seems likely that the EU will decide that “real” AI systems are dynamic, unstable entities with human-like cognitive capabilities, and leave the normal, less sophisticated AI systems unregulated.
This is a disconcerting outlook, as e.g. critical algorithm studies have demonstrated that even stable, non-dynamic, non-anthropomorphic automated decision-making systems are difficult enough to control and may cause significant social harms.
The writer works as an assistant professor of private law at the University of Turku. Viljanen is also a member of the executive committee of the Future technologies and digital society strategic profile at the University of Turku.