The European Parliament is advancing toward the West’s first comprehensive AI regulations with the “European AI Act.” This new law proposes a risk-based approach to AI regulation, categorizing AI applications into four risk tiers: unacceptable, high, limited, and minimal or no risk. Unacceptable-risk applications, which include manipulative or deceptive AI systems and those that infer emotions in sensitive settings such as the workplace and education, are banned outright.
The Act also imposes requirements on developers of “foundation models” such as large language models and generative AI. Developers must apply safety checks, data governance measures, and risk mitigation, while also ensuring compliance with copyright laws regarding the training data used.
At a high level, none of this sounds bad, but the devil is in the details. This proposed legislation goes too far, and it is all but guaranteed to limit the EU’s future productivity. As with GDPR, the problem is that the European AI Act could set a “global standard” for AI regulation, and the unintended consequences of this ill-conceived legislation would then have global repercussions.
We live (and prosper) in a society of laws, and there is no doubt that AI needs some kind of regulation and oversight. However, whatever rules are enacted need to be flexible and adaptable to the pace of technological change we are experiencing. Said differently, the laws must be as innovative as the technology they are trying to regulate. To mix my metaphors: by that measure, the European AI Act is a big swing and a miss.
If you want to gain a deeper understanding of the areas in AI that truly need regulation, consider taking our free online course, Generative AI for Execs.