Unlock the Editor’s Digest for free
Roula Khalaf, Editor of the FT, selects her favourite stories in this weekly newsletter.
One of the biggest policy challenges of our time is how to properly regulate artificial intelligence. With powerful general-purpose technologies being adopted rapidly across society and the economy, the challenge is to maximise their benefits while minimising their harms. AI has already been shown to boost productivity in areas such as software, marketing and management. However, its widespread use raises real concerns about more harmful effects, ranging from algorithmic discrimination to deepfakes and disinformation. The Grok chatbot’s praise of Adolf Hitler last week highlighted the myriad issues that can arise.
So far, regulators and lawmakers have struggled to grasp the full dimensions of the challenge. According to UNESCO, more than 30 governments have enacted some form of AI regulation since 2016. However, few of these initiatives are commensurate with the scale or complexity of this fast-evolving issue. A better approach is possible.
In the US, the Trump administration has prioritised innovation over regulation, viewing AI as vital to maintaining the country’s technological edge over China. But in the absence of federal AI legislation from Washington, many states are rushing to fill the gap. At least 45 states have introduced some 550 AI-focused bills, according to the National Conference of State Legislatures, covering privacy, cybersecurity, employment, education and public safety.
Alarmed by this fragmentary regulation, some of the big AI companies lobbied the US Congress to impose a 10-year moratorium on all state AI laws. The Senate rightly rejected this rash idea, which had been included in the “big beautiful bill”, by 99 votes to one. The logical next step, however, is for Congress to pass federal laws that pre-empt the need for such state-level activity. It makes no sense for individual states to adopt differing rules on, for example, self-driving cars. National, or ideally international, standards should apply.
If Washington risks underregulating AI, the EU risks overregulating the technology through its AI Act, which is gradually taking effect. European start-ups and industry associations have warned that the law’s broad provisions will place an undue burden on small and medium-sized businesses and entrench the power of larger incumbents. Still, the EU pressed ahead last week with publishing its code of practice for general-purpose AI, despite intense lobbying against it.
Other technologists emphasise the practical challenges of trying to regulate the underlying technology itself, rather than simply focusing on its applications. The intentions of EU lawmakers may be commendable, but the AI Act risks hobbling European companies trying to exploit the technology’s potential. Start-ups fear they may end up spending more on lawyers than on software engineers to comply with the law.
Rather than trying to regulate AI as a category of its own, it would make more sense to focus on the technology’s applications and amend existing laws accordingly. Competition policy should be used to check the concentration of corporate power among the biggest companies. Existing consumer, financial and employment regulations should be updated to protect rights that have long been enshrined in law.
Instead of adopting sweeping measures that are difficult to comply with and to enforce, it would be smarter to focus on reducing specific real-world harms and ensuring genuine accountability among those who deploy the technology. Polling in many Western countries shows that users are rightly wary of the indiscriminate adoption of AI. Narrower, clearer and more enforceable rules would help deepen consumer trust and accelerate the technology’s beneficial deployment.