For a still-young technology, generative artificial intelligence already has an impressive résumé. It can compose music, summarise a wad of legal documents in seconds, and generate TV adverts from minimal written input. To become smarter, iron out errors and expand its uses, AI models need to continually consume human-generated content to train on. Yet the legal framework needed to foster this symbiosis between humans and machines is lagging badly behind. That is harming both the long-term development of the technology and the individuals and businesses that supply it with their data and insights.
Generative AI models have so far owed their abilities to the mass of text, audio, images and video posted online. Much of it was scraped without the consent of the original creators. Disagreements over how copyright law applies to gen AI training have sparked protests and litigation around the world. Model developers tend to argue that a "fair use" exemption applies, of the kind that allows researchers, for instance, to use short excerpts of copyrighted material under certain conditions. Artists, musicians and media organisations strongly disagree. They argue that AI companies are violating their intellectual property rights, since the models do far more than merely excerpt their data.
While litigation continues across America, and Europe works out how the EU's AI law applies, the UK has at least taken the initiative to provide clarity. Last week it closed a consultation on copyright and its plans for AI. The government is, however, caught between protecting its world-class creative industries and its hope that AI companies will expand in Britain and boost economic growth.
Last week the prime minister, Sir Keir Starmer, suggested the plans were not set in stone, but the consultation shows the government is minded to allow AI companies to use copyrighted work to train models without consent, unless creators opt out. That approach is wrong. It upends default rights that have stood for centuries, puts the onus on creators to stop others profiting from their established IP, and tilts the playing field against content creators. Although opt-out mechanisms are used in the EU, the systems needed to manage and enforce them across myriad platforms and use cases remain patchy.
Lawmakers around the world should recognise that building a sustainable and fast-growing gen AI ecosystem depends on the strength of, and trust from, those who generate the source data. Allowing tech companies to absorb content against creators' will, and to build highly scalable competitors to them, undermines the very incentives for individuals and businesses to create and innovate in the first place.
There is a better way: supporting a licensing market. Compensation agreements between creators and AI companies would reward creative effort by keeping content makers in control of their copyright (opt-in by design) and paying them for their work. AI models, meanwhile, would retain access to high-quality data and free themselves from legal disputes. Many creative businesses, including this newspaper, have already struck individual content-licensing deals with AI companies. Moving from ad hoc deals to a broader market for training licences is the next step. Governments can help by supporting industry-led transparency standards on how training data is used, alongside the development of software to process and track licences.
As it reviews the responses to its consultation, the UK government has an opportunity to set a global standard for how AI and human creativity can coexist. If it wants to create a competitive environment that attracts AI companies and actually endures, developing a free and fair market for data is the winning solution.