Insurers at Lloyd’s of London have launched a product covering companies for losses caused by malfunctioning artificial intelligence tools, as the sector seeks to profit from concerns about the risk of costly errors and hallucinations by chatbots.
The policy, developed by Armilla, a start-up backed by Y Combinator, covers the cost of court claims against a company if it is sued by a customer or other third party who has suffered harm because an AI tool performed poorly. The cover is underwritten by several Lloyd’s insurers and pays out for expenses such as damages and legal fees.
While companies have rushed to adopt AI to improve efficiency, some tools, including customer service bots, have made embarrassing and costly mistakes. Such errors can occur, for example, because AI language models are prone to “hallucinating”, or making things up.
Virgin Money apologised in January after its AI-powered chatbot rebuked a customer for using the word “virgin”, while last year parcel delivery firm DPD disabled part of its customer service bot after it swore at a customer and called its owner the “worst delivery firm in the world”.
Last year, a court ordered Air Canada to honour a discount that its customer service chatbot had made up. Armilla said that, had the airline’s chatbot been found to have performed worse than expected, the losses from selling tickets at the lower price could have been covered under its policy.
Karthik Ramakrishnan, Armilla’s chief executive, said the new product could encourage more companies to adopt AI, since many are currently deterred by fears that chatbots and other tools will break down.
Some insurers already include AI-related losses within general technology errors and omissions policies, but these typically carry low payout limits. A general policy covering losses of up to $5 million might include a sublimit of only $25,000 for AI-related liabilities, according to Preet Gill, a broker at Lockton.
AI language models are dynamic, meaning they “learn” over time. But losses caused by errors arising from this process of adaptation are not usually covered by typical technology errors and omissions policies, said Logan Payne, a broker at Lockton.
A mistake by an AI tool is not, by itself, enough to trigger a payout under Armilla’s policy. Instead, cover kicks in if the insurer judges that the AI has performed below its initial expectations. For example, if a chatbot’s accuracy degraded to the point that it gave clients or employees correct information only 85 per cent of the time, Armilla’s insurance could pay out.
“We assess the AI model, get comfortable with its probability of degradation, and then compensate if the model degrades,” Ramakrishnan said.
Tom Graham, head of partnerships at Chaucer, the Lloyd’s insurer underwriting the policies sold by Armilla, said his group would not sign policies covering AI systems it deemed unduly prone to breaking down. “Like any insurance company, we’re selective,” he said.