Visionaries in Silicon Valley dream of making millions from cool, futuristic products that excite consumers, such as the metaverse, self-driving cars, and health-monitoring apps. The blunter reality is that most venture capitalists make their best returns by investing in duller things that are sold to other businesses.
Over the past two decades, Software-as-a-Service has emerged as one of the most profitable areas for VC investment, producing 337 unicorns, or technology startups valued at more than $1 billion. But typical SaaS businesses, such as customer relationship management systems, payment processing platforms, and collaborative design tools, rarely generate consumer excitement. Investors love them all the same: they require little capital, can scale quickly, and can generate significant returns from reliable, and often price-insensitive, corporate licenses.
The same may apply to generative artificial intelligence. For now, consumers are still dazzled by the underlying models’ seemingly magical ability to generate large volumes of plausible text, video, and music, and to clone voices and images. Big AI companies are also touting the value of personal digital agents that will supposedly make all our lives easier.
“Agentic” will be the word of next year, OpenAI’s chief financial officer Sarah Friar recently told the FT. “It could be a researcher, or it could be a helpful assistant for ordinary people and working moms like me. In 2025, we will have the first highly successful agents in place to help people with their daily lives,” she said.
While major AI companies such as OpenAI, Google, Amazon, and Meta are developing general-purpose agents that anyone can use, many startups are building more specialized AI agents for business. For now, generative AI systems are mostly viewed as co-pilots that augment human employees, helping them write better code, for example. But AI agents could soon become autonomous autopilots, replacing entire business teams and functions.
In a recent discussion, Y Combinator partners said the Silicon Valley incubator was being flooded with applications from startups looking to apply AI agents to areas such as recruiting, onboarding, digital marketing, customer support, quality assurance, debt collection, medical billing, and searching and bidding for government contracts. Their advice was to “find as many boring, repetitive administrative tasks as possible and automate them.” Their conclusion was that vertical AI agents have a good chance of becoming the new SaaS, and that more than 300 AI agent unicorns could be created.
However, two factors may slow the speed of adoption. First, line managers are unlikely to rush to deploy AI agents if those agents really can replace entire teams or functions. Managerial suicide is not a strategy taught in most business schools. It may take ruthless, tech-savvy chief executives to impose the technology on their subordinates in pursuit of greater efficiency. Or, more likely, new corporate structures will evolve as startups seek to take full advantage of AI agents. Some founders are already talking about creating autonomous companies with zero employees. Their Christmas parties, though, might be rather sparsely attended.
A second factor that may slow adoption is concern about what happens when agents increasingly interact with other agents and humans are left out of the loop. How would such a multi-agent ecosystem work in practice? How can trust be maintained and accountability assigned?
“You have to be very careful,” says Silvio Savarese, a Stanford University professor and chief scientist at Salesforce, a giant SaaS company that is experimenting with AI agents. “Guardrails are needed for these systems to operate properly.”
Attempting to model and control intelligent multi-agent systems is one of the most interesting research areas today. One approach is to train AI agents to flag areas of uncertainty and ask for help when they face challenges they do not recognize. “AI can’t be a competent liar. It has to come to humans and say, ‘Help me,’” Savarese says.
Otherwise, the concern is that a badly trained agent could run amok, like the enchanted broom in Johann Wolfgang von Goethe’s poem “The Sorcerer’s Apprentice”, which is ordered to fetch buckets of water. “The spirits I summoned ignore my commands. They are beyond my control,” the apprentice laments as he surveys the chaos caused by his inexpert magic. It is funny how old fictional dilemmas now take on surprising new computational forms.
john.thornhill@ft.com