One day last month, a man with a neat white beard stood on the pavement outside Uber’s headquarters in London. He clutched a clear plastic folder containing piles of letters, printed emails and carefully annotated hand-drawn maps.
Ghulam Qadir was “deactivated” by Uber in 2018 after what he says was a mix-up over a passenger who had paid in cash when the app cancelled the trip. He tried to explain to Uber what had happened (hence the maps), but said he could not get anyone to listen. He even got his MP involved. Appealing to your MP appears to be the Uber driver’s version of repeatedly pressing zero when you want to talk to a human. Qadir thought it was absurd. “Parliament should be for the problems of the world, not for me,” he said.
He was at Uber’s office to hand over a 10,000-strong petition from drivers and their supporters, organised by a platform workers’ rights group, protesting against what they call “automated firings” and “unfair deactivations”.
Uber said its processes had improved significantly over the past 12 to 18 months, and that all drivers could now ask for their case to be reviewed by a panel of experts. “We are constantly working to ensure that our approach is transparent and fair,” a spokesperson said. But the petition is one sign of an underappreciated tension in the UK government’s approach to the future of work.
On the one hand, the prime minister has promised to pump artificial intelligence “into the veins” of the UK economy, in the belief that it will boost productivity and therefore growth. On the other hand, the government has also promised to make working life fairer and less insecure for low-paid workers.
Of course, these goals are not necessarily mutually exclusive.
It is in workers’ interests to become more productive, assuming they get a share of the productivity gains. And many staff have already chosen to use generative AI tools themselves (sometimes without their employers even knowing about it).
One increasingly common use of AI and other algorithmic tools, however, is to make high-stakes decisions about workers, from recruitment to performance management. An OECD survey of more than 6,000 middle managers in France, Germany, Italy, Japan, Spain and the US last year found that algorithmic management tools, first made famous by gig-economy companies such as Uber, are now in widespread use.
Adoption rates ranged from 90% in the US to 40% in Japan. But even the managers themselves seemed a little uneasy about the tools. Six in 10 said the technology had improved the quality of their decisions, but almost two-thirds reported at least one concern. The most frequently cited were unclear accountability when wrong decisions are made, followed by an inability to follow the logic of algorithmic decisions, and inadequate protection of workers’ physical and mental health.
In the UK, these tensions are coming to a head over the Data (Use and Access) Bill, which is making its way through parliament. Unions worry that one provision in the bill will water down the law on the use of automated decision-making. With a few exceptions, the law would move from a general prohibition to a general presumption that such decisions are allowed, subject to safeguards that let people challenge decisions made about them, obtain an explanation of how they were made, and ask for a human to intervene.
For Adam Cantwell-Corn, a policy officer at the TUC, the union umbrella group, the problem is that these safeguards put the burden on individuals and often only kick in after the fact. “[Say] a worker gets a decision about firing, performance management, recruitment or whatever. First of all, you need to know that it has happened. Then you have to go through various legal and bureaucratic hurdles to request the information and challenge it,” he said. “Even in a workplace with active and robust union activity, that becomes very difficult. In a more precarious workplace, it becomes virtually unfeasible.”
It may be that the existing protections against automated decisions are not well enforced in practice. But the principle behind the change in the law still matters. The Silicon Valley mantra of “move fast and break things” doesn’t work so well when the “things” that might actually break are people, standing on the pavement with a pile of printed emails in their hands. Especially if you are a government that has promised to be on their side.
sarah.oconnor@ft.com