Everyone needs to calm down about AI
British politicians have fallen for the apocalyptic fantasies of effective altruists.
Recent developments in artificial intelligence (AI), in particular the large language model (LLM) ChatGPT, have undoubtedly been causing a few problems.
Community websites that rely on the submission of accurate user-generated content, such as Stack Overflow, have had to close their submission queues temporarily. Students engaged in unsupervised learning can no longer be trusted to produce their own work. And bogus AI-produced ‘books’ have flooded Amazon. Many of these issues were anticipated.
But there have been further, largely unexpected problems, too. ChatGPT has been prone to generating false information, all the while maintaining its smooth plausibility – something researchers call ‘hallucinations’.
And it’s helped fraudsters, too. Voice cloning allows swindlers to generate convincing speech from the briefest sample of a speaker’s voice. So parents now need to agree codewords with their children before they can safely converse with them remotely. And we’re now beginning to see AI models ingest error-strewn, AI-generated material as their own training data, and spit it back out as established fact – a phenomenon researchers call ‘autophagy’. Our information environment is degrading more rapidly than anyone expected.
The technology industry, suddenly, is very keen to regulate itself, and a besotted UK government has been encouraging it to write those regulations. In April, prime minister Rishi Sunak announced a £100 million Foundation Model Taskforce, since renamed the Frontier AI Taskforce, and an AI Safety Summit, which will be held at Bletchley Park at the start of November.
But far from tackling any of the new issues that LLMs have created, the taskforce is preoccupied with something else altogether. It’s fascinated by speculative apocalyptic scenarios. For example, the Department for Science, Innovation and Technology (DSIT) told the Telegraph recently that AI models will have devised new bioweapons within a year.
Such scenarios aren’t just implausible – they are impossible. Such ‘God-like’ AI, or an AGI (artificial general intelligence), capable of sophisticated reasoning and deduction, is far beyond the capabilities of today’s very dumb word-completion engines. Many experts dispute the claim that LLMs are even the route to better AI. Ben Recht, a professor at the University of California, Berkeley, calls LLMs a parlour trick. Recently, both ChatGPT’s popularity and its usefulness have been declining. Its latest iteration, GPT-4, performs worse than GPT-3 in some situations, when it ought to be better.
If people are ‘concerned’ about AI, it’s because the British government, usually so keen to tackle misinformation, has become a conduit for it. The focus on fictional long-term harms, rather than real and immediate problems, is a consequence of the peculiar, self-selecting nature of the government’s expert advisers. For over a decade, many of Silicon Valley’s rationalist bros have been attracted to a philanthropic social movement called effective altruism (EA). EA is characterised by an imperative to make as much money as possible in order to give it away to charitable causes. And many of its adherents are also ‘long-termists’ – that is, they believe that the distant future should be given the same weight as the present in moral and political judgement. As a result, the in-tribe signalling of EA communities tends to reward those proposing the most imaginative and outlandish catastrophic scenarios.
High-status rationalist superstars like Eliezer Yudkowsky and Nick Bostrom think the very deepest and most terrible thoughts. But if you’re lower down the EA social hierarchy, you can still play, too, perhaps by forecasting imminent shifts in GDP or employment thanks to the baleful influence of AI. Their ‘long-term’ speculative fictions need a plot device – and AI serves the plot. This apocalypticism must be infectious, for even MPs aren’t immune. As one improbably speculated earlier this year, AI may arbitrarily decide to remove cows from the planet.
Money rewards such wild speculation. For a while, disgraced FTX founder Sam Bankman-Fried, whose fraud trial begins this week, was a generous sponsor of EA causes. But such is the attraction of EA to the super-wealthy that the money has continued to flow despite Bankman-Fried’s disgrace. For example, a posting by Open Philanthropy, a major EA donor, confirms a significant expansion of its ‘global catastrophic risks’ teams.
We should not be surprised to discover that the rationalist bros of EA have such influence on UK policy, given that they helped to devise it and then captured it. As Laurie Clarke explains in Politico, three of the Frontier AI Taskforce’s four partner organisations have EA links: the Centre for AI Safety, ARC Evals and the Collective Intelligence Project. The latter began life with a Bankman-Fried donation, Clarke notes. The outlandish notion of a future AI wiping out humanity – the details of how are rarely explained – is promoted by the major AI labs, which are all EA supporters.
‘People will be concerned by the reports that AI poses an existential risk like pandemics or nuclear wars’, claimed Sunak in June. ‘I want them to be reassured that the government is looking very carefully at this’, he assured us. But people will only be concerned because political figures like Sunak are lending their authority to such outlandish claims.
As Kathleen Stock has written, EA-driven policy tends to deprecate individual agency. Émile P Torres, an EA apostate and author of Human Extinction: A History of the Science and Ethics of Annihilation, tells Clarke that the EA cult leaves only extremes for policymakers to consider. ‘If it’s not utopia, it’s annihilation.’
The takeover of government policy by effective altruists is astonishing, not least for the ease with which it has been achieved. There is a striking parallel between AI catastrophism and apocalyptic climate alarmism, which allows policymakers to pose as saviours of the planet in the long term, while ignoring our immediate concerns. Our rivers and beaches get filthier from a lack of investment in water infrastructure, while MPs, their eyes fixed heroically on the long term, congratulate themselves on being ‘world leaders’ in Net Zero. Similarly, the putative threat of AI allows Sunak to pose as the saviour of humanity, while the internet drowns in spam.
It seems our policymakers are in danger of losing themselves in an apocalyptic fantasy world. The sooner they free themselves from the grip of the EA doomsayers, the sooner they might get round to tackling some of the actual problems with AI.
Andrew Orlowski is a weekly columnist at the Telegraph. Visit his website here. Follow him on Twitter: @AndrewOrlowski.