AI Doomers Are Finally Getting Some Long Overdue Blowback
Shortly after ChatGPT’s release, a cadre of critics rose to fame claiming AI would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and 60 Minutes interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about AI autonomously hacking the financial system — or worse. And last week, President Biden issued an executive order imposing some restraints on AI development.
That was enough for several prominent AI researchers who finally started pushing back hard after watching the so-called AI Doomers influence the narrative and, therefore, the field’s future. Andrew Ng, the soft-spoken co-founder of Google Brain, said last week that worries of AI destruction had led to a “massively, colossally dumb idea” of requiring licenses for AI work. Yann LeCun, a machine-learning pioneer, eviscerated research-pause letter writer Max Tegmark, accusing him of risking “catastrophe” by potentially impeding AI progress and exploiting “preposterous” concerns. A new paper earlier this month indicated large language models can’t do much beyond their training, making the doom talk seem overblown. “If ‘emergence’ merely unlocks capabilities represented in pre-training data,” said Princeton professor Arvind Narayanan, “the gravy train will run out soon.”
Worrying about AI safety isn’t wrongheaded, but these Doomers’ path to prominence has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying Doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind, and Anthropic, for instance, signed a statement putting AI extinction risk on the same plane as nuclear war and pandemics. Perhaps they’re not consciously attempting to block competition, but they can’t be that upset it might be a byproduct.
All this alarmism makes politicians feel compelled to do something, leading to proposals for strict government oversight that could restrict AI development outside a few firms. Intense government involvement in AI research would help big companies, which have compliance departments built for exactly these purposes. But it could be devastating for smaller AI startups and open-source developers, who don’t have the same luxury.
“There’s a possibility that AI doomers could be unintentionally aiding big tech firms,” Garry Tan, CEO of startup accelerator Y Combinator, told me. “By pushing for heavy regulation based on fear, they give ammunition to those attempting to create a regulatory environment that only the biggest players can afford to navigate, thus cementing their position in the market.”
Ng took it a step further. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” he told the Australian Financial Review.
The AI Doomers’ worries, meanwhile, feel pretty thin. “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably — and then kill us,” Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, told a rapt audience at TED this year. He confessed he didn’t know how or why an AI would do it. “It could kill us because it doesn’t want us making other superintelligences to compete with it,” he offered.
After Sam Bankman-Fried ran off with billions while professing to save the world through “effective altruism,” it’s high time to treat those claiming to improve society while furthering their own business aims with relentless skepticism. As the Doomer narrative presses on, it risks following that same familiar pattern: noble rhetoric in the service of commercial ends.
Big Tech companies already have a significant lead in the AI race via cloud computing services that they lease out to preferred startups in exchange for equity. Further advantaging them might hamstring the promising open-source AI movement — a crucial area of competition — to the point of obsolescence. That’s probably why you’re hearing so much about AI destroying the world. And why it should be considered with a healthy degree of caution.