What could Donald Trump’s election victory mean for AI?
The second Donald Trump presidency will have a significant impact on a wide range of global issues, with the development of artificial intelligence (AI) likely to be among them.
In the build-up to the election, Trump’s platform stated: “We will repeal Joe Biden's dangerous executive order that hinders AI innovation and imposes radical left-wing ideas on the development of this technology."
Some have warned Trump’s potentially deregulatory approach to artificial intelligence could lead to a global AI ‘arms race’.
What could that mean for AI users in the UK and around the world?
What is Trump likely to mean for AI development?
Many predict that the second Trump term will see fewer regulations around AI. Joe Biden’s executive order, for instance, mandated that companies share security information with the government.
Prominent Trump supporter Elon Musk has described AI regulation as ‘annoying’, although he has admitted that having a ‘referee’ is a good thing.
Caroline Carruthers, former chief data officer for Network Rail, said that Trump’s arrival will significantly accelerate the global AI race.
“With Trump back in office, there is likely to be a shift in how AI and big tech companies are governed," she said, "as he appears to be inclined to pare down regulation, favouring a broad framework over clear, absolute rules.
“The US taking a more hands-off approach will significantly impact the global AI race and could accelerate AI funding and adoption, as governments around the world are forced to consider how to stay competitive.
But regulators need to think carefully about whether to follow Trump’s lead, Carruthers warned.
"AI can bring huge benefits, such as increasing efficiency and accuracy, for example detecting the early signs of diseases under a doctor's management, but maintaining a balance between innovation and regulation is an absolute priority to ensure the public's safety globally," she said.
Carruthers added that AI systems should always have a 'human in the loop'. This means a person checking what the AI system is doing and, where necessary, intervening.
“I’m a big advocate for always having a human in the loop somewhere. So even if it’s just checking the consequences and the outcome of the actions that we’re taking and having the ability to go back if we need to, I think that the moment we’ve given that up, we’ve lost."
What could this mean for the UK?
Trump’s victory is likely to lead to a rapid push for AI supremacy, which could have important implications for the UK, says former Dragons’ Den investor Piers Linney.
He likens the race for dominance to the Manhattan Project, in which the first nuclear bomb was developed during the Second World War.
Linney, who works for consultancy firm Champions UK, said: “Trump’s election signals a pivot in US AI policy, with national security as a core focus. A rapid push for US dominance in AI is likely, comparable to a modern ‘Manhattan Project', prioritising defence applications and competition with China.”
Trump’s likely ‘light touch’ approach could also empower US tech giants to write their own rules, Linney said. "For the UK, American export restrictions could limit access to advanced AI components, pressuring the UK to bolster domestic AI development or align more closely with EU standards," he observed.
"Overall, Trump’s AI stance is very likely to promote US tech supremacy, especially given its homegrown hyperscale infrastructure and AI processor manufacturers.
“This could result in a fragmented global AI landscape where smaller nations must adapt or risk falling behind in a game where the winners may take it all."
What are the risks?
The risks of taking an accelerated approach to AI could be considerable, experts have warned.
Carruthers warned that there are huge potential social risks around the technology – and that an accelerated, unregulated ‘arms race’ could have disastrous effects.
She said that the risks include “rocketing consumer anxiety due to large-scale AI-powered job automation, and an increase in social manipulation, where AI algorithms fill users' feeds with potentially misleading or harmful content related to previous media they’ve viewed.
“Unregulated AI could also have a negative effect on privacy and security, as powerful groups could increasingly track an individual's movements, monitor their activity, relationships, and politics.”
Trump’s push against ‘woke’ bias in AI could also have negative effects, warns Simon Bain, AI expert and CEO of OmniIndex.
Bain said: “A lack of oversight could result in the development of AI systems that perpetuate biases and discriminate against marginalised groups, either accidentally or deliberately.
“Without the transparency that regulations can bring to AI and tech more widely, consumers will not necessarily know that the service they might be paying for is being done by an AI built on potentially discriminatory and inaccurate information," he added.
“In the healthcare sector, a deregulated approach could lead to the development of AI-powered diagnostic tools that are not rigorously tested or validated. These tools could misdiagnose patients, leading to potentially harmful outcomes, such as serious conditions being missed and going untreated."
Bain also warns that an unregulated approach could be disastrous for privacy, with data ‘harvested’ to train AI systems in a way that might expose private information.