Prepare For An AI Policy Upending Under Trump, Say Experts
Analysis President Biden has taken some steps to create a concrete AI policy in the United States. However, while there's plenty to argue about in terms of how effective the administration's moves have been, experts The Register spoke to agree that there are likely to be more big changes once Donald Trump begins his second term.
For starters, expect Biden's 2023 executive order on AI to be scrapped – possibly as soon as Trump takes office, Joel Meyer, president of public sector at Domino Labs and former deputy assistant secretary for strategic initiatives at the Department of Homeland Security, told The Register.
"Trump and members of his team have said publicly that they intend to repeal the AI executive order. I don't see any reason why they would have changed their minds. I think for the most part, in almost every area, the order of the day will be changed."
One element of Biden's AI executive order that might hang around, Meyer predicted, is the AI Safety Institute housed within NIST, which was established as part of that order. While Meyer warned that any of Biden's executive actions could be fair game for the Trump administration to scrap in the name of political point-scoring, established offices like that one are unlikely to vanish entirely.
"Maybe it'll have a lighter touch, but it will continue in some form," Meyer opined. "Frankly, a lot of industry thinks it's a good thing."
As for the national security memorandum (NSM), another recent hallmark of Biden AI policy, Meyer believes it's likely to hang around too, as it includes many measures that enjoy bipartisan consensus.
"I think it was a very strong document and there is a much greater prospect that the next administration will continue more of the parts of the [NSM] than the executive order," Meyer said. "That's an indication of more of a bipartisan consensus on the national security aspects of AI [such as] energy, chip production, and export controls."
Industry's role likely to grow
Republicans tend to favor a more hands-off approach to regulation that lets the market sort things out, and that's exactly what Inna Tokarev Sela, former SAP director of machine learning and founder of Illumex AI, believes will happen.
"What I see in the market is enterprises, like big banks and financial services companies, are all introducing their own [AI] standards," Tokarev Sela told us. "I think it might be that [the Trump] administration is going to follow what happens in the market from regulated industries and not the other way around."
Tokarev Sela said that the EU has taken the opposite approach, with its bloc-wide AI Act passed earlier this year – something she doesn't believe would ever pass muster in the US. In some Asian countries, on the other hand, the industry has moved way ahead of regulation, Tokarev Sela said. She predicted the US would likely settle into a middle-of-the-road approach that waits to see what the industry does before taking regulatory steps to cement certain standards.
Unlike Meyer, however, Tokarev Sela doesn't believe the Biden administration has moved with enough urgency to regulate AI. She also believes that Biden's executive order and the NSM were largely ineffective.
The end result of measures like those – which tend to target the largest AI models and the industry's biggest players – is that smaller businesses have been left to fend for themselves in a market that gives them little room to breathe, sandwiched between the likes of OpenAI, Microsoft, and Google.
Tokarev Sela believes the Trump administration is likely to be good for smaller companies and the AI industry as a whole, as the incoming president appears to favor a framework that hands more responsibility back to the industry.
"I don't see regulation and innovation compromising with each other," Tokarev Sela told us. "I think this will actually be a responsible way to deal with this technology."
Meyer expressed some trepidation over whether it will be a good thing for the federal government to step back from leading on AI policy – but he believes the industry knows what sort of risks it needs to avoid.
"The frontier LLM companies have a really good understanding of what those risks are," Meyer explained. "But I think there's a real variance in how they're approaching investing in guarding against those risks."
Meyer cited Anthropic as focusing more on addressing risk, while other developers are pushing harder on feature and technology development.
"Where you don't have the government pushing industry there becomes more variance, and you see more differentiation," Meyer added. "And I think there is certainly risk there."
And, of course, a federal government abdication of AI leadership doesn't mean the regulatory back-and-forth ends.
"I think we're going to see a lot of activity at the state level," Meyer predicted. California, which Meyer noted is home to all the major frontier AI leaders, has tried making its own regulations – not all of which worked out – and he expects more of the same.
"Governor Newsom has said he's going to take a bunch of executive actions to protect against the Trump administration in California," Meyer told us. "I think you might see that start to extend into AI policy as AI gets more and more powerful and [if] the federal government continues to take a lighter touch."
Trump will lack excuses for inaction
Johannes Himmelreich, assistant professor of ethics and public policy at Syracuse University, told us that, while there is technically a federal-level AI policy in the form of the 2020 National Artificial Intelligence Initiative Act [PDF], the lack of any actual federal AI regulation is an "embarrassing oversight."
The 2020 AI Initiative Act – passed during the first Trump administration, mind you – didn't create any real policy surrounding AI. Instead, it established the structures necessary to stand up bodies like the White House National AI Initiative Office shortly before the end of Trump's first term.
Himmelreich did place some blame on the Biden administration for the likely rollback of the only meaningful AI rules in the US – the 2023 executive order – because of the politicized nature of the document. He noted the inclusion of "equity" requirements in the AI executive order, and direct references to earlier executive action on racial equity, as putting a giant target on the policy's back.
"If they had strictly stuck to doing technocratic policy and formulated this as a matter of bureaucratic procedure, avoiding terms to which the Republican administration is allergic, maybe that executive order could have stood a chance," Himmelreich suggested.
- AI firms and civil society groups plead for passage of federal AI law ASAP
- Lawmakers advance bill to tighten White House grip on AI model exports
- California trims AI safety bill to stop tech heads from freaking out
- Global powers sign AI pact promising to preserve human rights, democracy
Of course, that doesn't mean executive action isn't called for. Himmelreich noted that the lack of legislative policy in the Biden years wasn't a deliberate choice – it was a result of failure on the part of lawmakers to reach a compromise.
"Executive action, whether from the president or from the heads of agencies, picks up the slack that Congress leaves," Himmelreich explained. With Republican control of the House, Senate, and White House, however, that excuse is gone.
"The industry will surely have draft legislation ready to suggest to their favorite senators," Himmelreich predicted. "We might actually have the capability to act with the unified government, so any inaction [under Trump] would be a deliberate decision."
What will Trumpian AI regulation look like?
One thing is for certain, Meyer said. It doesn't matter who is in office or who controls Congress – the AI hype train is rapidly becoming an unstoppable force.
"I do suspect that under the Trump administration there may be more of a go-fast approach," Meyer said, but broader adoption and use of AI is "going to happen no matter what."
With adoption growing rapidly – especially inside the federal government – policy will be needed. And it's likely coming, though whether spearheaded by industry or adopted after a wait-and-see period is still uncertain.
Himmelreich told us he suspects any rules adopted during the next four years will focus on clarifying liability – likely in a very industry-friendly way – and he doubts strong consumer protections will be adopted alongside them.
At the end of the day, however, he believes some law is better than no law – especially when it comes to something as world-changing as AI.
"Having any sort of law can remove some uncertainty, and with legal uncertainty everyone loses," Himmelreich observed. "An incomplete law that's industry-friendly will result in winners and losers, and I don't have faith we'll protect the losers – but at least it's better than doing nothing."
Take these – and any other predictions surrounding Trump 2.0 – with a healthy heap of salt, Himmelreich warned: "If this administration will be anything, it will be unpredictable." ®