Panel discussion with Tino Cuellar (President, Carnegie Endowment for International Peace), Dario Amodei (CEO, Anthropic) and Elizabeth Kelly (Executive Director, U.S. AISI)
The United States is in the countdown to both the festive season and a new administration. In Washington DC, Halloween pumpkins and Biden-Harris yard signs have given way to Christmas trees and intense speculation about what President-elect Trump will do once in office, including on the AI front.
Not much was said on the campaign trail about AI and technology. Priorities on tax, immigration, and tariffs may be front of mind in the initial phases of Trump’s new Presidency. That said, AI’s potential to strengthen U.S. industry, and its central role in America’s economic and geopolitical contest with China, will surely push the issue forward.
A stronger focus on innovation
The Trump campaign’s “Agenda 47” stated an intention to repeal President Biden’s Executive Order on “Safe, Secure and Trustworthy Development and Use of AI” to help champion innovation. Republican lawmakers considered the EO’s reporting requirements on firms to be onerous and a potential deterrent to would-be innovators.
Tech firms – whose views permeate the policy discussion – don’t speak with one voice on AI regulation. But recently they’ve made large language models available for national security purposes (Scale AI, Meta and OpenAI) and offered recommendations to the government to tackle AI infrastructure bottlenecks (OpenAI), which seems in line with a strong pro-innovation narrative.
Meanwhile, Elon Musk, who played a visible role in the election campaign and remains closely connected to the President-elect, views guardrails that limit AI speech as attempts to censor the truth. His interests in AI (including xAI’s expansion of its Memphis data centre) will certainly carry weight.
Actual change could be limited
What may change in practice is unclear. Commentators say much of Biden’s EO is already well in train and hard to reverse, though there is scope to roll back the reporting requirements on firms.
If you look back at documents issued under the first Trump administration, and compare them to Biden-era policies, there is also a shared interest in harnessing AI for competitiveness, building up U.S. AI talent and capabilities, and protecting civil liberties.
And of course, the Executive branch of the U.S. government is not the only source of AI policy. States continue to formulate draft bills (Republican Texas is mooted to be one to watch) and Congress is apparently keen to push legislation forward during the “lame duck” period ahead of the inauguration.
AI safety may be the first sign
I had the opportunity to attend the inaugural meeting of International AI Safety Institutes in San Francisco (in the midst of an “atmospheric river” storm event – TBC whether this was itself some sort of sign). The US AISI is spearheading the initiative, joined by Australia, Canada, the EU, France, Japan, Kenya, Korea, Singapore, and the United Kingdom.
Alongside some keynotes (including a comment from the CEO of Anthropic – Dario Amodei – that he’s worried about autonomous systems “running wild” in future), the purpose of the meeting was to tee up AISI inputs for an AI Summit in France in early 2025. The French event follows AI safety meetings in Bletchley Park, UK and Seoul. It is expected to attract high-level attendance from countries (including, perhaps, New Zealand, given we have signed both the Bletchley Declaration and the Seoul Ministerial Statement).
Under Trump II, it’s not clear what the future of the U.S. AISI may be. Its current work includes content authentication (which may fall foul of “woke policy” concerns) as well as the development and use of chemical and biological AI models (which has clear national security relevance). The AISI has also just set up a TRAINS taskforce (Testing Risks of AI for National Security), which will coordinate research and testing of AI models across nuclear security, military capabilities, etc. It’s possible the U.S. AISI will be told to amplify work related to standard-setting and national security and to tone down work on issues such as bias.
It will be instructive to see what priority the Trump administration gives to the French event. This could be a sign of their willingness to engage internationally on AI and of their attitude towards AI safety issues. Would Marco Rubio, Trump’s pick for Secretary of State, participate? Or Howard Lutnick, the pick for Secretary of Commerce? Both these positions will oversee important work on AI. A Chief Technology Officer could also attend, though February may be too early to have this job filled. Under Trump I, the CTO (Michael Kratsios, who is advising on tech policy in the transition) played a strong role in pushing the OECD AI Principles and in setting up a refreshed policy direction for AI.
Much to watch…
All in all, it’s a pivotal time for AI policy in the U.S. Having a front-row seat for a few months has been fascinating, and there will be plenty to follow when I return to New Zealand in mid-December.