
Panel discussion with Tino Cuellar (President, Carnegie Endowment for International Peace), Dario Amodei (CEO, Anthropic) and Elizabeth Kelly (Executive Director, U.S. AISI)
The United States is in the countdown to both the festive season and a new administration. In Washington DC, Halloween pumpkins and Biden-Harris yard signs have given way to Christmas trees and intense speculation about what President-elect Trump will do once in office, including on the AI front.
Not much was said on the campaign trail about AI and technology. Priorities on tax, immigration, and tariffs may be front of mind in the initial phases of Trump’s new Presidency. That said, AI’s potential to strengthen U.S. industry, and its central role in America’s economic and geopolitical contest with China, will surely push the issue forward.
A stronger focus on innovation
The Trump campaign’s “Agenda 47” stated an intention to repeal President Biden’s Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in order to champion innovation. Republican lawmakers considered the EO’s reporting requirements on firms onerous and likely to scare away would-be innovators.
Tech firms – whose views permeate the policy discussion – don’t speak with one voice on AI regulation. But recently they’ve made large language models available for national security purposes (Scale AI, Meta and OpenAI) and offered recommendations to the government to tackle AI infrastructure bottlenecks (OpenAI), which seems in line with a strong pro-innovation narrative.
Meanwhile, Elon Musk, who played a visible role in the election campaign and remains closely connected to the President-elect, sees guardrails that limit AI speech as attempts to censor the truth. His interests in AI (including xAI’s expansion of its Memphis data centre) will certainly carry weight.
Actual change could be limited
What may change in practice is unclear. Commentators say much of Biden’s EO is well in train and hard to reverse, though there is scope to roll back the reporting requirements on firms.
Documents issued under the first Trump administration also share ground with Biden-era policies: an interest in harnessing AI for competitiveness, building up U.S. AI talent and capabilities, and protecting civil liberties.
And of course, the Executive branch of the U.S. government is not the only source of AI policy. States continue to formulate draft bills (Republican Texas is mooted to be one to watch) and Congress is apparently keen to push legislation forward during the “lame duck” period ahead of the inauguration.
AI safety may be the first sign
I had the opportunity to attend the inaugural meeting of the International Network of AI Safety Institutes in San Francisco (in the midst of an “atmospheric river” storm event – TBC whether this was itself some sort of sign). The U.S. AISI is spearheading the initiative, joined by Australia, Canada, the EU, France, Japan, Kenya, Korea, Singapore, and the United Kingdom.
Alongside some keynotes (including a comment from the CEO of Anthropic – Dario Amodei – that he’s worried about autonomous systems “running wild” in future), the purpose of the meeting was to tee up AISI inputs for an AI Summit in France in early 2025. The French event follows AI safety meetings at Bletchley Park in the UK and in Seoul. It is expected to attract high-level attendance from countries (including, perhaps, New Zealand, given we have signed both the Bletchley Declaration and the Seoul Ministerial Statement).
Under Trump II, it’s not clear what the future of the U.S. AISI may be. Its current work includes issues like content authentication (which may fall foul of “woke policy” concerns) as well as the development and use of chemical and biological AI models (which has clear national security relevance). The AISI has also just set up the TRAINS taskforce (Testing Risks of AI for National Security), which will coordinate research and testing of AI models across domains such as nuclear security and military capabilities. It’s possible the U.S. AISI will be told to amplify work related to standard-setting and national security and to tone down work on issues such as bias.
It will be instructive to see what priority the Trump administration gives to the French event. This could signal its willingness to engage internationally on AI and its attitude towards AI safety issues. Would Marco Rubio, Trump’s pick for Secretary of State, participate? Or Howard Lutnick, the pick for Secretary of Commerce? Both positions will oversee important work on AI. A Chief Technology Officer could also attend, though February may be too early for the post to be filled. Under Trump I, the CTO (Michael Kratsios, who is advising on tech policy in the transition) played a strong role in pushing the OECD AI Principles and in setting a refreshed policy direction for AI.
Much to watch…
All in all, it’s a pivotal time for AI policy in the U.S. Having a front-row seat for a few months has been fascinating, and there will be plenty to follow when I return to New Zealand in mid-December.
By Sarah Box, 2024 New Zealand Harkness Fellow
The United States may have been thinking about AI governance for longer than New Zealand, but debate about the technology and potential guardrails is far from closed. The Harkness Fellowship is providing a wonderful opportunity for me to watch and learn as the US seeks to continue its leadership in AI.
Meetings in my first week quickly exposed me to opposing views. For one technology think tank, the answer was clearly to iterate governance over time and to recognise that “AI technology is neutral, it’s the person using it that can generate risk, not AI per se”. For another think tank, focused on democratic values for AI, the answer was plainly (and urgently) an AI Act and an AI regulator to map a path to transparency and accountability.
I’m now six weeks in, and I keep discovering new divisions. Is AI just the latest iteration of IT development or something novel meriting special treatment? Can (and should) the government act pre-emptively against potential AI risks or wait for more evidence? Should it target upstream or downstream of deployment? Should we worry about the extinction of humankind or focus on the here and now?
The pros and cons of diversity
Big markets can support stronger competition and innovation, and I think this is true also for the “AI ideas market”. In Washington DC alone there is a stream of AI events where diverse policy views can get airtime. And the combination of Federal and State government interest in AI is resulting in many efforts to experiment with regulatory settings.
That said, seasoned locals say this deluge of activity is becoming overwhelming and difficult to track. It comes on top of the many agency deliverables resulting from President Biden’s Executive Order (EO) on AI. The upcoming presidential election obviously adds uncertainty. Vice-President Harris was pivotal in the voluntary commitments agreed to by AI firms in mid-2023 and in the earlier AI Bill of Rights, suggesting she may be active on AI regulation. It has been speculated that a second Trump administration would likely repeal the Biden EO, though he has spoken about some AI risks and noted the energy demands of AI at recent campaign events.
Controls for Christmas?
The legislative branch is keen to act. The US Senate and House of Representatives each have bipartisan discussion groups on AI. Both appear to have appetite for introducing AI legislation before the end of 2024.
But “legislation” doesn’t necessarily equate to “regulation”. There is equal emphasis on enabling AI and promoting US technological leadership.
The Vice Chair of the Congressional AI Caucus, Don Beyer, recently suggested 14 bills could be put to the President before Christmas, including the CREATE AI Act (the “Creating Resources for Every American To Experiment with Artificial Intelligence Act”), which would set up a National AI Research Resource to make it easier for researchers and students to access computing resources, data, educational tools, and AI testbeds. (His remarks came at a symposium on AI in the US Department of Justice.)
Meanwhile, a Senate AI Working Group Roadmap has identified priorities ranging from innovation funding to addressing deepfakes related to election content and nonconsensual intimate images.
Speedbumps at the State level?
I’ve heard one estimate suggesting a mind-boggling 400+ State-level bills related to AI are under discussion. People say a lack of Federal progress (coupled with political jockeying) is spurring States to take unilateral measures, leading to unhelpful policy fragmentation.
California’s SB 1047 (the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”) was prominent in the news when I arrived, getting attention for the level of liability it would place on AI developers and for its potential to shape AI regulation more widely in the US and globally.
Governor Newsom vetoed the bill at the eleventh hour, saying it was flawed and overly focused on model size rather than risk. But this is one to watch – Newsom has already signed AI laws on deepfake nudes, political uses of AI, celebrity AI clones, and AI watermarking, and people have said the vetoed bill contained some useful provisions on transparency (including whistleblower protections) that could be revived.
What was that about extinction?
There is a vocal movement in the US that is concerned about the existential risks of artificial general intelligence (AGI). The definition of AGI is not settled, but the term broadly refers to AI that is more capable than humans at many tasks.
I had the opportunity to listen to arguments around this concept last week at a Georgia Tech workshop on “Does AI need governance?”. As a Kiwi, I couldn’t help but be impressed by the Tolkienesque map of AI Existential Safety presented by Nirit Weiss-Blatt. Her full presentation, in which she talks about the various groups – and deep pockets – involved in the AGI debate, is available on X.
The main sentiment in this session seemed to be that AGI is a distraction that promotes a national security framing of AI policy, though participants “tipped their hats” to the movement’s success in pushing research, corporate, and policy agendas. Presenting his paper on “The myth of AGI”, Milton Mueller (Georgia Tech) argued that the AGI concept rests on anthropomorphic assumptions that are not scientifically backed. Andrew Strait (Ada Lovelace Institute) gave more credence to AGI but wondered whether firms may be advancing relevant research not so much for “human good” as to drive network effects, market expansion, and new channels for advertising revenue.
More in store…
With six weeks to go, my challenge is to soak up as much as possible about what US governments and firms are doing on AI governance, and what the impacts of current policies seem to be. Watch this space!

As Aotearoa develops its approach to adopting and using artificial intelligence, a senior government official will visit key centres of AI policy expertise in Washington D.C. as the 2024 New Zealand Harkness Fellow.
Sarah Box, Principal Policy Advisor – Digital Policy at the Ministry of Business, Innovation and Employment, beat out an impressive field of candidates to claim the prestigious New Zealand Harkness Fellowship for 2024.
As a senior member of MBIE’s Digital Policy team, Box has worked on significant areas of policy, including the development of the Digital Strategy for Aotearoa, and the Game Development Sector Rebate scheme.
Her current focus is working with policy and operational staff across 30+ government agencies to consider the opportunities and challenges posed by the rapid adoption of artificial intelligence, and to help formulate policy to guide its use in New Zealand.
With many of the rapid developments in AI currently driven by US companies and institutions, the US Government has shifted into high gear in its efforts to ensure the technology is a force for good.
President Biden’s Executive Order on AI included policy initiatives such as the development of an internationally recognised AI Risk Management Framework and the establishment of the US AI Safety Institute to develop guidelines and undertake research to foster AI safety. The Office of Science and Technology Policy (OSTP) developed the US Blueprint for an AI Bill of Rights, runs the National AI Initiative Office, and is contributing to the implementation of the Executive Order.
Learning from top US AI policy experts
Box will be hosted during her fellowship by the Washington D.C.-based Observer Research Foundation America, an independent non-profit that examines the policy implications of emerging technologies.
Her research will also see her spend time with experts in US government agencies, standards bodies, and research institutions to gain insights into approaches to AI-related policy development that could inform our own efforts to foster responsible use of AI.
“The US is a leader in AI policy and champions a pro-innovation, risk-based approach that aligns with our need to harness technologies like AI to underpin growth and economic resilience,” says Box, who also works closely with the Department of Internal Affairs on the country’s approach to “digitising government”.
The New Zealand Government has established an Algorithm Charter governing the use of artificial intelligence systems by government agencies, and last year issued guidance on the use of generative AI systems such as ChatGPT and Google Gemini.
MBIE is now leading the development of an AI Roadmap to support the adoption of AI across the economy as well as risk management-based guidance for business, with Box’s US visit well-timed to observe the latest developments.
Informing AI policy development in Aotearoa
“The aim in undertaking this fellowship is to gain knowledge that can directly feed the government’s policy work on AI, which seeks to support New Zealand’s economic performance, mitigate harms, and align with key international partners,” says Box.
Harkness Fellowships Trust Chair Aphra Green said this year’s focus on emerging technologies, following previous fellows’ work on social and environmental issues, shows the breadth of important topics Fellows are supported to explore in the US.
“Sarah’s research project is perfectly timed to have input into an issue that is under active consideration both within New Zealand and internationally,” she says.
Sarah Box will depart for the US in September and share lessons from the project with the New Zealand policy community following her return.
Acting Te Kawa Mataaho Public Service Commissioner and Harkness Fellowships Trust Board member Heather Baggott says the growing awareness of the Harkness fellowships across government has caught the attention of executive leaders working on issues integral to the country’s future.
“The Leadership Development Centre based within the Public Service Commission promotes the Harkness Fellowships as one of the best opportunities for executive leaders in the Public Sector to pursue US-based research in policy-related areas of relevance to their work.
“We are delighted to see Sarah selected as the 2024 Fellow and are looking forward to both supporting her through the fellowship and learning about the insights she gleans from the experience.”
About the Harkness Fellowships Trust
The New Zealand Harkness Fellowships were established in 2009 by the New Zealand Harkness Fellowships Trust Board to reinforce links between New Zealand and the US and to enable executive leaders in the public sector to benefit from study and travel in the US. The Fellowships are valued at up to $70,000 and offer an emerging leader in the public sector the opportunity to spend 3–6 months undertaking research in the United States.
The fellowships enable successful candidates to gain first-hand knowledge and build contacts in their chosen field of endeavour that will be highly relevant to the NZ context and future NZ/US links. The Trust Board works to administer the fellowships in partnership with the Leadership Development Centre, which is acting on behalf of the NZ Government.
The current fellowships continue a Harkness fellowship programme that stretches back over sixty years. Past fellows include scientist Professor Sir Richard Faull, former Director General of Health Dr Karen Poutasi, businessman Hugh Fletcher and Public Service Commissioner Peter Hughes.