US AI policy – big country, many ideas

By Sarah Box, 2024 New Zealand Harkness Fellow

The United States may have been thinking about AI governance for longer than New Zealand, but debate about the technology and potential guardrails is far from closed. The Harkness Fellowship is providing a wonderful opportunity for me to watch and learn as the US seeks to continue its leadership in AI. 

Meetings in my first week quickly exposed me to opposing views. For one technology think tank, the answer was clearly to iterate governance over time and to recognise that “AI technology is neutral, it’s the person using it that can generate risk, not AI per se”. For another think tank focused on democratic values for AI, the answer was plainly (and urgently) an AI Act and an AI regulator to map a path to transparency and accountability.

I’m now six weeks in, and I keep discovering new divisions. Is AI just the latest iteration of IT development or something novel meriting special treatment? Can (and should) the government act pre-emptively against potential AI risks or wait for more evidence? Should it target upstream or downstream of deployment? Should we worry about the extinction of humankind or focus on the here and now?

The pros and cons of diversity

Big markets can support stronger competition and innovation, and I think this is also true of the “AI ideas market”. In Washington DC alone there is a stream of AI events where diverse policy views can get airtime. And the combination of Federal and State government interest in AI is resulting in many efforts to experiment with regulatory settings.

That said, seasoned locals say this deluge of activity is becoming overwhelming and difficult to track. It comes on top of the many agency deliverables resulting from President Biden’s Executive Order (EO) on AI. The upcoming presidential election obviously adds uncertainty. Vice-President Harris was pivotal in the voluntary commitments agreed to by AI firms in mid-2023 and the earlier AI Bill of Rights, suggesting she may be active on AI regulation. It has been speculated that former President Trump would likely repeal the Biden EO, though he has spoken about some AI risks and noted the energy demands of AI at recent campaign events.

Controls for Christmas?

The legislative branch is keen to act. The US Senate and House of Representatives each have bipartisan discussion groups on AI. Both appear to have appetite for introducing AI legislation before the end of 2024.

But “legislation” doesn’t necessarily equate to “regulation”. There is equal emphasis on enabling AI and promoting US technological leadership.

The Vice Chair of the Congressional AI Caucus, Don Beyer, recently suggested 14 bills could be put to the President before Christmas, including the CREATE Act (or “Creating Resources for Every American To Experiment with AI Act”) that would set up a National AI Research Resource to make it easier for researchers and students to access computing resources, data, educational tools and AI testbeds. (His remarks came in a symposium about AI in the US Department of Justice.)

Meanwhile a Senate AI Working Group Roadmap has identified priorities ranging from innovation funding to addressing deepfakes related to election content and nonconsensual intimate images.

Speedbumps at the State level?

I’ve heard one estimate suggesting a mind-boggling 400+ State-level bills related to AI are under discussion. People say a lack of Federal progress (coupled with political jockeying) is spurring States to take unilateral measures, leading to unhelpful policy fragmentation.

California’s SB 1047 (the “Safe and Secure Innovation for Frontier AI Models Act”) was prominent in the news when I arrived, getting attention for the level of liability it would place on AI developers and for its potential to shape AI regulation more widely in the US and globally. 

Governor Newsom vetoed the bill at the eleventh hour, saying it was flawed and overly focused on model size rather than risk. But this is one to watch – Governor Newsom has already signed AI laws on deepfake nudes, political uses of AI, celebrity AI clones, and AI watermarking, and people have said the vetoed bill contained some useful actions on transparency (including whistleblowing) that could be revived.

What was that about extinction?

There is a vocal movement in the US that is concerned about the existential risks of artificial general intelligence (AGI). The definition of AGI is not settled, but it broadly refers to AI that is more capable than humans at many tasks.

[Image: Nirit Weiss-Blatt’s map of AI Existential Safety]

I had the opportunity to listen to arguments around this concept last week at a Georgia Tech workshop on “Does AI need governance?”. As a Kiwi, I couldn’t help but be impressed by this Tolkienesque map of AI Existential Safety presented by Nirit Weiss-Blatt. You can listen to her full presentation on X, where she talks about the various groups – and deep pockets – involved in the AGI debate.

The main sentiment in this session seemed to be that AGI is a distraction that promotes a national security framing of AI policy, though participants “tipped their hats” to the success of the movement in pushing research, corporate and policy agendas. Presenting his paper on “The myth of AGI”, Milton Mueller (Georgia Tech) argued that there is an assumption of anthropomorphism that is not scientifically backed. Andrew Strait (Ada Lovelace Institute) gave more credence to AGI but wondered whether firms may be advancing relevant research not so much for “human good” as to drive network effects, market expansion, and new channels for advertising revenue.

More in store…

With six weeks to go, my challenge is to soak up as much as possible about what US governments and firms are doing on AI governance, and what the impacts of current policies seem to be. Watch this space!

US-bound Harkness Fellow to explore AI policy development

As Aotearoa develops its approach to adopting and using artificial intelligence, a senior government official will visit key centres of AI policy expertise in Washington D.C. as the 2024 New Zealand Harkness Fellow.

Sarah Box, Principal Policy Advisor – Digital Policy at the Ministry of Business, Innovation and Employment, beat out an impressive field of candidates to claim the prestigious New Zealand Harkness Fellowship for 2024.

As a senior member of MBIE’s Digital Policy team, Box has worked on significant areas of policy, including the development of the Digital Strategy for Aotearoa, and the Game Development Sector Rebate scheme. 

Her current focus is working with policy and operational staff across 30+ government agencies to consider the opportunities and challenges posed by the rapid adoption of artificial intelligence, and to help formulate policy to guide its use in New Zealand.

With many of the rapid developments in AI currently driven by US companies and institutions, the US Government has shifted into high gear in its efforts to ensure the technology is a force for good.

President Biden’s Executive Order on AI included policy initiatives such as the development of an internationally recognised AI Risk Management Framework and the establishment of the US AI Safety Institute to develop guidelines and undertake research to foster AI safety. The Office of Science and Technology Policy (OSTP) developed the US Blueprint for an AI Bill of Rights, runs the National AI Initiative Office, and is contributing to the implementation of the Executive Order.

Learning from top US AI policy experts

Box will be hosted during her fellowship by the Washington D.C.-based Observer Research Foundation America, an independent non-profit that examines the policy implications of emerging technologies.

Her research will also see her spend time with experts in US government agencies, standards bodies, and research institutions to gain insights into approaches to AI-related policy development that could inform our own efforts to foster responsible use of AI.

“The US is a leader in AI policy and champions a pro-innovation, risk-based approach that aligns with our need to harness technologies like AI to underpin growth and economic resilience,” says Box, who also works closely with the Department of Internal Affairs on the country’s approach to “digitising government”.

The New Zealand Government has established an Algorithm Charter governing use of artificial intelligence systems by government agencies and last year issued guidance on the use of generative AI systems such as ChatGPT and Google Gemini.

MBIE is now leading the development of an AI Roadmap to support the adoption of AI across the economy as well as risk management-based guidance for business, with Box’s US visit well-timed to observe the latest developments.

Informing AI policy development in Aotearoa

“The aim in undertaking this fellowship is to gain knowledge that can directly feed the government’s policy work on AI, which seeks to support New Zealand’s economic performance, mitigate harms, and align with key international partners,” says Box.

Harkness Fellowships Trust Chair Aphra Green says the focus on emerging technologies this year, following previous fellows’ work on social and environmental issues in recent years, shows the breadth of important topics Fellows are supported to explore in the US.

“Sarah’s research project is perfectly timed to have input into an issue that is under active consideration both within New Zealand and internationally,” she says.

Sarah Box will depart for the US in September and share lessons from the project with the New Zealand policy community following her return.

Acting Te Kawa Mataaho Public Service Commissioner and Harkness Fellowships Trust Board member Heather Baggott says the growing awareness of the Harkness fellowships across government has caught the attention of executive leaders working on issues integral to the country’s future.

“The Leadership Development Centre based within the Public Service Commission promotes the Harkness Fellowships as one of the best opportunities for executive leaders in the Public Sector to pursue US-based research in policy-related areas of relevance to their work.

“We are delighted to see Sarah selected as the 2024 Fellow and are looking forward to both supporting her through the fellowship and learning about the insights she gleans from the experience.”

About the Harkness Fellowships Trust

The New Zealand Harkness Fellowships were established in 2009 by the New Zealand Harkness Fellowships Trust Board to reinforce links between New Zealand and the US and to enable executive leaders in the Public Sector to benefit from study and travel in the US. The Fellowships are valued at up to $70,000, and offer an emerging leader in the public sector the opportunity to spend 3-6 months undertaking research in the United States.

The fellowships enable successful candidates to gain first-hand knowledge and build contacts in their chosen field of endeavour that will be highly relevant to the NZ context and future NZ/US links. The Trust Board works to administer the fellowships in partnership with the Leadership Development Centre, which is acting on behalf of the NZ Government.

The current fellowships continue a Harkness fellowship programme that stretches back over sixty years. Past fellows include scientist Professor Sir Richard Faull, former Director General of Health Dr Karen Poutasi, businessman Hugh Fletcher and Public Service Commissioner Peter Hughes.