HARKNESS

Harkness Fellowship – applications open!

The New Zealand Harkness Fellowship is for a high-potential senior leader in any field of study or vocation (excluding health care policy and practice) to study or research in the US for three to six months.

One New Zealand Harkness Fellowship worth up to NZ$60,000 is being offered in 2025 (for travel in mid-late 2025) to a leader currently employed in the New Zealand Public Sector. The length and total value of the Fellowship will be determined by the Leadership Development Centre (LDC) and the Harkness Trust Board, in conjunction with the successful applicant.

APPLY HERE

The New Zealand Harkness Fellowships were established in 2009 by the New Zealand Harkness Fellowships Trust Board to reinforce links between New Zealand and the US and to enable executive leaders in the Public Sector to benefit from study and travel in the US. Their purpose is to enable appointed Fellows to gain first-hand knowledge and build contacts in their chosen field of endeavour that will be highly relevant to the NZ context and future NZ/US links. The Trust Board is working in partnership with the Leadership Development Centre, which is acting on behalf of the NZ Government.

The programme has four goals:

  • Reinforce New Zealand-United States links by enabling actual or potential leaders and opinion formers in New Zealand to benefit from new ideas, practices and new contacts in the US;
  • Increase the Fellow’s ability to bring about change and improvements in New Zealand;
  • Help improve the cross-fertilisation of ideas and experience between New Zealand and the United States; and
  • Build a leadership network on both sides of the Pacific, encourage ongoing exchange between New Zealand and the United States and establish enduring relationships offering reciprocal benefits to both countries.

As part of your fellowship proposal, you will need to:

  • State the objectives and methodology of your proposed project.
  • Describe the significance of your project for your field in both the US and New Zealand.
  • Outline your ideas for how your experiences in the US will be communicated and applied to the New Zealand context to effect particular outcomes.
  • Demonstrate a track record of learning and growth in characteristics representative of the future leadership that the New Zealand Public Sector needs.
  • Provide evidence of a planned and purposeful approach to the Fellowship.

Fellows are expected to be based at a government agency, university, research institute or ‘think tank’ for a significant part of their stay in the US. 

Entitlement

One fellowship valued at up to NZ$60,000 will be offered in 2025 (for an award start date in mid-late 2025). New Zealand Harkness Fellowships are intended to contribute towards travel costs (international and domestic), accommodation and per diem expenses. Costs in excess of NZ$60,000 must be met by the Fellow and/or their New Zealand employer.

Read about previous fellows’ experiences:

2023 fellows: Aimee Hadrup (Harkness Fellowship Trust) and Jym Clark

2016 fellow: Aphra Green (Public Service Commission)

Sarah Box: Where to for AI policy under Trump II?

Panel discussion with Tino Cuellar (President, Carnegie Endowment for International Peace), Dario Amodei (CEO, Anthropic) and Elizabeth Kelly (Executive Director, U.S. AISI)

US AI policy – big country, many ideas

By Sarah Box, 2024 New Zealand Harkness Fellow

The United States may have been thinking about AI governance for longer than New Zealand, but debate about the technology and potential guardrails is far from closed. The Harkness Fellowship is providing a wonderful opportunity for me to watch and learn as the US seeks to continue its leadership in AI. 

Meetings in my first week quickly exposed me to opposing views. For one technology think tank, the answer was clearly to iterate governance over time and to recognise that “AI technology is neutral, it’s the person using it that can generate risk, not AI per se”. For another think tank, focused on democratic values for AI, the answer was plainly (and urgently) an AI Act and an AI regulator to map a path to transparency and accountability.  

I’m now six weeks in, and I keep discovering new divisions. Is AI just the latest iteration of IT development or something novel meriting special treatment? Can (and should) the government act pre-emptively against potential AI risks or wait for more evidence?  Should it target upstream or downstream of deployment? Should we worry about the extinction of humankind or focus on the here and now? 

The pros and cons of diversity

Big markets can support stronger competition and innovation, and I think this is true also for the “AI ideas market”. In Washington DC alone there is a stream of AI events where diverse policy views can get airtime. And the combination of Federal and State government interest in AI is resulting in many efforts to experiment with regulatory settings. 

That said, seasoned locals say this deluge of activity is becoming overwhelming and difficult to track. It comes on top of the many agency deliverables resulting from President Biden’s Executive Order (EO) on AI. The upcoming presidential election obviously adds uncertainty. Vice-President Harris was pivotal in the voluntary commitments agreed to by AI firms in mid-2023 and in the earlier AI Bill of Rights, suggesting she may be active on AI regulation. It has been speculated that a returning President Trump would likely repeal the Biden EO, though he has spoken about some AI risks and noted the energy demands of AI at recent campaign events.

Controls for Christmas?

The legislative branch is keen to act. The US Senate and House of Representatives each have bipartisan discussion groups on AI. Both appear to have an appetite for introducing AI legislation before the end of 2024.

But “legislation” doesn’t necessarily equate to “regulation”. There is equal emphasis on enabling AI and promoting US technological leadership.

The Vice Chair of the Congressional AI Caucus, Don Beyer, recently suggested 14 bills could be put to the President before Christmas, including the CREATE AI Act (the “Creating Resources for Every American To Experiment with Artificial Intelligence Act”), which would set up a National AI Research Resource to make it easier for researchers and students to access computing resources, data, educational tools and AI testbeds. (His remarks came in this symposium about AI in the US Department of Justice.)

Meanwhile, the Senate AI Working Group’s Roadmap has identified priorities ranging from innovation funding to addressing deepfakes related to election content and nonconsensual intimate images.

Speedbumps at the State level?

I’ve heard one estimate suggesting a mind-boggling 400+ State-level bills related to AI are under discussion. People say a lack of Federal progress (coupled with political jockeying) is spurring States to take unilateral measures, leading to unhelpful policy fragmentation.

California’s SB 1047 (the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”) was prominent in the news when I arrived, attracting attention for the level of liability it would place on AI developers and for its potential to shape AI regulation more widely in the US and globally.

Governor Newsom vetoed the bill at the eleventh hour, saying it was flawed and overly focused on model size rather than risk. But this is one to watch – California has already enacted AI laws on deepfake nudes, political uses of AI, celebrity AI clones, and AI watermarking, and people have said the vetoed bill contained some useful provisions on transparency (including whistleblowing) that could be revived.

What was that about extinction?

There is a vocal movement in the US that is concerned about the existential risks of artificial general intelligence (AGI). The definition of AGI is not settled, but the term broadly refers to AI that is more capable than humans at many tasks.

[Image: Nirit Weiss-Blatt’s map of “AI Existential Safety”]

I had the opportunity to listen to arguments around this concept last week at a Georgia Tech workshop on “Does AI need governance?”. As a Kiwi, I couldn’t help but be impressed by this Tolkienesque map of AI Existential Safety presented by Nirit Weiss-Blatt. You can listen to her full presentation on X here, where she talks about the various groups – and deep pockets – involved in the AGI debate. 

The main sentiment in this session seemed to be that AGI is a distraction that promotes a national security framing of AI policy, though participants “tipped their hats” to the movement’s success in pushing research, corporate and policy agendas. Presenting his paper on “The myth of AGI”, Milton Mueller (Georgia Tech) argued that the concept rests on an assumption of anthropomorphism that is not scientifically backed. Andrew Strait (Ada Lovelace Institute) gave more credence to AGI but wondered whether firms may be advancing relevant research not so much for “human good” as to drive network effects, market expansion, and new channels for advertising revenue.

More in store…

With six weeks to go, my challenge is to soak up as much as possible about what US governments and firms are doing on AI governance, and what the impacts of current policies seem to be. Watch this space!