
The New Zealand Harkness Fellowship is for a high-potential senior leader in any field of study or vocation (excluding health care policy and practice) to study or research in the US for three to six months.
One New Zealand Harkness Fellowship worth up to NZ$60,000 is being offered in 2025 (for travel in mid-late 2025) to a leader currently employed in the New Zealand Public Sector. The length and total value of the Fellowship will be determined by the LDC and Harkness Trust Board, in conjunction with the successful applicant.
The New Zealand Harkness Fellowships were established in 2009 by the New Zealand Harkness Fellowships Trust Board to reinforce links between New Zealand and the US and to enable executive leaders in the Public Sector to benefit from study and travel in the US. Their purpose is to enable appointed Fellows to gain first-hand knowledge and build contacts in their chosen field of endeavour that will be highly relevant to the NZ context and future NZ/US links. The Trust Board is working in partnership with the Leadership Development Centre, which is acting on behalf of the NZ Government.
The programme has four goals:
As part of your fellowship proposal, you will need to:
Fellows are expected to be based at a government agency, university, research institute or ‘think tank’ for a significant part of their stay in the US.
One fellowship valued at up to NZ$60,000 will be offered in 2025 (for an award start date in mid-late 2025). New Zealand Harkness Fellowships are intended to contribute towards travel costs (international and domestic), accommodation and per diem expenses. Additional costs in excess of NZ$60,000 must be met by the Fellow and/or their New Zealand employer.
Read about previous fellows’ experiences:
2024 fellow: Sarah Box (Harkness Fellowship Trust)
2023 fellows: Aimee Hadrup (Harkness Fellowship Trust) and Jym Clark
2016 fellow: Aphra Green (Public Service Commission)
Panel discussion with Tino Cuellar (President, Carnegie Endowment for International Peace), Dario Amodei (CEO, Anthropic) and Elizabeth Kelly (Executive Director, U.S. AISI)
The United States is in the countdown to both the festive season and a new administration. In Washington DC, Halloween pumpkins and Biden-Harris yard signs have given way to Christmas trees and intense speculation about what President-elect Trump will do once in office, including on the AI front.
Not much was said on the campaign trail about AI and technology. Priorities on tax, immigration, and tariffs may be front of mind in the initial phases of Trump’s new Presidency. That said, AI’s potential to strengthen U.S. industry, and its central role in America’s economic and geopolitical contest with China, will surely push the issue forward.
A stronger focus on innovation
The Trump campaign’s “Agenda 47” stated an intention to repeal President Biden’s Executive Order on “Safe, Secure and Trustworthy Development and Use of AI” in order to champion innovation. Republican lawmakers considered the EO’s reporting requirements on firms onerous and likely to scare away would-be innovators.
Tech firms – whose views permeate the policy discussion – don’t speak with one voice on AI regulation. But recently they’ve made large language models available for national security purposes (Scale AI, Meta and OpenAI) and offered recommendations to the government to tackle AI infrastructure bottlenecks (OpenAI), which seems in line with a strong pro-innovation narrative.
Meanwhile, Elon Musk, who played a visible role in the election campaign and remains closely connected to the President-elect, sees efforts to put guardrails on AI speech as attempts to censor the truth. His interests in AI (including xAI’s expansion of its Memphis data centre) will certainly carry weight.
Actual change could be limited
What may change in practice is unclear. Commentators say much of Biden’s EO is well in train and hard to reverse, though there is scope to roll back the reporting requirements on firms.
If you look back at documents issued under the first Trump administration, and compare them to Biden-era policies, there is also a shared interest in harnessing AI for competitiveness, building up U.S. AI talent and capabilities, and protecting civil liberties.
And of course, the Executive branch of the U.S. government is not the only source of AI policy. States continue to formulate draft bills (Republican Texas is mooted to be one to watch) and Congress is apparently keen to push legislation forward during the “lame duck” period ahead of the inauguration.
AI safety may be the first sign
I had the opportunity to attend the inaugural meeting of the International Network of AI Safety Institutes in San Francisco (in the midst of an “atmospheric river” storm event – TBC whether this was itself some sort of sign). The US AISI is spearheading the initiative, joined by Australia, Canada, the EU, France, Japan, Kenya, Korea, Singapore, and the United Kingdom.
Alongside some keynotes (including a comment from the CEO of Anthropic – Dario Amodei – that he’s worried about autonomous systems “running wild” in future), the purpose of the meeting was to tee up AISI inputs for an AI Summit in France in early 2025. The French event follows AI safety meetings in Bletchley Park, UK and Seoul. It is expected to attract high-level attendance from countries (including, perhaps, New Zealand, given we have signed both the Bletchley Declaration and the Seoul Ministerial Statement).
Under Trump II, it’s not clear what the future of the U.S. AISI may be. Its current work includes issues like content authentication (which may fall foul of “woke policy” concerns) as well as the development and use of chemical and biological AI models (which has clear national security relevance). The AISI has also just set up a TRAINS taskforce (Testing Risks of AI for National Security), which will coordinate research and testing of AI models across nuclear security, military capabilities, etc. It’s possible the U.S. AISI will be told to amplify work related to standard-setting and national security and to tone down work on issues such as bias.
It will be instructive to see what priority the Trump administration gives to the French event. This could be a sign of their willingness to engage internationally on AI and of their attitude towards AI safety issues. Would Marco Rubio, Trump’s pick for Secretary of State, participate? Or Howard Lutnick, the pick for Secretary of Commerce? Both these positions will oversee important work on AI. A Chief Technology Officer could also attend, though February may be too early to have this job filled. Under Trump I, the CTO (Michael Kratsios, who is advising on tech policy in the transition) played a strong role in pushing the OECD AI Principles and in setting up a refreshed policy direction for AI.
Much to watch…
All in all, it’s a pivotal time for AI policy in the U.S. Having a front-row seat for a few months has been fascinating, and there will be plenty to follow when I return to New Zealand in mid-December.
By Sarah Box, 2024 New Zealand Harkness Fellow
The United States may have been thinking about AI governance for longer than New Zealand, but debate about the technology and potential guardrails is far from closed. The Harkness Fellowship is providing a wonderful opportunity for me to watch and learn as the US seeks to continue its leadership in AI.
Meetings in my first week quickly exposed me to opposing views. For one technology think tank, the answer was clearly to iterate governance over time and to recognise that “AI technology is neutral, it’s the person using it that can generate risk, not AI per se”. For another think tank, focused on democratic values for AI, the answer was plainly (and urgently) an AI Act and an AI regulator to map a path to transparency and accountability.
I’m now six weeks in, and I keep discovering new divisions. Is AI just the latest iteration of IT development or something novel meriting special treatment? Can (and should) the government act pre-emptively against potential AI risks or wait for more evidence? Should it target upstream or downstream of deployment? Should we worry about the extinction of humankind or focus on the here and now?
The pros and cons of diversity
Big markets can support stronger competition and innovation, and I think this is true also for the “AI ideas market”. In Washington DC alone there is a stream of AI events where diverse policy views can get airtime. And the combination of Federal and State government interest in AI is resulting in many efforts to experiment with regulatory settings.
That said, seasoned locals say this deluge of activity is becoming overwhelming and difficult to track. It comes on top of the many agency deliverables resulting from President Biden’s Executive Order (EO) on AI. The recent presidential election obviously adds uncertainty. Vice-President Harris was pivotal in the voluntary commitments agreed to by AI firms in mid-2023 and the earlier AI Bill of Rights, suggesting she may be active on AI regulation. It has been speculated that President Trump would likely repeal the Biden EO, though he has spoken about some AI risks and noted the energy demands of AI in recent campaign events.
Controls for Christmas?
The legislative branch is keen to act. The US Senate and House of Representatives each have bipartisan discussion groups on AI. Both appear to have appetite for introducing AI legislation before the end of 2024.
But “legislation” doesn’t necessarily equate to “regulation”. There is equal emphasis on enabling AI and promoting US technological leadership.
The Vice Chair of the Congressional AI Caucus, Don Beyer, recently suggested 14 bills could be put to the President before Christmas, including the CREATE Act (or “Creating Resources for Every American To Experiment with AI Act”), which would set up a National AI Research Resource to make it easier for researchers and students to access computing resources, data, educational tools and AI testbeds. (His remarks came at a symposium on AI in the US Department of Justice.)
Meanwhile a Senate AI Working Group Roadmap has identified priorities ranging from innovation funding to addressing deepfakes related to election content and nonconsensual intimate images.
Speedbumps at the State level?
I’ve heard one estimate suggesting a mind-boggling 400+ State-level bills related to AI are under discussion. People say a lack of Federal progress (coupled with political jockeying) is spurring States to take unilateral measures, leading to unhelpful policy fragmentation.
California’s SB 1047 (the “Safe and Secure Innovation for Frontier AI Models Act”) was prominent in the news when I arrived, getting attention for the level of liability it would place on AI developers and for its potential to shape AI regulation more widely in the US and globally.
Governor Newsom vetoed the bill at the eleventh hour, saying it was flawed and overly focused on model size rather than risk. But this is one to watch – the Californian Governor has already signed AI laws on deepfake nudes, political uses of AI, celebrity AI clones, and AI watermarking, and people have said the vetoed bill contained some useful actions on transparency (including whistleblowing) that could be revived.
What was that about extinction?
There is a vocal movement in the US that is concerned about the existential risks of artificial general intelligence (AGI). The definition of AGI is not settled, but the term broadly refers to AI that is more capable than humans at many tasks.
I had the opportunity to listen to arguments around this concept last week at a Georgia Tech workshop on “Does AI need governance?”. As a Kiwi, I couldn’t help but be impressed by a Tolkienesque map of AI Existential Safety presented by Nirit Weiss-Blatt. Her full presentation, available on X, covers the various groups – and deep pockets – involved in the AGI debate.
The main sentiment in this session seemed to be that AGI is a distraction that promotes a national security framing of AI policy; though participants “tipped their hats” to the success of the movement in pushing research, corporate and policy agendas. Presenting his paper on “The myth of AGI”, Milton Mueller (Georgia Tech) argued that there is an assumption of anthropomorphism that is not scientifically backed. Andrew Strait (Ada Lovelace Institute) gave more credence to AGI but wondered if firms may be advancing relevant research not so much for “human good” as to drive network effects, market expansion, and new channels for advertising revenues.
More in store…
With six weeks to go, my challenge is to soak up as much as possible about what US governments and firms are doing on AI governance, and what the impacts of current policies seem to be. Watch this space!