The Australian federal election was held on 3 May 2025.

Thank you to the 356 signatories who supported the open letter for the 2025 federal election.

Australia Must Act on AI Risks Now

To realise AI’s immense benefits, we must first confront its escalating risks—starting now.

Published 24 March 2025

The next government will shape whether AI becomes a powerful force for good or causes catastrophic harm. Australia needs swift and globally coordinated action to address the risks of AI so Australians can fully seize its opportunities.

We can fly with confidence because we know airlines are subject to robust safety standards—the same should be true for AI.

Therefore, we call on the Australian Government to:

  • Deliver on its commitments by creating an Australian AI Safety Institute. While massive investments are accelerating AI capabilities, minimal funding is dedicated to understanding and addressing the accompanying risks. We need independent technical expertise within government to contribute to global AI risk research and to help ensure regulation and policy meet Australia's needs.
  • Introduce an Australian AI Act that imposes mandatory guardrails on AI developers and deployers. The Act should ensure that powerful AI systems meet robust safety standards and clarify liability for developers and deployers.

AI development is rapid, often unpredictable, and will have significant consequences for Australians. AI stands to fundamentally transform our society, economy, and democracy – for better or worse. Australians expect our government to take the widespread implications of AI seriously, to work with the global community to ensure AI is well governed, and to be adaptable in protecting us from AI risks while enabling us to realise its benefits.

We, the undersigned, call on the Australian Government to make AI safe before it's too late.

Signatories

Note: Signatories endorse only the core letter text. Footnotes and additional content may not represent their views.

Prof. Michael A. Osborne

University of Oxford

Professor of Machine Learning

Co-author, with Carl Benedikt Frey, of the widely cited 2013 paper "The Future of Employment: How Susceptible Are Jobs to Computerisation?"

Dr. Toby Ord

Oxford University

Senior Researcher

Author of The Precipice

Senator David Pocock

Independent

Senator for the ACT

Prof. Huw Price

University of Cambridge

Emeritus Bertrand Russell Professor & Emeritus Fellow, Trinity College, Cambridge

Co-founder of the Centre for the Study of Existential Risk and former Academic Director of the Leverhulme Centre for the Future of Intelligence, Cambridge

Bill Simpson-Young

Gradient Institute

Chief Executive

Member of Australia's AI Expert Group and NSW's AI Review Committee

Prof. Terry Flew FAHA

The University of Sydney

Co-Director, Centre for AI, Trust and Governance

Australian Research Council Laureate Fellow

Prof. Robert Sparrow PhD

Monash University

Professor of Philosophy

Author of more than 50 refereed papers on AI and robot ethics

Prof. David Balding

University of Melbourne

Honorary Professor of Statistical Genetics

FAA

Dr. Peter Slattery PhD

MIT FutureTech

Researcher

Lead at the MIT AI Risk Repository

Dr. Alexander Saeri PhD

The University of Queensland | MIT FutureTech

AI Governance Researcher

Director, MIT AI Risk Index

Dr. Gnana K Bharathy PhD

Australian Research Data Commons, University of Technology Sydney

AI/ML Research Data Specialist, Researcher, Practitioner

Dr. Tiberio Caetano

Gradient Institute

Chief Scientist

Assoc. Prof. Michael Noetel PhD

The University of Queensland

Associate Professor

Dr. Ryan Carey

Causal Incentives Working Group

Former AI Safety Lead at the Future of Humanity Institute, University of Oxford

Prof. Richard Dazeley PhD

Deakin University

Professor of Artificial Intelligence and Machine Learning

Researcher in AI Safety and Explainability

Prof. Paul Salmon PhD

Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast

Professor

Australia's discipline leader, Quality and Reliability, 2020–2024

Dan Braun

Apollo Research

Lead Engineer/Head of Security

Dr. Marcel Scharth

The University of Sydney

Lecturer in Business Analytics (Machine Learning)

Dr. Tom Everitt

Google DeepMind

Staff Research Scientist

Dr. Ryan Kidd PhD

MATS Research

Co-Executive Director

Co-Founder of the London Initiative for Safe AI

Prof. Richard Middleton

The University of Newcastle

Emeritus Professor of Automation, Control and Robotics

Dr. Simon O'Callaghan

Gradient Institute

Head of Technical AI Governance

Co-author of Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources

Prof. Peter Vamplew PhD

Federation University Australia

Professor of Information Technology

Assoc. Prof. Zongyuan Ge PhD

Monash University

Director of AIM for Health

Dr. Alberto Maria Chierici

Gradient Institute

Entrepreneur and AI Specialist

Author of "The Ethics Of AI: Facts, Fictions, and Forecasts"

Dr. Thom Dixon PhD

National Security College, Australian National University

Expert Associate

Member of the ARC Centre of Excellence in Synthetic Biology

Dr. Paul Lessard PhD

Symbolica

Principal Scientist

Co-author of "Categorical Deep Learning"

Dr. Hrishikesh Desai CFA, CMA, EA

Arkansas State University

Assistant Professor of Accounting

Director of the Master of Accountancy with Data Analytics Program

Assoc. Prof. Simon Goldstein

The University of Hong Kong

Associate Professor, AI & Humanity Lab

Dr. Aaron Snoswell PhD

Queensland University of Technology GenAI Lab

Senior Research Fellow

Harriet Farlow

Mileva Security Labs

CEO and Founder

Soroush Pour

Harmony Intelligence

CEO

Dr. Daniel Murfet

University of Melbourne

Mathematician, Deep Learning Researcher

Dr. Dan MacKinlay

Research Scientist, AI for Science

https://danmackinlay.name

Greg Sadler

Good Ancestors

CEO

Dr. Jamie Freestone

Australian National University

Philosopher

Nakshathra Suresh

AI & Cyber Safety Expert

Co-founder of digital safety consultancy eiris

Dr. Lorenzo Pacchiardi

Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK

Research Associate

David Quarel

Australian National University; formerly University of Cambridge

PhD student; formerly Research Assistant

Jess Graham

The University of Queensland

Senior Research Coordinator

Dr. Andrew Childs

Griffith University

Lecturer in Technology and Crime

Matthew Farrugia-Roberts

Department of Computer Science, University of Oxford

Clarendon Scholar

Dr. Daniel D'Hotman

DPhil AI Ethics, University of Oxford

Chief of Staff, Convergence Labs

Rhodes Scholar, Australia-at-Large (2019 & Brasenose)

Assoc. Prof. Tolga Soyata

George Mason University

Associate Professor, Department of Electrical and Computer Engineering

James Gauci MBA

Cadent

CEO

Immediate Past Chair, IEEE Society on Social Implications of Technology (SSIT) Australia

Buck Shlegeris

Redwood Research

CEO

Joshua Krook

University of Southampton

Research Fellow in Responsible Artificial Intelligence

Dr. Cassidy Nelson MBBS MPH PhD

Centre for Long-Term Resilience

Head of Biosecurity Policy

Matthew Newman

TechInnocens

Director & AI Safety Researcher

Contributor & Reviewer: Safer Agentic AI Foundations, IEEE P7XXX, Ethically Aligned Design

JJ Hepburn

AI Safety Support and Ashgro

Founder and CEO
