Signatories
Note: Signatories endorse only the core letter text. Footnotes and additional content may not represent their views.
Prof. Michael A. Osborne
University of Oxford
Professor of Machine Learning
Co-author, with Carl Benedikt Frey, of the widely cited 2013 paper "The Future of Employment: How Susceptible Are Jobs to Computerisation?"
Dr. Toby Ord
Oxford University
Senior Researcher
Author of The Precipice
Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An Australian AI Safety Institute would allow Australia to participate on the world stage in guiding this critical technology that affects us all.
Senator David Pocock
Independent
Senator for ACT
Prof. Huw Price
University of Cambridge
Emeritus Bertrand Russell Professor & Emeritus Fellow, Trinity College, Cambridge
Co-founder of the Centre for the Study of Existential Risk and former Academic Director of the Leverhulme Centre for the Future of Intelligence, Cambridge
Bill Simpson-Young
Gradient Institute
Chief Executive
Australia's AI Expert Group, NSW's AI Review Committee
Prof. Terry Flew FAHA
The University of Sydney
Co-Director, Centre for AI, Trust and Governance
Australian Research Council Laureate Fellow
Declining trust across society is a barrier to widespread adoption of AI in the community, and distrust in AI due to misuse will further weaken Australia's democratic institutions.
Prof. Robert Sparrow PhD
Monash University
Professor of Philosophy
Author of more than 50 refereed papers on AI and robot ethics
Decisions about revolutionary technologies, like AI, should be made democratically.
Prof. David Balding
University of Melbourne
Honorary Professor of Statistical Genetics
FAA
Dr. Peter Slattery PhD
MIT FutureTech
Researcher
Lead at the MIT AI Risk Repository
I support the establishment of an Australian AI Safety Institute and the introduction of an AI Act. We don’t want advanced AI that is unsafe, untrustworthy, or unreliable—no one is better off in that scenario. Unfortunately, that may be what we are racing toward.
Dr. Alexander Saeri PhD
The University of Queensland | MIT FutureTech
AI Governance Researcher
Director, MIT AI Risk Index
Dr. Gnana K Bharathy PhD
Australian Research Data Commons, University of Technology Sydney
AI/ML Research Data Specialist, Researcher, Practitioner
As someone who has worked across industry, research, and national infrastructure, with deep experience in AI systems, socio-technical modelling, and AI applications and risk, I see the establishment of an AI Safety Institute as a critical step for Australia. AI systems are increasingly being integrated into society, and they are not just technical but socio-technical, shaped by human values, institutional processes, and data infrastructures. Australia needs an AI Safety Institute to guide, assess, and coordinate safe, ethical, and high-impact AI development. The institute should be framed around socio-technical considerations; bridge policy and practice in areas like research data infrastructure, model validation, and governance; provide guidance on responsible AI design and deployment; align with democratic values, Indigenous data sovereignty, and public trust; and offer a platform for multi-stakeholder dialogue. With Australia investing in national research infrastructure and expanding its AI capabilities, it is crucial to lead in ensuring safe, inclusive, and transparent systems for both current and future applications. The institute should connect policy formulation with implementation, provide guidance to public institutions, and create opportunities for diverse stakeholder engagement.
Dr. Tiberio Caetano
Gradient Institute
Chief Scientist
Assoc. Prof. Michael Noetel PhD
The University of Queensland
Associate Professor
Dr. Ryan Carey
Causal Incentives Working Group
Fmr AI Safety Lead @ Future of Humanity Institute, University of Oxford
Prof. Richard Dazeley PhD
Deakin University
Professor of Artificial Intelligence and Machine Learning
Researcher in AI Safety and Explainability
We need to ensure AI systems align with our society's needs and values, while fostering a populace with a healthy, educated skepticism of these systems. This is best achieved through an Australian AI Safety Institute and AI Act.
Prof. Paul Salmon PhD
Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast
Professor
Australia's discipline leader, Quality and Reliability, 2020 - 2024
I support the creation of an Australian AI safety institute and the implementation of an AI act as both are urgently required to ensure that the risks associated with AI are effectively managed. We are fast losing the opportunity to ensure that all AI technologies are safe, ethical, and beneficial to humanity.
Dan Braun
Apollo Research
Lead Engineer/Head of Security
Dr. Marcel Scharth
The University of Sydney
Lecturer in Business Analytics (Machine Learning)
Intelligence shapes the world. We have the responsibility to design systems that extend not only our intelligence and creativity, but also our wisdom and conscience. AI should be deeply ethical by design.
Dr. Tom Everitt
Google DeepMind
Staff Research Scientist
Dr. Ryan Kidd PhD
MATS Research
Co-Executive Director
Co-Founder of the London Initiative for Safe AI
Prof. Richard Middleton
The University of Newcastle
Emeritus Professor Automation, Control and Robotics
Dr. Simon O'Callaghan
Gradient Institute
Head of Technical AI Governance
Co-author of Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources
Prof. Peter Vamplew PhD
Federation University Australia
Professor, IT
Assoc. Prof. Zongyuan Ge PhD
Monash University
Director of AIM for Health
Dr. Alberto Maria Chierici
Gradient Institute
Entrepreneur and AI Specialist
Author of "The Ethics Of AI: Facts, Fictions, and Forecasts"
Dr. Thom Dixon PhD
National Security College, Australian National University
Expert Associate
Member of the ARC Centre of Excellence in Synthetic Biology
Dr. Paul Lessard PhD
Symbolica
Principal Scientist
Author of Categorical Deep Learning
Dr. Hrishikesh Desai CFA, CMA, EA
Arkansas State University
Assistant Professor of Accounting
Director of the Master of Accountancy with Data Analytics Program
I have done research with AI tools and technologies in terms of understanding their bias, training data, and potential. I believe AI safety is important given the rapid proliferation of these tools and technologies in our everyday lives.
Assoc. Prof. Simon Goldstein
The University of Hong Kong
Associate Professor, AI & Humanity Lab
Dr. Aaron Snoswell PhD
Queensland University of Technology GenAI Lab
Senior Research Fellow
An AI Safety Center will allow Australia to coordinate and accelerate our disparate efforts in AI Safety and Ethics, paving the way for regional leadership in this strategic and important area.
Harriet Farlow
Mileva Security Labs
CEO and Founder
Soroush Pour
Harmony Intelligence
CEO
AI is as transformative as electricity and as powerful as nuclear technology. We wouldn’t handle those without clear mandatory safeguards, and AI should be no different. To support good policy, Australia's government also needs a dedicated AI Safety Institute to bring deep technical AI expertise into government.
Dr. Daniel Murfet
University of Melbourne
Mathematician, Deep Learning Researcher
Dr. Dan MacKinlay
Research Scientist, AI for Science
https://danmackinlay.name
AI is the test case for how we deal with every one of the seismic changes of the 21st century.
Greg Sadler
Good Ancestors
CEO
It sets a dangerous precedent for Australia to formally commit to specific actions but fail to follow through. Australia is the only signatory that has yet to meet its commitment by creating an AISI.
Dr. Jamie Freestone
Australian National University
Philosopher
Nakshathra Suresh
AI & Cyber Safety Expert
Co-founder of digital safety consultancy eiris
AI safety is crucial for Australia’s threat landscape. As I have been doing for the last few years, I strongly advocate for social scientists to be included at the forefront of this movement.
Dr. Lorenzo Pacchiardi
Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK
Research Associate
David Quarel
Australian National University, Cambridge University (Fmr.)
PhD student, Research Assistant (Fmr.)
Jess Graham
The University of Queensland
Senior Research Coordinator
Dr. Andrew Childs
Griffith University
Lecturer in Technology and Crime
Matthew Farrugia-Roberts
Department of Computer Science, University of Oxford
Clarendon scholar
Dr. Daniel D'Hotman
DPhil AI Ethics, University of Oxford
Chief of Staff, Convergence Labs
Rhodes Scholar, Australia-at-Large (2019 & Brasenose)
Working in London's tech cluster, near DeepMind and Google, gives a front-row seat to AI's rapid evolution. In 2019, my DPhil research on AI for detecting mental health issues on social media met blank stares or dismissal. Two years later, ChatGPT arrived, and those skeptics stopped laughing.
AI's pace of change makes prediction nearly impossible. For these trailblazing companies, 18-24 months is the far future. While committees deliberate and leaders pontificate, developers in their early twenties create autonomous agents in hours and write code in seconds. This happens in countless apartments and incubators right now.
Anthropic's CEO Dario Amodei recently spoke at global forums about AI scenarios that should give us pause. Not science fiction, but plausible futures where systems might operate beyond our understanding. Nuclear weapons have predictable effects. AI systems may not offer such clarity, operating in various planes, conducting operations against opposing AIs. Gradual escalation would be almost inevitable as computer logic leaves no room for empathy.
What concerned me most wasn't worst-case scenarios, but Amodei's warnings about jobs and economic security. The coming wave won't just hit specialized roles. Potentially vast segments of our economy will cease to require human assistance. We see early signs. Freelancers in creative and technical fields are our canaries, watching tools emerge that do in seconds what experts took days. AI agents don't sleep, don't need benefits, and improve rapidly. No sector will remain untouched, whether you work with hands or mind.
Most countries, including Australia, are unprepared. Policy discussions crawl while governments hide behind inquiries that produce reports and little else. Meanwhile, the gap widens daily. This change won't arrive as a dramatic collapse, but as a quiet tide reshaping our economic shoreline. What happens when your expertise becomes obsolete? What about your family? Your community?
Imagine: what would you do if you couldn't rely on your job, with no clear path to employment? When tech companies race to launch advanced projects quarterly, when experts speak with increasing concern, it's worth paying attention. This isn't about preventing progress or succumbing to fear. It's about recognizing that significant change is coming, likely sooner than we think. The window for thoughtful action remains ajar, but won't stay open indefinitely.
Assoc. Prof. Tolga Soyata
George Mason University
Associate Professor, Department of Electrical and Computer Engineering
James Gauci MBA
Cadent
CEO
Immediate Past Chair, IEEE Society on Social Implications of Technology (SSIT) Australia
Buck Shlegeris
Redwood Research
CEO
Joshua Krook
University of Southampton
Research Fellow in Responsible Artificial Intelligence
Dr. Cassidy Nelson MBBS MPH PhD
Centre for Long-Term Resilience
Head of Biosecurity Policy
Matthew Newman
TechInnocens
Director & AI Safety Researcher
Contributor & Reviewer: Safer Agentic AI Foundations, IEEE P7XXX, Ethically Aligned Design
One of the biggest mistakes in rapidly evolving situations is to assume the present defines the future, and our understanding of today's situation is sufficient to navigate what is unfolding. We must recognise that failing to research and understand advancing AI developments means we, as a nation, will be leaving our wellbeing in the hands of others who may care little. An Australian AI Safety Institute would help safeguard the nation by providing the understanding needed to make informed decisions about AI adoption in our unique nation. We are unable to express a preference if we do not understand the subject. The best time to plant this tree was 10 years ago. The second best time is now.
JJ Hepburn
AI Safety Support and Ashgro
Founder and CEO
Mitchell King LL.M
HADR Institute
CEO of HADR Institute
As the CEO of the Humanitarian Assistance and Disaster Relief Institute, and as a researcher exploring the application of international law to AI Decision Support Systems in armed conflict, it's clear to me that these technologies offer immense opportunities alongside significant risks of harm. Navigating this delicate balance demands active, innovative governance and responsible leadership.
Yanni Kyriacos
AI Safety - Australia & New Zealand
Co-Founder & Director
Robust assurance justifies trust. We're all excited about the potential opportunities of AI, but not enough work is currently happening to address genuine safety concerns. It's easy to understand why Australians are hesitant to adopt AI while these big issues are outstanding.
Evan Markou
The Australian National University
PhD Researcher
Andrea Miotti
ControlAI
Founder and Executive Director
Author of A Narrow Path (narrowpath.co) and The Compendium (thecompendium.ai)
Dr. Sam Buckberry PhD
The Kids Institute Australia, Australian National University
Michael Chen
University of Oxford, Arcadia Impact, tangentially UK AISI
AI Technical Governance Researcher
Gregory Baker
Macquarie University & Australian National University
Lecturer in Cybersecurity and Artificial Intelligence
I wrote a number of the terms on digital rights management that were ratified as part of the Australia–USA Free Trade Agreement.
I believe, along with many of my colleagues researching artificial intelligence, that the 48th Parliament of Australia may preside over an intelligence explosion in which human beings will no longer be the most intelligent beings on the planet. I commit to doing whatever I can to assist my local member and any other member of parliament in understanding the issues at hand and to offer whatever guidance I can.
Prof. Stefan Auer
The University of Hong Kong
Professor of European Studies
Rafiul Nakib
iskool AI
CEO, CTO, Founder
Dr. Huaming Chen PhD, FHEA
The University of Sydney
Senior Lecturer
Charles O'Neill
Parsed
CTO
DPhil Student @ University of Oxford (under Professor Jakob Foerster) | 2025 General Sir John Monash Scholar | Neel Nanda MATS Stream 2024
Dr. Hayden Wilkinson
University of Oxford; University of Western Australia
Research Fellow; Lecturer
Dr. Jiadong Mao
University of Melbourne
Postdoctoral research fellow in computational biology
Shiv Munagala MMath (Oxon)
University of Oxford
Data Scientist
Elle Brooker BA BPPM MPPM FGIA MICDA GIAAOS
ForHumanity Center, Australia
Fellow
Fellow, Governance Institute of Australia, Fmr Principal advisor to the Shadow Minister for Technology and Innovation, Victoria on the Privacy, Health Records and Electronic Transactions bills
I support the calls for action of Australians for AI Safety, to ensure citizens are protected from the downside risks of automated, algorithmic and other 'AI' technologies.
Christopher MacLeod
Aequum AI Consulting Group
Managing Partner
Karl Berzins
FAR.AI
Co-founder & COO
Eslam Zaher
University of Queensland
PhD Researcher
Michael J Clark
Three Springs Technology, Cytophenix, Woodside
Director & Machine Learning Engineer
Once something is smarter than you, it's too late. Before then, we need to make sure ASI robustly shares our values and won't enable totalitarian control.
Dr. Ariel Zeleznikow-Johnston PhD
Monash University
Research Fellow
Author of 'The Future Loves You'
Samuel Coggins
Australian National University
PhD Candidate
Assoc. Prof. Gert Frahm-Jensen
Vascular Surgeon
James Dao
Harmony Intelligence
Research Engineer
Joseph Bloom
UK AI Security Institute
Head of White Box Evaluations
Oscar Delaney
Institute for AI Policy and Strategy
Research Assistant
AI safety and security is a global challenge. But middle powers like Australia have an important role in shaping the global discourse and strengthening safety measures.
Pooja Khatri
University of Sydney
Lawyer and AI Governance Researcher
Jiaranai Keatnuxsuo
Microsoft
AI Architect
AI safety matters to me because it sits at the intersection of my professional mission and personal values. As someone who designs and deploys AI solutions for public services, I’ve seen firsthand how the benefits of AI can be undermined by unintended consequences. Ensuring that the systems I build are safe, transparent, and aligned with human values is not just best practice, it’s a responsibility I owe to the communities I serve.
Impact on society: AI-driven decisions now influence healthcare outcomes, justice processes, infrastructure planning and more. Without rigorous safety guardrails, biases and errors can amplify harm at scale, eroding trust in institutions and technology alike.
Professional integrity: Having delivered over 500 hours of Data & AI workshops for government teams, I know that trust is earned through transparency and accountability. Embedding safety frameworks into every AI project reinforces that trust, ensuring stakeholders feel confident in adopting new solutions.
Ethical stewardship: My passion for AI governance drives me to balance innovation with oversight. By championing policies and practices that anticipate risks, rather than simply reacting to them, I help steer AI toward outcomes that serve the common good.
Community engagement: Hosting the Perth Machine Learning Group reminds me daily of the power of diverse perspectives. Inclusive safety practices not only catch blind spots but also democratise AI benefits across all segments of society.
Long-term viability: True innovation flourishes only when people feel safe using it. Prioritising AI safety today lays the groundwork for sustainable adoption tomorrow, unlocking human potential without compromising our ethical choices.
By making AI safety central to my work, I strive to build resilient systems that empower users, protect vulnerable populations, and uphold the values that guide me - because a future worth building demands nothing less.
Dr. Morgan Tear
The University of Queensland
Senior Research Fellow
Nathan Sherburn
Effective Altruism Australia
CIO
Gareth Kindler
University of Queensland
MPhil student
Arush Tagade
Leap Labs
Research Scientist
Lara Nguyen
SRI4GoodAI
Founder
As artificial intelligence continues to evolve, the importance of AI safety grows exponentially. AI has the potential to enhance society in profound ways, but without careful oversight, its risks could outweigh its benefits. I believe that responsible AI development must be grounded in rigorous research, ethical considerations, and proactive safety measures.
By signing this open letter, I am advocating for increased attention to AI safety within the research community. It is imperative that we prioritize transparency, fairness, and accountability in AI systems to safeguard against unintended consequences. Future advancements should not come at the expense of human values, privacy, or security.
I urge researchers, policymakers, and industry leaders to come together in ensuring AI is developed with safety at its core. Only through collective responsibility can we build AI that is both innovative and safe for humanity.
Dr. Nicholas Ampt
Ty Wilson-Brown
Senior IT Professional
Machine Learning & Security Researcher
I support government and industry taking strong action on AI safety, but I'm neutral on the exact form that will take. Any equivalent of an AI Safety Institute is fine with me!
Ben Robinson
The Centre for Long-Term Resilience
AI Policy Manager
Dr. Estela Valverde
The University of Sydney
Long-standing academic
AI is already impacting our lives, and it will become a real concern for coming generations. We need to legislate before it is too late!
Chris Leong
AI Safety Australia and New Zealand
Co-Founder
Joshua Suh
International Olympic Committee
Artificial Intelligence Business Manager
Clément Dumas
ENS Paris-Saclay
Katherine Biewer
AI Safety Engineering Taskforce
Software Engineer
Liam Carroll
Gradient Institute
Researcher
Raymond Sun
Tech Lawyer and Developer; Founder of the Global AI Regulation Tracker
Maxwell Clarke
NZX - New Zealand's Exchange
Data Developer
AI is the issue of our time. Technology is transforming the world faster and faster - we must keep up to prevent great harms.
Aristides Lintzeris
A Social Media Giant (Under NDA)
Computer Vision Engineer
AI Safety is the key to keeping Australian medical data safe from unauthorized abuse in the medical insurance industry.
Hugo Lyons Keenan
The University of Melbourne
ML PhD Student
Rumtin Sepasspour
Global Shield
Cofounder
Zach Furman
University of Melbourne
ML PhD Student
AI is getting increasingly capable, approaching or exceeding human performance on many real-world tasks, but our scientific understanding of how these systems work is almost nonexistent. Powerful technology that we understand poorly is a recipe for disaster. We need to act to fix this before it's too late.
Emmett Howard
University of Sydney
Simon Kennedy
The Australian Association of Voice Actors (AAVA)
Voice Actor & President of AAVA
Author of 9/11 and The Art of Happiness
Generative AI poses the greatest threat the creative sector has ever seen. In addition to this, the risk to democracy and truth is a price every Australian will pay if AI-generated media is not regulated.
Joachim Diederich
Psychology Network Pty Ltd
Clinical Psychologist and Director
Author of "The Psychology of Artificial Superintelligence"
With the advancement of AI, there are psychological consequences for the well-being of individuals as well as a significant impact on societies. Human work will continue to be transformed and will possibly be eliminated in the not so distant future. Interfaces that directly connect the brain with the internet will have an impact on how we think and communicate. The decisions and actions of an advanced form of artificial intelligence will be more and more difficult to understand, and hence, better forms of explanation for artificial intelligence are required.
The technology is increasingly being used to manage significant parts of society, e.g. by use of social credit systems, with consequences for the entire population. Finally, advancements in military AI may include autonomous killing machines that can spread fear and terror. All these developments are happening as we speak and represent significant challenges to human psychological well-being.
Advanced forms of artificial intelligence will have an impact on everybody: the developers and users of AI systems as well as individuals who have no direct contact with this form of technology. This is due to the soliciting nature of artificial intelligence: AI wants to be used, and the "universal solicitation" of the technology is a challenge. We need an AI that is not just "human controlled" but beneficial in the sense that it explains itself and its operation to everybody. This includes the most vulnerable in a society, including children, the elderly and persons with an intellectual disability. Living with an artificial superintelligence is a critical area of research, and resources should be allocated to it to help safeguard human existence as we know it.
Jisoo Kim
Clear AI
Co-Founder
AI safety is essential to both our national security and sovereign economic interests. I support a strong, coordinated approach - anchored by an AI Safety Institute and mandatory guardrails - to ensure safe, responsible AI development and deployment. As a late adopter, Australia can now draw on global lessons - enabling the government to lead with clarity, embed sector-specific best practices and unlock AI’s full potential to strengthen our economy and way of life.
Dr. Brad Taylor
University of Southern Queensland
Senior Lecturer (Political Economy)
Rohan Hitchcock
University of Melbourne
PhD Candidate
Sandy Fraser
AI Safety Researcher
Matt Fisher
Software engineer
Maintainer of inspect_evals on behalf of UK AISI
AI systems are rapidly gaining capability and are very likely to transform the world in the next few years. We must do everything we can to ensure the changes are positive.
Miles Tidmarsh
Compassion in Machine Learning
CEO
Amy Wilson
White Cleland
Lawyer
Secretary - Victorian Society for Computers and the Law
William Baird
PauseAI
UK Director
Tseng Yun
EY
Managing Director of Digital Engineering
While promoting innovation, we must also manage risks effectively.
Chris Mathwin
Harmony Intelligence
Research Engineer
Oliver Sourbut
UK AI Safety Institute
Researcher
Sarvesh Tiku
Blue Dot Impact, Georgia Institute of Technology
Justin Olive
Arcadia Impact
Head of AI safety
Pip Foweraker
AI Governance researcher
Hunter Jay
Software Engineer
Previously: CEO of Ripe Robotics
We are building increasingly intelligent systems, but our methods of aligning their goals with ours are rudimentary and may not scale. Public funding to research this technical problem is essential if we wish future AI systems to remain safe.
Luke Freeman
Good Ancestors
COO
Liam Harman
TasNetworks
Lead Cyber Risk Analyst
Brett van Niekerk
Logistai
CEO
AI is a fundamental net-negative to society on various fronts, whether it be its upward pressures on scams, downwards pressure on critical thinking, or the outright replacement of various skills. Effective policing at the policy level will enable the public and private sectors to create solutions that mitigate the potential damages autonomous and AI-assisted systems will cause.
Jimmy Farrell
Pour Demain
EU AI Policy Co-Lead
Almost overnight we've arrived in a world where AI capabilities present serious risks to the safety, health and fundamental rights of Australian citizens, with little to no rules steering this technology in the right direction. The Australian government must act now.
Jasper Timm
Apart Research
AI Safety Researcher
Chris MacLeod MBA
Dr. Daniel Max McIntosh
La Trobe University
AI governance researcher
Joseph Miller
University of Oxford
Incoming PhD Student in Machine Learning
Dr. Verity Cooper MBBS, DA, FRACGP
Independent Candidate for Sturt
AI is wonderful technology, but it absolutely needs regulation. If we don't act now, we risk inundation by misinformation, deepfakes and societal manipulation on an unimaginable scale. The security implications alone are terrifying, let alone the risks of societal division and takeover by malign forces. We need to act constructively, together.
Davor Petreski
University of Melbourne
Graduate Researcher
Matthew Hyde
Global Power Energy
Director - Integrated Solutions
Bryan J. Rollins
Grok Ventures
Operator-in-Residence
Dustin Venini
Researcher
Utkarsh Sharma
UNSW Canberra
Software Engineer
Neil Coulson
Senior Manager Data & AI Literacy
People need to be at the centre of any AI initiative
Maria Santacaterina
Santacaterina Consulting
CEO
Lily Stelling
AI Governance researcher
Dane Sherburn
AI Safety Researcher
Ethan (EJ) Watkins
Federation University
AI Research Assistant
Dr. James Carter MChD
Dr. Richard Corry
University of Tasmania
Lecturer in Philosophy
Even if you don't think AIs are going to enslave humanity, there are plenty of potential dangers, including the loss of jobs, the spread of misinformation, and the automation of warfare, to name just a few. We need to go into the AI era with our eyes open.
Isabella Meltzer
Research Officer
Michi Chan
Adj. Assoc. Prof. Karl Reed FACS, FIE Aust, MSc (Thesis only, Monash), ARMIT, MIEEE
La Trobe University
Software Engineering Researcher and Teacher, Industry Policy Wonk
AI has arrived in waves, and no wave has delivered what it promised. This current wave, however, goes beyond those in the past. It offers management and decision-makers the chance to bypass experts. There is a massive risk to sovereignty, since the tools and their training sets are being supplied by sources in other countries who may seek to influence our domestic politics. Policy decisions made with the use of these tools can easily be tainted to achieve outcomes sought by foreign countries. An obvious example is Australia's various plans to control social media. These tools could permit covert influence to be brought to bear via false data and conclusions.
Leigh J Kennedy
Steven Merriel
Katie Mills
Independent contractor currently specialising in AI
Software Engineer
Society is rapidly becoming reliant on AI without due consideration of the consequences.
Bowen Fung PhD
Neuroscientist
Chris Carpenter
Lark Hill Winery
Director
Darryl Carr
HCA Advisory
Enterprise Architect
Dr. Mark Brown PhD
Social Researcher
It seems absolutely possible that smarter-than-human AI will be developed in the next 5 to 10 years. This will not be a new technology; it will be a new form of life. We will wish we had put regulations in place while we still had the chance. Politicians will regret not having done more.
David Marti
Tyra Burgess
Computer Scientist / Mathematician
Dr. Laura Leighton
Molecular biologist
We are all beneficiaries of science and technology and the high standard of living they have created. Technological advancement is a wonderful thing, but when working with new technologies that are changing rapidly and have emergent properties that are difficult to interpret, caution and public consultation are essential. Australia has a moral obligation to meet its commitments regarding AI safety, and an opportunity to be a global leader in the responsible development of AI rather than merely reacting to developments as they happen. We should take this opportunity.
Christian Pearson
Australian National University
Public Health Masters Student
Tom Plant
Devicie
Technical Product Manager
Peter Horniak
Technical Systems Architect
Without government intervention, we're entrusting humanity's future to a few profit-driven companies developing artificial intelligences they barely understand, effectively giving these corporations and a small group of foreign officials permanent control over humanity's future.
Leticia García
ControlAI
Policy Advisor
Rumi Salazar
University of Melbourne
PhD student
Daniel Gotilla
AI Product Lead
Dr. Simon Zhang
Annie Davila Campanello
Dr. Michael Dello-Iacovo PhD
Mac Jordan
Archana Atmakuri
Dr. Rupert McCallum
Researcher
Mark Freeman
University of Sydney (retired)
Associate Professor (retired)
Ben Auer
University of Melbourne
Student (Neuroscience & Pure Maths)
Max Creswick
Trumpet of Patriots
Candidate
Elle Macdonald
University of Queensland
Ross Tieman
Australian National University
Cain Hillier
University of Sydney
Adrian Gornall BSc (Hons)
Business Performance Manager
Riley Harris
University of Oxford
DPhil Student
Jo Ann Stinson
University of Queensland
Engineering Project Manager, EC&I Engineer
I am hopeful that as AI technology matures it will contribute significantly towards improving people’s quality of life while being implemented in an ethical and responsible manner; however, the ethical use of AI cannot rely on the altruism and goodwill of individuals. Currently, day-to-day experiences of the limitations of AI, and observations of its misuse, have raised concerns around data integrity, privacy and its potential to cause harm. AI by itself is neither good nor bad; the purpose for which it is used and the way it is implemented determine that characteristic. The role of well-developed policies, standards, regulations, and governance frameworks in the risk management and use of AI will determine its overall benefit or harm to society.
Thomas Walker
Think Forward
CEO
Robert (Bob) Stevenson
Entertainer/Producer
AI presents a very real threat of "identity theft" of the highest order. Not simply usernames or passwords, but the very essence of what makes "me" me: my appearance, my mannerisms, my voice and, to the observer, my message. A lack of checks and balances in this sphere threatens a lawless, parallel universe, where reality and "make believe" challenge each other for an audience's attention, and where influence can be brought to bear by those whose motives are suspect.
Chenoah Ellis
Lawyer
Bridget Loughhead
Effective Altruism Australia
Community Manager
Gordon Denoon
Active Engagement
CEO
The future of AI is critical to the development of the human race. It is essential that we take the time to get this right and to ensure appropriate safeguards are in place.
Evan Hockings
The University of Sydney
PhD student
Pohlee Chan
University of Melbourne
Associate Director
AI is our largest opportunity as humanity, but also the most significant risk to humanity. Understanding the risks, being transparent, and actively managing and mitigating them is the only way that AI can be sustainable in the future. AI risk is not a barrier but an enabler. This is what the world needs to understand and back.
Zac Broeren
University of New South Wales
I am a Master’s student at UNSW, studying AI. I am doing this specifically with the intent to work on research relating to technical AI safety. I’m also completing the Technical Alignment Research Accelerator in Sydney. My background is in mathematics and physics and I have chosen to alter my career path because I believe, based on the evidence available and the arguments made by leading experts, that AI advancements will be the most consequential events of our time, and that without policy driven safety measures we will be choosing to use the most powerful technology ever invented without any consideration for the most dire consequences. We must be prepared for this technology.
Peter Bogatec LLB/LP, BIntSt (Flin)
Candidate for PHON Federal Electorate of Sturt
Kieran Greig
Rethink Priorities
Chief Strategy Officer
As someone who has closely followed AI development for years, I'm signing this open letter because I believe we stand at a critical juncture. The rapid advancement of AI technologies presents both extraordinary opportunities and unprecedented risks that demand thoughtful governance. My professional experience has given me insight into how transformative technologies can outpace regulatory frameworks. With AI, this gap is particularly concerning given the technology's potential to fundamentally reshape our economy, society, and democratic institutions.
Gabrial Pennicott
Trumpet of Patriots Wide Bay Candidate
I 100% support AI Safety. AI is evolving faster than anything we've seen—it's powerful, but dangerous if left unchecked. Entire industries will be disrupted. Jobs lost. Lives changed. We must plan for this, not react to it. Artificial Superintelligence is no longer science fiction—it’s on the horizon. We need strong, transparent, global safeguards now. AI must serve people—not control them.
Sheannal Anthony Obeyesekere
Effective Altruism Australia
Board Member
Andrew Taylor
Rockland Legal
Technology Lawyer
Elliot Teperman
Effective Altruism Australia
Head of Community
Daniel Ambler
DEECA (Vic)
Senior Digital Adviser
Jordan von Eitzen
University of Western Australia
Master of Economics
Sam Coggins
Australian National University
PhD student
Jaquelyne Vullinghs
Airtree Ventures
Partner
Christine Parkes
CEO
AI must be used for good only. We must have guardrails to protect ourselves against rogue actors who would choose to misuse it for power.
Mitchell Laughlin
Signing in my personal capacity
Economist
Samuel Nate Parson
Software Engineer
Michael Townsend
Open Philanthropy
Program Associate
Tim Allen
The University of Sydney
Mechatronic Engineer
Fredrick Ragg
Future Group
Senior Manager, Digital Marketing
Augustus Hebblewhite
Lexi Sekuless
LS Productions
Producer
Jenna Ong
Data Consultant, Newsletter Writer
Tristan Dry MPH
Bryce Robertson
Alignment Ecosystem Development
Project Director
Steven Nguyen
Microsoft
Software Engineer
Jason Segal
The University of Sydney
Student
Richard Hudson
Writer, lay AI researcher
Without careful regulation and oversight, AI presents unimaginable emergent risks: it will utterly transform society and cause the worst harm ever to the natural world.
Martin Veron
Coral Reef Research
Data Engineer
The development of artificial intelligence is momentous, with implications on a civilizational scale. The contents of this letter are not radical; rather, they represent a baseline that any responsible governance structure should adopt as the bare minimum standard.
Andrew McAlister
Woolworths
Data ethics and Privacy Partner
The right regulations will enable Australian businesses to innovate. The safety institute is needed to provide much-needed interpretive guidance to businesses.
Yoshua Wakeham
Software Engineer
Michaela Morton
Teacher
Meenakshi Chaudhary
Kai Dowsett
Macquarie University, Aboriginal Legal Service, Parliament of NSW
Student
Matthew Blyth
Bradley Tjandra
Actuary
As an actuary I recognise that businesses and consumers both greatly benefit from clarity around responsibility for risk, and the value of supporting businesses in managing emerging risks. This is why I believe the Australian government must take action on AI Risk now.
Arshia Jain
Senior Policy Officer
Ryl Parker BAppSc (Hons)
Senior Ecologist
Melanie Brennan
Effective Altruism Barcelona
Community Builder
Huw Evans
Kaya Guides
Co-Founder & CTO
Jarrah Bloomfield
Security Engineer
Tobin Smit
Vow
Systems Engineer
Wendy Gaol Parked
I'm concerned that there aren't enough safeguards. I'm also concerned about students and professionals relying on AI to do their research for them, without doing enough themselves; most particularly in the fields of science, medicine and law. All are fields that require absolute correctness, so that there is no risk of, for example, incorrect medication, insufficient or erroneous data (in scientific areas), or incorrect judgments or findings in the law drawn from ill-researched information and case law. I realise that AI is SUPPOSED to work on information it receives, but that presupposes that the information is pure and unbiased.
Noah Quinlan
University of the Sunshine Coast
Undergraduate Ecology Student
Madeleine Cox
Writer
- I am concerned AI technology will develop without governance, in a similar way to the advent of social media
- I would like technology companies to be bound by a duty of care
- I am appalled by the theft of original work by AI models
- Whilst I acknowledge there are many benefits that may come with AI, I am concerned that the energy used by AI applications will make emissions targets unattainable
Ryan Whitelock
Data Scientist
Mr Michael Steer
Prince Alfred College
Teacher
Rohan Mitchell
Software engineer
Steven Deng
Energy Modelling Consultant
Joey Corea
Writer and Data Engineer
Its impact on our lives is only going to grow, so we need to make sure that it is aligned with humanity's best interests.
Grace Adams
Australians deserve the safe application of technologies that may radically change our world or pose unforeseen risks.
Michael Huang
We should safeguard a powerful dual-use technology like AI, both nationally and internationally.
Lyndon Purcell
Software Engineer
Rupert Turner
Senior Recruiter
Ebony Jones
Data Analyst
Michael Kerrison
Independent AI safety researcher
Campbell Border
Software Engineer
David Sadler BSc (Hons)
Futures analyst (retired)
Over the last 30 years in particular, global interconnectivity has empowered state and non-state actors to deliver dangerous and deadly effects of all types across the world. AI will make this better or worse; we need to act now to make sure it is the former.
Nick Lane
Keiran Harris
Alien Cub Productions
Creative Director
Cameron Horsley
William Broom
Librarian
Nick F
Daniel George Sewell
Deanna Chamanaev
Carolyn Newson
CEO and Founder of Mantosa
To make use of AI and really progress society for the better, we need to be able to trust it and know it won't compromise our safety. There is not enough transparency for us to know how AI tools will be used by each government, or how we'll be protected from criminals.
Michael Oechsle
Product Designer
Clemency Martell-Turner
Kacey Reynolds
Glenn Membrey
Christopher Wintergreen
Secondary School Teacher
Rebecca Howard
Stephen Fowler
Hurley Jack Diessel
Rebecca Cutter
Fit for Purpose WA
Emily Branwyn Roberts
Rachel Le Rossignol
Benjamin Smyth BSc
Suzanne Connelly
Jen Truong
University of Melbourne
Student
David Colin Gould
mathematics teacher
I have long been optimistic about the future of humanity, and have watched in awe as our technology has progressed over my lifetime. However, the rapid increase in the capabilities of artificial intelligence has me afraid. With no means to control an entity significantly more intelligent than us, the chances of a positive future for humanity reduce day by day. All the many young people I have taught over the last decade are at risk, and I am terrified that what I promised for them - the potential for a wonderful future - is going to turn into a nightmare and then into nothing. For the sake of them and for all the young people like them across the globe, we must act. And act immediately.
Simon Newstead
Better Bite Ventures
Founding Partner
AI safety matters for the future of all of our society and the generations to follow. To have a flourishing future we need to roll out AI in a way where safety is at the forefront, not an afterthought.
Megan Goodwin
Angus Crawshaw
Computer Science Student
Dylan Vogel
Drew Skjellerup-Wakefield
Manas Choudhury
I regularly use AI and LLMs in my work in the renewable energy field, and see first-hand tremendous opportunities and use cases. However, the more powerful it becomes, the more obvious it is to me that there must be strong standards to ensure public safety.
Sharon Li
Sebastian Peeler
A human being
I believe AI can be a force for great good in the world, but current economic and sociopolitical pressures are creating an environment wherein AI will be utilised not for the good of all but for the good of very, very few; and the decisions of those few will set the priorities of AI development, priorities unlikely to include any real commitment to public safety over 'progress'.
Max McWhae
AI Safety Student
Imagine you are speeding down a winding, perilous road and a passenger says "Look, let's just slow down a bit. It's important we get there safely." That's AI Safety. They continue, "Wait, you guys aren't wearing seatbelts? Do you even know where we are going?"
Alistair Whitehouse
Alan Rayner Francis
Neil Lu
Zeke Coady
Pierre Taylor
AI has the potential to utterly change the world, and it is essential that we as a society direct that change to be beneficial rather than just hoping things turn out well by accident. Australia has the potential to be a thought leader in this field - if we come up with good policy here, we can be an example to all the other nations yet to legislate on the issue at all, which notably includes the US federal government.
Scott Simmons
James Newson
Bridget Mahy
Matilda Neame
AI safety is the most important issue to me this election. AI presents real risks to Australia's security, economy and the jobs, health and wellbeing of all Australians. Australia's leaders need to take a proactive stance to ensure that AI is safe and beneficial to humans, both here and on the world stage.
William Grant
Project Manager
Lucas Van Berkel
ICT Specialist
Jamie Muchall
Bernard Lovegrove
Julie Hepburn
Sam Moffitt
Jonathan Kurniawan
Valerie A Kennedy
Monika Janinski
Derek Synnott
Lindy Parker
Julia Elizabeth Duncan
Romy Gelber
Guy McDonald
N. Fitt
Jenny Chung
Rochelle Harris
Fergus Dall
Nicholas Holden
CMO
AI is going to have a future-altering impact on society; whether that impact is positive or negative hangs finely in the balance. Progress is rapid, and now is the time to think about safety, before it becomes too late.
Benjamin Archibald
Carly Sheil
Regina Kidd
Zemyna Kuliukas
Rickey Fukazawa
Ryan Willows
Business Analyst
Wes Graham
Ronnie Taheny
Kyle Leyden
Philippa Evans
Paul Schnackenburg
Huw Cannon
Mengyuan Niu
Joni Freeman
Max Tandy
Jemma Brown
Jake Ushida
Lucas Hakewill
William Kiely
Lawson Pegler
Matt Kay
Ainsley Pullen
Nicole Marie Porteous
K Lonergan
Sean
Natalie Darby
Tee Bee
Amanda Graham
Benjamin Hayward
Bailey White
Tony Evans
Scott Shimada
For the future of humanity