Australia must act on AI safety now
To seize AI's benefits, the Australian Government must address its risks. Join the call for safe and responsible AI governance.
Turn support into action. Get alerts when your voice matters most.
Why AI safety matters for Australia
AI could usher in unprecedented prosperity—or pose humanity's greatest challenge. Australia has a unique opportunity to help determine which future we get.
Extraordinary Potential
AI could help cure diseases, accelerate clean energy development, and spur unprecedented creativity and innovation. We have world-class researchers and the values to lead this transformation responsibly.
Serious Risks Ahead
But without proper safeguards, AI could eliminate jobs faster than we create them, enable new forms of warfare, or even become impossible to control as it grows more powerful.
Australia's Moment
We have a narrow window to shape AI's development globally. The choices we make in the next few years will determine whether AI becomes humanity's greatest tool or greatest threat.
Take action for AI safety
Your voice matters in shaping Australia's approach to AI safety. Contact your representatives and make a difference in how we develop and govern artificial intelligence.
Contact Your MP About AI Safety
Contact your local MP about AI safety policy priorities and concerns.
Contact Your Senators About AI Safety
Contact your senators about AI safety policy priorities and concerns.
Contact Key Ministers on AI Safety
Contact Australia's key ministers and decision-makers who have direct responsibility for AI policy, technology regulation, and related areas.
Australians demand immediate AI safety action
Recent polling shows overwhelming public support for Australian Government action on AI regulation and safety policies.
Australians believe Australia should lead international AI governance
Australians support the creation of a new government regulatory body for AI
Australians think preventing AI-driven extinction should be a global priority
AI regulation Australia needs: Two critical policies
Leading technical experts on AI and governance have identified two tractable policies that the current government should implement to put us on a much safer path and give Australia a seat at the table in shaping a beneficial AI future.
Australia's experts are calling for AI safety action
Leading voices from AI research, technology, and policy unite on the urgent need for safeguards
Expert-led letters that reached Parliament and media
Over 378 experts, public figures, and concerned citizens have joined our calls for the Australian Government to take decisive action on AI safety.
AI safety experts in the media
Our advocacy reaches a national audience through expert commentary, interviews, and policy analysis across Australian media.
Your questions about AI safety answered
Common questions about AI safety in Australia and how you can take action
AI safety ensures artificial intelligence systems are designed and deployed to benefit humanity without causing harm. This includes addressing current risks like bias and misinformation, as well as preventing future risks from more advanced AI systems.
Australia needs AI safety policies because we're rapidly adopting AI across healthcare, finance, and government services. Without proper safeguards, AI systems can make harmful decisions that affect people's lives and livelihoods.
AI safety covers:
- Current risks: Bias, privacy violations, misinformation, and job displacement
- Emerging risks: Autonomous weapons, deepfakes, and surveillance systems
- Future risks: Loss of human control over highly capable AI systems
No. Today's LLMs are unlikely to pose catastrophic or existential risks. While they raise valid ethical concerns, current AI systems offer important economic opportunities without posing extreme safety risks. However, evaluations by leading labs and third parties show an increasing likelihood that LLMs will soon pose a range of risks, such as helping wrongdoers build chemical, biological, radiological and nuclear weapons, conducting sophisticated cyberattacks at scale, or even escaping human control. Given AI scaling laws, the need to prepare for highly capable AI systems that could be dangerous has become pressing.
About Australians for AI Safety
Australians for AI Safety is a coalition advocating for the safe and responsible development of artificial intelligence in Australia. We unite experts in AI, ethics, policy, and other fields with concerned citizens to ensure Australia leads in AI governance.
Our mission is to promote informed public discussion about AI risks and benefits, translate expert knowledge into accessible policy recommendations, and advocate for appropriate governance frameworks that protect Australian interests while fostering beneficial AI development.
Through open letters, expert testimony, media engagement, and grassroots advocacy, we work to ensure Australia's voice is heard in global AI governance discussions and that our government implements policies that keep pace with rapid technological advancement.