Australia must act on AI safety now

To seize AI's benefits, the Australian Government must address its risks. Join the call for safe and responsible AI governance.

Turn support into action. Get alerts when your voice matters most.

Why AI safety matters for Australia

AI could usher in unprecedented prosperity—or pose humanity's greatest challenge. Australia has a unique opportunity to help determine which future we get.

Extraordinary Potential

AI could help cure diseases, accelerate clean energy development, and spur unprecedented creativity and innovation. We have world-class researchers and the values to lead this transformation responsibly.

Serious Risks Ahead

But without proper safeguards, AI could eliminate jobs faster than we create them, enable new forms of warfare, or even become impossible to control as it grows more powerful.

Australia's Moment

We have a narrow window to shape AI's development globally. The choices we make in the next few years will determine whether AI becomes humanity's greatest tool or greatest threat.

Australians demand immediate AI safety action

Recent polling shows overwhelming public support for Australian Government action on AI regulation and safety policies.

94%

Australians believe Australia should lead international AI governance


86%

Australians support the creation of a new government regulatory body for AI

80%

Australians think preventing AI-driven extinction should be a global priority

Australia's experts are calling for AI safety action

Leading voices from AI research, technology, and policy unite on the urgent need for safeguards

Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An Australian AI Safety Institute would allow Australia to participate on the world stage in guiding this critical technology that affects us all.

Dr. Toby Ord

Senior Researcher, Oxford University

Author of The Precipice


AI safety experts in the media

Our advocacy reaches national audiences through expert commentary, interviews, and policy analysis across Australian media.

Your questions about AI safety answered

Common questions about AI safety in Australia and how you can take action

What is AI safety?

AI safety ensures artificial intelligence systems are designed and deployed to benefit humanity without causing harm. This includes addressing current risks like bias and misinformation, as well as preventing future risks from more advanced AI systems.

Australia needs AI safety policies because we're rapidly adopting AI across healthcare, finance, and government services. Without proper safeguards, AI systems can make harmful decisions that affect people's lives and livelihoods.

AI safety covers:

  • Current risks: Bias, privacy violations, misinformation, and job displacement
  • Emerging risks: Autonomous weapons, deepfakes, and surveillance systems
  • Future risks: Loss of human control over highly capable AI systems

Do today's AI systems pose catastrophic risks?

No. Today's LLMs are unlikely to pose catastrophic or existential risks. While there are valid ethical concerns, today's AI systems offer important economic opportunities and are unlikely to pose extreme safety risks. However, evaluations by leading labs and third parties show an increasing likelihood that LLMs will soon pose a range of risks, such as helping wrongdoers build chemical, biological, radiological and nuclear weapons, conducting sophisticated cyber attacks at scale, or even escaping human control. Given AI scaling laws, the need to prepare for highly capable AI systems that could be dangerous has become pressing.


About Australians for AI Safety

Australians for AI Safety is a coalition advocating for the safe and responsible development of artificial intelligence in Australia. We unite experts in AI, ethics, policy, and other fields with concerned citizens to ensure Australia leads in AI governance.

Our mission is to promote informed public discussion about AI risks and benefits, translate expert knowledge into accessible policy recommendations, and advocate for appropriate governance frameworks that protect Australian interests while fostering beneficial AI development.

Through open letters, expert testimony, media engagement, and grassroots advocacy, we work to ensure Australia's voice is heard in global AI governance discussions and that our government implements policies that keep pace with rapid technological advancement.