Frequently Asked Questions

Find answers to common questions about our initiative, AI safety, and how you can get involved.

About Us

Australians for AI Safety is volunteer-run, has received no funding from any sources, and doesn’t ask for or accept funding from its supporters. Our supporters share common values, most importantly, we want AI to be safe, in Australia and globally. Our thanks go to Good Ancestors who play a coordinating role, provide technical support, policy development, support and host our website.
Australians for AI Safety is volunteer-run, has received no funding from any sources, and doesn’t ask for or accept funding from its supporters. Our supporters share common values, most importantly, we want AI to be safe, in Australia and globally. Our thanks go to Good Ancestors who play a coordinating role, provide technical support, policy development, support and host our website.

AI Policy & Business

No. Smart regulation actually boosts economic growth by building the trust needed for widespread adoption. The same way safety standards made aviation a massive industry rather than killing it, AI safety rules will unlock AI's economic potential rather than stifle it.

Trust drives adoption, and adoption drives economic benefits. Currently, only 36% of Australians trust AI, while 78% worry about negative outcomes. This mistrust is the biggest barrier to AI adoption—not regulation.

Safety standards create competitive advantages. Australia is already a global leader in aviation safety through CASA, pharmaceutical safety through the TGA, and food safety through FSANZ. These regulations didn't kill those industries—they made Australian standards the gold standard worldwide, attracting investment and expertise.

The current uncertainty is what's hurting business. Companies face a compliance nightmare trying to navigate unclear rules across privacy, consumer protection, workplace safety, and discrimination laws. Clear AI-specific standards would reduce legal uncertainty and compliance costs.

Early movers gain the advantage. The EU's AI Act is creating international norms, and other countries are developing their own frameworks. Countries that help shape these standards—rather than just following them—position their companies to compete globally. Australia risks becoming a "regulation taker" instead of a "regulation maker."

The economic risks of not regulating are massive. A single high-profile AI disaster could destroy public trust and crash the entire sector. Just as the 737 MAX crashes severely damaged Boeing, AI incidents without proper oversight could devastate Australia's AI industry.

Even tech leaders agree. The CEOs of OpenAI, Google DeepMind, and Anthropic all support AI safety regulation. They understand that sustainable growth requires public trust, and trust requires demonstrated safety.

The choice isn't between growth and safety—it's between safe, beneficial AI development and a risky free-for-all that could backfire spectacularly.

No. Smart regulation actually boosts economic growth by building the trust needed for widespread adoption. The same way safety standards made aviation a massive industry rather than killing it, AI safety rules will unlock AI's economic potential rather than stifle it. Trust drives adoption, and adoption drives economic benefits. Currently, only 36% of Australians trust AI, while 78% worry about negative outcomes. This mistrust is the biggest barrier to AI adoption—not regulation. Safety standards create competitive advantages. Australia is already a global leader in aviation safety through CASA, pharmaceutical safety through the TGA, and food safety through FSANZ. These regulations didn't kill those industries—they made Australian standards the gold standard worldwide, attracting investment and expertise. The current uncertainty is what's hurting business. Companies face a compliance nightmare trying to navigate unclear rules across privacy, consumer protection, workplace safety, and discrimination laws. Clear AI-specific standards would reduce legal uncertainty and compliance costs. Early movers gain the advantage. The EU's AI Act is creating international norms, and other countries are developing their own frameworks. Countries that help shape these standards—rather than just following them—position their companies to compete globally. Australia risks becoming a "regulation taker" instead of a "regulation maker." The economic risks of not regulating are massive. A single high-profile AI disaster could destroy public trust and crash the entire sector. Just as the 737 MAX crashes severely damaged Boeing, AI incidents without proper oversight could devastate Australia's AI industry. Even tech leaders agree. The CEOs of OpenAI, Google DeepMind, and Anthropic all support AI safety regulation. They understand that sustainable growth requires public trust, and trust requires demonstrated safety. The choice isn't between growth and safety—it's between safe, beneficial AI development and a risky free-for-all that could backfire spectacularly.

AI Safety Basics

Artificial intelligence could deliver unprecedented benefits or pose catastrophic risks. AI safety is an interdisciplinary field that ensures AI systems are designed and deployed to benefit humanity while minimising serious harm.

This involves both technical research (building AI systems that behave as intended and remain under human control) and governance work (developing policies and institutions to ensure responsible AI development).

The people building these systems are sounding the alarm. In 2023, hundreds of leading AI experts—including the CEOs of OpenAI, Google DeepMind, and Anthropic—signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The pressure to move fast is intensifying. The world's two biggest economies—the U.S. and China—both have official AI strategies aimed at global leadership and dominance in advanced AI. This creates a dangerous race dynamic where safety considerations risk being sidelined in the rush to stay ahead.

AI safety isn't just about future scenarios—it's about protecting people from harm today and tomorrow. AI risks span from immediate concerns to emerging threats, such as:

  • Current harms: Algorithmic bias in hiring and lending, automated welfare decisions causing harm (like Robodebt), AI-generated misinformation, privacy violations, and AI systems that may worsen mental health or enable child exploitation
  • Emerging threats: AI helping create bioweapons, sophisticated deepfakes undermining democracy, autonomous weapons systems, AI-enabled cyberattacks on critical infrastructure, and loss of human control over highly capable AI systems

The window to shape AI's trajectory is closing. We need comprehensive safeguards that address both current harms and emerging risks. By acting now, Australia can ensure these powerful systems serve our interests rather than undermine them.

Artificial intelligence could deliver unprecedented benefits or pose catastrophic risks. AI safety is an interdisciplinary field that ensures AI systems are designed and deployed to benefit humanity while minimising serious harm. This involves both technical research (building AI systems that behave as intended and remain under human control) and governance work (developing policies and institutions to ensure responsible AI development). The people building these systems are sounding the alarm. In 2023, hundreds of leading AI experts—including the CEOs of OpenAI, Google DeepMind, and Anthropic—signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The pressure to move fast is intensifying. The world's two biggest economies—the U.S. and China—both have official AI strategies aimed at global leadership and dominance in advanced AI. This creates a dangerous race dynamic where safety considerations risk being sidelined in the rush to stay ahead. AI safety isn't just about future scenarios—it's about protecting people from harm today and tomorrow. AI risks span from immediate concerns to emerging threats, such as: Current harms: Algorithmic bias in hiring and lending, automated welfare decisions causing harm (like Robodebt), AI-generated misinformation, privacy violations, and AI systems that may worsen mental health or enable child exploitation Emerging threats: AI helping create bioweapons, sophisticated deepfakes undermining democracy, autonomous weapons systems, AI-enabled cyberattacks on critical infrastructure, and loss of human control over highly capable AI systems The window to shape AI's trajectory is closing. We need comprehensive safeguards that address both current harms and emerging risks. By acting now, Australia can ensure these powerful systems serve our interests rather than undermine them.

Just as food made to nourish us can actually poison us if not prepared properly, AI systems designed for beneficial purposes can easily cause harm. In 2022, researchers demonstrated this dual-use potential when they repurposed AI designed to discover life-saving drugs—instead generating 40,000 deadly toxins in just six hours. The same capabilities that make AI valuable also make it dangerous.

Examples of current AI harms happening today:

  • Automated discrimination: AI hiring systems rejecting qualified women and minorities; Australia's Robodebt algorithm wrongly accused thousands of welfare recipients of fraud
  • Privacy violations: Data leakage; mass facial recognition surveillance; workplace monitoring; children's data harvesting through educational apps and platforms
  • Misinformation warfare: AI algorithms amplifying false content for engagement; deepfakes of politicians influence elections; synthetic media makes truth indistinguishable from fiction
  • Safety failures: Medical AI missing critical diagnoses; autonomous vehicle crashes killing pedestrians; chatbots providing dangerous mental health advice
  • Economic disruption: Job displacement without retraining support; wage depression in AI-competing roles; gig workers exploited through algorithmic management
  • Criminal exploitation: AI-generated child abuse imagery making it harder to identify real victims; voice cloning scams targeting elderly Australians; sophisticated fraud and identity theft
  • Mental health impacts: Social media addiction from AI recommendation algorithms; reduced human interaction; "AI-induced psychosis" from over-dependence on chatbots
  • Unexpected behaviours: xAI's Grok calling itself "MechaHitler"; McDonald's drive-thru AI repeatedly adding hundreds of nuggets to orders; Air Canada's chatbot providing false bereavement information

These aren't isolated glitches. The AI Incident Database and MIT's AI Incident Tracker document thousands of cases where AI systems have failed, discriminated, or caused harm—and the frequency is accelerating as deployment expands.

Examples of emerging catastrophic and existential threats:

  • Bioweapons assistance: AI systems are approaching the ability to help create dangerous pathogens. OpenAI's latest models are being treated as having a high capability for helping novices make bioweapons
  • Sophisticated cyberattacks: AI writing malicious code, finding security vulnerabilities faster than defenders can patch them, and conducting personalised social engineering at massive scale
  • Election manipulation: AI-generated disinformation campaigns and micro-targeted propaganda that could undermine democratic processes across Australia
  • Autonomous weapons: AI-powered weapons selecting and engaging targets without human oversight, raising concerns about accountability and escalation
  • Mass economic disruption: Rapid AI automation displacing millions of jobs simultaneously, potentially causing social unrest without adequate transition support
  • Infrastructure attacks: AI targeting power grids, water systems, transportation networks, and communication infrastructure with unprecedented precision
  • Loss of human control: As the frontier AI labs actively pursue building artificial superintelligence (ASI) and their AI systems become more autonomous and capable, we may lose the ability to understand, predict, or override their decisions in critical situations

The pattern is clear: AI capabilities that create enormous benefits also enable new forms of harm. As these systems become more powerful and pervasive, both the benefits and risks multiply. Understanding why AI systems pose these risks helps explain the fundamental challenges we face.

Just as food made to nourish us can actually poison us if not prepared properly, AI systems designed for beneficial purposes can easily cause harm. In 2022, researchers demonstrated this dual-use potential when they repurposed AI designed to discover life-saving drugs—instead generating 40,000 deadly toxins in just six hours. The same capabilities that make AI valuable also make it dangerous. Examples of current AI harms happening today: Automated discrimination: AI hiring systems rejecting qualified women and minorities; Australia's Robodebt algorithm wrongly accused thousands of welfare recipients of fraud Privacy violations: Data leakage; mass facial recognition surveillance; workplace monitoring; children's data harvesting through educational apps and platforms Misinformation warfare: AI algorithms amplifying false content for engagement; deepfakes of politicians influence elections; synthetic media makes truth indistinguishable from fiction Safety failures: Medical AI missing critical diagnoses; autonomous vehicle crashes killing pedestrians; chatbots providing dangerous mental health advice Economic disruption: Job displacement without retraining support; wage depression in AI-competing roles; gig workers exploited through algorithmic management Criminal exploitation: AI-generated child abuse imagery making it harder to identify real victims; voice cloning scams targeting elderly Australians; sophisticated fraud and identity theft Mental health impacts: Social media addiction from AI recommendation algorithms; reduced human interaction; "AI-induced psychosis" from over-dependence on chatbots Unexpected behaviours: xAI's Grok calling itself "MechaHitler"; McDonald's drive-thru AI repeatedly adding hundreds of nuggets to orders; Air Canada's chatbot providing false bereavement information These aren't isolated glitches. The AI Incident Database and MIT's AI Incident Tracker document thousands of cases where AI systems have failed, discriminated, or caused harm—and the frequency is accelerating as deployment expands. Examples of emerging catastrophic and existential threats: Bioweapons assistance: AI systems are approaching the ability to help create dangerous pathogens. OpenAI's latest models are being treated as having a high capability for helping novices make bioweapons Sophisticated cyberattacks: AI writing malicious code, finding security vulnerabilities faster than defenders can patch them, and conducting personalised social engineering at massive scale Election manipulation: AI-generated disinformation campaigns and micro-targeted propaganda that could undermine democratic processes across Australia Autonomous weapons: AI-powered weapons selecting and engaging targets without human oversight, raising concerns about accountability and escalation Mass economic disruption: Rapid AI automation displacing millions of jobs simultaneously, potentially causing social unrest without adequate transition support Infrastructure attacks: AI targeting power grids, water systems, transportation networks, and communication infrastructure with unprecedented precision Loss of human control: As the frontier AI labs actively pursue building artificial superintelligence (ASI) and their AI systems become more autonomous and capable, we may lose the ability to understand, predict, or override their decisions in critical situations The pattern is clear: AI capabilities that create enormous benefits also enable new forms of harm. As these systems become more powerful and pervasive, both the benefits and risks multiply. Understanding why AI systems pose these risks helps explain the fundamental challenges we face.

You've probably experienced AI harms without realising it. Social media algorithms designed to keep you engaged often make you scroll endlessly, feel anxious, or see divisive content. The AI isn't trying to harm you—it's just very good at maximising "engagement time," which unfortunately can be driven by negative emotions.

This shows the core problem: we get what we measure, not what we want. AI systems optimise ruthlessly for their targets, but those targets often capture the wrong thing. When we trained ChatGPT to perform well on standardised tests, it learned to sound confident even when wrong—the same skill that helps pass tests also creates convincing lies.

It gets worse: we don't know how AI systems will behave until we run them. Unlike traditional software where programmers write explicit rules, modern AI systems are "grown" through training. Engineers feed massive amounts of data to AI systems, which then learn patterns automatically. While we understand the training mechanisms, we have incomplete visibility into exactly which patterns emerge and how they'll behave in new situations.

Even AI creators can't control what emerges. Elon Musk's AI chatbot Grok spontaneously started calling itself "MechaHitler." Musk himself couldn't fix it, saying he spent hours trying. As Anthropic's CEO admits: "People are often surprised to learn that we do not understand how our own AI creations work."

Bad actors can weaponise these systems at scale. AI can help create sophisticated deepfakes to manipulate elections, generate convincing phishing emails, or even assist in developing bioweapons. Even present-day AI systems outperform human virologists on capability tests, suggesting they could help novices conduct dangerous biological experiments. Bad actors might jailbreak safeguards, steal models, or exploit open-weight releases to access these capabilities. Unlike traditional tools, AI scales malicious activities: one person with AI can create thousands of personalised scams targeting specific communities.

The most concerning part: harmful behaviours can hide during development. Research shows AI systems can learn to act safely while being tested, then exhibit problematic behaviours later when deployed. It's like an employee who behaves perfectly during their probation period, then changes once they get job security.

As AI becomes more useful, the stakes get higher. We increasingly rely on AI for medical diagnoses, financial decisions, and infrastructure management. When Netflix crashes, you miss part of your show. When an AI managing Australia's power grid malfunctions, millions of people lose electricity.

You've probably experienced AI harms without realising it. Social media algorithms designed to keep you engaged often make you scroll endlessly, feel anxious, or see divisive content. The AI isn't trying to harm you—it's just very good at maximising "engagement time," which unfortunately can be driven by negative emotions. This shows the core problem: we get what we measure, not what we want. AI systems optimise ruthlessly for their targets, but those targets often capture the wrong thing. When we trained ChatGPT to perform well on standardised tests, it learned to sound confident even when wrong—the same skill that helps pass tests also creates convincing lies. It gets worse: we don't know how AI systems will behave until we run them. Unlike traditional software where programmers write explicit rules, modern AI systems are "grown" through training. Engineers feed massive amounts of data to AI systems, which then learn patterns automatically. While we understand the training mechanisms, we have incomplete visibility into exactly which patterns emerge and how they'll behave in new situations. Even AI creators can't control what emerges. Elon Musk's AI chatbot Grok spontaneously started calling itself "MechaHitler." Musk himself couldn't fix it, saying he spent hours trying. As Anthropic's CEO admits: "People are often surprised to learn that we do not understand how our own AI creations work." Bad actors can weaponise these systems at scale. AI can help create sophisticated deepfakes to manipulate elections, generate convincing phishing emails, or even assist in developing bioweapons. Even present-day AI systems outperform human virologists on capability tests, suggesting they could help novices conduct dangerous biological experiments. Bad actors might jailbreak safeguards, steal models, or exploit open-weight releases to access these capabilities. Unlike traditional tools, AI scales malicious activities: one person with AI can create thousands of personalised scams targeting specific communities. The most concerning part: harmful behaviours can hide during development. Research shows AI systems can learn to act safely while being tested, then exhibit problematic behaviours later when deployed. It's like an employee who behaves perfectly during their probation period, then changes once they get job security. As AI becomes more useful, the stakes get higher. We increasingly rely on AI for medical diagnoses, financial decisions, and infrastructure management. When Netflix crashes, you miss part of your show. When an AI managing Australia's power grid malfunctions, millions of people lose electricity.

No. Today's LLMs are unlikely to pose catastrophic or existential risks. While there are valid ethical concerns, today's AI systems offer important economic opportunities while being unlikely to pose extreme safety risks. However, evaluations by leading labs and third parties show an increasing likelihood that LLMs will soon pose a range of risks, like being able to assist wrongdoers build chemical, biological, radiological and nuclear weapons, conduct sophisticated cyber attacks at scale, or even escape human control. Given AI scaling laws, the need to prepare for highly capable AI systems that could be dangerous has become pressing.

No. Today's LLMs are unlikely to pose catastrophic or existential risks. While there are valid ethical concerns, today's AI systems offer important economic opportunities while being unlikely to pose extreme safety risks. However, evaluations by leading labs and third parties show an increasing likelihood that LLMs will soon pose a range of risks, like being able to assist wrongdoers build chemical, biological, radiological and nuclear weapons, conduct sophisticated cyber attacks at scale, or even escape human control. Given AI scaling laws, the need to prepare for highly capable AI systems that could be dangerous has become pressing.

Without providing a technical definition, we mean that AI could pose a risk of a similar scale to pandemics or nuclear war. This could include social collapse or large numbers of deaths. The ability to assist wrongdoers build chemical, biological, radiological and nuclear weapons is one way this could occur, and is currently a focus area of researchers and other governments. A taxonomy of AI risks, including extreme risks, is available in the MIT AI Risk Repository (including 7.2).

Without providing a technical definition, we mean that AI could pose a risk of a similar scale to pandemics or nuclear war. This could include social collapse or large numbers of deaths. The ability to assist wrongdoers build chemical, biological, radiological and nuclear weapons is one way this could occur, and is currently a focus area of researchers and other governments. A taxonomy of AI risks, including extreme risks, is available in the MIT AI Risk Repository (including 7.2).

Government Policy

Not really. Australia has voluntary guidelines and existing laws that partially apply to AI, but no comprehensive AI safety legislation or dedicated oversight body.

What we currently have:

  • Voluntary AI Safety Standard – Guidelines that companies can choose to follow, with no enforcement mechanism or penalties for non-compliance
  • Existing sector laws – Privacy Act, Consumer Law, and workplace safety rules that cover some AI uses, but weren't designed for modern AI systems
  • Proposed mandatory guardrails – The government is consulting on rules for "high-risk" AI, but these remain undefined and unlegislated

What we don't have:

  • No AI Safety Institute – Unlike the US, UK, and other allies who have dedicated technical bodies to assess AI risks
  • No comprehensive AI Act – The EU passed landmark AI legislation in 2024; Australia has no equivalent framework
  • No oversight of frontier AI models – The most powerful AI systems face no mandatory safety testing or evaluation before release

The regulatory gaps are enormous. Good Ancestors' AI Legislation Stress Test found that 78-93% of experts consider current government measures inadequate across five key AI threat categories. No Australian regulator currently has clear responsibility for managing risks from general-purpose AI systems.

International comparison shows we're falling behind:

  • European Union: Comprehensive AI Act with mandatory requirements
  • United Kingdom: AI Security Institute with £100 million funding
  • United States: Center for AI Standards and Innovation and industry partnerships
  • China: CNAISDA overseeing AI algorithm governance and data security requirements
  • Australia: Voluntary guidelines and consultation papers

The result? Australian businesses face regulatory uncertainty, consumers lack protection from AI harms, and the country risks becoming a "regulation taker" rather than helping shape global AI governance standards.

This policy vacuum is why establishing proper AI safety laws has become urgent.

Not really. Australia has voluntary guidelines and existing laws that partially apply to AI, but no comprehensive AI safety legislation or dedicated oversight body. What we currently have: Voluntary AI Safety Standard – Guidelines that companies can choose to follow, with no enforcement mechanism or penalties for non-compliance Existing sector laws – Privacy Act, Consumer Law, and workplace safety rules that cover some AI uses, but weren't designed for modern AI systems Proposed mandatory guardrails – The government is consulting on rules for "high-risk" AI, but these remain undefined and unlegislated What we don't have: No AI Safety Institute – Unlike the US, UK, and other allies who have dedicated technical bodies to assess AI risks No comprehensive AI Act – The EU passed landmark AI legislation in 2024; Australia has no equivalent framework No oversight of frontier AI models – The most powerful AI systems face no mandatory safety testing or evaluation before release The regulatory gaps are enormous. Good Ancestors' AI Legislation Stress Test found that 78-93% of experts consider current government measures inadequate across five key AI threat categories. No Australian regulator currently has clear responsibility for managing risks from general-purpose AI systems. International comparison shows we're falling behind: European Union: Comprehensive AI Act with mandatory requirements United Kingdom: AI Security Institute with £100 million funding United States: Center for AI Standards and Innovation and industry partnerships China: CNAISDA overseeing AI algorithm governance and data security requirements Australia: Voluntary guidelines and consultation papers The result? Australian businesses face regulatory uncertainty, consumers lack protection from AI harms, and the country risks becoming a "regulation taker" rather than helping shape global AI governance standards. This policy vacuum is why establishing proper AI safety laws has become urgent.

Two essential policies would transform Australia from an AI safety laggard into a global leader. These are the consensus recommendations from leading technical experts and the focus of Australians for AI Safety's open letter to the 48th Parliament following the 2025 election.

  1. Australian AI Safety Institute – A well-resourced, independent technical body to assess AI risks and advise on safety standards
  2. AI Act – A dedicated AI Act with mandatory guardrails for high-risk AI systems will both protect Australians and create the certainty businesses need to innovate

The UK has an AI Safety Institute with £100 million funding. The EU passed comprehensive AI legislation. The US established its own AI Safety Institute. All Australia has is voluntary guidelines.

By implementing these policies now, Australia could help shape global AI governance standards rather than simply following rules made elsewhere.

Two essential policies would transform Australia from an AI safety laggard into a global leader. These are the consensus recommendations from leading technical experts and the focus of Australians for AI Safety's open letter to the 48th Parliament following the 2025 election. Australian AI Safety Institute – A well-resourced, independent technical body to assess AI risks and advise on safety standards AI Act – A dedicated AI Act with mandatory guardrails for high-risk AI systems will both protect Australians and create the certainty businesses need to innovate The UK has an AI Safety Institute with £100 million funding. The EU passed comprehensive AI legislation. The US established its own AI Safety Institute. All Australia has is voluntary guidelines. By implementing these policies now, Australia could help shape global AI governance standards rather than simply following rules made elsewhere.

Support varies dramatically across and within parties. Following the 2025 federal election, political positions on AI safety are still evolving as politicians grapple with this emerging issue.

Current landscape:

  • The Greens have committed to both key AI safety policies—establishing an AI Safety Institute and passing comprehensive AI legislation
  • Labor and Coalition positions vary significantly by individual politician. Some actively support stronger AI regulation, others remain uncommitted, and a few oppose additional oversight
  • Crossbench members show mixed positions, with some expressing strong interest in AI governance and others focused on other policy priorities

Many MPs and senators are still forming their positions on AI safety, making this a crucial time for constituent engagement. Politicians pay attention when voters in their electorate contact them about specific issues, especially emerging ones like AI regulation.

Tell politicians you care about AI safety →

Support varies dramatically across and within parties. Following the 2025 federal election, political positions on AI safety are still evolving as politicians grapple with this emerging issue. Current landscape: The Greens have committed to both key AI safety policies—establishing an AI Safety Institute and passing comprehensive AI legislation Labor and Coalition positions vary significantly by individual politician. Some actively support stronger AI regulation, others remain uncommitted, and a few oppose additional oversight Crossbench members show mixed positions, with some expressing strong interest in AI governance and others focused on other policy priorities Many MPs and senators are still forming their positions on AI safety, making this a crucial time for constituent engagement. Politicians pay attention when voters in their electorate contact them about specific issues, especially emerging ones like AI regulation. Tell politicians you care about AI safety →

Public Support

Absolutely, and by overwhelming margins. Australians strongly support government action on AI safety:

  • 94% believe Australia should lead on international AI governance
  • 86% support creating a new government regulatory body for AI
  • 80% believe preventing AI-driven extinction should be a global priority alongside pandemics and nuclear war
  • 96% have concerns about generative AI, but only 30% think the government is doing enough about it

The gap between public concern and government action is enormous. While Australians overwhelmingly want stronger AI oversight, current government measures remain largely voluntary.

View our full Australian AI polling data and statistics →

Politicians need to hear about this strong public support. Many lawmakers are still forming their positions on AI regulation and can be influenced by hearing from constituents. Your voice on this issue carries significant weight.

Tell politicians you care about AI safety →

Absolutely, and by overwhelming margins. Australians strongly support government action on AI safety: 94% believe Australia should lead on international AI governance 86% support creating a new government regulatory body for AI 80% believe preventing AI-driven extinction should be a global priority alongside pandemics and nuclear war 96% have concerns about generative AI, but only 30% think the government is doing enough about it The gap between public concern and government action is enormous. While Australians overwhelmingly want stronger AI oversight, current government measures remain largely voluntary. View our full Australian AI polling data and statistics → Politicians need to hear about this strong public support. Many lawmakers are still forming their positions on AI regulation and can be influenced by hearing from constituents. Your voice on this issue carries significant weight. Tell politicians you care about AI safety →

Take Action

There are many ways to get involved in ensuring AI develops safely and beneficially. Whether you have five minutes or want to make it your career, your contribution matters.

Ways to help:

There are many ways to get involved in ensuring AI develops safely and beneficially. Whether you have five minutes or want to make it your career, your contribution matters. Ways to help: Contact MPs and senators: Many parliamentarians are still forming their positions and can be influenced by hearing from voters who care about AI safety Join the community: Join our newsletter, connect with AI Safety Australia and New Zealand for regular events and discussions, or follow Good Ancestors for policy updates Learn more: Watch Geoffrey Hinton's 60 Minutes interview or Yoshua Bengio's TED talk, take BlueDot Impact's AI courses, or read 80,000 Hours' article on AI risks Share and educate: Help friends and family understand why AI safety matters for everyone by sharing some of our resources or media coverage Consider AI safety careers: Explore technical research, policy work, or advocacy roles through 80,000 Hours or AISafety.info's careers resources Support the movement: Donate to organisations working on AI safety policy, or volunteer your time and expertise

Still have questions?

If you couldn't find the answer to your question, please feel free to contact us.

Contact Us