AI Legislation Stress Test: Presentation & Advocacy Workshop
Tuesday 26 August 2025 at 9:00 am UTC
📍 Online
Join the Event
62 people registered
After two years of consultations under Ed Husic brought Australia close to AI guardrails, recent events have reset the conversation. With new ministerial appointments, the Productivity Commission's recommendation to pause regulating high-risk AI, and Labor split between "lighter touch" and "comprehensive framework" approaches, the government has called for a "gap analysis" before proceeding.
Good Ancestors delivered exactly that. For their new AI Legislation Stress Test, they worked with 64 experts from top universities, law firms, and AI labs to assess five critical AI threats against Australia's current laws.
The findings reveal critical gaps: while existing regulators can take steps to manage AI risks within their domains, significant gaps remain for national-scale risks from general-purpose AI, which don't fall neatly under any single regulator's remit. The vast majority of experts found current laws inadequate across all five threats, and 4 out of 5 threats were assessed as having a "realistic probability" of causing at least moderate harm (9-40 fatalities or $20m-$200m in costs) within five years.
Most concerning: experts ranked "Access to Dangerous Capabilities" (AI-assisted cyber attacks, bioweapon creation) as their top priority, with over 40% assessing its potential impact as "catastrophic" if left unmitigated.
Join us on Tuesday 26 August at 7pm AEST as Good Ancestors presents these findings and walks through practical advocacy strategies. We'll cover the current state of AI policy within government, how conversations are progressing with different parties, and how to use our new advocacy tools to effectively contact MPs, senators, and government ministers with evidence-based arguments.
This research arrives at a pivotal moment: the government is actively deciding its path on AI regulation. This expert assessment provides the analysis policymakers requested and gives our community concrete evidence to steer the conversation toward the two-pronged strategy the report recommends: empowering existing regulators while developing new legislation for cross-cutting risks.