Does Caz Heise support AI safety?
Caz Heise is an Independent candidate for Cowper (House of Representatives).
Caz Heise supports both establishing an Australian AI Safety Institute and implementing an Australian AI Act with mandatory guardrails, positions her campaign has explicitly authorised us to report. Her campaign recognises that "the rapid emergence of AI is having a profound impact on Australia" and emphasises expert-led governance, clear and enforceable standards guided by transparent public consultation, and ensuring that AI technology and research benefit society broadly rather than just private interests. While her campaign directed us to her detailed platform, we found no specific AI policies outlined there, indicating these positions were newly stated in response to our policy questions.
Their score on expert-recommended AI safety policies
Over 356 experts, public figures, and concerned citizens endorsed these policies in their open letter before the 2025 election.
AI Safety Institute
A well-resourced, independent technical body to assess AI risks and advise on safety standards.
Caz Heise supports
Notes: Caz Heise supports establishing an Australian AI Safety Institute, according to her campaign manager, who authorised us to reflect "Caz's support for these measures" in the scorecard. While her campaign directed us to her website for more information, we found no specific mention of AI policy in her published platform. Her position emphasises that "investment in AI research benefits society writ large, not just private interests."
Mandatory Guardrails
A dedicated AI Act with mandatory guardrails for high-risk AI systems will both protect Australians and create the certainty businesses need to innovate.
Caz Heise supports
Notes: Caz Heise supports implementing an Australian AI Act with mandatory guardrails, according to her campaign manager's response. Her detailed platform contains no specific mention of AI regulation. Her position recognises that "the rapid emergence of AI is having a profound impact on Australia" and emphasises the need for "clear, enforceable standards for AI safety, guided by experts and transparent public consultation."