98 changes: 24 additions & 74 deletions src/posts/australia.md
@@ -1,95 +1,45 @@
---
title: PauseAI in Australia
title: PauseAI Australia
slug: australia
description: the Australian chapter of PauseAI
---

## Calling all Australians. We need your help
Within a decade, artificial intelligence could become smarter than humans at almost everything — and able to improve itself without human control. If this happens without strong global safeguards, the consequences could be catastrophic.

AI Summits Need to Take Safety Seriously Again: [sign the petition](https://www.change.org/p/ai-summits-need-to-take-safety-seriously-again)
Australia should help stop that from happening. [Learn more](/australia–detail)

**A message from PauseAI volunteers in Australia:**
## Get involved

By 2030, AI development could be fully automated, producing systems that are self-improving and **smarter than humans at almost everything.** This isn't science fiction—it's the assessment of leading AI companies and researchers. When this happens, every aspect of life will change forever.

**[Join our community](/join)** | [Email us](mailto:australia@pauseai.info) | [Connect on Facebook](https://www.facebook.com/groups/571590459293618) | [YouTube channel](https://www.youtube.com/channel/UCjjMieiOlSFf7jud0yhHQSg) | [LinkedIn](https://www.linkedin.com/company/pauseai-australia) | [Instagram](https://www.instagram.com/pauseaiaustralia/) | [WhatsApp News/Action](https://chat.whatsapp.com/KLg8K9xSgfIJs8GQAHeI5b) | [Events](https://lu.ma/PauseAIAustralia)

### What risks are we facing?

Artificial intelligence is advancing [at an astonishing rate](/urgency). Experts like [Sam Altman](https://time.com/7205596/sam-altman-superintelligence-agi/), [Dario Amodei](https://arstechnica.com/ai/2025/01/anthropic-chief-says-ai-could-surpass-almost-all-humans-at-almost-everything-shortly-after-2027/), and [Geoffrey Hinton](https://en.wikipedia.org/wiki/Geoffrey_Hinton) warn that **AI could surpass human intelligence within the next five years**. Without international cooperation, this could result in economic chaos, war, and even [human extinction](/xrisk).

> "As general-purpose AI becomes more capable, evidence of additional risks is gradually emerging. These include risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI."
>
> – [International AI Safety Report (2025)](https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf), co-authored by 96 experts from 30 countries, including Australia.

### Don't we want AI's benefits?

Sure. Artificial intelligence is already a powerful tool. If AI remains under control, it could be used to cure diseases, drive scientific breakthroughs, and spread opportunity and wellbeing. But it would be tragic to achieve these advances only to then [lose control](/ai-takeover) and suffer catastrophic losses.

> "We seem to be assuming AI will neatly fit into a benign pattern. That assumption only holds to the extent AI is analogous with most of what has come before. And in the circumstances, we'd be wise to examine it far more rigorously before settling on it because there are good reasons to suppose it is a different species altogether, for which history is a poor guide."
>
> – Waleed Aly
>
> [The Age](https://www.theage.com.au/politics/federal/the-treasurer-is-telling-us-to-stay-calm-but-this-could-be-the-time-to-panic-20250807-p5ml5k.html)

New technologies have always brought change, but humans need time to adjust, safeguard, and plan for the future. For any other technology—whether aeroplanes, skyscrapers, or new medications—we insist on expertly designed safety measures before exposing the public to risks. This is not happening with AI.

AI companies are in a race, fuelled by billions of dollars of investment, to build superhuman AI first. When one company succeeds, your life and that of your loved ones will become radically different, and you won't have any say in what this future holds. This isn't just a tech issue—it will affect everyone.

### What can be done?

PauseAI [proposes](/proposal) an international treaty to pause the development of smarter-than-human general AI until there is a credible plan to ensure it is safe. It is in Australia's interest to advocate for this.

> "Who will show leadership on negotiating an AI non-proliferation treaty? It is a collective responsibility and certainly one to which Australia could contribute."
>
> – Alan Finkel, Australia's Chief Scientist (2016–2020)
>
> [Sydney Morning Herald](https://www.smh.com.au/technology/the-ai-horse-has-bolted-it-s-time-for-the-nuclear-option-20230807-p5duel.html)

History shows that smaller countries can make a big difference in solving global problems. Consider the 1982 moratorium on commercial whaling and the 1987 agreement to protect the ozone layer. Australia, once a whaling nation itself, became a leader in protecting ocean life by supporting the ban and even taking Japan to court over its whaling program. Australia also moved quickly to join the agreement phasing out the chemicals that were damaging the ozone layer. By acting decisively and working with other nations, countries like Australia can drive real change worldwide.

### Aren't there more important issues?

We agree that there are many important issues facing Australia, but we won't be able to solve them in a world with uncontrolled AI. Australia should be advocating for an international treaty at the same time as it works on other issues.

### Why isn't anything being done already?

Australian politicians have looked at some of the smaller risks of AI, but rarely acknowledge the big ones.

We acknowledge that not everyone agrees about the risk of an AI catastrophe. We address some of the common objections [here](/faq). We don't claim to be 100% certain, but we think the probability of very bad outcomes is more than high enough to justify a pause.

It is [psychologically difficult](/psychology-of-x-risk) to think about potential catastrophes. Many people assume that the risks are out of their control and therefore not worth worrying about. Yet, anyone can take action right now by speaking up. We think it's better to act than to simply worry.

### How can I help in Australia?

You can make a difference. Volunteers in Australia raise awareness, protest, lobby, and support the global PauseAI movement.
If you’re in Australia, you can help:

- [Join our community](/join)
- [Attend our next Australian online or in-person event](https://lu.ma/PauseAIAustralia)
- [Contact Australian politicians (using this easy tool)](https://www.australiansforaisafety.com.au/advocacy/contact-politicians?utm_source=pauseai-australia)
- [Attend an event](https://lu.ma/PauseAIAustralia)
- [Contact Australian politicians](https://www.australiansforaisafety.com.au/advocacy/contact-politicians?utm_source=pauseai-australia)
- Talk to your friends and family about AI risk
- Donate to support our work via **PayID 85692218938** (not tax-deductible)
- [Or one of these ideas](https://docs.google.com/document/d/18ypsV5GkgiQQc7QitwsrNcaxuBPd_3t7AMuySBPJZMw/edit?usp=sharing)

### Campaigns

#### IABIED Canberra book launch

On 7 October 2025, PauseAI Australia held a book launch and discussion event at Smith’s Alternative bookshop in Canberra to mark the release of [_If Anyone Builds It, Everyone Dies_](https://www.penguin.com.au/books/if-anyone-builds-it-everyone-dies-9781847928931). Laura Nuttall, MLA, and Peter Cain, MLA, joined the discussion and read excerpts from the book.

#### Petition to the House of Representatives
## Connect with PauseAI Australia

In September 2025, [e-petition EN7777](https://www.aph.gov.au/e-petitions/petition/EN7777) to the Australian House of Representatives was open for 30 days and collected 168 signatures. The petition asked the House to legislate that all future frontier artificial intelligence systems must pass rigorous independent safety evaluations, and to advocate proactively for an international treaty to pause frontier AI development until global safety mechanisms are in place. We await an official response from a government minister.
- [australia@pauseai.info](mailto:australia@pauseai.info)
- [Facebook](https://www.facebook.com/groups/571590459293618)
- [YouTube](https://www.youtube.com/channel/UCjjMieiOlSFf7jud0yhHQSg)
- [LinkedIn](https://www.linkedin.com/company/pauseai-australia)
- [Instagram](https://www.instagram.com/pauseaiaustralia/)
- [WhatsApp](https://chat.whatsapp.com/KLg8K9xSgfIJs8GQAHeI5b)
- [Events](https://lu.ma/PauseAIAustralia)

#### Productivity Commission submission
## What we do

In September 2025, PauseAI Australia responded to the interim report on _Harnessing Data and Digital Technology_ with [this submission](https://drive.google.com/file/d/1Ea9I3jXCZAMdGAcN2D-UMRPyE2MzGB7k/view). Volunteers also made individual submissions ([David](https://docs.google.com/document/d/1DenTOorlnqQ02PJEEdRvsceFmfvvfTXxGubozjfx-yE/edit?usp=sharing), [Peter](https://docs.google.com/document/d/1aQcC5DYq3feyWyHAPFGcEvwgrX0vXSbgMNDBEYWA61E/edit?tab=t.0#heading=h.4hsb6c6hjc5f), [Michael](https://drive.google.com/file/d/1lWdtIiLatF1DOPvdjQSaonO9dEqFCFSV/view?usp=drive_link)).
We are the Australian national chapter of the PauseAI movement.

#### Investigate OpenAI
Volunteers across Australia work to:

[In July 2025](https://drive.google.com/file/d/1t9ntUlF2cZH4_f-1fsp0FFCf3RiGZ81g/view?usp=drive_link), volunteer Mark Brown brought OpenAI to the attention of the Australian Federal Police and the Attorney-General of Australia, alleging potential breaches of the _Crimes (Biological Weapons) Act 1976_. The complaint was discussed in a [news story](https://ia.acs.org.au/article/2025/is-the-new-chatgpt-agent-really-a-weapons-risk-.html) and on [a video podcast](https://youtu.be/-YPhNdpA8Rk). We are still waiting for a response from the AFP and the Attorney-General.
- Raise public awareness of AI risks
- Advocate to politicians and policymakers
- Support international coordination for AI safety
- Strengthen the global PauseAI movement

#### Melbourne protest
PauseAI Australia Ltd is an incorporated not-for-profit.

In February 2025, volunteers in Melbourne protested the missed opportunity of the Paris AI Action Summit. The protest received [coverage](https://www.smh.com.au/technology/most-dangerous-technology-ever-protesters-urge-ai-pause-20250207-p5laaq.html) in the Nine newspapers.
[Learn more about our campaigns in Australia](/australia–detail)