Anthropic in Australia: High Claude Use, Low Transparency Revealed

The Anthropic Economic Index sample shows Australia accounts for 1.6% of global Claude.ai traffic and has a per-capita adoption index (AUI) of 4.1, more than four times its expected share. That contrast between rapid local uptake and limited public detail about the government pact reframes what Australians should be asking about AI safety, infrastructure and oversight.

What does the Anthropic Economic Index reveal about Claude use in Australia?

Verified facts: Anthropic’s Economic Index data show Australia ranked eleventh in global Claude.ai traffic in the February 2026 sample and sits among the highest per-capita adopters with an AUI of 4.1. Within Australia, New South Wales accounts for 37.2% of conversations and Victoria 30.8%, with Queensland at 17.7% and the remaining states and territories combining for 14%. Adjusting for working-age population, New South Wales has an AUI of 1.20 and Victoria 1.19; every other state or territory falls below an AUI of 1, with the lowest adoption in the Northern Territory and Tasmania. Anthropic reports that 46% of Australian conversations are work-related and 7% are coursework-related.

These figures are paired with operational moves: Anthropic is expanding to Australia, planning a new office in Sydney, and has signed a Memorandum of Understanding with the Australian government to cooperate on AI safety research and to support the goals of Australia’s National AI Plan. The company has also signalled commitments to collaborate with research institutions, participate in safety and security evaluations with the AI Safety Institute, and align future Australian operations with government expectations regarding data centres and AI infrastructure.

Anthropic’s pact: commitments, friction and political stakes

Verified facts: Anthropic chief executive Dario Amodei met with Prime Minister Anthony Albanese to sign the memorandum of understanding. The company agreed to share findings on risks and capabilities of AI, support the local AI ecosystem, and work with the AI Safety Institute on safety evaluations. Separately, Anthropic has filed lawsuits against the US Department of Defense, and the Pentagon has designated the company a supply-chain risk, barring US government contractors from using its technology in military work. At the signing, media access was limited in ways that included restricting news photography at an indoor event.

Analysis (clearly identified): The MOU positions government and company as collaborators on safety research while the legal and security frictions abroad complicate that narrative. A public commitment to cooperate on safety is substantial on paper, but the combination of rapid commercial expansion, limited public detail about operational guarantees, and prior legal conflict with the Department of Defense elevates the need for concrete oversight mechanisms. The concentration of Claude use in professional and tech-heavy workforces suggests substantial private-sector reliance in NSW and Victoria, increasing the domestic stakes for any mismatch between assurance and practice.

Who benefits, who is exposed, and what should be demanded now?

Verified facts: Anthropic has pledged to support Australia’s local AI ecosystem and to ensure its operations align with government expectations for data centres and infrastructure developers. Dario Amodei has publicly emphasised regulation and guardrails for the technology, warning about the risks of sophisticated surveillance and framing AI as a military and strategic concern.

Analysis (clearly identified): Benefits are concentrated among workplaces and sectors that already show high adoption—finance, professional services and tech—which may gain productivity advantages from Claude. Potential exposures include public-sector use, where the Australian Capital Territory shows lower-than-expected adoption despite higher incomes, and critical infrastructure choices tied to data-centre siting and governance. The prior legal dispute with the US Department of Defense and the Pentagon’s supply-chain designation underline that national security considerations are not hypothetical.

Accountability call (grounded in verified facts): The memorandum and expansion create a narrow window for policymakers to convert commitments into enforceable transparency measures. The Australian government and Anthropic should publish the text of the memorandum of understanding, define independent audit and redress mechanisms for safety evaluations carried out with the AI Safety Institute, and specify how data-centre and infrastructure decisions will be governed to protect privacy and national security. Independent review of adoption patterns—using the Anthropic Economic Index metrics already disclosed—should be mandated to track workplace impact and distributional effects across states.

Until those concrete disclosures and oversight arrangements are public, the contrast between the Anthropic Economic Index’s evidence of deep Australian uptake and the limited public detail on governance leaves a gap that the public and Parliament should demand be closed.