Austrac Warns of AI-Driven Money Laundering Risks as Australia Expands Anti-Money Laundering Rules From July 1, 2026

On May 12, Austrac, Australia’s financial intelligence agency, warned that artificial intelligence is raising money laundering risks by enabling criminals to fabricate identities, forge documents, and hide proceeds faster and at greater scale.

Starting July 1, 2026, real estate agents, dealers in precious metals and stones, and trust and company service providers will fall under Australia’s anti-money laundering and counter-terrorism financing rules, with Austrac noting that many of these sectors face high or very high risk of criminal misuse.

Related Articles

Thinking Machines, a startup valued at over $100 million, releases a real-time interactive AI model billed as able to “speak, listen, and execute at the same time”

Thinking Machines, an AI startup founded by former OpenAI executives Mira Murati and John Schulman and valued at more than $100 million, on Tuesday rolled out a preview of its first full-duplex AI model, which can “speak and listen at the same time” with latency as low as 0.4 seconds, challenging the current paradigm of real-time human-AI interaction. (NVIDIA invests in Thinking Machines Lab to deploy Vera Rubin, boosting the performance of frontier models) Thinking Machines’ new model: breaking the old tu

ChainNews Abmedia · 12m ago

Attackers Hijack TanStack, OpenSearch, Mistral Official Pipelines, Push 84 Malicious Versions on May 12

According to Beating's monitoring, on May 12 at 3:20–3:26 UTC+8, attackers affiliated with TeamPCP hijacked the official release pipelines of TanStack, Amazon's OpenSearch, and Mistral, pushing 84 malicious package versions across npm and PyPI. Affected packages include @tanstack/react-router (10M+

GateNews · 30m ago

Ixirpad Partners With Cware Labs to Support AI and Web3 Startups

According to an announcement on May 11, Ixirpad entered into a strategic partnership with Cware Labs to accelerate sustainable infrastructure development in the Web3 industry. Cware Labs, operating as a venture studio, will identify and support high-potential blockchain and AI projects. The

GateNews · 44m ago

Claude Code Agent View: manage concurrent sessions on a single screen

Anthropic on May 11 introduced a new feature for Claude Code called “Agent View,” which consolidates multiple simultaneously running Claude Code work sessions onto a single screen, eliminating the need to switch back and forth between terminal tabs. According to Anthropic’s official blog, the feature is rolling out as a Research Preview and is available for Pro, Max, Team, Enterprise, and Claude API users. A single post on the official X account has received mo

ChainNews Abmedia · 52m ago

Karpathy Endorses HTML Output for Large Language Models, Predicts Interactive Neural Video as Ultimate Form

Andrej Karpathy, OpenAI founding member and originator of the “vibe coding” concept, today endorsed the Claude Code team's approach of using HTML instead of Markdown for large language model outputs. Karpathy outlined an evolution roadmap for AI interaction interfaces: from plain text to Markd

GateNews · 1h ago

Google: Large language models are being used in real-world attacks; AI can bypass two-factor authentication security mechanisms

According to CoinEdition on May 12, Google’s Threat Intelligence Group released a report warning that attackers have used large language models in real-world cyberattacks affecting systems worldwide, and confirmed that hackers have developed a Python-based zero-day exploit capable of bypassing two-factor authentication (2FA) security mechanisms. Google said the related activity is linked to state-sponsored cyberattacks and to the abuse of AI tools within underground hacker networks. Specific Applica

MarketWhisper · 1h ago