The Australian and U.S. governments, along with other international partners, released guidance on Friday for safely deploying agentic AI systems.
The automation capabilities of AI agents create unique risks that can lead to “productivity losses, service disruption, privacy breaches or cybersecurity incidents,” the guidance document reads. “Organisations must therefore anticipate what could go wrong, assess how agentic AI risk scenarios might affect operations and establish ongoing visibility and assurance to maintain confidence in their agentic AI investments.”
Safely using an AI agent means “never granting it broad or unrestricted access, especially to sensitive data or critical systems,” the document warns. Companies, it says, “should only use agentic AI for low-risk and non-sensitive tasks.”
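The least-privilege advice amounts to giving an agent an explicit allowlist of low-risk tools and refusing everything else. A minimal sketch of that idea (the tool names and dispatcher here are hypothetical, not from the guidance):

```python
# Hypothetical least-privilege tool dispatcher: the agent may only invoke
# tools it was explicitly granted; anything else is refused outright.
ALLOWED_TOOLS = {
    # Low-risk, read-only tasks only, per the guidance.
    "search_docs": lambda query: f"results for {query!r}",
    "summarize_text": lambda text: text[:50],
}

def dispatch_tool_call(tool_name: str, **kwargs) -> str:
    """Run an agent-requested tool call, denying anything not allowlisted."""
    handler = ALLOWED_TOOLS.get(tool_name)
    if handler is None:
        raise PermissionError(f"agent is not authorized to call {tool_name!r}")
    return handler(**kwargs)
```

The point of the pattern is that sensitive operations (file deletion, payments, credential access) simply never appear in the allowlist, so a compromised or confused agent has no path to them.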
The publication — co-issued by the Australian Signals Directorate, the U.S.’s Cybersecurity and Infrastructure Security Agency and National Security Agency and their British, Canadian and New Zealand counterparts — comes as businesses race to integrate AI tools into their workflows and increasingly embrace agentic AI for its ability to automate repetitive tasks.
Organizations evaluating agentic AI tools should understand the range of risks associated with them, the Western governments warned in their guidance document. AI agents are complex systems whose interconnected pieces create “systemic risks,” and they rely on large language models and external data sources that can introduce their own vulnerabilities, such as maliciously crafted web-search results enabling prompt-injection attacks.
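The prompt-injection risk the governments describe arises when an agent treats untrusted retrieved text as instructions. A toy illustration of one partial mitigation, screening retrieved content for instruction-like phrases before it reaches the model (the patterns below are illustrative; keyword screening alone is not a robust defense):

```python
import re

# Illustrative patterns that often appear in injected instructions hidden
# inside web pages or search results. Real attacks are far more varied.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* and instead",
    r"reveal (your|the) system prompt",
]

def looks_injected(retrieved_text: str) -> bool:
    """Flag retrieved text that appears to contain embedded instructions."""
    lowered = retrieved_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

In practice, screening like this is only one layer; the guidance's broader point is that every external data source an agent consumes widens the attack surface.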
“Every individual component in an agentic AI system widens the attack surface, exposing the system to additional avenues of exploitation,” the document says.
The immaturity of AI security standards and the difficulty of applying human-centric governance models to automated technologies also make it harder to shield agentic AI tools from sabotage or malfunction, according to the publication.
The document describes several agentic AI-specific security risks, including abuse of privileges, identity spoofing, unexpected actions and deception. It also lists risks associated with the way AI agents integrate with other tools, including flawed orchestration parameters and corrupted third-party components.
Another section recommends best practices for using AI agents, including strict controls around behavior, strong identity management and robust divisions of labor to prevent cascading failures. Businesses should also conduct regular evaluations of their agents, the document says, including red-teaming exercises and third-party component verification.
Other advice covers continuous monitoring of agentic AI systems, validation of agents’ outputs and human-in-the-loop approval of any actions “where the cost of error is high, such as system resets, network egress or deletion of critical records.”
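The human-in-the-loop recommendation can be reduced to a simple gate: actions whose cost of error is high are held for explicit approval rather than executed automatically. A minimal sketch, using the guidance's own examples of high-cost actions (the function and return values are hypothetical):

```python
# High-cost actions named in the guidance: system resets, network egress,
# deletion of critical records. These require a human sign-off.
HIGH_RISK_ACTIONS = {"system_reset", "network_egress", "delete_critical_records"}

def execute_action(action: str, approved_by_human: bool = False) -> str:
    """Run an agent-proposed action, pausing high-risk ones for approval."""
    if action in HIGH_RISK_ACTIONS and not approved_by_human:
        return "pending_approval"  # queue for a human reviewer instead
    return "executed"
```

Low-risk actions flow through unattended, preserving the automation benefit, while irreversible ones always stop at a person.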
“Strong governance, explicit accountability, rigorous monitoring and human oversight are not optional safeguards but essential prerequisites,” the document warns. “Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains.”