AI Governance in Australia: APRA’s Concerns and Recommendations
Australia’s financial landscape is undergoing a significant transformation with the increasing adoption of Artificial Intelligence (AI) by banks and superannuation trustees. However, recent findings from the Australian Prudential Regulation Authority (APRA) reveal that many financial firms are struggling with proper AI agent governance and assurance practices. This article delves into APRA’s insights, the challenges faced by institutions, and the implications for the future of AI in finance.
- APRA’s Targeted Review Findings
- The Importance of Understanding AI
- Inadequate Scrutiny of AI Risks
- Diverse Applications of AI in Finance
- Gaps in Governance and Monitoring
- Cybersecurity Concerns Linked to AI
- Implementing Robust Controls
- Dependency on AI Vendors
- Evolving Standards for Identity and Access
- Vendor Initiatives in AI Security
- Conclusion
APRA’s Targeted Review Findings
In late 2025, APRA conducted a focused review of selected large regulated entities to assess their AI adoption and related prudential risks. The review found AI integrated into the operations of all entities examined, yet the maturity of their risk management practices varied significantly. While enthusiasm for AI’s potential to enhance productivity and customer experience was evident among boards, a lack of robust AI risk management practices was also noted.
The Importance of Understanding AI
The regulator emphasized that financial sector boards need to cultivate a deeper understanding of AI to facilitate coherent strategy and oversight. APRA stresses that an institution’s AI strategy should align with its overall risk appetite. This involves not just monitoring AI performance but establishing defined procedures to address potential errors when they occur.
Inadequate Scrutiny of AI Risks
One alarming trend identified by APRA was the reliance on vendor presentations and summaries, which often led to insufficient scrutiny of significant risks. Issues such as unpredictable model behavior and the repercussions of AI failures on critical operations received inadequate attention. This raises concerns about operational resilience in a rapidly evolving technology landscape.
Diverse Applications of AI in Finance
The usage of AI in the financial sector is expansive, with entities experimenting in areas such as software engineering, claims triage, and loan application processing. APRA highlighted AI’s role in disrupting fraud and scams, alongside enhancing customer interactions. However, some institutions treated AI risks similarly to those of other technologies without accounting for the unique behavioral attributes of AI models.
Gaps in Governance and Monitoring
APRA pinpointed critical gaps in areas such as model behavior monitoring, change management, and the decommissioning of AI tools. The need for comprehensive inventories of AI implementations and designated ownership for these instances was stressed. Additionally, human oversight in high-risk decisions involving AI is imperative to mitigate potential adverse outcomes.
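APRA does not prescribe a format for these inventories, but the idea can be sketched in a few lines. The following is a minimal, hypothetical illustration; the field names and risk tiers are assumptions for the example, not an APRA-mandated schema.

```python
from dataclasses import dataclass

# Hypothetical inventory entry; the fields and risk tiers here are
# illustrative assumptions, not a regulatory schema.
@dataclass
class AIUseCase:
    name: str            # e.g. "claims triage model"
    owner: str           # designated accountable owner
    risk_tier: str       # e.g. "high" decisions need human oversight
    human_in_loop: bool  # whether a person reviews the model's decisions

inventory = [
    AIUseCase("claims triage", "head-of-claims", "high", True),
    AIUseCase("code assistant", "cto-office", "medium", False),
]

# A basic assurance check: every high-risk use case must have a
# designated owner and human oversight of its decisions.
gaps = [u.name for u in inventory
        if u.risk_tier == "high" and not (u.owner and u.human_in_loop)]
```

Even a registry this simple makes the two gaps APRA names, missing ownership and missing human oversight, mechanically checkable.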
Cybersecurity Concerns Linked to AI
AI adoption has transformed the threat landscape, introducing new vulnerabilities such as prompt injection and insecure integrations. APRA raised alarms regarding outdated identity and access management practices, which often fail to accommodate non-human elements like AI agents. As the demand for AI-assisted software development escalates, significant pressure mounts on change and release controls.
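One way to accommodate non-human identities is to treat each AI agent as a first-class principal with its own short-lived, narrowly scoped credential, rather than letting it reuse a human user's account. The sketch below is a hypothetical illustration of that pattern; the function names and credential fields are assumptions, not a specific product's API.

```python
import time
import uuid

# Hypothetical sketch: an AI agent gets its own distinct identity with
# a short-lived, least-privilege credential. Names are illustrative.
def issue_agent_credential(agent_name: str, scopes: list[str],
                           ttl_seconds: int = 900) -> dict:
    return {
        "subject": f"agent:{agent_name}:{uuid.uuid4().hex[:8]}",
        "scopes": scopes,                          # least privilege
        "expires_at": time.time() + ttl_seconds,   # short-lived by design
    }

def is_valid(cred: dict, required_scope: str) -> bool:
    # Access requires both an unexpired credential and an explicit scope.
    return required_scope in cred["scopes"] and time.time() < cred["expires_at"]
```

Scoping and expiry limit the blast radius if an agent is hijacked, for example via prompt injection, because the stolen credential cannot reach beyond its narrow grant or outlive its short lifetime.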
Implementing Robust Controls
To combat these challenges, APRA recommended that financial entities apply stringent controls on agentic and autonomous workflows. These controls should encompass privileged access management, configuration guidelines, and regular patching processes. Moreover, security testing for AI-generated code must become a standard practice to ensure the integrity and security of newly implemented systems.
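A concrete form such a control can take is a deny-by-default gate in front of an agent's tool calls, with privileged actions requiring separate human sign-off. This is a minimal sketch of that idea; the tool names and function are hypothetical, not from APRA's guidance.

```python
# Hypothetical guardrail for an agentic workflow: the agent may only
# invoke allowlisted tools, and privileged actions additionally require
# explicit human approval. Tool names are illustrative assumptions.
ALLOWED_TOOLS = {"read_account_summary", "draft_email"}
PRIVILEGED_TOOLS = {"transfer_funds", "change_credentials"}

def authorize_tool_call(tool: str, human_approved: bool = False) -> bool:
    if tool in ALLOWED_TOOLS:
        return True
    if tool in PRIVILEGED_TOOLS:
        return human_approved  # privileged access needs explicit sign-off
    return False  # deny by default: unknown tools are never callable
```

The deny-by-default stance matters most for autonomous workflows, where there is no human in the loop to catch an unexpected tool invocation after the fact.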
Dependency on AI Vendors
Another risk highlighted by APRA is heavy reliance on a single provider for multiple AI instances. Many institutions lack exit strategies or substitution plans for their AI vendors, posing a financial and operational risk. AI embedded in upstream dependencies is a further concern, as entities may not know where AI sits within their supply chain or how it affects their operations.
Evolving Standards for Identity and Access
As issues surrounding identity and permission controls gain prominence, new standards initiatives from organizations such as the FIDO Alliance have become essential. FIDO’s Agentic Authentication Technical Working Group is currently developing specifications designed to address the new complexities introduced by agent-initiated interactions in commerce. This reflects a growing recognition that existing authentication models must adapt to accommodate AI-related challenges.
Vendor Initiatives in AI Security
Various vendors have put forth solutions to address these emerging challenges. Notable examples include Google's Agent Payments Protocol and Mastercard's Verifiable Intent framework. In parallel, the Centre for Internet Security has released AI security companion guides covering best practices for large language models, AI agents, and Model Context Protocol (MCP) environments.
Conclusion
The insights from APRA highlight a crucial need for Australia's financial industry to refine its approach to AI governance. As AI technologies continue to evolve and become integral to financial operations, proactive measures in risk management, security, and vendor relationship strategies must be prioritized. The future of finance will rely heavily on how effectively institutions can navigate these challenges while harnessing the transformative potential of AI.