On 6 October 2025, HM Treasury published the G7 Cyber Expert Group’s statement on Artificial Intelligence and Cybersecurity. While the statement does not set regulatory expectations, it offers timely insights for legal and compliance professionals navigating the intersection of AI and cyber risk in financial institutions.
The statement highlights both the opportunities and threats posed by AI technologies — particularly generative and agentic AI — and urges jurisdictions to monitor developments, foster cross-sector collaboration, and proactively address emerging risks. It reiterates familiar cyber resilience use cases for AI, such as fraud detection, predictive maintenance, and supply chain risk monitoring. At the same time, it warns that AI can amplify existing cyber threats or introduce new ones, especially when vulnerabilities within AI tools are exploited by malicious actors.
Importantly, the statement identifies three financial sector-specific risks:
- Acceleration of cyber exploitation: Malicious actors can use AI to identify and exploit vulnerabilities more rapidly.
- Concentration risk: The financial sector’s reliance on a small number of AI vendors means a cyber incident at a major provider could have systemic consequences.
- Capability gaps: Firms lacking sufficient AI expertise may be disproportionately exposed to emerging cybersecurity risks.
A key quote from the statement underscores the dual nature of the threat:
“AI uptake by malicious actors could increase the frequency and impact of malicious cyber activity. And the increasing complexity and autonomy of AI systems, particularly generative and agentic AI, introduce novel cybersecurity risks.”
Fittingly for the G7, the statement outlines seven key considerations for financial institutions:
- Governance frameworks responsive to emerging AI risks
- Secure-by-design principles for AI systems
- Data lineage and source vetting
- Logging and anomaly detection
- Resilience against AI-enabled fraud
- Updated incident response plans
- Workforce skills and awareness
The G7 encourages financial institutions to update risk frameworks, engage in collaborative research, and promote public-private dialogue to support secure and trustworthy AI adoption, emphasizing a proactive approach to monitoring AI cybersecurity risk. It also highlights the importance of financial authorities collaborating with financial institutions, AI developers, technology firms, academic researchers, and other stakeholders to build a shared understanding of AI-related cybersecurity issues.
This statement should be read alongside the G7’s Fundamental Elements series, which provides further guidance on cyber risk management across jurisdictions. It also aligns with growing UK parliamentary interest in AI oversight. As part of the Treasury Committee’s ongoing inquiry into AI in financial services, the Committee recently wrote to major AI providers seeking views on the regulation and supervision of AI in the UK. Responses, due by 1 October 2025, may help shape future policy direction. The inquiry continues next week, with the financial regulators due to give oral evidence on 15 October 2025.