Insights from RiskMinds International – 2025

The most sobering observation from RiskMinds International 2025 wasn’t the escalation of geopolitical tensions or the relentless march of agentic AI. It came in the opening address, when Lord Darroch noted that ‘we’re in the most unpredictable and chaotic situation in a generation’. Yet most risk functions are responding by automating processes designed for a world that no longer exists. Annual risk limit review and setting. Static control frameworks. Pretty dashboards. Risk-type silos that miss the compounding effects of interconnected threats. The uncomfortable truth is that the tools we’ve relied on for a generation are increasingly inadequate for the velocity of change we now face.

Throughout the sessions, speakers described a world moving from rules-based to power-based order, where geopolitical decisions change overnight and technological disruption creates competitive advantages measured in weeks, not years. Against this backdrop, the gap between organisational pace and environmental pace becomes stark. One CRO articulated what many were thinking: ‘It’s not a once-a-year board meeting where it’s theoretical. It happens and changes every day. It’s a constant thing that needs to be monitored.’

But as a risk management community, what has our response looked like? Largely, to automate what we already do. There has been heavy investment in digitising RCSAs, streamlining control testing, and automating operational resilience monitoring. These improvements deliver productivity gains, certainly. And for some organisations they have genuinely strengthened the control environment. But they also risk entrenching processes built for a different era. Session polling revealed that key challenges persist despite automation efforts: fragmented, manually intensive processes, the burden of regulatory remediation, a lack of first-line engagement, and poorly designed GRC systems. Faster execution of outdated processes doesn’t solve strategic inadequacy.

Consider these fundamental questions: why are we assessing our risk exposures with weeks-old data when market conditions can shift in hours? Why do we rely on annual risk profiles when the threat horizon is continuously evolving? Axel Weber, former Chair of UBS, quoted Mark Twain – ‘history never repeats, but it rhymes’ – yet our response mechanisms haven’t evolved to match these new manifestations. The currency of our risk intelligence is depreciating faster than we’re updating our processes.

Perhaps the most telling gap in current risk management practice is our continued reliance on self-assessments as a primary mechanism for assessing risk. RCSAs, control assessments and attestations, risk registers – they are all fundamentally dependent on individuals’ judgement (and understanding) about likelihood and impact, typically expressed through crude heat maps or 9×9 matrices.

The problems are structural. Self-assessments are inherently biased toward optimism – making your area look bad requires more work, more scrutiny, more uncomfortable conversations, and demands on ever-stretched budgets. They’re constrained by the assessor’s capability and understanding – you can’t assess risks you don’t comprehend or haven’t encountered. And they create perverse incentives: the penalty for honest, conservative assessment is additional oversight; the reward for optimistic ratings is being left alone.

Conference discussions revealed growing recognition of these limitations. One participant noted bluntly: ‘it’s way too crude to ask someone to put a point in a 9 by 9 matrix’. Another observed that likelihood-impact frameworks don’t actually tell you what you need – ‘it’s triage at best’. The consensus was clear: self-assessments provide the illusion of risk visibility without the substance.

Yet we continue investing in automating these processes. Better RCSA platforms. Streamlined control testing workflows. Digital risk registers. We’re making it easier to execute a methodology that may no longer hold. What we need instead is to move from subjective assessment to objective data and evidence. Can we measure actual control effectiveness through telemetry rather than asking control owners to rate themselves? Can we use transaction data, system logs, and behavioural signals to understand exposure rather than relying on annual workshops? As one executive noted, ‘there’s wealth of data in there’ – but we’re not designing processes to extract and analyse it.
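To make the idea of measuring control effectiveness through telemetry concrete, here is a minimal sketch in Python. It scores a hypothetical privileged-access control directly from observed events rather than from an owner’s self-rating; the event fields and data are illustrative assumptions, not any real system’s schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative: score a control from raw telemetry instead of a self-rating.
# The event layout and sample data are hypothetical.

@dataclass
class AccessEvent:
    user: str
    timestamp: datetime
    approved: bool             # did the event carry a valid approval record?
    reviewed_within_sla: bool  # was it reviewed inside the agreed window?

def control_effectiveness(events: list[AccessEvent]) -> dict:
    """Measure a privileged-access control from every observed event."""
    total = len(events)
    if total == 0:
        return {"coverage": 0.0, "approval_rate": 0.0, "review_rate": 0.0}
    approved = sum(e.approved for e in events)
    reviewed = sum(e.reviewed_within_sla for e in events)
    return {
        "coverage": 1.0,                    # full population, not a sample
        "approval_rate": approved / total,  # share with valid approval
        "review_rate": reviewed / total,    # share reviewed within SLA
    }

events = [
    AccessEvent("a.smith", datetime(2025, 6, 1), True, True),
    AccessEvent("b.jones", datetime(2025, 6, 2), True, False),
    AccessEvent("c.wu", datetime(2025, 6, 3), False, False),
]
print(control_effectiveness(events))
```

The point of the sketch is the shift in evidence base: the metrics come from the full event population in the logs, so the answer to ‘is this control working?’ no longer depends on the control owner’s optimism.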

The shift required is from self-assessment as foundation to data-driven risk intelligence with self-assessment relegated to supplementary context. This isn’t about eliminating human judgment – it’s about grounding that judgment in evidence rather than optimistic self-perception. Organisations that make this transition will have materially different visibility into their actual risk profile compared to those still relying on the self-assessment theatre that passes for risk management in too many organisations.

The enthusiasm for AI in risk management is understandable. Agentic systems could continuously monitor controls, detect patterns humans miss, and provide real-time risk intelligence. GenAI could transform horizon scanning, policy interpretation, and scenario generation. The potential is genuine – but it rests entirely on a foundation most organisations haven’t built: impeccable data quality, lineage, and governance.

Conference discussions revealed this as a persistent challenge. ‘Data leakage is a very big risk’ one CRO noted. Another observed: ‘crap in and crap out, AI won’t solve your data warehouse.’ A third emphasised: ‘all of these things hinge on data and its accuracy… hardest thing is to drive data-driven transformations.’ The pattern was consistent: organisations are piloting AI use cases while simultaneously acknowledging that their data foundations are insufficient at best.

Consider what robust risk data actually requires: complete lineage so you understand data provenance and transformations; consistent taxonomies across systems and business units; quality controls that ensure accuracy and completeness; appropriate context so data can be interpreted correctly; and governance frameworks that manage access, privacy, and appropriate use. Speakers described ‘data families rather than lakes and warehouses’ and the need for ‘federated data models with continuous data lineage’. The complexity is substantial, the investment significant, and the organisational change challenging.
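The requirements above – completeness, consistent taxonomies, quality controls, lineage – can be made tangible with a small sketch of automated data-quality gates run before data feeds any model. The record layout, taxonomy values, and field names here are hypothetical.

```python
# Illustrative data-quality gates: completeness, taxonomy consistency,
# and a minimal lineage check. Layout and taxonomy are hypothetical.

RISK_TAXONOMY = {"credit", "market", "operational", "compliance"}
REQUIRED_FIELDS = {"id", "risk_type", "value", "source_system"}

def quality_report(records: list[dict]) -> dict:
    """Count records failing each gate and compute an overall pass rate."""
    issues = {"incomplete": 0, "off_taxonomy": 0, "no_lineage": 0}
    for r in records:
        if not REQUIRED_FIELDS <= r.keys():
            issues["incomplete"] += 1        # required field missing entirely
        elif r["risk_type"] not in RISK_TAXONOMY:
            issues["off_taxonomy"] += 1      # label outside the shared taxonomy
        elif not r["source_system"]:
            issues["no_lineage"] += 1        # provenance unknown
    failed = sum(issues.values())
    issues["pass_rate"] = 1 - failed / len(records) if records else 0.0
    return issues

records = [
    {"id": 1, "risk_type": "credit", "value": 1.2, "source_system": "ledger"},
    {"id": 2, "risk_type": "cyber", "value": 0.4, "source_system": "grc"},
    {"id": 3, "risk_type": "market", "value": 2.1, "source_system": ""},
    {"id": 4, "risk_type": "operational", "value": 0.9},  # no lineage field
]
print(quality_report(records))
```

Even this toy version shows why the work is unglamorous: most of the effort sits in agreeing the taxonomy and required fields across business units, not in the code itself.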

Yet without this foundation, AI in risk management becomes another source of false confidence. Models trained on incomplete data miss exposures. Systems that lack data lineage can’t explain their conclusions. Analytics built on poor-quality inputs generate precise-looking outputs that are fundamentally unreliable: ‘models are roughly right but mostly wrong’. One executive captured the Catch-22: organisations know they need better data to use AI effectively, but they’re under pressure to deploy AI quickly to remain competitive, which drives shortcuts that compromise the data foundation that AI requires.

This creates a critical inflection point. Organisations investing in data foundations now – even though it’s expensive, unglamorous work that delivers no immediate headlines – will be positioned to realise AI’s potential in risk management. Those deploying AI on shaky data infrastructure are building elaborate capabilities on unstable ground. The gap between these two groups will compound over time, creating meaningful differences in actual risk visibility regardless of how sophisticated the AI appears. These differences can create longer-term competitive advantage.

A consistent theme emerged across sessions: we can no longer consider risks in isolation. Geopolitical uncertainty affects credit obligors. Cyber attacks create liquidity risks when depositor confidence erodes in milliseconds. Technology failures intersect with operational resilience, which compounds into reputational damage and manifests in balance sheet impacts.

Yet most organisations remain structured by risk type – credit, market, operational, compliance – with specialists working in parallel rather than in concert. The conference discussions revealed banks beginning to recognise this gap, with efforts to integrate non-financial and financial risk, but the consensus was clear: we need people who can take an integrated approach to understanding risks and drawing conclusions across domains. As the tails get fatter and interconnections multiply, the ability to connect dots becomes more valuable than deep expertise in any single risk class.

This integration challenge extends beyond risk functions. Multiple speakers emphasised convergence across risk, resilience, and strategy. One CRO described his evolving role as a ‘strategy integrator’ and as the ‘first line of providing foresight’, ensuring risk considerations inform strategic decisions from day one. Another positioned the CRO as moving from ‘head of the department of no to head of the department of how’. The implication is profound: risk management must shift from gate-keeping to strategic enabling, from reactive assessment to proactive foresight. The question isn’t whether your risk function can say no effectively – it’s whether it can help the organisation identify which opportunities to pursue and how to pursue them within acceptable risk boundaries.

If the operating environment has fundamentally changed and risks are deeply interconnected, what becomes the organising principle for decision-making? Conference discussions repeatedly circled back to risk appetite as the fundamental anchor across all parts of the business. As one participant observed, ‘risk appetite will set the boundaries for what we do, not regulation, as we move through a deregulatory agenda’.

The insight here is critical. In a world of accelerating uncertainty, clear risk appetite boundaries enable speed. They provide the framework for autonomous decision-making at pace, particularly as organisations deploy agentic AI and other technologies that require clear parameters. One bank described using risk appetite to help first-line teams decide focus areas, essentially translating strategic intent into operational guidance. Another noted that for some products and regions, regulation is no longer the tightest constraint: ‘risk appetite is finally becoming the real guardrail for us’.

The honest assessment is that most organisations haven’t made this transition. Risk appetite statements remain static documents reviewed annually, rather than dynamic frameworks integrated into daily decision-making. The gap between aspiration and implementation is an opportunity: organisations that genuinely embed risk appetite as an active decision framework gain advantage in environments where speed and clear boundaries become competitive differentiators. But this requires fundamentally reconceiving risk appetite from artefact and limit to strategic imperative.
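What ‘risk appetite as an active decision framework’ could mean in practice can be sketched in a few lines: appetite thresholds expressed as machine-checkable limits that a proposal is tested against at the point of decision, not at the next annual review. The thresholds, metric names, and figures below are hypothetical and far simpler than any real framework.

```python
# Sketch: risk appetite as an in-flow guardrail rather than a static
# annual document. All limits and names are hypothetical.

APPETITE = {
    "single_name_limit": 50_000_000,  # max exposure to any one obligor
    "country_share_limit": 0.15,      # max share of the book in one country
}

def check_proposal(exposure: float, obligor_current: float,
                   country_book: float, total_book: float):
    """Return (approved, breaches) for a proposed new exposure."""
    breaches = []
    if obligor_current + exposure > APPETITE["single_name_limit"]:
        breaches.append("single_name_limit")
    new_share = (country_book + exposure) / (total_book + exposure)
    if new_share > APPETITE["country_share_limit"]:
        breaches.append("country_share_limit")
    return (not breaches, breaches)

# A deal that fits one limit can still breach another:
ok, why = check_proposal(
    exposure=10_000_000, obligor_current=45_000_000,
    country_book=100_000_000, total_book=1_000_000_000,
)
print(ok, why)
```

The design point is that the check runs before the decision and names the breached boundary, which is what lets first-line teams move quickly inside appetite rather than waiting for after-the-fact limit reports.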

Have you ever seen a room of excited risk managers? No topic generated more energy than agentic AI and its implications for risk management. The paradox is striking: AI represents both the acceleration mechanism that’s outpacing current risk processes and potentially the solution for keeping pace with that acceleration. Almost all surveyed banks have deployed GenAI, with about half having several use cases in production.

The challenge isn’t just managing the risks of AI itself – model drift, data leakage, systemic dependencies on concentrated providers, the potential for algorithmic correlation creating new systemic vulnerabilities. It’s also about reimagining risk processes for an AI-enabled environment. Traditional point-in-time validation doesn’t work for continuously learning systems. Risk-based governance frameworks designed for static controls struggle with adaptive agents that make autonomous decisions. And the cognitive shift required is substantial: as AI handles operational work, human risk managers need different capabilities – critical thinking across risk types, the curiosity to question ‘what if’ and ‘why’, and the lateral thinking that connects seemingly unrelated dots.
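As one concrete example of the continuous monitoring that replaces point-in-time validation, here is a sketch using the Population Stability Index (PSI), a standard measure of distribution drift. The binned score distributions and the escalation threshold are illustrative assumptions.

```python
import math

# Sketch: continuous drift monitoring for a continuously learning model.
# PSI is a standard drift measure; the distributions here are illustrative.

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each a list of proportions)."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at last validation
today    = [0.05, 0.15, 0.30, 0.50]   # distribution observed in production

drift = psi(baseline, today)
# A common rule of thumb treats PSI above 0.25 as material drift to escalate
print(f"PSI = {drift:.3f}, escalate = {drift > 0.25}")
```

Run daily against production scoring data, a check like this turns ‘the model was validated last year’ into ‘the model’s behaviour today still resembles what was validated’ – the shift from periodic to continuous assurance the speakers described.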

The real opportunity, as several executives noted, is process redesign rather than process overlay. Don’t take AI and throw it on top of existing workflows; redesign the entire end-to-end process, then determine where AI creates value. This includes thinking about how agentic systems could transform traditionally manual, low-value activities – horizon scanning, policy interpretation, regulatory change management, scenario generation – while preserving human judgment for strategic integration and decision-making. One participant described the ideal future state as ‘fading the framework to the background’ – making risk management so integrated into workflow that it feels natural rather than burdensome. AI enables this, but only if we reimagine the work fundamentally.

RiskMinds left little doubt: uncertainty isn’t a temporary condition to manage through but a permanent feature to design for. This requires more than incremental improvement or selective automation. It demands fundamental reconception of how risk functions operate, what capabilities they prioritise, and how they create value.

For boards and executive teams, the strategic questions become:

  • Does your risk function provide genuine foresight or primarily backward-looking assurance?
  • Can your risk appetite framework enable rapid decision-making, or does it operate as after-the-fact limit monitoring?
  • Are you developing risk integrators who can connect dots across domains, or optimising specialists within silos?
  • Are you investing in the data foundations that make AI effective, or deploying AI on inadequate infrastructure?

And perhaps most critically: are you automating the processes that served the last generation’s risks, or reimagining capabilities for the velocity of change ahead?

Organisations that navigate this transition successfully will likely share several characteristics: risk appetite embedded as an active operating framework, not static documentation; risk teams structured around integration and foresight rather than risk-type specialisation; data-driven risk intelligence rather than self-assessment theatre; continuous monitoring rather than periodic assessments; and deliberate capability building for critical thinking, scenario imagination, and cross-domain synthesis.

These aren’t technology problems – they’re design problems. The tools exist. The question is whether leadership will make the difficult choices to deploy them against fundamentally reimagined processes rather than incrementally improved legacy approaches.

The most uncomfortable insight is not that the world has become more unpredictable. It’s that many organisations know their current approaches are inadequate but continue investing in automation of those approaches rather than transformation toward what’s needed. The window for genuine reinvention remains open: organisations that move decisively to build adaptive, integrated, data-driven, strategically-oriented risk capabilities will create competitive advantages that compound over time. Those that remain anchored to risk management paradigms from a more stable era will find the velocity of uncertainty increasingly difficult to navigate – regardless of how efficiently they execute outdated processes.