When Risk Didn’t Disappear, It Just Moved
Posted by Colin Lambert. Last updated: February 9, 2026
In the first of two articles for The Full FX, Jamie Rose argues that FX market participants need to think differently about one of the core elements of their business – the pricing engine.
For most banks, electronic FX now feels operationally mature. Execution is fast, routing is sophisticated, spreads are dynamically managed, and internalisation rates are high. On the surface, the machine works.
Yet scratch beneath that surface and a different picture emerges. Risk has not been eliminated; it has been displaced. Increasingly, it lives inside deterministic systems that are treated as neutral infrastructure rather than active expressions of judgement.
This matters, because the way risk is embedded is just as important as the amount of risk taken.
From Judgement to Machinery
Over the past decade, banks have done exactly what regulators, shareholders and internal governance demanded. Discretion was reduced. Holding periods collapsed. Inventory risk was compressed into seconds or minutes. Internalisation became a primary success metric. The unintended consequence is that judgement did not disappear. It was simply pushed down the stack.
Today, many of the most consequential decisions are no longer made by traders leaning into a view of the market. They are made implicitly by pricing engines deciding which signal fires first, how spreads respond to bursts of activity, or how inventory overlays interact with volatility filters. These are not neutral choices; they encode a worldview about markets, liquidity and risk. The problem is that these choices are rarely examined as such.
Pricing Engines as “Mature Plumbing”
Inside most banks, price construction is treated as a solved problem. It is industrialised, stable, and largely invisible when it works. Teams focus their energy on execution quality, analytics, client pricing and distribution, assuming the core price is sound.
In reality, the pricing engine is the point where market structure, internal risk appetite and external behaviour intersect. It synthesises signals from primary venues, futures markets, correlated assets, internal inventory and volatility models. The sequencing and weighting of those inputs defines the bank’s pricing footprint.
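To make the point concrete, here is a deliberately toy sketch of that synthesis step. Every name, weight and parameter below is invented for illustration; no actual bank's engine is being described. What it shows is that both the weighting of signal sources and the ordering of overlays (skew first, then widen, or the reverse) are judgement calls baked into code.

```python
# Hypothetical illustration only: blend several signal mids into one
# internal mid, then apply overlays in a fixed order. All sources,
# weights and parameters are invented for the sketch.

def construct_mid(signals, weights):
    """Weighted blend of observed mids from each signal source."""
    total_w = sum(weights[s] for s in signals)
    return sum(signals[s] * weights[s] for s in signals) / total_w

def apply_overlays(mid, inventory_mio, vol, skew_per_mio=0.00001, vol_widen=0.5):
    """Apply inventory skew, then a volatility-driven spread.
    The ordering itself encodes a view: skewing before widening
    yields a different quote than widening before skewing."""
    skewed_mid = mid - inventory_mio * skew_per_mio   # lean against inventory
    half_spread = 0.0001 * (1.0 + vol_widen * vol)    # widen with volatility
    return skewed_mid - half_spread, skewed_mid + half_spread

signals = {"primary_venue": 1.08452, "futures": 1.08455, "correlated": 1.08450}
weights = {"primary_venue": 0.6, "futures": 0.3, "correlated": 0.1}

mid = construct_mid(signals, weights)
bid, ask = apply_overlays(mid, inventory_mio=5, vol=2.0)
```

Shifting one weight, or swapping the order of the two overlays, changes the bank's pricing footprint without any single line of code looking "wrong" in isolation.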
When that logic behaves differently in production than it does in design discussions or test environments, the consequences are rarely obvious failures. They show up as subtle P&L drift, unexplained changes in flow quality, or persistent hedge slippage that is hard to attribute.
Predictability is the New Vulnerability
Much of the industry’s effort has gone into speed. Faster ingestion, fewer hops, tighter synchronisation. Speed itself is not the problem – predictable speed is.
When pricing behaviour follows consistent, observable patterns, external participants do not need to know a bank’s internal logic to exploit it. They only need to model the relationship between that bank’s price and the rest of the market. Lead-lag effects, mechanical spread widening, or repeatable sequencing under stress become information signals in their own right.
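A minimal sketch of how that modelling works, using synthetic data: an outside observer does not need the bank's internal logic, only the lagged correlation between the bank's price updates and the wider market. The one-tick lag and the data here are fabricated for illustration.

```python
# Hedged sketch: detecting a lead-lag relationship from returns alone.
# Synthetic data; no real feed or institution is modelled.
import random

random.seed(42)

# Synthetic "market" returns, and a bank price that echoes them one tick later.
market = [random.gauss(0, 1) for _ in range(5000)]
bank = [0.0] + market[:-1]  # bank's update lags the market by one tick

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t+lag]."""
    xs, ys = x[:len(x) - lag or None], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Near-perfect correlation at lag 1 means the bank's next quote is
# predictable from public information -- an exploitable signal in itself.
lead_lag = lagged_corr(market, bank, 1)
```

Once that relationship is stable and observable, trading against it requires no knowledge of the engine at all.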
Ironically, this can leave both faster and slower banks worse off. One becomes a signalling beacon, the other becomes the stale price of last resort. What looks like natural order flow is often nothing more than systematic arbitrage against deterministic behaviour.
Internalisation as a Comfort Blanket
Internalisation has rightly been celebrated. It reduces market impact, improves margins, and supports client execution. But internalisation does not remove risk; it merely transforms it. When more flow is retained internally, the quality of the internal price matters more, not less. Any embedded bias, sequencing flaw or mechanical response is amplified because it now drives internal books, client fills, hedging decisions and downstream models simultaneously.
Treating internalisation as an end state rather than a risk transformation creates a false sense of safety. The risk is still there; it is just harder to see.
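The amplification effect can be put in back-of-the-envelope terms. The figures below are entirely hypothetical, chosen only to show the shape of the relationship: a small, persistent bias in the internal price costs more, not less, as the internalisation rate rises, because more flow fills directly against that price.

```python
# Invented numbers throughout: a rough illustration of how a one-sided
# internal price bias scales with internalisation, under the simplifying
# assumption that externally hedged flow trades at the true market price.

def daily_drift(notional_per_day, bias_pips, internalisation_rate):
    """P&L drift attributable to a persistent internal-mid bias."""
    pip = 0.0001
    return notional_per_day * internalisation_rate * bias_pips * pip

# A hypothetical 0.1-pip bias on $2bn/day of flow, at rising
# internalisation rates:
for rate in (0.5, 0.7, 0.9):
    print(f"internalisation {rate:.0%}: drift ${daily_drift(2_000_000_000, 0.1, rate):,.0f}/day")
```

The per-trade number is small enough to hide inside normal P&L noise, which is exactly why this kind of drift tends to be discovered late, if at all.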
Why This Persists
These issues survive because they rarely cause dramatic failures. Test environments validate correctness, not behaviour. Latency checks confirm speed, not exploitability. Correlation breaks, venue dislocations and regime shifts are difficult to script and expensive to simulate properly.
As a result, the first time many banks truly observe how their pricing behaves is in live flow, under stress, when changing it is both costly and politically difficult. The industry has become very good at managing visible risk. It is less comfortable examining the implicit assumptions encoded in its systems.
A Mindset Problem, not a Technology One
This is not an argument against automation, speed or systematic market making. It is an argument for recognising that pricing engines are not passive utilities. They are expressions of judgement, whether acknowledged or not.
Until banks treat price construction as an examinable discipline rather than background plumbing, risk will continue to migrate into places that feel safe precisely because they are no longer actively questioned.
In the next piece, I will look at what can realistically be adjusted without tearing up existing models, and where small changes in approach can materially improve resilience.