Reintroducing Judgement Without Breaking the Machine
Posted by Colin Lambert. Last updated: February 16, 2026
In his second article for The Full FX, Jamie Rose discusses potential solutions to the challenges highlighted in last week’s first article about making e-FX pricing engines operate less deterministically
In my previous piece, I argued that risk has not disappeared from electronic FX; it has migrated into deterministic systems that are rarely examined as active expressions of judgement. The question that follows is an obvious one: what, realistically, can banks do about it?
The answer is not to abandon systematic market making, slow prices down, or recreate discretionary desks from another era – the industry has moved on for good reason. The opportunity lies in making small but deliberate adjustments to how pricing behaviour is defined, tested and governed.
Make Sequencing Explicit, Not Assumed
Most pricing engines fail not because individual signals are wrong, but because their order of precedence is never fully agreed. Traders, quants and technologists often share the same high-level intent but hold different mental models of how the logic actually fires. Inventory overlays versus volatility filters, correlated asset triggers versus stabilisation logic, cadence rules versus spread widening. All of these can be individually correct and collectively inconsistent.
The simplest improvement is also the most uncomfortable – force the organisation to write down, explicitly, what is meant to happen and in what order. Not as code, but as behavioural specification. Once sequencing is explicit, it becomes examinable. Until then, disagreement only surfaces after P&L has already leaked.
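As an illustration only, the idea of an explicit precedence order can be sketched in a few lines of Python. The overlay names, adjustments and numbers below are hypothetical; the point is simply that the firing order lives in one declared list, where it can be examined and agreed, rather than being implied by code layout.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Overlay:
    # one named adjustment to a (bid, ask) quote
    name: str
    apply: Callable[[float, float], Tuple[float, float]]

def volatility_filter(bid, ask):
    # widen symmetrically under (assumed) elevated volatility
    return bid - 0.0001, ask + 0.0001

def inventory_skew(bid, ask):
    # skew both sides down to shed a (hypothetical) long position
    return bid - 0.0002, ask - 0.0002

# The order of precedence is declared once, in one examinable place:
# the volatility filter fires before the inventory skew, by decision,
# not by accident of code structure.
PRECEDENCE: List[Overlay] = [
    Overlay("volatility_filter", volatility_filter),
    Overlay("inventory_skew", inventory_skew),
]

def price(mid: float, half_spread: float) -> Tuple[float, float]:
    bid, ask = mid - half_spread, mid + half_spread
    for overlay in PRECEDENCE:
        bid, ask = overlay.apply(bid, ask)
    return round(bid, 5), round(ask, 5)
```

Reordering the list changes behaviour visibly, which is exactly the property a behavioural specification should have.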
Test Behaviour, Not Just Correctness
Most test environments are designed to answer binary questions. Does the engine run? Does it stay up? Does it meet latency targets? Those are necessary, but insufficient.
What rarely gets tested is how the price behaves when assumptions fail. Correlation breaks, venue dislocations, bursty update regimes, cross-asset lead-lag events – these are precisely the conditions under which deterministic behaviour becomes exploitable, yet they are almost never replayed in a meaningful way.
Behavioural testing does not need to be perfect or exhaustive; it needs to be directional. The goal is not to predict every scenario, but to observe whether the engine reacts mechanically, asymmetrically or predictably when the market stops behaving politely.
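A directional behavioural test might look like the following sketch. The pricing rule is a deliberately simplified stand-in and the tick sequences are made up; the test asserts only the direction of the response under a replayed dislocation, never exact values.

```python
# Stand-in pricing rule (hypothetical): widen the spread with
# realised tick-to-tick movement in the replayed sequence.
def spread_for(ticks, base_spread=1.0):
    moves = [abs(b - a) for a, b in zip(ticks, ticks[1:])]
    realised = sum(moves) / len(moves)
    return base_spread * (1.0 + 10.0 * realised)

# Two replayed regimes: a calm session and a bursty dislocation.
calm  = [1.1000, 1.1001, 1.1000, 1.1001, 1.1000]
burst = [1.1000, 1.1020, 1.0990, 1.1030, 1.0980]

def test_widens_under_stress():
    # directional, not exact: the stressed regime must price wider
    # than the calm one, by any margin
    assert spread_for(burst) > spread_for(calm)
```

The value of a test like this is that it keeps passing across refactors while still catching the one failure that matters: an engine that does not react when assumptions break.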
Rethink Time as the Market Experiences It
A recurring weakness in pricing logic is the implicit assumption that time is uniform. Machine clocks may be synchronised to nanoseconds, but markets do not operate on wall-clock time. Different currency pairs, venues and sessions have different rhythms. Some markets move in smooth flows, others move in bursts. Some respond instantly to futures, others lag until liquidity regenerates.
Dynamic spreads, volatility filters and stabilisation logic that operate purely on elapsed time risk acting on the wrong clock. The result is widening too late, tightening too early, or doing both in ways that are externally predictable.
Treating tick density, update cadence and liquidity regeneration as first-class inputs is not about slowing prices down; it is about aligning behaviour with how information actually arrives.
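One way to make this concrete is an event clock that advances on market updates rather than seconds. The sketch below is an assumption about one possible shape, with hypothetical names and parameters, not a description of any particular engine: stabilisation logic keyed to such a clock waits for the market to regenerate activity rather than for wall time to pass.

```python
class EventClock:
    """A clock that ticks with market activity, not wall time."""

    def __init__(self, ticks_per_unit: int):
        # ticks_per_unit calibrates one "unit" of market time
        # per pair/venue/session (an illustrative parameter)
        self.ticks_per_unit = ticks_per_unit
        self.count = 0

    def on_tick(self):
        self.count += 1

    def elapsed(self) -> float:
        # elapsed "time" in units of market activity, not seconds
        return self.count / self.ticks_per_unit

clock = EventClock(ticks_per_unit=100)
for _ in range(250):   # 250 updates arrive in a burst
    clock.on_tick()

# 2.5 units of event time have passed, however many wall-clock
# seconds that burst actually took
```

Against this clock, a burst compresses "time" and a quiet patch stretches it, which is closer to how the market itself experiences elapsed information.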
Accept That Identical Parameters Do Not Mean Identical Behaviour
Many banks still operate under the assumption that a single global parameter set produces a single global behaviour. In practice, London, New York and Tokyo are three different microstructure environments.
The same volatility filter, inventory threshold or spread curve will behave differently depending on flow mix, venue composition and correlated asset influence. When that is ignored, banks unintentionally operate multiple pricing engines while believing they only have one.
The fix is not radical localisation – it is measurement: explicitly observing how behaviour diverges by region, and deciding where that divergence is acceptable and where it is not.
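Measurement here can be as simple as comparing an observed behavioural metric across regions running the same parameter set, and flagging where it drifts beyond an agreed tolerance. The numbers, metric and tolerance below are purely illustrative.

```python
# Illustrative observations: average quoted spread (in pips) per
# region, all running the same global parameter set.
observed_avg_spread = {
    "London":  1.10,
    "NewYork": 1.15,
    "Tokyo":   1.60,
}

def divergent_regions(observed, tolerance=0.2):
    # flag regions whose behaviour drifts beyond tolerance from the
    # tightest region; the tolerance itself is a governance decision
    baseline = min(observed.values())
    return sorted(r for r, s in observed.items()
                  if s - baseline > tolerance)

# With these numbers, Tokyo diverges beyond tolerance; whether that
# divergence is acceptable is now an explicit decision, not a surprise.
```

The output of a check like this is not a fix in itself; it is the evidence that turns "one global engine" from an assumption into a tested claim.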
Look for Predictability, Not Just Performance
One of the most dangerous failure modes in modern e-FX is good performance for the wrong reason. A pricing engine can be profitable and still be predictable. It can internalise flow efficiently while advertising its own logic to adversarial participants. It can show stable spreads while embedding mechanical responses that are easy to model during stress.
Measuring end-to-end timing across signals, not just outcomes, helps surface this. The objective is not perfection, but surprise. If external participants can reliably anticipate how a price will react, the bank has already ceded ground.
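One simple way to quantify "surprise" is to look at the variability of the engine's end-to-end reaction timing: a near-constant delay between a correlated-asset move and the quote update is easy for adversarial participants to model. The timings and threshold below are illustrative assumptions, not a calibrated metric.

```python
import statistics

# Illustrative measurements: delay (ms) between a futures move and
# the resulting FX quote update, across repeated events.
reaction_ms = [4.9, 5.0, 5.1, 5.0, 4.9, 5.1]

def looks_predictable(delays, cv_threshold=0.1):
    # coefficient of variation of reaction times: a low value means
    # the response is mechanical and therefore easy to anticipate
    cv = statistics.stdev(delays) / statistics.mean(delays)
    return cv < cv_threshold

# A tight cluster around 5ms flags the response as predictable,
# even if the engine is currently profitable.
```

The threshold is a judgement call; the point is that predictability becomes a monitored quantity rather than something discovered by counterparties first.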
Small Changes, Material Impact
None of these adjustments require a rewrite of pricing infrastructure or a wholesale shift in strategy. They require something more subtle – a willingness to treat pricing behaviour as an active risk surface rather than background machinery.
The industry has done an excellent job of suppressing visible risk. The next phase is recognising that invisible, deterministic risk is no safer simply because it is harder to see.
Judgement does not need to replace the machine; it needs to sit alongside it again, deliberately, before the market forces the issue.
Jamie Rose is the founder of Isomiq.