Platform Performance: Time Equals Money?
Posted by Colin Lambert. Last updated: October 13, 2025
A new white paper from Reactive Markets calls for greater transparency in trading technology, arguing that while principal market participants operate under well-defined standards, the technology that underpins their activities is “largely opaque” and can contribute to degraded execution quality, especially during periods of high volatility.
The paper is authored by Henry Durrant, chief revenue officer at Reactive Markets, who says the firm wants to highlight the role played by internal platform performance and help establish standardised reporting, which will raise transparency in this area and also help prospective clients better understand which provider most meets their needs. “It’s not about necessarily being the fastest provider,” he explains. “It is about overall performance and in particular the outliers, which is when reject rates spike because the tech can’t handle the level of traffic.
“It is also not about offering direct comparisons, because different tech providers optimise for different problems or have a different business model,” he adds. “But if the client has access to the metrics that are important to them, for example what happens to performance when things get busy, they can make a better-informed judgement as to who best meets their needs.”
The paper is very much aimed at the bilateral relationship trading space (Durrant points out that ECNs, for example, operate in a different space, where average metrics can be sufficient as everyone is on the same tech stack), where one size very much does not fit all, and it is for this reason that Reactive Markets is launching the initiative. “Anyone can publish latency metrics, but what are they really reflecting?” Durrant observes. “Every time a client asks you to add a process, for example aggregating market data, this inevitably adds latency – it’s a factor of doing business, but it makes average data largely meaningless.”
Reactive’s proposal in the white paper is for standards around reporting this data. Noting that many trading technology providers operate as “black boxes”, offering little to no insight into their internal performance, it argues that clients must therefore take execution quality on faith, with no independent verification of latency, data processing speeds, or failure recovery mechanisms. “This lack of transparency can lead to inefficiencies, hidden costs, and, in many cases, degraded execution quality, especially during volatile, high data frequency periods,” it states.
The paper argues that industry-wide standards for transparency around performance would create a level playing field and enable fair competition among technology providers, bringing greater confidence in the integrity and reliability of platforms, on the part of the buy side in particular.
The Full FX View
Few will argue the FX Global Code has not been a success in raising standards and establishing baseline expectations for how participants conduct themselves, but the performance of technology is one area that has largely been ignored beyond the high-level governance and operational guidelines. This is why this white paper could benefit the FX industry.
By throwing a spotlight on an area that has rarely been discussed, and even less frequently disclosed to the wider world (currently just three platforms/ECNs publish system performance metrics), this paper has the potential, if not to fill that gap, then at least to provide the basis for doing so. Currently, the GFXC publishes draft templates looking at LP and platform performance, but they are exclusively focused on how orders and data are handled, not on variations in technology performance. This paper could be the start of a process that adds another template to the FX Global Code at some stage.
The crucial elements in the paper for me are the lack of emphasis on outright speed and the focus on the outliers. “Fast enough is good enough” is a phrase used quite often in the industry, reflecting the fact that not only do many clients not need to operate at ultra-fast speeds, often their systems will not enable them to. In the platform space, it is not so much about the fastest it can operate – albeit this is a nice ‘hook’ to promote it – it is about consistency of performance. Can it, for example, operate at, or close to, its fastest in all conditions? Many of us have heard tales of clients complaining to LPs about rejects when the request has not reached the LP due to latency in a platform’s network. At the very least, and to be fair to both sides, this should be reported to the client. It would support the bank-client relationship – which is one of the core tenets of a relationship-based platform – and if nothing else, avert potentially fractious and occasionally embarrassing conversations!
There will be challenges in getting this initiative over the line, not least the workload some platforms’ tech teams are under (and even whether some providers actually have access to such data), but there should at least be an earnest and broad discussion over the practicality of such a development. Ideally this could take place at a regional FX Committee meeting (or even the next GFXC gathering in December), with Reactive (and other relationship-based providers) as guest presenters.
The Code was developed, and has been maintained, as a living, breathing document, to quote one of its founders, and generally it has managed to identify and provide guidance on most of the problems facing the industry. This is not a high-profile problem as such, but in an e-dominated world, tech performance can impact execution quality, and as such, it should be investigated – and be subject to increased transparency.
Standardisation is also vital, which is where the GFXC’s templates have helped, for we need to ensure we are comparing apples with apples. After all, as Mark Twain is credited with saying, “there are lies, damned lies, and statistics”. If we are going to report data, it needs to be properly comparable, which is at the core of this paper.
The paper proposes three main performance metrics for reporting: market data latency, order processing latency and market data throughput. On market data latency, it suggests the metric should be the total time taken from when an LP’s quote enters the network to when it is published to the client. Likewise, order processing latency should be the time taken from when the client order enters the network to when it is transmitted to the LP. Both metrics should be provided in microseconds.
On market data throughput, the paper suggests this be measured in updates per second, covering all quote updates received from LPs across all FIX sessions and data centres, in all currency pairs and products.
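To make those definitions concrete, the sketch below (Python) shows how the two latency figures and the throughput number could be derived from timestamped events. The event structures and field names are assumptions for illustration only; the paper does not prescribe an implementation.

```python
from dataclasses import dataclass

# Hypothetical event records; field names are illustrative, not from the paper.
@dataclass
class QuoteEvent:
    network_in_ns: int       # LP quote enters the platform's network (nanoseconds)
    published_out_ns: int    # quote is published to the client

@dataclass
class OrderEvent:
    network_in_ns: int       # client order enters the platform's network
    sent_to_lp_ns: int       # order is transmitted to the LP

def market_data_latency_us(q: QuoteEvent) -> float:
    """Total time from the LP quote entering the network to publication to the client."""
    return (q.published_out_ns - q.network_in_ns) / 1_000.0

def order_processing_latency_us(o: OrderEvent) -> float:
    """Total time from the client order entering the network to transmission to the LP."""
    return (o.sent_to_lp_ns - o.network_in_ns) / 1_000.0

def throughput_updates_per_sec(quote_count: int, window_seconds: float) -> float:
    """Quote updates per second, across all FIX sessions, data centres, pairs and products."""
    return quote_count / window_seconds
```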
It further recommends that standardised APIs be developed to allow clients to directly access performance data, which can in turn be provided to third parties for independent validation. It notes that Reactive currently uses Corvil Analytics for this purpose.
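The paper does not publish an API specification, so the following is purely a hypothetical sketch of the kind of machine-readable report such an endpoint might return; the field names and figures are invented for illustration and do not come from Reactive or Corvil.

```python
import json

# Entirely hypothetical payload for a standardised performance endpoint; the
# structure and numbers are illustrative only, not taken from the white paper.
example_performance_report = {
    "provider": "ExamplePlatform",  # hypothetical provider name
    "window": {"start": "2025-10-01T00:00:00Z", "end": "2025-10-01T23:59:59Z"},
    "market_data_latency_us": {
        "mean": 45, "p90": 70, "p95": 85, "p99": 140, "p99_999": 900, "max": 1200
    },
    "order_processing_latency_us": {
        "mean": 60, "p90": 95, "p95": 120, "p99": 210, "p99_999": 1500, "max": 2100
    },
    "market_data_throughput_updates_per_sec": {"mean": 250_000, "peak": 1_200_000},
}

print(json.dumps(example_performance_report, indent=2))
```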
While the paper is very much the first step in building industry engagement on the issue, Durrant says Reactive is already planning follow-up papers to maintain momentum. The first step, he observes, is getting the industry talking about the issue and agreeing on what a published metric should reflect.
The key, he explains, is the outliers. “We think the real value is in publishing outliers as well as averages; the more traffic you have, the more likely you are to slow down. We currently publish at the mean, 90th, 95th, 99th and 99.999th percentiles, so it includes all outliers, which is what matters to the buy side. Clearly, the max latency is likely to occur in busy conditions, which is when clients most need performance, so if your max is 10 times your average, that should be reported.”
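As a simple illustration of that kind of outlier-inclusive reporting, the sketch below summarises a set of latency samples at the percentile levels Durrant cites and adds a max-to-mean ratio. The output format and the synthetic data are assumptions for illustration, not Reactive’s published methodology.

```python
import numpy as np

def latency_report(samples_us: np.ndarray) -> dict:
    """Summarise latency samples (microseconds) at the percentiles Durrant mentions,
    plus a max-to-mean ratio as a simple outlier flag. Output shape is illustrative."""
    mean = float(np.mean(samples_us))
    report = {
        "mean_us": mean,
        "p90_us": float(np.percentile(samples_us, 90)),
        "p95_us": float(np.percentile(samples_us, 95)),
        "p99_us": float(np.percentile(samples_us, 99)),
        "p99.999_us": float(np.percentile(samples_us, 99.999)),
        "max_us": float(np.max(samples_us)),
    }
    # If the worst observation is many multiples of the average, that is exactly
    # the kind of outlier the paper argues should be reported, not averaged away.
    report["max_over_mean"] = report["max_us"] / mean
    return report

# Usage: one million synthetic samples with a heavy tail, purely for illustration.
rng = np.random.default_rng(0)
samples = rng.lognormal(mean=4.0, sigma=0.6, size=1_000_000)  # ~55us median, long tail
print(latency_report(samples))
```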
For its part, the paper concludes, “By implementing standardised performance reporting and fostering industry-wide accountability, we can create a more efficient, fair, and trustworthy electronic trading environment. By setting an example in transparency and advocating for industry-wide adoption, we believe that FX market participants will benefit from greater efficiency, improved execution quality, and stronger trust in the technology that powers their trading.”
The full paper can be read here.
