The Full FX Conversation…The Need for Speed – Part One
Posted by Colin Lambert. Last updated: July 22, 2021
In the first of a series of articles looking at the benefits of ultra-low latency, David Faulkner, global head of sales and business development at Fluent Trade Technologies, talks to The Full FX about the importance of speed, and how it is democratising the e-FX market.
Colin Lambert: How do you see ultra-low latency technology benefiting the e-FX ecosystem?
David Faulkner: Ultra-low latency technology is transforming the whole value chain within e-FX. Turnkey, commoditised elements of the trading technology stack are enabling markets and participants to operate more efficiently and more effectively.
Data drives every decision throughout the end-to-end trading lifecycle. The speed of data consumption and distribution has a huge impact on the accuracy of that decision making and is felt by every link in the value chain – market makers and takers, venues, intermediaries – and can even make regulators happier.
OK, let’s go into some specifics: how does it help various aspects of the business?
The speed and quality of your data drives many decisions, ranging from principal trading activity, price creation ability, pricing distribution capability, reaction times, trade acceptance ratios, smart algo hedging performance, flow risk management success – the list goes on.
Any delay in processing data, in any of these elements of the trading lifecycle, can render even the most sophisticated strategy or IP almost worthless.
Making decisions is IP, while processing those decisions is purely mechanical – and in our experience, slow mechanical processing can kill the benefits of that IP.
Optimising the mechanical parts of the stack can, and does, supercharge your IP. For example, as an LP the speed of your market connectivity is of course vital, along with your price creation methodology. Once pricing decisions are made the LP wants to distribute that pricing to their end clients as quickly as possible. Naturally, if the pricing decision was based on slower data, the chance of their price being inaccurate is greater. For the end client, the chance of a rejected trade attempt on that price is therefore greater too.
Data processing speed means more accurate data, more accurate pricing, faster price updates, reduced hold times, dramatically improved order acceptance – all resulting in a greater certainty of execution.
That transforms the business of the LPs, the end clients, and any execution avenue or venue.
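To make that causal chain concrete, here is a minimal Python sketch of a last-look style fill check – the Quote structure, prices and tolerance are illustrative assumptions, not Fluent’s implementation. The staler the market data behind a quote, the further its reference mid tends to sit from the current market, and the more often the check fails:

    from dataclasses import dataclass

    TOLERANCE = 0.0002  # maximum tolerated market move before rejecting (illustrative)

    @dataclass
    class Quote:
        price: float             # price streamed to the client
        mid_at_creation: float   # market mid the price was built from

    def accept_fill(quote: Quote, current_mid: float) -> bool:
        """Accept the client's fill only if the market has not moved beyond
        tolerance since the quote was built. Slower input data means a staler
        mid_at_creation, a larger drift, and more rejections for the client."""
        drift = abs(current_mid - quote.mid_at_creation)
        return drift <= TOLERANCE

    # A quote built on fresh data passes; one built on stale data fails.
    fresh = Quote(price=1.18025, mid_at_creation=1.18020)
    stale = Quote(price=1.18025, mid_at_creation=1.17990)
    print(accept_fill(fresh, current_mid=1.18030))  # True  (0.0001 drift)
    print(accept_fill(stale, current_mid=1.18030))  # False (0.0004 drift)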
Something we hear a lot about is big data. Are the challenges increasing with the amount of data available?
They are, but the technology is available to help, off the shelf. What is key to understand is that any latency in data processing compounds quickly. If there is a surge in the amount of available data, you need to be able to handle that throughput. If you can’t, latencies accumulate, which inhibits your ability to get to the latest update. You can’t skip updates and say, ‘OK, we just had 20,000 updates but I only want to look at the last 2,000’ – it doesn’t work like that, you have to process each and every one.
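A back-of-envelope sketch, using purely hypothetical rates, shows how quickly that accumulation bites: whenever updates arrive faster than they can be applied, the update you are working on gets progressively older for as long as the burst lasts.

    ARRIVAL_RATE = 20_000   # market data updates arriving per second in a burst (hypothetical)
    PROCESS_RATE = 15_000   # updates your stack can apply per second (hypothetical)

    def staleness_after(seconds: float) -> float:
        """Age, in seconds, of the update being applied `seconds` into the
        burst. Updates cannot be skipped: each one mutates book state, so
        they must be applied in sequence."""
        if ARRIVAL_RATE <= PROCESS_RATE:
            return 0.0  # keeping up: no backlog forms
        # By time t you have applied PROCESS_RATE * t updates; the one you
        # are on arrived at t * PROCESS_RATE / ARRIVAL_RATE.
        return seconds * (1 - PROCESS_RATE / ARRIVAL_RATE)

    for t in (1, 5, 10):
        print(f"{t}s into the burst: working on data ~{staleness_after(t):.2f}s old")
    # 1s -> 0.25s, 5s -> 1.25s, 10s -> 2.50s: multi-second staleness builds fast.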
Then there is the challenge of optimising the data for your use case. We clean the data as per requirements before presenting it to the client – we also have tools available to help them clean it as well – allowing them to insert their own IP.
How you slice and dice, and interpret, the data is important. That might sound obvious, but there is an art to it and the view that ultra-low latency means high efficiency is becoming more widely accepted. It has become a ‘must have’, rather than a ‘nice to have’.
I understand the point about not being able to skip a lot of updates, but how bad can the backlog get?
We’ve observed delays of multiple seconds for participants outside the Fluent ecosystem while they have processed a backlog of updates. It goes without saying, any decision in today’s markets made on data that is seconds old is completely outdated.
This all speaks to a quicker trade cycle, but to throw in one of my favourite themes, last look surely slows everything up as well?
It can if you need to use it for a market check and you are receiving slow market updates. If you are receiving ultra-low latency updates, you can be assured your system is making pricing decisions based on relevant data – that your price is correct.
Our market data feed eliminates this – to a person, every client of ours has been able to significantly reduce, or eliminate, their hold time, because we have effectively eliminated the need for a market price check.
Not only that, we help a multitude of other risk checks to be performed in single-digit microseconds too – credit checks, position/inventory management, throughput, order structure, and more. In our view, end-to-end price discovery, price creation and distribution, flow checks and end-client trade matching in under 50 microseconds is the new normal.
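Structurally, such inline checks can be sketched as below – every name and limit is an assumption for illustration, not Fluent’s API. The point is that each check is a constant-time comparison against account state maintained off the hot path, which is what makes single-digit-microsecond budgets plausible:

    from dataclasses import dataclass

    @dataclass
    class AccountState:               # maintained off the hot path, read on it
        credit_remaining: float       # pre-netted credit line, in USD
        position: float               # current inventory in the traded pair
        position_limit: float
        orders_this_second: int
        max_orders_per_second: int

    def pre_trade_checks(acct: AccountState, notional: float, qty: float) -> bool:
        """Credit, position/inventory and throughput checks sit in the same
        critical path as the order itself; each is a single comparison."""
        if notional > acct.credit_remaining:
            return False              # credit check
        if abs(acct.position + qty) > acct.position_limit:
            return False              # position/inventory check
        if acct.orders_this_second >= acct.max_orders_per_second:
            return False              # throughput check
        return True

    acct = AccountState(5_000_000, 12.0, 50.0, 180, 500)
    print(pre_trade_checks(acct, notional=1_200_000, qty=1.0))  # True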
So that tells me that broader low-latency checks are just as vital to reducing hold times?
Absolutely. Sophisticated, dynamically driven risk and throughput checks should again be performed in single-digit microseconds to keep trade lifecycles efficient. The key here is the technological design: all checks sit within the same critical path, or are driven and updated via a low-latency API.
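One common way to realise that design – sketched here under illustrative assumptions, relying on the fact that a reference assignment is atomic in CPython – is for the critical path to read a published snapshot of limits, while API-driven updates simply swap in a new snapshot, so the hot path never waits:

    class LimitStore:
        """Hot path calls current(); an API or admin thread calls publish().
        Replacing the whole snapshot in one assignment means readers always
        see a consistent set of limits and never take a lock."""
        def __init__(self, limits: dict):
            self._snapshot = limits

        def current(self) -> dict:
            return self._snapshot           # read on the critical path

        def publish(self, new_limits: dict) -> None:
            self._snapshot = new_limits     # update off the critical path

    store = LimitStore({"max_notional": 5_000_000, "max_orders_per_sec": 500})
    limits = store.current()  # hot path: constant-time, lock-free read
    store.publish({"max_notional": 2_000_000, "max_orders_per_sec": 250})  # limits tightened intraday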
This is all very well for the top end of town, but smaller organisations may not have the resources to do this, surely?
If they try to build the entire stack themselves, probably not. That’s why we believe the solution – and the future – is outsourced processing. We can offer a degree of capability that levels the playing field for all institutions, using a turnkey solution that is a fraction of the cost of hiring a team to design, build and maintain their own tech stack.
Such a solution ensures the whole value chain is kept honest, accountable, performant and efficient.
We provide the mechanical processing in the trade lifecycle, the client can then add their IP. Ultra-low latency connectivity and modular components are commoditised, but some participants dedicate 50% of their resources to managing these areas of the tech stack. Unbundling the data processing and distribution means banks can free up a large proportion of that 50% to focus on true IP.
Ultra-low latency in the end-to-end process means a firm can be more competitive on a sustainable basis. If they want to tweak their IP, for example their pricing algo, they can. If they want to win more RFQs, they can, by staying at the top of the stack in an aggregated liquidity pool for longer – updating faster because their processing speed makes everything more efficient.
With data comes analytics. I suppose being able to handle more data should give a firm more visibility on its service levels?
The analytical tools need to be able to handle the weight of data, absolutely, and again there is a performance aspect to this, because the quicker you can slice, dice and interpret the data – as well as recall it – the quicker you can identify issues.
Add to that the value of being able to store and recall data very quickly, not to mention regulations requiring system and data timestamping to microsecond accuracy – it all points to operational improvement.
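As a small illustration of the timestamping point, the sketch below shows the two clocks involved – production systems would typically discipline the wall clock with PTP and timestamp at the network card, but the division of labour is the same:

    import time

    def event_timestamps() -> tuple[int, int]:
        """(wall_clock_us, monotonic_us): the wall clock supplies the
        regulatory/reporting timestamp; the monotonic clock measures latency
        between two events, because it never steps backwards."""
        wall_us = time.time_ns() // 1_000          # microseconds since epoch
        mono_us = time.perf_counter_ns() // 1_000  # microseconds, monotonic
        return wall_us, mono_us

    t0_wall, t0_mono = event_timestamps()  # e.g. order received
    t1_wall, t1_mono = event_timestamps()  # e.g. order acknowledged
    print(f"event timestamp: {t0_wall} us since epoch")
    print(f"internal latency: {t1_mono - t0_mono} us")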
Interestingly, we have also analysed multiple clients’ businesses and identified periods of latency caused by a change at a third party, for instance an ECN, LP or other trading avenue. Often it is just an oversight when something is changed, but the quicker you pick up on it, the less potential harm to your business.
We have seen dramatic improvements in LP rankings on venues thanks to the performance and analytics, and of course for those LPs’ end clients, they have seen the equivalent in improved pricing and trading volumes.
I understand the argument for an outsourced model, but is there not going to be a natural reluctance to embrace this model due to previous experience? The industry is littered with firms that signed a deal that worked really well for a year or two and then went off a cliff.
The answer is smarter contracts. If you believe in your software and your ability to keep it at the cutting edge at all times by optimising it – this is our IP after all – then the confidence should be there to be more flexible. Fluent Trade Technologies has a 90-day cancellation period – if your needs change, you’re not tied in.
If you have enough clients, your finger is on the pulse of what is cutting edge – we are constantly optimising our technology to ensure that no matter what the clients want to add in terms of their IP, we can handle it without detriment to the speed of processing.
So if that is achieved, the resulting flexibility allows clients to add their IP. That’s a theme here, isn’t it?
It’s the key. We provide an efficient, ultra-low latency technology framework for clients to work on. Functionality can easily be added to good architecture without impacting performance, because that performance is ring-fenced, scalable and battle tested.
Good architecture means a single framework, where all of the end-to-end module components are interlinked, interoperable, and talking to each other. It also means being able to offer clients components – we have some that use one module only, others that use a handful, and some that use all – without making any difference to performance or delivery.
This is important because if there’s a market dislocation, you need visibility and control over your pricing and risk, which means the modules have to be working seamlessly with each other. If there is a dislocation and you get hit on the first price, fair enough, but if you can’t identify the conditions and act to pull your other pricing, that risk builds quickly and dangerously – and not only for your own business; there could be a prime broker involved. PBs generally are providing access and trusting that the client has adequate controls – poor tech at the client could undermine that.
In an exchange world this works because everything is checked at the front door, so to speak, but in an OTC environment, you have thousands of avenues, all of different character, and the software needs to be able to not only look, but act, across all of them, without being intrusive – and without slowing the process down.
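That ‘look and act across all of them’ requirement can be sketched as a single kill switch behind a common interface, with each avenue’s wire-level details hidden underneath – everything here is a hypothetical illustration of the architecture, not a real API:

    from typing import Protocol

    class VenueSession(Protocol):
        """One adapter per ECN, LP or other avenue, each hiding its own
        venue-specific protocol behind the same method."""
        name: str
        def cancel_all_quotes(self) -> None: ...

    def pull_all_pricing(sessions: list[VenueSession]) -> None:
        """On detecting a dislocation, act across every avenue at once.
        Getting hit on the first price is tolerable; leaving stale quotes
        live elsewhere is where risk builds, for the firm and its PB."""
        for session in sessions:
            session.cancel_all_quotes()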