The Full FX Conversation: The Need for Speed – Part Three
Posted by Colin Lambert. Last updated: February 7, 2022
Recent events in financial markets have highlighted the need for the highest possible standards in operational risk. In the third instalment of their conversation, David Faulkner, global head of sales and business development at Fluent Trade Technologies, talks to Colin Lambert about how faster and more accurate data can not only reduce what remains a significant operational risk, but also drive continued growth in trading activity.
Colin Lambert: Previously we have discussed the benefits of ultra-low latency and how e-FX technology is enabling regional and specialist players to differentiate themselves and compete at the top table, but I wanted to look at the benefits across the entire industry and the complete trade process, specifically in how it helps firms manage risk more efficiently.
David Faulkner: Most people understand the importance of data speed and accuracy when making trading decisions, but it’s vital in driving credit and risk calculations – and that is something of importance to every market participant.
The market is highly fragmented, and with execution technologies typically operating separately from risk systems, frankly, most market participants can only detect risk breaches once they’ve happened. In effect, the existing FX ecosystem serves to heighten the operational risks in the business rather than avert risk events, which is what the technology should do.
Where are the bottlenecks creating these potential issues?
Many risk management controls are outsourced to the trading venues, and participants interact with them manually. The risk profiles are static, there is no standard calculation methodology, and there is no way to update risk profiles dynamically, in real time, which is what is needed. Let’s take credit risk management as an example – the current method of credit calculation does not represent the actual risk to the credit provider. Manually updated, static credit lines encourage the practice of over-allocation to minimise potential exhaustions. This just shouldn’t happen.
Stale risk profiles, coupled with the existing over-reliance upon post-trade data, mean that a firm’s risk calculations are often inaccurate. Faster data can change this; it is how most, if not all, risk events can be prevented.
You’re talking standardisation and what sounds like an industry-wide model for risk management – is that accurate?
I certainly think it’s time for the industry to work together to re-imagine and re-define how FX credit and trading risk management capabilities evolve. Participants need to see the entire risk exposure picture from a more holistic point of view if they are going to be able to prevent events, and that means a more open framework.
Combining in-line ‘pre-trade’ credit and trading risk checking capabilities with existing post-trade solutions would ensure real-time knowledge and dynamic control of all risk profiles.
There is an opportunity to combine the best solutions available and create a modern, scalable technology framework that gives complete control and management of credit and trading exposure risk, back where it belongs – with the risk owners, the LPs.
Pre-trade monitoring systems can be lightweight, unintrusive and applied wherever required. They are certainly not reserved for high-frequency activity; the real-time ability to calculate true risk can be incredibly useful in many ways.
At best, slow risk checks can result in missed trading opportunities; at worst, they create potentially serious market risk events – and both happen far too often. As things stand in the industry, the disconnect between siloed risk monitoring and trading technologies means that true, real-time risk control is actually not real-time at all.
I guess a high-profile example, while not FX, of the need for better risk controls, can be found in the Archegos debacle?
Regulators around the world are taking a significantly closer look at how risk managers monitor and control such risks, that’s for sure, and events like Archegos serve to highlight how incredibly important accurate and real-time credit & trading risk calculations are. But it also happens in FX – look at the recent volatility in Turkey and the challenges firms faced because of that.
Recently, controlling trading in the Turkish lira for a firm’s own trading team, or its clients, has been an incredibly difficult and complex task, especially when it must be done across multiple channels. For a prime broker, as an example, it’s extremely complicated.
Let’s say a firm needs to limit maximum order size, or limit transactions in a certain currency or currency pair to ‘reduce position only’. It must apply that restriction, largely manually, across every execution avenue – and that’s if the venues even support these restrictions. The level of risk exposure can grow exponentially, to extremely serious levels, before the firm has had a reasonable chance to prevent it.
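To make the ‘reduce position only’ restriction concrete, here is a minimal sketch of what such an in-line pre-trade check might look like. All names, the rule format and the position data are illustrative assumptions, not any real venue or vendor API.

```python
# Hypothetical sketch of an in-line "reduce position only" pre-trade check.
# Function names, the restriction map and positions are assumptions for illustration.

def reduces_position(net_position: float, side: str, qty: float) -> bool:
    """True if the order shrinks (or at most flattens) the current net position."""
    if net_position > 0 and side == "SELL":
        return qty <= net_position
    if net_position < 0 and side == "BUY":
        return qty <= -net_position
    return False

def check_order(restrictions: dict, positions: dict,
                pair: str, side: str, qty: float) -> bool:
    """Apply any per-pair restriction before the order reaches a venue."""
    rule = restrictions.get(pair)
    if rule == "REDUCE_ONLY":
        return reduces_position(positions.get(pair, 0.0), side, qty)
    return True  # no restriction for this pair: allow the order

restrictions = {"USDTRY": "REDUCE_ONLY"}
positions = {"USDTRY": 5_000_000.0}  # currently long 5 million

print(check_order(restrictions, positions, "USDTRY", "SELL", 2_000_000.0))  # True: reduces
print(check_order(restrictions, positions, "USDTRY", "BUY", 1_000_000.0))   # False: adds risk
```

The point of putting this logic in-line is that the same rule applies identically across every execution channel, rather than being re-keyed manually at each venue.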
A lot of people are talking about how 2022/23 will see more volatility in FX markets generally, so will these issues become more frequent and not just in the odd emerging market?
The existing framework will certainly not help the industry cope with busier markets. The credit allocation and management issues emanating from fragmentation are well known. Manually carving out allocations is, of course, not optimal. Methodologies differ – firms use gross, net, regional or global calculations, some factor in open orders, others scale NDFs and CLS trades – it’s a minefield.
The bottom line is credit exhaustions are frustrating for everyone, and credit breaches due to over-allocation, or worse delayed transaction data, can cause significant problems.
This would be even worse for prime brokers I imagine?
It’s a significant challenge and one that I think the industry is starting to face up to, but, to date, hasn’t done so fully. For prime brokers, the allocation of credit for clients is currently a blunt instrument. As an example, the PB doesn’t really care that a client is 100 million long on one venue and 99 million short on another; it simply cares that the client is net one million long.
The credit allocation carve outs, however, may prevent the client from trading optimally on its chosen venues at that moment, which is unfortunate because technology exists to make all this more intuitive and more dynamic.
Smarter decisions, allied to the smarter allocation of risk profiles, and improved understanding of in-flight risks, can positively affect overall risk exposure, including capital requirements and balance sheet.
So, as an industry, what needs to be done to redress what looks like a serious, but solvable, issue?
Market participants already rely upon quick and accurate data as part of their execution path when they are calculating risk positions; we believe that the credit and trading risk calculations and controls should reside there as well.
Look at credit risk. The lack of automation is a serious problem and has resulted in participants reducing credit availability – even the number of relationships, particularly in prime brokerage. At the moment, the ability to 100% prevent a client from breaching a risk level is restricted to zeroing down credit for the client at each venue – a credit-based ‘kill-switch’, in other words.
There are a number of issues with that. First, and perhaps most obviously, that process is largely manual, so it takes time – it can be hours or even days before it is complete. Secondly, where a credit API does exist, it’s driven by post-trade data, meaning a risk-reducing trade or a credit-netting trade may have already taken place before the firm hits the kill switch – but the firm just doesn’t know about it yet, because it has no idea about any “in-flight” activity. Thirdly, and most importantly, a client can be ‘kill-switched’ from doing anything – effectively meaning they are unable to get out of any positions at all, which creates new risks of its own.
The reality is that kill-switches are flawed and, because of these shortcomings, rarely used; PBs therefore have little, if any, dynamic ability to ensure their clients don’t unintentionally create a risk event.
You’re talking an industry-wide solution then, one that needs buy in from all participants?
Yes, the industry as a whole will benefit from a solution that provides access for all and brings together the best of pre- and post-trade risk check technologies. The industry needs a true API bridge from risk managers’ system calculations that can be applied in real time, at the point of execution when required. This would provide all market participants and venues with the ability to manage credit, order throughput and API behaviour more efficiently, on a fully synchronised global basis, through a dynamic decision framework.
It would be an open framework, but one where each firm can add its risk calculation intellectual property and build in its own checks. A well-designed framework is highly scalable and future-proofed, and will standardise and normalise any number of risk methodologies – including firms’ bespoke approaches. It can also empower ‘surgical’ restrictions that, for example, target a specific currency pair or direction, and be applied down to an individual user at an institution.
Finally, and importantly, the risk framework should only have the ability to stop certain activity when particular thresholds are reached – it needs to be unintrusive until it needs to step in.
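The ‘surgical’, threshold-gated behaviour described above can be sketched in a few lines. This is a minimal illustration under assumed names – the rule structure, fields and user identifiers are hypothetical, not any actual framework’s schema.

```python
# Minimal sketch of a "surgical" restriction: scoped to one user, pair and
# direction, and dormant until its exposure threshold is reached.
# The Restriction structure and all identifiers are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Restriction:
    user: Optional[str]    # None = applies to any user at the institution
    pair: Optional[str]    # None = applies to any currency pair
    side: Optional[str]    # None = applies to either direction
    max_exposure: float    # the rule only bites once exposure reaches this level

    def blocks(self, user: str, pair: str, side: str, exposure: float) -> bool:
        return (
            (self.user is None or self.user == user)
            and (self.pair is None or self.pair == pair)
            and (self.side is None or self.side == side)
            and exposure >= self.max_exposure
        )

# One rule: stop a single trader buying USD/TRY once exposure hits 10 million.
rules = [Restriction(user="trader7", pair="USDTRY", side="BUY",
                     max_exposure=10_000_000.0)]

def allowed(user: str, pair: str, side: str, exposure_after: float) -> bool:
    """Unintrusive by default: only steps in when a rule matches AND its threshold is hit."""
    return not any(r.blocks(user, pair, side, exposure_after) for r in rules)

print(allowed("trader7", "USDTRY", "BUY", 12_000_000.0))   # False: threshold breached
print(allowed("trader7", "USDTRY", "SELL", 12_000_000.0))  # True: other direction untouched
print(allowed("trader8", "USDTRY", "BUY", 12_000_000.0))   # True: rule scoped to trader7
```

Note how every field defaults to “match anything”, so the same structure expresses both a blunt institution-wide limit and a rule narrowed to one trader’s buys in one pair.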
It’s hard to over-emphasise how vital these risk controls are, and how important it is that market participants can access the right information in a timely and efficient manner. A solution that bridges an LP’s existing risk calculation systems to an ‘at-trade’ or pre-trade level – for example, through the consumption of real-time execution reports – creates exactly that ability.
Credit utilisation, position and exposure calculations in real time, and throughput controls to ensure events like off-market transactions don’t happen, are all possible and will make the industry a safer place.
Such a solution can also help the industry grow. As an example, some FX PBs have pulled out of supporting certain client segments – in part because of their inability to control or measure the risks effectively. Others have built their own holistic risk calculation models, sometimes using a VaR (value at risk) principle rather than NOP (net open position). The issue is that venues only offer basic risk calculation methodologies, and that is what participants have to use. It’s outdated, and it does not mitigate the risk.
As was the case with our earlier chats then, effectively what you are saying is that while speed is important in the trading process, it’s in the risk management that the real benefits can be gained?
Getting the right data, to the right systems, in the quickest time possible, within the firm, is important because that way a firm reduces risk levels across the business. But at an industry level, better credit and risk management – thanks to faster data speeds – means that LPs can price their clients more efficiently, at reduced operational risk levels.
There is also a tremendous benefit for the end clients themselves, who will no longer struggle to get sufficient margin or credit to transact in the way they wish. Again, at the PB level, it’s our experience that some PBs are willing to discuss more favourable terms if the client is willing to embrace a different level of risk management, such as a pre-trade check.
Our sense is that the industry recognises that not only is there a need, but there is also the opportunity, to improve the technology framework that supports their Markets businesses. The technology exists, it now needs to be deployed.