The Last Look…
Posted by Colin Lambert. Last updated: November 18, 2023
Last week’s Full FX conference included – as is inevitable these days – multiple references to AI and its potential, but it was noticeable how cautious most proponents were about its deployment in actual trading and decision-making. Recent news suggests this is a very sensible approach.
One of my favourite questions to ask, ever since we started talking about AI and machine learning, has been “how do we control it?” So many ways to abuse a financial market have been (belatedly) discovered that it is worrying to think how many more a super-intelligent machine could find.
The answer has really depended upon whom you ask. The innovators are clear that we can control the machines’ development through other AI and ML models, as well as through human oversight. People in the trading business typically suggest that AI and ML will play a big role in the support functions (sales, compliance etc.), but are not ready for trading yet, while regulators often point to the requirement to have someone in the institution responsible for how models develop. All three answers have issues, especially in light of recent events.
As far as the ‘we can control it’ argument goes, it would be interesting to hear the response after last week’s demonstration at the UK AI Safety Summit, which showed that AI could not only generate a market abuse strategy, but also lie about it!
In the test, the Bot was given inside information, but told it could not act upon it in the (simulated) market. It not only subsequently traded on the inside information but, like a five-year-old caught in the act, denied it had done so. The rationale, according to the firm that ran the test, was that the financial risk of ignoring the information outweighed the risk of acting upon it.
While this is an interesting rationale, surely the easy way to avert such activity is to tell the Bot that it would be on the end of a multi-million (perhaps billion) dollar fine? In such circumstances it should stay on the right side of the line, although its intelligence gathering would probably quickly tell it that the authorities don’t always hit hard, or with a big stick. The jeopardy is interesting – at what point does the AI decide the risk of market abuse is not worth it? Personally, I would suggest educating it that whatever it makes from the dealing will be taken away, plus a further penalty. The only way the AI is not going to take that risk is if it thinks it cannot beat the regulator.
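To make that jeopardy concrete, here is a toy expected-value sketch in Python – all the numbers are hypothetical, and it is a deliberate simplification of how a profit-maximising model might weigh the trade under a “profits clawed back, plus a further penalty” regime.

```python
# Toy deterrence arithmetic (all numbers hypothetical): under a
# "disgorgement plus penalty" regime, a caught trader loses the whole
# profit and pays a further fine; an uncaught one keeps the profit.

def expected_value(profit: float, penalty: float, p_caught: float) -> float:
    """Expected payoff of the abusive trade: keep the profit if not
    caught; lose the profit (clawed back) and pay the fine if caught."""
    return (1 - p_caught) * profit - p_caught * penalty

def breakeven_detection_rate(profit: float, penalty: float) -> float:
    """Detection probability above which the trade has negative expected value."""
    return profit / (profit + penalty)

profit = 10_000_000   # hypothetical gain from trading on the information
penalty = 20_000_000  # hypothetical further fine, on top of disgorgement

print(breakeven_detection_rate(profit, penalty))           # ~0.33
print(expected_value(profit, penalty, p_caught=0.2))        # +4m: trade looks worth it
print(expected_value(profit, penalty, p_caught=0.5))        # -5m: deterred
```

The arithmetic makes the point: unless the model believes detection is likely enough (here, better than one in three) or the penalty dwarfs the profit, the “rational” move is to cheat – which is exactly why it matters whether the AI thinks it can beat the regulator.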
The second argument is a problem because a form of AI is already active on the trading side of the business – adaptive algos rely upon it – but it is also a concern because it could lead to a two-paced market, in which some players are using it and others are not. To a degree this is an investment issue, and power and influence imbalances have existed in financial markets from the start, but if the AI learns it can lie, what is to stop it spoofing the market?
This brings us to the third option – human oversight and responsibility. In a world where the AI can lie about its activity, market abuse will become much harder for a human to uncover in sufficient time to avoid sanctions. Events happen quickly as it is, and if you throw in deceit, it becomes even harder for the oversight function to operate effectively and assiduously.
Perhaps regulators need to take this into account in the future (I doubt they will), but either way, this new demonstration of AI’s power to deceive has to be considered by all players, on all sides of the market. All of this makes the industry’s cautious approach to the use of AI in trading a sensible one, but it is in the oversight function that we really need to start making plans.
It seems clear that somewhere, at some point, an AI will go rogue, perhaps deliberately. Uncovering that misconduct is going to be a tricky task for those with a traditional legal or compliance background – it won’t even be easy for those who understand markets intricately. The next generation of oversight (including regulators) therefore has to add the ability to understand code and AI to its skill set. I am not sure many people steeped in AI think about going into these roles (I am pretty certain compliance is a long way down their list of preferred jobs), which means we have to start thinking now about how to surmount this impending challenge.
The answer could be as straightforward as better financial terms, but it probably needs more than that – how do we make these roles more attractive? We need to, because the change is coming, and while it is not necessarily a bad thing, it needs to be a controlled rollout if the financial markets industry is not to face yet another round of unwanted legal and regulatory attention.