
04 Jun 2024

FCA Market Watch 79 - Is your market abuse surveillance model up to scratch and are your surveillance systems effective?


The FCA have recently published Market Watch 79, focussing on market abuse surveillance failures they have identified and on their recent peer review of firms’ testing of front-running surveillance models.

Under the UK Market Abuse Regulation (UK MAR) regime, firms are required to identify and report instances of potential market abuse. Firms must have effective arrangements, systems and procedures to detect and report suspicious activity. These should be appropriate and proportionate to the scale, size and nature of their business.

Market abuse surveillance failures

The FCA highlight:

  • problems with surveillance alerts not working as intended, sometimes as a result of faulty implementation;
  • bugs being inadvertently introduced when making changes;
  • required data for successful monitoring not having been ingested/inputted; and
  • inadequate testing before and after implementation.

Impacts include:

  • an entire segment of business sent to a particular exchange might not be monitored;
  • an alert scenario could be partially effective, generating alerts, but not in all instances; and
  • an alert scenario for a specific type of market abuse could be completely ineffective.

Failures can occur with both in-house and third-party systems. The FCA give three examples of failures they have observed.

The examples

Their examples include the following scenarios:

  • Firm A adopted a new third-party automated surveillance system to flag any potentially suspicious trading. The insider dealing model required both a significant price move and the release of news. However, Firm A did not undertake the necessary testing and so failed to notice that the automated news feed designed to generate alerts for review was not working. Firm A only identified the issue when the FCA questioned why it had not submitted a suspicious transaction and order report (STOR).
  • Firm B implemented an in-house surveillance model for potential insider dealing in corporate bonds, using a price movement above a defined threshold within a defined period after a trade as the trigger. However, a coding mistake meant that an alert would only trigger if the firm had traded on the day the price moved; no alert was generated if the price moved the following day or later (a simplified sketch of this kind of windowing bug follows these examples). The model was also not compatible with less liquid instruments. Identification of the fault was impeded because the model generated correct alerts in reasonable numbers and of good quality, leading the firm to submit STORs; this created a false sense of security. The problem was discovered when a front office escalation was checked against the surveillance system and no corresponding alert was found.
  • Firm C offered clients direct market access (DMA) to certain venues. Some clients were given direct connectivity to one of these venues (sponsored DMA or SDMA) rather than connecting through Firm C. This trading activity was captured using a private order feed (POF) for inclusion in Firm C’s third-party automated surveillance system. Firm C believed that all POF trade and order data was being sent into the system, but in fact only non-POF trade data was being reviewed. The latter data generated alerts, which likewise gave false comfort that the surveillance system was working properly and capturing all trades.

The FCA indicate that Firm C is not the only firm to have had issues around POF and SDMA data.
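To make the Firm B failure concrete, the sketch below (in Python) shows the kind of windowing bug described. It is a minimal, hypothetical illustration only: the threshold, look-forward window and data structures are our own assumptions, not details of Firm B's actual model. The buggy version checks only the trade date itself, so a qualifying price move on a later day within the window never generates an alert.

```python
from datetime import date, timedelta

# Illustrative assumptions only - not Firm B's actual parameters or design.
PRICE_MOVE_THRESHOLD = 0.05   # e.g. a 5% price move
WINDOW_DAYS = 3               # defined period after the trade

def buggy_alert(trade_date: date, price_moves: dict) -> bool:
    # Bug: only the trade date itself is checked, so a qualifying move
    # on any later day within the window never triggers an alert.
    return abs(price_moves.get(trade_date, 0.0)) >= PRICE_MOVE_THRESHOLD

def corrected_alert(trade_date: date, price_moves: dict) -> bool:
    # Fix: check every day in the defined window after the trade.
    for offset in range(WINDOW_DAYS + 1):
        day = trade_date + timedelta(days=offset)
        if abs(price_moves.get(day, 0.0)) >= PRICE_MOVE_THRESHOLD:
            return True
    return False

# A qualifying move the day after the trade: the buggy model stays silent.
moves = {date(2024, 6, 5): 0.08}
print(buggy_alert(date(2024, 6, 4), moves))      # False
print(corrected_alert(date(2024, 6, 4), moves))  # True
```

As the FCA note, a model like this can still raise plausible alerts in reasonable numbers, which is precisely why such a gap can go unnoticed until an escalation is cross-checked against the alert log.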

FCA peer review of firms’ testing of automated surveillance models

In 2023, the FCA reviewed the frequency and methods used by nine investment banks to test the efficacy of their client order front-running models. The FCA summarise the key findings and recommendations flowing from that review.

The FCA found there were differing approaches to testing, including:

  • breadth of the testing;
  • frequency;
  • the degree to which it constituted a formalised process; and
  • the governance arrangements around it.

Key findings

While most firms had formal procedures describing the frequency of testing, which elements of the model were subject to review and the form that review took, others had only a semi-formalised process or no formal process at all.

Most firms undertook an annual test of some type. The different types of testing were:

  • parameter calibration;
  • model logic;
  • model code; and
  • data (comprehensiveness and accuracy).

About half of the firms focused their reviews mainly on parameter calibration.

Some firms used a risk-based approach, with the frequency of testing dependent on the inherent risk of the relevant market abuse type. Calibration testing was in many cases carried out separately from reviews of logic, coding and data, and was generally more frequent.

The number of surveillance models for client order front running varied, depending upon factors such as the range of asset classes involved and the degree of tailoring of parameters that was applied between and within asset classes.

The FCA's observations

Surveillance arrangements can often be complex, particularly in larger firms, where there is a wide range of:

  • assets traded;
  • actors involved;
  • trading methods;
  • venues accessed; and
  • other (but unnamed) factors.

The FCA say that firms should take all of these into account as part of their market abuse governance processes.

Steps to avoid surveillance failures

The FCA suggest firms may wish to consider the following steps to mitigate these risks:

Data governance

  • What steps are taken to ensure all relevant trade and order data is being captured (see the reconciliation sketch after this list)?
  • Is the data accurate and comprehensive?
  • Is the ownership and management of data clearly defined and understood?
  • Are there measures in place to conduct regular checks and to identify issues if and when they occur?
  • Where issues are identified, can remediation be prioritised, based on (possibly relative) risk?
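One practical way to approach the data capture questions above is a simple daily reconciliation comparing the record counts each upstream feed reports against the counts the surveillance system actually ingested, so that a Firm C-style gap (a feed that is silently not being reviewed) surfaces quickly. The sketch below is illustrative only; the feed names and counts are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FeedCheck:
    feed_name: str
    records_sent: int       # count reported by the source system
    records_ingested: int   # count loaded into the surveillance platform

def reconcile(feeds: list) -> list:
    """Return a description of each feed where ingestion falls short."""
    issues = []
    for feed in feeds:
        if feed.records_sent == 0:
            issues.append(f"{feed.feed_name}: no records sent - feed may be down")
        elif feed.records_ingested < feed.records_sent:
            gap = feed.records_sent - feed.records_ingested
            issues.append(f"{feed.feed_name}: {gap} records not ingested")
    return issues

# A Firm C-style gap: the SDMA private order feed is sending data that
# never reaches the surveillance system.
checks = [
    FeedCheck("DMA order feed", 12_400, 12_400),
    FeedCheck("SDMA private order feed", 3_150, 0),
]
for issue in reconcile(checks):
    print(issue)
```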

Model testing

  • Are governance arrangements around testing sufficiently robust, formalised and optimised to take account of the testing programme applied?
  • Should testing of model scenarios involve parameter calibration, logic, coding or data, or a combination of these?
  • How frequently should testing take place?
  • Is it better to do "light-touch" testing more frequently, or undertake less frequent "deep dives"?
  • Should firms consider a risk-based approach when designing testing policies and procedures, or when selecting models for testing (and the frequency and depth)?
  • Is the testing programme sufficiently robust and effective?
  • When using third-party surveillance systems, can firms independently gain comfort that models are operating as intended (see the scenario-testing sketch below)?
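On the last question, one way to gain independent comfort that a model operates as intended is to push synthetic scenarios with known expected outcomes through the alert logic and flag any mismatch. The sketch below is again hypothetical: the scenarios assume a price-move model of the kind sketched earlier (it can be pointed at the buggy or corrected alert functions from that sketch), and the values are purely illustrative.

```python
from datetime import date

def run_scenario_tests(alert_fn) -> bool:
    """Feed synthetic cases with known expected outcomes through the model."""
    scenarios = [
        # (description, trade date, price moves, expected alert?)
        ("same-day qualifying move", date(2024, 6, 4), {date(2024, 6, 4): 0.08}, True),
        ("next-day qualifying move", date(2024, 6, 4), {date(2024, 6, 5): 0.08}, True),
        ("move outside the window",  date(2024, 6, 4), {date(2024, 6, 12): 0.08}, False),
        ("move below the threshold", date(2024, 6, 4), {date(2024, 6, 5): 0.01}, False),
    ]
    all_passed = True
    for name, trade_date, moves, expected in scenarios:
        result = alert_fn(trade_date, moves)
        all_passed = all_passed and (result == expected)
        print(f"{'PASS' if result == expected else 'FAIL'}: {name}")
    return all_passed

# Run against the earlier (hypothetical) models: the buggy version fails the
# "next-day qualifying move" scenario; the corrected version passes all four.
# run_scenario_tests(buggy_alert)
# run_scenario_tests(corrected_alert)
```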

Model implementation and amendment

  • What form of testing is undertaken before introducing new surveillance models or amendments?
  • Is this testing sufficiently formalised and robust (but not so onerous as to hinder swift action to implement, modify, recalibrate and fix surveillance models)?
  • Is regression testing undertaken when changes are made to other systems that might adversely affect market abuse surveillance systems (see the regression sketch below)?
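Regression testing of the kind the final question describes can be as simple as re-running a fixed historical sample through the amended model and comparing the alerts raised against a stored baseline. The sketch below is a minimal, hypothetical illustration; the function signature, event fields and baseline file format are assumptions.

```python
import json
from pathlib import Path

def regression_check(model_fn, sample_events: list, baseline_path: Path) -> bool:
    """Re-run a fixed historical sample and compare against the stored baseline.

    model_fn takes one event dict and returns True if it should alert;
    the baseline file holds a JSON list of event ids expected to alert.
    """
    current = sorted(e["id"] for e in sample_events if model_fn(e))
    baseline = json.loads(baseline_path.read_text())
    missing = set(baseline) - set(current)     # alerts the change has lost
    unexpected = set(current) - set(baseline)  # alerts the change has added
    if missing or unexpected:
        print(f"Regression: {len(missing)} alerts lost, {len(unexpected)} new")
        return False
    return True
```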

The FCA say these examples, alongside existing UK MAR guidance, provide a useful framework for firms to implement best practice and ensure compliance. They encourage all firms to study the FCA's observations and consider whether modifying their testing arrangements would be useful.

Commentary

While many firms will already be taking steps to ensure their market surveillance models and processes are fit for purpose, both the examples the FCA give in this Market Watch and the number of FCA enforcement cases we have seen over the years in this area suggest that at least some firms have more work to do. The FCA's suggested steps are a good checklist that firms would be well advised to apply to their own models and systems. Furthermore, for firms buying in third-party automated surveillance models, it is particularly important that those models are properly calibrated to the firm's business.

Authors

David Capps and Cara Haslam
