Law and error

AI tools face scrutiny in the joined cases of Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin)

Background

Following on from our recent consideration of the use and impact of artificial intelligence in the context of commercial disputes, the High Court has recently handed down a judgment addressing the inclusion, in legal arguments and witness statements, of fictional citations and quotations suspected to have been created using generative artificial intelligence tools in two cases: Ayinde v London Borough of Haringey ("Ayinde") and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC ("Al-Haroun").

These matters were addressed pursuant to the court's jurisdiction to enforce duties owed by lawyers to the Court, known as the Hamid jurisdiction, and the judgment highlights the significant risks associated with blind reliance on legal research conducted using generative AI tools.

Ayinde v London Borough of Haringey

Ayinde involved judicial review proceedings brought by the Claimant, who was represented by Mr Amadigwe of Haringey Law Centre (and Ms Forey as counsel), against the London Borough of Haringey for failure to provide interim accommodation pending a statutory review of a decision that he did not have a priority need for housing. The grounds for judicial review misstated the effect of a provision of the Housing Act 1996 and also referenced a number of cases which did not exist. The Defendant made a number of requests that the Claimant's solicitors provide copies of the cases in question, to which the Claimant's solicitors responded by email stating inter alia that:

"Admittedly, there could be some concessions from our side in relation to any erroneous citation in the grounds, which are easily explained and can be corrected on the record if it were immediately necessary…[w]e hope that you are not raising these errors as technicalities to avoid undertaking really serious legal research…[i]t appears to us improper to barter our client's legal position for cosmetic errors as serious as those can be for us as legal practitioners."

The Defendant subsequently made an application for a wasted costs order against Haringey Law Centre and Ms Forey on the basis that they had cited fake cases, failed to produce copies of those cases when requested and misstated the effect of section 188(3) of the Housing Act 1996. At the hearing of that application, Ms Forey attempted to explain the errors on the basis that she had "dragged and dropped" the case references from a digital list of cases and their ratios. This explanation was roundly rejected by the judge determining that application, who ordered Ms Forey and the Haringey Law Centre to each pay £2,000 to the Defendant, referred the matter to the Bar Standards Board and the Solicitors Regulation Authority, and also referred the case for consideration under the court's Hamid jurisdiction.

At the Hamid hearing, Ms Forey maintained her denial that she had used generative AI tools when preparing the grounds for judicial review or her digital list of cases. She also refused to accept that her conduct was improper and stated that the underlying legal principles for which the cases were cited were sound, and that there were other authorities that could be cited to support those principles.

Whilst the Court considered that the threshold for initiating contempt proceedings against Ms Forey had been met, it decided not to initiate contempt proceedings for a number of reasons, including the fact that she had been referred to the regulator, was a very junior lawyer and that there were questions as to the adequacy of her supervision. The Court also referred Mr Amadigwe to the regulator.

Hamad Al-Haroun v (1) Qatar National Bank QPSC & (2) QNB Capital LLC

Al-Haroun involved a claim for damages for breaches of a financing agreement brought by the Claimant, who was represented by Mr Hussain of Primus Solicitors, against Qatar National Bank and QNB Capital. The Defendants filed applications to dispute the Court's jurisdiction and to strike out the claim or enter summary judgment (the "Application"). The judge determining the Application dismissed it and also referred the case for consideration pursuant to the Court's Hamid jurisdiction, on the basis that witness statements filed in support of the Application on behalf of the Claimant and Mr Hussain relied on a number of authorities which were either completely fictitious or which, if they existed at all, did not contain the passages supposedly quoted from them, or did not support the propositions for which they were cited.

At the Hamid hearing, Mr Al-Haroun accepted responsibility for the inclusion of inaccurate citations in his statement and confirmed that those citations had been generated using publicly available AI tools, amongst other resources. Mr Hussain accepted that he had relied on his client's research without independently verifying the authorities. Whilst the Court did not consider that the threshold for the initiation of contempt proceedings had been met, it did refer Mr Hussain to the regulator.

The Court's comments on the use of AI to conduct legal research

The Court made a number of critical observations concerning the use of AI to conduct legal research in its judgment, noting that:

  1. "Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research."
  2. "Such tools can produce…coherent and plausible responses [which] may turn out to be entirely incorrect…make confident assertions that are simply untrue…[and] may cite sources that do not exist."
  3. "Those who use artificial intelligence to conduct legal research…have a professional duty…to check the accuracy of such research by reference to authoritative sources before using it in the course of their professional work."
  4. "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused."
  5. "Practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities…and by those with the responsibility for regulating the provision of legal services…[to] ensure that every individual currently providing legal services within this jurisdiction…understands and complies with their professional and ethical obligations and duties to the court if using artificial intelligence."

The Court also made reference to the following, in support of its conclusions:

  1. Existing professional guidance produced by the Solicitors Regulation Authority, Bar Council, Bar Standards Board and judiciary's website regarding the limitations of artificial intelligence and the risks of using it for legal research;
  2. Relevant professional conduct rules governing barristers and solicitors, including their duties not to knowingly or recklessly mislead or attempt to mislead the court or anyone else;
  3. The sanctions that can be imposed by a Court to the extent that lawyers fail to comply with their duties to the Court including but not limited to public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, initiation of contempt proceedings, and referral to the police to consider undertaking a criminal investigation;
  4. The factors that a Court will take into account in deciding whether to implement such sanctions, including but not limited to (i) whether a truthful explanation is provided; (ii) the impact on the underlying litigation; (iii) the steps taken to mitigate the damage; and (iv) the overriding objective of dealing with cases justly and at proportionate cost; and
  5. A number of examples from different jurisdictions of erroneous material being put before a Court that was generated by an artificial intelligence tool.

Significance of the judgment

This case serves as a stark warning to lawyers and clients alike that, notwithstanding (i) the fact that generative AI tools are becoming increasingly sophisticated and embedded in legal practice, and (ii) the recent comments from the Master of the Rolls to the effect that "[we] should not be using silly examples of bad practice as a reason to shun the entirety of a new technology", the output of AI tools cannot be taken at face value. The risks of unverified AI-generated content - particularly hallucinated case citations and misstatements of law - are real and can have severe consequences.

Indeed, a recent focus group study has explored common perceptions of AI among a number of members of the judiciary in the UK and identified additional potential risks associated with the use of AI in the context of judicial decision-making, including the fact that (i) it may develop misconceptions from uninformed user behaviour; (ii) its use of language may not be sufficiently sophisticated and precise for the legal context; (iii) it may give rise to privacy concerns; (iv) AI bias is not yet sufficiently understood; and (v) its use may lead to de-skilling for new judges.

Ultimately, the message is not to shun AI, but to use it responsibly and to be aware of its limitations. Human oversight remains essential, and every AI-generated output must be checked against authoritative sources before being relied upon in the context of legal proceedings. This message should be heeded by all working within the legal profession and, although this case concerned small teams and a publicly funded law centre, it is of equal significance for those staffed on large case teams within well-resourced firms, which may rely heavily on junior lawyer and paralegal support.

As AI continues to evolve, so too must the profession’s approach to training, supervision, and risk management, to ensure that the integrity of the justice system is maintained and public confidence is preserved. We welcome, in this regard, any guidance and/or proposals set out in the consultation paper and final report being prepared by the working group recently set up by the Civil Justice Council to examine the use of AI by legal representatives in preparing court documents, including pleadings, witness statements and expert reports.
