Following on from our recent consideration of the use and impact of artificial intelligence in commercial disputes, the High Court has handed down a judgment addressing the inclusion of fictitious citations and quotations, suspected to have been created using generative artificial intelligence tools, in legal arguments and witness statements in two cases: Ayinde v London Borough of Haringey ("Ayinde") and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC ("Al-Haroun").
These matters were addressed pursuant to the court's jurisdiction to enforce duties owed by lawyers to the Court, known as the Hamid jurisdiction, and the judgment highlights the significant risks associated with blind reliance on legal research conducted using generative AI tools.
Ayinde involved judicial review proceedings brought by the Claimant, who was represented by Mr Amadigwe of Haringey Law Centre (and Ms Forey as counsel), against the London Borough of Haringey for failure to provide interim accommodation pending a statutory review of a decision that he did not have a priority need for housing. The grounds for judicial review misstated the effect of a provision of the Housing Act 1996 and also referenced a number of cases which did not exist. The Defendant made a number of requests that the Claimant's solicitors provide copies of the cases in question, to which the Claimant's solicitors responded by email stating inter alia that:
"Admittedly, there could be some concessions from our side in relation to any erroneous citation in the grounds, which are easily explained and can be corrected on the record if it were immediately necessary…[w]e hope that you are not raising these errors as technicalities to avoid undertaking really serious legal research…[i]t appears to us improper to barter our client's legal position for cosmetic errors as serious as those can be for us as legal practitioners."
The Defendant subsequently applied for a wasted costs order against Haringey Law Centre and Ms Forey on the basis that they had cited fake cases, failed to produce copies of those cases when requested and misstated the effect of section 188(3) of the Housing Act 1996. At the hearing of that application, Ms Forey attempted to explain the errors on the basis that she had "dragged and dropped" the case references from a digital list of cases and their ratios. This explanation was roundly rejected by the judge determining the application, who ordered Ms Forey and the Haringey Law Centre to each pay £2,000 to the Defendant, referred the matter to the Bar Standards Board and the Solicitors Regulation Authority, and also referred the case for consideration under the court's Hamid jurisdiction.
At the Hamid hearing, Ms Forey maintained her denial that she had used generative AI tools when preparing the grounds for judicial review or her digital list of cases. She also refused to accept that her conduct was improper and stated that the underlying legal principles for which the cases were cited were sound, and that there were other authorities that could be cited to support those principles.
Whilst the Court considered that the threshold for initiating contempt proceedings against Ms Forey had been met, it decided not to do so for a number of reasons, including that she had already been referred to the regulator, that she was a very junior lawyer, and that there were questions as to the adequacy of her supervision. The Court also referred Mr Amadigwe to the regulator.
Al-Haroun involved a claim for damages for breaches of a financing agreement brought by the Claimant, who was represented by Mr Hussain of Primus Solicitors, against Qatar National Bank and QNB Capital. The Defendants filed applications to dispute the Court's jurisdiction and to strike out the claim or enter summary judgment (the "Application"). The judge dismissed the Application and referred the case for consideration pursuant to the Court's Hamid jurisdiction on the basis that witness statements filed in support of the Application on behalf of the Claimant and Mr Hussain relied on a number of authorities which were either entirely fictitious or which, if they existed at all, did not contain the passages supposedly quoted from them or did not support the propositions for which they were cited.
At the Hamid hearing, Mr Al-Haroun accepted responsibility for the inclusion of inaccurate citations in his statement and confirmed that those citations had been generated using publicly available AI tools, amongst other resources. Mr Hussain accepted that he had relied on his client's research without independently verifying the authorities. Whilst the Court did not consider that the threshold for the initiation of contempt proceedings had been met, it did refer Mr Hussain to the regulator.
The Court made a number of critical observations concerning the use of AI to conduct legal research in its judgment, noting that:
The Court also made reference to the following, in support of its conclusions:
This case serves as a stark warning to lawyers and clients alike that, notwithstanding (i) the fact that generative AI tools are becoming increasingly sophisticated and embedded in legal practice, and (ii) the recent comments from the Master of the Rolls to the effect that "[we] should not be using silly examples of bad practice as a reason to shun the entirety of a new technology", the output of AI tools cannot be taken at face value. The risks of unverified AI-generated content - particularly hallucinated case citations and misstatements of law - are real and can have severe consequences.
Indeed, a recent focus group study has explored common perceptions of AI among members of the judiciary in the UK and identified additional potential risks associated with the use of AI in the context of judicial decision-making, including that (i) it may develop misconceptions from uninformed user behaviour; (ii) its use of language may not be sufficiently sophisticated and precise for the legal context; (iii) it may give rise to privacy concerns; (iv) AI bias is not yet sufficiently understood; and (v) its use may lead to de-skilling for new judges.
Ultimately, the message is not to shun AI, but to use it responsibly and to be aware of its limitations. Human oversight remains essential, and every AI-generated output must be checked against authoritative sources before being relied upon in legal proceedings. This message should be heeded by everyone working within the legal profession: although this case concerned small teams and a publicly funded law centre, it is of equal significance for large case teams at well-resourced firms, which may rely heavily on support from junior lawyers and paralegals.
As AI continues to evolve, so too must the profession's approach to training, supervision, and risk management, to ensure that the integrity of the justice system is maintained and public confidence is preserved. We welcome, in this regard, any guidance and/or proposals set out in the consultation paper and final report being prepared by the working group recently established by the Civil Justice Council to examine the use of AI by legal representatives in preparing court documents, including pleadings, witness statements and expert reports.