With the advent of large language models (‘LLMs’), the use of artificial intelligence (‘AI’) in the legal sector is becoming increasingly prevalent – from conducting legal research to drafting legal documents. In the past few months, a series of judgments from courts around the world has addressed the implications of lawyers’ reliance on AI-generated information and, in certain instances, the ensuing dangers. Most recently, the King’s Bench Division of the High Court of Justice of England and Wales, in the case of Ayinde v London Borough of Haringey, and Al-Haroun v Qatar National Bank[1], issued a judgment addressing the use of AI in legal research and the risks and consequences of its misuse.

In that case, counsel for the claimant had cited, in her grounds for judicial review, five cases that did not exist. The High Court held that a lawyer who fails to cross-check AI-generated citations and presents false authorities to the court is likely to be in breach of the following duties of barristers under the English Bar Standards Board Handbook:

  1. the duty to observe their duty to the court in the administration of justice (CD1);
  2. the duty to act with honesty and integrity (CD3);
  3. the duty to not act in a way which is likely to diminish the trust and confidence which the public places in barristers or the profession (CD5);
  4. the duty to provide a competent standard of work to each client (CD7); and
  5. the duty to not knowingly or recklessly mislead or attempt to mislead the court or anyone else (Rules C3.1 and C9.1).

In such instances, according to the High Court, “it is likely to be appropriate for the court to make a reference to the regulator”. At worst, deliberately placing false material before the court with the intention of interfering with the administration of justice can amount to (a) the English criminal offence of perverting the course of justice (which carries a maximum sentence of life imprisonment) and (b) contempt of court.

Similar duties are also enshrined in the Mauritian Code of Ethics for Barristers (the ‘Code of Ethics’):

  1. barristers have an overriding duty to the Court to ensure in the public interest that the proper and efficient administration of justice is achieved (paragraph 3.10);
  2. barristers have a duty to refrain from engaging in conduct which is dishonest or otherwise discreditable to a barrister (paragraph 2.3(a)(i));
  3. barristers must not engage in conduct which is likely to diminish public confidence in the legal profession or the administration of justice or otherwise bring the legal profession into disrepute (paragraph 2.3(a)(iii));
  4. barristers must not handle matters which they know or ought to know they are not competent to handle, without co-operating with another barrister who is competent to handle the matter (paragraph 8.5); and
  5. barristers have a duty to not deceive or knowingly or recklessly mislead the court (paragraph 3.10).

It will be interesting to observe whether the Mauritian courts respond in a similar manner to cases involving AI-generated misinformation.

Moreover, the Bar Council issued a guidance note on “Considerations when using ChatGPT and generative AI software based on large language models”[2] in January 2024, and the Bar Standards Board published guidance on “ChatGPT in the Courts: Safely and Effectively Navigating AI in Legal Practice”[3] in October 2023. These documents detail the risks associated with the use of generative AI tools in the delivery of legal services and carry one clear message: information provided by LLMs must be independently verified.

From a Mauritian perspective, it may be time to update the Code of Ethics to reflect the growing risks and concerns associated with the use of AI. The adoption of guidelines similar to those issued by the Bar Council and the Bar Standards Board would also be welcome, as they would bring further clarity to the precautions practitioners should take when using AI tools in their practice.

Against this background, and out of curiosity as to its limitations, we tested ChatGPT by asking questions on some well-established principles of Mauritius law and requesting summaries of judgments of the Supreme Court. In some instances, ChatGPT indeed produced hallucinated results, citing cases which do not exist or giving incorrect citations for cases which do.

Pushing this little exercise further, we told ChatGPT that its results were wrong and asked whether its database had any limitations. ChatGPT interestingly acknowledged its limitations, responded with something akin to remorse, and apologised before stating that its knowledge extends only up to June 2024. Among the other limitations it listed (including in reasoning, understanding and language accuracy), it is worth noting that ChatGPT acknowledged that it may not fully capture jurisdiction-specific laws, customs or cultural nuances, especially for smaller or developing jurisdictions, and may occasionally give “inaccurate or completely fabricated (“hallucinated”) information, even with a confident tone”.

Contrary to popular belief, the information provided by LLMs may therefore not be up to date. Although the full extent of these limitations may only emerge when the LLM is questioned directly, ChatGPT’s webpage does include a fine-print disclaimer stating that “ChatGPT can make mistakes. Check important info.”

Therefore, it is questionable how far lawyers can rely on the excuse that they believed in the authenticity of material produced by generative AI tools. As Dame Victoria Sharp P. stated, “this is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.”

However, where do we as counsel draw the line on the use of generative AI tools in our daily practice? In Kohls v Ellison No 24-cv-3754 (D Minn 10 January 2025) (cited in the annex to the Ayinde judgment), one of the experts used generative AI to produce a report that cited non-existent academic articles. Lawyers rely on expert witnesses and evidence precisely because they are not themselves specialists in the relevant technical subjects. How are lawyers expected to protect their clients’ interests and put appropriate safeguards in place if expert witnesses use AI tools and LLMs to generate findings which may or may not have been verified and cross-checked?

The growing role of generative AI in legal practice brings valuable opportunities but also significant risks. While these tools can aid lawyers in their research and case preparation, reliance on them without proper oversight can undermine fairness, accuracy and confidence in the legal system. The cases of Ayinde and Al-Haroun highlight the need for lawyers to treat AI as a tool for efficiency in appropriate circumstances, always verifying any generated results, and never as a blind substitute for their own skill and exercise of judgment.

[1] [2025] EWHC 1383

[2] https://www.barcouncilethics.co.uk/documents/considerations-when-using-chatgpt-and-generative-ai-software-based-on-large-language-models/

[3] https://www.barstandardsboard.org.uk/resources/chatgpt-in-the-courts-safely-and-effectively-navigating-ai-in-legal-practice.html