Barrister Uses AI to Cite ‘Fictitious’ Cases in Hearing Preparation

An immigration barrister has been reprimanded after using AI tools that generated fictitious case citations in his preparation for a tribunal hearing. Chowdhury Rahman relied on software similar to ChatGPT for his legal research, which introduced significant inaccuracies into his submissions. A judge found that Rahman had failed to check the accuracy of the material he put before the tribunal.
Barrister’s Misconduct in Tribunal Hearing
This revelation emerged during a case involving two Honduran sisters seeking asylum. The sisters, aged 29 and 35, claimed they faced threats from a criminal gang in Honduras. Rahman represented them as the case advanced to the upper tribunal.
Judge’s Findings
Upper tribunal judge Mark Blundell criticized Rahman’s use of AI, stating that it wasted the tribunal’s time. He considered notifying the Bar Standards Board about Rahman’s conduct. Blundell noted that Rahman attempted to conceal his reliance on AI.
- Judge Blundell rejected Rahman’s arguments, emphasizing that he found no errors of law in the judgment under appeal.
- In total, Rahman cited 12 authorities, but many did not exist or were irrelevant to the case.
- The judge identified 10 specific fictitious cases presented by Rahman.
Response from the Barrister
During the hearing, Rahman argued that the inaccuracies stemmed from his drafting style. He acknowledged some “confusion and vagueness” in his submissions. However, Judge Blundell firmly stated that the issues were not merely drafting mistakes but involved significant misrepresentation.
Implications of AI Use in Legal Advocacy
The judge indicated that it is highly probable that Rahman used generative AI to draft the grounds of appeal. He warned that presenting unverified AI output in this way risks misleading judicial proceedings.
Judge Blundell remarked, “The grounds of appeal were drafted in whole or in part by generative artificial intelligence.” He noted that one of the fictitious cases cited had also been misused by AI in another matter, signaling a broader concern about the reliability of AI-generated legal content.
Rahman’s case serves as a cautionary tale about the risks of relying on generative AI in legal work, raising questions about the integrity of legal documents and barristers’ responsibility to verify the accuracy of their research.
Conclusion
This incident underscores the need for legal professionals to critically evaluate their research methods. As AI tools become more widespread, safeguarding the accuracy of legal arguments remains paramount.