The Ethical and Professional Negligence of AI-Generated Citations: A Scrutiny of Its Global Consequences

The advent of artificial intelligence (AI) has not left the legal profession untouched; its growing integration and developing capabilities now reach many aspects of legal research, writing and advocacy. As a double-edged sword, it presents both opportunities and challenges. While AI tools and technologies can enhance efficiency, correct spelling errors and improve performance, recent incidents involving AI-generated fictitious legal citations have raised concerns about the integrity of legal proceedings, the prospect of an “AI lawyer” and the future of a profession that seems to be at stake. This article examines this form of ethical and professional negligence and the implications of such occurrences, underscoring the urgent need for stringent oversight and guidelines to ensure the responsible use of AI in the legal field.

Introduction

It is widely accepted that the legal profession is entering a new phase with the advent of artificial intelligence (AI), moving from the manual, first-hand performance of routine tasks to an assisted way of working. Evidently, AI has become an indispensable tool for our developing world, and law as a profession is no exception. However, reliance on AI has brought many unforeseen challenges, particularly concerning the authenticity, accuracy and reliability of AI-generated content. A notable concern is the emergence of AI-generated fictitious legal citations, which can undermine the credibility of legal arguments and the justice system as a whole.

Conceptual Clarification  

AI-generated legal citations refer to legal references, case law, statutes, scholarly authorities, or legal commentary produced by AI tools such as ChatGPT, Google Gemini, or other specialized models. However, AI is neither a court empowered to establish precedent nor a legislature authorized to enact statutes. Therefore, unless verified, re-verified and cross-verified against authentic sources, citations generated by AI may be entirely fictitious or misleading. As aptly noted by Justice Victoria Sharp of the High Court of England and Wales, generative AI tools like ChatGPT “are not capable of conducting reliable legal research.”

Recently, headlines in the U.K. and beyond have been dominated by reports of lawyers being cautioned or sanctioned over fictitious AI-generated legal citations. This is a direct slap to the legal profession, for if artificial intelligence can replicate tasks traditionally reserved for a lawyer, one cannot help but question the value of investing years in legal education and training. Are we truly approaching a time when machines replace humans, not only in routine tasks but in reasoned judgment? It is worrisome to witness both practicing lawyers and law students increasingly relinquish their intellectual rigor and creativity in favor of automated shortcuts. It is worth emphasizing that AI should complement, and not compromise, the ethics and noble integrity known to our profession.

This growing concern is not hypothetical; it is already unfolding in courtrooms. A clear-cut example comes from California, U.S., where a federal judge imposed a $31,000 fine on lawyers from the prominent law firm Ellis George. Judge Michael Wilner had scrutinized a legal brief submitted by the firm and discovered that several citations referenced non-existent cases. Upon inquiry, the lawyers confessed to using generative AI tools, including Google Gemini and other law-specific models, to draft the brief. Alarmingly, rather than correcting the inaccuracies, they submitted a follow-up document riddled with even more fabricated citations, commonly known as “hallucinations” in AI parlance. The court responded by demanding sworn testimony and issuing a substantial financial penalty. This case illustrates the consequences of over-relying on AI tools without adequate human oversight.

The risks posed by these fabricated authorities are many, and inaccuracy through hallucination is chief among them. Generative AI models, such as ChatGPT and Google Gemini, operate by predicting text based on patterns in vast datasets. However, they lack genuine understanding and can produce “hallucinations”: plausible-sounding but entirely fictitious information. In the legal context, this means AI can generate case names, statutes, or legal principles that do not exist, leading to flawed arguments and potential miscarriages of justice. For instance, in a widely reported U.S. case, attorneys submitted a brief containing six non-existent cases generated by ChatGPT, resulting in a $5,000 fine and a judicial reprimand for acting in bad faith.

Similarly, AI-generated citations often lack verifiable sources, making it difficult to assess their accuracy. Unlike traditional legal research, where sources can be cross-checked in databases, AI outputs may not provide traceable references. This opacity undermines the reliability of legal documents and can erode trust in legal proceedings.

Reliance on unverified AI output can also amount to professional negligence on the part of a lawyer. Lawyers have an ethical duty to ensure the accuracy of their submissions, and relying on AI-generated content without proper verification can breach that duty. Courts have begun to impose sanctions on attorneys who fail to uphold it.

Conclusion

The integration of AI into legal research and writing offers numerous benefits, including efficiency and accessibility. However, the ethical and professional challenges posed by AI-generated citations are significant. Inaccuracies, lack of transparency, and potential negligence can undermine the justice system’s integrity. To harness AI’s advantages while safeguarding legal standards, a balanced approach emphasizing verification and accountability is essential.

Recommendations

  1. Rigorous Verification

Legal professionals must diligently verify AI-generated citations against authoritative sources before including them in legal documents. Rigorous verification involves a proactive and systematic process of cross-checking every AI-generated citation with recognized legal databases and authoritative sources before it is included in legal submissions or academic work. This goes beyond a surface-level review or cursory internet search. Lawyers must confirm that each case cited exists, that its interpretation is correct, and that it is applicable within the jurisdiction and legal context of the matter at hand. Neglecting this step can lead to severe professional and judicial consequences, as demonstrated in several recent cases around the world. This practice ensures the accuracy of information and upholds the credibility of legal proceedings.

  2. Transparency in AI Usage

Attorneys should disclose the use of AI tools in their legal work, detailing the extent of AI involvement and the steps taken to verify the content. Such transparency fosters trust and allows for appropriate scrutiny of legal documents.

  3. Development of Regulatory Frameworks

The development of regulatory frameworks is crucial in addressing the challenges posed by AI-generated citations. Regulatory bodies, such as bar associations and courts, must establish clear guidelines on the use of AI in legal research, including the verification and validation of AI-generated citations. These guidelines should emphasize the importance of human oversight and the need for lawyers to take responsibility for the accuracy and reliability of the information they provide to clients and courts.

These bodies should also establish standards for verifying AI-generated citations, including the use of authoritative sources and fact-checking procedures.

Lawyers who fail to verify AI-generated citations should face disciplinary action, including fines, sanctions, or other penalties.

  4. Education and Training

Education and training are essential in ensuring that lawyers understand the risks and benefits associated with AI-generated citations. Law schools and bar associations should provide education and training programs that focus on the responsible use of AI in legal research, including the verification and validation of AI-generated citations.

Lawyers should receive training on the capabilities and limitations of AI tools, including the potential for errors and inaccuracies. They should also learn how to verify and validate AI-generated citations, including the use of authoritative sources and fact-checking protocols, and they should be taught best practices for using AI-generated citations, including the importance of transparency and disclosure.

By prioritizing regulatory frameworks alongside education and training, the legal profession can ensure that AI-generated citations are used responsibly and ethically, maintaining the integrity of legal proceedings and public trust in the legal system.


About Author

Salisu Abdulazeez Lawal 

