AI is the way of the future. There can be no question that this developing technology will have a material impact on our world, and very likely on the utilisation of legal services.
Law firms and other stakeholders within the industry are spending billions of dollars on developing reliable and accurate forms of AI.
At this stage, however, a warning must be issued to those who, rather than obtaining legal advice, prefer to ask artificial intelligence for answers to their legal questions.
This article highlights some real-life examples we have seen which illustrate the risks of undertaking legal research using AI. Whilst there is no doubt the technology is improving (and by the time this article is published, there may be more reliable sources), these examples serve as a warning to anyone seeking to rely on AI alone for legal research.
Example 1: AI drafting legal documents
A client of ours borrowed a significant amount of money from a friend. Despite the size of the loan, both parties decided it was not necessary to obtain legal advice; instead, they asked AI to draft a simple loan agreement, which they both signed. Neither our client nor the lender obtained advice on the document.
Unfortunately, our client defaulted on the loan, and the lender had no option but to enforce its rights under the loan agreement. Our client sought our advice when a caveat was registered over his property.
Unbeknownst to our client, he had mortgaged his property. Although the word “mortgage” never appeared in the loan document, the wording was sufficient to create an interest in real property. This was never our client’s intention, but he was left in an unpleasant situation, made worse by the reality that his home was now at risk.
Example 2: Legal research using AI – case authorities
A junior solicitor in a law firm was asked to research a particular set of facts and determine whether that set of facts had previously arisen in relation to any government body. They undertook the research using AI and were extremely pleased to find eight cases that were directly on point, complete with court citations and general descriptions of the facts and outcomes of each case.
The research was presented to an experienced partner of the firm, who was surprised to find so many cases directly on point and asked for copies of them. Upon searching the court lists, the solicitor found that all eight cases were made up: they did not exist and never had, but had been generated by the AI in an effort to satisfy the research outcome that had been sought. These “hallucinations” are a well-known feature of generative AI.
Another example of fictitious case authorities being generated by AI is the recent Federal Circuit and Family Court of Australia decision in Dayal [2024] FedCFamC2F 1166. In that case, a solicitor was referred to the Office of the Victorian Legal Services Board and Commissioner for handing the Court a document containing summaries of case authorities that did not exist. The list and summaries had been generated by an AI tool within the solicitor’s reputable legal practice management software, and the solicitor did not review the output before submitting the document to the Court. The Judge’s decision in that case included the following:
Generative AI does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the court.
Example 3: Legal research using AI – application in different jurisdictions
A client of ours was interested in finding out the meaning of a legal phrase and whether they were able to take action against a third party based on the law associated with that phrase. They became very enthused when AI responded with a definition that they thought was of great assistance.
The client approached us for advice with a view to initiating proceedings against the third party. Unfortunately, the definition provided by AI did not apply under Australian law; it related to an overseas jurisdiction. The AI did not identify or comment on this limitation; it merely presented the definition in answer to the question asked.
Deriving information from the Internet
It must be noted that most AI tools derive their information from the Internet, which is not a reliable source of legal information. It contains active misinformation, historic and institutional bias, and a growing industry devoted to creating inaccurate and misleading content in order to corrupt research.
Practice Note for legal practitioners and unrepresented parties in NSW
If you intend to represent yourself in court and are relying on generative AI to assist with your submissions or to identify legal authorities that support your case, you should be aware that on 21 November 2024 the Supreme Court of New South Wales issued Practice Note SC Gen 23 – Use of Generative Artificial Intelligence. Consistent with the risks highlighted above, Section 7 of the Practice Note sets out the limits, risks and shortcomings of generative AI programs. The Practice Note is available on the Supreme Court of New South Wales website.
Final thoughts
There is no question that the technology is improving, as these issues are well known within the industry. Limiting source material and improving certainty and accuracy are a focus for a large number of stakeholders.
The technology is improving and will continue to do so (and prompt engineering will become increasingly relevant), but caution must be exercised now. If you choose to utilise AI to undertake legal research, or to draft or review any document, we strongly suggest that you continue to seek bespoke professional legal advice before relying on any output from an AI application.