By Kallie Cox for Missouri Lawyers Media
• Law schools, including those in Missouri, are already experimenting with AI tools in teaching.
• AI’s potential pitfalls, such as academic integrity and bias, are key concerns for educators.
• AI is expected to significantly impact legal practice, prompting a need for widespread understanding and adaptation.
One of the first things the newly appointed dean of Washington University’s School of Law did when she entered the role was to establish a task force to evaluate the use of artificial intelligence.
Stefanie Lindquist stepped into the role on July 1, and one of her top priorities is finding a way to adapt to AI, including how to teach it and what impact it may have on the practice of law.
Lindquist hopes to ensure the law school pivots towards the technology and its positive uses, while also being aware of its many pitfalls.
“We would not be doing our jobs properly if we weren’t thinking about how technology changes the very practice of law,” Lindquist said. “That’s what we’re teaching students to enter and so we have to think about all those things and any law school that doesn’t really isn’t paying attention to the changing landscape of legal practice that is being wrought by AI.”
Washington University is among a growing number of law schools across the nation racing to keep pace with artificial intelligence as they work to include it in their curricula, despite not fully understanding how it could shape the legal world a decade from now.
The American Bar Association’s Task Force on Law and Artificial Intelligence surveyed law school administrators between December 2023 and February 2024 to examine how they plan to integrate AI into their curriculum.
Out of 29 law schools surveyed from across the country, 55 percent indicated they already offer classes to students covering the use of AI. While the survey was informal and doesn’t necessarily offer an accurate statistical picture of how the entire legal education community is responding to AI, it offers a window into how a majority of schools seem to be adapting to a future that utilizes the technology in some capacity.
According to the survey, 83 percent of law schools already offer some form of education surrounding AI, even if not a formal class, such as clinics and workshops.
Andrew Perlman, dean of Suffolk University Law School and a member of the ABA’s task force on artificial intelligence, says he has been pleasantly surprised by how quickly many schools are embracing AI in education. He says everyone should at least try the tools to see what they are capable of.
“Twenty-five years ago, when email was starting to get widely used, bar associations issued ethics opinions suggesting that lawyers shouldn’t use it because it created confidentiality concerns. Now we’ve come so far that you can’t be a licensed member of the legal profession in many states unless you have an email address,” he said. “I think for any lawyer who’s looking skeptically at these tools, I would caution them that we are likely to go through a very similar transformation here, that as much as people are raising concerns about the current state of the technology and that it might be ethically dangerous to use these tools, we will reach a point where it is considered unethical or incompetent not to use these tools.”
Each of Missouri’s four ABA-accredited law schools has already begun teaching, researching, or experimenting with AI.
Experts from Saint Louis University, Washington University, Mizzou, and the University of Missouri-Kansas City spoke with Missouri Lawyers Media about how they are teaching future attorneys about generative AI and machine learning.
Teaching AI
Jayne Woods, an associate teaching professor at Mizzou, was inspired to begin teaching AI in her classes by her 13-year-old son, who showed her the technology in 2022 shortly after ChatGPT was released.
At first, Woods dismissed AI. “Then the more I started learning about it, I was like, ‘Oh my gosh, this is going to change everything,’” she said. Now, she believes it will have profound impacts on the practice of law.
“It is going to change it in the way that calculators changed math classes,” Woods said. “It is going to take some tasks that we used to spend a lot of time on and it’s going to really take that time away so that we can spend it on other things that probably require more of our expertise and knowledge.”
Woods teaches legal writing classes and began experimenting with AI in the classroom in the spring of 2023. She started by asking students to evaluate a complaint that was written by AI and having them compare that with what they had already learned about the process for writing federal complaints. Her students were shocked at what it was capable of, she said.
Ryan Copus, an associate professor at UMKC, doesn’t simply allow the use of AI in his classroom; he requires it. He says it can level the playing field for lawyers who are incredibly talented and knowledgeable but struggle with writing.
“I don’t just encourage, I expect,” he said. “You should be using generative AI to help write your papers, test your ideas with it, there’s no excuse anymore for bad grammar. There’s no excuse for not getting your writing more concise. You have such an easy tool at your disposal.”
Copus is no stranger to AI and has researched what he refers to as “machine learning” since he was a student at the University of California, Berkeley. Machine learning involves using technology to find predictive patterns, such as what a judge might decide in a specific case or whether a prisoner who is granted parole is likely to re-offend.
Copus’ class, Data Decisions and Justice, focuses heavily on this type of research. His students learn to code, build their own machine learning models, use those models in exercises to determine whether a hypothetical client will be released on parole, and study the current regulations governing the technology.
As the technology has evolved since 2022, it has become increasingly clear within the legal community that these tools are here to stay and will keep improving, Copus said.
“I wouldn’t say there’s been a seismic shift, but I think a greater at least acceptance that this is part of the world now,” Copus said.
Karen Sanner, a professor at SLU, uses ChatGPT as her assistant and to summarize articles.
“I use it sometimes to help me develop hypotheticals for problems and exams and things like that,” Sanner said.
She refers to 2022, when ChatGPT became commercially available, as “the big bang” of generative AI. That winter, SLU paid for a few subscriptions to the technology so professors could determine what it was capable of.
When Lexis+AI came out this winter, Sanner began to gradually introduce it to her classes. She quickly realized that it was not developed enough for her students to rely on while they were still novices at legal analysis and communication. These tools will continue to improve and ChatGPT has already been enhanced significantly, but until some of the legal tools like Lexis+AI evolve, law schools have what Sanner calls the “chicken and the egg problem.”
“If you’re going to use it effectively to help you do legal research and legal writing — which it’s going to get better at over time — you have to understand the basic concepts of legal analysis,” Sanner said. “95 to 99 percent of our students in the first year of law school come to us without that skill. And they can’t learn that skill by using Lexis AI or any other legal-based products or ChatGPT.”
Without these nuts and bolts of legal analysis, Sanner said, students are unable to properly provide prompts to AI to produce the work they are looking for.
“So, what we’re finding is, at the beginning of the first year, we’re still going to have to teach them legal analysis and basic principles of legal communication before we introduce the AI into the situation,” she said.
Pitfalls
As the technology continues to develop at an astonishing rate and regulations struggle to adapt to AI’s ever-changing landscape, its use is rife with pitfalls and ethical concerns for attorneys.
One of the aspects of emerging AI that law schools are most concerned with is how it will affect academic integrity.
A substantial number of the schools surveyed by the ABA (69 percent) said they have already adapted their academic integrity policies in response to AI.
Educators who spoke with Missouri Lawyers Media point out that because AI is an important tool that is here to stay, banning it in the classroom is not the answer.
Like Copus, who requires AI in his class, Woods places no restrictions on how students use it in hers, with the caveat that they must be vigilant when using the technology.
“I do make a very clear reminder in the syllabus that you are responsible for whatever you submit, whether you wrote it, or AI wrote it. Like an attorney, your ethical obligation is to only submit things that are true, not misleading, all of that stuff,” Woods said. “So, if AI gives you something that has been lifted from somebody else’s argument, then you are going to be responsible for that. Or if it has given you a false case or incorrect law, the responsibility is still going to fall on you as the student, I’m going to hold you responsible, and you don’t get out of it because it was the AI.”
Additionally, as Sanner discussed, educators worry the use of this technology might impair the education of future attorneys.
Lindquist says students need to learn the fundamentals of law school and develop the critical thinking necessary to verify that AI produces accurate work before they can use it.
“We want to be sure that AI doesn’t replace the process of learning that takes place in law school now, which requires many different writing projects, taking an exam in which you’re bringing to the table your own knowledge and understanding of the legal principles involved. That is very, very important to the learning process in law school,” Lindquist said. “We want to make sure that AI doesn’t short-circuit that learning process.”
Missouri courts and practicing lawyers seem most concerned with the hallucinations and false information some widely available AI tools insert into work and the potential confidentiality concerns that come with sharing client information with an AI tool.
Perlman says the extent to which these tools hallucinate is concerning.
“Although that problem has gotten better over time, it’s still a real one. And so, anyone who is using these tools to generate law-related content has to scrutinize everything carefully,” Perlman said.
Woods agrees that these hallucinations make it difficult to trust the work of AI.
“I think it’s very much kind of a ‘yes man’ in the sense that it knows what it is that you’re looking for and it tries to be helpful by giving you exactly what you want,” she said. “It does it in a way that it sounds so confident and sure of itself. And so, it’s like, ‘Oh yes, I have this case about this jurisdictional issue you asked, and it came out exactly the way you were wanting. Isn’t that great?’ It sounds very real, but once you look beneath the surface, you can see that it’s not a real case, or it’s not real law, or even if it is a real case in real law, it has misinterpreted the holding, something along those lines.”
While AI may be a good place to start for research, it is never the place you want to stop, Woods cautions.
Perlman concurs with that assessment.
“There are ethical concerns aside from the inaccuracy, there are confidentiality considerations that we need to take into account, for example,” he said. “So, I think there are definitely some downsides, but I think the promise and peril have to be considered together.”
Through his research, Copus has found that some machine learning models being used to evaluate whether a person should be released on parole have had some success.
However, these models have biases that need to be addressed and there are ethical concerns surrounding them. Some of this involves pre-trial fairness, procedural concerns and the fact that a person cannot plead their case to a machine.
“There are certainly bias concerns — racial, gender bias, things of those sorts,” Copus said. “I tend to think those concerns are overstated and there’s — I’ll steal a word from one of my friends — There’s some ‘Robophobia’ that we critique algorithms in a way that we never subject humans to. So, we’ll call an algorithm racist, but not compare to what humans are doing.”
In a recent paper Copus co-wrote, he found that if New York City were to use a machine learning model he and his co-authors created, the courts could release twice as many individuals on parole without increasing the arrest rate.
“I hope that they are used more in the future. I think the evidence is piling up that they are very useful. And if we want to get serious about shrinking the number of people in prison, that these tools are a really valuable way to do that,” he said.
Some Missouri courts are already developing rules governing the use of AI.
So far, courts have imposed at least 22 rules addressing AI use in court, according to data from Bloomberg Law. Of those, a 2023 rule from the U.S. District Court for the Eastern District of Missouri is the only one to exclusively ban self-represented (“pro se”) litigants from using generative AI to draft filings in court.
This rule states: “No portion of any pleading, written motion, or other paper may be drafted by any form of generative artificial intelligence. By presenting to the Court (whether by signing, filing, submitting, or later advocating) a pleading, written motion, or other paper, self-represented parties and attorneys acknowledge they will be held responsible for its contents.”
Earlier this year, the Missouri Court of Appeals, Eastern District, dismissed an appeal (Kruse v. Karlen, No. ED111172) in which a self-represented litigant used AI to prepare a brief containing multiple fictitious citations. The court ordered him to pay $10,000 in damages for filing a frivolous appeal and for wasting the time his opponent spent researching the hallucinated cases.
Despite these setbacks, blanket bans on AI are not the answer for a technology that is here to stay, Perlman says.
“The other piece that I think is important to consider when we’re looking at the downside is we always have to ask: ‘relative to what?’ so when lawyers raise concerns that these tools will sometimes get things wrong, well, the reality is, lawyers will sometimes use summer associates who will sometimes get things wrong,” Perlman said. “We have a responsibility to check what’s given to us.”
AI and the future practice of law
The pitfalls of AI and the apprehension surrounding it shouldn’t mean attorneys and educators bury their heads in the sand, Copus said.
“It has the potential to be a transformative technology, but I wouldn’t let fear of it drag you to put your head in the sand. Know what’s coming. Get used to it. Use these tools. Start to explore, start to understand,” Copus said. “We’re all going to have to navigate this collectively and it’s going to take a lot of people having a better understanding of how these things work.”
Generative AI is the most important technology for the delivery of legal services ever invented, Perlman said.
“I think it’s going to change in fairly fundamental ways how lawyers go about the delivery of legal services. I don’t think that’s going to happen next year or even the year after, but I do think that when we look 10 years out, certainly within the time horizon of today’s law students, the tools that are being developed now are going to be transformative, and it’s going to cut across lots of different practice areas and legal tasks,” he said. “I really do think that we are at a pivotal moment in the legal profession, the legal industry and society at large.”
Pat Whalen, managing partner of one of the state’s largest law firms, Spencer Fane, is excited to see how graduating law students with a background in this technology will impact his firm and the future of law.
“I think they’re going to have a profound impact, both in terms of adding value to clients, as well as to the operations within our law firm. So, I’m excited to welcome a generation of students that got trained on this at the ground level,” he said.
Following a recent redesign, the firm already has a team of experts studying the use of AI.
“I think it will affect every aspect of how we run our business,” Whalen said. “So, I for one, don’t believe that there’s any hype in AI, I think it will play out to be one of the biggest transformations that the legal industry has ever faced.”