University of Minnesota introduces ChatGPT to its law school


When ChatGPT, an artificial intelligence chatbot, was given University of Minnesota Law School exam questions in several legal subject areas and its answers were graded blindly alongside a group of real student exams, it consistently achieved low but passing grades.

The team of U of M law professors who conducted the experiment — Jonathan Choi, Kristin Hickman, Amy Monahan and Daniel Schwarcz — recently published their findings in a white paper.

The professors used ChatGPT to produce answers for the final exams of four actual law school courses: Constitutional Law: Federalism and Separation of Powers; Employee Benefits; Taxation; and Torts.

ChatGPT was given the same set of prompts as the law students for 95 multiple choice questions and 12 essay questions. One of the co-authors formatted and shuffled ChatGPT exams in with the student exams, and all were graded blindly by the three other co-authors. The ChatGPT exams were subsequently removed and the curve recalculated before finalizing actual student grades.

They found:

ChatGPT passed all four classes based on its final exam.
ChatGPT's grades averaged a C+ across all exams, a level that would place a student on academic probation.
If such performance were consistent throughout law school, the grades earned by ChatGPT would be sufficient to graduate with a J.D.
ChatGPT's performance on law school exams, while currently uneven, suggests considerable promise and peril.

“Overall, ChatGPT wasn’t a great law student acting alone, but we expect that collaborating with humans, language models like ChatGPT would be very useful to law students taking exams and to practicing lawyers,” said Law Professor Jonathan Choi.

This is especially true for low-performing students and those who struggle under time constraints.

The team suggests that professors intending to test unassisted recall of legal rules and unassisted analysis should establish guidelines for the use of these technologies in advance. In addition, academic administrations should consider how to reshape honor codes to regulate the use of language models in general.

“It is becoming increasingly likely that in the near future many lawyers will need to collaborate with AIs, like ChatGPT, both to save time and money and to improve the quality of their work product,” said Law Professor Daniel Schwarcz.

For example, a lawyer could have ChatGPT prepare the initial draft of a memo and then tweak that draft as needed; she could use ChatGPT to draft her way out of writer's block; or she could use ChatGPT to produce an initial batch of arguments and then winnow them down to the most effective. Law schools, in turn, should teach students how to use these tools most effectively in their practices while, at the same time, emphasizing that the fundamental skills of legal research and reasoning cannot merely be delegated to language models.

Looking ahead, the team plans to develop and test ways for lawyers and law students to use ChatGPT effectively and ethically to help produce legal work.