ChatGPT, known formally as Chat Generative Pre-trained Transformer, is an artificial intelligence tool that draws on vast amounts of data and computational techniques to string words together in a way that makes sense. Created by OpenAI, an artificial intelligence research company, ChatGPT has been available to the public since November 30, 2022.
ChatGPT can write essays, cover letters, resumes, social media posts, and even homework assigned by teachers and professors. But ChatGPT is not perfect, particularly in areas like the practice of law, where details matter. Two New York attorneys recently learned that lesson the hard way.
The lawyers, based in Manhattan, represented the plaintiff in Mata v. Avianca, Inc., a personal injury lawsuit filed in New York State Supreme Court, New York County. After the case was transferred to U.S. District Court (federal court), lawyers for the defendant filed a motion asking the Court to dismiss the case. Mata's lawyers submitted papers to the Court opposing the motion to dismiss, later acknowledging that the opposition papers were composed using ChatGPT.
District Judge P. Kevin Castel, assigned to decide the motion to dismiss the lawsuit, determined that the papers submitted to the Court by the plaintiff's lawyers cited several judicial decisions that did not, in fact, exist, and contained fabricated quotes attributed to the judges identified in the phony cases. Finding that the lawyers' ChatGPT-constructed filings could have a negative impact much broader than this single case ("It promotes cynicism about the legal profession and the American judicial system"), Judge Castel sanctioned the lawyers and their law firm $5,000.