Lawyer under investigation for citing fake cases invented by an AI

A lawyer is facing widespread criticism after she was found to have used an artificial intelligence (AI) chatbot that invented non-existent cases in a legal proceeding.

The lawyer, Chong Ke of Vancouver, is now under investigation for using ChatGPT to prepare submissions to the Supreme Court of British Columbia in a pending custody case.

According to court documents, Ke represented a father who wanted to take his children abroad while he was separating from the children’s mother.

Ke asked ChatGPT for examples of case law that she could apply to her client’s situation. The AI gave her three results, and she presented two of them in court.

Despite repeated requests, the legal representatives of the children’s mother could find no record of the cases.

When confronted by the opposing party, the lawyer retracted the citations and insisted she had not known the cases were fabricated.

“I had no idea that these two cases could be erroneous. After my colleague pointed out that they could not be located, I did my own research and was also unable to find them,” Ke wrote in an email to the court. “I did not intend to mislead opposing counsel or the court and I sincerely apologize for the mistake I made.”

Although chatbots have helped many people find information quickly, it is worth noting that these programs do not always get things right and are, in fact, prone to errors.


The opposing lawyers were outraged by the incident.

The mother’s representatives called Ke’s conduct “reprehensible” and argued that they had spent “significant time and expense” trying to determine whether the cases she cited were real.

“Citing fake cases in court filings and other materials submitted to the court constitutes an abuse of process and is tantamount to making a false statement to the court,” Judge David Masuhara wrote. “If left unchecked, it can lead to a miscarriage of justice.”

Masuhara said Ke’s actions had generated “significant negative publicity,” and he described the lawyer as “naïve about the risks of using ChatGPT.” However, he stressed that she has taken steps to correct her mistakes.

“I don’t believe she had any intention to deceive or mislead. I accept the sincerity of Ms Ke’s apology to counsel and the court. Her remorse was clearly evident during her appearance and oral submissions in court.”

Although Judge Masuhara emphasized that he saw no malice in Ke’s actions, the Law Society of British Columbia is investigating her conduct.
