Since its unveiling, OpenAI's ChatGPT has been one of the biggest stories of the year. Suddenly there are fears that artificial intelligence (AI) will make entire jobs obsolete. But one American judge is not impressed with the smart chatbot: a lawyer used ChatGPT in a lawsuit, and it led to false conclusions.
ChatGPT provides incorrect information
This is evident from documents published on CourtListener in a lawsuit between the Colombian airline Avianca and plaintiff Robert Mata. The latter hired the New York law firm Levidow, Levidow & Oberman to take the Colombian company to court.
It’s no secret that litigation is time-consuming and difficult, so in principle AI could be a welcome aid. Steven Schwartz, an attorney at the law firm, thought so too. He asked ChatGPT for six different lawsuits similar to the case at hand, including ‘Varghese v. China Southern Airlines’. But this turned out not to be a real case at all, though it appears to have been loosely based on one that was. The chatbot is said to have misquoted the case numbers.
Yet it did not stop there. Entire court rulings are said to have been invented, complete with non-existent quotes and fabricated details from the proceedings. It is not clear whether ChatGPT came up with these details itself or took them from unspecified sources.
Schwartz says in the document that he regrets the decision to use ChatGPT. He was unaware, he claims, that the AI model can sometimes be wrong.
The line between AI and crypto is increasingly blurred
Nevertheless, development in artificial intelligence is moving very fast, and AI and blockchain are even starting to merge. For example, you can ask ChatGPT for real-time information from Solana (SOL), and Circle’s CEO believes that AI bots are already trading USDC on their own.
Yet there are also people who are less happy with the progress. A prominent AI researcher and former Google employee, for example, considers the technology very dangerous, and even OpenAI CEO Sam Altman has warned the US government about the risks.
Judging by this courtroom episode, we can safely assume that our artificial overlords are not quite ready to take over the world. So we can sleep soundly for a while longer.