How to avoid AI “hallucinations”

One of the biggest flaws of chatbots powered by generative artificial intelligence (AI) is that they sometimes give well-structured but completely incorrect answers – ranging from an erroneous fact to a disturbing conversation – a phenomenon known in the technology industry as “hallucinations”, and experts now face the challenge of eradicating them.

Since this technology became popular last fall, millions of people have begun using these chatbots daily for tasks such as writing an email, planning a vacation, searching for information or learning about specific topics.

However, uncertainty about the accuracy of their answers worries users as well as the researchers and companies that offer these services.

FROM DISTURBING ANSWERS TO INVENTED ONES

There are several examples of this type of hallucination; among the best known are the conversations that several people had in February with “Sydney”, Bing’s alter ego, shortly after the launch of Microsoft’s chatbot.

In one case, “Sydney” confessed to a New York Times reporter that it would like to be human, that it wanted to be destructive, and that it was in love with the person it was chatting with.

Another of the most notorious “hallucinations” came from Google’s chatbot, Bard, which gave an inaccurate answer about the James Webb Space Telescope in a promotional video.

No company seems to be safe from slip-ups: OpenAI’s chatbot, ChatGPT, misled a lawyer by inventing a series of entirely fabricated legal precedents, which the lawyer then cited in court; he was caught and now faces possible sanctions.

WITHOUT “HALLUCINATIONS” THERE IS NO CREATIVITY

Generative AI is powered by a complex algorithm that analyzes the way humans put words together, based on the vast amount of information on the internet, but it is not programmed to decide whether its answers are true.

These “hallucinations” are not easy to eradicate, because they stem from the same mechanism that allows bots to be creative and generate original conversations and stories. In other words, if this feature were removed or suppressed, the chatbot would no longer find it so easy to write poems in any style, make up jokes or suggest ideas.

“These hallucinations are particularly problematic when multi-step reasoning is required, since a single logical error is enough to derail a much larger solution,” explains OpenAI – the company behind the technology of the Bing and ChatGPT chatbots – in a study.

THE CONCERN OF THE TECH GIANTS


Microsoft and Google, the two tech giants competing to become the benchmark company for AI chatbots, have taken measures to avoid these errors: Microsoft has limited the number of questions Bing can answer, after finding that the more dystopian “hallucinations” tended to appear in longer conversations.

For its part, when Google generates search results using its chatbot technology, it simultaneously runs the query through its traditional search engine; it compares the answers obtained by the two systems, and if they do not match, the AI result is not displayed at all.

This approach makes its chatbot less creative, so it is not as good at writing poems or holding conversations as its competitors, but it is less prone to error.
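In rough terms, the idea can be sketched in a few lines of Python. The functions below – `chatbot_answer`, `search_answer` and the word-overlap test – are hypothetical stand-ins used for illustration, not Google’s actual systems or thresholds.

```python
# Sketch of cross-checking a chatbot's answer against a traditional search
# engine and suppressing the AI result when the two disagree.
# `chatbot_answer` and `search_answer` are hypothetical placeholders.
from typing import Optional


def chatbot_answer(query: str) -> str:
    """Stand-in for the generative model's response."""
    return "The James Webb Space Telescope launched in December 2021."


def search_answer(query: str) -> str:
    """Stand-in for the traditional search engine's top result."""
    return "The James Webb Space Telescope launched on 25 December 2021."


def answers_agree(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude agreement test based on shared-word overlap (illustrative only)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / max(len(words_a | words_b), 1) > threshold


def answer_with_crosscheck(query: str) -> Optional[str]:
    """Show the AI answer only when it agrees with the search result."""
    ai, reference = chatbot_answer(query), search_answer(query)
    return ai if answers_agree(ai, reference) else None


if __name__ == "__main__":
    print(answer_with_crosscheck("When did the James Webb Space Telescope launch?"))
```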

“Nobody in the field (of generative AI) has yet solved the problems of ‘hallucinations’. All models have this problem,” Google CEO Sundar Pichai said in an interview with CBS in April.

TWO CHATBOTS BETTER THAN ONE

One of the solutions proposed in the study “Improving Factuality and Reasoning in Language Models through Multiagent Debate”, from the Massachusetts Institute of Technology (MIT), is to have multiple chatbots “debate” with each other about which answer is correct before responding to a human.

If the chatbots produce different answers to the same question, they must first reach agreement on which one is correct.
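A simplified sketch of that debate loop might look like the following; `ask_model` is a hypothetical stand-in for a real chat-model API call, and these toy agents simply adopt the majority view of their peers, whereas the paper’s agents re-reason over the peers’ full answers.

```python
# Simplified sketch of multi-agent "debate": several model instances answer,
# see each other's answers, revise, and converge on a single response.
# `ask_model` is a hypothetical placeholder; here it just simulates an agent
# that keeps its own belief unless most of its peers disagree.
from collections import Counter

INITIAL_BELIEFS = ["Paris", "Paris", "Lyon"]  # toy initial answers per agent


def ask_model(agent_id: int, question: str, peer_answers: list[str]) -> str:
    """Simulated agent: adopts the peers' majority answer when one exists."""
    if peer_answers:
        majority, count = Counter(peer_answers).most_common(1)[0]
        if count > len(peer_answers) / 2:
            return majority
    return INITIAL_BELIEFS[agent_id]


def debate(question: str, n_agents: int = 3, rounds: int = 2) -> str:
    answers = [ask_model(i, question, []) for i in range(n_agents)]
    for _ in range(rounds):
        # Each agent re-answers after seeing what the others said last round.
        answers = [ask_model(i, question, answers[:i] + answers[i + 1:])
                   for i in range(n_agents)]
    # Final answer: whichever response the agents converged on.
    return Counter(answers).most_common(1)[0][0]


if __name__ == "__main__":
    print(debate("What is the capital of France?"))  # -> "Paris"
```

In this sketch all agents update synchronously, each seeing only the previous round’s answers, which is one simple way to let the group drift toward a single reply.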

For their part, a group of Cambridge researchers point out in their paper “SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models” that one technique that helps the AI make fewer mistakes is to ask the same chatbot the same question several times and check whether the answer is always the same – in which case it is likely correct – or not.
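That self-consistency check can be sketched as follows; `sample_answer` is a hypothetical stand-in for sampling the same model several times at non-zero temperature, and the simple majority vote here is far cruder than the sentence-level scoring the SelfCheckGPT paper actually uses.

```python
# Sketch of a self-consistency check: sample the same question several times
# and treat low agreement between the samples as a sign of hallucination.
# `sample_answer` is a hypothetical placeholder for a stochastic model call.
from collections import Counter
import random


def sample_answer(question: str) -> str:
    """Placeholder for one stochastic sample from the model."""
    return random.choice(["Jupiter", "Jupiter", "Jupiter", "Saturn"])


def self_consistency(question: str, n_samples: int = 5) -> tuple[str, float]:
    """Return the most frequent answer and the fraction of samples agreeing with it."""
    samples = [sample_answer(question) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    return answer, count / n_samples


if __name__ == "__main__":
    answer, agreement = self_consistency("Which is the largest planet in the solar system?")
    if agreement >= 0.8:
        print(f"Likely reliable: {answer} (agreement {agreement:.0%})")
    else:
        print(f"Possible hallucination: only {agreement:.0%} of samples agree")
```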

Other experts, such as Geoffrey Hinton, who has been dubbed the “godfather” of AI and spent part of his career at Google, believe that “hallucinations” can be brought under control so that they become less frequent, but that there is no way to get rid of them completely.
