Could Wikipedia one day be written by ChatGPT? Its founder believes it could happen

ChatGPT, the AI-powered chatbot that can answer almost any question, can seemingly write reports, letters and even poetry effortlessly.

The technology, developed by OpenAI, draws on a wealth of knowledge gathered from the web to answer users’ questions.

But now let’s imagine that the roles are reversed. Instead of artificial intelligence systems trained on information written by humans, imagine that in the future most of the material we humans read is written by artificial intelligence.

This shift could upend the worlds of publishing, newsgathering, and social media. It’s a future Jimmy Wales, founder of Wikipedia, is already thinking about as he considers how the world’s largest online encyclopedia will evolve over the next few years.

I think we’re still a long way from ‘ChatGPT, please write a Wikipedia entry about the Empire State Building’, but I don’t know how far away that is; it’s certainly closer than I would have thought two years ago.

Wales says that as much as ChatGPT has captured the world’s imagination in recent weeks, his own tests of the technology show that there are still many flaws.

One of the problems with the current ChatGPT is what in this field is called ‘hallucinating’; I call it lying. It has a tendency to make things up out of thin air, which is just no good for Wikipedia, is it? We have to be very careful with that.

While full AI authorship is not anticipated in the short term, there is already a discussion on Wikipedia about the role that AI technology can play in improving the encyclopedia in the coming months.

I think there are some interesting opportunities for human assistance if you have an AI that’s been trained on the right corpus of things – to say, for example: here are two Wikipedia entries, check them and see if there are any statements that contradict each other, any tensions where one article seems to be saying something slightly different from the other.

A human could spot that, but they would have to read the two articles side by side and think about it carefully. If you could automate that flagging and surface hundreds of examples, I think our community might find that quite useful.
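Wales gives no technical detail here, but as a rough illustration of the kind of cross-article contradiction check he describes, the sketch below pairs sentences from two articles and scores each pair with an off-the-shelf natural-language-inference model. The model name, threshold and helper functions are assumptions chosen for illustration, not anything Wikipedia or OpenAI has announced.

```python
# Hypothetical illustration only: scoring sentence pairs from two articles
# for contradiction with a public NLI model ("roberta-large-mnli").
from itertools import product

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # labels: 0=contradiction, 1=neutral, 2=entailment
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def contradiction_score(premise: str, hypothesis: str) -> float:
    """Return the model's probability that the two sentences contradict each other."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    return probs[0].item()  # index 0 is the CONTRADICTION label for this model


def flag_contradictions(article_a: list[str], article_b: list[str], threshold: float = 0.9):
    """Compare every sentence pair across two articles and keep likely contradictions."""
    flagged = []
    for sent_a, sent_b in product(article_a, article_b):
        score = contradiction_score(sent_a, sent_b)
        if score >= threshold:
            flagged.append((score, sent_a, sent_b))
    return sorted(flagged, reverse=True)


# Toy usage with two invented statements a human editor would then review.
pairs = flag_contradictions(
    ["The bridge opened to traffic in 1932."],
    ["The bridge did not open until 1936."],
)
for score, a, b in pairs:
    print(f"{score:.2f}  A: {a}  B: {b}")
```

Even under these assumptions, the output would only be a ranked list of candidate contradictions; as Wales notes, it would still be the volunteer community that reads and resolves them.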

One of the criticisms leveled at Wikipedia over the years is possible bias, both in the way its volunteers describe specific topics and in the way they choose which topics deserve more or less emphasis.

Could AI combat some of these biases and make Wikipedia a truly neutral and unbiased source of information once and for all? Wales is not convinced.

We know that AI can pick up bias very quickly, because if you train an AI on biased data, it will reproduce that bias. A lot of people in the AI world are aware of this problem and are focused on it.

In other words, there is a real possibility that an AI-powered Wikipedia would not end bias, but would exacerbate and perpetuate it.

But where AI could be most useful is in identifying things that are missing from the encyclopedia’s coverage: looking at the information available in the world, matching it against the corresponding Wikipedia entries, and flagging the gaps. And there could be a great many of them, which could lead to a substantial expansion of Wikipedia’s content.
