OpenAI launched an exciting new AI (artificial intelligence) product called ChatGPT last November, and this was a strong wake-up call for tech companies like Meta Platforms and Google, which had taken their time producing similar products of their own. Microsoft, an active supporter of OpenAI, said in February that it wanted to use the technology to improve its search engine, Bing. The next day, Google’s artificial intelligence product Bard made its debut. Meta, for its part, was not going to be left behind, saying it was prepared to “accelerate” its own AI products.
When looking at the medical field, where ChatGPT has many potential applications, including reading CT scans and even performing diagnoses, Bloomberg’s Faye Flam cautions that “we’re nowhere near understanding when and where it would be practical or ethical to follow its recommendations.” Similar ethical questions have been raised about the product’s use for searching the internet in novel ways. Both GPT-4 (the latest incarnation of ChatGPT as of April) and Bard will, with the right prompts, efficiently produce biased and even misinformed articles that can be used to propagate extremist views.
When Google introduced Bard, it made sure to emphasize its “focus on quality and safety,” but research confirmed that the danger was not imaginary. On the other hand, the technology seems to have real-world uses that we wouldn’t want to simply dismiss, for example, helping software engineers write better code and making sure our medical diagnoses aren’t wrong. In any case, ChatGPT and its peers have arrived on the scene and don’t seem likely to leave anytime soon.
One of the questions surrounding all of this relates to the future prospects of the Big Tech companies whose price movements are traded in online CFD trading. Will this new development be as significant as, say, the internet in determining their future courses? Will ethical concerns prove too difficult to handle? Let’s talk.
CEO Mark Zuckerberg makes no secret of the fact that his “biggest investment is advancing AI and incorporating it into every one of our products.” In addition to using it to search for content, he also wants to use it to improve the operational efficiency of his company. Last year, Meta’s growth slowed to a snail’s pace while Zuckerberg devoted most of his energy to advancing VR research. In 2023, the CEO struck a new note, one that spoke of the need for “efficiency,” which ties in with the layoff of another 10,000 employees from the company in mid-March. Shareholders like his approach so far, as we see in gains of more than 50% in Meta’s shares between January 1 and mid-March.
The top search engine?
A traditional Google search works by scanning the web for specific terms and then placing the relevant pages before you, so you can draw your own conclusions. There’s an important reason Google might tend to prefer this model: it takes in as much as $54.48 billion in a single quarter from advertisers paying for the space around its search results.
ChatGPT works by using vast databases of language to understand questions and answer them directly (without citing sources). The straightforward question-and-answer format appeals to users who dislike ambiguity, and large language models are also cheaper, according to one view, than regular internet searches. Still, it’s not clear that AI searches are better in every way. Google’s traditional model works based on how websites link to each other, which helps us assess the authenticity of a source in a way that ChatGPT currently can’t.
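The core idea behind these language models can be sketched in miniature. The toy Python example below is purely illustrative (not OpenAI’s actual method): real systems like GPT-4 use large neural networks trained on vast corpora, while this sketch uses simple word-pair counts over a tiny made-up corpus to show what “predicting the next word” means.

```python
from collections import defaultdict

# Tiny hypothetical corpus, purely for illustration.
corpus = (
    "the model predicts the next word . "
    "the next word depends on the previous word . "
    "the model learns from text ."
).split()

# Count how often each word follows each other word (a "bigram" model).
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    followers = bigram_counts[word]
    return max(followers, key=followers.get) if followers else None

# Ask the toy model for a plausible continuation.
print(most_likely_next("previous"))
```

The point of the sketch is the one the article makes: the model chooses whatever word is statistically likely to come next, with no built-in notion of whether the resulting text is true or ethical.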
Sketches of the future
Consumers gained access to Google’s Bard in March of this year, when it came out and openly challenged OpenAI in the marketplace. Booming competition between the two may lead to an unfortunate situation in which the quality of these AI products is judged by high-sounding criteria rather than by what is “best for humanity,” says Nara Logics’ Jana Eggers.
The questionable uses of Bard or ChatGPT to spread misinformation are not surprising, because they stem from the models’ core function: determining which words should follow a given text, regardless of its ethical value. Even when doctors used GPT-4 to generate medical opinions, they found that the machine’s responses differed widely depending on the wording of the doctors’ instructions. “There’s really no universal way” to make sure Bard “stops generating misinformation,” says Max Kreminski of Santa Clara University.
As with many other things in life, it seems that ChatGPT is both potentially useful and potentially threatening. Jana Eggers is optimistic that “there are ways to approach this that would build more responsible responses generated by large language models.” We can only hope that these approaches are quickly identified and pursued.
If you have acquired a taste for CFD trading on Big Tech share prices, stay tuned for these companies’ latest AI models, as well as their future plans.