OpenAI Chief Advocates Regulating Artificial Intelligence, Warns of Risks

OpenAI CEO Sam Altman, creator of the ChatGPT interface, which arouses passions and fears alike, told a US Senate panel on Tuesday that regulating Artificial Intelligence (AI) will be essential to limit the risks of the technology.

“We believe that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models,” said the 38-year-old businessman, the latest prominent figure to emerge from Silicon Valley. “It is critical that the most powerful AI is developed with democratic values, which means that the leadership of the United States is decisive,” Altman said in testimony before the Senate Judiciary Subcommittee on Privacy, Technology and Law.

Governments around the world are under pressure to take action after the November launch of ChatGPT, a conversational bot that can generate human-like content in an instant. Senator Richard Blumenthal, the subcommittee’s chairman, delivered his opening remarks on the dangers of AI, written using ChatGPT and read by software trained on his real voice.

“If you were listening from home, you might have thought that voice was mine and the words were mine, but in fact, that voice was not mine,” he said. AI technologies “are no longer science fiction fantasies, they are real and present,” he emphasized. “If this technology goes wrong, it can go very wrong.”

Lawmakers debated with Altman and two other experts the need to regulate computer systems that could “literally destroy our lives,” in the words of Senator Lindsey Graham. The United States Congress regularly evokes the need to regulate the internet, to better protect the confidentiality of data and promote greater competition. But political divisions have blocked most bills on the issue for years.

So-called generative AI, deployed by OpenAI, Microsoft and Google —capable of creating text, images, sounds or videos from a single prompt— has raised the question of technological regulation. Many are concerned about its eventual impact on numerous professions, with possible massive job cuts, and on society as a whole. The senators touched on these concerns, including biased algorithms and the spread of increasingly sophisticated misinformation.

“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but it also creates serious risks,” Altman acknowledged. “One of my biggest fears is that we, this industry, this technology, will cause significant harm to society,” he said. “If this technology goes down the wrong path, it can go quite far. (…) And we want to work with the government to prevent that from happening.”

The businessman recalled that although OpenAI is a private company, it is controlled by a non-profit organization, which obliges it to “work for a wide distribution of the benefits of AI and to maximize the security of AI-based systems.” Altman has already expressed support for establishing a regulatory framework for AI, preferably at an international level.

“There is no way to put this genie back in the bottle. Globally, this is exploding,” said Sen. Cory Booker, one of many lawmakers with questions about how best to regulate the uses of AI at Tuesday’s hearing.

The idea of creating a federal agency aroused some enthusiasm, but it would have to be well-resourced, Blumenthal stressed. Altman, however, noted that regulation itself is not without risks.

“I know it seems naive to propose something like this, it seems very difficult” to achieve, but “there are precedents,” he said, citing the example of the International Atomic Energy Agency (IAEA). “And I’m not just talking about dollars, I’m talking about scientific expertise,” he said. “If the US industry lags behind, China or some other country can move faster,” he added.

He also insisted that any measures should not stifle independent research and should instead focus on dominant companies like his own.

AI Threatens the Future of Humanity, Survey Says

The rapid growth of artificial intelligence technology could put the future of humanity at risk, according to a majority of Americans surveyed in a Reuters/Ipsos poll released on Wednesday. More than two thirds of respondents are concerned about the negative effects of AI, and 61% believe it could threaten civilization.

Since OpenAI’s ChatGPT chatbot became the fastest growing application of all time, the widespread integration of AI into everyday life has catapulted AI to the forefront of public discourse. ChatGPT has kicked off an AI arms race, with tech heavyweights like Microsoft and Google competing to outpace each other’s AI achievements.

According to the data, 61% of respondents believe that AI poses risks to humanity, while only 22% disagree and 17% are not sure. Those who voted for Donald Trump in 2020 expressed higher levels of concern; 70% of Trump voters compared to 60% of Joe Biden voters agreed that AI could threaten humanity.

When it comes to religious beliefs, evangelical Christians were more likely to “strongly agree” that AI poses risks to humanity, at 32% compared to 24% for non-evangelical Christians.

“It’s telling that such a broad cross-section of Americans cares about the negative effects of AI,” said Landon Klein, director of US policy at the Future of Life Institute, the organization behind an open letter co-signed by Tesla CEO Elon Musk demanding a six-month pause on AI research. “We see the current moment as similar to the beginning of the nuclear age, and we have the benefit of public perception that is consistent with the need to take action.”

While Americans are concerned about AI, crime and the economy rank higher on their list of concerns: 77% support increasing police funding to fight crime and 82% are worried about the risk of a recession. Industry figures said the public should understand more about the benefits of AI.

“The concerns are very legitimate, but I think what’s missing from the overall dialogue is why are we doing this in the first place?” said Sebastian Thrun, a Stanford computer science professor who founded Google X. “AI will improve the quality of people’s lives and help them become more competent and efficient.”

The positive applications of AI, such as revolutionizing drug discovery, aren’t as visible as ChatGPT, said Ion Stoica, a UC Berkeley professor who also co-founded the AI company Anyscale.

“Americans may not realize how pervasive AI already is in their daily lives, both at home and at work,” he said.
