Before OpenAI CEO Sam Altman’s four-day exile, several researchers wrote a letter to the board warning of a powerful artificial intelligence discovery that, in their view, could pose a danger to humanity, two people involved in the matter told Reuters.
According to the same sources, this discovery was decisive in the board’s move to remove Altman. However, the specialist publication The Verge maintains that the letter never reached the board and had nothing to do with Altman’s dismissal.
Before his triumphant return on Tuesday evening, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their ousted leader. Sources cited the letter as one factor in a longer list of board grievances that led to Altman’s firing, including concerns about commercializing artificial intelligence without understanding the consequences. Reuters was unable to review a copy of the letter, and its signatories declined to respond to the agency’s questions.
OpenAI declined to comment but, in an internal message to employees, acknowledged a project called Q* and a letter sent to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said the message, sent by executive Mira Murati, alerted employees to certain media reports without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s pursuit of so-called artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that outperform humans at most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, this person told Reuters on condition of anonymity. Although it could only do math at an elementary-school level, acing those tests made researchers very optimistic about Q*’s future success, the source said. Reuters could not independently verify the researchers’ claims about Q*’s capabilities.
Researchers view mathematics as a frontier of generative AI development. Today, generative AI is well suited to writing and translating languages by statistically predicting the next word, and answers to the same question can vary greatly. But gaining the ability to do mathematics, where there is only one right answer, implies that AI would have greater reasoning abilities resembling human intelligence. This could apply, for example, to novel scientific research, AI researchers believe.
Unlike a calculator, which can solve only a limited number of operations, AGI can generalize, learn and understand. In their letter to the board, the researchers flagged AI’s capabilities and potential danger, the sources said, without specifying the exact safety concerns raised in the letter. Computer scientists have long debated the danger posed by highly intelligent machines, such as the possibility that they might decide the destruction of humanity is in their interest.
Researchers have also pointed to the work of a team of “AI scientists,” whose existence several sources have confirmed. The group, formed by combining the earlier “Code Gen” and “Math Gen” teams, explored how to optimize existing AI models to improve their reasoning and ultimately be able to do scientific work, one of the people said.
Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and attracted the necessary investment (and computing resources) from Microsoft to move closer to AGI. In addition to announcing a range of new tools at a demonstration this month, Altman hinted at a summit of world leaders in San Francisco last week that he believed big advances were on the horizon.
“Four times in the history of OpenAI, most recently in the last couple of weeks, I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.
A day later, the board fired Altman.