OpenAI brings together experts to prevent the annihilation of humanity at the hands of AI

OpenAI has brought together a team aiming to avert an artificial intelligence (AI) cataclysm, as the organization is concerned about the possible annihilation of humanity caused by AI.

According to OpenAI, AI has the ability to evolve into a superintelligence, which may represent the most important technological leap ever achieved by humanity, facilitating solutions to fundamental global dilemmas.

However, Ilya Sutskever and Jan Leike of OpenAI warn that humans are ill-equipped to deal with technology that exceeds their own cognitive abilities.

The OpenAI team recognizes the imminent danger of superintelligent AI and admits that it currently lacks a solution to control or redirect such a system's behavior. Existing methodologies for aligning AI, such as reinforcement learning from human feedback (RLHF), rely on human oversight. However, as AI systems grow smarter than the humans supervising them, this oversight becomes unreliable.
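To make the reliance on human oversight concrete, the RLHF approach mentioned above can be sketched in miniature: a reward model is fitted to human pairwise preferences, and everything downstream depends on those human judgments being reliable. The sketch below is a simplified illustration, not OpenAI's implementation; the linear reward, the two-dimensional "response" features, and the simulated human rater are all assumptions made for the example.

```python
import math
import random

def reward(w, x):
    # Linear reward model: r(x) = w . x (a stand-in for a neural network)
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    # Fit w by gradient ascent on the Bradley-Terry log-likelihood of
    # the human preferences: P(a preferred over b) = sigmoid(r(a) - r(b)).
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in pairs:  # the human preferred response a over b
            p = 1.0 / (1.0 + math.exp(-(reward(w, a) - reward(w, b))))
            grad = 1.0 - p  # d/d(r(a) - r(b)) of log P(a preferred)
            for i in range(dim):
                w[i] += lr * grad * (a[i] - b[i])
    return w

random.seed(0)
# Hypothetical "responses" as 2-d feature vectors; the simulated human
# rater always prefers the response with the larger first feature.
pairs = []
for _ in range(200):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    pairs.append((a, b) if a[0] > b[0] else (b, a))

w = train_reward_model(pairs, dim=2)
```

The learned reward ends up ranking responses the way the human rater did, which is exactly the point made above: the whole pipeline inherits the quality of human judgment, and once a system's outputs exceed what human raters can evaluate, those preference labels stop being a trustworthy training signal.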

Sutskever, co-founder and chief scientific officer of OpenAI, and Leike, head of the alignment team at OpenAI, are assembling a group of researchers and engineers charged with overcoming the core technical obstacles posed by superintelligence. They have set a four-year deadline for the team to reach this goal.

While the prospect of the end of humanity is alarming, OpenAI's leaders remain optimistic that a solution will be found. They strongly believe that with a concerted effort, the problem can be solved.

Sutskever and Leike underscore their unwavering dedication to widely disseminating the results of their work. OpenAI is actively recruiting research engineers, scientists, and managers who have an interest in preventing AI’s domination or eradication of humanity.

