OpenAI’s GPT-5: A Scientific Copilot, Not Autonomous AGI

OpenAI’s new GPT-5 model is actively accelerating complex scientific research in fields like immunology and mathematics, but the company stresses it functions strictly as a supervised “copilot” and is not an autonomous researcher or a harbinger of Artificial General Intelligence (AGI).

The model is demonstrating capabilities in immunology, number theory, and graph theory, making a deeper impact in scientific laboratories even as its reception among general consumers has been lukewarm. This progress is detailed in the inaugural report from the “OpenAI for Science” program, an initiative focused on integrating advanced AI models into scientific workflows.

At The Jackson Laboratory, for instance, GPT-5 analyzed unpublished immunology data and, within minutes, identified the probable cause of an observed change in immune cells, then suggested an experiment to confirm it. Human scientists had been working on the assay for months.

In another example from the report, GPT-5 assisted a team working on a geometry theorem by conducting an extensive literature search, uncovering connections and references, including sources in several languages, that the human team had previously missed.

OpenAI emphasizes that the model helps “shorten critical parts of the workflow” and “expands the exploration surface,” enabling human teams to reach results faster and make better-informed experimental decisions.

The report also credits GPT-5 with contributing to four new, human-verified results in mathematics, showcasing its ability to propose missing steps in previously open problems.

For example, researchers Mehtaab Sawhney of Columbia University and Mark Sellke of OpenAI used a crucial density estimate proposed by GPT-5 to complete a proof of Erdős problem #848, a previously open question in number theory.
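For readers unfamiliar with the term, a density estimate in number theory bounds how large a set of integers with a given property can be as one looks further along the number line. A schematic form, purely for illustration and not the actual estimate from the proof, is

$$\#\{\, n \le N : n \in A \,\} \le C \cdot \frac{N}{(\log N)^{c}} \qquad \text{for constants } C, c > 0,$$

where $A$ is the set of interest. The specific bound GPT-5 proposed is detailed in the report.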

Combinatorialist Tim Gowers described GPT-5 as a “very fast and informed critic” for stress-testing mathematical ideas, noting its ability to identify flaws and suggest alternative arguments.

Despite these advancements, OpenAI issues strong warnings regarding GPT-5’s limitations. The company states the model can “hallucinate” scientific citations, biological mechanisms, or mathematical proofs that appear plausible but are incorrect.

It is also sensitive to how problems are formulated and can sometimes overlook domain-specific subtleties.

Consequently, OpenAI insists that GPT-5 be used under strict expert supervision and that it not replace rigorous validation methods or specialized tools such as simulators.

The company is now shifting its focus to evaluating models within real-world workflows rather than relying solely on traditional synthetic benchmarks.

This cautious stance contrasts with previous ambitious statements from CEO Sam Altman, who had discussed developing AGI with “researcher intern” capabilities by September 2026. The new report qualifies that promise.

OpenAI describes productive interaction with the model as a continuous “dialogue,” requiring scientists to learn specific communication techniques akin to prompt engineering.
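As a rough illustration of what that dialogue can look like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name "gpt-5", the reviewer-style system prompt, and the sample question are assumptions for illustration, not details taken from the report:

```python
# A minimal sketch of the iterative "dialogue" workflow described above.
# Assumptions: the openai Python SDK is installed, OPENAI_API_KEY is set,
# and "gpt-5" is an available model name (illustrative only).
from openai import OpenAI

client = OpenAI()

# Keep the full exchange so each follow-up builds on earlier critiques.
messages = [
    {
        "role": "system",
        "content": (
            "You are a fast, critical reviewer of mathematical arguments. "
            "Point out gaps, unstated assumptions, and possible "
            "counterexamples rather than agreeing by default."
        ),
    },
    {
        "role": "user",
        "content": "Here is a proof sketch; identify any flaw: <sketch>",
    },
]

# First round: ask for a critique.
response = client.chat.completions.create(model="gpt-5", messages=messages)
critique = response.choices[0].message.content
print(critique)

# Next round: the scientist refines the argument and continues the
# dialogue, independently verifying every claim the model makes.
messages.append({"role": "assistant", "content": critique})
messages.append(
    {
        "role": "user",
        "content": "Suppose the ordering is by degree instead; does the gap remain?",
    }
)
response = client.chat.completions.create(model="gpt-5", messages=messages)
print(response.choices[0].message.content)
```

The point of the back-and-forth is that each critique feeds the scientist's next question, while every substantive claim is still verified by hand, in line with the expert supervision the report requires.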

The “OpenAI for Science” team, established in September, collaborates with leading institutions such as Vanderbilt, UC Berkeley, Columbia, Cambridge, and Oxford. This collaboration aims to systematically determine where these models are genuinely useful and where they falter.
