ChatGPT just went down for users across the globe. OpenAI’s flagship conversational model and its Codex programming tool began throwing connection errors around 10:05 a.m. ET today. The crash comes just as the company attempts a massive infrastructure overhaul: only weeks ago, OpenAI secured a historic $122 billion funding round led by Amazon and SoftBank to build out next-generation server capacity. The strain of upgrading those backend systems while serving enormous global demand is now producing visible cracks.
The tracking platform Downdetector lit up with complaints almost immediately. The United Kingdom was hit hardest, with more than 8,000 reports filed in quick succession; the United States logged 1,875, and India more than 900. Downdetector’s live data from Monday morning shows both the regional severity and the exact start time of the outage.
General connection failures accounted for 79% of reported issues, but a secondary glitch caught businesses off guard: users upgrading to ChatGPT Business or adding new employee seats were locked out entirely for up to an hour. OpenAI acknowledged the disruption on its official status page, confirming it was investigating degraded performance as system uptime briefly dipped to 99.85%, a figure it continued to monitor through the recovery. A disruption of this scale sends ripples through the entire technology sector.
The Messy Reality of Massive Scaling
OpenAI is no longer a contained research lab; it is a commercial utility. The $122 billion capital injection pushed the company to an $852 billion valuation, but that money carries an immediate operational burden.
OpenAI must now migrate its current architecture to vastly larger infrastructure packed with next-generation Nvidia silicon.
Bridging the gap between legacy servers and future supercomputers without knocking millions of users offline is extraordinarily difficult. We are watching the growing pains of planetary-scale artificial intelligence in real time. Until the new hardware is fully integrated, these capacity limits will keep testing OpenAI’s engineering teams.
