OpenAI warns superintelligent AI may soon surpass humans, calls for urgent action to keep society intact
OpenAI has warned that superintelligent AI could soon surpass human capabilities, urging governments to rethink jobs, taxes, and economic systems. The company's CEO Sam Altman says such systems may even outperform top executives and scientists within a few years.
by Ankita Garg · India Today

In Short
- OpenAI alerts about imminent superintelligent AI surpassing humans
- The organisation calls for urgent measures to safeguard society
- The company calls for new policies, including shorter workweeks
OpenAI has issued one of its strongest warnings yet about the future of artificial intelligence, saying the world may soon enter an era where machines outperform even the smartest humans and society is not fully prepared for what comes next. In a new policy paper and recent public remarks, the company has called for urgent changes to how economies, jobs, and governance systems function, arguing that the transition to "superintelligence" could happen faster than expected.
The 13-page document, titled "Industrial Policy for the Intelligence Age," lays out a broad set of ideas aimed at preparing governments and institutions for this change. It describes a future where AI systems are no longer just tools for specific tasks, but entities capable of handling complex work that currently takes humans weeks or even months to complete.
Recently, speaking at the AI Impact Summit, Sam Altman suggested that this transformation may arrive sooner than many expect. "By the end of 2028, more of the world's intellectual capacity could reside inside data centers than outside them," he said, hinting at a dramatic change in where knowledge and problem-solving power exist.
Altman also spoke about how such systems could change leadership and research roles. "A superintelligence, at some point on its development curve, would be capable of doing a better job as CEO of a major company than any executive could, or certainly doing better research than our best scientists," he said, emphasising both the promise and the uncertainty of these systems.
From AI tools to superintelligence
The document explains how AI has evolved rapidly over the past few years. Systems that once handled narrow, repetitive tasks can now complete more general work that previously took humans hours of effort. If this pace continues, the next step is systems that outperform humans across a wide range of intellectual tasks, even when those humans are using AI as assistance.
OpenAI describes this stage as “superintelligence,” where machines exceed human capabilities in meaningful ways. While the company acknowledges that no one knows exactly how this transition will unfold, it believes preparation must begin now, rather than after disruption has already taken place.
At the same time, the paper highlights the potential upside. It compares superintelligence to past breakthroughs like electricity and industrial machinery, suggesting it could accelerate scientific discoveries, reduce the cost of essential goods, and open up new forms of work and creativity.
Why the current system may not be enough
Despite the benefits, OpenAI warns that existing policies may not be equipped to handle the scale of change. The report notes that earlier technological revolutions also caused disruption, but AI could move faster and impact more sectors at once.
Some of the key concerns include large-scale job disruption, concentration of wealth in a few companies, misuse of powerful AI systems, and the inability of current regulations to keep up. The document clearly states that small policy tweaks will not be enough to address these challenges.
Instead, it calls for a complete rethink of economic structures, including how people are paid, how work is organised, and how governments collect taxes.
OpenAI looks at rethinking jobs, taxes, and work hours
One of the more striking suggestions in the report is the possibility of shorter workweeks. As AI boosts productivity, companies may be able to maintain output while reducing working hours, with ideas like 32-hour workweek pilots being explored.
The document also raises questions about the future of taxation. As income patterns shift due to automation, governments may need to rely more on taxing capital rather than labour. It also mentions the possibility of taxes linked directly to automation, along with incentives for companies to retain and retrain workers instead of replacing them.
Another proposal is the creation of a “Public Wealth Fund,” which would allow citizens to benefit directly from AI-driven economic growth, rather than concentrating gains in a few organisations.
A major theme throughout the paper is the idea of keeping people at the centre of the AI transition. OpenAI proposes something it calls the “Right to AI,” which would treat access to AI tools as a basic necessity, similar to electricity or the internet. This would include affordability, infrastructure, and training so that more people can benefit from the technology.
At the same time, the company stresses the need to manage risks carefully. It outlines concerns around cybersecurity threats, biological misuse, and systems behaving in ways that are not aligned with human intent.
To address this, the report proposes stronger safety frameworks, including continuous monitoring, independent audits of high-risk systems, and tools to verify AI-generated content. It also suggests creating “model-containment” strategies for situations where systems behave unpredictably.
OpenAI argues that the transition to superintelligence should not be decided by a small group of companies or governments alone. The document calls for a more democratic approach, where public input and transparency play a larger role in shaping how AI is developed and used.
It also suggests the need for international cooperation, arguing that global standards and shared frameworks will be essential to manage risks effectively.