OpenAI CEO Sam Altman says superintelligence could arrive in "a few thousand days"

A new AI age "will not be an entirely positive story"

A hot potato: The rapid advancement of generative AI in recent years has led to questions about when we might see a superintelligence – an AI that is vastly smarter than humans. According to OpenAI boss Sam Altman, that moment is a lot closer than you might think: "a few thousand days." His bold prediction, however, comes at a time when his company is reportedly trying to raise $6 billion to $6.5 billion in a funding round.

In a personal post titled The Intelligence Age, Altman waxes lyrical about AI and how it will "give people tools to solve hard problems." He also talks about the emergence of a superintelligence, which he believes will arrive sooner than expected.

"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.

Plenty of industry names have talked about artificial general intelligence, or AGI, being the next step in AI evolution. Nvidia boss Jensen Huang thinks it will be here within the next five years, while SoftBank CEO Masayoshi Son predicted a similar timeline, stating that AGI will land by 2030.

AGI is defined as a theoretical type of artificial intelligence that matches or surpasses human capabilities across a wide range of cognitive tasks.

Superintelligence, or ASI, goes a step beyond AGI by being vastly smarter than humans, according to OpenAI. In December, the company said the technology could be developed within the next ten years. Altman's prediction sounds more optimistic – a thousand days is about 2.7 years – but he is being quite vague by saying "a few thousand days," which could just as easily mean 3,000 days, or around 8.2 years. Masayoshi Son thinks ASI won't be here for another 20 years, or roughly 7,300 days.
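
For a sense of scale, the back-of-envelope conversions above can be checked with a few lines of Python (a minimal sketch assuming an average 365.25-day year; the specific day counts are the article's illustrative figures, not numbers from OpenAI):

```python
# Rough conversion of the timelines mentioned above into years.
# Assumes an average 365.25-day year; the day counts are the examples
# cited in the article, not figures from OpenAI.
DAYS_PER_YEAR = 365.25

timelines_days = {
    "1,000 days (low end of 'a few thousand')": 1_000,
    "3,000 days (another reading of 'a few thousand')": 3_000,
    "Masayoshi Son's ASI estimate (20 years)": 20 * 365,  # ~7,300 days
}

for label, days in timelines_days.items():
    print(f"{label}: {days:,} days ≈ {days / DAYS_PER_YEAR:.1f} years")
```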

Back in July 2023, OpenAI said it was forming a "superalignment" team and dedicating 20% of the compute it had secured toward developing scientific and technical breakthroughs that could help control AI systems much smarter than people. The firm believes superintelligence will be the most impactful technology ever invented and could help solve many of the world's problems. But its vast power might also be dangerous, leading to the disempowerment of humanity or even human extinction.

The dangers of this technology were highlighted in June when OpenAI co-founder and former Chief Scientist Ilya Sutskever left to found a company called Safe Superintelligence.

Altman says we are approaching the cusp of the next generation of AI thanks to deep learning. "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying 'rules' that produce any distribution of data)," he wrote.

"To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."

The post also claims that AI models will soon serve as autonomous personal assistants that carry out specific tasks for people. Altman admits there are hurdles, such as the need to drive down the cost of compute and make it abundant, which will require a lot of energy and chips.

The CEO also acknowledges that the dawn of a new AI age "will not be an entirely positive story." Altman mentions the negative impact it will have on the jobs market, something we're already seeing, though he has "no fear that we'll run out of things to do (even if they don't look like 'real jobs' to us today)."

It's significant that Altman wrote the post on his personal website, rather than OpenAI's, suggesting his claim isn't the official company line. The fact that OpenAI is reportedly looking to raise up to $6.5 billion in a funding round might also have prompted the hyperbolic post.