Science without the gobbledygook

This Week’s Science News from SWTG

AGI Problems & Cosmological Crises

Marcus
Mar 28, 2025
The Path to AGI is Coming Into View

Artificial General Intelligence. I think I can finally see a path for how we’ll get there. Let’s have a look.

There’s no consensus definition for what exactly Artificial General Intelligence, AGI for short, means. But I think most people mean intelligence comparable to that of humans or above.

And the general sense I am getting from people in AI research is that AGI is a few years away, maybe a little more, but likely less than a decade.

Demis Hassabis, chief executive of Google DeepMind, for example, has recently said that AGI is probably just a handful of years away.

Hassabis: “We've been working on this for more than 20 plus years. We’ve sort of had a consistent view about AGI being a system that’s capable of exhibiting all the cognitive capabilities humans can, and I think we're getting, you know, closer and closer. But I think we’re still probably a handful of years away.”

Sam Altman, the CEO of OpenAI, wrote last year that “It is possible that we will have superintelligence in a few thousand days.”

And last month he added, somewhat vaguely, on his blog that “systems that start to point to AGI are coming into view”, where he defines AGI, generally speaking, as “a system that can tackle increasingly complex problems, at human level, in many fields.”

If nothing else, we can credit Altman for single-handedly keeping blogs alive.

Dario Amodei, the CEO of Anthropic, doesn’t like the expression AGI because to him it’s more of a “marketing term”. But he also believes that the-AGI-he-doesn’t-want-to-call-AGI is just a few years away.

Amodei: “AGI has never been a well-defined term for me. I’ve always thought of it as a marketing term. But the way I think about it is at some point we’re going to get to AI systems that are better than almost all humans at almost all tasks. The term I’ve used for it is “a country of geniuses in a data centre”. It’s a sort of evocative phrase for all the power and all the positive things and, you know, all of the negative things. That’s the thing that I think that we are quite likely to get in the next two or three years.”

Generally, people in Silicon Valley say they “Feel the AGI.” Kevin Roose, who clearly spends too much time talking to these people, wrote in a recent New York Times essay that the first claims of AGI will come “probably in 2026 or 2027, but possibly as soon as this year.”

Then again, people who work in tech development tend to be, shall we say, somewhat overoptimistic. The cognitive scientist Gary Marcus, who has a more grounded outsider view, recently wrote: “I think there is almost zero chance that artificial general intelligence will arrive in the next two to three years, especially given how disappointing GPT 4.5 turned out to be.”

What he means is that GPT 4.5 was basically too little, too late. But even Marcus thinks that AGI will come; it’s just that the current models, large language models or transformer models, aren’t going to get us there. As I said previously, I agree with him on that. Honestly, the idea that you can breed intelligence from text seems somewhat idiotic to me.

© 2025 Sabine