
>We are now confident we know how to build AGI as we have traditionally understood it

But we don't even have good definitions to work with. Does he mean AGI as in "sentience", AGI as in "superintelligence", AGI as in "can do everything textual (text in, text out) that a 95th percentile human can do", or AGI as in "can do everything a human on a computer can do" (closed-loop interaction with a compiler, debugging, etc.)?



He means "AGI is whatever it is will get me funding". AGI will be one thing to researchers, another to finance, another to your Grandma, and he will claim it to be here but also just around the corner.


FWIW OpenAI themselves give a reasonably specific definition of AGI in their Charter [1]:

> highly autonomous systems that outperform humans at most economically valuable work

But I guess the "as we have traditionally understood it" bit from Sam's phrasing may imply that in fact he means something other than OpenAI's own definition?

[1] https://openai.com/charter/


The circularity is an issue: as machines do work, it becomes less valuable.

Highly autonomous systems already outperform humans for the vast majority of the economically valuable work of the 1400s economy.


It’s AGI as in Attract Gullible Investors. ;)


> Does he mean AGI as in "sentience", AGI as in "superintelligence"

No. OpenAI and Microsoft changed their definitions of AGI to raise more money: [0]

AGI used to mean something years ago, but at this point it is a meaningless term, since the definition differs depending on who you ask.

It may mean "Super Intelligence" to AI researchers, "Raise more money to reach AGI" to investors, "Replace workers with AI Agents" to companies or "Universal Basic Income for all" to governments.

It could mean any of the above.

More accurately, it may also mean: "To raise more and more money to achieve "AGI" and replace all economically valuable work with AI agents (with no alternatives) whilst selling millions of shares to investors to enrich ourselves and changing from a non-profit to a for-profit for the benefit of humanity."

The last definition is what is happening and that looks like a scam.

[0] https://archive.ph/pmudc


The definition of AGI was never about superintelligence; that's ASI. Current LLMs are ANI: Artificial Narrow Intelligence. For me, AGI would be "can do everything a 95th percentile human on a computer can do".

It doesn't even have to be that smart: if you took someone with an IQ of 80, we would still classify them as human-level intelligence, definitely smarter than most other animals, and such people are still useful to society and can provide value in many labour tasks.

I think many people wrongly assume ChatGPT is just one AI. It is more like millions of instances of such an AI running at the same time, and it can probably scale to hundreds of millions of such dumb AIs. With humans, you would have to have hundreds of babies and babysit/train them for a minimum of 5-10 years before they are useful.


Allegedly, attaining AGI will get them out of the Microsoft deal (if I understood a recent Pivot episode correctly).

Notice the lawyerly « AGI as we have traditionally understood it »


The specific definition they have is an AI system that generates $100 billion in profits.

https://www.theverge.com/2024/12/26/24329618/openai-microsof...


That's so orthogonal to AGI that it was a huge win for Altman to make it the criteria.


He did, though, mention that it is AGI as "they" have traditionally understood it. I think it's o3, the super-expensive o1.


To be clear, Altman doesn't say they have achieved AGI, but that they "know how to build AGI". That's the difference between a product and a roadmap, which are very different things, especially in cutting-edge technology. Personally, I don't think they have the pieces to build anything more than an expensive agent that acts on hallucinations just as readily as on accurate assessments.



