Hacker News

2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
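A quick back-of-the-envelope check of the two rates quoted above (the 10x-per-12-months and 2x-per-18-months figures are from the quoted post; the conversion to doubling times is just arithmetic):

```python
import math

# Claimed AI cost decline: 10x cheaper every 12 months.
# Implied doubling time of cost-efficiency:
ai_doubling_months = 12 / math.log2(10)   # ~3.6 months

# Moore's law as quoted: 2x every 18 months.
moore_doubling_months = 18.0

print(f"AI cost-efficiency doubles every {ai_doubling_months:.1f} months")
print(f"Moore's law doubles every {moore_doubling_months:.1f} months")
print(f"Ratio: roughly {moore_doubling_months / ai_doubling_months:.0f}x faster")
```

So the claimed rate is a doubling roughly every 3.6 months, about five times faster than the quoted Moore's law cadence.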

First, if the cost is coming down so fast, why the need for "exponentially increasing investment"? One could make the same exponential growth claim for, say, the electric power industry, which had a growth period around a century ago and eventually stabilized near 5% of GDP. The "tech sector" in total is around 9% of US GDP, and relatively stable.

Second, only about half the people with college degrees in the US have jobs that need college degrees. The demand for educated people is finite, as is painfully obvious to those paying off college loans.

This screed comes across as a desperate attempt to justify OpenAI's bloated valuation.



First, it requires exponential investment because

> 1. The intelligence of an AI model roughly equals the log of the resources used to train and run it.

Incremental improvements in intelligence require exponentially more resources. Only once that step is achieved can costs be reduced.
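A minimal sketch of that scaling claim, with hypothetical numbers; "intelligence" here is simply log10 of resources, as the quoted law states, not any real benchmark:

```python
import math

def intelligence(resources):
    # Quoted law: intelligence ~ log of the resources used to train/run.
    return math.log10(resources)

# Each +1 step of "intelligence" costs 10x more resources:
for compute in [1e6, 1e7, 1e8, 1e9]:
    print(f"resources={compute:.0e} -> intelligence={intelligence(compute):.0f}")
```

Inverting the relationship makes the point: a constant-sized gain in the output requires a constant *multiple* of the input, i.e. exponential growth in spend.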

Second, intelligence is not the same as education. Education is specialized, intelligence is general. Every modern convenience around you is the result of intelligence.


> The intelligence of an AI model roughly equals the log of the resources used to train and run it.

Didn't DeepSeek disprove this? They trained a roughly equal model on an order of magnitude less compute.


No, advances in efficiency were always expected (and indeed required because of this 'law').

The principle is that if DeepSeek had spent 10 or 100 times as much as they did, their model would have been a few times better.

This rule is intended to be applied on top of all of the other advances.


> The principle is that if DeepSeek had spent 10 or 100 times as much as they did, their model would have been a few times better.

If this is the case then OpenAI must have a model that is a few times better. Where is it?


If they have one I presume it would be in one of their data centres.

Are you implying that they might be hiding something by not releasing a new model based upon technology that's a month old?


"cost to use" != "cost to train"

Elsewhere he says that model intelligence is determined by the log of the resources used to train it, and this relationship has been constant for many orders of magnitude.

The implication is that it takes exponentially increasing investment to achieve a linear increase in intelligence.

Exponentially increasing costs sound like a bad thing.

But then he says the linear increase in intelligence generates exponentially increasing economic benefits.

Those exponentially increasing benefits sound like they might justify the exponentially increasing costs.


> Elsewhere he says that model intelligence is determined by the log of the resources used to train it, and this relationship has been constant for many orders of magnitude.

If this relationship is constant then you must have a way to quantify "intelligence" in such a way that it can be compared like this. Care to share it?


Ok, but according to this, linear investments generate a linear increase in economic benefits.

So why the need to go exponential?


Because increasing investments and profits linearly would mean sub-linear growth in intelligence.

Since Sam's only shot at immortality is reaching ASI in the next few years, sub-linear isn't fast enough.

The purpose of ASI isn't to generate economic benefits for everyone else.

The purpose of everyone else's economic benefits is to generate the ASI so Sam can live forever.


The costs are coming down fast because of the investment and would not happen independently of it. The same mechanism behind Moore's law generalizes as the learning/experience curve. You could also argue that demand for AI grows faster than the learning rate: e.g., if AI becomes 10x more cost-efficient and demand grows 100x as a result, you would still need exponential growth in inference investment.
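The arithmetic in that example (with the assumed 10x and 100x figures) works out to total spend still rising 10x despite the efficiency gain:

```python
# Assumed figures from the example above: efficiency improves 10x,
# and cheaper tokens induce 100x more demand.
efficiency_gain = 10    # tokens per dollar improves 10x
demand_growth = 100     # tokens demanded grows 100x

# Net change in total spend = demand growth / efficiency gain
spend_multiplier = demand_growth / efficiency_gain
print(f"Total spend changes by {spend_multiplier:.0f}x")  # -> 10x
```

This is the Jevons-style dynamic the comment describes: falling unit costs can raise, not lower, aggregate spend whenever demand growth outruns the efficiency gain.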


You could also make an argument that the bulk of college degrees are not economically useful. The demand for certain kinds of education and knowledge is certainly finite, but the demand ceiling for general intelligence seems much, much higher, and appears to be limited only by the cost of hiring ever-smarter people.



