> The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. ... Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
Moore waited at least five years [1] before deriving his law. On top of that, I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.
> Moore waited at least five years [1] before deriving his law.
OpenAI has been around since 2015. Even if we give them four years to ramp up, that's still five years' worth of data. If you're referring to the example he gave of token cost, that could just be him pulling two points off his data set to serve as an example. I don't know that that's the case, of course, but I don't see anything in his text that contradicts the point.
> I don't think that it makes much sense to compare commercial pricing schemes to technical advancements.
Yeah, price performance definitely seems to be the more important metric here. Anyone can get more compute by building a bigger and more expensive chip, but per-dollar metrics can't be gamed so easily. Though even in that plot, it's only doubled every ~2.3 years since 2008.
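Just to make the comparison concrete, here's a quick sketch converting each trend to an equivalent per-year improvement factor (the function name is mine, and the ~2.3-year doubling figure is the one mentioned above):

```python
def annual_factor(factor: float, months: float) -> float:
    """Improvement per 12 months implied by `factor`x every `months` months."""
    return factor ** (12.0 / months)

# Altman's claim: 10x per 12 months
altman = annual_factor(10, 12)            # 10x per year
# Moore's law: 2x per 18 months
moore = annual_factor(2, 18)              # ~1.59x per year
# Price performance doubling every ~2.3 years
price_perf = annual_factor(2, 2.3 * 12)   # ~1.35x per year

print(f"{altman:.2f}x vs {moore:.2f}x vs {price_perf:.2f}x per year")
```

So even granting the 10x/year figure at face value, it's being compared against trends that compound an order of magnitude more slowly.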
(I'd be especially interested in amortized price performance, i.e., the number of useful computations from a system over its lifetime, divided by the total cost to build, maintain, and operate it. That's going to be the ultimate constraint on what you can do with a given amount of funding.)
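As a toy sketch of what I mean by amortized price performance (all the numbers here are hypothetical, and the function name is mine):

```python
def amortized_price_performance(useful_ops: float,
                                build_cost: float,
                                lifetime_opex: float) -> float:
    """Useful computations per dollar over the system's whole lifetime:
    total useful operations divided by total cost to build, maintain,
    and operate."""
    return useful_ops / (build_cost + lifetime_opex)

# Hypothetical system: 1e21 useful FLOPs over its life,
# $200k to build, $300k to operate and maintain until retirement.
ppd = amortized_price_performance(1e21, 200_000, 300_000)
print(f"{ppd:.3g} useful FLOPs per dollar")
```

The interesting part is that "useful" is doing a lot of work in the numerator: idle time, failed runs, and redundancy all shrink it without showing up in headline chip specs.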
Yeah that was my first thought, don’t sully the name of Gordon Moore with this.
This sounds more like an insight into how things are working at OpenAI than anything else. And I'm not sure DeepSeek and others are going to follow his nice rules.
More generally, transistors are a technical phenomenon: they either get smaller (and work) or they don't. The thing I really don't feel enough folks appreciate about AGI is that it's a social phenomenon - not in the making of it, but in the pragmatic reality of it.
To a sufficient number of folks, the current version is AGI; I see students every day trust it more than themselves. To bosses it also might be: if it's more intelligent than your average employee, then that's sufficiently general intelligence to replace them. So far I've really tried, but beyond generating the most basic outline, I have yet to find a model that helps with my work, so it's not intelligent for me.
I'm aware of the benchmarks, but they don't matter outside of places like HN. Intelligence is, and likely always will be, social before it is technical, and that makes these laws... not useful?
1 year of data is indeed too little if you are trying to forecast one year ahead. Also, the pricing is set by OpenAI. We don't know that their actual costs decreased by that factor, only that they cut their prices.
The retail price, or the actual cost to deliver? Those are not the same thing. Cost to deliver could actually mean something. Retail pricing is approximately meaningless.
In context, retail pricing is very meaningful. The next sentence is "lower prices lead to much more use". That is, price elasticity of demand is large, and here price is retail price.
Cost and price are two different things. Sam said cost, but really he meant price, because even by his own admission, ChatGPT's services are running at a loss.
I know this is subjective, but he is comparing GPT-4 to 4o. The new model definitely felt lighter and faster, so probably cheaper for them to run, but at the same time it very often gave worse answers than GPT-4.
[1] http://cva.stanford.edu/classes/cs99s/papers/moore-crammingm...