
Inference cost for leading models on complex tasks remains high. However, for a fixed model and task, inference cost has dropped drastically.

For example, https://a16z.com/llmflation-llm-inference-cost/ shows this trend.

OpenRouter's State of AI report (https://openrouter.ai/state-of-ai) makes the same observation.



