
Chain of thought is just a way of squeezing more juice out of the lemon of LLMs; I suspect we're at the stage of running up against diminishing returns, and we'll have to move to different foundational models to see any serious improvement.


The so-called "scaling laws" themselves express diminishing returns.

How was "if we grow resources exponentially, errors decrease only linearly" ever seen as a good sign?
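To make that concrete, here's a toy sketch of the point. It assumes a power-law relationship loss(C) = a * C^(-alpha), the general form reported in scaling-law studies; the constants `a = 10.0` and `alpha = 0.05` below are illustrative placeholders, not fitted values from any real study:

```python
# Toy illustration of why power-law scaling implies diminishing returns.
# Assumed form: loss(C) = a * C**(-alpha), where C is compute.
# The constants below are illustrative, not fitted to any real model.

def compute_for_loss(target_loss, a=10.0, alpha=0.05):
    """Invert loss = a * C**(-alpha) to find the compute C required."""
    return (a / target_loss) ** (1.0 / alpha)

# Each fixed (linear) step down in loss multiplies the required
# compute by a constant factor -- i.e., compute grows exponentially.
for loss in [5.0, 4.0, 3.0, 2.0]:
    print(f"loss {loss}: compute ~ {compute_for_loss(loss):.3e}")

# Halving the loss multiplies compute by 2**(1/alpha);
# with alpha = 0.05 that factor is 2**20, roughly a million.
factor = compute_for_loss(2.5) / compute_for_loss(5.0)
print(f"compute factor to halve loss: {factor:.0f}")
```

The smaller the exponent alpha, the more brutal the exchange rate: with a small alpha, a modest linear gain in loss demands orders of magnitude more compute.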




