
Are you suggesting that some future, more efficient machine-learning architecture won't be able to make use of extra compute at all (or will at least see greatly diminishing returns from it)?

Otherwise, I don't see why it wouldn't make sense to keep grabbing as many GPUs as feasible for the foreseeable future, at least until it's clear beyond much doubt that LLMs and other GenAI models have plateaued.
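
To make "diminishing returns" concrete: under a Chinchilla-style power law, loss keeps falling with more compute, but each doubling buys less. A minimal sketch, assuming a hypothetical loss(C) = L_inf + a * C^(-alpha) with made-up constants (none of these numbers come from the comment above):

    # Hypothetical power-law scaling: loss(C) = L_INF + A * C**(-ALPHA).
    # All constants are invented purely to show the shape of diminishing returns.
    L_INF = 1.7    # assumed irreducible loss
    A = 400.0      # assumed scale constant
    ALPHA = 0.35   # assumed scaling exponent

    def loss(compute: float) -> float:
        """Loss as a function of training compute (arbitrary units)."""
        return L_INF + A * compute ** -ALPHA

    # Each doubling of compute yields a smaller absolute improvement,
    # even though more compute always helps a little.
    compute = 1e6
    for _ in range(6):
        gain = loss(compute) - loss(2 * compute)
        print(f"compute={compute:.0e}  loss={loss(compute):.3f}  gain from doubling={gain:.4f}")
        compute *= 2

The point of the argument stands either way: as long as the gain per doubling is nonzero, buying more GPUs still moves the needle; the open question is whether a future architecture changes the exponent or flattens the curve entirely.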


