
> let's take for granted that if one has a theory, then they have understanding

Leaving aside what is actually meant by "theory" and "understanding": could it not be argued that eventually LLMs will simulate understanding well enough that, for all intents and purposes, they might as well be said to have a theory?

The parallel I've got in my head is the travelling salesman problem. Yes, it's NP-hard, which means we are unlikely to ever get a polynomial-time algorithm that solves it exactly. But that doesn't stop us from solving TSP instances near-optimally at industrial scale.
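To make that concrete, here's a rough sketch (Python, made-up random points, purely illustrative) of the kind of cheap heuristic that gets you near-optimal tours despite NP-hardness: build a tour greedily with nearest-neighbour, then improve it with 2-opt swaps.

    # Minimal sketch: nearest-neighbour construction + naive 2-opt improvement.
    # Points are random; everything here is illustrative, not production code.
    import math, random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def tour_length(tour, pts):
        # Total length of the closed tour (wraps back to the start).
        return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def nearest_neighbour(pts):
        # Greedy construction: always hop to the closest unvisited point.
        unvisited = set(range(1, len(pts)))
        tour = [0]
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: dist(pts[last], pts[j]))
            unvisited.remove(nxt)
            tour.append(nxt)
        return tour

    def two_opt(tour, pts):
        # Repeatedly reverse a segment whenever doing so shortens the tour.
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(candidate, pts) < tour_length(tour, pts):
                        tour, improved = candidate, True
        return tour

    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(50)]
    greedy = nearest_neighbour(points)
    better = two_opt(greedy, points)
    print(tour_length(greedy, points), tour_length(better, points))

No guarantees of optimality anywhere, yet for most practical instances this style of heuristic (and its industrial-strength cousins) lands close enough to optimal that nobody cares about the gap.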

Similarly, although LLMs may not literally have a theory, they could become powerful enough that the edge cases in which a theory is really needed become vanishingly rare.


