
> Another issue is that for many tasks, not much data is actually available. The big corps like Google/Meta etc. push the big "foundational" models because in the consumer space there is ample data available. But there are very important segments (notably in the professional space - applications for accountants, lawyers, journalists, pharmacologists - all of which I have conducted projects in/for) where training data can be constructed for a lot of money, but it will never reach the size of the set of today's FB likes.

This is a really important point. GPT-x knows nothing about my database schema, let alone the data in that schema; it can't learn it, and it's too big to fit in a prompt.

Until we have AI that can learn on the job, it's like a delusional consultant who thinks they have all the solutions on day 1 but understands nothing about the business.



Not as big a deal as you think. Embedding, chunking, and retrieval already work well for arbitrary outside knowledge (databases, books, codebases, etc.).
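
The chunk-embed-retrieve pattern can be sketched in a few lines. This is a toy illustration: the `embed()` below is just a bag-of-words counter standing in for a real embedding model (e.g. a sentence-transformer), and the schema text and function names are made up for the example.

```python
# Toy sketch of chunking + embedding retrieval (RAG-style).
# embed() is a bag-of-words stand-in for a real embedding model,
# so this runs without any model, API, or vector database.
import math
from collections import Counter

def chunk(text, size=8):
    """Split text into overlapping chunks of `size` words (50% overlap)."""
    words = text.split()
    step = max(1, size // 2)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def embed(text):
    """Toy 'embedding': lowercase word counts. Swap for a real model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical schema description, flattened to text for retrieval.
schema_doc = (
    "table customers has columns id name email created_at "
    "table orders has columns id customer_id total placed_at "
    "table invoices has columns id order_id amount due_date"
)
chunks = chunk(schema_doc)
top = retrieve("orders customer_id total placed_at", chunks, k=1)
```

Only the retrieved chunks (here `top`) would be placed into the prompt, which is how a schema too big to fit in context can still be consulted piecemeal.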



