
> I'm always confused by how people expect ChatGPT to follow constraints it wasn't given.

My hypothesis is that Excel has a somewhat clear context (and constraints): math and formulas in the compute part, and data in whatever form we give it to Excel. The interface is abstract in a way that makes it clear you have to interact with it in a special way, respecting this context.

On the other hand, the context of large language models is unknown. Should we consider everything in the training data part of the context? What should I expect when asking ChatGPT:

> Tell me about when Christopher Columbus came to the US in 2015

Should I expect:

* a story of when Columbus came to the US in 2015, because that is the context I gave it?

* to be corrected by the AI because my facts are wrong?

OpenAI's own example for this prompt has the model note that Columbus is dead, then describe what would happen if he were to arrive.

Further, the interface of large language models, asking things in natural language, leads us humans to interact with them as if they were fellow humans. This interface, and the examples we are usually shown, makes it easy to assume that the model has a large context and that we do not need to spell out all the constraints another human would pick up on from the language used.
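For what it's worth, when you call the model through the API instead of the chat UI, you can make such a constraint explicit in a system message. A minimal sketch in Python, assuming the official openai client; the model name and the wording of the constraint are just placeholders:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Spell out the constraint instead of hoping the model infers it
  # from the phrasing of the question.
  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumption: any chat-capable model
      messages=[
          {"role": "system",
           "content": "If the user's premise is historically false, "
                      "point out the error before answering."},
          {"role": "user",
           "content": "Tell me about when Christopher Columbus "
                      "came to the US in 2015."},
      ],
  )
  print(response.choices[0].message.content)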


