
"LLM’s can’t scope themselves to be strictly true or accurate"

This isn't true, though the techniques to do so (1) are not yet widespread and (2) decrease the generality of the model and its perceived effectiveness.



I'm interested to hear what these techniques are. Decreasing the generality will help, but I fail to see how that scopes the output. At best that mitigates the errors to an extent.


Requiring the answers to be automatically verifiable, or having answers be inputs to a reliable query system?
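
To make the "reliable query system" half of that concrete, here's a rough Python sketch under assumed names (generate_sql is a hypothetical stand-in for the LLM call, and the table/validation rules are illustrative, not anything from this thread): the model only proposes a query, the query is validated before execution, and the database rows, not the model's free text, become the answer.

    import sqlite3

    def generate_sql(question: str) -> str:
        """Hypothetical stand-in for the LLM call that maps a question to SQL."""
        return "SELECT count(*) FROM orders"

    def is_safe_select(sql: str) -> bool:
        """Reject anything that isn't a single plain SELECT statement."""
        lowered = sql.strip().lower()
        banned = (";", "insert", "update", "delete", "drop", "alter", "--")
        return lowered.startswith("select") and not any(tok in lowered for tok in banned)

    def answer_via_query(conn: sqlite3.Connection, question: str):
        sql = generate_sql(question)  # model output is only an intermediate artifact
        if not is_safe_select(sql):
            raise ValueError("model proposed a query that failed validation")
        # The returned rows, not the model's generated text, are the answer.
        return conn.execute(sql).fetchall()

The point of the sketch is the division of labor: anything the model hallucinates either fails validation or produces a query whose results can be checked against the database.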


I’d rather use the reliable query system!


conformal prediction


Predicting a set of answers at some confidence level would still result in hallucinated answers.


Low-probability answers get sent to a human and reviewed for model improvement.
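
For concreteness, here is a minimal split-conformal sketch in Python, assuming a scikit-learn-style classifier with predict_proba and a held-out calibration set (names like X_cal, y_cal, and the escalation policy are illustrative, not from the thread). When the prediction set isn't a single confident label, the answer is escalated to a human, as the parent comment suggests.

    import numpy as np

    def calibrate_threshold(model, X_cal, y_cal, alpha=0.1):
        """Split-conformal score threshold targeting ~(1 - alpha) coverage."""
        probs = model.predict_proba(X_cal)                      # shape (n, n_classes)
        # Nonconformity score: 1 - probability assigned to the true class.
        scores = 1.0 - probs[np.arange(len(y_cal)), y_cal]
        n = len(scores)
        # Finite-sample corrected quantile of the calibration scores.
        level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        return np.quantile(scores, level, method="higher")

    def prediction_set(model, x, q_hat):
        """All labels whose nonconformity score falls under the threshold."""
        probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
        return [label for label, p in enumerate(probs) if 1.0 - p <= q_hat]

    def answer_or_escalate(model, x, q_hat, max_set_size=1):
        """Answer only when the set is a single label; otherwise escalate."""
        labels = prediction_set(model, x, q_hat)
        if 0 < len(labels) <= max_set_size:
            return {"answer": labels[0], "escalate": False}
        # Empty or large prediction set = low confidence -> human review.
        return {"answer": None, "escalate": True}

The coverage guarantee only says the true label lands in the set about (1 - alpha) of the time; it doesn't stop the single-label case from occasionally being wrong, which is the objection upthread.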



