> And are you really trying to present that you have practical experience building comparable tools to an LLM prior to the Transformer paper being written?
I believe (could be wrong) they were talking about their prior GOFAI/NLP experience when referencing scaling systems.
In any case, is it really necessary to be so harsh about over-confidence and then go on to predict the future of solving hallucinations with your formal verification ideas?
Talk is cheap. Show me the code.