
In the past I've had GPT4 output references with valid DOIs. The problem was the DOIs were for completely different (and unrelated) works. So you'd need to retrieve the canonical title and authors for each DOI and cross-check them.
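A rough sketch of that cross-check, assuming you resolve each DOI through the public Crossref REST API (`https://api.crossref.org/works/{doi}`) and do a loose title comparison. The function names and the matching heuristic here are illustrative, not from any real checker:

```python
# Hypothetical DOI cross-checker: fetch canonical metadata for a DOI from
# Crossref and compare it against the title the LLM claimed for it.
import json
import re
import urllib.parse
import urllib.request


def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace for a loose comparison."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()


def titles_match(claimed: str, canonical: str) -> bool:
    """Loose match: normalized titles are equal, or one contains the other."""
    a, b = normalize(claimed), normalize(canonical)
    return a == b or a in b or b in a


def crossref_metadata(doi: str) -> dict:
    """Fetch canonical metadata for a DOI from Crossref (network required)."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]


def check_citation(doi: str, claimed_title: str) -> bool:
    """True if the DOI's canonical title plausibly matches the claimed one."""
    meta = crossref_metadata(doi)
    canonical = meta.get("title", [""])[0]
    return titles_match(claimed_title, canonical)
```

A valid-but-wrong DOI fails this check even though the DOI itself resolves, which is exactly the failure mode above.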


A classic case.

I work on Veracity https://groundedai.company/veracity/ which does citation checking for academic publishers. I see stuff like this all the time in paper submissions. Publishers are inundated with it.


Don’t publishers ban authors who attempt such shenanigans?


And then make sure the arguments and evidence it presents are as the LLM represented them to be.


At which point it’s more of a hassle to use an LLM than not.


And then check that the cited article was not itself an AI piece that managed to get published.



