It's a serious problem that these tools are being pushed as trustworthy when they are anything but.
On an almost daily basis I deal with some sort of hallucination, whether in code or in summarizing something, and we see it constantly on social media when people try to use Google's AI summary as a source of truth.
Let's not lie to push an agenda about what these models are capable of. They are very powerful, but they make mistakes. There is zero question about that, and they make them quite often.
The problem isn't just that they hallucinate; the problem is that we have comments like yours trying to downplay it. Then we have people for whom it is right just enough times that they start trusting it without double-checking.
That is the problem: it is right often enough that you just start accepting the answers. That leads to things like scripts that grab data and put it into a database without checking. That's fine if it is not business-critical data, but it's not really fine when we are talking about health care data or... oh, I don't know, police records, like a recent post was talking about.
If you are going to use it for your silly little project, or you're going to bring down your own company's infrastructure, go for it. But let's not pretend the problem doesn't exist while we shove this technology into far more sensitive areas.
I think you're exaggerating. You're imagining the worst, but your argument basically boils down to not trusting people to handle it, and calling me a liar. Good one.