You need to engineer the system so that when the AI states something, it has to give a command that supports what it says and explain how the command demonstrates that it is true. The command should then actually be executed, and its output or error fed back to the AI so that it can confirm the statement or correct it.
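A minimal sketch of that loop, assuming a `model` callable that wraps whatever LLM API you use; the `verified_answer` name, the `CMD:` convention, and the prompt wording are all made up for illustration:

```python
import subprocess

def verified_answer(model, question, max_rounds=3):
    """Ask the model for a claim plus a shell command that supports it,
    run the command for real, and feed the output back until the model
    confirms or corrects its claim."""
    prompt = (
        f"{question}\n"
        "State your answer, then give ONE shell command whose output "
        "supports it, on a line starting with 'CMD: ', and explain how "
        "that output demonstrates the claim."
    )
    reply = ""
    for _ in range(max_rounds):
        reply = model(prompt)
        # Pull out the command the model offered as evidence, if any.
        cmd = next((line[5:] for line in reply.splitlines()
                    if line.startswith("CMD: ")), None)
        if cmd is None:
            return reply  # no verifiable claim offered; return as-is
        # Actually execute the command and capture its output or error.
        result = subprocess.run(cmd, shell=True, capture_output=True,
                                text=True, timeout=30)
        evidence = result.stdout or result.stderr
        # Feed the real output back so the model can confirm or correct.
        prompt = (
            f"Your command `{cmd}` produced:\n{evidence}\n"
            "Does this confirm your claim? Confirm or correct it, and "
            "give another 'CMD: ' line if further checking is needed."
        )
    return reply
```

The point of running the command outside the model is that the evidence is grounded in the real environment rather than in another generated answer, which is what closes the feedback loop.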
It drives me crazy that they think a system with no feedback loop can always be accurate. Only perfect mathematics can work like that; any LLM-like system needs a feedback loop.
Excellent idea. We do internally feed the answers back to the system to improve its own inputs and outputs. The funniest part of this experience has been finding cases where even the humans were hallucinating: "Hey, I thought this was shut down?!" or "I can't find the bucket!" Even on a bad day, the humans are still ahead, though.