I'm against security theater, but this sounds like an information-visualization problem: if the image contains enough signal that an optimally trained eye could detect the contraband, then machine-learning algorithms should be able to detect suspicious items and render them with higher contrast.
In other words, the software should magnify anything out of the ordinary, like contraband. It's not foolproof of course, but would help.
I think the issue is that if you can produce anything approximately the size and density of a gut, you can get it past a scan - no machine learning is going to be able to pick up something that for all intents and purposes looks like a normal beer belly.
Sorry, but what you're suggesting is the equivalent of saying "every airplane should fly by flapping its wings".
"if the image contains enough signal that an optimally trained eye could detect the contraband, then machine-learning algorithms should be able to detect suspicious items and render them with higher contrast."
The type of processing you'd run on an image to make it easy on the human eye is entirely different from what you'd do for ML applications.
I'm not talking about making it "easy on the human eye". I'm suggesting that they use the output from a classifier to make parts of the image stand out more.
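To make the suggestion concrete: here's a minimal sketch of that idea, assuming you already have a per-pixel suspicion score from some classifier (the `anomaly_map` input is hypothetical; producing it is the hard part). It just boosts local contrast in proportion to the score, leaving low-score regions untouched:

```python
import numpy as np

def highlight_anomalies(image, anomaly_map, gain=2.0):
    """Boost contrast where the classifier's suspicion score is high.

    image: 2D float array in [0, 1] (grayscale scan)
    anomaly_map: 2D float array in [0, 1], per-pixel score from
        some upstream classifier (assumed to exist; not shown here)
    gain: contrast multiplier applied in flagged regions
    """
    mean = image.mean()
    # Stretch pixel values away from the mean, then clamp to [0, 1].
    boosted = np.clip(mean + gain * (image - mean), 0.0, 1.0)
    weight = np.clip(anomaly_map, 0.0, 1.0)
    # Blend: flagged pixels get the boosted version, the rest stay as-is.
    return weight * boosted + (1.0 - weight) * image
```

The point isn't that this particular blend is right, just that the classifier's output feeds a display transform for the human operator, rather than replacing them.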
> Advanced imaging technology safely screens passengers for both metallic and non-metallic threats, including weapons and explosives, which may be concealed under a passengers’ clothing without physical contact to keep the traveling public secure.