In my experience this is very much true of the current state of real-time rendering (and there's nothing really wrong with that, because it's just not possible to run anything resembling physically accurate algorithms in real time). However, in the world of offline rendering and raytracing, there has been considerable work on physically accurate rendering. A lot of approximations are still made due to memory/CPU constraints, of course, but it is a different world. "Physically Based Rendering" by Pharr and Humphreys is a good intro to this way of doing things.
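To give a flavor of what "physically based" means here: offline renderers estimate light-transport integrals (the rendering equation) by Monte Carlo sampling rather than by real-time approximations. Here's a minimal sketch of that idea, under my own toy setup (not from the book): estimating the irradiance from a uniform sky of radiance L by sampling the hemisphere, which should converge to the analytic answer pi * L.

```python
# Toy Monte Carlo sketch of the "physically based" approach: estimate the
# hemisphere integral of L * cos(theta), whose analytic value is pi * L.
# Names and setup are illustrative, not from Pharr & Humphreys.
import math
import random

def sample_hemisphere(rng):
    """Uniformly sample a direction on the unit hemisphere (z >= 0)."""
    z = rng.random()                     # cos(theta); uniform z is uniform in solid angle
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(radiance=1.0, samples=200_000, seed=42):
    """Monte Carlo estimate of integral over the hemisphere of L * cos(theta) d(omega)."""
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)          # uniform hemisphere pdf
    total = 0.0
    for _ in range(samples):
        _, _, cos_theta = sample_hemisphere(rng)
        total += radiance * cos_theta / pdf
    return total / samples

print(estimate_irradiance())  # converges to pi * L ~= 3.14159
```

Real path tracers do essentially this, recursively, per pixel, with importance sampling and far better variance reduction -- hence the memory/CPU constraints mentioned above.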
If that were true, we'd see the evidence in Hollywood. But nobody knows how to make a realistic fully-simulated video. The reason everyone believes that it's just around the corner is because those academics talk with authority on the topic, and the still frames look pretty convincing. But still frames are completely different from video -- the human visual system processes video differently.
The only technique we know of that produces realistic video is mixing actual, real footage with simulated content. That's very effective, but it's unsatisfying for obvious reasons. I think it hints at a way toward fully simulated realistic video, though.
To me the most interesting thing is not just a fully simulated video but a fully simulated interactive scene using VR or AR, and that is obviously an even bigger challenge. I don't personally think that either of these objectives is even close to being just around the corner, but I do think we are moving toward them. I have no idea how many additional orders of magnitude in computing power would be necessary to create a convincing simulation. The journey in that direction is a fun challenge though, right?
I did just read your previous posts, and it sounds like you have a pretty interesting history of working on all of this stuff.