We had a somewhat undocumented 120Hz mode on DK2 where both eyes would get the same image (meaning everything was at infinity distance), but I’m not sure if any developers used it.
It's funny. In my experience, stereo rendering is not a make-or-break issue in VR. I mean, it depends on the app design, of course, but a lot of apps can get away with monoscopic without most people noticing.
I had a regression in an app I was building where it was rendering in a monoscopic mode, and it actually took me a long time to figure out what was "wrong" with the app. Apparently, I was the only one who thought it was wrong. It went on for at least a month before I noticed a problem and another month before I eventually had the time to diagnose it. In all that time, nobody made any comment about it; people were enjoying the app.
I eventually figured out that you really only notice stereo rendering in modern VR headsets if you have a lot of near-field objects to interact with, and you interact with them through direct hand gestures (either hand tracking or bumping things with controllers). So, stuff needs to be within a meter of the user for them to really notice "this isn't right".
There is a gap past a meter and before 10 meters where it's hit or miss whether anyone is going to notice the lack of stereo rendering. The more the user is moving around, the less likely they are to notice.
And then, past 15 meters, I don't think any VR headset on the market (other than maybe the Varjo VR-1, but I've never seen one myself) has a fine enough pixel pitch for anybody to notice the lack of stereo rendering.
Now, that's just the question of where stereo rendering takes effect. Part of what I think makes VR good is the ability to get up close and personal with the environment, to manipulate things in your hands, so I think "good" VR apps are typically going to need stereo rendering. But it was still surprising to see that it's not a universal issue.
I think 10 meters is also the point in real life where 3D stops being noticeable, so I'm not surprised you ended up with that number. (I just searched and found numbers between 18 ft and 20 meters.)
I don't think pixel pitch is the limiting factor anyhow, because head movement lets the brain achieve sub-pixel accuracy. It's amazing what the human brain lies about when it comes to vision.
This is a "you may not be able to unsee" thing once you read this, but... one of the major problems I have with 3D movies is precisely that 3D based on image separation extends out way less than people think it does, because the brain naturally picks up with heuristics and prior knowledge and a lot of other things. So if you're on a closeup of someone's face or something, sure, it can be in 3D. But if you've got a big action scene with, say, helicopters flying around and shooting at people on the ground, the camera is likely to be farther than 20 meters away. If you can see any "3D" at all, it's wrong. It should all be at the plane-at-infinity for you. Which means that if you can see "3D", rather than being an epic fight to save the world or whatever, you've got a fight taking place on a diorama in front of you, about the size of a large table or something.
Even something at the scale of the Transformers brawling with each other about a block away from you should all be on the plane-at-infinity to you.
It kinda takes the drama out of 3D action. Technically, the 2D version of the movie is probably more accurate at the important moments in a lot of genres.
Yeah, the vast majority of 3D movies have always had that diorama effect for me, though I tend to discount my personal experience with anything stereoscopic: I had a lazy eye as a kid, ultimately corrected by surgery, but late enough that my brain mostly ignores the information from that eye. Not totally, but mostly.
In any case, I don't find the effect unpleasant, and it doesn't pull me out of the immersion, because frankly I don't feel that immersed in AAA CG-heavy action movies anyhow. To me it's like watching a fun demoscene production with a neat effect.
The only time I recall it being a bit tiresome was when friends dragged me to a few horror movies where the 3D was obviously low-effort, crank-it-to-11-for-the-jump-scare stuff.
Yeah, I haven't done the math, so I don't know what the exact boundary would be. Supposedly Quest 2's tracking is precise down to ~7mm and SteamVR's down to 75mm, so depending on the system, micro head movements may not be larger than the IOD. Assuming we discount micro head movements, the IOD is the only stereo cue. I think I remember seeing someone do the math once and conclude that devices like the Oculus DK2 couldn't show anything beyond 10m for a 0.07m displacement, or something like that. Notionally, for some pixel size S, you will not be able to see a discernible difference at distance Z for view displacement X; the left and right eyes will resolve objects to the same pixels.
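A minimal back-of-the-envelope version of that calculation, assuming a small-angle model; the IOD and pixels-per-degree values below are illustrative guesses, not vendor specs:

    import math

    def mono_distance(iod_m: float, pixels_per_degree: float) -> float:
        """Distance (m) beyond which an object's disparity relative to the
        plane-at-infinity drops below one pixel, i.e. the left and right
        eyes resolve it to the same pixels."""
        # Angular size of one pixel, in radians.
        pixel_angle = math.radians(1.0 / pixels_per_degree)
        # Small-angle approximation: disparity (radians) ~= IOD / Z, so the
        # disparity shrinks to one pixel when Z == IOD / pixel_angle.
        return iod_m / pixel_angle

    # Guesses: 0.063 m IOD; ~10 px/deg is DK2-ish, ~20 px/deg is Quest 2-ish.
    print(round(mono_distance(0.063, 10)))  # ~36 m
    print(round(mono_distance(0.063, 20)))  # ~72 m

One-pixel disparity is a generous threshold, though; lens blur, tracking noise, and perceptual limits would pull the usable range well below these numbers, which is how you could end up around the ~10m figure I half-remember.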
Theoretically, yes, but there would be a lot of overhead for Z-sorting objects as well as for performing a multi-stage render like that. I am only guessing (I certainly have never tried it), but I bet it comes out to a wash at best, and if that's the case, it'd be better to keep the rendering pipeline simple. Large-scale worlds will have LOD for models and terrain at distance anyway, which helps with the rendering complexity of far objects.
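To make the tradeoff concrete, here's a rough sketch of what that kind of two-stage render would imply; the types, cutoff distance, and draw-call accounting are hypothetical stand-ins, not any real engine's API:

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        distance_m: float  # distance from the viewer

    STEREO_CUTOFF_M = 10.0  # assumed mono/stereo split point

    def render_frame(scene: list[SceneObject]) -> int:
        """Counts draw calls to show where the savings (and overhead) are."""
        near = [o for o in scene if o.distance_m < STEREO_CUTOFF_M]
        far = [o for o in scene if o.distance_m >= STEREO_CUTOFF_M]
        draws = 0
        for _ in far:                    # pass 1: far geometry drawn once,
            draws += 1                   # shared by both eyes
        for _eye in ("left", "right"):   # pass 2: near geometry drawn per eye
            for _ in near:
                draws += 1
        # Not counted: the per-eye composite of the shared far layer and the
        # per-frame cost of partitioning ("Z-sorting") the scene, which is
        # the overhead that could make the whole thing a wash.
        return draws

    scene = [SceneObject("sword", 0.5), SceneObject("tower", 80.0)]
    print(render_frame(scene))  # 3 draws instead of 4 for naive stereo

The partition itself is cheap here, but in a real scene graph every object has to be classified every frame, and objects straddling the cutoff need special handling.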
You can also pre-process things and bake the super-far details into a skybox image. A two- or three-block radius of real, 3D city buildings with a cityscape skybox looks pretty good. A forest scene can use object instancing to really clutter up the medium field and still get great results from even a generic landscape skybox.
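As a simple distance-bucketing rule, that might look like the sketch below; all the thresholds are invented for illustration:

    def render_strategy(distance_m: float) -> str:
        """Pick how to represent an object based on how far away it is."""
        if distance_m < 10.0:
            return "full 3D geometry"      # near field, where stereo matters
        if distance_m < 300.0:
            return "instanced/LOD models"  # cheap clutter for the medium field
        return "baked skybox"              # pre-rendered; at infinity anyway

    for d in (0.5, 50.0, 2000.0):
        print(d, "->", render_strategy(d))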
Yes, that's why nobody used it. The plausible use case was something like watching 24Hz, 30Hz, or 60Hz video content without the judder that 75Hz would have, in situations where depth didn't matter.