
Realistic FoV in HMDs

Note: Shortly after the first version of this post went online, Tom Forsyth noticed some misconceptions and mailed me a long explanation discussing the problems. Below is the updated version. Thanks, Tom, for helping me understand the details!

 

It is best practice for virtual reality applications using a head mounted display to mimic the real-world vision of a human as closely as possible, to increase immersion and reduce simulator sickness. This includes accurate tracking as well as reproducing the field of view the user would have if the screen of the HMD were a pane of glass through which to watch the real world directly (there have been experiments with wider FoVs, but only for HMDs with very small screens).

Oculus VR also strongly suggests getting the FoV exactly right – but is “right for you” the same as “right for me”? Looking at a normal human with a given (measured) distance between the pupils (interpupillary distance, IPD), it sounds like a simple question of math.
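If it really were that simple, the per-eye FoV would follow from elementary trigonometry on the panel size, the eye-to-panel distance and the IPD. A minimal sketch of that naive calculation – all dimensions below are illustrative assumptions, not measured Rift values:

```cpp
#include <cmath>
#include <cstdio>

// Naive per-eye FoV for a bare panel at a fixed distance (no lenses).
// All dimensions are illustrative assumptions, not measured Rift values.
int main() {
    const double pi         = std::acos(-1.0);
    const double panelWidth = 0.15;  // metres, full panel width (assumed)
    const double eyeToPanel = 0.05;  // metres, eye-to-panel distance (assumed)
    const double ipd        = 0.064; // metres, an average interpupillary distance

    // The left eye sits ipd/2 left of the panel centre and sees the left half,
    // so its FoV is asymmetric: wider towards the outside than the inside.
    const double outerHalf = panelWidth / 2.0 - ipd / 2.0; // eye to outer panel edge
    const double innerHalf = ipd / 2.0;                    // eye to panel centre

    const double outerDeg = std::atan(outerHalf / eyeToPanel) * 180.0 / pi;
    const double innerDeg = std::atan(innerHalf / eyeToPanel) * 180.0 / pi;
    std::printf("left eye: %.1f deg outwards, %.1f deg inwards\n", outerDeg, innerDeg);
    return 0;
}
```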

In fact it’s a bit more complex: the screen inside an HMD is so close in front of the eyes that the user cannot focus on it – that’s one reason HMDs have lenses embedded. But the lenses do not only help with focusing, they also increase the FoV: when you look through the DK1 Rift without any lenses, for example, you will find that it is not only impossible to see a sharp image, but the field of view is also considerably smaller! A bit like sitting in front of a 24″ screen. How does the user focus on the Rift’s screen? The lenses make it appear infinitely far away. You can test this by moving the Rift away from your head (DK1, not DK2 with its positional tracking) and you will notice that the size of the objects you see on screen does not change – it’s bigger on the inside, and the gigantic screen is very, very far away!
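How the lens manages both tricks can be sketched with thin-lens optics: if the panel sits in the lens’s focal plane, every pixel leaves the lens as a collimated beam, so it appears at an angle that depends only on its position on the panel and the focal length – not on the eye’s distance. A rough sketch under that idealised assumption (focal length and panel size are placeholders):

```cpp
#include <cmath>
#include <cstdio>

// With the panel in the focal plane of a thin lens, a pixel at height h on
// the panel appears at the angle atan(h / f), independent of how far the eye
// sits behind the lens. Placeholder numbers, not actual Rift optics.
int main() {
    const double pi        = std::acos(-1.0);
    const double halfWidth = 0.0374; // metres, half of one eye's panel area (assumed)
    const double focalLen  = 0.035;  // metres, lens focal length (assumed)

    const double fovDeg = 2.0 * std::atan(halfWidth / focalLen) * 180.0 / pi;
    std::printf("per-eye horizontal FoV through the lens: %.1f degrees\n", fovDeg);
    return 0;
}
```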

In my opinion, we might want to take two additional aspects into account when calculating the FoV:

1. The actual distance of the eyes to the lenses (as Tom noted, the lenses make the screen appear infinitely far away, so a few extra centimetres to the screen don’t matter – the distance to the lenses and the plastic lens enclosure does). This depends on the shape of the user’s head and might be hard to measure with a method simple enough for “Average Joe”. Maybe we will soon see time-of-flight / depth cameras in every phone and notebook, just as they now all have photo cameras (hint: Apple has bought PrimeSense, the company behind the depth sensor of the Kinect). Then users could scan their faces to get not only a nice avatar but also some data from which to measure the IPD and head geometry ;-)

It was also noted that a 3D scan might be too noisy to be usable. You would also want to do the scan with closed eyes, as the wet eyeballs are not ideal for 3D scanning.

Maybe a bigger influence (and simpler to measure) is the user-adjusted distance of the screen to the head, as possible for example with the Rift: if you hit the lenses with your eyelashes, you can move the screen a few millimetres further away – this could be measured and used for a more realistic FoV. In our model of the infinitely far away screen this is irrelevant too. In practice, however, the lenses will not create perfectly collimated light; thankfully the Rift SDK 0.3 will calibrate this and the IPD for us. More details are included in the slides of the GDC talk by Tom; right now they aren’t online, but I’ll link to them as soon as they get released.
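Why the distance to the lens and its enclosure can still matter, even with an “infinitely far” virtual screen, can be sketched geometrically: each collimated beam is only about as wide as the lens aperture, so the further back the eye sits, the smaller the angular range whose beams still reach the pupil. A rough estimate of this clipping effect, with assumed dimensions:

```cpp
#include <cmath>
#include <cstdio>

// Rough sketch: a pixel at angle theta arrives as a collimated beam roughly
// as wide as the lens aperture. At eye relief d, the beam centre is shifted
// sideways by d * tan(theta); once that shift exceeds the lens radius,
// (almost) none of the beam enters the pupil. All values are assumptions.
int main() {
    const double pi         = std::acos(-1.0);
    const double lensRadius = 0.017; // metres, lens aperture radius (assumed)

    for (double reliefMm = 10.0; reliefMm <= 25.0; reliefMm += 5.0) {
        const double d       = reliefMm / 1000.0;        // eye relief in metres
        const double halfFov = std::atan(lensRadius / d); // clipping half-angle
        std::printf("eye relief %4.1f mm -> visible FoV <= %5.1f deg\n",
                    reliefMm, 2.0 * halfFov * 180.0 / pi);
    }
    return 0;
}
```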

2. The shape of the eyes. Seriously: ideally your eyes are round, but some are squeezed and longer than wide – about one millimetre longer… This is a quite common condition and leads to the inability to focus on objects far away: we call it short-sightedness. Far-sightedness works the other way around: the eyes are a bit shorter than they should be. If this gets corrected by wearing glasses, the glasses change the field of view of the person in the real world: the FoV inside the glasses of a short-sighted person is a bit wider than normal, and the brain has adapted to this (everything looks smaller and further away to the person). Anyone who has switched from glasses to contact lenses (which do not change the FoV in this strong way) or even to new glasses may have experienced the process of the brain adjusting to a new FoV.

So it might be wrong to give a short-sighted person the FoV of a person with 20/20 vision in VR, as she/he might be used to a slightly wider one (the same applies to far-sighted persons) – unless the person also wears the glasses inside the HMD (as Tom and Nathan Reed have pointed out), in which case the glasses change the FoV as they normally do.

One open question is how the change of FoV could be derived from the glasses prescription (and what impact the actual size and form of the user’s glasses have). Most people don’t even know exactly how bad their vision is and couldn’t input those values into a configuration tool. As Tom pointed out, the FoV change also depends on the distance of the glasses to the eyes, not only on the shape of the lens itself…
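For a first-order guess, the standard thin-lens spectacle magnification formula M = 1 / (1 − d·P) relates exactly those two quantities – the prescription power P (in dioptres) and the vertex distance d (lens to eye, in metres) – to the apparent size change, though real lenses (thickness, base curve) deviate from it. A sketch with example values:

```cpp
#include <cstdio>

// First-order estimate of how much glasses scale the wearer's view, using
// the thin-lens spectacle magnification M = 1 / (1 - d * P). Real lenses
// (thickness, base curve) deviate from this; it's a sketch only.
int main() {
    const double power  = -4.0;  // dioptres, an example short-sighted prescription
    const double vertex = 0.014; // metres, a typical vertex distance (assumed)

    const double m = 1.0 / (1.0 - vertex * power);
    std::printf("magnification: %.3f -> the world looks %.1f%% smaller,\n",
                m, (1.0 - m) * 100.0);
    std::printf("i.e. the wearer's habitual FoV is slightly wider than 20/20.\n");
    return 0;
}
```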

For most users such fine tuning might not even be necessary; we know that the brain can learn to handle (at least) two different FoVs (glasses and contacts) and switch within a minute or two (after an initial learning phase, which can take one or two weeks!). However, for someone very sensitive to simulator sickness, it might be beneficial if the FoV could be fine-tuned to match the real-world vision exactly…

This could be roughly estimated by the user looking over his/her glasses: when he/she looks from a distance at two edges and lines one of them up in the blurry view above the glasses and in the sharp view through the glasses, the offset of the other edge between the two views could be measured.

Top: blurry view without glasses; bottom: the sharp view through the glasses.
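Assuming such a line-up measurement yields the magnification ratio M of the glasses (apparent size with glasses divided by apparent size without), the rendered FoV could hypothetically be widened to match. A sketch, not a validated calibration procedure:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical use of the line-up measurement: given the magnification
// ratio m of the glasses (< 1 for a short-sighted prescription), widen the
// rendered horizontal FoV so objects appear as small as the wearer is used
// to. A sketch only, not a validated calibration procedure.
int main() {
    const double pi     = std::acos(-1.0);
    const double fovDeg = 90.0; // FoV for 20/20 vision (example value)
    const double m      = 0.95; // measured magnification (example value)

    const double halfTan  = std::tan(fovDeg / 2.0 * pi / 180.0);
    const double adjusted = 2.0 * std::atan(halfTan / m) * 180.0 / pi;
    std::printf("adjusted FoV: %.1f degrees\n", adjusted);
    return 0;
}
```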

This, however, has a problem: it is far from accurate. It might be a tiny bit more realistic, but as long as it is not 100% correct, the brain has to adjust anyway. Sadly, such small inaccuracies in the viewed image are not noticed consciously – we just get sick after a while…

Giving the user a way to fine-tune the FoV also holds the risk that some will try to maximise the FoV to get an advantage in competitive games, which might reduce immersion and increase simulator sickness!

The take-away message here is:

  • Getting the FoV right is not a trivial task; developers should definitely rely on the values provided by the Rift SDK and not try to do something “clever” (a sketch of consuming such values follows below).
  • It’s an unsolved problem how to provide a user who wears glasses in the real world, but not inside the HMD, with the correct FoV simulating his/her glasses. Getting it “nearly” right is probably as bad as just providing a 20/20-vision FoV.
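For illustration, the SDK describes the per-eye FoV as four edge tangents (an ovrFovPort in SDK 0.3 holds UpTan, DownTan, LeftTan and RightTan); the struct below is a simplified stand-in for it, and the example tangents are made up. Building the off-centre projection matrix from those tangents, instead of inventing a symmetric FoV, might look like this:

```cpp
#include <cstdio>

// Simplified stand-in for the SDK's per-eye FoV description (the real
// ovrFovPort stores the tangents of the four edge angles, which are in
// general asymmetric per eye).
struct FovPort { float upTan, downTan, leftTan, rightTan; };

// Builds a column-major, right-handed off-centre projection matrix with
// OpenGL clip conventions (depth in [-1, 1]) from the four FoV tangents.
void projectionFromFov(const FovPort& fov, float zn, float zf, float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = 2.0f / (fov.leftTan + fov.rightTan);                         // x scale
    m[5]  = 2.0f / (fov.upTan + fov.downTan);                            // y scale
    m[8]  = (fov.rightTan - fov.leftTan) / (fov.leftTan + fov.rightTan); // x offset
    m[9]  = (fov.upTan - fov.downTan) / (fov.upTan + fov.downTan);       // y offset
    m[10] = -(zf + zn) / (zf - zn);                                      // depth scale
    m[14] = -(2.0f * zf * zn) / (zf - zn);                               // depth offset
    m[11] = -1.0f;                                                       // w = -z
}

int main() {
    // Example tangents (made up, not actual SDK output for any device):
    const FovPort fov = { 1.33f, 1.33f, 1.06f, 1.09f };
    float m[16];
    projectionFromFov(fov, 0.1f, 1000.0f, m);
    std::printf("x scale %.3f, x offset %.3f\n", m[0], m[8]);
    return 0;
}
```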

 

I want to thank Tom again for the valuable input that helped me better understand the problems of calculating the correct FoV for the Rift.
