How VR works – completed

Today, I completed University of California San Diego's edX course "How Virtual Reality works". I worked through all six project assignments – the last one was a very good one, as it combined many of the things we had worked on and offered the chance to explore and evaluate a current Google Cardboard app against a criteria catalogue. I picked the Android app "Apollo 15 Moon landing VR" and didn't regret my choice.

Below are my notes from the second part of the MOOC – as a reminder for myself later on, and maybe helpful for others.

  • week 4: Travel and Wayfinding
  • week 5: Menus and Text Input
  • week 6: VR Design Principles

week 4 – Travel and Wayfinding

The components of navigation are travel (the actual movement: searching for a specific target, maneuvering, or exploring the environment) and wayfinding (the cognitive component).

In VR, travel is a designer decision (teleport, active or passive role, …); the steps are choosing a destination and choosing the travel technique (among them physical locomotion, steering a vehicle, and instantaneous travel). Physical locomotion techniques are intuitive but limited by the available space and human physics – because of their high level of immersion, the entertainment industry (The VOID, Modal VR with backpacks, treadmills) and phobia treatment make use of them. Steering techniques via physical devices can cause motion sickness, whereas target-based techniques (point to a location and the user is moved there – via markers on the floor or maps) are not intuitive and might be disorienting.
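To make the target-based idea concrete, here is a minimal sketch of a floor-targeted teleport, assuming a hypothetical engine that hands us the controller's pointing ray and lets us set the player position (all names are illustrative, not from the course):

```python
# Minimal sketch of target-based travel: intersect the controller's
# pointing ray with the floor plane (y = 0) and teleport the user there.
# Types and names are illustrative, not tied to a specific engine.

from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def floor_target(ray_origin: Vec3, ray_dir: Vec3) -> Vec3 | None:
    """Return the point where the pointing ray hits the floor plane y = 0."""
    if ray_dir.y >= 0:          # ray points up or parallel: no floor hit
        return None
    t = -ray_origin.y / ray_dir.y
    return Vec3(ray_origin.x + t * ray_dir.x,
                0.0,
                ray_origin.z + t * ray_dir.z)

def teleport(player_pos: Vec3, target: Vec3) -> Vec3:
    """Instantaneous travel: jump the player to the chosen destination,
    keeping eye height unchanged. Many apps briefly fade the screen to
    black during the jump to reduce the disorientation mentioned above."""
    return Vec3(target.x, player_pos.y, target.z)
```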

Wayfinding means defining a path from A to B: there is spatial knowledge (exocentric knowledge – the user's location is irrelevant; typically it is map-based knowledge) and there are visual cues (egocentric wayfinding cues).
To make wayfinding tasks easier, the environment design can include natural environment features (characteristic mountains etc.) and architectural objects.
The most important wayfinding aids are landmarks (tall structures or buildings that can be seen from far away, man-made or natural like rivers or mountains), signs, and maps. Landmarks can also be local and serve as visual cues telling users where to take a turn. Signs, whose single purpose is wayfinding, can look like their real-world counterparts (attached to a post), act as directional arrows indicating a destination, or be user-specific labels placed directly, e.g. to mark an entrance. In VR, maps can be shown at varying levels of resolution or zoom; for orientation, the two approaches are north-up and forward/track-up.
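The two map orientations differ only in how the map is rotated against the user's heading; a tiny sketch (names are illustrative):

```python
def map_rotation(user_heading_deg: float, mode: str = "north-up") -> float:
    """Rotation in degrees to apply to an in-world mini-map.

    north-up: the map never rotates; north stays at the top.
    track-up: the map rotates against the user's heading so that
              'forward' is always at the top of the map.
    """
    if mode == "north-up":
        return 0.0
    if mode == "track-up":
        return -user_heading_deg % 360.0
    raise ValueError(f"unknown map mode: {mode}")
```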
Disadvantages in VR wayfinding are the limited resolution of the screen (which makes text hard to read) and the limited field of view (typical HMDs only offer 90–100 degrees).

week 5 – Menus and Text Input

VR interactions like selection, manipulation, and navigation are mappings of real-world actions, but the interaction types "menus" and "text input" are more abstract.

In VR, we should always try to use direct interaction with the objects of our environment (no slider to change an object's size – instead, grab the object and pull). Menus are very often done as easy-to-learn flat 2D menus (vertical and oriented towards the user); they are placed in the environment (the physical room the user is in, e.g. Oculus Home), in user space (on the user's arm, …), or in object space (to resize, delete, or move an object). 3D menus would be more natural (e.g. cubic menus arranged around an object), but 3D menu items may be occluded or outside the user's view.
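A flat menu placed in the environment or in object space usually stays upright and turns toward the user (billboarding). A sketch of that yaw computation, with illustrative names:

```python
import math

def menu_yaw_toward_user(menu_pos, user_pos):
    """Yaw angle (radians) that turns a flat menu panel around the
    vertical axis so it faces the user. Positions are (x, y, z) tuples;
    pitch is ignored so the menu stays upright and readable."""
    dx = user_pos[0] - menu_pos[0]
    dz = user_pos[2] - menu_pos[2]
    return math.atan2(dx, dz)   # 0 when the user is straight ahead (+z)
```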

There are different kinds of interfaces and input options for VR: tangible interfaces (mostly physical objects with sensors, like flight simulators or special control panels), gesture control interfaces (MS Kinect, Leap Motion, … – disadvantage: gesture commands have to be learned and practiced), and voice commands (there are already systems like Amazon Echo, Google Home, Apple Siri, and MS Cortana which are designed to recognize natural human speech – with the right software, they could control a VR application). Text input in VR is difficult, but occasionally there is a need for it (security passwords, saving things by filename, numbers in 3D design and modeling apps, labels in engineering apps, …) – virtual keyboards are an active research area.
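One common way to drive a virtual keyboard on controller-less headsets like Cardboard is gaze plus dwell time; a minimal sketch of the dwell timer (all names and the one-second threshold are assumptions):

```python
class DwellSelector:
    """Select the virtual-keyboard key the user gazes at once the gaze
    has rested on it for `dwell_seconds`."""

    def __init__(self, dwell_seconds: float = 1.0):
        self.dwell_seconds = dwell_seconds
        self.current_key = None
        self.elapsed = 0.0

    def update(self, gazed_key, dt: float):
        """Call once per frame with the key under the gaze cursor
        (or None) and the frame time dt; returns a key once selected."""
        if gazed_key != self.current_key:   # gaze moved: restart the timer
            self.current_key = gazed_key
            self.elapsed = 0.0
            return None
        if gazed_key is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0              # require a fresh dwell next time
            return gazed_key
        return None
```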

week 6 – VR Design Principles

Feedback: A user should get feedback after interactions in a VR system through the visual, auditory, and haptic senses (the more senses, the better). If the VR hardware has no haptic output, sensory substitution can help, e.g. showing visually what can't be felt.
Temporal compliance (image updates keep up with head motion) is important; therefore latency (the delay between input and the rendered image) should be kept at a minimum – to avoid image judder, the programmer can reduce the amount of data that is displayed (less complex 3D models).
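One standard way to reduce the displayed data is level of detail: swap in simpler meshes for distant objects so each frame renders fewer triangles. A minimal sketch, with made-up distance thresholds and asset names:

```python
def pick_level_of_detail(distance: float) -> str:
    """Pick a model variant by distance to the viewer. Fewer triangles
    on distant objects means less work per frame and lower latency.
    Thresholds and asset names are made up for illustration."""
    if distance < 5.0:
        return "mesh_high"      # full-detail mesh, only close up
    if distance < 20.0:
        return "mesh_medium"
    return "mesh_low"           # coarse mesh for distant objects
```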
Spatial compliance (when a user moves an object, their finger motions need to comply with the position of the controller and its effect on the VR environment – dials are recommended over sliders) and nulling compliance (returning to the original position restores the original value) are also important.

Constraints: Artificial constraints should be used to increase the usability of VR apps: limit degrees of freedom (DOFs), use physics engines like NVIDIA's PhysX SDK in Unity to make apps comply with the laws of physics, and use dynamic alignment tools (snap user input to a grid or limit value ranges).
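The alignment constraints are cheap to implement; a small sketch of grid snapping and value clamping (grid size and ranges are illustrative):

```python
def snap_to_grid(value: float, grid: float = 0.25) -> float:
    """Snap a coordinate to the nearest grid line (grid size illustrative)."""
    return round(value / grid) * grid

def clamp(value: float, lo: float, hi: float) -> float:
    """Limit a value to a legal range, e.g. to constrain a DOF."""
    return max(lo, min(hi, value))

# Example: constrain a dragged object to a 0.25 m grid at table height.
x, y, z = snap_to_grid(1.37), clamp(0.9, 0.7, 1.1), snap_to_grid(-2.06)
```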

Human factors: It is important to think about the VR target audience: their previous VR experience (to avoid them feeling overwhelmed or getting motion sick), age (possibly limited vision – adjust the use of text), height (when moving things physically), VR hardware (are one or two hand controllers necessary? Often, one controller holds an object while the other manipulates it), etc.

Isomorphic or non-isomorphic approaches: Whereas isomorphic approaches to VR application design are based on the real world and are used for simulation apps (testing cars with a VR steering wheel etc.), non-isomorphic approaches let us overcome the limitations of humans and the laws of physics ("magic"). Non-isomorphic approaches often borrow ideas from literature and film and are enjoyable and easy to understand when based on well-known ideas – on the other hand, they are difficult to create because of people's expectations.

VR system evaluation needs planning and statistical analysis – the best way is to test with target users under controlled conditions. Tests of functionality would include: the frame rate throughout all parts of the virtual world (today >= 90 fps for a high-end HMD), latency and lag (which could cause motion sickness), network latency if it can impact the latency of the app's rendering engine, and user comfort (motion sickness, how well the HMD fits the head, how the app is used).
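A frame-rate test can be as simple as averaging frame times over a sliding window while traveling through all parts of the virtual world; a sketch (the 90 fps target is the figure from the notes above, the class and window size are assumptions):

```python
import time
from collections import deque

class FpsMonitor:
    """Track average fps over the last `window` frames and flag drops
    below a target (>= 90 fps for a high-end HMD, per the notes above)."""

    def __init__(self, target_fps: float = 90.0, window: int = 90):
        self.target_fps = target_fps
        self.times = deque(maxlen=window)
        self.last = time.perf_counter()

    def tick(self) -> bool:
        """Call once per rendered frame; True while the target is met."""
        now = time.perf_counter()
        self.times.append(now - self.last)
        self.last = now
        avg_dt = sum(self.times) / len(self.times)
        return (1.0 / avg_dt) >= self.target_fps if avg_dt > 0 else True
```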

What could be future aspects of VR?
Whereas today's VR apps can be written quickly with VR authoring tools (Unity 3D, Unreal Engine, Amazon Lumberyard – disadvantage: lock-in to the provider) and their powerful built-in functions, in the future VR development might happen directly in VR. There will be massive changes in audio for VR (audio is not yet well implemented in the tools, which focus on the visual side). Thermal issues with mobile VR based on smartphones might be solved. Maybe there will be "inside-out tracking" from the HMD (sensors built into the HMD). Future tech specs (today's devices render 1200×1080 pixels per eye, 15 pixels per degree across a 90-degree field of view with a fixed focal distance) might offer a new level of feeling presence in VR. There are indications that VR and AR are going to merge. There are high revenue predictions for VR (about half of it software development for apps, games, and theme park solutions) and AR.

Today, I also decided to upgrade to the verified track in order to get the course certificate for "How Virtual Reality works": paying by credit card was easy, but it remains to be seen whether my (not so good) webcam photo in combination with the (not so good) webcam photo of an ID (I tried my driver's license) meets the requirements …

Back to my blog post: How VR works (week 1 – 3)

(Update July 2, 2017)
Verification worked – here is my verified course certificate of achievement:
https://courses.edx.org/certificates/31ee6ce5b22646808710b0d00385b435