tinySceneGraph



OpenVR support for HTC Vive

Friends of tinyScenegraph,

the most interesting new technology that reached the graphics community in 2016 is probably virtual reality hardware. VR headsets like the Oculus Rift or HTC Vive have become affordable, offer decent quality and are a truly astonishing experience the first time you try them. The future promises to get even more exciting as the hardware matures and display resolutions increase further. Augmented reality is about to enter the broader market as well, opening up an additional, wide field of new applications.

tinySG's scene editor, csgEdit, is capable of loading plugins and provides access to 3D scene data, the rendering core and its UI via an SDK. A new member in the family of plugins is a binding to OpenVR, the main VR programming interface for the HTC Vive (which also interfaces with the Oculus Rift).

This article describes, from a bird's-eye view, the integration of OpenVR into tinySG, as well as some of the design decisions and integration pitfalls. You'll also find comments on performance and hardware configurations. If you are more interested in a programming tutorial on OpenVR, I'd recommend downloading the OpenVR SDK and taking a look at the sample programs.

OpenVR

OpenVR is an API for accessing VR hardware, developed mainly by Valve. It handles sending frames to the HMD, receiving positional data from the tracking system, and managing input from controllers, a tracked camera or, in general, tracked devices. Furthermore, you can control a room-scale play area and utilise rendering features like overlays. OpenVR splits all these tasks across several C++ interface objects, like
  • vr::IVRSystem - Controls display, lens distortion, and device tracking. Provides access to events and hardware components attached to the system.
  • vr::IVRCompositor - All interaction with the compositor, like submitting the next 3D frame for display.
  • vr::IVRRenderModels - Provides access to 3D models for rendering, like VR controllers or the lighthouse trackers.
  • vr::IVRApplications - Interface to the VR dashboard, allowing you to e.g. launch an application.
  • vr::IVRChaperone - Controls the play area for room-scale VR.
  • vr::IVROverlay - Controls 2D overlays, like menus, info tooltips or the VR dashboard.
  • ... and some more (camera, screenshots, etc.).
The most important interfaces are vr::IVRSystem and vr::IVRCompositor - these two are already enough to get you going: Use IVRSystem to check for and initialise available hardware. Then use IVRCompositor to access data from the tracking system, like the view transform of the HMD, or the position and orientation of tracked devices like the controllers.
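
For illustration, here is a minimal initialisation sketch along the lines of the SDK samples (error handling trimmed to the essentials):

  #include <openvr.h>
  #include <cstdio>

  int main() {
      // Initialise the runtime as a scene (i.e. rendering) application.
      vr::EVRInitError err = vr::VRInitError_None;
      vr::IVRSystem* hmd = vr::VR_Init(&err, vr::VRApplication_Scene);
      if (err != vr::VRInitError_None) {
          std::printf("VR_Init failed: %s\n",
                      vr::VR_GetVRInitErrorAsEnglishDescription(err));
          return 1;
      }

      // The compositor accessor returns nullptr if the compositor could
      // not be initialised - without it, no frames can be presented.
      if (!vr::VRCompositor()) {
          vr::VR_Shutdown();
          return 1;
      }

      // Ask the HMD for the recommended per-eye render target size.
      uint32_t width = 0, height = 0;
      hmd->GetRecommendedRenderTargetSize(&width, &height);
      std::printf("Per-eye render target: %ux%u\n", width, height);

      vr::VR_Shutdown();
      return 0;
  }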

Tracking / polling

OpenVR uses a polling approach for most things you want to do:
  • You call IVRCompositor::WaitGetPoses() at the beginning of a frame to get the latest positions and orientations of all tracked devices.
  • You call IVRSystem::PollNextEvent() to get notified about device (de)activation, controller buttons and touchpads, dashboard activities, overlays and much more.
  • You call IVRSystem::GetControllerState() to query a controller's current button and axis states (GetControllerStateWithPose() additionally returns its pose).
  • You call IVROverlay::PollNextOverlayEvent() to receive virtual mouse input from an overlay.
  • You call IVRTrackedCamera::GetVideoStreamFrameBuffer() to fetch the next frame from the hmd's camera.
tinySG runs a render loop that polls for events, renders the next frame and submits it to the compositor. For all application logic, the loop translates the polled data into events that other entities can register for. Events can be processed asynchronously in another thread (use this e.g. for controller button events), or synchronously within the render loop (use this e.g. for custom rendering on top of the regular 3D scene, like for menus). In general, you'll want to do as much as possible asynchronously to help maintain the VR framerate of 90Hz.
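
The translation step itself is straightforward: drain all pending events once per frame and enqueue anything that needs application logic. A sketch (hmd is the vr::IVRSystem pointer from above; eventQueue is a hypothetical producer/consumer queue, sketched further below):

  // Drain all pending OpenVR events once per frame; anything that needs
  // application logic is handed to other threads via a queue.
  vr::VREvent_t event;
  while (hmd->PollNextEvent(&event, sizeof(event))) {
      switch (event.eventType) {
      case vr::VREvent_ButtonPress:
      case vr::VREvent_ButtonUnpress:
          // Do not act here - just enqueue for an application thread.
          eventQueue.push(event);
          break;
      case vr::VREvent_TrackedDeviceActivated:
          // A controller or tracker appeared; set up its render model etc.
          eventQueue.push(event);
          break;
      default:
          break;   // ignore everything we do not care about
      }
  }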

Submitting frames and lens distortion

The main render loop basically does the following (callback logic and camera stuff omitted...):
  while (vrSessionActive) {
    // Trigger tinySG scene rendering into FBOs/textures, for both eyes:
    vrViewer->RenderFrame();
    
    // Submit frame textures to OpenVR compositor:
    vrViewer->Present();
    
    // Poll all pending events from OpenVR and schedule events to other 
    // threads for processing:
    HandleInput();
    
    // Fetch new poses for tracked devices. This function will block if the
    // loop runs ahead of the current frame's timestamp:
    vrViewer->UpdateHMDMatrixPose();
  }
  
The important thing to note is that all tracking poses should be as recent as possible to avoid any lag. You can even use exact time stamps and motion vectors to extrapolate device tracking to the right point in time.

OpenVR takes care of fiddling around with the messy lens distortion of your HMD. Although it is possible to set up the distortion yourself, tinySG never made the effort, as the built-in distortion works pretty well.
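
In OpenVR terms, the Present() step above boils down to two Submit() calls, one per eye, and the pose update corresponds to WaitGetPoses(). A sketch (leftEyeTexture/rightEyeTexture are assumed to be the OpenGL texture ids the scene was rendered into):

  // Hand the per-eye textures to the compositor, which applies the lens
  // distortion and displays them on the HMD:
  vr::Texture_t leftTex  = { (void*)(uintptr_t)leftEyeTexture,
                             vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
  vr::Texture_t rightTex = { (void*)(uintptr_t)rightEyeTexture,
                             vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
  vr::VRCompositor()->Submit(vr::Eye_Left,  &leftTex);
  vr::VRCompositor()->Submit(vr::Eye_Right, &rightTex);

  // WaitGetPoses() blocks until the compositor is ready and returns
  // fresh poses for all tracked devices:
  vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
  vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                   nullptr, 0);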

The ViveVR plugin

All OpenVR code is implemented in the ViveVR plugin for csgEdit.

VR must avoid any lag at all costs and maintain a very high framerate - 90Hz is imperative to avoid simulator sickness. You just cannot afford to do any application tasks in between the render calls. The ViveVR plugin therefore creates a separate render thread running the minimalistic render loop shown above, focusing only on getting information from the tracking system, updating controller positions and the camera transformation accordingly, and rendering the scene as fast as possible.
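
Sketched with standard C++ threads (vrViewer and HandleInput as in the loop above; the Viewer type is a stand-in name):

  #include <atomic>
  #include <thread>

  std::atomic<bool> vrSessionActive{true};

  // The VR loop lives on its own thread, so application work can never
  // stall frame submission:
  void RunVRSession(Viewer* vrViewer) {
      std::thread vrRenderThread([&] {
          while (vrSessionActive) {
              vrViewer->RenderFrame();          // render both eyes into FBOs
              vrViewer->Present();              // submit to the compositor
              HandleInput();                    // poll + forward events
              vrViewer->UpdateHMDMatrixPose();  // block until fresh poses
          }
      });

      // ... application work happens on other threads; on shutdown:
      vrSessionActive = false;
      vrRenderThread.join();
  }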

Everything else is done in other threads:

  • If a controller signals user interaction, like pressing a button, an event is thrown and queued in a producer/consumer queue (see the sketch after this list). Other threads pick up that event and implement the application logic, like selecting an object, opening a menu or teleporting to a different location.
  • Scene animations are taken care of either in the main thread, or in some other thread.
  • The VR thread's OpenGL context is shared with a master context, just like any other context in csgEdit. This is obviously not the best choice performance-wise, but it preserves graphics resources and allows other threads to upload data updates to graphics memory in preparation for the next frame.
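
A minimal sketch of such a producer/consumer queue (the actual tinySG classes may look different):

  #include <condition_variable>
  #include <mutex>
  #include <queue>

  template <typename Event>
  class EventQueue {
  public:
      void push(Event e) {                // called by the VR render thread
          {
              std::lock_guard<std::mutex> lock(mutex_);
              queue_.push(std::move(e));
          }
          cond_.notify_one();
      }
      Event pop() {                       // called by application threads
          std::unique_lock<std::mutex> lock(mutex_);
          cond_.wait(lock, [this] { return !queue_.empty(); });
          Event e = std::move(queue_.front());
          queue_.pop();
          return e;
      }
  private:
      std::mutex mutex_;
      std::condition_variable cond_;
      std::queue<Event> queue_;
  };
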
The plugin itself only provides the rendering infrastructure and an abstraction of the OpenVR API. It exposes a Microsoft COM-like interface to other plugins that implement application logic. For example, you can write Python scripts that get executed when a controller button is pressed or a timer expires. The script has the same access to scene objects as any C++ plugin and thus can modify the scene, highlight objects, pick elements, teleport the camera to a different location and so forth.

Synchronisation support between threads is limited, but flexible, in tinySG. The easiest way to synchronise application logic with the render thread is to provide a synchronous callback, executed within the VR render loop right after the render traversal has completed, and swap the scene's root node. This basically double-buffers all scene data.
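
A sketch of the idea (names are hypothetical, not the actual tinySG API):

  #include <utility>

  struct SceneNode;                     // tinySG scene node, details omitted
  SceneNode* activeRoot  = nullptr;     // traversed by the VR render loop
  SceneNode* pendingRoot = nullptr;     // edited by application logic

  // Synchronous callback, invoked by the VR render loop right after the
  // render traversal has completed:
  void OnAfterRenderTraversal() {
      if (pendingRoot) {
          std::swap(activeRoot, pendingRoot);   // O(1) root swap
          // The previous root is now free for the next batch of edits.
      }
  }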

An alternative would be to utilise tinySG's multi-GPU support, basically managing a separate set of OpenGL objects per GPU (even if there is only one GPU!). This way, you'd have only one copy of the scenegraph in main memory, but each scene node maintains two sets of GPU objects (VAOs, textures, etc.) - one for each (virtual) GPU. The application logic operates on the C++ scene objects and triggers GPU updates on buffers associated with, say, pipe 0, while the render loop operates on the other set associated with pipe 1. Then simply swap the pipe ID in the render loop to switch between GPU objects when all updates for the next frame are ready. This works because OpenGL contexts are shared in csgEdit.
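
Sketched (assumed shape, not the actual tinySG data structures):

  #include <utility>

  typedef unsigned int GLuint;        // stand-in for the GL header typedef

  // Each node keeps one set of GL objects per (virtual) GPU pipe:
  struct NodeGpuData {
      GLuint vao[2];                  // one VAO per pipe
      GLuint texture[2];              // one texture per pipe
  };

  int renderPipe = 0;                 // pipe the render loop draws from
  int updatePipe = 1;                 // pipe application logic writes to

  // Called once all updates for the next frame have been uploaded:
  void SwapPipes() {
      std::swap(renderPipe, updatePipe);
  }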

An even simpler approach could traverse the scene twice and rely on OpenGL processing commands asynchronously: in the first traversal, just push data updates to GPU memory; in the second traversal, submit the actual render calls, hoping the data has arrived in the meantime. This may also be just good enough...
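
In code, the idea is just this (the traversal API is hypothetical):

  // 1st pass: only push pending data to GPU memory (glBufferSubData,
  // glTexSubImage2D, ...), no draw calls:
  scene->Traverse(Pass::UploadOnly);
  glFlush();                          // hand the uploads to the driver
  // 2nd pass: submit the actual render calls, hoping the data has
  // arrived on the GPU in the meantime:
  scene->Traverse(Pass::Render);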

Other plugins using VR

A bunch of other plugins have emerged over the last two years that utilise the COM interface of ViveVR to display their content in a virtual, tracked environment:

tinyCoiffeur is a plugin designed to simulate hair. It provides tools like scissors, a brush or a dryer to style a haircut. While you can operate it with a spacemouse device on a normal 2D monitor, the real fun starts when you enter a VR environment and take the controllers into action: you see a virtual person in real-life dimensions sitting in front of you, the simulation engine responds to your actions, and you can style her haircut using the Vive controllers. The best part is that nobody will complain even if you completely ruin the haircut :)
You can see an image at the top, next to the introduction.
Left: tinyCoiffeur, VR-mirror of scissor tool in action. Right: tinyHuman, controller dragging a joint in inverse kinematics.
tinyHuman aims at character animation. It implements the Cyclic Coordinate Descent (CCD) algorithm to animate a skeleton with inverse kinematics (sketched below). Skinning is also part of the plugin, but still suffers from severe artifacts. The plugin operates on any BVH skeleton, but for development purposes we use the avatars from the MakeHuman project.
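
For the curious, the core of CCD fits in a few lines. A minimal sketch, assuming Vec3/Quat math types and a Joint class with world-space accessors (not the actual tinyHuman code):

  #include <vector>

  // One CCD iteration: walk from the joint nearest the end effector back
  // to the root and rotate each joint so the effector approaches the target.
  void CcdIteration(std::vector<Joint>& chain, const Vec3& target) {
      for (int i = (int)chain.size() - 2; i >= 0; --i) {
          Vec3 jointPos   = chain[i].worldPosition();
          Vec3 toEffector = normalize(chain.back().worldPosition() - jointPos);
          Vec3 toTarget   = normalize(target - jointPos);
          // Shortest-arc rotation aligning the two directions:
          chain[i].rotateBy(Quat::fromTwoVectors(toEffector, toTarget));
      }
      // Repeat until the effector is close enough or an iteration limit hits.
  }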

El Capitan is a demo built on top of images from a 3D VR experience by Mammut: #project360.

tinySG takes 360° images of Mammut's climbing tours and imports them into a scene with multiple viewer positions, represented by cubemaps. A Python script listens for events from the Vive controllers and teleports the viewer between these locations.
Try it yourself! If you have a VR headset and a WebVR-capable browser, just go to their website and experience a really cool feeling of height.

Technically, this application is interesting because there is only a single cubemap for each location - and not one per eye. The effect is nevertheless amazing, using just different projection matrices into the cubemap for the left and the right eye.
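
The per-eye matrices come straight from OpenVR (hmd being the vr::IVRSystem pointer):

  // Asymmetric projection for each eye, for the given clip planes:
  vr::HmdMatrix44_t projL = hmd->GetProjectionMatrix(vr::Eye_Left,  0.1f, 100.0f);
  vr::HmdMatrix44_t projR = hmd->GetProjectionMatrix(vr::Eye_Right, 0.1f, 100.0f);

  // Per-eye transforms; the cubemap itself is shared between both eyes:
  vr::HmdMatrix34_t eyeL = hmd->GetEyeToHeadTransform(vr::Eye_Left);
  vr::HmdMatrix34_t eyeR = hmd->GetEyeToHeadTransform(vr::Eye_Right);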

Finally, ViveVR can capture any scene loaded into csgEdit and send it to a connected headset for viewing in virtual reality.

VR hardware requirements

So, is your PC fast enough to run your own VR experiments? Before buying the Vive, I was pretty concerned that my hardware would fail to render scenes without causing motion sickness. Valve provides a tool that evaluates your system's performance and reports whether it is fast enough or not. My PC with an AMD Hawaii chip and a hexacore Sandy Bridge CPU officially failed to meet the minimum requirements.

If you are in a similar situation, don't worry! While such an ancient hardware configuration stutters quite a bit when running scenes like the Vive home living room, it is still fast enough to experiment with the technology or run simpler applications like The Lab. I also used a laptop with an NVIDIA GeForce 1050, and it worked well enough for small scenes.

Outlook

There is much more to explore. NVIDIA drivers offer an interesting OpenGL extension called simultaneous multi-projection, which basically avoids rendering the same scene twice - once for the left and once for the right eye's perspective. Instead, you simply set up the extension, provide two projection matrices, and deal with a single pass in the shader.

The same strategy should be possible even without an NVIDIA-specific extension: you could provide two projection matrices in a uniform buffer object, render your scene with an instance count of 2, and manage two render targets in parallel. Then simply direct the first instance to the left-eye target, and the second instance to the right-eye target.
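
A sketch of the vertex-shader side, embedded as a C++ string. This assumes GL_ARB_shader_viewport_layer_array (so gl_Layer can be written from the vertex stage) and a two-layer texture array attached to the FBO, one layer per eye:

  static const char* stereoVS = R"(
      #version 450
      #extension GL_ARB_shader_viewport_layer_array : require
      layout(std140, binding = 0) uniform EyeMatrices {
          mat4 viewProj[2];            // [0] = left eye, [1] = right eye
      };
      layout(location = 0) in vec3 position;
      void main() {
          gl_Layer    = gl_InstanceID; // instance 0 -> left, 1 -> right
          gl_Position = viewProj[gl_InstanceID] * vec4(position, 1.0);
      }
  )";
  // Then draw everything once, with two instances:
  //   glDrawElementsInstanced(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0, 2);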

With mixed-reality hardware on the horizon, augmenting real environments with virtual content should become possible. And the upcoming Vive Pro is supposed to have a stereo camera, which might be used for similar effects. Interesting times these are...

Keep rendering,
Christian


Acknowledgements and further reading:

  • The OpenVR API documentation wiki is a good starting point for learning how to develop against OpenVR, although it tends to be somewhat outdated.
  • The OpenVR SDK is accessible via GitHub. You'll need the headers and libraries from there if you want to use OpenVR in your own C++ code.
  • The SteamVR developer forum has tons of solutions ready for whatever problem you run into - great resource!
  • Internal documentation is done using Doxygen and the Violet UML editor.


Copyright by Christian Marten, 2018
Last change: 17.03.2018