
tinySG Stereoscopic Rendering

Friends of tinySG,

recent code cleanups in the ancient OpenGL frustum code introduced the ability to create stereoscopic images directly inside tinySG's editor, tsgEdit. While several stereo render modes had been available via the Equalizer cluster layer before, the new features include new interleaved stereo formats as well as their tight coupling with the UI.

Implementation has been driven by the opportunity to showcase tinySG at the FTS forum 2011 in Munich on new, passively polarised TFTs. These displays polarise every other row of pixels for one eye or the other, using a thin film coated directly onto the panel. Without glasses, the display works just like a monoscopic display. Wearing polarising glasses, however, one eye sees only the odd pixel rows and the other eye only the even pixel rows.

[Images: T5 model rendered as an anaglyphic image; the same model using horizontal interleaving.]

As the display expects regular 2D content like any other display, the application has to encode the stereo effect in its renderings by interleaving the images for the left and the right eye row-wise. The right image above shows this kind of interleave pattern. If you have red/cyan glasses, you can see the same effect in the anaglyphic image on the upper left (click to enlarge).
[Diagram: positive parallax]

Stereo images are most impressive if the objects of a scene appear to be hovering above the table in front of your monitor. Mathematically speaking, this effect requires rendering with negative parallax. Normally, OpenGL uses the near frustum clipping plane as its projection plane. All geometry of a given scene is placed behind this plane, as in the diagram to the right. When rendering a stereo pair from two different eye locations, an object's projection for the left eye appears slightly to the left of the same object's projection for the right eye (see the dotted projection lines).

Now, what happens if the projection plane moves away from the camera, behind the object to be projected? The second diagram on the right illustrates the situation that arises when the near clipping plane is moved out to the far clipping plane (and the far clipping plane is moved out even further). The situation is now inverted: the projection for the left eye appears to the right of the projection for the right eye. This is negative parallax, causing objects to appear in front of the reference plane.
[Diagram: negative parallax]
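To put some numbers behind this (a simplified, on-axis back-of-the-envelope formula, not the exact tinySG math): call the eye separation e, the distance from the viewer to the projection plane d, and the distance to an object z. The on-screen parallax is then roughly p = e · (1 − d/z). For z > d the parallax is positive, for z < d it becomes negative, and exactly at z = d it vanishes, i.e. the two projections coincide.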

How can this be done with OpenGL? After all, objects closer to the camera than the near frustum clipping plane are discarded (clipped), aren't they?
The answer lies in special view frustums for the left and the right eye that converge at the focal distance. Have a look at the diagram to the right again: what we would like is a frustum for the left eye as indicated by the red lines and a frustum for the right eye as indicated by the green lines, both converging at the focal distance. The corners of these frusta in the plane at focal distance are easy to calculate, as they match the symmetric frustum we would use for monoscopic rendering, shifted left and right by half the eye separation. Once these frusta are known, the intercept theorem gives the corners of the same frusta at the original near clipping plane (the dotted line). The near clipping rectangles of the left and right eye are no longer identical, but both frustums still converge at the focal distance. Objects are offset with positive or negative parallax, depending on whether they lie in front of or behind the focal plane.
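A minimal sketch of such a sheared, off-axis frustum with legacy OpenGL calls could look as follows. The function and parameter names are illustrative only and not tinySG's actual code:

  // Off-axis frustum for one eye (sketch only).
  // eye = -1 renders the left eye, eye = +1 the right eye.
  #include <GL/gl.h>
  #include <cmath>

  void setupEyeFrustum (float fovY, float aspect, float zNear, float zFar,
                        float eyeSep, float focalDist, int eye)
  {
      // Half height of the symmetric frustum window at the near plane.
      float top    = zNear * std::tan (fovY * 0.5f * 3.14159265f / 180.0f);
      float bottom = -top;

      // Horizontal shift of the window at the near plane; by the intercept
      // theorem it is the eye offset scaled down from the focal plane.
      float shift = 0.5f * eyeSep * zNear / focalDist;

      float left  = -aspect * top - eye * shift;
      float right =  aspect * top - eye * shift;

      glMatrixMode (GL_PROJECTION);
      glLoadIdentity ();
      glFrustum (left, right, bottom, top, zNear, zFar);

      // Shift the camera by half the eye separation; the sheared frustum
      // above makes both views converge again at the focal plane.
      glMatrixMode (GL_MODELVIEW);
      glLoadIdentity ();
      glTranslatef (-eye * 0.5f * eyeSep, 0.0f, 0.0f);
      // ... the regular camera/view transform follows here ...
  }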

[Screenshot: tsgEdit's frustum parameters]
Although the above description is slightly simplified, it basically describes what the tinySG stereo parameters do (reality is a bit more complicated, because multi-segment projections with asymmetric frustums or even tracked cameras require additional work). Frustums are sheared and offset by the eye separation and focal distance parameters. Depending on the stereo mode, the pixels are masked and end up in different buffers:

  • For anaglyphic stereo, each eye's view is rendered with a different glColorMask.
  • Interleaved stereo uses the stencil buffer to mask out every second pixel row or column.
  • Shuttered (=active) stereo uses quad buffer visuals to render the image for each eye into separate buffers.
  • Passive Stereo also renders into separate buffers for each eye, but outputs the images simultaneously on different channels/projectors. Obviously, this scheme does not work with regular monitors.
All techniques except passive stereo suffer from compromises: anaglyphic stereo uses a strange color space, while interleaved stereo loses half the screen's resolution. Shuttered stereo requires bright displays and high refresh rates to avoid headaches.
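For illustration, a rough sketch of the first two masking techniques with legacy OpenGL calls might look like this. The helper function names are assumptions for this example, not the code used by tinySG:

  #include <GL/gl.h>

  // Anaglyphic stereo: each eye is restricted to a subset of color channels.
  void drawAnaglyphPair ()
  {
      glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

      glColorMask (GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);   // red: left eye
      // ... set up the left eye frustum and draw the scene ...

      glClear (GL_DEPTH_BUFFER_BIT);                        // keep colors, reset depth
      glColorMask (GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);    // cyan: right eye
      // ... set up the right eye frustum and draw the scene ...

      glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore
  }

  // Row interleaved stereo: mark every second pixel row in the stencil buffer
  // once (e.g. after a resize), then test against the mask for each eye.
  void fillRowStencil (int width, int height)
  {
      glEnable (GL_STENCIL_TEST);
      glClearStencil (0);
      glClear (GL_STENCIL_BUFFER_BIT);
      glStencilFunc (GL_ALWAYS, 1, 1);
      glStencilOp (GL_REPLACE, GL_REPLACE, GL_REPLACE);
      glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // write stencil only

      glMatrixMode (GL_PROJECTION);
      glLoadIdentity ();
      glOrtho (0, width, 0, height, -1, 1);
      glMatrixMode (GL_MODELVIEW);
      glLoadIdentity ();

      glBegin (GL_LINES);                  // one 1-pixel line per masked row
      for (int y = 0; y < height; y += 2)
      {
          glVertex2f (0.0f,         y + 0.5f);
          glVertex2f ((float)width, y + 0.5f);
      }
      glEnd ();

      glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
  }

  void drawInterleavedPair ()
  {
      glEnable (GL_STENCIL_TEST);
      glStencilOp (GL_KEEP, GL_KEEP, GL_KEEP);

      glStencilFunc (GL_EQUAL, 1, 1);      // marked rows: left eye
      // ... draw the left eye view ...
      glClear (GL_DEPTH_BUFFER_BIT);
      glStencilFunc (GL_NOTEQUAL, 1, 1);   // remaining rows: right eye
      // ... draw the right eye view ...
  }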

Creating good stereo images

As stated above, stereo images are most fascinating when objects pop out of the screen. But that is not all: there are more effects to take into account, such as
  • Total depth. Stereo is worth nothing if there is no depth in the scene.
  • Reasonable eye separation. The human brain has a limit on merging stereoscopic information into one spatial image. Most stereo images on the web suffer from overly enthusiastic shovelhead-shark optics.
  • Combination of objects with positive and negative parallax. Having a background behind the screen and some foreground objects that are in front of the screen greatly increases the perception of depth.
  • Don't cut/clip foreground objects. If they collide with the border of the screen, the human brain goes on strike as close objects cannot be occluded by objects located further away.
  • Avoid noise/high frequency components, especially when using anaglyphic images.
  • Use a dark background (or background color that matches your monitor).
The images below show a model of the International Space Station in front of the earth. The station is placed slightly in front of the focal plane, while the earth is located well beyond the plane. Click on the images to enlarge them. Best viewed in fullscreen mode, in a dark environment, and with red/cyan glasses, of course...

[Images: three anaglyphic stereo renderings of the ISS]
The images use the following parameters: near clipping plane = 2222, focal distance = 8000, eye separation = 100, fov = 45.
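Plugging these numbers into the simplified parallax formula from above (with hypothetical object distances, purely for illustration): a point on the station at z = 7000 gets a parallax of about 100 · (1 − 8000/7000) ≈ −14 scene units, so it pops out of the screen, while a point on the earth at z = 20000 ends up at about 100 · (1 − 8000/20000) = +60 units behind it.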

tinySG API

From the API's point of view, all modes are driven by the same functions in csgViewer:

  //! \name Stereo control
  //@{
      //! \brief Switch stereo mode of this viewer
      //! \param mode One of the supported stereo modes (mono, quad, anaglyph, passive L/R).
      //! \return CSG_OK or error code
      csgError_t   SetStereoMode (StereoMode_t mode);
      StereoMode_t GetStereoMode ();

      //! \brief Set parameters affecting stereo display
      //! \param eyeSep The eye separation.
      //! \param focalDist The focal distance to be used - this normally equals the wall distance.
      //! \return CSG_OK or error code
      csgError_t SetStereoParam (float eyeSep, float focalDist);
      csgError_t GetStereoParam (float &eyeSep, float &focalDist);
  //@}
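
A hypothetical usage sketch of this API: only SetStereoMode, SetStereoParam and CSG_OK are taken from the listing above, the header name and the stereo mode constant are placeholders.

  #include "csgViewer.h"                    // assumed header name

  void enableStereo (csgViewer &viewer)
  {
      // Eye separation and focal distance as used for the ISS images above.
      if (viewer.SetStereoParam (100.0f, 8000.0f) != CSG_OK)
          return;

      // The actual StereoMode_t values are defined by tinySG; the constant
      // name below is only a placeholder.
      viewer.SetStereoMode (STEREO_ANAGLYPH);
  }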
	

Keep rendering,
Christian


Acknowledgements:

  • The T5 model is free for non-commercial use. Unfortunately, it comes without any information about its author. You can get it from DMI.
  • Paul Bourke runs an excellent website with all kinds of useful information about stereo rendering with OpenGL.
  • ISS model and earth textures are courtesy of NASA.



Copyright by Christian Marten, 2011
Last change: 06.11.2011