tinySceneGraph



Render to 3D texture

An algorithm called marching cubes creates polygon meshes from volumetric datasets, such as medical CT scans. The algorithm is widely used in many applications, and descriptions and code samples are easy to find on the Internet.

With the introduction of OpenGL framebuffer objects, the other direction has also become possible: converting a polygonal model into a volume dataset by rendering it into a 3D texture. Although this feature is quite useful, there is little documentation on how to actually do it, and code samples are even rarer.

tinySG's utility library implements a voxeliser, and the scene editor comes with a plugin frontend that lets you specify a volume in world coordinates and render its scene contents into a 3D texture. Current hardware like AMD's Southern Islands chips allows a maximum resolution of 16k by 16k by 2k texels, so you get a decent resolution, preserving quite a lot of detail of the polygon data. The image on the right shows a shock absorber rendered into a low-resolution volume, using GL_NEAREST filtering to visualise individual voxels.

Using OpenGL to create voxels

Just like a 2D texture render target, a 3D texture may be bound to a framebuffer object. OpenGL still performs regular 2D framebuffer operations on that render target, so the result of each render pass is a 2D image. The 3D texture therefore has to be bound and rendered slice by slice, with one render pass per slice. You can think of the 3D texture contents as a stack of numSlices regular framebuffer images.

To avoid ending up with n identical slices, the view frustum is used to clip the scene to the boundaries of each slice.

A nice feature of OpenGL is that the 3D texture does not have to use an RGB format matching the usual framebuffer pixel format; OpenGL takes care of the necessary pixel format conversions transparently. Tests with single-channel GL_RED textures work well, and they are probably the first choice for simple volume rendering since, compared to RGB, they use only a third of the memory.
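
For illustration, here is a minimal sketch of how such a single-channel 3D texture could be allocated with plain OpenGL calls. The 512^3 size is just an example, and this is not tinySG's API; tinySG wraps this kind of setup in its csgTexture3 objects:

    GLuint tex;
    glGenTextures (1, &tex);
    glBindTexture (GL_TEXTURE_3D, tex);
    glTexParameteri (GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri (GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // GL_R8 stores one byte per voxel - a third of a GL_RGB8 volume:
    glTexImage3D (GL_TEXTURE_3D, 0, GL_R8, 512, 512, 512, 0,
                  GL_RED, GL_UNSIGNED_BYTE, NULL);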

tinySG simplifies the required code by providing objects that serve as building blocks. The code below is the slicing voxeliser routine implementing exactly these steps, transforming a scene into the voxels of a 3D texture (error handling removed for simplicity).

    // Render subvolume "bbox" of scene/camera defined by "pViewer" into "pTex"
    csgError_t Voxeliser::RenderArea (csgBBox const &bbox, csgViewerPtr pViewer,
                                      csgTexture3Ptr pTex)
    {
      unsigned int imgWidth, imgHeight, imgDepth, imgDim;

      // Determine image parameters (depth, i.e. how many slices to render):
      csgImagePtr pImg  = pTex->GetImage ();
      pImg->GetImgParams (imgWidth, imgHeight, imgDepth, imgDim);

      int pipeID = pViewer->GetPipeID();
      int texID  = pTex->GetIDByPipe (pipeID);
     
      csgFrustum &frustum    = pViewer->GetFrustum ();
      csgFrustum origFrustum = frustum;

      frustum.SetType (CSG_FRUSTUM_ORTHO);
      frustum.SetFrustum (bbox.minV[0], bbox.maxV[0], bbox.minV[1], bbox.maxV[1],
                          bbox.minV[2], bbox.maxV[2]);
      frustum.OnReshape (imgWidth, imgHeight);

      m_FBO->SetSize (imgWidth, imgHeight);
      m_FBO->AttachTexture (pTex, pipeID, 0);

      glPushAttrib (GL_ALL_ATTRIB_BITS);
        glDisable(GL_DEPTH_TEST);
        m_FBO->Bind();

          glFramebufferTexture3DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,  
                                     GL_TEXTURE_3D, texID, 0, 0);
          RenderSettings_t S = pViewer->GetSettings ();
          
          // Render scene slice by slice:
          float curZ = bbox.minV[2];
          float sliceDepth = (bbox.maxV[2]-bbox.minV[2]) / float (imgDepth);
          for (unsigned int s=0; s<imgDepth; s++) {
            // Attach current slice to FBO:
            glFramebufferTexture3DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                       GL_TEXTURE_3D, texID, 0, s);

            // Setup frustum for current slice (using frustum clipping):
            frustum.SetNearFar (curZ, curZ+sliceDepth);

            pViewer->RenderScene();                // Render scene into current slice
            curZ += sliceDepth;
          }

        m_FBO->Release();
      glPopAttrib();  
      
      frustum = origFrustum;                       // Reset frustum
      return CSG_OK;
    }
  

Random write access to textures

The algorithm shown above is pretty simple, but it has some disadvantages:
  • It does not preserve volume information: since polygons are rasterised, only those voxels that are intersected by polygons are written. A sphere, for example, ends up as pixel rings in the slices of the 3D texture; its interior provides no polygons, so those voxels stay empty as well.
  • Polygons with a normal orthogonal to the camera's view direction are invisible and do not generate voxels. The image on the right shows this effect: two spheres were rendered with the camera looking at them from the right side. It therefore saw the polygons at the zero meridian exactly edge-on, and their projections do not cover any pixels on the near clipping plane at all. Likewise, a cube whose front face normal points directly at the camera would end up as just a front and a back plane, without its four sides.
To avoid these artifacts, you can use the stencil buffer to fill the internal voxels of a solid object as well. A short description of how to do that can be found in the GPU Gems 3 book, chapter 30.2 (voxelisation).
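
The sketch below outlines that parity-fill idea on top of the slicing loop shown earlier. It assumes the FBO also carries a stencil attachment; DrawFullScreenQuad() is a hypothetical helper that draws a quad covering the whole viewport:

    glEnable (GL_STENCIL_TEST);
    float curZ       = bbox.minV[2];
    float sliceDepth = (bbox.maxV[2]-bbox.minV[2]) / float (imgDepth);
    for (unsigned int s=0; s<imgDepth; s++) {
      glFramebufferTexture3DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_TEXTURE_3D, texID, 0, s);
      glClear (GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

      // Pass 1: every surface between the slice and the far end of the
      // volume inverts the stencil, so it ends up odd exactly where the
      // slice plane lies inside the solid:
      frustum.SetNearFar (curZ, bbox.maxV[2]);
      glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
      glStencilFunc (GL_ALWAYS, 0, ~0u);
      glStencilOp (GL_KEEP, GL_KEEP, GL_INVERT);
      pViewer->RenderScene ();

      // Pass 2: fill all texels whose stencil parity is odd:
      glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
      glStencilFunc (GL_EQUAL, 1, 1);
      glStencilOp (GL_KEEP, GL_KEEP, GL_KEEP);
      DrawFullScreenQuad ();            // hypothetical helper
      curZ += sliceDepth;
    }
    glDisable (GL_STENCIL_TEST);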

tinySG takes yet another approach, which turned out to work extremely well: modern OpenGL hardware allows random write access to buffer objects and textures from a GLSL shader via the ARB_shader_image_load_store extension. This is all that is needed to write voxels into a 3D texture from a fragment shader.

  1. The vertex shader just transforms incoming geometry to world coordinates using the model transformation (view transformation/camera is set to identity) and passes the resulting coordinates on as varying parameters.
  2. The fixed-function pipeline then clips primitives against the view frustum, which is set up to match the subvolume definition in the scene. It also rasterises the primitives for us, interpolating the (3D!) coordinates the fragment shader will turn into texel positions.
  3. All that is then left for the fragment shader to do is to scale the incoming 3D varying coordinates to match the 3D texture dimensions and write the voxels into the texture using imageStore(), as sketched after this list.
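
A minimal GLSL sketch of both shader stages is shown below. The uniform names (uModel, uProj, uBoxMin, uBoxSize, uVolDim) are made up for illustration and are not tinySG's actual interface; the real shaders are linked further below.

    // --- Vertex shader: transform to world space, project for clipping ---
    #version 420
    layout (location = 0) in vec3 aPosition;
    uniform mat4 uModel;            // model transform only; camera is identity
    uniform mat4 uProj;             // orthographic projection of the subvolume
    out vec3 vWorldPos;
    void main ()
    {
      vec4 world  = uModel * vec4 (aPosition, 1.0);
      vWorldPos   = world.xyz;      // interpolated by the rasteriser
      gl_Position = uProj * world;  // clipped against the subvolume frustum
    }

    // --- Fragment shader: scale to texel coordinates, store the voxel ---
    #version 420
    layout (r8) uniform writeonly image3D uVolume;
    uniform vec3  uBoxMin;          // subvolume minimum corner (world coords)
    uniform vec3  uBoxSize;         // subvolume extents (world coords)
    uniform ivec3 uVolDim;          // 3D texture size in texels
    in vec3 vWorldPos;
    void main ()
    {
      vec3  n     = clamp ((vWorldPos - uBoxMin) / uBoxSize, 0.0, 1.0);
      ivec3 texel = min (ivec3 (n * vec3 (uVolDim)), uVolDim - 1);
      imageStore (uVolume, texel, vec4 (1.0));   // mark voxel as occupied
    }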

Sounds easy, doesn't it? Well, sit back and think about it twice: fragment shader invocations still depend on the projected primitives, so polygons with normals orthogonal to the view direction still do not create any voxels!
The solution to this problem is to render the scene three times, with the camera looking at it along the x-, y- and z-axis, respectively. The view frusta should be orthographic and match the subvolume size, and each glViewport should match the texture dimensions along that axis, so that fragments map to texels in a 1:1 fashion.
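
In outline, the three passes might look like the sketch below. The AxisPass struct and SetOrthoFrustumAlong() are made up for illustration; everything else reuses names from the slicing routine above:

    // One render pass per major axis; the viewport matches the texture
    // dimensions seen from that direction, so fragments map 1:1 to texels.
    struct AxisPass { char axis; unsigned int vpW, vpH; };
    AxisPass passes[3] = { { 'x', imgDepth, imgHeight },   // looking along x
                           { 'y', imgWidth, imgDepth  },   // looking along y
                           { 'z', imgWidth, imgHeight } }; // looking along z
    for (int i=0; i<3; i++) {
      glViewport (0, 0, passes[i].vpW, passes[i].vpH);
      SetOrthoFrustumAlong (passes[i].axis, bbox);   // hypothetical helper
      pViewer->RenderScene ();        // fragment shader stores the voxels
    }
    // Make the image writes visible to subsequent texture reads:
    glMemoryBarrier (GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);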

Since there are still very few ARB_shader_image_load_store samples on the web, I also provide the shader source code used in tinySG's voxeliser (vertex shader, fragment shader). The tinySG voxeliser sets these shaders as override nodes for the render traverser, so all other shader nodes in the scene are ignored. While this ensures that all scene nodes are processed by the voxeliser, it unfortunately does not work for scenes that depend on their internal shader nodes. In such cases, the slicing algorithm serves as a fallback.

The images below show an original CAD dataset and two renderings based on its voxelised data. Click on the images to see larger versions:

Left: Standard OpenGL rendering of a CAD dataset in tinySG's scene editor. Middle: Same dataset, voxelised into a 1024x512x512 volume and rendered with the tsgVolume node. Right: Same dataset again, slicing the volume in y- and z-direction, using a greyscale transfer function.


Using the Scene Editor

tinySceneGraph's scene editor comes with a plugin that allows you to define sub-volumes in a scene. The volume boundaries may enclose selected nodes and can be resized or moved by dragging them around in the viewer with the mouse.

Once the region of interest is selected and the volume resolution is chosen, the 3D volume texture is just one mouse click away. It can be inserted anywhere in the scene, or even into another open scene. Together with a csgVolume node it is ready for rendering, as in the images shown here.

The entire process happens on the GPU, using scene data that tinySG already keeps in graphics memory for rendering anyway. A FirePro W8000 voxelises a scene of 2.5 million vertices into a 125-million-voxel (512x512x512) texture in about 300 ms.

Find more information on tinySG volume rendering on the page on subvolume selection.


Top: Scene with a subregion selected for the voxeliser. Bottom/left: The selected part, ISS module Sarja, in Isolate View. Bottom/right: Sarja, rendered as 512x256x256 volume data. Click on the images to enlarge.


Conclusion

The render-to-3D-texture feature provides a foundation for a number of interesting effects, techniques and advanced GPGPU algorithms. In fact, the tinySG RT3DT code was written with GPGPU simulation in mind - see the tinyFluids CFD project from Nov. 2013.

Keep rendering,
Christian


Acknowledgements:

  • The CAD axle dataset was taken from GrabCAD quite a while ago. Unfortunately, I cannot remember whom to credit for providing this great dataset.
  • The ISS dataset is by courtesy of NASA.
  • The GPU Gems book series is published by NVIDIA and provides a wealth of tips, tricks, algorithms and code samples.
  • Many people would spell the module voxelizer. This is of course wrong, as John Cleese points out in his Letter to the USA.


Copyright by Christian Marten, 2013
Last change: 05.04.2014