Seamless LOD transition

These last few weeks I've been working on a seamless LOD transition mechanism for my voxel terrain system. It doesn't seem to be a widely discussed topic. NVIDIA didn't address the issue in their now-famous article: there are simply rough transitions between LOD levels, as can be seen in this video. However, it seems that NVIDIA published something on the subject since then, but the search engine on their website doesn't seem to work and I'm not going to dig through a bunch of unordered pages just to find one article. The Transvoxel algorithm works fine if the geometry is generated on the CPU, but it becomes a hassle if you want to generate it on the GPU (and it needs relatively big lookup tables anyway). Moreover, it was designed for marching cubes; I'm using marching tetrahedra, so I would basically have to generate all the transition cases myself. Someone posted a pretty-looking video, though its quality probably wouldn't make it possible to distinguish any LOD issue anyway. Unfortunately he made himself a hard man to reach: he still hasn't answered my PM on YouTube, his nickname contains "Starcraft", there are no URLs in his video, no blog, no technical explanation, no nothing. This video is very nice looking too, but its author is also a phantom.

Visualization of LOD levels

So from here I started searching for a technique on my own. I had a few ideas, but many of them seemed very unreliable given the work required to implement them (yes, I'm lazy). But let me first introduce the problem to you.

The LOD system currently works like this. Each level is defined by a cube. A cube is a regular 3D grid containing the density values that represent the terrain. If you take 8 neighboring points you get a cell to which the marching cubes algorithm can be applied. Lower LOD cubes wrap the higher-detail ones. Since all cubes have the same grid resolution, each lower LOD cube is simply twice as big (see the image on the right). Now at the intersection of two cubes, you get this:

Cracks between two levels of detail

As can be seen in those screenshots, there are holes in the terrain. Those holes match one of the six faces of the cube that forms the inner level of detail. The problem is that at the intersection between two levels, the vertices of one level do not match those of the other. This is both because there are twice as many vertices in the higher LOD, and because the LOD generation filter tends to smooth the topology of the terrain. Let's take a look at a slice of the terrain:

Visualization of a slice of a voxel terrain

Each cross is a single density value. The marching cubes algorithm is applied to each cell. This is a slice of the terrain as seen from the side; the region below the line is underground, the region above it is air. When computing the next LOD, the resolution stays the same but the area is twice as big, so the cells are twice as big. Lower LOD cells appear darkened:

Visualization of a slice of a voxel terrain

As you can see, the topology has been simplified. As a result, bumps have diminished while pits have been filled (the bumps dip and the dips bump, so to speak). Knowing this behavior, I thought I could just move the high LOD (blue) vertices along their normal to bring them closer to the simplified surface. Vertex normals are shown in orange (I like orange):

Visualization of a slice of a voxel terrain with vertex normals

The final position of vertices is then computed as follows: final = position + λ * normal. The only question is: what is λ and how do we compute it? You might have guessed that this coefficient needs to be negative on the bumps in order to shrink the shape. To know whether we are on a bump or in a pit we can just sample the neighboring densities. If their normalized sum (say, between -1 and 1) is less than 0, the vertex is mostly surrounded by air: we are on a bump. For values greater than 0, we are in a pit. So we can basically say that λ is the normalized sum of the surrounding densities. We don't actually need to compute this sum explicitly; we already have it in the lower LOD density map.
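In shader terms, the basic idea boils down to something like the following. This is only a sketch of the scheme described above, not my actual shader: the sampler name, the coord_low parameter and the [-1, 1] storage range are assumptions of mine, following the sign convention used in this post (negative densities in the air, positive underground, normals pointing towards the air).

uniform sampler3D density_low;   // density map of the parent (lower detail) level

// position / normal: the vertex as produced by the marching pass.
// coord_low: the same vertex position expressed as texture coordinates of the lower LOD map.
vec3 displace (vec3 position, vec3 normal, vec3 coord_low)
{
    // Densities are assumed to be stored normalized in [-1, 1]: negative in the air, positive underground.
    float lambda = texture (density_low, coord_low).r;   // < 0 on a bump, > 0 in a pit
    // (In a real shader lambda would still have to be scaled to world units; that scale is omitted here.)
    return position + lambda * normal;
}

A single fetch is enough precisely because, as said above, the lower LOD map already stores what is effectively the normalized sum of the surrounding densities.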

This works pretty well, but it doesn't account for the density at the vertex we are moving. We have to compute the difference between the densities in the high and low LOD maps to know whether the vertex is likely to have swollen or shrunk. This produces a much more accurate result. You may think that we could run several iterations of this scheme, each time using the vertex position computed in the previous iteration. Actually, from what I've tested, it doesn't improve the final result much, and it may even degrade it sometimes; I don't think it's worth the additional texture fetches.
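With that correction, λ becomes the difference between the two maps sampled at the vertex, along these lines (same assumptions and naming as the previous sketch, with density_high being a hypothetical sampler for the density map of the current level):

    float d_high = texture (density_high, coord_high).r;  // ~0 when the vertex sits exactly on the high LOD isosurface
    float d_low  = texture (density_low,  coord_low ).r;  // the smoothed, lower LOD density at the same point
    float lambda = d_low - d_high;                        // < 0 on bumps that must shrink, > 0 in pits that were filled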

Blending of vertex normals across two LODs

Last but not least, it is very important to blend the vertex normals between LODs. Computing the normal from the density map is very easy; given a point p, we compute the difference between the density values on either side of p along the X axis to get the rate of change along that axis. We do the same for the other two axes:

gradient.x = density_map (p + [1, 0, 0]) - density_map (p + [-1, 0, 0])
gradient.y = density_map (p + [0, 1, 0]) - density_map (p + [0, -1, 0])
gradient.z = density_map (p + [0, 0, 1]) - density_map (p + [0, 0, -1])
normal = normalize (gradient)

Using the displaced vertex position, we compute the normal from the lower LOD density map and blend it with the high quality normal as we get closer to the intersection. The screenshot on the left shows the difference.
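Again, this is only a sketch of what the blend could look like in the shader; the cell_step and lod_blend parameters (and how the distance to the intersection is turned into a blend factor) are assumptions of mine, not something described in this post.

uniform sampler3D density_low;   // density map of the parent (lower detail) level

// Same central differences as above, evaluated in the lower LOD map at the displaced vertex.
// cell_step: size of one density cell in texture coordinates (my name, not the post's).
vec3 low_lod_normal (vec3 coord_low, float cell_step)
{
    vec3 gradient;
    gradient.x = texture (density_low, coord_low + vec3 (cell_step, 0.0, 0.0)).r - texture (density_low, coord_low - vec3 (cell_step, 0.0, 0.0)).r;
    gradient.y = texture (density_low, coord_low + vec3 (0.0, cell_step, 0.0)).r - texture (density_low, coord_low - vec3 (0.0, cell_step, 0.0)).r;
    gradient.z = texture (density_low, coord_low + vec3 (0.0, 0.0, cell_step)).r - texture (density_low, coord_low - vec3 (0.0, 0.0, cell_step)).r;
    return normalize (gradient);
}

// lod_blend: assumed to go from 0 well inside the high LOD cube to 1 at the intersection with the lower LOD.
vec3 blend_normals (vec3 high_lod_normal, vec3 coord_low, float cell_step, float lod_blend)
{
    return normalize (mix (high_lod_normal, low_lod_normal (coord_low, cell_step), lod_blend));
}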

This technique is only an approximation though, and I've spent a lot of time tweaking it to improve the overall result. It runs at render time, that is, it isn't part of the geometry generation process, so generating the geometry (when editing the terrain or moving around) doesn't take much longer. The rendering, however, has to run the algorithm every frame for every vertex in the terrain. Fortunately, it is a fairly lightweight vertex shader (at most 8 texture fetches) with almost no branching.

Vertex position and normal are especially important in the context of volumetric terrain rendering, because they're used to compute the texture coordinates. Even a slight disturbance in the normals or the vertex position has a major impact on the texture projection, leading to quite visible artifacts. I haven't tried texturing yet; that will be the subject of another post. Let's just hope that the current seamless LOD system will be good enough to minimize the rendering glitches (which I predict will be most visible with normal mapping).

Posted by Yno on 2012-04-05 12:12