News

News index. Browse and read old stuff; you may quickly run into French posts.

Seamless LOD transition

These last few weeks I've been working on a seamless LOD transition mechanism for my voxel terrain system. It does not seem to be a widely discussed topic. nVidia didn't address the issue in their now famous article: there are simply rough transitions between LOD levels, as can be seen in this video. It seems that nVidia published something else on the subject, but the search engine on their website doesn't seem to work and I'm not going to dig through a bunch of unordered pages just to find one article. The Transvoxel algorithm works fine if the geometry is generated on the CPU, but it becomes a hassle if you want to generate it on the GPU (and it needs relatively big lookup tables anyway). Moreover, it was designed to work with marching cubes: I'm using marching tetrahedra, so I would basically have to generate all the transition cases myself. Someone posted a pretty-looking video, though its quality probably wouldn't allow distinguishing any LOD issue. Unfortunately he made himself a hard man to reach: he still hasn't answered my PM on YouTube, his nickname contains "Starcraft", there are no URLs in his video, no blog, no technical explanation, no nothing. This video is very nice looking, but its author is also a phantom.

Visualization of LOD levels So from here I started searching for a technique of my own. I had some ideas, but many seemed very unreliable compared to the work required to implement them (yes, I'm lazy). But let me first introduce the problem to you.

The LOD system currently works like this. Each level is defined by a cube. A cube is a regular 3D grid containing the density values that represent the terrain. If you take 8 neighboring points you get a cell to which the marching cubes algorithm can be applied. Lower LOD cubes wrap higher ones. Since all cubes have the same grid resolution, lower LOD cubes are made twice as big (see the image on the right). Now at the intersection of two cubes, you get this:
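As a toy illustration of this layout (the resolution, cell size and function name here are mine, not the engine's), each level keeps the same grid resolution while doubling its world-space extent:

```python
def lod_cube_extent(level, resolution=32, base_cell_size=1.0):
    """World-space size of the LOD cube at a given level.

    Every level stores the same resolution^3 grid of density values,
    but each level's cells are twice as big as the previous level's,
    so the whole cube doubles in size too.
    """
    cell_size = base_cell_size * 2 ** level
    return resolution * cell_size

# Level 0 covers 32 units, level 1 wraps it with 64, level 2 with 128.
```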

Cracks between two levels of detail Cracks between two levels of detail

As can be seen on those screenshots, there are holes in the terrain. Those holes match one of the six faces of the cube that forms the inner level of detail. The problem is that at the intersection between two levels, the vertices of one level do not match those of the other. This is due both to the higher LOD having twice as many vertices and to the LOD generation filter smoothing the topology of the terrain. Let's take a look at a slice of the terrain:

Visualization of a slice of a voxel terrain

Each cross is a single density value. The marching cubes algorithm is applied on each cell. This is a slice of the terrain as seen from the side; the region below the line is underground, the region above it is air. When computing one LOD, the resolution stays the same but the area is twice as big, so the cells are twice as big too. Lower LOD cells appear darkened:

Visualization of a slice of a voxel terrain

As you can see, the topology has been simplified. As a result, bumps have shrunk while pits have been filled (the bumps dig in and the pits bump up, as the French pun goes). Knowing this behavior, I thought I could just move the high LOD (blue) vertices along their normals to bring them closer to the simplified surface. Vertex normals are shown in orange (I like orange):

Visualization of a slice of a voxel terrain with vertex normals

The final position of a vertex is then computed as follows: final = position + λ * normal. The only question is: what is λ and how do we compute it? You might have guessed that this coefficient needs to be negative on bumps in order to shrink the shape. To know whether we are on a bump or in a pit, we can just sample the neighboring densities. If their normalized sum (say, between -1 and 1) is less than 0, the vertex is mostly surrounded by air: we are on a bump. For values greater than 0, we are in a pit. So we can basically say that λ is the normalized sum of the surrounding densities. We don't actually need to compute this value; in fact, we already have it in the lower LOD density map.

This works pretty well but it doesn't account for the density at the vertex we are moving. We have to compute the difference between the densities in the high and low LOD maps to know whether the vertex is likely to have swollen or shrunk. This produces a much more accurate result. You may think that we could run several iterations of this scheme, each time using the modified vertex position computed in the previous iteration. Actually, from what I've tested, it doesn't improve the final result much, and may even degrade it sometimes; I don't think it's worth the additional texture fetches.
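Here is a minimal sketch of the displacement as I understand it; the function names, the density sign convention (negative for air, positive for solid) and the exact way the two terms combine are my assumptions, not the engine's actual shader:

```python
def displace_vertex(position, normal, density_low, density_high):
    # lambda starts as the normalized low-LOD density at the vertex:
    # negative (mostly air) on bumps, positive (mostly solid) in pits.
    lam = density_low(position)
    # Correct by the high/low difference at the vertex itself, to account
    # for how much the surface actually swelled or shrank when filtered.
    lam -= density_high(position)
    # final = position + lambda * normal: negative lambda pulls the
    # vertex inward along its normal, positive pushes it outward.
    return tuple(p + lam * n for p, n in zip(position, normal))

# On a bump (low-LOD density -0.5, high-LOD density 0), the vertex at
# (0, 0, 1) with normal (0, 0, 1) is pulled down to (0, 0, 0.5).
```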

Blending of vertex normals across two LODs Last but not least, it is very important to blend the vertex normals between LODs. Computing the normal from the density map is very easy: given a point p, we compute the difference between the neighboring values of p in the density map along the X axis in order to get the rate of change along that axis. We do the same for each other axis:

gradient.x = density_map (p + [1, 0, 0]) - density_map (p + [-1, 0, 0])
gradient.y = density_map (p + [0, 1, 0]) - density_map (p + [0, -1, 0])
gradient.z = density_map (p + [0, 0, 1]) - density_map (p + [0, 0, -1])
normal = normalize (gradient)

Using the displaced vertex position, we compute the normal from the lower LOD density map and blend it with the high quality normal as we get closer to the intersection. The screenshot on the left shows the difference.
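A sketch of that blend, assuming a simple linear mix driven by a weight t that ramps from 0 to 1 as the vertex approaches the intersection (the weight function and all names here are my assumptions):

```python
import math

def blend_normals(normal_high, normal_low, t):
    """Blend the high-LOD normal toward the low-LOD one and renormalize.

    t = 0 far from the boundary keeps the high-quality normal; t = 1 at
    the intersection uses the normal computed from the lower-LOD density
    map at the displaced vertex position, so both sides agree there.
    """
    mixed = [(1.0 - t) * h + t * l for h, l in zip(normal_high, normal_low)]
    length = math.sqrt(sum(c * c for c in mixed))
    return tuple(c / length for c in mixed)
```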

This technique is only an approximation, though; I've spent a lot of time tweaking it to improve the overall result. It runs at render time, that is, it isn't part of the geometry generation process. As a result, generating the geometry (when editing the terrain or moving around) doesn't take much longer. The renderer, however, needs to run the algorithm every frame for every vertex in the terrain. Fortunately, it is a quite lightweight vertex shader (up to 8 texture fetches only) with almost no branching.

Vertex positions and normals are especially important in the context of volumetric terrain rendering, because they're used to compute the texture coordinates. Even a slight disturbance in the normals or the vertex positions has a major impact on the texture projection, leading to quite visible artifacts. I haven't tried texturing yet; that will be part of another post. Let's just hope that the current seamless LOD system will be good enough to minimize the rendering glitches (which I predict will be most visible with normal mapping).

Posted by Yno on 2012-04-05 12:12

New year's present… at last

As you might already have noticed, I gave the website a little love today. Well, this has actually been waiting for months, but it's finally there after a week of work: the website got a whole redesign!

That's all I wanted to say, since there's no need to point out that the skinning changed a lot, or that the homepage now shows the latest news, commits, forum posts and more. Maybe I could tell you about the few core improvements, but let's be honest: it's still nothing attractive, and nobody cares.

So, voila, that's it. Enjoy your tour!

Posted by Ban on 2012-02-18 03:02

Voxel terrain rendering using marching tetrahedra

Hi there,

For the last three weeks (20 days, precisely) I've been working on a terrain system using voxels. Only the basics have been implemented so far, yet I think most of the dirty work is done anyway. But let's talk about the technique itself.

Volume rendering
Voxel is a quite generic buzzword which usually means that you're representing the 3D world using a regular grid. However, it doesn't tell anything about the way you exploit this grid. There are many things that can be done with voxels, from ray marching (for advanced volume rendering) to straightforward rendering of a bunch of cubes (like the well-known game Minecraft does). The grid usually represents a volume, each cell telling us something about how the world is shaped or defined at the cell's position. In my case I store the density of the terrain, i.e. whether a point is inside or outside the terrain's volume. The first screenshot shows such a grid; the second is a close-up view.
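As a toy example of such a density grid (the sphere, the resolution and every name here are made up for illustration; the terrain uses its own density function):

```python
import numpy as np

def sample_density_grid(resolution=16, radius=6.0):
    """Fill a regular 3D grid with signed density values.

    Positive values are inside the volume, negative values are outside;
    the volume here is just a sphere centered in the grid.
    """
    coords = np.arange(resolution) - resolution / 2.0
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    return radius - np.sqrt(x * x + y * y + z * z)

grid = sample_density_grid()
# The grid center lies inside the sphere, the corners outside it.
```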

Zoomed-in volume rendering
It gets interesting when it comes to making a nice-looking terrain surface out of this rough grid. Actually there's a quite well-known technique to achieve this: marching cubes. It's an old technique that even had a patent on it (now expired, afaik). The principle is simple: it consists of placing triangles inside a cube depending on whether its corners are inside or outside the volume, which leads to 256 triangulation cases for a cube (8 corners, 2^8 = 256). I'll let you search the web for the details of the technique. From there, constructing the iso-surface of our terrain is very easy: we simply walk (yes, we're not into Mordor... yet) through the cells of the grid and generate the appropriate triangulation for each of them. Unfortunately, this straightforward method isn't suitable for real-time applications. Roughly, when the geometry shader outputs a lot of vertices it becomes a huge bottleneck. Moreover, vertices of adjacent cells are generated twice. nVidia has published a nice article about this problem and how to solve it pretty efficiently: GPU Gems 3's first chapter. Texture coordinates and normals are also generated. Note that I haven't managed to get decent normals yet.
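For the case count, each of the 8 corners contributes one bit; a minimal sketch (the corner ordering and iso threshold are arbitrary here, not tied to any particular marching cubes table):

```python
def cube_case_index(corner_densities, iso=0.0):
    """Pack the inside/outside state of a cell's 8 corners into a byte.

    Bit i is set when corner i is inside the volume (density >= iso),
    which yields the 2**8 = 256 possible triangulation cases; the result
    is the index used to look up the cell's triangle configuration.
    """
    case = 0
    for i, density in enumerate(corner_densities):
        if density >= iso:
            case |= 1 << i
    return case

# Fully outside -> case 0 (no triangles); fully inside -> case 255.
```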

Voxel terrain rendering
I won't go deep into the technique, but the idea is to generate each individual vertex only once and use an index array, also generated. The tricky part is the index array generation; you may guess that you just have to march through the grid like in the first method and generate indices instead of vertices, but one does not simply walk into a regular grid to generate indices. The first problem is: how do you know the indices of the vertices when marching the grid to generate the index array? You have to do a first pass that stores them into a huge 3D texture at wisely chosen positions. The second pass walks through the non-empty cells (i.e. those that contain vertices) and fetches the 3D texture to know the indices of the vertices it is marching on. Then output the indices and blah blah, it's done; kid's stuff.
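A much-simplified CPU sketch of the two-pass idea (on the GPU the shared-index storage is the 3D texture; here a plain dictionary keyed by a global edge id stands in for it, and every name is mine):

```python
def build_indexed_mesh(cell_triangles, edge_vertex):
    """Two-pass indexed mesh generation over marching cells.

    cell_triangles: per-cell lists of triangles, each triangle given as
    three global edge keys (an edge shared by two cells has one key).
    edge_vertex: maps an edge key to the vertex position on that edge.
    Pass 1 emits each used vertex exactly once and records its index;
    pass 2 walks the cells again and resolves indices by lookup.
    """
    index_of, vertices = {}, []
    for triangles in cell_triangles:          # pass 1: unique vertices
        for tri in triangles:
            for edge in tri:
                if edge not in index_of:
                    index_of[edge] = len(vertices)
                    vertices.append(edge_vertex[edge])
    indices = [index_of[edge]                 # pass 2: index buffer
               for triangles in cell_triangles
               for tri in triangles
               for edge in tri]
    return vertices, indices
```

Two adjacent cells that share an edge end up referencing the same vertex index instead of emitting the vertex twice.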

I believe, however, that you are experienced enough to guess that it's not the kind of technique you get working without a few surprise bugs! Funny thing: the more stupid and obvious they are, the harder they are to find. Moreover, the visual output is often not meaningful in bugged GL apps, *but* you struggle hard to believe it is. I honestly wonder if I'm the only one to be that stupid, but sometimes I only see what I wanna see; it's like searching the ground to find that plane in the sky.

Rage self-analysis paragraph behind us, let's talk about my plans for the next few weeks. Even if you're not much into 3D rendering, you might have noticed that there is a huge problem with this sweet terrain system: the regular grid has to be gigantic to describe even a small 1 km^2 area. Sure, you can pick 1 cell = 5 m to manage large terrains, but needless to say you'll end up with very poor details; a trade-off is not a solution in itself. What I thought of instead is a kind of level of detail system (how surprising) where the terrain is kept in CPU memory, or even on the HDD, and progressive levels are uploaded as needed (i.e. when the player moves through the terrain). On the GPU side I see it much like a geometry clipmap, but in 3D instead of 2D. I believe the tough part will be the seamless LOD transition, but I have some ideas for it.

PS: the title states that I'm using marching tetrahedra (which is true) but I completely forgot to mention them! I'll do it in the next post, this one is big enough.

Posted by Yno on 2012-02-10 22:23 — last updated by Yno on 2012-02-11 19:06