Tuesday, March 16, 2010

GPU Procedural Planet

This example does not use any textures to colour the terrain and is therefore 100% procedurally generated. In basic terms, no media is loaded; everything is generated in the engine.

Let’s take some time to recap what needs to be procedurally generated in order to render the planet (shown in the video below).

Permutations Texture
The topography of the planet is created using noise algorithms. This example uses ridged multifractal noise, a variant of fractional Brownian motion. At runtime this noise is built from a series of permutations which are stored in a texture so that they can be accessed in a shader. The texture data doesn't make much visual sense, but here is an example of what it looks like.


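As a rough illustration of how such a permutations texture can be built, here is a minimal sketch. This is not the engine's actual code; the classic 256-entry Perlin-style table, the duplication, and the byte layout are assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// A minimal sketch, not the engine's actual code: a classic 256-entry
// permutation table, shuffled from a seed and duplicated so a shader can
// index perm[a + b] without wrapping, ready to be uploaded as texture data.
std::vector<uint8_t> BuildPermutationTexture(uint32_t seed)
{
    std::vector<uint8_t> perm(256);
    std::iota(perm.begin(), perm.end(), uint8_t{0});   // 0, 1, ..., 255
    std::mt19937 rng(seed);
    std::shuffle(perm.begin(), perm.end(), rng);       // the (pseudo-)random data

    std::vector<uint8_t> texels(512);                  // duplicated copy
    for (std::size_t i = 0; i < texels.size(); ++i)
        texels[i] = perm[i & 255];
    return texels;  // upload as a 1-channel texture via the graphics API
}
```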
1 Vertex Buffer
The planet is rendered as a series of patches. This patch structure is what allows the recursive subdivision that increases or decreases the visible level of detail. Whereas the CPU Planet generates a unique vertex buffer for each patch (because the noise is calculated when the patch is created and applied as height data in the vertex buffer), the GPU Planet uses just 1 vertex buffer of X * X vertices, generated procedurally, which is displaced in a shader at runtime for each patch to be rendered (sketched below).
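A minimal sketch of such a shared grid, with illustrative names (this is not the engine's actual code; the [0,1]^2 layout and per-patch shader mapping are assumptions):

```cpp
#include <cstddef>
#include <vector>

// One shared grid of X * X vertices in [0, 1]^2. The vertex shader is assumed
// to map this grid onto each patch's region of the sphere and displace it
// with the noise height, so every patch can reuse the same buffer.
struct PatchVertex { float u, v; };

std::vector<PatchVertex> BuildSharedPatchGrid(int x)   // x >= 2
{
    std::vector<PatchVertex> vertices;
    vertices.reserve(static_cast<std::size_t>(x) * x);
    for (int row = 0; row < x; ++row)
        for (int col = 0; col < x; ++col)
            vertices.push_back({ col / float(x - 1), row / float(x - 1) });
    return vertices;
}
```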


16 Index Buffers
An index buffer is used along with a vertex buffer to render geometry; in a lot of cases 1 vertex buffer is used with 1 index buffer. As described in previous posts, the terrain requires 16 index buffers, generated procedurally, so that there are no terrain cracks: each of a patch's four edges can border a neighbour at either the same or a coarser level of detail, giving 2^4 = 16 edge combinations. It must be possible for the edges of terrain patches with different levels of detail to join together seamlessly; one such stitching is sketched below.
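A minimal sketch of the stitching idea, not the engine's actual code. It handles only the north edge; treating all four edges the same way gives the 16 buffers:

```cpp
#include <cstdint>
#include <vector>

// Indices for one patch of (n + 1) x (n + 1) vertices, where n is even.
// If northCoarser is set, the top row is stitched to meet a neighbour with
// half the edge resolution; handling the other three edges the same way
// yields the 2^4 = 16 index buffers described above.
std::vector<uint32_t> BuildPatchIndices(int n, bool northCoarser)
{
    auto idx = [n](int x, int y) { return static_cast<uint32_t>(y * (n + 1) + x); };
    std::vector<uint32_t> indices;

    // Regular quads (two triangles each) everywhere except a stitched top row.
    for (int y = northCoarser ? 1 : 0; y < n; ++y) {
        for (int x = 0; x < n; ++x) {
            indices.insert(indices.end(), { idx(x, y), idx(x + 1, y), idx(x, y + 1) });
            indices.insert(indices.end(), { idx(x + 1, y), idx(x + 1, y + 1), idx(x, y + 1) });
        }
    }

    if (northCoarser) {
        // The coarser neighbour only has vertices at even x along y = 0, so
        // each pair of border cells is fanned from those vertices: no triangle
        // touches the odd (missing) edge vertex, so no crack can open.
        for (int x = 0; x < n; x += 2) {
            indices.insert(indices.end(), { idx(x, 0), idx(x + 1, 1), idx(x, 1) });
            indices.insert(indices.end(), { idx(x, 0), idx(x + 2, 0), idx(x + 1, 1) });
            indices.insert(indices.end(), { idx(x + 2, 0), idx(x + 2, 1), idx(x + 1, 1) });
        }
    }
    return indices;
}
```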



The video above shows a basic GPU Planet. There is quite an obvious bug visible as the camera moves. Because all noise is generated on the GPU, the Decade Engine running on the CPU has no knowledge of how much a vertex is displaced. All distance checking from the camera to the planet is calculated from the camera position to the underlying sphere of the planet (vertices displaced to the radius of the planet but without height noise applied). This is fine when the camera is high above the terrain, but as the camera moves close to the surface, especially if the ground is at a high altitude, the sphere position may still be some distance beneath it, and terrain subdivision does not occur properly.

I am considering 2 possible techniques to overcome this (a sketch of the corrected subdivision test follows the list):
  1. Generate the same noise values on the CPU as are generated on the GPU. Since all (pseudo-)random data is stored in the permutations texture, it should be possible.
  2. Render the height data to a texture instead of generating it on demand each frame, then use this texture both for shader vertex displacement and for calculating the height of key vertices on the CPU.
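Either way, once the CPU knows the displaced height at a test vertex, the subdivision check can measure the distance to the actual surface. A minimal sketch with illustrative names; the height value is assumed to come from option 1 or option 2 above:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The subdivision test measures the distance from the camera to the
// *displaced* surface point, using a terrain height the CPU has either
// computed to match the shader (option 1) or fetched from a rendered
// height texture (option 2).
bool ShouldSubdivide(const Vec3& camera, const Vec3& unitSpherePos,
                     float terrainHeight, float planetRadius,
                     float splitDistance)
{
    const float r = planetRadius + terrainHeight;
    const Vec3 surface { unitSpherePos.x * r, unitSpherePos.y * r, unitSpherePos.z * r };
    const float dx = camera.x - surface.x;
    const float dy = camera.y - surface.y;
    const float dz = camera.z - surface.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) < splitDistance;
}
```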

5 comments:

  1. Hi, I've been experimenting with a single VB too, but it just didn't work because of the fp32 on the GPU. Add a few levels of depth to the quadtree and weird shit starts to happen.
    Just a tip: check it before going any further, and decide if it's gonna be a problem...

  2. I got the exact same thing on the CPU. On an earth-sized planet, when I went below subdivision level 32 I got the "weird shit".

    I expect it to happen on the GPU too, because as you said it can only represent data as fp32. However, even if it does happen, I am still better off than with just the CPU implementation.

    With some scaling of the patch boxes on the CPU (parameters which are then passed to the shader), I hope to be able to delay this happening until a much higher (or is it lower?) depth, if not avoid it completely.

    Do you have a blog/website showing progress of your work?

  3. I haven't got a blog at this point... Maybe I'll start one when I'm progressing a bit more.

    The 'weird stuff' is due to the fp32 precision. BUT there is a way around this: using camera space.
    It's actually quite simple if you use multiple vertex buffers, and plain impossible (I think) if you use one.

    The trick is to SUBTRACT the camera position and ADD the planet position to each vertex. Then you save the camera position that you subtracted in the quadtree node.
    Then you remove the camera position (translation) from the camera class. For each quad you render, create a translation (world) matrix that contains the inverse (-) of the camera position for which you created the quad.

    What this does effectively is turn precision the other way around: the closer to the planet, the more precision; the further away, the less.

    Maybe it is time to start a blog and explain this a little more 'completely'.
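    One possible reading of the trick described above, as a minimal sketch with illustrative names and types (not the commenter's actual code):

```cpp
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Store each vertex relative to the camera position at quad-creation time,
// so geometry near the viewer keeps fp32 precision instead of the planet
// centre getting it.
Vec3f MakeCameraRelative(const Vec3d& vertexWorld, const Vec3d& creationCamera)
{
    return { static_cast<float>(vertexWorld.x - creationCamera.x),
             static_cast<float>(vertexWorld.y - creationCamera.y),
             static_cast<float>(vertexWorld.z - creationCamera.z) };
}

// At draw time, with the translation removed from the camera/view transform,
// each quad's world translation is (creationCamera - currentCamera), computed
// in double precision so the subtraction of two large values stays accurate.
Vec3f QuadTranslation(const Vec3d& creationCamera, const Vec3d& currentCamera)
{
    return { static_cast<float>(creationCamera.x - currentCamera.x),
             static_cast<float>(creationCamera.y - currentCamera.y),
             static_cast<float>(creationCamera.z - currentCamera.z) };
}
```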

  4. Thanks for the information. I hope to look into this further very soon. In the absence of your own blog, I would be very happy to publish a description here (with full credit and reference to you, of course), so if you would like to write up a more detailed description, please let me know.

  5. Hi,
    I've been reading about your engine for 2 weeks. I also read about the water effect. It's my desired effect and I would like to use it in my simple game engine.
    I have asked questions about this method on the gamedev.net and opengl.org forums, but I didn't get good responses.
    This water effect has become a "disaster effect" for me!
    I need some help. Could you please help and answer some of my questions about this effect?
    If so, my email is opentechno@hotmail.com.
    I didn't find your email, so I don't know where I should send my questions.
    Thanks,
    Ehsan
