I received an email asking me to explain how I generate a procedural sphere, as per this post. Rather than respond to emails personally, I have decided in future to answer such questions on the blog, presenting the information to a wider audience for comment, correction, improvement and perhaps a little learning.

The base of my generated sphere is a cube. Depending on how smooth the generated sphere needs to be, I recursively split each face into 4 equally sized children. This is achieved by finding the center of the face, popping that point out so that it is the correct radius from the center, then making 4 faces using the original points and this new point. (There is some ASCII art showing this in the code below.)

void CSphere::Initialize(float p_fRadius, int p_iMaxDepth)
{
    //    TLB------TRB
    //    /|       /|
    //   / |      / |
    // TLF------TRF |
    //  |  BLB---|-BRB
    //  |  /     | /
    //  | /      |/
    // BLF------BRF

    //Putting the vertices of the initial cube at p_fRadius is not correct, as the distance
    //of (p_fRadius, p_fRadius, p_fRadius) from the origin is greater than p_fRadius.
    CVector3 l_Vertices[8];
    l_Vertices[TOP_LEFT_FRONT]     = MoveToRadiusDistance(CVector3(-p_fRadius,  p_fRadius, -p_fRadius), p_fRadius);
    l_Vertices[TOP_RIGHT_FRONT]    = MoveToRadiusDistance(CVector3( p_fRadius,  p_fRadius, -p_fRadius), p_fRadius);
    l_Vertices[BOTTOM_RIGHT_FRONT] = MoveToRadiusDistance(CVector3( p_fRadius, -p_fRadius, -p_fRadius), p_fRadius);
    l_Vertices[BOTTOM_LEFT_FRONT]  = MoveToRadiusDistance(CVector3(-p_fRadius, -p_fRadius, -p_fRadius), p_fRadius);
    l_Vertices[TOP_LEFT_BACK]      = MoveToRadiusDistance(CVector3(-p_fRadius,  p_fRadius,  p_fRadius), p_fRadius);
    l_Vertices[TOP_RIGHT_BACK]     = MoveToRadiusDistance(CVector3( p_fRadius,  p_fRadius,  p_fRadius), p_fRadius);
    l_Vertices[BOTTOM_RIGHT_BACK]  = MoveToRadiusDistance(CVector3( p_fRadius, -p_fRadius,  p_fRadius), p_fRadius);
    l_Vertices[BOTTOM_LEFT_BACK]   = MoveToRadiusDistance(CVector3(-p_fRadius, -p_fRadius,  p_fRadius), p_fRadius);

    //Initialise the faces of the cube. (The face structure just stores the vertices of the
    //face corners and its depth in the face tree; it also has render functionality, which
    //is not applicable to this explanation.)
    m_pFaces[FRONT].Initialise(FRONT, l_Vertices[TOP_LEFT_FRONT], l_Vertices[TOP_RIGHT_FRONT], l_Vertices[BOTTOM_RIGHT_FRONT], l_Vertices[BOTTOM_LEFT_FRONT], DEPTH0);
    m_pFaces[RIGHT].Initialise(RIGHT, l_Vertices[TOP_RIGHT_FRONT], l_Vertices[TOP_RIGHT_BACK], l_Vertices[BOTTOM_RIGHT_BACK], l_Vertices[BOTTOM_RIGHT_FRONT], DEPTH0);
    m_pFaces[BACK].Initialise(BACK, l_Vertices[TOP_RIGHT_BACK], l_Vertices[TOP_LEFT_BACK], l_Vertices[BOTTOM_LEFT_BACK], l_Vertices[BOTTOM_RIGHT_BACK], DEPTH0);
    m_pFaces[LEFT].Initialise(LEFT, l_Vertices[TOP_LEFT_BACK], l_Vertices[TOP_LEFT_FRONT], l_Vertices[BOTTOM_LEFT_FRONT], l_Vertices[BOTTOM_LEFT_BACK], DEPTH0);
    m_pFaces[TOP].Initialise(TOP, l_Vertices[TOP_LEFT_BACK], l_Vertices[TOP_RIGHT_BACK], l_Vertices[TOP_RIGHT_FRONT], l_Vertices[TOP_LEFT_FRONT], DEPTH0);
    m_pFaces[BOTTOM].Initialise(BOTTOM, l_Vertices[BOTTOM_LEFT_FRONT], l_Vertices[BOTTOM_RIGHT_FRONT], l_Vertices[BOTTOM_RIGHT_BACK], l_Vertices[BOTTOM_LEFT_BACK], DEPTH0);

    //Subdivide each face to the required resolution
    m_pFaces[FRONT].SubDivide(p_fRadius, p_iMaxDepth);
    m_pFaces[RIGHT].SubDivide(p_fRadius, p_iMaxDepth);
    m_pFaces[BACK].SubDivide(p_fRadius, p_iMaxDepth);
    m_pFaces[LEFT].SubDivide(p_fRadius, p_iMaxDepth);
    m_pFaces[TOP].SubDivide(p_fRadius, p_iMaxDepth);
    m_pFaces[BOTTOM].SubDivide(p_fRadius, p_iMaxDepth);
}

where

bool CSphereFace::SubDivide(float p_fRadius, int p_iMaxDepth)
{
    if (m_iDepth >= p_iMaxDepth)
        return false;

    //Create the additional vertices
    //
    // NW--------------D--------------NE
    // |               |               |
    // |               |               |
    // |               |               |
    // A------------Center-------------C
    // |               |               |
    // |               |               |
    // |               |               |
    // SW--------------B--------------SE
    //
    g_vAdditionalVertices[A] = m_vBaseVertices[eNorthWest] + ((m_vBaseVertices[eSouthWest] - m_vBaseVertices[eNorthWest]) / 2.0f);
    g_vAdditionalVertices[B] = m_vBaseVertices[eSouthWest] + ((m_vBaseVertices[eSouthEast] - m_vBaseVertices[eSouthWest]) / 2.0f);
    g_vAdditionalVertices[C] = m_vBaseVertices[eNorthEast] + ((m_vBaseVertices[eSouthEast] - m_vBaseVertices[eNorthEast]) / 2.0f);
    g_vAdditionalVertices[D] = m_vBaseVertices[eNorthWest] + ((m_vBaseVertices[eNorthEast] - m_vBaseVertices[eNorthWest]) / 2.0f);

    //Create the child nodes
    m_pChildren = new CSphereFace[4];
    m_pChildren[eNorthWest].Initialise(eNorthWest, m_vBaseVertices[eNorthWest], g_vAdditionalVertices[D], m_vBaseVertices[eCentre], g_vAdditionalVertices[A], m_iDepth + 1);
    m_pChildren[eNorthEast].Initialise(eNorthEast, g_vAdditionalVertices[D], m_vBaseVertices[eNorthEast], g_vAdditionalVertices[C], m_vBaseVertices[eCentre], m_iDepth + 1);
    m_pChildren[eSouthWest].Initialise(eSouthWest, g_vAdditionalVertices[A], m_vBaseVertices[eCentre], g_vAdditionalVertices[B], m_vBaseVertices[eSouthWest], m_iDepth + 1);
    m_pChildren[eSouthEast].Initialise(eSouthEast, m_vBaseVertices[eCentre], g_vAdditionalVertices[C], m_vBaseVertices[eSouthEast], g_vAdditionalVertices[B], m_iDepth + 1);

    m_pChildren[eNorthWest].SubDivide(p_fRadius, p_iMaxDepth);
    m_pChildren[eNorthEast].SubDivide(p_fRadius, p_iMaxDepth);
    m_pChildren[eSouthWest].SubDivide(p_fRadius, p_iMaxDepth);
    m_pChildren[eSouthEast].SubDivide(p_fRadius, p_iMaxDepth);

    return true;
}

and

CVector3 MoveToRadiusDistance(CVector3 p_Vector, float p_fRadius)
{
    //Get the normalised vector of this vertex from the origin (center of the sphere)
    //and pop it out to the correct radius.
    return p_Vector.Normalize() * p_fRadius;
}

That is pretty much the basics of how I create a sphere by recursively subdividing a cube. If I have overlooked anything, please comment and I will update the post to fill any gap in the information or correct any mistake.

Thank you for posting your code for this Ciaran! I will try to port it to my OpenGL/OpenTK engine and see how it goes.

cheers,

Jamie

Interesting approach. Are you using quad/octrees to check the patches/faces against the frustum? And if so, how are you dividing the sphere into squares for the trees?

I chose to use quadcubes instead of subdividing a cube to a sphere... but I'm not quite sure which method could be considered better, because quadcubes have some drawbacks (like the distance between vertices depending on the polygon's position inside a quadcube face).

Oh, and long time no see. :)

- Mischa

Hi, maybe it would help if you had an about page, so people like me who just stumbled onto your blog know exactly what we're looking at. I'm not sure if I'm seeing POV-Ray or something else. Sorry if this sounds stupid to you and everyone who already knows what it is. :-p I'm just interested because I'm trying to teach myself how to use POV-Ray using online tutorials. Thanks!

Thanks for the great posts, two questions:

"This is achieved by finding the center of the face, popping that point out so that it is the correct radius from the center, then making 4 faces using the original points and this new point."

So is it kind of like using a billboard cloud of outwardly facing triangles?

How do you calculate texture mapping? I have some code to do texture mapping for the traditional sphere shape that bunches up near the poles; it is based on the position of the point on the sphere, so I could probably repurpose that. I guess texture mapping is always going to be a problem unless we map the generated triangles to a texture atlas instead.

Hi Steve

I shall try to answer your questions, but please feel free to ask any follow-up queries.

"So is it kind of like using a billboard cloud of outwardly facing triangles?" No. When the sphere/planet is rendered, it is just a simple mesh, as any other model would be. Consider a square mesh with corners A, B, C and D. Inside this mesh there are N polygons, rendered using a vertex and index buffer. When this face subdivides, the centre is calculated (point E), and instead of the 1 original mesh (A, B, C, D) being rendered, four meshes are now rendered which join up seamlessly at the edges. Sadly I can't attach any images to a comment reply, so my poor command of the English language will have to do.

"How do you calculate texture mapping?" In short, I don't. When I was texturing a planetary body, I used shaders to procedurally generate the fragment color. Check out http://decadeengine.blogspot.com.au/2009/11/cpu-v-gpu-procedural-terrain-texture.html

If you did want to use traditional texturing methods, it wouldn't be too difficult. You could use a cube map, and since the sphere/planet starts out life as a cube, the texture coordinates of each of the corners are known (0,0 - 0,1 - 1,0 - 1,1). When a patch subdivides, the texture coordinates of the new corners can be calculated as the midpoints of the parent's texture coordinates.

Hope that makes sense. Thanks for the questions and by all means keep them coming. If I cannot give enough detail in a comment reply I will gladly make a complete post on the matter.

Ciaran

Hi !

I used your code as an example for my engine and found a missing function call in your example code: the additional vertices aren't moved to the radius distance.

[C#]

AdditionalVtx[(int)Corner.A] = mVertices[(int)Corner.NW] + ((mVertices[(int)Corner.SW] - mVertices[(int)Corner.NW]) / 2.0);
AdditionalVtx[(int)Corner.B] = mVertices[(int)Corner.SW] + ((mVertices[(int)Corner.SE] - mVertices[(int)Corner.SW]) / 2.0);
AdditionalVtx[(int)Corner.C] = mVertices[(int)Corner.NE] + ((mVertices[(int)Corner.SE] - mVertices[(int)Corner.NE]) / 2.0);
AdditionalVtx[(int)Corner.D] = mVertices[(int)Corner.NW] + ((mVertices[(int)Corner.NE] - mVertices[(int)Corner.NW]) / 2.0);

Must be:

[C#]

AdditionalVtx[(int)Corner.A] = RadiusToDistance(mVertices[(int)Corner.NW] + ((mVertices[(int)Corner.SW] - mVertices[(int)Corner.NW]) / 2.0), Radius);
AdditionalVtx[(int)Corner.B] = RadiusToDistance(mVertices[(int)Corner.SW] + ((mVertices[(int)Corner.SE] - mVertices[(int)Corner.SW]) / 2.0), Radius);
AdditionalVtx[(int)Corner.C] = RadiusToDistance(mVertices[(int)Corner.NE] + ((mVertices[(int)Corner.SE] - mVertices[(int)Corner.NE]) / 2.0), Radius);
AdditionalVtx[(int)Corner.D] = RadiusToDistance(mVertices[(int)Corner.NW] + ((mVertices[(int)Corner.NE] - mVertices[(int)Corner.NW]) / 2.0), Radius);

Also, the center point in the constructor / initialising method must be moved to distance:

[C#]

mVertices[(int)Corner.Center] = RadiusToDistance((NW + SE) / 2, Radius);

Lunatix

Hey Lunatix

Thank you very much for your comment and for attaching the source for others to see. You raise a very good point which I failed to address in my original post.

The reason my code is missing the step of moving the patch vertices to the desired radius is that I took the samples from my planet engine, which does not do any translation on the CPU. All vertices are translated to the correct radius, with noise applied, in the vertex shader (or geometry shader if supported).

The bad thing about this approach is that since the translation is applied in the shader (either from a heightmap texture or a shader algorithm) there is no state, and the translation has to be processed for each rendered vertex every frame. This is quite a bit of work, but in my research I found that pretty much any GPU is well capable of it.

The great thing about this approach is that you only need 1 mesh patch to render a complete sphere/planet. Since there are no local modifications to the patch on the CPU, the exact same patch can be used for every segment of the sphere/planet. At start-up I generate 1 vertex buffer of a specified size. I then generate 16 index buffers so that all neighbouring levels of detail can be rendered (decadeengine.blogspot.com.au/2010/03/gpu-procedural-planet.html shows this visually).

At render time I tell the shader where the corners of the patch are (top left, top right, bottom left, bottom right), and the shader can then translate this shared mesh to the desired coordinates for the current patch, applying the radius translation and some noise.

Ciarán