Monday, March 28, 2011

Remove back-facing patches before the render pipeline.

In a previous post, Procedural Planet - Subdividing Cube, I mentioned how I remove complete patches which face away from the camera before the API culls their polygons in the render pipeline.

"Using frustum culling is not enough to remove any unrendered polygons from the planet. When close to a planet it can look like a flat terrain, just like the earth does for us as we stand on it, but from height it can be seen that the planet is in-fact spherical. With this knowledge it is possible to mathematically remove allot of the planet patches which are on the opposite side of the planet. With Back face culling the API would remove these anyway, however it would be very wasteful to pass these invisible faces down the render pipeline. By using a DotProduct with the LookAt vector of the camera and the Normal of the planet patch translated to model space, it is very simple to ignore these patches."

This code was part of what was lost from Decade in the recent SVN blooper, and therefore had to be rewritten. Despite having implemented the functionality about 18 months ago, it took me some time to grasp the idea again. I feel that a more technical post would be useful, and will hopefully help anyone else implementing similar functionality.

Anyone with experience in graphics programming will be familiar with the concept of back-face culling. As each polygon is passed down the rendering pipeline, it is tested to see whether it is facing towards or away from the camera. Since polygons facing away from the camera cannot be seen, there is no need to render them. This concept can be applied when rendering a planet; however, instead of testing on a polygon-by-polygon basis, I test patch by patch. This allows me to ignore large collections of polygons with a single test instead of the per-polygon test described above.

How is this achieved?

Two pieces of information are required in order to test if a patch is facing towards or away from the camera.

1) The camera look-at vector. This is maintained by the camera object and updated as the camera moves and rotates around the world.
2) The normal vector of the patch. I calculate this when the patch is created, by taking the CrossProduct of vectors along two sides of the patch.



If the values of A, B, C and D in the above image are

l_vA: X=-11.547007 Y=11.547007 Z=-11.547007
l_vB: X=-6.6666670 Y=13.333334 Z=-13.333334
l_vC: X=-8.1649666 Y=8.1649666 Z=-16.329933
l_vD: X=-13.333334 Y=6.6666670 Z=-13.333334
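
A quick note before the code: the snippets below assume a simple CVector3 type with public X, Y and Z members, a subtraction operator and a Normalize helper. Decade's actual class differs, but a minimal sketch of what is needed to follow along would be:

#include <cmath>

struct CVector3
{
    float X, Y, Z;
};

// Component-wise subtraction, used to build edge vectors.
CVector3 operator-(const CVector3& p_vLeft, const CVector3& p_vRight)
{
    CVector3 l_vResult;

    l_vResult.X = p_vLeft.X - p_vRight.X;
    l_vResult.Y = p_vLeft.Y - p_vRight.Y;
    l_vResult.Z = p_vLeft.Z - p_vRight.Z;

    return l_vResult;
}

// Scale a vector to unit length.
CVector3 Normalize(CVector3 p_vVector)
{
    float l_fLength = std::sqrt(p_vVector.X * p_vVector.X +
                                p_vVector.Y * p_vVector.Y +
                                p_vVector.Z * p_vVector.Z);
    CVector3 l_vResult;

    l_vResult.X = p_vVector.X / l_fLength;
    l_vResult.Y = p_vVector.Y / l_fLength;
    l_vResult.Z = p_vVector.Z / l_fLength;

    return l_vResult;
}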


The normal of the patch can be calculated using the following code.

m_vNormal = CalculateNormal(l_vB, l_vA, l_vD);

where

CVector3 CalculateNormal(CVector3 p_vOne, CVector3 p_vTwo, CVector3 p_vThree)
{
    // Build two edge vectors which share p_vOne as their origin.
    CVector3 l_vA = p_vTwo - p_vOne;
    CVector3 l_vB = p_vThree - p_vOne;

    // Their cross product is perpendicular to the patch.
    return Normalize(CrossProduct(l_vA, l_vB));
}

CVector3 CrossProduct(CVector3 p_vVectorOne, CVector3 p_vVectorTwo)
{
    CVector3 l_vResult;

    l_vResult.X = p_vVectorOne.Y * p_vVectorTwo.Z - p_vVectorOne.Z * p_vVectorTwo.Y;
    l_vResult.Y = p_vVectorOne.Z * p_vVectorTwo.X - p_vVectorOne.X * p_vVectorTwo.Z;
    l_vResult.Z = p_vVectorOne.X * p_vVectorTwo.Y - p_vVectorOne.Y * p_vVectorTwo.X;

    return l_vResult;
}

The calculated normal would be X=0.44721335 Y=-0.44721335 Z=0.77459687
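
As a quick sanity check, a small throwaway test (hypothetical, but built from the vector helpers sketched above and the CalculateNormal function) reproduces that value:

#include <cstdio>

int main()
{
    CVector3 l_vA = { -11.547007f, 11.547007f, -11.547007f };
    CVector3 l_vB = { -6.6666670f, 13.333334f, -13.333334f };
    CVector3 l_vD = { -13.333334f, 6.6666670f, -13.333334f };

    // The same call as the patch construction code above.
    CVector3 l_vNormal = CalculateNormal(l_vB, l_vA, l_vD);

    // Prints X=0.447213 Y=-0.447213 Z=0.774597
    printf("X=%f Y=%f Z=%f\n", l_vNormal.X, l_vNormal.Y, l_vNormal.Z);

    return 0;
}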

With this information, as each patch is rendered, a simple DotProduct of these two vectors yields a floating-point value. If this value is less than zero, the patch is facing away from the camera, and it and all of its child patches can immediately be discarded.

float l_fDotProduct = DotProduct(m_vNormal, p_pCamera->get_LookAt());
if (0.0f > l_fDotProduct)
    return;

where

float DotProduct(CVector3 p_vVectorOne, CVector3 p_vVectorTwo)
{
    return p_vVectorTwo.X * p_vVectorOne.X + p_vVectorTwo.Y * p_vVectorOne.Y + p_vVectorTwo.Z * p_vVectorOne.Z;
}
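
To put the test in context, here is a sketch of how it might sit at the top of a patch's render method. The member names and quadtree structure here are illustrative rather than Decade's actual code:

void CPatch::Render(CCamera* p_pCamera)
{
    // Discard this patch, and implicitly all of its children,
    // when it faces away from the camera.
    float l_fDotProduct = DotProduct(m_vNormal, p_pCamera->get_LookAt());
    if (0.0f > l_fDotProduct)
        return;

    if (m_bHasChildren)
    {
        // Recurse into the four child patches.
        for (int i = 0; i < 4; ++i)
            m_pChildren[i]->Render(p_pCamera);
    }
    else
    {
        DrawGeometry();
    }
}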

One more issue must be dealt with before we have a complete solution. The above works well for static objects where all vertices are relative to the origin, but what happens if the object is rotating? The answer to that question is shown in the following image.


It may not be obvious from an image rather than a real-time demo, so I will try to explain. The patch culling implemented above is processed on the raw sphere data. This is equivalent to first removing the back-facing patches (anything on the back of the sphere, relative to the camera, is removed), then rotating the sphere on the Y axis (in the image above the sphere is rotated by 130 degrees), and then rendering, with the API culling all back-facing polygons. This order is obviously incorrect.

The correct sequence would be to rotate the sphere, remove all back-facing patches, then render the remaining patches and allow the API to remove any back-facing polygons within the front-facing patches. Since rotation occurs in the render pipeline, it isn't possible for us to rotate before we remove the back-facing patches.

The solution is to multiply the camera look-at vector by the modelview matrix. This is equivalent to transforming the camera by the same values that will be applied to the sphere, resulting in the correct back-facing patches being removed regardless of what rotation, translation or scaling is applied to the sphere.


float l_fDotProduct = DotProduct(m_vNormal, p_pCamera->get_LookAt() * p_pGraphics->get_Matrix(eModelView));
if (0.0f > l_fDotProduct)
    return;
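
The vector-by-matrix multiply used above isn't shown in this post. Here is a minimal sketch, assuming get_Matrix returns the modelview matrix as a flat array of 16 floats in OpenGL's column-major order; since the look-at is a direction rather than a position, only the upper-left 3x3 of the matrix is applied and the translation column is ignored:

CVector3 operator*(const CVector3& p_vVector, const float* p_fMatrix)
{
    CVector3 l_vResult;

    l_vResult.X = p_vVector.X * p_fMatrix[0] + p_vVector.Y * p_fMatrix[4] + p_vVector.Z * p_fMatrix[8];
    l_vResult.Y = p_vVector.X * p_fMatrix[1] + p_vVector.Y * p_fMatrix[5] + p_vVector.Z * p_fMatrix[9];
    l_vResult.Z = p_vVector.X * p_fMatrix[2] + p_vVector.Y * p_fMatrix[6] + p_vVector.Z * p_fMatrix[10];

    return l_vResult;
}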


(Note: Since p_pCamera->get_LookAt() * p_pGraphics->get_Matrix(eModelView) yields the same value for every patch, it would be better to calculate it once per frame for each planet being rendered. That value can then be used in the test for every patch in the planet.)
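
A sketch of that per-frame caching, with hypothetical function and member names:

void CPlanet::Render(CCamera* p_pCamera, CGraphics* p_pGraphics)
{
    // Transform the look-at vector once per planet, per frame.
    CVector3 l_vCullVector = p_pCamera->get_LookAt() * p_pGraphics->get_Matrix(eModelView);

    // Every patch in the planet tests against this same vector as the
    // six top-level cube-face patches recurse through their children.
    for (int i = 0; i < 6; ++i)
        m_pFacePatches[i]->Render(l_vCullVector);
}

The per-patch test then becomes DotProduct(m_vNormal, p_vCullVector).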
