Idyllon
Technology

While our main focus is content and service, technology remains an important element. To this end, when we reach a point where existing methods and technology leave us without the look, feel and features we need, we extend ourselves and create those methods and technologies. The following is one example; there are many others that remain unpublished.

(Fox & Compton, 2007, as seen in Game Developer Magazine, March 2008) Ambient Occlusive Crease Shading: an algorithm for calculating a fake/artistic approximation to ambient occlusion, lending a typical scene an otherwise-missing depth and warmth. The approach uses absolutely no precalculation, works on absolutely any geometry or mesh, and has a fixed overhead dependent only on resolution (it's entirely screen-based). Below are two example shots, and at the bottom of the page are example shots of the AO coefficient buffer itself (mouse-over to see how the scene looks with AO disabled, though ignore the fireflies in the far-off background).

Pay particular attention to the way the algorithm deepens the shadows around mostly-obscured creases, and especially the way it adds proximity shadows. For instance, note the shading around the branch in front of the door: remove the AO and you remove all visual cue that the door and branch are actually quite close to each other. You might also notice that the AO tends to darken the scene overall, but this isn't by any means a uniform (or disagreeable) modulation. Pay attention to how it shades the plant leaves, for instance, or how bark and wood take on a softer appearance. It only appears to be a consistent dampening; there's a lot of surface-based noise that really makes the material properties "pop."

The basic approach is fairly simple. Per pixel:

- select a set of neighbor pixels;
- for each neighbor, determine to what degree the center pixel faces it (dot the center normal against the vector from the center pixel to the neighbor pixel);
- use the dot product and a scaling factor to compute the occlusion;
- scale the value relative to the distance between the occluders;
- add up all the results for the pixel and you're done - just apply the coefficient to lighting.

You'll want an artist-tweakable factor that applies the occlusion coefficient per light component (a sketch of that application follows below), as this is very much an artistic approach - the more tweakable it is, the better. I give the artist 7 values total per scene: the 3 light+AO component scalars, a bias value (which can bias the dot product to produce a more or less shadowed appearance), the range attenuation value (a falloff coefficient dependent on the distance between center and neighbor), and an averager (the divisor used when averaging the results of all neighbor pixels). While it's tempting to assume that you want a straight average of all of the contributions to a given pixel, or that range attenuation can handle all the averaging, in practice neither alone is sufficient for pleasing results.
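To make that last knob concrete, here's a minimal sketch of how the per-component application might look in a lighting shader. The function and variable names here (ApplyCreaseAO, AOScalars, and so on) are illustrative, not lifted from our actual pipeline:

float3 ApplyCreaseAO(float3 ambient, float3 diffuse, float3 specular,
                     float ao, float3 AOScalars)
{
    // Each lighting component darkens by the occlusion coefficient, scaled
    // by its own artist-tweakable factor (the 3 light+AO component scalars).
    float3 lit;
    lit  = ambient  * saturate(1.0f - ao * AOScalars.x);
    lit += diffuse  * saturate(1.0f - ao * AOScalars.y);
    lit += specular * saturate(1.0f - ao * AOScalars.z);
    return lit;
}

The point is simply that each component gets its own artist-controlled sensitivity to the occlusion term, rather than one global darkening.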

To get you on your way, here's the pixel shader code I use to produce the ambient occlusion map, rendered once per frame. GBuffer1 contains per-pixel positions in its XYZ, and GBuffer2's XYZ contains the per-pixel normals:

//------------------------------------------------------------------------------
/**
    Shaders involved in crease shading
    Uses:
        DiffMap1 (GBuffer 1)
        DiffMap2 (GBuffer 2)
        Color0   (AOCreaseValues - Range, Bias, Averager, Unused)
*/
float4 psCreaseShadeStippled11x11(const vsOutput psIn) : COLOR
{
    // Half-texel offsets so we sample texel centers
    float2 UVLeft;
    UVLeft.x = 1.0f / DisplayResolution.x;
    UVLeft.y = 0.0f;
    float2 UVDown;
    UVDown.x = 0.0f;
    UVDown.y = 1.0f / DisplayResolution.y;
    float2 centeredUV = psIn.uv0 + (UVLeft / 2.0f) + (UVDown / 2.0f);

    // Center pixel's position and normal from the G-buffers
    float3 centerPos    = tex2D(GBuffer1Sampler, centeredUV).xyz;
    float3 centerNormal = tex2D(GBuffer2Sampler, centeredUV).xyz;

    // Refresh the stippled 11x11 neighbor-offset table for this resolution
    UpdateSamplesStippled11x11(DisplayResolution.x, DisplayResolution.y,
                               SampleOffsetsWeightsStippled11x11);

    float4 totalGI = 0.0f;
    int i;
    for (i = 0; i < 24; i++)
    {
        float2 sampleUV  = centeredUV + SampleOffsetsWeightsStippled11x11[i].xy;
        float3 samplePos = tex2D(GBuffer1Sampler, sampleUV).xyz;

        // Vector from the center pixel out to this neighbor
        float3 toCenter = samplePos - centerPos;
        float distance  = length(toCenter);
        toCenter /= distance;

        // How strongly the center pixel faces the neighbor, biased and scaled
        float centerContrib = saturate((dot(toCenter, centerNormal) - AOMinimumCrease) * Color0.y);

        // Fall off with the distance between the two surface points
        float rangeAttenuation = 1.0f - saturate(distance / Color0.x);

        totalGI.r += centerContrib * rangeAttenuation;
    }

    // Average the neighbor contributions with the artist-tweakable divisor
    totalGI.r /= Color0.z;
    return (totalGI);
}
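UpdateSamplesStippled11x11 fills the neighbor-offset table. I won't reproduce the exact stipple pattern here, but as a rough stand-in, assume something like taking every 5th texel of the 11x11 neighborhood in scanline order (which happens to yield exactly 24 samples):

// Hypothetical stand-in for the offset-table update. The actual stipple
// pattern isn't shown on this page; this version takes every 5th texel of
// the 11x11 block (121 texels), producing 24 UV-space offsets.
void UpdateSamplesStippled11x11(float width, float height,
                                inout float4 offsets[24])
{
    float2 texel = float2(1.0f / width, 1.0f / height);
    for (int i = 0; i < 24; i++)
    {
        int cell = i * 5;                     // every 5th texel of 121
        int x = (cell % 11) - 5;              // column in -5..5
        int y = (cell / 11) - 5;              // row in -5..5
        offsets[i].xy = float2(x, y) * texel; // offset in UV space
        offsets[i].zw = 0.0f;                 // per-sample weights, unused here
    }
}

Any reasonably even stipple over the neighborhood should behave similarly; the pattern mostly trades banding for noise, which the later blur cleans up.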

The implemented algorithm does have a few differences, and we hit a few snags:

Click the above examples to see what the pre-blur AO buffers look like for each of the example shots (roughly - the camera is in a slightly different position).
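We blur the AO buffer before applying it (hence "pre-blur" above). As a sketch, the horizontal half of a separable pass might look like the following, with AOBufferSampler and the 7-tap box kernel as stand-ins rather than the shipping filter:

float4 psBlurAOHorizontal(const vsOutput psIn) : COLOR
{
    // Illustrative horizontal half of a separable box blur over the AO buffer.
    // AOBufferSampler and the kernel width are assumptions, not the real filter.
    float texelWidth = 1.0f / DisplayResolution.x;
    float sum = 0.0f;
    for (int i = -3; i <= 3; i++)
    {
        float2 sampleUV = psIn.uv0 + float2(i * texelWidth, 0.0f);
        sum += tex2D(AOBufferSampler, sampleUV).r;
    }
    float4 result = 0.0f;
    result.r = sum / 7.0f; // 7-tap box average
    return result;
}

Run the matching vertical pass afterward and the stipple noise smooths out while the crease shadows stay put.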

... and that's about it. With a bit of tweaking, you should be able to get results equivalent to the above screenshots. The shots are all from our product prototype, in which AO is key to the art style. There is a frame rate hit, and I seriously doubt you could usefully include this on anything below shader model 3.0 hardware, but our frame rates are fine and it operates beautifully in the real world. Our prototype with all the bells and whistles (AO included) enabled runs extremely well on an 8800GTX, and reasonably well on a 7600GT.

As far as I know, this is a new approach, and my primary purpose for putting it into the public sphere is to protect it against possible copyright ugliness - but if it's old news, or if someone owns the copyright, by all means let me know. (Crysis developers, if you happen to read this, I'd be curious to know how similar our approaches are. We're both working in screen space, but your approach in general seems to be more accurate/expensive and to use a depth buffer rather than a normal buffer - or at least, that's my guess from the brief you released.)

(Work was primarily done in August of 2007, with further refinement through November 2007)
