Well, I handed in my project on Tuesday, dissertation, source code and all. I can't say that I'm totally satisfied with my application, but I am quite pleased with the dissertation. All that's left is the showcase tomorrow and everything is totally finished.
ttfn
The Imp With No Name
Jessica Milne - CGT Honours Project - 2010/11
Thursday, 19 May 2011
Monday, 25 April 2011
Progress Report ~7
Just thought I'd update with my dissertation progress. I'm a little bit behind schedule: I had hoped to finish the first draft before Easter, but as it stands... that didn't happen. I am over halfway through writing it though, and progress is relatively good. I have finished the introduction, the literature review and a few sections of my methodology section, such as the detailing of the blurs used and the input required for the SSAO shader.
I am currently working on the Future Work section, and hope to get on to the rest of the Methodology section tomorrow. With any luck I will have finished the first draft by next Monday.
Wednesday, 6 April 2011
Dissertation: Lit Review: Screen Space Radiosity
2.4 Screen Space Radiosity
Chapter 6.4 of ShaderX7, Fast Fake Global Illumination, describes several techniques used to simulate the effects of indirect lighting. Included in these techniques is the author’s implementation of Screen Space Ambient Occlusion and its logical expansion into a fake radiosity effect, i.e. Screen Space Radiosity (SSRad). SSRad simulates a local radiance transfer (a fake radiosity), by using the colour information for the scene in the SSAO calculations. This requires the scene’s colours to be rendered to a texture, in much the same way as the depth and normals. The SSAO term will require three channels instead of one, i.e. RGB, but the actual occlusion calculation remains the same. The occlusion factor is combined with the occluder colour values, resulting in a local radiosity effect. [Briney et al, 2010]
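The colour-weighting step described above can be sketched numerically. This is only an illustration of the idea, not the ShaderX7 code; the function name and the simple averaging are my own assumptions:

```python
# Sketch: extending a scalar AO term into a three-channel (RGB) radiosity
# term. Each occluding sample's occlusion contribution is multiplied by
# that occluder's colour, so nearby surfaces "bleed" colour into the
# shaded point. All names here are illustrative.

def ssrad_term(occlusions, occluder_colours):
    """occlusions: list of per-sample occlusion contributions (0..1).
    occluder_colours: matching list of (r, g, b) tuples.
    Returns the averaged RGB occlusion term."""
    n = len(occlusions)
    rgb = [0.0, 0.0, 0.0]
    for occ, (r, g, b) in zip(occlusions, occluder_colours):
        rgb[0] += occ * r
        rgb[1] += occ * g
        rgb[2] += occ * b
    return tuple(c / n for c in rgb)
```

As in the scalar case, the occlusion calculation itself is unchanged; only the accumulator widens from one channel to three.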
A similar approach was taken by a member of the GameDev forums, who posted a technique that they dubbed SSGI (Screen Space Global Illumination). This technique also uses a colour buffer, and multiplies the RGB values of the occluder with the AO term. [ArKano22, 2009] However, this implementation also stores the single-channel AO term and, when outputting the final pixel value, adds the AO term and the colour-bleeding term together. This allows the grey shadows of ambient occlusion and the local radiosity effect to be combined.
Dissertation: Introduction
1. Introduction
Realism is a big issue in the games industry today, graphical realism in particular. Video games have evolved greatly over the past thirty or so years, from pixelated sprites and electronic blips to fully 3D interactive worlds. And as games get more and more realistic, the graphical expectations of the consumers are getting higher and higher. At the same time however, these gamers also want better performance – high frame-rates and fast loading times. So there is a lot of pressure on the industry to develop graphical techniques that look good without sacrificing performance.
When it comes to adding realism to 3D games, graphical elements such as lighting and shadows are at the forefront. Without the definition that lighting, whether it be point, spot, or directional, provides, 3D objects look flat and unrealistic. Shadows are also important in that they provide visual clues to the spatial relationships between objects in the 3D scene. [Drettakis and Stamminger, 2002] The simulation of these effects – the effects of direct lighting – is standard practice in today’s computer games, using the Lambertian and Blinn models. [Filion, 2010] With that in mind, the next step along the path to realism is the simulation of indirect lighting, i.e. global illumination and ambient occlusion.
“In reality, a significant amount of light in a scene comes from light reflecting from surfaces.” [Akenine-Moller and Haines, 2002, p277] Without simulating this reflected light, parts of the scene not directly affected by light sources would be left in complete darkness, which is not the case in real life. [Boonekamp, 2010] Global illumination (GI) calculations differ from direct illumination in that they use data gathered from surrounding objects to determine the lighting at a particular point, as opposed to needing only the surface data of that point. GI techniques simulate the ambient effects caused by light bouncing off of diffuse surfaces and around a scene, adding extra depth and definition to said scene. Unfortunately, such techniques, e.g. ray tracing and radiosity, are computationally expensive and therefore not viable for use in real-time applications such as computer games. This means that other ways of producing similar effects must be found that do not require so much processing power. Ambient occlusion is one such technique.
Ambient Occlusion (AO) is a subset of global illumination, and the effect it simulates falls out automatically of techniques such as ray tracing. It is only an approximation of actual GI effects but can be implemented in real time in a far more cost-effective manner than other, more accurate techniques. It simulates the soft shadows produced by objects occluding each other, i.e. in creases and corners, or when two objects are very close together. It increases spatial awareness, as it is easier to tell how close two objects are from how much they occlude each other.
With regards to computer games, Ambient Occlusion is a relatively new technique, with the original method actually being developed for use in CGI films. It was developed in 2002 by ILM (Industrial Light and Magic) and used ray tracing to calculate occlusion factors, i.e. “For every surface point, rays are cast in a hemisphere around the surface normal. The final occlusion amount is dependent on the number of rays that hit other surfaces or objects in the scene.” [Landis, 2002] This method was not used for real time applications but with non real time renderers for Pixar animated movies.
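The ray-cast formulation quoted above can be sketched as a small Monte Carlo estimate. The scene query here is a stand-in of my own, not ILM's renderer; the sampling scheme is an illustrative assumption:

```python
import random, math

# Sketch of ray-cast ambient occlusion: cast random rays over the
# hemisphere around the surface normal and count how many hit geometry.
# `ray_hits_scene` stands in for a real ray/scene intersection test.

def sample_hemisphere(normal):
    # Rejection-sample a unit direction on the hemisphere about `normal`.
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        if 0 < length <= 1:
            d = [c / length for c in d]
            if sum(a * b for a, b in zip(d, normal)) > 0:
                return d

def ambient_occlusion(point, normal, ray_hits_scene, n_rays=64):
    hits = sum(ray_hits_scene(point, sample_hemisphere(normal))
               for _ in range(n_rays))
    return 1.0 - hits / n_rays  # 1 = fully open, 0 = fully occluded

# Example: a surface with nothing above it is completely unoccluded.
open_sky = lambda p, d: False
print(ambient_occlusion((0, 0, 0), (0, 0, 1), open_sky))  # 1.0
```

The final occlusion amount depends only on the fraction of rays that hit other surfaces, which is exactly why the technique was affordable for offline film rendering but not, at the time, for games.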
In 2007, the games developer Crytek devised a technique that could be used successfully in computer games. Screen Space Ambient Occlusion (SSAO) was first used in the computer game Crysis, and has since appeared in many other games. The technique is performed in screen (image) space and uses the depth buffer as input. It is implemented as a pixel shader and is executed solely using the GPU. SSAO is well suited for use in games and real-time applications because, since it is performed in screen space, its performance is not dependent on scene complexity, making it faster than other methods in complex, detailed scenes. Of course, since it is a screen space calculation, it is not as accurate as geometry-dependent ambient occlusion, and only approximates the effect. However, when it comes to computer games, physical correctness is not essential, rather it is the illusion of reality that is important. [Briney et al, 2010]
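The screen-space idea can be sketched as a per-pixel loop over depth-buffer taps. The offsets, falloff, and buffer layout below are my own illustrative assumptions, not Crytek's actual kernel:

```python
# Sketch of screen-space AO: sample the depth buffer at offsets around a
# pixel and accumulate occlusion whenever a neighbour is closer to the
# camera than the shaded pixel. Offsets and falloff are illustrative.

def ssao(depth, x, y, offsets, falloff=1.0):
    """depth: 2D list of linear depths; offsets: list of (dx, dy) taps.
    Returns an ambient term in [0, 1] (1 = unoccluded)."""
    centre = depth[y][x]
    occlusion = 0.0
    for dx, dy in offsets:
        sx, sy = x + dx, y + dy
        if 0 <= sy < len(depth) and 0 <= sx < len(depth[0]):
            diff = centre - depth[sy][sx]  # positive => sample is closer
            if diff > 0:
                occlusion += min(diff / falloff, 1.0)
    return 1.0 - occlusion / len(offsets)
```

Because the cost is a fixed number of texture lookups per screen pixel, it is independent of how much geometry the scene contains, which is the property that makes the method viable in real time.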
It wasn’t until 2009 that Crytek released a full description of their algorithm, and as such there are a number of different interpretations of the technique in use. The games developer Blizzard used their version of SSAO in the game StarCraft II (2010), and Nvidia also have their own version of the technique, called Image Space Horizon Based Ambient Occlusion (HBAO). Some implementations focus on improving performance, others on quality, and some attempt to combine the two. Thus, it is clear that Screen Space Ambient Occlusion is a graphical method with room for multiple interpretations and expansion. With that in mind, the focus of this project has been to answer the following research question:
How far can real-time Screen-Space Ambient Occlusion methods be improved/expanded to better approximate Global Illumination while still remaining cost effective enough for games?
There are two facets to this question: first, an investigation of methods that will improve the graphical quality of the SSAO effect and therefore better emulate indirect illumination; and second, an investigation of ways in which the performance cost may be reduced, thus keeping the derived method viable for computer games.
Progress Report ~6
Application: By the end of this week I am aiming to have the practical element of the project pretty much finished. I cannot think of many more ways to improve my current SSAO/SSRad, without radically changing a lot of elements, so, though I will admit that my application is nowhere near perfect, I have started to tie up the loose ends and clean up my code. Perfectionist that I am, it's hard for me to admit that there isn't really such a thing as perfection anyway. After the application is tidied up, I will start taking frame time and frame rate readings for my results section.
Dissertation: I'm happy to report that I have made some progress with the dissertation. I have finished the introduction, and written one section of my literature review (albeit the shortest section). Tomorrow I shall hopefully start on the SSDO lit review and on the Methodology section.
Here are some screenshots of my SSAO/SSRad in action, and the buffers used in each step:
Depth/Normals buffer (full screen size):
Colour buffer (1/4 screen size):
SSAO buffer (1/2 screen size):
Blurred SSAO buffer (1/2 screen size):
Low cost smart blur:
Simple Gaussian blur:
Rendered scene without SSAO (just textured):
Rendered scene with SSAO:
Sunday, 27 March 2011
Progress Report ~5
- Almost finished writing the dissertation introduction.
- I have managed to implement an edge-detecting blur. It is not a completely accurate implementation, maths-wise, but it is low cost.
- Have started working on the global illumination aspect - I have looked at screen space radiosity techniques and implemented a simple local radiosity effect in screen space. This is essentially using the same SSAO calculation but also taking into account scene colours (which are rendered into a half-screen-size texture).
- Next I plan to go over the SSDO paper in detail, both for my lit review and to develop more ideas for the GI aspect.
Friday, 18 March 2011
Progress Report ~4
Met with Matt on Wednesday to update him on my progress, and discuss what I plan to do next.
I have optimised the SSAO performance by using downscaled SSAO and blur textures. Both textures are half the screen size, and the final term is gained by upsampling the final blur texture using bilinear interpolation. This resulted in a considerable performance gain - the frame rate (fps) is almost doubled, but the quality of the SSAO has suffered somewhat. However, I consider this to be an acceptable tradeoff, considering how much lower the frame time (ms) is now.
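For reference, the bilinear upsampling step works like this. On the GPU this comes for free from a linear texture sampler; this CPU sketch of a fixed 2x upsample is purely illustrative:

```python
# Sketch of bilinearly upsampling a half-resolution AO buffer back to
# full resolution. Each output pixel centre is mapped back into low-res
# coordinates and the four surrounding texels are blended.

def bilinear_upsample_2x(low):
    """low: 2D list (h x w). Returns a (2h x 2w) interpolated grid."""
    h, w = len(low), len(low[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # Map the output pixel centre into low-res space, clamped at edges.
            fy = min(max((y + 0.5) / 2 - 0.5, 0), h - 1)
            fx = min(max((x + 0.5) / 2 - 0.5, 0), w - 1)
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            ty, tx = fy - y0, fx - x0
            top = low[y0][x0] * (1 - tx) + low[y0][x1] * tx
            bot = low[y1][x0] * (1 - tx) + low[y1][x1] * tx
            out[y][x] = top * (1 - ty) + bot * ty
    return out
```

The smooth interpolation is what hides most of the resolution loss, though sharp AO edges can still soften, which is the quality tradeoff mentioned above.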
There are a few main areas that can be optimised to increase performance:
- Texture Lookups: This is probably the most performance-hungry aspect of the SSAO shader. Downsampling the SSAO and blur textures reduces the number of lookups and increases performance.
- Shader Organisation: Combining shader operations and reducing the amount of code can help to optimise the shaders. I will probably leave this optimisation till the end, as it will reduce the readability of the code.
- No. of Samples: Reducing the number of samples does result in faster performance; however, the quality dip is not acceptable. Currently the sample count stands at 16, which provides a reasonable balance between quality and speed.
- Blurring: The type and range of blur used will also be a factor in the performance of the program. Right now, a very simple Gaussian blur has been implemented which uses two passes - vertical and horizontal. Other types of blur such as bilateral (edge-detecting) have yet to be tested.
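The two-pass Gaussian mentioned in the last point can be sketched as below. The kernel radius and sigma are illustrative choices, not the values used in my shader:

```python
import math

# Sketch of a separable (two-pass) Gaussian blur: the same 1D kernel is
# applied horizontally, then vertically over the intermediate result,
# costing 2r+1 taps per pass instead of (2r+1)^2 for a full 2D kernel.

def gaussian_kernel(radius, sigma):
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalised to sum to 1

def blur_1d(row, kernel):
    r = len(kernel) // 2
    out = []
    for x in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            sx = min(max(x + k - r, 0), len(row) - 1)  # clamp at edges
            acc += row[sx] * w
        out.append(acc)
    return out

def separable_blur(img, kernel):
    # Horizontal pass, then vertical pass on the transposed result.
    horiz = [blur_1d(row, kernel) for row in img]
    vert = [blur_1d(list(col), kernel) for col in zip(*horiz)]
    return [list(row) for row in zip(*vert)]
```

Splitting the blur into two 1D passes is what keeps the texture-lookup count manageable as the blur radius grows.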
In regards to the blur pass, I am looking into implementing a smart blur which respects the edges - using normals/depth as a weight. As with the bilateral upsampling, this may impact the performance, but, depending on the quality increase, may be worthwhile.
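One way the edge-respecting weighting could work is a simple depth-difference rejection, sketched below. This is an assumption of mine about the general approach, not my actual shader; the threshold and the uniform weighting are illustrative:

```python
# Sketch of an edge-respecting ("smart") blur: samples whose depth differs
# from the centre pixel by more than a threshold are treated as belonging
# to a different surface and skipped, so AO does not bleed across edges.

def smart_blur_1d(values, depths, radius, depth_eps=0.1):
    out = []
    for x in range(len(values)):
        acc, weight = 0.0, 0.0
        for dx in range(-radius, radius + 1):
            sx = min(max(x + dx, 0), len(values) - 1)
            if abs(depths[sx] - depths[x]) < depth_eps:  # same surface?
                acc += values[sx]
                weight += 1.0
        out.append(acc / weight)  # weight >= 1: the centre always passes
    return out
```

Across a depth discontinuity the two sides no longer average together, so AO shadows stay crisp at silhouettes instead of haloing onto the background.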
I hope to get the majority of this done before next week, and then I will start on the global illumination aspect of my shader.
Also - in my previous post, I stated that I had changed the way I calculated the depthDifference from my original implementation: i.e. from OccluderSample.a - linearDepth to sample.z - OccluderSample.a, because the original calculation produced odd results. I have now realised that this is because I had the terms in the wrong order. It should have been: linearDepth - OccluderSample.a. This produces correct-looking results - which to my eyes seem slightly more defined than the sample.z - OccluderSample.a results. For now, I have implemented the ability to switch between the two within the program.
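To pin down the sign convention discussed above, here is a tiny sketch; the function names are mine, not the shader's:

```python
# Sketch of the depthDifference sign convention: with the difference
# computed as (shaded pixel's linear depth) - (occluder sample's depth),
# a positive value means the sampled neighbour is closer to the camera
# and is therefore a potential occluder.

def depth_difference(linear_depth, occluder_depth):
    return linear_depth - occluder_depth

def occludes(linear_depth, occluder_depth):
    # Only samples in front of the shaded point can occlude it.
    return depth_difference(linear_depth, occluder_depth) > 0
```

Swapping the operands flips the sign, which is why the original ordering darkened the wrong pixels.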