OpenGL on OS/2 - Avoiding the Jaggies

Written by Perry Newhook

Introduction
OK, now that I have my system back up and running, we can get back to coding some more OpenGL examples. Last month, I took a break from code examples (because of a system failure), and reviewed some common questions that people have asked over the last few months. I hope you found last month useful; I will try to do a similar column in the future as I collect more questions. You can send questions of any OpenGL skill level to my EDM/2 email address at perry@edm2.com. Remember if you have a question on how to do something, odds are that other people will have the exact same question.

In this month's column, I will be continuing the December discussion of using the accumulation buffer, and show how to do scene antialiasing. First however I will show a simple way of doing antialiasing that in many cases is sufficient.

To demonstrate antialiasing, first we have to build a scene. To make things simpler, I have taken our standard code and added some simple objects such as a cube, a torus, and a cone, all intersecting. We will then modify this initial code to produce several different variations of anti-aliased images.

Download the initial code and compile it. What you should end up with is a scene such as this:



If you look at the closeup section, you will see jagged lines along polygon edges and where polygons intersect. Jagged edges are inevitable because each pixel has a finite width. Unless a line is perfectly horizontal or vertical, it will not always fall exactly on a pixel; the closest pixel is chosen instead, resulting in a jagged line. Antialiasing is a rendering technique that assigns each pixel a colour based on the fraction of its area that is covered. The result is an optical effect that makes the line look solid and smooth.
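The coverage idea can be sketched numerically. This is a hypothetical illustration (coverageBlend is not part of the sample code), assuming colour channels in the range 0.0 to 1.0:

```cpp
// Hypothetical illustration: the colour a smoothed pixel receives when
// a primitive covers only a fraction of its area (channels in 0.0-1.0).
float coverageBlend( float lineColour, float background, float coverage )
{
    // weight the line colour by the covered fraction of the pixel,
    // and the background by the uncovered remainder
    return coverage * lineColour + ( 1.0f - coverage ) * background;
}
```

A white line crossing a quarter of a black pixel thus produces a 25% grey pixel, which is what fools the eye into seeing a smooth edge.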

OpenGL has a built-in method that can smooth points, lines and polygons. To anti-alias points and lines, simply pass GL_POINT_SMOOTH or GL_LINE_SMOOTH to the glEnable function. Anti-aliasing polygons is slightly more involved, but not by much; this will be our first sample application.

This method of antialiasing has the advantage of being very fast, but is one of the more difficult methods to set up. Antialiasing polygons is a little more difficult than antialiasing points and lines because you have to worry about blending colours of overlapping edges. This method uses the alpha buffer as a way to store the fractional coverage values, so a final colour can be calculated.

The first step is to enable polygon smoothing by passing GL_POLYGON_SMOOTH to the glEnable function, just like we did above for points and lines. You can place this function at the end of the GLPaintHandler::initOpenGL. This causes pixels on edges of polygons to be assigned fractional alpha values based on their coverage. While this handles polygon edges, it does not properly handle overlapping pixels.

To blend overlapping pixels properly, we are going to enable pixel blending, whose behaviour is controlled by the function glBlendFunc. glBlendFunc controls pixel arithmetic and takes two parameters: a source factor and a destination factor. It blends the incoming (source) RGBA values with the values already in the frame buffer (the destination values). Blending is normally disabled, but can be enabled and disabled by passing GL_BLEND to glEnable and glDisable. There are eleven symbolic constants defined for the factors: GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR, GL_DST_COLOR, GL_ONE_MINUS_DST_COLOR, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA, and GL_SRC_ALPHA_SATURATE.

There are many possible uses of the blend function. Polygon antialiasing is performed with the blend function (GL_SRC_ALPHA_SATURATE, GL_ONE). Point and line antialiasing can be enabled with the blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). We will get into other effects that use blending functions, such as transparency, in upcoming articles. Specifying a blend function of (GL_SRC_ALPHA_SATURATE, GL_ONE) means that the final colour is the sum of the destination colour and the scaled source colour, with the source scale factor being the smaller of the incoming source alpha value and one minus the destination alpha value. Therefore, once a pixel has accumulated a large alpha value, successive incoming pixels have little effect, as one minus a large alpha value is very small.
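The arithmetic of the (GL_SRC_ALPHA_SATURATE, GL_ONE) pair can be sketched for a single colour channel. This is a hypothetical illustration, not part of the sample; blendSaturate and its parameters are invented for the sketch:

```cpp
#include <algorithm>

// Hypothetical sketch of the (GL_SRC_ALPHA_SATURATE, GL_ONE) blend for
// one colour channel. srcAlpha is the incoming fragment's coverage;
// dstAlpha is the coverage already accumulated at this pixel.
float blendSaturate( float srcColour, float srcAlpha,
                     float dstColour, float dstAlpha )
{
    // source factor: the smaller of the incoming alpha and the
    // remaining (still uncovered) fraction of the destination pixel
    float f = std::min( srcAlpha, 1.0f - dstAlpha );
    // destination factor is GL_ONE, so the old colour passes through
    return f * srcColour + dstColour;
}
```

Once the destination alpha saturates at 1.0, the source factor drops to zero and further fragments leave the pixel untouched.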

Since we are using the alpha value for blending, we have to remember to clear the alpha bits to zero when we specify our glClearColor. We are also going to enable face culling to speed up processing a bit. Since we are drawing solid objects, we know that all back-facing polygons must be inside the objects and therefore not visible to the viewer. We will therefore tell OpenGL to discard, or cull, those faces once it detects them. Depending upon the scene, this can speed up rendering immensely. The code to do this is very simple:

// specify that back facing polygons are to be culled
glCullFace( GL_BACK );
// enable culling
glEnable( GL_CULL_FACE );

The commands to enable polygon antialiasing (discussed above but listed here for convenience) are:

glClearColor( 0.0, 0.0, 0.0, 0.0 );
// enable polygon antialiasing
glBlendFunc( GL_SRC_ALPHA_SATURATE, GL_ONE );
glEnable( GL_BLEND );
glEnable( GL_POLYGON_SMOOTH );
glDisable( GL_DEPTH_TEST );

These commands can be placed inside a separate function that enables antialiasing, or if you wish to have antialiasing enabled permanently, you can place them at the end of the GLPaintHandler::initOpenGL function, as I am doing in this sample.

Also, since the depth buffer is disabled, change the glClear command in GLPaintHandler::paintWindow to:

// clear the buffer
glClear( GL_COLOR_BUFFER_BIT );

Now if you make these changes to the sample and run it, you should see the following result. (You can also [glcol9a.zip download the modified sample.] )



Now the image looks very smooth compared with the one before, but the objects now overlap instead of intersecting as intended! Why is this? If you were reading carefully, you noticed that we turned off the depth buffer as part of our antialiasing setup. As far as I can tell (if I am wrong or you have another idea, email me and let me know), the reason is the following: with the depth buffer enabled, the depth test compares each rendered pixel to the pixel already in the buffer. If the new pixel is closer, it replaces the existing one; if it is farther, it is thrown away. Blending, however, merges the current pixel with the incoming one, and according to the OpenGL state machine model the blending function is performed after the depth buffer test. Since blending requires two pixels and the depth test leaves only one, the blending operation fails with the depth buffer active. You can see the effect by enabling the depth buffer with glEnable( GL_DEPTH_TEST ) and changing the glClear function back to:

glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

As you can see, everywhere the polygons overlap (i.e. at the edges), the rendering is incorrect.
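The explanation above can be illustrated with a hypothetical simulation of a single edge pixel that two abutting polygons each half cover at (nearly) the same depth. With the depth test off, the two coverage fragments blend up to full intensity; with the default GL_LESS test on, the second fragment is discarded before blending ever sees it and the pixel is left half lit. The function and values are invented for this sketch and are not part of the sample:

```cpp
#include <algorithm>

// Hypothetical simulation of one antialiased edge pixel covered half
// each by two abutting polygons at the same depth.
float edgePixelColour( bool depthTest )
{
    float colour = 0.0f;          // frame buffer colour at this pixel
    float alpha  = 0.0f;          // accumulated coverage (alpha)
    float depth  = 1.0f;          // cleared depth buffer value
    float fragColour[2]   = { 1.0f, 1.0f };    // two white fragments
    float fragCoverage[2] = { 0.5f, 0.5f };    // each covers half
    float fragDepth[2]    = { 0.40f, 0.40f };  // same distance

    for( int i = 0; i < 2; i++ )
    {
        if( depthTest )
        {
            if( !( fragDepth[i] < depth ) )    // default GL_LESS test
                continue;                      // discarded before blending
            depth = fragDepth[i];
        }
        // (GL_SRC_ALPHA_SATURATE, GL_ONE) style blend
        float f = std::min( fragCoverage[i], 1.0f - alpha );
        colour += f * fragColour[i];
        alpha  += f;
    }
    return colour;
}
```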

So while this method of antialiasing is very fast, it has the disadvantage of not being able to use the depth buffer. A way around this, for applications that require depth processing, is to order the polygons from nearest to farthest. This sorting can in some instances be tedious, and in the case of a moving viewpoint, very difficult. This is why this method is generally the most difficult to set up. However, if you are able to sort the polygons, or the viewing position does not change much, then this method is the best choice for antialiasing.
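The nearest-to-farthest ordering can be sketched as an ordinary sort on a representative depth per polygon. This is a hypothetical sketch (Poly and sortFrontToBack are not from the sample), assuming a camera looking down the negative z axis so that a larger z means nearer:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical front-to-back ordering: sort polygons by a
// representative depth (here a precomputed centroid z), with the
// viewer looking down the negative z axis.
struct Poly { float centroidZ; };

void sortFrontToBack( std::vector<Poly> &polys )
{
    std::sort( polys.begin(), polys.end(),
               []( const Poly &a, const Poly &b )
               { return a.centroidZ > b.centroidZ; } );  // larger z = nearer
}
```

A centroid is only an approximation; interpenetrating polygons like the ones in our scene have no single correct order, which is part of why this approach gets tedious.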

For complex images that cannot be pre-sorted in depth, but still need to be anti-aliased, an option would be to use the accumulation buffer. This method uses the accumulation buffer to add multiple renderings of the scene, each slightly offset from the last by less than a single screen pixel width. To do this we will use the same techniques we used before when we were using the accumulation buffer to produce motion blur.

To start with, you can use the same [glcol9i.zip initial source] that you used before. The first change we make is to add the following lines to GLPaintHandler::initOpenGL:

glDepthFunc( GL_LEQUAL );
glClearAccum( 0.0, 0.0, 0.0, 0.0 );

This simply changes the depth function from the default of GL_LESS (less than is accepted) to GL_LEQUAL (less than or equal is accepted), and sets the clear colour for the accumulation buffer.
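The difference between the two comparisons can be sketched side by side: a fragment that lands on exactly the depth already stored (as can happen along the intersection lines of our objects, or when identical geometry is redrawn) fails GL_LESS but passes GL_LEQUAL. The helper names are invented for this sketch:

```cpp
// Hypothetical sketch of the two depth comparisons. With GL_LESS a
// fragment at exactly the stored depth is rejected; GL_LEQUAL accepts
// it, letting the most recently drawn coincident fragment through.
bool passesLess( float fragDepth, float storedDepth )
{
    return fragDepth < storedDepth;    // default glDepthFunc( GL_LESS )
}

bool passesLEqual( float fragDepth, float storedDepth )
{
    return fragDepth <= storedDepth;   // glDepthFunc( GL_LEQUAL )
}
```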

The next step is to modify the viewing position so that the scene shifts in the screen x and y directions by less than one pixel at a time. For an orthographic projection, you could simply jitter the scene with glTranslate*, keeping in mind that glTranslate* operates in world coordinates while you want the scene jittered by less than one screen pixel. Jittering a perspective projection is a little more involved, but fortunately the OpenGL Architecture Review Board lists some useful routines in their OpenGL Programming Guide that make jittering the viewing volume easier.
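For the orthographic case, the pixel-to-world conversion just described can be sketched as follows; pixelsToWorld is a hypothetical helper, assuming you know the glOrtho horizontal extent and the viewport width:

```cpp
// Hypothetical helper: convert a jitter expressed in screen pixels to
// world units for an orthographic projection, so glTranslate* can
// shift the scene by less than one pixel. worldWidth is the glOrtho
// right minus left extent; viewportWidth is the window width in pixels.
float pixelsToWorld( float jitterPixels, float worldWidth, int viewportWidth )
{
    // one screen pixel spans worldWidth / viewportWidth world units
    return jitterPixels * worldWidth / (float)viewportWidth;
}
```

For example, with a 200-unit-wide orthographic volume in a 400-pixel-wide window, a half-pixel jitter is a quarter of a world unit.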

These functions, accFrustum and accPerspective, replace glFrustum and gluPerspective respectively. They take the same parameters as their gl counterparts, plus extra parameters that control the jitter of the viewing volume. The two functions are listed here:

void accFrustum( GLdouble left, GLdouble right, GLdouble bottom,
                 GLdouble top, GLdouble near, GLdouble far, GLdouble pixdx,
                 GLdouble pixdy, GLdouble eyedx, GLdouble eyedy, GLdouble focus )
{
  // make a frustum for use with the accumulation buffer
  GLdouble xwsize, ywsize;
  GLdouble dx, dy;
  GLint viewport[4];

  glGetIntegerv( GL_VIEWPORT, viewport );

  xwsize = right - left;
  ywsize = top - bottom;
  dx = -( pixdx * xwsize / (GLdouble)viewport[2] + eyedx * near / focus );
  dy = -( pixdy * ywsize / (GLdouble)viewport[3] + eyedy * near / focus );

  glMatrixMode( GL_PROJECTION );
  glLoadIdentity();
  glFrustum( left + dx, right + dx, bottom + dy, top + dy, near, far );
  glMatrixMode( GL_MODELVIEW );
  glLoadIdentity();
  glTranslatef( -eyedx, -eyedy, 0.0 );
}

void accPerspective( GLdouble fovy, GLdouble aspect,
                     GLdouble near, GLdouble far, GLdouble pixdx, GLdouble pixdy,
                     GLdouble eyedx, GLdouble eyedy, GLdouble focus )
{
  GLdouble fov2, left, right, bottom, top;

  fov2 = ( ( fovy * PI ) / 180.0 ) / 2.0;

  top = near / ( _fcos( fov2 ) / _fsin( fov2 ) );
  bottom = -top;
  right = top * aspect;
  left = -right;

  accFrustum( left, right, bottom, top, near, far,
              pixdx, pixdy, eyedx, eyedy, focus );
}

accPerspective is the function that is called instead of gluPerspective. The first four parameters are the same as gluPerspective's: field of view, aspect ratio, near clipping plane, and far clipping plane. The remaining five parameters are specific to accPerspective and support two separate effects.
To jitter the viewing frustum for scene antialiasing, pass the x and y jitter values as the fifth and sixth parameters, pass zero for the seventh and eighth, and pass a nonzero value for the ninth (accFrustum divides by focus, so it must not be zero). The last three parameters are used for depth-of-field effects, where the scene is jittered about a focal point; the resulting effect is that the scene is sharp at the focal point and increasingly blurry the farther you get from it. For this mode, the seventh and eighth parameters give the eye jitter amount, and the ninth gives the distance at which the scene is in focus (and hence sharp).
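As a sanity check on the listing above, the near / (_fcos(fov2) / _fsin(fov2)) expression is simply near times the tangent of half the field of view. A hypothetical standalone version using the standard library (frustumTop is not part of the sample):

```cpp
#include <cmath>

// Hypothetical standalone version of the frustum-top computation in
// accPerspective: near / (cos(fov2) / sin(fov2)) equals near * tan(fov2).
double frustumTop( double fovyDegrees, double nearDist )
{
    const double PI = 3.14159265358979323846;
    double fov2 = ( ( fovyDegrees * PI ) / 180.0 ) / 2.0;
    return nearDist / ( std::cos( fov2 ) / std::sin( fov2 ) );
}
```

With a 90-degree field of view and a near plane at 1.0, the top edge of the frustum comes out at 1.0, as you would expect from tan(45).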

Because we have to call accPerspective once for each pass when drawing the scene, we have to take the gluPerspective call out of the GLResizeHandler::windowResize function. Our windowResize function is now reduced to the following:

// window resize handler
Boolean GLResizeHandler::windowResize( IResizeEvent &event )
{
  // tell the paint handler the new size
  window->glPaintHandler.setWindowSize( event.newSize );
  return false;
}

Our paint routine now becomes a loop, because we have to draw our scene into the accumulation buffer several times. Before we start the loop we have to clear the accumulation buffer:

// clear the buffer
glClear( GL_ACCUM_BUFFER_BIT );

The other part of the current paint routine that we will leave out of the loop is the positioning of the light, as it never moves. Everything else from our original paint routine gets placed inside the loop, which takes the following form:

for( int jitter = 0; jitter < ACCUM_NUMBER; jitter++ )
{
  // clear the buffers
  glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

  // set the perspective viewpoint
  accPerspective( ... );

  // draw the scene
  ...

  // add into the accumulation buffer
  glAccum( GL_ACCUM, 1.0 / ACCUM_NUMBER );
}

// return the scene from the accumulation buffer to the colour buffer
glAccum( GL_RETURN, 1.0 );

Our accPerspective call needs the aspect ratio of the window. When we used gluPerspective inside the setWindowSize function, we could determine the aspect ratio from the passed window size parameter. In our paint routine we don't know the current size of the window, so we have to ask OpenGL for it. This is done using the glGet* functions, passing in the parameter that requests the size of the viewport:

// get the size of the window
GLint viewport[4];
glGetIntegerv( GL_VIEWPORT, viewport );

After this call, viewport[2] and viewport[3] hold the viewport's width and height respectively.
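What the loop computes can be sketched in plain arithmetic: adding each frame scaled by 1.0 / ACCUM_NUMBER leaves the average of the jittered frames in the accumulation buffer, which GL_RETURN then copies back. accumulateAverage is a hypothetical stand-in for the buffer, operating on one channel of one pixel:

```cpp
// Hypothetical stand-in for the accumulation buffer: each pass adds a
// frame scaled by 1/count (like glAccum( GL_ACCUM, 1.0 / ACCUM_NUMBER )),
// so after the loop the buffer holds the average of the jittered frames.
float accumulateAverage( const float *frames, int count )
{
    float accum = 0.0f;                        // glClear( GL_ACCUM_BUFFER_BIT )
    for( int i = 0; i < count; i++ )
        accum += frames[i] * ( 1.0f / count ); // one pass per jitter value
    return accum;                              // read back via GL_RETURN
}
```

An edge pixel lit in some passes and dark in others therefore ends up at an intermediate intensity, which is exactly the coverage-weighted colour we are after.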

The last items left to be determined are what jitter values to use, and how many times to jitter the scene. It may be intuitive to think that the best jitter values are either random, or an equally spaced grid spread across the pixel. An equally spaced grid may not necessarily provide the best results, as in some situations it can produce an undesired Moiré pattern, while a random pattern would not produce consistent results. The best solution sometimes depends upon the scene being rendered, and one solution will probably not suffice for all occasions. You might want to experiment with uniform distributions, or a normalized distribution clustered toward the centre of the pixel. You may even want to try jitter values larger than one pixel. For those interested in a further discussion of the issue, see "The Accumulation Buffer: Hardware Support for High Quality Rendering" by Paul Haeberli and Kurt Akeley, SIGGRAPH 1990 Proceedings, pp. 309-318.
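One distribution worth trying is the centres of the sub-cells of an n-by-n grid; for n = 3 this produces the 1/6, 3/6 and 5/6 values that appear in the nine-sample table below. gridJitter is a hypothetical generator, not part of the sample:

```cpp
// Hypothetical generator for jitter points at the centres of an n-by-n
// grid of pixel sub-cells; xy must have room for n*n*2 floats, laid
// out as x,y pairs in the same style as the jitter tables.
void gridJitter( int n, float *xy )
{
    for( int i = 0; i < n; i++ )
        for( int j = 0; j < n; j++ )
        {
            // centre of sub-cell (i, j): (2k+1) / 2n along each axis
            xy[ ( i * n + j ) * 2 + 0 ] = ( 2 * i + 1 ) / ( 2.0f * n );
            xy[ ( i * n + j ) * 2 + 1 ] = ( 2 * j + 1 ) / ( 2.0f * n );
        }
}
```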

Here we have chosen two different jitter arrays (using only one at a time, of course), one requiring five iterations and the other requiring nine. These were taken from Table 10-5 in the OpenGL Programming Guide, but you are free to come up with any jitter values that suit your needs. The two jitter arrays are:

GLfloat j5[5][2] = { 0.5, 0.5,  0.3, 0.1,  0.7, 0.9,  0.9, 0.3,  0.1, 0.7 };

GLfloat j8[9][2] = { 0.5, 0.5,  0.166666, 0.944444,  0.5, 0.166666,
                     0.5, 0.833333,  0.166666, 0.277777,  0.833333, 0.388888,
                     0.166666, 0.611111,  0.833333, 0.722222,  0.833333, 0.055555 };

Each array holds five or nine pairs giving the x and y jitter amount for each iteration. The decision to use more or fewer iterations depends entirely upon how smooth you want the final rendering to be. Of course, with this method you trade speed for quality: each time you double the number of jitter values, you double the rendering time of the frame.

With the number of jitter iterations decided upon, our accPerspective call takes the following form:

accPerspective( FOVY, (GLdouble)viewport[2] / (GLdouble)viewport[3],
                MINVIEWDISTANCE, MAXVIEWDISTANCE, j8[jitter][0], j8[jitter][1],
                0.0, 0.0, 1.0 );

Running this (you can download the completed source [glcol9b.zip here]), you should now get an anti-aliased image like this one:



Things to Try
Since accPerspective is also capable of depth-of-field jittering, as mentioned above, you can enable this functionality with a minor change. The seventh and eighth parameters take a scaled jitter value. The scaling value is determined entirely by experimentation and depends upon the variables you have used in drawing your scene; here I have used a scale factor of 0.33. The ninth parameter is the focal distance, which for this scene is 130.0. The command then becomes:

accPerspective( FOVY, (GLdouble)viewport[2] / (GLdouble)viewport[3],
                MINVIEWDISTANCE, MAXVIEWDISTANCE, 0.0, 0.0,
                0.33 * j8[jitter][0], 0.33 * j8[jitter][1], 130.0 );

To see the depth-of-field effect more clearly, you may wish to add some objects in front of and behind the existing ones.
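Why the focal plane stays sharp can be sketched in two dimensions: jitter the eye sideways by e while re-aiming the view so the focal plane stays centred, and points at the focal distance project to the same screen position in every pass, while points off that plane shift and therefore average into blur. apparentX is a hypothetical pinhole-model illustration, not part of the sample:

```cpp
#include <cmath>

// Hypothetical 2D pinhole sketch: apparent screen position (on the
// near plane, relative to the screen centre) of a point at lateral
// position x and depth d, seen from an eye shifted sideways by e and
// re-aimed so that depth 'focus' stays centred.
float apparentX( float x, float d, float e, float nearDist, float focus )
{
    // (x - e) / d projects the point from the jittered eye;
    // e / focus is the compensating re-aim toward the focal plane
    return nearDist * ( ( x - e ) / d + e / focus );
}
```

At d equal to focus the two terms cancel the eye offset exactly, which is why the jittered passes agree there and disagree everywhere else.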

Conclusion
Well, that's it for this month. If you have any questions or ideas for future columns, please contact me at my EDM/2 address. Next month I will show how to get fast texture-like effects without using texture mapping.

See you next month!