OpenGL on OS/2 - The Bumpy Road Ahead

Written by Perry Newhook

A quick note on this issue: Due to various issues I won't get into, this month's column was supposed to appear a few months ago. Hopefully everything will be back to normal in the months ahead. I have also updated the news section below because of recent developments.

Welcome back to another instalment of OpenGL coding. Last month I promised to show how to do fast texture mapping without actually using textures, so now I will demonstrate this magical effect. It uses the long-lost art of bump mapping, a cool effect with a strange name. However, once we get into the effect a little more, you will see that the name is well deserved.

News
First some news. Fixpack 6 is now out and I have heard from a few brave souls who wanted to be the first to try it (including myself). First reports show the fixpack has generally been well received: many have reported a slight increase in desktop responsiveness, and for many, the new GRADD drivers that accompany this fixpack are a vast improvement over the previous GRADD drivers. I have also heard from several people who say the GRADD drivers are even faster than the native drivers for their card (especially S3 users). I have not yet tried this myself but I'd be very interested in hearing from users who have made the switch. Remember, if you want to have any hope of using OpenGL acceleration in the future, you will eventually have to make the switch to GRADD. As for the rest of the fixpack, I have heard of problems with FP6 and the MWave drivers (an audio/modem/DSP driver found mainly on laptops), and I have had this problem myself, forcing me to reload a driver from FP5. After fixing that and one other minor problem, my system is back to its normal zippy self.

Also of note is that OpenGL version 1.2 was just approved by the Architecture Review Board. Hopefully it will start making its way down into release levels soon. One of the optional components of this release is related to this month's topic.

And now for the best news: IBM has finally released (as of mid-May) the hardware accelerated OpenGL device driver kit. This will allow 3D card vendors to add OpenGL acceleration to their GRADD-based video drivers. As of this writing, while the sample driver and development kit are available from IBM, no 3D card maker has publicly declared that they have started driver development. This is where you, the consumer, have to come in: no card maker will develop drivers for OS/2 unless there is a demand for those drivers. If you want accelerated drivers for your 3D card, you have to make the effort to call your card manufacturer and ask for them. The best bet among vendors is probably ELSA, who has a long history of supporting OS/2, and possibly 3dfx, who as a chip manufacturer seem open to supporting multiple platforms, and possibly even ATI Technologies. There are others but these are the ones that come to mind. [Matrox may be a possibility as well. Ed] Call and/or e-mail these companies and politely ask for support. There is no point in bugging IBM for drivers any more as they have done their part; it is up to the board manufacturers now, and up to us to encourage them along.

Watch out for that ...!
Ok, for this demonstration I will try to fully describe the concept of bump mapping and then dive into a simple example.

If you think back to how lighting works, the colour of a polygon is determined by combining the polygon's natural colour with the orientation of that polygon with respect to the light source. If the polygon is directly facing the light source, the polygon gets drawn with full intensity. Likewise, if the polygon is tilted a bit away from the light source, the intensity is reduced to reflect the fact that less light is being reflected towards the viewer. (If you need a review of lighting, go back to the November column on adding lighting, and also the January column where we added lighting to textures.)
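To make that relationship concrete, here is a small sketch (my own helper, not part of the column's code) of the diffuse term OpenGL's lighting model is built on: intensity falls off with the cosine of the angle between the surface normal and the light direction, and drops to zero for surfaces facing away.

```cpp
#include <cmath>
#include <algorithm>

// Hypothetical helper: diffuse intensity from the angle between the
// surface normal (nx,ny,nz) and the light direction (lx,ly,lz).
float diffuseIntensity(float nx, float ny, float nz,
                       float lx, float ly, float lz)
{
    // Normalize both vectors so the dot product is the cosine of the angle.
    float nlen = std::sqrt(nx*nx + ny*ny + nz*nz);
    float llen = std::sqrt(lx*lx + ly*ly + lz*lz);
    float dot = (nx*lx + ny*ly + nz*lz) / (nlen * llen);
    // Surfaces tilted away from the light receive no diffuse light.
    return std::max(0.0f, dot);
}
```

A normal pointing straight at the light gives full intensity (1.0); tilting it 45 degrees drops the intensity to about 0.707, which is exactly the dimming effect described above.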

The angle of incidence towards the light is not determined by the coordinates of the vertices of the polygon, but by the normal specified with the polygon. If no normal has been specified, OpenGL can be told to automatically calculate it on a per polygon basis. If you specify your own normals, the polygon will use the normal last defined. Usually when you specify the normal, you calculate it so that it is perpendicular to the face of the polygon. (This is what OpenGL does when you specify that you want it to automatically compute normals.) We can, however, specify any normal direction we choose, which is the basis of the bump mapping method.
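For reference, "perpendicular to the face" means the normalized cross product of two edges of the polygon. This little sketch (names and types are my own, not from the column's sample code) computes that true face normal for a triangle:

```cpp
#include <cmath>

// Minimal vector type for the sketch.
struct Vec3 { float x, y, z; };

// True face normal of triangle (a, b, c): the normalized cross product
// of two edge vectors, which is perpendicular to the face.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    n.x /= len;  n.y /= len;  n.z /= len;
    return n;
}
```

Bump mapping amounts to deliberately handing glNormal3f something *other* than this value.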

If you think of objects that are physically bumpy, such as a tree trunk or the surface of an orange, you can see alternating light and dark areas as the light is either obscured by, or reflects off the raised or hollow areas. As you rotate your viewpoint around the object, the light and dark areas slightly change as the light source gets bounced off of different areas of the surface. While a texture map of an orange or tree would show it perfectly, rotating that object would not produce the effect that an irregular surface provides. This is one advantage of bump mapping over texture mapping. Now obviously, by simply varying the normals we will never recreate a complexly coloured object such as a painting or a photograph of the side of a house; but for many objects that vary their colours because of texture, this method is perfect. In situations where you can use this method instead of texture mapping, you will get a vast speed improvement over texturing because you are simply taking advantage of the regular lighting model which you probably have activated anyway. The fewer textures you use in your scene, the faster your scene will render.

This method was popular a few years ago on entry level workstations, as until recently these systems had little or no texture capability, and what little they had was quite slow. Nowadays, on workstations that have multimegabyte dedicated texture buffers and accelerated 3D capability, this method is not used much as they can texture map an entire scene and not suffer much of a slowdown. On non-accelerated hardware, or on platforms that have limited dedicated texture buffers, this bump mapping method proves very useful.

Choosing the normals that will modify the scene to be a realistic representation of bumpiness is not a trivial task (which is why simple texture mapping is the preferred method when you have the hardware to support it). Some advanced bump mapping algorithms can read in a texture and create normals that approximate the texture. The end result is an object that looks texture mapped, but the texture is discarded once the normals are created and not used in the rendering process. Here I will simply demonstrate the effect and leave the advanced texture-to-bump-map algorithms up to the reader. You can find these methods and algorithms in any good advanced rendering book.
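As a hint of how those texture-to-normals algorithms work, here is a minimal sketch (my own, not from the column or any book) of the common height-field approach: treat a greyscale image as terrain height, take finite differences to get the slope at each texel, and tilt an otherwise straight-up normal against that slope.

```cpp
#include <cmath>

// Derive a bump-map normal at texel (x, y) of a w-by-h height field by
// central differences. 'scale' exaggerates or flattens the bumps.
// Hypothetical helper names; illustration only.
void heightToNormal(const float* height, int w, int h, int x, int y,
                    float scale, float* nx, float* ny, float* nz)
{
    // Central differences, clamped at the borders.
    int xl = x > 0 ? x - 1 : x,  xr = x < w - 1 ? x + 1 : x;
    int yd = y > 0 ? y - 1 : y,  yu = y < h - 1 ? y + 1 : y;
    float dx = (height[y * w + xr] - height[y * w + xl]) * scale;
    float dy = (height[yu * w + x] - height[yd * w + x]) * scale;
    // The normal opposes the slope; z stays dominant, as in the
    // examples later in this column.
    float len = std::sqrt(dx * dx + dy * dy + 1.0f);
    *nx = -dx / len;  *ny = -dy / len;  *nz = 1.0f / len;
}
```

A flat height field yields the plain (0, 0, 1) normal; a ramp tilts the normal away from the rise, which is exactly the light-and-dark variation a bumpy surface shows.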

To show how to create a textured appearance using bump mapping, instead of picking a recognizable object I will pick a simple flat plane and place the bump maps on that. That way, the effect you see is purely the effect of the bump map. In the first example, we will start off with our standard code: in this variation we have a flat blue plane, with the plane subdivided into a number of small triangles. The code also sets up a timer that rotates the plane in front of the viewing position. You can download the code for this starting point here.

If you look at the code you will find that instead of specifying one large polygon, we have broken it up into a number of smaller triangles. This is because there can only be one normal specified per polygon. To specify the multiple normals that bump mapping requires, we have to subdivide the object into multiple pieces and place a normal on each piece. Running the code you will notice that the end result looks just like a single solid polygon; you cannot tell that the object is made up of a number of smaller triangles. This is because each of the polygon pieces is exactly the same colour and lying adjacent to one another in the same plane (normals do not yet take effect because lighting is not defined).
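The subdivision itself is just a nested loop that splits each grid cell into two triangles. This sketch mirrors the idea, though the names (STEP_SIZE as a parameter, the Tri type) are my own rather than the downloadable listing's:

```cpp
#include <vector>

// One triangle of the subdivided plane (z is 0 everywhere).
struct Tri { float x[3], y[3]; };

// Break a size-by-size square into a grid of cells, two triangles per
// cell, so each triangle can later carry its own normal.
std::vector<Tri> subdividePlane(float size, float step)
{
    std::vector<Tri> tris;
    for (float i = 0; i < size; i += step)
        for (float j = 0; j < size; j += step)
        {
            // Two triangles per cell, matching the glBegin pattern
            // used later in the column.
            Tri a = { { i, i,        i + step }, { j, j + step, j + step } };
            Tri b = { { i, i + step, i + step }, { j, j + step, j        } };
            tris.push_back(a);
            tris.push_back(b);
        }
    return tris;
}
```

A 4x4 plane with a step of 1 produces 16 cells, hence 32 triangles — and 32 chances to specify a normal, versus just one for the undivided polygon.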

Now we need to add the lighting. This code is identical to the code presented in previous columns on lighting, but is relisted here for convenience. All we add is a function that initializes lighting, and a call to that function in the constructor.

void GLPaintHandler::initLighting()
{
   glEnable( GL_LIGHTING );

   // setup light 0
   glEnable( GL_LIGHT0 );
   glEnable( GL_COLOR_MATERIAL );

   // modify ambient light
   GLfloat amb_light[] = { 0.1, 0.1, 0.1, 1.0 };
   glLightModelfv( GL_LIGHT_MODEL_AMBIENT, amb_light );
}

The call to initialize lighting can be placed in the GLWindow constructor right after the initOpenGL call.

// initialize lighting
paintHandler.initLighting();

Now we need to add a light source so that our normal definitions can take effect. As usual, we can place this code right after the glLoadIdentity call in our GLPaintHandler::paintWindow function.

// place a headlight
GLfloat light_pos[] = { 0, 1, 0, 0 };
glLightfv( GL_LIGHT0, GL_POSITION, light_pos );

If you recall the lighting column, this creates a directional light pointing along the y-axis. Running with just these modifications shows how the light modifies the perceived colour of the object, although it is still viewed as a single plane.

Now we will see what happens when we specify normals for these polygons that are not necessarily equal to the true surface normal. What we will create are random normals, pointing mainly in the z direction but with a slight random variation in x and y. Any normal created should also be normalized (i.e. the total length of the vector should be unity) before it is passed to OpenGL.

// specify normal
float nx, ny, nz, value;
nx = (50.0 * rand()) / RAND_MAX - 25;
ny = (50.0 * rand()) / RAND_MAX - 25;
nz = (300.0 * rand()) / RAND_MAX + 100;

// normalize the normal
value = sqrt( nx*nx + ny*ny + nz*nz );
nx /= value;
ny /= value;
nz /= value;

Now that we have our normal, we have to specify it when we create our polygon. Change the glBegin / glEnd pair so that it looks like the following:

glBegin( GL_TRIANGLES );
  glNormal3f( nx, ny, -nz );
  glVertex3f( i, j, 0 );
  glVertex3f( i, j+STEP_SIZE, 0 );
  glVertex3f( i+STEP_SIZE, j+STEP_SIZE, 0 );

  glNormal3f( ny, nx, -nz );
  glVertex3f( i, j, 0 );
  glVertex3f( i+STEP_SIZE, j+STEP_SIZE, 0 );
  glVertex3f( i+STEP_SIZE, j, 0 );
glEnd();

Instead of calculating a new random normal for the second triangle, for simplicity I'm simply swapping the x and y portions of the normal for a different effect. In your implementation, you can do whatever you wish to the normals.

Running this (you can get the completed source [glcol10a.zip here]) you should see a rotating rectangle as before, but now the surface should appear to be rapidly changing with different shades of blue. You should have noticed three things about the rendering:
1) The rendering speed did not slow down significantly by adding the normal calculations. This is because, to a certain extent, these calculations are already done on a per polygon basis. We are simply manipulating this to get the effect we desire. The only significant slowdown is that we now have to render a large number of smaller polygons instead of one large one.

2) Changing the normal affected the perceived colour of the polygon even though the actual colour stayed the same. We could have extended this to have multiple actual colours on our initial polygon, which is what would have happened if we had tried to emulate a coarse image.

3) (And this is the important one.) Not only could we change the perceived colour by changing the normal for that polygon, but we could do this dynamically as the image rotated, not just statically. This opens the door to some very inexpensive animation effects, just by changing the normals of the polygons. Ripples on the surface of water would be a good realization of this technique.
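The water-ripple idea in point 3 can even be done analytically rather than randomly: a travelling sine wave of height A*sin(k*x - w*t) has slope A*k*cos(k*x - w*t), so the normal for each frame falls out of a single cosine. This sketch (my own names, not the column's code) computes a normal you could hand to glNormal3f each timer tick:

```cpp
#include <cmath>

// Normal of a travelling ripple h(x, t) = amp * sin(k*x - speed*t),
// tilted against the analytic slope and normalized. Illustrative only.
void rippleNormal(float x, float t, float amp, float k, float speed,
                  float* nx, float* ny, float* nz)
{
    // Slope of the height field in x at time t.
    float slope = amp * k * std::cos(k * x - speed * t);
    // Tilt the straight-up normal against the slope, then normalize.
    float len = std::sqrt(slope * slope + 1.0f);
    *nx = -slope / len;  *ny = 0.0f;  *nz = 1.0f / len;
}
```

With zero amplitude the water is flat and the normal is (0, 0, 1); as t advances, the tilt sweeps across the surface and the lighting ripples with it, with no texture in sight.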

Now that we have the effect working, we can modify the normal generating routines to produce a variety of different effects. Try the following:

// specify normal
float nx, ny, nz, value;
nx = 50 * cos( (j + 6*rotationVal) * 3.14159 / 360.0 );
ny = 50 * sin( (i + 6*rotationVal) * 3.14159 / 360.0 );
nz = 200;

This ties the x and y parts of the normal to the rotation value; as the image rotates, the normals change. The normal values are also modified by their i and j values. Running this you get the following effect:

As you can see the effect is only limited by the complexity of the algorithm used to generate the normals.

You may have thought to yourself, "can we combine normal texturing with bump mapping?" Well yes, and this is a relatively new technique just coming out (well, for PCs anyway). If you have a graphics card that can do this effect in hardware, it will probably be touted as "multi-texturing", and if done properly it can result in very realistic images. Routines to help with these effects are included in the optional components of the recently approved OpenGL v1.2, but with a bit of care and work you can try them out by modifying the method shown here.

To try
We've just touched on what is possible using this effect. As you can see, the example shown here is simplistic, but it does show the method. Trying to show something realistic would only complicate the code without adding any technique not already given. Now that you know the method, try applying it to other objects such as a cylinder (to make, for example, a tree trunk) or a sphere (to make, for example, an orange).
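For curved objects like the sphere, the recipe barely changes: the true normal at a point on a unit sphere is simply its position vector, so a bump is just a small perturbation added to that vector and re-normalized. A sketch of the idea (hypothetical helper, not from the column):

```cpp
#include <cmath>

// Bumped normal at point (px, py, pz) on a unit sphere: start from the
// exact sphere normal (the position itself), add a small perturbation
// (bx, by, bz), and normalize as every normal given to OpenGL should be.
void bumpedSphereNormal(float px, float py, float pz,
                        float bx, float by, float bz,
                        float* nx, float* ny, float* nz)
{
    float x = px + bx, y = py + by, z = pz + bz;
    float len = std::sqrt(x * x + y * y + z * z);
    *nx = x / len;  *ny = y / len;  *nz = z / len;
}
```

Feed this a small random or height-field-derived perturbation per facet and the sphere picks up the dimpled, orange-peel look as it rotates under the light.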

Because many of you asked, next month I will show how to do OpenGL programming without the VisualAge C++ libraries. I will go through program setup for straight PM programming, the GLUT libraries and the AUX libraries. See you then!