OpenGL and OS/2
A Model Viewer - Part 3
Written by Perry Newhook
Introduction

I hope everyone is enjoying this mini-series of columns where we are finally creating a real application: an OpenGL-based Quake II model viewer. So far we have written the windowing and menuing framework (part 1), and added mouse movement plus model loading and display for points and lines (part 2). This month we hook the different display modes into the menuing system so that we can flip among them easily, and we also add a solid display mode with artificial lighting and textures.

To create this program, I will be drawing on topics covered in previous columns. In fact, everything we are going to do to make this viewer work has already been covered in previous columns, so you don't really need me at all! As we use each topic, I'll add a reference to the column where it was discussed previously. That way, if you don't understand something, you can always go back to the article that describes the feature in more detail. Our source code starting point for this month is what we ended up with last month, so if you don't have the source, you can either review last month's column or download it here.

More Menus

Last month we added code that let us show the model in either wireframe or point mode. To flip between them, however, we had to recompile. The first thing we will do this month is enhance the menuing system to let us choose how we want our model shown. If you remember, when we set up our menus we already added four possible display modes: points, wireframe, solid and texture. Selecting one of these choices calls the function menuFuncDisplay(), which is currently empty. What we need to add here is code that performs any one-time mode switching for each of the display types, and then informs the draw routine which method to use when drawing. If you look at our draw code that draws a triangle mesh, you will notice that there is a line in there like this one:

  glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
This command tells OpenGL that we want our polygons drawn as outlines only, and that we want both the front and back drawn this way. This is an OpenGL mode command: once specified, all polygon drawing uses this mode until we issue another such command. Placing the command in the draw function is not only unnecessary, it actually slows things down as OpenGL processes it needlessly. Therefore, we will move this command to the menuFuncDisplay() function. For solid display, we will change the glPolygonMode() setting from GL_LINE to GL_FILL. Since at this point this is the only difference between wireframe and solid, these two modes can now share the same chunk of code in our draw routine. All that is left for menuFuncDisplay() to do is store the selected mode, so that the draw routine knows what it is. We end up with the following:
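A minimal sketch of what this function might look like. MENU_SHOW_SOLID and MENU_SHOW_TEXTURE appear later in this column; MENU_SHOW_POINTS and MENU_SHOW_WIREFRAME are my guesses at the other two menu IDs. glPolygonMode() is stubbed here (with the real gl.h constant values) so the sketch stands alone without an OpenGL context:

```c
/* Sketch of menuFuncDisplay(): perform the one-time glPolygonMode()
   switch, then remember the selected mode for the draw routine.
   MENU_SHOW_POINTS/WIREFRAME are assumed names. */
#define GL_FRONT_AND_BACK 0x0408   /* values as in <GL/gl.h> */
#define GL_POINT          0x1B00
#define GL_LINE           0x1B01
#define GL_FILL           0x1B02

enum { MENU_SHOW_POINTS, MENU_SHOW_WIREFRAME,
       MENU_SHOW_SOLID,  MENU_SHOW_TEXTURE };

static int displayMode;          /* read later by the draw routine */
static int currentPolygonMode;   /* stands in for the GL state     */

/* Stub: the real call lives in the OpenGL library. */
static void glPolygonMode(int face, int mode)
{
    (void)face;
    currentPolygonMode = mode;
}

void menuFuncDisplay(int menuId)
{
    switch (menuId) {
    case MENU_SHOW_POINTS:
        glPolygonMode(GL_FRONT_AND_BACK, GL_POINT);
        break;
    case MENU_SHOW_WIREFRAME:
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        break;
    case MENU_SHOW_SOLID:
    case MENU_SHOW_TEXTURE:
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
        break;
    }
    displayMode = menuId;   /* remember the mode for the draw routine */
}
```

Because glPolygonMode() is a sticky state setting, issuing it once here is all that is needed; the draw routine never has to touch it again.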
In our draw routine, a simple switch will choose which code we use to
draw our model:
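The switch might look something like the following sketch; the draw helpers are stand-ins for the drawing code from last month, and their names are illustrative. Note that wireframe and solid share a branch, because only the glPolygonMode() setting (already made in menuFuncDisplay()) differs:

```c
/* Sketch of the mode switch inside the draw routine.  The draw helpers
   record which branch ran so the dispatch can be exercised on its own. */
enum { MENU_SHOW_POINTS, MENU_SHOW_WIREFRAME,
       MENU_SHOW_SOLID,  MENU_SHOW_TEXTURE };

static int displayMode = MENU_SHOW_WIREFRAME;
static int drawPathTaken;   /* 1 = points, 2 = mesh, 3 = textured mesh */

static void drawPoints(void)       { drawPathTaken = 1; }
static void drawMesh(void)         { drawPathTaken = 2; }
static void drawTexturedMesh(void) { drawPathTaken = 3; }

void paintWindow(void)
{
    switch (displayMode) {
    case MENU_SHOW_POINTS:
        drawPoints();
        break;
    case MENU_SHOW_WIREFRAME:
    case MENU_SHOW_SOLID:      /* same vertex loop; polygon mode differs */
        drawMesh();
        break;
    case MENU_SHOW_TEXTURE:
        drawTexturedMesh();
        break;
    }
}
```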
Now, via the popup menu, we can easily flip among our display choices. If you try this you will notice that solid mode doesn't really show much, because all of the polygons are drawn in the same colour; no features except the outline are distinguishable. We will correct this problem by adding lighting to the solid display mode. Once we add textures, the problem is bypassed, as the textures themselves let us pick out the features.

Lighting

To see features in a 3D solid model that has no colour information, we need a way to distinguish between the different polygons in the model. Lighting does this by modifying the colour of each polygon based on the orientation of that polygon with respect to the viewer and the light source. (For an in-depth discussion of lighting, see the 'Let there be light!' column.)

For lighting to be calculated correctly, we need to specify normals for each of the polygons. The normals tell OpenGL which way each polygon is facing, so that the lighting variations can be calculated as the polygon changes direction with respect to the viewer. To calculate the normal, take any two connecting sides of the polygon, convert them to vectors and take the cross product. The resultant vector should then be normalized by dividing by its length. A function that does this is given below.
coord1, coord2 and coord3 are the three points of the triangle polygon. They are pointers, so that coord1[0] is the x coordinate of coord1, coord1[1] is the y coordinate, and coord1[2] is the z coordinate. The paintWindow() section now becomes:
You may have noticed that the vertex order has been reversed from what was listed earlier. This is because the face of the polygon was oriented in the wrong direction (inwards), resulting in no light being reflected back to the viewer. To reverse the direction of the polygon, we could either alter the order in which we specify the polygon vertices, or call the function glFrontFace() with the argument GL_CW. This states that polygons whose vertices are ordered clockwise are front facing (by default, front faces are those ordered counterclockwise, GL_CCW). Running this on the file tris.md2 (you can download the source up to this point here), you should now see a solid image that looks like the following:
Texture Skins

Another method of showing the model as a solid, instead of using artificial lighting, is to apply colour texture maps to each polygon. While the colour information itself is not stored in the .MD2 file, the name of the associated texture file, and all the information we need to attach it, is.

Note: The models and textures that I am working with can be found in the players\female directory under the main Quake directory. You are free to use any directory or Quake models you wish, as the techniques described here should apply to all of the Quake II models. You may need to run the game before it extracts these files from the pack file in which the models and textures are originally stored.

Texture 'skins', as they are called for Quake II models, are stored in standard .PCX files. To see these files you can simply show them in an OS/2 folder and double-click on them; OS/2 understands the format and will display them in the default image browser. By clicking on a variety of different skins you will notice that most of them follow the same organization and are interchangeable. Some .MD2 files have specific textures associated with them; for example, the 'weapon.md2' file has the specific skin file 'weapon.pcx', whose name is listed in the .MD2 structure as a skin name. Other .MD2 files, such as 'tris.md2', have no specific texture associated with them. This model is a generic model (one for male, and one for female), and different characters are created by laying the appropriate texture on top of the model. In both cases, the mapping of how to place the texture on the model exists in the .MD2 structure.

The first thing we need to do is load the .PCX file and convert it to an OpenGL-compatible texture. The actual format of the .PCX file is not really that important to our discussion here, so I will just give a brief overview and show you the code.
What is important is how to convert it into a texture that OpenGL can use, and how to map that texture onto the model.

Note: The code that I'm using to read the .PCX is just the bare minimum required for our purposes. It's enough to read Quake II skins, but I'm sure it won't do much with other .PCX files.

The first thing we need to do to decode the .PCX file is read in the 128-byte header, although we only need the first six shorts. The first two shorts are used to identify the file as .PCX, so we'll ignore them. The second two shorts are the image's start pixel, followed by two more shorts that are the image's end pixel. To find the dimensions of the image we subtract the start pixel from the end pixel and add 1.
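That calculation can be sketched as follows (the helper names are mine; the shorts are little-endian, with the start pixel at byte offset 4 and the end pixel at byte offset 8):

```c
/* Read a little-endian short from a byte buffer. */
static int readShort(const unsigned char *p)
{
    return p[0] | (p[1] << 8);
}

/* Extract width/height from the first six shorts of a PCX header:
   shorts 0-1 identify the file, 2-3 are the start pixel, 4-5 the
   end pixel. */
void pcxDimensions(const unsigned char *header, int *width, int *height)
{
    int xmin = readShort(header + 4);
    int ymin = readShort(header + 6);
    int xmax = readShort(header + 8);
    int ymax = readShort(header + 10);

    /* End pixel minus start pixel, plus one. */
    *width  = xmax - xmin + 1;
    *height = ymax - ymin + 1;
}
```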
PCX files are 8-bit palettized images that are also slightly compressed. I say slightly because the only compression occurs when there are a number of pixels in a row with the same value [This is also known as run-length encoding. Ed]. In that case the file stores the number of repetitions followed by the pixel value to repeat. To make things easier, the first step is to uncompress the PCX into a regular image array:
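A sketch of that decode step. In the PCX scheme, a byte with its top two bits set (0xC0 and above) is a repeat count in its low six bits, applied to the byte that follows; anything else is a literal pixel. The function name is mine:

```c
#include <stddef.h>

/* Expand PCX run-length-encoded data into a flat image array. */
void pcxDecompress(const unsigned char *src, size_t srcLen,
                   unsigned char *dst, size_t dstLen)
{
    size_t in = 0, out = 0;

    while (in < srcLen && out < dstLen) {
        unsigned char b = src[in++];
        if ((b & 0xC0) == 0xC0) {       /* run: count, then value */
            int count = b & 0x3F;
            unsigned char value;
            if (in >= srcLen)
                break;                   /* truncated run; stop */
            value = src[in++];
            while (count-- > 0 && out < dstLen)
                dst[out++] = value;
        } else {                         /* literal pixel */
            dst[out++] = b;
        }
    }
}
```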
The values have been palettized, so we need to convert them back out to 24-bit values before we can use them. The image palette is stored as the last 768 bytes in the file (256 entries * 3 bytes per 24-bit colour = 768 bytes).
We will do the final conversion to 24-bit below, when we create the OpenGL texture.

Skins to Texture

The previous column that dealt with textures was 'Let there be texture!'. If at any time during the next few paragraphs you get confused and think to yourself "What the heck is this guy doing?", simply go back to the texturing column and all of your questions will be answered. And remember, "Hakuna Matata" [As far as I remember, this is a Lion King reference :) Ed.]

One of the restrictions of OpenGL textures is that the dimensions must be a power of two. Since the image has to be converted to 24-bit before we give it to OpenGL, we can change the dimensions at the same time. In the following code imgWidth and imgHeight are the dimensions of the PCX image, and texWidth and texHeight are the next power-of-two (OpenGL-friendly) dimensions above them.
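The combined step can be sketched like this: round each dimension up to the next power of two, then look each 8-bit pixel up in the 768-byte palette to produce 24-bit RGB, leaving the padding black. All names here are mine:

```c
#include <stdlib.h>

/* Smallest power of two >= n. */
int nextPowerOfTwo(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Expand an 8-bit palettized image into a malloc'd 24-bit RGB buffer
   of power-of-two dimensions; palette is the 768-byte table from the
   end of the .PCX file. */
unsigned char *imageToTexture(const unsigned char *image,
                              int imgWidth, int imgHeight,
                              const unsigned char *palette,
                              int *texWidth, int *texHeight)
{
    int tw = nextPowerOfTwo(imgWidth);
    int th = nextPowerOfTwo(imgHeight);
    unsigned char *tex = calloc((size_t)tw * th * 3, 1);  /* padding = black */
    int x, y;

    for (y = 0; y < imgHeight; y++) {
        for (x = 0; x < imgWidth; x++) {
            unsigned char index = image[y * imgWidth + x];
            unsigned char *out = tex + ((size_t)y * tw + x) * 3;
            out[0] = palette[index * 3 + 0];   /* red   */
            out[1] = palette[index * 3 + 1];   /* green */
            out[2] = palette[index * 3 + 2];   /* blue  */
        }
    }
    *texWidth = tw;
    *texHeight = th;
    return tex;
}
```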
Now that we have the texture, we need the information that maps portions of this texture to each polygon. Just as each point has a coordinate in 3D space, it also has a texture coordinate. In 3-space the coordinates are referred to as x, y and z; in texture space the coordinates are generally called s and t. S and t coordinates in OpenGL are floating-point values ranging from 0.0 to 1.0, with 0.0 being the left or bottom edge of the texture and 1.0 being the right or top edge.

Texture coordinates in the Quake II file are not floating point, but short values that correspond to pixel positions. We have to convert these values to floating point for OpenGL, but first we have to read them in. In the modelHeader (the header associated with the start of the .MD2 file) there is an entry called numST. This is the number of s,t pairs in the .MD2 file, and you can find them starting at offsetST bytes from the start of the file. When we read these values in, we convert them to floating point by dividing by the texture dimensions.
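A sketch of that read-and-convert step, assuming each pair is stored as two little-endian shorts; the struct and function names are my own:

```c
typedef struct {
    float s, t;   /* OpenGL texture coordinates, 0.0 to 1.0 */
} TexCoord;

/* Read a little-endian (possibly signed) short from a byte buffer. */
static int readShortLE(const unsigned char *p)
{
    return (short)(p[0] | (p[1] << 8));
}

/* buf points at offsetST within the file; numST pairs of shorts follow.
   Dividing the pixel positions by the texture dimensions yields the
   0.0-1.0 range that OpenGL expects. */
void readTexCoords(const unsigned char *buf, int numST,
                   int texWidth, int texHeight, TexCoord *out)
{
    int i;
    for (i = 0; i < numST; i++) {
        out[i].s = (float)readShortLE(buf + i * 4)     / (float)texWidth;
        out[i].t = (float)readShortLE(buf + i * 4 + 2) / (float)texHeight;
    }
}
```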
These values are a list of all of the texture mappings used by the model. When we created our mesh earlier, each of the points for the polygons in the mesh was actually an index into the point list. The same is true for the texture coordinates: each point of the polygon has an index into the texture coordinate list created above. We can expand our mesh creation code to the following:
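The expanded triangle record can be sketched like this. In the .MD2 file each triangle is stored as six little-endian shorts, three vertex indices followed by three texture-coordinate indices; the struct and function names are my own:

```c
typedef struct {
    int vertexIndex[3];   /* indices into the point list            */
    int stIndex[3];       /* indices into the texture-coordinate list */
} Triangle;

static int readShortLE(const unsigned char *p)
{
    return p[0] | (p[1] << 8);
}

/* buf points at the triangle data in the file; numTris triangles of
   12 bytes each (3 vertex shorts, then 3 s,t shorts) follow. */
void readTriangles(const unsigned char *buf, int numTris, Triangle *out)
{
    int i, j;
    for (i = 0; i < numTris; i++) {
        const unsigned char *t = buf + i * 12;
        for (j = 0; j < 3; j++) {
            out[i].vertexIndex[j] = readShortLE(t + j * 2);
            out[i].stIndex[j]     = readShortLE(t + 6 + j * 2);
        }
    }
}
```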
This now gives us all of the information we need to map the texture. If the model has a specific skin associated with it, the numSkins value in the header will indicate this: the value will be 1, and the name of the required texture will be a text string at offsetSkins from the beginning of the file. If no skin name is present, we can apply any texture we want; in this viewer I read the name of the texture to apply from the program parameters.

The only thing left to do is to set up the OpenGL texture settings, and to enable texturing when we select that mode from the menu. In our main() function, we can call a function that sets up our texture settings:
After we load our model and call the above function, we need to tell
OpenGL to use the texture we created:
That's it! Now we just have to enable texturing when we select it from the menu (in menuFuncDisplay(), add another case statement for MENU_SHOW_TEXTURE):
Then, when we draw the model, we specify the texture coordinates. In the paintWindow() function, the MENU_SHOW_TEXTURE case is almost identical to MENU_SHOW_SOLID, except that we don't calculate a normal and we have to specify a glTexCoord*() command for every glVertex*(). Remember that, like the vertices, the texture coordinates are indexed. Our MENU_SHOW_TEXTURE section then becomes:
Run this (you can get the finished source and executable here) and you should be able to get textures on your model. The following snapshots were taken with the voodoo.pcx skin mapped onto the female tris.md2 mesh:
One note about program placement: It is best to place the executable in
the 'baseq2' directory under the Quake II install directory. This is
because if a skin name is explicitly named, it includes the full pathname
starting from this directory (for example a valid skin name would be
'players\female\weapon.pcx' ). Therefore to view the images as above you
would type on the command line:
(The first parameter is the model name and the second is an optional texture.)

Things to try

When I enabled textures in the code above, I disabled lighting. Having both together is not just a matter of enabling lighting while textures are enabled; some other setup is required. Try to get both lighting and textures active at once. Hint: if you can't get it, go back and review the column that discusses it, 'Let There be Lit Things'.

Conclusion

Well, that's another OpenGL column for this month! I hope you have fun playing around with all of the Quake II characters. Next month I will show you how to animate the characters by running through the frames stored in the .MD2 file. Unless anyone has an OpenGL topic to suggest (this series was suggested by a reader), next month will be the last OpenGL column, as I'm out of ideas of what to cover. If you have an idea (it can be anything, even basic stuff) just click on my signature above and send it to me at the EDM/2 address. See you next month!