OpenGL and OS/2
Introduction

I hope you enjoyed last month's column. We learned a lot about lighting and how to position lights for the desired effect. Being able to place the viewer on the planet and view the scene from that position is the first step in creating a VRML-type application. After lighting, the effect that adds the most realism to a scene is texture mapping, and that is the subject of this month's column.

Fixpack 5

I have just tried Fixpack 5 on my laptop (a ThinkPad 760ED) and have found that it gives about a 15% speed increase for my OpenGL apps, even without using the new GRADD drivers. I have not yet tried this on any other machine, so I do not know whether the speed increase is the exception or the rule. If anyone else notices a significant speed change, or has positive or negative experiences with OpenGL and the GRADD drivers, please drop me a line. Incidentally, the release of the GRADD drivers is the first, necessary step in producing hardware-accelerated OpenGL drivers. Let's hope that these drivers come out in the first half of next year as rumoured.

Lighting Update

In last month's column I showed a very crude but simple way of adding shadows to a scene. I mentioned that if you wanted better-looking, more realistic shadows, you would have to write the lighting routines yourself, as they are not part of the OpenGL v1.x specification. If you are interested in proper shadows, you may like to know that Silicon Graphics (the creator of OpenGL) is working on sample code that can generate shadows in real time given enough processing power (i.e. an SGI workstation, not a PC). This code is useful on all platforms, however, because it does not use any hardware features of a specific machine; all of the shadow generation is done in software.
For those who are interested, the code can be obtained from OpenGL-Based Real-Time Shadows. Another method of doing fast shadows, related to this month's topic, is described in "Fast Shadows and Lighting Effects Using Texture Mapping" by Mark Segal, Carl Korobkin, Rolf van Widenfelt, Jim Foran, and Paul Haeberli, in the SIGGRAPH 1992 proceedings. This also appears in Computer Graphics magazine, 26:2, July 1992, pp. 249-252. (Incidentally, for people familiar with SIGGRAPH, it is interesting to note that even though Microsoft is trying to position NT as a great graphics workstation, every time they present something at SIGGRAPH it is on an SGI workstation. I guess that says a lot about NT.)

Texturing

When an object is too complex to be rendered accurately with simple polygons, a solution may be to apply a texture to a simpler representation of that object. Think of how to create a model of a car. With enough polygons, you can accurately represent the shape of the car. Now think of what you would have to do if the car had a polka-dot paint job. Since a polygon can only be one colour, or a smooth shade between vertex colours, the original polygon model would have to be broken up into smaller polygons to accurately represent where the polka dots are. This makes the model more complicated and harder to draw, but is still doable. It would be much simpler to place an image of a polka-dotted surface on the model of the car. This has the added benefit of not forcing a modification of the original data. Now think of what you would have to do to accurately represent an extremely complex object like a tree. Placing a green polygon for each leaf would probably look extremely realistic if done properly, but would be so complex as to be nearly impossible to create. It would also contain so many polygons as to make the rendering process extremely slow. Representing the tree trunk so that it accurately looks like bark using only coloured polygons would also be extremely complex. That is where texturing comes in.
Whenever an object is supposed to look bumpy or leafy, or has a complex graphic placed on it, texture mapping is the solution you should use. For example, to create a virtual box of cereal you would simply create a box of six polygons, and on each place an image corresponding to one side of the box. A can of pop is even simpler: render the can as a closed cylinder, place the bitmaps corresponding to the top and bottom of the can on the top and bottom circles, and then place the image for the graphic on the can's sides on the polygon representing the cylinder itself. The image will automatically wrap around the curved surface for you.

A quick note about terminology before we continue. When you read the OpenGL manuals from SGI, they use "bitmap" specifically for images that use only one bit per pixel of colour information; they therefore have only two possible colours, black and white. Colour images are simply referred to as images. There is another term used in texturing called a mipmap. A mipmap is a collection of images or bitmaps ordered in decreasing resolutions, generally used to speed up the rendering process. For example, let's say you have a beautiful texture of a brick wall that you are mapping onto walls in a maze program. For the user to see a nice wall when the polygon is close to the viewpoint, the image should be quite detailed. However, if the polygon is far away and therefore small, it is a waste to scale the image down every frame to fit the distant polygon. The user could not see all the detail anyway because of the distance. Also, scaling a large image down into a smaller one can introduce undesired aliasing effects. With mipmaps, you pre-scale the image to a variety of sizes, and during the rendering process OpenGL picks the image size that most closely matches the viewing size.

Drink Anyone?

For this month's sample program let's make a spinning pop can.
We will take bitmaps captured from an actual can of pop and texture map them onto a cylinder. The code to show part of the can without the textures can be obtained here. The code is almost identical to last month's example; only the objects we are rendering are different, and the lighting is removed. I have also added an initTextureMapping() function.

Legal Mumbo Jumbo

The textures that I have chosen are from a Pepsi can. I'm sure that the Pepsi corporation won't mind me showing their product without permission, as it is a very fine product and it IS free advertising. Not that I don't like Coke and its associated products. I like Coke just as much as Pepsi, and I actually tried getting images of a Coke can first, but it turned out that the Pepsi logo was simpler and came out better when imaged with my camera. Not that I didn't like the Coke logo; it is a very nice logo. (Think I sucked up enough to both sides so that they won't sue us?)
Back to the Drink

The biggest difficulty for people jumping into texturing for the first time is that there are so many options to set that it is difficult to know where to start. Because there are so many options, each giving a different result, it would be impossible to cover all of them here. In this example, we will start off with a basic texture mapping and expand it to show a variety of effects such as clipping, wrapping and transparency.

The largest image size that is guaranteed to work on all implementations of OpenGL is 64x64 (or 66x66 with borders). The implementation you are using may allow larger images; the maximum value can be obtained from the function glGet*(GL_MAX_TEXTURE_SIZE). If the image you want to display is larger than the maximum dimension supported by your implementation, simply cut the image into smaller pieces and stitch them together during the texturing process. Regardless of what size of image you use, each dimension must match the formula 2^m + 2b (where b is the border size, 0 or 1). The value of m does not have to be the same for the width and the height of an image. Borders are used in filtering when joining two textures together, and when textures are repeated across a polygon.

If you run the sample program so far, you should see a square slowly rotating around the screen. This square will eventually become the top of our pop can. First we will place the texture map for the top of the can, and then we will trim the corners off the square so that only the round can top remains. After we have this we will add the bottom of the can, and then wrap the remaining images around the side of the can to finish the effect.

The images we are using are in 24-bit bitmap format. Since OpenGL has no way of actually loading an image, I have supplied a simple routine to read in the file and convert it into one of the formats that OpenGL can understand.
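As an aside, the 2^m + 2b size rule above can be checked mechanically. Here is a small sketch (my own helper, not part of OpenGL) that tests whether a texture dimension is legal:

```cpp
// A texture dimension is legal if, after subtracting the two border
// texels (b = 0 without a border, b = 1 with one), what remains is
// a power of two.
bool validTextureDim( int dim, int border )
{
    int inner = dim - 2 * border;
    return inner > 0 && ( inner & ( inner - 1 ) ) == 0;
}
```

So 64 is valid without a border and 66 with one, but a width like 60 is not legal at all.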
For the benefit of non-VisualAge users, I have written the routine in standard PM so that it can be used with all compilers. Here is the routine:

char * GLPaintHandler::readBitmap( char *fileName )
{
   HFILE hfFile;
   ULONG ulAction;

   // open the file
   DosOpen( fileName, &hfFile, &ulAction, 0L,
            FILE_ARCHIVED | FILE_NORMAL,
            OPEN_ACTION_FAIL_IF_NEW | OPEN_ACTION_OPEN_IF_EXISTS,
            OPEN_FLAGS_NOINHERIT | OPEN_SHARE_DENYNONE | OPEN_ACCESS_READONLY,
            0L );

   // get the length of the file
   ULONG filePtr, filePtr2;
   DosSetFilePtr( hfFile, 0, FILE_END, &filePtr );

   // read in the file
   char *buffer = new char[filePtr];
   DosSetFilePtr( hfFile, 0, FILE_BEGIN, &filePtr2 );
   DosRead( hfFile, buffer, filePtr, &filePtr2 );
   DosClose( hfFile );

   // reset data back to RGB from BGR, skipping the bitmap header
   for( int i=0; i<64*64*3; i+=3 )
   {
      buffer[i] = buffer[28+i];
      buffer[i+1] = buffer[27+i];
      buffer[i+2] = buffer[26+i];
   }
   return( buffer );
}

Aside: This routine will only work on 24-bit bitmaps. The purpose here is simply to show how to do texture mapping, not to provide a way of reading every known image format. This is why I chose the simplest way to read in a bitmap, and a bitmap format that is as close as possible to a form that OpenGL can read. For those interested in reading paletted images such as 8 or 16 bit gifs or bmps, the following steps could be followed:
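In outline: read the palette from the file header, then replace each pixel's palette index with that entry's RGB triplet. A hypothetical sketch of the expansion step (the names and data layout here are my own, not tied to any real file format):

```cpp
#include <vector>
#include <cstddef>

struct RGB { unsigned char r, g, b; };

// Expand one palette index per pixel into the packed RGB triplets
// that glTexImage2D() expects for a GL_RGB texture.
std::vector<unsigned char> expandPaletted( const std::vector<unsigned char> &indices,
                                           const RGB palette[256] )
{
    std::vector<unsigned char> rgb;
    rgb.reserve( indices.size() * 3 );
    for( std::size_t i = 0; i < indices.size(); ++i )
    {
        const RGB &c = palette[ indices[i] ];
        rgb.push_back( c.r );
        rgb.push_back( c.g );
        rgb.push_back( c.b );
    }
    return rgb;
}
```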
GL_TEXTURE_MIN_FILTER specifies what happens when the pixel being textured maps to an area greater than one texture element (the texture map has to be shrunk down to fit). The parameter specifies which function to use.
GL_TEXTURE_MAG_FILTER specifies what happens when the pixel being textured maps to an area less than or equal to one texture element (the texture must be magnified to fit). The valid parameters are:
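For example, GL_NEAREST simply picks the single closest texture element with no averaging. A conceptual sketch of that lookup in one dimension (my own model of the behaviour, not the actual pipeline code):

```cpp
// Map a texture coordinate in [0, 1] to the index of the nearest
// texel in a row of texSize texels, clamping at the edges.
int nearestTexel( double coord, int texSize )
{
    int i = (int)( coord * texSize );
    if( i < 0 ) i = 0;
    if( i >= texSize ) i = texSize - 1;
    return i;
}
```

GL_LINEAR instead averages the nearest texels, which looks smoother when magnified but costs more work per pixel.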
GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T specify how to handle out-of-range texture coordinates. Texture coordinates are named (s, t, r and q) to distinguish them from object coordinates (x, y, z and w). Two-dimensional textures use (s and t) to specify a position. The r coordinate is currently ignored (in OpenGL 1.0, anyway) and q can be used in effects such as texture projection. The coordinates of the texture range from 0.0 to 1.0, although the coordinates of the item being textured can be anything. If you specify a texture coordinate greater than 1.0 or less than 0.0, two things can happen: the texture can repeat, or the texture can stop at 1.0. These are the two settings that can be specified for GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T.
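Conceptually, the two wrap modes transform an out-of-range coordinate like this (a sketch of the arithmetic only, not the actual implementation):

```cpp
#include <cmath>

// GL_CLAMP pins the coordinate to the [0, 1] range.
double wrapClamp( double s )
{
    if( s < 0.0 ) return 0.0;
    if( s > 1.0 ) return 1.0;
    return s;
}

// GL_REPEAT keeps only the fractional part, tiling the texture.
double wrapRepeat( double s )
{
    return s - std::floor( s );
}
```

With GL_REPEAT, running texture coordinates from 0 to 4 across a wall would tile the brick image four times; with GL_CLAMP, everything past 1.0 would keep sampling the texture's edge.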
// specify texture parameters
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );

Now that this is set, we have to set the texture environment. This is done with the glTexEnv() command, which takes three parameters:
For our use we are simply placing the texture on the polygon, so we will use the GL_DECAL function:

glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );

Now that we have everything specified for our setup, we can enable 2D texturing calculations with:

glEnable( GL_TEXTURE_2D );

All of these commands are placed in our GLPaintHandler::initTextureMapping() function. The other call that I placed inside this function is the call to read in our bitmap (calling the function given above):

canTop = readBitmap( "cantop.bmp" );

The canTop buffer is stored for later use inside the GLPaintHandler::paintWindow() function. Place the call to GLPaintHandler::initTextureMapping() right after the GLPaintHandler::initOpenGL() call. Our next step is to add code inside the GLPaintHandler::paintWindow() function that specifies which texture map to use, and how to map the texture coordinates onto the pixel coordinates. We select the texture to use with the function glTexImage2D(). This function takes nine parameters:
glTexImage2D( GL_TEXTURE_2D, 0, 3, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, (void *)canTop );

The next step is to specify the texture coordinates that map to each pixel coordinate. Each pixel coordinate should have an associated texture coordinate. Texture coordinates are specified with the glTexCoord2*() command and should precede the glVertex*() command. Change the polygon statements so that they look like the following:

// draw the can top
glBegin( GL_POLYGON );
   glTexCoord2f( 0.0, 0.0 );
   glVertex3f( -1.0, -1.0, 1.0 );
   glTexCoord2f( 1.0, 0.0 );
   glVertex3f( 1.0, -1.0, 1.0 );
   glTexCoord2f( 1.0, 1.0 );
   glVertex3f( 1.0, 1.0, 1.0 );
   glTexCoord2f( 0.0, 1.0 );
   glVertex3f( -1.0, 1.0, 1.0 );
glEnd();

Compile and run this. You should now see a textured image of a can top placed on a square rotating in space. Congratulations, you have completed your first texture mapping! There is a bit of work to do to get this looking like a pop can, though. First of all, we have to get rid of those corners. We have to change our glVertex() commands from specifying a square to something representing a circle. Obviously, the more divisions and the shorter the segments, the more circle-like the resulting polygon will become. To save space, and since this is just a demonstration of what can be done, I am going to use an eight-sided polygon. This will create a fairly chunky can but will serve our purposes. You can increase the number of segments to make it more realistic if you like. To figure out where to place the texture coordinates (remember, they should range between 0 and 1, since we are spreading the entire texture across the polygon and not repeating textures), simply draw the polygon on a piece of paper and rescale the axes so that the bottom left corner is (0,0) and the top right is (1,1). Calculating where each of the vertices of the polygon falls on these new axes should give you the correct texture coordinates.
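That rescaling is just a linear map from the [-1, 1] vertex range onto the [0, 1] texture range; a one-line sketch:

```cpp
// Rescale a vertex coordinate in [-1, 1] to a texture coordinate
// in [0, 1]: shift up by one, then halve.
double vertexToTexCoord( double v )
{
    return ( v + 1.0 ) / 2.0;
}
```

For the eight-sided polygon, a vertex x of 0.4 therefore gets a texture s of 0.7, and -1.0 maps to 0.0.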
Note: We want to place our texture on the polygon without distorting it, so that it looks life-like. This is why we are making an even mapping from polygon coordinates to texture coordinates. If you wanted to distort the image, you could simply pick texture coordinates that do not map linearly to the polygon coordinates, thereby producing a warped image. A variety of special effects can be achieved in this manner. Replace the polygon coordinates with the following:

// draw the can top
glBegin( GL_POLYGON );
   glTexCoord2f( 1.0, 0.7 );
   glVertex3f( 1.0, 0.4, 1.571 );
   glTexCoord2f( 0.7, 1.0 );
   glVertex3f( 0.4, 1.0, 1.571 );
   glTexCoord2f( 0.3, 1.0 );
   glVertex3f( -0.4, 1.0, 1.571 );
   glTexCoord2f( 0.0, 0.7 );
   glVertex3f( -1.0, 0.4, 1.571 );
   glTexCoord2f( 0.0, 0.3 );
   glVertex3f( -1.0, -0.4, 1.571 );
   glTexCoord2f( 0.3, 0.0 );
   glVertex3f( -0.4, -1.0, 1.571 );
   glTexCoord2f( 0.7, 0.0 );
   glVertex3f( 0.4, -1.0, 1.571 );
   glTexCoord2f( 1.0, 0.3 );
   glVertex3f( 1.0, -0.4, 1.571 );
glEnd();

This is much better. The polygon now restricts itself to a (roughly) circular region representing the can top. You can produce the can bottom simply by copying the code for the can top and making the z-coordinates negative. Since OpenGL can only have one active texture at a time, we must call glTexImage2D() again, this time passing it the buffer for our image of the bottom of the can. Also note that the glTexImage2D() function will fail if it is placed inside a glBegin()/glEnd() pair. Therefore, to use two different images, we must have two glBegin()/glEnd() blocks with a glTexImage2D() preceding each of them. The sides of the can are produced with a GL_QUAD_STRIP, split into two sections since the sides of the can are split into two images because of the 64x64 size restriction. Using the same vertices as we used in the top and bottom of the can, we simply create two quad strips that wrap around the can.
We can see that, since each side of the can has five vertices (the edges are shared), each vertex moves along the texture by 0.25. By specifying the texture coordinates along with the cylinder coordinates, the image will wrap itself around the can as it sticks itself to the specified vertices. Compiling this should give you a fully texture-mapped, enclosed can. The complete source code for this can be obtained here.

There are a few things we could do to make this more realistic. We could taper the top and bottom edges of the can so that the model matches the real thing more closely, rather than using a simple cylinder. We could also use higher-resolution images (splitting them up when necessary) so that the can looks better when enlarged, mipmapping down when the can is smaller.

Another texturing trick that is frequently used is to make part of the texture transparent so that the viewer can see through it to the scene behind (much like a transparent GIF). This is done because many images are irregular in nature, and trying to fit polygons to the shape would prove difficult. One such shape is a tree (perfect for Christmas!).

Oh Christmas Tree...

One easy way to make a tree is out of two polygons intersecting at 90 degrees. The same tree image is then mapped onto both polygons. Since the polygons cross through each other, you can look at the tree from any angle and still see the shape of a tree; the polygons are never edge-on to the viewer and therefore never appear to disappear.
Setting this up is done in almost the same way as for the pop can. We have two texture maps, one for the top half of the tree and one for the bottom half. We then draw two polygons with a 2:1 size ratio that intersect at 90 degrees (actually four polygons, because the tree is split into an upper and a lower image). The only difference is that we want to designate one colour as a transparent colour. This can be done with the use of what is called the alpha buffer, the fourth component of an RGBA colour.

Beyond, into the Fourth Dimension

Up until now we have largely been ignoring the alpha parameter when specifying a colour. When an alpha parameter is not explicitly specified, such as when we set a colour with the command glColor3f(), OpenGL converts this 3-parameter RGB colour into a 4-parameter RGBA colour by setting the alpha component to 1.0. An alpha value specifies how the current colour is blended with the colour that is already in the frame buffer (i.e. this only takes effect when objects overlap). If a pixel has an alpha value of 1, its colour simply replaces whatever colour may already be there. If the value is less than 1, then that percentage of the colour is mixed with what is already there. For example, let's say we are drawing a scene that has a window in it. When we specify the glass colour, we could also specify an alpha value of, say, 0.2, which would result in a final colour of 20% glass and 80% the colour of the scene behind the glass. You could then easily overlap pieces of glass, which would let even less of the distant scene through. For our use we will simply either show a colour or not show it, the latter making it completely transparent. By default, alpha testing is turned off and must be enabled. You can also specify how the alpha test calculations are performed with the glAlphaFunc() command. This command takes two parameters:
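The mixing described above is a simple weighted average per colour channel; a sketch of the arithmetic (the blend itself happens inside OpenGL, this is just the formula):

```cpp
// Mix an incoming colour channel with what is already in the
// frame buffer: alpha of the incoming colour, (1 - alpha) of
// the existing one.
double blendChannel( double src, double dst, double alpha )
{
    return src * alpha + dst * ( 1.0 - alpha );
}
```

With alpha = 0.2, a glass channel of 1.0 over a background of 0.5 gives 0.2 + 0.4 = 0.6, i.e. 20% glass and 80% scene.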
// enable alpha test for transparency
glAlphaFunc( GL_GREATER, 0.5 );
glEnable( GL_ALPHA_TEST );

We also have to modify our readBitmap() function so that it returns RGBA quadruples instead of RGB triples. The only differences are in the for() loop, which now converts the BGR data into RGBA. The code snippet is:

char *imageBuf = new char[64*64*4];

// reset data back to RGBA from BGR
for( int i=0, j=0; i<64*64*3; i+=3, j+=4 )
{
   imageBuf[j] = buffer[28+i];
   imageBuf[j+1] = buffer[27+i];
   imageBuf[j+2] = buffer[26+i];
   if( imageBuf[j]==255 && imageBuf[j+1]==255 && imageBuf[j+2]==255 )
      imageBuf[j+3] = 0;
   else
      imageBuf[j+3] = 255;
}
delete [] buffer;
return( imageBuf );

There is only one more change to make. Remember in the previous example when we told OpenGL which texture map we had selected? We specified that our texture map consisted of three unsigned bytes forming an RGB triplet. We have to change this to reflect the fact that we are now also giving it alpha values. Where before we had:

glTexImage2D( GL_TEXTURE_2D, 0, 3, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, (void *)canTop );

we now change this to:

glTexImage2D( GL_TEXTURE_2D, 0, 4, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void *)treeTop );

The polygon and texture coordinates are as follows:

// draw the tree top in two directions
glTexImage2D( GL_TEXTURE_2D, 0, 4, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void *)treeTop );
glBegin( GL_QUADS );
   glTexCoord2f( 0.0, 0.0 );
   glVertex3f( -1.0, 0.0, 0.0 );
   glTexCoord2f( 1.0, 0.0 );
   glVertex3f( 1.0, 0.0, 0.0 );
   glTexCoord2f( 1.0, 1.0 );
   glVertex3f( 1.0, 2.0, 0.0 );
   glTexCoord2f( 0.0, 1.0 );
   glVertex3f( -1.0, 2.0, 0.0 );

   glTexCoord2f( 0.0, 0.0 );
   glVertex3f( 0.0, 0.0, -1.0 );
   glTexCoord2f( 1.0, 0.0 );
   glVertex3f( 0.0, 0.0, 1.0 );
   glTexCoord2f( 1.0, 1.0 );
   glVertex3f( 0.0, 2.0, 1.0 );
   glTexCoord2f( 0.0, 1.0 );
   glVertex3f( 0.0, 2.0, -1.0 );
glEnd();

// draw the tree bottom in two directions
glTexImage2D( GL_TEXTURE_2D, 0, 4, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void *)treeBottom );
glBegin( GL_QUADS );
   glTexCoord2f( 0.0, 0.0 );
   glVertex3f( -1.0, -2.0, 0.0 );
   glTexCoord2f( 1.0, 0.0 );
   glVertex3f( 1.0, -2.0, 0.0 );
   glTexCoord2f( 1.0, 1.0 );
   glVertex3f( 1.0, 0.0, 0.0 );
   glTexCoord2f( 0.0, 1.0 );
   glVertex3f( -1.0, 0.0, 0.0 );

   glTexCoord2f( 0.0, 0.0 );
   glVertex3f( 0.0, -2.0, -1.0 );
   glTexCoord2f( 1.0, 0.0 );
   glVertex3f( 0.0, -2.0, 1.0 );
   glTexCoord2f( 1.0, 1.0 );
   glVertex3f( 0.0, 0.0, 1.0 );
   glTexCoord2f( 0.0, 1.0 );
   glVertex3f( 0.0, 0.0, -1.0 );
glEnd();

Now we should have a nice rotating tree in our scene. You can download the completed code here. If you want to see what the texture mapping looks like without the transparency calculation, simply comment out the glEnable( GL_ALPHA_TEST ) line. Now all that remains is to decorate the tree! I will leave that up to you.
I hope you enjoyed our first crack at texturing. We have only scratched the surface of the different effects possible with texturing, and I hope that you will do some experimenting on your own. Next month we will delve a bit deeper into texturing and try out a few more of the special effects we can do with it. Until then, if there is something that you would like me to demonstrate with texturing, or if there is an OpenGL topic that you would like me to cover in general, just drop me a line. Happy Holidays!