OpenGL on OS/2 - Questions and Answers


Written by Perry Newhook

Introduction

I hope you liked last month's column. Advanced effects such as texturing and blurring are easy to do, but they underline the need for hardware-accelerated 3D graphics. Fortunately, I have some good news in this area that I will get to later in the column. And now for some bad news: due to an unfortunate laptop accident I lost my development environment for a few weeks, leaving me with too little time to develop code for this month's column. (Remember the importance of proper backups, everyone!) Instead, I will discuss some problems that people have emailed me about or that have come up on the OpenGL mailing list. Luckily I did not lose any data, so things should be back to normal in time for next month's column.

First some news: As was reported on several of the OS/2 news sites, including WarpCast, the AIX group transferred the code for an accelerated OpenGL toolkit to the OS/2 group just before December. As the developer states, there is a lot of code, and he doesn't expect the toolkit to be ready for release for several months, but at least it is being worked on. Expect the card manufacturers to take a few months on top of that for their own development. Hopefully there will be a large number of 3D accelerated cards in people's Christmas stockings next year.

Another newsworthy item is Snow Storm Software's release of version 2 of their popular screensaver EscapeGL. For those of you who haven't yet seen what EscapeGL can do, go to the Snow Storm Software site and check out this impressive piece of software that really shows off what your system is capable of.

Ok, now for the frequently asked questions:

Q: Why is my pglChooseConfig() function failing?

A: The pglChooseConfig() function specifies the minimum configuration that we want, not the exact configuration. If there exists a better configuration than the one we specified, we will get the better one. The most common reason for pglChooseConfig() to fail is that we asked for a better configuration than any that is available. Let's say we asked for the following configuration:

   // create a visual config: RGBA mode, at least 4 bits each of
   // red, green and blue, double buffered; the 0 terminates the list
   int attribList[] = { PGL_RGBA, PGL_RED_SIZE, 4, PGL_GREEN_SIZE, 4,
             PGL_BLUE_SIZE, 4, PGL_DOUBLEBUFFER, 0 };
   PVISUALCONFIG pVisualConfig = pglChooseConfig( hab, attribList );

In the above we asked for four bits each of red, green and blue, for a total of twelve bits. If we ran this on a screen configured for 16-bit colour, it would succeed (and we would get an RGB configuration something like 5-6-5). If we ran it on a screen configured for 256 colours (i.e. 8 bits), it would fail because the number of requested bits exceeds the available bit depth of the screen. If instead we had written the code as:

   // create a visual config asking for only 2 bits each of
   // red, green and blue
   int attribList[] = { PGL_RGBA, PGL_RED_SIZE, 2, PGL_GREEN_SIZE, 2,
             PGL_BLUE_SIZE, 2, PGL_DOUBLEBUFFER, 0 };
   PVISUALCONFIG pVisualConfig = pglChooseConfig( hab, attribList );

it would work in all bit depths, and we would still get the full colour depth that is available. This is actually one of the benefits of writing an OpenGL program: over Christmas I had the unfortunate experience of trying to install two DirectX (Win95) applications on a relative's computer. One REQUIRED 640x480 at 256 colours, and the other REQUIRED anything but 256 colours. Needless to say, flipping between the different resolutions was a pain in the butt. This should never happen with an OpenGL application.

Q: Why is my pglMakeCurrent() function failing?

A: Assuming that you got a valid configuration back from pglChooseConfig() and pglCreateContext() gave you a valid graphics context handle (hgc), pglMakeCurrent() will generally fail only when the window handle passed to it is invalid, or when the dimensions of the window are zero. The window dimensions will be zero if the window has never been made visible, such as immediately after it is first created. The procedure that must be followed is (sketched in code after the list):

  1. create the window
  2. make it visible
  3. create the OpenGL context and attach it to the window client
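
Here is a minimal sketch of that ordering. It assumes hab, hwndFrame and hwndClient come from the usual WinCreateStdWindow() setup, that pVisualConfig came back from pglChooseConfig() as shown above, and that the PGL header lives where your toolkit normally puts it; treat it as an outline, not a drop-in implementation.

   #define INCL_WIN
   #include <os2.h>
   #include <GL/pgl.h>   // assumed location of the PGL header

   // Attach a direct OpenGL context to an already-created window.
   // Returns FALSE if the context could not be created or bound.
   BOOL attachContext( HAB hab, HWND hwndFrame, HWND hwndClient,
                       PVISUALCONFIG pVisualConfig, HGC *phgc )
   {
      // step 2: show the window first, so that the client area has
      // non-zero dimensions when the context is attached
      WinShowWindow( hwndFrame, TRUE );

      // step 3: create a direct context (no share list, TRUE = direct)
      // and make it current on the client window
      *phgc = pglCreateContext( hab, pVisualConfig, (HGC)0, TRUE );
      if ( *phgc == (HGC)0 )
         return FALSE;
      return pglMakeCurrent( hab, *phgc, hwndClient );
   }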

Q: I just upgraded to OpenGL v1.1 and now none of my apps that worked before will run

A: If you installed OpenGL when you installed OS/2, or installed it with the selective install, chances are that some of the OpenGL DLLs are in the \OS2\DLL directory. If you then install OpenGL v1.1 in a different directory, you could end up mixing different versions of DLLs, which will almost certainly cause a problem. What I have found to be the best way to install OpenGL and keep both versions around is to create two directories and unzip each copy into its own directory (for example, you would end up with an \OPENGLV1.0 directory and an \OPENGLV1.1 directory). Also remove all OpenGL DLLs from the \OS2\DLL directory and anywhere else they might be hiding. In this manner you can easily switch back and forth between the two versions just by changing the LIBPATH in your config.sys. Why would you want to do this, you ask? Well, for many applications the v1.0 DLLs run a lot faster than the v1.1 ones, but some applications may require the advanced features that v1.1 offers. Another thing to keep in mind is that a program compiled with the v1.0 libraries will not run under the v1.1 DLLs, but an application compiled with the v1.1 libraries will run under both. The way I have my system set up is that I compile using the v1.1 libraries, which gives me full compatibility, but I run using the v1.0 DLLs, which give me the higher performance.
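
For example, with the directory layout above, switching versions is just a matter of which OpenGL directory appears in the LIBPATH statement in config.sys. The drive letter and the assumption that the DLLs sit directly in those directories are mine; adjust to wherever you actually unzipped them:

   rem use the v1.0 DLLs (substitute OPENGLV1.1 here to switch to v1.1)
   LIBPATH=.;C:\OPENGLV1.0;C:\OS2\DLL;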

Q: I'm using C to program OpenGL and not VisualAge C++, so what the heck do I do with the WM_PAINT message?

A: There is no one right way to do this, as the approach you take depends entirely upon the effect and the performance that you want. To answer this I will outline a variety of methods and specify what each is useful for.

One thing to keep in mind is that if you created a direct context in pglCreateContext() (the preferred method, as it is much faster than indirect), OpenGL does not need the paint routines to draw into the window. OpenGL simply calculates the position of the window and draws the rendering directly into video memory. This is similar to how DIVE draws on the screen, although OpenGL does not use the DIVE engine itself. You still need to handle the WM_PAINT message, however, because you need to stop the normal painting process from drawing over what you have rendered. To do this, still call the WinBeginPaint()/WinEndPaint() pair, but do not call WinDefWindowProc() at the end. With no commands between WinBeginPaint() and WinEndPaint(), no drawing is performed, but the update rectangle gets cleared.
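
In a window procedure, that empty handler looks something like this (a sketch following the usual PM conventions):

   case WM_PAINT:
   {
      // claim and clear the update region, but draw nothing; OpenGL
      // renders directly to the screen, so all we need to do here is
      // stop PM from painting over it
      HPS hps = WinBeginPaint( hwnd, NULLHANDLE, NULL );
      WinEndPaint( hps );
      return (MRESULT)0;   // note: no WinDefWindowProc() call
   }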

Method 1: The most obvious place to put the GL rendering commands is within the WM_PAINT routine itself. With this method, every time OS/2 decides the window needs painting, the paint routine is called and your scene gets rendered. However, this by itself is only good for a static 3D image, as your scene would only change when something forced a repaint. If you want fixed-rate animation, you can set up a timer that forces a repaint of the window at a set interval. This method is fine if you know exactly how long your rendering will take, or if you don't want your frame rate to exceed an upper limit. One problem with this method is that if another window obstructs your render window and causes a repaint, you will end up with an extra frame being rendered. Another problem arises when the rendering is complex, taking a second or more to complete: since the rendering takes place on the main message thread, it will suspend user interaction until the rendering is finished, because the system is still busy processing the last WM_PAINT message.
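
A sketch of this timer-driven variant follows. renderScene() is a hypothetical function holding the gl drawing calls, the timer id and 50 ms interval are arbitrary choices, and hab is assumed to be available to the window procedure (e.g. from WinQueryAnchorBlock()):

   case WM_CREATE:
      // fire roughly twenty times a second
      WinStartTimer( hab, hwnd, 1, 50 );
      break;

   case WM_TIMER:
      // force a repaint; the actual rendering happens in WM_PAINT
      WinInvalidateRect( hwnd, NULL, FALSE );
      break;

   case WM_PAINT:
   {
      HPS hps = WinBeginPaint( hwnd, NULLHANDLE, NULL );
      renderScene();                  // all the gl* drawing calls
      pglSwapBuffers( hab, hwnd );    // show the finished frame
      WinEndPaint( hps );
      return (MRESULT)0;
   }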

Method 2: A variation of the above is to put all of the rendering code in the WM_TIMER case instead. This is possible because, as I stated before, an OpenGL direct context bypasses the OS/2 painting routines. You still need the WinBeginPaint()/WinEndPaint() pair in the WM_PAINT routine to prevent OS/2 from trying to clear the section of screen that you are rendering into. This method has all of the benefits and drawbacks of the previous one, except that you won't get extra frames if the user forces a repaint.
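
The only change from the previous sketch is that the drawing moves into the timer handler, while WM_PAINT stays empty as shown earlier:

   case WM_TIMER:
      renderScene();                  // hypothetical render function
      pglSwapBuffers( hab, hwnd );
      break;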

Method 3: Probably the most flexible way of rendering, but also initially the hardest, is to create a separate thread and do all of your rendering in it. This has many benefits that outweigh the small bit of added complexity. First, because the thread is rendering continuously in the background, your computer will always be rendering at its fastest rate, giving you the best chance at high frame rates. Second, because the work is on a separate thread not associated with your message thread, long render times will not affect the perceived performance when the user tries to access things like menu selections. With the previous methods, the program could only process a new message, such as a menu selection, once it had finished with the last message, which could have been the paint message. With this method there is no delay when the user selects something from the menu, so the perceived performance is much higher, and your users much happier.

There are some caveats to be aware of if you try this. The first is that since your render thread runs continuously, immediately calculating a new scene once the previous one is complete, you must give it a lower priority than your message thread. Ideally, to avoid impacting system performance, you would set the priority to the lowest level in the system (IDLE-0). You also have to remember that it is the thread that calls the OpenGL routines that needs all of the stack space. If you have been increasing the stack size in the linker options, you have to change this, as the stack specified in the linker is only for thread 1, your main message thread.
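
Starting such a thread might look like the following sketch. renderThread() is a hypothetical function containing the render loop, the 32K stack size is just an example, and the _beginthread() signature shown is the VisualAge C++ one (other compilers differ):

   #define INCL_DOSPROCESS
   #include <os2.h>
   #include <stdlib.h>   // _beginthread() in VisualAge C++

   // start the render thread with its own 32K stack...
   TID tid = _beginthread( renderThread, NULL, 0x8000, NULL );

   // ...and drop it to idle priority (IDLE-0) so it never starves
   // the message thread
   DosSetPriority( PRTYS_THREAD, PRTYC_IDLETIME, 0, tid );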

The OS/2 version of OpenGL as it exists today is not thread-aware. Therefore, if you are separating the rendering out into its own thread, to be safe ALL of the gl calls (and pgl calls, and so on) should be in that thread. This restriction does create some logistical problems. For example, when your window procedure detects that a resize has just happened, it cannot call the required gluPerspective() and glViewport() commands, because the window resize routine is not running in the rendering thread. One way to get around this is to set a flag when the window procedure detects a resize. After each frame, the render thread checks whether this flag is set, indicating a resize is requested, and performs the appropriate resizing calls. In this manner, you can pass information from your window procedure to the OpenGL render thread to tell it what to do.
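
A sketch of that flag handshake, with hypothetical variable names (volatile so the render thread reliably sees the window procedure's updates):

   // shared between the window procedure and the render thread
   static volatile BOOL resizePending = FALSE;
   static volatile LONG newWidth, newHeight;

   // in the window procedure:
   case WM_SIZE:
      newWidth  = SHORT1FROMMP( mp2 );   // new client width
      newHeight = SHORT2FROMMP( mp2 );   // new client height
      resizePending = TRUE;
      break;

   // in the render thread, once per frame:
   if ( resizePending )
   {
      resizePending = FALSE;
      glViewport( 0, 0, newWidth, newHeight );
      glMatrixMode( GL_PROJECTION );
      glLoadIdentity();
      gluPerspective( 45.0, (double)newWidth / newHeight, 1.0, 100.0 );
      glMatrixMode( GL_MODELVIEW );
   }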

Notice above that I said that to be SAFE you should separate the gl calls out into their own thread. I didn't say that it wouldn't work if we ignored this rule. In fact, once the context is created successfully, any thread can make OpenGL calls, usually without a problem. The key word here is usually: call at the wrong time and you will get anything from a minor render glitch in a single frame to a complete application crash. Remember that the implementation is not thread-aware; it was not designed to be called by multiple threads. Those of you who are more adventurous can experiment to find out when you can and cannot call gl functions from multiple threads.

Conclusion

Next month things should be back to normal and we will hop right into the thick of OpenGL programming. Coming up in the next few months, look for how to render anti-aliased text and graphics, and the long-lost art of creating realistic, texture-map-quality objects without having to deal with the texture mapping routines (and their overhead).