Gearing Up For Games - Part 2

Written by Michael T. Duffy

Introduction
Welcome back to Gearing Up For Games. Last time we looked into how to use DIVE to blit images to the screen, and then developed a canvas class to make accessing the DIVE buffer easier. This time we will look at how palettes are handled under OS/2, and at decoding and displaying the common PCX bitmap format on our canvas object. This article builds on the previous one, so the canvas class and DIVE code from the last article will be used in this (and future) articles. Eventually all this code will go together to create a simple game, and the concepts will allow you to branch out and write meatier games that run natively under OS/2. Now, without further ado...

Palettes
Before I start my discussion of palettes, I must call attention to an article written for EDM/2 way back in issue 1-1. Raja Thiagarajan wrote an unofficial guide to the palette manager, and I think it may have even been Raja who introduced me to EDM/2 in the first place (Thanks!). I would suggest that you read his article in addition to this one, because it covers some aspects of the palette manager which I will not touch on.

I would also like to thank the many people on the Internet who helped me out when I was first learning about palettes. Palettes under OS/2 drove me crazy for several months before I could finally control them to the point that they did what I wanted them to do. Even now they occasionally surprise me; in preparing for this article I discovered that some of the palette code I had been using for a long time would only display a black window in 256 color mode, whereas it worked fine in the 32k color mode I normally use! Just when I think I completely understand palettes, they surprise me with something like this.

With the information in this article, the article in EDM/2 issue 1-1, and the API specifications in the manuals, you should have enough information to experiment with palettes. The approach to palettes that I present here seems to work best for games that use DIVE, though there may be some subtle palette tricks that I am unaware of and thus don't cover. Palettes are tricky to use, though, so be certain to approach any experimentation with lots of patience; otherwise you may well be driven insane.

One final note before we begin: This article makes the assumption that our application will display a 256 color image in our application's window, and use the DIVE API to do it. DIVE is capable of displaying images with more than 256 colors, but for simplicity's sake this series of articles will not cover those situations.

What is a Palette?
A palette is an array that tells PM what colors to use when displaying an image. A 256 color bitmapped image has one byte for each pixel, and this byte tells what color to display that pixel in. A palette is used to convert that byte into the color information the display hardware needs to actually put the color on the monitor. Palettes can have up to 256 entries. Each entry describes one color, and the color information is stored in three bytes, one each for the component colors red, green, and blue. As they are stored in bytes, each of these component colors can have a value ranging from 0 to 255. Using varying amounts of these three components, any other color can be described. A bright red would have Red set to 255 and both Green and Blue set to zero. Black is achieved when all three are set to zero, and white results from all three being set to 255.
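
The idea can be sketched in a few lines of C++. This is only an illustration with made-up names (PaletteEntry, aPalette, SetExampleColors), not the format OS/2 itself expects, which is covered below:

```cpp
#include <cassert>

// One palette entry: a color described by its red, green, and blue
// components, one byte each (0-255). Names are illustrative only.
struct PaletteEntry
{
   unsigned char byRed;
   unsigned char byGreen;
   unsigned char byBlue;
};

// A full palette is an array of up to 256 such entries.
PaletteEntry aPalette[256];

// Describe a few colors exactly the way the text does.
void SetExampleColors()
{
   aPalette[0] = {   0,   0,   0 };  // black: all components zero
   aPalette[1] = { 255,   0,   0 };  // bright red
   aPalette[2] = { 255, 255, 255 };  // white: all components at maximum
}
```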

All palette activities for an OS/2 program are handled through the palette manager. The palette manager is part of the OS/2 API, and is accessed through API function calls. A palette manager is needed in a GUI environment because multiple programs may be running at the same time, and each program may have different color needs. The palette manager tries its best to meet these needs.

However, the palette manager is restricted by hardware limitations. Many common video card color modes can display a maximum of 256 colors at the same time. Although each of these colors may be selected from among 262,144 colors (256k colors), the graphics card has a physical maximum of 256 color registers to hold the colors. This has led to the development of two different kinds of palettes: physical and logical.

The physical palette is the palette set in these hardware registers. Under OS/2, you can not set the physical palette directly. The palette manager will always determine what colors go in the physical palette, and in what indices these colors are placed. You can read the physical palette with GpiQueryRealColors, but you cannot set it.

[Note: There is one exception to the above. In dive.h a comment explains a way to set the physical palette with undocumented calls. I would urge everyone to avoid this method, though, since it is unknown what that code would do in a non-paletted video environment (such as the 32k and 64k color modes), and this method may not be supported on different hardware (such as the PowerPC, or future Intel based hardware). I have not experimented with the undocumented code in dive.h because of the lack of information on the undocumented calls, and because I have been able to achieve the results I need through the palette manager, without compromising the integrity of the desktop.]

Although programs cannot access the physical palette directly, they can have their own logical palettes. A logical palette is set up in the same format as a physical palette. The logical palette is sent to the palette manager, and whenever your window has focus, the palette manager will change the physical palette to match the logical palette as closely as possible. For the colors that do not match the physical palette exactly, the palette manager will decide the closest match and use that instead. When you request that color 4 be drawn to the screen, the palette manager will look at color 4 in the logical palette, decide what color it most closely matches in the physical palette, and send that color out to the display hardware.

By approaching palettes this way, several applications can share the scarce palette entries at the same time. The palette manager will always try and fulfill the needs of the palette in the active window. Applications whose windows are not active will still have their colors remapped to the closest palette entry, but the palette manager will not use the inactive palettes to determine the contents of the physical palette. Note that remapping is done when an image is drawn to the screen, such as with DiveBlitImage. If the display is in a 256 color mode, the physical palette changes, and a window is not redrawn, then its colors will be wrong.

But what about graphics modes with more than 256 colors? Many users of OS/2 have their displays set to modes that offer 32k, 64k, or even 16.7 million colors. In these cases, there are no color registers to fill. If two applications request palettes of 256 different colors each, both requests can be met without any problem. In these cases there is no physical palette to set, but a logical palette is still required: the system must know what color each of the 256 palette entries in our source bitmap is to be displayed as when it converts each pixel to a 15, 16, or 24 bit color value.

How to Use Palettes Under OS/2
Our goal is to use palettes in conjunction with the DIVE API in order to display images in a window. Although DIVE has the calls DiveSetSourcePalette and DiveSetDestinationPalette, neither of these calls affects the physical palette in any way. These calls are used to tell DIVE what logical palette it is mapping from and what physical palette it is mapping to. To change the physical palette in 256 color video modes, you must request that the palette manager remap the physical palette as much as possible to a specified logical palette.

The steps for the creation and activation of a palette are as follows:
 * 1) Query the device context of your client window with WinOpenWindowDC.
 * 2) Create a presentation space from the device context with GpiCreatePS. You cannot use a cached-micro PS (from WinGetPS). We will use a micro PS.
 * 3) Create an array that describes your palette. (The format for this array is described below.) Use the PC_RESERVED flag to obtain as many of your colors as possible.
 * 4) Create an OS/2 palette from this array and obtain a handle to the new palette with GpiCreatePalette.
 * 5) Select the palette into the presentation space with GpiSelectPalette.
 * 6) Make the palette active ("realize it") with WinRealizePalette.
 * 7) Set the palette DIVE will use for your source bitmap by passing your original array to DiveSetSourcePalette.
 * 8) Tell DIVE what the physical palette is, either by passing the result of GpiQueryRealColors to DiveSetDestinationPalette, or by calling DiveSetDestinationPalette(hDive, 0, 256, 0).
 * 9) Blit your image.

During normal processing you have a few tasks to perform to maintain the palette:
 * 1) On WM_REALIZEPALETTE messages, inform DIVE that the physical palette may have changed. Do this with either method described in step 8, though simply calling DiveSetDestinationPalette(hDive, 0, 256, 0) is the easiest.
 * 2) On a WM_ACTIVATE message, if your application is losing focus, free up your palette so that other applications may have as many of their colors mapped into the physical palette as possible. Do this by selecting a null palette into the presentation space with GpiSelectPalette, and then activating the null palette with WinRealizePalette.
 * 3) On a WM_ACTIVATE message, if your application is gaining focus, select and realize your desired palette again so that the physical palette will be remapped more closely to your logical one. Use GpiSelectPalette and WinRealizePalette to achieve this.

When you are done using the palette and your game is ready to end:
 * 1) De-select your palette from the presentation space by selecting a null palette with GpiSelectPalette.
 * 2) Destroy the palette with GpiDeletePalette.
 * 3) Destroy the presentation space with GpiDestroyPS.

The OS2Palette object
Many of the above steps can be combined because they always occur one after another, such as GpiSelectPalette and WinRealizePalette. Also, we may have to work with palettes other than OS/2 palettes; an example of this is the palettes stored in compressed bitmap files. To simplify palette operations we will use a palette object that hides these repetitive tasks from us. To learn about the specific implementation of any of the tasks provided by the palette object, I encourage you to look at the source code for the object in os2pal.cpp in [code.zip].

The following functions are those provided by the OS2Palette object. They do not cover every single aspect of palettes, but they provide an illustration of the major points. Also, they provide all the functionality we need for our simple game.


 * Default16: sets the palette to the first 16 colors of a standard VGA palette. This can be done for initialization and testing purposes. This function illustrates how to organize and set a default palette.

 * Convert: converts the internal palette from one format to another. The two formats implemented in this code are plain 8-bit RGB palettes and OS/2 palettes. The former is what you might find in a compressed bitmap (we will use it in the PCX codec), and the latter is what you will pass to OS/2 and DIVE.

 * InitSystemPalette: converts the internal palette to the OS/2 format, creates an OS/2 palette, selects it into the supplied presentation space, and realizes it.

 * UninitSystemPalette: de-selects the internal palette from the presentation space, realizes a null palette, and destroys the palette. This routine is called on cleanup.

 * SuspendSystemPalette: de-selects the palette from the presentation space and realizes a null palette, but does not destroy the palette. This is meant to be used on a WM_ACTIVATE message. Call RestoreSystemPalette to reactivate the palette.

 * RestoreSystemPalette: restores the palette suspended by SuspendSystemPalette.

 * SetAsDiveSource and SetAsDiveDestination: these routines are used to tell DIVE about the palette. You will usually have at least two palette objects, one for the DIVE bitmap and one for the screen. If you choose to always set the DIVE destination palette with DiveSetDestinationPalette(hDive, 0, 256, 0), then a palette for the screen is not needed.

 * LoadActualPalette: sets the palette to the physical palette.

 * SetValuesFromRaw8: sets a range of values from an array of 8-bit RGB values.

 * PaletteFromBlock, RequestSaveBlock, and ReleaseSaveBlock: these three functions package or unpackage a palette into a single block of data. These routines will be used in the future when we look at storing data in a single library file. Currently they may be used to load and save a palette to disk.

 * SetFlag and QueryFlag: these inline functions set and get the value that is copied into the flag position of the colors in an OS/2 format palette.

 * QueryPaletteArray: returns a pointer to the internal array that contains the palette. The format of the array depends on the format set with Convert. One use of this routine would be to supply palette information to a bitmap encoding routine. First, Convert would be used to set an 8-bit palette. Then this routine would be used to get a pointer to the palette information to send to the encoding routine.

 * QueryLastErrorCode: returns the code of the last error that occurred. This error code is specified in errcodes.hpp and the associated message is stored in game2.msg. The error code can be used with PostError, a routine found in errdisp.cpp. Both files are in [code.zip].

Palettes for Multiple Child Windows
Sometimes I am very glad that I don't keep a rocket launcher on hand near my computer. Recently, I'm afraid, I would have had to blow my computer into tiny, unrecognizable pieces, considering how much grief it has caused me. After months of being bothered by the same nagging palette problem, I finally discovered a solution.

The problem arises when you have two child windows, both of which are using DIVE to blit an image to their client areas. I ran across this situation while I was writing editors and other utilities for my own game, though such a situation could very well arise in a game with multiple windows.

In order to get the same palette to display correctly in both windows, implement a palette as described above in the "How to Use Palettes Under OS/2" section. The various steps will be split amongst the parent and child windows as follows:
 * 1) The HDC and the HPS for the palette should be derived from the parent window. The palette should be created, selected, realized, and eventually destroyed from the parent window. The parent window will handle suspending and restoring the system palette on WM_ACTIVATE messages.
 * 2) An HDC and HPS should also be created for each child window, and these handles will be used to set up the DIVE blitter among other things. No palettes will be selected into the child windows' HPS's, however.
 * 3) Each child window should have its own instance of DIVE.
 * 4) Instead of setting the DIVE source and destination palettes in the parent window, send messages to the child windows and have them perform these tasks. When you send the message, pass a pointer to the palette array as one of your parameters. Make sure this array is in the OS/2 palette format.
 * 5) Do not use the shortcut of DiveSetDestinationPalette(hDive, 0, 256, 0). You will have to use GpiQueryRealColors, and then pass a pointer to the returned palette array to each of the child windows so that they can call DiveSetDestinationPalette with this info. When you need to update the destination palette, only call GpiQueryRealColors once and pass the same array to both child windows. Every time the parent window receives a WM_REALIZEPALETTE message, it must query the real palette and pass the array to each of the child windows. The child windows should not process the WM_REALIZEPALETTE message.

Performing the above steps should allow the same palette to be used for two separate child windows.

Graphics Formats
When graphics are stored to disk, they are usually saved in a special format rather than just dumping memory to a disk file. These formats contain additional information about the graphic such as its size, number of colors, palettes, and other types of info. The graphic data is also usually compressed in order to save disk space. Compression/decompression algorithms are usually called codecs, and each graphic format usually has its own codec which is different from the others. Codecs differ in their compression ratio, speed, and quality. Certain codecs throw out some of the original information during compression, and then try to guess what is missing when they decompress the image. These are called lossy codecs because they lose some of the data. JPEG is an example of a format with a lossy codec. Lossless codecs generate the same image upon decompression as was given to them for compression. Popular formats with lossless codecs include GIF, PCX, TIF, and PNG.

Our code will deal with PCX files. PCX files have the advantage of being supported natively by many paint programs, both shareware and commercial. This format was originally developed by ZSoft for their paint program PC Paintbrush, but it was adopted by many other programs as well due to its ease of use. PCX files support monochrome, 16 color, and 256 color images, though we will only concern ourselves with the monochrome and 256 color capabilities. PCX files are also good because their compression scheme is very simple to understand. It is not a compression scheme that works well for all graphics, though, and many scanned images would wind up larger than they started if the PCX scheme were used on them. Nevertheless, it will suit our needs nicely.

As far as implementation in our sample code is concerned, the PCX encoding/decoding object is derived from a parent class called BitmapPainter. This allows us to treat different graphic formats in the same way. The same calls are used to setup, compress, and decompress graphics no matter what format in which they are stored. Although we will only implement code to deal with PCX files, you could write code to support other formats without having to change much code in programs that already use BitmapPainter-derived objects.

Also, our PCX display code will use the Canvas class to hold the decompressed image. The Canvas class was covered in the first article of this series.

On to PCX!
PCX files have three parts: the header, the image data, and the palette. Monochrome images only have a header and the image data, since all pixels are either black or white.

The Header
The header of the PCX looks like this:

   typedef struct
   {
      BYTE   byManufacturer;     // always 0x0a
      BYTE   byVersion;          // version number
      BYTE   byEncoding;         // always 1
      BYTE   byBitsPerPixel;     // color bit depth
      USHORT usXmin;             // coordinate of left side of image
      USHORT usYmin;             // coordinate of top of image
      USHORT usXmax;             // coordinate of right side of image
      USHORT usYmax;             // coordinate of bottom of image
      USHORT usHres;             // horizontal resolution of creation device
      USHORT usVres;             // vertical resolution of creation device
      BYTE   abyPalette [48];    // color palette for 16 color images
      BYTE   byReserved;
      BYTE   byColorPlanes;      // number of color planes in the image
      USHORT usBytesPerLine;     // line buffer size
      USHORT usPaletteType;      // gray or color palette
      BYTE   abyFiller [58];
   } PCXHEADER;

The header is exactly 128 bytes long, and is always located at the very beginning of a PCX file. A more detailed meaning of each field is as follows:
 * Manufacturer: always 0x0a. Checking this byte is the only way to really tell whether or not a file is a PCX file.
 * Version: this field tells what version of PC Paintbrush created the file. A zero indicates PC Paintbrush version 2.5. A two or three signifies version 2.8: a two means that the abyPalette field contains palette information, whereas a three means that the image is either monochrome or meant to be displayed with the default palette of the device. A five indicates that the PCX came from version 3.0 or better, and this is the only version that can support 256 color images.
 * Encoding: this is always 1, though the field was put here in case ZSoft wanted to add a different kind of codec to the PCX specification. A 1 signifies that the image uses a Run-Length Encoding (RLE) codec.
 * BitsPerPixel: this tells how many bits are used to make a single pixel in a single color plane. Monochrome images only have black and white pixels, so they have 1 bit per pixel. 256 color images have 8 bits per pixel since they need entries from 0 to 255. 16 color images need 4 bits to generate a number between 0 and 15; however, 16 color images are stored in four color planes, so they only have one bit per pixel in each color plane. Monochrome and 256 color images only have a single color plane, so their bits per pixel value is a count of their total bits. Color planes are explained below.
 * Xmin, Ymin, Xmax, and Ymax: these values give the boundaries of the image. Usually Xmin and Ymin will be equal to zero. These coordinates represent the origin, which is located in the top left hand corner of the image. Xmax and Ymax give the right and bottom edges respectively. These coordinates are inclusive: if Xmin is 0 and Xmax is 639, a row of this image runs from pixel 0 to pixel 639 inclusive, for a width of 640 pixels. Therefore when computing the width or the height from these values, be certain to add 1 to get the correct answer (i.e. usWidth = usXmax - usXmin + 1).
 * Hres and Vres: these values give the dimensions of the hardware that generated the image, and are basically useless.
 * Palette: this array of bytes contains a palette for 16 color images. PCX palettes have a red, green, and blue byte for each color, so 3 color bytes times 16 colors leads to the 48 byte length of this array. The PCX format was developed before 256 color images were available on the PC, so its designers did not leave room for a 256 color palette. To solve this problem, the palette for 256 color images was tacked onto the end of the PCX file, and this array was not used. Since we will not be using 16 color images, we will ignore this field.
 * ColorPlanes: this tells how many color planes are used for the image. Color planes are used because of the way the hardware is set up in EGA and VGA cards for the 4 and 16 color modes. A pixel in these modes has one bit in each bit plane instead of having its bits side by side as in 256 color mode. The bits in each of the bit planes are combined to get the color index of a single pixel. If you haven't encountered bit planes before and don't understand how they are set up, I would suggest looking in a book on EGA/VGA cards. They take a little bit of time and space to explain clearly, and since we are not dealing with 16 color formats I will not cover them here.
 * BytesPerLine: this field tells how many bytes are in a single decompressed row. This is useful for monochrome and 16 color images, but the bytes per line in a 256 color image is simply the same as its width.
 * PaletteType: this simply tells whether an image is meant to be displayed in color or gray scale. A value of 1 indicates gray scale and a value of 2 indicates color.
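
As a tiny illustration of the inclusive coordinates described under Xmin, Ymin, Xmax, and Ymax, the width calculation can be written as a C++ function (the name Width is made up for this example):

```cpp
#include <cassert>

// Because usXmin and usXmax are inclusive, the pixel count includes
// both endpoints, hence the +1. (The height works the same way.)
unsigned short Width(unsigned short usXmin, unsigned short usXmax)
{
   return usXmax - usXmin + 1;
}
```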

The Codec
The algorithm for decoding a PCX image is simple. You take the current byte of the image data and look at its top two bits. If they are not both set, you write that byte out to your canvas. If they are both set, you clear them and that byte becomes your duplication count, or run length. Take the next byte and copy it to the canvas "run length" number of times. Repeat the process until you have enough bytes to fill a row. Move to the next row down and repeat the process for that row. Do this until there are no more rows left in the image.

The only catch to the above algorithm is when you run across a single pixel that has information in one or both of the top two bits. Bytes for these types of pixels are handled as having a run length of 1, since the entire 8 bits of the byte can be stored in the byte following the run length byte.
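
The decode loop described above can be sketched in C++ as follows. This is an illustrative sketch, not the code from pcx.cpp; the names DecodePcxRow, pbySrc, and abyRow are made up for the example:

```cpp
#include <cassert>
#include <vector>

// Decode one 8-bit PCX row. pbySrc points at the compressed data;
// decoded bytes are appended to abyRow until it holds usBytesPerLine
// bytes. Returns a pointer just past the consumed source bytes.
const unsigned char *DecodePcxRow(const unsigned char *pbySrc,
                                  std::vector<unsigned char> &abyRow,
                                  unsigned usBytesPerLine)
{
   while (abyRow.size() < usBytesPerLine)
   {
      unsigned char by = *pbySrc++;
      if ((by & 0xC0) == 0xC0)           // top two bits both set: a run
      {
         unsigned uCount = by & 0x3F;    // clear them to get the run length
         unsigned char byValue = *pbySrc++;
         for (unsigned u = 0; u < uCount; u++)
            abyRow.push_back(byValue);   // duplicate the next byte uCount times
      }
      else
      {
         abyRow.push_back(by);           // literal byte: copy it straight out
      }
   }
   return pbySrc;
}
```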

Encoding a PCX is just the reverse of this process. Start at the first pixel in the top row of your image. Count how many times this pixel color appears in a run. Do not let this run length go over 63, because you can only store the run length in 6 bits (the top two will be set). If the run of bytes is greater than one, set the top two bits of the run length and send it to your encode buffer. Next send the byte to duplicate. If there is only one byte, check and see if any of the top two bits are set. If one or both are set, output a run length of one, and then output the byte. Otherwise, simply output the byte. Repeat until you are done with a row, and then move to the next row down. Continue in this way until all rows are done.

This same codec is used in both monochrome and 256 color images, though the pixels are stored slightly differently. For monochrome images, each byte contains 8 pixels, since only one bit is needed for each pixel. If the bit is set, it denotes a white pixel. If it is clear, it denotes a black pixel. The most significant bit is the leftmost pixel in that byte, and the least significant bit is the rightmost pixel. Bits can be tested in the byte with a bitwise "and" operation.
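
A hypothetical helper that expands one monochrome byte into eight canvas pixels might look like this (the names and the white/black index parameters are made up for the example; in the real code, MonoSetWhite and MonoSetBlack choose the indexes):

```cpp
#include <cassert>

// Expand one byte of a monochrome row into 8 pixel values. The most
// significant bit is the leftmost pixel; byWhite and byBlack are the
// canvas color indexes to use for set and clear bits respectively.
void ExpandMonoByte(unsigned char bySrc, unsigned char byWhite,
                    unsigned char byBlack, unsigned char abyPixels[8])
{
   for (int i = 0; i < 8; i++)
   {
      unsigned char byMask = (unsigned char)(0x80 >> i); // test bits left to right
      abyPixels[i] = (bySrc & byMask) ? byWhite : byBlack;
   }
}
```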

256 color images have their bits stored right next to each other. Since it takes 8 bits per pixel, the entire byte is used for the color index. You can decode the row directly onto the surface of a Canvas.

The Palette
In monochrome images, there is no palette because the pixels can either be white or black. In 256 color images, the palette is attached to the end of the PCX file. The palette is stored as one byte each for the red, green, and blue components of each color, in that order (RGB). There are 256 colors possible, so the palette is 3 color bytes * 256 indexes = 768 bytes long. To make sure that you aren't reading image data instead of palette data, PCX files have the byte 0x0c right before the palette information starts.

To read a PCX palette you find the end of the file and move back 769 bytes. Check that byte to see if it equals 0x0c. If it does, read the next 768 bytes as your palette.
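
For illustration, here is how that check might look if the whole PCX file has already been read into a memory buffer (an assumption for this sketch; reading from disk would seek to end-of-file minus 769 instead). FindPcxPalette is a made-up name:

```cpp
#include <cassert>
#include <vector>

// Locate the 256 color palette at the end of an in-memory PCX file.
// Returns a pointer to the 768 palette bytes, or a null pointer if the
// 0x0c marker byte is missing.
const unsigned char *FindPcxPalette(const std::vector<unsigned char> &abyFile)
{
   if (abyFile.size() < 769)
      return nullptr;
   const unsigned char *pby = abyFile.data() + abyFile.size() - 769;
   if (*pby != 0x0c)    // the marker byte must precede the palette
      return nullptr;
   return pby + 1;      // 768 bytes: red, green, blue for each of 256 colors
}
```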

Byte Order
One last note about palettes: To use the palette attached to the PCX file, you will need to convert it to an OS/2 palette. An OS/2 palette has the elements Flag-Red-Green-Blue stored in a ULONG. However, Intel based PCs store USHORTs and ULONGs in a byte reversed order, and if you access the OS/2 palette as an array of bytes instead of an array of ULONGs, then you must take this into consideration.

The byte reversal scheme works like this: Suppose you have a variable at memory location 0x1000. A byte would be stored at location 0x1000, a USHORT would be in locations 0x1000 and 0x1001, and a ULONG would be stored in locations 0x1000 through 0x1003. Placing a byte is no problem; just put it in location 0x1000. A USHORT is made up of two bytes: one byte holds the lowest 8 bits, and the other holds the highest 8 bits. When an Intel based PC stores the USHORT in memory, it will place the lowest 8 bits in the first byte (0x1000), and the highest 8 bits in the next byte (0x1001).

A number under 256, say the value 42, would be stored in the lowest 8 bits of the USHORT. A pointer to our USHORT would have the value of 0x1000. When the USHORT is accessed with the pointer, the bytes will be read in their correct order so that the value of 42 is retrieved. However, what if we typecast this pointer and use it as a pointer to type BYTE? A BYTE can hold the value of 42, but it only looks at a single byte in memory (8 bits). Since the bytes are reversed and the lowest byte is at location 0x1000, the byte pointer can read the value fine. If the bytes were not reversed, the value at 0x1000 would hold a zero and the byte pointer would receive the wrong value. This is the rationale behind the reversed byte order scheme.
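
This can be demonstrated with a few lines of C++. The assertion only holds on little-endian machines such as the Intel based PCs the article describes, which is the assumption here:

```cpp
#include <cassert>
#include <cstring>

// Read only the byte at the lowest address of a USHORT-sized value.
// On a little-endian (Intel-style) machine this is the value's lowest
// 8 bits, which is why the typecast trick in the text works.
unsigned char FirstByteOf(unsigned short usValue)
{
   unsigned char by;
   std::memcpy(&by, &usValue, 1);   // copy just the first byte in memory
   return by;
}
```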

For this same reason, the two USHORTs that make up a ULONG are also stored in reverse order. You can look at a ULONG as two USHORTs, but the USHORTs are swapped so that the lowest 8 bits of the ULONG are placed at the first memory location (0x1000).

What all this means for palettes is this: You access the bytes in a PCX one after another in the order red, green, blue.

   byRed   = *(pbyColor + 0);
   byGreen = *(pbyColor + 1);
   byBlue  = *(pbyColor + 2);

If accessed with a byte pointer, the bytes in an OS/2 palette are accessed in the order blue, green, red, flag.

   byBlue  = *(pbyColor + 0);
   byGreen = *(pbyColor + 1);
   byRed   = *(pbyColor + 2);
   byFlag  = *(pbyColor + 3);

For an implementation of a PCX decoder and encoder, check out the sample source code that comes with this article. Handling the header, codec, and palette can all be seen in pcx.cpp. The handling of the byte reversal scheme can be seen in the Convert routine of os2pal.cpp.
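
The packing a Convert-style routine performs can be sketched as follows. PackOs2Color is a made-up name for this example; the flag/red/green/blue layout matches the OS/2 palette format described above:

```cpp
#include <cassert>

typedef unsigned long ULONG;   // 32 bits wide on the compilers of the day

// Pack an 8-bit flag and R, G, B components into the OS/2 ULONG palette
// format: flag in the top byte, then red, green, blue. On a little-endian
// (Intel) machine the ULONG's bytes then sit in memory in the order
// blue, green, red, flag, exactly as described in the text.
ULONG PackOs2Color(unsigned char byFlag, unsigned char byRed,
                   unsigned char byGreen, unsigned char byBlue)
{
   return ((ULONG)byFlag  << 24) |
          ((ULONG)byRed   << 16) |
          ((ULONG)byGreen <<  8) |
           (ULONG)byBlue;
}
```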

The PcxPainter object
The PcxPainter object is derived from the BitmapPainter object. All of the member functions called by a program are virtual functions in the BitmapPainter object, so you could just as easily replace the PcxPainter object with a GifPainter or TifPainter, and not have to change much of anything else in your program. The functions of the BitmapPainter object are:


 * AssociateCanvas:this routine tells the BitmapPainter object what canvas is to be used for the uncompressed graphic. If the canvas is not large enough for the graphic, the information outside of the canvas will be discarded.
 * DissociateCanvas:this routine clears the internal pointers to the canvas.
 * AssociateBuffer: this routine tells the object where the encoded graphic is in memory. Such a buffer might be created by finding the size of a PCX file, allocating that much memory, and then reading the entire PCX file into that buffer. In the PcxPainter object, AssociateBuffer also checks the buffer to see whether it contains valid PCX information before it accepts the buffer. If the buffer does not contain valid information, or is of an unsupported PCX type (16 color), the routine returns with an error.
 * DissociateBuffer: this clears the internal pointers to the buffer.
 * PaintCanvas: this instructs the object to take the compressed graphic specified with AssociateBuffer, decompress it, and display it on the canvas specified with AssociateCanvas. This routine also determines the palette of the graphic and stores it in an internal buffer.
 * EncodeCanvas: for PcxPainter, EncodeCanvas takes the entire canvas specified with AssociateCanvas, encodes it in PCX format, and stores it in an internal buffer. It allocates and uses an internal buffer because the size of the compressed graphic is not known when the routine is called. EncodeCanvas returns a pointer to the internal buffer, as well as the size of that buffer.
 * FreeEncodeBuffer: this deallocates the internal buffer that EncodeCanvas allocated.
 * QueryBitmapStats: this routine looks at the information in the buffer specified with AssociateBuffer and fills in the provided structure with the bitmap width, height, and color depth.
 * MonoSetWhite and MonoSetBlack: these routines set the color index to be used for the white or the black pixels when a monochrome bitmap is decoded. Since the target canvas is a 256 color canvas, values from 0 to 255 may be specified.
 * QueryPaletteBuffer: this routine returns a pointer to the internal buffer holding the palette information. The palette data is stored with three bytes per palette entry; the bytes specify the red, green, and blue components of the color, respectively.
 * QueryLastErrorCode: this returns a code for the last error encountered; the value can be used to look up the error string from the resource file's string table. The codes are specified in errcodes.hpp and the error strings are specified in game2.msg.
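To make the decode step behind PaintCanvas and the layout behind QueryPaletteBuffer concrete, here is a minimal sketch, independent of the canvas and OS/2 APIs. The function names decodePcxRow and extractVgaPalette are my own for illustration, not methods of PcxPainter; the encoding rules they implement (a byte with both high bits set is a run count, and a 256-color palette is the last 768 bytes of the file, preceded by a 0x0C marker byte) come from the standard PCX format.

```cpp
#include <cassert>
#include <cstddef>

// Decode one RLE-encoded PCX scan line into 'out'.
// A byte with the top two bits set (>= 0xC0) is a run count in its
// low six bits, and the byte after it is the pixel value to repeat.
// Any other byte is a single literal pixel. Returns bytes consumed.
static size_t decodePcxRow(const unsigned char* src, size_t srcLen,
                           unsigned char* out, size_t rowBytes)
{
    size_t in = 0, outPos = 0;
    while (outPos < rowBytes && in < srcLen) {
        unsigned char b = src[in++];
        if ((b & 0xC0) == 0xC0) {           // run: repeat the next byte
            size_t count = b & 0x3F;
            unsigned char value = src[in++];
            while (count-- && outPos < rowBytes)
                out[outPos++] = value;
        } else {                             // literal pixel
            out[outPos++] = b;
        }
    }
    return in;
}

// In a 256-color PCX file the palette is stored as the last 768 bytes,
// preceded by a 0x0C marker byte: 256 entries of red, green, blue --
// the same three-bytes-per-entry layout QueryPaletteBuffer returns.
static bool extractVgaPalette(const unsigned char* file, size_t fileLen,
                              unsigned char palette[768])
{
    if (fileLen < 769 || file[fileLen - 769] != 0x0C)
        return false;
    for (size_t i = 0; i < 768; ++i)
        palette[i] = file[fileLen - 768 + i];
    return true;
}
```

A real PaintCanvas would call the row decoder once per scan line, stepping through the canvas a row at a time, but the byte-level logic is no more than what is shown here.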

Conclusion and News of the Future
Well, that's about it for my second article in Gearing Up For Games. As always, comments, flames, and questions are welcomed. I can be reached at the email address listed in the "Contributors" section.

Next time I will probably look into threads, semaphores, timers and other multitasking concerns. If I have time, I may start on how to draw sprites as well.

In the future, I will be looking into the fullscreen support aspects of DIVE. I recently added fullscreen support to a project I'm working on, and it was a fairly simple process. It took about 15 minutes to write all the code needed to implement fullscreen support, an hour to get the darn thing to compile (problems with parameter passing since header files for fullscreen support don't exist yet), and another hour working out undocumented pitfalls of fullscreen modes. I'll be certain to give a full report in a couple of months as availability of the Games SDK draws closer.

It looks like the Games SDK will be released through the DevCon CD subscription program. For those of you unfamiliar with DevCon, it is a subscription that arrives four times a year; each "issue" contains several CDs with information, toolkits, and sample code on them, as well as a paper newsletter with articles on various aspects of developing for OS/2. I included ordering information in my last article, so I won't repeat it here. I don't know whether the Games SDK will be released through means other than DevCon. I think that IBM should release it freely, but that is just my (and a lot of other people's) opinion.

Also, I have recently learned that IBM has released joystick drivers for OS/2. The drivers can be obtained in the file joystick.zip. I haven't found any programming information for these drivers yet, but I'll be sure to fill everyone in on what I can find as soon as I find it!

Whoops!
Well, it looks like I made a few mistakes in the code for my last article. Time to come clean on them:
 * 1) In game1.rc, the ICON statement should read:
	ICON	 ID_APPMAIN  game1.ico
 * 2) The file game1.msg was not needed. GAME2 uses .MSG files, however.
 * 3) The Canvas::QueryWidth and Canvas::QueryHeight methods should return USHORTs instead of SHORTs.
 * 4) In the article itself, the colors were wrong in the picture of the sample code's output. This happened because I did not convert the palette of the bitmap to the system palette before I submitted it.

Compiling the Sample Code With the Watcom Compiler
I did not give instructions on how to get the sample code to compile with the Watcom compiler (which I actually use, by the way). To use the DevCon toolkit with Watcom, the following things must be done:

In Your C/CPP Compilations
First, for all of your C and CPP files that use the DevCon Warp Toolkit, add:

	#define __IBMC__

before you include any of the DevCon header files. Note that there are two underscores both before and after "IBMC", not just one.

Also make sure that the compiler is looking in the \H directory of your DevCon toolkit for header files, not in the WATCOM\H\OS2 directory.

In The File mmioos2.h
Find the section titled: "Country codes (CC), languages (LC), and dialects (DC)."

You will notice that some of the defines in this section begin with a zero. This causes the Watcom compiler to read these numerals as octals (as any good ANSI compiler would). Remove these zeroes. Therefore the definition:

	#define MMIO_CC_USA		001

would become:

	#define MMIO_CC_USA		 1

This needs to be done from MMIO_CC_NONE (value of 0) to MMIO_CC_TURKEY (value of 90).

In Your Linker setup
Don't forget to include the library mmpm2.lib when you link. This can either be done in the IDE in the menu "Options \ OS/2 Linking Switches \2.Import, Export and Library Switches" in the "Libraries:[libr]" window, or in the command line of your linker with the statement "libr mmpm2.lib" (without the double quotes, of course).

Also, be certain to have a large enough stack for your program. The default stack for the Watcom IDE is about 8K, I think. The MMPM/2 documentation suggests at least a 16K stack, but I compile with a 32K stack.