
OS/2 Device Driver Frequently Asked Questions

From EDM2

By Tim Snape

This article contains a selection of frequently asked questions and is intended to be used by anyone who is writing or planning to write OS/2 device drivers.


Support Categories

Situations when it is (un)necessary to write an OS/2 device driver

Is a driver necessary to use IRQs

Question: I have a requirement to intercept hardware interrupts from an adapter. Do I need to write a device driver?

Answer: Yes, a device driver is absolutely the only way to interact with hardware-generated interrupts.

Is a driver necessary to access I/O ports

Question: I have a requirement to read and write to I/O ports. Is it necessary for me to write an OS/2 driver?

Answer: No, it is not.

Under OS/2 there are four levels of privilege (referred to as rings):

  • Ring 3 - Least privilege; this is where applications normally reside.
  • Ring 2 - Next highest privilege. Code running at Ring 2 may access I/O ports under OS/2. For this reason this privilege is often referred to as I/O privilege level, or IOPL.
  • Ring 1 - Not used under OS/2.
  • Ring 0 - The highest privilege level; used by device drivers and the operating system.

In order to access an I/O port (and also to disable/enable the interrupt flag) a program MUST be running at ring 2. The way this is done is to specify in the program's definition file that a particular segment will have IOPL (I/O privilege).
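As a sketch, the module-definition file entry might look like the following. The segment, class and function names here are hypothetical, and the exact syntax should be checked against your linker's documentation:

```
SEGMENTS
   _IOSEG  CLASS 'IOPL_CODE'  IOPL

EXPORTS
   PORTREAD   1   ; IOPL entry points are exported with a parameter
   PORTWRITE  2   ; word count so the call gate can copy arguments
```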

Next comes the clever bit. The compiler/linker will generate an executable that contains a special (IOPL) code segment. When the program is loaded, the loader will see that there is an IOPL segment and will create a special mechanism that allows that segment to run at ring 2.

For the technically knowledgeable: the loader creates a ring 2 call gate.

When the IOPL segment (containing the I/O access code) is called, privilege is changed to ring 2, and I/O access can be performed.

N.B. There is an overhead in this transition, so be warned if you plan to implement a polling loop on an I/O port (or something equally horrible).

For some example code that accesses I/O ports go to the file download area.

Is a driver necessary to access memory on an adapter card

Question: I have a requirement to read and write to specific areas in memory in the range D000:0 to D800:0. This is to interface with a special function option card.

There is no time criticality or interrupts to deal with, so writing an OS/2 device driver does seem rather over-the-top.

In DOS, no problem. With MS-Windows, there are Global Selectors pre-defined ( _D000 etc. ).

How can this be done using OS/2?

Answer: At some point you will HAVE to use a driver to access the RAM directly. However it is not necessary to write a driver from scratch. Instead use one of OS/2's drivers to do the job for you.

Failing that, there are a number of drivers written by third parties that will provide you with this function.

I have placed some code in the anonymous ftp area that can be used to access memory directly. Download the Technical Developers Toolkit. The interesting file in the .zip is parallel.c; it contains code that reads the BIOS data area used to hold the parallel port addresses.

Accessing Video Ram

Question: I need to access the memory-mapped video RAM. Do I need a driver?

Answer: There is already support for obtaining addressability in the way you require, so I suspect you don't need a device driver. You can access the frame buffer directly, using memory-mapped and port I/O from the application level.

A note on the physical memory address space: your graphics adapter memory can live anywhere above the end of physical memory (RAM) up to the 4Gb limit (0FFFFFFFFh). To see the memory, try, as Tim suggested, using the KDB to dump physical memory (%%address).

There is an IOCtl supported by the SCREEN$ device driver for this purpose. You'll need to know the physical address and the aperture size, i.e. how much memory the adapter has. See the include file bsedev.h and the definitions for SCREENDD_GETLINEARACCESS. The IOCtl will create and return a linear address which maps to the physical one for you to use.

The flags field in the call is critical, and one reason I asked what you're writing. The flags allow you to choose where in OS/2's linear address space the mapping occurs: 1) in the current process address space, 2) in the shared arena, or 3) in global space. By the way, the flags map directly to those for VMAlloc, except the ATTACH bit, which causes a DevHelp_ProcessToGlobal.


Application Issues

Accessing IO ports at application level

Question: I have a requirement to access I/O ports from a 32-bit application. I know all about IOPL privilege, BUT the only way to create a ring 2 IOPL segment seems to be to use a 16-bit DLL. Is this right?

Answer: Unfortunately, yes. There is a more complete explanation of IOPL and accessing I/O ports in the Is a driver necessary section, so I will not repeat it here.

Basically, though, ring 2 IOPL segments can only be 16-bit. There are, however, several approaches to solving your problem.

  • Use another driver, one that already exists. The TESTCFG driver contains code that allows you to access I/O port addresses higher than hex 100.
  • Package up your 16-bit IOPL segment into a DLL and call that from your 32-bit code segment.
  • Write a driver from scratch and get the driver to perform the I/O on behalf of the application.
  • Write a driver from scratch and get the driver to set the IOPL bits in the EFLAGS register to 3. This will allow the calling thread to perform I/O at ring 3.

The last (rather neat) suggestion was supplied by Holger Veit. It should be noted that patching the EFLAGS register will only allow the calling thread, not the whole process, to access I/O ports.

Read buffers greater than 64K

Question: When performing Read/Write operations, can I use data buffers larger than 64K?

Answer: No, you cannot. The Read/Write device driver commands have a length field which is a 16-bit word; this implies that the largest I/O operation is 64K.

If you really want to perform I/O on regions greater than 64K, then you should use an IOCTL command to pass in the address of a 32 bit memory item.

Compiling and building your OS/2 device driver

Using the Microsoft C compiler

Question: Should I use the Microsoft C6 compiler and, if so, where can I buy it?

Answer: Back in the old days when IBM & Microsoft were talking to each other, all device drivers were written using Microsoft C compilers. Usually Version 6.00a.

Unhappily, following the rift between the two companies, Microsoft withdrew support for the C6 product and brought out new versions. These versions were incompatible with OS/2.

Many people today still use the C6 compiler, many examples exist based on it, and many companies still recommend it - but you cannot buy it.

So if you want an easy life, use the C6 compiler; how you acquire the tool is the real question, but that cannot be answered here.

If you elect to purchase an alternative C compiler to generate your driver, look at the other Q&As in this category.


Using the Borland C compiler

Question: How do I use the Borland compiler?

Answer: You should refer to Borland's web site for support questions.

Using the Watcom C compiler

Question: How do I use the Watcom compiler version 10.5?

Answer: Watcom is a recommended supplier of 'C' compilers for OS/2 device driver developers; IBM development are now widely using the product. The Watcom 'C' compiler comes with sample OS/2 device drivers plus a number of useful utilities. For detailed support information on using the Watcom product you should refer to Watcom's web site.

When using Watcom for the first time you may experience compatibility problems linking to libraries compiled using other products. The following comments assume you are using the dhcalls.lib helper library found in the Devcon DDK.

  1. Instead of _acrtused, the Watcom C runtime library looks for cstart_. This should be declared as public in the .asm file.
  2. Watcom provides a special pragma, cdecl, for specifying the calling convention used by MSC. They also provide a _Cdecl keyword, as in 'int _Cdecl main(...);'.
  3. The Watcom naming convention is to append an underscore to function & variable names. Under Microsoft 'C' the convention is to prefix the names with underscore :
 Watcom function name  = name_
 MSC     function name = _name

Therefore all references to Watcom 'C' names from assembler code should assume a trailing underscore. All references to Microsoft 'C' names from assembler code should assume leading underscores. The reverse should be assumed when referencing assembler names from Watcom or MS 'C' code.

  4. The compiler flag -Zu must be used.
  5. The "_Seg16" keyword is not legal when compiling with WCC (Watcom 16-bit), therefore in all declarations "_Seg16" must be replaced with "_far". Because of this some of the toolkit headers like OS2DEF.H must be changed.
  6. The "PASCAL" keyword must be changed to "pascal".
  7. There is a library of device driver helper functions that is callable from Watcom 'C'. This library can be found on the DDK BBS.
  8. The symbol files generated by Watcom's linker are incompatible with the kernel debugger. If you use Watcom .sym files and the kernel debugger then the system will crash at boot time, with the error message:
 Internal symbol error: SegDefLinkUp

There is a utility for translating Watcom .sym files to the correct format. This utility is called WAT2MAP.EXE and can be found on the DUDE BBS.

  9. The Watcom 'C' compiler is more rigorous in its compliance with the ANSI 'C' standard. This may cause minor problems when porting from other 'C' compilers.

There was an article on porting OS/2 device drivers to Watcom 'C' in the Devcon 7 newsletter.

Please note that according to Watcom technical support, there are some problems with using NMAKE with the Watcom compilers/linkers.

Using the CSet compiler

Question: How do I use the CSet compiler

Answer: You should not. CSet generates 32-bit code, which is incompatible with the 16-bit architecture of OS/2's device drivers.

Will IBM fix CSet so that it can generate OS/2 device drivers

Question: Will IBM fix CSet so that it can generate OS/2 device drivers?

Answer: No. The stated policy on CSet is that it does NOT and NEVER will generate 16-bit code. The reason IBM's CSet people give for this drastic decision is that there already exist many, many 16-bit 'C' compiler products.

In fact, this will all change when OS/2 for Power hits the streets. OS/2 for Power uses a 32-bit driver model, so it will be possible to use CSet for drivers in the future.

Using the CSet compiler to generate 32 bit driver code

Question: How do I use the CSet compiler to generate 32-bit code for linking into a 16-bit device driver?

Answer: This can be done. The security kernel consists of 16-bit assembler code which is linked with 32-bit code emitted by CSet.

There is an example device driver that does this; however, it is not for the faint-hearted. In order to get the code to work you must first supply a (complete) replacement run-time library. The body of the driver is pure 16-bit assembler and the CSet code is bolted onto this.

The way this code is implemented is very specific to the security kernel; the message, however, is that it can be done.

Using Steve Mastrianni's example 'C' code

Question: I am about to write a device driver for a simple network. Before starting this I am reading the book "Writing OS/2 2.1 Device Drivers in C" by Steven J. Mastrianni. The book describes a C-callable 2.0 DevHlp library and a toolkit, about which I have a few questions. I don't have this C-callable DevHlp library, so I will have to call these functions from an assembler routine. My problem is that I cannot find a "prototype" of these functions anywhere. I suppose they are different for each DevHlp function. Where do I find this?

Answer: Steve supplies these helper libraries as separate products. I think the retail price is 129 USD.

There are a number of alternative helper libraries. These can be found in the Devcon DDK product.

It is unfortunate that IBM was not able to standardise on a single library of helper functions. The result is everyone tends to create their own.

Problem using Microsoft library

Question: In S.J. Mastrianni's book (second edition, page 498, 3rd line from the end, and in many other places) he uses the library "llibcep.lib" from the Microsoft compiler LIB directory. I did a full installation of the compiler (Version 6.0A) and I cannot find the library.

Answer: The files installed & their names will vary depending on the installation options you typed in when you ran the MS setup program.

The name llibcep breaks down as follows:

llibcep
  l = large memory model
  s = small memory model
  e = floating point emulation
  7 = floating point in 8087 hardware
  p = protected mode
  r = real mode

One of the installation options asks "use default naming convention", in which case your llibcep.lib file is renamed llibce.lib.

To fix it, either rename or copy the file to llibcep.lib, or modify your link parameters so they use the "correct" default name of llibce.lib.

Where are the 16 bit DOS library functions

Question: I wrote a driver under OS/2 1.3, but now under Warp the linker cannot find the Dos library functions (DosOpen, DosRead, etc.). Where are these functions?

Answer: These functions used to be in the file DOSCALLS.LIB. This file is no longer shipped with the Warp Toolkit; instead you should use the library OS2286.LIB.

Alternatively you could copy your version of DOSCALLS.LIB from the earlier release to your current development system.

Initialising the driver for the first time

When you reboot, the device driver will not install

Question: The driver has been compiled & linked with no apparent problems but at initialisation time the kernel reports that the driver is in an incorrect format.

Answer: There are a number of rules for the layout of OS/2 device drivers:

  • The first segment MUST be a data segment.
  • The data segment MUST contain a header structure at the very beginning.
  • The header structure must be in the correct format.

Data items quite often get placed in front of the device driver header structure. There is a tool called DHDR.EXE that will dump the contents of a device driver file's header.

At initialisation time the device driver is responsible for telling the kernel the length of the driver's code and data segments. The kernel then uses this information to limit the amount of storage allocated for the driver's code and data.

If you forget to set these fields up then the values used will be invalid and your driver will not have very long to live.

Similarly, if the value supplied by the driver is too short, then the moment you pass control into this non-existent code space the driver will crash.

There are a number of ways of identifying this problem and removing it.

  • You can examine the .MAP file; if a code function occurs after the initialisation code then you will probably want to move code around.
  • Place special markers at the end of the data and code segments. In the data segment the marker could be a text string, e.g. char *eodata = "END OF DATA"; In the code segment the marker could be a function (not a static) with a special name.
  • The simplest approach is to say (at init time) that the size of the code and data space is the size of the currently loaded segments (LSL assembler instruction). This way you are guaranteed that all the code will be in the correct place.

In the bad old days of DOS & OS/2 1.x memory for device drivers was at a premium, with OS/2 2.x & Warp this is no longer the case. Make life easy for yourself and do not worry about saving that last 100 bytes of code.

How should a driver output strings to the display at boot time

Question: What is the best way to output a string to the screen from the DD INIT routine and the INIT_COMPLETE routine?

In INIT I call DOSPUTMESSAGE. Is this function still available for INIT_COMPLETE - or does that depend on INIT segment of my code?

Answer: DosPutMessage is fine for INIT time. You could also do a DosWrite to the stdout device (file handle 1). Similarly, you can read keystrokes using a DosRead on the stdin device (file handle 0).

There is no way for a driver at INIT_COMPLETE time to output a message.

The only solution I can suggest is to fire up an application from startup.cmd, or run it from config.sys, and get that application to talk to the driver and report whether the INIT_COMPLETE was successful or not.

The only reason for INIT_COMPLETE is to set up links between drivers (IDC); it happens after all the INITs so as to remove any dependencies on the order of DEVICE= lines in the config.sys. There should be no need to output strings at INIT_COMPLETE time.

How should a base driver output strings to the display at boot time

Question: What is the best way to output a string to the screen from a base device driver at INIT time.

Answer: This is a little tricky.

DosPutMessage is fine for INIT time. However it is not available to Base Device drivers. Instead there is a special device helper that saves all the base device driver messages. At the end of base device driver initialisation all the saved messages are output to the display.

#include "dhcalls.h"

USHORT APIENTRY DevHelp_Save_Message( NPBYTE MsgTable );

typedef struct _MSGTABLE {
  USHORT   MsgId;                       /* Message Id #                  */
  USHORT   cMsgStrings;                 /* # of (%) substitution strings */
  PSZ      MsgString[1];                /* Substitution string pointers  */
} MSGTABLE;

typedef MSGTABLE *NPMSGTABLE;

As you can see from the example, the save message helper requires you to use message ids.

How can you tell the type of the PC's bus at INIT time

Question: I want my device driver (PDD) to determine if it runs in an EISA PC environment at INIT time. Is there any support by the operating system to do this ?

Answer: There are some IOCTL calls a driver can make to talk to the TESTCFG driver and ask what type of bus the system is using.

The file bus.zip in the download area contains the source code you need to query the bus type, ISA, MCA or EISA.

Function call hangs on return

Question: At INIT time I use DosPutMessage() to print some messages on the screen. If I call DosPutMessage() directly from my INIT function, it works fine. But if I put DosPutMessage() in a separate function and call that function from my INIT function, the driver hangs upon return.

Why? Is this a limitation or a bug?

Answer: A bug.

This is a fairly common problem. If code (any code) is crashing on a return instruction the most likely cause is because the stack frame (which has the return address) is in some way damaged.

I would guess that you have declared the function incorrectly: either

   declared as pascal & called as C

or

   declared as C & called as pascal

If there is only one input parameter then both variants can be called OK and will appear to work, but both will crash on return. Of course there could be another explanation: you have an errant pointer which is corrupting the stack:

void function(void)
{
   char arr[10], *p;

       p = &arr[0];
       *--p = 0;       // writes below the array - the return will "probably" crash
}

Differences between OS/2 2.0 & 2.1 at initialisation time

Question: I recently moved from OS/2 2.0 to version 2.1 and I noticed that my driver was being entered with some new strategy commands. Specifically, at boot time immediately after the INIT call, I get called again with function 1F. What is this function ?

I also noticed that my CS selector changed between the INIT function and the 1F function (from 0AEB to 0AE8). What is going on ?

Answer: The 1F function is new; its purpose is to tell all device drivers that all device drivers are now installed. It is referred to as INIT_COMPLETE.

The INIT_COMPLETE command is passed into drivers at initialisation time, after all device drivers have been initialised. The context is strategy time and the driver's privilege level is 0.

The operation you should perform at INIT_COMPLETE is to make all your IDC attachments (see attachdd helper service).

OLD WAY

1) Drivers install at ring 3 with iopl set to 3

2) Applications open and start using drivers

NEW WAY

1) Drivers install at ring 3 with iopl set to 3

2) Drivers can continue to initialise and communicate with other drivers, at ring 0 with iopl set to 2

3) Applications open and start using drivers

As regards your second question :

I also noticed that my CS selector changed between the INIT function and the 1F function (from 0AEB to 0AE8). What is going on ?

That is quite normal. The selectors are constructed so that the low order two bits contain the "Requestor's Privilege Level", or RPL.

At INIT time the RPL is 3 - 0AE8 OR 3 => 0AEB.
At 1F   time the RPL is 0 - 0AE8 OR 0 => 0AE8.


Error code SYS1201 at Initialisation time

Question: I recently assembled a physical DD with MS MASM 6.0. I install it in my CONFIG.SYS file by typing: DEVICE=C:\........\XXXXX.SYS

I get two different results on two different configurations. On a PS/2 P70 everything runs OK. On a PS/2 95-V01 I get the error code SYS1201. Why?

Answer: When a driver is initialised it has the option of installing or not installing. The 1201 error means it is not installing, which could be for any one of a million reasons - ALL UNDER THE CONTROL OF THE DRIVER.

So basically your driver has given up, without really telling you why.

It could be that it could not acquire:

  • an interrupt, because it was not available
  • a memory segment
  • a GDT selector
  • a timer

anything, in fact.

Two suggestions. First, if the driver is not going to install, call a standard routine with a message to display. The following is what I use.

#include 

UINT abend (PREQPACKET rp, char *message);

static char     ActMessage[] = "Hit enter key to continue.\r\n";
static char     FailMessage[]  = " driver failed to install.\r\n";
static char     CrLf[]= "\r\n";

UINT abend (PREQPACKET rp, char *message)
{
   ULONG length;
   UCHAR ch;

   DosPutMessage(1, strlen(FailMessage), FailMessage);
   DosPutMessage(1, strlen (message), message);
   DosPutMessage(1, 2, CrLf);
   DosPutMessage(1, strlen(ActMessage), ActMessage);

   DosRead ((SHANDLE)0, (FARPOINTER)&ch, (USHORT)1, (FARPOINTER)&length);

   rp->s.InitExit.finalCS = 0;
   rp->s.InitExit.finalDS = 0;

   return RPDONE;
}

Second, if you are desperate, put an int 3 instruction in your driver's initialisation routine, reassemble and install it on the problem machine. Using the kdb you should then be able to follow the code through to the point where the driver is setting:

   rp->s.InitExit.finalCS = 0;
   rp->s.InitExit.finalDS = 0;

which will cause the 1201 result.

If you apply (or already have) the change and you are still experiencing the same problem then there is an even simpler explanation: you are trying to install the same device driver twice.

This will work under DOS. Under OS/2 you cannot install multiple device drivers with the same name (and attribute). And anyway, why would you want to?

How can you tell which version of OS/2 you are using at INIT time

Question: Is there a way to know, during init time, if a device driver is running under OS/2 1.3 or OS/2 2.0? What sort of call, function or instruction can I use to find out?

Answer: You can use the DosGetInfoSeg API at INIT time. This API provides the addresses of two data structures:

1) The Global info seg
2) The Local info seg

The addresses of these structures do not change, so you only need to raise the query once. Once you have the addresses of these structures you can query their contents at any time during the device driver's life. I've enclosed the code fragment I use to display the version number at initialisation time.

   DosGetInfoSeg (&g, &l);
   Ginfo = MAKEP(g, 0);
   Linfo = MAKEP(l, 0);

//          Output OS/2 version number
   DosPutMessage(1, strlen(InitMessage1), InitMessage1);

   ch = Ginfo->uchMajorVersion;
   out_dec (ch);
   DosPutMessage(1, 1, ".");
   ch = Ginfo->uchMinorVersion;
   out_dec (ch);

   DosPutMessage(1, strlen(VersMessage2), VersMessage2);
   out_dec (ch);
   DosPutMessage(1, 1, &ch);

   DosPutMessage(1, 2, CrLf);


Calling API's at INIT time

Question: I have seen an example device driver that invokes a DosExecPgm during initialisation. Is this possible ?

Answer: Yes it is, so long as the file system is ready (so you can read the DLLs). You can call any and all the .DLLs on the computer; HOWEVER, the sub-systems required to support/cooperate with those DLLs may not be ready to go at init time. This means that a DLL call may appear to work at init time but may not work in the next release of the OS.

The ONLY APIs documented as supported at init time are DosBeep and the file handling APIs. Unless there is a very good reason, I would suggest complying with the documentation.

SYS1719 error after loading a PDD

Question: What are the possible reasons for a SYS1719 error after loading a PDD?

Answer: The output from "help sys1719" states :

SYS1719: The file "***" specified in the *** command on line *** of the CONFIG.SYS file does not contain a valid device driver or file system driver. Line *** is ignored.

EXPLANATION: The file specified does not contain a valid device driver or file system driver, or contains a valid DOS device driver when a DOS session was not started.

ACTION: Perform one of the following actions, then restart the system:

1. Edit the CONFIG.SYS file to correct or remove the incorrect command.
2. Edit the CONFIG.SYS file to remove the PROTECTONLY=YES command.
3. Install the correct device driver or file system driver in the specified file.
4. Install all dynamic link libraries required by the specified device driver.

There are a number of reasons your driver may be considered invalid. If you have a map file for it, check:

  • That the driver contains a data and a code segment, IN THAT PHYSICAL ORDER.
  • That the very first data item in the data segment is the driver's header. Quite often when people are building drivers, the device header (WHICH MUST BE AT OFFSET ZERO) is placed incorrectly in the data segment.

If you have the utility EXEHDR then you can try examining the driver with it, e.g. EXEHDR MOUSE.SYS:

   Microsoft (R) EXE File Header Utility  Version 2.01
   Copyright (C) Microsoft Corp 1985-1990.  All rights reserved.

   Library:                  MOUSE
   Description:              mouse.DLL
   Data:                     SHARED
   Initialization:           Global
   Initial CS:IP:            seg   0 offset 0000
   Initial SS:SP:            seg   0 offset 0000
   DGROUP:                   seg   1
   PROTMODE

   no. type address  file  mem   flags
     1 DATA 00000110 001f1 01430 SHARED, PRELOAD
     2 CODE 00000320 01c9a 01c9a PRELOAD
     3 CODE 00001ff0 01ac3 01ac4 PRELOAD, IOPL, (movable)


Where in memory, is a device driver loaded

Question: If I understood the previous conversation in this area correctly, a device driver will be loaded with its first DataSeg and CodeSeg below the 1 MB border. Is this correct?

Answer: Yes, it is correct for OS/2 1.x; no, it is wrong for OS/2 2.x.

Under OS/2 1.x it was a problem too, as all those OS/2 drivers sucked up the DOS box's memory. Under 2.x, however, the physical RAM for the driver's code and data comes from the TOP of memory, not the bottom.

Making messages to display at Initialisation time

Question: I have heard of a program called MKMSGF which can be used to generate some sort of generic message information which allows a driver to support foreign languages very simply. It seems to have some relationship with a DLL called LANMSGDD.

Do you know where I might find detailed information about this, so I can look at including such support in my drivers?

Answer: The MKMSGF and MSGBIND utilities allow you to bind text messages into an application AFTER it has been compiled and linked. All of OS/2's error and warning messages work this way.

The intention is that a developer can produce some code which is then given to the distributors. The distributors can then apply all the text messages to the app. The benefit is that the distributor can use different message files for different countries and thus (hopefully) "internationalise" the application, to suit the end user.

There is info (and examples) on how to use these utilities in the toolkit tools reference online help.

As for the rules for using messages bound in this way from within a device driver: you can either use the DosGet/Put/InsMessage APIs, or alternatively there is a helper service (0x3d). In the 2.0 Physical Device Driver Reference this helper is called SAVE_MESSAGE (17-26); this is a typo - its real name is DispMsg, which is why it appears between DevDone and DynamicAPI. Messages can only be displayed in this way by device drivers at initialisation time.

How can I query the OS/2 language version at initialisation time

Question: I want my driver to query the language version of the system so that I can output messages in the correct language.

Answer: There are several ways to do this :

  • Open the keyboard driver (DosOpen ("KBD$", ...)) and pass in an IOCtl request to query the current codepage.
  • Use the DevHelp_GetDOSVar to read the Codepage field from the Global Info Seg.
  • Parse the config.sys file to read and interpret the CODEPAGE= line.

But really, all the above solutions are wrong. The correct approach would be to use message files. You should create one message file for each language you intend to support; then, when the driver is installed, let the user decide which language variant should be installed.

Can I use an IDC link at Initialisation time

Question: I have created an IDC link to another driver. I used the AttachDD helper service to create the link at initialisation time. When I tried to call the other driver using the IDC link the system hung. Why ?

Answer: The IDC link information is for a Ring 0 call into the attached driver. At init time your driver is running at Ring 3, and it is not possible to call from Ring 3 to Ring 0 in this way. Give up; it cannot be done.

Instead, you should delay ATTACHing to other drivers until all the drivers have been installed. You do this by adding support for the INIT_COMPLETE command. Once your driver receives INIT_COMPLETE, you know that all drivers have been installed; you should then establish the IDC link using the AttachDD helper service. Once this is done you can start using the IDC interface immediately.

Start of operations

How should an application handle a proprietary device driver error code

Question: I am trying to return a proprietary return code to a READ strategy command, however what ever value I return is interpreted by the system causing the device driver error message window to be displayed. The value I am returning is:

RPERR | RPDEV | RPDONE | my-8-bit-code

I understood that specifying RPDEV meant that OS/2 would not try to interpret the error. Am I wrong? Am I restricted to a particular range for my-8-bit-code?

Answer: There are two things I can suggest you do:

  • When you open the driver, use the bit flag to tell the kernel to return error codes to the application (I cannot remember which bit it is, but it's in the manual).
  • Use the DosError API to selectively enable/disable the error notification popup.

How should a driver return a proprietary error code

Question: My driver needs to return a special error code. The list of standard error codes does not include the error I want to return. How should my driver return its special error code?

Answer: Set the following bits in the Request packet's status word:

0x8000  -   RPERR - Signal an error
0x4000  -   RPDEV - Signal a device specific error code
0x0100  -   RPDONE - Signal the operation is complete
0x00XX  -   The driver's special error code

The RPDEV setting is the important one as it informs the system that this is a device specific error code, and NOT one of the standard ones.

The driver crashes the first time you call it

Question: The driver has installed correctly, everything is looking good, but the first time you call the driver it crashes with strange register values. You may have looked at the driver using the kernel debugger, but the driver is still crashing in a very strange way, and for no apparent reason.

Answer: At initialisation time the device driver is responsible for telling the kernel the length of the driver's code & data segments. The kernel then uses this information to limit the amount of storage allocated for the driver's code & data.

If the value supplied by the driver is too short, then the moment you pass control into this non-existent code space the driver will crash.

There are a number of ways of identifying this problem and removing it.

  1. The simplest approach is to report (at init time) that the size of the code & data space is the size of the currently loaded segments (LSL assembler instruction). This way you are guaranteed that all the code will be in the correct place.
  2. You can examine the .MAP file; if a code function occurs after the initialisation code then you will probably want to move code around.
  3. Place special markers at the end of the data & code segments. In the data segments the marker could be a text string e.g. char *eodata = "END OF DATA"; In the code segment the marker could be a function (not a static) with a special name.

How can a driver tell which session context it is being called in

Question: I need to be able to tell what type of application is invoking the driver - Windows or OS/2. I would also like to know what kind of session the application is running under - Full Screen, Windowed etc.

The sgCurrent field in the global info seg seems to give me the information I want, at least in my initial testing. I've found that it takes a value of 1 when a windowed app has focus (i.e. PM apps, seamless WinOS/2 apps) and different values if a full-screen DOS or OS/2 session has focus. My method then simply becomes:

   if (sgCurrent == 1)
      Let video be shown
   else
      Don't let video be shown

My worry is that I can't find the values of sgCurrent documented anywhere, so I'm a bit uneasy about assuming I've done the right thing. Do you know of any documentation of this field ?

Answer: The sgCurrent field contains the number of the current Screen Group. Different types of screen group use specific values.

The PM session is always 1, OS/2 full screen sessions start at 4, and full screen DOS/Win OS/2 sessions begin at 16.

In addition there are pop-up & hard error sessions; I guess their numbers will be 2 & 3 respectively (or vice versa).

I haven't seen these values documented anywhere. So be warned they may change.

There is a TypeProcess field in the local info seg; this describes the type of the application :

0 Full Screen protect-mode session
1 Requires real mode
2 VIO windowable protect-mode session
3 Presentation Manager protect-mode session
4 Detached protected-mode process

This is from the PDD reference manual under GetDOSVar.

Since you are looking to detect screen switching, look at the category 5 IOCTLs; you should see some category 5 requests coming through to all drivers when the screen switches. From memory I think they are query & set active font. You will need to experiment, but I believe they may be useful.

How should a driver use the busy flag

Question: I developed a simple device driver for an in-house device that performs synchronous comms using the DevIOCtl call. This works fine in either interrupt or DMA modes, but I found a problem when I had two applications hammering the device. Eventually one application's communications thread just seizes up.

I traced this problem to the fact that the application with focus was getting all the communications time, and as a result the device driver eventually returned a timeout on a RAM semaphore to the other thread. This was done using the DevBusy value as in:

   MOV     AX, DevBusy OR ReqError
   MOV     [(Req ptr ES:BX).ReqStatus], AX

This results in the DosDevIOCtl Call never returning to the caller. A change in code to

   MOV     AX, Done OR ReqError
   MOV     [(Req ptr ES:BX).ReqStatus], AX

Returns OK.

Now the questions. Is there a problem in the way I am returning the DevBusy flag ?

Answer: Yes,

The DevBusy flag only has significance for the status & test-for-removable-media commands. The way you are using it in the IOCTL command it will have no effect at all (it is being ignored). The reason you have a problem is that you are omitting the Done bit. The way it works is this :

Any driver returns any command with the Done bit NOT set

The kernel will block that thread (let's call it thread A)

.... time passes .... more time passes

A driver invokes the DevDone helper service with the es:bx register pair set to the address of thread A's request packet

The kernel will unblock thread A, and control will be passed back to thread A

The way this mechanism is intended to function :

Thread A does something but the operation needs to wait for an external event (an interrupt perhaps) before it is finally complete.

Thread A saves the address of its request packet somewhere

Thread A exits with the Done bit clear

The kernel will block thread A

.... Time passes

The interrupt occurs, does what it needs to do, and flags the blocked thread (Thread A) that the operation is complete using the DevDone helper

The interrupt irets

Thread A completes

What is the Yield flag for and why should I check it ?

Question: How am I meant to use the yield flag ? Should I test it before issuing a yield ? What should I test it for ?

Answer: You do not need to check the yield flag.

To understand the yield helper imagine you are back at college sharing a flat with several other students.

You are on the phone to your girlfriend; meanwhile your flat mates also want to use the phone. The device driver reference states that you should yield the phone to your flat mates every 3 milliseconds. Now you could simply yield every 3 ms, or in a better organised flat you would ask your flat mates every 3 millis if anyone else wants to use the phone; if no one does, then carry on talking to your girlfriend.

The bottom line is that it is more efficient to :

  1. GetDOSVar the address of the yield flag
  2. do your thing for 3 millis
  3. Check the yield flag
  4. If the flag is clear goto 2)
  5. Yield
  6. Go to 2)

This is because when you yield (or TCYield), your process is descheduled and the kernel then schedules the next highest priority task. If your task is the next highest priority task it will be scheduled again, which is an inefficient way of doing things.

When to use the DONE bit

Question: When is there a need to return from the strategy function without setting the DONE bit? (There must be because otherwise the bit would be useless).

Answer: The DONE bit provides a simple & easy (to implement) way to manage asynchronous events. The following pseudo code illustrates the two options :

1) Using the DONE bit 2) Using BLOCK/UNBLOCK helpers.

Using the DONE bit

 strat rtn
       do something
       set up an int handler
       put the request packet into a data structure somewhere
       return with done bit not set
               the kernel will now block the calling thread

 time passes ....

 more time passes ....

 int handler
       get the previously saved request packet
       do what ever you have to do (the request packet contains info
       on the calling threads read/write buffer etc.)
       invoke devdone helper using the es:bx address of the request packet
       ret
               the kernel will now UNblock the calling thread

Using the BLOCK/UNBLOCK helpers

 strat rtn
       do something
       set up an int handler
       put the request packet into a data structure somewhere
       invoke the block helper on the calling thread using es:bx address of
            the request packet as the block id
               ... the thread is now blocked until int handler unblocks it
       ...
       YOU CAN NOW DO SOMETHING HERE
       ...
       return with done bit set

 time passes ....

 more time passes ....
 int handler
       get the previously saved request packet
       do what ever you have to do (the request packet contains info
       on the calling threads read/write buffer etc.)
       invoke unblock helper using the es:bx address of the request packet
       ret
               the kernel will now UNblock the calling thread

As you can see the only functional difference between the two solutions is that if YOU do the block/unblock you can do some processing at the end of the strat routine. If you do not need to do processing at the end of the strat routine then exiting with done bit clear is the simpler solution.

When does the system change the running thread

Question: I am confused about blocking & yields. If a device driver never issues these calls, who does and when. How does this affect the thread that is currently scheduled and descheduled.

Answer: The way it works is this.

The current thread is only descheduled under certain circumstances, these are called scheduling opportunities and here they are :

1) In response to the Yield helper 2) In response to the Block helper 3) Ring 0 to Ring 2 & 3 transitions

Notice I have not included exiting a driver with the Done status bit not set, because all returns from your driver (see 3) above and the next paragraph) are scheduling opportunities, including the not-done exit.

The Ring 0 to Ring 2 & 3 transition occurs after every entry to your driver, including interrupts & traps, BUT if you get an interrupt when you are already at Ring 0, no descheduling opportunity will take place.

The overhead of blocking high priority threads

Question: I have a number of questions.

Q1. Does blocking a high-priority thread impose more load on the system than blocking a low-priority thread ?

Q2. When a thread is run by a device driver, does the priority of the thread determine how quickly that thread is scheduled ?

Q3. The NETBIOS command completion is scheduled using semaphores - when a semaphore is cleared, does the priority of the thread waiting on the semaphore determine how quickly that thread is scheduled ?

Q4. Perhaps I should change the priority of the rx_thread dynamically, depending on the level of the rx queue, i.e. decrease its priority as the queue fills up ?

Answer: Yes, yes, yes, Perhaps.

Time critical threads will be serviced 8 milliseconds after an unblock. Normal priority threads take 32 milliseconds. Basically the timer resolution is increased by a factor of four when you block a Time Critical thread. It's a bit like saying, "this is a really important job, respond the moment it unblocks". This will impose some overhead on the CPU, but not much.

As far as dynamically changing priority goes, I would not bother. If you are really concerned, invest time in benchmarking and measure EXACTLY what the overhead is. Then worry about optimising. Optimisation without some metrics is rarely useful and usually counter-productive - but that's just my opinion.

Using Request Packets

PUSH request packet will not work

Question: I am using the PushRequestPacket helper at strategy time, yet whatever I do it always returns error.

Answer: No, it does not.

The real gotcha with these push/pull helpers is that push DOES NOT set/clear the carry flag on completion, so do not test it to see if the call worked. I suspect you ARE using the push (correctly) at strategy time (where it is meant to be used) but testing the result of the operation, which YOU SHOULD NOT DO.

The upsetting thing about push/pull request packet helpers is that they are inconsistent. They are the only helpers that do not return an error code and they should be regarded as special cases.

Pushing request packets at interrupt time

Question: I want to use the PushRequestPacket helper at interrupt time, yet it does not seem to be supported. Why is this & how do I get round it.

Answer: The question is "why do you want to push request packets in an interrupt ?". For argument's sake we will assume you have a good reason.

It was probably felt by the original designers that there was no reason for developers to want to push request packets so they made the arbitrary decision not to support it. Think of it as a design limitation (not a bug).

To code around the problem, simply write your own function. Appending a request packet onto the end of a request packet queue is trivial to implement.

What is the length of a request packet

Question: Different driver commands use different size request packets. When I use the DevHelp_AllocReqPacket call, what size request packet does it create.

Answer: All request packets are allocated from the same pool of memory and they are all (currently) 32 bytes long.

The data areas which are unused at the end of the request packet can be used by the driver to pass parameters into interrupt routines. The elegance of this technique is that it simplifies writing re-entrant drivers. This is because all request packet specific values can be stored in the request packet.

Do I need to Lock the memory for a request packet

Question: I am using the push & pull request packet helpers to pass request packets into an interrupt routine. Do I need to lock the request packets in memory to prevent them being swapped out.

Answer: No, all request packets are allocated from the same pool of memory and this pool will always be resident at the same physical location.

Distinguishing request packets from multiple file handles

Question: I am writing a driver that may be called multiple times by multiple threads and processes. How should the driver keep track of each opened file handle.

Answer: The Open, Close, Read & Write request packets all contain a field identifying the unique file handle. This field is called the System File Number or SFN.

union
{
   struct
   {                                 /*  READ, WRITE, WRITE_VERIFY */
        UCHAR      media;            /* media descriptor     */
        PHYSADDR   buffer;           /* transfer address     */
        USHORT     count;            /* bytes/sectors        */
        ULONG      startsector;      /* starting sector#     */
        USHORT     rwsfn;            /* system file number   */
   } ReadWrite;                      /* available: 6 bytes   */
   struct
   {                                 /* IOCTL */
        UCHAR      category;         /* category code        */
        UCHAR      function;         /* function code        */
        FARPOINTER parameters;       /* &parameters          */
        FARPOINTER buffer;           /* &buffer              */
        USHORT     iosfn;            /* system file number   */
   } IOCtl;                          /* available: 7 bytes   */
   struct
   {                                 /* OPEN/CLOSE */
        USHORT     ocsfn;            /* system file number   */
   } OpenClose;                      /* available: 17 bytes   */
};

A collection of commonly asked questions about the IOCTL interface

Why does my driver receive a PRT_ACTIVATEFONT IOCTL request

Question: It appears that the device driver always receives the PRT_ACTIVATEFONT IOCTL when the application does an OPEN, even when the device header says it doesn't support the OPEN command. Am I doing something wrong, or is this just a fact of life?

Answer: Fact of life, just return an unrecognised command error code 0x8103 & it will go away.


How should I open a block device

Question: I want to perform some low level IOCTL commands on the floppy drive. What file name should I use to open this type of driver ?

Answer: "A:" et seq.

My comms software works on some machines but not others

Question: When communicating to a "black box" over COM1 on an IBM Mod 77, 90, 95 (tried them all !!), I can DosWrite a few bytes --> finito. DosRead-ing something with another thread doesn't work at all.

SAME Program, SAME black box, SAME Cables 'n'stuff on a IBM Thinkpad, IBM ValuePoint (tried them all again...) --> works SUPER fine.

Seems like some handshake problem...

I want the program to work on ANY machine, so what is different on IBM Mod 77, 90, 95 COM ?

Answer: Use the IOCTLs to set the handshaking.

 ASYNC_SETDCBINFO
 flags1 = 0x08;
 flags2 = 0x40;
 ASYNC_SETMODEMCTRL
 mask_on = 0x02;
 mask_off = 0xFF;

By default, DTR is initially set on MCA driver opens but NOT on ISA driver opens...

How do OS/2 drivers work with DOS apps that are raising IOCTL requests

Question: I have a DOS app that uses IOCTL requests to communicate with a DOS driver.

What will happen if I try & run this application and driver under OS/2 ?

- a DOS application opens a character device driver and gets a handle to it - then it fills in the following values to the registers

   ah    = 44h
   al    = 03h (send control string)
   bx    = handle
   cx    = number of bytes to transfer
   ds:dx = segment:offset of data buffer

and issues an Int 21h

- a DOS device driver will receive these values via the DOS kernel thru the device driver function 0Ch

If this DOS application now runs in a DOS box and instead of the DOS driver there is an OS/2 PDD, what kind of request packet will an OS/2 PDD receive?

Answer: A generic IOCTL function 10h. The only difference is that you can also use :

      si:di = segment:offset of parameter buffer

The addresses passed thru to the driver will be virtual addresses (not physical).

Your parameter usage tho' is wrong/inappropriate for the GIOCTL command

ah = 44h       - correct
al = 03        - ?  I use 0c, I cannot remember why tho' I guess 3 is OK
bx = handle    - correct
ch = category
cl = function
ds:dx = segment:offset of data buffer

It is important to set up cx, as OS/2 PDDs get lots of IOCTLs from lots of places (some pretty strange). As a convention, category & function codes are used to identify the IOCTL command.

In general you should find that ALL your old DOS apps which use standard DOS IOCTLs will still work. Internally, those IOCTLs will be translated by their respective drivers.

If you are using software with non-standard (hardware specific) IOCTLs, these cannot be translated, & you will need to provide a device driver that will recognise them.

Can a device driver report back length information in an IOCTL request

Question: DosDevIOCtl2() takes the address of the data and parameter buffer sizes, which implies that the values can be changed on return - does the PDD have a way to change these values ?

Answer: There is nothing in my docs about drivers being able to change the length fields. My suspicion is that it is a design flaw in the API function.

In the past I have found driver data structures that 'could' be changed, only to find in subsequent releases that they could not. So my advice is to assume that the length fields are read only (but I may be wrong).

Using OS/2 IOCTLs from DOS & Windows apps

Question: I have a Win OS/2 app that can talk directly to the PDD by opening the device name.

eg, in "C"          dev = open( "DEVNAME" );
then          bytecount = read( dev, ... )
and           bytecount = write( dev, ... );

You can also do ioctl( dev, ... ) but I haven't found a way to control the category, and in general IOCTLs don't seem very satisfactory.

Answer:

DOS apps (& Windows is still a DOS app) have access to OS/2 drivers (when running under OS/2). In order to invoke an IOCTL call to an OS/2 driver your DOS app should raise an int 21h call. I use an assembler routine to do this :

       TITLE   devioctl.asm
       NAME    devioctl

DEVIOCTL_TEXT           SEGMENT
       ASSUME          CS: DEVIOCTL_TEXT

       PUBLIC          _devioctl
_devioctl       PROC FAR
       push    bp
       mov     bp,sp

;       OFFSETS OF INPUT PARAMETERS RELATIVE TO BP
;
;        buffer = 6     - Data buffer containing parameters
;        funct  = 10    - Function being performed
;        class  = 12    - Category of device
;        hand   = 14    - File handle of IOCTL device

       push    ds

       mov     cx,WORD PTR [bp+10]     ;funct
       mov     ax,WORD PTR [bp+12]     ;class
       mov     ch, al                  ;ch = category, cl = function
       mov     bx,WORD PTR [bp+14]     ;hand
       mov     dx,WORD PTR [bp+6]      ;buffer lo address
       mov     ax,WORD PTR [bp+8]      ;buffer hi address
       mov     ds,ax                   ;ds:dx -> data buffer
       mov     ax, 0440ch              ;generic IOCTL

       int     021h

       pop     ds
       mov     sp,bp
       pop     bp
       ret

_devioctl       ENDP

DEVIOCTL_TEXT        ENDS
END

I've extracted some info from Ralf Brown's interrupt list that documents the int 21 function 44 :

-----D-21440C-----------------------------
INT 21 - DOS 3.2+ - IOCTL - GENERIC CHARACTER DEVICE REQUEST
        AX = 440Ch
        BX = device handle
        CH = category code
           00h unknown (DOS 3.3+)
           01h COMn: (DOS 3.3+)
           03h CON (DOS 3.3+)
           05h LPTn:
           9Eh Media Access Control driver (STARLITE)
           00h-7Fh reserved for Microsoft
           80h-FFh reserved for OEM/user-defined
        CL = function
           00h MAC driver Bind (STARLITE)
           45h set iteration (retry) count
           4Ah select code page
           4Ch start code-page preparation
           4Dh end code-page preparation
           5Fh set display information (DOS 4+)
           65h get iteration (retry) count
           6Ah query selected code page
           6Bh query prepare list
           7Fh get display information (DOS 4+)
        DS:DX -> parameter block (see below)
        SI = parameter to pass to driver (European MS-DOS 4.0, OS/2 comp box)
        DI = parameter to pass to driver (European MS-DOS 4.0, OS/2 comp box)
Return: CF set on error
           AX = error code (see AH=59h)
        CF clear if successful
           DS:DX -> iteration count if CL=65h
           SI = returned value (European MS-DOS 4.0, OS/2 comp box)
           DI = returned value (European MS-DOS 4.0, OS/2 comp box)
Notes:  bit assignments for function code in CL:
           bit 7: set to ignore if unsupported, clear to return error
           bit 6: set if passed to driver, clear if intercepted by DOS
           bit 5: set if queries data from device, clear if sends command
           bits 4-0: subfunction
        DR-DOS 3.41 and 5.0 return error code 16h on CL=45h,65h if the device
         does not support a retry counter
SeeAlso: AX=440Dh,INT 2F/AX=0802h,INT 2F/AX=122Bh,INT 2F/AX=14FFh
SeeAlso: INT 2F/AX=1A01h

Defining and Using Semaphores

How to use a semaphore at interrupt time

Question: I have a question about using semaphores to make a physical device driver communicate with an application. The application process creates an event semaphore and passes the semaphore handle to the device driver in a generic IOCtl call.

The dd uses PostEventSem when it has something for the application, and the application, which is hanging on a DosWaitEventSem, takes over. Everything is working just fine.

BUT when I'm using PostEventSem at interrupt time OS/2 gets very angry. I don't like it, but it's ok 'cause PostEventSem cannot be used at interrupt time.

Back in message #218 you said it is possible to get an event sem to work at interrupt time. How is this done ?

Otherwise, is it possible to create a second thread inside the DD which could post the semaphores, and then control this thread via system semaphores at interrupt time. If yes - how ?

Answer: Yes, it is possible to get an event sem to work at interrupt time, and you do it by creating a second thread.

  1. Create a context hook, using the AllocateCtxHook helper. You can do this at strategy time or init, but I'd suggest doing it as part of the initialisation process. The AllocateCtxHook helper allows you to specify the address of your ctx hook handling function. When you do this you place the 16 bit offset address of your hook handler into the 32 bit eax register. It is important that the high order address bits are zeroised (I guess this is for future compatibility with 32 bit drivers).
  2. To deinstall a driver with an allocated ctx hook use the FreeCtxHook helper
  3. Place your event sema4 stuff in the ctx hook function. NB Hook functions MUST save & restore all registers on entry & exit.
  4. To pass control into your strategy time hook handler from an interrupt time context, simply use the ArmCtxHook helper. And exit the interrupt handler.

Effectively what is happening when you arm a hook is that you are creating a mega high priority (strategy time) thread. At the VERY NEXT scheduling opportunity the ctx hook will be called and simultaneously disarmed. When control passes into the hook function you can then do the things you wanted to do within the interrupt handler but with the limitations of a strategy time context.

The DS SS & CS registers will all be set as they would in strategy time for the driver.

What types of semaphore work with device drivers

Question: I am passing semaphore handles to my driver as parameters in IOCTL packets. What are the different types of semaphore that will work inside a device driver.

Answer: System semaphores (named semaphores) can be used at strategy time AND interrupt time. RAM Semaphores can only be used at strategy time. Event semaphores behave like RAM semaphores. And finally, you cannot use Mutex semaphores in drivers.


How should a device driver create a semaphore

Question: I would like to use a semaphore in my driver to communicate with an application. I have looked in the device driver references but I cannot find out how to do this.

Answer: Device Drivers cannot create semaphores. Instead you should create the semaphores inside an app. The app should then pass the semaphore handle into the device driver for registration.

What does semhandle do ?

Question: I have examined what happens when you use the SemHandle helper. It does not seem to do anything. Do I need to call it ?

Answer: Yes ! Semhandle has two important functions :

  1. Registering/deregistering access. Each semaphore has an in-use count field, used to indicate how many processes & threads are using the semaphore. When the in-use count drops to zero, the semaphore will be deleted.
  2. Converting the handle to system format. Named semaphore handles need to be converted to a different address format. RAM semaphore handles do not.

How do I create a system semaphore

Question: I would like to create a system semaphore to pass into a device driver. I have studied the manuals but I cannot find anything on system semaphores. How should I create a system semaphore.

Answer: System (named) semaphores are 16 bit objects. They are created by the 16 bit Dos16CreateSem API. Once created a system semaphore can be passed into a device driver for registration.

Restrictions on semaphore usage

Question: Are there any restrictions on using semaphores between a 32 bit application and an OS/2 driver ?

Answer: Yes. The 32 bit OS/2 APIs refer to MUTEX & EVENT sema4's. EVENT sema4's can be used within drivers BUT you must use the special event sema4 helpers. Of course, event sema4's are not supported in versions of OS/2 before 2.0.

Mutex semaphores are not supported by device drivers.

16 bit named and RAM sema4's can be used within drivers

Alternatively you could do ALL the work yourself with the BLOCK & RUN helpers

BLOCK stops
RUN   unstops.

You should understand that all the higher level synchronisation services, Timers, Sleeps, Semaphores are implemented using the lowly Block & Run primitives.

How the stack works

Who owns the stack

Question: Who owns the stack of the driver at DosDevIOCTL time ?

Answer: The driver does. The system supports 4 stacks, one for each privilege level (ring)

RING                      STACKSIZE
ring3 application       - as requested by app
ring2 iopl              - 512 bytes
ring1 not used          -
ring0 drivers/kernel    - about 1800h bytes

When control passes into the driver it has its own private stack. The only thing that can grab the stack is an interrupt, so you must always be wary of interrupts eating all of your Ring 0 stack. There is a helper service for specifying/limiting the maximum number of interrupts coming in on your stack.

Stack based RAM semaphores seem to cause problems

Question: When I declared a structure local to main() which contained a semaphore handle, and passed it by reference to a secondary function which opened/created the semaphore and passed it to the driver, I received an error 87 (invalid parameter) from DosDevIOCtl. When I declared the struct as static, i.e. not stack based, the code worked correctly. The code was sent to the compiler writers, who could find nothing wrong with the C source and hinted that the fault was either in the driver or OS/2.

Answer: I guess you were using a RAM sema4. I did not know of the above problem. DosDevIOCtl calls are definitely allowed to pass parameters on the stack. The only thing that could have caused the error result is your driver. So to investigate further, follow the IOCTL into the driver and find out exactly which helper is failing on you.

I do think your suggestion that a RAM sem should not be on the stack is probably correct. But there are only two ways to find out: RTFM, and try it out with KDB. I don't remember reading about it in the manual so ....

Creating a larger stack

Question: Can you tell me a way to get a bigger stack during initialization time of my self-written Physical Device Driver (PDD) ? I can't find any information in the books.

Answer: Why do you need a larger stack ? Creating a larger stack is likely to be a technical nightmare; I'd suggest that there may be an easier way to crack this particular nut.

The only way would be to create your own stack segment in the driver at run-time. You would have to be very careful to ensure that the original stack is preserved & restored on entry & exit to your driver and you can expect some problems with some of the devhelp calls.

Similarly you would need to disable all interrupts, as if one occurred on your "fabricated" stack it would almost certainly crash the system.

Basically the bottom line is you cannot do it - but if you try it, it "might" work.

My advice is "Do not waste your time trying to get it to work".

Using DMA

Locking DMA addresses

Question: If I allocate a buffer in the driver's one and only data segment, then I can be sure the buffer is always in memory. To make sure it doesn't move during the DMA initialisation and execution, I can use DevHlp to lock it. Question is: I understand that the DMA controller can only address the bottom 1 Meg of memory,so how do I make sure my buffer is locked down there? I believe there is also a restriction that limits DMA to 64k boundaries, so I must make sure the lock keeps the buffer in a single 64k segment.

Answer: DMA allows you to define a memory transfer operation; the addresses supplied MUST be physical addresses. A physical address, by definition, cannot move and does not need to be locked down - it is wired to a physical RAM chip.

The address limitations of DMA are the addressing capabilities of the controller: the DMA controller chip can address up to 16MB.

One memory transfer can move up to 64KB of RAM; some of the DMA channels (channels 5, 6 & 7) transfer words and so can move up to 64K WORDS (128KB). The only alignment issue I'm aware of is that word transfers have to align on word boundaries.

Aligning DMA buffers on 64K boundaries

Question: I believe there is also a restriction that limits DMA to 64k boundaries, so I must make sure the lock keeps the buffer in a single 64k segment - How

Answer: There is no documented way to do this. A recommended solution is to allocate a buffer twice the size that you actually need and then use the part of the buffer that does not cross a 64K boundary.

Alternatively, split it into two DMA requests. One recommended way of doing this is:

  1. Stuff your initial request packet into a queue, with the length field set to copy only within the first 64K.
  2. Create a new request packet (AllocReqPacket) and set it up for the tail end of the transfer. Stuff that into the queue.
  3. Repeat step 2 as many times as necessary.
  4. Start the first transfer.
  5. The interrupt handler receives the completion interrupt.
  6. Start the next transfer from within the ISR using the request packet queue.
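The chunking in steps 1-3 amounts to splitting the transfer at 64K boundaries. A sketch of that arithmetic (the helper is hypothetical, not an OS/2 API):

```c
#include <stdint.h>

/* Split a transfer starting at physical address 'phys' of 'total'
 * bytes into chunk lengths that each stay within one 64KB page.
 * Writes the lengths into lens[] and returns the number of chunks. */
int dma_split(uint32_t phys, uint32_t total, uint32_t lens[], int max_chunks)
{
    int n = 0;
    while (total > 0 && n < max_chunks) {
        uint32_t room = 0x10000UL - (phys & 0xFFFFUL); /* bytes to boundary */
        uint32_t len  = total < room ? total : room;
        lens[n++] = len;
        phys  += len;
        total -= len;
    }
    return n;
}
```

Each length would go into the length field of one queued request packet.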

The only thing to worry about is running out of request packets. I've done tests, and after allocating 87 or so the system keels over. So be very careful to:

  • Not allocate too many request packets
  • Free them up when they've done their job

Accessing RAM for DMA

Question: I am developing a driver for an A/D card which does DMA transfers. The problem I have is with the read request where the DMA transfer is to be performed. As you may know, the physical buffer for the DMA transfer must:

  • Reside below address 0x1000000 (24-bit address space).
  • As I understand it, be entirely within a single physical (64K) page, i.e. the buffer must not cross a 0xXX0000 boundary. This is a consequence of the 8237A only having a 16-bit address register, the DMA page select being a separate 8-bit register.

The question is whether the physical address parameter passed in the read request packet satisfies these conditions, and if not, what is the recommended way to obtain a buffer that does.

Answer: I am not sure - the decisions being made as to what does and does not go above the 16MB line seem to be a little arbitrary. I'd recommend using the capabilities bit strip to indicate to the kernel that your driver does not support addressing above 16MB (its bit 1 set to zero). That way you can use the read/write physical buffer directly, as opposed to creating a working < 16MB buffer to copy your read/write data into prior to the DMA.

DMA channels 5-7 support 16-bit data transfers, so theoretically you can set up a 128KB transfer, but I'd suggest it'll be a lot easier just sticking to 64KB.

Sharing DMA completion interrupt

Question: The other question I have concerns the DMA completion interrupt. How is this shared between devices using the various DMA channels? This is of particular concern because reading the 8237A status word clears the terminal count status bits (Intel, Microsystem Components Handbook Vol 1). If one cannot access the interrupt, then one has to monitor the terminal count using a timer, which is somewhat inelegant.

Answer: Mask out the DMA controller so that only your device has access to it (i.e. as per Mastrianni). Then, when the DMA interrupt goes off, you will know it can only be for your channel.
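Masking a channel on the 8237A means writing its single mask register (port 0x0A for channels 0-3, port 0xD4 for channels 4-7, which the second controller numbers 0-3 again). A sketch of the byte you would write - the helper name is illustrative:

```c
#include <stdint.h>

/* Illustrative helper: compute the byte written to the 8237A single
 * mask register to mask (disable) or unmask a DMA channel.  Bits 0-1
 * select the channel on its controller; bit 2 set means "mask". */
uint8_t dma_mask_byte(int channel, int mask)
{
    uint8_t bits = (uint8_t)(channel & 0x03);  /* channel select bits    */
    if (mask)
        bits |= 0x04;                          /* set = mask the channel */
    return bits;
}
```

The actual port write would be done from IOPL or ring 0 code, as discussed earlier in this FAQ.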

DMA and SCSI devices

Question: Does anyone have any advice to offer on the recommended method for DMA transfer from a SCSI device? I would particularly like advice on buffer allocation and locking using virtual memory and 32-bit support functions. Does anyone have a library of these device driver support services? Any advice is gratefully received.

Answer: The easy way to write a SCSI driver is to use the Advanced SCSI Programming Interface (ASPI) driver (or maybe it's the Adaptec SCSI ...). In any case, it is a driver produced by Adaptec that you talk to, and it does all the hard SCSI work for you.

There are two ways you can use it:

APP <------------> ASPI driver

This has limitations, as it does not support transfer of data, but it can be used to send simple commands like eject page, query ...

The real way to use ASPI is

APP <------------> Your driver <------IDC link-------> ASPI driver

It is actually very simple to do; Adaptec have example drivers and specs for the ASPI interface on their BBS.

Defining and Using memory

Accessing memory on adapters

Question: I have a serial interface, with dual-ported RAM, which needs to be mapped into the LDTs.

When I map this interface under 1MB, using the DevHlp routines PhysToUVirt and UnLockSeg, everything is OK. When I try to map it at 15MB (in a computer which has only 8MB) it fails. Do I have to do something other than using the above DevHlp routines in order to map the interface?

Answer: The phys2uvirt is failing because the ram is outside the system memory area.

There are two quick ways of solving this problem. If the memory region is > 64KB and you will NOT be supporting OS/2 1.x, you should consider using the VMAlloc helper.

Alternatively you could use the GDT helpers to create a ring 0 selector:

 AllocGDTSelector
 PhysToGDTSelector


Does a device driver have its own memory space

Question: I do not understand where a device driver gets its memory from and how it should share it with other processes, especially at interrupt time.

Answer: At initialisation time the driver tells the operating system how much memory it requires for its data and code. The kernel then reserves that amount of storage for the driver, and the driver's image is copied into this space.

In the strategy routine, the driver is running in the memory context of the application that made the device driver call, so it shares the same LDT, which gives it access to the application's memory space. It can access the memory as 16-bit memory via the LDT, or it can access the memory space using a 32-bit flat selector.

The CS, SS and DS of the driver are all GDT selectors, so they are accessible to the driver at both kernel AND interrupt time. The driver loads an application LDT selector into its ES (for example) in the strategy routine, so it can access the application's memory.

So, in answer to your question, the driver does NOT have a different memory space. Instead the driver has access to the calling application's address space, as well as its own.

At interrupt time, the memory context will be effectively random - whatever was running when the interrupt happened.

Since you lose addressability of the process space at interrupt time, there needs to be a way of passing data to the interrupt handlers.

There are two ways to do this :

  1. Place the data in the device driver's data segment. This will be accessible at interrupt time.
  2. Place the data in a request packet. Enqueued request packets are accessible at interrupt time, so you can use request packets as storage buckets for small amounts of data.

The beauty of using the request packets to hold data is that it makes writing re-entrant code very simple.

Can a driver access an applications variables at interrupt time

Question:

Answer:

Which LDT is used at interrupt time

Question:

Answer:

Should I Lock memory before VerifyAccess or after ?

Question:

Answer:

Preventing memory conflicts

Question:

Answer:

Using addresses supplied with READ/WRITE command

Question:

Answer:

Problem allocating 64K selectors

Question:

Answer:

What is the TLB

Question:

Answer:

Self modifying code

Question:

Answer:

32 Bit issues compiling/linking/running

How time works

Using interrupts

Shutting down OS/2 gracefully

What differences are there between the versions of OS/2

Debugging tips, tricks and treats

A collection of tools & information on tools for device driver authors

General useful information and support

Other sources of information and support

Download