
OS/2 Routes - Network Protocols

Written by Tom Brown



Welcome to OS/2 Routes. The importance of networking continues to increase in the information technology sector, to the point that now even word processors incorporate special networking features. This entry- and intermediate-level column is dedicated to network related issues, so networking experts are unlikely to find any revelations on these pages. I will try to make the content and scope clear in the Topic section of each column so that the knowledgeable reader's time is not wasted.


This column will attempt to provide a general overview of networking protocols. Readers who are totally unfamiliar with networking will gain the most from this column.

What is Networking?

A network is a system that allows data to be directed to its destination by means of a unique identifier, or address. Machines on a network are often called nodes. Data is passed between nodes using packets. Here is an oversimplified view of a packet:
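In lieu of the original diagram, such a packet can be sketched as a simple record. The field names here are illustrative, not taken from any real protocol; Python is used only as sketch notation:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    destination: str  # unique identifier of the receiving node
    source: str       # unique identifier of the sending node
    data: bytes       # the payload being carried

# A node named "node-A" addresses some data to "node-B".
p = Packet(destination="node-B", source="node-A", data=b"hello")
```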

You will notice that this packet contains both a destination and a source address along with the data. It is also quite common to encapsulate one packet within another. This might look something like this:

Here, the data for the perimeter packet is the entire interior packet.
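That relationship can be sketched in code, reusing the same hypothetical record from before. This is a toy illustration: repr() stands in for whatever serialization a real protocol would use.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    destination: str
    source: str
    data: bytes

# The interior packet, addressed between the end hosts.
inner = Packet(destination="host-B", source="host-A", data=b"payload")

# The perimeter packet: its data field is the entire serialized
# interior packet, addresses and all.
outer = Packet(destination="router-2", source="router-1",
               data=repr(inner).encode())
```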


Network protocols are designed using a layered metaphor. Each layer adds value to the data as it passes through to the following layer. By convention, the application is the top layer and the physical layer is at the bottom. These layers are 'stacked' on top of each other like a layered cake. For a data send operation, the data begins in the application and each successive layer adds value to the data by making the data increasingly ready for presentation to the network media (the wire, fiber, air, whatever). A data receive operation begins at the network media and the data is progressively made ready for presentation to the application by stepping through the layers in the protocol. This construct is known as a protocol stack.

Here is an example of a protocol stack:


Let's assume that you are connected to the Internet right now using a modem. Your modem is likely to use an ARQ (Automatic Repeat Request) protocol to communicate with the modem at your service provider, and your computer is likely to use PPP to communicate with your modem. IP is the protocol you are using and TCP is the sub-protocol that is used for web access. What you have then, is TCP/IP over PPP using ARQ modems at the physical layer. Here is a crude idea of what this looks like from a protocol stack point of view:
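A rough text rendering of that stack, with each layer wrapping the data handed down from the layer above, might look like this (the bracket wrapping is purely symbolic, not a real frame format):

```python
# Layers from the example in the text, top (application) to bottom (physical).
stack = ["application", "TCP", "IP", "PPP", "ARQ (modem)"]

def send(data: str) -> str:
    """Walk the stack downward; each layer encapsulates what it is given."""
    for layer in stack[1:]:  # the application produces the data itself
        data = f"[{layer} {data}]"
    return data

print(send("GET /"))
# [ARQ (modem) [PPP [IP [TCP GET /]]]]
```

A receive operation would simply peel the brackets off in the opposite order.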

I don't want to focus on ARQ, but it is a legitimate protocol for networking and it makes a good starting point. I believe that my modem likes to use 128-byte packets. I can send megabytes of data through my modem, but with an ARQ connection it will be divided up into packets no larger than 128 bytes. Data is sent from the computer to the modem. As it is received, a small quantity of data and a CRC are put into a packet for transmission. The other side receives the packet and recalculates the CRC to test it against the CRC held within the packet. If the CRCs match, the packet is accepted and sent along to the remote computer. If the CRCs do not match, the packet is assumed corrupt and a retransmit request is sent back to the transmitting modem. Basically, the sending modem will divide data into small chunks for transmission and use packets to contain these chunks as well as CRC information. It might look like this:
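The chunk-and-checksum behaviour described above can be sketched as follows. The 128-byte chunk size is the one this article assumes, and zlib's CRC-32 stands in for whatever checksum a real modem actually computes:

```python
import zlib

CHUNK = 128  # the ARQ packet size assumed in the text

def to_packets(data: bytes):
    """Sender side: split data into chunks, pairing each with its CRC."""
    return [(data[i:i + CHUNK], zlib.crc32(data[i:i + CHUNK]))
            for i in range(0, len(data), CHUNK)]

def accept(chunk: bytes, crc: int) -> bool:
    """Receiver side: recompute the CRC and compare against the one sent.
    A mismatch means the packet is assumed corrupt and a resend is requested."""
    return zlib.crc32(chunk) == crc

packets = to_packets(b"x" * 300)  # 300 bytes -> chunks of 128, 128, 44
```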


Creating packets requires CPU time. In the web browser example, the server sends data to a TCP socket. TCP creates one or more packets. These packets are passed to IP, which encapsulates them within IP packets. The IP packets are passed to PPP, which further encapsulates them before sending them to the modem, where each PPP packet will turn into one or more ARQ packets. The ARQ packets are created by the modem and are therefore coprocessed: they add no load to the host CPU, though they do still increase latency. All of the other packetization is overhead that the CPU has to handle. IP stamps each packet with a header checksum and PPP stamps each frame with an FCS (frame check sequence). This means that each packet goes through three layers and two check sequences before it is presented to the serial subsystem (which also requires CPU time). All of this overhead adds up.
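To get a feel for the fan-out at the bottom of the stack: a 1500-byte PPP frame (the common size discussed in the MRU paragraphs below) handed to a modem using 128-byte ARQ packets becomes a dozen packets on the wire.

```python
import math

PPP_FRAME = 1500   # a typical PPP packet size
ARQ_CHUNK = 128    # the modem packet size assumed in this article

# One PPP frame fans out into several ARQ packets at the modem.
arq_packets = math.ceil(PPP_FRAME / ARQ_CHUNK)
print(arq_packets)  # 12
```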

Packetization overhead can be reduced in a private network by setting the MRU size as large as possible. With OS/2 PPP, this is 5120 bytes. In situations where you are able to specify the MRU/MTU from end to end, throughput can be maximized by simply raising the MRU/MTU at all points. Remember this if you ever want to connect a couple of PCs using a null modem or a parallel port connection. The trade-off is that maxing out the MRU/MTU makes interactive traffic seem more sluggish: your telnet sessions will not be as snappy as they are with a small MRU/MTU when the line is also busy with a session capable of generating large packets. For most, this will be a small price to pay. Telnet will still function at the same speed as it used to when the line isn't busy with a bunch of other stuff.
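Some rough arithmetic shows why bigger packets win on throughput. The per-packet header costs below are typical minimums (about 20 bytes for a TCP header, 20 for an IPv4 header, 8 for PPP framing), not exact figures for every configuration:

```python
# Illustrative fixed cost per packet: TCP + IPv4 + PPP framing.
OVERHEAD = 20 + 20 + 8  # 48 bytes

for payload in (64, 512, 1452, 5072):
    total = payload + OVERHEAD
    print(f"{payload:>5}-byte payload -> {OVERHEAD / total:.1%} overhead")
```

The fixed cost is the same per packet, so the bigger the payload, the smaller the fraction of the line spent on headers.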

Increasing the MRU in a public network will not tend to help, because this sort of thing is negotiated at connect time and there is a high likelihood of your provider using a 1500 byte MRU on the other end of the PPP connection. PPP will negotiate the packet size with the other end by using the smallest MRU of the two nodes.
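The outcome of that negotiation amounts to taking the smaller of the two ends' MRUs. This is a sketch of the result, not of the actual PPP handshake:

```python
def negotiated_mru(local: int, remote: int) -> int:
    """PPP ends up using the smaller of the two ends' MRUs."""
    return min(local, remote)

# OS/2's 5120-byte maximum against a provider's typical 1500 bytes:
negotiated_mru(5120, 1500)  # -> 1500: the provider's MRU wins
```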


There's not much more to say about networking in general. Networks are made up of nodes, nodes communicate by sending packets, and packets are defined by protocols. Various protocols use packets in different ways to accomplish different goals. These will be discussed in my next column.