Avoiding the Build Lab Blues

by Gregg Mason and Yvonne Nonte Stoetzel

Are you tired of those seemingly endless late nights trying to get your code to build? If you are, this article is for you. Based on years of experience building OS/2, we have discovered that if you don't design, maintain, and enforce an organized build process, you are doomed to the Build Lab Blues. Some of the most frustrating build and integration problems can be remedied quickly by keeping just a few things in mind. In this article, we share some of our time-proven ideas for success: library control and problem tracking, public header files, common tools, and the secrets of makefiles.

A reliable library control system is a must for creating a definable, repeatable build. It lets you recall previous versions of a file at any given time, which can be a real time saver when debugging potential problems. A variety of source code library control systems are commercially available for shops big and small. If your shop is small to medium, you might want to investigate LAN-based systems such as PVCS by INTERSOLV, MKS RCS by Mortice Kern Systems, or TLIB by Burton Systems Software. Large development shops (like our own OS/2 development) can use a client/server-based application like CMVC by IBM. However, if you are already running a VM mainframe, ISPF by IBM might be your ticket.
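
For example, MKS RCS follows the standard RCS command set, so recalling an earlier revision is a one-line operation. In this sketch (the file name and revision number are hypothetical), the first command checks in a change while keeping a read-only working copy, and the second retrieves revision 1.4 so an earlier build can be reproduced exactly:

    ci -u loader.c
    co -r1.4 loader.c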

A problem tracking system is also essential for change management of both problems and design changes. It allows accurate and effective problem reporting, fix details, file history, and integration and test resolution information. Several schools of thought exist regarding problem tracking systems.

  • A stand-alone database: For example, you can use a customized database running on FoxPro by Microsoft with the previously mentioned library control system, PVCS. Typically, this is a good, workable configuration.
  • An integrated library control and problem tracking system: This provides a more robust solution without the frustration of designing your own database. Primarily, it reduces the potential for error while providing the ability to recreate any given version of a build at any time.

CMVC provides both of these functions for us.

Header Files and Build Bats

IMAGINE THIS... it's 2:00 a.m., you have been building all day, you're 20 minutes away from going home, and then it happens: an abnormal termination error caused by a global header file change.

Short of pulling out the ol' BUILD BAT and using it on your favorite developer (that is, the buddy who broke the build), your only solution is to fix the header file error and start the build all over again from SCRATCH! For these reasons, IBM has adopted a strict, limited-access control process for public header files, which includes:

  • Identifying a single owner for all public header files and having that person control all file modifications. Uncommunicated changes to public header files can introduce some of the worst build problems.
  • Housing public header files in one central location for simplicity's sake. Software developers then associate public header files with that central location (see the makefile fragment below).
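
As a small illustration of the second point, a component makefile can reference nothing but the central header tree. The directory name below is hypothetical:

    # All public headers come from the one central tree.
    PUBHDR = \build\h
    CFLAGS = /I$(PUBHDR)

If the header owner changes a file, only the central tree changes; no component keeps a private copy that can fall out of date.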

Tooling Around

Have you ever used someone else's tool to build your own code and found a bug in that tool that you would love to fix? Then you find out that the developer who wrote the tool has long since left the company, taking the source code with him. If so, here's an idea for you...

At IBM, tools that are shared between components are called common tools. Implementing and maintaining a common-tools process is crucial to a successful development build environment for three reasons: compatibility, maintenance, and debug support. Some examples of candidates for common tools are compilers, assemblers, linkers, and utilities.

Common tools are either internally developed or commercially available. Internally developed tools should be checked into the library control system and built as part of a public build, before the product build cycle. For example, if a make utility is an internally developed tool, it is built in the environment first; then it is used to build the product. This is also known as bootstrapping (a sketch follows the pros and cons below). Commercially available tools can be handled in two different ways:

  • Use the library/problem tracking system to check in the tool binaries and extract them to the build machine.
  • Install the retail tools directly to the build machine, and maintain a master file in the library that describes retail product information. This provides interested developers with the information needed to purchase tools as required.

There are pros and cons to each of these methods. With the first method, you can always recall the appropriate level of a tool. Additionally, only the owner of the tool can determine the build tree location of the tool. The downside to this option is that if these tools are made generally available in this fashion, the development organization might need to purchase multiple or site licenses.

The second method is a good alternative if the tools are either excessively large or must be physically installed to a machine. This philosophy also gets you out of the tools distribution business. The method's downside is that the development group is at the mercy of what's available through normal retail channels.
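
To illustrate the bootstrapping idea mentioned above, here is a minimal master makefile fragment. The tool names and directory layout are hypothetical, and the recipe assumes an NMAKE-style make and an OS/2 command shell:

    # Build the internal make utility first, then use it for the product.
    all: tools product

    # Step 1: the existing system make builds the internal tool.
    tools:
            nmake /f tools\make\makefile
            copy tools\make\make.exe tools\bin

    # Step 2: the freshly built tool builds the product itself.
    product: tools
            tools\bin\make.exe /f src\makefile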

A Road Map

Another name for a makefile could be road map. Makefiles provide the directions your code takes on its path to becoming an executable. They are applied at two different levels in the build process: component and master.

For best results, component makefiles should include these basic elements:

  • Environment setup: Include environment-specific information (for example, paths to tools). This ensures that anyone can use the makefile without modifying their local build environment, which can be an invaluable asset to someone who needs to step into an unfamiliar component and squeeze in a last-minute fix.
  • Flexible Tree Structure: Accommodate a flexible development tree structure. For example, using macros to define relative paths to subdirectories lets the makefiles remain portable within the development environment.
  • National Language Support: Design a single component makefile that supports all product translations, for ease of understanding and use. For instance, base component function can be built into the product, while translated messages are brought in by the makefile at link time. This can be done by inserting country code flags inside the makefile.
  • Dynamic Linker Response Files: Use dynamically generated linker response files and definition files. This was the saving grace that let OS/2 2.0 translation builds meet their scheduled ship date, and it allowed a component to build different versions of an executable without modifying source code. (A sketch combining all four elements follows this list.)
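
To make these elements concrete, here is a minimal component makefile sketch. All tool paths, file names, the country code macro, and the response file layout are hypothetical, and the translated messages are brought in here as a country-coded resource file bound after the link:

    # --- Environment setup: tools come from known relative paths ---
    TOOLPATH = ..\..\tools\bin
    CC       = $(TOOLPATH)\icc
    LINK     = $(TOOLPATH)\link386

    # --- Flexible tree structure: relative paths through macros ---
    SRC = ..\src
    OBJ = ..\obj

    # --- National Language Support: the country code selects the
    # --- translated resources; build a translation with, for
    # --- example:  nmake COUNTRY=049
    COUNTRY = 001
    RES     = $(SRC)\res\sample$(COUNTRY).res

    sample.exe: $(OBJ)\sample.obj sample.lrf $(RES)
            $(LINK) @sample.lrf
            rc $(RES) sample.exe

    # --- Dynamic linker response file: generated, never hand-edited ---
    sample.lrf: makefile
            echo $(OBJ)\sample.obj  > sample.lrf
            echo sample.exe;       >> sample.lrf

    $(OBJ)\sample.obj: $(SRC)\sample.c
            $(CC) /c /Fo$(OBJ)\sample.obj $(SRC)\sample.c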

The master makefile is the superglue that holds the components together, joining compilations and linkages into a single product deliverable. To accomplish this, the master makefile uses a dependency list (for example, build header files before components). A nice feature to include in a master makefile is a clean function, which removes old or unwanted source and object code. Being able to capture component build output also helps to determine the cause of build breaks.
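
Along those lines, here is a master makefile sketch; the component names, tree layout, and log paths are hypothetical. Header files build first, each component's output is captured to its own log so build breaks are easy to trace, and a clean target removes old object code:

    # Public headers build first; components depend on them.
    all: headers kernel shell

    headers:
            cd h && nmake /f makefile > ..\logs\headers.log

    kernel: headers
            cd kernel && nmake /f makefile > ..\logs\kernel.log

    shell: headers
            cd shell && nmake /f makefile > ..\logs\shell.log

    # Clean function: remove old or unwanted object code and outputs.
    clean:
            del obj\*.obj
            del bin\*.exe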

More to Come!

Remember, designing, maintaining, and enforcing an organized build process gives you the keys to success and ensures that quality source code inputs result in quality executable outputs. Stay tuned to the next issue of The Developer Connection News for Ready, Set, Build.

Reprint Courtesy of International Business Machines Corporation, © International Business Machines Corporation