C is a low-level language -- some people sneer at it as merely a glorified form of assembler, as it does not provide support for higher levels of abstraction. This low-level focus means that hardware-dependent details are allowed to affect the implementation of many language features (most noticeably: is an int an 8-bit, 16-bit, 32-bit or 64-bit quantity?). I find this level of operation useful, particularly in my field of embedded systems, as price/performance trade-offs are very important and the low level permits better control of this trade-off. For example, try telling a hardware manufacturer who makes 100,000 units of a product a year that you need a $40 RAM chip instead of a $10 RAM chip because the swish high-level language you use gobbles up memory -- and see how much money suddenly becomes available for you to reduce memory consumption by using a less sophisticated language. Even though hardware is getting cheaper all the time -- Moore's Law certainly applies -- these trade-offs remain important.
While there are certainly places in the code where the low-level control is very valuable, much of the code in any project doesn't have such demanding requirements, and making the code reusable is more important than optimising for the hardware. In this case, C's low-level focus becomes a liability rather than a help. However, C provides facilities to build higher-level abstractions on top of its basic primitives, and so the designer is free to choose abstractions that help promote code portability. The CompDef module (compdef.h) collects together a set of definitions that help the coder write more readable and portable code. It provides data types designed to isolate the code from the hardware, and helper macros to allow some simple constructs to be written more directly.
Implementing software components should be similar to implementing hardware components. In the hardware world, the interface is usually expressed as a set of registers, with controls for the various hardware components tightly packed together -- often four or five separate fields appear in a single 8-bit register. These fields require bit-level addressing. However, the bit-field primitives provided by C are unacceptable here, as many of these registers also use the sequence of reads and writes to act as commands, and C's bit-fields do not give the programmer this level of control. When working with bits in C, writing BIT6 or BIT13 is easier to understand than 0x40u or 0x2000u. CompDef defines BIT0 to BIT31, with each number made explicitly unsigned, so that sign-extension artifacts do not disrupt the meaning of expressions such as "BIT31 >> 4".
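A minimal sketch of how such bit names might be defined (the exact Grouse definitions may differ, but the text fixes the names BIT0 to BIT31 and the unsigned suffix):

```c
/* Bit names with explicitly unsigned values, so that expressions
   such as BIT31 >> 4 are not corrupted by sign extension. */
#define BIT0   0x00000001u
#define BIT1   0x00000002u
#define BIT6   0x00000040u
#define BIT13  0x00002000u
#define BIT31  0x80000000u
/* ... BIT2 to BIT30 follow the same pattern ... */
```

Because BIT31 is unsigned, "BIT31 >> 4" is a logical shift yielding 0x08000000u; had the constant been a signed 0x80000000, the result would be implementation-defined.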
We define TRUE and FALSE as 1 and 0, as the names let the reader know that we're making a very simple statement, whereas using 0 and 1 themselves leaves open the question of whether we might use 2 or any other number at some point.
We define NULL as ((void *) 0), so that this name is explicitly reserved as a "pointer to nothing". We also define NUL in ascii.h to mean "the character '\0'", and we write 0 when we want "the number zero", so the usage within the Grouse sources is extremely careful and clear. Others define NULL as 0 and use it in wider contexts to mean either a 0 pointer, a 0 character or a 0 integer -- in these cases, the Grouse definition of NULL will (sadly) need to be overridden.
In addition to NULL, we define NIL (more or less taken from Pascal) to mean "Invalid pointer". So using NULL means "There's nothing there, but it's reasonable for you to be looking", whereas NIL means "This pointer is invalid, and you should've known that from other information: There's nothing there, but it's an error for you to be looking". This distinction is very valuable in helping debug complex pointer operations, especially across interfaces. Using NIL as well as NULL allows you to do a better job of slicing the program into components: NIL is a very important component of interface engineering: The interface must support execution by buggy development applications as much as it supports fully-debugged code in the final system. (Incidentally, the Grouse code could certainly be improved by the use of assertions, as this is another way of improving the clarity of the interfaces.)
Finally, declaring NULL as ((void *) 0) ties the definition to the data pointer representation, and the result can be incompatible with a null function pointer under some memory models (where data and function pointers differ in size or form). The definition NULLFUNC is provided to patch this deficiency in the definitions.
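The definitions described above might look like the following sketch; the bit pattern chosen for NIL here is an assumption (any recognisably-invalid address would serve), and NULLFUNC's exact function-pointer type is likewise illustrative:

```c
#define FALSE     0
#define TRUE      1

/* NULL: "pointer to nothing" -- explicitly a pointer, not an integer. */
#undef  NULL
#define NULL      ((void *) 0)

/* NIL: "invalid pointer" -- marks a pointer that should never be
   followed.  The all-ones value here is an illustrative assumption. */
#define NIL       ((void *) -1)

/* NULLFUNC: null function pointer, for memory models where data and
   function pointers are not interchangeable. */
#define NULLFUNC  ((void (*)(void)) 0)
```

A debugger (or a memory-protection fault) then distinguishes "nothing there, legitimately" (NULL) from "you should never have looked" (NIL) at a glance.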
The type definitions in CompDef look very simple but are in fact quite sophisticated. The intent is to divide the integer type up firstly into functional categories, and secondly to provide control over size within each category. This division is not perfect -- some components are perhaps biased towards 16-bit systems -- but overall the types work extremely well.
The first division is by function: We divide the use of integers into the following categories:
· Characters -- Plain, Signed and Unsigned
· Numbers -- Signed and Unsigned
· Bit collections
We use the base names BOOL for boolean, and INT and UINT for numbers. For the bit collections, we define BYTE, WORD and LWORD. In the case of characters, a careful reading of the ANSI C standard shows that CHAR must be treated as a separate type from SCHAR and UCHAR. Apologies to the purists who believe that all-capitals should be reserved for preprocessor symbols; all I can do is humbly note that typedef is very like #define. For example, in the K&R description of typedef, we find: "In effect, typedef is like #define, except that since it is interpreted by the compiler, it can cope with textual substitutions that are beyond the capabilities of the C macro preprocessor."
Note that we are very interested in making statements about how each type functions. For example, adding 1 to a bit collection doesn't make sense -- the semantics of two's-complement operations are important and useful, but are not appropriate everywhere.
We define minimum sizes that are permitted for these base types: BOOL (1 bit); CHAR/SCHAR/UCHAR (8 bits); INT/UINT (16 bits); BYTE (8 bits), WORD (16 bits) and LWORD (32 bits). Any code written using these types may assume that a type has at least the minimum number of bits, but cannot assume that it has more. Different versions of compdef.h will be required to cater for the different sizes supplied for int by compilers on different architectures, but that's not a problem.
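On a typical 32-bit platform, a compdef.h for these base types might read as follows; the particular mappings are per-platform choices for illustration, not fixed by the scheme:

```c
/* Base types for a compiler where char is 8 bits, short is 16 and
   int is 32.  Each type guarantees AT LEAST its minimum width. */
typedef int            BOOL;    /* boolean: at least 1 bit          */
typedef char           CHAR;    /* plain character: at least 8 bits */
typedef signed char    SCHAR;
typedef unsigned char  UCHAR;
typedef int            INT;     /* signed number: at least 16 bits  */
typedef unsigned int   UINT;
typedef unsigned char  BYTE;    /* bit collection: at least 8 bits  */
typedef unsigned short WORD;    /* bit collection: at least 16 bits */
typedef unsigned long  LWORD;   /* bit collection: at least 32 bits */
```

A 16-bit compiler's version would map INT onto its native 16-bit int instead; client code cannot tell the difference, which is the point.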
We don't specify a maximum size for these types: any type may be larger if required, especially if the larger implementation performs more efficiently. One classic example I came across was some code developed on a 16-bit PC compiler that was cross-compiled for a 32-bit embedded processor. The author took great pains to specify short (16-bit) integers in order to gain performance on the PC; on the 32-bit system, the code performed extremely poorly as the compiler kept adding code to truncate the hardware's 32-bit registers to meet the 16-bit specification. The author wanted to say "int, at least 16 bits wide, should run fast", and have this type implemented as 16 bits on the PC and as 32 bits on the embedded processor. This is precisely how the types in CompDef such as INT are set up.
Next, we may want more control over the size of the types, in order to provide a wider range of computing ability or to request a size optimisation as appropriate. So we define that a size specification (in bits) may be added to any of the boolean or number types if required, and that the implementation must provide a sufficiently large variable if requested, and should supply a smaller variable if available (but supply of a variable of the requested size is not guaranteed: The variable may be larger if that's all that the C compiler supplies). So we add the following types to our collection: BOOL8, UINT8, INT8, UINT16, INT16, UINT32 and INT32.
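Under the same illustrative 32-bit platform assumptions as before, the size-hinted types simply map onto whatever the compiler offers at or above the requested width:

```c
/* Size-hinted types: each is AT LEAST as wide as its name requests;
   the compiler may supply a larger variable if that's all it has. */
typedef signed char    INT8;
typedef unsigned char  UINT8;
typedef short          INT16;
typedef unsigned short UINT16;
typedef int            INT32;
typedef unsigned int   UINT32;
typedef unsigned char  BOOL8;
```

A DSP whose smallest addressable unit is 32 bits would legitimately map all of these onto 32-bit storage, and correctly-written client code would still work.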
Finally, there are cases where we demand that a variable be exactly some size: We may wish to use the size to implement tricks such as wrapping around when the number overflows, or performing special bitwise operations such as CRC generation, or we may be interfacing to an external component that cannot tolerate type changes. In these cases, we are adding trickery to the size specification, and so we document this by creating another set of types with explicit sizes, with T added: UINT8T, INT8T, UINT16T, INT16T, UINT32T, INT32T, BOOL8T, CHART, SCHART, UCHART, WORDT, LWORDT.
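The exact-size "T" family might be defined the same way, but paired with a compile-time check that the platform really delivers the demanded width. The DEMAND_BITS guard below is an illustrative assumption (not part of the original header), and it assumes an 8-bit char:

```c
typedef unsigned char  UINT8T;   /* must be exactly 8 bits  */
typedef unsigned short UINT16T;  /* must be exactly 16 bits */
typedef unsigned int   UINT32T;  /* must be exactly 32 bits */

/* Illustrative compile-time guard: the array size is negative -- and
   compilation fails -- if the typedef lacks the demanded width.
   Assumes an 8-bit char. */
#define DEMAND_BITS(type, bits) \
    typedef char demand_##type[(sizeof(type) * 8 == (bits)) ? 1 : -1]

DEMAND_BITS(UINT8T, 8);
DEMAND_BITS(UINT16T, 16);
DEMAND_BITS(UINT32T, 32);
```

Porting then means re-checking only these "T" typedefs: if the new compiler cannot supply an exact width, the build breaks loudly instead of wrapping arithmetic silently misbehaving.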
If this extended family of extra types is used properly, we make it wonderfully easy to port the code to new platforms: Only the types containing trickery need to be considered -- all other types are automatically supported once we find the appropriate compiler incantation! (Sadly, we may still have issues about memory consumption, such as the case where a char is implemented as 32 bits, which can happen on DSPs, but these issues are not confined to the typing system -- we are not able to make life perfect, but we are able to make engineering trade-offs easier.)
Finally, CompDef contains some helper macros, to simplify some potentially tricky code. Most of these macros combine BYTEs into WORDs or LWORDs, or split WORDs or LWORDs into smaller units. These macros are notable in that they avoid some potential errors that may creep in as a result of the type rules in C, including some changes in the signed/unsigned status of integers upon promotion introduced by ANSI C.
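Macros in this spirit might look like the following sketch; the names are illustrative assumptions, and the explicit casts and masks are what guard against surprises from C's integral promotions (the BYTE/WORD/LWORD typedefs are repeated so the sketch is self-contained):

```c
typedef unsigned char  BYTE;
typedef unsigned short WORD;
typedef unsigned long  LWORD;

/* Combine bytes into larger units.  Casting each operand up-front and
   masking the low part keeps the arithmetic well-defined even after
   ANSI C's integral promotions turn small unsigned types signed. */
#define MAKE_WORD(hi, lo) \
    ((WORD) ((((WORD) (hi)) << 8) | ((WORD) (lo) & 0xffu)))
#define MAKE_LWORD(hi, lo) \
    ((LWORD) ((((LWORD) (hi)) << 16) | ((LWORD) (lo) & 0xffffu)))

/* Split larger units into smaller ones, masking explicitly. */
#define HIGH_BYTE(w)  ((BYTE) (((w) >> 8) & 0xffu))
#define LOW_BYTE(w)   ((BYTE) ((w) & 0xffu))
```

For example, MAKE_WORD(0x12, 0x34) yields 0x1234, and HIGH_BYTE recovers the 0x12 without the reader having to reason about promotion rules at each call site.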