
2.1 Compartment Area Data Structure

This section is concerned with the storage of memory allocations up to and including 4 KB in size.

LeapHeap divides the address range it controls into compartments. Each compartment is itself subdivided into identical memory cells. Cell length differs from one compartment to another: it is the allocation size that the compartment is dedicated to serving.
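As an illustrative sketch only (the struct and field names are hypothetical, not taken from the LeapHeap source), a compartment can be pictured as a descriptor of this shape:

    #include <stddef.h>

    /* Illustrative only: one descriptor per compartment, recording where
     * its cells begin, how large each cell is, and how many there are. */
    typedef struct Compartment {
        unsigned char *cell_base;   /* address of the compartment's first cell     */
        size_t         cell_size;   /* every cell in this compartment is this size */
        size_t         cell_count;  /* number of cells, whether allocated or free  */
    } Compartment;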

Whether any particular cell is currently allocated or free is recorded in one bit of a bitmap 'word'. The number of bits in a word is not crucial to the algorithm that uses the data structure. It is convenient to show 8-bit words (bytes) in the diagrams; the LeapHeap implementation uses 32-bit words (Intel dwords).
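As a hedged sketch of the bookkeeping (hypothetical helper names, assuming 32-bit words and the convention that a '1' bit marks an allocated cell and a '0' bit a free one):

    #include <stdint.h>
    #include <stddef.h>

    #define BITS_PER_WORD 32u   /* LeapHeap words are Intel dwords */

    /* Mark cell 'index' as allocated (set its bit to 1). */
    static void cell_mark_allocated(uint32_t *bitmap, size_t index)
    {
        bitmap[index / BITS_PER_WORD] |= 1u << (index % BITS_PER_WORD);
    }

    /* Mark cell 'index' as free again (clear its bit to 0). */
    static void cell_mark_free(uint32_t *bitmap, size_t index)
    {
        bitmap[index / BITS_PER_WORD] &= ~(1u << (index % BITS_PER_WORD));
    }

    /* Return non-zero if cell 'index' is currently allocated. */
    static int cell_is_allocated(const uint32_t *bitmap, size_t index)
    {
        return (int)((bitmap[index / BITS_PER_WORD] >> (index % BITS_PER_WORD)) & 1u);
    }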


The connecting lines in the diagram show which bitmap bits relate to which memory cells. Navigation is possible in either direction with a simple address calculation.
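For illustration (hypothetical names; cell_base is the address of the compartment's first cell and cell_size its fixed cell length), the calculation in each direction amounts to:

    #include <stddef.h>

    /* Cell address -> index of the bitmap bit that describes it. */
    static size_t cell_to_bit_index(const unsigned char *cell_base,
                                    size_t cell_size,
                                    const unsigned char *cell)
    {
        return (size_t)(cell - cell_base) / cell_size;
    }

    /* Bitmap bit index -> address of the cell it describes. */
    static unsigned char *bit_index_to_cell(unsigned char *cell_base,
                                            size_t cell_size,
                                            size_t bit_index)
    {
        return cell_base + bit_index * cell_size;
    }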

To avoid an expensive linear search when looking for a free cell to allocate, the cell bitmap is augmented with higher-level bitmaps to form a system of cascading bitmaps.

The following diagram shows a compartment with a three-tier bitmap. In LeapHeap, different compartments have different tierings, depending on the number of cells desired for the compartment. Also, the top-tier bitmap consists of precisely one word, so that the number of cells in a compartment is selectable from among the positive integral powers of the number of bits in a word, i.e. (bits-per-word) to the power (number-of-tiers).
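With the implementation's 32-bit words, for example, a three-tier compartment holds 32 to the power 3 = 32,768 cells, while a two-tier compartment would hold 32 to the power 2 = 1,024.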


Each bit in a higher-tier bitmap reflects whether the corresponding word in the tier below contains any '0' bits, so by implication whether there is any free cell in the group of cells controlled by that word. Whereas a bit in the tier-1 bitmap is definitive as to whether the corresponding memory cell is free, a bit in a higher-tier bitmap is only indicative. For single-threaded processes the higher-tier bitmaps will in fact always be accurate when read by the thread. For multithreaded processes they may be momentarily misleading.
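As a single-threaded sketch of the descent (hypothetical names; a fixed three-tier layout with 32-bit words is assumed, and the same polarity is used at every tier, so a '0' bit always means there is a free cell somewhere below and a '1' bit means that group is full):

    #include <stdint.h>
    #include <stddef.h>

    #define BITS_PER_WORD 32u

    /* Index of the lowest '0' bit in w, or BITS_PER_WORD if w is all ones. */
    static unsigned lowest_zero_bit(uint32_t w)
    {
        unsigned i;
        for (i = 0; i < BITS_PER_WORD; i++)
            if (((w >> i) & 1u) == 0)
                return i;
        return BITS_PER_WORD;
    }

    /* Three-tier descent: tier3 is a single word, tier2 has 32 words and
     * tier1 has 1,024 words (one bit per cell, 32,768 cells in all).
     * Returns the index of the cell claimed, or (size_t)-1 if the
     * compartment is full.  Single-threaded, so the higher-tier bits
     * are taken to be accurate. */
    static size_t claim_free_cell(uint32_t *tier3, uint32_t *tier2, uint32_t *tier1)
    {
        unsigned b3 = lowest_zero_bit(*tier3);
        if (b3 == BITS_PER_WORD)
            return (size_t)-1;                      /* every group is full */

        unsigned b2 = lowest_zero_bit(tier2[b3]);   /* pick a tier-1 word  */
        size_t   w1 = (size_t)b3 * BITS_PER_WORD + b2;
        unsigned b1 = lowest_zero_bit(tier1[w1]);   /* pick a free cell    */

        tier1[w1] |= 1u << b1;                      /* claim it            */

        /* Propagate fullness upward when a word has just become all ones. */
        if (tier1[w1] == UINT32_MAX) {
            tier2[b3] |= 1u << b2;
            if (tier2[b3] == UINT32_MAX)
                *tier3 |= 1u << b3;
        }
        return w1 * BITS_PER_WORD + b1;
    }

Freeing a cell is the reverse: clear the tier-1 bit and, since the groups above it are now certainly not full, clear the corresponding higher-tier bits as well.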

It is easy to see that the memory overhead of the data structure is slightly more than one bit per cell, and that this overhead accrues whether the cell is allocated or not.
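For the three-tier example above, the tally is 32,768 tier-1 bits plus 1,024 tier-2 bits plus 32 tier-3 bits, i.e. 33,824 bits of bitmap for 32,768 cells, or roughly 1.03 bits per cell.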