Sunday, 16 December 2012

Asymmetric Bounds

Experienced programmers are aware of the advantages of expressing ranges using asymmetric bounds.  If you haven't heard of this term, it is simply a way to express a range (typically of integers) using an inclusive start value and an exclusive end value -- that is, a value just past the end. Some people consider the use of asymmetric bounds to be just a convention, and I have even heard Pascal/Delphi programmers insist that it is a bad convention. It is not only a good convention, it is more than that, since it has a basis in mathematics, as I explain below.

Further, though developers agree that using asymmetric bounds is a good idea in code, they try to hide the idea from users. Sometimes it is a good idea to translate internal representations for the benefit of the user (zero-based indexes being an obvious example), but the elegance and simplicity of asymmetric bounds is very useful in user interface design. I explore this more in the User Interface Examples section below.

Also note that the principle of asymmetric bounds is closely related to the idea of not treating zero as a special value in code. This is something I discussed about a year ago (see blog post on Zero). [I finally got around to writing this follow-up post which I promised at that time.]  The main problem with using an inclusive range is that there is no way to specify a range with zero elements.

There is little written on this subject, but while researching this post I discovered that Andrew Koenig recently wrote some articles on it for DDJ (see Asymmetric Bounds, Part 1). However, I feel these articles are incomplete as they:
  • lack discussion of the problems with using asymmetric bounds
  • do not mention the importance of asymmetric bounds in user interfaces at all

Specifying ranges has been problematic for humans for a very long time. These problems often result in the careful addition of one of the words inclusive or exclusive to ranges, such as ranges of dates. Unfortunately, there is no simple way to express asymmetric bounds for ranges other than to say inclusive start, exclusive end.

Mathematicians have been aware of the advantages of asymmetric bounds for hundreds of years. For example, a piece-wise function may have different formulas for different values of x, such as:

f(x) = 1,  for -infinity < x < -1
f(x) = -x, for -1 <= x < 0
f(x) = x,  for 0 <= x < 1
f(x) = 1,  for 1 <= x < infinity

The interval for each domain is specified using asymmetric bounds. So "0 <= x < 1" means x can take any value from zero (inclusive) up to one (exclusive). An abbreviated notation for this range is [0,1) where square brackets [] indicate that the ends are included and round brackets () indicate they are excluded. Using asymmetric bounds like this is important since a function can only return a single value for any particular value of x.

In the history of computers, the first high-level languages (like Fortran) used inclusive bounds for specifying loops, array sizes, etc. On the other hand, assembly-language programmers soon discovered the advantages of asymmetric bounds. The insight of Dennis Ritchie meant that asymmetric bounds became an intrinsic part of the C language, not only in arrays and for-loops but also in other areas like file-handling. For example, eof() (end of file) does not become true in C until you read past the last byte of a file, whereas in Pascal eof becomes true at the last byte.
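To illustrate the C behaviour, here is a small sketch (count_bytes is a made-up helper, not a standard function) of a read loop that relies on end-of-file only being reported once you have read past the last byte:

```cpp
#include <cstdio>

// Count the bytes in a file. In C, end-of-file is not reported until a
// read has already gone past the last byte, so we test the result of
// getc() itself rather than asking "are we at eof?" before reading.
long count_bytes(const char *filename)
{
    std::FILE *fp = std::fopen(filename, "rb");
    if (fp == nullptr)
        return -1L;                  // could not open the file

    long count = 0;
    while (std::getc(fp) != EOF)     // EOF comes back one past the last byte
        ++count;

    std::fclose(fp);
    return count;
}
```

The loop condition itself is an asymmetric bound: it holds for every valid byte and fails at the first position past the end.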

Fence-post Errors

The main advantage of using asymmetric bounds is that it reduces the likelihood of fence-post errors.

The fence-post error gets its name from the idea of counting the number of posts in a fence. For example, say you have a fence that's eight metres long and the posts are one metre apart. When asked how many posts there are in the fence it is easy to just think eight, whereas I am sure you know it is really nine.

In code, it is easy to make similar mistakes. For example, I have encountered programmers who, when asked how many times the loops below execute, would say without hesitation 12, since 22 - 10 = 12.

  DO I = 10, 22  ...

  for i := 10 to 22 do ...

  for (i = 10; i <= 22; ++i) ...

These are obvious mistakes, but there are situations where this sort of off-by-one error is not so obvious. Once you understand the idea, using asymmetric bounds avoids all manner of fence-post errors.

BTW, the convention in C (and derived languages) is to use asymmetric bounds in a for loop, so that the obvious 23 - 10 gives the right answer:

  for (i = 10; i < 23; ++i) ...

Empty Ranges

The use of asymmetric bounds is related to other conventions that are commonly used in C, such as zero-based array indices, and even the use of modular arithmetic, such as integer division (/) and remainder (%). Looking at fence posts again, it makes sense to number the posts starting at zero since the "first" post is the post zero metres from the start of the fence.

One thing that C does better than preceding languages is not treating zero as a special case. In some other languages you need special code to handle zero-related "boundary" conditions. For example, in early Fortran a DO loop always executed at least once, which caused innumerable bugs and crashes for things like degenerate conditions and empty input. You can read more about this in my blog from last November (see Zero).

One reason that C handles zero correctly is due to the use of asymmetric ranges. An empty range simply occurs when the start and end of the range are equal. (Compare this with Pascal where ranges are inclusive -- the smallest range you can specify has one element.) Hence it is simple to test for an empty range in C by checking that the start and end are equal.
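As a minimal sketch (sum_range is an illustrative name, not a standard function), here is a function over an asymmetric range [begin, end) which handles the empty case with no special code at all:

```cpp
// With an asymmetric range [begin, end) the empty case needs no special
// handling: when begin == end the loop body simply never executes.
int sum_range(const int *begin, const int *end)
{
    int total = 0;
    for (const int *p = begin; p != end; ++p)
        total += *p;
    return total;   // 0 for an empty range
}
```

Compare this with an inclusive-bounds version, which would need an explicit check to avoid "summing" a range with no elements.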

Another advantage is that it can sometimes be useful to distinguish between different empty ranges. If you just use a special value (like a NULL pointer) to represent an empty range then you lose this ability. (See the section on malloc madness in the above-mentioned blog post from November last year).

The Special Value

Another thing that fits well with other conventions in C is the special value that indicates the end of the range. Since it does not indicate a valid value within the range the just past end value is useful to indicate an error, an unknown value or other special condition.

It is a common convention in C to use a value, that does not normally occur, to indicate an error - for example, a NULL (pointer) value or -1 (integer) value. The just past end value is also useful, especially where the type is indeterminate. For example, in C++ STL containers, iterators have an indeterminate type (so that the same code can work with different containers without modification). All containers support the end() function that returns an iterator with the special value. This value usually represents the "just past end" value -- that is, when iterating through the objects of the container it is used to determine that you have iterated past the end -- however, it can also be used to indicate other things such as that a search of the container returned no value.
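A small sketch (contains is just an illustrative wrapper) of the "not found" use of the special value end():

```cpp
#include <vector>
#include <algorithm>

// std::find returns end() -- the "just past end" iterator -- to indicate
// that the sought value does not occur in the range.
bool contains(const std::vector<int> &v, int value)
{
    return std::find(v.begin(), v.end(), value) != v.end();
}
```

The same iterator value serves double duty: it terminates iteration and signals an unsuccessful search.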

These ideas are entrenched in the C standard. Examples can be seen not just in the run-time library (such as the behavior of eof() as mentioned above), but also in the language itself. For example, the C standard says that it is perfectly legal to take the address of an element one past the end of an array. Such a pointer can be compared to a pointer to any other element of the array (but cannot be dereferenced).

This idea is used in many libraries, operating systems, databases and other APIs which present a "list" of items that can be manipulated. Typically, a function or method is provided which inserts an item into the list by specifying the existing item that the new item will be inserted before. If you specify the special value, the insert() function appends to the list. For example, many Windows controls (such as a list box) take an index of -1 to mean "one past the end" -- inserting before this index appends to the end of the list. Similarly, many STL containers provide an insert() method which takes an iterator as the insertion location and appends to the container if passed the special value end().
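The STL convention can be sketched like this (demo_insert is just an illustrative name):

```cpp
#include <vector>

// insert() takes the position to insert *before*; passing the special
// value end() therefore appends to the container.
std::vector<int> demo_insert()
{
    std::vector<int> v = {10, 30};
    v.insert(v.begin() + 1, 20);  // insert before position 1 -> 10, 20, 30
    v.insert(v.end(), 40);        // "insert before end()" appends -> 10, 20, 30, 40
    return v;
}
```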


A common algorithm is the binary search, which is used to find an element in a sorted list (usually an array). I've found that binary searches take two forms, depending on whether you are searching for a specific value or just want to know in which range a value occurs. Note that in either case you need to make use of the special value. When searching for a particular value you would return the special value to indicate that the sought value was not found. When searching to find a range you also need the special value, since the number of possible ranges is one more than the number of elements in the sorted list.

    int bin_search(int tofind)
    {
        int bot = 0;
        int top = values.size();      // top starts one past the end of the vector "values"
        while (bot < top)
        {
            int curr = (bot + top)/2; // split the range into two (almost) equal sub-ranges
            if (tofind < values[curr])
                top = curr;           // now check bottom half
            else
                bot = curr + 1;       // or check top half
        }
        return bot;
    }

Using asymmetric ranges makes this code simple and elegant. For comparison, I tried rewriting the above using symmetric bounds and 1-based array indexing (as a Pascal programmer would do) but I gave up after the third try since I kept getting the wrong value or an infinite loop. [If someone gets a working version using symmetric ranges please post it below.]

There are countless other examples of where asymmetric bounds are useful. I am sure you will encounter them if you have not already.

Circular Buffer Problem

Despite all the benefits there are a few problems to watch out for.

One example is a circular buffer which is commonly used to implement a queue. A queue is a FIFO (first in - first out) data structure which is easily implemented with a circular buffer where the current output/input locations are simply stored as an index into the buffer. (The buffer is typically just an array). Adding to (or removing from) the queue is simply a matter of writing (or reading) the current item and incrementing the input (or output) location. Of course, these locations must wrap around when they reach the end of the buffer (which is why it is called a circular buffer). The wrap around is easily accomplished using modular arithmetic (ie, the % operator in C).

A circular buffer is actually a good example of using asymmetric bounds since the end of the current range of valid items is the same as the current input location. It is also a simple matter to check whether the buffer is empty since this occurs when the input and output locations are the same. However, there is a problem - can you see it?

The problem is that when the buffer becomes full then the input and output locations are also the same. So without further logic there is no way to distinguish between a buffer completely full and a buffer empty condition. This is a trap that I have fallen into as probably have many others.
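One common way out -- sketched below with assumed names, and not the only solution (another is to always leave one slot unused) -- is to keep an explicit count alongside the two indices, so that in == out is no longer ambiguous:

```cpp
#include <cstddef>

// A circular buffer with an explicit count: empty means count == 0 and
// full means count == N, even though in == out in both cases.
template<std::size_t N>
class CircularQueue
{
    int buf[N];
    std::size_t in = 0, out = 0, count = 0;
public:
    bool empty() const { return count == 0; }
    bool full() const  { return count == N; }
    bool push(int v)
    {
        if (full()) return false;
        buf[in] = v;
        in = (in + 1) % N;    // wrap around using modular arithmetic
        ++count;
        return true;
    }
    bool pop(int &v)
    {
        if (empty()) return false;
        v = buf[out];
        out = (out + 1) % N;  // wrap around using modular arithmetic
        --count;
        return true;
    }
};
```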

Binary Heap

Speaking of queues reminded me of another problem. A binary heap (not to be confused with the C run-time memory heap) is normally a very good way to implement a priority queue (not to be confused with the FIFO queue we saw above). The algorithms for a binary heap are a rare example where zero-based array indexes are not a good idea.

I first wrote code that implements a binary heap in Pascal and it was simple and elegant. A binary heap is essentially a binary tree structure stored in an array. Since arrays in Pascal are 1-based, the root element is at index 1 (the start of the array). Given the index of a node it is simple to find the index of its two children - the first child has an index which is twice that of its parent, and the other child has an index one more than that. (This is efficiently accomplished using bit-manipulation: shift left by one bit and fill the new bit with either 0 or 1.)

Similarly, to find the parent of a node simply divide by two (using integer division which throws away any remainder), or perform a right-shift.
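The 1-based tree arithmetic can be sketched as (the helper names here are just illustrative):

```cpp
// With 1-based indexes the parent/child arithmetic is pure bit-shifting:
int left_child(int i)  { return i << 1; }        // 2*i
int right_child(int i) { return (i << 1) | 1; }  // 2*i + 1
int parent(int i)      { return i >> 1; }        // i/2, remainder discarded
```

Note that both children map back to the same parent because the right-shift throws away the low bit.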

At a later time I translated this code to C. Of course, C arrays are zero-based so I started with the root of the tree in index zero. However, this really made a mess of the code to find the children/parent of a node. The solution was simply to use one-based indexing and ignore the array element with an index of zero.

    void heap_add(int n)
    {
        // heap full check needed here

        int hole = ++current_size;              // start at end
        while (hole > 1 && n < values[hole>>1]) // stop at root or at a higher priority item
        {
            values[hole] = values[hole>>1];     // move lower priority item up
            hole >>= 1;                         // go to parent
        }
        values[hole] = n;
    }

This is the only algorithm I can recall where using one-based array indexing worked better than zero-based. Also note that this does not use asymmetric bounds since current_size indicates the valid top element in the values array, rather than one past the end.

User-Interface Examples

There are countless places where user-interfaces can make use of asymmetric bounds - for example, almost anywhere that a user can manipulate an array or list of items.

An example, with which you are undoubtedly familiar, is a text control (or, similarly, a line in a text editor or word processor). If the control contains, for example, the string "Hello" then there are six places for the cursor - at each of the characters and also after the last character. Similarly, selecting some of the characters (using the mouse or keyboard) uses asymmetric bounds since the end of the selection is marked by the cursor position of the character one past the end of the selection. Further, when there is no selection the current cursor position (sometimes called the caret) can be seen to indicate both the start and end of an empty selection.
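A sketch of the idea (delete_selection is a hypothetical helper, not any real editor's API):

```cpp
#include <string>

// A selection held as asymmetric bounds [start, end): its length is simply
// end - start, and start == end is the empty selection (just a caret).
std::string delete_selection(std::string text, std::size_t start, std::size_t end)
{
    text.erase(start, end - start);   // erasing an empty selection is a no-op
    return text;
}
```

No special case is needed for "nothing selected" -- deleting the empty selection [n, n) leaves the text unchanged.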

Similarly, any sort of editable list such as a list-box should have an empty position one past the last valid entry. When you "insert" a new item at this point it is appended to the end of the list.

Visual Studio Bookmarks

I use bookmarks in Visual Studio a great deal. I think it was Visual Studio 2005 that introduced a modeless Bookmarks window (similar to the one I added to HexEdit a few years earlier) containing a list of all your bookmarks.

I really appreciate the VS bookmarks list apart from one peculiarity. You can reorder the bookmarks to your liking by dragging them in the list but when you drag a bookmark to a new location it places it after the item you dragged onto. This means to drag to the end of the list you drag onto the last item not past it. Further, you can't move an item to the top of the list (with a single drag) - see the screenshots in which I tried to drag "InitInstance" to the top of the list but it ended up underneath the top item.

To me this design is astonishing. Even if the developer(s) who designed it knew nothing about user-interface design you would think that they would be familiar with C, C++, or C# and have a rudimentary appreciation of the elegance and simplicity of using asymmetric bounds. This behavior of Visual Studio bookmarks confuses and annoys me every time I use it.


Use of asymmetric bounds and zero-based indexes greatly simplifies all sorts of problems. Most algorithms that involve any sort of list or range of values are more simply implemented, apart from a few special cases mentioned above.

Moreover, any user interface design that manipulates a list or array of objects should also use asymmetric bounds. When there is a current location (ie, cursor or caret) that the user uses to interact with the list then adding an item at the current location should insert before the current item; adding at the special location past the end should append to the list.

Friday, 23 November 2012


The verifiability (sometimes called testability) of software is how easy it is to check your code for bugs.  I often talk about the importance of making software maintainable but verifiability is another attribute that does not get enough attention.  (See some of my previous posts for more on quality attributes, such as the Importance of Developer Quality Attributes.)

There are many ways to create software that makes it difficult to verify.  If software is hard to test then bugs will inevitably occur.  I talk about how to make code more verifiable below.

But before I go on, I should mention TDD (test-driven development).  I will probably talk more about TDD in a later post, but I believe one of the main (but never mentioned) advantages of TDD is that it makes code more verifiable.


Verifiability plays an important part in testing.  After all, you can't test something properly if it is difficult to verify that it is correct or, conversely, show that it has no bugs.  I give an example of code below that shows how code can be hard to verify and makes bugs far more likely.

In extreme cases, software changes can be impossible to test.  For example, I was recently asked to fix up some dialogs that had some spelling mistakes.  The problem was that the software was old and nobody even knew how to get the dialogs to display or even if the code that displayed them was reachable.  (Some of the dialogs were not even used anywhere in the code as far as I could tell.)  It makes no sense to fix something if you cannot verify that it has been fixed!


Verifiability is also important in the debugging process.  Removing bugs has three phases:
  1. detecting the presence of a bug
  2. tracking down the root cause of the bug
  3. fixing the bug
The last stage is associated with the maintainability (or modifiability) of the code.  The first two stages are usually associated with verifiability, though I prefer to use an extra quality attribute, which I call debuggability to describe how easy it is to track down a bug, and use verifiability to strictly describe how hard it is to find bugs.

How Do We Make Software More Verifiable?

Verifiability can be increased in three ways that I can think of:
  • external testing tools
  • extra facilities built into the software such as assertions and unit tests
  • how the application code is actually designed and written

Testing Tools

There are many software tools that can be used to assist software testing. Automated testing tools allow testers to easily run regression tests.  This is very useful to ensure that software remains correct after modifications.

One aspect of code that is not often well-tested is error-handling.  Test tools, mock objects, debuggers and simulators allow simulation of error conditions that may be difficult to reproduce otherwise.  For example, a debug heap library is invaluable in detecting memory allocation errors in C programs.

Debuggers obviously make software more debuggable, but they can also be useful for verifiability.  Stepping through code in a debugger is extremely useful for understanding code and finding problems that may not have otherwise been seen.


Assertions are an old but crucial way for software to self-diagnose when it has problems.  Without assertions bugs can go unnoticed and the software may silently continue in a damaged state (see Defensive Programming).

In my experience, assertions are even more important for debuggability.  I have seen problems in C (almost always rogue pointers) that have taken days if not weeks to track down.  Careful use of assertions can alert the developer to the problem at its source.

Unit Tests

Unit tests (and TDD) are important for verifiability.  If you have proper and complete unit tests then you get a straightforward and obvious indicator of whether the code is correct.

In my experience most bugs that creep into released code are due to modifications to the original design.  Poor changes are made because the modifier does not fully understand how the code works -- even if the modifier was the original designer, they can still forget!  This is further compounded by the fact that the modified code is often less thoroughly tested than the original version.

Mock objects are useful by themselves, for example for testing error-handling code.  (When not used with unit tests they may be called simulators, debug libraries, automated test harnesses, etc.)  Mock objects are particularly useful with unit tests as they allow testing of a module to be isolated from other parts of a system.

I will talk more about verifiability and unit tests in a future post, hopefully soon.

Software Design

Many aspects of good design that improve the maintainability of code are also good for verifiability.  For example, decoupling of modules makes the code easier to understand but it also makes the modules much easier to test individually.  Testing several simple modules is always simpler than testing a single large module with the same features, even if only because the permutations to be tested increase dramatically as the number of inputs increases.  (See SRP and other design principles in Software Design).


Perhaps the simplest way to increase verifiability is simply to write good code.  The same principles that a software architect or designer uses also apply at the coding level.

One of the worst programming practices is to use global variables to communicate between modules or functions.  This results in poorly understood coupling between different parts of the system. For example, if the behaviour of a function depends on the value of a global variable, it is very difficult to verify that the function works correctly, since you can't be sure that some outside influence won't change that variable.

Another problem is large functions that do too much (remember SRP).  Again the more parameters (or other inputs) to a function, the harder it is to check that all combinations are handled correctly.

Similarly, code with many branches (ie, if statements) is hard to verify since there may be a large number of different paths through the code.  Even ensuring that all code is tested (with a code coverage tool) is insufficient; you may need to test every combination of code paths to be sure the code has no bugs.

Code Example

Finally, I will give an example of some C code that is virtually impossible to properly verify due to the way it was written.

Some time ago I worked on some C code for an embedded system.  A lot of the code made use of time (and date) values returned from the system.  Time was returned as the number of milliseconds since midnight.

One particular function was used to determine if a process had timed out.  Like the rest of the code it used this system "time" value. The code was very simple to implement except for the special case where the start time and the end time straddled midnight.  (Normally the code below would not be executing in the middle of the night but under unusual circumstances it could.)  Here is a simplified version of the code:

/* return 1 if we have already passed end time */
int timed_out(long start, long end)
{
    long now = get_time();
    if (start == end && end != now)
        return 1;

    if (end > start)
    {
        if (now > end || now < start)
            return 1;
    }
    else if (end < start)
    {
        if (now < start && now > end)
            return 1;
    }
    return 0;
}

Do you think the above code always works correctly?  It is difficult just looking at it to tell if it is correct.  It is virtually impossible for testers to verify that the above code works correctly, even if they were aware that midnight was a special case in the code. Setting up a unit test would be difficult since the software was not permitted to change the system time (though a mock object could have been created to emulate the system time).
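As an aside, one way such code could have been made verifiable -- a sketch only, with assumed names (timed_out_at is not the original function) -- is to pass the current time in as a parameter rather than reading the system clock, so that a unit test controls the time of day directly:

```cpp
// Same start/end logic as the code above, but "now" (milliseconds since
// midnight) is injected by the caller, so tests can emulate any time of
// day -- including the midnight wrap-around -- without touching the
// system clock.
int timed_out_at(long start, long end, long now)
{
    if (start == end && end != now)
        return 1;

    if (end > start)
    {
        if (now > end || now < start)
            return 1;
    }
    else if (end < start)   // range straddles midnight
    {
        if (now < start && now > end)
            return 1;
    }
    return 0;
}
```

The production code would simply call timed_out_at(start, end, get_time()).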

You might also notice that the  code suffers from poor understandability (and hence poor maintainability) as well as poor verifiability.

A much better way to handle such timing events is to use a type like time_t or clock_t, which does not reset at the end of the day.

/* return 1 if we have already passed end time */
int timed_out(clock_t end)
{
    clock_t now = clock();
    assert(now != (clock_t)-1);
    return now >= end;
}

Note that the above code is only for low-resolution timing since the C standard says nothing about the resolution of clock_t – ie, the value of CLOCKS_PER_SEC is implementation-defined. However, in this example 1/10 second was sufficient and CLOCKS_PER_SEC == 1000 for the compiler run-time being used.

This code is more verifiable. First, it is fairly simple to verify manually, since anyone reading the code can understand it fairly easily.  It is also easy to step through the code in the debugger to make sure it works.  Moreover, it does not need special testing for midnight.


It is important to create software that is easy to test. We talked about using tools and code (such as unit tests) to enhance verifiability, but more important is creating good code. Now that I think about it, verifiability is probably affected more by the data structures that are used (such as the time-of-day format used in the above example). The data structures define the code; get these structures right and the software is likely to be much more verifiable, and consequently less likely to have bugs.

Furthermore, in my experience writing software with testing in mind is more likely to produce better code. This is one of the (many) advantages of unit tests (and even more so with TDD). If you create units tests as you create the code (or even before you create the code in the case of TDD) then you will create code that is not only more verifiable but also better designed in general.

Tuesday, 13 November 2012

Reusability Futility

The software industry regularly gets carried away with some new fad or other. Sometimes these are good ideas but, because they are over-hyped and taken to extremes, they can actually be counter-productive. I have seen many such fads in my career. A big one, which took off in the late 1980's, was the emphasis on making code reusable. It's even said that this led to the development, or at least popularization, of object-oriented languages.

Now, I think reusability of code is great. It has had enormous benefits on productivity almost from the birth of the industry. However, many people tried to take it too far. There are more important attributes of software, and when reusability is applied it can be to the detriment of other attributes, such as understandability and efficiency.

It is also often attempted on modules of too large a size. My rule of thumb is that the larger a module the less likely it is to be usefully reusable. Of course, I expound on this further below, but first will explain about software quality attributes.

Quality Attributes

You may recall that in February I mentioned the difference between two types of quality attributes of software which I gave the names user and developer to emphasize who directly deals with their effects (see Importance of Developer Quality Attributes). User quality attributes are things like correctness, reliability, usability, efficiency, security, compatibility, etc.

Developer quality attributes are things like understandability, maintainability, portability, verifiability, etc. Typically the users of the software (and other stakeholders) are not troubled by, or even aware of, these developer quality attributes but that does not mean they are not important to them. In fact, my contention has always been that, for most large-scale projects, maintainability (and the closely related understandability) are by far the most important attributes. (Yes, even more than correctness since if software is incorrect you can fix it, but if it is unmaintainable then, unless you never have to change it, it is worthless.)

Another attribute of concern to developers is reusability. It is important to understand that maintainability and reusability are very different, though there is generally some correlation between the two. I mention this because I worked with a colleague (AK - an OO "expert") in the late 1990's who could not even understand the distinction, so I will explain:
  • Maintainability (or sometimes modifiability) is how hard it is to fix or enhance the software. It depends somewhat on other attributes, most importantly, understandability.
  • Reusability is how hard it is to make use of the same code in more than one place. It is important for the DRY principle (see Software Design).

The Rise of Reusability

In the 1970's a great debate began in the software industry about the software crisis. Rapid advances in computer hardware meant that more and more ambitious software projects were being attempted. Lack of experience with large software projects meant that most software was late, over-budget and of poor quality. It was thought that this situation would become much worse in the future due to the anticipated hardware advances (which, of course, did eventuate). In brief, the crisis was that there would be a huge shortfall in the number of experienced programmers required to develop the software that would make good use of the hardware.

At the same time, it was seen that there was a large amount of redundancy, as the same or similar software was being written over and over. To stop developers continually reinventing the wheel, ways were sought to increase the reusability of code. There was already a lot of reuse using low-level libraries (such as the C run-time library), but some people thought that high-level (ie generally large) components could be created which could then simply be slotted together to create useful software with little or no programming involved.

This emphasis on reuse reached such ridiculous levels by 1995 that I remember seeing this full page ad from Microsoft in many magazines I read at the time. This featured a (probably fictitious) recycling fanatic named Edwin Hoogerbeets, who lived in a house made from beer bottles.

An idea that a lot of people had was to try to emulate the advances in hardware, by using similar techniques to do the same in software. For example, a major hardware advance was the IC (integrated circuit, popularly known as the silicon chip), which lead to a push for similar software components. Many people even used the term "Software IC" to describe these components.

Also around this time (late 1980's) object-oriented languages started to gain popularity. To a large extent this was fuelled by the obsession with reusability as it was believed that OO designs enhanced reusability. Indeed many OO evangelists heavily emphasized this.

Many of these ideas were adopted by, or even originated in, educational institutions. The effect was a whole generation of software designers who were indoctrinated into the importance of making code reusable. This is the root cause of many problems with software created in the last 20 years (and probably still being created today).


When I first read about Extreme Programming (late last century :) I liked most of what I read, but I was not certain about the idea of YAGNI (You Ain't Gonna Need It). This principle says to only do enough to get things working according to current requirements. The idea is that developers should refrain from adding extra features for increased flexibility, perceived future needs, or just because they are cool.

My first thought was that it is stupid to design software without regard to any future requirements. On the other hand I could see where the idea was coming from since I had worked with many software components (such as the Windows API) that were tedious and time-consuming to use since they made you consider all sorts of irrelevant details. This problem was caused by a design that allowed you to accomplish almost any conceivable task (including many pointless ones) without regard to how easy it was to accomplish simple ones.

I don't want to go into a detailed discussion of YAGNI (maybe later). My main point here is that I believe YAGNI was a reaction to the behaviour of many software developers and architects. Instead of designing software components for the particular task at hand, there was a push to make them as flexible and general-purpose as possible in the name of reusability. In every company I have worked for, the software "architects" would often design the equivalent of a "Swiss Army Knife" when a simpler, more specific tool was required.

This extra complexity in the design greatly detracts from understandability and hence maintainability (which I have already said is the most important "ility").

An Analogy

I am loath to analogize software design with the design of physical machines as I believe the two are very different. However, I think there is some value in the analogy below.

But first I would like to point out that the history of invention is littered with failed designs that emphasized reusability. I recently read about a 19th-century American system of firearms made from interchangeable parts, which could be assembled to create various weapons (guns, rifles, etc). It was an ingenious system but was never successful, because machines designed for a specific purpose are easier to use and maintain than ones designed for flexibility and reusability.

The analogy I have chosen is in aircraft design. If the reusability obsession of the software industry were applied to the aeronautical industry, a manufacturer would design a single wing "module" that it would attempt to fit to all its aircraft, from a tiny twin-engine to a super-jumbo. Of course, the wing would have to be flexible enough to work with all sizes of aircraft. Perhaps for large aircraft you would use multiple wings (4, 8 or even 16); for small aircraft perhaps you could get away with one wing.

We all know that planes are not designed like that. As far as I know (which is very little) every plane has its own custom wing design. Perhaps the same wing design could be used on two different models as long as they had similar size and performance characteristics, but what aircraft manufacturer is going to create two nearly identical models that will just compete with each other?

Of course, even in aircraft manufacture there is a lot of reuse. It's not inconceivable that all models from one manufacturer use the same rivets for the airframe, electrical wiring, seats, cables, pneumatic pumps, etc. This has also always been done in the software industry where the same low-level routines are used for a myriad of things.

The electronics industry is lucky in that fairly high-level components can be created that can be used in different products. Other industries, like the aircraft and software industries, are not so lucky. In software design, as in many other areas of design, creating high-level reusable components results in cumbersome products that just do not work.


Emphasizing reusability was one of the big mistakes of the software industry in the last few decades. I remember almost 20 years ago telling my colleagues "forget reusability, concentrate on maintainability". The trouble is software designers have been taught to make even high-level components flexible and reusable. This flexibility comes at the cost of increased complexity. Reusable components are usually harder to understand (and often also less efficient) which leads to software that is hard to build and maintain.

Fortunately, many in the industry are now saying the same thing. I believe it was overly complex software that led to the idea of YAGNI. More recently, criticism of reusability has become more open -- for example, the recent book 97 Things Every Software Architect Should Know includes Kevlin Henney's rule Simplicity Before Generality, Use Before Reuse. I couldn't agree more!

Tuesday, 30 October 2012

Scrum Standup

One of the best ideas to come from Scrum is the daily standup.  This is where each member of the team tells the rest of the team what they did in the previous (working) day, what they intend to do in the next day, and any problems they anticipate that will stop them doing their job.

[Note that this follows on somewhat from my previous Scrum post.]

I've recently read a lot about the Scrum Daily Standup, but what is never mentioned (or perhaps just hinted at) is its major advantage: it quickly becomes obvious when someone is not pulling their weight.  Moreover, the standup, when done properly, motivates team members to focus on the task and give their all.  I discuss this a bit more below but first we need to understand a few things.

What It's NOT

First I will dispel a few misconceptions about the standup.

First, it is not about general communication, only about communicating to the team what each team member has and will be doing.  Any form of communication between two individuals or a subset of the team should be taken offline.

Second, it is not about updating the customer or management on progress.  You can tell when this sort of thing is happening when everybody talks to just one person (usually the Scrum Master or an observing manager).

Lastly, it is not about problem solving.  This usually happens when an ill-prepared team member, instead of reporting their status, starts asking questions.  Generally these problems should be taken off-line (and in my experience usually could have been resolved at an earlier time).

Why It's GOOD

Why do I like the standup so much?  There are three reasons: visibility, visibility and visibility.

First, let's be honest.  The average developer works at 50% efficiency or less, sometimes much less.  This happens for many reasons, some of which are not entirely, or not at all, their fault.  They may have been misinformed about, or misunderstood, what they were meant to do.  Often programmers become side-tracked on less important or completely unimportant issues, or they take an inefficient approach for selfish reasons (such as trying a new technique or technology).  Finally, there are quite a few who are simply unmotivated.  In summary, developers have a tendency to lose focus, or to focus on the wrong thing.

The visibility of the daily standup makes all these things obvious, if not immediately, then over several days or at most a few weeks.  If everyone knows what everyone else is doing then it soon becomes obvious when someone is slacking off, has become side-tracked or has lost focus.  It is up to the team to work out what is happening and provide assistance, guidance or punishment as required.

Luckily punishment is rarely needed, due to the positive effects of the standup.  When everyone understands how everyone else is contributing to the sprint's goal(s) then there is much better motivation and focus on achieving those goals.  The team members make a commitment to the rest of the team and generally do their utmost to honor their commitments.  In summary, the standup helps to get the team working together.

What Goes WRONG

Things go wrong when there is reduced visibility.  This can be for a number of reasons.

One reason I have recently encountered is a Scrum team made up of many disparate members working in different areas.  When different team members don't, or can't, understand what others in the team are doing, then all the enormous benefits of the daily standup are lost.

Another problem is team size.  When a team becomes larger than 7 or 8, all sorts of problems appear.  (This is the reason for the cardinal rule in Scrum that a team be no larger than 9.)  First, with too many people it becomes too hard to follow what everyone else is doing.  It is also much easier for a team member to disappear into the background.  Further, a smaller team brings a sense of camaraderie and purpose.


For Scrum to work well you need a small team where each team member is involved with, or at least understands, what every other member is doing.  Ideally, everyone makes an important contribution to accomplishing the team goals; a great motivator is knowing that others in the team are depending on you.

It's important to ensure that the standup is working.  If people arrive late or fail to turn up at all then this indicates that the standup is uninspiring and needs to be fixed.  Here are some ideas on how to make sure the standup goes well:

  • when it is their turn, each team member must talk for at least one minute but no more than two
  • they cannot ask questions of others but must stick to reporting status and roadblocks
  • they should stick to reporting what they accomplished, not how they did it or any problems they had
  • when a team member is reporting, other team members cannot say anything except to ask questions
  • if someone misses the standup for some reason they must get someone else to report for them
  • if someone is away they still report on what they did on the previous day they worked
  • people must not talk directly to any individual (like the Scrum Master) but to the whole team

Thursday, 25 October 2012

The Layer Anti-Pattern


In brief this post is about a pattern of design that is often wrongly used - that of structuring a design into layers.  Layers can sometimes be a good design but often are not.  Even the most experienced software designers are prone to misuse layers - I give some examples below.

I first encountered the idea of anti-patterns when I stumbled on this Wikipedia page:

Though this list is a muddle of various ideas, many of them show some insight and it is worth a read.

Over the years I have encountered numerous design problems that can be traced back to this fundamental problem.  Generally having an understanding of good design, and being aware of a natural tendency to use layers, can help you avoid this anti-pattern.

Design Fundamentals

In my first draft of this post I wrote a summary of how to divide a problem into sub-problems by way of introduction.  This summary became too long so I put it in a separate post - see my previous post.  If you don't want to do that, here is a summary of that summary in the form of a few guidelines:
  • Each module should just do one thing (SRP)
  • Duplicate code is indicative of poor partitioning (DRY)
  • Partition the problem at places that create simple, well-understood interfaces
  • Try to minimize the number of other modules a module uses (loose coupling)
  • Remove unnecessary information and control from interfaces (information hiding)
  • Separate the parts that are likely to change from the rest of the system (encapsulate what varies)
  • Create flexible interfaces if likely to change and forward/backward compatibility is required
  • Provide separate read and modify interfaces
  • Avoid unreachable code

Layers, Hierarchies and the Brain

One more thing, before we get to the point of this post, is to do with the human brain. The human brain seems to have an affinity for giving related objects an ordering. (Come to think of it, this is not just a human trait as even chickens have a pecking order.)  An ordering of objects is easily visualized using layers.

Not everything can fit into a linear ordering so the slightly more flexible concept of a hierarchy is used for organizing a myriad of things, from the directory structure on your hard disk to the classification of all lifeforms.  Of course layers and hierarchies are closely related.  Layers can be thought of as a linear hierarchy.  And hierarchies are often organized into layers for simplicity.

Although layers may assist our understanding they do not always make sense in the problem domain.  As an example, the taxonomic classification in biology mentioned above is based on 7 layers (kingdom, phylum, class, order, family, genus, species). Despite the enormous effort that has been put into this classification it is recognized that the practical usefulness of it is limited at best.  For example, many lifeforms have evolved so little that there is little point in having that many levels, whereas others have branched so often that seven levels is insufficient.

Even hierarchies are not always the best way to represent information.  This is why hierarchical databases are no longer used, having been usurped by relational databases. Though relational databases are in some ways more difficult to understand, they have great benefits in their ability to split data into tables and relate them to each other.  In fact they afford many of the advantages that we are seeking in the design of software, such as eliminating duplication (DRY), SRP, etc.

Use Of Layers

Danish Approach

I remember an old TV ad promoting some sort of Danish pastry with the catchphrase "layer upon layer upon layer" (said in a supposedly Danish accent).  This Danish approach may be good for creating delicate pastry but delicate software is a bad idea. Of course, we typically don't stretch and fold the code to make layers so how do they form?

The most common way that layers form is through a tendency to move subservient code into a lower layer.  It is a great idea to divide code into different modules but doing it this way is often too simplistic.

As we saw above the proclivity to use layers is related to how we think and also to how we are taught to write code.  To most programmers it is obvious that when a function gets too large, you try to extract a portion of that function and push it down into a sub-function.  Most of my university training was in Pascal and this practice was heavily promoted by most, if not all, of my tutors and lecturers.  (It is especially easy to do this in Pascal as a nested subroutine can access the local variables of its parent).

This is not always the wrong approach but often there are better ways to sub-divide a problem than by simply pushing the details into lower and lower layers.


Wrappers

Another way that layers are used is by adding a "wrapper" around an existing module.  Again this can be useful but is often misused.  First we will look at when wrappers should be used, then consider how they are misused.

By far the best reason to add a wrapper is to simplify an interface.  By hiding irrelevant parts of the interface you aid decoupling and assist understanding of how the module is used.

A wrapper may also be used to restrict use of a module in some way.  For example you might need a read-only interface to a container class (see the dual interfaces discussion in my previous post).  This is really just a special case of the above (ie, creating a simplified interface), but it is done not to assist understanding but to enforce security.

The Adapter pattern (see Adapter Pattern) is used when you need a different interface to a module and have no control over the module being wrapped.  For example, an interface may be precisely specified so that different modules can be used interchangeably (see Strategy Pattern) -- if a third-party module needs to conform to the interface then you need to create a wrapper to do this.

I can think of two ways in which wrappers are misused.  There are "do-nothing" wrappers that have no real purpose.  There are also wrappers that add features that are not directly related to the module being wrapped and hence violate SRP.

With regard to "do-nothing" wrappers, I should first explain that wrappers that simply add a layer of indirection can still be useful (see Adapter pattern above).  However, when the reason for using them is not valid then they just add complexity (and inefficiency).

As an example, I recently encountered a wrapper for LibXML which was justified on the grounds of decoupling and good programming practice.  However, it did not provide a simplified interface any better than the cut-down interfaces already shipped with LibXML.  It was also not an appropriate use of the Adapter or Strategy pattern since the library was efficient (and open source) and was never going to be replaced.

LibXML is an open source library for working with XML files.  It is highly recommended as it is efficient and comprehensive.  It also provides interfaces for simplified use.

Further, the wrapper by its design duplicated some error handling, which affected the performance of the system.  It also added data conversion facilities, which is a bad idea too (see below).  Both of these are violations of the DRY principle.

Even worse are wrappers that add features not directly related to the module being wrapped.  It is not uncommon for a wrapper to perform data conversion operations. Often a better design can be found by moving the operations into a separate module which is independent of the called module.  There is also a tendency for code unrelated to the module being wrapped to creep in, which is definitely a bad idea.

So before creating a wrapper, ensure there is a point to doing so.  If you just want to create a wrapper to provide extra features then try to find a design where the extra features are encapsulated in a different module instead.


Inheritance

One special case I feel compelled to mention is the misuse of inheritance in object-oriented languages.  To be honest, I have never liked inheritance, ever since the late 1980's when I read the first edition of Stroustrup's "The C++ Programming Language" (and even before that, when I read about Smalltalk in an edition of Byte magazine in 1981).

Inheritance has very little applicability to real world software.  It does have its uses, as demonstrated by hundreds of elegant examples in various textbooks, but I find that it is rare to want to model an "is-a" relationship in real-world software.

Now I am not talking about inheritance when applied to derivation from an abstract base class (in C++) -- this is essentially implementing an interface and is useful for implementing polymorphism which is extremely important in many design patterns.  I am talking about inheriting from a working (or at least partially implemented) base class, which is essentially the same as adding another layer (the derived class) on top of an existing module (the base class).

Of course, misuse of inheritance has been discussed at length for more than 20 years (even Stroustrup in the 2nd edition of "The C++ Programming Language" [1991] says that "inheritance can be misused and overused") so I won't go into that, except to say that it is probably the most egregious thing since goto.  It has also affected the design of languages (examples: multiple-inheritance is not permitted in Java and C#, the addition of final (Java) and sealed (C#) keywords, etc).

However, even what is considered acceptable use of inheritance can often be avoided with consequent improvement to the design.  One example would be the use of the Decorator Pattern.


There are many, many examples of large projects falling victim to the Layer Anti-Pattern.  Many operating systems have a layered architecture that can create problems and inefficiencies.  For example, Windows NT (upon which Windows 2000, and all later versions of Windows are built) originally had a strict layering but this caused problems (in particular for graphics performance) and the layering was later subverted so that the operating system could adequately run games and other graphically intense software.

Here are two well-documented examples, which you can also read about elsewhere.  I also give a C code example in the next section.

Example 1: OSI

Soon after TCP/IP was invented, another networking protocol was developed in an attempt to do it better.  This was called OSI (Open Systems Interconnection) and was made a standard by the ISO (the International Organization for Standardization).  Some computer system manufacturers spent many, many millions of dollars in the 1980's developing and marketing it.

The problems with OSI and the reasons for its failure have been discussed extensively.  I believe the fundamental problem was in its layered design.  OSI used a 7-layer model (as opposed to TCP/IP's 3 or 4 layers) which made it complicated to understand and difficult to implement.  It also made it inefficient - for example each layer had its own error-handling which meant that there was a great deal of overhead and redundancy.

OSI shows how even experienced designers are subject to the layer anti-pattern.  For more on this see the section Layering Considered Harmful in the document at

Example 2: Linux Scsi Midlayer Mistake

A device driver is software that essentially becomes part of the operating system in order to interact with a piece of hardware.  In the Linux Kernel, SCSI devices are handled specially as there are a large number of devices that support the SCSI standard.  SCSI is handled using two layers: the high-level layer that does SCSI stuff, and low-level drivers for specific hardware.  This is an example of good layering since it nicely separates concerns.

However, with this design it was soon noticed that a lot of the low-level drivers were using the same or similar code.  In an attempt to remove this redundancy (remember DRY), shared code was moved upwards to become a midlayer.  After a while it was recognized that this was the wrong approach, and it became known as the midlayer mistake.  A better approach was found that moved the shared code into a library that any of the low-level drivers could link to.  For more on this see Linux kernel design patterns - part 3.

Avoiding Layers

So how do you avoid this anti-pattern?  First understand and be aware of it, then consider design alternatives.  Knowledge of, and experience with, different design patterns is useful.

Waterfall Methodology and Layers       

The waterfall development methodology, though not an approach to software design, can also be seen as an example of the layers anti-pattern.  The phases in the waterfall model are really layers: the design layer is built on the analysis layer; the coding layer builds on the design layer; etc.

Agile methodologies break the layering effect.  A great benefit is that they avoid the problems caused by dividing a project into phases.

One thing to remember is that a good design often consists of many small interconnected modules.  The layered approach starts with small low-level modules but tends to build larger and larger modules on top of them.

Code Example

To clarify all this I will give a real world example.  I was going to add a C code example here, but there is a good example in my recent July post on Dependency Injection.  I include the diagrams again here, and a brief explanation.  For the example code, and more details, see the post at Dependency Injection.

The design was taken from a real application that used the standard C library function qsort() to sort an array of file names for display to the user.

Diagram 1.

The problem with this design was that the main display function (which obtained, sorted and displayed a list of files) was too large at almost 100 lines of code.  It seemed that the sorting of the strings could be separated out, so I tried to push the sorting code into a lower-level SORT module (as I had been trained to do).

Diagram 2. With Layer.

Is this an improvement?  If you think so then you need to re-read this post from the start! This is a classic example of the layer anti-pattern.  The new layer (SORT module) really adds nothing to the design whatsoever and has several problems.

For example, it does not make sense that the SORT module should be dependent on the OPTIONS module.  Thinking even more about the problem, I realized that the SORT module, even though it depends on the code that compares the strings, does not need to know the details of how the comparison is actually performed.

A much better design was found where the OPTIONS module returns a pointer to the current comparison function.  The function pointer is then used by the main DISPLAY module to tell qsort() how to sort the strings.

Diagram 3.  Without Layer.

Note that this solution avoids the unnecessary layer of code that wraps the qsort() function.  There are many advantages to this design - for example, if a new sort option is required then only the OPTIONS module needs to change, whereas the design using layers would require at least two modules to change.

A good rule of thumb is that the fewer modules that need to be modified for a simple change, the better the design.


When I first started programming I remember that there was a great debate about which approach to design was better: top-down or bottom-up.  Of course, the real problem with this debate is that it emphasized a layered approach.  Do you create the top layer first and use stubs for the lower layers; or do you build up from the bottom layers? Unfortunately the underlying assumption that you have to use layers at all is flawed.

I hope this post has convinced you that using layers is not always, and actually not often, the best approach.

Tuesday, 14 August 2012

Fundamentals of Software Design

As an introduction to my next post I wrote the text below to explain some basic principles of breaking down a program into more easily handled chunks.  Discussion in software design literature, under such names as separation of concerns, information hiding, modularity, encapsulation, etc, is massive.  However, I have never seen a concise summary so I wrote this.  Unfortunately, my summary was not as brief as I hoped so I split it into this separate post.

This post is an introduction to my next post on the Layer Anti-Pattern, but I believe it is useful in its own right.


Going back to my very first blog post you may recall that I discussed the fundamental problem facing software development (complexity) and the most important approach to tackling it (the principle of divide and conquer) - see Software Design Complexity.  Of course, that is just the start of the story.  The real challenge is to find the best way to divide a problem in order to conquer it.  In general, approaches to this are the subject of just about every book you have read or will read about designing software - ie, the best way to split a problem into smaller and smaller sub-problems which are more easily tackled.  Here I distill the essence of commonly described techniques as well as adding a few approaches of my own.

Modules and Interfaces

First I will clarify my nomenclature.  There are many names given to software components designed to handle specific problems and sub-problems.  I am going to use the word module.

Of course, a large module will use sub-modules to handle its own particular sub-problems.  At the highest level a module may be a program or tool (or even a suite of programs) which is part of a larger system.  At the lowest level a module is a function or procedure.  In C, it is common to have clearly defined modules at the source file level.  In OO languages like C++, Java, and C# the most common type of module is a class, which may or may not correspond to a single source file.

Another important idea is what I will call the interface.  This is simply what a module exposes to the outside world.  By definition, all communications between modules take place through interfaces.  In C and C++ the interfaces to most modules are typically specified in individual header files (typically in a .H file with the same name as the corresponding .C file).



Do One Thing

One of the simplest, and most obvious, ideas is that a module should do just one thing.  This idea is ubiquitous in the history of software design.  For example, one of the basic design principles of UNIX is that a tool (or any module) should do just one thing (and do it well).  This is what is commonly discussed when people talk about separation of concerns.  More recently the name SRP (single responsibility principle) has become popular.

Although the idea is simple, it is not always easy to recognize when a module is doing more than one thing.  For example, many potentially simple routines are overwhelmed by ancillary concerns, such as data initialization, data retrieval, data conversion, global state management, error handling, logging, etc.  When queried on why the code is structured that way, the designer/developer is often unaware of how to separate the concerns, or even oblivious to the advantages of doing so.

Further, it is often not possible to separate concerns without access to (or understanding of) the necessary technique or technology.  Many technologies have been created for this very reason, such as:
  • function pointers: being able to pass around a pointer to code makes many patterns of modularity possible (see my previous posts on IOC and DI)
  • object-oriented: sometimes a function pointer is not enough; OO technologies also allow data and code (methods) to be handled as one
  • exception handling: allows error handling code to be separated from normal flow of control
  • multi-tasking: allows separate processes to run simultaneously with controlled interactions
  • aspect-oriented: allows "cross-cutting" concerns to be isolated from the rest of the code
  • generics: allows concerns to be separated from the rest of the software in a way that does not reduce performance or sacrifice type-safety
Another good example is from my previous post on Dependency Injection.  It is not obvious that comparison can be separated from sorting which might result in the design of Diagram 2.  Once you realize that the sorting algorithm does not need to know the details of how two elements are compared (only that the elements can be given an ordering) a better separation of concerns can be achieved as in Diagram 3.


Don't Repeat Yourself

Another basic idea is to eliminate duplicate code; this is often known by the acronym DRY (Don't Repeat Yourself).  It is closely related to SRP since, if module boundaries are well chosen, the likelihood of duplicate code is reduced.  Duplicate code is not only a maintenance problem; it is also indicative of a poor design in general.

Similarly to SRP there are two problems with DRY: recognizing that two or more pieces of code have commonality; and isolating that commonality.

It can be hard, just inspecting two different pieces of code, to detect any commonality.  For example, the code to sort an array of strings may bear no resemblance to code for sorting a linked list of structures.  The original designer should be aware that both pieces of code are performing a sorting operation and attempt to isolate it into a separate module (or re-use an existing sort module).

Of course, sorting is a fairly obvious operation, but a designer may have much more difficulty finding and separating other pieces of commonality.  Generally, this takes practice and a little lateral thinking but mainly depends on the ability of the designer.

A common pattern when duplicate code is found is to isolate it into a separate module.

Duplicate Code Moved to a New Module.

Information Hiding

An important part of the design of modules is the idea that implementation details should be hidden from the outside.  This means designing an interface that does not expose internal details that may need to change.  It also means hiding private methods and data from outside use.

What is Information Hiding?  Some authors consider information hiding to be the principle behind good module design.  I use the term simply to mean hiding details of a module's implementation from the outside world.

A common problem is that an interface will expose details of the module's implementation.  In C++ this is often due to using public data members.  For example, consider a 2-D rectangle class that exposes four numbers: bottom, left, top and right; which are effectively the coordinates of two corners of the rectangle.  If, for some reason, the implementation needs to be changed to use the bottom-left corner plus height and width, this is not possible without changing the interface.  Further, a rectangle like this can only have sides parallel to the axes - what if you wanted to add the ability to rotate?

Another problem is when internal parts of the module are accidentally left visible.  For example, in C it is not unusual for functions internal to a module to be globally visible.  This can cause maintenance problems, such as name conflicts or accidental use of similarly named functions.  Anything that is not hidden effectively becomes part of the interface, since it could potentially be used from another module.  Module-private functions in C should always be declared static to avoid their name being globally visible, in the same way as functions internal to a class in C++ should be declared private.

It should be obvious that using a simple, well-defined interface (that only exposes information on what the module does not how it does it) has many advantages.  It enhances maintainability and portability.  Possibly more importantly, it improves understandability of what the module does, which in turn helps you to understand interactions between modules and verify that the overall design is consistent.


Decoupling

Another piece of software design jargon is coupling, which is simply how much modules depend on each other.  The aim is to remove dependencies as much as possible.  This partly depends on information hiding but also relates to how the modules themselves are inter-connected.  Decoupling has large benefits for the maintainability of software.  Loosely coupled modules allow changes or improvements in one module to be made easily without affecting (or introducing the possibility of inadvertently affecting) other parts of the code.

If two modules are tightly coupled then there is a lot of interaction between them, which if taken to extremes means they effectively become one module.  When most or all modules are tightly coupled you end up with the Ball of Mud anti-pattern.

For effective decoupling, module interfaces should be as simple as possible, visible and well-defined (see information hiding above).  One indicator of poor decoupling is when a minor change to a system requires changes in multiple places.

However, decoupling goes further than just having good interfaces.  Suppose we have six well-designed modules that each perform a single function with simple, well-understood interfaces.  If each module depends on every other module then there is still a high degree of coupling between the modules.

Tightly Coupled Modules.

A good design will try to organize the modules into a hierarchy so that each module depends on only a few other modules and there are no circular dependencies.  A hierarchy makes the interactions easier to comprehend.  A graph of the dependencies will then be a tree or DAG (directed acyclic graph).  Here is an example:

Loosely Coupled Modules

Encapsulate what Varies

A major purpose of dividing software into modules is to enhance its maintainability.  There are other advantages such as understandability, re-usability, verifiability, etc., but in my opinion maintainability is the most important quality attribute of most software - yes, even more important than correctness.  (See my blog Developer Quality Attributes.)
Software Designer's Jargon
Encapsulation, modularity, information hiding, reducing coupling, increasing cohesion, etc are all terms that are used in software design literature to describe the same idea of decomposing a design into modules and reducing the coupling between those modules.  Different people have slightly different interpretations of these terms but the important thing is to understand the ideas, not to be too fussed about the jargon.

Hence it makes sense that, when splitting a design into modules, you should try to isolate the parts of the system that are likely to change.  This is given the software design catchphrase encapsulate what varies.

Flexible Interfaces

Having well-defined interfaces means that modules can be changed more easily, but often enhancements mean that the actual interface to a module needs to change.  Just having a simple, well-defined interface is not enough -- it is also important to have flexible interfaces that support forward (and even backward) compatibility.

I think an example (in C/C++) is in order, to clarify this.  Imagine we have module A that enquires of module B the price of a stock using a function call like this:

     /* moduleB.h */
     extern long moduleb_get_price(const char * stock_code);

     /* moduleA.c */
     price = moduleb_get_price("GOOG");    /* get Google's stock price */

Now imagine this needs to be enhanced to allow the price of the stock to be obtained at any time in the past.  In C++ we can add a defaulted time parameter like this:

     /* moduleB.h */
     extern long moduleb_get_price(const char * stock_code, time_t when = (time_t)-1);

     /* moduleB.cpp */
     long moduleb_get_price(const char * stock_code, time_t when)
     {
          if (when == (time_t)-1)
               when = time(NULL);     // default argument means use the current time
          /* ... look up the price ... */
     }

     /* moduleA.cpp */
     price = moduleb_get_price("GOOG");
     price = moduleb_get_price("MSFT", open_time);

This allows the old module A to work with the new module B without any code changes.  This facility (of C++) is useful, but module A still needs to be rebuilt because the default argument is inserted by the compiler at compile time.

If you try to use the old module A with the new module B (or the old module B with the new module A) you will get a link error for statically linked libraries, or a run-time error if using dynamic libraries.

A more flexible interface would allow optional parameters to be passed at run-time.  This is often accomplished using a text-based interface.

     /* moduleB.h */
     extern long moduleb_get_price(const char * params);

     /* moduleA.c */
     price = moduleb_get_price("code=GOOG");

Now if we enhance module B to accept a time then module A can be enhanced like this:

     /* moduleA.c */
     price = moduleb_get_price("code=GOOG, time=10:00");

If the old module A calls the new module B it behaves exactly as before, for backward compatibility.  That is, if the time parameter is missing the new module B should use the current time.

Also if the new module A calls the old module B then it can also behave sensibly, if it has been designed to be forward compatible.  That is, the original module B should ignore anything it does not understand, such as the time parameter.  Of course, this means that the current stock price will be returned instead of the price at 10:00, but this may be preferable to a run-time error.

Of course, the disadvantage of the above system is that you need extra code to create and parse the text strings.  This is one reason that XML is very popular for decoupling modules.  The main advantage of XML is that the parsing and validation can be done for you by using a DTD or schema.  There are other advantages to using XML such as the plethora of code and tools available and the fact that there are standardised, culture-neutral formats for numbers, dates, etc.

Dual Interfaces

Generally modules are written to provide only one interface to the outside world.  One technique that I have found useful is to provide two: a read-only interface (ie one just used for enquiry) and a separate interface that allows making changes.  The inspiration for this idea came from the use of the const keyword in C++ and C, and const iterators in the STL, which allow you to pass a pointer (or reference) to a function and be sure that the function does not modify the object pointed to.  When trying to understand the overall design of a system it is often very useful to know that one module does not affect another even though it may use it.

Using two interfaces in this way can help you to understand the design.  (Having more than two interfaces may indicate that your module violates SRP.)  For example, it is not uncommon to see a container passed to a method with no way to be sure that the method does not modify the contents of the container.

As a more complete example, consider a simple spreadsheet program that I once implemented using MFC.  MFC uses a Document-View architecture, which is an example of the Observer pattern and a simplification of MVC (the MVC Model is called the Document in MFC, and the MVC View and Controller are combined into the MFC View class).  Note that this is a good example of decoupling, as the model need know nothing of its views - to update the views it simply broadcasts a message to which all attached views subscribe.

In this program the model (the MFC Document) is essentially a 2-D array that stores the data of all the cells of the spreadsheet.  The model class provides many methods for modifying the data, such as changing the contents of cells, deleting rows or columns, etc.  There are also methods that just retrieve information, such as cell contents.

In this design you can have multiple views connected to the same model.  For example, you could have two table views in split windows that show different parts of the model in the normal tabular format of a spreadsheet.  You could also have other types of view, such as a graph view which shows the spreadsheet data as a bar or pie graph.  This is shown in the following UML class diagram.

UML Diagram showing Model-View Associations.

The problem with this diagram is that the table and graph views appear equivalent.  In reality, the table view can make changes to the model, such as editing the contents of a cell, but the graph view only reads the data.  Unfortunately, in a UML class diagram such an association is represented by a dotted arrow, and there is no way to differentiate an association that modifies an object from one that only retrieves information.

Eliminate Unused Code

Unreachable code is code that can never be executed when the software runs.  It is similar to redundant code in that it can be indicative of a poor design -- though it can also be just an oversight.  To find such code, use a code coverage tool to make sure your tests exercise all the code.  If some code is never executed, create tests that cover it; if that is not possible, remove the code.

One way I have seen unused code created is through misuse of inheritance.  If derived classes always override a method then there is no point having a base class implementation.  This typically indicates that the classes should be inheriting from an abstract base class (or an interface in C#/Java).


This post has looked at some techniques for deciding how to split a design into modules, how to design interfaces, and how to recognize a design that needs improvement.  Generally, good design requires experience and a readiness to learn new ideas, such as the design patterns discussed in the Design Patterns book that I have mentioned in previous posts.

Most software developers understand the benefits of decomposing a large program into modules.  In reality, the best way to do so is not always obvious and consequently most software still suffers from poor design. In my next post I will look at a common approach which is often used, but is rarely appropriate.  It is perhaps one of the worst design anti-patterns.