Sunday 16 March 2014

Unit Tests Summary

I have written a lot about Unit Tests in the last few months. It's time to summarize.

Agile

The reason I have talked so much about Unit Tests (recently and in the past) is that I believe it is the core enabling technology of Agile methodologies. Trying to do Agile without Unit Tests is like trying to build a ship without a hull - it just doesn't work. (This is one thing I don't like about Scrum - it does not require any technical procedures like Unit Tests.)

One of the guiding principles of Agile is that the system evolves in response to change. Unit Tests allow that to happen by:
  • detecting bugs immediately after changes are made
  • giving developers the confidence to make changes properly
  • avoiding the need for a lot of regression testing which would kill frequent changes
  • avoiding all of the problems that make software hard to modify
  • allowing the initial design to start out simpler - avoiding unnecessary complexity (see Reusability Futility)
One important point that is often missed: responding to change is about a lot more than how the software behaves. Just as important is the ability to refactor the design to take advantage of new ideas, knowledge and technologies, without even affecting the external behavior. And, Unit Tests allow you to fix bugs in a way that does not compromise the original design.

Design

There is a lot of evidence that using Unit Tests results in a better initial design.

Further, without Unit Tests the design of the software will degrade over time (see Why Good Programs Go Bad). With Unit Tests changes can be made properly, so the design does not need to degrade.

Finally, the design can even improve over time, since Unit Tests make it easy to make changes without fear of introducing new bugs. As software is developed and evolves it is inevitable that better ways are found (new algorithms, new hardware, new third-party libraries, etc). Unit Tests allow you to refactor the code to take advantage of new ideas and technologies.

Documentation

There are other advantages to Unit Tests (see What's So Good About 'Em). A major one is that they act as a substitute for (and improvement on) design documentation. Unit Tests have these advantages over written documentation:
  • documents have mistakes - Unit Tests always agree with the code
  • Unit Tests are never out of date, as long as they are maintained and run often
  • verifiable - just run them to check that they agree with the code
  • more easily written - since developers like coding not writing
  • more easily understood by other developers
  • if not understood you can always step through the code in the debugger
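
The idea of tests as living documentation can be sketched like this (the `parse_duration` helper is hypothetical, just for illustration): the assertions spell out the contract more precisely than prose could, and running them verifies the description is still true.

```python
# A Unit Test as living documentation: the assertions document the
# behavior of a hypothetical parse_duration() helper, and running them
# proves the "documentation" still agrees with the code.

def parse_duration(text):
    """Convert a string like '1h30m' into a number of minutes."""
    hours, _, minutes = text.partition("h")
    return int(hours) * 60 + int(minutes.rstrip("m") or 0)

def test_parse_duration_documents_the_contract():
    assert parse_duration("1h30m") == 90   # hours and minutes combine
    assert parse_duration("0h45m") == 45   # zero hours is allowed
    assert parse_duration("2h") == 120     # trailing minutes are optional

test_parse_duration_documents_the_contract()
```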

Implementation

Of course, creating Unit Tests is not without its challenges. First of all it can take some time, especially if you need to build test doubles. Here are the major points on creating foolproof tests:
  • the system being tested must have a good, verifiable, modular architecture
  • tests are written by the developer at the same time as the code being tested
  • tests should use knowledge of the implementation (see White Box Testing)
  • but tests should not depend on the implementation (only the interface)
  • tests should be complete (use code reviews and code coverage tools)
  • tests should only test one concept (see Unit Tests - Best Practice)
  • tests should be short - move set-up and tear-down code to sub-routines
  • tests should be independent (of each other and the environment)
  • use test doubles for any external dependencies that may cause problems
  • use test doubles to simulate errors and other hard to reproduce situations
  • don't use a test double if the emulated module is OK (fast, reliable, etc)
  • tests should be easy to run and fast - and run often
  • use TDD to avoid many of the problems encountered with Unit Tests
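
Several of the points above can be shown in one small sketch (the `Stack` class is a made-up example): each test checks a single concept, shares its set-up via a helper, and depends only on the public interface of the code under test.

```python
# Short, independent tests that each verify one concept, with the
# shared set-up moved into a helper function. Stack is hypothetical.

class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def __len__(self):
        return len(self._items)

def make_stack_with(*items):          # shared set-up in one place
    s = Stack()
    for item in items:
        s.push(item)
    return s

def test_push_increases_size():       # one concept: size after push
    assert len(make_stack_with(1, 2)) == 2

def test_pop_returns_last_pushed():   # one concept: LIFO order
    assert make_stack_with("a", "b").pop() == "b"

test_push_increases_size()
test_pop_returns_last_pushed()
```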

Saturday 1 March 2014

Arguments Against Unit Tests

Here are all the arguments against Unit Tests that I have ever heard (and a few new ones I just discovered). Many of these I believed myself in the past but have since found to be wrong or exaggerated.  I have tried to refute most of them. I do concede that, in a few rare cases, Unit Tests are not useful.

I wrote some Unit Tests but they did not find any bugs at all. It was a waste of time.


This is a common trap that I initially fell for too. Unit Tests are not about finding bugs! - at least not initially. The real advantage is that you can check at any time that there are none. Plus, Unit Tests also have many ancillary benefits such as documenting how the code is supposed to behave. (See here for more on the benefits.)


Our framework makes it impossible to use Unit Tests.


It's true that many frameworks and languages are not Unit Test friendly. Luckily this is changing. If you have an existing application with this problem then maybe you can't use Unit Tests as much as you would want (see next argument). If you are creating new software, make sure the language and other software you use is amenable to Unit Testing.

We are trying to add Unit Tests to an existing application. The code has no obvious design. We can't work out what to do.

Unfortunately there are times when it is too difficult to create Unit Tests. The reason has nothing to do with what the software is designed to do, and it is certainly not a limitation of Unit Tests per se. Instead, it is caused by a bad design.

Our code interacts with web services, databases, servers, hardware, etc. It's impossible to create Unit Tests for this sort of code.


It's not impossible. It may be hard or take some thought and effort. On the other hand, you may be able to use tools like a mock library, in which case it may be much easier than you think. Even if it does take a lot of time, it is worth it.
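
As a minimal sketch of the mock-library approach: the code under test talks to an external service through an injected client, so the test can substitute a mock from Python's standard `unittest.mock` module. The names (`price_with_tax`, `get_price`) are hypothetical.

```python
# Using a mock to stand in for an external service in a Unit Test.
from unittest.mock import Mock

def price_with_tax(client, item):
    # Code under test: reaches the external service only through `client`.
    return round(client.get_price(item) * 1.10, 2)

def test_price_with_tax_uses_service_result():
    fake_client = Mock()                      # stands in for the real service
    fake_client.get_price.return_value = 10.0
    assert price_with_tax(fake_client, "widget") == 11.0
    fake_client.get_price.assert_called_once_with("widget")

test_price_with_tax_uses_service_result()
```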

The software is highly user-interactive so we can't use Unit Tests.

It may appear to be mostly GUI but there is a lot of other logic in there. Again, this gets back to a good design (including any GUI framework you are using). You need to decouple your logic from the user-interface.

Many modern development frameworks promote good design by separating the user-interface from the rest of the code using a thin UI layer on top of everything else. For example, XAML often allows you to create the GUI without any programming at all. In fact, the user-interface can be (and often is) created by someone with no coding skills. Moreover, it can be modified at any time without code changes.

MVVM (Model-View-ViewModel) is a design pattern which separates the View (user interface) from the ViewModel (the code behind the user interface). Unit Tests can be inserted at this interface by talking directly to the ViewModel. (Though more commonly a user-interface scripting language is used which allows anyone to create tests without modifying the code - these are usually called acceptance tests not unit tests.)
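
A rough sketch of testing at this seam (in Python rather than XAML/C#, with a hypothetical `LoginViewModel`): the test drives the code behind the user interface directly, with no GUI involved at all.

```python
# Testing the view-model (the code behind the UI) without any user
# interface. LoginViewModel is an invented example.

class LoginViewModel:
    def __init__(self):
        self.username = ""
        self.password = ""

    @property
    def can_submit(self):    # the View would bind a button's state to this
        return bool(self.username and self.password)

def test_submit_enabled_only_when_both_fields_filled():
    vm = LoginViewModel()
    assert not vm.can_submit          # empty form: button disabled
    vm.username = "alice"
    vm.password = "secret"
    assert vm.can_submit              # both fields set: button enabled

test_submit_enabled_only_when_both_fields_filled()
```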

We are working to a tight deadline. We don't have time to create Unit Tests.

This one I really hate! You need to find the time somehow to add the unit tests, or renegotiate the deadline. The "decision makers" should be made aware that trying to save a little time now will have much larger costs in time and money in the future. If they cannot be convinced then I recommend you look for a new job.

One of the reasons that unrealistic deadlines are forced on projects is that there is a perception (not entirely unwarranted) that people don't work at their full capacity until the deadline approaches. An alternative is to take the Agile approach and make several releases so that everyone can see that something is happening.

CASE STUDY
About a decade ago I was given the task of adding a GUI to an existing piece of software which was just being run via the command line (using dozens of confusing switches). The deadline for completion was two months. I thought it would take six months to do properly with full Unit Tests.

I tried to negotiate for more time but it was decided that if we spent any more than two months it would be impossible to make any money out of the project. It was to be a one-off "quick and dirty" project. We were told to cut corners to save time -- there was definitely no room for Unit Tests. The other limitation was that we could not drop any of the major features.

However, I felt the real reason for the short deadline was that, in the past, for any project longer than a couple of months, the developers had lost focus somewhat. I was just getting into XP and thought the Agile approach of multiple releases would assuage that concern. My alternative proposal (eventually accepted) was that a minimal working version be created, then have fortnightly releases to enhance it until the right trade-off between capability and profitability was attained.

We quickly created a basic initial running version without cutting corners and with full Unit Tests. In the next two months we added most of the desired features though some were changed and others were left out. Even though it went three weeks past the original deadline everybody was happy with the process and more than happy with the end result.

The software was more successful in the market than anyone imagined. There were several later enhancements which were easily implemented due to the fact we had worked "slow and clean" instead of "quick and dirty". Maintaining this code was when I really came to appreciate the advantages of having a comprehensive set of Unit Tests.

I have a gut feel for the amount of testing I need to do and have usually been proven correct. Unit Tests are often overkill.

Your "gut feel" may be misleading you. How can you be sure you've generally been correct in the past? Adding more Unit Tests may have produced even better results than the ones you got.

I also have a tendency to trust the feelings I get from decades of experience. However, my initial "gut feel" was way off when it came to Unit Tests.

The moral is: trust your instincts, but be open to the possibility that they are fallible. It may surprise you how much time Unit Tests can save in the long term.

Bugs were slipping through the Unit Tests. We didn't have time to maintain something that was not working. Eventually we stopped using them.

OK, it sounds like your tests were not very good to start with. Then you stopped adding new tests. There was a snowball effect -- the tests had less and less value until they became worthless.

There was an alternative possibility, where the snowball was pushed to the top of the hill and ran away down the other side. Some people call this "reaching critical mass". By pushing a little harder initially to get good tests, people would have eventually recognized the benefits. You might also look out for the benefits yourself and point them out. Once the programmers are aware of the benefits they will be encouraged to add more tests and maintain them, which will add even more value.

In summary, you need to start off writing good tests. Good tests have 100% code coverage (or as close as is feasible). Also, don't forget that in code reviews, it is just as important to review the tests as the code (if not more so).

We tried Unit Tests but found no real benefits so we dropped them.

This is a common cry. It usually comes from someone who was forced to use Unit Tests, despite not being convinced. When you look closely, either the design/tests are not well done OR the many benefits were there but not noticed.

Doing things properly involves having a good (modular, maintainable, verifiable, etc) design and having good tests (see next item).

Sometimes the benefits are not obvious, but still significant. And sometimes you have to be a little patient before you start to see the advantages. One day you will realize how much easier and pleasant many aspects of your job have become compared to when you didn't use them. Here are some with/without examples:



1. Someone introduces a nasty errant pointer bug. There is no Unit Test for this specific problem but the overnight build/test causes all sorts of bells to ring because the bug has a follow-on effect that causes several Unit Tests to fail. Though the errors are far from the source of the problem we know that the previous day's changes caused or triggered the problem. Hence it is relatively straightforward to track down and fix.

Without Unit Tests this might not have been noticed for some time, and might even slip into production. Some users may have experienced odd behavior, and a very small number may have had their system crash. These sorts of bugs can be very hard to track down unless you know when they started happening.
2. An enhancement is proposed from way out in left field (ie, strange and completely unanticipated). Implementing the change with the current core engine would create a solution that was grossly inefficient. However, an efficient solution is found which requires completely rewriting the core engine. Unit Tests allow us to take this approach and be sure the system has no new bugs and is backward compatible.

Without Unit Tests the only viable solution is to completely rewrite the system allowing for the new requirement, but this would not have been possible due to the cost. Moreover, such a rewrite would have other problems (like missed undocumented features that users have come to depend on).
3. We had developed a problematic library which performed complex financial calculations. The trouble was it had grown large and cumbersome over many years of modifications. Many parts were poorly done and almost impossible to understand and modify. Fortunately, the overall design was acceptable and additions and bug fixes had been accompanied by Unit Tests.

One of the programmers proposed a table-driven approach that grossly simplified the design. In fact changes that were difficult in the past could be made with simple configuration changes (no code changes at all :). It took some time but the library was completely rewritten. The first new version failed most of the Unit Tests but after a few days work we were confident that the replacement had the same behavior as the original (and incidentally was much faster).

Without Unit Tests we would have just had to limp on with the problematic library. Any rewrite would have introduced so many subtle differences that it would have taken years to find and fix all the bugs. Further, some bugs would have probably still slipped into production with the possibility of large financial losses to users.
4. A new programmer joins the team. After being pointed in the right direction she is immediately able to make a contribution. She found some tests related to the area of code she was enhancing, stepped through it in the debugger to check her understanding, then wrote a new function (and a new test) all in a few hours.

Without Unit Tests a new team member would take days or even weeks of reading documentation and experimenting before being able to accomplish anything useful.

We stopped using Unit Tests because we found it too hard to maintain them.

This happens due to poorly written tests. It's not easy to write good tests if you have not been taught. In general, you should not need to modify individual tests much - most of the time you just add new ones or delete ones that are no longer valid. More commonly you would need to modify support code, like set-up routines.

Many novices try to cram as much as possible into one giant test. With a large cumbersome test a simple change to the SUT (system under test) can invalidate the whole test requiring the test to be completely replaced. If instead, many small tests are used then a simple change may be all that's required, such as to a single test or a set-up function.

I already discussed this and many other bad test practices (see Best Practices). In brief here are some reasons tests may be hard to maintain:

  • trying to check too many things in the same test - one concept per test
  • lots of tests with similar or duplicate setup (and tear-down) code - DRY
  • tests that depend on other tests having already been run - make tests independent of each other
  • tests that depend on the environment being set up correctly - make tests independent of the environment
  • tests that depend on the SUT's implementation - only test through the public interface (see also White Box Testing)
  • poorly written code - generally tests should be of same high standard as normal code
  • use of external dependencies with complex behavior - use test doubles
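
One way the DRY point above plays out in practice (with a made-up `Account` class): if every test builds the object under test through a single helper, a change to the constructor means editing one function instead of every test.

```python
# Shared set-up code kept in one helper so tests stay easy to maintain.
# Account and make_account are illustrative names, not from the article.

class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

def make_account(balance=0):
    # The only place that knows how to construct an Account; if the
    # constructor's signature changes, only this one line needs updating.
    return Account(owner="test-user", balance=balance)

def test_deposit_adds_to_balance():
    account = make_account(balance=100)
    account.deposit(50)
    assert account.balance == 150

test_deposit_adds_to_balance()
```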

I made a simple (legitimate) code change and I got 42 red lights. (The programmer who made this comment then stopped running the Unit Tests.)

In this situation it's either a problem with the recent code change or a problem with the tests (or a combination of both). Neither is a good reason to stop using Unit Tests.

In the former case, it usually happens that a code change has been made without fully understanding the consequences. This can be easy to do, and it is one of the advantages of Unit Tests that they often tell you when you may have done something wrong. However, Unit Tests are no substitute for understanding what you are doing as well as having someone review your code.

The other possibility is that the tests make an assumption about the implementation which has been invalidated by your change. It may be overwhelming to get a large number of red lights but the fix may be as simple as updating a setup function used by all the failing tests.

The point is you have to fully understand the code (and the tests). Have another programmer review your code (and tests) too.

It's too hard to write a test that checks what happens when the server is down. It's easier to just pull out the network cable.

It may not be fun, but writing a test double that simulates the server will save a lot of trouble in the long-term. A test double can be used to simulate any sort of error -- after all communications to the server can be lost for more reasons than an unplugged cable. Having a comprehensive suite of Unit Tests using this test double allows you to easily test many situations that are rarely (or never) seen using manual tests.
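
A sketch of such a test double (all names here are illustrative): the fake transport simulates a lost connection, something that is awkward to arrange with a real network, and the test verifies the code degrades gracefully instead of crashing.

```python
# A test double that simulates the server being unreachable.

class FlakyTransport:
    """Stands in for the real network layer and always fails."""
    def request(self, path):
        raise ConnectionError("simulated: server unreachable")

def fetch_status(transport):
    # Code under test: must degrade gracefully when the server is down.
    try:
        return transport.request("/status")
    except ConnectionError:
        return "offline"

def test_server_down_is_reported_not_crashed():
    assert fetch_status(FlakyTransport()) == "offline"

test_server_down_is_reported_not_crashed()
```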

Without a Unit Test for the off-line problem(s), some silly programmer will introduce a bug that causes the software to crash when the server is disconnected. One day, perhaps weeks or months later, somebody will find this problem (pray it is not a customer). By then it will be a complete mystery as to what is going wrong. It will take much longer to track down the cause of the problem than if you had a Unit Test that flagged it immediately.

We use several complicated hardware devices that are always being updated and reconfigured. Creating test doubles for all of them would be impossible.

I feel for you, mate :( I have worked in several environments like this in the past. First of all I will say that if you are working with a complex hardware device and you do not have a simulator for the device you are already behind the eight ball. Even with normal testing there are situations and error-conditions that you probably cannot generate using the real hardware. You need test doubles even if you don't use Unit Tests.

First and foremost, you need a simulator for every hardware device. If a device (and associated drivers) is provided by a third party then you need to ask them for a simulator or work with them to create one. Of course, this needs to be maintained to always be up to date. Many hardware suppliers will have simulators for their own use which they will provide on request.

If you are writing the hardware drivers/library for some hardware then you also need to create a simulator. With a bit of thought and inventiveness this can be easy to do. The main thing is to update the simulator at the same time as changes are made to the real hardware/drivers, or even before; often (remember DRY) they can share a lot of code. In fact it can be very useful to update the simulator to test new functionality even before the real hardware is available.
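
The simulator idea can be sketched like this (a toy, in-memory "tape drive", loosely inspired by the case study below; the names and tiny capacity are invented): the simulator exposes the same operations as the real driver, so tests can trigger conditions like end-of-tape in seconds rather than hours.

```python
# A hardware simulator sharing the real driver's interface, so tests
# can exercise rare conditions (like end-of-tape) instantly.

class TapeDriveSimulator:
    """Mimics a tape drive, storing 'written' blocks in memory and
    reporting an early end-of-tape warning at a tiny capacity."""
    CAPACITY = 3          # deliberately small so tests hit EOT quickly

    def __init__(self):
        self.blocks = []

    def write_block(self, data):
        if len(self.blocks) >= self.CAPACITY:
            return "EOT"                  # simulated end-of-tape warning
        self.blocks.append(data)
        return "OK"

def test_eot_warning_without_filling_a_real_tape():
    drive = TapeDriveSimulator()
    for _ in range(TapeDriveSimulator.CAPACITY):
        assert drive.write_block(b"data") == "OK"
    assert drive.write_block(b"data") == "EOT"   # seconds, not hours

test_eot_warning_without_filling_a_real_tape()
```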
CASE STUDY
About 15 years ago I wrote a C-library that controlled high-capacity SCSI tape drives. At the same time I wrote a simulator that performed all the same operations (using a temporary disk file in order to store large amounts of data that was "written" to the tape.)

This made it very quick and simple to write tests for many scenarios that would have been tedious using real drives. For example, with a real tape drive it might take hours to fill up a tape in order to test what the software does when the drive returns an EOT (end of tape) early warning. Even rewinding a tape can take several minutes.

Creating a simulator for hardware in this way is essential for writing Unit Tests for software that interacts with the hardware. If a real tape drive was required then the tests would take too long and nobody would run them. Moreover, the simulator allows testing even without the presence of a physical tape drive.

Our tests are out of date. We don't have enough time to catch up, since we're flat out just doing the development.

You've dug yourself into a hole. To avoid this situation in future I recommend you use TDD, where you write the tests before you write the code. In fact there are many other advantages to TDD (see xxx).
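
The TDD rhythm in miniature (the `is_leap_year` function is just an example): the test below is written first, stating the expected behavior, and then the function is written to make it pass.

```python
# TDD sketch: the test states the rules first; the code exists only
# to satisfy it. is_leap_year is a hypothetical example function.

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year_rules():
    assert is_leap_year(2012)        # divisible by 4
    assert not is_leap_year(1900)    # century years are not leap years...
    assert is_leap_year(2000)        # ...unless divisible by 400
    assert not is_leap_year(2013)    # ordinary non-leap year

test_leap_year_rules()
```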

We have the best developers. They don't make mistakes, so we don't need Unit Tests.

To me this is like saying "Nobody around here lights fires so we don't need fire extinguishers".

Everyone makes mistakes, especially when modifying code they haven't looked at in months. One day you will spend days or weeks tracking down a problem that would have been spotted months earlier (and fixed easily) if there had been Unit Tests. Even disregarding this, there are other advantages to Unit Tests...

Without Unit Tests, your brilliant developers will have to document the design of the system, and very few brilliant developers can write well. Those that can are often more interested in coding. And those that are capable and willing invariably can't explain things simply enough for us ordinary developers to understand. Finally, even if you have a good designer who can write, wants to, and can explain things simply, they will not have time to update the documentation every time the software changes (and certainly not after they leave the organization).

Far better to get them to do what they do best - write code (ie, Unit Tests). These will act as documentation and living proof of how the software currently works.

Also, what do you do when your best developers leave and all you are left with is mediocre ones? Do you get the new programmers to write Unit Tests for all the existing code, which they don't even understand (despite reading the doco 10 times)? In fact there will be too much inertia to even start writing tests for new code. After all why should they have to write tests when the previous developers did not have to?

Quite simply, if you really have the best developers then they will be pushing to have Unit Tests for all their code. First, they will realize that they themselves are not infallible. They will also want to give their code its best chance of surviving and prospering once they are gone.

I can't test the real-time behavior of my code using Unit Tests.

You can probably use test doubles in your Unit Tests. Most likely you will need to simulate the system clock. You may also need test doubles for other hardware devices.
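
One common way to simulate the system clock is to inject it as a dependency, so the test controls time explicitly instead of sleeping (the `FakeClock` and `Timeout` classes below are invented for illustration):

```python
# Simulating the clock with a test double: code under test asks an
# injected clock for the time instead of calling time.time() directly.

class FakeClock:
    def __init__(self, start=0.0):
        self.now = start
    def time(self):
        return self.now
    def advance(self, seconds):       # the test moves time forward itself
        self.now += seconds

class Timeout:
    """Expires once `limit` seconds have passed on the given clock."""
    def __init__(self, clock, limit):
        self.clock = clock
        self.deadline = clock.time() + limit
    def expired(self):
        return self.clock.time() >= self.deadline

def test_timeout_expires_without_real_waiting():
    clock = FakeClock()
    timeout = Timeout(clock, limit=5.0)
    assert not timeout.expired()
    clock.advance(5.0)                # no sleep() needed
    assert timeout.expired()

test_timeout_expires_without_real_waiting()
```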

If there are aspects of the environment that are non-deterministic then you probably cannot check for correct behavior using Unit Tests (see next argument).

Our code is multi-threaded so we can't use Unit Tests.

You can't use Unit Tests to test interactions between threads since that is non-deterministic. (You need another approach to avoid/detect race-conditions, deadlocks etc, which is outside the scope of this article.) However, these sorts of interactions should be minimized and isolated from the rest of the code.

You can still use Unit Tests to test everything else.

If you use formal proofs of correctness then you don't need Unit Tests.

Good luck with that.

Summary

In conclusion, if you are starting a new project there is absolutely no excuse for not using Unit Tests. Almost everybody has some sort of problem using Unit Tests initially, but with a sensible design and properly implemented tests (as described above and in previous articles) they will give enormous benefits particularly for software that needs to be modified frequently.


If you have heard any other arguments against Unit Tests please feel free to share them.