Monday, 8 May 2017

Why I like C#

I was going to talk a bit more about version control and Git after my posts on Agile version control late last year, but I have been very busy with a new job.

“it was 20
years ago today”
However, I have had a draft of this post for many years and, significantly, it was 20 years ago today (actually last Thursday) that I invented C#. I remember it distinctly because a colleague suggested we go out and celebrate "Star Wars Day" after work. I had never heard of Star Wars Day (May the Fourth) before then.

It was during a jovial conversation over a few drinks that I mentioned my idea that Microsoft should invent their own language which, as it turned out, was rather like C#. The reaction was that I was a complete idiot - after all, Microsoft were not known for that sort of thing. The closest they had come was COM (Component Object Model), which we had all used and agreed was an unwieldy mess. Microsoft had also invested heavily in their own dialect of Basic called VB (Visual Basic), as well as their C compiler, so why would they want a new language? This was in the days when it was still novel for a company to create its own language - a trend begun, I guess, with Sun and Java.

My idea was oddly triggered by my problems with the then-latest Visual C++ 4.0 (which we had been using for a few months). I had written a new SCSI library in C++ for some 32-bit code but was greatly dismayed to find that VC++ was not designed to create 16-bit code (for MSDOS and Windows 3.1) or even target Win32s - I wanted to use the C++ code with some legacy C applications.

We had also encountered lots of frustrations with C++. We had done a lot of work in C previously but needed some modern language features. Other languages around at the time, like Delphi and Java, were not suitable either.

I decided that what was really needed was a language that Microsoft would properly support (their C++ support was not good at the time) and that had modern features that C was lacking (like templates and exceptions, and even a few niceties from Java, like GC). There was also a technological development (see next), where Microsoft was at the forefront, which was crucial for my new language.


I had been greatly intrigued by p-code since 1978, when I first used Pascal at university. Pascal (at least the compiler I used) compiled to p-code, which then required a p-code interpreter for whatever CPU you were using. (This idea was later used by Java byte-code and .Net MSIL.) Not long after that I did a lot of programming in Forth (sort of a p-code assembler) on my Commodore 64.

The problem with p-code is that it is slow. Running p-code (via interpreter) is about an order of magnitude slower than the same program compiled to native instructions. A lot of people thought that this would become less of a problem as computers got faster. The trouble with this argument is that while the interpreted code got faster so did the native code. (The idea of p-code also seemed to be fading until Sun created Java, but a lot of people did not like the poor performance of Java.)

However, some people at Microsoft realized that once processors were fast enough you could compile on the fly (later called JIT-compilation) rather than interpret the p-code. Sooner or later the compilation time would become negligible.
"once processors
were fast enough
you could compile
on the fly"

Microsoft had done a lot of research on JIT-compilation for their (then) flagship language VB. (Basic is interpreted, like p-code.) In the late 1990s I did some tests and found that VB6 was similar in speed to C++ in many areas due to its use of JIT-compilation. All the C++ programmers I told this to just did not believe me (or did not want to).

Why Microsoft needed a new language in 1997

  1. C (and an emerging C++) were the driving force behind almost all the successful MSDOS and Windows products, and most C/C++ programmers, myself included (perhaps unfairly), avoided VB like the plague.
  2. C/C++ development was painful and slow. COM was supposed to address this but made it worse!
  3. There were a lot of error-prone areas in C/C++ that could be fixed by new syntax and semantics.
  4. There were a lot of things that could be taken from C++ like exceptions, templates and STL containers.
  5. They could wrap/hide the horrendous Windows API (like MFC did for C++).
  6. There was a lot of interest in Java (especially its innovative memory management system) but Microsoft were impeded from using Java for their own ends by a litigious Sun. (Sun had already successfully sued Microsoft for not properly implementing Java.)
  7. A language like Java generating intermediate code (p-code) would be slow but this problem could be alleviated by Microsoft's research into JIT-compilation from the early-mid 1990's.

.Net and C#

I did not really think much more about it but a few years later I started hearing about NGWS, VB7 and later C#. In 2001 I downloaded and tried the C# public beta. It was then that I realized that C# was very similar to the language that I had previously been advocating. My only disappointment was that C# lacked templates and STL-like containers, though C# 2.0 later added generics.

To be fair to Microsoft (and Anders) there were a hell of a lot more great things in C# than I could ever have come up with, though many came from Java and C++. And even since then, C# has added some brilliant new things of its own, like lambdas and LINQ.

C# Problems

You may have noticed from previous posts that I like C# in many ways. I guess this may be partly due to the fact that I feel I independently invented it. However, there are a few things that it got wrong, but remember that these are very small in number compared to the large number of things it got right.  (Also note that some of these things are due to the .Net CLR upon which C# depends.)
  1. One thing that I really thought C# should have had from the start was (what came to be known as) generics. In 1997 I had become a huge fan of templates, and especially the STL, in C++. I remember reading that they would be added later (and they were, in C# 2.0). Why not delay the release of C# 1.0 until they were ready? This caused lasting problems of maintaining backward compatibility, especially when implementing generic interfaces.
  2. I really hate the containers in C# compared to C++ STL containers, or even those of other languages. Related to the previous problem is the later addition of generic containers to replace the original non-generic ones.
  3. Apart from the containers, most of the C# run-time library is excellent. But there were some simple, obvious mistakes which I picked up straight away. An obvious one was that int.Parse() throws an exception if it encounters a non-digit, rather than the more sensible behavior of something like C's strtol(). This was later addressed with int.TryParse().
    Another one I encountered almost straight away is that String.Substring() throws an exception if the string is not long enough. This might be good behavior sometimes, but more commonly you would just want a shorter string returned rather than an exception thrown.
  4. One of the stupidest things in C# is the Decimal type. (Anders seems to have an irrational penchant for stupid numeric types such as the Delphi real48.) As soon as I saw it I thought it would be better, and much simpler, just to add fixed-point facilities to the existing integer types, attached as metadata (ie, using an attribute attached to an integer variable).

    I wrote at length about this over a decade ago - eg see Why do we need Decimal?
  5. Const is one of the simplest and most useful additions to C++. I do not know why C# (and other languages) continue to ignore it.
  6. The fact that all static and heap variables are cleared (zeroed) at run-time is sometimes unnecessary and inefficient. What's the point of a huge array being initialized to zeros, only to immediately have all its values set to something else? (Note that the security argument is a furphy.)
  7. I previously mentioned that the behavior of the default test for equality (Object.Equals()) is flawed. (See the C# section in my post on Zero.) Actually, having recently used Go (the language from Google), I now realize that all the "object-oriented" stuff that C# copies almost exactly from Java is unnecessary and actually encourages poor designs (but at least there is no multiple inheritance!). I may talk about this more in a future post on Go.
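The backward-compatibility cost of adding generics late shows up in the collection interfaces. A minimal sketch (the SingleItem class is hypothetical) of how every IEnumerable&lt;T&gt; implementation must still carry the old non-generic method:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Because generics only arrived in C# 2.0, IEnumerable<T> had to be
// retrofitted to extend the old non-generic IEnumerable - so every
// implementation carries a legacy GetEnumerator() it rarely wants.
public class SingleItem<T> : IEnumerable<T>
{
    private readonly T item;
    public SingleItem(T item) { this.item = item; }

    public IEnumerator<T> GetEnumerator()
    {
        yield return item;  // the one useful, type-safe enumerator
    }

    // Required purely for backward compatibility with C# 1.0 code
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```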
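The Parse and Substring complaints can be shown in a few lines (SafeSubstring is a hypothetical helper, not part of the .Net library):

```csharp
using System;

public static class SafeStrings
{
    // Hypothetical helper: return whatever is available rather than
    // throwing when the string is shorter than requested.
    public static string SafeSubstring(string s, int start, int length)
    {
        if (start >= s.Length) return string.Empty;
        return s.Substring(start, Math.Min(length, s.Length - start));
    }

    public static void Main()
    {
        // int.Parse("123abc") would throw FormatException;
        // int.TryParse reports failure via its return value instead:
        bool ok = int.TryParse("123abc", out int n);
        Console.WriteLine(ok);                          // False

        // "abc".Substring(1, 10) would throw ArgumentOutOfRangeException:
        Console.WriteLine(SafeSubstring("abc", 1, 10)); // bc
    }
}
```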
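The fixed-point alternative to Decimal can be sketched with a plain scaled integer. (The Money type here is hypothetical - C# has no attribute that does this automatically, which is exactly the point.)

```csharp
using System;

// Scaled-integer fixed point: store cents in a long and convert only at
// the boundaries. All arithmetic is exact integer arithmetic.
public struct Money
{
    public const int Scale = 100;   // two implied decimal places
    public long Cents;              // the underlying integer

    public static Money FromDollars(decimal d) =>
        new Money { Cents = (long)(d * Scale) };

    public static Money operator +(Money a, Money b) =>
        new Money { Cents = a.Cents + b.Cents };

    public override string ToString() =>
        $"{Cents / Scale}.{Math.Abs(Cents % Scale):D2}";
}

public class MoneyDemo
{
    public static void Main()
    {
        var total = Money.FromDollars(1.10m) + Money.FromDollars(2.05m);
        Console.WriteLine(total);   // 3.15
    }
}
```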
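On the zero-initialization point: the CLR clears every new array even when you are about to overwrite it, though much later .Net versions (5 and up) did add an opt-out:

```csharp
using System;

public class ZeroInitDemo
{
    public static void Main()
    {
        // Guaranteed zeroed by the CLR - wasted work here, since every
        // element is immediately overwritten:
        var buf = new double[1_000_000];
        for (int i = 0; i < buf.Length; i++)
            buf[i] = Math.Sqrt(i);

        // .Net 5+ lets you skip the redundant clearing for large arrays:
        var raw = GC.AllocateUninitializedArray<double>(1_000_000);
        Console.WriteLine(raw.Length);   // 1000000
    }
}
```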


C# made things so much easier after using C (and then C++) for decades. There are so many things I could mention but a few immediately spring to mind:
  • the metadata system, which avoids all sorts of configuration problems that plague C due to header files, linking issues, DLL hell, etc
  • the garbage-collected heap, which frees you from the tedium of tracking who allocated what and who needs to free it, and of making sure there are no memory leaks, double-frees, etc
But there were many other little things - for example, see the Code Project article that I wrote in 2004 called C# Overflow Checking.
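The checked/unchecked contexts that article covered work roughly like this (a minimal sketch):

```csharp
using System;

public class OverflowDemo
{
    public static void Main()
    {
        int big = int.MaxValue;

        // By default (outside a checked context) integer overflow wraps silently:
        Console.WriteLine(unchecked(big + 1));   // -2147483648

        // A checked context turns the same overflow into an exception:
        try
        {
            int boom = checked(big + 1);
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow caught");
        }
    }
}
```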

Sure, C# borrowed a lot of things from Java, but that is de rigueur for language design (and Java got a lot from C++).

Thursday, 17 November 2016


Here are links to all of my posts so far, loosely categorized.

Note that there are a few bonus links to my Code Project articles - marked with [CP]

Software Design

Design Principles

Handling Software Design Complexity - what software design all boils down to
DIRE - an obvious thing we often forget
Developer Quality Attributes - or why fixing bugs is not important
Verifiability - software is useless unless you can verify its correctness
Why Good Programs Go Bad - risk avoidance causes software to "rust"
Book Review: 97 Things Every Architect Should Know

Design Practices

Fundamentals of Software Design - 8 ways to create a good design
Agile Design - how emergent design almost always works better than BDUF
Inversion of Control - IOC is a technique for better decoupling using DIRE
Dependency Injection - an example of IOC


Gas Factory Anti-Pattern - a mistake even (or especially) good designers make
Reusability Futility - "Simplicity before Generality, Use before Reuse"
Shotgun Initialization - an example of the dangers of defensive programming
Layer Anti-Pattern - the problems of a common, obvious approach



Agile's Fifth Element - favor simple design over re-usability and generality
JIT (Just In Time) - an example of DIRE that is core to much of Agile
DIRE (Don't Isolate Related Entities) - how you divide and conquer is the key
Agile Design - evolving software one small step at a time
Agile and Code Reuse - all about YAGNI (you ain't gonna need it)
Software Quality Assurance & Agile - how Agile evolved from, but is different to, SQA
Lean is not Agile - applying "eliminate waste" to software design leads to BDUF
Software Development Methodologies [CP] - Agile and other methodologies by analogy


Scrum Team Size - teams should be small to avoid social loafing and other phenomena
Scrum Team Composition - "feature" teams are the key
Collaboration - traditional development discourages collaboration + why Scrum works

Making Agile Work

Scrum Standup - it's more about visibility than communication
Developer Quality Attributes - what benefits developers eventually helps users
Agile Version Control - Agile requires the right version control practices & software (Git)
Scrum Problems - management "buy-in" & other things that help Scrum work properly
Why Scrum Fails - intransigence, non-collaboration, etc
Written vs Verbal - when, who, why, and how of Agile documentation
JIT Testing - testing as you go (continuous testing) is an example of JIT (Just In Time)

Unit Tests

Change - how Unit Tests help you to embrace change
What's so great about Unit Tests - Unit Tests are not about finding bugs
White Box Testing - the best Unit Tests use "good" white box testing
Personal Experiences with Unit Testing - it took me 20 years to truly appreciate them
Challenges - why getting started with Unit Tests seems, but is not, insurmountable
Unit Tests Best Practice - a few things to avoid
Arguments Against Unit Tests - common arguments and why most are invalid
Summary - Unit Tests concisely summarized



Zero - bugs are less likely if you don't treat zero as a special case
Asymmetric Bounds - in code and GUI design this is an important way to avoid bugs
Book Review: Clean Code - a great book on creating the best code

C Coding

Best Practice in C for Modules - strong-coupling and other things to avoid
Defensive Programming - how it works and how it can hide bugs
Shotgun Initialization - a defensive programming practice to avoid
Alignment and #pragma pack - make structs "alignment agnostic" to avoid surprises
Making Code Testable - coding for testability improves correctness, reliability, etc
Ten Fallacies of Good C Code [CP] - 10 more things to avoid

C++ Coding

STL's Dark Secret - vectors are slower than they should be
Iterators Through the Looking Glass - subtleties of the STL reverse iterators
C++11 and Lambda Functions - lambda functions make STL so much better
Nested Functions using Lambdas - you can finally have nested functions in C++11

C# Coding

Overflow Checking using checked/unchecked [CP] - C# has some cool features
Nested Functions using Lambdas - includes an example of using C# lambdas


Long Identifiers make Code Unreadable - don't try to put too much info. into a name
Self Describing Code - why it's a bad idea and why you should comment your code


The Phillips Scale of Code Quality - how good is your code?
Version Control - Personal Experiences - hands on version control

Version Control - Personal Experiences

Last month we looked at how to use version control when using Agile development. My conclusion was that you should be using Git. This is simply because with CI (Continuous Integration) there is a lot of branching and merging going on, and Git is the only version control system that allows a version to have two parents. This is not to say that you can't use other version control systems (in fact I like SVN better in many ways - see below), just that Git keeps track of what needs to be merged for you.

This month I take a leisurely stroll back through time and look at all the version control systems I have used. I have a long personal history of using version control systems (generally being the administrator for such systems). I have used the best (and worst) but you should note that there are some excellent systems (like the proprietary Perforce and open-source Mercurial) that I have not used (yet?).


SCCS

I first experimented with version control while at Sydney University in the early 1980s, using the Computer Science department's VAX 11/780. This ran a variation of UNIX that included a primitive version control system called, I think, SCCS (Source Code Control System).


PVCS

I first used version control for my C source code in several MSDOS/C jobs during the mid-1980s. At the time the only serious option for MSDOS was PVCS (Polytron Version Control System), which I used at several companies.

I can't say I loved PVCS but it did the job. It efficiently stored changes to text files as "reverse deltas" and had all the basic features like branching and tagging.

CVS, etc

In the late 1980s I moved back to UNIX, where I was a system administrator and system programmer. Under UNIX I tried SCCS, RCS (Revision Control System) and an early version of CVS (Concurrent Versions System), all of which worked but were difficult to use in some way.


TLIB

When I moved back to MSDOS/MSWindows systems in the early 1990s I used TLIB. This was similar to PVCS, but quite a bit better. However, it was still a command-line-driven system, which I found tedious to use.


VSS

In the mid-1990s Microsoft included a GUI-based version control system with their Windows IDE (Developer Studio). This seemed like a great idea to me after my experiences with command-line version control systems. However, Visual SourceSafe (VSS) turned out to be by far the worst product I have ever used - it was not only poorly designed and very confusing, but also had a tendency to lose and corrupt files and even whole repositories! Moreover, it made multi-site development all but impossible due to poor performance - there were 3rd-party extensions to VSS (I later used one called VSSConnect) developed purely to improve performance over the Internet, but even then the performance was barely acceptable.


ClearCase

In my next job I used ClearCase (originally developed by Rational before being bought by IBM). This is the sort of product you would expect from IBM - thorough, but confusing due to its plethora of features and options, and requiring a lot of work to maintain. Luckily, I got to work on a new project where I had the opportunity to try a new open-source version control system called Subversion (SVN).

SVN (SubVersion)

I set up SVN as an Apache module on one of the company's servers and was amazed at the performance. Using an Apache server allowed SVN to easily work over the Internet, since it used HTTP/WebDAV. (SVN also provides its own protocol and server, called svnserve, but the Apache option has advantages.)

The team for this project was split between Australia and Belgium but the two developers in Belgium got great performance (through VPN over the Internet) even though the server was in Sydney. Generally we spent about 10 minutes a day updating and committing changes.

This success with SVN encouraged me to use it for my own personal projects. I put my open-source HexEdit software into an SVN repository hosted on SourceForge.

SVN was the first version control system I actually enjoyed using. One reason was that there was a Windows shell extension called TSVN (Tortoise SVN) that allowed you to easily do all your version control tasks using Windows Explorer.
SVN was the first
version control system
I enjoyed using

Another favorite thing is that, even if you are disconnected from the repository (eg, if your Internet connection is lost), you can still compare your current changes with the repo. This is because SVN keeps a local copy of all files as they were when you last updated from the repository.


TFS

In my next job I found that I was again dealing with the horrible VSS. Luckily, the company decided they had had enough problems with VSS and moved to TFS (Team Foundation Server). Now TFS is much, much better than VSS but still inferior in many ways to SVN. TFS does provide "shelving", which is a good idea, but I have not found it all that useful in practice.
TFS does not
conform to the
Observer Pattern

TFS is more of a "centralized control" system than SVN. For example, it keeps track of all the files you have checked out into your WC (working copy) in its central database, whereas SVN only stores the actual files (the repo) in its central database and tracks things to do with the WC locally. To me the SVN approach makes more sense (conforming to the "Observer Design Pattern") and indeed many developers encounter problems when the local WC becomes inconsistent with TFS's idea of what it should contain.


Git

Finally, I came to try Git a few years ago, as I was intrigued by its branching model. This solved the one annoying thing I had found with SVN - the problem of merging changes between the trunk and a long-term branch. I like to merge often (as Agile and CI say you should) but SVN forced you to manually keep track of which versions you had already merged between branches. Git automatically tracks your merges so you can't forget a merge or merge the same thing twice.
Git makes it easy
to branch and merge

There is a lot to like about Git but, in all honesty, I do not find it as enjoyable to use as SVN. First, there is a plethora of confusing commands and options. For example, I never found the ability to "stage" a commit before actually committing all that useful - it just adds another layer of complexity.

But the worst thing about Git is that it is all command-line driven. I always find it much easier to remember how to use GUI software than to remember obscure command names and options. Luckily, Atlassian provides a free GUI interface to Git called SourceTree.

One good thing about Git is that it has an excellent book, "Pro Git", that explains in detail how to use it. However, the book does get a little evangelical in its praise for Git at times. For example, it goes on about atomic commits (SVN has atomic commits), how fast it is to clone a repo (SVN checkout is faster) and its "killer feature" of lightweight branching (SVN has that too).

Then there is the fact that Git is distributed whereas SVN is centralized. Now people rave on and on about the advantages of distributed version control but I really don't see it.  Sure if you have an open-source project with one or more different "forks" then it's probably useful. Personally I prefer one central "master" copy of the source where changes are merged to as soon as possible. I think having multiple repositories floating around would lead to a merge nightmare and contravenes the idea behind CI.

Anyway, I don't want to go into too much depth on the "centralized vs distributed" debate here (I may later). So that's all for now. Bye.

Monday, 26 September 2016

Agile Version Control


A mistake often made when adopting Agile is insisting on certain Agile practices and outcomes without converting to the necessary tools and techniques (see the CASE STUDY below for an example). This is one major deficiency of Scrum - or, at least, of using Scrum by itself. Scrum does not mandate the development tools (and even some essential processes) that allow Agile to work. I have talked about this previously (eg the Summary of my November 2013 post).

A crucial practice in Agile is Continuous Integration (CI). CI is difficult, if not impossible, without certain tools and practices, such as automated builds (ye olde build box), Agile (JIT) design, etc. I will also mention Unit Tests here (again :) as without their safety net you cannot hope to make CI work. CI also depends on using a modern version control system, like Git, and using it in the right way. This is what I want to talk about.

A few years ago I was working on a project where management insisted on a move to Agile, with the aim of creating new software releases every few weeks instead of every few months as was previously done (ie, about 4 to 6 times more frequently). However, no new tools or development infrastructure were introduced to facilitate this. Moreover, essentially the same procedures were used. The development procedures alone were onerous, but not as bad as the testing and release procedures (of which I had little understanding and will make no comment).

For an unlucky developer there was a tedious and error-prone procedure for every new release. It was bearable when done a few times per year but less bearable when it had to be done more often. This was a typical Waterfall development approach where the project was branched for the new release so that bug fixes could be made on the branch without affecting ongoing development. (I will explain this sort of approach in detail below.)

The major steps were essentially
• Branch the project in VSS, then delete some of the unneeded branched files
• Branch and move some global headers shared between projects
• Manually modify project files to handle VSS problems and change global header locations

This whole process usually took one developer at least a day if everything went well. This is not an exaggeration, though the whole process was exacerbated by the use of VSS and a large manual process that should have been automated.

I will get to the point of this post in a moment but first I give a brief overview of how version control relates to the development process and how it was used before Agile came along.

NOTE: If you are familiar with version control concepts then you can skip to the Continuous Integration section below.

Version Control

All version control systems allow you to keep track of the changes made to source files. One advantage of this is that you can see how the software has evolved over time. This can provide a deeper understanding of code than can be obtained by just looking at the current state. Being able to compare source files from different times is invaluable when investigating why a change was made, how bugs were introduced, etc.

Moreover, you can get a snapshot from any point in time. For example, in the diagram below you could use the version control system to "checkout" the source as it was at the time of Release 1.0. You can then build that specific version if you need to investigate its behavior.

Diagram 1. Basic Version Control

Each box in the diagram represents a check-in of one or more files. Of course, this is a simplified diagram - real projects have many more check-ins (hundreds or even thousands).
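With a modern tool like Git, for example, getting the Release 1.0 snapshot is a one-liner (the tag name here is hypothetical; in SVN you would check out a tag URL or a specific revision instead):

```shell
# Check out the source tree exactly as it was tagged at Release 1.0
git checkout release-1.0
```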

Another essential facility of a version control system is branching. This allows variations to be developed from a common base version. Traditionally, branching has two uses:
  • release branching - a branch is created when a new version is released
  • feature branching - a branch for an experimental or long-term development

Release Branching

Release branching (sometimes called fix branching) is very common (if not ubiquitous) in pre-Agile development.  It allows released versions to be quickly fixed while not interfering with ongoing development. For example, consider a software project with two releases: versions 1.0 and 1.1, with ongoing development on version 2.0.
Version Control Jargon      

Repository (repo) = file historical storage
Checkin = add or update file(s) to the repo
Checkout = obtain a local copy of file(s)
  usually in order to update and checkin
Commit (v) = checkin
Commit (n) = files that were checked in
Merge = combine changes from 2 sources
Working Copy (WC) = local copy of files
HEAD = pointer into the repo for the WC,
  usually the most recent commit on the trunk
Branch = fork in version history
Trunk = ongoing development "branch"

Now imagine that a user has found a critical bug in version 1.0 (Bug 2 in the diagram below). You can't reproduce the bug in the latest version but you can reproduce it in 1.0 (and 1.1). Of course, you can't simply give the customer a copy of 2.0 as they have not paid for the new features and, in any case, it is not ready for release. You need to provide a fix for version 1.0.

You check out the code for 1.0 to view and debug it and quickly find the problem. Now you can check in your fix to the branch for version 1.0. You also port and check in the fix to the version 1.1 branch. (For completeness you also check why the bug no longer occurs in 2.0 - it may simply be hidden by other changes or obviated by some later development.)

Diagram 2. Release Branching
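In Git terms (the branch names here are hypothetical), porting such a fix across release branches looks roughly like this:

```shell
# Fix the bug on the 1.0 release branch
git checkout release-1.0
# ...edit the code...
git commit -am "Fix bug 2"

# Port the same fix to the 1.1 release branch
git checkout release-1.1
git cherry-pick release-1.0      # apply the tip commit of release-1.0
```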

Feature Branching

Feature branching is traditionally used for a development that needs to be separate from the main ongoing development. This may happen for various reasons:
  • the development is experimental and may not prove to be viable
  • the development is not certain to be needed (eg, for proposed legislation)
  • the development is for a large feature that overlaps with other release(s)
Diagram 3. Feature Branching

These branches are always intended to be merged back into the trunk, but it can happen that the branched code is not required and so is discarded, eg if the experimental development is found not to be viable.

I have been involved with a few feature branch developments and they are notoriously tedious and troublesome. The first problem to avoid is that by the time the feature branch is merged back into the "trunk" there are so many incompatibilities caused by the divergent code that it can be difficult or even impossible to merge the differences. In that case a great deal of work is required to integrate the changes, and often this involves workarounds and kludges that corrupt the integrity of the software design. It's not uncommon for the feature to have to be completely rewritten to be compatible with the current state of the project.
“feature branches
can be difficult
or impossible
to merge”

Because of the above problem developers have learnt to "merge early and often". That is, changes on the trunk should be regularly merged into the feature branch to avoid divergence. Of course, this is a tedious and time-consuming process that tends to get skipped due to more urgent tasks. It often also requires discussions between members of the feature and maintenance teams to understand what the code does and how best to merge the differences.

Diagram 4. Merging Trunk Changes

Diagram 5. The completed feature is merged into the trunk

Continuous Integration

These sorts of problems of merging and integrating code (as well as other problems) led to the practice of continuous integration (CI), which is core to the Agile approach to software development. But even without Agile, CI avoids integration headaches, improves common understanding and communication in the team, and generally results in a better design and fewer bugs. It is an example of DIRE, since you are not isolating the new features from the rest of the code as it evolves.

Agile Approach

CI enables the Agile approach of delivering small improvements that slowly but surely move the development towards the target. The target, of course, is the PO's (Product Owner's) understanding of what is needed, which may itself be moving.

Each atomic development task, called a User Story, needs to be small enough to be completed in a few days (and certainly within the current sprint). If the task is larger than that, then it needs to be split up.
What is a User Story?   

User Stories are used in Agile as a replacement for "specs". A User Story is a simple statement about a change or enhancement to the software. This is often written on a small card in the format:

As <A> I want <B> so I can <C>  where:

<A> = the person/group requiring the enhancement -
  often a software user, but can be anyone
<B> = a simple description of the enhancement
  from the perspective of <A>

<C> = the purpose or benefit of the enhancement -
  this can be skipped but I highly recommend it

A User Story is almost all the written documentation you need to specify all changes to the software.  Of course, for a large feature you will have many User Stories grouped into an Epic.

The other written documentation you need is a handful of Acceptance Criteria written on the back of the related User Story card. These explain how you can check that a User Story is complete.


As an administrator I want to be able to change my password so I can ensure the security of the system

Acceptance Criteria:
1. old password must be entered first
2. new password must be entered twice to catch typos
3. new password must be different to old password

The common argument against this approach is that it is inefficient - it's better to understand the problem, come up with a solution and implement it all in a controlled manner. In theory this sounds like a good argument; in practice it doesn't work (see my May 2014 post on Agile Design for more on the evils of BDUF). If BDUF ever worked as it's supposed to (which it very rarely - if ever - does) it would be more efficient. But even then the Agile approach is more reassuring to the PO/users/stakeholders; even in that worst case it still gives the perception of greater productivity, since everyone can see progress being made.

A stronger argument against the Agile approach is that there are some complex tasks that cannot be decomposed into simpler ones - that they cannot be tackled at all with an evolutionary approach. Again, this may be theoretically possible but I have never encountered such a situation in practice. Once you get the hang of it, it's easy to find a way to work towards a goal while keeping the software useable and useful at every point along the way (or at least at the end of every sprint).

The crucial point is that User Stories are designed such that at every stage the software can be used. At the end of every sprint the PO will have a working, bug-free piece of software that can be tested and even delivered to real users. To make this work you need a certain type of version control system.

So what sort of version control do you need for Agile?

In the end many things in Agile - short sprints, small User Stories, JIT Design, feature teams, and CI - work together and depend on a version control system that allows easy branching and (especially) merging. Having a clumsy or manual merging process is not an option as User Stories are continually being merged back into the trunk.

Conventionally version control systems treat the relationship between versions as a tree. If you look back at all the above version control diagrams (ignoring the dashed arrows) you will see that they are all tree diagrams. (I know, it's obvious that you need branches to form a tree.) Modern version control systems help you merge code between branches (the dashed arrows leading into the blue boxes) but you still need to manually keep track of where the merge comes from and which bits have been merged already.

This is where Git comes in.


In my opinion Git is the only version control system that should be used for Agile development. Git has one killer feature - a version can have two (or more) parents. Git can merge versions automatically, keeping track of what has been merged so that it never misses a change or merges the same change twice.

This means that the version "tree" becomes instead a "DAG" (directed acyclic graph), because each version can have two parents - not just one.
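The value of recording both parents can be sketched in a few lines. This is not how Git is actually implemented - it's a toy model, with made-up commit names, of the idea that once the history is a DAG, "which commits still need merging" is just a set difference over ancestors:

```python
# Toy model of a commit DAG: each commit maps to its list of parents.
# A merge commit (M below) records two parents, which is what lets a
# tool compute exactly what has and hasn't been merged.

def ancestors(commit, parents):
    """All commits reachable from `commit` via the parents map."""
    seen, stack = set(), [commit]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, []))
    return seen

# History: trunk A-B-C; branch A-X-Y; merge M with two parents, C and Y.
parents = {"B": ["A"], "C": ["B"], "X": ["A"], "Y": ["X"],
           "M": ["C", "Y"]}

# Before the merge, branch tip Y is missing trunk commits B and C:
missing = ancestors("C", parents) - ancestors("Y", parents)

# After merge commit M, nothing from the trunk is missing, so there is
# no risk of merging the same change twice:
missing_after = ancestors("C", parents) - ancestors("M", parents)
```

With a plain tree (one parent per version) this bookkeeping has to be done by hand, which is exactly the SVN pain described below.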

Before I discovered Git I used another fine version control system called SVN (short for Subversion), starting about 10 years ago. I found it a joy to use except for one thing - on occasion I would need a long-term branch, which was painful to keep updated with trunk developments. To avoid a nasty surprise when the branch had to be merged back into the trunk, I regularly merged trunk code into the branch (as in Diagram 5 above). However, to make sure that changes were not missed, or the same change merged more than once, I had to manually keep track of which versions from the trunk had been merged into the branch. This was tedious and error-prone - and something that Git does for you.

Agile Version Control

Agile version control using Git is simple. A developer branches the code to work on a User Story. Git makes it easy to merge the branch back into the trunk. A simple example is shown in the following diagram where all User Story branches are merged back into the trunk by the end of each sprint.

Diagram 6. Agile Version Control

However, generally you need control of what features are delivered to "production". This is often accomplished by having dual streams - an on-going "development" stream (or branch) and a separate "delivery" stream (trunk) allowing control over when features are delivered.

Diagram 7. Dual Streams

This is very different from traditional version control where branches are eventually discarded (after possibly having been merged back into the trunk) - instead you have two on-going streams. This approach is only possible with a version control system such as Git where a version (ie, a node in the diagrams) can have two parents - in the diagrams this is any node with two outgoing arrows.

For a large project with multiple teams I have even seen the suggestion of multiple on-going "development" branches (eg: see Version Control for Multiple Agile Teams). I have not tried this but I have reservations, because code merges between the teams would occur irregularly and might easily be forgotten (remember the rule: merge early and often). The two teams might create conflicting changes which are not discovered until the conflicting code is merged from the trunk into the other team's stream.

Diagram 8. Multiple Development Streams


Agile version control is very different to traditional version control. It is performed using many small feature branches which are being continually merged back into the trunk (or main development stream). This is necessary for the practice of Continuous Integration (CI) which is a core part of the Agile approach.

CI is an example of JIT (and hence DIRE) allowing problems to be found as soon as possible. It also supports other Agile practices such as short sprints and evolving the software using small, simple, user-centric User Stories. Use of CI depends on a version control system that allows easy branching and merging.

Most Agile teams also have two ongoing code streams (see Diagram 7) - the development "branch(es)" and the delivery "trunk". Again, this relies on a version control system that supports easy merging.

As far as I know Git is the only version control system currently available where a version node in the repository can have two parents. In other words Git allows you to automatically and safely merge code from different sources.

Although Git is not without its problems, I think using it is essential for Agile development to work smoothly. Next month I will discuss those problems and the day-to-day use of different version control systems (including Git).

Tuesday, 1 September 2015

Written vs Verbal


Communication of technical information between software developers is crucial. The debate about the advantages and disadvantages of written vs verbal communication has been going on for at least half a century, but even half that time ago, thorough written documentation was generally considered essential. For example, the SEI (Software Engineering Institute) released the CMM (Capability Maturity Model) around 1990 which heavily emphasized documentation of the product (as well as the procedures used to create the product).

Quality Standards        

I did a post-graduate diploma in SQA at UTS in 1993. Much of the software quality stuff was really useful (and found its way into Agile).

However, I found CMM and quality standards (like ISO 9001, etc) not that useful if not downright burdensome. They seem to me like a money-making scheme for certification organizations and consultants.
The debate usually centers around requirement specifications or specs. I have talked about the problems of writing detailed up-front specs before so I won't go into that again (see Agile Design). [However, in brief: specs are incorrect, incomplete, inconsistent, out of date and just generally difficult to understand and use - this is almost inevitable, no matter how hard you try to get them right and maintain them.]

Agile reopened this debate. One of the four values of the Agile Manifesto is Working software over comprehensive documentation. This is taken by many people to mean that Agile has no place for written documentation. This is not true - Agile does not try to do away with documentation where it is demonstrably useful to the developers and cannot be replaced with more effective alternatives. (There are often better alternatives, usually in the form of code, of which my favorite is Unit Tests.)

Written vs Verbal

We will start with a recap of what I believe is common knowledge.

Written documentation is good for well-understood information that needs to be disseminated to different people, and/or at different times. Moreover, the readers (and the writers) can go at their own pace. Another advantage is that the author(s) have the opportunity to fix and refine the document and others can validate it.

Verbal communication is good where the actual subject matter is less well-defined or understood, and/or there is a need for interaction between the participants. A commonly cited advantage is that there is often less misunderstanding due to secondary information being conveyed beyond the actual words spoken* - intonation, body language etc.

* The problem of misunderstanding written words has resulted in many problems due to hastily written emails. Often something written as a joke is taken seriously. This is the reason that emoticons (like ":)") were invented, and why you should carefully read emails, and think about how they may be interpreted, before sending.

On the other hand, studies show that verbal communication is much less likely to be remembered. Further, many work environments have other effects that can interfere with verbal communication such as different accents, background noise, and simply the emotions and distractions of interpersonal communication.

Apart from these well-known attributes, here are a few more points from my own observations.
  1. In my experience some people are good at learning verbally, but many are not. The best approach depends on the team and should allow for personal preferences and abilities. If you favor one approach or the other you may drive away good people.
  2. In a top-down, autocratic environment people are scared to make a mistake. Verbal communication does not work well since someone will not ask for clarification for fear of appearing stupid. They may also not want to repeatedly ask for clarification for fear of embarrassing the explainer.
  3. In the software industry (at least in my experience) there are professionals from many different backgrounds. Verbal communication can be problematic when people of different NESBs (non-English speaking backgrounds) attempt to communicate. Written communication can often avoid this problem.
  4. Written documentation is often used more for self-protection than in any valid attempt to communicate information. (See the case study below.)
  5. Finally, and possibly most importantly, the documentation produced by a team is typically written by one (or perhaps two or three) people. Other team members have no ownership, and are disinclined to modify or even use the documentation.

Document Types

We talked briefly about "specs" above but we should clarify the different types of documents typically required for a software project. Traditionally, there are three main documents. (There may also be ancillary documents, such as project plans, test plans, etc.) I will later describe how these documents are used (or avoided) in an Agile environment.
  1. User Manual - how to use the software
  2. Specifications - what the software does
  3. Design Document - how it does it
There are variations and combinations of the above but these are the three essentials. Note that I won't talk about the User Manual (as this discussion is limited to technical documentation) except to say that one variation is to use the User Manual in place of the specs - that is, first write a detailed User Manual and use that to specify what the software is to do. (In my opinion, this is just as bad an idea as writing detailed up-front specs - I just mention it for completeness.)

Technical Documents

In CMM jargon:
specs == "Software Specifications"
design == "Technical Specification"
The "technical" documents are the specs and the design. You may know them by other names. For example, the specs are sometimes called the functional specification, customer requirements, etc. What I call the design document may be called the internal design, software architecture, technical design document, etc. Traditionally an analyst writes the former and a designer creates the latter - but these are often done by the same person (analyst/designer).

Sometimes these two technical documents are also combined, perhaps intentionally, but more often because the analyst/designer has already formed an opinion of how the software will work internally. The specs become full of implementation details, which can needlessly restrict design choices.

Additional problems arise because technical documents try to serve different purposes. Ostensibly they are to tell the developers what to do, but in practice their main audience is the client. First, they must be comprehensive and sound authoritative to give the client confidence that the project is on the right track. Another purpose is to protect their writers. (You can tell this sort of document by the large number of exclusions, restrictions, provisos, assumptions, and client responsibilities - for the writer(s) to later point to when the client is unhappy - and by the fact that it has to be "signed off" by a large list of stakeholders.)


I was attempting to wean the team off detailed written specifications. It was assumed that the client required detailed specs but we found the client was quite happy to work closely with us to create user stories each accompanied by a few acceptance criteria. This was a revelation and even a relief to most of the developers but I still had one experienced developer who was vehemently opposed. After some discussion I discovered why. Here is his story...

My colleague had worked for almost two years on a large project. Apparently, this was not a typical waterfall project, but the contract did require a large, detailed specification to be signed off by all relevant stakeholders. When the product was finally delivered the client was unhappy. There was a large gap in the design, and they argued that it was a major oversight that should have been avoided, or at least discovered much earlier. My colleague was the scapegoat for the problem, being responsible for much of the analysis and design. After scrutinizing the design document he was saved by one small sentence which implied that the "gap" was explicitly not covered. Since the client had signed off on the document they were clearly the ones at fault. One small sentence in the specifications (not even written by him) saved my colleague his job!

Design Document

In previous posts I talked extensively about the problems with big up-front specs (see the section called Problems of BDUF in Agile Design). A different problem is that the design document is typically underdone. (This happens even, or especially, in an Agile environment as I discuss later.)

Most of the projects I have worked on in the last three decades had absolutely no design documents. Some possible reasons were:
  • it simply didn't occur to anyone to document the design 
  • the specs already included much of the design 
  • the developers created the design on the fly and never got around to documenting it 
  • the developers simply found it too hard to put technical details into words 
  • there was nobody willing and/or able to write the document 
  • the developers thought the implemented design was "self-describing" 
  • there was unwillingness to document something that was likely to change 
  • there was unwillingness to write something that would be criticized or simply ignored 
  • management didn't ask for it 
I have had the good fortune to work with some excellent teams that did document the system architecture. However, even then the document was generally ignored or under-utilized. Reasons for this sort of problem might be that developers:
  • don't know the document exists 
  • can't understand the document 
  • have no incentive to understand it 
  • think it is incorrect (even when it isn't) 
  • believe it is out of date, especially as the original author(s) have left 
  • feel no ownership and so will not update or even read it 


Many Agile proponents take the approach that technical documents should be just sufficient and no more -- I call this MVD (minimum viable documentation). The problem with MVD is that, unless there are contractual requirements, you can get away with providing no design document whatsoever. MVD is short-sighted as it creates huge maintenance problems down the track, especially once the original developers have left.

So how should you approach documents in Agile? I will look at it from three different angles: who, when, and what. The nice thing is that all ways of looking at the problem lead to similar conclusions.


First you need a clear idea of who a document is written for. Technical documents (especially those that need to be "signed off") have more than one audience, which muddies their purpose. A simple example is that specs often contain a lot of detail (superfluous for developers) intended to ensure that testers consider all scenarios.

However, the main problem with specs is that they are primarily intended to make the customer happy - the specs must be detailed to give the client confidence that the developers know what they are doing.

“for the developers,
by the developers”
Just as important, though, is who the document is written by. A document is not fully utilized unless its users feel they have ownership of it. (Ownership also makes it much more likely the document will be kept up to date.) For the team to have ownership, the team needs to write the document - not just one or two people, but all the developers need to contribute.

In summary, you need to be clear on the document's audience and purpose. Technical documents need to be written for the developers, by the developers.


The problem can alternatively be seen as one of timing. As we saw above, the purpose of many documents is mainly to demonstrate to the customer that the team knows what they (the customer) want and knows how to give it to them. Hence the document is detailed, but all this detail is provided much too early, which locks the project into a design - invariably not the best one.

In large waterfall projects mountains of documents were often written in the analysis and design phases even before the programmers had joined the project!

On the other hand, real-world projects before Agile were typically not done anything like that. Developers had to (and sometimes even wanted to) produce documentation, but to avoid continually modifying it to match the changing design (or having it fall out of sync), most teams left writing design documents till the end of the project. Unfortunately this is the time when the team has little time and motivation to write them, or those who started writing them have left, etc.

The Agile approach is one of JIT-design. Design is not done until needed. This is the Goldilocks time to update the documentation - not too early and not too late but just in time.

I will note here that this is another classic case of DIRE. The developers should be documenting the design as they build it. Of course, this does not mean updating the document whenever any aspect of the design changes. The code itself is the "true" documentation, but looking at the code is not useful for getting a high-level overview. As I have mentioned many times before Unit Tests are very useful for documenting the behavior of modules and how interfaces are meant to work (see the section on Documentation in Unit Tests). However, at the highest level it is important to update design documentation that reflects major changes, particularly if they are core and not likely to change in the future.


So what does Agile documentation contain?

It should always record the current state of the system, not some past state or a future proposal. As I mentioned above, it is owned by the team and updated with the code. The documentation is focused on the customer, as it reflects system changes that are driven by the backlog.

What does it not contain?

You can tell a document is not "Agile" when it is used to tell people what to do. For example, the traditional specification is written by the analysts/designer to tell the programmers what they are to do. Another clue is that it contains lots of disclaimers and provisos and its main purpose is to gain the client's approval.

The author of this type of document is focused on the document itself (and how it is perceived) rather than the success of the product. In other words, it has to be nicely formatted according to company standards, appear complete and authoritative and signed off by all stakeholders.


As a bonus I will give an example of how a design document may be created for an Agile project.
  1. Initially, the developers gather a few user stories and create a simple first version of the software. There is no design documentation as it is easy enough to understand it by inspecting the code. The team have yet to decouple any parts of the system since it is not yet clear why or how to do so.
  2. As the system grows it becomes obvious how to (but more importantly necessary to) divide the system into modules. How these modules work is "documented" using a comprehensive set of Unit Tests, not written documentation. However, the team needs to create a brief design document that explains the overall design, in particular why it was necessary to decouple certain areas.
  3. As the system grows many more modules will be needed. New modules will be added or existing ones split as the need arises. The design document is updated by the team as the actual code is updated. This sort of documentation is invaluable for new people joining the team and even for existing team members as the design becomes more complex.

Finally, I should explain the vital role of code in the documentation debate. As the saying goes, the best information comes "straight from the horse's mouth". In software development the actual operational software is the "horse's mouth". To extend the, admittedly poor, analogy, Unit Tests might then be the "jockey"; Unit Tests are also code, which interfaces directly with the "horse".

More traditional documentation gets further from the horse's mouth. In fact most documentation that I have encountered is akin to an ill-informed tip overheard at the local pub.

Getting back to the point: you often don't need written documentation since the actual behavior of the software is a better form of "documentation". Further, Unit Tests demonstrate how modules can be used, are often easier to understand (and can even be stepped through in the debugger), are typically more comprehensive (especially with regard to error-handling) and are never out of date (if run regularly).
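As a small illustration of Unit Tests as documentation, consider the sketch below. The function `parse_version` is a hypothetical helper invented for this example; the point is that reading its tests tells you the accepted input formats and the error-handling behavior more precisely than most prose descriptions would - and, unlike prose, the tests fail loudly if the behavior ever changes.

```python
# Hypothetical helper plus the unit tests that "document" it.
def parse_version(text):
    """Parse 'major.minor' into a (major, minor) tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 2:
        raise ValueError("expected 'major.minor', got %r" % text)
    try:
        return int(parts[0]), int(parts[1])
    except ValueError:
        raise ValueError("non-numeric component in %r" % text)

def test_documents_normal_use():
    # These tests record the accepted formats:
    assert parse_version("2.11") == (2, 11)
    assert parse_version(" 1.0 ") == (1, 0)  # surrounding whitespace tolerated

def test_documents_error_handling():
    # And these record exactly which inputs are rejected, and how:
    for bad in ("2", "2.11.3", "a.b"):
        try:
            parse_version(bad)
            assert False, "expected ValueError for %r" % bad
        except ValueError:
            pass

test_documents_normal_use()
test_documents_error_handling()
```

A new team member can read (or step through) these few lines and know the module's contract without hunting for a document that may no longer match the code.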


An objection I often get: "I think the software is doing the wrong thing. How can I check if there is no written document describing what it should do?" My reply is: "What do you think it should do? And more importantly, what does the customer think it should do?"

However, sometimes the customer doesn't care and the behavior is determined by external reasons, such as technical limitations, government regulations, etc. Sometimes UAT (user acceptance test) scripts can document the correct behavior - but UAT scripts have a habit of quickly becoming out of date.

In many cases this is a valid objection and a good example of why documents are still required.


In the past, waterfall methodologies, quality standards (particularly CMM), and simply the influence of how things are done in other "engineering disciplines" have meant that far too much emphasis was placed on documentation in the software development process. Agile extremists have since swung the other way, to MVD or even no documentation at all. The right balance is somewhere in between.

That said, it's better to err on the side of too little. But I still believe that for a large project some documentation is needed to explain or remember important details that cannot be easily gleaned from the code.

Some simple rules to remember are:
  • avoid documentation if you can
  • there are often better alternatives (like Unit Tests)
  • technical documentation is written for the developers only
  • practice continuous documentation - ie, update the doc at the same time as the code
  • ownership of documents by the team is just as important as content

Saturday, 18 April 2015

Team Collaboration

Collaboration is at the core of the problems experienced when implementing Scrum (or any team work). As I mentioned last week, consideration of personalities is important when composing a team. This is a big and important enough subject that I decided to devote a whole post to it.

Developers, generally, are simply not good at working in a team. In the past, good programmers could get away with working fairly independently in their own specialized area. Nowadays, larger projects, more varied project types and different development methodologies mean that good team-work is expected of everyone. It is a simple demonstrable fact that collaborative (Agile) teams invariably produce a better outcome, more efficiently and with fewer risks.

“collaborative teams
invariably produce
a better outcome...”
You can try to hire people with the right personality, but it's not necessary (and sometimes you simply don't have that choice). Luckily, using Scrum and some tips that I mention below, most developers can learn to work as part of a team, and even enjoy working with others for a worthwhile goal.

Why Teams Don't Collaborate

Poor collaboration is most obviously seen in conflict over technical issues. In three decades in the industry I have seen this many times. When you get to the heart of the matter the problem is rarely purely a technical issue but also due to personality conflict, poor communication or interpersonal skills (or even emotional disturbance).

Differences of opinion on technical topics such as software design, coding standards, etc are inevitable. Often this is simply a sign that different people have different priorities - eg, different team members may focus on different aspects of the quality of the code such as efficiency vs maintainability. (See Importance of Developer Quality Attributes for more on software quality attributes and their relative importance.) When the team is collaborating, and pulling in the same direction these issues can disappear, or become much easier to resolve, since team members are more amenable to compromise.

Managers often realize there is more behind the disagreement, such as a clash of egos. But they fail, or are reluctant, to dig deep enough to find that some of the team may feel disempowered, threatened or generally dissatisfied. Again, a Scrum team overcomes these sorts of problems by empowering and encouraging communication and collaboration.


Typical Developer DISC Profile
To get an idea of why collaboration does not happen let's look at the personality of a typical developer. Last month I mentioned the DISC system of personality assessment (see Team Composition). Developers tend to fall in the S-C-D range and are predominantly of the C personality type (see diagram at right). This means they are conscientious and competent, striving for accuracy and quality. However, they prefer to work independently with little social interaction.

It seems that the software industry tends to attract people who are averse to collaboration. (In fact DISC assessors recommend software development as an appropriate career for those with a C personality type!) This is the major problem when attempting to get a Scrum team to collaborate and self-organize.

So do we give all new recruits a personality test and only hire developers with the right personality type? No -- that's not a good idea. It would greatly reduce the pool of potential candidates, some of whom may be otherwise ideally suited to the role. The fact is almost all developers (even the extreme case of someone with Asperger's) can learn to collaborate, and even enjoy working in a small self-organizing team.
My Internal DISC Profile

I believe this non-collaborative behavior is primarily a conditioned or learnt response, as I explain later. This is clearly shown with DISC personality assessments of a typical developer. DISC assessments give two personality profiles - called internal and external. An internal profile (see diagram at right) reveals a person's natural tendencies, whereas the external profile (see diagram below) shows actual behavior in the work environment. In disempowering environments developers typically suppress their collaborative tendencies.

My External DISC Profile
For example, last year I had a DISC assessment which showed that for my Natural/Internal personality my S and C styles were about average. However, for my Adapted/External personality my S style had disappeared. (The S personality type is supportive and likes working closely with other people.)

What was it that suppressed my natural collaborative tendencies? I think there are a number of factors which I cover in the next few sections. Further, I also believe that proclivity to working in a small team is a part of human nature as I explain later in the section below entitled Homo Habilis.


Training in the industry tends to reinforce the notion that software development (at least design and coding) is a solitary activity. For example, of all the courses I did at uni, just one of the dozens of modules I took involved working as part of a team - my 3rd year course on "Software Engineering". Things have undoubtedly improved since 1983 but I still believe there is a problem.


Even more significant is non-collaborative behavior that is reinforced by many organizations' cultures or managerial practices. Many managers, by their attitude and behavior, promote rivalry and even conflict between team members. This is symptomatic of a command-and-control management style as I discussed previously (see Production Line Mentality).

Managers often realize that what they do and say has detrimental effects and try to change their behavior. However, this can be difficult when motives are questioned. For example, common advice is "Praise publicly, criticize privately" - but this can backfire. Of course, you should never criticize publicly - it is demoralizing and humiliating - but (too much) public praise may not be seen as genuine and may trigger envy in other team members. A sudden change in behavior will also be viewed with suspicion unless there is evidence of a change of attitude. Above all never say one thing and do another.

Praise privately   

It's commonly said to "criticize privately but praise publicly". My own policy is to give all feedback to employees privately - both criticism and praise.

Of course (unless you are very good at deception :) it should also be honest, otherwise it is simply seen as manipulative.

Admittedly, it is nice to ensure it is known publicly when someone has done a good job.
In addition, many companies have policies that work against team work. For example, such things as Annual Performance Reviews and Employee of the Month awards usually have little effect apart from causing resentment. But when they do have an effect it is one that promotes competition and rivalry which actually undermines collaboration.

Why Teams Should Collaborate

We have seen how personality, training and years of conditioning make it difficult for developers to work together - so why don't we give up on the idea of a highly collaborative, self-organizing team?

Don't give up! If we can overcome the above problems then there are big advantages. I have been expounding these advantages (and the disadvantages of a command-and-control, non-collaborative approach) in all my posts since last December (except for Lean is Not Agile).

In summary they are:
  • more creative ideas are generated
  • less coordination overhead
  • avoids many pointless activities
  • greater focus on giving the customer what they need
  • less specialization leading to greater productivity
  • specialists cannot hold the organization to ransom
  • greater adaptability to change
  • greater job satisfaction
  • greater productivity
In essence a good Scrum team will produce better results. Further, the results are often produced more quickly - or there is, at least, the perception of increased productivity because the team is focused on what is important to the customer - in the short term. In the long term there are definitely large productivity benefits.


In researching this post I found quite a few web pages arguing against small self-managing teams (eg: see Does XP/Scrum Violate the “Agile Manifesto”?). Some of them have a point, but some are simply incorrect. I distilled them into six basic points which I will now address.

1. Having many small tasks done by different people results in inconsistencies.

This will happen in any team, but in a Scrum team, members are actually more likely to work together to avoid these sorts of problems. For example, in almost all jobs I have had there have been coding standards "in use", but I have only ever seen them followed with any rigor in a Scrum team.

2. Group decisions are often not good decisions.

The team may be prone to "group think", but this is not due to Scrum. In fact, Scrum empowers individuals to make a contribution. The focus should be on finding the best solution, not going along with everyone else.

There will be occasions where there are differences of opinion on technical issues, but in my experience, when all team members are aligned, these are easy to resolve. Moreover, decisions are often not group decisions since generally Scrum teams have experts in different areas to whom they defer.

3. Decisions are slow because they are made by committee.

First, it is important to remember that a Scrum team is not a committee. Most committees are composed of members with very different agendas; whereas in a Scrum team all members should have the same focus.

Also, it may appear that a team is indecisive because they are using an important Agile practice of deferring decisions until necessary (see JIT). When the current backlog task dictates that a decision be made then a good Scrum team will not hesitate to decide (or give it to the expert in that area to decide).

4. There is no single person with the authority to enforce design consistency.

This is related to the previous points. The argument is that there is nobody in control who sees the big picture, which results in haphazard decisions and inconsistencies in the design. Common opinion is that it's better to have a single person with the vision to direct the project.

“better
than the single
visionary approach”
The truth is that the "single visionary" approach does not work well with a team of more than a few people. The people actually creating the software are continually twisting and changing the vision for many reasons, and the visionary is kept flat out just ensuring that her vision is implemented.

Scrum encourages a better approach where everyone sees, or at least has a grasp of, the big picture. It does not always work perfectly, but it works better than the single visionary approach.

5. The more you divide a task between people the greater the communication burden.

Again, this is simply a consequence of working in a team of more than a few people. Using Scrum and feature teams (an example of DIRE) actually reduces the communication burden.

6. There is no individual ownership and pride in work.

I talked about this last week (see the section on Code Ownership). You may believe, as I do, that you write perfect code, but with input from other team members it can be even better. Moreover, there are other disadvantages of ownership such as specialization, increased risk, etc.

Of course, taking pride in your work is a good thing. Knowing you have done a good job is immensely satisfying. What I didn't expect with Scrum is that I can be even more proud when my team does a good job.

Homo Habilis

One fundamental advantage that humans have is the ability to work in small teams. Humans are inherently social - more so than almost any other species. Much of this behavior (or the tendency towards it) is built into our DNA, due to our evolution over the last few hundred thousand years or more.

The formation of small teams was an important early step in human evolution, probably beginning more than two million years ago when Homo habilis began hunting in small self-organizing teams.

Many developers work for years, even decades, almost in isolation, which can be alienating. Many report having an immense feeling of satisfaction when they first work on a collaborative, high-performance Scrum Team. The problem is that not all teams are like that - in the next section we will try to fix that.

How to Encourage Collaboration

Collaboration is fundamental to Agile and Scrum in particular. Lack of a collaborative spirit is usually the reason that Scrum fails. So how do we fix it?

Scrum Practices

Scrum practices encourage collaboration. The visibility provided by the daily standup provides a feedback mechanism that keeps everyone on the same track (as well as providing motivation and peer pressure - see the discussion of the Ringelmann Effect in Scrum == Communism). The product owner, and particularly their product backlog, keeps everyone focused on the one thing - what the customer needs. Empowering the team gets them working together to come up with creative solutions.

XP Practices

Other Agile (not specifically Scrum) practices also help. Here are some examples from XP.

A common open work area for the team encourages communication. (See XP:Space)

A customer representative (eg, the PO in Scrum) works closely with the team. This not only allows the developers to obtain timely help, it also gives the customer invaluable insight into the development process, as I explained in Customer Management. (See XP:Customer)

Continuous integration means the developers are always working together to ensure that the system works and that code modifications are compatible. (See XP:CI)

One of the best signs of a good team is when programmers are happy to work on others' code and to have others modify their code. At the very least, all code should be reviewed by at least one other person. If people get defensive about "their" code, that is a problem. Pair programming is the ultimate aim in this regard, and has the added benefit of producing better code. (See XP:Pair)

Related to pair programming is the idea of "collective code ownership". Nobody owns any piece of the code (or anything at all, actually), so anyone is free to modify any part of it. I do concede that the original creator(s) of a piece of code may often be the only ones to appreciate all its nuances, which is why unit tests are essential to ensure that changes do not introduce bugs. (See XP:Ownership)
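To make that last point concrete, here is a minimal sketch (the function and its tests are hypothetical examples, not from any real codebase) of how a unit test encodes the original author's intent, so that anyone on the team can safely modify code they did not write:

```python
def normalize_name(raw):
    """Trim surrounding whitespace and collapse internal runs of spaces."""
    return " ".join(raw.split())

def test_normalize_name():
    # These cases capture the nuances the original author cared about.
    # A later editor who breaks one of them gets an immediate failure,
    # rather than a subtle bug discovered much later.
    assert normalize_name("  Ada   Lovelace ") == "Ada Lovelace"
    assert normalize_name("Grace") == "Grace"
    assert normalize_name("") == ""

test_normalize_name()
```

With tests like this in place, collective ownership stops being risky: the safety net belongs to the whole team, not to the one person who remembers why the code works.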

Enhancing Team Spirit

Unfortunately, some teams just never "click". Usually the problem is the "mind-set" of one or more team members, as I discuss below. First we look at building team spirit which can be the first step towards changing a mind-set.

Here are 12 suggestions:

  1. encourage open and honest communication
  2. empower the team to make decisions
  3. communicate about strategic planning/goals
  4. encourage team members to trust each other
  5. make the most of the talents of the team
  6. never punish or humiliate team members for mistakes
  7. avoid behavior (like public criticism/praise) which promotes rivalry
  8. never compare team members against each other
  9. reward the team not individual team members
  10. encourage the team to be polite to each other
  11. encourage the team to eat together
  12. encourage the team to see the customer's point of view

Changing Mind-Set

I guess the whole point of this post is that many developers have the wrong mind-set. They become defensive when their mistakes are pointed out. Alternatively, they are reluctant to give feedback when someone does a bad job, for fear of offending them or getting a bad reaction. And people forget to give positive feedback when someone does a good job.

Somehow you need to encourage the team to be open and honest with each other. A collaborative environment where people bounce ideas off each other will produce the best possible software.

Here are 12 tips for team members:

  1. trust team-mates to do their job
  2. be open to new or different ideas
  3. take responsibility for committed tasks
  4. always find a way to make a contribution
  5. be open to helping in areas outside your expertise
  6. look at things from the customer viewpoint
  7. don't be defensive when someone points out a mistake or a better way
  8. don't hide your mistakes
  9. don't be afraid to give negative feedback BUT
  10. don't be offensive in pointing out other people's mistakes
  11. be polite and respectful
  12. never put personal ambition ahead of the project

As I discussed a few years ago (see Why Scrum Fails) the root cause of the failure of Scrum is the culture of the organization. The team will not change their mind-set if the culture discourages it.

First, managers need to relinquish their control, so the team can effectively self-organize. Rewards should encourage collaboration - ie, team rewards, not individual bonuses. Encourage the team to identify and fix their own problems.

Most importantly, drive out fear. People won't take risks if they are fearful of the repercussions when things don't go to plan.


Scrum is centered around the team - a small, stable, self-organizing, self-managing, collaborative team. When the team works well together Scrum works very well. When the team does not collaborate Scrum does not work well.

Unfortunately, many developers are by nature not prone to collaborative behavior. Luckily this is not an inherent limitation but mainly a conditioned response, due to years of working in an oppressive environment.

The advantages of removing an untrusting, blaming culture and overcoming developers' defensiveness to build a collaborative team are large. As explained above there are many benefits, but in essence better results are produced, often more efficiently, and with greater satisfaction for both customers and developers.

Many of the practices of Scrum help to encourage collaboration, such as the visibility provided by the daily standup and the focus provided by the backlog. Feature teams, as discussed last week, are also important. Further, other Agile practices such as continuous integration, collective code ownership, customer focus, collocation, etc. are very useful.

The most important thing is to change the environment to one where the developers are empowered to do their best and rewards are team-based not individual.

Further, developers need to be open to the change and fight their defensiveness and preference for working alone. It may be surprising that working as part of a highly-functional small team can be deeply satisfying.