
The Essence of Success

Overload Journal #82 - Dec 2007 - Journal Editorial - Author: Alan Griffiths

What makes a successful project?

Looking both ways

Last month you had a guest editor - Ric Parkin - one of the regular team, who stepped into the editorial role whilst I went off on holiday. I'm pleased that things went smoothly, and I enjoyed the rest. In fact I enjoyed the rest so much that I've arranged another one for the next issue: this time another team member - Roger Orr - is going to take over. I'm very confident that this too will go smoothly.

This seems very different from the situation this time last year when Ric, Roger and others first volunteered to help out with issue 76 and much closer to the 'golden age' story I told in that editorial:

At the moment it seems like it was a long time ago and in a galaxy far, far away that lots of material was submitted to Overload by enthusiastic authors, an enthusiastic team of volunteer advisors reviewed it all, the editor swiftly passed their comments back to the authors who produced a new, much improved draft of the article which then was seen to be good by all and put in the magazine for your reading pleasure.

I'd like to thank everyone who contributed to realising this vision: the Overload review team, the authors and my very understanding fiancée. Please keep up the good work!

What criteria identify success?

We often hear about the high rate of failure of software development projects. But are the right criteria being used to assess success? The usual measure quoted is 'on time, on budget, to specification' - and this ignores several important issues. Firstly, the specification is rarely nailed down adequately at the start of the project (and - far too often - not even at the end); secondly, the budget is rarely immovable either; and thirdly, except in rare cases (Y2K, statutory requirements, etc.) the timing isn't a business requirement.

There are good reasons why specifications are not generally fixed - most software projects are explorations of both the problem and solution domains. While the high level requirements are often discoverable up front, the way in which they will be met is only determined as the project progresses. The investigative nature of most projects also affects the budget and time scale.

Even though delivery dates, costs and functionality are written down at the start of projects (in many cases into contracts) it is very rare that they don't change. In the case of small changes no-one worries; in the case of large changes even contracts get renegotiated. But is a project that doesn't meet its starting scope, budget and delivery date really a failure if the customer gets what they need, when they need it and for a cost they can afford?

Well, there are other measures of success. One can look at whether the resources invested in a project could have been better deployed elsewhere: is it more profitable to spend time and money on better advertising or on better software? The value of the results of this work, relative to what was spent achieving them, is the 'return on investment' and is a key factor in deciding whether a project is worthwhile. Generally, a business will only undertake projects that are expected to exceed a 'minimum return on investment' - and, if a project exceeds this, then it can be counted a partial success. (Of course, if it exceeds the expected return on investment it can be counted a total success.)
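To make that concrete (the figures below are invented purely for illustration):

    ROI = (value delivered - cost) / cost

So a project costing £100,000 that delivers £130,000 of value has an ROI of 30%. If the business's minimum return is 20%, the project clears it and can be counted a partial success; if the expected return was 25%, it exceeds that too and can be counted a total success.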

We all like to contribute to the profitability of the company that we work for. But for most of us that isn't our sole goal in life. There is a perspective that says a successful project is one that the participants would like to repeat. It is quite easy to imagine projects that are highly profitable, but where the developers on the project turn around and say 'I never want to do anything like that again'. This happens when their personal costs (in time, goodwill and opportunity) are greater than the rewards (in money and recognition).

These are not the only definitions of success - I once worked on a project that no-one enjoyed, went on for far longer than planned (and therefore cost far more than expected and, in particular, more than the customer paid), failed to deliver a key customer requirement (which happened to be impossible) and was still a 'success'. I did see the spreadsheet produced by the project manager concerned to justify this classification. In my more cynical moments I feel it owed more to the bonus structure he was working to than to anything else (several significant costs were not shown as part of the project budget), but it appears that his definition of success was accepted.

Most projects fail on some measures of success and succeed on others - I've just participated in a retrospective of a project that went far over the original time and budget (though these were revised to reflect scope changes during the course of the project, and it came in at the revised values). It very narrowly met a few regulatory milestone deliveries. The developers were all stressed by the lack of time to do things well - and there is a big build-up of technical debt to be addressed. But a lot of value was delivered to the business, and there is an appreciation of the developers' achievement. I'd call it a successful project, but it would not qualify under the usual 'on time, on budget, to specification' criteria (or, for that matter, under everyone wanting to 'do it again').

The right sort of success

Indeed it is easy to demonstrate that not all software development projects can be measured by the same criteria - one only has to consider a typical free/open source project like the Mozilla web browser. Such a project cannot be judged by the traditional 'on time, on budget, to specification' criteria. Admittedly, the participants in this project could well be delivering benefits to their businesses (by implementing features or fixing bugs that allow them to provide value) and such contributions could be judged on this basis. However, closer examination shows this to be in error: it would be considerably easier, and would deliver the same value to a business, to create an enhanced or fixed version for internal use - so why undertake the additional cost of getting a submission approved? And how could one fit the contributions of Toronto's Seneca College into this model?

One reason for highlighting the different flavours of success is that everyone should know which one matters in the current circumstances. When different participants in a project are seeking different goals none of these goals are likely to be achieved.

One of the identifying features of Agile Methods is the focus on identifying the piece of work that will deliver the best value (to the business) for the development effort, and tackling that next. The nature of the 'value' is deliberately left unspecified and is assessed by the business concerned. Always allocating the work offering the best return is an easy way to communicate what is valuable: developers on these (and traditional) projects naturally focus on the piece of work they are undertaking at the time, and with this approach to planning that is the work giving the best return. In contrast, traditional planning methods treat the project as a whole as delivering the 'value', and at best classify features coarsely into 'essential' or 'desirable' - with the possibility that high-value items end up at risk because of schedule or resource overruns. (And when the project manager realises this, incomplete pieces of work will be abandoned in order to progress the high-value items - with all the pain and cost of context switching that this implies.)
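To make the contrast concrete, here is a minimal sketch of that style of planning - the item names, values and estimates are hypothetical, and taking 'best value for effort' to mean the highest ratio of expected value to estimated effort is just one plausible reading, not a prescribed algorithm:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A candidate piece of work: 'value' is whatever the business judges
// it to be worth; 'effort' is the estimated cost of doing it.
struct WorkItem {
    std::string name;
    double value;   // expected business value (arbitrary units)
    double effort;  // estimated effort (e.g. ideal days)
};

// Comparator: true if 'a' offers a better return on effort than 'b'.
bool betterReturn(const WorkItem& a, const WorkItem& b) {
    return a.value / a.effort > b.value / b.effort;
}

int main() {
    std::vector<WorkItem> backlog = {
        { "invoice export", 40.0, 10.0 },
        { "faster search",  90.0, 30.0 },
        { "audit trail",    25.0,  5.0 },
    };

    // Greedy prioritisation: the best value-for-effort item comes
    // first, and that is the piece of work to tackle next.
    std::sort(backlog.begin(), backlog.end(), betterReturn);

    for (const WorkItem& item : backlog)
        std::cout << item.name << " (value/effort = "
                  << item.value / item.effort << ")\n";
}

The code itself is trivial; the point is the communication. An explicitly ordered backlog makes the current priority visible to everyone, where 'essential'/'desirable' buckets do not.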

However a project is planned, it is a good idea to know which (if any) features are fixed and which may be adjusted in scope without risking the success of the project itself.

So what about a project like Overload? How do we measure success? Clearly, the return is self-improvement: improving our knowledge of software development (or, in the case of authors, our ability to communicate what we know about it). Currently it feels like a success. What do you think?

Better late than never

At the last ACCU conference I ran a workshop on 'Agile Tools for C++' - the idea being to use the expertise available at the conference to collate knowledge about the wide range of alternative tools. Why for C++? Well, as Russel Winder recently posted on accu-general:

Unlike most languages where there are one or two standard [unit-testing] frameworks and everyone just uses them, C++ seems to generate a plethora so no-one has any idea which one to use (unless you are a member of one of the tribes of course).

What Russel claims about unit testing frameworks applies to a lot of other things too: comms libraries, editors, build systems, etc. Because none of us has the time and energy to invest in trying the whole range of tools, I arranged the workshop and promised to write up the results. To my embarrassment, the notes produced by the workshop have been sitting in a file ever since the conference, awaiting that write-up.

Three things brought this to mind recently: firstly, the next conference is getting closer and with it my sense of guilt has been increasing; secondly, Allan Kelly has just written up his session from the same conference; and thirdly, at XPDay I encountered one of the participants in the workshop, who asked me what had happened to the write-up.

Anyway - I haven't forgotten, I just haven't got around to it yet. Hopefully, in time for the next issue. (And yes, not having to edit the next issue has a lot to do with getting the article finished at last.)

Season's comments

This is the Christmas issue of Overload but, unfortunately, there is no seasonal article this year. It is, however, a good occasion to take a moment to reflect on the last year and what has happened. The production of Overload has changed a lot for the better; I hope that reading Overload is providing benefits to you, the readers.

Merry Christmas to you all!
