I'm writing this in the second week after the ACCU Spring conference, where I met many of the contributors whose articles have appeared in these pages. I enjoyed putting faces to names and the many conversations that ensued. I also enjoyed meeting the rest of you who attended and, judging from the feedback that has come my way, you enjoyed the conference too.
But was the purpose of the conference to have fun? (We did!) Or were we there to learn to make better software? (Did we?) Judging from the topics of conversation there, people went to learn about producing software, and the fun was a byproduct of an effective learning strategy.
In a talk by Allan Kelly (on learning) I learnt a name for a concept I've sometimes struggled to convey: a "community of practice". This, at least as I interpret it, is a collection of individuals who share experiences, ideas and techniques in order to develop their collective knowledge. What better description of the group of people that read each other's articles, attend each other's presentations and provide feedback in the pub (or on mailing lists) afterwards?
Does it help to have a name for this? To be able to say "the ACCU is a software development community of practice"? Yes, it does. There is power in names: we can ask, "do you know other software development communities of practice?" far more effectively than "do you know other organisations like the ACCU?"
And recognising that the ACCU is about mutual learning makes it a lot easier to explain the value of participating in its activities. Why go to the conference? To learn what others have discovered! Why write for the journals? To get feedback on what you have discovered! With a community one gets back what one puts into it.
One of the characteristics of group learning is the way that ideas build upon one another. Some years ago a discussion of optimising assignment operators led to Francis Glassborow writing an article, "The Problem of Self-Assignment" (Overload 19). An editorial comment about the exception-safety of one approach to writing assignment operators then led to articles by Kevlin Henney, "Self Assignment? No Problem!" (Overload 20 & 21). One could trace the development of this idea further: in "Here be Dragons" (Overload 36) I incorporated a generalisation of the principal idea into an exception-safety pattern language as the "Copy-Before-Release" idiom (and current discussions of "move semantics" are also related).
Over the course of a few articles the canonical form of the assignment operator was rewritten and the C++ community (or part of it) has adjusted its practice accordingly. But where did the idea come from? Francis was describing a method of writing an assignment operator that didn't require a branch instruction to deal with self-assignment - he wasn't writing about exception safety and, as far as I am aware, took no account of this aspect of the form he described. The editor's remark recognised the exception-safety aspect, but didn't seek to revise the "right way" to write assignment operators. It was left for Kevlin's articles to address the issues with earlier practice directly and make the case for change.
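For readers who didn't follow those articles, the shape of the idiom that emerged looks roughly like the sketch below: copy first, then swap, so that nothing is released until the work that can throw has succeeded. (The Buffer class here is invented purely for illustration - it isn't code taken from any of the articles cited.)

    #include <algorithm>  // std::swap
    #include <cstring>    // std::strlen, std::strcpy

    class Buffer {
    public:
        explicit Buffer(const char* text)
            : data(new char[std::strlen(text) + 1]) {
            std::strcpy(data, text);
        }

        Buffer(const Buffer& other)
            : data(new char[std::strlen(other.data) + 1]) {
            std::strcpy(data, other.data);
        }

        // Copy-before-release: the work that can throw (making the copy)
        // is done before any existing state is given up. Self-assignment
        // needs no special-case test, and if the copy throws *this is
        // left unchanged.
        Buffer& operator=(const Buffer& other) {
            Buffer temp(other);          // may throw; *this not yet touched
            std::swap(data, temp.data);  // cannot throw
            return *this;                // temp's destructor releases the old buffer
        }

        ~Buffer() { delete[] data; }

    private:
        char* data;
    };

Passing the argument by value (Buffer& operator=(Buffer other)) achieves the same effect and lets the compiler make the copy; either way the branch that tests for self-assignment disappears and the strong exception guarantee comes for free.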
Naturally, this is just one example of the way in which participation in a community of practice helps advance the frontiers of knowledge. The exchange of ideas within the community was what led to the recognition of the importance of this idiom - no one person deserves all the credit. As a result, the programs being written by members of this community became just a little bit more reliable and easier to maintain.
Do the ideas developed within the community help us deliver more valuable software? Or to be specific, how much difference does knowing how to write an exception-safe assignment operator make to the average program?
If one can judge by the quantity of code that, on successful projects, reaches production with exception-unsafe assignment operators, not a lot! On the other hand, not having exception-safe assignment presents a demonstrable risk of obscure, hard to reproduce problems. I've worked on several systems whose delivery schedules have been seriously disrupted by avoidable problems that required significant effort to diagnose.
One system I worked on lost a number of days when an intermittent problem was found that only occurred on Win2K (or maybe it was NT4 - one appeared to work reliably, the other failed at random intervals of up to a couple of hours). After the project missed a milestone I was asked to take over the investigation, as both the developer who identified the problem and the project lead had run out of ideas. I spent two days isolating an "undefined behaviour" problem that, once diagnosed, was clearly the result of a coding style that was not as robust as one might expect from ACCU members. (OK, in this example it wasn't actually an assignment operator - it was a "one definition rule" violation - but an exception-unsafe assignment could easily have the same effect on a project.)
There is an obvious cost to the business when developers spend time learning new techniques: not just the time spent learning, but also that of ensuring that "shiny new hammers" are kept in the toolbox and not in the hand. The savings are not always so obvious. In the above example I know how much the client paid for the time I spent diagnosing the problem, and roughly how much time I spent explaining to team members what had gone wrong (and how I'd figured out a problem that had baffled them). What I don't know is the opportunity cost of delivering the software late, or the value of the work that would otherwise have been produced by the developers who tried to solve it. Whatever the cost was, it would have paid for a lot of learning!
Managers (at least the good ones) do understand the concept of insurance, paying small premiums in terms of learning activities to avoid the risk of the occasional, but very expensive, search for obscure problems that completely invalidate the planned delivery. Indeed, many of the practices being promoted by the "Agile" development community (frequent deliverables, continuous integration, automated acceptance test suites, ...) can be viewed as ways to minimise the risk of the project drifting off course.
Of course, there is a balance to be found - a point at which the additional cost of further improving the developers, or development process, or tools would exceed the added value delivered. This is clearly not an easy assessment to make. This process of assessment is frequently hindered by developers failing to demonstrate that they understand the need to deliver business value - which makes it appear that they are simply "seeking new toys".
It is hard to relate improved design idioms or more effective tools to the value they bring to the project or organisation. And herein lies part of the difficulty in assessing the value of the ideas we exchange and develop in such forums as the conference. Many of them seem to merely reduce the size of that vast pool of infrequently manifesting bugs that seems to exist in most codebases. (The existence of this pool of bugs and the appropriate way to manage it was another discussion at the conference - is it worth putting any effort into diagnosing and correcting a problem that may only manifest once a century?) A very substantial proportion of the infrequently manifesting bugs needs to be removed before any benefit can be perceived - and, as we don't know the size of this pool, it is hard to know in advance what impact reducing it can have.
Demonstrating that one approach to developing software is better than another is hard: there are lots of variables, and people will interpret the evidence in accordance with their preconceptions. I once ran one of three teams producing parts of a project. My team put into practice all the best ideas I knew from my reading: daily builds, automated test suites, reviews of designs and code. Another team divided up the functionality and each team member coded their own bit. The third team's process revolved around a self-appointed expert programmer (who rewrote any code that wasn't "up to standard"). My team didn't suffer the time pressure seen on the other two (no need for overtime) and delivered software with an order of magnitude fewer defects discovered in integration testing than either of the others. I thought I'd done a good job and demonstrated the effectiveness of my approach. However, rather than my approach being adopted by the other teams, my "idle" developers were moved to the "more dedicated" teams that were working overtime to fix bugs - and were soon as dedicated as the rest.
I learnt from the experience (even if the organisation that was employing me didn't) that it is not enough to succeed: it is more important to be seen to succeed. While my team always had a clear idea of what functionality was completed (and was confident that it worked), there was very little visibility elsewhere. What was visible to everyone was that my team was going home on time, sitting around chatting in front of PCs and having no difficulty producing its deliverables. We were having fun while the other groups were making heroic efforts to deliver! I should have made sure that there was much more visibility of the progress we were making. It doesn't take much effort to present a list of the intended functionality on a whiteboard, pin-board or similar and to check off the items that have been verified as complete. A visible indicator that progress was being made would have done a lot to reassure other parts of the business that we knew what we were doing and to justify our apparently relaxed attitude. (There were other things to learn too: through inexperience in introducing ideas into a team or organisation, I kept too much control of the team's practices - the team members didn't feel that they "owned" them and, in consequence, didn't promote them in the teams they were moved to.)
"Having fun" is an indicator that things are going well. Any time you're not having fun writing software, or learning, (or anything else) then it is time to assess the reason for it. That isn't to say that there are not occasions where rote learning and repetitive practice are effective: most software developers I know would benefit from learning to touch-type, which involves a lot of repetition to (re)form habits. (And even in these cases, it helps to introduce a fun element.)
There is nothing wrong with having fun: just consider how to show that you're delivering value at the same time!
Overload Journal #67 - Jun 2005: Journal Editorial