Title: The Economics of Software Quality
Author: Capers Jones and Olivier Bonsignour
ISBN: 978-0132582209
Publisher: Addison-Wesley (2011)
Pages: 624pp
Price: £
Reviewer: Paul Floyd
Subject: Software Quality
Appeared in: 24-6

Reviewed: January 2013

Two authors get referenced a lot in the body of literature on software development when it comes to measuring software: Barry Boehm and Capers Jones. That's not a large number, I guess because not many organizations have the means to do that sort of research, and even fewer are willing to make it public. Whilst Boehm worked in the US defence industry, Capers Jones worked at IBM.

Starting on the positive side, this book covers all aspects of quality, and in depth. For metrics, it advocates the use of function points rather than lines of code. Probably the most interesting parts for me were the measurements of the risks of defects by development stage (this somewhat assumes a waterfall-y development process) and the comparison of the effectiveness of various methodologies and tools.

There is quite a lot of repetition in the book. It reminded me a bit of a politician being interviewed, repeatedly making a point even when the interviewer is trying to move on to something else. This does get the point across, particularly if you're picking the book up and using it to look up a reference. When I read it cover to cover, though, it did drag on a bit.

I wasn't too keen on the tables of subjective measurements given numerical values to 2 decimal places, like 'Risk of application causing environmental damages – 9.80'. I feel that this gives a false sense of precision and authority. There were a few clangers that undermine this authority. For instance, Perl is written Pearl, Mac is written Mack, and one that really set my teeth on edge (wearing my Electronics Engineer hat) was the claim that 'Electrical engineering uses K to mean a value of 1,028, but software engineering uses K for an even 1,000'. Yes, that is 1,028, not 1,024. They are both even, though you might say that 1,000 is more of a 'round' figure. And no, electrical engineers never use 1,024 for K for electrical things: 1 Kohm or 1 KV is always 1,000. When it comes to addressing memory, then the 2^10 version is often used. Ah, bee in bonnet. Another electronics oddity crops up in a table of languages that do not support static analysis, which includes Verilog (a hardware description language that has a great deal of static analysis tool support).

One of the points repeatedly made is that the cost of fixing a defect does not change depending on when it is discovered in the development cycle, contrary to the popularly held belief that the later a defect is found, the more it costs to fix. I can accept this if the defect is a coding defect. I don't agree if the defect is a design error, which in my experience takes much longer to fix than something like an out-by-one coding error. Another point that is strongly emphasized is the increase in risk (and in the number of defects) with the size of a project. I have no beef with that.

In summary, an interesting book that is ironically a bit let down by a few defects.