Title: Software Quality, Producing Practical, Consistent Software
Author: Ben-Menachem & Marliss
ISBN: 1 85032 326 7
Publisher: ITP
Pages: 326pp
Price: £29.95
Reviewer: Tom Parke
Subject: writing solid code
Appeared in: 10-4
This is a bad book. Where there should be clarity there is obfuscation; where there should be case histories and examples there are vague and unconvincing statistics; where there should be explanation there is rambling rhetoric.

It purports to provide a quality methodology. Actually it is a rambling, badly written commentary on the IEEE standard for Software Quality Assurance Plans (ANSI/IEEE Standard 730).

The authors fail to place the quality activity in context or to understand the software development process. They fail to discuss the relative costs and benefits of different quality activities. They fail to discuss the risks of a quality plan - the time and effort it takes, the increased cost of change, the danger of the quality tail wagging the development dog.

They fail to show any appreciation of software engineering. For instance they don't understand the difference between Software Requirements and Software Design. There is no discussion of requirements traceability (from requirements through specification to design and on to testing).

They provide examples of worksheets, forms and, in one case, a process model, yet they don't discuss them at all. For instance, they provide a Code Inspection Summary report that seems to require a huge amount of manual effort (e.g. counting the number of loops, the number of jumps out of loops, the maximum number of nested levels) without any discussion of how these figures might be used, of the fact that such numbers can be gathered by tools, or of whether they are of any actual value.
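
To show how mechanically such figures can be gathered, here is a minimal counting tool of my own devising, not anything from the book: crude token matching rather than real parsing, so treat it only as an indication of the kind of tool meant.

    #include <cctype>
    #include <fstream>
    #include <iostream>
    #include <string>

    // Very rough counts of the figures on the book's Code Inspection Summary:
    // loops, jumps out of loops and maximum nesting depth. Plain token
    // matching with no real parsing, so comments, strings and macros fool it.
    int main(int argc, char* argv[])
    {
        if (argc != 2) { std::cerr << "usage: metrics <source file>\n"; return 1; }
        std::ifstream in(argv[1]);
        if (!in) { std::cerr << "cannot open " << argv[1] << '\n'; return 1; }

        int loops = 0, jumps = 0, depth = 0, max_depth = 0;
        std::string word;
        char c;
        while (in.get(c))
        {
            if (std::isalpha(static_cast<unsigned char>(c)) || c == '_')
            {
                word += c;          // build up an identifier or keyword
                continue;
            }
            if (word == "for" || word == "while" || word == "do") ++loops;
            if (word == "break" || word == "goto")                ++jumps;
            word.clear();
            if (c == '{' && ++depth > max_depth) max_depth = depth;
            if (c == '}') --depth;
        }

        std::cout << "loops:                 " << loops     << '\n'
                  << "jumps out of loops:    " << jumps     << '\n'
                  << "maximum nesting depth: " << max_depth << '\n';
    }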

The text is pompous, full of important-sounding bluster that on closer inspection is vague or vapid. They love to use 'headline statistics' such as 'Fifty percent of software costs are directly attributable to error corrections' and 'only 10% of the errors are in the coding', while failing to cite sources for many of their figures, failing to analyse what they might actually mean and failing, in the end, to relate the figures to quality planning. They are in fact only there to 'prove that quality is important'. I wanted to shout back, 'we know that, now can we talk about it like grown-ups?'

The authors' irritating use of statistics can be shown by two, albeit minor, examples.

They extend Moore's Law ('processing power increases by 48% a year') to say that it applies from the time of Babbage. Moore's Law, which applies from the 1950s onwards, is startling enough without this silly, pointless and inaccurate extension. Babbage never actually built his Analytical Engine, and even if he had, there is no scale on which to compare it to computers in the 1950s. Finally, they are claiming, in effect, that there had been something like a 10 to the power 18 increase in processing power between Babbage and early computers. If we say early computers processed a thousand instructions per second, then this means they rate Babbage's machine at roughly one instruction every 30 million years.
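
The arithmetic is easy enough to check mechanically. The sketch below uses the 10 to the power 18 increase and the thousand instructions per second assumed above; the dates of 1840 and 1950 are my own round numbers, not the book's.

    #include <cmath>
    #include <iostream>

    // Back-of-the-envelope check of the paragraph above. The 10^18 increase
    // and the thousand instructions per second are the figures quoted there;
    // the 1840 and 1950 dates are rough choices of my own.
    int main()
    {
        const double increase  = 1e18;     // claimed growth, Babbage to early computers
        const double early_ips = 1000.0;   // assumed speed of an early computer
        const double seconds_per_year = 365.25 * 24 * 3600;

        const double babbage_ips = early_ips / increase;              // ~1e-15
        const double years_per_instruction =
            1.0 / (babbage_ips * seconds_per_year);
        std::cout << years_per_instruction << " years per instruction\n";
        // prints about 3.2e+07, i.e. roughly one instruction every 30 million years

        // And 48% a year from 1840 to 1950 does indeed compound to that order:
        std::cout << std::pow(1.48, 1950 - 1840) << " fold increase\n";   // ~5e18
    }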

Secondly, they contrast the growth in the rate of processing power with the growth in the expressiveness of computer languages at 11% a year and in programmer productivity at 4.8% (and to be fair they do cite a reference for these highly dubious numbers). They then imply that this disparity is some sort of problem.
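
To make the disparity concrete, here is the same compounding spelled out over a thirty-year span of my own choosing, purely for illustration; only the three annual rates are the book's.

    #include <cmath>
    #include <iostream>

    // The book's three annual growth rates compounded over an arbitrary
    // thirty-year span, to show the gap the authors present as a problem.
    int main()
    {
        const int years = 30;   // illustrative period, my choice
        const double rates[]  = { 0.48, 0.11, 0.048 };
        const char*  labels[] = { "processing power        ",
                                  "language expressiveness ",
                                  "programmer productivity " };

        for (int i = 0; i != 3; ++i)
            std::cout << labels[i] << "x" << std::pow(1.0 + rates[i], years) << '\n';

        // roughly x130000, x23 and x4 respectively - an enormous gap, but as
        // argued below, not one that obviously matters.
    }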

Clearly this is a nonsense - there is simply no relation between the speed of individual computers and the amount of software the world needs. If I buy a computer that is twice as fast as my old one I don't suddenly require twice as much software. In any case it is actually easier to write software that requires more processing power rather than less, and indeed new software does always seem to use all that increased CPU power. For instance I noticed the other day that Visual C++ 5.0 compiles on a 200MHz Pentium just as slowly as the old UNIX v6 C compiler on a PDP-11 used to do nearly 20 years ago. It would appear that the complexity of programming languages and their development environments also increases at 48% a year!