Francis' Scribbles
Francis Glassborow
07 October 2004
Pete Goodliffe has written 27 columns on Professionalism in Programming, so presumably readers know what claims such as 'I am a professional' and 'I am professional' mean. But do you? Both those apparently complete statements leave much unsaid. What does the claim to being a professional mean? Let me be more precise; what does a claim to being a professional software developer mean?
One answer is that it is a statement that the speaker earns their living by developing software. It says nothing about competence nor about any ethical dimension. It also says nothing about any qualifications to earn a living in software development.
The claim to being professional in one's software development may seem the same but is a different claim. It is a claim concerned with competence and ethics.
We have to watch the choice of words very carefully. In some countries the claim to be a software engineer requires some form of certification. For example, Germany reserves the term 'engineer' for people who are certified as such. Some people have the mistaken belief that certification guarantees competence. I wish that were so, because then we would have no need to de-register or un-certify people for incompetence. The best that most certification does is to 'guarantee' that the individual has received appropriate training and satisfied the certification board that they knew what they were supposed to know and had acquired the skills that were deemed necessary.
I still hold valid certification as a teacher and as a sailing instructor. My certification as a teacher is unlimited and qualifies me to teach at any level. It was only my professional standards that prevented me from attempting to teach ages or subjects for which I lacked the appropriate skills and experience.
My NSSA certification as a tidal waters Sailing-master qualifies me to be responsible for groups of young people sailing both inland and on tidal waters. My qualification as an RYA Senior Instructor allows me to hold similar responsibility for groups of adults. However, it is too many years since I last sailed on tidal waters, and I would never consider taking responsibility for any group of people sailing until I had taken several refresher courses. We have to distinguish between what we are officially certified to do and what we know to be the current limits of our competence. Part of being professional lies in that latter quality.
Exactly what does certification imply? I think it is a way to absolve an employer from some of the responsibility if an employee is incompetent or does something that has bad consequences. It certainly is not some magic that makes the holder more skilful or knowledgeable.
Certification has no impact on an individual's competence to do a job though it does, often, have implications as regards employability. However there are, in my opinion, other far more important issues that distinguish professionalism.
Knowledge of one's limitations is essential. A willingness to continue to develop skills and knowledge is important. Some forms of certification require periodic re-endorsement based on either a demonstration of ongoing practical experience or on retraining. My life-saving certificate is an example; as I have neither applied the skills nor taught others those skills that qualification lapsed three years after the last time I had it re-endorsed. That does not mean that I am unable to act to save someone's life, but it does mean that I am not currently employable in jobs that require I be certified as a lifesaver.
Should we extend the requirement for regular re-endorsement of professional skills to all jobs that have safety implications?
Another feature of being 'professional' as opposed to being 'a professional' is respect for the skills and knowledge of other people. There were numerous occasions during my career as a teacher when I had unqualified (i.e. not qualified as teachers) people present lessons. These people had skills, knowledge and understanding that gave them something worthwhile to contribute to my pupils. I respected that and mostly these non-teachers also understood the limits of what they were allowed to do in the context of a classroom.
Since retiring as a teacher I have turned my hand to quite a few things. I believe that I have acted professionally throughout. I hold no qualification as a journalist, conference organiser, book reviewer etc. But in each case I have taken time to discover how such jobs should be done. I sometimes make mistakes. When I recognise them, I willingly, though not happily, put my hand up. To me, admitting mistakes is part of professional behaviour.
I recently had a member of ACCU tell me that my claim to be a programmer was meaningless because there was no qualification for doing that. The same person opined that only certified engineers should be involved in Standardising C (but chose not to add that requirement to Standardising C++).
I would have some sympathy for his view if the qualifications for certification as a software engineer had anything to do with language design as opposed to language use. I would have even more sympathy if said certification were limited to developing software in a language or languages in which the individual had proven competence. However, that latter requirement is left to the professionalism of the individual.
Like many other tasks, working on computer language standards requires professionalism. It actually requires far more skill and knowledge than can be contributed by any single individual. That means that those involved must be able to respect the skills and contributions of other participants. It also means that those involved must be willing to spend time both understanding the current standards and understanding the issues raised by others.
I know of several individuals in the UK who put my knowledge of C to shame but they are currently committed to other work. When a job has to be done we sometimes have to make do with the people who are willing to do it even if someone who is not available could do it better.
While the above is largely personal, I hope that it gives you food for thought. There is no harm (indeed probably much good) in making software development a job that requires appropriate certification. However we should not consider certification as proof that an individual is competent, nor should we require it unless it is relevant to the job.
'I am a certified engineer, therefore I am better than you' should be relegated to the same garbage heap where 'I am older, therefore I know better' (a view so often held by adults when dealing with children) belongs.
We all know that programs that contain undefined behaviour are abhorrent and no professional programmer would consciously write source code that included undefined behaviour unless they had verified that the actual behaviour on the specific platform was acceptable. So consider the following program:
#include <stdio.h>

int main() {
    int i = 0;
    puts("Please type in a number: ");
    scanf("%d", &i);
    printf("%d", i*i);
    return 0;
}
I have been lazy by using scanf() rather than a more robust mechanism. Just pretend that I have carefully written code that will extract an integer value from stdin. Given that, where is the undefined behaviour in the above program? How do we justify both C and C++ making that behaviour undefined? Should we do anything about it?
The problem is that signed integer overflow is undefined behaviour in both C and C++. All five of the main arithmetic operators (+, -, *, / and %) can result in integer overflow. The simplest one is the modulus operator, which can only cause overflow if the divisor is 0. We can easily check for that condition before using the operator.
The division operator is rather subtler because there are two conditions for overflow. The first is division by zero. The second is restricted to two's complement machines, where division of INT_MIN by -1 results in overflow. Unfortunately two's complement is the most common architecture for desktop machines.
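Here is a minimal sketch of such a pre-test (the function name and interface are my own invention, not anything from a standard library); the same test also guards the modulus operator:

#include <climits>

// Returns true and writes a/b to *result only when the division
// cannot overflow: it rejects a zero divisor and, on a two's
// complement machine, INT_MIN divided by -1.
bool safe_divide(int a, int b, int *result) {
    if(b == 0)
        return false;
    if(a == INT_MIN && b == -1)
        return false;
    *result = a / b;
    return true;
}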
Addition and subtraction can both overflow, but again there is a fairly simple pre-test. I leave it to the reader to write one.
Multiplication is the worst case because we have to pre-test by using a division. Let me assume that we start with two positive numbers a and b. Now compute INT_MAX/a. If the result is less than b then a*b definitely overflows. If the result is equal to b there is no overflow: a*b is then at most INT_MAX, equalling it exactly when the remainder of INT_MAX/a is 0. However, if one but not both of the values is negative we have to test against INT_MIN. If both are negative, we have to test the absolute value of one of the values against INT_MAX.
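To turn that prose into something concrete, here is a sketch of the complete pre-test (again the name and interface are mine, and I assume division truncates towards zero, as it does on every implementation I know of and as C99 requires):

#include <climits>

// Returns true and writes a*b to *result only when the
// multiplication cannot overflow.
bool safe_multiply(int a, int b, int *result) {
    if(a == 0 || b == 0) {
        *result = 0;
        return true;
    }
    if(a > 0 && b > 0) {
        if(INT_MAX / a < b)     // a*b > INT_MAX
            return false;
    }
    else if(a < 0 && b < 0) {
        // test the absolute value of one operand against INT_MAX
        if(a == INT_MIN || b == INT_MIN)
            return false;
        if(INT_MAX / -a < -b)
            return false;
    }
    else {
        // exactly one operand is negative: test against INT_MIN
        if(a < 0 ? INT_MIN / b > a : INT_MIN / a > b)
            return false;
    }
    *result = a * b;
    return true;
}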
We can reduce the number of tests if we are willing to accept some false negatives (i.e. rejected cases where the actual calculation does not overflow).
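For example, on a machine with 32-bit ints, any two operands whose magnitudes are below 46341 (just above the square root of INT_MAX) cannot overflow when multiplied, so a single cheap test can replace the division, at the price of rejecting perfectly safe cases such as 2 * 1000000:

// Conservative test, assuming 32-bit int: false negatives accepted.
bool obviously_safe_to_multiply(int a, int b) {
    return a > -46341 && a < 46341
        && b > -46341 && b < 46341;
}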
I have seen the argument for allowing overflow to be undefined because any program in which it happens is erroneous. The proponents of the status quo then add that undefined behaviour will actually not cause anything really bad to happen, such as reformatting your hard drive. Am I alone in finding that argument to be specious? We ask programmers to treat undefined behaviour as a serious issue and then tell them not to worry too much about one of the primary instances of it.
Writing beyond the end of an array is not only undefined behaviour but can result in genuinely bad things happening. I once reprogrammed the BIOS of an expensive graphics card by accidentally writing off the end of an array. In addition, buffer overflows are one of the major sources of exploits for malware.
[I wouldn't go that far. A large number of software exploits are down to poorly written, insecure code - network code is riddled with such problems. It is not always the case that buffer overflows are the problem. - Ed]
However before I go further along this line, let us see if there is any legitimate code that is vulnerable to this undefined behaviour and that cannot be eliminated by pre-testing. Consider this code snippet:
#include <stdio.h>
#include <time.h>

void work(void);

int main() {
    clock_t start, end;
    double elapsed;
    start = clock();
    work();
    end = clock();
    /* undefined behaviour lurks here if clock_t is a signed
       integer type and end - start overflows */
    elapsed = ((double)(end - start))/CLOCKS_PER_SEC;
    printf("Elapsed time = %f", elapsed);
    return 0;
}
Now the above program contains irremovable undefined behaviour if clock_t is a signed integer type. Neither C nor C++ places any constraint on the type of clock_t other than that it be an arithmetic type. The actual type does not even have to be documented (though you could look in the time.h header, if it has been provided as a file, which again is not required by either Standard).
Of course most student programs will not have a problem because most implementations will survive just over half an hour of CPU usage before overflow might occur in the return from clock().
However, suppose your application runs for much longer and you want to use clock() to 'time out' a process. Given defined behaviour for signed integer overflow you would have a chance to write code that handles the problem. Without it you have no hope, and clock() is useless to you if you want to write clean code devoid of undefined behaviour.
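The nearest I can get to a defensive version is to leave the integer domain before doing any arithmetic. The following sketch avoids the undefined subtraction, though it cannot give a right answer once clock() has wrapped:

#include <time.h>

// Convert each reading to double before subtracting so that no
// signed integer arithmetic is done on values we do not control.
// This avoids the undefined behaviour but not the wrap itself.
double elapsed_seconds(clock_t start, clock_t end) {
    return ((double)end - (double)start) / CLOCKS_PER_SEC;
}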
What concerns me is that a number of C and C++ heavyweight experts take the view that anything we have lived with for thirty years cannot be a problem. So, am I wrong to be concerned with this issue?
Should we be comfortable with undefined behaviour that will do nothing disastrous? Do we need another classification of behaviour that basically says that the worst that can happen is that the program aborts? Of course such behaviour is not acceptable in the control software for a nuclear power station, but it is acceptable for many other purposes. Yes, the program does not always do what the programmer intended, but neither does it try to reformat my hard drive.
Those who come from a functional programming background will be familiar with the concept of a pure function. For the rest: a pure function is one that has no side effects - the opposite of a procedure, which has only side effects and no return value.
In C++ a pure function would be a free function that does not access any globals, does not have any local statics in its definition and whose parameters are all passed by value (and are not pointers). Under such circumstances the return value is based solely on the function's arguments. A pure function can only call pure functions.
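For example, by that definition the following qualifies:

// A pure function: free, no globals, no local statics, parameters
// passed by value. Its result depends only on its arguments, so two
// calls with the same argument could safely be run in parallel or
// collapsed into a single call.
int triangle_number(int n) {
    int total = 0;
    for(int i = 1; i <= n; ++i)
        total += i;
    return total;
}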
Until recently the concept of pure functions has been interesting but of no great direct value to languages such as C++. However, hardware is moving on. Multiple-CPU machines are increasingly common. The latest hardware from Intel allows a single CPU to look like two. CPUs have multiple execution pipelines built into them. In pursuit of ever faster hardware the next logical step is to put array processors into our CPUs. Single Instruction Multiple Data (SIMD) parallelism can result in great performance improvements for certain types of processing (graphical processing is an example of such a specialist domain).
Given such hardware pure functions begin to become useful. Pure functions are obvious candidates for SIMD parallelism.
Is it time that C++ considered adding some function qualifiers so that we can identify functions as pure? Such information is statically enforceable. We would need to consider such details as including such qualifiers in the type of function pointers (the address of a pure function should be assignable to any suitable function pointer, but only the addresses of pure functions should be assignable to pointers to pure functions).
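Something of the kind already exists as a compiler extension. GCC, for instance, lets the programmer declare __attribute__((const)) (the result depends only on the arguments) or __attribute__((pure)) (the function may also read, but not write, global state). However, these are unchecked promises to the optimiser, not the statically enforced qualifiers I have in mind, and they play no part in the type system:

// GCC's non-standard spelling of the idea; not verified by the compiler.
int square(int n) __attribute__((const));

int square(int n) {
    return n * n;
}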
Please give some thought to this idea, write your thoughts down and email them to the editor or to me.
Here is what I invited you to comment on:
Have a look at the following tiny function. The problem is insidious: the same code is legal in Java and does exactly what you want, while in C++ it also compiles without error but does something quite different.
string to_string(int n) {
    if(n == 0) {
        return "NULL";
    }
    else {
        return "" + n;
    }
}
The problem arises because of the very different ways that operator + is overloaded in the two languages. In the case of Java, operator + creates a new string object that is the result of concatenating the left-hand operand with the conversion of the right-hand one to a string, using whatever method is provided by the right-hand side's type.
In the case of C++ there is no operator + overload that concatenates a string literal with an int. There is, however, an operator + that takes a char const * and an int: the built-in one that increments the pointer by the specified value. If the int is 0 the result is to leave the pointer unchanged (but in this function we trap that case and handle it differently). If n is 1 the result is a one-beyond-the-end pointer, which is OK until it gets used to initialise the return value, where the string constructor will dereference the pointer.
In all other cases the evaluated "" + n expression has undefined behaviour, even though nothing really bad happens on most systems. You just get garbage, as the program treats the bytes that start at the computed address as if they were part of a null-terminated array of char.
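For the record, here is one way to write what was presumably intended, using a string stream to do the conversion:

#include <sstream>
#include <string>
using std::string;

string to_string(int n) {
    if(n == 0) {
        return "NULL";
    }
    std::ostringstream out;
    out << n;       // formats n as decimal digits
    return out.str();
}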
Here is a minimalist version of main():
int main() { a * b; }
Given suitable precursors it will compile and execute. Can you provide suitable precursors so that the resulting program executes and outputs:
int main() { a * b; }
I will send the author of the solution that I like best (yes, entirely subjective) a copy of Exceptional C++ Style (and if I remember to get one autographed by Herb Sutter when I am at the WG21 meeting in Redmond it will be an autographed copy).
I had several clues offered. Ainsley Pereira offered 'Before he ate, he had to wait'. James Roberts offered both 'A new slant to infinity, and again?' (a good start, but I think it needs some polish) and 'Produce of the disciples working every hour of the day and night.' (I think 'product' works better). Louis Lavery came up with 'Told to double weight after initial loss? I'd say that's too gross!' Any one of those would have deserved to win, and each correctly identifies 288. I chose James', so if he contacts me to tell me where to send it he gets the copy of The Elements of C++ Style.
For next time, what are your clues for:
Oh for love in the sea! It only values the fifth bit.