cout << 1 << endl;
cout << 2 << endl;
cout << 3 << endl;
This small code fragment writes the values 1, 2, and 3 to cout. You'd be hard-pressed to write this code more directly. It writes the value 1, then it writes the value 2, then it writes the value 3. Writing repeated code like this is unrewarding, understanding it is tedious, and modifying it is error-prone and painful. Of course, programmers almost never express repetition like this; they express the repetition more succinctly by adding a level of indirection. The hope is that by adding a little stylized software you can remove a lot of code elsewhere. Less code, more software. (Note, however, that in the right context loop unrolling is a useful optimization technique.)
for (int value = 1; value <= 3; ++value) { cout << value << endl; }
This is without question a more succinct way of expressing repetition. If you want to write out the values 1 to 42 simply change the 3 in the continuation condition into 42. However, there is a price to pay for this brevity - the purpose of the code is now expressed less directly. This is the price of indirection. It is totally clear what the purpose of the first code fragment is, to write 1, to write 2, and to write 3. It is not so directly clear what the purpose of the for statement is. To understand the for statement you have to master the extra complexity: understand for's semantics, mentally examine the initialization, continuation condition, and update parts, and make the correct logical deductions. Experienced programmers know that although the logical deductions required appear trivial and easy, they are in fact fraught with traps and pitfalls. Experienced programmers also know that avoiding mistakes is better than making them, then finding them, and then removing them. Debugging is slow. As a result, programmers tend to learn a few vital mental tools to avoid loop traps and pitfalls. The two most important tools are invariants (things that are always true) and intention (programming on purpose).
Here is the previous for statement unrolled into the sequence of statements it performs:
initialization:          int value = 1;
continuation-condition:  (value <= 3);  // 1 <= 3, true
body:                    cout << value << endl;
update:                  ++value;
continuation-condition:  (value <= 3);  // 2 <= 3, true
body:                    cout << value << endl;
update:                  ++value;
continuation-condition:  (value <= 3);  // 3 <= 3, true
body:                    cout << value << endl;
update:                  ++value;
continuation-condition:  (value <= 3);  // 4 <= 3, false
During this sequence of statements one invariant is "the next value to be written is always inside value". That's not a useful invariant because it expresses a truth about the future. A useful invariant expresses a truth about the past, such as "value-1 is the number of times something has been written". Let's try it.
- Before the loop starts, the invariant says that (value-1) writes have already occurred. No writes have occurred yet, so (value-1 == 0), hence value == 1. So value is initialized to 1.
- Then comes a continuation-condition check, and then a write. Now, because a write has occurred, the invariant is momentarily broken: value is still 1, 1 write has occurred, and 1-1==1 is not true.
- To maintain the invariant you must change value to 2, since (2-1==1). The simplest way to change 1 to 2 is via an increment, so that's what the update part does. The update maintains the invariant. And, in general, if N writes have occurred we need to change value from N to N+1, which is the definition of 'increment'.
- Now the invariant says that (2-1==1) writes have occurred, which is true.
Since the invariant is restored by every update, and so is always true at each continuation-condition check, you can, in fact, completely ignore the loop and instead consider the context after the loop. When the loop has finished the continuation condition (value <= 3) must be false, so !(value <= 3) must be true, viz. (value > 3). Since value is initialized to 1, and is only ever incremented, it must be true that (value == 4). The invariant says that (4-1==3) writes have occurred, and three writes have indeed occurred.
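You can make the invariant tangible by writing it into the code as an assertion. The following is only an illustrative sketch (the writes_so_far counter and the assert calls are additions for demonstration, not part of the original fragment), but it shows the invariant holding before the loop, at the start of every body, and again after the loop:

#include <cassert>
#include <iostream>
using namespace std;

int main()
{
    int writes_so_far = 0;                  // bookkeeping added purely for illustration
    int value = 1;
    assert(value - 1 == writes_so_far);     // invariant holds before the loop: 0 writes so far
    for (; value <= 3; ++value)
    {
        assert(value - 1 == writes_so_far); // invariant still holds at the start of each body
        cout << value << endl;
        ++writes_so_far;                    // the write momentarily breaks the invariant;
                                            // the ++value in the update part restores it
    }
    assert(value == 4);                     // (value > 3), and value only ever steps up by one from 1
    assert(value - 1 == writes_so_far);     // so exactly three writes have occurred
    return 0;
}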
So, in fact, often the most important thing about the continuation condition is not the continuation condition itself, but its negation because it's the negation that's true after the loop. Looping is easy; it's knowing when to stop that causes the problems. Consequently you want the negation of the continuation condition to be as strong as possible. In the example, the continuation condition was (value <= 3) and hence its negation was (value > 3). In order to strengthen this negation into (value == 4) you had to do extra mental work. However, if you weaken the continuation condition you automatically strengthen the negation of the continuation condition. So if instead of writing (value <= 3) as your continuation condition you write (value != 4) then when the loop finishes you'll know (value == 4) with (almost) no mental effort at all. Another benefit of weakening the continuation condition is that it weakens the loop requirements (significant when you consider function templates):
for (int value = 1; value != 4; ++value) { cout << value << endl; }
At this point it's worth reiterating (sorry) that this code fragment is less direct than the very first code fragment. In the first code fragment the values 1, 2, 3 were written and the values 1, 2, 3 all appeared directly in the code. In this latest code fragment the value 3 does not appear at all. Is this a problem? Does this mean you should use (value <= 3) instead of (value != 4)? I don't think so. I think it would be a mistake to base any decision on the first code fragment. The reason is simple: programmers don't write code like that. They just don't. Programmers write loops using dedicated loop constructs. That is the context. The secret of programming is not to over-generalize or to over-specialize, but to be aware of, and sensitive to, the immediate problem context.
A loop has two parts. The loop control:
for (int value = 1; value != 4; ++value)
and the loop body being controlled:
{ cout << value << endl; }
These two parts play different roles. The former governs the latter, making sure it executes neither too few nor too many times. There is a clear one-way dependency; the body depends on the control but not vice versa. Any micro-modification that disturbs this dependency is ill-advised. For example, suppose you rewrite the for statement like this:
for (int value = 1; value != 4; ) { cout << value++ << endl; }
The loop control now depends on the loop body. In other words the loop control is dependent on the very thing it is supposed to be controlling! Entirely wrong. In fact, the control and the body are becoming intertwined so tightly it's hard to talk about the control as a separate concept at all. The software is disappearing and the loop control and the loop body are gelling into an amorphous lump of code. A lump of code that is less transparent, harder to reason about and harder to understand.
To avoid this amorphous lump simply don't modify the loop variable inside the loop. That way the dependency remains a one-way dependency from the controlled to the controller, the loop control parts remain separate (and textually together), and the loop invariant remains transparent. As useful as this well-known piece of advice is, it's not sufficient to protect your loops. It's not really a generative piece of advice. The most important thing is to keep the loop control separate from the loop body. Separation of Concerns. Modifying your loop variable inside its loop body is one way of breaking the separation and tangling the dependencies, but there are plenty of others. Using goto, break, continue, throw, or return inside the loop body can all have the same undesired effect. Here's another example where the loop control and the loop body are tightly interwoven. Does it write 1, 2, and 3 as before? Are you sure?
int value = 1;
for (;; ++value)
{
    cout << value << endl;
    if (value != 4)
        continue;
    else
        break;
}
You might be thinking that advising you not to use return statements inside loop bodies is overzealous. Do I really mean that? Yes, I do. Functions that return something should do so via a single return statement at the very end of the function. Here are some practical reasons why:
- Change: The only thing absolutely guaranteed in software is change (well, maybe corrupt data too). A function sprinkled with return statements will almost certainly break when changed. Such functions are just too opaque. They are not transparent. It's too hard to see, let alone reason about, what effects a change will have. A classic example from C is adding a statement at the start of the function to acquire a resource (eg calling malloc) and adding a statement at the end of the function to release the resource (eg calling free). Better make sure there's no return statement in between (see the sketch after this list).
- No Change: Software that separates out its concerns and manages the dependencies between the separate parts (in particular what's dependent on what) is significantly easier to refactor than software that does not. If your loop body contains a return statement then you won't be able to refactor that loop out into another method. You'll have to first refactor the loop so it doesn't contain any return statements.
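Here is a small sketch of that classic C example (the function, its name, and its details are invented for illustration, and error handling is elided): the return statement hiding between the acquisition and the release means the release is silently skipped on that path.

#include <cstdio>
#include <cstdlib>

// Illustrative only: the acquire is at the start, the release is at the end,
// and the return statement in between leaks the buffer.
bool print_first_line(const char * filename)
{
    char * buffer = static_cast<char *>(malloc(256)); // acquire
    FILE * file = fopen(filename, "r");
    if (file == 0)
        return false;                                 // oops: free(buffer) is skipped
    if (fgets(buffer, 256, file) != 0)
        printf("%s", buffer);
    fclose(file);
    free(buffer);                                     // release
    return true;
}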
A quick recap. We started with direct repetition:
cout << 1 << endl;
cout << 2 << endl;
cout << 3 << endl;
and we've worked through to indirect repetition:
for (int value = 1; value != 4; ++value) { cout << value << endl; }
The value 3 appears in the direct version but not in the indirect version. As I've already said, I don't think this is worth fretting about since programmers never actually write the first version. However, there is a sense in which 3 does appear, albeit indirectly, in the indirect version. This is because (1+3==4) or, equivalently, (4-1==3). Specifying an inclusive lower bound and an exclusive upper bound is an extremely common, powerful, and idiomatic way of expressing a loop. It even has a special notation and name. It's written like this:
[1, 4)
and it's called a half-open interval. Here are two well-known examples:
// [0, 42)
char array[42];
// ...
const size_t end = 42;
for (size_t at = 0; at != end; ++at)
{
    stuff(array[at]);
}

// [array, array+42)
char array[42];
// ...
const char * const end = array + 42;
for (char * at = array; at != end; ++at)
{
    stuff(*at);
}
Note that:
- By definition a legal array index cannot be negative, so the first fragment uses a size_t to match this constraint (size_t is an ISO C/C++ unsigned integer typedef).
- By definition the size of an object fits into a size_t. In other words, the valid indexes of the elements of an array of size N are [0, N-1] but only a size_t is guaranteed to be able to hold the value N.
- The valid indexes of the elements of an array of size N are [0, N-1] and their addresses are [array + 0, array + N - 1] respectively. However, ISO C/C++ explicitly says you can use the just-past-the-end address (array + N) in pointer comparisons (see the sketch after this list).
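Both points can be seen in a short sketch (the function name is invented for illustration): the element count is held in a size_t, and the just-past-the-end address is formed and compared against but never dereferenced.

#include <cstddef>

void illustrate_bounds()
{
    char array[42];
    const size_t size = sizeof(array) / sizeof(array[0]); // N always fits in a size_t
    char * const end = array + size;                      // just past the end: legal to form
    for (char * at = array; at != end; ++at)              // legal to compare against end
    {
        *at = 'x';                                        // at only ever points at a real element
    }
    // *end would be wrong: the past-the-end address must never be dereferenced
}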
The secret of mastering loops (and in fact, of most programming tasks) is to work intentionally. That is, to program on purpose (deliberately) and for a purpose (know what it is you're trying to do).
Suppose your intention is to search through a range of elements looking for a value. A loop is just a mechanism to realize this intention. A loop is, quite literally, a means to an end. If you concentrate on the loop you're solving the solution rather than solving the problem. Concentrate on the problem. Always design a thing by considering it in its next larger context.
If you're searching for an element then either you'll find the element or you won't. You need to be able to distinguish between these two possibilities. One of the strengths of a Half-Open Interval is its exclusive upper bound. When searching:
[begin, end)
you can make any position (let's call it at) in [begin, end) correspond to the position of the element if found and make at == end correspond to not finding the element. Like this:
// ...
if (at == end)
    // not found ...
else
    // found ...
Furthermore, if (at == end) is not true it means the element at position at must equal value: (*at == value) in the pointer case and (array[at] == value) in the indexer case.
// ...
if (at == end)
    // not found ...
else
{
    // found
    assert(*at == value);
    ...
}
Now that we understand the problem context, we can start to think about a solution. If we use a loop then once the loop finishes the following must be true:
(at == end) || (*at == value)
From the earlier Invariant section we know that this expression is the negation of the continuation condition. In other words the continuation condition must be:
!((at == end) || (*at == value))
which, using De Morgan's law, is the same as this:
!(at == end) && !(*at == value)
which is the same as this:
(at != end) && (*at != value)
Which means "(we're not at the end) and (we haven't found value)". Note how you can't swap the left and right arguments to && because the left side acts as a validity check on the right side. It's now just a matter of completing the loop by filling in the initialization part and the update part. Note that this time you can't declare the loop variable in the initialization part of the for statement, since it must remain in scope for the if statement that follows.
char array[42];
// ...
const char * const end = array + 42;
char * at = array;
for (; at != end && *at != value; ++at)
{
    ;
}
if (at == end)
    // not found ...
else
    // found ...
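Note again that the operands of && cannot be swapped: && short-circuits, so once at reaches end the right-hand operand is never evaluated and the past-the-end address is never dereferenced. A small sketch (the array contents and the searched-for value are invented for illustration):

#include <cassert>

void operand_order()
{
    char array[3] = { 'a', 'b', 'c' };
    const char * const end = array + 3;
    const char value = 'z';                     // not present, so the search reaches end
    const char * at = array;
    for (; at != end && *at != value; ++at)     // safe: at != end guards the dereference
    {
        ;
    }
    assert(at == end);                          // not found
    // (*at != value && at != end) would evaluate *at when at == end,
    // i.e. dereference the past-the-end address: undefined behaviour
}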
In the majority of cases finding the value is considered the successful outcome. It's usually best to emphasize the positive case rather than the negative case so a lot of programmers write the if statement like this:
// ...
if (at != end)
    // found ...
else
    // not found ...
The empty initialization part and the empty loop body are noticeable. You might be tempted to rewrite the fragment like this:
char array[42];
// ...
const char * const end = array + 42;
char * at = array;
while (at != end && *at != value)
{
    ++at;
}
if (at != end)
    // found ...
else
    // not found ...
This is possibly a minor improvement. However, a much more relevant point is that to search another array you'd have to write another identically structured loop. Copy-and-paste duplication is a bad thing, but it hints at something very important: you have formed a common pattern of use to conquer similar problems. Instead of copying and pasting you should be considering how to capture and name the common pattern of use in a higher-level abstraction. How about a function:
char * find(char * begin, char * end, char value)
{
    while (begin != end && *begin != value)
    {
        ++begin;
    }
    return begin;
}
Or a function template:
template<typename iterator_type, typename value_type>
iterator_type find(iterator_type begin, iterator_type end, const value_type & value)
{
    while (begin != end && *begin != value)
    {
        ++begin;
    }
    return begin;
}
It's a mistake to think that these abstractions are "too small" to warrant existence. And so finally, we end up with a code fragment that is clear, concise, transparent, and intention revealing:
char array[42];
// ...
char * const end = array + 42;
char * at = find(array, end, value);
if (at != end)
{
    // found it
    ...
}
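One payoff of the weakened requirements (just !=, ++, and *) is that the template is not tied to arrays at all. As a small sketch (the vector, its contents, and the searched-for value are invented for illustration, and the template is repeated so the sketch is self-contained), the same find works unchanged with std::vector iterators:

#include <cassert>
#include <vector>

template<typename iterator_type, typename value_type>
iterator_type find(iterator_type begin, iterator_type end, const value_type & value)
{
    while (begin != end && *begin != value)
    {
        ++begin;
    }
    return begin;
}

int main()
{
    std::vector<int> numbers;
    numbers.push_back(4);
    numbers.push_back(8);
    numbers.push_back(15);

    // the call is qualified with :: so it cannot clash with std::find
    std::vector<int>::iterator at = ::find(numbers.begin(), numbers.end(), 8);
    if (at != numbers.end())
    {
        assert(*at == 8); // found
    }
    return 0;
}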
The form of this article as well as the content of the first two sections were collectively written by a dozen or so people during a Birds of a Feather session at the ACCU 2003 Spring Conference. Many thanks to everyone who contributed.
Overload Journal #55 - Jun 2003