CVu Journal Vol 16, #1 - Feb 2004 - Programming Topics - Professionalism in Programming

Title: Professionalism in Programming #24

Author: Site Administrator

Date: Sun, 07 March 2004 19:36:44 +00:00

Summary: 

“There is more to life than increasing its speed” - Mahatma Gandhi

We live in a fast food culture. Not only must our dinner arrive yesterday, our car should be fast, and our entertainment instant. Our code should also run like lightning. I want my result. And I want it now.

Ironically, writing fast programs takes a long time.

Optimisation is a spectre hanging over software development. As W.A. Wulf observed: “More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason – including blind stupidity.”

It’s a well-worn subject, with plenty of trite soundbites bounding around, and the same advice being served time and time again. But despite this, a lot of code is still not developed sensibly. Programmers get sidetracked by the lure of efficiency and write bad code in the name of performance.

In these articles we’ll address this. We’ll tread some familiar ground and wander well-worn paths, but look out for some new views on the way. Don’t worry – if the subject’s optimisation it shouldn’t take too long...

Body: 

What does it mean?

The word optimisation literally means to make something better; to improve it. In our world it’s generally taken to mean ‘making code run faster’, measuring a program’s performance against the clock. But this is only part of the picture. Different programs have different requirements: what’s ‘better’ for one may not be ‘better’ for another. Software optimisation may actually mean any of the following:

  • speeding up program execution,

  • decreasing executable size,

  • improving code quality,

  • increasing data throughput (not necessarily the same as execution speed), or

  • decreasing storage overhead (say, database size).

The conventional optimisation wisdom is summed up by M.A. Jackson’s infamous laws of optimisation:

  1. Don’t do it.

  2. (for experts only) Don’t do it yet.

That is, you should avoid optimisation at all costs. Ignore it at first, and only consider it towards the end of development when your code’s shown not to be running fast enough.

In reality this is far too simplistic a viewpoint – accurate to a point, but potentially misleading and harmful. Performance is a valid consideration right from the humble beginnings of development, before a single line of code has been written.

Code performance is determined by a number of factors, including:

  • the execution platform,

  • the deployment/installation configuration,

  • architectural software decisions,

  • low level module design,

  • legacy artifacts (like the need to interoperate with older parts of the system), and

  • the quality of each line of source code.

Some of these are fundamental to the software system as a whole, and an efficiency problem there won’t be easy to rectify once the program has been written. Notice how little impact individual lines of code have; there is so much more that affects performance. Optimisation, whilst not a specific scheduled activity, is an ongoing concern through all stages of development.

Think about the performance of your program from the very start – do not ignore it, hoping to make quick fixes at the end of development. But don’t use this as an excuse to write tortured code, based on your notion of what is ‘fast’ or not. A programmer’s gut feeling for where bottlenecks lie is seldom right, no matter how experienced he or she is.

What makes code suboptimal?

In order to improve our code, we have to know the things that will slow it down, bloat it, or degrade performance. Later on this will help us to determine some code optimisation techniques. At this stage it’s just helpful to appreciate what we’re fighting against.

Complexity

is a killer. The more work there is to do, the slower the code will run. Reducing the amount of work to do, or breaking it into a different set of simpler, faster tasks, can greatly enhance performance.

Indirection

is touted as the solution to all known programming problems, but also blamed for a lot of slow code. This criticism is often levelled by old-school procedural programmers, aimed at modern OO designs. Whether any of it is actually true is debatable.

Repetition

can often be avoided, and will inevitably ruin code performance. It comes in many guises: for example, failing to cache the results of expensive calculations or of remote procedure calls. Every time you recompute you waste precious efficiency. Repeated code sections also extend executable size unnecessarily.
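As a minimal sketch of the caching idea in C: the expensive_square function and its call counter below are hypothetical stand-ins for a heavy calculation or remote procedure call, and the one-entry cache is the simplest possible scheme.

```c
/* Hypothetical stand-in for an expensive calculation or remote call.
   The counter lets us observe how often the real work actually runs. */
static int expensive_calls = 0;

int expensive_square(int n)
{
    expensive_calls++;
    return n * n;
}

/* A one-entry cache: remember the last argument and its result, so a
   repeated request for the same value does no recomputation. */
int cached_square(int n)
{
    static int have_result = 0;
    static int last_arg, last_result;

    if (!have_result || last_arg != n) {
        last_result = expensive_square(n);
        last_arg = n;
        have_result = 1;
    }
    return last_result;
}
```

Asking for the same value twice triggers the expensive work only once; a real system might use a hash table keyed on the argument instead of a single entry.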

Bad design

will lead to bad code. For example, placing related units far away (say across module boundaries) will make their interaction slow. Bad design can lead to the most fundamental, the most subtle, and the most difficult performance problems.

I/O

is a remarkably common bottleneck. A program whose execution is blocked waiting for input or output (to/from the user, the disk, or a network connection) is bound to perform badly.

This list is nowhere near exhaustive. But it gives us a good idea of what to think about as we proceed to investigate how to write optimal code.

Why not optimise?

Historically optimisation was a crucial skill, since early computers ran very, very slowly. Getting a program to complete in anything like reasonable time required a lot of skill, and the hand-honing of individual machine instructions. That kind of skill is nowhere near as important these days; the personal computer revolution has changed the face of software development. We often have a surplus of computational power, quite the reverse of the days of yore. So it would seem that optimisation doesn’t really matter any more.

Well, not quite. We’ll see that there are still many situations requiring high performance code, but it is preferable to avoid optimising code if at all possible. Optimisation has a lot of downsides.

Lightning performance and heavy optimisations are seldom as important as you think – it’s either acceptable to put up with ‘adequate’ performance, or you can work around performance issues in other ways (more on this later). Before you even consider a stint of code optimisation, you must bear this advice in mind: Correct code is far more important than fast code. There’s no point in arriving at the wrong answer quickly.

You should spend more time and effort proving that your code is correct than getting it fast. Any later optimisation must not break this correctness.

Presuming the code wasn’t absolutely terrible in the first place, there’s a price to pay for more speed. Optimising code is the act of trading one desirable quality for another. Done well, the (correctly identified) more desirable quality is enhanced.

These are the top reasons to avoid optimising code. You’ll see that a number of them are examples of code writing tradeoffs:

Loss of readability

It’s rare for optimised code to read as clearly as its slower counterpart. By its very nature, the optimised version is not as direct an implementation of the logic, nor as straightforward. You sacrifice readability for performance.

Most optimised code is ugly and hard to follow. Optimisation destroys neat design. Here is a simplistic example: you’ll see plenty of C code constructs like this: int value = *p++. It’s hard to read, sadly even for experienced C programmers. There’s nothing wrong with separating that into two statements like: int value = *p; p++;. The initial version may be more concise, but the second version is far, far clearer to read and understand.

This is a simplistic example for two reasons. First, it’s a code construct issue. Most optimisations are concerned more with logic than syntax. Second, whilst this might have generated more efficient code in the Good Old Days (when dinosaurs wrote preprocessors) modern optimising compilers will generate identical code for both versions. However, the general principle is clear.
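To make the comparison concrete, here are the two forms from the text wrapped in functions (the function names are mine, purely for illustration). They behave identically; the difference is only in how easily a reader can follow them.

```c
/* Terse form: dereference and advance in a single expression. */
int next_value_terse(int **pp)
{
    return *(*pp)++;    /* read through the pointer, then step it on */
}

/* Clear form: the same logic as two separate statements. */
int next_value_clear(int **pp)
{
    int value = **pp;   /* read the current element */
    (*pp)++;            /* then advance the pointer */
    return value;
}
```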

Increase in complexity

A more ‘clever’ implementation – perhaps utilising special ‘backdoors’ (thereby increasing module coupling) or taking advantage of platform specific knowledge – will add complexity. Complexity is the enemy of good code.

Hard to maintain/extend

As a consequence of increased complexity and a lack of readability, the code will be harder to maintain. If an algorithm is not clearly presented the code can hide bugs more easily.

Optimising working code is a surefire way to add new subtle bugs – these will be harder to find because the code is more contrived and harder to follow. Optimisation leads to dangerous code.

This also stunts the extensibility of the code. Optimisations often come from making more assumptions, limiting generality and future growth.

Introducing conflicts

Often an optimisation will be quite platform specific. It might make certain operations faster on one system, at the expense of another platform. Picking optimal data types for one processor type may lead to slower execution on others.

The software world moves fast. Technologies change rapidly; today’s optimisation might be tomorrow’s bottleneck. But gnarly optimised code hides the original algorithm’s intent, making it hard to unpick and rework later.

More effort

Optimisation is another job that needs to be done. We have quite enough to do already, thank you. If the code’s working adequately then we should focus our attentions on more pressing concerns.

Optimising code takes a long time, and it’s hard to target the real causes. If you optimised the wrong thing, you’ve wasted a lot of precious energy.

Too expensive/unnecessary

Often optimisation is simply not worthwhile, or is uneconomical. A few extra percentage points in speed trials don’t justify a year’s extra work.

Inappropriate

Do you really believe that you can optimise better than a modern compiler’s optimiser? Trying to perform code level tweaks can be a big waste of time.

For these reasons, optimisation should be some way down your list of concerns. Balance the need to optimise your code against the requirement to fix faults, add new features, or to ship a product. If you take care to write efficient code in the first place you’re less likely to need to optimise anyway.

Alternatives

Often code optimisation is performed when it’s actually unnecessary. There are a number of alternative approaches that we can employ to avoid destroying code. Consider these solutions before you get too focused on optimisation:

  • Can you put up with this level of performance – is it really that disastrous?

  • Run the program on a faster machine. This seems laughably obvious, but if you have enough control over the execution platform it might be more economical to specify a faster computer than spend time tinkering with code. Given the average project duration, you are guaranteed that by the time you reach completion processors will be considerably faster.

    Not all problems can be fixed by a faster CPU, especially if the bottleneck is not execution speed – a slow storage system, for example. Sometimes a faster CPU can cause drastically worse performance; faster execution can exacerbate thread locking problems.

  • Look for hardware solutions: add a dedicated floating point unit to speed up calculations, add a bigger processor cache, more memory, a better network connection, or a wider bandwidth disk controller.

  • Consider reconfiguring the target platform to reduce the CPU load on it. Disable background tasks, or any unnecessary pieces of hardware. Avoid processes that consume a huge amount of memory.

  • Run the slow code asynchronously, in a background thread. Adding threads at the last minute is a road to disaster if you don’t know what you’re doing; but careful thread design can accommodate slow operations quite acceptably.

  • Work on user interface elements that affect the user’s perception of speed. Ensure that GUI buttons change immediately, even if their code takes over a second to execute. Implement a progress meter for slow tasks; a program that hangs during a long operation appears to have crashed. Visual feedback of operation progress conveys a better impression of the quality of performance.

  • Design the system for unattended operation, so that no one notices the speed of execution. Create a batch processing program with a neat UI that allows you to enqueue work.

  • Write time critical sections in another faster language – conventional compilers still beat JIT code interpreters for execution speed.

  • Try a newer compiler with a more aggressive optimiser, or target your code for the most specific processor variant (with all extra instructions and extensions enabled) to take advantage of all performance features.
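The background-thread alternative above can be sketched in C, assuming POSIX threads are available. The slow_job structure and its doubling ‘work’ are illustrative only; a real task might be a long computation or a slow I/O operation.

```c
#include <pthread.h>

/* Illustrative slow job: the struct and the 'work' it performs are made
   up for this sketch. */
struct slow_job {
    int input;
    int result;
};

static void *run_slow_job(void *arg)
{
    struct slow_job *job = arg;
    job->result = job->input * 2;   /* stand-in for lengthy work */
    return NULL;
}

/* Kick the job off in a background thread. The caller stays responsive
   and joins the thread only when the result is actually needed. */
int start_in_background(struct slow_job *job, pthread_t *tid)
{
    return pthread_create(tid, NULL, run_slow_job, job);
}
```

The caller would continue servicing the UI or other requests, calling pthread_join (or polling a flag) only when the result is required. As the text warns, bolting threads on at the last minute is risky; this only works if the design accounts for it.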

Why optimise?

So that’s it – we should all give up on any foolish notion of optimising code, and put up with mediocre performance, or only ever attempt roundabout solutions? Well, not quite...

There are plenty of situations where optimisation is important. And contrary to the popular wisdom, some areas are guaranteed to require optimisation:

  • Games programming always needs well honed code. Despite the huge advances in PC power, the market demands more realistic graphics and more impressive artificial intelligence algorithms. This can only be delivered by stretching the execution environment to its very limits. It’s an incredibly challenging field of work; as each new piece of faster hardware is released, games programmers still have to wring every last drop of performance out.

  • DSP programming is all about high performance. Digital Signal Processors are dedicated devices specifically optimised to perform fast digital filtering on large amounts of data. If speed didn’t matter you wouldn’t be using a DSP. DSP programming generally relies less on an optimising compiler, since you want to have a high degree of control over what the processor is doing at all times. DSP programmers are skilled at driving these devices at their maximum performance.

  • Resource constrained environments, like deeply embedded platforms, can struggle to achieve reasonable performance with the available hardware. You’ll often have to rewrite code to achieve an acceptable quality of service.

  • Real time systems rely on timely execution, on being able to complete operations within well specified quanta. Algorithms have to be carefully honed and proven to execute in fixed time limits.

  • Numerical programming – in the financial sector, or for scientific research – demands high performance. These huge systems are run on very large computers with dedicated numerical support, supporting vector operations and parallel calculations.

Perhaps optimisation is not a serious consideration for ‘general purpose’ programming, but there are plenty of cases where optimisation is a crucial skill. Performance is seldom specified in a requirements document, yet the customer will complain when your program runs unacceptably slowly. If there are no alternatives, and the code doesn’t run fast enough, you have to optimise it.

Clearly there is a shorter list of reasons to optimise than not to. Unless you have a specific need to optimise, you should avoid doing so. But if you do need to optimise, make sure you know how to do it well. Understand when you do need to optimise code, but prefer to write efficient and good code in the first place.

Next time

We’ll look at good techniques for optimising code.
