
Watt's going on?

Overload Journal #89 - February 2009: Journal Editorial
Author: Ric Parkin
Just how much power are you using...

Over the summer I'd bought a few new electronic devices, so I reorganised all the power cables to be neater and to give me simple control over which devices were on. To do this properly I got a simple power meter, which let me make decisions informed by actual power usage, and a few power blocks with individual switches. It didn't take very long, and I now have an easy, accessible way to choose which things are on and drawing power.

I was reminded of this during a recent thread on accu-general which touched on whether or not it was pointless to switch off a TV at the wall. Time to revisit my assumptions, and do some extra measuring at work - after all, in an IT business there are an awful lot of electronic devices on all day (and many people leave them on all night and over the weekend too). If we can save significant amounts of energy, we can save money, and perhaps reduce CO2 emissions too.

So my power meter got dusted off and I went around getting real figures, shown in the following two tables. The measurements are how much a device draws when you turn it 'off' but leave it plugged into the power supply, and how much it uses when in use (often a range, for devices such as PCs that do sophisticated power management).

Device                          Off (W)     In Use (W)
Amp                             0           12
CD                              0           9
Radio                           0           8.5
TV (also used as PC monitor)    0.5         85
DVD                             0           20
Power block                     N/A         0.3
Broadband Cable Modem           N/A         8.5
Wireless Router                 N/A         4.5
PC                              4.5/8.8*    85-160
PS3                             1.6         100-113
Wii                             1.7         16.5

Table 1: Home

* For some reason my PC draws less when it first gets power than after it has been shut down.

Device                          Off (W)     In Use (W)
PC                              5/75**      130-150
Monitor                         0.4-1.5     30-45
Docking station                 2.4         5.4
Phone Charger                   0.7         3.2
Laptop                          0.7         30-50
Laptop Transformer              N/A         0.3
USB Hub                         N/A         3.8
Total system (4 monitors)       ~10         250-300

Table 2: Work

** The first figure is for shutting the PC down; the second is for putting it into Standby.

Some things jump out immediately. Many devices have very good 'off' power consumption - even zero for the old audio equipment and the DVD player - but most still draw a small amount, which I suspect comes from their power transformers - eg a laptop power transformer draws 0.3W even without the laptop plugged in. The PC was an exception, which I suspect is a combination of the PSU and the network card (there is a function to wake a PC remotely via the network, which needs the card to stay powered), but even that is only about the same as a low-energy light bulb. I don't have a TV set-top box, but I understand that some models were really bad - in 'standby' most of the electronics stay on to download updates.

With such figures you can compare various scenarios, such as for my work rig: leave the PC on permanently (~250-300W); leave the PC on but switch the monitors off, eg via screen savers (~150W); switch everything off but leave it all plugged in (~10W); or switch everything off at the wall (0W).
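
To put rough numbers on those scenarios, here is a back-of-the-envelope sketch in C++. The unattended hours and the 12p/kWh tariff are my assumptions for illustration, not measured figures:

    #include <cstdio>

    // Rough annual cost of each work-rig scenario.
    // Assumptions (not from the measurements): the rig sits unattended
    // 128 hours a week (evenings and weekends), and electricity costs
    // about 12p per kWh - a plausible UK tariff.
    int main()
    {
        const double hoursPerYear = 128 * 52;   // unattended hours
        const double poundsPerKwh = 0.12;       // assumed tariff

        struct Scenario { const char* name; double watts; };
        const Scenario scenarios[] = {
            { "Everything left on",        275 },  // midpoint of 250-300W
            { "PC on, monitors off",       150 },
            { "Off, but plugged in",        10 },
            { "Switched off at the wall",    0 },
        };

        for (const Scenario& s : scenarios)
        {
            const double kwh = s.watts * hoursPerYear / 1000;
            std::printf("%-28s %6.0f kWh/year  ~ GBP %.0f\n",
                        s.name, kwh, kwh * poundsPerKwh);
        }
    }

On these assumptions, even the ~10W 'off but plugged in' residual comes to about 67kWh - roughly eight pounds - a year, while leaving everything on costs over two hundred pounds in unattended hours alone.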

This shows that automatic power management can really help reduce power consumption - it will power off monitors and spin down hard drives when you go home or are in a meeting, saving a lot for no effort beyond the initial setup. Getting into the habit of manually switching off devices that are not in use saves even more, but takes some ongoing effort (although some power blocks detect a drop in power draw on one socket and switch off everything else automatically, which makes this easier still). The final switch off at the wall saves a little more - roughly the equivalent of a low-energy light bulb - and can also be made easier by an accessible power block with a master switch; I have one designed to sit on a desk and provide both power sockets and network cabling.

This last point explains why so many people don't bother switching off at the wall - it's more effort to do so, and doesn't save much. Some argue that it's not worth it, especially compared to the fluctuations in normal usage. Of course, those fluctuations happen anyway, and this saves an extra, independent amount. And even if it looks small for an individual, when you add it up across a whole office, town, and country, this small saving accumulates and becomes significant.

But this is just looking at the small, personal stuff. Are there more systematic changes to our IT systems that would make a big difference?

The first is not to buy or use devices we don't really need. An interesting recent blog post on how start-ups can save money [Startup] suggests not buying everyone a phone, and not bothering with an in-house email server.

The next is for manufacturers to reduce the energy needed to create and run devices. This takes time, and as consumers we can only influence it indirectly, but sometimes we can make a difference as developers - I've spent time making sure software correctly puts hardware into a low-power state. It might only save a small amount per device, but with a large customer base such actions have a significant total effect.
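
As a concrete illustration of the sort of thing I mean - a minimal sketch assuming the Win32 API, with DoLengthyWork a hypothetical stand-in for the application's real work - the trick is to claim a power state only while it is genuinely needed, and to release it promptly:

    #include <windows.h>

    void DoLengthyWork();   // hypothetical: the application's real work

    // Keep the machine awake only while work is actually in progress,
    // so OS power management can do its job the rest of the time.
    void RunJob()
    {
        // Tell Windows the system (but not the display) must stay up
        // for the duration of this job.
        ::SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED);

        DoLengthyWork();

        // Crucially, clear the requirement afterwards - forgetting this
        // call keeps the whole machine out of sleep indefinitely.
        ::SetThreadExecutionState(ES_CONTINUOUS);
    }

Forgetting that final call is exactly the kind of bug that quietly wastes power on thousands of machines.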

Suitably configuring a computer's power management settings can have a significant ongoing effect - powering off screens, spinning down hard disks, and eventually putting the PC into hibernation all trim unnecessary costs, and the savings are significant when added up over a whole office. A good IT department can influence things here.

Another possibility is not to use complex, power-hungry computers directly - with fast networks, virtualisation, and server farms, we can do many tasks using only small network appliances that share central processing power. This still uses power-hungry computers, but they are much less likely to sit idle, and being centrally managed their power usage can be controlled more easily. And as power is a major cost, management has a strong incentive to control it actively. A neat example of this: one company is going to build a server farm in Inverness, citing the lower ambient temperature as reducing the amount of cooling needed [Climate] (and it hopes to use the waste heat to provide heating and hot water to local buildings). You can take this further and site them in places like Iceland - not only is it even colder, but much of the electricity is geothermal, and so cheap and practically carbon-free [Iceland].

Of course some of these savings are minor for the individuals involved, and if you only have limited time and resources you can often get a better return by avoiding waste from the real energy hogs: heat. So don't overfill the kettle, turn the central heating down a degree, keep the jumper on, and top up the insulation...

Core blimey

A few years ago, Herb Sutter published an article, 'The Free Lunch is Over' [FreeLunch], and gave a talk version of it at the ACCU conference. It discussed how the increase in processor clock speeds had levelled off, and how the extra transistors predicted by Moore's law were instead being used to provide more on-chip memory and extra cores. The message was that, to take advantage of this, we would have to change our programming models from single-threaded to multi-threaded and multi-processor - only then could our programs continue to get faster as before.

Or not. Recently some studies were published about how multicores will scale [Memory]. The answer was: not very. Well, almost - looking a bit more carefully, what they actually found was that sharing memory across cores didn't scale. This is because accessing main memory is a major bottleneck - memory speeds haven't improved much compared to processors, and it takes time to get data to and from the cores. Local caches help, but they add the problem of keeping each core's view of memory consistent. This was in fact mentioned in Herb's article (and had been known for years before that), but here was more evidence that we really do have to rethink how our programs are designed - things won't improve by themselves. In many ways, programming on multi-cores (and multi-processors) is just distributed computing on a smaller scale: there are (relatively) high call latencies, slow data access, and data consistency issues. There is a large literature on how to program within these constraints, but many of the ideas and techniques - asynchronous message passing, restructuring data processing into algorithms that work on independent parallel chunks, perhaps turning to functional programming ideas and languages - are very different from how many people currently program, and it will take time to convert people's thinking.
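
To make the message-passing idea concrete, here is a minimal sketch using the standard C++ threading facilities (these arrived with C++11, after this article was written; the principle is the same with any thread library). Two threads share nothing except a queue of messages passed by value, so there is no shared mutable memory to keep coherent across cores:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <utility>

    // A minimal thread-safe message queue: the only point of
    // contact between producer and consumer.
    template <typename T>
    class MessageQueue
    {
    public:
        void push(T msg)
        {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(msg));
            }
            cond_.notify_one();
        }

        T pop()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            T msg = std::move(queue_.front());
            queue_.pop();
            return msg;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cond_;
        std::queue<T> queue_;
    };

    int main()
    {
        MessageQueue<std::string> queue;

        // The consumer owns its working data; the threads communicate
        // only by value through the queue.
        std::thread consumer([&] {
            for (;;)
            {
                std::string msg = queue.pop();
                if (msg == "quit") break;
                std::cout << "processed: " << msg << '\n';
            }
        });

        queue.push("chunk 1");
        queue.push("chunk 2");
        queue.push("quit");
        consumer.join();
    }

Scaling this up - more workers, work split into independent chunks, results gathered by message - is the essence of the distributed-computing style mentioned above.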

And a random thought: could OS writers and chip designers start organising things to reflect such a model? For example, a chip could be designed with several groups of cores, each with its own dedicated local memory. The OS could then run on one dedicated core-group and launch each process on its own core-group. Each process's threads would use their own memory without contention from other processes, and the only synchronisation would be sending messages to other processes, coordinated by the OS. Original? Unlikely - it wouldn't surprise me in the least if this had already been done years ago, and the mainstream is only now catching up.

ACCU Conference

And finally a quick reminder - this year's conference lineup has been announced and booking has been open for just over a month. Early bird rates are still available until 28th February, so if you haven't already booked, now is the time. I look forward to seeing you there!

References

[Climate] http://www.itpro.co.uk/608874/data-centre-heads-north-for-cooler-climate

[FreeLunch] 'The Free Lunch is Over': http://www.gotw.ca/publications/concurrency-ddj.htm

[Iceland] http://www.theregister.co.uk/2007/04/10/iceland_to_power_server_farms/

[Memory] Limits of multicore memory sharing: http://www.sandia.gov/news/resources/releases/2009/multicore.html

[Startup] http://calacanis.com/2008/03/07/how-to-save-money-running-a-startup-17-really-good-tips/
