Journal Articles

CVu Journal Vol 16, #5 - Oct 2004 - Professionalism in Programming, from CVu journal

Title: Professionalism in Programming #28

Author: Administrator

Date: Mon, 04 October 2004 13:16:08 +01:00


Security is mostly a superstition. It does not exist in nature... Life is either a daring adventure or nothing. (Helen Keller)

Not so long ago computer access was a scarce commodity. The world contained only a handful of machines, owned by a few organisations, accessed by small teams of highly trained personnel. In those days computer security meant wearing the right labcoat and pass card to get past the guard on the door.

Fast forward to today. We carry more computational power in a pocket than those operators ever dreamt of. Computers are plentiful and, more pertinently, highly connected.

The volume of information carried by computer systems is growing at a fantastic rate. We write programs to store, manipulate, interpret, and transfer this data. Our software must guard against data going astray: into the hands of malicious attackers, past the eyes of accidental observers, or even disappearing into the ether. This is critical; a leak of top-secret company information could spell financial ruin. You don't want sensitive personal information (your bank account or credit card details, for example) leaking out for anyone to use. Most software systems require some level of security[1].

Whose responsibility is it to build secure software? Here's the bad news: it's our headache. If we don't consider the security of our handiwork carefully, we will inevitably write insecure, leaky programs and reap the rewards.

Software security is a really big deal, but generally we're very bad at it. Nearly every day you'll hear of a new security vulnerability in a popular product, or see the results of viruses compromising system integrity.

This is an enormous topic, far larger than we have scope to go into here. It's a highly specialised field, requiring much training and experience. However, even the basics are still not adequately addressed by modern software engineering teaching. The aim of this series is to highlight security issues and explore the problem. We'll learn a number of basic techniques for protecting our code.

Why Do We Get It So Wrong?

Building secure software requires a mindset that is sadly lacking in the average programmer. In the day-to-day madness of the software factory we're too focused on getting the program working, on getting it out of the door on time and in a reasonable state. We sit back and breathe a sigh of relief when our streamlined application appears to be doing what it's supposed to. Rarely do we turn our attention to how secure the code is. Unless the test department is particularly skilled in this area, it's easy to ignore the whole issue - we'd rather not think the worst of a new creation.

If you do eventually turn your gaze to security issues, perhaps with a little test department prodding, it's probably too late anyway. Once a system is built, patching up any security problems is a hard job; the problems are either too fundamental, too prevalent, or far too hard to identify.

It's probably hard to believe that anyone would take the time and effort to hack your applications. But these people exist. They're talented, motivated, and they are very, very patient. Why do they do it? Some malicious crackers intend to steal, commit fraud, or cause damage, but their motive can equally be to prove superior skills or to cause a little mischief. They might not want to compromise your application specifically, but won't hesitate to exploit its flaws if you leave a hole open.

Sadly, no application is totally hack-proof. Writing a secure program is no easy task. Yet even the most secure application must run in its operating environment: under a particular OS, on some specific piece of hardware, on a network, and with a certain set of users. An attacker is just as likely to compromise one of these as your actual code. Indeed they're probably more likely to; social engineering - the art of acquiring important information from people, items in an office, or even the outgoing trash - is usually a lot easier (and often quicker) than worming a way into your computer system.

Software security presents a myriad of problems and challenges for the poor overworked programmer.

The Risks

Better be despised for too anxious apprehensions, than ruined by too confident security. (Edmund Burke)

Why would anyone bother to attack your system? It's usually because you've got something that they want. This could be:

  • your processing power,

  • your ability to send data (e.g. send spam emails),

  • your privately stored information,

  • your capabilities; perhaps the specific software you have installed, or

  • your connection to more interesting remote systems.

They might even attack you for the sheer fun of it, or because they dislike you and want to cause harm by disrupting your computer resources. Of course, we must remember that whilst malicious people are lurking around looking for easy, insecure prey, a security vulnerability might equally be caused by a program that accidentally releases information to the wrong audience. Sometimes this won't matter. More often it's just embarrassing. In the worst case, though, that lucky user can opportunistically exploit the leak and cause you harm.

To understand the kinds of attack you might suffer, it's important to mark the difference between protecting an entire computer system (comprising several computers, a network, and a number of collaborating applications) and writing a single secure program. Both are important aspects of computer security, and they blur together: the latter is a subset of the former, and both are necessary. It takes just one insecure program to render an entire computer system (or network) insecure, so we must take the utmost care at all times.

Let's take a look at the ways you can be caught with your pants down. These are some of the most common security risks and compromises of a live, running computer system:

  • Physically acquiring a machine, for example by stealing a laptop or PDA containing unsecured sensitive data. This data is freely readable by anyone with the inclination. Similarly, the stolen device might be configured to automatically dial into a private network, allowing a simple route straight through all your company's defences. This is a serious security threat, and one that you can't easily guard against in code! What we can do is write systems that aren't immediately accessible to computer thieves.

  • Exploiting flaws in a program's input routines. Not checking input validity can lead to many types of compromise, even to the attacker gaining access to the whole machine. We'll see examples of this later.

  • Breaking in through an unsecured public network interface is a specific variant of the previous point. This is particularly worrying. UI flaws can only be exploited by people actually using that UI, but when your insecure system is running on a public network the whole world could be trying to break down your doors.

  • Malicious authorised users copying and sharing data they're not supposed to. It's hard to guard against this one. You have to trust that each user is responsible enough to handle the level of system access they've been designated.

  • Malicious authorised users entering bad data to compromise the quality of your computer system. Any system has a small set of trusted users. If they're not trustworthy then you can't write a program to fix them. This shows that security is as much about administration and policy as it is about writing code.

  • Setting incorrect permissions, allowing the wrong users to gain access to sensitive parts of your system. This could be as basic as setting the correct access permissions on database files so casual users can't see everyone's salary details.

  • Privilege escalation. This occurs when a user with limited access rights tricks the system to gain a higher security level. The attacker could be an authentic user, or someone who has just broken into the system. Their ultimate aim is to achieve root or administrator privilege, where the attacker has total control of the machine.

  • "Tapping into" data as it is transmitted on the wire. If communication is unencrypted and traverses an insecure medium (e.g. the internet) then any computer en route can syphon off and read data without anyone else knowing. A variant of this is known as a man-in-the-middle attack - when an attacker's machine pretends to be the other communicant and sits between both senders, snooping on their data.

  • Virus attacks (self-replicating malicious programs, commonly spread by email attachment), trojans (hidden malicious payloads in seemingly benign software), and spyware (a form of trojan that spies on what you are doing, the webpages you visit, etc). These programs can capture even the most complex password with keystroke loggers, for example.

  • Careless users (or bad system design) can leave a system unnecessarily open and vulnerable. For example, users often forget to log off, and if there is no session timeout anyone can later pick up your program and start using it.

  • Storing data 'in the clear' (unencrypted). Even leaving it in memory is dangerous; memory is not as safe as many programmers think. A virus or trojan can scan computer memory and pull out a lot of interesting titbits for an attacker to exploit. This depends on how secure your OS is - does it allow this kind of memory access, and can you lock your application's memory pages manually?

  • Copying software. For example: running multiple installations in an office only licensed for one user, or allowing copies to spread on the internet.

  • Allowing weak, easily guessable passwords. Many attackers use dictionary-based password cracking tools that fire off many login attempts until one works. It's a sad fact that easily memorable passwords are also easily guessable passwords. More secure systems will suspend a user account after a few unsuccessful logins.

  • Out-of-date software installations. Many vendors issue security warnings and software patches. They come at a phenomenal rate and should really be carefully checked before being deployed. A computer system administrator can easily fall behind the cutting edge.
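The input-validation risk above is one we can illustrate directly in code. Here is a minimal C++ sketch of the classic mistake and a safer alternative; the function name `store_name` and the 16-character buffer are illustrative assumptions, not a prescribed API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// The unsafe pattern the bullet list warns about: copying untrusted
// input into a fixed-size buffer with no length check.
//
//     char buffer[16];
//     strcpy(buffer, input);   // overflows if input is 16+ chars
//
// A safer sketch: check the length and reject oversized input
// outright, rather than truncating silently.
bool store_name(const char *input, char *out, std::size_t out_size) {
    if (input == nullptr || out == nullptr || out_size == 0)
        return false;
    std::size_t len = std::strlen(input);
    if (len >= out_size)
        return false;                   // reject: input does not fit
    std::memcpy(out, input, len + 1);   // +1 copies the terminating '\0'
    return true;
}
```

The important design point is that the caller learns whether the input was accepted; hostile input is refused at the boundary instead of corrupting memory deeper in the program.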

The problem scales as the number of routes into a system grows. It gets worse with more input methods (web access, command line, or GUI interfaces), more individual inputs (different windows, prompts, web forms, or XML feeds), and more users (more chance of someone's password being discovered). With more outputs there is more chance for bugs to manifest in the display code, leaking out the wrong information.
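Some of the risks listed above admit simple coded defences. Suspending an account after a few failed logins, as mentioned in the weak-passwords bullet, blunts dictionary attacks considerably. A minimal sketch; the class name, the limit of three attempts, and the interface are all illustrative assumptions:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of a lockout policy: after max_failures failed logins the
// account is suspended and even a correct password is refused.
class LoginGuard {
public:
    explicit LoginGuard(int max_failures = 3) : max_failures_(max_failures) {}

    // Returns true only for a correct password on an unlocked account.
    bool attempt(const std::string &user, bool password_correct) {
        int &failures = failures_[user];
        if (failures >= max_failures_)
            return false;                 // account suspended
        if (password_correct) {
            failures = 0;                 // reset the count on success
            return true;
        }
        ++failures;
        return false;
    }

    bool locked(const std::string &user) const {
        std::map<std::string, int>::const_iterator it = failures_.find(user);
        return it != failures_.end() && it->second >= max_failures_;
    }

private:
    int max_failures_;
    std::map<std::string, int> failures_;
};
```

In a real system the password check itself would of course happen against salted, hashed credentials; this sketch only shows the rate-limiting policy.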

How do you know when your program has been compromised? Without detection measures you'll have no idea - and will just have to keep an eye out for unusual system behaviour or different patterns of activity. This is hardly scientific. A hacked system can remain a secret indefinitely. Even if the victim (or their software vendor) does spot an attack, they probably don't want to release detailed information about it to invite more intruders. What company would publicise that their product has security issues that are effectively wide-open doors? If they are conscientious enough to release a security patch not everyone will upgrade, leaving a well-documented security flaw in many operational systems.
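Detection measures need not be elaborate to be better than nothing. One common approach, used by file-integrity tools such as Tripwire, is to record a baseline checksum of critical files and periodically recompute it. A toy sketch follows; the checksum here is deliberately trivial and non-cryptographic (a real system must use a cryptographic hash, which an attacker cannot forge), and all names are illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy checksum for illustration only - NOT tamper-resistant.
unsigned long toy_checksum(const std::string &data) {
    unsigned long h = 5381;        // djb2-style accumulator
    for (std::string::size_type i = 0; i < data.size(); ++i)
        h = h * 33 + static_cast<unsigned char>(data[i]);
    return h;
}

// Records a known-good checksum per file, then reports any mismatch.
class Baseline {
public:
    void record(const std::string &path, const std::string &contents) {
        sums_[path] = toy_checksum(contents);
    }
    // True if contents differ from the recorded baseline (or were
    // never recorded at all - an unexpected file is also suspicious).
    bool tampered(const std::string &path, const std::string &contents) const {
        std::map<std::string, unsigned long>::const_iterator it = sums_.find(path);
        return it == sums_.end() || it->second != toy_checksum(contents);
    }
private:
    std::map<std::string, unsigned long> sums_;
};
```

The baseline itself must live somewhere the attacker cannot rewrite it, or the detection is worthless - another reminder that security spans administration as well as code.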

The Opposition

To defend yourself adequately it's important to know whom you're fighting against. As they say: know your enemies. We must understand exactly what they're doing, how they do it, the tools they're using, and their objectives. Only then can we formulate a strategy to cope.

Your attacker might be a common crook, a talented cracker, a 'script kiddie'[2], a dishonest employee cheating the company, or a disgruntled ex-employee seeking revenge for unfair dismissal.

Thanks to pervasive networking they could be anywhere, on any continent, using any type of computer. Attackers working over the internet are very hard to locate; many are skilled at covering their tracks. Often they crack easy machines to use as cover in more audacious attacks.

They could attack at any time, day or night. Across continents one person's day is another's night. You need to run secure programs around the clock, not just in business hours.

There is a cracker subculture where knowledge is passed on and easy-to-use cracker tools are distributed. Not knowing about this doesn't make you innocent and pure, just naive and open to the simplest of attacks.

With such a large bunch of potential attackers, the motives for an attack are diverse. It might be malicious (a political activist wants to ruin your company, or a thief wants to access your bank account), or it might be for fun (a college prankster wants to post a comical banner on your website). It might be inquisitive (a hacker just wants to see what your network infrastructure looks like, or practice their cracking skills), or might be opportunist (a user stumbles over data they shouldn't see, and works out how to use it to their advantage).

Excuses, Excuses

How do attackers manage to break into code so often? They're armed with weapons we don't have or (due to lack of education) know nothing about. Tools, knowledge, skills: these all work in their favour. However, they have one key advantage that makes all the difference: time. In the heat of the software factory, programmers are pressed to deliver as much code as humanly possible (probably a little bit more), and to do so on time: or else. This code has to meet all requirements (for functionality, usability, reliability, etc) leaving precious little time to focus on other 'peripheral' concerns, like security. Attackers don't share this burden, have plenty of time to learn the intricacies of your system, and have learnt to attack from many different angles.

The game is stacked heavily in their favour. As software developers we must defend all possible points of the system; an attacker can pick the weakest point and focus there. We can only defend against the known exploits; an attacker can take their time to find any number of unknown vulnerabilities. We must be constantly on the lookout for attacks; the attacker can strike at will. We have to write good, clean software that works nicely with the rest of the world; an attacker can play as dirty as they like.

What does this tell us? Simply that we must do better. We must be better informed, better armed, more aware of our enemies, and more conscious of the way we write code. We must design in security from the outset and put it into our development processes and schedules.

Next Time

We'll conclude this topic by investigating some specific code vulnerabilities, and working out good techniques to defend our code from attack.



[1] As we'll see, this is true whether they handle sensitive data or not. If a 'non-critical' component has a public interface then it poses a security risk to the system as a whole.

[2] A derogatory name for crackers who run automated 'cracker scripts'. They exploit well-known vulnerabilities with little skill themselves.
