Title: Cryptographic Mistakes Made in Programming
Author: Administrator
Date: Fri, 02 December 2005
Cryptography is regarded by many, including some of its practitioners, as a black art. This is not without good reason: very few people truly understand the subject. Most cryptographers are mathematicians, while few programmers today have a serious mathematical background (unlike in the early days of computing). As a result, programmers who actually understand cryptography are very rare.
Having heard the warnings against implementing cryptographic algorithms yourself, it is often seen as easier, and safer, to use pre-built cryptographic libraries to handle any encryption requirements. It is natural to assume that if these libraries have been constructed by cryptographic experts, then using them will result in secure data. Unfortunately, this assumption is the root of many encryption mistakes made by programmers.
In cryptography, the strength of the algorithm and the length of the key are only one part of the security of a system. Given poor usage or poor implementation, even the most secure encryption can be broken. A common mistake in cryptographic programming is the assumption that strong cryptography alone makes the data secure. So what errors can we, as programmers, make that turn a secure encryption algorithm into an insecure system? To demonstrate, let's look at the only encryption algorithm that is mathematically unbreakable: the one-time pad.
The one-time pad works by XORing the data bits with a pad as long as the message, where the pad is selected at random. The result is data that can only be decrypted with the pad and is unbreakable without it. The one-time pad is impractical to implement in the real world, but it is ideal for demonstrating how programming mistakes can ruin encryption: if the programmer isn't thinking when coding, even this 'unbreakable' system can become breakable.
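In code, the core operation is a single XOR pass. The sketch below is a minimal illustration (the function name xor_pad is ours, not from any particular library):

#include <cassert>
#include <cstddef>
#include <vector>

// XOR each message byte with the corresponding pad byte. The same
// routine both encrypts and decrypts: applying the pad twice
// restores the original data.
std::vector<unsigned char> xor_pad(const std::vector<unsigned char>& data,
                                   const std::vector<unsigned char>& pad)
{
    // The security rests entirely on the pad being truly random,
    // at least as long as the message, and never reused.
    assert(pad.size() >= data.size());
    std::vector<unsigned char> out(data.size());
    for (std::size_t i = 0; i != data.size(); ++i)
        out[i] = data[i] ^ pad[i];
    return out;
}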
Let's look at a few of the more common mistakes that can be made:
With eight-bit ASCII, most characters used in English documents fall within the lower seven bits. Because of differences between development platforms, some encryption libraries may be configured to assume seven-bit ASCII input. If we mismatch the two, we could end up encrypting only seven of the eight bits in our data, 'leaking' one unencrypted bit per byte.
The resulting data will look encrypted, but can give clues about the plain text hidden within it. In a financial document, for example, values greater than 128 decimal are probably pound signs.
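To make the leak concrete, here is a minimal, hypothetical sketch of an 'encryption' routine that covers only the low seven bits of each byte (the function name and key value are illustrative only):

#include <cstdio>

// Hypothetical mismatch: the 'encryption' covers only the low
// seven bits of each byte, so the top bit passes through in clear.
unsigned char encrypt_7bit(unsigned char plain, unsigned char key)
{
    unsigned char low7 = (plain ^ key) & 0x7F; // encrypted bits
    unsigned char top  = plain & 0x80;         // leaks unchanged!
    return top | low7;
}

int main()
{
    unsigned char pound = 0xA3;                // pound sign in Latin-1
    unsigned char c = encrypt_7bit(pound, 0x5C);
    // The high bit of the ciphertext still reveals that the
    // plaintext byte was outside seven-bit ASCII.
    std::printf("top bit leaked: %d\n", (c & 0x80) != 0);
}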
Most public and symmetric key encryption systems (real-world cryptography) repeat the key. The longer the key and the less it is repeated when encrypting data, the more secure the system is regarded to be. Data supplied by users may also be repeated; if this occurs, then block-by-block comparisons could reveal the encryption key.
A good example of this is the weakness in WEP (Wired Equivalent Privacy), the encryption standard used with 802.11b wireless networks. WEP uses a 24-bit initialisation vector, and because a wireless access point transmits a great many similar packets, it is guaranteed to reuse the same key stream at some point. By comparing two packets encrypted with the same key stream it is possible to recover the plain text. Once that is done, it is trivial to decrypt other encrypted data.
WEP uses the RC4 algorithm, which most would accept as a decent cipher for most uses. What lets WEP down in this case is how the algorithm was used, and the data it was used to encrypt.
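The underlying arithmetic is easy to demonstrate: XORing two ciphertexts produced with the same key stream cancels the key stream completely, leaving the XOR of the two plaintexts. A minimal sketch:

#include <cstdio>

int main()
{
    // Two plaintext bytes encrypted with the same keystream byte.
    unsigned char p1 = 'A', p2 = 'B', ks = 0x9E;
    unsigned char c1 = p1 ^ ks;
    unsigned char c2 = p2 ^ ks;

    // XORing the two ciphertexts cancels the keystream entirely,
    // leaving the XOR of the plaintexts -- no key required.
    std::printf("c1 ^ c2 = %#04x, p1 ^ p2 = %#04x\n",
                c1 ^ c2, p1 ^ p2);
}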
Consider the following logon system: a user types in a password, and the system breaks the password up into chunks of three characters, upcasing each character. Each chunk is separately encrypted with the algorithm and compared with the stored encrypted password to see if it matches. Passwords are limited to letters and numbers only.
The algorithm is solid, but unfortunately there are now only 36^3 = 46,656 possible inputs to it. This is such a small number that anyone with access to the password file could work out the password in less than a second.
The programmer has effectively ruined a secure encryption system by the way the data was processed prior to encryption.
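As a rough illustration of why 46,656 inputs is hopeless, here is a sketch of the brute-force attack; encrypt_block is a toy stand-in for the system's real cipher, included only so the fragment compiles:

#include <string>

// Toy stand-in for the system's real block encryption -- here
// just an XOR so the sketch is self-contained.
std::string encrypt_block(const std::string& block)
{
    std::string out = block;
    for (char& ch : out)
        ch ^= 0x2A;
    return out;
}

// With only A-Z and 0-9 allowed and each upcased three-character
// chunk encrypted independently, a chunk has 36^3 = 46,656
// possible values -- few enough to try them all in well under a
// second against a stolen password file.
std::string crack_chunk(const std::string& target)
{
    static const std::string alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    for (char a : alphabet)
        for (char b : alphabet)
            for (char c : alphabet) {
                std::string guess{a, b, c};
                if (encrypt_block(guess) == target)
                    return guess;          // chunk recovered
            }
    return {};
}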
A good algorithm (or a one-time pad) needs a truly random key. Problems occur when generating random numbers on computers, because computer-generated numbers aren't truly random. Programmers have been known to use their compiler's default random number generator to create keys, which is a repeatable process.
To get around this, some programmers develop their own methods of key randomisation, but these often fail to produce unpredictable keys, or result in picking a key weaker than it could be.
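As a sketch of the difference, the fragment below contrasts the predictable compiler-supplied generator with std::random_device. Note that the C++ standard does not guarantee std::random_device is cryptographically strong, so this is illustrative rather than a recommendation; production keys should come from the platform's CSPRNG or a crypto library.

#include <cstdlib>
#include <ctime>
#include <random>

int main()
{
    // Predictable: anyone who can guess the seed (here just the
    // current time) can regenerate the entire "random" key.
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    int weak_key_byte = std::rand() % 256;

    // Better: std::random_device draws on an OS entropy source on
    // most platforms, though its quality is implementation-defined.
    std::random_device rd;
    unsigned int stronger_key_word = rd();

    (void)weak_key_byte;
    (void)stronger_key_word;
}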
Just because a key is random doesn't mean it is secure. Depending on the algorithm, some keys are stronger than others, particularly with public key cryptography. This is similar to physical locks: most lock pickers can, if they see a key, judge how hard the matching lock will be to pick. The same make of lock will be easier or harder to pick depending on how the key is cut.
The failure to concentrate on good key generation is a major drawback in many practical deployments of cryptography.
Suppose we have encrypted data stored on media and we wish to work with it. We load our decryption key into memory, decrypt all the data off the media, and work with the unencrypted data in memory. The result? We are trusting the operating system to keep the data in memory secure and not dump memory to disk. Virtual memory is a serious concern, as both the plain text and the encrypted text could be present to help an attacker work out the key. In the worst case, the key itself could end up in the virtual memory.
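On POSIX systems, one partial mitigation is to lock key material into physical memory so that it cannot be swapped out. A minimal sketch, assuming a Unix-like platform:

#include <cstring>
#include <sys/mman.h>   // POSIX only

int main()
{
    unsigned char key[32];

    // Ask the OS not to page this buffer out, so the key cannot
    // end up in the swap file on disk.
    if (mlock(key, sizeof key) != 0)
        return 1;       // can fail without sufficient privileges

    // ... fill 'key' from a secure source and use it ...

    // Scrub before unlocking. Note that a plain memset can be
    // optimised away as a dead store; real code should prefer a
    // guaranteed wipe such as explicit_bzero where available.
    std::memset(key, 0, sizeof key);
    munlock(key, sizeof key);
    return 0;
}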
If the lock is weak, any length of key will result in weak encryption. Several algorithms have, over the centuries, been shown to have flaws that help an attacker decrypt data. That doesn't mean you should never use these weak algorithms, but only use them where the data doesn't need to be secure, merely look secure. This is often acceptable when the people who are after the data lack the resources to break the code.
Encryption needs to be approached carefully and not treated as a simple add-on. Many problems can be avoided by taking care in program design. For example, if we keep most data encrypted and only decrypt it when needed, we avoid having large samples of decrypted data in memory (a sketch of this pattern follows below). Other techniques involve being careful about how the data is encrypted, or what data is selected for encryption.
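Here is a minimal sketch of that decrypt-on-demand idea, with a toy decrypt standing in for a real library call:

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a real decryption call, so the sketch compiles.
std::string decrypt(const std::string& ciphertext)
{
    std::string out = ciphertext;
    for (char& ch : out)
        ch ^= 0x2A;
    return out;
}

class RecordStore
{
    std::vector<std::string> encrypted_; // records stay encrypted at rest

public:
    explicit RecordStore(std::vector<std::string> records)
        : encrypted_(std::move(records)) {}

    // Decrypt a single record only at the moment it is needed,
    // rather than holding the whole data set in clear in memory.
    std::string fetch(std::size_t i) const
    {
        return decrypt(encrypted_.at(i));
    }
};

In short: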
- Pick a strong, modern algorithm.
- Generate truly random keys that are not weak for your encryption system.
- Use a long key length where security outweighs performance concerns.
- Decrypt data only when needed.
- Be careful about pre-processing data prior to encryption.
- Avoid encrypting the same data multiple times.
Encryption may be a black art, but understanding how it works isn't needed to incorporate secure encryption into programs. Instead, the programmer needs to know how poor implementations occur and how to avoid reproducing them, unless your aim is just to give your users the illusion that their data is secure!