David Darling

cryptography

Cryptography is the science and mathematics of encoding and decoding information. For several thousand years, at least as far back as the Babylonians, people have been sending secret information back and forth for military, political, and other purposes. Famously, Julius Caesar is supposed to have used a system (now named, after him, the Caesar cipher), in which every letter in the original message, or plaintext, is shifted by a fixed number of letters. For example, if the shift is by four letters, then every A is replaced by an E, every B by an F, and so on.
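
As a concrete illustration, here is a minimal sketch of that shift-by-four rule in Python; the function name and example message are purely illustrative.

    def caesar_shift(text, shift=4):
        """Shift every letter of an uppercase message by a fixed amount (the Caesar cipher)."""
        result = []
        for ch in text:
            if ch.isalpha():
                # Map A..Z to 0..25, add the shift modulo 26, and map back to a letter.
                result.append(chr((ord(ch.upper()) - ord('A') + shift) % 26 + ord('A')))
            else:
                result.append(ch)  # leave spaces and punctuation untouched
        return ''.join(result)

    print(caesar_shift("ATTACK AT DAWN"))       # -> EXXEGO EX HEAR
    print(caesar_shift("EXXEGO EX HEAR", -4))   # shifting back by four recovers the plaintext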

 


Some cryptographic terms

The "shift by n" rule is a simple cryptosystem and results in a disguised message, or ciphertext, that can be read only by someone who knows the rule. This someone is usually the the intended receiver, but it could also be a shadowy third party, or eavesdropper, who has intercepted the message and figured out how to crack the code. Cryptography is the art of creating and using cryptosystems; cryptanalysis is the art of breaking cryptosystems, or seeing through the disguise even when you're not supposed to be able to; cryptology is the combined field of cryptography and cryptanalysis. The information needed to encrypt or decrypt a message is known as the key. One of the big challenges in cryptography is to devise keys that are hard or impossible for an eavesdropper who manages to get hold of a ciphertext to extract. A seasoned cryptanalyst wouldn't take long to see through anything as simple as the Caesar cipher, and even a much more elaborate system fails utterly if both the message and the key fall into enemy hands.

 


The Zimmermann note

On 16 January 1917, during World War I, the German foreign secretary Arthur Zimmermann sent a telegram in ciphertext to Count Johann von Bernstorff, the German ambassador to the United States. It said that in the event of war with the United States, Mexico should be asked to enter the war as a German ally. In return, Germany promised that, upon emerging victorious, it would hand back to Mexico the lost territories of Texas, New Mexico, and Arizona. This devastatingly incriminating message was intercepted by British naval intelligence, which held a copy of the diplomatic codebook previously captured from the Germans. The Zimmermann note, as it's become known, consists of a series of four- and five-digit numbers, each of which corresponds to a word in the codebook (for example, 67893 = "Mexico").

 

With a copy of the deciphered telegram in their hands, the British had the means to do what they desperately wanted: persuade the United States to enter the war against Germany. But the British government was reluctant to expose the actual telegram, because the Germans would then immediately suspect that their code had been cracked and would change it. The British knew that a decrypted Mexican version of the telegram existed; if they could get hold of it, they could pretend it had been obtained through espionage activity in Mexico and not as a result of code breaking. Consequently, the British government contacted one of its agents in Mexico, who managed to obtain a copy of the Mexican version of the telegram. On 25 February 1917, this copy was delivered to President Woodrow Wilson, and on 1 March the US government put the plaintext of the telegram before the press. An initially skeptical American public, which thought the telegram might have been faked, was persuaded by none other than Zimmermann himself, who openly confirmed its authenticity two days after its publication. On 6 April 1917, Congress approved US entry into World War I.

 


The Vernam cipher

In the same year, a group of electrical engineers working for AT&T in New York was put in charge of finding a secure way to send wartime messages by telegraph. The group discovered that even when multiple messages were speeding along a telegraph wire in both directions, a savvy third party equipped with an oscilloscope could monitor the frequency changes and transcribe the messages. One member of the group, Gilbert Vernam, came up with a method to foil such eavesdropping. He suggested feeding into the machine a paper tape prepunched with random characters. Character by character, the tape is consumed in synchrony with the input message characters. The set of random electrical pulses produced from the tape is combined with the pulses generated from the text in a special way, known as modulo-2 addition, to create an encrypted message. An identical tape at the other end performs another character-by-character, modulo-2 addition, which, by the nature of this arithmetic, cancels out the obscuring characters to reveal the plaintext. Anyone covertly tapping into the message en route, however, would be treated to a meaningless jumble of pulses.
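
A minimal sketch of Vernam's idea in modern terms, with bitwise XOR standing in for the modulo-2 addition of pulses and random bytes standing in for the prepunched tape (the function name and messages are illustrative):

    import os

    def vernam(data: bytes, tape: bytes) -> bytes:
        """Combine message and tape character by character with modulo-2 (XOR) addition."""
        assert len(tape) >= len(data), "the tape must be at least as long as the message"
        return bytes(m ^ k for m, k in zip(data, tape))

    message = b"PROCEED AT ONCE"
    tape = os.urandom(len(message))         # stands in for the random prepunched tape

    ciphertext = vernam(message, tape)      # a meaningless jumble to an eavesdropper
    recovered = vernam(ciphertext, tape)    # the identical tape cancels the obscuring characters
    assert recovered == message

Applying the same operation twice with the same tape cancels out exactly, which is the property of modulo-2 addition that Vernam exploited.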

 

Vernam had shown how messages could be encoded and decoded automatically, on the fly – a crucial breakthrough because it meant that future cryptography wouldn't have to depend on labor-intensive, off-line processes. Such a coded message could easily be tagged on to any form of communications, from telephone calls to radio transmissions, and eventually to e-mail messages flashed over the Internet. But in the way it was originally implemented, Vernam's technique had a weakness that was quickly spotted by Joseph Mauborgne, a major in the US Army Signal Corps (and later its chief). Mauborgne found that if the same randomizing tape was used more than once, the coded messages could be partly unscrambled by superimposing them and looking for patterns. To deny an eavesdropper any chance of getting at the hidden content, a unique, random key had to be used for each transmission.
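
Mauborgne's objection can be made concrete: combining two ciphertexts produced with the same tape cancels the tape entirely, leaving the XOR of the two plaintexts, which is rich in exploitable patterns. A sketch, reusing the illustrative vernam helper above:

    # Reusing one tape for two messages - the mistake Mauborgne warned against.
    msg1 = b"MEET AT THE BRIDGE AT NOON"
    msg2 = b"SEND MORE MONEY IMMEDIATELY"
    tape = os.urandom(max(len(msg1), len(msg2)))

    c1 = vernam(msg1, tape[:len(msg1)])
    c2 = vernam(msg2, tape[:len(msg2)])

    # The tape drops out of the combination, leaving msg1 XOR msg2;
    # zero bytes mark positions where the two plaintexts agree.
    leak = bytes(a ^ b for a, b in zip(c1, c2))
    print([i for i, b in enumerate(leak) if b == 0])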

 

Compared with most cryptosystems, the Vernam cipher is disarmingly simple and so effective that, in time, it became a routine tool of the espionage trade. Suppose Alice and Bob – the names that cryptologists traditionally give to the sender and receiver – are two spies working for the same agency. Each has a copy of the encryption key in the form of a paper pad. These pads are identical, each consisting of pages on which blocks of random letters are written. A block must be at least as long as the plaintext message to ensure that every character can be disguised. To prepare a coded message, the sender, Alice, refers to the random letters on the topmost page of her pad. One way to carry out the encryption and decryption is similar to the Caesar cipher. Suppose the plaintext message starts "Meet Vladimir outside the French embassy at midnight ..." and the random string on the pad begins "CQTNOX ..." Alice takes the first letter of the random string, C, equates this to 3 (since C is the third letter of the alphabet) and replaces the first letter of the plaintext by P (three letters on from M). She then takes the second letter from the one-time pad, Q (17), and advances the second letter of the plaintext by 17 places to give V as the second letter in the ciphertext. Each letter in the ciphertext is generated by applying the same rule using one and only one of the random letters on the pad to the corresponding letter of the plaintext. The receiver, Bob, referring to the identical page of his pad, simply reverses the process to extract the original message. Crucial to the security of this system are several factors: the pads must never fall into enemy hands, the letters on the pads must be truly random, and no page or portion of the key must ever be reused for another encryption. From this one-page-per-message golden rule comes the Vernam cipher's most familiar name: the one-time pad.
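
The letter arithmetic Alice performs can be sketched as follows, using the article's convention that A counts as 1, B as 2, and so on; the six-letter snippets are simply the start of the worked example, and the function names are illustrative.

    from string import ascii_uppercase as ALPHA

    def letter_value(c):
        """A = 1, B = 2, ..., Z = 26, as in the worked example."""
        return ALPHA.index(c) + 1

    def otp_encrypt(plaintext, pad):
        # Advance each plaintext letter by the value of the matching pad letter, wrapping round.
        return ''.join(ALPHA[(ALPHA.index(p) + letter_value(k)) % 26]
                       for p, k in zip(plaintext, pad))

    def otp_decrypt(ciphertext, pad):
        # Subtracting the same pad values reverses the process.
        return ''.join(ALPHA[(ALPHA.index(c) - letter_value(k)) % 26]
                       for c, k in zip(ciphertext, pad))

    pad = "CQTNOX"
    cipher = otp_encrypt("MEETVL", pad)    # begins "PV...", matching M -> P and E -> V above
    assert otp_decrypt(cipher, pad) == "MEETVL"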

 

In the real world, inevitably, mistakes were made and the rules not always followed – with occasionally disastrous results. The reuse of keys by the Soviet Union, owing to a manufacturer's accidental duplication of one-time pad pages, enabled American cryptanalysts to unmask the atomic spy Klaus Fuchs in 1949. During World War II, code crackers at Bletchley Park in Buckinghamshire, England, decoded important transmissions from the German military on a regular basis because the encrypting method used wasn't as foolproof as the designers had intended. The German army high command had asked the Lorenz company to produce a high-security teletypewriter to enable secret radio communications. Lorenz designed a code machine based on the Vernam cipher, but with one significant difference. The problem in a war situation was how to make sure that the same random character tapes were available at each end of a communications link, and that they were both set to the same starting position. Lorenz decided it would be simpler under the circumstances to build a machine to generate the obscuring character sequence. Being a machine – a deterministic device, albeit a fiendishly elaborate one – it couldn't produce a truly random sequence of characters but instead only what's known as a pseudorandom sequence. Unfortunately for the German army, it was more pseudo than random, and this led to its unraveling at the hands of the Bletchley Colossus, an early electronic computer built especially for the job.

 

Up to this point, no one knew whether there was any way to crack Vernam's cipher if it was used as prescribed – one time only, with true randomness and no intercepts. But in 1949, in a brilliant paper that swiftly followed his founding of information theory, Claude Shannon showed by watertight mathematical logic that, provided all the conditions for security are met, the one-time pad is totally unbreakable.1 The ciphertext of the one-time pad gives an enemy cryptanalyst no information at all about the message except for its length. On the other hand, Shannon demonstrated, if the key is shorter than the plaintext, portions of the encoded message can always be extracted by someone who mounts a powerful enough analytical attack. Given the provably perfect level of security offered by Vernam's cipher, you might wonder why efforts to develop other systems continued.

 


Practical cryptography in the electronic age

The most obvious reason is the length of the one-time pad key, which has to be at least as great as that of the plaintext. In the past, this was far too cumbersome for most purposes. Espionage, yes. Diplomatic communications, including the Washington-Moscow hotline, yes. These high-priority, episodic tasks could realistically take advantage of the only provably secure cryptographic system on the planet. But for routine commercial applications involving huge daily volumes of data, a far more succinct method of keeping information safe from prying eyes was needed. Following the spread of electronic computers, a number of commercial ciphers were developed, the most widely used of which became the Data Encryption Standard (DES), adapted from an IBM cipher in the 1970s. This is based on a secret key of 56 bits (binary digits – ones and zeros) that's used repeatedly over a certain length of time before being changed. The plaintext is chopped into 64-bit blocks, each of which is combined with the 56-bit key in a complicated way involving permutations and arcane mathematical functions. Although it lacks the 100 percent security of the one-time pad, it's more than adequate in practice. No quicker method is known for breaking the DES cipher than to try all 2^56 values of the key – a hopeless task for a would-be cracker, even using the fastest computers on Earth.
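
A rough back-of-the-envelope check of that claim; the search rate below is an assumed figure chosen only for illustration:

    keys = 2 ** 56                               # size of the DES key space
    print(f"{keys:.2e} candidate keys")          # roughly 7.2e16

    # Assume, purely for illustration, hardware that tests a million keys per second.
    rate = 1_000_000
    years = keys / rate / (3600 * 24 * 365)
    print(f"about {years:,.0f} years to sweep the whole key space")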

 

All systems that depend on a single secret key, however, have an Achilles' heel. Because exactly the same key is used for encoding and decoding, resulting in what's termed a symmetric cipher, anyone wanting to use the system legitimately has to be given a copy of the key in advance. This raises the problem of how to make sure that the key remains secret while it's being distributed. In the case of our two spies, for instance, what if the courier delivering a new codebook to Bob in Beijing is bribed en route to reveal its contents to the enemy? Or, what if the other side makes a copy of it while the courier's taking a nap? Then the security of the system is totally compromised and, worse still, everyone but the enemy is oblivious to the fact.

 

A way around the key distribution problem was demonstrated in the mid-1970s by Martin Hellman, an electrical engineering professor at Stanford University in California, and two of his graduate students, Whitfield Diffie and Ralph Merkle.2 (More recently, it's emerged that a similar technique was developed a bit earlier, but under wraps, at the British government's eavesdropping agency, GCHQ, and possibly also at the National Security Agency in the United States.) Called public-key cryptography (PKC), it uses, instead of a single, long key shared between sender and receiver, two sorts of keys that are complementary and mathematically related. One of these is a public key, which can be shouted from the rooftops if desired; the other is a private key, known only to the receiver. Anyone with the public key can encrypt and send confidential messages, but only someone who has the private key can decrypt and read them. Many of us benefit from PKC every day, probably without realizing it: it's the basis for most secure transactions, such as credit card payments, on the Internet. In practice, a public-key cipher is usually used in tandem with a symmetric cipher; the public-key cipher ensures the secure exchange of a key for the symmetric cipher, while the symmetric cipher, which works much faster than its public-key counterpart, is used to encrypt the actual message. Because of its crucial role in electronic data security, PKC has been hailed as one of the most significant practical discoveries of twentieth-century math.
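
To give a flavor of the mathematics, here is a toy version of the Diffie-Hellman key exchange from reference 2, with deliberately tiny numbers; real systems use primes hundreds of digits long, and every value below is chosen purely for illustration.

    # Public parameters, agreed in the open.
    p = 23      # a small prime (real systems use enormous primes)
    g = 5       # a generator modulo p

    alice_secret = 6                          # known only to Alice
    bob_secret = 15                           # known only to Bob

    alice_public = pow(g, alice_secret, p)    # sent openly to Bob
    bob_public = pow(g, bob_secret, p)        # sent openly to Alice

    # Each side combines the other's public value with its own secret...
    alice_key = pow(bob_public, alice_secret, p)
    bob_key = pow(alice_public, bob_secret, p)

    assert alice_key == bob_key               # ...and arrives at the same shared secret
    # That shared number can now serve as the key for a fast symmetric cipher.

An eavesdropper who sees only p, g, and the two public values is left facing the discrete-logarithm problem, which is what makes the exchange secure.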

 

Less well-touted but, in the end, perhaps even more far-reaching was a breakthrough made in 1970 by Stephen Wiesner at Columbia University in New York. Wiesner effectively invented quantum cryptography, an important branch of quantum information science. In a paper entitled "Conjugate Coding" he showed how cryptography could join forces with quantum mechanics to solve two problems in confidentiality that lay beyond the reach of ordinary (classical) physics. The first was how to make banknotes that were impossible to counterfeit. The second was how to combine two classical (nonquantum) messages into a single transmission from which the receiver could extract one message or the other but not both. For more, see the article on quantum cryptography.

 


References

1. Shannon, C. E. "Communication Theory of Secrecy Systems." Bell System Technical Journal, 28(4), 656–715 (1949).
2. Diffie, W. and Hellman, M. E. "New Directions in Cryptography." IEEE Transactions on Information Theory, 22(6), 644–654 (1976).