
Chapter 7: Cryptographic Systems

There are a number of ways to secure a network. Networks can be secured through device hardening, authentication, authorization, and accounting (AAA), access control lists (ACLs), firewall features, and intrusion prevention system (IPS) implementations. These combined features protect the infrastructure and end devices
within the local network. But how is network traffic protected when traversing the public Internet? The answer is
through cryptographic methods.

The principles of cryptology can be used to explain how modern day protocols and algorithms are used to secure
communications. Cryptology is the science of making and breaking secret codes. The development and use of
codes is called cryptography, and breaking codes is called cryptanalysis. Cryptography has been used for
centuries to protect secret documents. For example, Julius Caesar used a simple alphabetic cipher to encrypt
messages to his generals in the field. His generals would have knowledge of the cipher key required to decrypt
the messages.

Today, modern day cryptographic methods are used in many different ways to ensure secure communications.

Authentication, Integrity, and Confidentiality

To ensure secure communications across both the public and private infrastructure, the network administrator’s
first goal is to secure the network infrastructure, including routers, switches, servers, and hosts. This can be
accomplished using device hardening, AAA access control, ACLs, firewalls, monitoring threats using IPS,
securing endpoints using Advanced Malware Protection (AMP), and enforcing email and web security using the
Cisco Email Security Appliance (ESA) and Cisco Web Security Appliance (WSA). The figure shows an example
of a secure network topology.

The next goal is to secure the data as it travels across various links. This may include internal traffic, but of
greater concern is protecting the data that travels outside of the organization to branch sites, telecommuter sites,
and partner sites.

There are three primary objectives of securing communications:

 Authentication - Guarantees that the message is not a forgery and does actually come from whom it
states.

 Integrity - Guarantees that no one intercepted the message and altered it; similar to a checksum function
in a frame.

 Confidentiality - Guarantees that if the message is captured, it cannot be deciphered.

Many modern networks ensure authentication with protocols, such as hash message authentication code
(HMAC). Integrity is ensured by implementing either MD5 or SHA hash-generating algorithms. Data
confidentiality is ensured through symmetric encryption algorithms, including Data Encryption Standard (DES),
3DES, and Advanced Encryption Standard (AES). Symmetric encryption algorithms are based on the premise
that each communicating party knows the pre-shared key. Data confidentiality can also be ensured using
asymmetric algorithms, including Rivest, Shamir, and Adleman (RSA) and the public key infrastructure (PKI).
Asymmetric encryption algorithms are based on the assumption that the two communicating parties have not
previously shared a secret and must establish a secure method to do so.

Note: These primary objectives are similar but not identical to the three primary issues in securing and
maintaining a computer network which are confidentiality, integrity, and availability.


Authentication

There are two primary methods for validating a source in network communications: authentication services and
data nonrepudiation services.
Authentication guarantees that a message comes from the source that it claims to come from. Authentication is
similar to entering a secure personal identification number (PIN) for banking at an ATM, as shown in the figure.
The PIN should only be known to the user and the financial institution. The PIN is a shared secret that helps
protect against forgeries. In network communications, authentication can be accomplished using cryptographic
methods. This is especially important for applications or protocols, such as email or IP, that do not have built-in
mechanisms to prevent spoofing of the source.

Data nonrepudiation is a similar service that allows the sender of a message to be uniquely identified. With
nonrepudiation services in place, a sender cannot deny having been the source of that message. It might appear
that the authenticity service and the nonrepudiation service are fulfilling the same function. Although both address
the question of the proven identity of the sender, there is a difference between the two.

The most important part of nonrepudiation is that a device cannot repudiate, or refute, the validity of a message it sent. Nonrepudiation relies on the fact that only the sender possesses the unique characteristics or signature used to create the message. Not even the receiving device can know how the sender created the message, because the receiver could then pretend to be the source.

If the major concern is for the receiving device to validate the source and there is no concern about the receiving
device imitating the source, it does not matter whether the sender and receiver both know how to treat a
message to provide authenticity. An example of authenticity versus nonrepudiation is a data exchange between
two computers of the same company versus a data exchange between a customer and an e-commerce website.
The two computers exchanging data within an organization do not have to prove to the other which of them sent
a message.

This practice is not acceptable in business applications, such as when purchasing items online. If the online store
knows how a customer message was created to prove the authenticity, then it could easily fake “authentic”
orders. In such a scenario, the sender must be the only party with the knowledge of how the message was
created. The online store can prove to others that the order was, in fact, sent by the customer, and the customer
cannot argue that the order is invalid.
Data Integrity

Data integrity ensures that messages are not altered in transit. With data integrity, the receiver can verify that the
received message is identical to the sent message and that no manipulation occurred.

European nobility ensured the data integrity of documents by creating a wax seal to close an envelope, as shown in the figure. The seal was often created using a signet ring, which bore the family crest, initials, a portrait, or a personal symbol or motto of its owner. An unbroken seal on an envelope guaranteed the integrity of its contents. It also guaranteed authenticity based on the unique signet ring impression.

Data Confidentiality

Data confidentiality ensures privacy so that only the receiver can read the message. This can be achieved
through encryption. Encryption is the process of scrambling data so that it cannot be easily read by unauthorized
parties.

When enabling encryption, readable data is called plaintext, or cleartext, while the encrypted version is called
encrypted text or ciphertext. In this course, we will use the term ciphertext. The plaintext readable message is
converted to ciphertext, which is the unreadable, disguised message. Decryption reverses the process. A key is
required to encrypt and decrypt a message. The key is the link between the plaintext and ciphertext.

Historically, various encryption algorithms and methods have been used. Julius Caesar is said to have secured
messages by putting two sets of the alphabet, side-by-side, and then shifting one of them by a specific number of
places. The number of places in the shift serves as the key. He converted plaintext into ciphertext using this key,
and only his generals, who also had the key, knew how to decipher the messages. This method is now known as
the Caesar cipher. An encoded message using the Caesar cipher is shown in the figure.

Using a hash function is another way to ensure data confidentiality. A hash function transforms a string of
characters into a usually shorter, fixed-length value or key that represents the original string. The difference
between hashing and encryption is in how the data is stored. With encrypted text, the data can be decrypted with
a key. With the hash function, after the data is entered and converted using the hash function, the plaintext is
gone. The hashed data is simply there for comparison. For example, when a user enters a password, the
password is hashed and then compared to the stored hashed value. If the user forgets the password, it is
impossible to decrypt the stored value, and the password must be reset.
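
For illustration, the password comparison described above can be sketched in a few lines of Python. The password value and the choice of SHA-256 are assumptions for the example only; real systems would also add a salt and use a slow, password-specific hash.

    # Store only the hash of the password, never the plaintext (illustrative values).
    import hashlib

    stored_hash = hashlib.sha256("MySecretPassword".encode()).hexdigest()  # saved at enrollment

    def check_password(entered: str) -> bool:
        # Hash the entered password and compare it to the stored hash value.
        return hashlib.sha256(entered.encode()).hexdigest() == stored_hash

    print(check_password("MySecretPassword"))  # True
    print(check_password("WrongPassword"))     # False

Because the stored value cannot be reversed, a forgotten password can only be reset, not recovered.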

The purpose of encryption and hashing is to guarantee confidentiality so that only authorized entities can read
the message.
Creating Cipher Text

The history of cryptography starts in diplomatic circles thousands of years ago. Messengers from a king’s court
took encrypted messages to other courts. Occasionally, other courts not involved in the communication attempted to steal messages sent to a kingdom they considered an adversary. Not long after, military
commanders started using encryption to secure messages.

Over the centuries, various cipher methods, physical devices, and aids have been used to encrypt and decrypt
text:
 Scytale (Figure 1)

 Caesar Cipher (Figure 2)

 Vigenère Cipher (Figure 3)

 Enigma Machine (Figure 4)

Each of these encryption methods uses a specific algorithm, called a cipher, to encrypt and decrypt messages. A
cipher is a series of well-defined steps that can be followed as a procedure when encrypting and decrypting
messages. There are several methods of creating ciphertext:

 Transposition

 Substitution

 One-time pad
Transposition Ciphers

In transposition ciphers, no letters are replaced; they are simply rearranged. An example of this type of cipher is taking the FLANK EAST ATTACK AT DAWN message and transposing it to read NWAD TA KCATTA TSAE KNALF. In this example, the key is to reverse the letters.

Another example of a transposition cipher is known as the rail fence cipher. In this transposition, the words are
spelled out as if they were a rail fence. They are staggered, some in front, some in the middle and some in back,
across several parallel lines. For example, refer to the plaintext message in Figure 1. Figure 2 displays how to
transpose the message using a rail fence cipher with a key of three. The key specifies that three lines are
required when creating the encrypted code. The resulting ciphertext is displayed in Figure 3.
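
The rail fence transposition can be sketched in Python as shown below. The zigzag write and row-by-row read-off follow the standard rail fence procedure; the exact letter grouping in the figures may differ.

    # Write the letters in a zigzag across the rails, then read each rail (row) in turn.
    def rail_fence_encrypt(plaintext: str, rails: int) -> str:
        rows = [[] for _ in range(rails)]
        row, step = 0, 1
        for ch in plaintext.replace(" ", ""):
            rows[row].append(ch)
            if row == 0:
                step = 1
            elif row == rails - 1:
                step = -1
            row += step
        return "".join("".join(r) for r in rows)

    print(rail_fence_encrypt("FLANK EAST ATTACK AT DAWN", 3))  # FKTATNLNESATCADWAATKA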

Modern encryption algorithms, such as the DES and the 3DES, still use transposition as part of the algorithm.
Substitution Ciphers

Substitution ciphers substitute one letter for another. In their simplest form, substitution ciphers retain the letter
frequency of the original message.

The Caesar cipher was a simple substitution cipher. For example, refer to the plaintext message in Figure 1. If
the key used was 3, the letter A was moved three spaces to the right, resulting in an encoded message that used
the letter D in place of the letter A, as shown in Figure 2. The letter E would be the substitute for the letter B, and
so on. The resulting ciphertext is displayed in Figure 3. If the key used was 8, then A becomes I, B becomes J,
and so on.
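
A minimal Python sketch of this substitution is shown below; it shifts each letter of the plaintext by the key and leaves spaces unchanged.

    # Caesar (monoalphabetic substitution) cipher: shift each letter 'key' places to the right.
    def caesar_encrypt(plaintext: str, key: int) -> str:
        result = []
        for ch in plaintext.upper():
            if ch.isalpha():
                result.append(chr((ord(ch) - ord("A") + key) % 26 + ord("A")))
            else:
                result.append(ch)  # leave spaces and punctuation unchanged
        return "".join(result)

    print(caesar_encrypt("FLANK EAST ATTACK AT DAWN", 3))  # IODQN HDVW DWWDFN DW GDZQ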

Because the entire message relied on the same single key shift, the Caesar cipher is referred to as a
monoalphabetic substitution cipher. It is also fairly easy to crack. For this reason, polyalphabetic ciphers, such as
the Vigenère cipher, were invented. The method was originally described by Giovan Battista Bellaso in 1553, but
the scheme was later misattributed to the French diplomat and cryptographer, Blaise de Vigenère.
Substitution Ciphers (Cont.)

The Vigenère cipher is based on the Caesar cipher, except that it encrypts text by using a different polyalphabetic
key shift for every plaintext letter. The different key shift is identified using a shared key between sender and
receiver. The plaintext message can be encrypted and decrypted using the Vigenère Cipher Table, as shown in
the figure.

To illustrate how the Vigenère Cipher Table works, suppose that a sender and receiver have a shared secret key
composed of these letters: SECRETKEY. The sender uses this secret key to encode the plaintext FLANK EAST
ATTACK AT DAWN:

 The F (FLANK) is encoded by looking at the intersection of column F and the row starting with S
(SECRETKEY), resulting in the cipher letter X.

 The L (FLANK) is encoded by looking at the intersection of column L and the row starting with E
(SECRETKEY), resulting in the cipher letter P.

 The A (FLANK) is encoded by looking at the intersection of column A and the row starting with C
(SECRETKEY), resulting in the cipher letter C.

 The N (FLANK) is encoded by looking at the intersection of column N and the row starting with R
(SECRETKEY), resulting in the cipher letter E.

 The K (FLANK) is encoded by looking at the intersection of column K and the row starting with E
(SECRETKEY), resulting in the cipher letter O.
The process continues until the entire text message FLANK EAST ATTACK AT DAWN is encrypted. The process
can also be reversed. For instance, the F is still the cipher letter X if encoded by looking at the intersection
of row F (FLANK) and the column starting with S (SECRETKEY).

When using the Vigenère cipher, if the message is longer than the key, the key is repeated. For example,
SECRETKEYSECRETKEYSEC is required to encode FLANK EAST ATTACK AT DAWN:

Secret key: SECRE TKEY SECRET KE YSEC

Plaintext: FLANK EAST ATTACK AT DAWN

Cipher text: XPCEO XKWR SXVRGD KX BSAP
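
The table lookups shown above are equivalent to shifting each plaintext letter by the alphabetic position of the matching key letter. A minimal Python sketch of this idea:

    # Vigenère cipher: each plaintext letter is shifted by the value of the matching key letter.
    def vigenere_encrypt(plaintext: str, key: str) -> str:
        ciphertext, key_index = [], 0
        for ch in plaintext.upper():
            if ch.isalpha():
                shift = ord(key[key_index % len(key)].upper()) - ord("A")
                ciphertext.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
                key_index += 1  # the key advances only on letters, not on spaces
            else:
                ciphertext.append(ch)
        return "".join(ciphertext)

    print(vigenere_encrypt("FLANK EAST ATTACK AT DAWN", "SECRETKEY"))  # XPCEO XKWR SXVRGD KX BSAP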

Although the Vigenère cipher uses a longer key, it can still be cracked. For this reason, a better cipher method
was required.

One-Time Pad Ciphers

Gilbert Vernam was an AT&T Bell Labs engineer who, in 1917, invented, and later patented, the stream cipher
displayed in Figure 1. He also co-invented the one-time pad cipher. Vernam proposed a teletype cipher in which
a prepared key consisting of an arbitrarily long, non-repeating sequence of numbers was kept on paper tape. It
was then combined character by character with the plaintext message to produce the ciphertext. Figure 2
displays an example of a Vernam cipher teletype device.

To decipher the ciphertext, the same paper tape key was again combined character by character, producing the
plaintext. Each tape was used only once; hence, the name one-time pad. As long as the key tape does not repeat
or is not reused, this type of cipher is immune to cryptanalytic attack. This is because the available ciphertext
does not display the pattern of the key.
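
The character-by-character combination can be illustrated in Python using XOR, which is how the one-time pad idea is usually realized on computers. The key below is generated locally for the example; a true one-time pad key must be genuinely random, as long as the message, and never reused.

    # One-time pad sketch: XOR each message byte with a key byte; XOR with the same key reverses it.
    import os

    message = b"FLANK EAST ATTACK AT DAWN"
    pad = os.urandom(len(message))            # one key byte per message byte (example only)

    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))
    print(recovered == message)               # True: the same pad restores the plaintext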

Several difficulties are inherent in using one-time pads in the real world. One difficulty is the challenge of creating random data. Computers, because they have a mathematical foundation, are incapable of creating truly random data. Additionally, if the key is used more than once, the cipher is easy to break. RC4 is an example of this type of cipher that has been widely used on the Internet. Again, because the key is generated by a computer, it is not truly random. In addition to these issues, key distribution is also challenging with this type of cipher.
Cracking Code

For as long as there has been cryptography, there has been cryptanalysis. Cryptanalysis is the practice and
study of determining the meaning of encrypted information (cracking the code), without access to the shared
secret key.

Throughout history, there have been many instances of cryptanalysis:

 The Vigenère cipher was considered unbreakable until it was broken in the 19th century by the English cryptographer Charles Babbage.

 Mary, Queen of Scots, plotted to overthrow Queen Elizabeth I and sent encrypted messages to her co-conspirators. The cracking of the code used in this plot led to the beheading of Mary in 1587.

 The Enigma-encrypted communications were used by the Germans to navigate and direct their U-boats in
the Atlantic. The Polish and British cryptanalysts broke the German Enigma code. Winston Churchill was of
the opinion that it was a turning point in WWII.

The figure symbolizes that many keys must be tried before successfully breaking a code.

Methods for Cracking Code

Several methods are used in cryptanalysis:

 Brute-force method - The attacker tries every possible key knowing that eventually one of them will work.

 Ciphertext-only method - The attacker has the ciphertext of several encrypted messages but no knowledge of
the underlying plaintext.

 Known-Plaintext method - The attacker has access to the ciphertext of several messages and knows
something about the plaintext underlying that ciphertext.

 Chosen-Plaintext method - The attacker chooses which data the encryption device encrypts and
observes the ciphertext output.

 Chosen-Ciphertext method - The attacker can choose different ciphertext to be decrypted and has
access to the decrypted plaintext.

 Meet-in-the-Middle method - The attacker knows a portion of the plaintext and the corresponding
ciphertext.

Note: Details of how these methods are implemented are beyond the scope of this course.

The simplest method to understand is the brute-force method. For example, if a thief attempted to steal a bicycle
secured with the combination lock displayed in the figure, they would have to attempt a maximum of 10,000
different possibilities (0000 to 9999). All encryption algorithms are vulnerable to this attack. On average, a brute-
force attack succeeds about 50 percent of the way through the keyspace, which is the set of all possible keys.

The objective of modern cryptographers is to have a keyspace large enough that it takes too much time and
money to accomplish a brute-force attack.

Cracking Code Example

When choosing a cryptanalysis method, consider the Caesar cipher encrypted code. The best way to crack the
code is to use brute force. Because there are only 25 possible rotations, the effort is relatively small to try all
possible rotations and see which one returns something that makes sense.
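
A brute-force attempt on a Caesar-encrypted message can be sketched in Python by simply trying every rotation, as shown below; a person (or a dictionary check) then picks the rotation that produces readable text.

    # Try every possible Caesar rotation of the ciphertext and print each candidate plaintext.
    def caesar_decrypt(ciphertext: str, key: int) -> str:
        return "".join(
            chr((ord(ch) - ord("A") - key) % 26 + ord("A")) if ch.isalpha() else ch
            for ch in ciphertext.upper()
        )

    for key in range(1, 26):
        print(key, caesar_decrypt("IODQN HDVW DWWDFN DW GDZQ", key))  # key 3 reveals the message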

A more scientific approach is to use the fact that some characters in the English alphabet are used more often
than others. This method is called frequency analysis. For example, the graph in Figure 1 outlines the frequency
of letters in the English language. The letters E, T, and A are the most popular letters used in the English
language. The letters J, Q, X, and Z are the least popular. Understanding this pattern can help discover which
letters are probably included in the cipher message.

In the Caesar ciphered message IODQN HDVW DWWDFN DW GDZQ, shown in Figure 2, the cipher letter D
appears six times while the cipher letter W appears four times. There is a good possibility that the cipher letters D
and W represent either the plaintext E, T or A. In this case, the D represents the letter A, and the W represents
the letter T.
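
Counting the cipher letters is easy to automate. The short Python sketch below tallies the letters of the ciphertext so the counts can be compared against typical English letter frequencies:

    # Count how often each cipher letter appears; the most frequent ones likely map to E, T, or A.
    from collections import Counter

    ciphertext = "IODQN HDVW DWWDFN DW GDZQ"
    counts = Counter(ch for ch in ciphertext if ch.isalpha())
    print(counts.most_common(2))  # [('D', 6), ('W', 4)]
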
Making and Breaking Secret Codes

Cryptology is the science of making and breaking secret codes. As shown in the figure, cryptology combines two
separate disciplines:

 Cryptography - the development and use of codes

 Cryptanalysis - the breaking of those codes

There is a symbiotic relationship between the two disciplines because each makes the other one stronger.
National security organizations employ practitioners of both disciplines and put them to work against each other.

There have been times when one of the disciplines has been ahead of the other. For example, during the
Hundred Years War between France and England, the cryptanalysts were leading the cryptographers. France
mistakenly believed that the Vigenère cipher was unbreakable, and then the British cracked it. Some historians
believe that the successful cracking of encrypted codes and messages had a major impact on the outcome of
World War II. Currently, it is believed that cryptographers are in the lead.
Cryptanalysis

Cryptanalysis is often used by governments in military and diplomatic surveillance, by enterprises in testing the
strength of security procedures, and by malicious hackers in exploiting weaknesses in websites.

Cryptanalysts are individuals who perform cryptanalysis to crack secret codes. A sample job description is
displayed in the figure.

While cryptanalysis is often linked to mischievous purposes, it is actually a necessity. It is an ironic fact of
cryptography that it is impossible to prove that any algorithm is secure. It can only be proven that it is not
vulnerable to known cryptanalytic attacks. Therefore, there is a need for mathematicians, scholars, and security
forensic experts to keep trying to break the encryption methods.
The Secret is in the Keys

In the world of communications and networking, authentication, integrity, and data confidentiality are
implemented in many ways using various protocols and algorithms. The choice of protocol and algorithm varies
based on the level of security required to meet the goals of the network security policy.

As an example, for message integrity, message-digest 5 (MD5) is faster but less secure than Secure Hash
Algorithm 2 (SHA2). Confidentiality can be implemented using DES, 3DES, or the very secure AES. Again, the
choice varies depending on the security requirements specified in the network security policy document. The
table in the figure lists common cryptographic hashes, protocols, and algorithms.

Old encryption algorithms, such as the Caesar cipher or the Enigma machine, were based on the secrecy of the
algorithm to achieve confidentiality. With modern technology, where reverse engineering is often simple, public-
domain algorithms are frequently used. With most modern algorithms, successful decryption requires knowledge
of the appropriate cryptographic keys. This means that the security of encryption lies in the secrecy of the keys,
not the algorithm.
Cryptographic Hash Function

Hashes are used for integrity assurance. As shown in the figure, a hash function takes binary data, called the
message, and produces a fixed-length, condensed representation, called the hash. The resulting hash is also
sometimes called the message digest, digest, or digital fingerprint.

Hashing is based on a one-way mathematical function that is relatively easy to compute, but significantly harder
to reverse. Grinding coffee is a good analogy of a one-way function. It is easy to grind coffee beans, but it is
almost impossible to put all of the tiny pieces back together to rebuild the original beans.

The cryptographic hashing function is designed to verify and ensure data integrity. It can also be used to verify
authentication. The procedure takes a variable block of data and returns a fixed-length bit string called the hash
value or message digest.

Hashing is similar to calculating cyclic redundancy check (CRC) checksums, but it is much stronger cryptographically. For instance, given a CRC value, it is easy to generate data with the same CRC. With hash functions, it is computationally infeasible to find two different sets of data that produce the same hash output.
Every time the data is changed or altered, the hash value also changes. Because of this, cryptographic hash
values are often called digital fingerprints. They can be used to detect duplicate data files, file version changes,
and similar applications. These values are used to guard against an accidental or intentional change to the data
and accidental data corruption.

The cryptographic hash function is applied in many different situations:


 To provide proof of authenticity when it is used with a symmetric secret authentication key, such as IP
Security (IPsec) or routing protocol authentication

 To provide authentication by generating one-time and one-way responses to challenges in authentication protocols such as the PPP Challenge Handshake Authentication Protocol (CHAP)

 To provide message integrity check proof, such as those used in digitally signed contracts, and public key
infrastructure (PKI) certificates, like those accepted when accessing a secure site using a browser

Cryptographic Hash Function Properties

Mathematically, a hash function H takes an input x and returns a fixed-size string called the hash value h. The equation reads: h = H(x).

The example in the figure summarizes the mathematical process.

A cryptographic hash function should have the following properties:

 The input can be any length.

 The output has a fixed length.

 H(x) is relatively easy to compute for any given x.

 H(x) is one way and not reversible.


 H(x) is collision resistant, meaning that it is computationally infeasible to find two different input values that produce the same hash value.

 If a hash function is hard to invert, it is considered a one-way hash. Hard to invert means that given a hash value h, it is computationally infeasible to find an input x such that h = H(x).

Well-Known Hash Functions

Hash functions are helpful when ensuring data is not changed accidentally, such as by a communication error.
For instance, the sender wants to ensure that the message is not altered on its way to the receiver. The sending
device inputs the message into a hashing algorithm and computes its fixed-length digest or fingerprint.

In the example in the figure, the calculated hash is 4ehiDx67NMop9. Both the message and the hash are in
plaintext. This fingerprint is then attached to the message and sent to the receiver. The receiving device removes
the fingerprint from the message and inputs the message into the same hashing algorithm. If the hash that is
computed by the receiving device is equal to the one that is attached to the message, the message has not been
altered during transit. If the hashes are not equal, as shown in the figure, then the integrity of the message can no
longer be trusted.
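
This fingerprint calculation can be sketched in Python with the standard hashlib module. The message text is an assumption for the example, and SHA-256 is used in place of the algorithm in the figure; the 4ehiDx67NMop9 value shown there is illustrative rather than a real digest.

    # Sender computes a digest (fingerprint) over the message; the receiver recomputes and compares.
    import hashlib

    message = b"Pay Terry Smith $100"                           # example message (assumed)
    fingerprint = hashlib.sha256(message).hexdigest()           # attached to the message by the sender

    received = b"Pay Terry Smith $1000"                         # message altered in transit
    print(hashlib.sha256(received).hexdigest() == fingerprint)  # False: the integrity check fails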

While hashing can be used to detect accidental changes, it cannot be used to guard against deliberate changes.
There is no unique identifying information from the sender in the hashing procedure. This means that anyone can
compute a hash for any data, as long as they have the correct hash function. For example, when the message
traverses the network, a potential attacker could intercept the message, change it, recalculate the hash, and
append it to the message. The receiving device will only validate against whatever hash is appended. Therefore
hashing is vulnerable to man-in-the-middle attacks and does not provide security to transmitted data.

Two well-known hash functions are:

 MD5 with 128-bit digest

 SHA-256 with 256-bit digest


Message Digest 5 Algorithm

The MD5 algorithm is a hashing algorithm that was developed by Ron Rivest and is used in a variety of Internet
applications today.

MD5 is a one-way function that makes it easy to compute a hash from the given input data but makes it very
difficult to compute input data given only a hash value. MD5 is essentially a complex sequence of simple binary
operations, such as exclusive OR (XOR) and rotations, which are performed on input data and produce a 128-bit
hashed message digest, as shown in the figure.

MD5 is now considered a legacy algorithm and should be avoided. MD5 should be used only when no better
alternatives are available, such as when interoperating with legacy equipment. It is recommended that MD5 be
phased out and replaced with a stronger algorithm such as SHA-2.
Secure Hash Algorithm

The U.S. National Institute of Standards and Technology (NIST) developed SHA, the algorithm specified in the
Secure Hash Standard (SHS). SHA-1, published in 1994, corrected an unpublished flaw in SHA. The SHA design
is very similar to the MD5 hash functions that Ron Rivest developed.

The SHA-1 algorithm takes a message of less than 2^64 bits in length and produces a 160-bit message digest.
The algorithm is slightly slower than MD5, but the larger message digest makes it more secure against brute-
force collision and inversion attacks.

Note: 2^64 represents 2 raised to the 64th power.

SHA-1 is now considered to be a legacy algorithm. Therefore, NIST published four additional hash functions in
the SHA family, which are collectively known as SHA-2:

 SHA-224 (224 bit)

 SHA-256 (256 bit)

 SHA-384 (384 bit)

 SHA-512 (512 bit)

SHA-2 algorithms are the secure hash algorithms that the U.S. Government requires by law for use in certain
applications. This includes use in other cryptographic algorithms and protocols, for the protection of sensitive
unclassified information.
Note: SHA-256, SHA-384, and SHA-512 are considered to be next-generation algorithms and should be used
whenever possible.

MD5 versus SHA

Figure 1 displays the resulting hashes of the various hashing algorithms. Remember that the longer the hash
values are, the more secure they are.

Both MD5 and SHA-1 are based on a previous version of the message digest algorithm. This makes MD5 and
SHA-1 similar in many ways. SHA-1 and SHA-2 are more resistant to brute-force attacks because their digest is
at least 32 bits longer than the MD5 digest.

SHA-1 involves 80 steps, and MD5 involves 64 steps. The SHA-1 algorithm must also process a 160-bit buffer
instead of the 128-bit buffer of MD5. Because there are fewer steps, MD5 usually executes more quickly, given
the same device.

To verify the integrity of an IOS image, Cisco provides MD5 and SHA digests for all IOS images available at Cisco's Download Software website. A comparison of this MD5 digest against the MD5 digest of an IOS image installed on a device can be made using the verify /md5 command, as shown in Figure 2.
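
For example, the published digest can also be checked off-box by hashing the downloaded image file before it is copied to the device. A minimal Python sketch is shown below; the filename is hypothetical.

    # Compute the MD5 digest of a downloaded image file for comparison with the published value.
    import hashlib

    def md5_of_file(path: str) -> str:
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):  # read in chunks to handle large images
                digest.update(chunk)
        return digest.hexdigest()

    print(md5_of_file("c1900-universalk9-mz.SPA.155-3.M.bin"))  # hypothetical image filename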

When choosing a hashing algorithm, use SHA-256 or higher as they are currently the most secure. Security flaws
were discovered in SHA-1 and MD5. Therefore, it is now recommended that these algorithms be avoided.

Note: Specifically, only SHA-256 or higher should be implemented in production networks.
Keyed-Hash Message Authentication Code

In cryptography, a keyed-hash message authentication code (HMAC or KHMAC) is a type of message authentication code (MAC). HMACs use an additional secret key as input to the hash function. This adds
authentication to integrity assurance. An HMAC is calculated using a specific algorithm that combines a
cryptographic hash function with a secret key, as shown in the figure. Hash functions are the basis of the
protection mechanism of HMACs.

Only the sender and the receiver know the secret key, and the output of the hash function now depends on the
input data and the secret key. Only parties who have access to that secret key can compute the digest of an
HMAC function. This characteristic defeats man-in-the-middle attacks and provides authentication of the data
origin.

If two parties share a secret key and use HMAC functions for authentication, a properly constructed HMAC digest
of a message that a party has received indicates that the other party was the originator of the message. This is
because the other party is the only other entity possessing the secret key.

The cryptographic strength of the HMAC depends on the cryptographic strength of the underlying hash function, on the size and quality of the key, and on the length of the hash output in bits.

Cisco technologies use two well-known HMAC functions:

 Keyed MD5 (HMAC-MD5) - Based on the MD5 hashing algorithm, it provides a marginal but acceptable
security level. It should be used only when no better alternatives are available, such as when interoperating
with legacy equipment.

 Keyed SHA-1 (HMAC-SHA-1) - Based on the SHA-1 hashing algorithm, it provides adequate security.

When an HMAC digest is created, data of an arbitrary length and the secret key are input into the hash function.
The result is a fixed-length hash that depends on the data and the secret key.

Care must be taken to distribute secret keys only to those who require the key. If the secret key is compromised, packets can be forged, thereby violating data integrity.
HMAC Operation

Consider an example where a sender wants to ensure that the message is not altered in transit and wants to
provide a way for the receiver to authenticate the origin of the message.

As shown in Figure 1, the sending device inputs data (such as Terry Smith’s pay of $100 and the secret key) into
the hashing algorithm and calculates the fixed-length HMAC digest or fingerprint. This authenticated fingerprint is
then attached to the message and sent to the receiver.

In Figure 2, the receiving device removes the fingerprint from the message and uses the plaintext message with
its secret key as input to the same hashing function. If the fingerprint that is calculated by the receiving device is
equal to the fingerprint that was sent, the message has not been altered. Additionally, the origin of the message
is authenticated because only the sender possesses a copy of the shared secret key. The HMAC function has
ensured the authenticity of the message.
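
This operation can be sketched with Python's hmac module, as shown below. The key and message values are assumptions for the example, and SHA-256 is used here even though the figures illustrate MD5 and SHA-1.

    # HMAC: the digest depends on both the message and the shared secret key.
    import hashlib
    import hmac

    secret_key = b"shared-secret"
    message = b"Pay Terry Smith $100"

    # Sender computes the HMAC fingerprint and attaches it to the message.
    fingerprint = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

    # Receiver recomputes the HMAC over the received message with the same secret key.
    received_fp = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(fingerprint, received_fp))  # True: unaltered and from a key holder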

IPsec VPNs rely on HMAC functions to authenticate the origin of every packet and provide data integrity
checking.

Hashing in Cisco Products

As shown in the figure, Cisco products use hashing for entity authentication, data integrity, and data authenticity
purposes:

 Cisco IOS routers use hashing with secret keys in an HMAC-like manner to add authentication information
to routing protocol updates.

 IPsec gateways and clients use hashing algorithms, such as MD5 and SHA-1 in HMAC mode, to provide
packet integrity and authenticity.

 Cisco software images that are downloaded from Cisco.com have an MD5-based checksum available so
that customers can check the integrity of downloaded images.

Note: The term entity can refer to devices or systems within an organization.

Note: Digital signatures are an alternative to HMAC.


Characteristics of Key Management

Key management is often considered the most difficult part of designing a cryptosystem. Many cryptosystems
have failed because of mistakes in their key management, and all modern cryptographic algorithms require key
management procedures. In practice, most attacks on cryptographic systems are aimed at the key management
level, rather than at the cryptographic algorithm itself.

As shown in the figure, there are several essential characteristics of key management to consider.
Key Length and Keyspace

Two terms that are used to describe keys are:

 Key length - Also called the key size, this is the length of the key measured in bits. In this course, we will use the term key length.
 Keyspace - This is the number of possibilities that can be generated by a specific key length.

As the key length increases, the keyspace increases exponentially:

 A 2-bit (2^2) key length = a keyspace of 4 because there are four possible keys (00, 01, 10, and 11).

 A 3-bit (2^3) key length = a keyspace of 8, because there are eight possible keys (000, 001, 010, 011, 100,
101, 110, 111).

 A 4-bit (2^4) key length = a keyspace of 16 possible keys.

 A 40-bit (2^40) key length = a keyspace of 1,099,511,627,776 possible keys.

The figure displays the characteristics of the AES encryption algorithm. Notice how AES uses long key lengths.
This dramatically increases the keyspace which affects the time it takes to crack the code.

The Keyspace

The keyspace of an algorithm is the set of all possible key values. A key that has n bits produces a keyspace that
has 2^n possible key values. By adding one bit to the key, the keyspace is effectively doubled. For example, DES
with its 56-bit keys has a keyspace of more than 72,000,000,000,000,000 (2^56) possible keys. By adding one bit
to the key length, the keyspace doubles, and an attacker needs twice the amount of time to search the keyspace.
The figure summarizes the number of possible keys that are created by adding additional bits. For example,
adding 1 bit to 56-bit (i.e., 57-bit) doubles the number of keys. Adding an additional bit to a 57-bit key size means
that it would now take an attacker four times the amount of time to search the keyspace. Adding 4 more bits to
56-bits would create a 60-bit key. A 60-bit key would take 16 times longer to crack than a 56-bit key.
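
The doubling effect is easy to verify, as the short Python sketch below shows; each added bit doubles the number of possible keys and, on average, the brute-force search time.

    # Keyspace size for several key lengths: adding one bit doubles the number of possible keys.
    for bits in (56, 57, 58, 60):
        print(bits, 2 ** bits)
    # 56 72057594037927936
    # 57 144115188075855872
    # 58 288230376151711744
    # 60 1152921504606846976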

Note: Longer keys are more secure; however, they are also more resource intensive. Caution should be
exercised when choosing longer keys because handling them could add a significant load to the processor in
lower-end products.

Almost every algorithm has some weak keys in its keyspace that enable an attacker to break the encryption via a shortcut. Weak keys reveal regularities in the encryption. For instance, DES has four keys for which encryption is
the same as decryption. This means that if one of these weak keys is used to encrypt plaintext, an attacker can
use the weak key to decrypt the ciphertext and reveal the plaintext.

The DES weak keys are those that produce 16 identical subkeys. This occurs when the key bits are:
 Alternating ones and zeros (0101010101010101)

 Alternating F and E (FEFEFEFEFEFEFEFE)

 E0E0E0E0F1F1F1F1

 1F1F1F1F0E0E0E0E

It is very unlikely that such keys would be chosen, but network administrators should still verify all keys that are
implemented and prevent weak keys from being used. With manual key generation, take special care to avoid
defining weak keys.

Types of Cryptographic Keys

Several types of cryptographic keys can be generated:

 Symmetric keys - Can be exchanged between two routers supporting a VPN

 Asymmetric keys - Are used in secure HTTPS applications

 Digital signatures - Are used when connecting to a secure website

 Hash keys - Are used in symmetric and asymmetric key generation, digital signatures, and other types of
applications

Regardless of the key type, all keys share similar issues. Choosing a suitable key length is one issue. If the
cryptographic system is trustworthy, the only way to break it is with a brute-force attack. If the keyspace is large
enough, the search requires an enormous amount of time, making such an exhaustive effort impractical. The
figure summarizes the key length required to secure data for the indicated amount of time.

On average, an attacker has to search through half of the keyspace before the correct key is found. The time that
is needed to accomplish this search depends on the computer power that is available to the attacker. Current key
lengths can easily make any attempt insignificant because it takes millions or billions of years to complete the
search when a sufficiently long key is used. With modern algorithms that are trusted, the strength of protection
depends solely on the size of the key. Choose the key length so that it protects data confidentiality or integrity for
an adequate period of time. Data that is more sensitive and needs to be kept secret longer must use longer keys.

Choosing Cryptographic Keys

Performance is another issue that can influence the choice of a key length. An administrator must find a good
balance between the speed and protective strength of an algorithm, because some algorithms, such as the
Rivest, Shamir, and Adleman (RSA) algorithm, run slowly due to large key lengths. Strive for adequate
protection, while enabling communication over untrusted networks.

The estimated funding of the attacker should also affect the choice of key length. When assessing the risk of
someone breaking the encryption algorithm, estimate the resources of the attacker and how long the data must
be protected. For example, classic DES can be broken by a $1 million machine in a couple of minutes. If the data
that is being protected is worth significantly more than the $1 million needed to acquire a cracking device,
then classic DES is a bad choice.

Because of the rapid advances in technology and cryptanalytic methods, the key length that is needed for a
particular application is constantly increasing. For example, part of the strength of the RSA algorithm is the
difficulty of factoring large numbers where factors are the numbers that are multiplied together to get another
number. For example, the factors of 12 would be 1 x 12, 2 x 6, and 3 x 4. Therefore, a 1024-bit number is an
unimaginably large number with many factors. Increasing that number to a 2048-bit number creates even more
factors. Of course, this advantage is lost if an easy way to factor large numbers is found, but cryptographers
consider this possibility unlikely. The rule “the longer the key, the better” is valid, except for possible performance
reasons, as shown in the figure.
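
As a small illustration of the factoring idea, the sketch below lists the factor pairs of 12; no comparably quick procedure is known for the 1024-bit and 2048-bit numbers used with RSA.

    # Enumerate the factor pairs of a small number; this brute-force approach does not scale
    # to the enormous numbers used as RSA moduli.
    n = 12
    pairs = [(a, n // a) for a in range(1, n + 1) if n % a == 0 and a <= n // a]
    print(pairs)  # [(1, 12), (2, 6), (3, 4)]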
