Each password hash in the UNIX /etc/passwd file is 64 bits long and is generated from a key that is 56 bits long. To search all the possibilities for the key, one would need to make 2^56 attempts, quite a large number even by today's standards.
However, since the key is a simple function of a password, a smarter
attacker could create a dictionary of common passwords and
try those first.
Even if the dictionary is a million words long, searching the entire
dictionary would take less than one ten-billionth of the time required
by a so-called brute-force attack that tried all
possible keys.
In practice this is the difference between seconds and centuries of CPU time.
On typical multiuser systems, a dictionary of that size might allow
an attacker to guess a significant fraction of user passwords.
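To make the arithmetic concrete, the sketch below compares the two searches under an assumed guessing rate; the figure of ten million hashes per second for a single machine is illustrative, not a measurement:

    # Back-of-the-envelope comparison of dictionary vs. brute-force search.
    # The guessing rate and dictionary size are illustrative assumptions.
    KEYSPACE = 2**56          # all possible 56-bit keys
    DICTIONARY = 10**6        # a million candidate passwords
    GUESSES_PER_SEC = 10**7   # assumed hash rate of one attacking machine

    brute_force_years = KEYSPACE / GUESSES_PER_SEC / (3600 * 24 * 365)
    dictionary_seconds = DICTIONARY / GUESSES_PER_SEC

    print(f"brute force: about {brute_force_years:,.0f} years")  # ~228 years
    print(f"dictionary:  {dictionary_seconds:.1f} seconds")      # 0.1 seconds
    print(f"relative work: {DICTIONARY / KEYSPACE:.1e}")         # ~1.4e-11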
Even if the server keeps the password database completely secure against a direct attack, network traffic is nearly impossible to secure in many settings; examples include cellular phone networks, set-top boxes for cable access, and public Internet access points. The problem of a dictionary attack is exacerbated by the need for short passwords, e.g., PINs or access codes that must be chosen from a relatively small keyspace. In these situations an attacker's dictionary can cover the entire valid keyspace, so an exhaustive search succeeds in a fairly short time.
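As a rough illustration, suppose an attacker has recorded a single challenge-response pair for a four-digit PIN; the response function below is a hypothetical stand-in for whatever hash a real protocol would use:

    import hashlib
    import os

    def reply(pin: str, challenge: bytes) -> bytes:
        # Hypothetical response function: h(PIN, challenge).
        return hashlib.sha256(pin.encode() + challenge).digest()

    # One eavesdropped exchange for an unknown four-digit PIN.
    challenge = os.urandom(16)
    observed = reply("4271", challenge)

    # Offline search of the entire 10**4 keyspace: milliseconds of work.
    recovered = next(pin for pin in (f"{i:04d}" for i in range(10_000))
                     if reply(pin, challenge) == observed)
    assert recovered == "4271"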
It is obvious that immunity to network-based dictionary attacks would be a highly desirable property in any password-authentication mechanism. Unfortunately, nearly all password-only challenge-response protocols are vulnerable to such attacks. Protocols that successfully defend against dictionary attacks began appearing only in the early 1990s, with the discovery of EKE and related protocols.
This problem persists in more complex protocols. Consider a challenge-response system that uses a one-way function f() to hash passwords for storage. Alice and Bob authenticate as follows:
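The original message flow is not reproduced here; the following sketch is one plausible instance of such a system, with the hash functions chosen purely for illustration:

    import hashlib
    import os

    def f(password: str) -> bytes:
        # One-way function used to hash the password for storage.
        return hashlib.sha256(b"store:" + password.encode()).digest()

    def h(x: bytes, challenge: bytes) -> bytes:
        # Response function binding the stored hash to a fresh challenge.
        return hashlib.sha256(x + challenge).digest()

    # Setup: Bob stores only x = f(P), never P itself.
    stored_x = f("kangaroo")

    # 1. Alice -> Bob:  her identity.
    # 2. Bob -> Alice:  a random challenge r.
    r = os.urandom(16)
    # 3. Alice -> Bob:  y = h(f(P), r), computed from the typed password.
    y = h(f("kangaroo"), r)
    # 4. Bob accepts if y matches a response computed from his stored x.
    assert y == h(stored_x, r)

    # The flaw: a thief who copies x from Bob's database can compute
    # h(x, r) for any challenge and impersonate Alice without ever
    # learning P. Hashing the password before storage bought nothing.

Although Bob never stores the cleartext password, the stored value x is a plaintext equivalent: possessing it is just as good as possessing P itself. An eavesdropper who records (r, y) can also mount the dictionary attack described earlier, since y is a deterministic function of the candidate password and the public challenge.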
Plaintext-equivalence is actually a very difficult issue to resolve in password-authentication systems. In general, it is fairly easy to design protocols if one can assume that both parties possess an identical shared secret of some kind. This is mainly because both parties can perform fairly complex but symmetric operations based on the shared secret and exchanged messages, and a protocol designer can add operations and messages to strengthen a protocol without worrying much about "breaking" it, i.e., causing the protocol to cease to function correctly. On the other hand, avoiding plaintext-equivalence requires the careful selection of mathematical operations that in most cases operate asymmetrically on the client and server sides yet still authenticate properly when the correct password is used. This is analogous to the relative abundance of symmetric encryption algorithms compared to the relatively few public-key cryptosystems currently available.
No matter how hard or easy it is to avoid plaintext-equivalence, it has been and will continue to be a highly desirable property of almost any password-authentication mechanism. While it is always a good idea to protect a server's password database from intruders, this protection is usually limited to operating system mechanisms like file permissions and access control lists. At some point, the server itself needs to be able to read the file to perform authentication; thus, the means for accessing the contents of the entire password database must reside somewhere on the server. In a sense, UNIX users have enjoyed the relative security of a non-plaintext-equivalent password mechanism for decades; on most older systems, the password database was even world-readable! Users of UNIX and other multiuser operating systems would not be willing to give up such a system, trading resistance against internal attacks for a little bit of network security.
Vendors of modern operating systems are beginning to realize the hazards of plaintext-equivalence. As recent security woes plaguing both Microsoft Windows NT and Novell NetWare have been widely reported in the media, it has become increasingly clear that existing protocols offer inadequate protection against password file attacks. As soon as a weak authentication mechanism is incorporated into a large number of products, hackers who dislike or distrust the vendor write and release utilities to find and extract the contents of the password file, which promptly results in a full compromise of system security. Much bad publicity ensues. The only way around this problem is to use a system that makes the password file inherently useless to attackers, even if it were made world-readable. True security starts there.
To illustrate how lack of forward secrecy might be exploited, we start with a modified challenge-response protocol that also performs a key exchange:
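Again the original exchange is sketched rather than reproduced; the key derivation below is an illustrative assumption. The defining weakness is that the session key is computed directly from the password and publicly exchanged nonces:

    import hashlib
    import os

    def session_key(password: str, ra: bytes, rb: bytes) -> bytes:
        # Hypothetical derivation: K = h(P, r_A, r_B).
        return hashlib.sha256(password.encode() + ra + rb).digest()

    # A recorded session: both nonces cross the wire in the clear, and
    # the subsequent traffic is encrypted under K.
    ra, rb = os.urandom(16), os.urandom(16)
    k = session_key("kangaroo", ra, rb)

    # Months later the password leaks. The attacker replays the recorded
    # nonces through the same derivation and recovers K, opening every
    # archived session encrypted under it.
    assert session_key("kangaroo", ra, rb) == k

An eavesdropper therefore loses nothing by archiving ciphertext and nonces: the moment the password is compromised, every past session key falls with it.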
This attack also works in reverse. In many instances, the session key itself can be compromised, either through negligence or through the use of weak encryption. For example, to be exported legally from the U.S., encryption software must generally be limited to a key length of 40 bits or shorter, and such keys have been demonstrated to fall to brute-force attacks in a matter of hours. A number of currently used systems tackle the authentication problem by establishing a secure connection in advance and then sending the password over it. Under this arrangement, once the session key is compromised, the password is instantly revealed.
It is obvious why forward secrecy is a good requirement in password-authentication systems. Although exploiting a lack of forward secrecy generally involves a security breach elsewhere, containing the extent of the breach can often mean the difference between a localized, repairable intrusion and a complete compromise of an entire network.
This is by no means a complete list of the issues that arise in the design of strong password-authentication systems. I do believe, however, that these issues tend to feature prominently in security analyses of proposed protocols. They are intriguing partly because they are inherently hard problems that have only recently begun to be solved, and partly because a large body of work exists on exploiting weaker protocols that fail to address them properly.