DSA was made public on August 30, 1991 in the Federal Register. The National Institute of Standards and Technology (NIST) proposed this brand new, unproven algorithm as the national standard for digital signatures, over the well-known and widely used RSA. DSA was developed primarily by David Kravitz, an employee of the shadowy NSA. It was later revealed that the NSA forced NIST to adopt DSA over RSA as part of the NSA's strategy of delaying the deployment of strong crypto. Because DSA could only sign and not encrypt, products that used DSA for signatures would not get encryption "for free".

Despite this suspect beginning, the DSA algorithm has been publicly available for peer review and no obvious flaws or backdoors have been found. Its primary disadvantage is that it requires much more computation time than RSA. It once had a significant advantage over RSA: until September 20, 2000, the RSA algorithm was covered by patents owned by Public Key Partners (itself partially owned by RSA Data Security Inc.); DSA was patent free, and thus could be used without paying a licensing fee. However, now that the RSA patent has expired, RSA is generally preferred over DSA.

The algorithm DSA uses internally is restricted to signing messages of exactly 160 bits, so DSA first computes the SHA1 hash of the real message and signs that. This means that DSA implicitly relies on the cryptographic security of SHA1, and any flaw in that algorithm will also compromise DSA.
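As a quick sketch of that first step, here is how the 160-bit input is produced in Python (the message string is just an example):

```python
import hashlib

# DSA signs a fixed-size 160-bit value, so the message is first
# reduced with SHA-1. The signing equations operate on the digest
# interpreted as an integer i.
message = b"attack at dawn"   # example message, not from the writeup
digest = hashlib.sha1(message).digest()
i = int.from_bytes(digest, "big")   # the 160-bit input to DSA
print(len(digest) * 8)              # 160
```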

DSA is used in many crypto products for authentication (but obviously not encryption). These include SSH (including OpenSSH), SSL/TLS implementations such as OpenSSL, SSL-enabled web browsers and web servers, and GnuPG.

A Response

AT's DSA writeup is, in my opinion, overly critical of the situation. In an attempt at providing some balance to this node, I'm going to weigh in on various claims made by AT (which have been made by many other people in the past, I should note).

First, people object to the idea that DSA can only sign, and not do encryption. This is quite simply a straw man - NIST wanted to produce a digital signature algorithm, and that's what they did. The suitability of said algorithm for doing encryption is not relevant with regards to its usefulness as a signature algorithm. There are many applications where digital signatures are required, but public key encryption is not (a few examples being authentication, contract signing, issuing X.509 certificates, and code signing).

Secondly, the efficiency card. You can play this a lot of different ways, but in the end, they're both "fast enough". For example, I just compared the time difference between DSA and RSA with 1024 bit keys using OpenSSL on a trusty old Intel Pentium II. We find that, first, you can verify many more RSA signatures than DSA signatures (roughly by a factor of 10). This is where all the efficiency claims come from. But at the same time, DSA can do over twice as many signatures as RSA in the same amount of time (and in addition, with DSA you can do precomputations so that the actual signature operation takes almost zero time).
But none of this means anything, because the slowest operation, RSA signing, can be performed over 50 times per second on a 5-year-old computer. You are not going to notice the extra .011 seconds required to do a DSA signature verification instead of an RSA one.

Lastly comes an argument which I find most disturbing in AT's writeup, especially given his otherwise high-quality crypto writeups:
"This means that DSA implicitly relies on the cryptographic security of SHA1, and any flaw in that algorithm will also compromise DSA."

This is also true for RSA, and essentially every other digital signature algorithm in use today - if the hash function used is broken, then so is the signature algorithm as a whole. For example, using RSA with the (now-very-much-broken) hash function MD4 would result in easily forgeable signatures. This is not in any way unique to DSA - nothing 'especially bad' happens when SHA-1 is broken as compared to RSA or Rabin-Williams or Nyberg-Rueppel or any other signature algorithm. If, for example, your DSA key could be compromised by someone breaking SHA-1, that would be bad. But this is not the case.

Finally, I will note that FIPS 186-2 (the current DSA standard) allows the use of DSA, RSA, or ECDSA. So all of this paranoid "NIST doesn't want us to have public key crypto" stuff is moot anyway.

A Description of DSA

Since there is currently no description of DSA on e2, I'll give one here. I'm skipping various details, so if you're actually going to implement it, read FIPS 186-2 first.

Parameters: We start by choosing a prime p (typically 1024 bits) and another prime q (typically 160 bits), such that q divides p-1. Then we choose g, a generator of the order-q subgroup of the multiplicative group mod p†.

Generating a key: Choose a random integer x with 0 < x < q. This is the private key. The corresponding public key is y = g^x mod p.
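A minimal Python sketch of parameter selection and key generation, using toy values (p = 23, q = 11, g = 4) that are far too small for real use - FIPS 186-2 mandates the sizes given above:

```python
import secrets

# Toy DSA parameters (illustrative only). q = 11 divides p - 1 = 22,
# and g = 4 generates the subgroup of order 11 modulo 23.
p, q, g = 23, 11, 4

# Private key: a random x with 0 < x < q.
x = secrets.randbelow(q - 1) + 1

# Public key: y = g^x mod p.
y = pow(g, x, p)
```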

Creating a signature: Term the input (the hash of the message being signed) as i:

Choose a random k between 1 and q, exclusive.

Compute r = (g^k mod p) mod q

Compute s = (k^-1 * (x * r + i)) mod q

If either r or s is equal to 0, choose a new k and try again. Output the pair (r,s) as the signature.
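The signing steps above can be sketched in Python. The toy parameters (p = 23, q = 11, g = 4) and the private key x = 7 are illustrative assumptions, and pow(k, -1, q) (Python 3.8+) computes the modular inverse:

```python
import secrets

p, q, g = 23, 11, 4    # toy parameters; q divides p - 1
x = 7                  # example private key (0 < x < q)
i = 5                  # hash of the message, reduced mod q

def sign(i, x):
    """Sign hash value i with private key x; returns (r, s)."""
    while True:
        k = secrets.randbelow(q - 2) + 1     # random 0 < k < q, never reused
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (x * r + i)) % q
        if r != 0 and s != 0:                # retry on a zero component
            return r, s

r, s = sign(i, x)
```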

Verifying a signature: again i is the input, and (r,s) is the signature.

Verify that both r and s are less than q and greater than 0.

u1 = (s^-1 * i) mod q

u2 = (s^-1 * r) mod q

v = ((g^u1 * y^u2) mod p) mod q

If v equals r, the signature is valid.
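The verification steps can be traced with concrete toy numbers. All values here are illustrative: with p = 23, q = 11, g = 4, and private key x = 7, the public key is y = 8, and (r, s) = (2, 7) is a signature on i = 5 made with k = 9:

```python
# Toy parameters and keys (real DSA uses far larger values).
p, q, g = 23, 11, 4
y = 8                  # public key, g^x mod p with x = 7
i = 5                  # hash of the message, mod q
r, s = 2, 7            # signature produced with k = 9

assert 0 < r < q and 0 < s < q   # reject out-of-range components

w  = pow(s, -1, q)               # s^-1 mod q
u1 = (w * i) % q
u2 = (w * r) % q
v  = (pow(g, u1, p) * pow(y, u2, p)) % p % q

print(v == r)                    # True: the signature verifies
```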

For a proof that this actually works, read the appendix of FIPS PUB 186-2, which explains exactly how it does its thing.

Known Attacks on DSA

This is not a complete coverage, it just covers some of the better known DSA attacks:

First, there are two different problems involving k. If an implementation chooses the same k twice, and an attacker can get ahold of both signatures, then they can recover the private key. The second problem is that if k is not uniformly distributed between 1 and q, then with a sufficient number of signatures an attacker can recover the private key (this is not practical in most cases, but implementations should be careful).
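The repeated-k attack is simple enough to demonstrate with the toy parameters used above (q = 11, private key x = 7, both illustrative). Two signatures made with the same k share the same r, and the key falls out with basic modular arithmetic:

```python
q = 11
x = 7                          # the secret we will recover
k = 9                          # nonce mistakenly used twice
r = 2                          # (g^k mod p) mod q for g = 4, p = 23

def s_of(i):                   # s = k^-1 * (x*r + i) mod q
    return (pow(k, -1, q) * (x * r + i)) % q

i1, i2 = 5, 3                  # two different message hashes
s1, s2 = s_of(i1), s_of(i2)

# Recover k: s1 - s2 = k^-1 * (i1 - i2), so k = (i1 - i2) / (s1 - s2).
k_rec = ((i1 - i2) * pow((s1 - s2) % q, -1, q)) % q
# Recover x: s = k^-1 * (x*r + i)  =>  x = (s*k - i) * r^-1 mod q.
x_rec = ((s1 * k_rec - i1) * pow(r, -1, q)) % q
print(k_rec, x_rec)            # 9 7
```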

A third attack I'll mention here is that someone can generate a set of DSA parameters (p, q, and g) such that they can generate exactly one forgery for a pre-chosen message. This attack can be prevented by using the FIPS approved parameter generation routine, which will let you prove to anyone you like that the parameters you are using were generated randomly.

†: This can be done by choosing random values for h, and calculating g = h^((p-1)/q) mod p until g is not 1.
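The footnote's procedure, sketched in Python with the same toy p = 23 and q = 11 (illustrative only):

```python
import secrets

p, q = 23, 11
assert (p - 1) % q == 0        # q must divide p - 1

# Pick random h and compute g = h^((p-1)/q) mod p until g != 1.
while True:
    h = secrets.randbelow(p - 2) + 2   # random h in [2, p-1)
    g = pow(h, (p - 1) // q, p)
    if g != 1:
        break

# By Lagrange's theorem, any such g generates a subgroup of order q.
assert pow(g, q, p) == 1
```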