When constructing a PRF with an $n$-bit input using the GGM PRG construction, why do we always have to run the PRG recursively $n$ times, using its previous output as the next seed? Instead, why don't we just run the PRG $n$ times and use that as the output?

Any answers would be great. My assumption is that it has something to do with producing independent random blocks, but surely a secure PRG would satisfy this?

"why don't we run the PRG n times, and use that as the output." It's not clear what you mean by this. What exactly is the input to the PRG each time? Remember that the input & output lengths of the PRG are different. And how are you proposing to incorporate the $n$ bits (i.e., $2^n$ possibilities) of the input into the computation?
– Mikero, Jan 25 '13 at 6:24

2 Answers

Short answer: I'm pretty sure that your suggestion would "work," but I don't think that it would be better than the GGM approach from either a performance or a security point of view.

Long answer: As Mikero suggests, we have to be careful about the input and output lengths of the PRGs. The GGM construction starts with a family of pseudorandom generators whose output length is one more than its input length:

$$ G_n : \{0,1\}^n \to \{0,1\}^{n+1} $$

The goal is to produce a family of PRGs with bigger expansion. For the sake of concreteness, let's say that you want a new family of functions that doubles the length of its input:

$$ F_n : \{0,1\}^n \to \{0,1\}^{2n} $$

The GGM construction accomplishes this by using $G_n$ a total of $2n$ times to produce the corresponding $F_n$. It sounds like you want to do things slightly differently. If I understand your question correctly, you want to apply $G_n$ to the input in order to get $n+1$ bits, and then apply $G_{n+1}$ to that in order to get to $n+2$ bits, and so on. Thus, you'll set

$$ F_n (x) = G_{2n-1}(G_{2n-2}(\cdots G_n(x) \cdots)). $$
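A toy Python sketch may make the difference concrete. SHA-256 stands in for each $G_m$ purely to give functions of the right shape (this is an illustrative assumption, not a security claim): `ggm_extend` follows the GGM route, spending one call to the fixed-size $G_n$ per output bit, while `f_sequential` follows the question's route through successively larger PRGs.

```python
import hashlib

def G(seed_bits: str) -> str:
    """Toy stand-in for G_m: m bits in, m + 1 bits out.
    NOT a real PRG; SHA-256 only supplies the interface."""
    m = len(seed_bits)
    digest = hashlib.sha256(seed_bits.encode()).digest()
    bits = ''.join(f'{b:08b}' for b in digest)
    return bits[:m + 1]

def ggm_extend(seed_bits: str) -> str:
    """GGM route: 2n calls to G_n, one output bit per call;
    the other n bits of each call reseed the next one."""
    n = len(seed_bits)
    out, state = [], seed_bits
    for _ in range(2 * n):
        stretched = G(state)        # always an n-bit seed -> n + 1 bits
        out.append(stretched[0])    # first bit goes to the output
        state = stretched[1:]       # remaining n bits are the new seed
    return ''.join(out)

def f_sequential(x: str) -> str:
    """Question's route: F_n(x) = G_{2n-1}(... G_n(x) ...),
    one call each to successively larger PRGs."""
    n, y = len(x), x
    while len(y) < 2 * n:           # n -> n + 1 -> ... -> 2n bits
        y = G(y)
    return y
```

Both produce $2n$ bits from an $n$-bit seed; the difference is that `ggm_extend` only ever invokes the fixed-size $G_n$, while `f_sequential` needs every member $G_n, \dots, G_{2n-1}$ of the family.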

I'm pretty sure that you could prove this secure through a simple hybrid argument. However, from a theoretical point of view, I think that I'd prefer the GGM method for a few reasons:

The PRGs with bigger-input-lengths, like $G_{2n-1}$, could be much slower than the initial PRG $G_n$. Remember, each function in the family runs in time polynomial in its own input length.

The hybrid argument will have a rather large error term in the indistinguishability claim. With the GGM approach, you're using $n$ pseudorandom bits each time and you're claiming that they are indistinguishable from $n$ random bits. With your suggestion, things are worse: you're using more pseudorandom bits each time ($n+1$, $n+2$, and so on) and claiming that they are indistinguishable from the corresponding number of random bits.

In practice, it is unlikely that you'd want to use either the GGM construction or your own when building a PRG. However, if for some strange reason you wanted to do so, I think the GGM method is better again, but for a more subtle reason. I'm not sure what initial PRG you'd use in the construction, but I'm assuming it would be something like the assumption "I'm willing to believe that SHA1 is a PRG when used on 159-bit inputs." (HUGE caveat: I'm not suggesting that you should actually make this assumption! It just seems like the most logical way to use a GGM-style approach in practice.) What this really means is that you think

$$ SHA1 : \{0,1\}^{159} \to \{0,1\}^{160} $$

somehow is part of a mythical family of PRGs, even if you don't know what the other members of this family are. Under this assumption, I suppose you could use the GGM construction on SHA1 in order to make bigger PRGs. Your method wouldn't work, though, because we don't know the other members of this "mythical family," so we cannot use them in a construction.
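Under that (hypothetical, not recommended) assumption, the GGM-style extension would look something like the sketch below: each SHA1 call stretches 159 bits to 160, one bit goes to the output, and the remaining 159 bits reseed the next call. The bit-string-to-bytes encoding is my own illustrative choice.

```python
import hashlib

def sha1_prg(seed_bits: str) -> str:
    """Treat SHA1 as a PRG {0,1}^159 -> {0,1}^160, per the answer's
    hypothetical assumption. Do NOT rely on this in practice."""
    assert len(seed_bits) == 159
    # Pack the 159 bits into 20 bytes for hashing (one illustrative encoding).
    as_int = int(seed_bits, 2)
    digest = hashlib.sha1(as_int.to_bytes(20, 'big')).digest()
    return ''.join(f'{b:08b}' for b in digest)   # 160 bits out

def extend(seed_bits: str, out_len: int) -> str:
    """GGM-style extension: one output bit per SHA1 call;
    the remaining 159 bits reseed the next call."""
    out, state = [], seed_bits
    for _ in range(out_len):
        stretched = sha1_prg(state)
        out.append(stretched[0])
        state = stretched[1:]
    return ''.join(out)
```

This only ever needs the single assumed PRG, which is exactly why the GGM route survives the "we only know one member of the family" setting and the sequential route does not.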

Okay, I think I know why. If you have an $n$-bit input and you tried to incorporate it by running the PRG forward, you would need to run it up to $2^n$ times to cover every possible input (I made a mistake by thinking you could run the PRG $n$ times). Whereas if you run the PRG recursively, using half of the previous output as the next seed and letting each input bit choose which half, then you only need to run it $n$ times, once per input bit.
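For completeness, that recursive evaluation can be sketched as a walk down one root-to-leaf path of a binary tree, with one call to a length-doubling PRG per input bit. SHA-256 again stands in for the PRG purely to illustrate the shape; it is not a security claim.

```python
import hashlib

def G_double(seed: bytes) -> bytes:
    """Toy length-doubling PRG: 16 bytes in, 32 bytes out.
    NOT secure; SHA-256 only supplies the right shape."""
    return hashlib.sha256(seed).digest()

def ggm_prf(key: bytes, x: str) -> bytes:
    """Evaluate the GGM PRF on the bit string x: n PRG calls,
    one per input bit, never anywhere near 2^n."""
    state = key
    for bit in x:
        out = G_double(state)
        # Each input bit selects the left or right half as the new seed.
        state = out[16:] if bit == '1' else out[:16]
    return state
```

Distinct inputs follow distinct paths through the tree, which is what makes the $2^n$ leaf values look independent even though only $n$ PRG calls are ever made per evaluation.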