There are protocols that work by invoking an oracle: submitting queries to it and receiving responses. Oracles are often used to define security, as in IND-CCA security, and the term "oracle" is also used for just a procedure that accepts input and provides some output, independent of the input, such as Canetti's non-information oracles (see my previous question for more on non-information oracles). My question is about the random oracle, i.e., an oracle with random output.

When can we determine that we are using the random oracle model? For example, when we use the random output of a hash function, we are definitely in the random oracle model, but if we only use the one-wayness of a hash function, we are not. Can you provide more examples of this?

Let me clarify my confusion through an example. In this article the authors use a relatively unknown oracle called the linear oracle bilinear Diffie-Hellman oracle (LO-BDH for short), and the security of their approach rests on the hardness of this assumption. How can I determine that, in using this assumption, we are not in the random oracle model (the authors claim they are not)? Here is another source of ambiguity:

They use commitments in the UC framework, which are impossible in the plain model, so they are at least in the common reference string (CRS) model. How can I make sure that, if I use such an oracle (LO-BDH) in the security proof of another protocol, I remain in the plain model, i.e., that using it does not cause an undesired shift of models?

When defining pseudorandom functions, the authors consider algorithms with oracle access to a random function. But the use of a random function there is very different from its use in the RO methodology. For pseudorandom functions, the random function serves only to define what it means for a concrete keyed function to be pseudorandom. In the random oracle model, the random function is used as part of the construction of the primitive, so it must somehow be instantiated in the real world if we want a concrete realization of the primitive.
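To make my understanding of the contrast concrete, here is a minimal sketch (Python; the toy output length and the choice of HMAC-SHA256 as the candidate PRF are my own illustrative assumptions) of the PRF indistinguishability game, where the random function appears only inside the security definition as the object the keyed function is compared against:

```python
import os, hmac, hashlib, secrets

N_BYTES = 16  # toy output length, for illustration only

def prf(key: bytes, x: bytes) -> bytes:
    # Concrete keyed function under test (HMAC-SHA256, truncated).
    return hmac.new(key, x, hashlib.sha256).digest()[:N_BYTES]

def prf_game(distinguisher) -> bool:
    """One run of the PRF indistinguishability experiment.

    The random function exists only inside this definition: it is
    lazily sampled and compared against, never handed to a protocol.
    """
    b = secrets.randbelow(2)
    key = os.urandom(N_BYTES)
    table = {}  # lazy sampling of a truly random function

    def oracle(x: bytes) -> bytes:
        if b == 0:
            return prf(key, x)
        if x not in table:
            table[x] = os.urandom(N_BYTES)
        return table[x]

    return distinguisher(oracle) == b  # win = guess the bit b
```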

Incidentally, the authors note that a random oracle is not a one-way function, because a random oracle is not a fixed function; but to implement it in the real world, it must first be instantiated and its description fixed. So a hash function is fixed before the protocol is executed.

Do you mean "given a hardness assumption (possibly defined using some oracle), how can we be sure this hardness assumption does not imply a random oracle?"
– mikero Nov 15 '10 at 21:19

Yes. A part of my question is exactly about this.
– Yasser Sobhdel Nov 16 '10 at 6:43

2 Answers

Let us first step back and say more precisely (mathematically) what "the Random Oracle Model (ROM)" is, because it is fundamentally different from an "assumption" as they are thought of in cryptography.

One way to formalize the ROM is to see it as an alternative model of computation. In particular, algorithms in this model of computation have an extra distinguished state (or subroutine, or circuit gate, whatever) that computes some function $H:\{0,1\}^\ast\to\{0,1\}^n$, where $n$ is the security parameter. An algorithm $A$ is executed in this model by picking $H$ at random from all possible functions, and then running $A$ with its $H$ subroutine set to compute that function.
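One standard way to make "picking $H$ at random from all possible functions" concrete in a proof or simulation is lazy sampling: $H$ is never written down in full; each fresh query gets an independent uniform answer, and repeated queries get consistent answers. A minimal sketch (Python; the output length parameter is my own illustrative choice):

```python
import os

class RandomOracle:
    """Lazily sampled H: {0,1}* -> {0,1}^n (n = 8 * out_len bits).

    Distributionally equivalent to picking H uniformly up front:
    each new input gets an independent uniform output, cached so
    that repeated queries are answered consistently.
    """
    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]
```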

When we "analyze a protocol in the ROM", we mean first define the protocol via algorithms that have access to $H$, and then we prove a theorem about the protocol when it is attacked by adversaries that also have access to $H$. The theorem will say that something happens with high probability, where the probability is over the choice of $H$, amongst other things.

This model of computation is not realistic, however, because we do not have access to a truly random function. So instead we take our algorithms (which were defined in the ROM) and set $H$ to something efficiently computable, like SHA-256. Of course, the theorem does not actually cover such usage, but this approach is remarkably effective at avoiding wide classes of attacks against protocols. We just hope ("assume") that things will work out for the protocols under consideration, but in general this assumption is false [1].
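Instantiation then amounts to swapping the sampled $H$ for a fixed, efficiently computable function. A sketch, assuming SHA-256 truncated to the oracle's output length as the heuristic stand-in (the interface mirrors the `RandomOracle` sketch above):

```python
import hashlib

class InstantiatedOracle:
    """Heuristic instantiation of H as a fixed public function.

    Once H is fixed like this, the ROM theorem no longer literally
    applies; we only hope its conclusion survives the substitution.
    """
    def __init__(self, out_len: int = 32):
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        return hashlib.sha256(x).digest()[:self.out_len]
```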

A cryptographic assumption, on the other hand, says something about the limitations of the "standard model of computation", i.e., Turing machines, circuit families, etc. A distinguishing feature here is that assumptions might be true, but the ROM will never be "true". This holds even for interactive assumptions that involve weird oracles, even a random oracle.

This brings us to the point of confusion: when you sit down to design a protocol (algorithm) and prove it secure in the standard model under an assumption, you can't assume that your algorithm has access to any oracles, because it needs to be an algorithm in the usual sense. Even if your assumption has a random oracle represented somewhere in it, you still need to give a well-defined algorithm or protocol.

If I have understood your answer, you are saying that although we may be forced to treat some portions of our algorithm as a black box, as long as no random output is used we are not in the ROM. For a well-defined protocol, the same holds without any doubt.
– Yasser Sobhdel May 12 '11 at 18:48