For a certain application I need a commitment scheme where each user makes a commitment, and a single verification operation can verify all the commitments simultaneously, faster than verifying them one by one. It is similar to batch verification of signatures, except that each commitment is built on the previous one (which is not the usual setting for batch verification of signatures).

Formally:
Each user $i$ makes a commitment $C(i)$ to a value $x(i)$ as $C(i) := Commit(C(i-1), x(i), open(i))$, where $open(i)$ is the randomness used to compute the commitment (or nothing if the scheme is deterministic), and $C(0)$ is an initial value.

I think the Small Exponents Test could be adapted to the DL cumulative commitment scheme, so that the modexps use only $s$-bit exponents for a security parameter $s \approx 80$. But can it be done better?
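To make the question concrete, here is a toy sketch (my own illustration, not a fixed part of the question) of one way a Pedersen-style cumulative commitment and a small-exponents batch check might combine. The group parameters are tiny and completely insecure; a real instantiation would use a ~2048-bit group with independently generated $g, h$:

```python
import secrets

# TOY parameters, insecure, for illustration only.
p, q = 23, 11          # p = 2q + 1; subgroup of prime order q
g, h = 2, 3            # order-q elements (in practice, unknown DL relation)
S = 20                 # small-exponent bit length (use ~80 in practice)

def commit(prev, x, r):
    """Cumulative commitment: C(i) = C(i-1) * g^x * h^r mod p."""
    return (prev * pow(g, x, p) * pow(h, r, p)) % p

def batch_verify(C0, chain, xs, rs):
    """Check C(i) == C(i-1)*g^{x_i}*h^{r_i} for all i at once.

    Small exponents test: pick random small e_i and test
      prod_i (C(i)/C(i-1))^{e_i} == g^{sum e_i x_i} * h^{sum e_i r_i},
    which costs N small-exponent exps plus 2 full modexps.
    """
    es = [secrets.randbits(S) | 1 for _ in chain]
    lhs, prev = 1, C0
    for C, e in zip(chain, es):
        ratio = C * pow(prev, p - 2, p) % p   # C(i)/C(i-1) via Fermat inverse
        lhs = lhs * pow(ratio, e, p) % p
        prev = C
    ex = sum(e * x for e, x in zip(es, xs)) % q
    er = sum(e * r for e, r in zip(es, rs)) % q
    return lhs == pow(g, ex, p) * pow(h, er, p) % p

# Three users extend the chain, then everything is verified in one batch.
C0, xs, rs = 1, [3, 7, 2], [5, 1, 9]
chain, prev = [], C0
for x, r in zip(xs, rs):
    prev = commit(prev, x, r)
    chain.append(prev)
print(batch_verify(C0, chain, xs, rs))  # True
```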
– SDL, Aug 29 '11 at 15:05

2 Answers

It sounds like you are thinking about this the wrong way. Verifying a commitment is very fast if you choose the commitment scheme properly.

In particular, I recommend that you use a hash-function-based commitment scheme: C(i) = Hash(x(i) || open(i)). Then verifying an opened commitment requires just one hash evaluation, which is very fast. Based on my measurements with openssl speed sha1, you should be able to verify several million commitments per second, if you use a SHA1-based commitment.
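A minimal sketch of such a hash-based commitment (using SHA-256 rather than SHA-1, and fixed-length randomness so the concatenation x || open is unambiguous; these are my choices, not details fixed by the answer):

```python
import hashlib
import secrets

def commit(x: bytes):
    """Commit to x with fresh 128-bit randomness; returns (commitment, opening)."""
    opening = secrets.token_bytes(16)      # fixed length avoids x||open ambiguity
    c = hashlib.sha256(x + opening).digest()
    return c, opening

def verify(c: bytes, x: bytes, opening: bytes) -> bool:
    """Re-hash and compare: one hash evaluation per commitment."""
    return hashlib.sha256(x + opening).digest() == c

c, op = commit(b"heads")
print(verify(c, b"heads", op))  # True
print(verify(c, b"tails", op))  # False
```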

The question talks about modexps. That suggests that you are considering the wrong kind of commitment scheme. Modexps are much slower than hash evaluations. If you are choosing a modexp-based commitment scheme, you are going to end up with something significantly slower than the best possible. Rather than trying to make up for the suboptimal choice of commitment scheme by looking for fancy batching tricks, I recommend you simply move straight to a fast commitment scheme (based on a cryptographic hash), and then you won't need anything fancy. My guess is that straightforward use of a hash-based commitment is going to be faster than any modexp-based commitment scheme, even with batching.

True, in principle. There are certain differences, though. A deterministic cumulative commitment scheme requires only O(N) messages, while a hash-based approach would require O(N^2) messages (if everyone wants to verify all the commitments) and O(N) space for the commitments themselves.
– SDL, Aug 29 '11 at 16:01

@SDL, I confess I can't intuit what assumptions you are making in your complexity analysis. It sounds like you have a particular communication model and usage situation in mind. If so, please edit your question to clarify. Who are the parties? Which parties are committing to something? Which parties are verifying, and which of the commitments do they want to verify? (all of them? a subset?) What assumptions are you making about communications model? (are broadcasts possible?) We can't give you the best possible answer if we don't know what your requirements are.
– D.W., Aug 29 '11 at 17:59

Bandwidth-restricted point-to-point communication (no broadcast) between parties. All parties commit and all parties check, as in a fair coin toss. I wondered if there was something already developed that fits the cumulative commitment model. You are right that my requirements are not conventional.
– SDL, Aug 29 '11 at 18:38


@SDL: If every user should be able to check the commitment of every other user, you somehow have to transfer at least the x(i) from i to j. This is independent of the scheme used to verify the commitment.
– Paŭlo Ebermann, Aug 29 '11 at 19:14

@SDL, under your model, I don't see how your scheme can transmit fewer than O(N^2) messages. Paŭlo Ebermann explained it well. To put it another way: party j needs to receive x(1), x(2), ..., x(N) to be able to verify all commitments. This means that each party must receive O(N) data. In a communication model with no broadcast, that implies total communication of O(N^2).
– D.W., Aug 30 '11 at 15:42

One approach that may work is the following. You can represent the accumulated values as a polynomial, where the roots are equal to the messages:

User $1$ creates a representation of $P_1(x)=(x-C(1))$

User $i$ creates a representation of $P_i(x)=(x-C(i))P_{i-1}(x)$
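The accumulation step above is just multiplying a polynomial by a new linear factor. A small sketch in coefficient form (over the integers for readability; PolyCommit would work over a finite field, and the coefficient names here are my own illustration):

```python
def add_root(coeffs, c):
    """Multiply P(x), given as a coefficient list (lowest degree first),
    by the factor (x - c): new P(x) = x*P(x) - c*P(x)."""
    shifted = [0] + coeffs                   # x * P(x)
    scaled = [-c * a for a in coeffs] + [0]  # -c * P(x), padded to same length
    return [s + t for s, t in zip(shifted, scaled)]

# Users 1..3 fold their values in as roots, each building on the previous.
P = [1]                      # P_0(x) = 1
for Ci in [2, 5, 7]:
    P = add_root(P, Ci)
print(P)  # [-70, 59, -14, 1], i.e. (x-2)(x-5)(x-7) = x^3 - 14x^2 + 59x - 70
```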

A commitment scheme with properties very close to what you want is PolyCommit. It allows a polynomial of any degree to be accumulated into a single representation, and it supports batch evaluations (i.e., batch opening of the committed messages).

The challenge is adapting the construction, which is designed for a single party creating the commitment, to allow different parties to add their contributions to the underlying polynomial (and to perform a distributed opening of the commitment).

I haven't considered it beyond this preliminary sketch and so this isn't an answer, just "brainstorming."