Alice has a key Alice, Bob has Bob-Work and Bob-Friends. Peter has Peter.

Peter knows Alice. Alice expresses trust to Bob-Work.

Peter knows Bob-Friends.

The question is: how can Bob-Friends prove to Peter that Bob-Work is trusted by Alice (and therefore that Bob-Friends is too), without revealing that Bob-Work and Bob-Friends belong to the same person, and while retaining plausible deniability of that fact?

NOTE: We do not discuss timing attacks here (such as "Please express trust to THAT pseudonym, and I'll see who appears as trusted to me afterwards").

I have a possible solution involving a "credit system", where participants emit "trust tokens" which can then be transferred by their recipients. But this has the downside of having to store a LONG credit-transfer history. Is there a solution that does not need one?

1 Answer

An anonymous credential system relates three types of parties: authorities, users, and verifiers. An authority (Alice) can issue a credential to a user (Bob) certifying that the user satisfies some property (in your case, "is trusted"). Credentials are unforgeable. The user (Bob) can then prove to a verifier (Peter) that he possesses the credential, without revealing anything about himself other than possession of the credential (anonymity), even if the verifier colludes with other verifiers and/or the authority.

The way anonymous credential systems are typically implemented is that a user has a single private key sk and multiple corresponding public keys pk, pk', etc. (pseudonyms) that can be generated on the fly as randomized commitments to sk. The credential-issuing protocol is then a special kind of signature scheme that produces a signature on the value inside the commitment; in that way, Bob can obtain a credential from Alice under his pseudonym Bob-Work and prove possession of it under his pseudonym Bob-Friends, in such a way that the two pseudonyms cannot be linked to each other.
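To make the pseudonym part concrete, here is a toy sketch in Python of pseudonyms as randomized Pedersen commitments to a single secret key. This illustrates only the unlinkability mechanism, not the full scheme (the signature on the committed value is omitted), and all parameters here are tiny made-up values, nowhere near cryptographically secure:

```python
import secrets

# Toy group parameters (illustration only, NOT secure): p = 2q + 1 with
# q prime, so the quadratic residues mod p form a subgroup of prime order q.
q = 1019
p = 2 * q + 1          # 2039, also prime
g = pow(3, 2, p)       # generator of the order-q subgroup
h = pow(7, 2, p)       # second generator; log_g(h) must be unknown in practice

def new_pseudonym(sk: int) -> tuple[int, int]:
    """Return (pk, r) where pk = g^sk * h^r is a fresh Pedersen
    commitment to sk, using fresh randomness r each time."""
    r = secrets.randbelow(q)
    return (pow(g, sk, p) * pow(h, r, p)) % p, r

sk = secrets.randbelow(q)             # Bob's single secret key
bob_work, r1 = new_pseudonym(sk)      # pseudonym shown to Alice
bob_friends, r2 = new_pseudonym(sk)   # pseudonym shown to Peter

# Both pseudonyms open to the same sk, yet without r1 and r2 they are
# information-theoretically unlinkable (each is a uniformly random
# group element from an observer's point of view).
assert bob_work == (pow(g, sk, p) * pow(h, r1, p)) % p
assert bob_friends == (pow(g, sk, p) * pow(h, r2, p)) % p
```

In a real scheme, Alice's signature would be issued on the sk hidden inside bob_work, and Bob would prove in zero knowledge, under bob_friends, that he knows an opening of that commitment carrying a valid signature.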