On social scalability

TCRs have some well-known limitations. First, the utility they provide needs to be above a “minimum economy size”. How much of its $140B annual spending is the digital advertising industry willing to redirect to AdChain-verified domains — what’s the sum of value applicants to the registry will be staking to compete for? How much of the billionaire crypto investment industry are VCs willing to reallocate to tokens which happen to be in the Messari registry?

This requirement is not about the size of a list, but rather the economic value behind it. A short grocery list is unlikely to hold enough value to attract waves of applicants — unless it pertains to the Queen’s weekly supermarket plans.

Besides this “minimum economy size”, needed to drive new applicants and incentivise curators (token holders) through price increases, there’s the issue of a “maximum bandwidth”. A curatorship base cannot keep growing forever to adjudicate over an ever-growing number of items. An interesting unknown with TCRs is just how large they can become.

🎯 The canonical TCR: a narrow set of applications

TCRs are simple sorting machines. Black & white, binary registries: either you’re in, or you’re out. They are a good fit for lists whose focal point is very objective, where applying has a reason to be costly, and where curation expertise is neither too cheap nor too expensive. Even better if membership is capped, or naturally limited. Few real-world, valuable (that people would actually pay for) curation engines satisfy these constraints.

Objective focal point & truth: the propose-challenge game must be able to converge towards a truthful outcome — one that’s unquestionable in hindsight, but not previously evident or consensual (otherwise it may be simpler to just use an oracle).

Publicly observable evidence, cheap to adjudicate over: at one extreme, this leads to curatorship at zero marginal cost (e.g. “are this list’s websites all cookie-free?”), work that will likely be done by machines. At the other extreme, we get work that’s too expensive, and will more likely be done by an expert alone (e.g. “are the diamonds registered in this list all above 20 carats?”). TCRs are fit for anything in between, with specific shortcomings when it comes to highly controversial issues.

Justifying costly applications: a key aspect is the stake or fee (in the case of Messari) required to apply for a listing, since this is the money that ultimately moves curators. There has to be some unique value in the registry from the point of view of new entrants. Ideally, such cost can also be staked in other forms of non-financial capital, allowing for potentially more inclusive registries.

Capped or limited membership: lists with a naturally defined cap can benefit from a constant or ever-growing churn as they approach saturation and keep generating interest from potential applicants. This applies to social status tiers such as schools, clubs, VIP events, premium catalogues, and even charity, where social status signalling means philanthropists basically “spend to escalate” between tiers.

Curation markets vs. Token-curated registries

Curation Markets are a variant of Token Curated Registries that aim to achieve richer signalling (some would rather treat the former as a broader definition that encompasses the latter).

By translating curatorship into a non-discrete stake-based game, we can make for a continuous vetting system as opposed to a binary one. Think “grades of approval” (0 to 1), instead of the straightforward “yes” or “no” (0 or 1). Curation markets are more flexible, though even less battle-tested, than TCRs.

Each item i in the registry is associated with staked_i tokens, which signal belief in the item’s permanence in the list, and challenge_i tokens, which are staked as a signal of belief in the item’s removal.

To add a new item to the list, the applicant needs to stake at least MIN_DEPOSIT tokens into staked_i.

To simplify, we set the applyStageLen to zero, effectively morphing this phase into an indefinitely extended stakingStage: an item is by default in the list from the moment it’s applied, and can be challenged for removal at any point thereafter. This represents a tradeoff of security for scalability.

While an item is in the registry but not being voted on, others can freely add tokens to either staked_i or challenge_i. This is the "faites vos jeux" period, or the stakingStage.
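The bookkeeping described so far can be sketched in a few lines — a minimal, illustrative model, not the interface of any real TCR contract (the names `Item`, `MIN_DEPOSIT`, and all values are assumptions):

```python
MIN_DEPOSIT = 100  # minimum application stake (illustrative value)

class Item:
    """One listed item, with its two opposing 'faites vos jeux' pools."""

    def __init__(self, owner, deposit):
        if deposit < MIN_DEPOSIT:
            raise ValueError("application must stake at least MIN_DEPOSIT")
        # staked_i: tokens signalling belief the item should remain listed
        self.staked = {owner: deposit}
        # challenge_i: tokens signalling belief the item should be removed
        self.challenge = {}
        self.in_commit_stage = False  # True once PLCR voting begins

    def stake_for(self, who, amount):
        # anyone may back the item while no vote is running
        assert not self.in_commit_stage
        self.staked[who] = self.staked.get(who, 0) + amount

    def stake_against(self, who, amount):
        assert not self.in_commit_stage
        self.challenge[who] = self.challenge.get(who, 0) + amount

    @property
    def staked_total(self):
        return sum(self.staked.values())

    @property
    def challenge_total(self):
        return sum(self.challenge.values())
```

With applyStageLen at zero, constructing an `Item` is both the application and the listing; the two pools then accumulate stakes freely until voting starts.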

As soon as “certain conditions are met” (see section below), the stakingStage ends, and the commitStage for the PLCR face-off voting begins.

Voting works just as in the canonical TCR: there are pre-fixed commitStageLen and revealStageLen periods, during which any token holder can participate, and votes are stake-weighted and obfuscated via the PLCR scheme.

After the revealStage ends, votes are tallied. In the case of a delisting, dispensed tokens (a dispensationPct of all staked_i, including the original listing owner’s deposit and further stakes) are shared among the challenge_i “counterstakers”, and the remaining staked_i tokens are shared among those who voted “remove”; in the opposite case, mutatis mutandis, the original staked_i stakers win.
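As a worked example of this payout split on a delisting — all amounts and the dispensationPct value are illustrative assumptions:

```python
def delisting_payouts(staked_total, challenge_stakes, dispensation_pct):
    """On a 'remove' outcome: a dispensation_pct slice of all staked_i tokens
    is shared pro rata among challenge_i counterstakers; the remainder forms
    the pool for those who voted 'remove'."""
    dispensed = staked_total * dispensation_pct
    challenge_total = sum(challenge_stakes.values())
    rewards = {who: dispensed * amount / challenge_total
               for who, amount in challenge_stakes.items()}
    voter_pool = staked_total - dispensed
    return rewards, voter_pool

# 200 staked_i tokens forfeited, 50% dispensation, two counterstakers:
rewards, pool = delisting_payouts(200, {"carol": 80, "dan": 20}, 0.5)
# rewards == {"carol": 80.0, "dan": 20.0}; pool == 100.0
```

Note how the counterstakers split their half in proportion to their stakes, while the other half goes to the (separate) set of winning voters.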

One problem with the “fixed stake that can be challenged by matching that stake” approach of the canonical TCR is that it makes it more attractive to challenge “easy” items, i.e. those nearly certain to win or lose a challenge. Items that are somehow “ambiguous” may never be put to a vote, simply because challengers will go for the low-hanging fruit, which holds equal rewards for less risk.

The scheme proposed above tries to address this issue by allowing for more granular signalling. Non-controversial items are expected to attract more stakes, increasing the “honeypot” — but also, from the perspective of counterstakers, the number of people the winnings will have to be shared among, making effective payouts lower.

Risk-taking is rewarded, since optimising for the biggest “honeypots” with the fewest people to share with means aiming to vote against the majority of stakers or counterstakers, in cases where the difference between both sides of the “faites vos jeux” prediction market is high (big honeypot, one side with few people). The most profitable opportunities require the most guts, too: since the voting that determines the payout is obscured, rational actors will likely assume the majority of voters will follow the prediction market signalling, making it tougher to stake on the other side.

To be more precise, given a subjective probability P_i that an item will be voted to remain in the list, the expected net payout for each token added to staked_i is (disregarding opportunity costs):

P_i * challenge_i / staked_i - (1 - P_i)

Rational stakers will add to staked_i if this value is > 0, and to challenge_i if it is < 0. If we oversimplify and assume P_i is the same for all token holders, there is a natural equilibrium where P_i * challenge_i / staked_i - (1 - P_i) = 0, or challenge_i / staked_i = (1 - P_i) / P_i.

Divergences from this natural equilibrium represent arbitrage opportunities for curators. The description is grossly underspecified, but hints at how more sophisticated signalling can happen through stakes and counterstakes.
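A quick numeric check of this reasoning — a sketch assuming each staked token wins a pro-rata share of the opposing pool if its side prevails and is lost otherwise (opportunity costs disregarded):

```python
def ev_stake(p_remain, staked, challenge):
    # expected net value of one token added to staked_i:
    # win a share of challenge_i with probability p, lose the token otherwise
    return p_remain * (challenge / staked) - (1 - p_remain)

def ev_challenge(p_remain, staked, challenge):
    # symmetric expected net value of one token added to challenge_i
    return (1 - p_remain) * (staked / challenge) - p_remain

# At the equilibrium ratio challenge/staked = (1 - p)/p, neither side has an edge:
p = 0.8
print(ev_stake(p, 400, 100))      # ratio 0.25 == (1 - 0.8)/0.8 -> 0.0
print(ev_challenge(p, 400, 100))  # -> 0.0

# If counterstakes overshoot the ratio, staking "remain" becomes profitable:
print(ev_stake(p, 400, 200))      # -> 0.2 per token: an arbitrage opportunity
```

Divergence in either direction creates a positive-expected-value side, which is exactly the arbitrage that pulls the pools back towards the equilibrium ratio.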

Indefinite application stage, random ‘challenging’

When does the stakingStage end, so that an item’s listing goes into the commitStage? Given the considerations above, randomly selecting listees to be voted upon seems feasible. We outline three possible approaches:

Initiating the commitStage for all items on which the MIN_DEPOSIT is met by “faites vos jeux" challenges, i.e. where challenge_i >= MIN_DEPOSIT, at every timed round — this is close to how the canonical TCR works.

Initiating the commitStage for every item on which staked_i is matched by “faites vos jeux" challenges, i.e. where challenge_i >= staked_i.

Selecting randomly, among the latter, a predefined percentage of items to actually be voted on at every timed round, with challenge_i - staked_i weighting the probability of each item being picked. The percentage can vary according to the number of new applicants to the list.
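The third approach could be sketched as follows — the function name, the eligibility rule, and the weighting-by-margin choice are all assumptions for illustration, not a specification:

```python
import random

def pick_for_voting(items, fraction, rng=random):
    """items: mapping name -> (staked_i, challenge_i). Only items where
    challenge_i >= staked_i are eligible; the challenge margin weights each
    item's chance of entering the commitStage this round."""
    eligible = {name: c - s for name, (s, c) in items.items() if c >= s}
    if not eligible:
        return set()
    k = max(1, round(len(eligible) * fraction))
    names = list(eligible)
    weights = [eligible[n] + 1 for n in names]  # +1 keeps zero-margin items pickable
    picked = set()
    while len(picked) < k:
        picked.add(rng.choices(names, weights=weights)[0])
    return picked

# With these pools, only "a" and "c" are eligible, and "a" (margin 50) is
# more likely to be drawn than "c" (margin 0):
items = {"a": (100, 150), "b": (100, 90), "c": (100, 100)}
chosen = pick_for_voting(items, 0.5)
```

Weighting by the margin means heavily counterstaked items tend to face a vote sooner, while lightly challenged ones can still be drawn occasionally, preserving some deterrence for every listee.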

🔮 Prediction markets & holographic consensus

Large lists suffer from a “bandwidth”, or “throughput” limitation in their propose-challenge mechanism. To maintain the level of diligence over an ever-growing registry, voter turnout must rise accordingly.

With the proposed scheme, we effectively get a prediction market on the outcome of votes. This invokes Matan Field’s idea of holographic consensus, in which predictions rather than actual votes are used to achieve objective results (in a hologram, every piece contains information about the whole). The decision to keep or delist an item is only occasionally exerted by a plenary among all token holders (which keeps the prediction market well-behaved). In most cases, “professional” speculators do just as fine, betting on the outcome of voting and reflecting the will of the majority.

Beyond all those that will die before ever truly existing, the distributed curated registries that thrive will likely have to adapt ferociously to achieve proper protocol-market fit. We believe these will eventually spawn their own sub-categorising and self-governing mechanisms. And that’s how both predictions shared in the beginning of this text may be true at the same time.