Workshop on Blockchain Technology and Theory - Abstracts

Service Discovery for Hyperledger Fabric

Hyperledger Fabric (HLF) is a modular and extensible permissioned
blockchain platform released as open source and hosted by the Linux
Foundation. The platform's design exhibits principles required by
enterprise-grade business applications such as supply chains, financial
transactions, asset management, and food safety, among others. To that
end, HLF introduces several innovations, two of which are smart contracts
in general-purpose languages and flexible endorsement policies, which
govern whether a transaction is considered valid.
Typical blockchain applications comprise two tiers: the first tier
models the data schema and embeds business rules into the blockchain by
means of smart contracts (chaincode) and endorsement policies; the
second tier uses the SDK (Software Development Kit) provided by HLF to
implement client-side application logic.
However, there is a gap between the two tiers that hinders the rapid
adoption of changes in the chaincode and endorsement policies within the
client SDK. Currently, the chaincode location and endorsement policies
are statically configured into the client SDK. This limits the
reliability and availability of the client in the event of changes in the
platform, and makes the platform more difficult to use. In this work we
address and bridge the gap by describing the design and implementation of
Service Discovery.
Service Discovery provides APIs which allow dynamic discovery of the
configuration required for the client SDK to interact with the platform,
relieving the client of the burden of maintaining it. This enables
the client to rapidly adapt to changes in the platform, thus
significantly improving the reliability of the application layer. It also
makes the HLF platform easier to consume, simplifying the job of creating
blockchain applications.
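To make the gap concrete, the following toy sketch contrasts static
configuration with runtime discovery. All names here (the registry, the
`discovery_query` and `endorse` functions, the peer addresses) are
hypothetical illustrations, not the actual Fabric SDK API: a discovery
service returns the current peers and endorsement policy for a chaincode,
so the client never hardcodes them.

```python
# Hypothetical illustration of dynamic discovery (NOT the Fabric SDK API).

def discovery_query(chaincode):
    """Stand-in for a discovery service: maps a chaincode name to the
    peers and endorsement policy currently published by the platform."""
    registry = {
        "asset-transfer": {
            "peers": ["peer0.org1:7051", "peer0.org2:9051"],
            "policy": "AND(Org1MSP.member, Org2MSP.member)",
        }
    }
    return registry[chaincode]

def endorse(chaincode, tx):
    # The endorsement plan is fetched at runtime, not baked into the client,
    # so a policy or peer change on the platform needs no client redeploy.
    plan = discovery_query(chaincode)
    return {"tx": tx, "endorsers": plan["peers"], "policy": plan["policy"]}

proposal = endorse("asset-transfer", {"op": "transfer", "amount": 10})
```

If the platform later moves the chaincode or tightens the policy, only the
registry (played here by the discovery service) changes; the client logic
stays untouched.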

Spacemesh

We combine Proof of Space-Time (PoST) with a mesh-based permissionless
consensus protocol. The mesh (layered DAG) does not rely on
leader-election. Rather, we incorporate ideas from Byzantine agreement
protocols to guarantee convergence to a consensus from any starting
state. Our construction combines a "local" protocol that guarantees fast
consensus on recent blocks, with a "global" protocol that guarantees
irreversibility. Our global protocol also allows the ledger to
"self-heal" from arbitrary violations of the security assumptions,
reconverging to consensus after the assumptions hold again. The mesh is
also designed to be race-free: there is no "race" to generate the next
block and honestly generated blocks are always rewarded. This property,
which we define formally as a game-theoretic notion, makes the protocol
incentive-compatible and ensures linearity of rewards. A PoST allows a
prover to convince a verifier that she spent a "space-time" resource
(storing data over a period of time). A rational PoST construction
implies that it is more profitable for a miner to use the lower-cost
space-time resource over CPU work. Compared to PoW, PoST requires less
energy use, as the difficulty can be increased by extending the time
period over which data is stored without increasing computation costs
(with a market-based mechanism for readjustment). Yet, PoST retains some
useful properties of PoW, and therefore has advantageous aspects that
other energy-efficient mechanisms (in particular Proof of Stake) lack.
This is joint work with Julian Loss and Tal Moran.
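The prover/verifier interaction behind a proof of space can be sketched
with a toy spot-check (illustrative only, and much simplified relative to
the Spacemesh construction; the `label` derivation and parameters are
invented for the example): the prover commits storage by keeping N label
bytes derived from its identity, and the verifier challenges random
indices.

```python
import hashlib
import random

N = 1024  # toy storage size, in labels

def label(identity, i):
    # Deterministic per-index label the prover is supposed to store.
    return hashlib.sha256(identity + i.to_bytes(4, "big")).digest()

def init_storage(identity):
    # Prover pays the space cost once, then simply keeps the data over time.
    return [label(identity, i) for i in range(N)]

def prove(storage, challenge):
    # Answer a challenge by revealing the stored labels at those indices.
    return [storage[i] for i in challenge]

def verify(identity, challenge, proof):
    # Verifier recomputes only the challenged labels.
    return all(label(identity, i) == p for i, p in zip(challenge, proof))

identity = b"miner-1"
storage = init_storage(identity)
challenge = random.sample(range(N), 16)
assert verify(identity, challenge, prove(storage, challenge))
```

Note what this toy deliberately omits: in a real PoST the labels cannot be
cheaply recomputed on demand, and the "time" dimension is enforced by
repeated challenges over a period, which is exactly where the
energy-versus-difficulty trade-off described above comes from.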

Bitcoin's Privacy Problem, and How to Fix it

In Bitcoin, the details of a payment are broadcast in the clear, so that
anyone can verify its validity. Unfortunately, this violates user privacy
and sacrifices coin fungibility. I will describe the Zerocash protocol,
which uses zero knowledge proofs to achieve privacy-preserving payments
in a Bitcoin-like system. This protocol has been deployed in the wild, as
part of the cryptocurrency Zcash.

Efficient MPC meets Blockchains

MPC protocols allow a set of mutually distrustful parties to jointly
compute a function without revealing their private inputs. It has been
suggested that MPC can be combined with blockchain systems to achieve
two goals: (1) determine cash distribution according to private inputs;
(2) improve fairness of MPC protocols through financial punishments for
misbehaving parties. In this talk, we will present special-purpose and
general-purpose MPC
protocols that can be efficiently combined with blockchain systems and
their applications, such as gambling, distributed cryptocurrency
exchanges and private smart contracts. First, we will present a general
approach for combining MPC protocols with public verifiability and
cheater identification properties with blockchain systems via smart
contracts. Following this approach, we will discuss a special-purpose
protocol for card games and an efficient general-purpose MPC protocol for
boolean circuits that achieve these properties. Using these protocols and
auxiliary building blocks, we will then show how to realize applications
such as the ones discussed above.
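The fairness-via-punishment idea can be sketched in isolation. The
contract below is a hypothetical toy of my own (not one of the protocols
from the talk): every party locks collateral, and if the MPC layer's
cheater identification names a misbehaving party, its deposit is
redistributed among the honest parties.

```python
# Toy sketch of financial punishment for identified cheaters
# (hypothetical contract, illustrative only).

DEPOSIT = 100  # collateral each party must lock before the MPC starts

class FairnessContract:
    def __init__(self, parties):
        self.balance = {p: DEPOSIT for p in parties}  # locked collateral

    def settle(self, cheaters):
        # Called with the set of parties the MPC protocol publicly
        # identified as cheaters; their deposits compensate honest parties.
        honest = [p for p in self.balance if p not in cheaters]
        forfeited = sum(self.balance[c] for c in cheaters)
        for c in cheaters:
            self.balance[c] = 0
        for p in honest:
            self.balance[p] += forfeited // len(honest)
        return self.balance

contract = FairnessContract(["alice", "bob", "carol"])
print(contract.settle(cheaters={"carol"}))  # carol forfeits; others gain 50
```

The point of public verifiability and cheater identification in the
protocols above is precisely to make the `cheaters` argument something a
smart contract can check, rather than something it must take on trust.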

Formalizing and Implementing Distributed Ledger Objects

Despite the hype about blockchains and distributed ledgers, no formal
abstraction of these objects has been proposed. In this talk, I will
overview a formulation of a distributed ledger object that my
collaborators and I have recently proposed. In brief, we define a ledger
object as a sequence of records, and we provide the operations and the
properties that such an object should support. Implementation of a
ledger object on top of multiple (possibly geographically dispersed)
computing devices gives rise to the distributed ledger object. In
contrast to the centralized object, distribution allows operations to be
applied concurrently on the ledger, introducing challenges to the
consistency of the ledger at each participant. We provide the definitions
of three well known consistency guarantees in terms of the operations
supported by the ledger object: (1) atomic consistency (linearizability),
(2) sequential consistency, and (3) eventual consistency. We then provide
implementations of distributed ledgers on asynchronous message passing
crash-prone systems using an Atomic Broadcast service, and show that they
provide eventual, sequential or atomic consistency semantics. The talk
will conclude with a discussion on a variation of the ledger – the
validated ledger – which requires that each record in the ledger
satisfies a particular validation rule.
Joint work with Antonio Fernandez Anta (IMDEA Networks Institute, Spain),
Kishori Konwar (MIT, USA) and Nicolas Nicolaou (KIOS CoE, Cyprus).
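The sequential (centralized) object underlying the abstraction can be
rendered as a short sketch. This is my own toy reading of the interface
described above (the method names and the `no_overdraft` rule are
illustrative), including the validated variant, which admits a record
only if it passes a validation rule over the sequence so far.

```python
# Toy rendering of a ledger object and its validated variant.

class Ledger:
    """A sequence of records supporting append and get."""

    def __init__(self):
        self._records = []

    def append(self, record):
        self._records.append(record)
        return "ACK"

    def get(self):
        # Returns the whole sequence appended so far, in order.
        return list(self._records)

class ValidatedLedger(Ledger):
    """Admits a record only if it satisfies a validation rule."""

    def __init__(self, valid):
        super().__init__()
        self._valid = valid  # predicate over (current sequence, new record)

    def append(self, record):
        if not self._valid(self.get(), record):
            return "NACK"
        return super().append(record)

# Example validation rule: a running balance may never go negative.
def no_overdraft(seq, rec):
    return sum(seq) + rec >= 0

l = ValidatedLedger(no_overdraft)
assert l.append(5) == "ACK" and l.append(-10) == "NACK"
```

The distributed versions discussed in the talk replicate this object over
crash-prone nodes; the consistency conditions (atomic, sequential,
eventual) then constrain how the concurrently invoked `append` and `get`
operations of different participants may interleave.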

Byzantine Protocols on a Good Day

Interest in scaling Byzantine fault tolerant solutions to hundreds or
thousands of participants arises in various blockchain settings: it is a
prerequisite in permissioned settings. It is needed for robust sampling of
sub-committees in permissionless settings. And it is needed for
proof-of-stake settings in order to sustain decentralized
control. However, various Byzantine problems incur quadratic
communication complexities in the worst case. In particular, reaching
agreement by a group of n=3f+1 parties requires Omega(n^2) communication
in the worst case, even in purely synchronous settings. Sharing a secret
key to a Byzantine group, a fundamental setup for multi-sigs and other
forms of secure computation, requires quadratic communication in
asynchronous settings. For moderate to large system sizes, e.g.,
n=2,000, incurring O(n^2) complexities may be prohibitive. The focus of
this talk is on escaping these quadratic complexities in many common
cases. We explain how to harness known and new methods to remove
unnecessary quadratic complexities both for BFT consensus and for
asynchronous verifiable secret sharing.

Betting on Blockchain Consensus with Fantomette

Blockchain-based consensus protocols present the opportunity to develop
new protocols, due to their novel requirements of open participation and
explicit incentivization of participants. In this work, we propose a new
leader election protocol, Caucus, which scales to a large and untrusted
set of participants. We fit this leader election protocol into a broader
blockchain-based consensus protocol, Fantomette, that treats
incentivization as a first-class concern but avoids the
resource-intensive proofs-of-work used in Bitcoin, and argue for the
security and cost-effectiveness of both components of the system. This
is joint work with Sarah Azouvi and Patrick McCorry.

Privacy-Preserving Auditing for Distributed Ledgers

Distributed ledgers (e.g. blockchains) enable financial institutions to
efficiently reconcile cross-organization transactions. For example, banks
might use a distributed ledger as a settlement log for digital assets.
Unfortunately, these ledgers are either entirely public to all
participants, revealing sensitive strategy and trading information, or
are private but do not support third-party auditing without revealing the
contents of transactions to the auditor. Auditing and financial oversight
are critical to proving institutions are complying with regulation.
zkLedger is the first system to protect ledger participants’ privacy and
provide fast, provably correct auditing. Banks create digital asset
transactions that are visible only to the organizations party to the
transaction, but are still publicly verifiable. An auditor sends queries
to banks and gets a response and cryptographic assurance that the
response is correct. We implemented a distributed version of zkLedger
that can produce provably correct answers to auditor queries on a ledger
with a hundred thousand transactions in less than 10 milliseconds.
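The auditing pattern rests on additively homomorphic commitments. The toy
below (a Pedersen-style commitment over a small prime group with invented
toy parameters; not zkLedger's actual construction or parameter sizes)
shows the core check: an auditor can verify a claimed total against the
product of per-transaction commitments without ever seeing the individual
values.

```python
# Toy additively homomorphic commitments for audit-style sum checks.
import random

p = 2**61 - 1  # toy prime modulus (far too small for real cryptography)
g, h = 3, 7    # toy generators

def commit(value, r):
    # Pedersen-style commitment: hides `value` under blinding factor `r`.
    return (pow(g, value, p) * pow(h, r, p)) % p

# The bank posts one commitment per transaction; amounts stay hidden.
values = [40, 15, 25]
rs = [random.randrange(p - 1) for _ in values]
comms = [commit(v, r) for v, r in zip(values, rs)]

# Auditor query: "what is your total?" The bank answers with the sum and
# the sum of blinding factors; the auditor checks the answer against the
# product of the public commitments, never learning individual values.
total, r_total = sum(values), sum(rs)
product = 1
for c in comms:
    product = product * c % p
assert product == commit(total, r_total)
```

The multiplicative combination of commitments corresponds to addition of
the hidden values, which is why sum-style audit queries can be answered
with a single opening rather than by revealing each transaction.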

Analyzing Blockchain Consistency

A few recent works [GKL15, PSS16, GKL17] have analyzed important properties of blockchains, including most significantly, *consistency*, which is a guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol.
To analyze consistency, the prior analysis of Pass, Seeman, and Shelat required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto's protocol. The work of Garay, Kiayias, and Leonardos provides another method of analyzing the blockchain, under the simplifying assumption that the network is synchronous.
Here we develop a simple Markov-chain based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about consistency of blockchains:

Our method identifies an error in the original analysis of Pass et al. Our new analysis provides a tighter guarantee on the
consistency property of Nakamoto's protocol, including for parameter regimes which PSS could not study;

We analyze a family of delaying attacks and extend them to other protocols;

We analyze how long a participant should wait before considering a high-value transaction "confirmed";

We analyze the consistency of CliqueChain, a variation of the Chainweb system;

We provide the first rigorous consistency analysis of GHOST and also analyze a folklore "balancing" attack.

In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages. We hope our techniques enable authors of future blockchain proposals to provide better analysis of their schemes.
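As a much simpler flavour of the "how long should I wait" question, the
snippet below computes Nakamoto's classic catch-up probability for an
attacker holding hash-power fraction q who is z blocks behind. This is
the Bitcoin whitepaper's calculation, reproduced here for intuition; it
is not the Markov-chain method of this talk, which handles network delay
and protocol variations the simple formula cannot.

```python
# Nakamoto's attacker catch-up probability (Bitcoin whitepaper, section 11).
from math import exp, factorial

def catch_up(q, z):
    """Probability an attacker with hash-power fraction q ever catches up
    from z blocks behind, assuming the honest majority has fraction 1-q."""
    p = 1 - q
    lam = z * q / p  # expected attacker progress while honest miners find z
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        # If the attacker has found k blocks, it still trails by z - k and
        # must win a biased random walk with per-step odds q/p.
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

print(round(catch_up(0.1, 6), 6))  # ~0.000243, matching the whitepaper table
```

The "six confirmations" folklore corresponds to driving this probability
below roughly 0.1% against a 10% attacker; the Markov-chain framework in
the talk plays the analogous role for protocols and network models where
no such closed-form walk is available.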

PHANTOM and GHOSTDAG: A Generalization of Satoshi's Longest Chain Rule

Bitcoin's consensus rule can be seen as a transaction ordering
protocol---transactions that were included in the longest chain of blocks
precede those included in other chains. The slow block creation in Bitcoin,
together with the cooperation and communication of honest miners (which are
assumed to possess a majority of the proof-of-work), guarantee that the
longest chain is robust, and that replacing large portions of it becomes
exponentially difficult. In this work we present a generalization of this
paradigm, based on our PHANTOM protocol. Our framework follows closely
Satoshi's proof-of-work paradigm, except that blocks may have multiple
predecessors, which results in a structure of a Directed Acyclic Graph of
blocks -- a blockDAG. This blockDAG may contain conflicting transactions.
In order to recover consistency, PHANTOM outputs an ordering over
transactions in the blockDAG, and accepts transactions chronologically
according to this order, provided that they are consistent with those
accepted before them. We introduce the notion of a $k$-cluster, which is a
set of blocks in the blockDAG that are "well connected" (as we formally
define). PHANTOM's ordering rule utilizes the largest $k$-cluster to output
an order over transactions. Satoshi's longest chain rule can be seen as a
special case of PHANTOM, suitable for a slow block creation rate, with $k=0$.
We further introduce the notion of a $k$-chain, and devise a protocol that
selects the heaviest $k$-chain. This protocol is called GHOSTDAG, and it is
essentially a greedy version of PHANTOM. We prove that GHOSTDAG enjoys
consensus properties similar to those of the longest chain rule (which
here too corresponds to $k=0$), without requiring that the block creation
rate be kept negligible relative to the propagation delay.
GHOSTDAG and PHANTOM assume an upper bound on the propagation delay, and
use this assumption to set their parameter $k$ appropriately. This allows
PHANTOM and GHOSTDAG to scale up the block creation rate without
compromising the security and robustness of the ordering rule, thereby
alleviating a severe scalability limitation of Satoshi's original protocol.
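The "well connected" intuition can be made concrete with a toy anticone
computation on a blockDAG (my own simplified rendering, not the paper's
algorithm): block B's anticone is the set of blocks that neither reach B
nor are reached from B. In a $k$-cluster, every block's anticone inside
the cluster has size at most $k$; with $k=0$ the DAG must be a chain.

```python
# Toy anticone computation on a blockDAG (each block maps to its parents).

def ancestors(dag, b):
    # All blocks reachable from b by following parent links.
    seen, stack = set(), list(dag[b])
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(dag[x])
    return seen

def anticone(dag, b):
    # Blocks unordered with respect to b: neither its ancestors nor the
    # blocks that have b as an ancestor.
    past = ancestors(dag, b)
    future = {x for x in dag if b in ancestors(dag, x)}
    return set(dag) - past - future - {b}

# Genesis G, with B and C mined in parallel on top of it, merged by D.
dag = {"G": [], "B": ["G"], "C": ["G"], "D": ["B", "C"]}
print(anticone(dag, "B"))  # {'C'}: B and C are mutually unordered
```

Here the whole DAG forms a 1-cluster (every anticone has size at most 1)
but not a 0-cluster, since B and C are unordered; this is exactly the
situation a fast block creation rate produces and the chain rule cannot
accommodate.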

Game of Coins

We formalize the current practice of strategic mining in
multi-cryptocurrency markets as a game, and prove that any
better-response learning in such games converges to equilibrium. We then
offer a reward design scheme that moves the system configuration from any
initial equilibrium to a desired one for any better response of the
miners. Our work introduces the first multi-coin strategic attack for
adaptive and learning miners, as well as the study of reward design in a
multi-agent system of learning agents.