
In this post we describe a concrete design enabling privacy for e-participation platforms falling under the category of social information filtering systems. This is an extension of work presented in the User Anonymization for Decidim Barcelona report. Although this report was written in the context of the Decidim e-participation tool, the techniques generalize to any platform with similar characteristics and intent. Continue reading here.

I once heard a metaphor about progress that I now use when talking about AI. Don’t know where I heard it first, so can’t give credit.

Imagine you want to get to the moon, but you don’t know how. Someone suggests using a ladder propped up against a tree. So a ladder is built, put in place, and the climb begins. To be scientific about it, distance to the moon is measured during the ascent. Sure enough, the distance decreases, all the way to the top of the ladder. To get closer, bigger and bigger ladders are constructed, propped up against taller and taller trees. The enterprise is making progress, as the distance to the moon diminishes with each attempt. If we continue on this way, we will surely make it to the moon.

Or you could try building a rocket. However, work on the rocket does not produce progress of the same kind. You can build all the rocket parts correctly, but building any one of them won't result in any direct progress as defined by distance to the moon. By that metric, there is no progress at all.

The metaphor illustrates that progress towards an objective does not imply that the employed solution will scale all the way towards reaching its goal. Conversely, it also illustrates that a solution that will scale may not reveal any progress during its development. These two situations are the core of what makes progress towards AI difficult to assess. If we factor in the hype, things are even more confusing because there is a propensity to report progress on advances that may be of the ladder type rather than the rocket type.

Because we have not solved AI, there is no way to know to which type a given advance belongs. We can have intuitions. For example, an advance that involves generality and can solve many different problems is more likely to be a rocket-type component. But not necessarily so. Most techniques in AI developed from the 1960s to the present day will never scale to full AI, and surely many of them were thought to. And today, what can we say of the latest techniques in deep learning? Perhaps they are a rocket component that solves a small piece of the puzzle, such as low-level perception. Or maybe in 20 years' time they will be remembered as a promising direction that ultimately failed to overcome its data efficiency issues.

But there is one general and large scale trend that we can be confident is bringing us closer to AI, if only in the vague chronological sense. This trend is the continued alignment of the economy with AI research. We can take a previous and related example to illustrate the phenomenon: the semiconductor industry. Before the information age, the allocation of resources to the development of microprocessors and related technologies was orders of magnitude smaller than today. When society entered the information age, the explosion in demand for devices resulted in a shift in resources that gave rise to the exponential advances known as Moore’s law.

Of course, a resource shift is in itself not sufficient to guarantee explosive advances; rather, these are the culmination of previous enabling scientific advances that typically occur at a slower pace. However, the limiting factor in scientific progress is typically aggregate cognitive power, which is itself one of the resources being allocated. Whether we are talking about a slow accretion of knowledge or a fast explosion of applications, the two can only occur when the right people are working on the right task, which brings us back to the conditions surrounding those people when they decide what to work on.

Today, we have entered another phase of the information age. A phase in which society is demanding services that project great value onto technologies that can deliver analysis, prediction and automation in data intensive domains. This has shifted huge resources towards AI research. Whether or not specific AI advances are real progress towards AI, the large scale activity resulting from a stronger economic alignment is very likely producing overall progress. And this would be true even if we restricted ourselves to enabling conditions such as increasing computational power or accumulation of training data.

We may not know which are the ladders and which are the rocket components. But we do know that as the resources in play rise, the chances that somewhere some group is building rocket components become increasingly significant.

This is the subject of a new post on the nvotes blog, found here. Here’s the summary:

The term “secure voting” is generally thought to refer to cybersecurity and resistance to cyberattacks.

However, cybersecurity is a general property of hardware/software that does not reflect the specific requirements of voting as a political process. The secret ballot is an established and indispensable requirement for voting.

Secure voting systems must support privacy as well as integrity; these two requirements stand in opposition.

In a system supporting privacy, no one, not even system administrators or other privileged observers can violate the secret ballot.

In a system supporting end-to-end verifiability, voters can ensure that their vote was cast, recorded, and counted correctly.

This post will describe the main steps and operations that compose the cryptographic protocol of a re-encryption mixnet based voting system we are currently prototyping. This prototype is based around work[1][2] by the E-Voting Group at the Bern University of Applied Sciences, and uses their unicrypt library. The main elements of the cryptographic scheme, listed to roughly correspond chronologically with the protocol phases, are: distributed key generation, vote casting, mixing, joint decryption, and tally. Here they are at a glance.

This example shows two authorities both for key generation/joint decryption and for mixing, but the protocol generalizes to any number of authorities. Also note that although above the key generation/decryption authorities are the same as the mixing authorities, this need not be the case. One could have four authorities such that two of them were key custodians and two of them were mixers. It is standard practice, however, for the number of authorities of each type to be the same: there would be no privacy gain in having more of one type, as the limiting factor would be the smaller number.

Key generation

In the first step the key generation/decryption authorities jointly create the election public key. This is the key with which voters will encrypt their votes before casting them. As can be seen in the diagram, this process occurs in parallel at each authority. Furthermore, the simple distributed key generation scheme does not require communication between the authorities (as would be the case, for example, with a threshold scheme such as Joint-Feldman/Pedersen). Each authority creates its share of the key and posts the public fragment, along with proofs of correctness, to the bulletin board. The bulletin board then checks the proofs and combines the shares, resulting in the public key, which is also posted. The purpose of a distributed key generation scheme is to distribute trust such that the privacy of the vote is safe as long as just one of the authorities is not corrupted. Note that the corresponding private key only exists conceptually as the combination of private information at each authority; it is never recreated.
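As an illustrative sketch (not the unicrypt implementation, and with the proofs of correctness omitted), distributed ElGamal key generation might look like this, using a toy group far too small for real use:

```python
import random

# Toy ElGamal group parameters (illustrative only; real elections use ~2048-bit groups)
p, g = 2579, 2

def keygen_share():
    x = random.randrange(1, p - 1)   # authority's private key share, never revealed
    return x, pow(g, x, p)           # public fragment posted to the bulletin board

# each authority generates its share independently, with no communication
x1, h1 = keygen_share()
x2, h2 = keygen_share()

# the bulletin board combines the public fragments into the election public key
pk = (h1 * h2) % p                   # equals g^(x1 + x2) mod p
```

The private key x1 + x2 never exists in one place; each authority holds only its own share.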

Voting

Once the public key is generated and publicly available on the bulletin board the election can begin. When casting votes, users' voting clients, in this case the voting booth hosted by the browser, download the public key from the bulletin board. Once users have made their selection, the voting client encrypts the ballot and produces a hash. Before casting, users are presented with the option to audit their ballots according to Benaloh's cast-or-cancel procedure. This provides cast-as-intended verifiability. Finally, the ballot is cast and sent to the bulletin board. The bulletin board verifies the user's eligibility in the election as well as the proofs of plaintext knowledge, and then posts the vote. The user records the hash corresponding to their vote to allow verifying later that their ballot was actually stored and counted correctly.
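A minimal sketch of the client-side encryption step (toy parameters, not the unicrypt implementation; the ballot encoding is hypothetical and the proofs of plaintext knowledge are omitted):

```python
import hashlib
import random

# Toy ElGamal group (illustrative only; real elections use ~2048-bit groups)
p, g = 2579, 2
x = 765                    # election private key, present only to make the
pk = pow(g, x, p)          # sketch self-contained -- the client only ever sees pk

def encrypt(m, pk):
    # ElGamal encryption with fresh randomness r per ballot
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

ballot = 1234              # hypothetical encoding of the voter's selection
ciphertext = encrypt(ballot, pk)

# the hash the voter records to later check that their ballot was stored
receipt = hashlib.sha256(repr(ciphertext).encode()).hexdigest()
```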

Mixing

When the election period is over and all the votes have been recorded, the mixing phase begins. The purpose of this phase is to anonymize the votes such that it is impossible to trace which ciphertext belongs to which voter. This is necessary to protect privacy, as the joint decryption phase will reveal vote contents to allow tallying. Just as trust is distributed across several authorities in the key generation phase, so too is it in the mixing. Each mixing authority permutes and re-encrypts the votes, handing them as input to the next authority. Only if all of the mixing authorities are corrupt is it possible to establish the correspondence between the cast votes on the bulletin board and the output of the mixing phase, which will be decrypted. The prototype uses the Terelius-Wikstrom proof of shuffle, which is composed of an offline and an online phase. Although the offline phase (permutation) can be precomputed prior to the election start, our prototype does not exploit this feature, opting for simplicity. However, what is exploited is the parallelism made possible by the fact that the offline phase depends only on the vote count. This allows all authorities to perform this phase simultaneously once the election period ends. This is in contrast to the online shuffle phase, where each authority must wait for the mixing results of the previous authority. The diagram above reflects this; we can see the computation bars overlap in time for the permutation but not the shuffle. Each authority submits its vote mixing results along with the proofs to the bulletin board. The bulletin board verifies the shuffle and posts the mixed votes for the next authority to mix. Once all the authorities have completed the mix and the bulletin board has verified all the proofs, the mixing phase is over.
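The core of the mixing step, re-encryption under the election public key plus a secret permutation, can be sketched as follows (toy parameters; the Terelius-Wikstrom proofs are omitted, and the vote encodings are hypothetical):

```python
import random

# Toy ElGamal group and election key (illustrative only)
p, g = 2579, 2
pk = pow(g, 765, p)      # 765 stands in for the combined private shares

def encrypt(m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def reencrypt(ct):
    # multiply by a fresh encryption of 1: same plaintext, unlinkable ciphertext
    a, b = ct
    r = random.randrange(1, p - 1)
    return (a * pow(g, r, p)) % p, (b * pow(pk, r, p)) % p

def mix(ciphertexts):
    shuffled = ciphertexts[:]
    random.shuffle(shuffled)            # the authority's secret permutation
    return [reencrypt(ct) for ct in shuffled]

votes = [101, 202, 303]                 # hypothetical encoded votes
board = [encrypt(v) for v in votes]
board = mix(mix(board))                 # two mixing authorities in sequence
```

Decrypting the final board yields the same multiset of votes, but the link between each output ciphertext and the voter who cast it is broken.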

Decrypting

Having completed the mixing phase, the bulletin board contains the set of anonymized votes for the election. Because they are anonymized, these votes can now be decrypted without compromising privacy. Just as the key generation phase was distributed across several authorities, these same authorities must intervene to decrypt the votes. Since the scheme is distributed but not a threshold system, all the authorities must participate in joint decryption. Similarly, as long as one authority remains honest it is not possible to decrypt non-anonymized votes. To carry out the joint decryption each authority downloads the mixed votes from the bulletin board and calculates its partial decryptions using its private share of the key, along with corresponding proofs of correctness. As can be seen above, this process is parallel and occurs simultaneously at all authorities once the mixing phase is finished. The partial decryptions and proofs are then posted to the bulletin board, which verifies the proofs. Once all partial decryptions are available the bulletin board combines them and subsequently obtains the plaintexts. As noted previously, the private key is never reconstructed; the combination occurs only for partial decryptions.
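A sketch of joint decryption with two key authorities (toy parameters, example share values, proofs omitted). Note that the private key x1 + x2 is never assembled; only the partial decryptions are combined:

```python
import random

# Toy ElGamal group (illustrative only)
p, g = 2579, 2
x1, x2 = 1004, 1571            # the authorities' private shares (example values)
pk = pow(g, x1 + x2, p)        # election public key from the key generation phase

def encrypt(m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def partial_decrypt(ct, share):
    a, _ = ct
    return pow(a, share, p)    # computed independently by each authority

a, b = encrypt(42)             # a mixed, anonymized vote (hypothetical encoding)
d1 = partial_decrypt((a, b), x1)
d2 = partial_decrypt((a, b), x2)

# the bulletin board combines the partials and recovers the plaintext
plaintext = (b * pow((d1 * d2) % p, p - 2, p)) % p
```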

Tally and verification

The plaintexts are posted on the bulletin board. This completes the public data for the election, which we summarize below:

Election public key shares and proofs of correctness (for each key authority)

Election public key

Cast votes, proofs of knowledge of plaintext, and signatures

Vote mixes and proofs of shuffle (for each mix authority)

Partial decryptions of the mixed votes and proofs (for each key authority)

Combined partial decryption and plaintexts

With this information:

The election result can be obtained by tallying plaintext votes

Each voter can verify that their vote was recorded correctly

Anyone can verify that the set of mixed votes corresponds to the recorded (cast) votes

Anyone can verify that the plaintexts correspond to correct decryption of the mixed votes

Anyone can verify that the election result corresponds to a correct tally of the plaintexts

The above properties, together with the ballot auditing procedure, make the prototype a secure[3] end-to-end verifiable voting system.

In this post I’m going to talk about three types of uncertainty, and how the foundations of cryptography can be understood in their terms. Wikipedia says

Uncertainty is the situation which involves imperfect and / or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown.

There are two main concepts to note here: information and knowledge. We could say that uncertainty is lack of knowledge or lack of information. As we’ll see, these two ideas are not equivalent and do not cover all cases. We start with the strongest form of uncertainty.

Ontological uncertainty: indeterminacy

The Bloch sphere representing the state of a spin 1/2 particle.

In quantum mechanics, certain particles (spin 1/2 particles such as electrons) have a property called spin that when measured[1] can give two discrete results, call them “spin up” and “spin down”. This is described by the equation

|ψ⟩ = α|↑⟩ + β|↓⟩

such that when measured, the probability of obtaining spin up is |α|², and spin down is |β|². A question we could ask ourselves is: before we measure it, is the spin up or is it down? But the equation above only gives us probabilities of what will happen when we make the measurement.

In mainstream interpretations of quantum mechanics there is no fact of the matter as to what the value of the spin was before we made the measurement. And there is no fact of the matter as to what the measurement will yield prior to it happening. The information that our question is asking for simply does not exist in the universe.

It is this intrinsic indeterminacy that makes us use the term ontological uncertainty: the uncertainty is not a property of our knowledge, but a property of nature. Our confusion is a consequence of the ontological mismatch between nature and our model of it. We sum up this type of uncertainty as:

The information does not exist and therefore we cannot know it.

By the way, the Heisenberg uncertainty principle, which is of this type, is not very well named, as it can be confused with the subject of the next section. A better name would be the indeterminacy principle.

Epistemic uncertainty: information deficit

We started with the strongest and also strangest form of uncertainty. The second is the everyday type encountered when dealing with incomplete information. In contrast to the previous type, this uncertainty is a property of our state of knowledge, not a property of nature itself. So when we ask, for example, what caused the dinosaur extinction, we are referring to some fact about reality, whether or not we have or will have access to it. Or if, playing poker, we wonder whether we have the best hand, we are referring to an unknown but existing fact: the set of all hands dealt.

Boltzmann’s tombstone with the entropy equation.

Uncertainty as incomplete information is central to fields like information theory, probability and thermodynamics, where it is given a formal and quantitative treatment. The technical term is entropy, and it’s measured in bits. We say a description has high entropy if there is a lot of information missing from it. If we ask whether a fair coin will land heads or tails, we are missing 1 bit of information. If we ask what number will come out from throwing a fair 8-sided die, we are missing 3 bits. The die throw has more possible results than the coin flip, so there is higher uncertainty about it, and thus more bits of entropy. We sum up this type of uncertainty as:
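The coin and die figures can be checked with the standard Shannon entropy formula; a quick sketch:

```python
from math import log2

def entropy(probs):
    # Shannon entropy in bits: -sum of p * log2(p) over the distribution
    return -sum(p * log2(p) for p in probs if p > 0)

coin = entropy([0.5, 0.5])      # fair coin: 1 bit missing
die = entropy([1 / 8] * 8)      # fair 8-sided die: 3 bits missing
```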

The information exists, but we do not know it.

Before finishing, a small clarification. If you were expecting the concept of randomness to appear when talking about coin flips and die rolls, here’s the reason why it did not. In this section I have restricted the discussion to classical physics, where phenomena are deterministic although we may not know all the initial conditions. The combination of determinism plus unknown initial conditions is what underlies the use of randomness in the macroscopic world. This type of randomness is sometimes called subjective randomness to distinguish it from intrinsic randomness, which is basically another term for the ontological uncertainty of the first section.

A deterministic coin flipping machine

The third type…

And now to the interesting bit. Let’s say I tell you that I have all the information about something, but I still don’t know everything about it. Sounds contradictory right? Here’s an example to illustrate this kind of situation.

1. All men are mortal

2. Socrates is a man

If now somebody tells you that

3. Socrates is mortal.

Are they giving you any information? Hopefully it seems to you like they told you something you already knew. Does that mean you had all the information before being given statement 3? Put differently, does statement 3 contain any information not present in 1 and 2?

One of the 24 valid syllogism types

Consider another example.

1. x = 1051

2. y = 3067

3. x * y = 3223417

In this case statement 3 tells us something we probably didn’t know. But does statement 3 contain information not present in 1 and 2? We can use definitions from information theory to offer one answer. Define three random variables (for convenience, over some arbitrary range a-b):

x ∈ {a-b}, y ∈ {a-b}, x*y ∈ {…}

We can calculate the conditional entropy according to the standard equation

H(Y | X) = −∑ p(x,y) * log2 p(y|x)

which in our case gives

H(x*y | x, y) = 0

The conditional entropy of x*y given x and y is zero. This is just a technical way of saying that given statements 1 and 2, statement 3 contains no extra information: whatever 3 tells us was already contained in 1 and 2. Once x and y are fixed, x*y follows necessarily. This brings us back to the beginning of the post

We could say that uncertainty is lack of knowledge or lack of information. As we’ll see these two ideas are not equivalent and do not cover all cases.

It should be apparent now that these two ideas are different. We have here cases where we have all the information about something (x, y), and yet we do not know everything about it (x*y).
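We can verify this numerically by estimating the conditional entropy from the full table of outcomes for x and y over a small illustrative range:

```python
from collections import Counter
from itertools import product
from math import log2

def cond_entropy(pairs):
    # H(Y | X) in bits, computed from a list of (x, y) outcomes
    joint = Counter(pairs)
    marginal = Counter(x for x, _ in pairs)
    n = len(pairs)
    return sum((c / n) * log2(marginal[x] / c) for (x, _), c in joint.items())

# x and y uniform over 1..8; the second component is the product x*y
outcomes = [((x, y), x * y) for x, y in product(range(1, 9), repeat=2)]
print(cond_entropy(outcomes))   # 0.0: the product is fully determined by (x, y)
```

For contrast, if the second component were a fair coin flip independent of the first, the same function would report 1 bit of remaining uncertainty.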

Logical uncertainty: computation deficit

The step that bridges having all the information with having all the knowledge has a name: computation. Deducing (computing) the conclusion from the premises in the Socrates syllogism does not add any information. Neither does computing x*y from x and y. But computation can tell us things we did not know even though the information was there all along.

We are uncertain about the blanks, even though we have all the necessary information to fill them.

In this context, computing is a process that extracts consequences present implicitly in information. The difference between deducing the conclusion of a simple syllogism, and multiplying two large numbers is a difference in degree, not a difference in kind. However, there is a clear difference in that without sufficient computation, we will remain uncertain about things that are in a sense already there. At the upper end we have cases like Fermat’s last theorem, about which mathematicians had been uncertain for 350 years. We finish with this summary of logical uncertainty:

The information exists, we have all of it, but there are logical consequences we don’t know.

Pierre de Fermat

Cryptography: secrecy and uncertainty

Cryptography (from Greek κρυπτός kryptós, “hidden, secret”; and γράφειν graphein, “writing”) is the practice and study of techniques for secure communication in the presence of third parties called adversaries

The important word here is secret, which should remind us of uncertainty. Saying that we want a message to remain secret with respect to an adversary is equivalent to saying that we want this adversary to be uncertain about the message content. Although our first intuition would point in the direction of epistemic uncertainty, in practice this is usually not the case.

Let’s look at an example with the Caesar cipher, named after Julius Caesar, who used it ~2000 years ago. The Caesar cipher replaces each letter in the message with another letter obtained by shifting the alphabet a fixed number of places. This number of places plays the role of the encryption key. For example, with a shift of +3


abcdefghijklmnopqrstuvwxyz

defghijklmnopqrstuvwxyzabc

Let’s encrypt a message using this +3 key:


cryptography is based on uncertainty

fubswrjudskb lv edvhg rq xqfhuwdlqwb

We hope that if our adversary gets hold of the encrypted message they will not learn its secret, whereas our intended recipient, knowing the +3 shift key, just needs to apply the reverse procedure (a −3 shift) to recover it. When analyzing ciphers it is assumed that our adversary will capture our messages and will also know the procedure used to encrypt, if not the key (in this case +3). Using these assumptions, let’s imagine we are the adversary and capture this encrypted message:


govv nyxo iye rkfo pyexn dro combod

We want to know the secret, but we don’t know the secret key shift value. But then we realize that the alphabet has 26 characters, and therefore there are only 25 possible shifts (a shift of 26 leaves the message unchanged). So how about trying all the keys and seeing what happens:


FNUU MXWN HXD QJEN OXDWM CQN BNLANC

EMTT LWVM GWC PIDM NWCVL BPM AMKZMB

DLSS KVUL FVB OHCL MVBUK AOL ZLJYLA

CKRR JUTK EUA NGBK LUATJ ZNK YKIXKZ

BJQQ ITSJ DTZ MFAJ KTZSI YMJ XJHWJY

AIPP HSRI CSY LEZI JSYRH XLI WIGVIX

ZHOO GRQH BRX KDYH IRXQG WKH VHFUHW

YGNN FQPG AQW JCXG HQWPF VJG UGETGV

XFMM EPOF ZPV IBWF GPVOE UIF TFDSFU

WELL DONE YOU HAVE FOUND THE SECRET

VDKK CNMD XNT GZUD ENTMC SGD RDBQDS

UCJJ BMLC WMS FYTC DMSLB RFC QCAPCR

TBII ALKB VLR EXSB CLRKA QEB PBZOBQ

SAHH ZKJA UKQ DWRA BKQJZ PDA OAYNAP

RZGG YJIZ TJP CVQZ AJPIY OCZ NZXMZO

QYFF XIHY SIO BUPY ZIOHX NBY MYWLYN

PXEE WHGX RHN ATOX YHNGW MAX LXVKXM

OWDD VGFW QGM ZSNW XGMFV LZW KWUJWL

NVCC UFEV PFL YRMV WFLEU KYV JVTIVK

MUBB TEDU OEK XQLU VEKDT JXU IUSHUJ

LTAA SDCT NDJ WPKT UDJCS IWT HTRGTI

KSZZ RCBS MCI VOJS TCIBR HVS GSQFSH

JRYY QBAR LBH UNIR SBHAQ GUR FRPERG

IQXX PAZQ KAG TMHQ RAGZP FTQ EQODQF

HPWW OZYP JZF SLGP QZFYO ESP DPNCPE
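The exhaustive search above takes only a few lines of code; a quick sketch:

```python
def shift(text, k):
    # Caesar cipher: move each letter k places through the alphabet
    return "".join(
        chr((ord(c) - ord("a") + k) % 26 + ord("a")) if c.isalpha() else c
        for c in text
    )

ciphertext = "govv nyxo iye rkfo pyexn dro combod"
for key in range(1, 26):                  # only 25 candidate keys
    print(key, shift(ciphertext, -key))   # key 10 reveals the plaintext
```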

We found that the secret was revealed when trying a key shift of +10. Note how we were able to pick out the correct message because none of the other attempts gave meaningful results. This happens because the space of possible keys is so small that only one of them decrypts to a possible message. In technical terms, the key space and message space[2] are small enough compared to the length of the message that only one key will decrypt. The following equation[3] states this in terms of uncertainty:

H(K | C) = ∑ P(c) * log2 S(c)

The left part of the expression, H(K | C), tells us how much uncertainty about the key remains once we have obtained the encrypted message. Note the term S(c), which represents how many keys decrypt ciphertext c to a meaningful message. As we saw above, S(c) = 1, which yields

H(K | C) = ∑ P(c) * log2 (1) = ∑ P(c) * 0 = 0

In words, there is no uncertainty about the key, and therefore about the secret message, once we know the encrypted message[4]. Of course, when we initially captured this


govv nyxo iye rkfo pyexn dro combod

we did not know the secret, but we had all the information necessary to reveal it. We were only logically uncertain about the secret and needed computation, not information, to find it out.

Alberti’s cipher disk (1470)

Although we have seen this only for the simple Caesar cipher, it turns out that except for special cases, many ciphers have this property given a large enough message to encrypt. In public key ciphers, like those used in many secure voting systems, this is the case irrespective of message size. So we can say that practical cryptography is based around logical uncertainty, since our adversaries have enough information to obtain the secret. But as we saw previously, there are different degrees of logical uncertainty. Cryptography depends on this uncertainty being “strong” enough to protect secrets.

Computational complexity and logical uncertainty

Just as entropy measures epistemic uncertainty, computational complexity can be said to measure logical uncertainty. In probability theory we study how much information one needs to remove epistemic uncertainty. Computational complexity studies how much computation one needs to remove logical uncertainty. We saw that deducing the conclusion of the Socrates syllogism was easy, but multiplying two large numbers was harder. Complexity looks at how hard these problems are relative to each other. So if we are looking for the foundations of cryptography we should definitely look there.

Take for example the widely used RSA public key cryptosystem. This scheme is based (among other things) on the computational difficulty of factoring large numbers. We can represent this situation with two statements, reusing the numbers from the earlier example:

1. n = 3223417

2. n = 1051 × 3067

Statement 2 (the factors) is entailed by statement 1, but obtaining 2 from 1 requires significant computational resources. In a real world case, an adversary that captures a message encrypted under the RSA scheme would require such an amount of computation to reveal its content that this possibility is labeled infeasible. Let’s be a bit more precise than that: it means that an adversary, using the fastest known algorithm for the task, would require thousands of years of computing on a modern PC.
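To make the asymmetry concrete with the small numbers from the earlier example: multiplying x and y is a single operation, while recovering them from the product by naive trial division takes on the order of √n steps, a gap that grows dramatically with the size of n (real RSA moduli have hundreds of digits, and even the best known algorithms become infeasible). A sketch:

```python
def factor(n):
    # naive trial division: the work grows with the square root of n
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1   # n is prime

print(factor(3223417))   # recovers (1051, 3067), the factors from the earlier example
```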

If the last statement didn’t trigger alarm bells, perhaps I should emphasize the words “known algorithm”. We know that with known algorithms the task is infeasible, but what if a faster algorithm is found? You would expect complexity theory to have an answer to that hypothetical situation. The simple fact of the matter is that it doesn’t.

In complexity theory, problems for which efficient algorithms exist are put into a class called P. Although no efficient algorithm is known for integer factorization, whether it is in P or not is an open problem[5]. In other words, we are logically uncertain about whether factorization is in P or not!

Several complexity classes

If we assume that integer factorization is not in P then a message encrypted with RSA is secure. So in order to guarantee an adversary’s logical uncertainty about secret messages, cryptographic techniques rely on assumptions that are themselves the object of logical uncertainty at the computational complexity level! Not the kind of thing you want to find when looking for foundations.

The bottom line

It’s really not that bad though. If you think carefully about it, what matters is not just whether factorization and other problems are in P, but whether adversaries will actually find the corresponding efficient algorithms. The condition that factorization is in P and that an efficient algorithm is secretly found by adversaries is much stronger than the first condition on its own. More importantly, the second condition seems to be one we can find partial evidence about.

Whether or not evidence can be found for a logical statement is a controversial subject. Does the fact that no one has proved that factorization is in P count as evidence that it is not in P? Some say yes and some say no. But it seems less controversial to say that the fact that no algorithm has been found counts as evidence for the possibility that we (as a species with given cognitive and scientific level of advancement) will not find it in the near future.

A quantum subroutine

The bottom line for the foundations of cryptography is a question of both logical and epistemic uncertainty. On one hand, computational complexity questions belong in the realm of logic, and empirical evidence for them seems conceptually shaky. But the practical aspects of cryptography depend not only on complexity questions, but also on our ability to solve them. Another point along these lines is that computational complexity tells us about difficulty for algorithms given certain computational primitives. But the question of which primitives we have access to when building computing devices is a question of physics (as quantum computing illustrates). This means we can justify or question confidence in the security of cryptography through empirical evidence about the physical world. Today, it is the combination of results from computational complexity together with empirical evidence about the world that forms the ultimate foundations of cryptography.

References

[1] Along the x, y, or z axes

[2] Without going into details, the message space is smaller than the set of all combinations of letters given that most of these combinations are meaningless. Meaningful messages are redundantly encoded.