Summary

In cryptographic protocol theory, we consider a situation where a number of entities want to solve some problem over a computer network. Each entity has some secret data it does not want the other entities to learn, yet they all want to learn something about the combined data. For instance, in an electronic election, they want to know the number of yes-votes without revealing who voted for what; in an electronic auction, they want to find the winner without leaking the bids of the losers.
A main focus of the project is to develop new techniques for solving such protocol problems. We are in particular interested in techniques which can automatically construct a protocol solving a problem given only a description of what the problem is. My focus will be on basic theoretical research, but I believe that advancing the theory of secure protocol compilers will have an immense impact on how secure protocols are developed in practice.
When one develops complex protocols, it is important to be able to verify their correctness before they are deployed, particularly when the purpose of the protocols is to protect information: if an error is only found and corrected after deployment, the sensitive data may already have been compromised. Therefore, cryptographic protocol theory develops models of what it means for a protocol to be secure, and techniques for analyzing whether a given protocol is secure or not.
Another main focus of the project is to develop better security models. Existing security models either suffer from the problem that some protocols can be proven secure which are not secure in practice, or from the problem that it is impossible to prove the security of some protocols which are believed to be secure in practice. My focus will again be on basic theoretical research, but I believe that better security models are important for advancing a practice where protocols are verified as secure before they are deployed.
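The election example above can be made concrete with a textbook technique, additive secret sharing. The sketch below is purely illustrative (it is not a protocol from this proposal, and the modulus and server count are arbitrary choices): each voter splits their yes/no vote into random shares, one per tallying server, so that no single server learns any individual vote, yet the servers can jointly reconstruct the total number of yes-votes.

```python
# Toy yes-vote tally via additive secret sharing modulo a prime.
import random

P = 2_147_483_647  # public prime modulus (arbitrary choice for the sketch)

def share(vote, n_servers):
    """Split `vote` into n_servers random additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    last = (vote - sum(shares)) % P
    return shares + [last]

def tally(all_shares):
    """Each server sums the shares it received; combining the
    per-server totals mod P reveals only the number of yes-votes."""
    n_servers = len(all_shares[0])
    server_totals = [sum(s[i] for s in all_shares) % P
                     for i in range(n_servers)]
    return sum(server_totals) % P

votes = [1, 0, 1, 1, 0]                # private ballots
shared = [share(v, 3) for v in votes]  # distributed to 3 tallying servers
assert tally(shared) == sum(votes)     # total revealed, no ballot revealed
```

Each share in isolation is a uniformly random number, which is why a single server learns nothing; only the sum over all servers carries information.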


Max ERC Funding

1 171 019 €

Duration

Start date: 2011-12-01, End date: 2016-11-30

Project acronym

NoTape

Project

Measuring with no tape

Researcher (PI)

Søren HAUBERG

Host Institution (HI)

DANMARKS TEKNISKE UNIVERSITET

Call Details

Starting Grant (StG), PE6, ERC-2017-STG

Summary

Society generates increasing amounts of data, which is both a resource and a challenge. The data reveal new insights that may potentially improve our livelihood, but their quantity renders such insights difficult to find. Machine learning techniques sift through the data looking for statistical patterns of interest to a given task. Due to an exponential growth in available data, these techniques enable us to automate difficult decisions, such as those needed for personalized medicine and self-driving cars.
NoTape notes that machine learning techniques depend on a distance measure to determine which data points are similar and which are not. As this measure is difficult to choose, NoTape develops methods for estimating an optimal distance measure directly from data. Empirical evidence suggests that the optimal distance measure in one region of data space need not coincide with the optimal measure in another region, i.e. that the distance measure should locally adapt to the data. Local adaptability implies that the distance measure itself will be sensitive to noise in the data, and therefore should be described as a random variable. NoTape estimates distance measures as random Riemannian metrics and performs statistical data analysis accordingly. The notion of statistical computations with respect to an uncertain, locally adaptive distance measure is uncharted territory, which calls for new algorithms for numerical integration and for solving differential equations.
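To give a feel for what a locally adaptive distance measure is, the sketch below implements one simple (and standard) instance: a Riemannian-style metric given by the inverse of a locally weighted covariance, so that moving along directions the data fills out is "cheap" while moving across them is "expensive". This is an illustration of the general idea, not NoTape's actual estimator, and all function names, the bandwidth, and the regularizer are choices made for the sketch.

```python
# Locally adaptive (Mahalanobis-style) distance from a local covariance.
import numpy as np

def local_metric(X, x, bandwidth=1.0, reg=1e-3):
    """Metric tensor at x: inverse of a Gaussian-weighted covariance
    of the data X around x (regularized for invertibility)."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))
    mu = (w[:, None] * X).sum(axis=0) / w.sum()
    D = X - mu
    cov = (w[:, None] * D).T @ D / w.sum() + reg * np.eye(X.shape[1])
    return np.linalg.inv(cov)

def local_distance(X, x, y, **kw):
    """Distance between nearby x, y under the metric at their midpoint."""
    M = local_metric(X, (x + y) / 2, **kw)
    d = y - x
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.3])  # anisotropic data
# Moving along the data-rich axis is cheaper than moving across it:
along = local_distance(X, np.array([0.0, 0.0]), np.array([1.0, 0.0]))
across = local_distance(X, np.array([0.0, 0.0]), np.array([0.0, 1.0]))
assert along < across
```

Because the covariance is estimated from finitely many noisy samples, the resulting metric is itself noisy, which is exactly the observation motivating the project's treatment of the metric as a random variable.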
As a guiding example, we estimate statistical models that reflect human perception. As perception processes are not fully understood, an optimal distance measure cannot be precisely estimated, and the uncertainty modeling of NoTape is needed.
The geometric nature of the developed methods ensures that the attained models are interpretable by humans, which contrasts with current locally adaptive techniques. As society automates more decisions, interpretability is increasingly important to ensure that machine learning systems can be trusted.


Max ERC Funding

1 463 805 €

Duration

Start date: 2017-12-01, End date: 2022-11-30

Project acronym

QMULT

Project

Multipartite Quantum Information Theory

Researcher (PI)

Matthias Christandl

Host Institution (HI)

KOBENHAVNS UNIVERSITET

Call Details

Starting Grant (StG), PE6, ERC-2013-StG

Summary

Quantum information theory studies the way information is stored, transmitted and processed in quantum devices. Mathematically, quantum information theory extends Shannon's theory of information but differs from it by allowing both for stronger correlations known as entanglement and for non-commutative effects resulting in measurement uncertainty as required by the laws of quantum physics. Entanglement has been shown to be crucial for the advantages offered by quantum communication and computation.
In recent years, researchers have gained a good understanding of quantum information theory involving two parties, for instance in the transmission of quantum bits from a sender to a receiver. Yet the study of quantum protocols for communication tasks involving multiple parties, for instance the joint counting of online votes or the compression of data distributed in a network, is still in its infancy. The reason for this is two-fold: (i) a lack of understanding of entanglement among multiple particles and (ii) the non-commutative nature of quantum theory, two facts that pose major difficulties for the design of multiparty quantum coding schemes.
It is the goal of this research project to overcome these two main obstacles so that a comprehensive theory of quantum information can be developed. Just as the Internet replaced point-to-point communication channels such as phone lines with a network of many interacting computers, the future of quantum communication will involve communication among many parties. The multipartite quantum information theory explored in this project is therefore expected to impact not only current experiments but also our future communication infrastructure.
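A small numerical sketch can illustrate what multipartite entanglement looks like. The three-party GHZ state is a canonical example (chosen here for illustration; it is not singled out by the abstract): all three parties' measurement outcomes are perfectly correlated, yet each party's local view is completely random.

```python
# The three-party GHZ state as a state vector, in plain numpy.
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

def kron_all(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# |GHZ> = (|000> + |111>) / sqrt(2)
ghz = (kron_all(zero, zero, zero) + kron_all(one, one, one)) / np.sqrt(2)

# Measuring all three qubits in the computational basis yields the
# outcomes 000 and 111 each with probability 1/2: perfectly correlated.
probs = np.abs(ghz) ** 2
assert np.isclose(probs[0b000], 0.5) and np.isclose(probs[0b111], 0.5)

# Tracing out two parties leaves the third in the maximally mixed
# state I/2: no single party can see the correlation locally.
rho = np.outer(ghz, ghz)
rho_A = rho.reshape(2, 4, 2, 4).trace(axis1=1, axis2=3)
assert np.allclose(rho_A, np.eye(2) / 2)
```

The contrast between the globally correlated outcomes and the locally random reduced state is precisely the kind of multipartite structure that has no two-party analogue.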


Max ERC Funding

1 389 581 €

Duration

Start date: 2013-09-01, End date: 2019-05-31

Project acronym

SPEC

Project

Secure, Private, Efficient Multiparty Computation

Researcher (PI)

Claudio ORLANDI

Host Institution (HI)

AARHUS UNIVERSITET

Call Details

Starting Grant (StG), PE6, ERC-2018-STG

Summary

Secure Multiparty Computation (MPC) is a cryptographic technique that allows a set of mutually distrusting parties to compute any joint function of their private inputs in a way that preserves the confidentiality of the inputs and the correctness of the result. Examples of MPC applications include secure auctions, benchmarking, and privacy-preserving data mining.
In the last decade, the efficiency of MPC has improved significantly, especially with respect to evaluating functions expressed as Boolean and arithmetic circuits. These advances have allowed several companies worldwide to implement and include MPC solutions in their products.
Unfortunately, it now appears (and this is partially confirmed by theoretical lower bounds) that we have reached a wall with respect to possible optimizations of the current building blocks of MPC, which prevents MPC from being used in critical large-scale applications. I therefore believe that a radical paradigm shift in MPC research is needed in order to make MPC truly practical.
With this project, I intend to take a step back, challenge current assumptions in MPC research and design novel MPC solutions. My hypothesis is that taking MPC to the next level requires more realistic modelling of the way that security, privacy and efficiency are defined and measured. By combining classic MPC techniques with research in neighbouring areas of computer science I will fulfill the aim of the project and in particular:
1) Understand the limitations of current abstract models for MPC and refine them to more precisely capture real world requirements in terms of security, privacy and efficiency.
2) Use the new models to guide the development of the next generation of MPC protocols, going beyond current performance and thereby enabling large-scale applications.
3) Investigate the necessary privacy-utility trade-offs that parties undertake when participating in distributed computations and define MPC functionalities that encourage cooperation for rational parties.
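One of the classic building blocks alluded to above, for the arithmetic-circuit setting, is Beaver-triple multiplication: additions on additive shares are local, and each multiplication consumes one preprocessed random triple. The toy two-party sketch below (illustrative only, with a trusted dealer standing in for the preprocessing phase, and a tiny prime chosen for readability; this is the textbook trick, not SPEC's own protocol) shows the mechanics.

```python
# Two-party multiplication of additively shared values over Z_p
# using a Beaver triple (a, b, c) with c = a*b.
import random

P = 101  # small public prime (chosen for readability)

def share(v):
    r = random.randrange(P)
    return r, (v - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Trusted dealer: a random triple, additively shared between the parties.
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

def beaver_mul(x0, x1, y0, y1):
    # The parties open the masked values d = x - a and e = y - b;
    # since a and b are uniformly random, d and e leak nothing.
    d = reconstruct((x0 - a0) % P, (x1 - a1) % P)
    e = reconstruct((y0 - b0) % P, (y1 - b1) % P)
    # x*y = c + d*b + e*a + d*e, computed locally on the shares
    # (the public term d*e is added by one party only).
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

x0, x1 = share(7); y0, y1 = share(9)
z0, z1 = beaver_mul(x0, x1, y0, y1)
assert reconstruct(z0, z1) == (7 * 9) % P  # 63, without revealing 7 or 9
```

The cost profile this induces (cheap online phase, expensive triple generation) is exactly the kind of building-block bottleneck that the lower bounds mentioned above concern.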
