Riemann’s hypothesis and test for primality
(1976)

by
J. D. Tygar, Bennet Yee
- Proceedings of the Joint Harvard-MIT Workshop on Technological Strategies for the Protection of Intellectual Property in the Network Multimedia Environment, 1991

The Dyad project at Carnegie Mellon University is using physically secure coprocessors to achieve new protocols and systems addressing a number of perplexing security problems. These coprocessors can be produced as boards or integrated circuit chips and can be directly inserted in standard workstations or PC-style computers. This paper presents a set of security problems and easily implementable solutions that exploit the power of physically secure coprocessors: (1) protecting the integrity of publicly accessible workstations, (2) tamper-proof accounting/audit trails, (3) copy protection, and (4) electronic currency without centralized servers. We outline the architectural requirements for the use of secure coprocessors.
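Tamper-evident audit trails of the kind listed in (2) are commonly built as hash chains: each entry commits to the hash of the previous one, so editing any record invalidates every later link. A minimal sketch of the idea (not the Dyad protocol itself; the field names are illustrative):

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    log.append({"record": record, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every link; any in-place edit breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"record": entry["record"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "debit account A by 10")
append_entry(log, "credit account B by 10")
assert verify(log)
log[0]["record"] = "debit account A by 1000"   # tamper with an old record
assert not verify(log)
```

In a secure-coprocessor setting the chain head (or the signing key for it) would live inside the tamper-resistant hardware, so the host cannot rewrite history undetected.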

...und in [47, 21, 36, 35, 39, 15, 2, 44, 16, 19, 49, 17, 50, 51, 6, 30, 54, 4, 24, 20, 10]. More general information on some of the number theoretic tools behind many of these protocols may be found in [34, 42, 37, 52]. The tools for checking data integrity are described in [27, 40, 43, 28]. Research on protection systems and general distributed system security may be found in [45, 41, 48]. [8] provides a logic for...

This paper presents a new probabilistic primality test. Upon termination the test outputs "composite" or "prime", along with a short proof of correctness, which can be verified in deterministic polynomial time. The test is different from the tests of Miller [M], Solovay-Strassen [SS], and Rabin [R] in that its assertions of primality are certain, rather than being correct with high probability or dependent on an unproven assumption. The test terminates in expected polynomial time on all but at most an exponentially vanishing fraction of the inputs of length k, for every k. This result implies: • There exists an infinite set of primes which can be recognized in expected polynomial time. • Large certified primes can be generated in expected polynomial time. Under a very plausible condition on the distribution of primes in "small" intervals, the proposed algorithm can be shown to run in expected polynomial time on every input.
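For contrast, the Miller–Rabin style tests cited above declare "prime" only with high probability; compositeness verdicts are certain, primality verdicts are not, which is exactly the gap the certified test closes. A standard sketch of the probabilistic test being contrasted (not the paper's own algorithm):

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic test: 'composite' is certain, 'prime' only w.h.p."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle small primes deterministically
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a certain witness of compositeness
    return True                     # probably prime, no certificate

assert miller_rabin(2**61 - 1)      # a Mersenne prime
assert not miller_rabin(561)        # a Carmichael number
```

Each round errs on a composite with probability at most 1/4, so 20 rounds give overwhelming confidence, but unlike the test in this paper, a "prime" answer carries no independently checkable proof.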

Abstract. The notion of on-line/off-line signature schemes was introduced in 1990 by Even, Goldreich and Micali. They presented a general method for converting any signature scheme into an on-line/off-line signature scheme, but their method is not very practical as it increases the length of each signature by a quadratic factor. In this paper we use the recently introduced notion of a trapdoor hash function to develop a new paradigm called hash-sign-switch, which can convert any signature scheme into a highly efficient on-line/off-line signature scheme: In its recommended implementation, the on-line complexity is equivalent to about 0.1 modular multiplications, and the size of each signature increases only by a factor of two. In addition, the new paradigm enhances the security of the original signature scheme since it is only used to sign random strings chosen off-line by the signer. This makes the converted scheme secure against adaptive chosen message attacks even if the original scheme is secure only against generic chosen message attacks or against random message attacks.
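The trapdoor ("chameleon") hash behind hash-sign-switch can be illustrated with the classical discrete-log construction H(m, r) = g^m · h^r mod p, where h = g^x and x is the trapdoor: knowing x, one can "switch" any hashed message to another by solving m + x·r ≡ m' + x·r' (mod q). A toy sketch with illustrative tiny parameters (real ones are far larger):

```python
p, q, g = 23, 11, 2          # toy group: g has order q in Z_p*, p = 2q + 1
x = 7                        # trapdoor
h = pow(g, x, p)             # public key

def chameleon_hash(m, r):
    """H(m, r) = g^m * h^r mod p."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def switch(m, r, m_new):
    """With trapdoor x, find r' so that H(m_new, r') == H(m, r):
    r' = r + (m - m_new) * x^{-1} mod q."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 5, 3
m_new = 9
r_new = switch(m, r, m_new)
assert chameleon_hash(m, r) == chameleon_hash(m_new, r_new)
```

In hash-sign-switch, the signer signs H(m0, r0) for a random m0 off-line; when the real message arrives, the trapdoor yields the matching r' on-line at the cost of roughly one modular operation, which is where the abstract's very low on-line complexity comes from.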

by
Luca Trevisan, Salil Vadhan
- In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science, 2000

The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications. Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e., can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist. We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the nonuniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting. One of our uniform constructions is as follows: assuming that there are problems in E = TIME(2^O(n)) that are not solvable by subexponential-size circuits with Σ gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1 − γ)n into an output distribution of length (1 − O(γ))n, where γ is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection be...
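The observation that no deterministic extractor survives a sampler with more resources has a very concrete form: a sampler able to evaluate the extractor simply samples uniformly from the larger preimage, leaving a source of min-entropy n − 1 whose extracted output is constant. A brute-force illustration on 8-bit inputs (the extractor here is an arbitrary stand-in):

```python
import random

N = 8  # input length in bits

def ext(x):
    """Some fixed deterministic 1-bit extractor (parity, as a stand-in)."""
    return bin(x).count("1") % 2

# An adversarial sampler that can evaluate ext restricts itself
# to one preimage of the output bit.
preimage0 = [x for x in range(2 ** N) if ext(x) == 0]
assert len(preimage0) == 2 ** (N - 1)      # source min-entropy is N - 1 bits

samples = [random.choice(preimage0) for _ in range(1000)]
assert all(ext(x) == 0 for x in samples)   # output is constant, not uniform
```

The same argument defeats every fixed deterministic extractor, which is why the paper must charge the extractor (polynomially) more resources than the sampler.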

Three new trapdoor one-way functions are proposed that are based on elliptic curves over the ring Z_n. The first class of functions is a naive construction, which can be used only in a digital signature scheme, and not in a public-key cryptosystem. The second, preferred class of functions does not suffer from this problem and can be used for the same applications as the RSA trapdoor one-way function, including zero-knowledge identification protocols. The third class of functions has properties similar to the Rabin trapdoor one-way functions. Although the security of these proposed schemes is based on the difficulty of factoring n, like the RSA and Rabin schemes, they appear more resistant than RSA and Rabin to attacks that do not involve factoring, such as low-multiplier attacks.
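The tie to factoring is visible in the group operation itself: adding points on a curve over Z_n requires inverting a denominator mod n, and a non-invertible denominator hands out a nontrivial factor of n (the same phenomenon Lenstra's ECM factoring method exploits). A minimal sketch of affine point addition modulo n, with illustrative parameters:

```python
from math import gcd

def ec_add(P, Q, a, n):
    """Add affine points on y^2 = x^3 + a*x + b over Z_n.
    Raises ValueError carrying gcd(den, n) if the denominator is not
    invertible (gcd == n would mean the sum is the point at infinity;
    1 < gcd < n is a nontrivial factor of n)."""
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n   # tangent slope
    else:
        num, den = (y2 - y1) % n, (x2 - x1) % n          # chord slope
    g = gcd(den, n)
    if g != 1:
        raise ValueError(f"denominator not invertible; gcd(den, n) = {g}")
    lam = num * pow(den, -1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

# Over a prime modulus the group law just works:
# P = (3, 6) lies on y^2 = x^3 + 2x + 3 (mod 97), and 2P = (80, 10).
assert ec_add((3, 6), (3, 6), a=2, n=97) == (80, 10)
```

For composite n the operation succeeds almost always, but any failure factors n, so being able to break the curve structure is as hard as factoring in this informal sense.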

We consider the asymptotic complexity of algorithms to manipulate matrix groups over finite fields. Groups are given by a list of generators. Some of the rudimentary tasks such as membership testing and computing the order are not expected to admit polynomial-time solutions due to number theoretic obstacles such as factoring integers and discrete logarithm. While these and other “abelian obstacles” persist, we demonstrate that the “nonabelian normal structure” of matrix groups over finite fields can be mapped out in great detail by polynomial-time randomized (Monte Carlo) algorithms. The methods are based on statistical results on finite simple groups. We indicate the elements of a project under way towards a more complete “recognition” of such groups in polynomial time. In particular, under a now plausible hypothesis, we are able to determine the names of all nonabelian composition factors of a matrix group over a finite field. Our context is actually far more general than matrix groups: most of the algorithms work for “black-box groups” under minimal assumptions. In a black-box group, the group elements are encoded by strings of uniform length, and the group operations are performed by a “black box.”
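The black-box group abstraction can be made concrete: elements are opaque strings of uniform length, and an algorithm may only compose them through the oracle, never inspect the underlying representation. A toy sketch wrapping Z_101^* (the encoding and the Monte Carlo exponent check are illustrative, not the paper's algorithms):

```python
import random

P = 101  # the black box secretly works in Z_101^* (hidden from the caller)

def encode(v):
    """Elements are uniform-length opaque bit strings."""
    return format(v, "07b")

def mul(s, t):
    """The multiplication oracle -- the only access the algorithm gets."""
    return encode(int(s, 2) * int(t, 2) % P)

identity = encode(1)

def power(s, k):
    """Square-and-multiply built purely from black-box calls."""
    acc, base = identity, s
    while k:
        if k & 1:
            acc = mul(acc, base)
        base = mul(base, base)
        k >>= 1
    return acc

# Monte Carlo check, through the black box only, that the group
# exponent divides P - 1 (Fermat's little theorem underneath):
for _ in range(20):
    g = encode(random.randrange(1, P))
    assert power(g, P - 1) == identity
```

Algorithms written against `mul`/`encode` alone work for any group the box hides, which is the sense in which the paper's results extend beyond matrix groups.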

We obtain several results that distinguish self-reducibility of a language L from the question of whether search reduces to decision for L. These include: (i) If NE ≠ E, then there exists a set L in NP − P such that search reduces to decision for L, search does not nonadaptively reduce to decision for L, and L is not self-reducible. Funding for this research was provided by the National Science Foundation under grant CCR9002292. Department of Computer Science, State University of New York at Buffalo, 226 Bell Hall, Buffalo, NY 14260.
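The positive direction, search reducing to decision, has a classical instance: the self-reduction of SAT, which queries the decision oracle while fixing variables one at a time. A brute-force sketch (the oracle here is exponential; only the reduction pattern matters):

```python
from itertools import product

def sat_decide(clauses, n, fixed):
    """Decision oracle: is some satisfying assignment consistent with
    `fixed` (a dict mapping 0-based variable index to bool)?"""
    for bits in product([False, True], repeat=n):
        if all(bits[i] == v for i, v in fixed.items()):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses):
                return True
    return False

def sat_search(clauses, n):
    """Search via decision: adaptively fix variables one at a time."""
    fixed = {}
    if not sat_decide(clauses, n, fixed):
        return None
    for i in range(n):
        fixed[i] = True
        if not sat_decide(clauses, n, fixed):   # True made it unsatisfiable
            fixed[i] = False
    return [fixed[i] for i in range(n)]

# (x1 or x2) and (not x1 or x3); literals are signed 1-based indices
clauses = [[1, 2], [-1, 3]]
model = sat_search(clauses, 3)
assert all(any(model[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
```

Note that each oracle query depends on the answers to earlier ones; the reduction is adaptive, and the abstract's point (i) is precisely that the nonadaptive analogue can fail for some L even when adaptive reduction and self-reducibility come apart.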