8 The Deterministic AKS Primality Method in the PRAM-Model and a Parallel Implementation on a High Performance Cluster
Jörg Lässig, Stefanie Thiem and Matthias Baumgart
Chemnitz University of Technology, Chemnitz, Germany

In August 2002 the three Indian researchers Manindra Agrawal, Neeraj Kayal and Nitin Saxena at the Indian Institute of Technology in Kanpur published the manuscript "PRIMES is in P" [Ag02], presenting a deterministic, polynomial-time algorithm to determine whether a given integer is prime or composite. Later, a series of variations of the original algorithm was published (e.g. [Br02]), the so-called AKS class of algorithms. According to Daniel J. Bernstein, one of the major protagonists, the improvements over the original AKS breakthrough amount to orders of magnitude [Be03]. The objective of these studies is to present a completely deterministic AKS instance with conjectured running time T(n) = Õ(log^4 n) in the PRAM model, including all available speedups. Additionally, we present its parallel implementation and some experimental results, as well as a practical comparison to standard methods.

Key Words and Phrases: PRIMES, AKS class, PRAM model, deterministic primality method

References
[Ag02] M. Agrawal, N. Kayal and N. Saxena. PRIMES is in P (revised version). Preprint, Indian Institute of Technology Kanpur, December.
[Br02] P. Berrizbeitia. Sharpening "Primes is in P" for a Large Family of Numbers. Preprint, Universidad Simón Bolívar, November.
[Be03] D. J. Bernstein. Proving Primality after Agrawal-Kayal-Saxena. Preprint, University of Illinois at Chicago, January.
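The congruence at the heart of the AKS class can be made concrete with a small sketch. It rests on the classical fact that n > 1 is prime iff (x + 1)^n ≡ x^n + 1 (mod n), i.e. iff n divides every binomial coefficient C(n, k) with 0 < k < n; the AKS algorithm itself evaluates this congruence modulo (x^r − 1, n) for a small r to reach polynomial time. The sketch below is therefore an exponential-time illustration of the underlying identity, not the algorithm presented in the talk.

```python
def is_prime_binomial(n: int) -> bool:
    """Exponential-time primality check via the AKS congruence
    (x + 1)^n == x^n + 1 (mod n): n > 1 is prime iff n divides
    every binomial coefficient C(n, k) with 0 < k < n."""
    if n < 2:
        return False
    c = 1  # C(n, 0)
    for k in range(1, n):
        c = c * (n - k + 1) // k  # exact integer update to C(n, k)
        if c % n != 0:
            return False
    return True

print([n for n in range(2, 30) if is_prime_binomial(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```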

9 Hyperelliptic Cryptosystems in Practice
Christof Paar, Jan Pelzl and Thomas Wollinger
Communication Security Group (COSY), Ruhr-Universität Bochum, Germany

The hyperelliptic curve cryptosystem is one of the emerging cryptographic primitives of recent years. This system offers the same security as established public-key cryptosystems, such as those based on RSA or elliptic curves, with much shorter operand lengths. However, until recently the common belief in industry and in the research community was that hyperelliptic curves are out of scope for any practical application. We were able to show the practical use of hyperelliptic curve cryptosystems (HECC) by narrowing the performance gap between elliptic curve (EC) and hyperelliptic curve cryptosystems. The complexity of the group operation for small-genus hyperelliptic curves was reduced and efficient algorithms have been proposed [PWGP03, PWP03]. We developed a new metric to compare different cryptographic primitives based on the atomic operations of a processor, and our theoretical comparison between elliptic curve and hyperelliptic curve cryptosystems, as well as our software and hardware implementations, show that the performance of both cryptographic primitives is in the same range [P02]. Surprisingly, the hyperelliptic curve cryptosystems even outperform elliptic curves for certain curve parameters. We implemented these cryptosystems on general-purpose processors and on a variety of different embedded processors, and even built a prototype implementation of a hyperelliptic curve coprocessor on FPGAs [WPWPSK04, Wol04].

References
[PWGP03] Jan Pelzl, Thomas Wollinger, Jorge Guajardo and Christof Paar. Hyperelliptic Curve Cryptosystems: Closing the Performance Gap to Elliptic Curves. Workshop on Cryptographic Hardware and Embedded Systems - CHES 2003, September.
[PWP03] Jan Pelzl, Thomas Wollinger and Christof Paar. Low Cost Security: Explicit Formulae for Genus-4 Hyperelliptic Curves. Tenth Annual Workshop on Selected Areas in Cryptography - SAC 2003, August.
[P02] Jan Pelzl. Hyperelliptic Cryptosystems on Embedded Microprocessors. Diplomarbeit, Ruhr-Universität Bochum, September.
[WPWPSK04] Thomas Wollinger, Jan Pelzl, Volker Wittelsberger, Christof Paar, Gokay Saldamli and Cetin Koc. Elliptic and Hyperelliptic Curves on Embedded µP. Special issue on Embedded Systems and Security of the ACM Transactions in Embedded Computing Systems (TECS).
[Wol04] Thomas Wollinger. Software and Hardware Implementation of Hyperelliptic Curve Cryptosystems. Ph.D. thesis, Ruhr-University Bochum, Bochum, Germany, July.

13 Solving Systems of Equations with Incompatible Operations
Magnus Daum
CITS Research Group, Ruhr University Bochum

Many cryptographic algorithms that aim at being efficiently implementable rather than at having a formal proof of security (e.g. many dedicated hash functions or block ciphers) use a mixture of very different kinds of operations. These operations include GF(2)-linear operations, additions modulo 2^n, bitwise applied Boolean functions, and bit shifts and rotations. As these operations are not very compatible from a mathematical point of view, it is hard to analyse such structures theoretically, and one needs sophisticated algorithms to solve equations in which several of these operations are involved. In his attacks on various hash functions (see for example [Do97]), Dobbertin had to solve large systems of equations of this kind. We analyse the algorithms used by Dobbertin and describe improvements which lead to directed graphs that represent the sets of solutions of such equations quite efficiently. These graphs are closely related to binary decision diagrams (see [We03]), so many algorithms from the theory of decision diagrams can be adopted. For example, it is possible to efficiently compute the number of solutions or to combine two such graphs to compute the intersection of two sets of solutions. T-functions, as proposed by Klimov and Shamir (see [Kl04]), are another topic for which these algorithms could be of interest, because due to their defining property, equations which only involve T-functions should be solvable quite efficiently with such algorithms.

References
[Do97] H. Dobbertin (1997). RIPEMD with two-round compress function is not collision-free. Journal of Cryptology 10.
[We03] I. Wegener (2003). Branching Programs and Binary Decision Diagrams - Theory and Applications. SIAM.
[Kl04] A. Klimov (2004). Applications of T-functions in Cryptography. PhD Thesis, Weizmann Institute of Science (available from ask/).
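As a baseline for what the solution graphs compute efficiently, a brute-force count over a tiny word size shows how mixing addition modulo 2^n with XOR constrains the solutions. The word size and the constant C are arbitrary illustrative choices; since x + C = (x XOR C) + 2·(x AND C), and C here has no top bit, a solution requires x & C = 0, so the count is 2^(8 − popcount(C)) = 16.

```python
# Brute-force baseline for a tiny instance of the equations discussed
# above: count the x in {0, ..., 2^8 - 1} with (x + C) mod 2^8 == x XOR C.
# Because x + C = (x XOR C) + 2*(x AND C) and C lacks the top bit,
# solutions are exactly the x with x & C == 0.

N = 8
MASK = (1 << N) - 1
C = 0x3A  # arbitrary illustrative constant without the top bit

solutions = [x for x in range(1 << N) if ((x + C) & MASK) == (x ^ C)]
assert all(x & C == 0 for x in solutions)
print(len(solutions))  # 16 == 2**(8 - bin(C).count("1"))
```

A solution-graph (BDD-like) representation would give the same count without enumerating all 2^n candidates, which is what makes the talk's algorithms applicable at realistic word sizes.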

15 Group Authentication and Encryption in Distributed Environments
Philipp A. Baer
University of Ulm, Department of Theoretical Computer Science, Germany

Security is not always considered an important issue for groups of distributed systems. Message authentication and encryption are often disregarded, sometimes just because implementations are missing. Nevertheless, especially in the context of communication, control and monitoring, security is an extremely important issue. This paper discusses techniques that address some of the basic security requirements for unreliable group communication scenarios. It combines existing security technologies (DH, GDH [STW96], DSA, AES) and communication protocols and schemes (IPv6, multicast) for group collaboration scenarios in unreliable environments. Message authentication and message stream encryption for groups, to mention only the most important requirements, are considered as examples. The architecture and its communication primitives are tailored to the needs of unreliable environments. This is mostly due to the intended field of application: groups of autonomous mobile systems. The architecture is designed for a wide variety of systems and is open in the sense of being extensible. For the proposed techniques, i.e. key agreement, authentication and encryption, very simple yet extensible protocols are used. Because of the many unsolved problems in the area of secure ad-hoc communication, and due to the wide variety of involved scientific subjects, only a superficial solution can be presented.

References
[Bae04] Philipp A. Baer. Group Authentication and Encryption in Distributed Environments. University of Ulm, July.
[STW96] Michael Steiner, Gene Tsudik and Michael Waidner. Diffie-Hellman key distribution extended to group communication. In ACM Conference on Computer and Communications Security, pages 31-37, March.
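The two-party Diffie-Hellman exchange that GDH [STW96] generalises to groups can be sketched as follows. The toy 32-bit prime and the base are illustrative only and far too small for real use; in GDH the same blinded-exponent idea is chained across all group members.

```python
import secrets

P = 0xFFFFFFFB  # largest 32-bit prime; a toy modulus, not a secure group
G = 7           # illustrative base

a = secrets.randbelow(P - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(P - 2) + 1  # Bob's secret exponent

A = pow(G, a, P)  # Alice sends A to Bob
B = pow(G, b, P)  # Bob sends B to Alice

k_alice = pow(B, a, P)  # (g^b)^a
k_bob = pow(A, b, P)    # (g^a)^b
assert k_alice == k_bob  # both parties derive the same group key material
```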

16 Authentication within Tree Parity Machine Rekeying
André Schaumburg
Hamburg University of Science and Technology, Distributed Systems, Schwarzenbergstraße 95, Hamburg, Germany

Interaction of Tree Parity Machines (TPMs) has been discussed as an alternative secure key exchange concept, and attacks have been proposed [KKK02]. Authentication is at least as important as a secure exchange of keys. Adding authentication, e.g. via hashing, is straightforward but lies outside the concept named Neural Cryptography. Presented here is a consistent formulation of an implicit zero-knowledge authentication from within the key exchange concept, and another alternative that integrates an explicit zero-knowledge authentication into the already interactive protocol. A man-in-the-middle attack, and indeed all currently known attacks, can thus be averted. This in turn allows to securely exploit the trajectory in key space along with rapid key exchange and an efficient increase of key length. Another benefit of the presented authentication method is that all currently known findings concerning Neural Cryptography remain untouched and valid, even with the extension of authentication. Furthermore, there is no need to reimplement the interface; it only gets extended by an authentication control unit. The general trade-off in applied cryptography between available resources and the required level of security also applies when using the TPM principle. In many practical embedded security solutions it is often acceptable to provide a system that is safe enough for the particular application and given certain attack scenarios. The TPM principle extended with the proposed authentication is very attractive for such embedded applications due to its hardware-friendly basic operations, in particular because it does not operate on large numbers.

References
[KKK02] Kanter, I., Kinzel, W. and Kanter, E.: Secure exchange of information by synchronization of neural networks. Europhysics Letters 57 (2002).
[VS04] Volkmer, M. and Schaumburg, A.: Authenticated tree parity machine key exchange. Preprint cs.cr/ (2004), submitted to Europhysics Letters.
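For reference, the unauthenticated TPM key exchange that the above extends can be sketched as follows. Two machines with K hidden units, N inputs per unit and weight bound L see common public random inputs and apply a Hebbian update only when their outputs agree; the identical weight vectors reached after synchronisation serve as the shared key. Parameter values, the seed and the step cap are illustrative, and the authentication extension that is the subject of the talk is deliberately omitted.

```python
import random

K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    # sigma_i: sign of the i-th hidden unit; tau: product of the signs
    sigmas = [1 if sum(wi * xi for wi, xi in zip(w[i], x[i])) > 0 else -1
              for i in range(K)]
    tau = 1
    for s in sigmas:
        tau *= s
    return tau, sigmas

def tpm_update(w, x, tau, sigmas):
    # Hebbian rule: adjust only hidden units that agree with the output,
    # clipping each weight to [-L, L]
    for i in range(K):
        if sigmas[i] == tau:
            for j in range(N):
                w[i][j] = max(-L, min(L, w[i][j] + sigmas[i] * x[i][j]))

rng = random.Random(1)
wA = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
wB = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

steps = 0
while wA != wB and steps < 20000:
    # both parties see the same public random inputs
    x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    tauA, sigA = tpm_output(wA, x)
    tauB, sigB = tpm_output(wB, x)
    if tauA == tauB:  # update only on agreeing outputs
        tpm_update(wA, x, tauA, sigA)
        tpm_update(wB, x, tauB, sigB)
    steps += 1

# the synchronised weight vectors wA == wB serve as shared key material
```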

17 Overview of Authenticated Encryption Modes of Operation
Elena Ivanova Andreeva
Saarland University

Traditional block cipher modes of operation, namely CBC, CFB, OFB and CTR, provide encryption achieving the confidentiality goal, but without any integrity guarantees. On the other hand, there exist authentication modes that are custom-made to ensure integrity; however, they do not provide encryption. A conventional way to satisfy both security properties, confidentiality and integrity, is to make two separate passes over the data: one encryption pass to encrypt the data with a block cipher, and a second authentication pass to check the data's integrity. Recently, new unconventional integrity-aware modes of operation for block ciphers have been proposed. They provide confidentiality and integrity by combining authentication with traditional block cipher encryption while making only a single pass over the data. These modes are called authenticated encryption modes. In our presentation we give an overview of some of these newly proposed authenticated encryption modes, such as IACBC, IAPM [JU01], XCBC [GD01] and OCB [RBBK01], their properties, and their advantages for future use.

References
[GD01] V. Gligor, P. Donescu. Fast Encryption and Authentication: XCBC Encryption and XECB Authentication Modes. Proceedings of the Fast Software Encryption Workshop - FSE 01. Springer-Verlag, October.
[JU01] C. Jutla. Encryption Modes with Almost Free Message Integrity. Advances in Cryptology - EUROCRYPT.
[RBBK01] P. Rogaway, M. Bellare, J. Black and T. Krovetz. OCB: A Block-Cipher Mode of Operation for Efficient Authenticated Encryption. ACM Transactions on Information and System Security (TISSEC), vol. 6, no. 3, August.
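The conventional two-pass approach can be sketched as follows: one pass encrypts, a second pass computes an integrity tag over the ciphertext. A SHA-256-based keystream stands in for the block cipher and HMAC-SHA256 for the authentication mode, so this illustrates the generic two-pass composition rather than any of the one-pass modes above.

```python
import hashlib
import hmac

def xor_keystream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # hash-based keystream standing in for a block cipher in CTR mode
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ k for d, k in zip(data, out))

def seal(enc_key, mac_key, nonce, plaintext):
    ct = xor_keystream(enc_key, nonce, plaintext)                 # pass 1: encrypt
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # pass 2: authenticate
    return ct, tag

def unseal(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return xor_keystream(enc_key, nonce, ct)

ct, tag = seal(b"ek", b"mk", b"nonce0", b"attack at dawn")
assert unseal(b"ek", b"mk", b"nonce0", ct, tag) == b"attack at dawn"
```

The one-pass modes in the talk avoid the second full traversal of the data, which is precisely their performance advantage over this composition.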

18 Improving the Security of Watermarking Schemes with Cryptographic Techniques
André Adelsbach, Markus Rohe and Ahmad-Reza Sadeghi
Universität des Saarlandes, Saarbrücken / Ruhr-Universität Bochum

Standard information hiding schemes, such as watermarking schemes, suffer from a major problem: they require revealing security-critical information to potentially untrusted parties when proving the presence of a watermark to these parties. Zero-knowledge watermark detection is a promising means to overcome this problem: the embedded information is concealed from the verifying party while the proving party is committed to it. A prover and a verifier perform a zero-knowledge detection protocol, after which the verifier is convinced that the committed information is imperceptibly present in the given digital content, without gaining any new knowledge about the security-critical information. However, concealing the embedded information prevents the verifying party from performing additional checks on this data, e.g., on its probability distribution, which may be required for certain applications. We overcome this limitation by proposing concrete and practical protocols which pursue two promising strategies: the first is to prove in zero-knowledge that the concealed information satisfies a certain predicate, whereas the second is to interactively and verifiably generate committed information that satisfies the desired predicate, e.g., matches a certain probability distribution. Moreover, we have designed an efficient implementation of a zero-knowledge watermark detection protocol to demonstrate its applicability in practice.
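The "committed to it" step above assumes a commitment scheme: the prover publishes a binding, hiding commitment to the watermark and later proves statements about the hidden value. The protocols in the talk need homomorphic commitments (e.g. Pedersen commitments) so that such proofs are possible; the hash-based sketch below only illustrates the commit/open interface itself.

```python
import hashlib
import secrets

def commit(value: bytes):
    # hiding: c reveals nothing without the nonce; binding: the prover
    # cannot later open c to a different value (SHA-256 collision resistance)
    nonce = secrets.token_bytes(32)
    c = hashlib.sha256(nonce + value).digest()
    return c, nonce  # publish c; keep (nonce, value) until opening

def verify(c: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == c

c, nonce = commit(b"embedded watermark")
assert verify(c, nonce, b"embedded watermark")
assert not verify(c, nonce, b"forged watermark")
```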

19 Cryptographic Watermarking
Stefan Katzenbeisser
Institut für Informatik, Technische Universität München, D Garching bei München

The rapid growth of the Internet as a distribution medium for digital goods has increased the risk of copyright infringements. From an economic point of view, this risk makes the commercialization of digital works difficult, if not impossible. Therefore, the need for technical copyright protection solutions has increased steadily over the last years. Robust digital watermarking became a promising technology in the context of copyright protection and was proposed as a central building block in various e-commerce protocols (such as dispute-resolving, copy protection and traitor tracing schemes or DRM applications). Traditionally, the design of watermarking schemes was seen as a signal-processing problem and concentrated on issues such as the imperceptibility of the watermark or its resistance against unauthorized removal. However, when a watermark is to be used in an e-commerce system, its properties may become critical to the security of the overall scheme. It is therefore necessary to gain a thorough and mathematically precise understanding of the essential security properties of watermarks. In this talk, I review recent results [1, 2] that establish the security of two cryptographic protocols that employ watermarking operations as basic primitives. The first protocol can be used in dispute-resolving schemes in order to assure their resistance against an important class of attacks. The second protocol allows detecting forgeries in image files or video streams by embedding a watermark carrying a cryptographic signature. Both constructions are provably secure under standard cryptographic assumptions.

References
[1] A. Adelsbach, S. Katzenbeisser, H. Veith. Watermarking Schemes Provably Secure Against Copy and Ambiguity Attacks. In ACM Workshop on Digital Rights Management (DRM 2003), Proceedings, Washington DC, 2003.
[2] J. Dittmann, S. Katzenbeisser, C. Schallhart, H. Veith. Provably Secure Authentication of Digital Media Through Invertible Watermarks. To appear as IACR ePrint report.

20 Bridging the Usability Gap of PKI
Tobias Straub
Computer Science Department, Technische Universitaet Darmstadt

From a user's viewpoint, security in general and PKI (public key infrastructure) in particular are complex matters. According to a recent study [kes04], the majority of security incidents can still be traced back to user errors or carelessness, although users' troubles with PKI-enabled applications have been known since the study of Whitten and Tygar investigating the handling of secure e-mail [WT99]. Unfortunately, due to the nature of PKI, user interaction cannot be avoided entirely, e.g. when an unknown certificate has to be imported as a trust anchor. In my work, I propose several measures to face the usability challenges of PKI. One of them is a generic framework to comprehensively evaluate the usability and utility of PKI-enabled applications [SB04]. Apart from being a tool to detect usability problems and assess applications, it may as well serve as a requirements specification for application designers. Another idea is to delegate complex and security-critical tasks to skilled personnel or a service provider. For instance, the protocol described in [Str04] allows an organization to outsource the task of maintaining a PKI in a way that it retains full control and does not have to trust the service provider.

References
[kes04] Lagebericht zur Informationssicherheit. In: kes 4/2004 and 5/2004.
[Str04] T. Straub. A Method for Strengthening Certificate Enrollment. WartaCrypt.
[SB04] T. Straub and H. Baier. A Framework for Evaluating the Usability and the Usefulness of PKI-enabled Applications. European PKI Workshop (Springer LNCS 3093).
[WT99] A. Whitten, J.D. Tygar. Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0. USENIX Security Symposium.


25 Mental Poker in Practice: An Extended Implementation of Schindelhauer's Toolbox for Mental Card Games
Heiko Stamer
University of Kassel, Department of Mathematics/Computer Science, Heinrich-Plett-Straße 40, D Kassel, Germany

A lot of cryptographic research has been carried out on Mental Poker during the last decades, but efficient implementations are still very rare. A few years ago Schindelhauer [Sc98] introduced a general toolbox which extends previous work of Crépeau [Cr87]. Roughly speaking, the type of a card is shared among the players through a bitwise representation by quadratic (non-)residues. Thus the security relies on the well-known Quadratic Residuosity Assumption (QRA). Unfortunately, the size of a card grows linearly in the number of players and logarithmically in the number of different cards. Recently a more efficient solution [BS03] was proposed, whose security can be based on the Decisional Diffie-Hellman Assumption (DDH); moreover, its encoding is independent of the number of players and cards, respectively. My talk presents the technical details of an extended implementation for the open source library libtmcg [St05]. Further, we discuss its practicality using the German card game Skat as an example [St04].

References
[Sc98] Christian Schindelhauer. Toolbox for Mental Card Games. Technical Report A-98-14, University of Lübeck.
[Cr87] Claude Crépeau. A zero-knowledge poker protocol that achieves confidentiality of the players' strategy or how to achieve an electronic poker face. In Advances in Cryptology: CRYPTO 86 Proceedings, Lecture Notes in Computer Science 263.
[BS03] Adam Barnett and Nigel P. Smart. Mental Poker Revisited. In K.G. Paterson (Ed.): Cryptography and Coding 2003, Lecture Notes in Computer Science 2898.
[St04] Heiko Stamer. Kryptographische Skatrunde. In Offene Systeme 4, ISSN.
[St05] Heiko Stamer.
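The quadratic-residue card encoding can be sketched as follows: each bit of a card's type is represented by a quadratic residue (bit 0) or a non-residue (bit 1), and Euler's criterion decodes it. Real instances of the toolbox use a modulus whose residuosity is hard to decide without trapdoor information; a public prime is used below only so the decoding step can be shown, and the prime itself is an arbitrary small choice.

```python
import random

P = 1009  # small prime; residuosity modulo a prime is easy on purpose here

def is_qr(a: int) -> bool:
    # Euler's criterion: a is a quadratic residue mod P iff a^((P-1)/2) == 1
    return pow(a, (P - 1) // 2, P) == 1

# a fixed non-residue, used to flip residuosity when encoding a 1-bit
NQR = next(a for a in range(2, P) if not is_qr(a))

def encode_bit(bit: int, rng: random.Random) -> int:
    r = rng.randrange(1, P)
    a = (r * r) % P                    # fresh random quadratic residue
    return (a * NQR) % P if bit else a  # residue * non-residue is a non-residue

def decode_bit(a: int) -> int:
    return 0 if is_qr(a) else 1

rng = random.Random(0)
for bit in (0, 1, 1, 0):
    assert decode_bit(encode_bit(bit, rng)) == bit
```

The re-randomisation in `encode_bit` is what lets players mask and shuffle cards without changing the encoded bits.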

28 Fair DRM: Fair Use by Secret Sharing
André Adelsbach, Ulrich Greveler, Jörg Schwenk
Horst Görtz Institute for IT Security, Ruhr-Universität Bochum, Germany

We present a fair DRM environment where each user can act pseudonymously and is entitled to make a fixed number of copies for private purposes (e.g., a maximum of 7 copies), but where the user can be identified and prosecuted when more copies are produced. Our system comprises three central authorities (pseudonymization, licensing, issuer) in order to protect the user against malicious providers, and uses secret-sharing schemes in such a way that a central authority can only reconstruct the secret (the user's identity) when a sufficient number of shares has been transmitted by the media players. The user's identity will not be known to this authority as long as the user limits the number of private copies to the maximum (a parameter of the system). In the set-up phase of the DRM system, each user chooses a pseudonym, that is, a key pair (sk_P, pk_P), and sends a request for a certificate on this pseudonym, Req_Pseud = Sign(sk_B, pk_P || terms), to a pseudonymization authority, where terms is a description of the contractual conditions regarding liability and depseudonymization. The user may obtain a licence for an object identified by DOI by sending a signed request Req = Sign(sk_P, DOI || cert_P || terms) to the licensing agency. This licence can then be of the form Licence = Sign(sk_L, DOI || Pol || Rights || Enc(pk_P, key)), where Pol is a random polynomial of the same degree as the number of allowed copies, so that Shamir's secret-sharing scheme may be used here. We will also present other forms with different performance characteristics. The media player evaluates Pol at position i := Hash(Licence || player_ID) and transmits the share S := DOI || Hash(Licence) || i || Pol(i), so each time the object is played the same share is computed, but different players will compute different shares with high probability.
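The polynomial share mechanism described above reduces to textbook Shamir secret sharing: a degree-t polynomial with Pol(0) = secret reveals nothing from t shares but is fully determined by t + 1. The sketch below works over a toy prime field and takes the evaluation positions as given, rather than deriving them from Hash(Licence || player_ID) as in the protocol.

```python
import random

P = 2**31 - 1  # Mersenne prime; a toy field for illustration

def make_poly(secret, t, rng):
    # degree-t polynomial with constant term = secret (here: user identity)
    return [secret] + [rng.randrange(P) for _ in range(t)]

def eval_poly(coeffs, x):
    # Horner evaluation of Pol(x) mod P
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def interpolate_at_zero(shares):
    # Lagrange interpolation of Pol(0) from (x, y) pairs
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = (num * -xm) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

rng = random.Random(42)
coeffs = make_poly(7654321, 3, rng)  # degree 3: 3 copies allowed
shares = [(x, eval_poly(coeffs, x)) for x in (11, 22, 33, 44)]
assert interpolate_at_zero(shares) == 7654321  # 4 shares reveal the identity
```

With only three distinct shares the interpolation yields an unrelated field element, which is exactly the privacy guarantee for a user who stays within the copy limit.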

30 Immediate Rekeying by Tree Parity Machines in a WLAN System
Nazita Behroozi
Hamburg University of Technology, Distributed Systems, Schwarzenbergstraße 95, D Hamburg, Germany

WPA and IEEE 802.11i provide the Pre-Shared Key (PSK) for key establishment in Wireless LAN (WLAN) in the case that no 802.1x authentication server (RADIUS) is available. Recent research shows the risk of using PSK, because its security depends on the length and quality of the pass phrase used [2]. We investigate secure key exchange in WLAN via Tree Parity Machines (TPMs) [1]. In order to evaluate the capability and suitability of an integrated key exchange via TPMs, a system was developed which allows the transfer of encrypted data between two WLAN clients. One WLAN client is an embedded system comprising a hardware and a software part; it was developed on an embedded ARM processor based development kit with an FPGA device. The other WLAN client was implemented in software running on a personal computer. In this work an immediate rekeying scenario was implemented to study frequent key exchange via TPMs in parallel to the encrypted data transmission in WLAN. The immediate rekeying technique allows exchanging a new key as soon as the previous key has been exchanged. We measured the number of keys exchanged and used for encryption for different amounts of data to be transferred. The evaluation shows that, in the best case, each Ethernet packet with a maximum length of 1500 bytes can be encrypted with a new key. In the Temporal Key Integrity Protocol, which is used in WPA and IEEE 802.11i, a rekeying message is sent after a fixed number of packets to request a new key [3].

References
[1] Kanter, I., Kinzel, W. and Kanter, E.: Secure exchange of information by synchronisation of neural networks. Europhysics Letters 57 (1), 2002.
[2] Moskowitz, R.: WLAN Testing Reports, PSK as the Key Establishment Method. ICSA Labs, 2003.
[3] Walker, J.: Security Series, Part II: The Temporal Key Integrity Protocol (TKIP). Network Security, Intel Corporation.

31 Security Challenges of Location-Aware Mobile Business
Emin Islam Tatli, Dirk Stegemann, Stefan Lucks
Department of Computer Science, University of Mannheim

In addition to mobility, the ability to be context-aware, and especially location-aware, has enabled mobile businesses to support context-aware services. Today, many different kinds of context-aware services, ranging from finding nearby restaurants [FR] to sending ambulances to people in emergencies [LP], have already taken their place in business. The m-business research group at the University of Mannheim [MB04] aims at building a generic framework that is able to execute any kind of context-aware service. Our talk presents the security challenges of this mobile business framework, with special focus on location as a context property. Our analysis shows that, in addition to privacy and confidentiality, other security challenges, especially anonymous and unlinkable services, usability combined with security, integrity and authenticity of services, secure payment, fair exchange, location-based spamming, rogue access points and forged GPS signals, would directly affect the user acceptance of the m-business framework. Having specified the challenges and possible solutions, our next step is to design an open and flexible security architecture that can be integrated into the application framework.

References
[MB04] The Mobile Business Research Group. URL:
[LP] Locating people in emergency. URL:
[FR] Location-based services for mobile communities. URL:

32 Secure Group Communication in WLAN Ad-Hoc Networks with Tree Parity Machines
Björn Saballus
Hamburg University of Technology, Distributed Systems, Schwarzenbergstraße 95, Hamburg, Germany

Most portable customer devices, such as laptops and PDAs, have the ability to communicate via Wireless Local Area Networks (WLAN). These are described in the IEEE 802.11 standard, which also allows setting up self-configuring and self-organising ad-hoc networks without a central server. This work investigates a method to allow secure group communication in ad-hoc networks using symmetric key exchange by Tree Parity Machines (TPMs) [KKK02]. A main problem in secure group communication is the need to establish and distribute a shared secret key between the members of the group, especially on join or leave actions [HD03]. With TPMs, multiparty key exchange is inherently possible based on multiparty synchronisation. A group of TPMs, all sharing one key, can perform a synchronisation with another group of TPMs sharing another key. Once the synchronisation process is finished, all TPMs will share the same key. The suggested chained synchronisation has a runtime complexity of O(n), with n being the group size; an alternative concurrent synchronisation has a runtime complexity of O(log n). This work-in-progress presents some preliminary experimental results on the usability of TPMs in secure group communication.

References
[KKK02] Kanter, I., Kinzel, W. and Kanter, E.: Secure exchange of information by synchronisation of neural networks. Europhysics Letters 57 (1), 2002.
[HD03] Hardjono, T. and Dondeti, L.R.: Multicast and Group Security. Artech House Inc.

35 Fault Attacks on Combiners with Memory
Frederik Armknecht and Willi Meier
Universität Mannheim, Germany / FH Aargau, Switzerland

Fault attacks are powerful cryptanalytic tools that are applicable to many types of cryptosystems. Recently, general techniques have been developed which can be used to attack many standard constructions of stream ciphers based on LFSRs. Some more elaborate methods have been invented to attack RC4. These fault attacks are not applicable in general to combiners with memory. In this paper, techniques are developed that specifically allow attacking this class of stream ciphers. These methods are expected to work against any LFSR-based construction that uses only a small memory and few input bits in its output function. In particular, efficient attacks are described against the stream cipher E0 used in Bluetooth, either by inducing faults in the memory or in one of its LFSRs. In both cases, the outputs derived from the faulty runs finally allow describing the secret key by a system of linear equations. Computer simulations showed that inducing 12 faults sufficed in most cases if about 2500 output bits were available. Another specific fault attack is developed against the stream cipher SNOW 2.0, whose output function has a 64-bit memory. Similar to E0, the secret key is finally the solution of a system of linear equations. We expect that one fault is enough if about 2^12 output words are known.

References
[1] Biham, Granboulan, Nguyen: Impossible Fault Analysis of RC4 and Differential Fault Analysis of RC4. FSE 2005, Springer.
[2] Boneh, DeMillo, Lipton: On the Importance of Checking Cryptographic Protocols for Faults. Eurocrypt 1997, LNCS 1233, Springer.
[3] Biham, Shamir: Differential fault analysis of secret key cryptosystems. Crypto 1997, LNCS 1294, Springer.
[4] Hoch, Shamir: Fault Analysis of Stream Ciphers. CHES 2004, LNCS 3156, Springer.
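The reason faulty runs of an LFSR yield linear equations can be seen in isolation: a plain LFSR is GF(2)-linear in its state, so the XOR difference between the correct and the faulted keystream equals the keystream generated from the fault pattern alone, independent of the unknown state. The nonlinear memory of combiners like E0 is exactly what breaks this simple picture and motivates the dedicated techniques of the paper. The toy 8-bit Fibonacci LFSR and its tap mask below are arbitrary illustrative choices.

```python
TAPS = 0b10111000  # illustrative feedback tap mask
N = 8

def lfsr_stream(state, length):
    # plain Fibonacci LFSR: output the low bit, shift in the XOR of the taps
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = bin(state & TAPS).count("1") & 1  # parity of tapped bits
        state = (state >> 1) | (fb << (N - 1))
    return out

correct = lfsr_stream(0b11010010, 32)
faulty = lfsr_stream(0b11010010 ^ 0b00000100, 32)  # single induced bit flip
diff = [a ^ b for a, b in zip(correct, faulty)]

# GF(2)-linearity: the difference stream is the stream of the fault
# pattern alone, giving the attacker linear equations in the fault position
assert diff == lfsr_stream(0b00000100, 32)
```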

36 Some Thoughts about Block Ciphers and Stream Ciphers
Erik Zenner
Cryptico A/S

Block ciphers and stream ciphers are well-known concepts in cryptography. They are encountered in almost all major textbooks, and they form the basis of all known symmetric encryption algorithms. Nonetheless, many misconceptions surround these concepts. Adi Shamir announced in early 2004 that stream ciphers were dead, only to revoke that statement at the SASC workshop and at Asiacrypt. At the SKEW workshop held in May 2005 in Aarhus, it turned out that the specialists in stream cipher cryptography do not even agree on what a stream cipher actually is. Security-wise, things do not look much better. The common perception is that block ciphers (like AES) are more secure than stream ciphers - but are they? In fact, such a statement compares apples with oranges, since block ciphers are not used directly for encryption. Instead, they are used in a mode of operation, which is usually a stream cipher, and one that often turns out to be cryptographically weaker than dedicated stream ciphers. In this talk, we attempt to clear up some of the conceptual muddle around block and stream ciphers. We will review some definitions and notions of security, point out the true advantages of block and stream ciphers, and advocate a clear use of terminology.
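The claim that a block cipher in a mode of operation "is a stream cipher" can be made concrete with CTR mode, which turns any keyed block function into a keystream generator that is XORed with the plaintext. SHA-256 stands in for the block cipher here, so this sketches the construction itself rather than, say, AES-CTR.

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, nblocks: int) -> bytes:
    # "encrypt" successive counter values under the key to get a keystream;
    # in real CTR mode, E_k would be a block cipher such as AES
    ks = b""
    for ctr in range(nblocks):
        ks += hashlib.sha256(key + nonce + ctr.to_bytes(16, "big")).digest()
    return ks

def ctr_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    ks = ctr_keystream(key, nonce, -(-len(msg) // 32))  # ceil(len/32) blocks
    return bytes(m ^ k for m, k in zip(msg, ks))

# decryption is the same XOR: exactly the interface of a classical stream cipher
ct = ctr_encrypt(b"k", b"n", b"block cipher as stream cipher")
assert ctr_encrypt(b"k", b"n", ct) == b"block cipher as stream cipher"
```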

37 Runners, Starting Lines and Mutual Distances: On the Security of Tree Parity Machine Key Exchange
Markus Volkmer and Florian Grewe
Hamburg University of Technology, Computer Engineering VI, Schwarzenbergstraße 95, Hamburg, Germany

Attacks on key exchange by Tree Parity Machines (TPMs) [1] exist that also employ one or more TPMs (e.g. [2]). An attacker tries to learn the internal state of the interacting parties from observing the publicly communicated outputs of their synchronisation process. The security of the principle has so far only been assessed experimentally, in terms of success probabilities of an attacker with respect to the chosen TPM parameters. This contribution suggests taking a different view on the key exchange and the related attacks. The interacting as well as the attacking TPMs are considered to be runners that start a race at different, randomly chosen starting lines. Although the finish is determined by the choice of the interacting runners, it is unknown to all of them. The attacking runner has the disadvantage of being slower than the two interacting runners, because he can only chase them and does not interact. In this unusual race, the starting lines can be chosen freely, and no runner knows the starting lines of the other runners. A slower attacking runner can thus win by chance if he picks an advantageous starting line relative to one of the other two runners. In particular, the initial mutual distances between the runners and the attacking runner determine who will win. This perspective still leaves one with success probabilities, after all. Yet it allows for fundamental insights into the relation between the initial mutual distances, synchronisation times and success probabilities, and thus into the discussion of security. Countermeasures against attacks are also motivated: increase the interaction of your runners, slow down the attacking runner, or choose close starting lines.

References
[1] Kanter, I., Kinzel, W. and Kanter, E.: Secure Exchange of Information by Synchronisation of Neural Networks. Europhysics Letters 57 (1), 2002.
[2] Klimov, A., Mityagin, A. and Shamir, A.: Analysis of Neural Cryptography. In Proc. of Asiacrypt 2002, volume 2501 of LNCS, Queenstown, New Zealand, December 2002. Springer-Verlag.

38 A Framework for Computer Proofs in Probability Theory for Use in Cryptography
Markus Kaiser, Johannes Buchmann, and Tsuyoshi Takagi
Darmstadt University of Technology, Germany, and Future University - Hakodate, Japan

Mathematical proofs are often complex and hard to verify by their readers. Consequently, the application of formal proof systems is a useful approach in the area of verification. We present a framework for computer proofs in probability theory. To this end, we describe formalized probability distributions and fundamental lemmata concerning σ-algebras, probability spaces and conditional probabilities. These are given in the formal language of the formal proof system Isabelle/HOL. Furthermore, we describe an application of the presented formalized probability distributions and fundamental lemmata to cryptography. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in probability theory for interactive proof constructions within the formal proof system mentioned above. Computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.

This work was partially funded by the German Federal Ministry of Education and Technology (BMBF) in the framework of the Verisoft project under grant 01 IS C38. The responsibility for this article lies with the authors.

References
[BR93] M. Bellare and P. Rogaway. Random Oracles are Practical: A Paradigm for Designing Efficient Protocols. In Proceedings of the First ACM Conference on Computer and Communications Security.
[Hur01] Joe Hurd. Formal Verification of Probabilistic Algorithms. PhD thesis, Trinity College, University of Cambridge.
[Ric03] Stefan Richter. Formalizing Integration Theory, with an Application to Probabilistic Algorithms. Diplomarbeit, Technische Universität München.
[Sho01] V. Shoup. OAEP Reconsidered. In Advances in Cryptology - CRYPTO 2001, volume 2139 of Lecture Notes in Computer Science. Springer-Verlag.

42 Designing Secure Protocol Implementations
Philipp A. Baer
University of Kassel, FB 16, FG Distributed Systems, Germany

Security protocols that are specified only in a formal language normally cannot be translated into software right away, mostly due to missing implementation details. Furthermore, a naïve implementation is often error-prone because of the variety of environmental configurations. We propose the interactive assisted modeling (IAM) architecture for security protocol specification. Its objective is to improve the quality and portability of protocol implementations. The IAM architecture offers detail-level-filtered modeling, support for group communication, and optimized code generation. An abstract and platform-independent representation language is introduced to guarantee portability of protocol specifications. The IAM modeling interface provides an abstract view of the communication scenario and the environment. It furthermore supports the specification of environmental properties such as characteristics of the communication media. Third-party tools for protocol and security analysis will also be supported. Projects like [1] follow a similar approach. Cryptographic and communication primitives as well as common networking parameters are directly mapped into our representation language. The language is similar to MuCAPSL [2], which is primarily targeted towards the specification of multicast authentication protocols; however, the objective of MuCAPSL is protocol analysis, whereas our language was explicitly designed for automatic code generation. In a further transformation process a protocol specification is translated into intermediate or native code. The intermediate code target is similar to Microsoft's Intermediate Language (MSIL). An optimized interpreter executes this code (communication sandboxing).

References
[1] E. Saul, A. C. M. Hutchison. SPEAR II: The Security Protocol Engineering and Analysis Resource. 2nd Annual South African Telecommunications, Networks and Applications Conference.
[2] J. Millen, G. Denker. CAPSL and MuCAPSL. Journal of Telecommunications and Information Technology 4.

43 Improved Boomerang Attack on Eight-Round Serpent
Anne Schwalb
Mathematisches Institut, Justus-Liebig-Universität Giessen, Germany

One of the five AES finalists is the block cipher Serpent (see [ABK98]), a 32-round SP-network. This talk begins with a short introduction to this cipher. Then the boomerang attack on 8-round Serpent is presented as an extension of differential cryptanalysis. The boomerang attack is a key-recovery attack that needs chosen plaintexts and adaptively chosen ciphertexts. Both differential cryptanalysis and the boomerang attack use characteristics. Since the efficiency of the attack depends on the used characteristics having as high a probability as possible, an introduction to differential characteristics is also given as a component of the boomerang attack. As a novel contribution, the attack on 8-round Serpent given in [KKS00] is improved by using a characteristic with a higher probability than the one used there; this better characteristic is taken from [BDK01]. The attack presented in [KKS00] requires the entire codebook of chosen plaintexts and ciphertexts, as well as a certain amount of memory and time (the latter measured in 8-round-Serpent encryptions). The new attack, which uses the better characteristic from [BDK01], also works with the entire codebook, i.e., it has the same data and memory requirements, but it decreases the required time. To the best of our knowledge, this new attack is the best published attack on 8-round Serpent.

References
[ABK98] R. J. Anderson, E. Biham, L. R. Knudsen. Serpent: A Proposal for the Advanced Encryption Standard. NIST AES Proposal, 1998.
[KKS00] T. Kohno, J. Kelsey, B. Schneier. Preliminary Cryptanalysis of Reduced-Round Serpent. Third AES Candidate Conference, 2000.
[BDK01] E. Biham, O. Dunkelman, N. Keller. The Rectangle Attack - Rectangling the Serpent. Proceedings of EUROCRYPT 2001, Advances in Cryptology, LNCS 2045, Springer, 2001.

45 Strengthening the E0 Keystream Generator against Correlation Attacks and Algebraic Attacks
Frederik Armknecht, Matthias Krause and Dirk Stegemann
University of Mannheim, Germany

Stream ciphers are widely used for online encryption of arbitrarily long data. An important class of stream ciphers are combiners with memory, with the E0 generator from the Bluetooth standard for wireless communication [2] being their most prominent example. E0 consists of 4 driving devices, a finite state machine (FSM) with a 4-bit state, an output function f and a memory update function δ. At each clock t, one keystream bit z_t is produced from the output X_t ∈ {0,1}^4 of the driving devices and the current state C_t ∈ {0,1}^4 of the FSM according to z_t = f(C_t, X_t), and the state of the FSM is updated to C_{t+1} := δ(C_t, X_t). So far, the best publicly known attacks against combiners with memory are correlation attacks [4] and algebraic attacks [1]. Correlation attacks exploit linear equations L(X_t, ..., X_{t+r-1}, z_t, ..., z_{t+r-1}) = 0 that are true with some probability p, where p ≠ 1/2. Algebraic attacks use valid nonlinear equations of preferably low degree to describe the secret key by a system of equations. We show how to avert a special class of correlation attacks [3], currently the most effective against E0, and introduce a general design principle which guarantees that all valid equations have a degree not smaller than a certain lower bound. Combining these results, we construct a slightly modified version of E0 with significantly improved resistance against correlation attacks and algebraic attacks.

References
[1] Armknecht, Krause: Algebraic Attacks on Combiners with Memory. Crypto 2003.
[2] Bluetooth Specification.
[3] Lu, Vaudenay: Faster Correlation Attack on the Bluetooth Keystream Generator. Crypto 2004.
[4] Salmasizadeh, Golić, Dawson, Simpson: A Systematic Procedure for Applying Fast Correlation Attacks to Combiners with Memory. SAC 1997.
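The clocking rule z_t = f(C_t, X_t), C_{t+1} = δ(C_t, X_t) described in the abstract above can be illustrated with a toy combiner with memory. The functions f and delta below are illustrative placeholders chosen for simplicity, not the real E0 output and memory update functions:

```python
# Toy combiner with memory in the style of E0: four driving devices feed a
# 4-bit output X_t into an output function f and a memory update delta.
# f and delta are ILLUSTRATIVE placeholders, NOT the real E0 functions.

def f(C, X):
    # toy output function: parity of the four driver bits and one state bit
    return X[0] ^ X[1] ^ X[2] ^ X[3] ^ C[0]

def delta(C, X):
    # toy memory update: derive a new 4-bit state from drivers and old state
    s = (sum(X) + 2 * C[0] + C[1]) & 0xF
    return ((s >> 3) & 1, (s >> 2) & 1, (s >> 1) & 1, s & 1)

def keystream(C0, driver_outputs):
    """Produce one keystream bit per clock: z_t = f(C_t, X_t)."""
    C, z = C0, []
    for X in driver_outputs:
        z.append(f(C, X))
        C = delta(C, X)  # C_{t+1} = delta(C_t, X_t)
    return z

drivers = [(1, 0, 1, 1), (0, 0, 1, 0), (1, 1, 1, 0), (0, 1, 0, 1)]
bits = keystream((0, 0, 0, 0), drivers)
```

The point of the structure, as the attacks above exploit, is that each keystream bit depends on only a few LFSR output bits plus a very small hidden state.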

50 New Paradigms for Group Signatures in Federated Systems
Mark Manulis
Horst-Görtz Institute for IT-Security, Ruhr-University of Bochum, Germany

For many multi-party applications group signatures are important cryptographic primitives that can be used for the purpose of anonymity and privacy. Group signatures can be used by employees of a company to sign documents on behalf of the company, or in electronic voting and bidding scenarios. In classical group signatures members of a group are able to sign messages anonymously on behalf of the group. However, there exists a designated authority, called the group manager, that initializes the scheme, adds new group members, and is able to open group signatures, i.e., identify the signer. Some group signature schemes distinguish between two management authorities: a membership manager that sets up the scheme and controls admission to the group, and a revocation manager that opens the signatures. Obviously, in classical group signatures the group manager is given enormous power compared to other group members and is required to be trusted to act as prescribed. On the other hand there exist multi-party applications where such centralized control (trust) is undesirable, e.g., distributed or federated systems. For this kind of application it is desirable to have a group signature scheme which provides similar properties but is independent of any centralized control. In this talk we summarize research results concerning this issue. In particular we have proposed a novel group-oriented signature scheme called democratic group signatures [Ma06], which can be seen as a variant of classical group signatures with the group manager's rights equally distributed among all members of the group. In democratic group signatures each group member can sign on behalf of the group and is also able to identify the signer of a given group signature. The signer's anonymity is provided only against non-members, who are only able to verify group signatures. The group membership is controlled jointly by all group members. We also consider dynamic groups where group membership may vary over time; obviously, for security reasons the relevant group secrets have to be changed in this case. In a subsequent work [MaSaSc06] we have described linkable democratic group signatures, where the anonymity requirement has been relaxed to allow linkability of issued group signatures. Linkability is useful in some application scenarios which subsume communication of group members and non-members. By allowing linkability we were also able to obtain more efficient constructions.

References
[Ma06] Mark Manulis. Democratic Group Signatures - On an Example of Joint Ventures - Fast Abstract. In Proceedings of the ACM Symposium on Information, Computer and Communications Security (ASIACCS '06), p. 365, ACM Press. Full version available online.
[MaSaSc06] Mark Manulis, Ahmad-Reza Sadeghi and Jörg Schwenk. Linkable Democratic Group Signatures. In Proceedings of the 2nd Information Security Practice and Experience Conference (ISPEC 2006), LNCS 3903, Springer.

53 Cryptographically Sound Automatic Analysis of Security Protocols
Ralf Küsters
Institut für Informatik, Christian-Albrechts-Universität zu Kiel, Kiel, Germany

Two distinct approaches for the rigorous design and analysis of cryptographic protocols have been pursued in the literature: the so-called Dolev-Yao, symbolic, or formal approach on the one hand and the cryptographic, computational, or concrete approach on the other hand. In the symbolic approach, messages are considered as formal terms and the adversary can manipulate these terms based on a fixed set of operations. The main advantage of this approach is its relative simplicity, which makes it amenable to automated analysis. In the cryptographic approach, messages are bit strings and the adversary is an arbitrary probabilistic polynomial-time (ppt) Turing machine. While proofs in this model yield strong security guarantees, the proofs are often quite involved and only rarely suitable for automation. Starting with the seminal work of Abadi and Rogaway (2000), a significant amount of research has been directed at bridging the gap between the two approaches. The goal is to obtain the best of both worlds: simple, automated security proofs that entail strong security guarantees. In this talk, I present results for the automatic analysis of cryptographic protocols and report on current research on linking the symbolic and computational approaches.

55 Encrypted and Authenticated Communication via Tree-Parity Machines in AMBA Bus Systems
Sascha Mühlbach, Markus Volkmer and Sebastian Wallner
Hamburg University of Technology, Institute for Computer Technology, Schwarzenbergstrasse 95, Hamburg, Germany

Security is one of the most important factors in embedded systems such as gaming consoles, identification and pay-TV systems, SIM cards in mobile phones, USB keys for network access, or secure devices that handle Digital Rights Management (DRM) for audio and video content. The protection of chip-level microcomputer bus systems in these devices is essential to prevent the growing number of hardware hacking attacks such as the one presented in [Hu02]. This contribution presents a low-cost encryption and key exchange solution for securing chip-level microcomputer bus systems with tree parity machines [VW05]. To this end, a parameterizable Tree Parity Machine Rekeying Architecture (TPMRA) IP-core was designed and implemented. It matches different performance requirements and allows the authentication of bus participants and the encryption of external chip-to-chip buses. We use a stream cipher unit for which the TPM trajectory mode together with a universal hash algorithm acts as keystream generator. The designer can choose between encrypted and plain bus communication separately for every bus participant in an early design phase. The solution is transparent and easily applicable to an arbitrary microcomputer bus system for embedded devices. A proof-of-concept implementation shows the applicability of the TPMRA in the standardized AMBA bus system by implementing the IP-core in the peripheral bus-to-bus interface (AHB-APB bridge) without lowering the bus throughput. It is shown that TPMs can be used to protect the ARM bus system considering all AMBA bus features.

References
[Hu02] A. Huang. Keeping Secrets in Hardware: The Microsoft Xbox Case Study. Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems (CHES 2002), LNCS, Springer, 2002.
[VW05] Markus Volkmer, Sebastian Wallner. Lightweight Key Exchange and Stream Cipher based solely on Tree Parity Machines. ECRYPT (European Network of Excellence for Cryptology) Workshop, July 2005, Graz, Austria.

56 Reducing the Memory Requirements of BDD-Attacks on LFSR-based Stream Ciphers
Matthias Krause and Dirk Stegemann
Theoretical Computer Science, University of Mannheim, Germany

The main application of stream ciphers is the online encryption of arbitrarily long data, for example when transmitting speech data between a Bluetooth headset and a mobile GSM phone or between the phone and a GSM base station. Examples of practically used and intensively discussed stream ciphers are the E0 generator used in Bluetooth [1], the GSM cipher A5/1 [2], and the self-shrinking generator [4]. These ciphers consist of a small number of linear feedback shift registers (LFSRs) and a non-linear compression function C : {0,1}* → {0,1}*. Based on a secret key k ∈ {0,1}^n, the LFSRs produce an internal bitstream z ∈ {0,1}*, which is then transformed into an output keystream y ∈ {0,1}* via y = C(z). For a given plaintext stream p, the ciphertext stream c is computed by bitwise XORing the plaintext and the keystream, i.e., c_i = p_i ⊕ y_i for all i. Any receiver who knows the secret key k can produce the keystream y himself and compute the plaintext bits as p_i = c_i ⊕ y_i. In 2002, Krause proposed a generic attack [3] based on Binary Decision Diagrams (BDDs) on this type of cipher that reconstructs the internal bitstream z, and thereby the secret key k, from a short prefix of a given output keystream y. Currently, the BDD-attack is the best known short-keystream attack against E0 and one of the best generic attacks against A5/1. However, BDD-attacks require a large amount of memory. We approach this problem by presenting various efficiently parallelizable divide-and-conquer strategies (DCS) for E0 and A5/1 that substantially reduce the memory requirements and allow us to tackle much larger keylengths with fixed computational resources. In the case of E0, our DCS lowers the attack's memory requirements by a factor of 2^25 and slightly improves its runtime. In [3], the application of the basic BDD-based attack to E0, A5/1 and the self-shrinking generator was described theoretically, but with rather pessimistic assumptions on the time and memory requirements. We present comprehensive experimental results for the BDD-attack on reduced versions of these ciphers, showing that the performance in practice does not deviate substantially from the theoretical figures.

References
[1] The Bluetooth SIG. Specification of the Bluetooth System.
[2] M. Briceno, I. Goldberg, and D. Wagner. A Pedagogical Implementation of A5/1.
[3] M. Krause. BDD-based Cryptanalysis of Keystream Generators. In Proceedings of EUROCRYPT 2002, volume 2332 of LNCS. Springer.
[4] W. Meier and O. Staffelbach. The Self-Shrinking Generator. In Proceedings of EUROCRYPT 1994, volume 950 of LNCS. Springer.
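The generic construction described in abstract 56 above (LFSRs producing an internal stream, a keystream y, and encryption via c_i = p_i ⊕ y_i) can be sketched in a few lines. The 16-bit register length, the initial state, and the tap mask below are toy choices for illustration only, not the parameters of E0 or A5/1:

```python
def lfsr_bits(state, taps, n):
    """Fibonacci LFSR: output the low bit, feed back the parity of the taps."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        feedback = bin(state & taps).count("1") & 1
        state = (state >> 1) | (feedback << 15)  # 16-bit toy register
    return out

def xor_stream(data, key_bits):
    """c_i = p_i XOR y_i, applied bytewise over the packed keystream bits."""
    key_bytes = [
        sum(b << j for j, b in enumerate(key_bits[i:i + 8]))
        for i in range(0, len(key_bits), 8)
    ]
    return bytes(d ^ k for d, k in zip(data, key_bytes))

plaintext = b"attack at dawn"
key_bits = lfsr_bits(state=0xACE1, taps=0xB400, n=8 * len(plaintext))
ciphertext = xor_stream(plaintext, key_bits)
# decryption is the very same operation with the same keystream
assert xor_stream(ciphertext, key_bits) == plaintext
```

Because encryption and decryption are the same XOR, anyone who reconstructs the internal bitstream, as the BDD-attack does, immediately recovers the plaintext.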

57 On Proving Completeness, Soundness and Security of Authenticated Tree Parity Machine Key Exchange
Markus Volkmer
Hamburg University of Technology, Institute for Computer Technology, Schwarzenbergstraße 95, Hamburg, Germany

Attacks on non-authenticated key exchange by Tree Parity Machines (TPMs) also employ one or more TPMs. They try to learn the internal state of the interacting parties by eavesdropping on the publicly communicated outputs of the synchronization process. In this contribution, proofs regarding the completeness, soundness and security of an authenticated variant of the protocol are sketched. The concept and early practical considerations for entity authentication and authenticated key exchange in the framework of TPMs were already presented in [1]. The soundness of the protocol can be proven by showing that a TPM B with a structure identical to that of TPM A, as well as identical output generation, cannot become synchronous by updating its weights according to the learning rule when having different inputs. The security with regard to attacks using TPMs can be proven by showing that an attacker E using a TPM with a structure identical to the TPMs of parties A and B, as well as identical output generation, can never become (and remain) synchronous with A or B while having inputs different from those of the synchronizing parties. These proofs are available as a preprint [2]. Additionally, the proof of completeness is sketched. It is much more difficult (and lengthy) than the proof of soundness, as (not surprisingly) the relative probabilities of neutral, repulsive and attractive steps in the protocol have to be considered. Next to averting a man-in-the-middle attack, the currently known attacks on the symmetric key exchange principle using TPMs can provably be averted for the authenticated variant.

References
[1] Schaumburg, A.: Authentication within Tree Parity Machine Rekeying. Technical Report, Reihe Informatik, Universität Mannheim, Stefan Lucks and Christopher Wolf (eds.), October.
[2] Volkmer, M.: Entity Authentication and Authenticated Key Exchange with Tree Parity Machines. IACR Cryptology ePrint Archive, Report 2006/112, March 2006.

66 Equivalent Keys in Multivariate Quadratic Public Key Systems: Current State
Christopher Wolf
École Normale Supérieure, Département d'Informatique, 45 rue d'Ulm, Paris Cedex 05, France

1 Initial Considerations

In the last 20 years, several schemes based on the problem of Multivariate Quadratic equations (MQ for short) have been proposed. The most important ones certainly are MIA/C* and Hidden Field Equations (HFE) plus their variations MIA-/C*-, HFE-, HFEv, and HFEv-. Both classes have been used to construct signature schemes for the European cryptography project NESSIE, namely the MIA- variation in Sflash, the HFEv- variation in Quartz and the HFE- variation in the tweaked version Quartz-7m. Unbalanced Oil and Vinegar schemes and Stepwise Triangular Schemes are also important in practice. While the former is secure with the correct choice of parameters, the latter forms the basis of nested constructions like the enhanced TTM, Tractable Rational Maps, or Rainbow. An overview of all these systems can be found in the taxonomy article [WPc].

In this talk, we give an overview of the question of equivalent keys of MQ-schemes. At first glance, this question seems purely theoretical. But for practical applications, we need memory- and time-efficient instances of Multivariate Quadratic public key systems. One important point in this context is the overall size of the private key: in restricted environments such as smart cards, we want it to be as small as possible. Hence, if we can show that a given private key is only a representative of a much larger class of equivalent private keys, it makes sense to compute (and store) only a normal form of this key. Similarly, we should construct new Multivariate Quadratic schemes such that they do not have a large number of equivalent private keys but only a small number, preferably only one per equivalence class. This way, we make optimal use of the randomness in the private key space and neither waste computation time nor storage space without any security benefit.

All systems based on MQ-equations use a public key of the form

  p_i(x_1, ..., x_n) := Σ_{1≤j≤k≤n} γ_{i,j,k} x_j x_k + Σ_{j=1}^{n} β_{i,j} x_j + α_i,

with n ∈ Z^+ variables and m ∈ Z^+ equations. Moreover, we have 1 ≤ i ≤ m, 1 ≤ j ≤ k ≤ n and α_i, β_{i,j}, γ_{i,j,k} ∈ F (constant, linear, and quadratic terms). We write the set of all such systems of polynomials as MQ(F^n, F^m). The private key consists of the triple (S, P', T), where S : F^n → F^n and T : F^m → F^m are bijective affine transformations, and P' ∈ MQ(F^n, F^m) is a polynomial vector P' := (p'_1, ..., p'_m) with m components; each component is a polynomial in the n variables x_1, ..., x_n. Throughout this paper, we denote components of this private vector P' by a prime. In contrast to the public polynomial vector P ∈ MQ(F^n, F^m), the private polynomial vector P' allows an efficient computation of x_1, ..., x_n for given y_1, ..., y_m. Still, the goal of MQ-schemes is that this inversion should be hard if only the public key P is given. The main difference between MQ-schemes lies in the special construction of their central map P'.
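As a concrete illustration of the public key structure above, the following sketch evaluates a toy MQ system over GF(2). All coefficient values are arbitrary examples chosen for this illustration, not taken from any real scheme:

```python
def mq_eval(gamma, beta, alpha, x):
    """Evaluate p_i(x) = sum_{j<=k} gamma[i][j][k]*x_j*x_k
                       + sum_j beta[i][j]*x_j + alpha[i]   over GF(2)."""
    n, m = len(x), len(alpha)
    y = []
    for i in range(m):
        v = alpha[i]
        for j in range(n):
            v ^= beta[i][j] & x[j]
            for k in range(j, n):          # only j <= k, as in the formula
                v ^= gamma[i][j][k] & x[j] & x[k]
        y.append(v)
    return y

# One equation (m = 1) in two variables (n = 2):
# p(x) = x0*x1 + x1*x1 + x0 + 1 over GF(2)
gamma = [[[0, 1], [0, 1]]]   # gamma[0][0][1] = 1, gamma[0][1][1] = 1
beta  = [[1, 0]]
alpha = [1]
```

Evaluating the public vector P this way is cheap; the trapdoor (S, P', T) exists precisely because inverting P without it is assumed to be hard.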

76 Jointly Generating Random Keys for the Fully Distributed Environment
Sebastian Faust (K.U. Leuven, ESAT-COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium) and Stefan Lucks (Universität Mannheim, Lehrstuhl für Theoretische Informatik, Mannheim, Germany)

Abstract. In this paper we introduce a new efficient method to jointly generate and share k random secret keys for discrete-log-based cryptosystems in a fully distributed environment among a group of parties P = {P_1, ..., P_n}. We call such a scheme a k-joint random key generation (k-JRKG) protocol. Compared with the well-known technique of distributed key generation, where the shared key is not known by any single party, the intention of a JRKG protocol is slightly different: every random key is known and shared by exactly one party. Our protocol guarantees the randomness of the keys under the DDH assumption. In particular, this applies to the keys of the corrupted parties; hence, they have no chance to bias their keys towards a non-uniform distribution. Our protocol reduces the dominating factor of the communication complexity, the number of reliable broadcasts, by a factor of n compared with other approaches to this problem. The security of our protocol can be proven for fewer than n/2 corrupted parties under the DL assumption in the random oracle model.

1 Introduction

In the current literature, protocols for jointly generating random keys are frequently used as building blocks in various protocols designed for a fully distributed environment like the Internet [GJ04, GJKR96]. In particular, the well-analyzed technique of distributed key generation [GJKR99, Ped91] has a wide area of application and often makes it possible in the first place to formally prove a protocol's security. Unfortunately, most distributed key generation protocols suffer from high communication and computation complexity, limiting their usability in practice to small and static networks.
A first approach to overcome these drawbacks was presented by John Canny and Stephen Sorkin in 2004 [CS04]. Their idea relies on partitioning the set of parties P in a network to reduce the size of the broadcast groups. However, this technique is only probabilistic, i.e., it has a failure probability, needs a dealer to build up and manage the broadcast groups, and finally requires that honest and corrupted parties are randomly distributed in P. In our work we take a different stance. Rather than trying to develop an efficient, generally applicable solution for distributed key generation, we focus on specific variants of DKG to reduce their communication complexity. Such a specific variant is presented in this paper in the form of the k-JRKG protocol, an efficient solution for the generation of k random secrets, where each secret is known by only one party. This technique is applicable, for example, in the setup phase of the protocols described by Golle and Juels in [GJ04]. In particular, it decreases their communication costs compared to a trivial solution and altogether provides a more natural approach to fulfill the required properties. At first glance the generation of k ≥ 2 uniformly distributed secrets x_1, ..., x_k and corresponding public values y_i = g^{x_i}, where each x_i is known by only one party, seems to be an application

for common verifiable secret sharing (VSS) schemes. There, a dealer chooses a private key x and shares it in a verifiable manner with the participants in P according to the mechanisms of a threshold scheme. Although this method is very efficient, it cannot be used for our purposes, because corrupted dealers could choose their private keys non-randomly, contradicting the proof of security in [GJ04]. Therefore the authors propose to use the DKG protocol of Gennaro et al. (GJKR-DKG), which indeed guarantees that all keys are uniformly distributed, but unfortunately decreases the efficiency. In contrast, our protocol is more efficient in terms of communication complexity in that it reduces the number of broadcasts by a factor of n, but still guarantees the randomness of the keys. In particular, we are able to show that the randomness of all keys, including the keys of corrupted parties, is guaranteed under the DDH assumption. Furthermore, we prove the security of the protocol for fewer than n/2 corrupted parties under the DL assumption in the random oracle model.

2 The k-JRKG protocol

In general, the protocol is structured into two phases: an initial phase, which is executed only once at the beginning of the protocol, and a key generation (KG) phase, which has to be repeated for every needed key. In the initial phase the parties P = {P_1, ..., P_n} perform two instances of the GJKR-DKG scheme; in the KG phase they use ElGamal encryption and Feldman VSS. In the following let p, q be two large, odd primes with p = 2q + 1. Let g ∈ Z_p* be an element of order q, and let <g> = G ⊂ Z_p* denote the subgroup of quadratic residues in Z_p* generated by g, for which the DDH assumption holds. Finally, let F : Z_q → G be an efficiently invertible bijection.

I. Initialization (executed only once):

1. The parties in P execute an instance of the GJKR-DKG scheme to generate a secret key X ∈ Z_q and the corresponding public key Y = g^X mod p. This key pair will be used for ElGamal threshold encryption.

2. The parties in P execute a second instance of the GJKR-DKG scheme to generate a second shared secret X̂ ∈ Z_q and the value h = Y^X̂ mod p.

II. KG phase (k iterations):

1. Generate a random x = f(0) for the dealer P_D:

(a) Each party P_i ∈ P chooses r_i ∈_R Z_q at random and broadcasts the commitment commit_i = g^{r_i} h^{r_i} mod p. If a party P_j does not broadcast her commitment, she is disqualified. We denote by QUAL ⊆ P the set of non-disqualified parties.

(b) Each P_i ∈ QUAL chooses z_i ∈_R Z_q at random and computes the ElGamal encryption of F_i = F(z_i) ∈ G:

  C_i = (D_i, E_i) = (g^{r_i}, Y^{r_i} F_i).  (2)
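Equation (2) is a standard ElGamal encryption. The protocol later combines the ciphertexts C_i componentwise, exploiting ElGamal's multiplicative homomorphism; the following toy sketch uses tiny illustrative parameters (p = 23, q = 11, and an arbitrary secret key), not values from the protocol:

```python
# Tiny ElGamal demo of the multiplicative homomorphism used to combine the
# ciphertexts C_i. The parameters are TOY values for illustration only.
p, q, g = 23, 11, 2          # p = 2q + 1, g generates the order-q subgroup G
X = 3                        # secret key (shared via DKG in the protocol)
Y = pow(g, X, p)             # public key

def enc(m, r):
    return (pow(g, r, p), pow(Y, r, p) * m % p)   # C = (g^r, Y^r * m)

def dec(C):
    D, E = C
    return E * pow(pow(D, X, p), p - 2, p) % p    # m = E / D^X

C1, C2 = enc(4, 2), enc(9, 5)                     # 4 and 9 lie in G
C = (C1[0] * C2[0] % p, C1[1] * C2[1] % p)        # componentwise product
assert dec(C) == 4 * 9 % p                        # decrypts to m1 * m2
```

In the protocol this is exactly why the product ciphertext C encrypts the product of all F(z_j), without any party learning the individual z_j.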

We set H_i = commit_i / D_i = h^{r_i}. Besides, the following non-interactive zero-knowledge proof is generated:

  NIZK_i = PoK{ r_i : H_i = h^{r_i} ∧ D_i = g^{r_i} }.

P_i broadcasts C_i and NIZK_i in QUAL. Obviously each party can easily verify the correctness of the zk-proof; incorrect behavior leads to disqualification. By using the multiplicative homomorphic property of the ElGamal encryption, each party can now compute the ciphertext of F(z̃) := Π_{P_j ∈ QUAL} F(z_j), i.e.,

  C = ( Π_{P_j ∈ QUAL} g^{r_j}, Π_{P_j ∈ QUAL} Y^{r_j} F(z_j) ).  (3)

(c) P_D chooses a uniformly distributed value ã_0 ∈ Z_q and broadcasts Ã_0 = g^{ã_0} in QUAL.

(d) P_D chooses a set T of t + 1 parties to publish their decryption shares of C. Hence, each party in P can decrypt C and finally knows the unique value F_all = Decrypt(C). It follows that each party can easily compute the value z̃ = F^{-1}(F_all) by inverting F_all. Hence, P_i can calculate the unique verification value

  A_0 = Ã_0 g^{z̃} mod p = g^{ã_0 + z̃} mod p = g^{a_0} mod p.

In particular, only P_D knows a_0 = z̃ + ã_0.

2. Generate a polynomial to share the secret (Feldman VSS):

(a) P_D creates a random polynomial f̃(z) over Z_q of degree t: f̃(z) = a_1 z + ... + a_t z^t. P_D chooses the value a_0 generated in step II.1 as the constant coefficient. Hence, the polynomial used to compute the parties' shares is f(z) = a_0 + f̃(z) = a_0 + a_1 z + ... + a_t z^t. P_D broadcasts the following verification values in QUAL: A_k = g^{a_k}, for k = 1, ..., t. P_D computes s_i = f(i) mod q and sends it securely to P_i.

(b) Each party verifies the share she received by using the following equation:

  g^{s_i} = Π_{k=0}^{t} (A_k)^{i^k}.  (4)

If the check fails, P_i broadcasts a complaint against P_D.
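The verification equation (4) can be checked with a small numerical sketch. The parameters below (p = 23, q = 11, and the polynomial coefficients) are toy values chosen only to make the arithmetic visible:

```python
# Feldman VSS share verification: g^{s_i} == prod_k A_k^{i^k}  (mod p).
# TOY parameters; real deployments use large primes.
p, q, g = 23, 11, 2          # p = 2q + 1, g has order q in Z_p^*
coeffs = [5, 7]              # f(z) = 5 + 7z over Z_q, i.e. t = 1
A = [pow(g, a, p) for a in coeffs]          # broadcast verification values A_k

def share(i):
    """s_i = f(i) mod q, the share the dealer sends to party P_i."""
    return sum(a * i**k for k, a in enumerate(coeffs)) % q

def verify(i, s_i):
    """Check equation (4) for party P_i's share s_i."""
    lhs = pow(g, s_i, p)
    rhs = 1
    for k, A_k in enumerate(A):
        rhs = rhs * pow(A_k, i**k, p) % p
    return lhs == rhs

assert all(verify(i, share(i)) for i in range(1, 4))
assert not verify(1, (share(1) + 1) % q)    # a tampered share is rejected
```

The check works because g has order q, so exponent arithmetic mod q matches group arithmetic mod p; any inconsistent share fails the equation and triggers a complaint.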

(c) P_D can answer this accusation by publishing valid shares s_i which satisfy equation (4).

(d) The dealer P_D is disqualified by each honest party P_i ∈ QUAL if either P_i receives more than t complaints, or P_D wasn't able to answer with valid shares s_i in II.2(c).

The k-JRKG scheme is called t-secure if, in the presence of an attacker that corrupts at most t parties, the following requirements for correctness and secrecy are satisfied:

Definition 2.1 The k-JRKG protocol is t-correct if for all qualified dealers P_D with shared private key x and public key y = g^x the following conditions hold:

(C1) All subsets of t+1 shares provided by honest parties define the same unique secret x.
(C2) All honest parties can compute the dealer's unique public key y = g^x mod p.
(C3) x is uniformly distributed in Z_q, and hence y is uniformly distributed in the subgroup G.
(C4) Cheating leads to disqualification.

Theorem 2.1 For every polynomial-time bounded adversary which corrupts at most t < n/2 parties the following holds: if the DDH assumption is true, then for each secret x distributed in the KG phase the correctness properties of Definition 2.1 hold.

The following theorem states that k-JRKG is t-secure:

Theorem 2.2 For every polynomial-time bounded adversary which corrupts at most t < n/2 parties the following holds: if the DL assumption is true, then for each secret x distributed in the KG phase by an honest dealer P_D, no information on x can be learned by the adversary except for what is implied by the publicly known value y = g^x.

References

[CS04] Canny, J., Sorkin, S.: Practical Large-Scale Distributed Key Generation. Lecture Notes in Computer Science 3027 (2004).
[GJ04] Golle, P., Juels, A.: Dining Cryptographers Revisited. Lecture Notes in Computer Science 3027 (2004).
[GJKR96] Gennaro, R., Jarecki, S., Krawczyk, H., Rabin, T.: Robust Threshold DSS Signatures.
Lecture Notes in Computer Science 1070 (1996).
[GJKR99] Gennaro, R., Jarecki, S., Krawczyk, H., Rabin, T.: Secure Distributed Key Generation for Discrete-Log Based Cryptosystems. Lecture Notes in Computer Science 1592 (1999).
[Ped91] Pedersen, T.: A threshold cryptosystem without a trusted party. Lecture Notes in Computer Science 547 (1991).
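The Feldman-VSS share verification of equation (4) can be illustrated with a toy example (all parameters here are hypothetical and insecurely small; p = 2q + 1 and g generates the subgroup of order q):

```python
# Toy Feldman-VSS share verification: g^{s_i} == prod_k A_k^{i^k} (mod p).
# Parameters are illustrative only and far too small to be secure.
q = 11          # order of the subgroup
p = 23          # p = 2q + 1
g = 2           # 2^11 = 2048 = 1 (mod 23), so g generates the order-q subgroup

t = 2                    # threshold = degree of the polynomial
coeffs = [5, 7, 3]       # a_0, a_1, a_2 in Z_q; a_0 is the shared secret

def f(z):
    """Evaluate the sharing polynomial f(z) = a_0 + a_1 z + a_2 z^2 mod q."""
    return sum(a * pow(z, k, q) for k, a in enumerate(coeffs)) % q

# The dealer broadcasts the verification values A_k = g^{a_k} mod p.
A = [pow(g, a, p) for a in coeffs]

def verify(i, s_i):
    """Check equation (4) for party i's share s_i."""
    lhs = pow(g, s_i, p)
    rhs = 1
    for k, Ak in enumerate(A):
        rhs = (rhs * pow(Ak, i ** k, p)) % p
    return lhs == rhs
```

Verification succeeds exactly when the share is consistent with the broadcast values A_k, which is what lets an honest party justify a complaint against a cheating dealer.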

Privacy Friendly Location Based Service Protocols using Efficient Oblivious Transfer Markulf Kohlweiss and Bartek Gedrojc KU Leuven, Leuven, Belgium, and TU Delft, Delft, The Netherlands

Mobile devices add an additional dimension to context-based services: location. Bob, providing a location-based service (LBS), uses the location of Alice to answer her request, e.g., to find the next Italian restaurant. From a security standpoint the two main assets to be protected are Bob's database and the location of Alice. Cryptographically this problem corresponds to an oblivious transfer (OT) of Bob's location-specific data, where the index of the 1-out-of-n OT is the location σ of Alice, 1 ≤ σ ≤ n. By the properties of the OT, Alice learns only the information of the single map cell σ, while Bob remains oblivious of Alice's location. In our work we investigate the specific needs of privacy friendly LBSs, and we design solutions based on efficient OT that take them into consideration. For instance, service providers have an interest in reducing the costs of the OT through economies of scale. Adaptive OT, where the same database is queried repeatedly with little additional cost, provides a natural starting point. Similarly, the restricted capabilities of mobile users require a careful design of the system. In many of today's mobile networks there exists a dedicated party, the operator, that knows Alice's location. We investigate the role of this party as a proxy that inputs the user's location to the protocol and helps with the computation, but which otherwise remains oblivious of the protocol's result. Finally, we investigate the use of homomorphic encryption in order to support the access of multiple LBSs by the same user. This can be seen as a split oblivious transfer involving up to l location-based services simultaneously, each LBS handling a different database. Again the privacy sensitive information, i.e., which services Alice subscribed to, remains hidden from everyone else.
Moreover, the homomorphic property is utilized to facilitate the privacy preserving payment of the services.

Privacy friendly LBS. Privacy is an enormous topic [12]. It is a sociological phenomenon which has many legal and commercial implications. The different ways location is used by an LBS greatly influence Alice's privacy experience. Is she interacting only with the service or also with other users of the service? Is her location only used at the time of her request, or is she constantly tracked and notified upon certain events? Rather than covering all of these topics, we focus on a very specific sub-problem. Some of the techniques employed can, however, also be used for improving the privacy properties of other types of LBS protocols as surveyed in [11]. We do not consider solutions involving anonymity or service-side location-specific privacy policies. Moreover, we consider only solutions where no information at all about Alice's location is revealed to the LBS. The goal is to base the security of the system only on information theory and complexity-theoretic assumptions. After the execution of the protocol a malicious LBS (even if collaborating with the operator) cannot compute anything it could not have computed before. It is easy to see that in this setting a notification service that contacts the user only upon events is impossible: the knowledge that the event occurred would already reveal information about Alice's location to the LBS.

Figure 2: Adaptive OT based on Chaum blind signatures. The sender generates (d, e) ← Kg and computes C_i ← Enc(m_i; H(i)^d) for i = 1, ..., n; the chooser receives C_1, ..., C_n, sends the blinded value H(σ) · b^e, gets back H(σ)^d · b, and computes m_σ ← Dec(C_σ; H(σ)^d).

Oblivious transfer. We model a privacy friendly LBS as a database that maps every location i to some information m_i. The number of different locations is restricted to n. A location can for instance be the name of a region, or a cell of a certain size on a map. Now the provisioning of a service corresponds to the retrieval of m_σ for a hidden σ from a database m_1, ..., m_n. We also call σ the index into the database. The privacy requirements of the user imply the need for private information retrieval (PIR) [5]. Symmetric PIR (SPIR) is required if the LBS wants to avoid leakage of information about locations that have not been queried. It was shown that for the case where there is only one copy of the database there exists a communication-efficient reduction from any PIR protocol to a 1-out-of-n OT. Moreover, for the single-copy case SPIR corresponds to 1-out-of-n OT (OT^1_n) [7].

Oblivious transfer was first introduced by Rabin [18]. It captures the at first sight paradoxical notion of a protocol by which a sender sends some information to the receiver, but remains oblivious as to what is sent. The paradox is resolved by recognizing that it is the actions of the receiver and the sender together that determine the outcome of the protocol. Even [8] generalized it to 1-out-of-2 oblivious transfer (OT^1_2): the receiver determines which message out of two possible messages she is going to receive. In turn it was shown how to construct OT^1_n from n [2] and even log n [13] applications of OT^1_2. [15, 1, 10] provided direct constructions for OT^1_n based on the decisional Diffie-Hellman and quadratic residuosity assumptions.
Adaptive OT. For location-based services we are not so much interested in single executions of oblivious transfer, but want to query the same database multiple times at different indexes. This can be achieved by letting the sender commit to the database and running OT^1_n multiple times. However, this is not the most efficient solution. Moreover, the security requirements of such a system differ from those of normal oblivious transfer, as the protocol keeps internal state and queries can be chosen adaptively based on the results of previous queries. The first adaptive oblivious transfer protocol was proposed in [14]. Recently more efficient schemes were proposed by [16, 6]. [4] recognized that the last two schemes are based on a common principle: constructing adaptive oblivious transfer from unique blind signature schemes. We briefly sketch the basic idea of the scheme using an example based on Chaum blind signatures (cf. Fig. 2). First, all messages are symmetrically encrypted using the RSA signature of the index; H(·) is a full-domain cryptographic hash function. The encrypted database is transferred to Alice. When Alice wants to obtain the information for location σ, she runs a Chaum blind signature protocol with the sender to obtain the key.
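The retrieval step of this blind-signature-based adaptive OT can be sketched as follows (a toy illustration only: textbook RSA with hypothetical small primes, a hash-derived pad standing in for Enc/Dec, and short equal-length database entries; not a secure instantiation):

```python
import hashlib
import secrets
from math import gcd

# --- Toy textbook-RSA parameters (hypothetical, NOT secure) ---
p_, q_ = 1009, 1013
n = p_ * q_               # RSA modulus
e = 17                    # public (blinding/verification) exponent
phi = (p_ - 1) * (q_ - 1)
d = pow(e, -1, phi)       # secret signing exponent

def H(i):
    """Hash an index into Z_n (toy stand-in for a full-domain hash)."""
    h = int.from_bytes(hashlib.sha256(str(i).encode()).digest(), "big")
    return h % n or 1

def enc(msg, key_int):
    """XOR a short message with a hash-derived pad (illustration only)."""
    pad = hashlib.sha256(str(key_int).encode()).digest()
    return bytes(a ^ b for a, b in zip(msg, pad))

dec = enc  # the XOR pad is its own inverse

# Sender: encrypt every database entry under the RSA signature H(i)^d
db = [b"pizza", b"sushi", b"tapas"]
C = [enc(m, pow(H(i + 1), d, n)) for i, m in enumerate(db)]

# Chooser wants entry sigma; she blinds H(sigma) with a random b^e
sigma = 2
while True:
    b = secrets.randbelow(n - 2) + 2
    if gcd(b, n) == 1:
        break
blinded = (H(sigma) * pow(b, e, n)) % n

# Sender signs the blinded value -- without learning sigma
signed = pow(blinded, d, n)          # = H(sigma)^d * b  (mod n)

# Chooser unblinds and decrypts her single entry
key = (signed * pow(b, -1, n)) % n   # = H(sigma)^d
m_sigma = dec(C[sigma - 1], key)
```

Because the sender only ever sees the blinded value, repeated queries at adaptively chosen indices cost one blind-signature round each, while the encrypted database is transferred once.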

Dynamic OT. For practicality reasons we are also interested in dynamic databases that can shrink and grow during the execution of the adaptive oblivious transfer. This allows us to update parts of the database. For an update, a new message is added to the database; instead of accessing the old index, the user now has to access the new index. We require an additional table that maps locations to their current indices. This update procedure reveals information about the database, as Alice learns which entries have changed. It is an open research question whether we can do updates which don't reveal any information but are still substantially more efficient than running the whole protocol with a new database.

Increasing the size of the database is straightforward. The sender simply computes a new ciphertext C_{n+1} = Enc(m_{n+1}; H(n+1)^d) and transfers it to the receiver. The receiver can now also ask for blind signatures on n+1 and decrypt C_{n+1}. Deletion is more complicated. Our approach is to let the receiver prove that the requested σ is in the set of still valid indices V, e.g., by using a dynamic accumulator [3]. Together with every C_i the sender now sends a witness w_i = v^{1/p_i}, with p_i a prime. Before obtaining the signature the receiver now needs to prove that she knows a valid witness that corresponds to the blind signature request for index σ. For efficient protocols, it is then no longer possible to use a full-domain-hash RSA signature.

Proxy OT and multi-database extensions. For today's mobile networks it is natural to assume a third party, the operator, that knows Alice's location and can help her in executing the OT despite limited device capabilities. This party executes most of the receiver's part of the OT protocol, but only Alice obtains the final result that allows her to decrypt C_σ. We call the third party a proxy and the new protocol a proxy OT protocol.
In location-based services, not only Alice's location but also the type of service she is accessing is privacy sensitive information, which we do not want to reveal to the operator, or even to the service itself. Thus a solution to this problem is of particular importance for LBSs which use proxy OT, but it may also be of interest as an independent primitive. The selection of up to k services can be interpreted as an additional k-out-of-l OT, which is run independently but interleaved with the proxy OT. Only Alice knows which of the l services she is accessing. Additional extensions are needed to facilitate payment for such hidden service usage. Preliminary ideas for a comprehensive solution are based on the use of homomorphic encryption in PIR [17], payment [1], and voting schemes [9].

References

[1] William Aiello, Yuval Ishai, and Omer Reingold. Priced oblivious transfer: How to sell digital goods. In Birgit Pfitzmann, editor, EUROCRYPT 2001, volume 2045 of LNCS, Innsbruck, Austria, May 6-10, 2001. Springer-Verlag, Berlin, Germany.

[2] Gilles Brassard, Claude Crépeau, and Jean-Marc Robert. All-or-nothing disclosure of secrets. In Andrew M. Odlyzko, editor, CRYPTO 86, volume 263 of LNCS, Santa Barbara, CA, USA, August 1986. Springer-Verlag, Berlin, Germany.

[3] Jan Camenisch and Anna Lysyanskaya. Dynamic accumulators and application to efficient revocation of anonymous credentials. In Moti Yung, editor, CRYPTO, volume 2442 of Lecture Notes in Computer Science. Springer.

Google Reveals Cryptographic Secrets Emin Islam Tatlı Department of Computer Science, University of Mannheim

Google hacking is a term describing search queries that uncover security and privacy flaws. Finding vulnerable servers and web applications, server fingerprinting, accessing admin and user login pages, and revealing usernames and passwords are all possible with Google in a single click. Google can also reveal the secrets of cryptographic applications, e.g., cleartext and hashed passwords, secret and private keys, encrypted messages, signed messages, etc. In this paper, advanced search techniques in Google and the search queries that reveal cryptographic secrets are explained in detail with examples.

1 Motivation

Having an index with over 25 billion entries, Google is the most popular web search engine. It indexes any information from web servers thanks to its hardworking web crawlers. But much sensitive data that should be kept secret and confidential is indexed by Google, too. Vulnerable servers and web applications, usernames and passwords for login sites, admin interfaces of database servers, online devices like web cameras without any access control, reports of security scanners and much more private information are available to hackers via Google. This paper focuses on the advanced search queries that enable users to search for different cryptographic values which are expected to stay private and safe. The paper is organized as follows: Section 2 summarizes the useful parameters for advanced search in Google. In Section 3, examples of search queries for each type of cryptographic secret are illustrated. Finally, Section 4 explains possible security measures against Google hacking.

2 Advanced Parameters

Google supports many parameters for advanced search and filters its results according to the parameters given by the user. The [all]inurl parameter is used to filter the results according to whether the URL contains a certain keyword or not.
If more keywords are needed, the allinurl parameter should be used. [all]intitle filters the results according to the title of web pages. [all]intext searches for keywords in the body of web pages. With the parameter site you can do host-specific searches. The filetype and ext parameters have the same functionality and are needed to filter the results based on file extensions like html, php, asp, etc. The minus sign (-) can be put before any advanced parameter and reverses its behavior. As an example, a search containing the parameter -site:www.example.com will not list results from www.example.com. The sign "|" stands for the logical OR operation.

3 Google Search for Cryptographic Values

From the cryptographic perspective, Google also reveals cryptographic secrets. Google can find hashed passwords, secret keys, public and private keys, and encrypted and signed files. All you need to do is enter the relevant search terms as explained in the following sections and click the search button.

3.1 Hashed Passwords

Database structures and contents can be backed up in dump files. The following queries search for SQL clauses that may contain usernames and passwords in cleartext or as hash values within dump files. Hash- and encryption-related keywords can also be searched within the files.

"create table" "insert into" "pass|passwd|password" (ext:sql | ext:dump | ext:dmp)

intext:"password|pass|passwd" intext:"md5|sha1|crypt" (ext:sql | ext:dump | ext:dmp)

3.2 Secret Keys

Since secret keys are mostly generated as session keys and destroyed after the session is closed, they are not stored on disks permanently. But there are still some applications that need to store secret keys, e.g., Kerberos [9] shares a secret key with each registered principal for authentication purposes. The following query lists the configuration files of a key distribution center (KDC) in Kerberos. Within the configuration files, the path of the principal databases which contain principal ids and their secret keys is specified.

inurl:"kdc.conf" ext:conf

To find dumped Kerberos principal databases:

inurl:"slave datatrans" OR inurl:"from master"

Java provides a tool named keytool to create and manage secret keys in keystores. The extension of such keystores is ks. The following query searches for Java keystores that may contain secret keys. Note that keytool can also manage private keys and certificate chains.

keystore ext:ks

3.3 Public Keys

Public keys, as the name implies, are public information and not secret. But for the sake of completeness, the search queries that list public keys are also given in this section.
To list PGP public key files:

"BEGIN PGP PUBLIC KEY BLOCK" (ext:txt | ext:asc | ext:key)

To list public keys in certificate files:

"Certificate:Data:Version" "BEGIN CERTIFICATE" (ext:crt | ext:asc | ext:txt)

3.4 Private Keys

Private keys should be kept secret for personal use, but the following search queries show that people do not care about this and make them publicly accessible.

"BEGIN (DSA|RSA)" ext:key

"BEGIN PGP PRIVATE KEY BLOCK" (inurl:txt | inurl:asc)

GnuPG [5] encodes the private key in secring.gpg. The following search reveals secring.gpg files:

"index of" "secring.gpg"

3.5 Encrypted Files

For confidentiality, cryptography provides encryption of data. By encrypting, one can store sensitive files and e-mails securely on local storage devices. The following queries search for encrypted files and e-mails. Of course, you need to know the relevant keys to decrypt them, but as shown in the previous examples, it is also possible to find secret and private keys. Besides, other cryptanalysis techniques can help to decrypt the encrypted files. Files that are encrypted with GnuPG get the extension gpg for binary encoding and the extension asc for ASCII encoding. The first query below searches for files with the gpg extension and tries to eliminate signed and public key files from the results. The second query lists ASCII-encoded encrypted files; but note that signed files have the same pattern and can also be returned by the second query:

-"public | pubring | pubkey | signature | pgp | and | or | release" ext:gpg

"BEGIN PGP MESSAGE" ext:asc

Many encryption applications use the extension enc for encrypted files. There are some exceptions like the AxCrypt File Encryption Software [6], which uses the extension axx for encrypted files:

-intext:"and" (ext:enc | ext:axx)

In XML Security, the encrypted parts of messages are encoded under the CipherValue element:

"ciphervalue" ext:xml

3.6 Signed Messages

Digital signatures provide integrity, authenticity and non-repudiation in cryptography. The following searches list signed messages, signed e-mails and file signatures.

To list PGP signed messages (e-mails excluded):

"BEGIN PGP SIGNED MESSAGE" -"From" (ext:txt | ext:asc | ext:xml)

To list signed e-mails:

"BEGIN PGP SIGNED MESSAGE" "From" "Date" "Subject" (ext:eml | ext:txt | ext:asc)

To list file signatures:

-"and | or" "BEGIN PGP SIGNATURE" ext:asc

4 Countermeasures

Google hacking can be very harmful and therefore the required security measures should be taken against it. One method is using automatic scan tools [2, 3, 4] that search for possible Google hacks against a given host. You can use these tools to search for the existing flaws and risks in your system. The tools mostly rely on the Google Hacking Database [1] for their scans. Another solution is the integration of robots.txt (robots exclusion standard) [7] files in your system. Web crawlers (hopefully) respect the directives specified in robots.txt. This way, you can prevent the crawlers from indexing your sensitive files and directories. The last and most advanced suggestion is installing and managing Google honeypots [8] in your system and trying to figure out the behaviour of attackers before they deal with your real system.

References

[1] Google Hacking Database.
[2] GooLink - Google Hacking Scanner.
[3] SiteDigger v2.0 - Information Gathering Tool.
[4] Johnny Long. Gooscan: Google Security Scanner.
[5] The GNU Privacy Guard.
[6] AxCrypt File Encryption Software for Windows.
[7] Robots Exclusion Standard.
[8] Google Hack Honeypot Project.
[9] Kerberos: The Network Authentication Protocol.

The eSTREAM Project Erik Zenner Cryptico A/S

eSTREAM is an EU-funded project on stream cipher cryptography. After its predecessor NESSIE was unable to recommend a secure stream cipher for public use, the need for additional research in the area was recognized, and the eSTREAM project was born. Its purpose is to improve the understanding of stream cipher security and to identify a portfolio of ciphers that are both secure and resource-effective. In May 2005, the surprisingly large number of 34 stream cipher candidates was submitted to the project. After one year of public evaluation (and the submission of 128 academic papers to the project), the organisers announced a first selection in March. Currently, the project is in evaluation phase 2, where the cryptographic community is again requested to continue the demolition derby and help in identifying the most suitable cipher candidates. In this talk, we will give an overview of the eSTREAM project. We will point out new trends in stream cipher cryptography and discuss some of the most interesting cipher candidates. Finally, we will go into some of the more controversial selection criteria, like the comparison of security features or of resource consumption.

The SMS4 Block Cipher Ralf-Philipp Weinmann Technische Universität Darmstadt

The WLAN Authentication and Privacy Infrastructure (WAPI) standard is a Chinese national standard for securing wireless LANs. It is an alternative to IEEE 802.11i which has become mandatory in China. Originally the block cipher SMS4 that is exclusively used in WAPI was kept secret; however, due to the non-acceptance of WAPI by the ISO standards organization, the Chinese government published the block cipher in January 2006 [1]. SMS4 is a 32-round unbalanced Feistel network [2] with a block and key size of 128 bits. It is source heavy, complete and homogeneous and uses a single 8-bit S-box that has good differential and linear properties. The specification of the cipher is simple and clean: a C implementation of the cipher was finished in less than an hour. In this talk we will demonstrate that the design of SMS4 is somewhat brittle: a small change in the key schedule (different key constants) yields a variant that exhibits a large class of weak keys. Furthermore we will show differential and linear attacks against reduced-round versions of this cipher.

References

[1] Specification of SMS4 (in Chinese).

[2] B. Schneier and J. Kelsey. Unbalanced Feistel Networks and Block Cipher Design. Fast Software Encryption 1996, Third International Workshop Proceedings (February 1996), Springer-Verlag, 1996.
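The source-heavy unbalanced Feistel structure described above can be sketched generically (the round function T below is a hypothetical toy stand-in, not the real SMS4 S-box, linear layer or key schedule):

```python
# Generic 32-round source-heavy unbalanced Feistel network on four 32-bit
# words, structurally like SMS4. The round function T and the round keys
# here are illustrative placeholders only.
MASK = 0xFFFFFFFF

def rotl(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK

def T(x):
    """Toy round function (NOT the SMS4 S-box/linear layer)."""
    return rotl(x * 0x9E3779B9 & MASK, 13) ^ rotl(x, 7)

def encrypt(block, round_keys):
    x = list(block)                     # [X0, X1, X2, X3]
    for rk in round_keys:               # 32 source-heavy rounds:
        new = x[0] ^ T(x[1] ^ x[2] ^ x[3] ^ rk)
        x = [x[1], x[2], x[3], new]     # shift and append the new word
    return tuple(reversed(x))           # SMS4-style final word reversal

def decrypt(block, round_keys):
    # Thanks to the final reversal, decryption is the same network run
    # with the round keys in reverse order.
    return encrypt(block, list(reversed(round_keys)))

# Hypothetical round keys for a quick round-trip check
rks = [(0x9E3779B9 * (i + 1)) & MASK for i in range(32)]
pt = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
assert decrypt(encrypt(pt, rks), rks) == pt
```

The final reversal of the four output words is the design trick that lets encryption and decryption share one circuit, which also matters for a compact hardware implementation.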

On the Application of Merkle's Puzzle for Telemedicine and M-Health Frederik Armknecht and Dirk Westhoff NEC Europe, Network Laboratories, Heidelberg, Germany

Cryptography for networks with low-end devices is usually designed to meet the capabilities of the weakest party. Here we are aiming at low-priced, unprotected hardware with extremely limited computing and storage capabilities, which makes modular arithmetic with large numbers unsuitable or even impossible. However, in several cases the network has an asymmetric topology with a fully functional, powerful device and an extremely limited device with only reduced functionality. This is for example true in telemedicine or, more concretely, in an m-health scenario where the patient's biosensors form a network with a more powerful control node. We argue that in these cases the well-known Merkle's Puzzle [1] has its practical application, and we provide concrete parameter settings for specific device characteristics. As opposed to other mechanisms, it takes advantage of the asymmetric topology by shifting most of the workload toward the more powerful device. The proposed solution has particular value in scenarios where security associations are required for a relatively short but well-defined duration. Furthermore, no preinstalled secrets are required, making (often expensive) tamper resistance superfluous.

References

[1] R. C. Merkle. Secure communications over insecure channels. Communications of the ACM 21(4), pp. 294-299, ACM Press, 1978.
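The asymmetric workload of Merkle's Puzzle can be sketched as follows (a toy parameterization for illustration; the marker string, key sizes and puzzle count are hypothetical and not the parameter settings proposed in the talk):

```python
import hashlib
import secrets

# Toy Merkle puzzles: the powerful node creates N puzzles, each hiding a
# (puzzle id, session key) pair under a key from a small, brute-forceable
# space. The weak device solves ONE puzzle; an eavesdropper expects to
# have to solve about N/2 of them.
N = 256          # number of puzzles (borne by the powerful node)
WEAK_BITS = 12   # per-puzzle keyspace: at most 2^12 trial decryptions

def prf(key_int, salt):
    return hashlib.sha256(salt + key_int.to_bytes(4, "big")).digest()

def make_puzzles():
    table = {}      # puzzle id -> session key (kept by the powerful node)
    puzzles = []
    for pid in range(N):
        session_key = secrets.token_bytes(16)
        weak = secrets.randbelow(2 ** WEAK_BITS)
        salt = secrets.token_bytes(8)
        pad = prf(weak, salt) + prf(weak, salt + b"x")
        plain = b"PUZZLE" + pid.to_bytes(2, "big") + session_key  # 24 bytes
        ct = bytes(a ^ b for a, b in zip(plain, pad))
        puzzles.append((salt, ct))
        table[pid] = session_key
    return table, puzzles

def solve_one(puzzles):
    """The weak device picks a random puzzle and brute-forces its key."""
    salt, ct = puzzles[secrets.randbelow(N)]
    for weak in range(2 ** WEAK_BITS):   # the device's only heavy loop
        pad = prf(weak, salt) + prf(weak, salt + b"x")
        plain = bytes(a ^ b for a, b in zip(ct, pad))
        if plain.startswith(b"PUZZLE"):
            pid = int.from_bytes(plain[6:8], "big")
            return pid, plain[8:24]
    raise RuntimeError("no puzzle solved")

table, puzzles = make_puzzles()
pid, key = solve_one(puzzles)           # sensor announces pid in the clear
assert table[pid] == key                # both sides now share a session key
```

Generating the puzzles costs the control node on the order of N operations, while the biosensor performs at most 2^WEAK_BITS symmetric operations, which is exactly the asymmetry exploited here.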

Dissecting Apple's FileVault Ralf-Philipp Weinmann Technische Universität Darmstadt

FileVault [1] is a security feature of Mac OS X that allows users to encrypt their home directories using their login passwords. Although Apple claims that security is achieved by encrypting the entire contents using the Advanced Encryption Standard with 128-bit keys, publicly available technical information beyond that statement is scarce: the only other thing known is that encrypted volumes (so-called disk images) are employed. Unfortunately the source code for this part of the operating system (the DiskImages framework) is not available for inspection. This makes accessing the contents of such encrypted volumes from other operating systems such as Linux or *BSD impossible at the moment. This talk will present work in progress by the author in reverse-engineering the programs surrounding the Apple FileVault technology, with the aim of creating a compatible driver for Linux. Being a cryptographer, the author also likes to second-guess the designers' choices; however, no glaring holes in the design have been discovered yet. Blocks are encrypted in CBC mode; the IV for each block is computed using an HMAC-SHA1 variant. This raises an obvious question: why has the cryptographic design of FileVault not been opened up for peer review?

References

[1] Apple Mac OS X FileVault.

Automated derivation of public key authenticity using a formal model of certificate based security infrastructures Thomas Wölfl University of Regensburg

The authenticity of public keys is a well-known prerequisite for the applicability of asymmetric cryptography. Formal models of public key infrastructures (PKI) provide a theoretical foundation for the validation of public key certificates with the goal of deriving the required key authenticity. A significant example is Maurer's PKI model [2]. However, existing models neglect validity periods and revocations of public key certificates. This work presents a formal model [1] which covers these temporal aspects. It allows the derivation of public key authenticity for a certain point in time. Additionally, it enables the authentication of attributes other than public keys, such as biometric reference templates, access privileges or liability commitments. The core of the model consists of eight axioms formulated in first-order logic. It takes the perspective of the user Alice, who represents her knowledge about digital certificates by means of two logical statements. After this, Alice verifies whether public key (or attribute) authenticity is a logical consequence of the completion [3] of her knowledge and the model's axioms. Because of the complexity of the modeled scenarios, a manual derivation can be long and time consuming. Therefore, a PROLOG program is presented as an automated derivation method. It can be used in an interactive way (using e.g. SWI-PROLOG [4]) or it can be encapsulated in software and serve as a module for the decision about public key (or attribute) authenticity. The latter has the advantage that soundness, completeness and termination of the logic program are formally shown, which are crucial aspects for a software system or a cryptographic algorithm relying on the decision about authenticity. Another task of the program is the detection of revocation cycles.
Consider the following paradoxical situation: a revocation r for a digital certificate c exists, while c is used for the authentication or the authorization of the revocation r. It cannot be decided whether c or r is valid. For example, the validity of r implies the invalidity of c, which in turn implies the invalidity of r. The logic program detects these revocation cycles and delivers a warning. All in all, the formal model and the logic program can be used for the authentication of public keys and other attributes. This allows the reliable application of asymmetric cryptographic methods.

References

[1] T. Wölfl. Formale Modellierung von Authentifizierungs- und Autorisierungsinfrastrukturen. Deutscher Universitäts-Verlag.

[2] U. Maurer. Modelling a public-key infrastructure. In: E. Bertino (Ed.), Proceedings of the 1996 European Symposium on Research in Computer Security (ESORICS 96), Lecture Notes in Computer Science, Springer, 1996.

[3] U. Nilsson, J. Maluszynski. Logic, Programming and PROLOG, 2nd Edition, John Wiley and Sons.

[4] SWI-PROLOG.
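Revocation-cycle detection of this kind can be sketched as a plain cycle search in a dependency graph (a simplified stand-in for the PROLOG program; the edge encoding "u depends on the validity of v" is a hypothetical simplification):

```python
# Detect revocation cycles as directed cycles in a dependency graph.
# An edge u -> v means "the validity of u depends on the validity of v",
# e.g. revocation r depends on the certificate c that authorizes it,
# while c's validity in turn depends on r.

def find_cycle(deps):
    """deps: dict node -> list of nodes it depends on.
    Returns one cycle as a node list, or None if the graph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {u: WHITE for u in deps}
    stack = []

    def dfs(u):
        color[u] = GREY
        stack.append(u)
        for v in deps.get(u, ()):
            if color.get(v, WHITE) == GREY:          # back edge: cycle
                return stack[stack.index(v):] + [v]
            if color.get(v, WHITE) == WHITE:
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        stack.pop()
        return None

    for u in list(deps):
        if color[u] == WHITE:
            found = dfs(u)
            if found:
                return found
    return None

# The paradoxical situation from the text: r depends on c, c depends on r.
cycle = find_cycle({"r": ["c"], "c": ["r"]})  # -> ['r', 'c', 'r']
```

On finding such a cycle, a validator can emit a warning instead of attempting a derivation that cannot terminate in a consistent answer.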

A Cryptographic Model for Branching Time Security Properties: the Case of Contract Signing Protocols Ralf Küsters ETH Zürich

Some cryptographic tasks, such as contract signing and other related tasks, need to ensure complex, branching time security properties. When defining such properties one needs to deal with subtle problems regarding the scheduling of non-deterministic decisions, the delivery of messages sent on resilient (non-adversarially controlled) channels, fair executions (executions where no party, whether honest or dishonest, is unreasonably precluded from performing its actions), and defining strategies of adversaries against all possible non-deterministic choices of parties and arbitrary delivery of messages via resilient channels. These problems are typically not, or not all, addressed in cryptographic models, and these models therefore do not suffice to formalize branching time properties such as those required of contract signing protocols. In this talk, a cryptographic model that deals with all of the above problems is proposed. One central feature of this model is a general definition of fair scheduling which not only formalizes fair scheduling of resilient channels but also fair scheduling of actions of honest and dishonest principals. Based on this model and the notion of fair scheduling, a definition of a prominent branching time property of contract signing protocols, namely balance, is provided, along with the first cryptographic proof that the Asokan-Shoup-Waidner two-party contract signing protocol is balanced. The cryptographic models and notions proposed here provide a basis for relating cryptographic and formal definitions of branching time security properties.

Joint work with Véronique Cortier and Bogdan Warinschi.

Arithmetic operators for pairing-based cryptography Jérémie Detrey Computer Security group, B-IT, Bonn, Germany

Since their introduction in constructive cryptographic applications, pairings over (hyper)elliptic curves are at the heart of an ever increasing number of protocols. Software implementations being rather slow, the study of hardware architectures has become an active research area. In this talk, I will first describe an accelerator for the η_T pairing over F_3[x]/(x^97 + x^12 + 2). Our architecture is based on a unified arithmetic operator which performs addition, multiplication, and cubing over F_{3^97}. This design methodology allows us to design a compact coprocessor (1888 slices on a Virtex-II Pro 4 FPGA) which compares favorably with other solutions described in the open literature.

References

[1] Jean-Luc Beuchat, Nicolas Brisebarre, Jérémie Detrey, and Eiji Okamoto. Arithmetic operators for pairing-based cryptography. In P. Paillier and I. Verbauwhede, editors, 9th International Workshop on Cryptographic Hardware and Embedded Systems (CHES 2007), Vienna, Austria, September 2007. Springer-Verlag.
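The underlying field arithmetic can be sketched in software with coefficient lists over Z_3 (a minimal sketch, assuming the reduction trinomial x^97 + x^12 + 2; the hardware operator of course works very differently):

```python
# Sketch of arithmetic in F_3[x]/(x^97 + x^12 + 2) with coefficient lists
# over Z_3. Cubing uses the characteristic-3 Frobenius identity:
# (sum a_i x^i)^3 = sum a_i x^{3i}, since a^3 = a for a in F_3.
M = 97

def reduce_mod(c):
    """Reduce a coefficient list modulo x^97 + x^12 + 2,
    i.e. using x^97 = 2x^12 + 1 (coefficients mod 3)."""
    c = c[:] + [0] * max(0, M - len(c))
    for deg in range(len(c) - 1, M - 1, -1):
        a = c[deg]
        if a:
            c[deg] = 0
            c[deg - M + 12] = (c[deg - M + 12] + 2 * a) % 3
            c[deg - M] = (c[deg - M] + a) % 3
    return c[:M]

def add(a, b):
    return [(x + y) % 3 for x, y in zip(a, b)]

def mul(a, b):
    """Schoolbook multiplication followed by trinomial reduction."""
    prod = [0] * (2 * M)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % 3
    return reduce_mod(prod)

def cube_frobenius(a):
    """Cubing via the Frobenius map: spread coefficients to x^{3i}."""
    spread = [0] * (3 * M)
    for i, ai in enumerate(a):
        spread[3 * i] = ai
    return reduce_mod(spread)

# Sanity check: Frobenius cubing agrees with generic multiplication.
a = [1, 2] + [0] * (M - 2)        # the element 1 + 2x
assert cube_frobenius(a) == mul(mul(a, a), a)
```

The cheapness of cubing, essentially a coefficient permutation plus one reduction, is one reason a unified add/multiply/cube operator is attractive for characteristic-3 pairing hardware.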

Tracking Dog - A Privacy Tool against Google Hacking Martin Keßler, Stefan Lucks and Emin Islam Tatlı Bauhaus-Universität Weimar, Fakultät Medien Bauhausstraße 11, Weimar {martin.kessler, stefan.lucks,

Protection of personal data is a privacy right from both an ethical and a legislative perspective. Internet users require safeguards for their privacy against misuse and exploits. On the other hand, internet search engines, and especially the most popular one, Google, threaten user privacy. Google hacking is a general term describing how Google can be used to find vulnerable servers, files and web applications, unauthenticated programs, various online devices, etc. [GHDB]. As threats to privacy, Google indexes and reveals sensitive and confidential user data, including names, addresses, CVs, files containing usernames and passwords, confidential e-mails and forum postings, private directories, chat log files, secret and private keys, etc., to unauthorized persons [Tatli06, Tatli07]. As a countermeasure, users should be equipped with privacy enhancing tools to protect their privacy. In this talk, we present Tracking Dog [Kessler07], a penetration testing tool for searching for cryptographic secrets and personal private data for a given host and/or a given person name. The tool helps individuals to detect whether any of their confidential data has become public on the internet via Google. Tracking Dog supports both English and German language-specific queries and enables users to edit raw search queries.

References

[Tatli06] Emin Islam Tatlı. Google Reveals Cryptographic Secrets. 1. Kryptowochenende, Kloster Bronnbach, July 2006.

[Tatli07] Emin Islam Tatlı. Google Hacking Against Privacy. 3. International IFIP Privacy Summer School, Karlstad, Sweden, August 2007.

[Kessler07] Martin Keßler. Bachelorarbeit: Tracking Dog - Implementation of a penetration testing tool for searching cryptographic secrets and personal secrets with Google.
Bauhaus Universität Weimar, Fakultät Medien, Oktober [Ghdb] Google Hacking Database. 1 8

130 Visual Mutual Authentication - an approach to secure online banking Denise Doberitz and Sebastian Gajek Horst Görtz Institut für IT-Sicherheit Ruhr-Universität Bochum Today, most applications are only as secure as their underlying system. Since the design and technology of malware have improved steadily, their detection is a difficult problem. As a result, it is nearly impossible to be sure whether a computer that is connected to the internet can be considered trustworthy and secure. The question is how to handle applications that require a high level of security, such as online banking. Consequently, we are interested in a solution that allows us to establish a secure and trusted communication even though the underlying system is untrusted. In this paper we present an approach based on Visual Cryptography. In [NaorS94], Naor and Shamir presented the concept of Visual Cryptography, which treats the plaintext to be encrypted as a graphic that is processed pixel by pixel and thus has interesting characteristics concerning security and fault tolerance. In [NaorP97], Naor and Pinkas demonstrated how to use Visual Cryptography for the authentication of one party without trusting the underlying system. Based on this, we develop a scheme for mutual visual authentication of a user and a server, which allows us to establish a trusted channel between the two parties and can be used for secure communication. We describe the Visual Mutual Authentication scheme and demonstrate its application as a protocol using the example of online banking. The protocol provides a practical and user-friendly approach to secure online banking that is not only resilient to malware but also resistant to phishing. Using Visual Cryptography, the user's key is a transparency that has to be placed on the computer monitor. With the transparency, the decrypted message can simply be read off the monitor by the user. We transfer this application to a TAN scheme and integrate a one-time-pad structure. As a result, we achieve a user-friendly scheme with a high level of secrecy.
References
[NaorS94] Moni Naor and Adi Shamir. Visual Cryptography. EUROCRYPT 94, Volume 950 of Lecture Notes in Computer Science, pp. Springer-Verlag,
[NaorP97] Moni Naor and Benny Pinkas. Visual Authentication and Identification. CRYPTO 97, Volume 1294 of Lecture Notes in Computer Science, pp. Springer-Verlag,
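The pixel-wise encoding underlying Naor and Shamir's scheme can be sketched as follows. The 2-out-of-2 subpixel construction shown here is the textbook variant, not necessarily the exact encoding used in the proposed TAN scheme:

```python
import secrets

def split_pixel(bit):
    """Naor-Shamir 2-out-of-2 visual secret sharing for one pixel.
    bit = 0 (white) or 1 (black); returns two subpixel pairs, one per share.
    Each share alone is a uniformly random pattern and leaks nothing."""
    a = secrets.randbelow(2)
    share1 = (a, 1 - a)          # random subpixel pattern
    if bit == 0:
        share2 = share1          # white pixel: identical patterns
    else:
        share2 = (1 - a, a)      # black pixel: complementary patterns
    return share1, share2

def stack(p1, p2):
    """Overlaying transparencies: a subpixel is dark if it is dark
    in either share (light cannot be added back, only blocked)."""
    return tuple(x | y for x, y in zip(p1, p2))

image = [0, 1, 1, 0, 1]
for bit in image:
    s1, s2 = split_pixel(bit)
    dark = sum(stack(s1, s2))    # dark subpixels after stacking
    # white pixels stack to one dark subpixel, black pixels to two,
    # so the human eye can decode the contrast without any computation
    assert dark == 1 + bit
```

Decryption is purely optical, which is why the user's transparency can serve as a key even on a fully compromised computer.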

141 Practical Secure Function Evaluation Vladimir Kolesnikov (Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ, USA), Thomas Schneider and Volker Strehl (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany) Since the first publication by Yao [Yao86], Secure Function Evaluation (SFE) has been a well-researched problem. Continuing advances in available computational power and communication have made secure computation of many useful functions affordable. Recent work like Fairplay [MNPS04] demonstrates the practicability of general SFE. This thesis focuses on several practical aspects of SFE. Our new, improved SFE protocol allows free evaluation of XOR gates and is provably secure against semi-honest adversaries in the random oracle model - the same assumptions that Fairplay relies on. The protocol merges elements of the information-theoretic SFE protocol GESS [Kol05] with Fairplay. This results in substantial performance improvements of 50% for many important circuit structures such as addition or number comparison. SFE is extended to allow the evaluated function to be secret and known by only one party, called SFE of private functions (PF-SFE). These settings occur naturally in applications like no-fly-list, credit-report, or medical-history checking. It is known that PF-SFE can easily be reduced to SFE of universal circuits (UC). We give a practical UC construction [KS08] that is up to 50% smaller than the best UC of Valiant [Val76] when used in today's PF-SFE. FairplayPF was implemented as an extension of Fairplay to demonstrate the practicability of PF-SFE based on the new UC construction. Using the improved SFE protocol, UC-based PF-SFE can be improved by another factor of 4. Besides these circuit-based approaches to SFE and PF-SFE, new protocols for SFE and PF-SFE of functions represented as Ordered Binary Decision Diagrams (OBDDs) are given, based on [KJGB06]. This SFE protocol for OBDDs is extended to the malicious model, and it is shown how to obtain a PF-SFE protocol for OBDDs at the cost of only a small overhead. The results of this thesis substantially improve general SFE for many practical functions and demonstrate the practicability of general PF-SFE for small functions.
References
[Kol05] Vladimir Kolesnikov. Gate evaluation secret sharing and secure one-round two-party computation. In Advances in Cryptology - ASIACRYPT 05, volume 3788 of LNCS, pages . Springer,
[KS08] Vladimir Kolesnikov and Thomas Schneider. A practical universal circuit construction and secure evaluation of private functions. In Financial Cryptography and Data Security, FC 08, LNCS. Springer,
[KJGB06] Louis Kruger, Somesh Jha, Eu-Jin Goh, and Dan Boneh. Secure function evaluation with ordered binary decision diagrams. In CCS, pages . ACM Press,
[MNPS04] Dahlia Malkhi, Noam Nisan, Benny Pinkas, and Yaron Sella. Fairplay - a secure two-party computation system. In USENIX,
[Val76] Leslie G. Valiant. Universal circuits (preliminary report). In Proc. 8th ACM Symp. on Theory of Computing, pages , New York, NY, USA. ACM Press.
[Yao86] Andrew C. Yao. How to generate and exchange secrets. In Proc. 27th IEEE Symp. on Foundations of Comp. Science, pages , Toronto. IEEE.

143 A Protocol for Inter-Domain Authentication With a Trust-Rating Mechanism Ralph Holz, Heiko Niedermayer University of Tübingen We present an authentication scheme and new protocol for domain-based scenarios with inter-domain authentication. It is primarily intended for domain-structured Peer-to-Peer systems, but is applicable in any domain scenario. We make use of Trusted Third Parties in the form of Domain Authentication Servers (DAS) in each domain. These act on behalf of their clients, resulting in a four-party protocol. Our scheme differs from traditional protocols in two regards. First, its design is such that communication between domains is strictly limited to two channels: communication between client principals, and communication between the DAS of two domains. This decouples domains as much as possible and is of advantage for scalability. Second, it also enables us to introduce a mechanism that addresses the (frequent) use case of domains that have no a priori knowledge of each other, where the DAS have not been able to securely exchange their keys. Our protocol is designed to fulfill the goals Authentication as Fresh Injective Agreement [2], Key Establishment, Fresh Session Key, Mutual Belief in Key, and (optionally) Perfect Forward Secrecy. We sketch the protocol flow: an initiator B obtains a Credential from its DAS S_B and presents it to the responder A. A cannot verify the Credential and passes it to her DAS S_A. S_A is in possession of S_B's public key and verifies the Credential. Freshness is confirmed with a query to S_B, and S_A also obtains an Authentication Token for A to authenticate to B. The security of our protocol rests solely on the properties of the channel between the DAS. If the channel is secure, our protocol can provide secure authentication. If not, it follows from the work of Boyd [1] that the authentication cannot be made secure. We address this special case and exploit domain structures in order to provide what we call trust-rated authentication. The DAS also communicate their knowledge about the inter-domain channel and about the other domain to their clients in the form of a Trust Token. If S_B is the DAS for B and S_A the DAS for A, the information that S_B passes to B is: 1) whether there exists a secure channel to S_A and 2) knowledge about S_A, the other domain, and A. The first is a simple yes/no statement. Knowledge is delivered in the form of a set of properties, e.g. details on the channel, a priori knowledge, observations, etc. The clients evaluate this information and decide whether to proceed. Such knowledge can provide basic trust or distrust in the other party. We have verified that our protocol achieves the set security goals and evaluated protocol security with the AVISPA model checker [3]. Due to the size of the state space, this was limited to systems with three concurrent sessions. The evaluation showed our protocol to be secure for these scenarios. Our scheme allows key revocation without the need to distribute Key Revocation Lists. It can be seen as a PKI that is distributed over several domains, yet operates without a single global authority.
References
[1] Boyd, C.: Security architecture using formal methods. IEEE Journal on Selected Areas in Communications 11 (1993)
[2] Lowe, G.: A hierarchy of authentication specifications. In: Proceedings of the 10th IEEE Computer Security Foundations Workshop, Rockport, MA, USA (CSFW 97). (1997)
[3] The AVISPA Project: Automated Validation of Internet Security Protocols and Applications (homepage). (January 2008)

144 Equivalent Representations of the F-FCSR Keystream Generator Family Simon Fischer, Willi Meier, Dirk Stegemann FHNW Windisch Switzerland University of Mannheim Mannheim Germany Linear feedback shift registers (LFSRs) are widely used devices for producing pseudorandom sequences with good statistical properties. When used in stream ciphers, the inherent linear structure of their output has to be overcome with additional methods such as nonlinear filter or combination functions. Another approach is to replace LFSRs by nonlinear feedback shift registers (NFSRs), i.e., feedback shift registers with nonlinear feedback functions. The F-FCSR stream cipher family [1,2], which has made its way into the final phase of the EU-funded stream cipher project eSTREAM [4], is based on a special type of NFSRs called feedback with carry shift registers (FCSRs) [3,6,7]. In contrast to the linear mod-2 addition used in LFSRs, the feedback function of FCSRs is based on a (nonlinear) addition with carry. Similarly to LFSRs, feedback with carry shift registers can be equivalently represented in Galois architecture and in Fibonacci architecture [5]. We show how this correspondence can be used to derive three additional equivalent representations of the F-FCSR ciphers. How these representations may be exploited for cryptanalytic attacks is subject to further research.
References
[1] F. Arnault and T. P. Berger. Design and Properties of a New Pseudorandom Generator Based on a Filtered FCSR Automaton. In IEEE Transactions on Information Theory, 54(11): ,
[2] F. Arnault, T. P. Berger, and C. Lauradoux. Update on F-FCSR Stream Cipher. In eSTREAM, ECRYPT Stream Cipher Project, Report 2006/025. See also [4].
[3] R. Couture and P. L'Ecuyer. On the lattice structure of certain linear congruential sequences related to AWC/SWB generators. In Math. Comput., 62(206): ,
[4] eSTREAM - The ECRYPT Stream Cipher Project - Phase 3.
[5] M. Goresky and A. Klapper. Fibonacci and Galois Representations of Feedback-With-Carry Shift Registers. In IEEE Transactions on Information Theory, 48(11): ,
[6] A. Klapper and M. Goresky. Feedback Shift Registers, 2-Adic Span, and Combiners with Memory. In Journal of Cryptology, 10: ,
[7] G. Marsaglia and A. Zaman. A new class of random number generators. Annals of Appl. Prob., 1(3): ,

145 Related-Key Boomerang Attacks on 7, 8 and 9 Round Reduced Versions of AES-192 Michael Gorski Bauhaus-University of Weimar In this paper we present the first implementable attack on 7-round reduced AES-192. We use the related-key boomerang attack to obtain an attack with a lower time complexity than the best attack known so far. It also has a very low data complexity, which makes an implementation feasible. We also extend this attack to 8-round and 9-round attacks on AES-192 with the lowest time complexity among previously known attacks. Keywords: Differential Cryptanalysis, Block Ciphers, AES, Related-Key Boomerang Attack. The related-key boomerang attack was first published in [1], but not used to break reduced versions of AES. It exploits the fact that the boomerang attack uses short differentials, and thus the weaker diffusion of differences in the subkeys can be used to better effect than in the original related-key differential attack. We present the first related-key boomerang attack on AES-192. Another application of the ordinary boomerang attack on AES-128 can be found in [3], which can break up to 5 and 6 out of 10 rounds of AES-128. Our related-key boomerang attack uses 4 related keys to break 7 and 8 out of 12 rounds of AES-192. Up to now, our attack on 7-round AES-192 is the best known attack on AES-192 in terms of data and time complexity. We decrease both the data and the time complexity of the best known attack [2] on 7-round AES-192. Our 9-round attack has a 2^36 times lower time complexity than previous attacks.
References
[1] Eli Biham, Orr Dunkelman, and Nathan Keller. Related-Key Boomerang and Rectangle Attacks. In Ronald Cramer, editor, EUROCRYPT, volume 3494 of Lecture Notes in Computer Science, pages . Springer,
[2] Eli Biham, Orr Dunkelman, and Nathan Keller. Related-Key Impossible Differential Attacks on 8-Round AES-192. In David Pointcheval, editor, CT-RSA, volume 3860 of Lecture Notes in Computer Science, pages . Springer,
[3] Alex Biryukov. The Boomerang Attack on 5 and 6-Round Reduced AES. In Hans Dobbertin, Vincent Rijmen, and Aleksandra Sowa, editors, AES Conference, volume 3373 of Lecture Notes in Computer Science, pages . Springer,

153 Fault Attacks and Countermeasures Marcel Medwed TU Graz So far, many software countermeasures against fault attacks have been proposed. However, most of them are tailored to a specific cryptographic algorithm or focus on securing the processed data only. In this work we present a generic and elegant approach using a highly fault-secure algebraic structure. This structure is compatible with finite fields and rings and preserves its error-detection property throughout addition and multiplication. Additionally, we introduce a method to generate a fingerprint of the instruction sequence. Thus, it is possible to check the result for data corruption as well as for modifications of the program flow. This is possible even if the order of the instructions is randomized. Furthermore, the properties of the countermeasure allow the deployment of error detection as well as error diffusion. We point out that the overhead for the calculations and for the error checking within this structure is reasonable and that the transformations are efficient. In addition, we discuss how our approach increases security in various kinds of fault scenarios. The talk also includes a short introduction to side-channel and fault attacks.
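The idea of an algebraic structure whose error-detection property survives addition and multiplication can be illustrated with a simple residue check. This is only a generic sketch of the principle (a residue code with an illustrative check modulus), not the specific structure proposed in the talk:

```python
CHECK = 251  # check modulus; an illustrative choice, not from the talk

class Checked:
    """An integer carried together with its residue mod CHECK.
    The residue is updated homomorphically, so any single fault in
    either part is detected with probability about 1 - 1/CHECK."""
    def __init__(self, value, residue=None):
        self.value = value
        self.residue = value % CHECK if residue is None else residue
    def __add__(self, other):
        return Checked(self.value + other.value,
                       (self.residue + other.residue) % CHECK)
    def __mul__(self, other):
        return Checked(self.value * other.value,
                       (self.residue * other.residue) % CHECK)
    def check(self):
        # consistency check at the end of the computation
        return self.value % CHECK == self.residue

a, b = Checked(1234), Checked(5678)
r = a * b + Checked(42)
assert r.check()            # fault-free computation passes the check

r.value ^= 1 << 7           # inject a single-bit data fault
assert not r.check()        # the fault is detected
```

The check survives arbitrary sequences of additions and multiplications, which is the property the abstract ascribes to its fault-secure structure.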

154 Parallel Generation of l-Sequences 1 Cedric Lauradoux and Andrea Röck Princeton University, Department of Electrical Engineering, Princeton, NJ 08544, USA INRIA Paris-Rocquencourt, Team SECRET, Le Chesnay Cedex, France
1 Introduction
The generation of pseudo-random sequences at a high rate is an important issue in modern communication schemes. The representation of a sequence can be scaled by decimation to obtain parallelism, and more precisely a sub-sequences generator. Sub-sequences generators, and therefore decimation, have been extensively used in the past for linear feedback shift registers (LFSRs). However, the case of automata with a non-linear feedback is still open. We have studied how to transform a feedback with carry shift register (FCSR) into a sub-sequences generator. We examine two solutions for this transformation, one based on the decimation properties of l-sequences, i.e. FCSR sequences with maximal period, and the other based on a multiple steps implementation. We show that the solution based on the decimation properties leads to much more costly results than in the case of LFSRs. For the multiple steps implementation, we show how the propagation of carries affects the design. The synthesis of shift registers consists in finding the smallest automaton able to generate a given sequence. This problem has many applications in cryptography, sequences and electronics. The synthesis of a single sequence with the smallest linear feedback shift register is achieved by the Berlekamp-Massey algorithm [Mas69]. In the case of FCSRs, we can use algorithms based on lattice approximation [KG97] or on Euclid's algorithm [ABN04]. We are interested in the following issue in the synthesis of shift registers: given an automaton generating a sequence S, how to find an automaton which generates in parallel the sub-sequences associated to S. We will refer to this problem as the sub-sequences generator problem. We aim to find the best solution to transform a 1-bit output pseudo-random generator into a multiple-output generator. In particular, we investigate this problem when S is generated by a feedback with carry shift register (FCSR) with maximal period, i.e. S is an l-sequence. This class of pseudo-random generators was introduced by Klapper and Goresky in [KG93]. FCSRs and LFSRs are very similar in terms of properties [GK02, GKnt]. However, FCSRs have a non-linear feedback, which is a significant property to thwart algebraic attacks [CM03] in cryptographic applications [ABM07]. The design of sub-sequences generators has been investigated in the case of LFSRs [LE71] and two solutions have been proposed. The first solution [Rue86] is based on the classical synthesis of shift registers, i.e. the Berlekamp-Massey algorithm, to define each sub-sequence. The second solution [LE71] is based on a multiple steps design of the LFSR. We have applied these two solutions to FCSRs. Our contributions are as follows: We explore the decimation properties of l-sequences for the design of a sub-sequences generator by using an FCSR synthesis algorithm. We show how to implement a multiple steps FCSR in Fibonacci and Galois configuration. 1 This extended abstract is a short version of an article accepted at SETA 08, Lexington, Kentucky, USA.

155 2 Motivation
Decimation is the main tool to transform a 1-bit output generator into a sub-sequences generator. This allows us to increase the throughput of a pseudo-random sequence generator (PRSG). Let S = (s_0, s_1, s_2, ...) be an infinite binary sequence of period T, thus s_j ∈ {0, 1} and s_{j+T} = s_j for all j ≥ 0. For a given integer d, a d-decimation of S is the set of sub-sequences defined by: S_d^i = (s_i, s_{i+d}, s_{i+2d}, ..., s_{i+jd}, ...), where i ∈ [0, d-1] and j = 0, 1, 2, .... Hence, a sequence S is completely described by the sub-sequences: S_d^0 = (s_0, s_d, ...), S_d^1 = (s_1, s_{1+d}, ...), ..., S_d^{d-1} = (s_{d-1}, s_{2d-1}, ...). A single automaton is often used to generate the pseudo-random sequence S. In this case, it is difficult to achieve parallelism. The decomposition into sub-sequences overcomes this issue, as shown by Lempel and Eastman in [LE71]. Each sub-sequence is associated to an automaton. Then, the generation of the d sub-sequences of S uses d automata which operate in parallel. Parallelism has two benefits: it can increase the throughput or reduce the power consumption of the automaton generating a sequence. Throughput. The throughput T of a PRSG is defined by: T = n · f, where n is the number of bits produced in every cycle and f is the clock frequency of the PRSG. Usually we have n = 1, which is often the case with LFSRs. Decimation achieves a very interesting tradeoff for the throughput: T_d = d · γ · f, with 0 < γ ≤ 1 the degradation factor of the original automaton frequency. Decimation improves the throughput if and only if γ·d > 1. It is then highly critical to find good automata for the generation of the sub-sequences. In an ideal case, we would have γ = 1, and then a d-decimation would multiply the throughput by d. Power consumption. The power consumption of a CMOS device can be estimated by the equation P = C · V_dd^2 · f, with C the capacitance of the device and V_dd the supply voltage. Sequence decimation can be used to reduce the frequency of the device by interleaving the sub-sequences. The sub-sequences generator is clocked at frequency γ·f/d and the outputs are combined with a d-input multiplexer clocked at frequency γ·f. The original power consumption can then be reduced by the factor γ/d, where γ must be close to 1 to guarantee that the final representation of S is generated at frequency f. The study of the γ parameter is out of the scope of this work, since it is highly related to the physical characteristics of the technology used for the implementation. In the following, we consider m-sequences and l-sequences, which are produced by LFSRs and FCSRs, respectively. We detail different representations of several automata. We denote by x_i a memory cell and by (x_i)_t the content of the cell x_i at time t. The internal state of an automaton at time t is denoted by X_t.
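The d-decimation and its inverse (interleaving the sub-sequences back together) can be sketched directly; the sequence below is an arbitrary example:

```python
def decimate(S, d):
    """d-decimation: the d sub-sequences S_d^i = (s_i, s_{i+d}, ...)."""
    return [S[i::d] for i in range(d)]

def interleave(subs):
    """Recombine the sub-sequences into the original sequence,
    as the d-input multiplexer would in hardware."""
    return [s for group in zip(*subs) for s in group]

S = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
subs = decimate(S, 3)
assert subs[0] == [1, 1, 1, 1]   # s_0, s_3, s_6, s_9
assert interleave(subs) == S     # S is completely described by its sub-sequences
```

If the d sub-sequence automata each run at frequency γ·f, the multiplexed output runs at γ·d·f, which is exactly the throughput tradeoff stated above.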

156 3 Previous Results
An LFSR is an automaton which generates linear recursive sequences. A detailed description of this topic can be found in the monographs of Golomb and McEliece [Gol81, McE87]. The decimation of LFSR sequences has been used in cryptography in the design of new stream ciphers [MR84]. There exist two approaches to use decimation theory to define the automata associated to the sub-sequences. Construction using LFSR synthesis. This first solution associates an LFSR to each sub-sequence. It is based on well-known results on the decimation of LFSR sequences. It can be applied to both the Fibonacci and the Galois representation without any distinction. Theorem 3.1 ([Zie59, Rue86]). Let S be a sequence produced by an LFSR whose characteristic polynomial Q(x) is irreducible over F_2 of degree m. Let α be a root of Q(x) and let T be the period of Q(x). Let S_d^i be a sub-sequence resulting from the d-decimation of S. Then S_d^i can be generated by an LFSR with the following properties: The minimal polynomial of α^d in F_2^m is the characteristic polynomial Q'(x) of the resulting LFSR. The period T' of Q'(x) is equal to T/gcd(d, T). The degree m' of Q'(x) is equal to the multiplicative order of 2 in Z_{T'}. In practice, the characteristic polynomial Q'(x) can be determined using the Berlekamp-Massey algorithm [Mas69]. The sub-sequences are generated using d LFSRs defined by the characteristic polynomial Q'(x) but initialized with different values. In the case of LFSRs, the degree m' is always smaller than or equal to m. Construction using a multiple steps LFSR. This method was first proposed by Lempel and Eastman [LE71]. It consists in clocking the LFSR d times in one clock cycle by changing the connections between the memory cells and by some duplication of the feedback function. We obtain a network of linearly interconnected shift registers. All the cells x_i of the original LFSR such that i mod d = k are gathered to form a sub-shift register, where 0 ≤ k ≤ d-1. This is the basic operation to transform an LFSR into a sub-sequences generator with a multiple steps solution. In the case of the Fibonacci setup, we apply the update function f to the states X_t, X_{t+1}, ..., X_{t+d-1} to obtain the new feedback functions. In the case of the Galois setup, we have to define the new feedback function at each feedback position. Comparison. Let wt(Q(x)) denote the Hamming weight of the polynomial Q(x), i.e. the number of non-zero monomials. The method based on LFSR synthesis proves that there exists a solution for the synthesis of the sub-sequences generator. With this solution, both the memory cost (d · m' memory cells) and the gate count (d · wt(Q') logic gates) depend on the decimation factor d. The method proposed by Lempel and Eastman [LE71] uses a constant number of memory cells (m) for the synthesis of the sub-sequences generator; the number of logic gates (d · wt(Q)) still depends on d.
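The multiple steps idea can be sketched behaviourally for a small LFSR; the primitive characteristic polynomial x^4 + x + 1 is chosen here purely for illustration. Clocking the register d times per cycle and emitting d bits at once yields exactly the interleaved sub-sequences; a hardware design would compute the d feedbacks combinationally, which this unrolled software model emulates:

```python
def lfsr_bits(init, n):
    """Fibonacci LFSR for x^4 + x + 1: recurrence s_{t+4} = s_t XOR s_{t+1}."""
    s = list(init)
    out = []
    for _ in range(n):
        out.append(s[0])
        s = s[1:] + [s[0] ^ s[1]]
    return out

def multi_step_bits(init, n, d):
    """Clock the register d times per cycle and emit d bits at once,
    i.e. generate the d sub-sequences in parallel and interleave them."""
    s = list(init)
    out = []
    for _ in range(n // d):
        for _ in range(d):        # d unrolled copies of the update logic
            out.append(s[0])
            s = s[1:] + [s[0] ^ s[1]]
    return out

seq = lfsr_bits([1, 0, 0, 0], 30)
assert multi_step_bits([1, 0, 0, 0], 30, 3) == seq
# x^4 + x + 1 is primitive, so the m-sequence has period 2^4 - 1 = 15
assert seq[:15] == seq[15:30]
```

The cost picture matches the comparison above: the state size stays at m = 4 cells, while the feedback logic is duplicated d times.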

157 4 Sub-Sequences Generators and l-Sequences
FCSRs were introduced by Klapper and Goresky in [KG93]. Instead of addition modulo 2, FCSRs use additions with carry, which means that they need additional memory to store the carry. Their non-linear update function makes them particularly interesting for areas where linearity is an issue, as for instance stream ciphers. As for LFSRs, there exist a Fibonacci and a Galois setup [GK02]. The contribution of our work is to apply the two methods of the previous section to the case of l-sequences, i.e. sequences with maximal period. Construction using FCSR synthesis. There exist algorithms based on Euclid's algorithm [ABN04] or on lattice approximation [KG97] which can determine the smallest FCSR producing S_d^i. These algorithms use the first k bits of S_d^i to find h' and q' such that h'/q' is the 2-adic representation of the sub-sequence, with -q' < h' ≤ 0 and gcd(q', h') = 1. Subsequently, we can find the feedback positions and the initial state of the FCSR in Galois or Fibonacci architecture. The value k is in the range of twice the 2-adic complexity of the sequence. For our new sequence S_d^i, let h' and q' denote the values found by one of the algorithms mentioned above. By T' and T we mean the periods of S_d^i and S, respectively. For the period of the decimated sequences, we can make the following statement, which is true for all periodic sequences. Lemma 4.1. Let S = (s_0, s_1, s_2, ...) be a periodic sequence with period T. For a given d > 1 and 0 ≤ i ≤ d-1, let S_d^i be the decimated sequence with period T'. Then it must hold that: T' | T/gcd(T, d). (1) If gcd(T, d) = 1, then T' = T. In the case of gcd(T, d) > 1, the real value of T' might depend on i, e.g. for S being the 2-adic representation of 1/19 and d = 3 we have T/gcd(T, d) = 6; however, for S_3^0 the period is T' = 2 and for S_3^1 the period is T' = 6. A critical point in this approach is that the size of the new FCSR can be exponentially bigger than that of the original one. In general, we only know that for the new q' it must hold that q' | 2^{T'} - 1, where T' can be as big as T/gcd(T, d). In the case of gcd(T, d) = 1, a lower bound on q' is known from [GKMS04]. Based on a conjecture in [GK97], we can even assume that q' is always bigger than q if gcd(T, d) = 1. This means that the space complexity of this method is much worse than for the original FCSR, which is also an interesting aspect for decimation attacks on FCSR sequences. Construction using a multiple steps FCSR. We apply the same technique as for the LFSR; however, this time we have to take care of the carry path, i.e. we need the value of the carry at time t to compute the carry at time t + 1. The computation of these subsequent carry bits seems to reduce the effectiveness of the decimation. However, it can be done efficiently by using n-bit ripple-carry adders, which are well-known arithmetic circuits. An example of a multiple steps Galois FCSR can be found in the accompanying figure.
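The 1/19 example in Lemma 4.1 can be checked numerically. Which sub-sequence attains which period depends on where one starts reading the periodic part of the 2-adic expansion (the alignment below, starting after a one-bit pre-period, is an assumption of this sketch), but the multiset of periods {2, 6, 6} and their divisibility by T/gcd(T, d) = 6 are invariant:

```python
def two_adic_bits(a, q, n):
    """First n bits of the 2-adic expansion of a/q (q odd)."""
    bits = []
    for _ in range(n):
        b = a & 1            # next 2-adic digit
        bits.append(b)
        a = (a - b * q) // 2  # exact division: a - b*q is even
    return bits

def min_period(seq):
    """Smallest p with seq equal to p-periodic repetition of its prefix."""
    n = len(seq)
    for p in range(1, n + 1):
        if n % p == 0 and seq == seq[:p] * (n // p):
            return p
    return n

bits = two_adic_bits(1, 19, 40)
per = bits[1:19]                     # one full period of the periodic part
assert bits[1:19] == bits[19:37]     # expansion of 1/19 has period T = 18
assert min_period(per) == 18
# 3-decimation: sub-sequence periods divide T/gcd(T, d) = 18/3 = 6,
# and the actual period depends on the index i (here: 6, 6 and 2)
periods = sorted(min_period(per[i::3]) for i in range(3))
assert periods == [2, 6, 6]
assert all(6 % p == 0 for p in periods)
```

This confirms the statement of the lemma: all sub-sequence periods divide T/gcd(T, d), with equality failing for one of the three sub-sequences.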

159 Slide Attacks on Hash Functions Michael Gorski 1, Stefan Lucks 1, and Thomas Peyrin 2 1 Bauhaus-University of Weimar, Germany {Michael.Gorski, 2 Orange Labs and University of Versailles Abstract. This paper studies the application of slide attacks to hash functions. Slide attacks have mostly been used for block cipher cryptanalysis. But, as shown in the current paper, they also form a potential threat for hash functions, namely for sponge-function-like structures. As it turns out, certain constructions for hash-function-based MACs can be vulnerable to forgery and even to key-recovery attacks. In other cases, we can at least distinguish a given hash function from a random oracle. To illustrate our results, we describe attacks against the Grindahl-256 and Grindahl-512 hash functions. To the best of our knowledge, this is the first cryptanalytic result on Grindahl-512. Furthermore, we point out a slide-based distinguisher attack on a slightly modified version of RadioGatún. We finally discuss simple countermeasures as a defense against slide attacks. Key words: slide attacks, hash function, Grindahl, RadioGatún, MAC, sponge function. 1 Introduction A hash function H : {0, 1}* → {0, 1}^n is used to compute an n-bit fingerprint from an arbitrarily-sized input. Established security requirements for cryptographic hash functions are collision resistance, preimage and 2nd-preimage resistance, but ideally, cryptographers expect a good hash function to somehow behave like a random oracle. Current practical hash functions, such as SHA-1 or SHA-2 [13, 14], are iterated hash functions, using a compression function with a fixed-length input, say h : {0, 1}^{n+l} → {0, 1}^n, and the Merkle-Damgård (MD) transformation [6, 12] for the full hash function H with arbitrary input sizes.
The core idea is to split the message M into l-bit blocks M_1, ..., M_m ∈ {0, 1}^l (with some padding, to ensure all blocks are of size l bits), to define an initial value X_0, and to apply the recurrence X_i = h(X_{i-1}, M_i). The final chaining variable X_m is used as the hash output. The main benefit of the MD transformation is that it preserves collision resistance: if the compression function is collision resistant, then so is the hash function. Recent results, however, highlight some intrinsic limitations of the MD approach. This includes being vulnerable to multicollision attacks [7], long second-preimage attacks [9], and herding [8]. Even though the practical relevance of these attacks is unclear, they highlight some security issues which designers of new hash functions should avoid. In general, and due to certain structural weaknesses, MD-based hash functions do not behave like a random oracle. Consider, e.g., a secret key K, a message M, and define a Message Authentication Code MAC(K, M) = H(K || M). If we model H as a random oracle, this is obviously secure. But for an MD-based hash function H, one can easily forge authentication codes: given MAC(K, M) = H(K || M), compute a valid MAC(K, M || Y) = H(K || M || Y) without knowing the secret key K. Coron et al. [5] recently discussed a formal model to prove hash functions free from such structural weaknesses (but still weak against multicollision attacks).
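The length-extension forgery on MAC(K, M) = H(K || M) can be demonstrated with a toy Merkle-Damgård construction. The truncated-SHA-256 compression function and the omission of length padding are simplifications of this sketch; with MD strengthening the attack still works, the forger just absorbs the padding block into the extended message:

```python
import hashlib

BLOCK = 8  # toy block size in bytes

def compress(state, block):
    # toy compression function: truncated SHA-256 of state || block
    return hashlib.sha256(state + block).digest()[:8]

def md_hash(message, iv=b"\x00" * 8):
    """Plain Merkle-Damgård iteration X_i = h(X_{i-1}, M_i);
    for simplicity the message must be a multiple of the block size."""
    assert len(message) % BLOCK == 0
    state = iv
    for i in range(0, len(message), BLOCK):
        state = compress(state, message[i:i + BLOCK])
    return state

# secret-prefix MAC: MAC(K, M) = H(K || M)
K = b"secretK!"            # 8-byte key, unknown to the attacker
M = b"PAY 100\x00"
tag = md_hash(K + M)

# length extension: knowing only tag, len(K || M) and M, the attacker
# forges a valid tag for M || Y without ever learning K, by resuming
# the iteration from the leaked chaining value
Y = b"PAY 999\x00"
forged = md_hash(Y, iv=tag)
assert forged == md_hash(K + M + Y)
```

The forgery works because the MAC output is exactly the internal chaining state, which is the structural weakness the sponge designs discussed below try to avoid by keeping a large hidden capacity.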

160 Our contribution. Newly proposed hash function designs should not suffer from length extension. So for a new and well-designed hash function, MAC(K, M) = H(K || M) should be a secure MAC. We will show that this is not the case for some recently proposed hash functions. In contrast to the case of MD-based hash functions, where one can forge messages but cannot recover K, our attacks in general allow the adversary to find K (much faster than by exhaustively searching for it). Our attacks are an application of slide attacks. These are a classical tool for block cipher cryptanalysis, but have so far not been used for hash function cryptanalysis. The Targets for Our Attacks. A natural idea for thwarting the MD limitations is to increase the size of the internal chaining variables in the iterated process, see, e.g., [11]. Using a similar patch, sponge functions [2] followed the idea to employ a huge internal state (to hold a huge chaining variable) and to claim a capacity c, typically c > n. This defends against attackers even if these can perform 2^{n/2} operations (but are still restricted to 2^{c/2} units of work). Here n is considered a typical hash function output size (sponge functions may also provide for arbitrary output sizes, rather than for a fixed n). Several recent hash functions follow this approach, including Grindahl [10] and RadioGatún [1]. As far as we know, there are no cryptanalytic attacks on either RadioGatún or the 512-bit version of Grindahl, while some collision attacks on the 256-bit version of Grindahl have already been published [15]. In the current paper, we study the applicability of slide attacks to sponge functions. Our results indicate that slide attacks can be a serious threat for hash functions fitting into the sponge framework. On the other hand, if the hash function designer is aware of slide attacks, we believe it is easy to defend against such attacks.
We give concrete examples by providing attacks against Grindahl [10] and two slightly tweaked versions of RadioGatún [1]. Our attack applies to both published flavours of Grindahl, the 256-bit version and the 512-bit version. As far as we know, this is the first cryptanalytic result for the 512-bit version.

References

1. Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. RadioGatún, a belt-and-mill hash function. Presented at the Second Cryptographic Hash Workshop, Santa Barbara, August 24-25, 2006.
2. Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. On the Indifferentiability of the Sponge Construction. In Nigel P. Smart, editor, EUROCRYPT 2008, volume 4965 of Lecture Notes in Computer Science. Springer, 2008.
3. Alex Biryukov, editor. Fast Software Encryption, 14th International Workshop, FSE 2007, Luxembourg, Luxembourg, March 26-28, 2007, Revised Selected Papers, volume 4593 of Lecture Notes in Computer Science. Springer, 2007.
4. Gilles Brassard, editor. Advances in Cryptology - CRYPTO '89, 9th Annual International Cryptology Conference, Santa Barbara, California, USA, August 20-24, 1989, Proceedings, volume 435 of Lecture Notes in Computer Science. Springer.
5. Jean-Sébastien Coron, Yevgeniy Dodis, Cécile Malinaud, and Prashant Puniya. Merkle-Damgård Revisited: How to Construct a Hash Function. In Victor Shoup, editor, CRYPTO 2005, volume 3621 of Lecture Notes in Computer Science. Springer, 2005.
6. Ivan Damgård. A Design Principle for Hash Functions. In Brassard [4].
7. Antoine Joux. Multicollisions in Iterated Hash Functions. Application to Cascaded Constructions. In Matthew K. Franklin, editor, CRYPTO 2004, volume 3152 of Lecture Notes in Computer Science. Springer, 2004.

SPAM - Some Cryptographic Thoughts

Christopher Wolf
Horst Görtz Institute for IT-Security, Ruhr-University Bochum, Germany

1 Introduction

When introducing electronic mail, or email, the designers of the Simple Mail Transfer Protocol (SMTP) imagined a cooperative Internet. In its first incarnation as a research network, this was certainly true. However, since the first newsgroup crosspostings and the successive rise of spam, this paradigm has certainly proven wrong. Due to the lack of even the most basic security features (e.g. sender authentication) in SMTP or its current version ESMTP, spam is on the rise. Indeed, using some basic physical laws, it has even been predicted that spam will cause a resonance catastrophe in the entire email system, which will first render email totally useless as a means of communication and second will lead to failures in the entire Internet. Hence, drastic measures seem to be necessary to elude spam. As spam has quite a long history, at least by Internet standards, it is not surprising that such a drastic solution has already been proposed, namely Internet Mail 2000. Before outlining Internet Mail 2000, we will first look at the basic difference between traditional mail and email. Although these differences are rather simple, this will help us to understand why spam exists in email but was and still is virtually unknown in traditional mail. In traditional or surface mail, all a potential receiver has to do is to put up a post-box, attach her name to it and distribute her address to potential senders, e.g. her friends, her bank, or her favourite online-shops. After that, mail will arrive and will be paid for by the sender, e.g. through a stamp. Hence, the whole infrastructure for delivering surface mail, i.e. post-boxes and offices, store houses, lorries and staff, is paid for by the senders. The only exceptions are companies who may charge for shipping and handling or otherwise include this fee in their price calculation.
In electronic mail, the situation is fundamentally different. Here, the receiver by and large pays the majority of the costs: operating a server 24 hours a day, keeping it up to date with the latest security findings, keeping it connected to the rest of the Internet, and so forth. And instead of improving, the situation is getting worse, as all commonly used anti-spam techniques such as content filtering, fingerprinting, and black- and white-listing have to be paid for by the receiver, not the sender. In contrast, gray-listing is neutral in this respect, as it requires both the sender and the receiver to invest in a connection: on the sender's side, an email has to be placed in a queue and resent, while on the receiver's side, some memory has to be spent to remember the Internet address (IP address) of the prospective sender. Internet Mail 2000 (IM2000) tried to change this fundamental asymmetry between sender and receiver in the SMTP protocol, and also this fundamental difference between traditional and electronic mail, by requiring the sender to operate so-called out-box servers, which have to be kept online 24 hours a day. As we saw above, this is in fact the majority of the costs in the SMTP protocol. To notify a user about new messages, IM2000 suggested small notification messages. Upon receiving such a notification, the receiver would connect to the corresponding out-box server and download

his message, or decide not to do so in the case of unwanted email. Hence, it would have been at the cost of the sender to store bulks of unwanted messages. Although technically brilliant, IM2000 was obviously too drastic a change to supersede SMTP. Unfortunately, network effects have had a big impact here. However, the basic observation of IM2000 is still valid, i.e. in a fair system, the sender should bear a large, or at least larger than today's, share of the cost of sending emails, compared to the receiver. This observation is also strengthened by the statement: "Users rarely learn about the email they do not receive, and even if they do, are seldom competent enough to assign the blame to their [Internet Service Provider] blocking it rather than the sender having made an error." Although a bit cynical, the statement is certainly true in the sense that receivers will usually not learn about email they do not receive, while senders will learn about this, e.g. through answers which do not arrive or through bounce messages. Hence, the incentive for a sender to invest in her outbound infrastructure is larger than the incentive for a receiver to invest in his inbound infrastructure. Equipped with this observation, we therefore advocate a paradigm shift in SMTP: rather than having the receiver bear the majority of the costs, we show how to shift them to the sender, using different, partly surprisingly simple strategies (Sec. 2). However, the example of IM2000 has to make us consider carefully what it takes for this paradigm shift to happen in practice (and hence, to find far less SPAM in our own inboxes). We therefore have to make sure that our suggested changes are compatible with existing techniques and the spirit of the SMTP protocol. We will then go through several use-cases to see how SF-MTP is expected to work in practice (Sec. 3).
2 Let Sarah Pay

As already outlined above, we will describe in this section a few simple strategies which can be used to combat spam by shifting as much workload as possible from the receiver Robert to the sender Sarah. For the purpose of our article, it is not important whether Sarah and Robert are actual human beings or computer programmes. Although this view is a bit of a simplification in terms of the current SMTP protocol, we found it a valid simplification to ease explanation. We will hint in later sections at which stage we suggest to implement each strategy. In most cases, you should assume Sarah and Robert to be computer programmes or servers. In a nutshell, we suggest:

- External White-and-Gray-Lists
- Sender-Call-Back
- Proof-of-Work
- User Interaction
- Standardized Abuse & Bounce Messages
- Distributed Repudiation System

Before going into the details of our suggestions, we start with a brief sketch of a typical SMTP session, to give the reader not familiar with the Simple Mail Transfer Protocol an idea of the possible problems in terms of authentication, and hence the anchor points for spammers.

2.1 SMTP session

[Figure: a typical SMTP session]

1. Sarah: 220 Greeting (ESMTP)
2. Robert: HELO/EHLO + domain
3. Sarah: 250 Hello back
4. Robert: MAIL FROM: Mail-Address-Robert
5. Sarah: 250 Sender OK
6. Robert: RCPT TO: Mail-Address-Sarah
7. Sarah: 250 Recipient OK
8. Robert: DATA (complete email, incl. From, To, ...)
9. Sarah: 250 accepted for delivery
10. Robert: QUIT
11. Sarah: 221 Bye

Not shown here is the fact that Robert and Sarah need to know each other's Internet or IP address to send each other messages. However, we see that Robert can send any message he wants to send to Sarah. In particular, Sarah cannot be sure about anything but the validity of her own address (given by Robert in Step 6) and Robert's IP address. Therefore, it has been common among spammers to use fake from-addresses. In addition, the from-address given in Step 4 does not necessarily match the from-address given in the text in Step 8. On the other hand, this property allows for such features as email forwarding or mailing lists.

3 Use Scenarios

We assume that the following use scenarios need to be addressed by any solution which could supersede SMTP:

- Normal User
- Automated Forwarding
- Roaming Users
- Mailing List
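The session of Sec. 2.1 can be mimicked with a toy server state machine; the reply texts below are illustrative, not verbatim from any real mail server. The point it demonstrates is the one made above: the MAIL FROM address is accepted without any verification.

```python
def toy_smtp_server(commands):
    """Minimal state machine mirroring the SMTP dialogue of Sec. 2.1.
    It accepts any MAIL FROM address, illustrating that plain SMTP never
    verifies the claimed sender identity."""
    replies = ["220 toy.example ESMTP"]           # greeting (Step 1)
    for cmd in commands:
        verb = cmd.split(None, 1)[0].upper()
        if verb in ("HELO", "EHLO"):
            replies.append("250 Hello " + cmd.split(None, 1)[1])
        elif verb == "MAIL":
            replies.append("250 Sender OK")       # no check of the claimed address!
        elif verb == "RCPT":
            replies.append("250 Recipient OK")
        elif verb == "DATA":
            replies.append("354 End data with <CRLF>.<CRLF>")
        elif verb == "QUIT":
            replies.append("221 Bye")
        else:
            replies.append("500 Unrecognized command")
    return replies
```

Feeding it a session with a forged sender address, e.g. `MAIL FROM:<fake@bank.example>`, succeeds just like an honest one, which is exactly the anchor point spammers exploit.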

The Indifferentiability Framework - Revisited

Ewan Fleischmann, Stefan Lucks
Bauhaus-University of Weimar, Germany

Abstract. At Crypto 2005, Coron et al. introduced a formalism to study the presence or absence of structural flaws in iterated hash functions: if one cannot differentiate a hash function using ideal primitives from a random oracle, it is considered structurally sound, while the ability to differentiate it from a random oracle indicates a structural weakness. This model was devised as a tool to detect subtle real-world weaknesses while working in the random oracle world. In this paper we take a practical point of view. We show, using well-known examples like NMAC and MCM, how we can prove a hash construction secure and insecure at the same time in the indifferentiability setting. These constructions do not differ in their implementation, but only on an abstract level. Naturally, this gives rise to the question of what to conclude for our hash function. Our results cast doubt on the notion of indifferentiability from a random oracle as a practically relevant criterion (as e.g. proposed by Knudsen [KNU08] for the SHA-3 competition) to separate good hash function structures from bad ones.

Keywords. hash function, random oracle, uninstantiable, indifferentiable

1 Introduction

1.1 Hash Functions

Cryptographic hash functions map an input of unlimited size to a small fixed-size output. Due to their complexity and for theoretical purposes, cryptographic hash functions are often modeled as random oracles. In practice, cryptographic hash functions are usually implemented based on some compression function(s) with fixed-size input and output. If the compression functions are insecure, we can hardly expect much security from the iterated hash function. On the other hand, even if the compression functions are secure, there could be flaws in the hash structure which an adversary might possibly exploit.
Current hash functions, for example, almost universally follow the Merkle-Damgård structure [Me89,Da89], which suffers from length-extendibility: given the hash H(M) of a message M, one can easily compute some string X and the hash H(M || X) without even knowing the message M itself. A good practical hash function H should prevent this. Motivated by the practical need to be able to say anything about structural flaws in the design of H itself, Coron et al. [C05] presented a new notion of security for cryptographic hash functions, called indifferentiability. In short, if one models the compression functions as random oracles with fixed-size inputs, then the iterated hash function composed from these compression functions should be indifferentiable from a random oracle with variably-sized inputs. This could be a criterion, e.g., to separate practical hash functions with a good structure from those which might suffer from structural flaws, e.g., in the context of the search for new standard hash functions [NIST]. The current paper discusses this issue.
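Length-extendibility can be made concrete with a toy MD hash. Everything below is an illustrative stand-in (SHA-256 as the compression function, a simplified MD-style length padding), not any deployed design; the point is only that knowing H(M) and the length of M suffices to continue the iteration:

```python
import hashlib

BLOCK = 64
IV = b"\x00" * 32

def pad(msg_len: int) -> bytes:
    # MD-style padding: 0x80 marker, zero fill, 8-byte big-endian message length.
    return (b"\x80" + b"\x00" * ((-msg_len - 9) % BLOCK)
            + msg_len.to_bytes(8, "big"))

def md_core(data: bytes, x: bytes) -> bytes:
    # Iterate the (stand-in) compression function over 64-byte blocks.
    for i in range(0, len(data), BLOCK):
        x = hashlib.sha256(x + data[i:i + BLOCK]).digest()
    return x

def md_hash(msg: bytes) -> bytes:
    return md_core(msg + pad(len(msg)), IV)

def extend(tag: bytes, known_len: int, suffix: bytes):
    """Given tag = md_hash(M) and len(M) only, forge the tag of
    M || glue || suffix without knowing M itself."""
    glue = pad(known_len)                            # padding the victim hash absorbed
    total = known_len + len(glue) + len(suffix)
    return glue, md_core(suffix + pad(total), tag)   # continue from the old tag
```

With M = K || message, this is exactly the MAC forgery discussed earlier: the attacker extends H(K || message) without ever learning K.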

Preliminaries. In the current paper, we use notions such as efficient, significant and negligible as usual in theoretical cryptography; e.g., an algorithm is efficient if its running time is bounded from above by a polynomial in the security parameters.

2 Random Oracles and Indifferentiability

Random oracles are a mathematical abstraction used in cryptographic proofs, abstracting away nearly all (instantiation-)specific details. Namely, a random oracle responds to every fresh query with a random response chosen uniformly from its output domain. If a query isn't fresh, i.e., if the same query has been made before, the random oracle repeats its previous answer, thus ensuring that random oracles always behave like well-defined functions. Random oracles are typically used when no known implementable function provides the mathematical properties required for the proof, or when it gets too tedious to precisely formalise these properties. We consider cryptographic schemes with access to an oracle. The adversary is allowed to access the same oracle as the legitimate user. The idea is to prove the scheme secure assuming the random oracle, but, when the scheme is used in practice, to instantiate the oracle by some efficient function. From a theoretical point of view, it is clear that a security proof in the random oracle model is only a heuristic indication of the security of a system when instantiated with a particular function. Nevertheless, a formal proof in the random oracle model seems to indicate that there might be no structural flaws in the design (i.e. structure) of the system.

2.1 Being Indifferentiable from a Random Oracle: a Security Notion for Hash Functions

The notion of indifferentiability for hash functions is defined as follows [MRH04]:

Definition 1.
A Turing machine H_Alg with oracle access to a set of ideal primitives G is said to be computationally indifferentiable from an ideal primitive H_Rnd if there exists a simulator S such that for any distinguisher D the difference

|Pr[D^{H_Alg, G} = 1] - Pr[D^{H_Rnd, S} = 1]|

is negligible. Here, the simulator S is efficient, has oracle access to H_Rnd, and simulates the ideal primitives. The distinguisher D is also efficient (i.e., it runs in polynomial time and thus makes at most a polynomial number of queries). See Figure 1 for a visualisation of indifferentiability. As shown in [MRH04, Proposition 2], if H_Alg is computationally indifferentiable from H_Rnd, we can take any cryptosystem using H_Rnd (i.e. a random oracle) and replace H_Rnd by H_Alg without destroying the cryptosystem's security. This makes indifferentiability in general a useful tool for cryptography. Typically, an ideal n-bit hash function is thought of as a random function H_Rnd : {0,1}* -> {0,1}^n. In practice, hash functions usually iterate some compression function(s) F : {0,1}^n x {0,1}^m -> {0,1}^n. We call such a compression function an ideal primitive if it is chosen randomly, uniformly distributed, from the set of all such functions. (Clearly, we can model random functions as random oracles!)
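The defining behaviour of a random oracle (fresh queries get uniform answers, repeated queries get the same answer again) is naturally captured by lazy sampling, as in this sketch:

```python
import os

class RandomOracle:
    """Lazily sampled random function with fixed-size outputs: a fresh query
    gets a uniformly random answer; a repeated query gets the same answer,
    so the oracle behaves like a well-defined function."""

    def __init__(self, out_bytes: int = 32):
        self.out_bytes = out_bytes
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:                     # fresh query: sample uniformly
            self.table[x] = os.urandom(self.out_bytes)
        return self.table[x]                        # repeat query: consistent answer
```

Lazy sampling is only a proof and simulation device, of course; a table of all answers for {0,1}* could never be instantiated as a concrete function of fixed size.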

[Figure 1: Defining H_Alg being indifferentiable from a random oracle. Case I: the distinguisher D has access to the hash algorithm H_Alg and the ideal primitives G it uses. Case II: D has access to the random function H_Rnd and a simulator S for the primitives.]

Coron et al. [C05] used the following formalism to characterise the structural soundness of hash functions. Assume a set G of ideal primitives and a hash function H_Alg utilising these primitives. We call H_Alg computationally indifferentiable from a random oracle if no efficient adversary A can distinguish between the following two cases with any significant advantage:

- Algorithm: oracle access to H_Alg and oracle access to the ideal primitives contained in G, and
- Random: oracle access to a random oracle H_Rnd and an efficient simulator S answering the oracle queries to the primitives in G. (The simulator S has no access to the H_Rnd-oracle queries of the attacker, but has access to the random oracle H_Rnd itself.)

Here, the advantage of A is defined as the difference of two probabilities, the probability Pr[A^{H_Alg, G} = 1] that A outputs 1 when given access to H_Alg and the ideal primitives in G, and the probability Pr[A^{H_Rnd, S} = 1] of A outputting 1 when given access to a random oracle H_Rnd and a simulator S for the primitives:

Adv(A) = |Pr[A^{H_Alg, G} = 1] - Pr[A^{H_Rnd, S} = 1]|.

In the following, we will sometimes briefly write secure instead of computationally indifferentiable from a random oracle. Note that by assuming ideal primitives even in the Algorithm case, this definition is inherently based on the random oracle model. In the standard model we cannot assume ideal primitives (at least not without allowing an exponentially-sized memory to store a description of the function), so this notion of security only makes sense in the random oracle model.
Nevertheless, as we understand [C05], a part of their motivation was to introduce a formalism for aiding the design of practical hash functions. Showing the above kind of security in the random oracle model ought to indicate the absence of structural flaws in the hash function. On the other hand, if one can efficiently differentiate a hash function (using ideal primitives) from a random oracle, this appears to indicate a weakness in the hash function structure. With this reasoning, we again follow the example of Coron et al., who debunk certain hash function structures as insecure by pointing out efficient differentiation attacks [C05, Sections 3.1 and 3.2].

2.2 Indifferentiability vs. Indistinguishability

3 (In)Security in the Indifferentiability World

In the following we will examine a number of constructions secure in the indifferentiability framework involving one or more random oracles, and show how slight modifications to them or partial instantiations render them insecure (at least in this framework).

[Table 1: for each construction, whether it is secure, insecure after partial instantiation or modification, or insecure after extension. RO denotes a random oracle (with fixed or variable length input), RO_i an injective random oracle, RO_x a (fixed or variable length, injective or not) random oracle; X, Y and Z denote collision resistant one-way functions (CROWF). The rows cover, among others, the NMAC and MCM constructions; a secure structure with at least two ROs becomes insecure when one of the ROs is instantiated by a CROWF, or when a CROWF finalization or preprocessing step is added.]

Our results are stronger than indicated by Table 1.

Motivational, informal example. Say we want to design a secure hash function and come up with the idea of combining a preprocessing function modelled as a random oracle RO with a collision resistant one-way function (CROWF) X. Consequently, our hash function H for a message M is H(M) := X(RO(M)). So we try to prove its security in the indifferentiability framework and come to the conclusion that this hash function is in fact insecure (see Theorem 1.1 (iii), footnote 3). In the indifferentiability framework we have at least three straightforward approaches to make H secure:

1. Remove the CROWF X: H_1(M) = RO(M).
2. Strengthen X and make it a random oracle RO': H_2(M) = RO'(RO(M)).
3. Weaken X and make it an easily invertible function: H_3(M) = X(RO(M)), with X now easily invertible.

The hash functions H_1, H_2 and H_3 can be proven secure in the indifferentiability framework (see Thm. 1.1 (i) and (ii), footnote 3).
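The four variants differ only in which component is modelled as ideal. As a sketch (where lazily sampled tables stand in for the random oracles, SHA-256 stands in for the CROWF X, and byte reversal stands in for an easily invertible X; all of these are illustrative assumptions, not the paper's constructions themselves):

```python
import hashlib
import os

def make_ro(out_bytes: int = 32):
    """Return a lazily sampled fixed random function (random oracle stand-in)."""
    table = {}
    def ro(x: bytes) -> bytes:
        if x not in table:
            table[x] = os.urandom(out_bytes)
        return table[x]
    return ro

def crowf(x: bytes) -> bytes:
    # Stand-in for the collision resistant one-way function X.
    return hashlib.sha256(x).digest()

ro1, ro2 = make_ro(), make_ro()

def H(m: bytes) -> bytes:                  # X(RO(M)): differentiable (insecure)
    return crowf(ro1(m))

def H1(m: bytes) -> bytes:                 # RO(M): secure
    return ro1(m)

def H2(m: bytes) -> bytes:                 # RO'(RO(M)): secure
    return ro2(ro1(m))

def H3(m: bytes) -> bytes:                 # X(RO(M)) with X invertible: secure
    return ro1(m)[::-1]                    # byte reversal as the invertible X
```

Note that H and H2 run the same kind of code and differ only in whether the outer function is modelled as ideal, which is precisely the point the paper makes.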
Footnotes:
1. Proofs are trivial and are not given here. A random oracle alone can of course not be instantiated partially.
2. Same as directly below in this table.
3. See the full version of the paper, available soon on our homepage medsec.medien.uni-weimar.de

Indifferentiability was devised as a tool to detect subtle real-world weaknesses while working in the random oracle world. But we can prove H insecure and H_3 secure, although in the real world (i.e. comparing the instantiated hash functions) H_3 is sure to be substantially weaker than H. Additionally, the hash functions H and H_2 could be implemented using the same algorithms, but one is proved insecure while the other one is secure. What shall we conclude for the security of our instantiated hash function in the real world? We do not see how we can conclude that H has some real-world weaknesses that H_2 does not have.

References

[BBP04] M. Bellare, A. Boldyreva, A. Palacio. An Uninstantiable Random-Oracle-Model Scheme for a Hybrid-Encryption Problem. Eurocrypt 2004.
[BR06] M. Bellare, T. Ristenpart. Multi-Property-Preserving Hash Domain Extension and the EMD Transform. Asiacrypt 2006.
[BR93] M. Bellare, P. Rogaway. Random Oracles are Practical: A Paradigm for Designing Efficient Protocols. ACM Conference on Computer and Communications Security, 1993.
[CCH04] R. Canetti, O. Goldreich, S. Halevi. The Random Oracle Methodology, Revisited. Journal of the ACM, volume 51, 2004.
[CCH04a] R. Canetti, O. Goldreich, S. Halevi. On the Random Oracle Methodology as Applied to Length-Restricted Signature Schemes. Theory of Cryptography Conference (TCC) 2004.
[C05] J. Coron, Y. Dodis, C. Malinaud, P. Puniya. Merkle-Damgård Revisited: How to Construct a Hash Function. Crypto 2005.
[Da89] I. Damgård. A Design Principle for Hash Functions. Crypto '89.
[Go06] O. Goldreich. On Post-Modern Cryptography. Cryptology ePrint Archive, Report 2006/461.
[GT03] S. Goldwasser, Y. Tauman. On the (In)security of the Fiat-Shamir Paradigm. Symposium on Foundations of Computer Science (FOCS) 2003.
[Ko07] N. Koblitz. The Uneasy Relationship Between Mathematics and Cryptography. Notices of the American Mathematical Society, volume 54, 2007.
[KNU08] L. R. Knudsen.
Hash Functions and SHA-3. Invited talk at FSE 2008, slides/day_1_sess_2/knudsen-fse2008.pdf
[Li06] Constructing Secure Hash Functions from Weak Compression Functions: The Case for Non-Streamable Hash. Selected Areas in Cryptography (SAC) 2006.
[MRH04] U. Maurer, R. Renner, C. Holenstein. Indifferentiability, Impossibility Results on Reductions, and Applications to the Random Oracle Methodology. Theory of Cryptography Conference (TCC) 2004.
[Me89] R. Merkle. One-Way Hash Functions and DES. Crypto '89.
[Ni02] J. B. Nielsen. Separating Random Oracle Proofs from Complexity Theoretic Proofs: The Non-committing Encryption Case. Crypto 2002.
[NIST] National Institute of Standards and Technology (NIST). Tentative Timeline of the Development of New Hash Functions.
[RS08] T. Ristenpart, T. Shrimpton. How to Build a Hash Function from Any Collision-Resistant Function. Asiacrypt 2007.

Code based signature schemes

Pierre-Louis Cayrel
Université de Limoges, XLIM-DMI, 123, Av. Albert Thomas, Limoges, France

Abstract. In this paper, I propose a survey of signature schemes using error correcting codes. I describe coding theory, its application to cryptography, and the different identification and signature schemes using error correcting codes. Next I briefly describe a secure implementation of the Stern scheme against DPA [3], an identity-based signature scheme [2], and the Kabatianskii-Krouk-Smeets signature scheme [4].

1 Introduction

Coding theory deals with the error-prone process of transmitting data across noisy channels, via clever means, so that a large number of errors that occur can be corrected. It also deals with the properties of codes, and thus with their fitness for a specific application. Coding theory can also be used in cryptography: it is partly on the difficulty of the Syndrome Decoding problem and the Goppa Parameterized Bounded Decoding problem that the security of several cryptosystems (e.g. McEliece [10] or Niederreiter [11]) rests. What makes the Syndrome Decoding problem attractive for cryptographic applications is not only that it is NP-complete, but also the fact that, in general, a random instance of the problem is difficult. It is difficult to prove specific results on this matter, but it seems that, in terms of average-case complexity, the problem is almost always hard. Code-based cryptography was introduced by McEliece [10], two years after the introduction of public key cryptography by Diffie and Hellman [6] in 1976. In his paper McEliece proposed a public key encryption scheme. In 1986 Niederreiter proposed [11] an equivalent code-based cryptosystem [9]. McEliece thus first had the idea, in 1978, of using the theory of error-correcting codes for cryptographic purposes, and more specifically for an asymmetric encryption algorithm.
The principle of the protocol is for Alice to send a message containing a large number of errors, errors that only Bob knows how to detect and correct. In this paper, I present recent improvements in code-based cryptography. After detailing the identification and signature schemes using error-correcting codes, I briefly describe three new results on code-based signature schemes: a secure implementation of the Stern scheme (Section 3), an identity-based signature scheme (Section 4), and a study of the Kabatianskii-Krouk-Smeets signature scheme (Section 5).

2 Identification and signature with error correcting codes

An open question was the existence of a signature scheme based on coding theory. If one puts aside the signature scheme induced by the Fiat-Shamir paradigm from the Stern identification scheme of 1993, the first signature scheme was proposed in 2001 by Courtois, Finiasz and Sendrier in [5]. It adapts the Full Domain Hash approach of Bellare and Rogaway [1] to Niederreiter's encryption scheme. Another approach to this problem was investigated in 1997 by Kabatianskii, Krouk and Smeets, who proposed a few-times digital signature scheme based on random error-correcting codes.

2.1 Stern identification and signature scheme

At Crypto '93, Stern proposed a new identification and signature scheme based on coding theory [12]. Stern's scheme is an interactive zero-knowledge protocol which aims at enabling any user (usually called the prover and denoted here by P) to identify himself to another user (usually called the verifier and denoted here by V).

Let n and k be two integers such that k < n. Stern's scheme assumes the existence of a public (n-k) x n matrix H defined over F_2. It also assumes that an integer t ≤ n has been chosen. For security reasons (discussed in [12]) it is recommended that t is chosen slightly below the value given by the Gilbert-Varshamov bound. The matrix H and the weight t are protocol parameters and may be used by several (even numerous) different provers. Each prover P receives an n-bit secret key s_P (also denoted by s if there is no ambiguity about the prover) of Hamming weight t and computes a public identifier i_V such that i_V = H s_P^T. This identifier is calculated once in the lifetime of H and can thus be used for several identifications. When a user P needs to prove to V that he is indeed the person associated with the public identifier i_V, the two protagonists perform the following protocol, where h denotes a standard hash function:

1. [Commitment Step] P randomly chooses y ∈ F_2^n and a permutation σ of the n coordinates. Then P sends to V the commitments c_1, c_2 and c_3 such that
   c_1 = h(σ || H y^T); c_2 = h(σ(y)); c_3 = h(σ(y ⊕ s)),
   where h(a || b) denotes the hash of the concatenation of the sequences a and b.
2. [Challenge Step] V sends b ∈ {0, 1, 2} to P.
3. [Answer Step] Three possibilities:
   - if b = 0: P reveals y and σ.
   - if b = 1: P reveals (y ⊕ s) and σ.
   - if b = 2: P reveals σ(y) and σ(s).
4. [Verification Step] Three possibilities:
   - if b = 0: V verifies that c_1, c_2 have been honestly calculated.
   - if b = 1: V verifies that c_1, c_3 have been honestly calculated.
   - if b = 2: V verifies that c_2, c_3 have been honestly calculated, and that the weight of σ(s) is t.
5. Iterate steps 1-4 until the expected security level is reached.

Fig. 1. Stern's protocol

During the fourth step, when b equals 1, note that H y^T derives directly from H(y ⊕ s)^T, since we have H y^T = H(y ⊕ s)^T ⊕ i_V = H(y ⊕ s)^T ⊕ H s^T.
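One round of the protocol can be simulated end to end with toy parameters (far too small for any real security); SHA-256 stands in for h, and the random objects are drawn from a seeded RNG so the round is reproducible:

```python
import hashlib
import random

n, k, t = 16, 8, 4                     # toy parameters, far too small to be secure
r = n - k

def hsh(*parts):
    """h: hash of the concatenation of its arguments (SHA-256 as a stand-in)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(bytes(p))
    return h.digest()

def syndrome(H, v):
    """H v^T over F_2."""
    return [sum(H[i][j] & v[j] for j in range(n)) % 2 for i in range(r)]

def apply_perm(sigma, v):
    return [v[sigma[j]] for j in range(n)]

rng = random.Random(1)                 # seeded for reproducibility
H = [[rng.randrange(2) for _ in range(n)] for _ in range(r)]
s = [0] * n
for j in rng.sample(range(n), t):      # secret key s of Hamming weight t
    s[j] = 1
i_V = syndrome(H, s)                   # public identifier i_V = H s^T

# Commitment step
y = [rng.randrange(2) for _ in range(n)]
sigma = list(range(n))
rng.shuffle(sigma)
y_xor_s = [a ^ b for a, b in zip(y, s)]
c1 = hsh(sigma, syndrome(H, y))        # c_1 = h(sigma || H y^T)
c2 = hsh(apply_perm(sigma, y))         # c_2 = h(sigma(y))
c3 = hsh(apply_perm(sigma, y_xor_s))   # c_3 = h(sigma(y XOR s))

def verify(b):
    """Verifier's check after challenge b, given the prover's answer."""
    if b == 0:                         # answer: y and sigma
        return c1 == hsh(sigma, syndrome(H, y)) and c2 == hsh(apply_perm(sigma, y))
    if b == 1:                         # answer: y XOR s and sigma
        hy = [a ^ c for a, c in zip(syndrome(H, y_xor_s), i_V)]  # H y^T = H(y^s)^T + i_V
        return c1 == hsh(sigma, hy) and c3 == hsh(apply_perm(sigma, y_xor_s))
    sy, ss = apply_perm(sigma, y), apply_perm(sigma, s)          # answer: sigma(y), sigma(s)
    return (c2 == hsh(sy)
            and c3 == hsh([a ^ c for a, c in zip(sy, ss)])
            and sum(ss) == t)
```

The b = 1 branch uses the identity stated above, and the b = 2 branch relies on σ(y ⊕ s) = σ(y) ⊕ σ(s), so the verifier never sees y and s together.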
As proved in [12], the protocol is zero-knowledge, and for one round the probability that a dishonest person succeeds in cheating is 2/3. Therefore, to get a confidence level of β, the protocol must be iterated a number of times k such that (2/3)^k ≤ β holds. When the number of iterations satisfies this condition, the security of the scheme relies on the NP-complete problem SD. By using the so-called Fiat-Shamir paradigm [7], it is theoretically possible to convert Stern's protocol into a signature scheme, but then the signature is very long (about 140 kbit for a security of 2^80 binary operations). Notice that this is large in comparison with classical signature schemes, but close to, or less than, the size of many files used in everyday life. One can also notice that the memory of common devices has increased considerably since the time (1978) when the first code-based system was proposed.

2.2 The CFS signature scheme

As we already mentioned, contrary to the RSA scheme, which is naturally invertible, the McEliece and Niederreiter schemes are not invertible; i.e., if one starts from a random element y of F_2^n and a code C[n, k, d] that we are able to decode up to d/2 errors, it is almost certain that we will not be able to decode y into a codeword of C. This comes from the fact that the density of the decodable part of the whole space is very small. Courtois, Finiasz and Sendrier proposed in [5] a practical signature scheme based on coding theory (CFS). The idea of the CFS scheme is to fix parameters [n, k, d] such that the density of decodable words is reasonable, and to pick random elements until one can be decoded. More precisely, given a message M to sign and a hash function h with values in {0,1}^{n-k}, we try to find a way to build s ∈ F_2^n of given weight t such that h(M) = H s^T. For D() a decoding algorithm, the algorithm works as follows:

1. i ← 0
2. while h(M || i) is not decodable do i ← i + 1
3. compute s = D(h(M || i))

where arg_1 || arg_2 denotes the concatenation of arg_1 and arg_2.

Fig. 2. The CFS signature scheme

We get at the end a pair {s, j} such that h(M || j) = H s^T. Let us note that we can assume that s has weight t = ⌊d/2⌋. Using a Goppa code, we start from a word of length k = n - mt that we transform into a codeword of length n = 2^m, with an error of weight t. The decoding algorithm (the trapdoor) permits us to decode any word built that way, that is, any word in the set of all words at distance t or less from a codeword. These words form a set of balls of radius t centered on the codewords.

2.3 The Kabatianskii-Krouk-Smeets (KKS) signature scheme

Kabatianskii et al. [8] proposed a digital signature scheme based on random linear codes. They exploited the fact that for every linear code the set of its correctable syndromes contains a linear subspace of relatively large dimension. First, we suppose that C is defined by a random parity check matrix H. We also assume that we have a very good estimate d of its minimum distance. Next, we consider a linear code U of length n' ≤ n and dimension k defined by a generator matrix G = [g_{i,j}]. We suppose that there exist two integers t_1 and t_2 such that t_1 ≤ wt(u) ≤ t_2 for any non-zero codeword u ∈ U. Let J be a subset of {1, ..., n} of cardinality n', let H(J) be the submatrix of H consisting of the columns h_i with i ∈ J, and define the r x k matrix F := H(J) G^T. The application f : GF(q)^k → M_{n,t} is then defined by f(m) = m G* for any m ∈ GF(q)^k, where G* = [g*_{i,j}] is the k x n matrix with g*_{i,j} = g_{i,j} if j ∈ J and g*_{i,j} = 0 otherwise. The public application χ is then χ(m) = F m^T, because H G*^T = H(J) G^T. The main difference with the CFS signature scheme (Section 2.2) resides in the verification step, where the receiver checks that t_1 ≤ wt(z) ≤ t_2 and F m^T = H z^T.

1. Setup.
The signer chooses a random matrix H = [I_r | D] that represents the parity check matrix of a code C[n, n-r, d]. He also chooses a random generator matrix G that defines a code U[n', k, t_1] such that wt(u) ≤ t_2 for any u ∈ U. He chooses a random set J ⊆ {1, ..., n} of size n' and forms F = H(J) G^T.
2. Parameters. Private key: the set J ⊆ {1, ..., n} and the k x n' matrix G. Public key: the r x k matrix F and the r x n matrix H.
3. Signature. Given m ∈ GF(q)^k, the signer sends (m, m G*).
4. Verification. Given (m, z), the receiver verifies that t_1 ≤ wt(z) ≤ t_2 and F m^T = H z^T.

Fig. 3. The KKS signature scheme

Compared with other public-key cryptosystems, which involve modular exponentiation, these schemes have the advantage of high-speed encryption and decryption. However, they suffer from the fact that the public key is very large.

2.4 Pros and cons

Stern's scheme: the operations used are really simple, identification is possible, and the signature is produced quickly and in a zero-knowledge context; using a quasi-cyclic construction the public key is small, but the signature produced is large.

CFS's scheme: On the one hand, we have seen that both the key size and the signing cost remain high. On the other hand, the signature length and the verification cost remain extremely small.
KKS's scheme: The signature is quite small and the system is fast, and with an almost quasi-cyclic construction the public key is small; but the fact that only a few signatures can be produced per key is a real drawback.

3 Secure implementation of the Stern Scheme

In [3], the authors present the first smartcard implementation of the Stern authentication and signature scheme, secured against side channel attacks. On the whole, this provides a secure implementation of a very practical authentication (and possibly signature) scheme which is most attractive for light-weight cryptography. A quick analysis of Stern's protocol shows that the different steps are composed of the four following main operators:
Matrix-vector product: the multiplication of a vector by a random double circulant matrix;
Hash function: the action of a hash function;
Permutation: the generation and the action of a random permutation on words;
PRNG: a pseudo-random generator used to generate random permutations and random vectors.
The securing is based on the fact that the linear operations (scalar products or bit-permutations) are easy to implement in hardware and are very efficient to secure against DPA attacks. The implementation achieves an authentication in 5.5 seconds and a signature in 22 seconds. The communication cost is around 40 kbits for the authentication scheme and around 140 kbits for the signature. It must be noted that the timing performance would be greatly improved by using a hardware SHA-256 instead of a software implementation. These figures do not include countermeasures, but their overhead is really small.

4 Identity-based Signature Scheme

In [2], the authors present an IBI and an IBS scheme based on error-correcting codes.
Here is the scheme. Let C be a q-ary linear code of length n and dimension k, and let H be a parity check matrix of C. Set H′ = V H P with V invertible and P a permutation matrix. Let h be a hash function with values in {0, 1}^{n−k}. Let id_A be Alice's identity; id_A can be computed by everyone. Similarly, H′ is public. The decomposition of H′ is, on the contrary, a secret of the authority and not of Alice. We describe an identity-based authentication method in which Alice, the prover, identifies herself to Bob, the verifier. Alice first has to authenticate herself in a classic way to obtain the private key, which then allows her to authenticate herself to a third party such as Bob. For that purpose, we use a variation on the identity. Suppose we know Alice's identity id_A. We look for a way to find s ∈ E_{q,n,t} such that h(id_A) = H′s^T. The main point is to decode h(id_A). The main problem is that h(id_A) is in general not in the image of x ↦ H′x^T, that is to say, h(id_A) is in general not a decodable syndrome. That problem can be solved thanks to the CFS signature scheme. At the end we get a couple {s, j} such that h(id_A‖j) = H′s^T, and we can note that s has weight t or less. We then use a slight variation of Stern's protocol. We suppose that A has obtained a couple {s, j} verifying h(id_A‖j) = H′s^T; h(id_A‖j) is A's public key. The new protocol is based on Stern's protocol but with two changes (steps 1 and 4). The security of this system is the same as the security of Stern's. It is possible to derive a signature scheme from Stern's zero-knowledge authentication scheme by classical constructions, and hence to derive an IBS scheme. This is the first proposed identity-based scheme not based on number theory.
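The counter trick (hash the identity together with a counter j, incrementing j until the digest is a decodable syndrome) can be sketched with a toy random code and brute-force syndrome decoding. The code, hash truncation, and parameters below are illustrative stand-ins for the Goppa code and its trapdoor decoder, not the actual CFS construction:

```python
# Toy sketch of the CFS-style counter trick used above: hash msg||i until
# the digest is a decodable syndrome.  A small random binary code with
# brute-force syndrome decoding stands in for the Goppa code trapdoor.
import hashlib
import itertools
import random

N, R, T = 16, 10, 2            # toy code length, redundancy, error weight

random.seed(1)
H = [[random.randint(0, 1) for _ in range(N)] for _ in range(R)]  # parity check

def syndrome(e):
    """s = H e^T over GF(2)."""
    return tuple(sum(row[j] * e[j] for j in range(N)) % 2 for row in H)

# Decodable syndromes: those of all words of weight <= T (brute force).
decode_table = {}
for w in range(T + 1):
    for pos in itertools.combinations(range(N), w):
        e = [1 if j in pos else 0 for j in range(N)]
        decode_table.setdefault(syndrome(e), e)

def h(msg, i):
    """Hash msg||i down to an R-bit syndrome."""
    dig = hashlib.sha256(f"{msg}||{i}".encode()).digest()
    bits = "".join(f"{b:08b}" for b in dig)
    return tuple(int(c) for c in bits[:R])

def sign(msg):
    i = 0
    while h(msg, i) not in decode_table:   # counter loop of the scheme
        i += 1
    return decode_table[h(msg, i)], i      # signature is the couple {s, j}

def verify(msg, e, i):
    return sum(e) <= T and syndrome(e) == h(msg, i)

s, j = sign("alice")
assert verify("alice", s, j)
```

With these toy parameters roughly 137 of the 1024 possible syndromes are decodable, so a handful of counter increments suffice on average, mirroring the behaviour of the counter loop in the scheme.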
The scheme combines two well-known schemes and inherits the bad properties of both: the public data is large, the communication cost of the IBI scheme is large, and the signature length of the IBS scheme is also very large. Besides these weaknesses, the scheme presents the first alternative to number theory for ID-based cryptography and may open a new area of research.

5 Kabatianskii-Krouk-Smeets Signature Scheme

In [4], the authors investigate the security and the efficiency of the KKS proposal. They show that a passive attacker who intercepts just a few signatures can recover the private key, and they give precisely the number of signatures required to achieve this goal. This enables them to prove that all the schemes given in the original paper can be broken with at most 20 signatures. They improve the efficiency of these schemes, firstly by providing parameters that enable about 40 messages to be signed, and secondly by describing a way to extend these few-time signatures into classical multi-time signatures. They finally study the key sizes and a means to reduce them through more compact (quasi-cyclic) matrices. Such matrices are completely determined by their first row. An r × n matrix M with r ≥ 2 and n ≥ 2 is an almost quasi-cyclic matrix if each row vector is rotated one element to the right relative to the preceding row vector:

        ( c_1       c_2       ...  c_n       )
        ( c_n       c_1       ...  c_{n-1}   )
    M = ( c_{n-1}   c_n       ...  c_{n-2}   )
        ( ...                                )
        ( c_{n-r+2} c_{n-r+3} ...  c_{n-r+1} )

Their new scheme relies on the use of random almost quasi-cyclic codes rather than purely random linear codes.

6 Conclusion

Recently a few improvements have been made in code-based signature schemes. The reduction of the size of the public key is a real breakthrough for use in low-resource contexts [3]. We have shown in this paper that code-based signature schemes can be interesting in different areas (embedded devices, Section 3; identity-based signatures, Section 4; few-time signature schemes, Section 5).
Besides this, code-based cryptography has the following three advantages: it can be an alternative to number-theory based protocols should a quantum computer ever be built; code-based cryptography is based only on linear operations (scalar products or bit-permutations); and the secret key is smaller than that of the other protocols (a few hundred bits) for the same security level.

References

1. M. Bellare and P. Rogaway. Random oracles are practical: a paradigm for designing efficient protocols. In ACM Conference on Computer and Communications Security, pages 62–73.
2. P.-L. Cayrel, P. Gaborit, and M. Girault. Identity-based identification and signature schemes using correcting codes. In D. Augot, N. Sendrier, and J.-P. Tillich, editors, WCC. INRIA.
3. P.-L. Cayrel, P. Gaborit, and E. Prouff. Secure implementation of the Stern authentication and signature schemes for low-resource devices. CARDIS.
4. P.-L. Cayrel, A. Otmani, and D. Vergnaud. On Kabatianskii-Krouk-Smeets signatures. WAIFI 2007, C. Carlet and B. Sunar, editors, Springer LNCS.
5. N. Courtois, M. Finiasz, and N. Sendrier. How to achieve a McEliece-based digital signature scheme. Springer-Verlag.
6. W. Diffie and M. E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, IT-22(6).
7. A. Fiat and A. Shamir. How to prove yourself: practical solutions to identification and signature problems. In C. Pomerance, editor, Advances in Cryptology – CRYPTO '87, volume 263 of LNCS. Springer-Verlag.
8. G. Kabatianskii, E. Krouk, and B. J. M. Smeets. A digital signature scheme based on random error-correcting codes. IMA Int. Conf., Springer LNCS 1355.
9. Y. Li, R. Deng, and X. Wang. On the equivalence of McEliece's and Niederreiter's cryptosystems. IEEE Trans. on Information Theory, 40(1).
10. R. J. McEliece. A public-key cryptosystem based on algebraic coding theory. JPL DSN Progress Report.
11. H. Niederreiter. Knapsack-type cryptosystems and algebraic coding theory.
Problems Control Inform. Theory, 15(2).
12. J. Stern. A new identification scheme based on syndrome decoding. In D. Stinson, editor, Advances in Cryptology – CRYPTO '93, volume 773 of LNCS. Springer-Verlag.

Building Stream Ciphers from FCSRs

Dirk Stegemann
Theoretical Computer Science, University of Mannheim, Mannheim, Germany

1 Introduction

Stream ciphers are commonly used for fast encryption of arbitrarily long data streams. A common design principle is to use a keystream generator to produce a pseudorandom stream of running key bits z = (z_t)_{t≥0}, which is added bitwise to the plaintext stream (p_t)_{t≥0} in order to obtain the ciphertext (c_t)_{t≥0} with c_t = p_t ⊕ z_t. The receiver computes the same keystream z and recovers the plaintext via p_t = c_t ⊕ z_t. Many keystream generators operate as finite state machines (FSMs). Their initial state is derived from a secret key K and a public initialization vector IV by a procedure named key/IV setup, and in each clock t the FSM outputs a piece of keystream and changes its state according to the state transition function. For the construction of the FSM, linear feedback shift registers (LFSRs) have been widely used in practical applications due to their pseudorandomness properties and their efficiency. However, in the light of recent cryptanalysis results on LFSR-based stream ciphers (especially correlation attacks and algebraic attacks), it seems appropriate to look for other constructions that provide similar pseudorandomness properties but are more resistant to known attacks. Feedback with carry shift registers (cf. for example [4]), which we address in this paper, have been identified as possible alternatives.

2 Preliminaries

We call a state of a finite state machine (e.g., an FCSR) periodic if, left to run, the machine will return to that same state after a finite number of steps. We call a sequence u = (u_i)_{i≥0} strictly periodic (or simply periodic) with period T if u_{i+T} = u_i for all i ≥ 0. We call a sequence u eventually periodic if there exists a t ≥ 0 such that (u_i)_{i≥t} is periodic. By wt(d) we denote the Hamming weight of a binary vector d.
We will identify a vector (u_0, …, u_{k−1}) ∈ {0, 1}^k with the integer u = Σ_{i=0}^{k−1} u_i 2^i. A 2-adic integer is a formal power series α = Σ_{i=0}^{∞} u_i 2^i with u_i ∈ {0, 1}. The collection of all such formal power series forms the ring of 2-adic numbers. This ring in particular contains the rational numbers p/q, where p and q are integers and q is odd. 2-adic numbers and eventually periodic sequences are linked by the following theorem [1].

Theorem 1 There is a one-to-one correspondence between rational numbers α = p/q (with odd q) and eventually periodic binary sequences u which associates to each such rational number α the bit sequence u = (u_0, u_1, …) of its 2-adic expansion. The sequence u is strictly periodic if and only if α ≤ 0 and α ≥ −1.
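The dichotomy in Theorem 1 can be illustrated with a short computation of 2-adic digits (a sketch; the digit rule below follows from q being odd, so the next digit is simply the parity of the current numerator):

```python
# Sketch: compute the 2-adic expansion of p/q (q odd) digit by digit.
# Because q is odd, the next digit u is the parity of the current
# numerator p, after which p <- (p - u*q) / 2.  The numerator values
# must eventually repeat, so the digit sequence is eventually periodic.
def two_adic_expansion(p, q, steps):
    digits, numerators = [], []
    for _ in range(steps):
        numerators.append(p)
        u = p & 1                  # parity of p (works for negative p too)
        digits.append(u)
        p = (p - u * q) // 2
    return digits, numerators

# -1 <= p/q <= 0: strictly periodic (the initial numerator recurs)
_, nums = two_adic_expansion(-3, 5, 32)
assert nums[0] in nums[1:]

# p/q > 0: only eventually periodic (the initial numerator never recurs)
_, nums = two_adic_expansion(3, 5, 32)
assert nums[0] not in nums[1:]
```

The example values −3/5 and 3/5 are our illustrative choices; any rational with odd denominator behaves analogously.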

[Figure 1: FCSR in Galois and Fibonacci architecture]

3 Feedback With Carry Shift Registers (FCSRs)

3.1 Galois and Fibonacci Architectures

An FCSR of length n in Fibonacci architecture contains n binary register cells (y_0, …, y_{n−1}) with fixed binary feedback taps (d_0, …, d_{n−1}) and an additional l-bit memory b, see Fig. 1. From an initial configuration (y, b), the FCSR outputs in each clock t the value y_0, computes the sum σ = b + Σ_{i=0}^{n−1} y_i d_{n−i−1} over the integers, and updates the register and memory according to b = σ div 2 and y = (σ mod 2, y_{n−1}, …, y_1). If the Fibonacci FCSR is in a periodic state, we have 0 ≤ b < wt(d), i.e., we will need at most ⌊log_2(wt(d) − 1)⌋ + 1 memory bits to store b [4].

An FCSR of length n in Galois architecture contains n binary register cells x_i with fixed binary feedback taps (d_0, …, d_{n−1}) and n − 1 memory cells a_0, …, a_{n−2}, see Fig. 1. Starting from an initial configuration (x, a), the Galois FCSR outputs in each clock the value x_0, computes the sums σ_i = x_{i+1} + a_i d_i + x_0 d_i for 0 ≤ i < n (with x_n = 0 and a_{n−1} = 0), and updates x_i to σ_i mod 2 and a_i to σ_i div 2. We will assume that memory cells are only present at those positions with feedback taps, i.e., a_i = 0 if d_i = 0 for all 0 ≤ i < n − 1.

3.2 Properties of FCSR-Sequences

Identify a Galois state (x, a) with the integer p = x + 2a, a Fibonacci state (y, b) with the integer

    p = b·2^n − Σ_{k=0}^{n−1} Σ_{i=0}^{k} q_i y_{k−i} 2^k,

and define for both architectures the connection integer q as q = 1 − 2d. Then the output of the FCSR is the 2-adic expansion of α = p/q (cf. [1], Proposition 1, and [4], Theorem 2.4). Theorem 1 implies that the output will be strictly periodic if and only if 0 ≤ p ≤ |q|.
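As a sanity check, the Galois update rule can be sketched and compared against the 2-adic expansion of p/q computed directly from the definition. The register below (taps d = 19, hence q = 1 − 2d = −37) is our illustrative toy choice, not a parameter set from the literature:

```python
# Toy Galois FCSR: sigma_i = x_{i+1} + d_i*(a_i + x_0), bit/carry split,
# output x_0.  Its output stream is checked against the 2-adic expansion
# of p/q with p = x + 2a and connection integer q = 1 - 2d.
D_BITS = [1, 1, 0, 0, 1]               # d = 19, so q = 1 - 2*19 = -37
Q = 1 - 2 * 19

def galois_step(x, a, d):
    """One clock of the Galois FCSR; returns (output bit, new x, new a)."""
    out = x[0]
    sigma = [(x[i + 1] if i + 1 < len(x) else 0) + d[i] * (a[i] + out)
             for i in range(len(x))]
    return out, [s % 2 for s in sigma], [s // 2 for s in sigma]

def expansion_digit(p, q):
    """Next 2-adic digit of p/q and the updated numerator."""
    u = p & 1
    return u, (p - u * q) // 2

x, a = [1, 0, 0, 0, 0], [0] * 5        # initial state with p = x + 2a = 1
p = 1
fcsr_out, ref_out = [], []
for _ in range(72):
    z, x, a = galois_step(x, a, D_BITS)
    u, p = expansion_digit(p, Q)
    fcsr_out.append(z)
    ref_out.append(u)

assert fcsr_out == ref_out             # FCSR output = 2-adic expansion of p/q
assert fcsr_out[:36] == fcsr_out[36:]  # period 36 = ord of 2 modulo 37
```

Since |q| = 37 is prime and 2 is a primitive root modulo 37, this toy register is maximum-length and its output has period 36, as the final assertion confirms.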
Proposition 2 For an FCSR with connection integer q and a periodic initial state corresponding to p ∈ {0, …, |q| − 1}, the sequence of integer representations of the states (p_t)_{t≥0} is given by p_t = 2^t p mod q, and the t-th output bit can be computed as z_t = p_t mod 2 = (2^t p mod q) mod 2.

If 0 < p < |q|, q odd, and p and q are coprime, Prop. 2 implies that the period of the sequence (p_t)_{t≥0} is the order of 2 modulo q (cf. Theorem 2 in [1]). The period reaches its maximum |q| − 1 if |q| is a prime for which 2 is a primitive root. We call such FCSRs maximum-length FCSRs and the

sequences they produce l-sequences. The properties of 2-adic numbers imply that an l-sequence consists of two half-periods, where the second half is the binary complement of the first [1].

For a binary sequence u, we denote by its 2-adic complexity the length of the shortest FCSR that produces the sequence. The 2-adic complexity, along with an actual FCSR that produces the given sequence, can be efficiently computed using FCSR-synthesis algorithms (cf. [6] for an overview). Clearly, the 2-adic complexity is upper-bounded by the length of the sequence, and we can expect the 2-adic complexity of a random sequence to be close to this bound, whereas its value will be small for an FCSR-sequence. This bias can be used to efficiently distinguish FCSR-sequences from truly random sequences. However, no statistical tests other than computing the 2-adic complexity are known to reveal deviations from the behaviour expected of truly random sequences. In particular, the expected autocorrelation of maximum-length FCSR-sequences is zero [10], and the linear complexity is high [8]. Experiments indicate that FCSR-sequences pass the NIST statistical test suite [7].

3.3 Particular Properties of the Galois Architecture

Implementors will often prefer the Galois architecture over the Fibonacci architecture, since the size of the memory is intrinsically limited and the memory bits can be updated in parallel. For a Galois FCSR, we have 0 ≤ p ≤ |q|. The output corresponds to the all-zero sequence if p = 0 and to the all-one sequence if p = |q| [1]. We call Galois states (x, a) with p ∈ {0, |q|} invariant states. The sequence (x_{i,t})_{t≥0} of values taken by main register cell i is again an FCSR-sequence, more precisely the 2-adic expansion of p_i/q with p_i = F_i(x, a) q + M_i p, where F_i(x, a) = Σ_{j=i}^{n−1} (x_j + 2a_j) 2^{j−i}, with constants M_i = 2 Σ_{j=i}^{n−1} d_j 2^{j−i} [2]. This expression can be further simplified for periodic initial states [3].
Proposition 3 For a maximum-length Galois FCSR with connection integer q, a periodic initial state (x(0), a(0)), and p_0 = x(0) + 2a(0), the sequence (x_{i,t})_{t≥0} of values taken by a fixed main register cell i corresponds to (p_{t+s_i} mod 2)_{t≥0} with s_i = log_2(M_i) mod q and M_i = 2 Σ_{j=i}^{n−1} d_j 2^{j−i}.

Proposition 3 implies that the sequence (x_{i,t})_{t≥0} corresponds to the sequence produced by the whole FCSR (which is in turn equal to the sequence (x_{0,t})_{t≥0}) shifted by s_i positions. Note that the phase shifts s_i are independent of the initial state p and depend on i and q only. Note also that although equivalent states produce the same output, the sequences of states (x(t), a(t))_{t≥0} obtained by running the FCSR from equivalent starting states may be different.

3.4 Mappings between Galois and Fibonacci States

There is an onto function E : {(x, a)} \ {(1, …, 1; a_0, …, a_{n−2})} → Z_q that assigns to a Galois state (x, a) the number E(x, a) = x + 2a mod q [4]. Moreover, there exists a one-to-one mapping S from Z_q onto the set L of strictly periodic states of the Fibonacci FCSR with connection integer q except for the state (1, …, 1; wt(q + 1) − 1). More precisely, the bijective mapping S assigns to a p ∈ Z_q the initial Fibonacci main register state y_i = (2^i p mod q) mod 2 for 0 ≤ i ≤ n − 1 and the initial memory state

    b = (1/2^n) ( p + Σ_{k=0}^{n−1} Σ_{i=0}^{k} q_i y_{k−i} 2^k ).

Conversely, for a given periodic Fibonacci state (y, b) the corresponding integer p will satisfy 0 ≤ p < |q|. Obviously, the mapping E from the Galois states to Z_q is not one-to-one, i.e., generally more than one state is mapped to the same p ∈ Z_q. However, we can compute for a given p ∈ Z_q the uniquely determined corresponding periodic state (x, a) [3].

Proposition 4 For all p ∈ Z_q, the only strictly periodic state (x, a) with x + 2a = p of a maximum-length Galois FCSR with connection integer q is given by x_i = (M_i p mod q) mod 2 and a = (p − x)/2, with M_i defined as in Prop. 3.

For an initial state of a Galois FCSR with connection integer q, we can thus compute a (periodic) initial state of a Fibonacci FCSR with connection integer q (and vice versa) such that the two registers will produce the same output (cf. [4], Corollary 3.7). Moreover, we can map between periodic Fibonacci states and periodic Galois states using Prop. 4.

Corollary 5 Let R_g and R_f denote a Galois resp. a Fibonacci FCSR with the same connection integer q. Then there is a bijective mapping T between the periodic states of R_g and R_f.

4 From FCSRs to Stream Ciphers

We have observed in Sect. 3.2 that FCSR-sequences and especially l-sequences have many desirable pseudorandomness properties but low 2-adic complexity, which prevents them from being used directly as keystream. In the context of LFSRs, which suffer from a similar weakness (their low, efficiently computable linear complexity), popular approaches are combination and filter generators. A combination generator consists of a small number of feedback shift registers and a Boolean function f that combines the output sequences of the internal registers in order to produce the output keystream. A filter generator contains only one feedback shift register and a Boolean filter function g that produces the output keystream from the current content of certain register cells.
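The phase-shift and filter-generator ideas can be sketched on a toy maximum-length Galois FCSR. The taps (d = 19, hence q = 1 − 2d = −37) and the XOR filter below are our illustrative choices, not the parameters of any proposed cipher; the point is that each main-register cell emits a shifted copy of the output sequence, so an XOR filter over cells equals an XOR of shifted parts of one FCSR-sequence:

```python
# Toy maximum-length Galois FCSR plus an XOR filter over some cells.
# We verify that (a) every cell's sequence is a shifted copy of the
# output sequence and (b) the filter keystream equals the XOR of the
# correspondingly shifted parts of that one sequence.
D_BITS = [1, 1, 0, 0, 1]      # taps: d = 19, connection integer q = -37
TAPS = [0, 1, 4]              # cells fed into the XOR filter (our choice)
WARMUP, T = 72, 36            # transient skip; 36 = period of this FCSR

def galois_step(x, a, d):
    out = x[0]
    sigma = [(x[i + 1] if i + 1 < len(x) else 0) + d[i] * (a[i] + out)
             for i in range(len(x))]
    return out, [s % 2 for s in sigma], [s // 2 for s in sigma]

def run(x, a, steps):
    states = []
    for _ in range(steps):
        states.append(x)               # record main register before clocking
        _, x, a = galois_step(x, a, D_BITS)
    return states

states = run([1, 0, 0, 0, 0], [0] * 5, WARMUP + 2 * T)[WARMUP:]
z = [s[0] for s in states]             # cell 0 carries the ordinary output

# each cell's sequence is a shifted copy of z (phase shift property)
shifts = {i: next(s for s in range(T)
                  if [st[i] for st in states][:T] == z[s:s + T])
          for i in range(5)}

# the filter keystream equals the XOR of shifted parts of the same sequence
keystream = [sum(st[i] for i in TAPS) % 2 for st in states[:T]]
expected = [sum(z[t + shifts[i]] for i in TAPS) % 2 for t in range(T)]
assert keystream == expected
```

The warm-up steps are there only to make sure the recorded states lie on the periodic cycle; afterwards the shift of each cell is found empirically rather than via the formula for s_i.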
It seems natural to apply the ideas of combination and filter generators in the FCSR context. This has been done in the case of the F-FCSR stream cipher family [1], which is to the best of our knowledge the first FCSR-based stream cipher. F-FCSR is a filter generator based on a single Galois FCSR and a filter function g that simply computes the binary XOR of its inputs. The initialization procedure of F-FCSR ensures that the initial state of the generator is periodic. Hence, by Prop. 3, the keystream generation procedure of F-FCSR is equivalent to taking the bitwise XOR-sum of different parts of the same FCSR-sequence. This design is motivated by the conjecture that linear and 2-adic operations are unrelated and that the correlation between two distant parts of the same FCSR-sequence is low. At present, we are not aware of any cryptanalytic results that contradict this assumption.¹

Proposition 3 further implies that a Galois-based filter generator like F-FCSR can be equivalently represented as a combination generator based on Galois FCSRs that contains as many FCSRs as the filter function g has inputs, where the FCSR producing the sequence (x_{i,t})_{t≥0} is initialized with p_{s_i}. Furthermore, one or more Galois registers in the combination generator may be replaced by Fibonacci registers producing the same output according to Corollary 5. We can also build an equivalent filter generator based on a Fibonacci FCSR with the following result [3].

¹ Generic attack strategies have been reported in [1] and [9] but are less efficient than exhaustive search for well-chosen cipher parameters.

Proposition 6 The value x_i of the i-th cell in the main register of the Galois FCSR can be computed from the (strictly) periodic state (y, b) of the corresponding Fibonacci FCSR by

    x_i = ( M_i ( b·2^n − Σ_{k=0}^{n−1} Σ_{j=0}^{k} d_{j−1} y_{k−j} 2^k ) mod q ) mod 2.

Note that these representations are all equivalent in the sense that they produce the same keystream sequence from a given (possibly transformed) starting state, but one representation may be more suitable for further cryptanalysis than another.

Finally, we want to point out that the security of the keystream generation (based on a secret initial state) does not imply the security of the whole cipher, since the initial state of the FSM is not entirely secret but partly depends on the publicly known initialization vector. In fact, many stream ciphers (particularly an early version of F-FCSR [5]) have been broken by exploiting weaknesses in the key/IV setup.

References

[1] F. Arnault and T. P. Berger. Design and properties of a new pseudorandom generator based on a filtered FCSR automaton. IEEE Trans. Comp., 54(11).
[2] F. Arnault, T. P. Berger, and M. Minier. Some results on FCSR automata with applications to the security of FCSR-based pseudorandom generators. IEEE Trans. Inform. Theory, 54(2).
[3] S. Fischer, W. Meier, and D. Stegemann. Equivalent representations of the F-FCSR keystream generator. In Christophe de Cannière and Orr Dunkelman, editors, The State of the Art of Stream Ciphers (SASC 2008) Workshop Record, pages 87–96.
[4] M. Goresky and A. Klapper. Fibonacci and Galois representations of feedback-with-carry shift registers. IEEE Trans. Inform. Theory, 48(11).
[5] È. Jaulmes and F. Muller. Cryptanalysis of the F-FCSR stream cipher family. In B. Preneel and S. Tavares, editors, Proc. of Selected Areas in Cryptography (SAC 2005), volume 3897 of LNCS. Springer.
[6] A. Klapper. A survey of feedback with carry shift registers. In T. Helleseth et al., editors, Proc.
of SETA 2004, volume 3486 of LNCS. Springer.
[7] NIST. A statistical test suite for the validation of random number generators and pseudo random number generators for cryptographic applications.
[8] C. Seo, S. Lee, Y. Sung, K. Han, and S. Kim. A lower bound on the linear span of an FCSR. IEEE Trans. Inform. Theory, 46(2).
[9] D. Stegemann. Extended BDD-based cryptanalysis of keystream generators. In C. Adams, A. Miri, and M. Wiener, editors, Proc. of Selected Areas in Cryptography (SAC 2007), volume 4876 of LNCS. Springer.
[10] H. Xu and W.-F. Qi. Autocorrelations of maximum-length FCSR sequences. SIAM J. Discrete Math., 20(3).

Slid Pairs in Salsa20

Deike Priemuth-Schmid and Alex Biryukov
FSTC, University of Luxembourg
6, rue Richard Coudenhove-Kalergi, L-1359 Luxembourg
(deike.priemuth-schmid,

The stream cipher Salsa20 is one of the finalists of the eSTREAM project [4] and is in the final portfolio of promising new stream ciphers. In 2005 Bernstein [2] submitted the original Salsa20 with 20 rounds; later the 8- and 12-round versions Salsa20/8 and Salsa20/12 were also proposed. We show that the initialization and key-stream generation of this cipher is slidable, i.e. one can find distinct (key, IV) pairs that produce closely related key-streams. Previous attacks on Salsa used differential cryptanalysis exploiting a truncated differential over three or four rounds. The first attack was presented by Crowley [3], who could break the 5-round version of Salsa20 within a claimed number of trials. Later a four-round differential was exploited by Fischer et al. [5] to break 6 rounds and by Tsunoo et al. [6] to break 7 rounds. The currently best attack by Aumasson et al. [1] covers the 8-round version of Salsa20.

We show that the following observation holds: suppose that you are given two black boxes, one with Salsa20 and one with a random mapping. The attacker is allowed to choose a relation F for a pair of inputs, after which a secret initial input x is chosen and the pair (x, F(x)) is encrypted either by Salsa20 or by the random mapping. We stress that only the relation F is known to the attacker. The goal of the attacker, given a pair of ciphertexts, is to tell whether they were encrypted by Salsa20 or by the random mapping. To make the life of the attacker more difficult, the pair may be hidden in a large collection of other ciphertexts. It is clear that for a truly random mapping no useful relation F would exist, and moreover there is no way of checking a large list except by checking all the pairs or doing a birthday attack.
On the other hand, Salsa20 can easily be distinguished from random in both scenarios if F is a carefully selected function related to the round structure of Salsa20. Moreover, since in Salsa20 the initial state is initialized with the secret key, known nonce, counter and constants, this yields not only a distinguishing but also a complete key-recovery attack. Our attacks are independent of the number of rounds in Salsa and thus work for all three versions of Salsa. We also show a general birthday attack on 256-bit key Salsa20, which can be further sped up by a factor of two using the sliding observations.

Brief Description of Salsa20

The Salsa20 encryption function uses the Salsa20 hash function in counter mode. The internal state of Salsa20 is a 4 × 4 matrix of 32-bit words. A vector (y_0, y_1, y_2, y_3) of four words is transformed into (z_0, z_1, z_2, z_3) by calculating¹

    z_1 = y_1 ⊕ ((y_0 + y_3) ≪ 7),
    z_2 = y_2 ⊕ ((z_1 + y_0) ≪ 9),
    z_3 = y_3 ⊕ ((z_2 + z_1) ≪ 13),
    z_0 = y_0 ⊕ ((z_3 + z_2) ≪ 18).

¹ Here the symbol + denotes addition modulo 2^32; the other two symbols work at the level of bits, with ⊕ as XOR-addition and ≪ as a left rotation of bits.
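The four equations can be transcribed directly; in the sketch below, rol is a 32-bit left rotation, and the asserted output of the input vector (0x00000001, 0, 0, 0) can be verified by hand from the equations:

```python
# Direct transcription of the quarterround equations: "+" is addition
# mod 2^32, "^" is XOR, and rol is a 32-bit left rotation.
MASK = 0xFFFFFFFF

def rol(v, r):
    return ((v << r) | (v >> (32 - r))) & MASK

def quarterround(y0, y1, y2, y3):
    z1 = y1 ^ rol((y0 + y3) & MASK, 7)
    z2 = y2 ^ rol((z1 + y0) & MASK, 9)
    z3 = y3 ^ rol((z2 + z1) & MASK, 13)
    z0 = y0 ^ rol((z3 + z2) & MASK, 18)
    return z0, z1, z2, z3

# a single 1-bit input already diffuses into all four output words
assert quarterround(0x00000001, 0, 0, 0) == (0x08008145, 0x80, 0x10200, 0x20500000)
```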

This nonlinear operation is called a quarterround and is the basic part of the columnround, where it is applied to the columns (y_0, y_4, y_8, y_12), (y_5, y_9, y_13, y_1), (y_10, y_14, y_2, y_6) and (y_15, y_3, y_7, y_11), as well as of the rowround, which transforms the rows (y_0, y_1, y_2, y_3), (y_4, y_5, y_6, y_7), (y_8, y_9, y_10, y_11) and (y_12, y_13, y_14, y_15). A so-called doubleround consists of a columnround followed by a rowround. The doubleround function of Salsa20 is repeated 10 times. If Y denotes the state matrix, a key-stream block is defined by

    Z = Y + doubleround^10(Y).

In this feedforward the symbol + denotes wordwise addition modulo 2^32. Salsa20 has 976 word operations in total for one encryption. The cipher takes as input a 256-bit key (k_0, …, k_7), a 64-bit nonce (n_0, n_1) and a 64-bit counter (c_0, c_1). The 128-bit key version of Salsa20 copies the 128-bit key twice. We mainly concentrate on the 256-bit key version. The remaining four words are set to fixed publicly known constants, denoted σ_0, σ_1, σ_2 and σ_3.

Slid Pairs

The structure of a doubleround can be rewritten as a columnround, then a matrix transposition, another columnround, followed by a second transposition. Thus the 10 doublerounds can be unrolled into 20 columnrounds, each followed by a transposition. We define F to be the function which consists of a columnround followed by a transposition. If we have two triples (key1, nonce1, counter1) and (key2, nonce2, counter2) such that

    starting state(key2, nonce2, counter2) = F[starting state(key1, nonce1, counter1)],

then this property holds at each point during the round computation of Salsa20, in particular at the end of the round computation. Note, however, that the feedforward at the end of Salsa20 destroys this property. We call such a pair of a 1st and 2nd starting state a slid pair. The relation of a slid pair is shown in Fig. 1.
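The rewriting used above rests on the identity rowround(Y) = transpose(columnround(transpose(Y))). A short sketch (using the quarterround input orderings from the Salsa20 specification) verifies it on a random state:

```python
# Verify the slide structure identity behind the doubleround rewriting:
# rowround(Y) == transpose(columnround(transpose(Y))) on the 16-word state.
import random

MASK = 0xFFFFFFFF
def rol(v, r): return ((v << r) | (v >> (32 - r))) & MASK

def quarterround(y0, y1, y2, y3):
    z1 = y1 ^ rol((y0 + y3) & MASK, 7)
    z2 = y2 ^ rol((z1 + y0) & MASK, 9)
    z3 = y3 ^ rol((z2 + z1) & MASK, 13)
    z0 = y0 ^ rol((z3 + z2) & MASK, 18)
    return z0, z1, z2, z3

# index groups per the specification (note the rotated starting positions)
COLS = [(0, 4, 8, 12), (5, 9, 13, 1), (10, 14, 2, 6), (15, 3, 7, 11)]
ROWS = [(0, 1, 2, 3), (5, 6, 7, 4), (10, 11, 8, 9), (15, 12, 13, 14)]

def apply_round(y, groups):
    z = list(y)
    for g in groups:                       # groups are disjoint index sets
        z[g[0]], z[g[1]], z[g[2]], z[g[3]] = quarterround(*(y[i] for i in g))
    return z

def transpose(y):
    return [y[4 * c + r] for r in range(4) for c in range(4)]

random.seed(0)
y = [random.getrandbits(32) for _ in range(16)]
rowround = apply_round(y, ROWS)
via_transpose = transpose(apply_round(transpose(y), COLS))
assert rowround == via_transpose
```

Since a doubleround is rowround ∘ columnround, this identity is exactly what lets the 10 doublerounds be read as 20 iterations of F = transpose ∘ columnround.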
In a starting state four words are constants and 12 words can be chosen freely, which leads to a total of 2^384 possible starting states. If we want a starting state that after applying the function F results in a valid 2nd starting state, we obtain four wordwise equations. This means we can choose eight words of the 1st starting state freely, whereas the other four words are determined by the equations, as are the words of the 2nd starting state. This leads to a total of 2^256 possible slid pairs. For the 128-bit key version no such slid pair exists, due to the additional constraints of four fewer words of freedom in the 1st starting state and four more equations for the 2nd starting state.

[Figure 1: relation of a slid pair]

With the function F we get the two equations S′ = F(S) and X′ = F(X). We denote the words of the matrices of the two starting states by

    S = ( σ_0 k_0 k_1 k_2 )      S′ = ( σ_0 k′_0 k′_1 k′_2 )
        ( k_3 σ_1 n_0 n_1 )           ( k′_3 σ_1 n′_0 n′_1 )
        ( c_0 c_1 σ_2 k_4 )           ( c′_0 c′_1 σ_2 k′_4 )
        ( k_5 k_6 k_7 σ_3 )           ( k′_5 k′_6 k′_7 σ_3 )

Setting up the system of equations for a whole Salsa20 computation is too complicated, but the equations for the computation of F are very clear. Due to the eight words of freedom we have in a 1st or 2nd starting state, there are some relations among the 12 non-fixed words. For the 2nd starting state these relations are very clear, as they deal only with words:

    0 = k′_2 + k′_1,  0 = k′_3 + n′_1,  0 = c′_1 + c′_0  and  0 = k′_7 + k′_6,   (1)

whereas for the 1st starting state these relations depend on the bits and thus are more complicated. Sliding by the function F is applicable to any version of Salsa20/r where r is even. For odd r there would be no transposition at the end of the round computation; the equations are a bit different, though still solvable.

Related-Key Key-Recovery Attack

Let us assume that we know two ciphertexts and the corresponding nonces and counters. We do not know the two keys, but we know that both starting states differ by the function F, which gives the relation shown in Fig. 2, and that the 2nd starting state conforms to the initial state format of Salsa20 (i.e. has the proper 128-bit constant on the diagonal). We can show that this information is sufficient to completely recover the two related 256-bit secret keys of Salsa20 with O(1) complexity.
[Figure 2: relation of the 1st and 2nd starting state (columnround, then transposition)]

A Generalized Related-Key Attack on Salsa20

Suppose we are given a (possibly large) list of ciphertexts with the corresponding nonces and counters, and we are told that a slid pair is hidden in this list. The question is, can we find slid pairs in a large list of ciphertexts efficiently? As shown for the key-recovery attack, given such a slid ciphertext pair it is easy to compute both keys. The task is made more difficult by the feedforward of Salsa20, which destroys the sliding relationship. Nevertheless, we can show that given a list of ciphertexts of size O(2^l) it is possible to detect a slid pair with memory and time complexity of just O(2^l). The naive approach, which would require checking the equations from the function F for each possible pair, would have complexity O(2^{2l}),

which is too expensive. Our idea is to reduce the number of potential pairs by sorting them by eight precomputed words, so that only elements where these eight words match can possibly yield a slid pair. After decreasing the number of possible pairs in this way, we can check the remaining pairs using additional constraints coming from the sliding equations. In total the filtering power is large enough that we expect only the correct slid pairs to survive this check. The remaining pairs are the correct slid pairs, for which we completely know both keys. A summary of the complexities for different list sizes is given in Table 1. The memory is given in words and the time in Salsa encryptions.

    Table 1: Complexities for different list sizes
    list size | memory | time

Time-Memory Tradeoff Attacks on Salsa

Salsa20 has 2^384 possible starting states. We notice that the square root of this number is less than the keyspace size for keys longer than 192 bits. Thus a trivial birthday attack on 256-bit key Salsa20 would proceed as follows. During the preprocessing stage, generate a list of 2^192 randomly chosen starting states and run Salsa20 for each of them to get a sample of ciphertexts. Afterwards we sort this list by the ciphertexts. During the on-line stage we capture ciphertexts for which we want to find the keys. We do not have to store these ciphertexts and can check each of them immediately for a match with the sorted array of precomputed ciphertexts. If we have a match, we retrieve the corresponding key from our table. Of course, due to the very high memory complexity this attack can only be viewed as a certificational weakness. If we can choose the nonce or the counter, there exist only 2^320 different starting states, reducing the attack to precomputation and memory complexity of 2^160, and if we can choose both, the state space drops to 2^256 and the attack complexity drops to 2^128. Similar reasoning for 128-bit key Salsa20 would yield an attack with 2^64 complexity.

Thus it is crucial for the security of Salsa20 that nonces are chosen at random for each new key and that the counter is not stuck at some fixed value (like 0, for example).

Improved Birthday Using the Sliding Property. We can use the sliding property to double the efficiency of the birthday attack (which can be translated into a reduction of memory or time, or an increase of the success probability of the birthday attack). During the preprocessing stage we generate a sample of 2nd starting states by using (1) and choosing the remaining eight words at random. We compute the corresponding ciphertexts for these states as well as the eight specified words of the corresponding 1st starting states. We use two kinds of pointers to sort this generated list, by the ciphertexts of the 2nd starting states and by the eight words of the corresponding 1st starting states. We capture ciphertexts from the key-stream for which we also know the nonce and the counter
Thus it is crucial for the security of Salsa20 that nonces are chosen at random for each new key and that the counter is not stuck at some fixed value (like 0, for example). The complexities are summarized in Table 2, where R stands for a complete run of Salsa20 and M for a matrix of Salsa20 (16 words).

Improved Birthday Attack Using the Sliding Property. We can use the sliding property to double the efficiency of the birthday attack (which can be translated into a reduction of memory or time, or an increase of the success probability of the birthday attack). During the preprocessing stage we generate a sample of 2nd starting states using (1) and choose the remaining eight words at random. We compute the corresponding ciphertexts for these states as well as the eight specified words for the corresponding 1st starting states. We use two kinds of pointers to sort this generated list both by the ciphertexts of the 2nd starting states and by the eight words of the corresponding 1st starting states. We capture a ciphertext from the key-stream for which we also know the nonce and the counter

and check whether it matches a 2nd starting state from our list (direct birthday) or is a correct 1st starting state for one of the states from our collection (indirect birthday). In both cases we learn the key for this ciphertext.

Table 2: Complexities for the Birthday attack (columns: attack, precomputation, memory, time, captured ciphertexts; rows: chosen nonce and counter, chosen nonce or counter, general, using the sliding property)

Conclusion

We have described a sliding property of Salsa20 which leads to distinguishing, key-recovery and related-key attacks on this cipher. We also show that Salsa20 does not offer 256-bit security, due to a simple birthday attack on its 384-bit state. Since the likelihood of falling into our related-key classes by chance is relatively low (2^256 out of ), this attack does not threaten most real-life usage of this cipher. However, designers of protocols using this primitive should definitely be aware of this non-randomness property, which can be exploited in certain scenarios.

Joint State Theorems for Public-Key Encryption and Digital Signature Functionalities with Local Computation

Max Tuengerthal (joint work with Ralf Küsters)
Universität Trier

Composition theorems in simulation-based approaches allow complex protocols to be built from subprotocols in a modular way. However, as first pointed out and studied by Canetti and Rabin [3], this modular approach often leads to impractical implementations. For example, when using a functionality for digital signatures within a more complex protocol, parties have to generate new verification and signing keys for every session of the protocol. This motivates generalizing composition theorems to so-called joint state theorems, where different copies of a functionality may share some state, e.g., the same verification and signing keys. In this paper [5], we present a joint state theorem which is more general than the original theorem of Canetti and Rabin [3], for which several problems and limitations are pointed out. We apply our theorem to obtain joint state realizations for three functionalities: public-key encryption, replayable public-key encryption, and digital signatures. Unlike most other formulations, our functionalities model that ciphertexts and signatures are computed locally, rather than being provided by the adversary. To obtain the joint state realizations, the functionalities have to be designed carefully; other formulations are shown to be unsuitable. Our work is based on a recently proposed, rigorous model for simulation-based security by Küsters [4], called the IITM model. Our definitions and results demonstrate the expressivity and simplicity of this model. For example, unlike Canetti's UC model [1, 2], in the IITM model no explicit joint state operator needs to be defined, and the joint state theorem follows immediately from the composition theorem in the IITM model.

References

[1] R. Canetti. Universally Composable Security: A New Paradigm for Cryptographic Protocols.
In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS 2001). IEEE Computer Society.
[2] R. Canetti. Universally Composable Security: A New Paradigm for Cryptographic Protocols. Technical Report 2000/067, Cryptology ePrint Archive, December. Available at eprint.iacr.org/2000/067.
[3] R. Canetti and T. Rabin. Universal Composition with Joint State. In D. Boneh, editor, Advances in Cryptology, 23rd Annual International Cryptology Conference (CRYPTO 2003), Proceedings, volume 2729 of Lecture Notes in Computer Science. Springer.
[4] R. Küsters. Simulation-Based Security with Inexhaustible Interactive Turing Machines. In Proceedings of the 19th IEEE Computer Security Foundations Workshop (CSFW). IEEE Computer Society.
[5] R. Küsters and M. Tuengerthal. Joint State Theorems for Public-Key Encryption and Digital Signature Functionalities with Local Computation. In 21st IEEE Computer Security Foundations Symposium (CSF 2008). IEEE Computer Society.

An Argument for Exact Travelling Salesman Problem

Vadym Fedyukovych

Abstract

We introduce an argument-of-knowledge protocol for a variant of the travelling salesman problem. We consider the problem of deciding whether a Hamilton path of a given cost exists in a given graph. Our protocol allows a Prover to show his knowledge of a solution to this problem without giving any useful information about his solution, and without a chance for a Verifier to show existence of a solution to any third party. Our protocol uses a polynomial graph representation and a novel proof system to show properties of polynomials. Our protocol is a proof of knowledge and is special honest-verifier zero-knowledge.

1 Introduction

Travelling salesman is a classical combinatorial problem. This problem is an alternative to factoring and discrete logarithm useful for cryptographic applications. A proof of knowledge protocol for Hamilton cycle with binary challenges was introduced by Blum [1] (cited by [2]). A related problem, to decide whether a Hamilton path of a given cost (according to a cost matrix) exists in a graph, is the Exact Travelling Salesman Problem (XTSP) and is known to be NP-hard [8]. An authentication protocol with ternary challenges was introduced in [9]. We consider a modular variant of XTSP with costs from a finite field F_q for some prime q. We also consider the case of a prime number of vertices in the graph. Our results also apply for a Hamilton path with different start/end nodes. We introduce an efficient protocol to let a Prover show his knowledge of a solution to XTSP. Our protocol uses Verifier challenges chosen from a large set, and achieves soundness without repetition. This protocol uses a novel polynomial graph representation such that existence and cost of a Hamilton cycle are shown as properties of polynomials. Soundness of this protocol is based on a probability estimate for choosing a root of a polynomial at random (Schwartz-Zippel lemma [13]). The protocol is an argument of knowledge under the assumption that taking logarithms is hard for the Prover in the group used.
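The Schwartz-Zippel soundness argument cited above can be illustrated by generic probabilistic polynomial-identity testing over F_q: a nonzero polynomial of degree d has at most d roots, so a uniformly random evaluation point exposes a difference between two polynomials except with probability at most d/q. This Python sketch shows only the lemma in action, not the paper's actual proof system:

```python
import random

def poly_eval(coeffs, x, q):
    """Evaluate a polynomial over F_q (coefficients given lowest degree
    first) using Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

def probably_equal(p1, p2, q, trials=20):
    """Schwartz-Zippel identity test: if p1 != p2 as polynomials of
    degree <= d, a random point detects the difference with probability
    at least 1 - d/q per trial."""
    for _ in range(trials):
        x = random.randrange(q)
        if poly_eval(p1, x, q) != poly_eval(p2, x, q):
            return False
    return True
```

In the protocol setting the Verifier plays the role of the random-point chooser, which is why challenges drawn from a large set give soundness without repetition.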
The protocol is a proof of knowledge and is special honest-verifier information-theoretically zero-knowledge. A polynomial set representation with a characteristic function was introduced for a set reconciliation protocol [10]. A proof protocol for set similarity was introduced in [3]. Protocols for graph isomorphism and colourability were introduced in [5]. Three new protocols to show properties of polynomials were introduced in [6]. A protocol to show knowledge of a codeword of a Goppa code, code parameters, and an error of a bounded weight was introduced in [4]. A protocol to show substring matching was introduced in [7].

2 Preliminaries

Let F_q be a finite field for some prime q. Consider a directed graph Γ. Let there be a subgraph P(Γ) that is a Hamilton cycle. Let (a_ij) be a costs matrix with a_ij ∈ F_q. The cycle cost b is the sum of the costs associated with the edges in the cycle. The Exact Travelling Salesman Problem (XTSP) is to decide whether a Hamilton cycle of a given cost b exists in the given graph Γ for a cost matrix (a_ij). In this report we consider the problem of deciding whether a Hamilton cycle of a given cost exists in the given graph such that l = |V(Γ)| is a prime. We use commitments to the cost of the cycle and to the costs matrix as Prover input.

Weekend of Cryptography 2008, Tabarz, Germany

CypherMatrix as Cryptographic Data Generator

Ernst Erich Schnoor, Munich

Abstract

The original intention was to search for a new encryption method going beyond the bit manipulations usually used today: not bits, but whole characters and numbers (bytes), just as cryptographers have done for centuries, but using modern technology. The resulting procedure, named by the author the CypherMatrix method (patent filed with the DPMA), has little in common with the methods mostly used today. The CypherMatrix procedure uses simple arithmetic, modulo calculations and higher number systems. Diverging from current methods, CypherMatrix works with two different sectors: a data generator and a coding area. Both sectors are combined with each other but may also be used separately. The data generator controls the whole procedure and yields several control parameters necessary for processing combined applications. The CypherMatrix is especially used for: 1. creating unlimited character series (random number generator), 2. calculating collision-free dynamic hash values,

3. generating extended digital signatures and 4. all modes of encryption.

The input (pass phrase) of optimally 42 bytes, here denoted as the start sequence, initializes and controls the whole course of the procedure, but is not an initialization vector (IV) in the conventional sense. Examples:

7 kangaroos jumping along the Times Square
Im Thüringer Wald gibt es 7132 Fliegenpilze

Pass phrases should be somewhat funny, but unusual. They have to be easy to remember, and it is not necessary to write them down. Because of their length they will not be subject to lexical attacks or iterative search techniques. Because their length is still somewhat short, the input has to be widened and mapped to a unique and collision-free control value, the CypherMatrix (hash value), by the following four steps: 1. calculating a position-weighted and collision-free interim result H(k); 2. expanding to a hash function series (HFR) and another partial hash value H(p): expansion; 3. contracting the hash function series to a BASIC VARIATION (array of 16x16 elements): contraction; and 4. threefold permutations to produce the CypherMatrix (16x16 characters) as final result and a definite mapping of the input.

All control parameters necessary to perform cryptographic applications are extracted from the CypherMatrix, especially the matrix key, a series of 42 bytes, which is fed back (loop) to the beginning of the cycle in order to initialize the next round. In addition, for encryption purposes, block keys and a cipher alphabet of 128 characters are extracted from the CypherMatrix. The four steps outlined above generate a new CypherMatrix in each cycle, repeated as long as the cryptographic task is performed or until the procedure is terminated.

The start sequence, as an input of length n, comprises a series of definite signs a(i):

    a(1) a(2) a(3) ... a(i) ... a(n)

To assign the start sequence a unique value H(k), each byte a(i) is denoted by an index value (ASCII character set) and the bytes are linked together in a qualified manner. Linking together by addition results in:

    H(k) = Σ_{i=1}^{n} (a(i) + 1)

(Note: the single values a(i) are increased by 1 because otherwise an ASCII zero (0) would not be counted.)

But the value calculated in this way is far from serving as a hash value. In order to obtain an unmistakable result, some more features have to be added. Following René Descartes, every fact can be exactly determined by coordinates for subject, location and time (Cartesian coordinate system). The subject is the single character a(i), and as location we take the position p(i) inside the sequence; time is not relevant here (t(i) = 1). Each character a(i) is position-weighted by multiplying it with its location p(i):

    H(k) = Σ_{i=1}^{n} (a(i) + 1) * p(i) * t(i),   with t(i) = 1

Occurrences of collisions are excluded by including the hash constant C(k), which depends on the length n of the start sequence and an individual fixed user code (1 up to 99):

    C(k) = n * (n - 2) + code
    n = sqrt(C(k) + 1 - code) + 1

Including the hash constant C(k), a position-weighted partial hash value H(k) is calculated as follows:

    H(k) = Σ_{i=1}^{n} (a(i) + 1) * (C(k) + p(i))

The calculated value H(k) excludes collisions but is still too narrow to establish a secure, unchallengeable hash result. To obtain more security, an expansion function is introduced which widens the determining factors to a voluminous scale without losing the property of being collision-free. The decimal values a(i) are converted into digits of higher number systems, here to base 77 or base 85; the results are added to another partial hash value H(p) and accumulated into a hash function series (HFR) of about 160 to 2400 digits:

    H(p) = Σ_{i=1}^{n} ((a(i) + 1) * p(i) * H(k)) + (p(i) + code)

This expansion constitutes the first one-way function of the procedure: there is no way back to the initial start sequence. In order to reduce the digits of the hash function series to decimals again, the figures are reconverted by modulo 256 to decimal numbers (0 to 255) and stored in an array of 16x16 elements: the BASIC VARIATION. For the reverse conversion, the hash function series is assumed to consist of digits in the number system with base equal to the expansion base + 1 (base 78 or base 86).
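Read as pseudocode, the formulas above chain together as in the following Python sketch. It follows the stated definitions literally (ASCII codes for the a(i), positions p(i) counted from 1, a fixed user code), while the base-77/85 digit expansion, which the abstract only summarizes, is omitted, and the (p(i) + code) term is assumed to sit inside the sum:

```python
def hash_constant(n: int, code: int) -> int:
    # C(k) = n * (n - 2) + code for a start sequence of length n.
    return n * (n - 2) + code

def partial_hash_hk(seq: bytes, code: int) -> int:
    # H(k) = sum over i of (a(i) + 1) * (C(k) + p(i)), with p(i) from 1.
    ck = hash_constant(len(seq), code)
    return sum((a + 1) * (ck + p) for p, a in enumerate(seq, start=1))

def partial_hash_hp(seq: bytes, code: int) -> int:
    # H(p) = sum over i of ((a(i) + 1) * p(i) * H(k)) + (p(i) + code).
    hk = partial_hash_hk(seq, code)
    return sum((a + 1) * p * hk + (p + code) for p, a in enumerate(seq, start=1))

def contract_to_basic_variation(digits):
    # Contraction: reduce a digit series modulo 256 and fill a 16x16
    # array (assumes at least 256 digits are available).
    vals = [d % 256 for d in digits[:256]]
    return [vals[r * 16:(r + 1) * 16] for r in range(16)]
```

For the three-byte start sequence "abc" with code 7 one gets C(k) = 10 and H(k) = 98*11 + 99*12 + 100*13 = 3566, which matches the position-weighted formula above.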
This contraction is the second one-way function

of the procedure. The function is not reversible, and a retrograde determination of the hash function series is not achievable. From the BASIC VARIATION the procedure generates, in each round and in three loops (permutations), the CypherMatrix with 16x16 elements. The CypherMatrix is a definite mapping of the start sequence; a new CypherMatrix is created in each round. By the principles of probability, a repetition of an identical distribution of matrix elements will occur once in 256! (factorial) ≈ 8E+506 cases. The interplay of the generator and the coding area to achieve encryption and decryption is demonstrated by the following scheme: in symmetric mode, sender and recipient insert an identical start sequence which controls the whole procedure on both sites. In each round identical matrices and control parameters are generated. An attacker would be able to enter the procedure only if he finds the start sequence (42 bytes, and not written down anywhere).

Key words: data generator, start sequence, one-way hash function, expansion, contraction, position-weighted, hash function series, hash constant, BASIC VARIATION, CypherMatrix, matrix key

Munich, July 1

Templateless Biometric-Enforced Non-Transferability of Anonymous Credentials

Sebastian Pape
Databases and Interactive Systems Research Group
University of Kassel
Wilhelmshöher Allee 73, Kassel

1 Introduction

Most cryptographic primitives for authentication schemes in the digital world are based on the knowledge of a private key or secret, for example digital signatures or zero-knowledge proofs. In many cases there is (at least) an implicit binding of the secret to a person. If you receive a signed mail, you assume it is signed by the regular owner of the private key; if you authenticate yourself with a zero-knowledge proof, you are expected not to give the secret to other persons. One should not put too much trust in this assumption, since these secrets are ultimately digital data which can be copied without evidence. Two obvious situations come to mind: on the one hand, cryptographic secrets are in general not very memorable for human beings, so they are usually stored somewhere they could be stolen from. On the other hand, the user may want to share his secret while the authorizing organisation does not want him to do so. While the first situation could be addressed by storing the key in a safe place (e.g. a tamper-proof smartcard, or encrypted with a human-memorable passphrase), it is much harder to achieve the same in the latter situation. If you want to ensure the non-transferability of knowledge, you must keep it secret from the user or make him want to keep it secret. In the following work we especially focus on the non-transferability of anonymous credentials, since they offer fewer contact points than e.g. digital signatures, where the user is known and could be sued for abuse. Nevertheless, the following approach can also be used to ensure non-transferability of non-anonymous authentication.
Anonymous credentials [CE87], introduced by Chaum [Cha85], usually consist of cryptographic tokens which allow the user (hereinafter referred to as the prover) to prove a statement or a relationship with an organisation to another person or organisation (hereinafter referred to as the verifier) without being identified. While some anonymous credential systems are related to the concept of untraceable or anonymous payments [Cha83], and hence it should be possible to transfer them easily to another person, there are some situations where credentials should not be transferable. E.g. if the prover wants to show the possession of a valid driving licence, the verifier probably does not want to see a transferred driving licence, which would rather prove the statement "I know someone who has a valid driving licence". Other examples of anonymous credentials are age verification, the proof of a country's citizenship, and weekly, monthly or yearly tickets for train rides or baths.

2 Different Approaches to Ensuring Non-Transferability

As mentioned above, there are two general principles for ensuring the non-transferability of tickets. One approach tries to make the secret more valuable for the prover, making it unpleasant for him to share the credential. The other approach is of a technical nature and tries to prevent the prover from sharing by means of biometrics.

2.1 Embedding Valuables

There are already approaches to prevent the transfer of credentials where sharing a credential also implies sharing a valuable secret outside the system [DLN97, GPR98, LRSW00] or even all of the prover's secrets inside the system [CL01]. Nevertheless, this protection naturally will not prevent all users from sharing credentials, be it that they share their credentials incautiously, be it that they really trust someone else. In addition, those valuable secrets raise the system's value, so users have to beware of thieves and have to inherently trust the system's architecture. As a first conclusion we notice that the system's effectiveness fundamentally depends on the embedded value.

2.2 Biometrically Enforcing Non-Transferability

Another possibility to make sure the credentials are only used by the person the credential was created for is to make use of the person's biometric information. Using biometrics, however, usually causes privacy concerns, especially since, in contrast to passwords or tokens, you cannot change biometric attributes. Therefore extraordinary care has to be taken to protect the user's data [BBB+08]. It can be easily seen that allowing the verifier to check the prover's biometric attributes conflicts with the prover's wish for anonymity. In 1998 Bleumer [Ble98] combined anonymous credentials with biometric authentication, making use of a variant of the wallet-with-observer architecture introduced by Chaum and Pedersen [CP93]. In the wallet-with-observer architecture there exists a user-trusted device (wallet) which runs a local process (observer). The credential-issuing organization (hereinafter referred to as the authority) trusts that the observer only performs legitimate operations. Impagliazzo and Miner More [IM03] transferred that design to a personal digital assistant (PDA) with a tamper-resistant smartcard.
The smartcard (observer) is issued and trusted by the authority, and its tamper-resistance makes sure that the user cannot read or tamper with its content. In contrast, the PDA (warden) protects the prover's interests and makes sure the smartcard does not diverge from the specified protocol. Both approaches have in common that biometric authentication is not part of the underlying credential system, but instead a prerequisite for the credential protocol to start.

3 Templateless Biometric-Enforced Non-Transferability

Analogously to [IM03], we prefer a setup where each user has a PDA and a smartcard handed out by the credential-issuing authority. Following [BBB+08], it is obvious that the biometric device has to be connected directly to the smartcard. The only devices which fulfill our needs today are fingerprint readers. Either they are embedded into the smartcard [Bio, fid08] or they work the way cash-card terminals act when asking for the user's personal identification number (PIN). In this case the reader's input is sent directly to the card, the smartcard decides about acceptance or rejection (called match-on-card [Nor04, NH04]), and no attached computer is able to eavesdrop on it. Note that relying on fingerprints is only a compromise solution. On the one hand, some people do not have suitable fingerprints, forging fingerprints has been done with acceptable effort, and there are several attacks on biometric systems based on fingerprints [UJ04]. On the other hand, today fingerprint readers are the only devices which could be embedded into smartcards. Following the parameters and comparison of [Jai04], the most desirable attributes for us would be a low circumvention (the use of substitutes is hard) and high universality (as many people as

possible have those attributes), uniqueness (people can be well separated) and permanence (the attributes resist aging well). That would lead us to the use of DNA recognition, which can't be done on a smartcard. Since we do not rely on special attributes of fingerprints, it seems suitable to us to base our contribution on fingerprints; should there exist an on-the-fly DNA recognition (or any other suitable biometric identification method) some day which could be embedded into smartcards, the system could easily be switched. Due to the template's compact nature it was commonly assumed that it is infeasible to extract the complete biometric information from it. Since there are some attacks [JRU05], we do not want to store templates on the card. Instead, we rely on fuzzy extractors proposed by Dodis et al. [DORS08]. Fuzzy extractors provide the same output even if the input changes but remains reasonably close to the original. Since Dodis et al. also claim that their fuzzy extractor's output is nearly uniformly distributed and information-theoretically secure, we use it to translate each user's fingerprint into a unique identifier. Let us assume the underlying anonymous credential protocol is based on the Fiat-Shamir identification protocol [FFS87] or is similar to the protocol proposed in [IM03]. Both are zero-knowledge protocols (ZKP) and have in common that the prover possesses secret information which he proves to a verifier without revealing to him what that secret information is. If the prover should not be able to share this information, it has to be kept in a tamper-proof device, e.g. a smartcard. Since we want to restrict the use of such a smartcard to a specific person, the secret information is not stored directly on the smartcard but instead connected to the output of the fuzzy extractor introduced in the previous paragraph. This can be done by a simple modulo addition.
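The "simple modulo addition" can be made concrete: at initialisation the card stores only the blinded value s - fe(fp_u) mod n, and at proving time it adds fe(fp_u) back before running the ZKP. A minimal sketch, where the modulus n and the fuzzy-extractor output are placeholder values:

```python
def personalize(secret: int, fe_output: int, n: int) -> int:
    """Initialisation: store only s' = s - fe(fp_u) mod n on the card,
    then delete every copy of s."""
    return (secret - fe_output) % n

def restore(stored: int, fe_output: int, n: int) -> int:
    """Proving: s' + fe(fp_u) mod n recovers s, so the underlying
    zero-knowledge protocol runs unchanged."""
    return (stored + fe_output) % n
```

A wrong fingerprint yields a wrong fuzzy-extractor value and hence a wrong secret, so the subsequent proof fails without the card ever storing a biometric template.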
3.1 Creating and Showing Credentials

Depending on the underlying ZKP, public parameters and the secret information are determined. The authority stores the secret information s on the card and sets it to the initialisation state. After reading the user's fingerprint fp_u, the card computes the value of the fuzzy extractor fe, stores s := s - fe(fp_u) and deletes any other occurrence of s. Now the card is personalized to a specific user, as the secret information can only be restored with the value of the fuzzy extractor. To prevent a later change of s, the card has to be switched to the proving state at this point. Proving the possession of a credential is also only slightly changed: the user's fingerprint is read and the following calculations/proofs are done with s + fe(fp_u). It can be easily seen that this operation restores the original secret, so the underlying ZKP does not need to be changed. We illustrate this by means of an example in the next subsection.

3.2 Example: Modified Feige-Fiat-Shamir Identification Protocol

Setup: The authority chooses two large prime integers p and q and calculates their product n = pq. n is then stored on the smartcard and given to the verifier and the prover; p and q are kept secret. Next the authority generates secret numbers s_1, ..., s_k with gcd(s_i, n) = 1 and computes v_i ≡ s_i^2 (mod n). The verifier and the prover receive the numbers v_i, while the numbers s_i are stored on the card. When the card is initialised for the prover, his fingerprint fp_u is read by the card and the stored s_i are overwritten by s_i := s_i - fe(fp_u) (mod n). The card is then set to the proving state.

Proving: The smartcard chooses a random integer r and a random sign σ ∈ {-1, 1}, computes x ≡ σ·r^2 (mod n) and sends this number to the verifier. The verifier chooses numbers a_i ∈ {0, 1} and sends them to the card. The prover now has to give his fingerprint to the smartcard, and the card computes y ≡ r·(s_1 + fe(fp_u))^a_1·(s_2 + fe(fp_u))^a_2···(s_k + fe(fp_u))^a_k (mod n). y is sent to the verifier, who checks whether y^2 ≡ ±x·v_1^a_1·v_2^a_2···v_k^a_k (mod n) to decide if the prover has passed authorisation. Notice that the user is able to follow the procedure, since he is provided with n and the v_i and listens to the communication between the card and the verifier to make sure the card follows the protocol.

4 Conclusion

Our contribution proposes a general procedure to ensure the non-transferability of credentials without storing a template anywhere. It achieves the same goal as [IM03]: to establish an anonymous credential system that makes use of biometrics to obtain non-transferability. Since our approach is very similar, almost the same problems and restrictions as for [IM03] apply. To call them by name: the security of the system depends very much on the tamper-resistance of the smartcard. When a prover begins to interact with a verifier, he needs to be kept isolated from the outside world. This is necessary because otherwise the verifier cannot be sure which card he is communicating with: e.g. if the prover has radio contact with another card, non-transferability suffers and it is possible to share cards. However, there are some improvements compared to [IM03]. Our approach abandons the storage of the user's fingerprint. Although it is not easy to tamper with a smartcard, there may be side-channel attacks like timing attacks [Koc96] or differential power analysis [KJJ99] attacking the concrete implementation of a card. This especially affects match-on-card systems where the fingerprint is read by an external reader and submitted to the card.
This way it is a lot easier to create test data, and it may be possible to extract information about the stored template if the card is lost. In our scheme, even if the stored secret is revealed, it is useless without the fuzzy extractor's value of the user's fingerprint (and vice versa), which might be useful if the card is lost. Another advantage is that the biometric information is embedded more strongly into the anonymous credential system and is not only a prerequisite. A consequence of this is that it is easier to create a "biometric authentication failed" awareness for the verifier, since he will notice the failure of a ZKP rather than a failed biometric authentication on the card itself.

References

[BBB+08] H. Biermann, M. Bromba, C. Busch, G. Hornung, M. Meints, and G. Quiring-Kock. White Paper zum Datenschutz in der Biometrie. Technical report, TELETRUST Deutschland e.V., Arbeitsgruppe Biometrie.
[Bio] Biometric Associates, Inc. The BAI Authenticator Smart Card Datasheet. Technical report.
[Ble98] Gerrit Bleumer. Biometric yet privacy protecting person authentication. Lecture Notes in Computer Science, 1525:99-110.
[CE87] David Chaum and Jan-Hendrik Evertse. A secure and privacy-protecting protocol for transmitting personal information between organizations. In Proceedings on Advances in Cryptology, CRYPTO '86, London, UK. Springer Verlag.


An Introduction to Monetary Theory Rudolf Peto 0 Copyright 2013 by Prof. Rudolf Peto, Bielefeld (Germany), www.peto-online.net 1 2 Preface This book is mainly a translation of the theoretical part of my

SELF-STUDY DIARY (or Lerntagebuch) GER102 This diary has several aims: To show evidence of your independent work by using an electronic Portfolio (i.e. the Mahara e-portfolio) To motivate you to work regularly

Service Design Dirk Hemmerden - Appseleration GmbH An increasing number of customers is tied in a mobile eco-system Hardware Advertising Software Devices Operating System Apps and App Stores Payment and