
2.2 ONTAP Protocols and Application to the Biometric Passport

We now define the ONTAP protocol, which can be used to authenticate a message m by means of a certificate (X, w). There are three participants: the signer, the signature holder (prover), and the verifier.

Definition 3. An offline non-transferable authentication protocol (ONTAP) consists of the following probabilistic polynomial-time (ppt) algorithms.

– (Kp, Ks) = setup(1^λ): a key setup algorithm to generate a public key Kp and a private key Ks from a security parameter λ and some random coins;

– (X, w) = sign(Ks, m): a signature algorithm to generate, from the secret key Ks, a message m and some random coins, a signature with a public part X and a private part w;

– an interactive proof protocol (P, V) taking (Kp, m, X) as common input, with private input w, and producing a bit b as output.

The final bit must always be 1 if the algorithms and protocols are well executed. Furthermore, they must provide the security properties of κ-unforgeability and offline non-transferability as defined by the following games.

In the κ-unforgeability game, an adversary plays with a challenger. The challenger first generates Kp and Ks and releases Kp. The adversary can then make some signing queries to the challenger, who returns the complete signature (X and w). Then, the adversary proposes a message m which was not queried for signature and runs the interactive protocol with the challenger, himself playing the role of the prover. We say that the protocol is unforgeable if, for any ppt adversary, the output bit is 1 with probability at most κ.

In the offline non-transferability game, a process (either the adversary or a simulator) plays with a challenger. The challenger first generates Kp and Ks and releases Kp. The process can then make some signing queries to the challenger, who returns the complete signature (X and w). When the process is the adversary, he can then submit some message m which is signed by the challenger, but only the X part of the signature is returned. Then, the adversary can run the interactive protocol with the challenger, who plays the role of the prover. When the process is the simulator, the submission of m and the protocol execution are skipped. Finally, the process yields the list of all signing queries together with an output string. We say that the protocol is offline non-transferable if, for any ppt adversary, there exists a ppt simulator such that running the game with either process generates outputs which are computationally indistinguishable.
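To fix the interface in mind, Definition 3 can be pictured as the minimal sketch below. All names and types are illustrative assumptions, not part of the definition; the two games above quantify over arbitrary ppt adversaries interacting with such an object rather than over code.

```python
# Illustrative interface for Definition 3 (names and types are assumptions,
# not part of the definition). The interactive proof is abstracted as two
# sides exchanging messages through some communication channel.
from abc import ABC, abstractmethod
from typing import Any, Tuple


class Ontap(ABC):
    @abstractmethod
    def setup(self, security_param: int) -> Tuple[Any, Any]:
        """Return (Kp, Ks), generated from 1^lambda and fresh random coins."""

    @abstractmethod
    def sign(self, Ks: Any, m: bytes) -> Tuple[Any, Any]:
        """Return a signature split into a public part X and a private part w."""

    @abstractmethod
    def prove(self, Kp: Any, m: bytes, X: Any, w: Any, channel: Any) -> None:
        """Prover side of the interactive proof; only this side holds w."""

    @abstractmethod
    def verify(self, Kp: Any, m: bytes, X: Any, channel: Any) -> bool:
        """Verifier side of the interactive proof; returns the output bit b."""
```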

Given a traditional digital signature scheme which satisfies some special properties, we construct an ONTAP protocol based on our generic ZK protocol. First, we need the signature to be splittable into two parts X and w. The X part shall be forgeable without the secret key. Finally, there shall be a weak Σ-protocol for which w is a witness for (Kp, m, X). Note that all digital signature schemes satisfy the first two conditions by just setting X to the empty string. The last condition (provability) may however be made more efficient by enlarging the simulatable part X as much as we can. The Σ-protocol is enriched into a ZK protocol as in Fig. 2, and we obtain an ONTAP protocol.

Theorem 4 (Monnerat-Pasini-Vaudenay 2009 [17]). Let (setup, sign, verify) be a digital signature scheme such that

– the output of sign can be written (X, w);

– there exists a ppt algorithm sim such that for any execution of setup(1^λ) = (Kp, Ks), the execution of sign(Ks, m) = (X, w) and of sim(Kp, m) = X generate X's with computationally indistinguishable distributions;

– there exists a κ-weak Σ-protocol for the relation

R((Kp, m, X), w) ⇔ verify(Kp, m, X, w);

– the signature scheme resists existential forgery under chosen-message attacks.

Then, (setup, sign) with the protocol from Fig. 3 is an ONTAP scheme which is κ′-unforgeable and offline non-transferable, where κ′(λ) = max(κ(λ), 1/Poly(λ)). This holds for any polynomial Poly.

Prover input: (Kp, m, X, w). Verifier input: (Kp, m, X).

1. Prover: generate R from ϖ1, compute (k, ω) ← gen(R), and send k.
2. Verifier: pick d at random and r uniformly in Dr(Kp, m, X), compute c ← com(k, r; d), and send c.
3. Prover: compute a ← P(Kp, m, X, w; ϖ2) and send a.
4. Verifier: send r and d.
5. Prover: check c = com(k, r; d), compute z ← P(Kp, m, X, w, r; ϖ2), and send z and R.
6. Verifier: output Ver(Kp, m, X, a, r, z) ∧ (k equals the first component of gen(R)).

Fig. 3. Generic ONTAP Construction

In practice, the use of a trapdoor commitment is a bit artificial. Indeed, neither the secret key ω nor the equiv algorithm is used. Their only purpose is to write down a formal proof that the protocol is zero-knowledge. In practice, one may favor the use of a simple commitment based on a hash function. This could be formalized in the random oracle model (ROM), and the specific notion of deniable zero-knowledge could be proven. (See [17].)

For instance, a generic RSA signature for a message m has a public key (N, e) and a secret key d. Here, X is a formatted string based on m and N only, and w = X^d mod N.

Checking a signature consists of verifying some predicate format(N, m, w^e mod N) to check that X = w^e mod N is a valid formatted string for m. This can be proven by the GQ protocol with the extra input X. Assuming that the RSA signature resists existential forgery under chosen-message attacks, we obtain the ONTAP scheme of Fig. 4 with soundness error κ = ⌈2^t/e⌉ · 2^{−t}, i.e. κ = 2^{−t} when e ≥ 2^t.

Prover input: (Kp, m, X, w). Verifier input: (Kp, m, X).

1. Verifier: pick d at random and r uniformly in {0, ..., 2^t − 1}, compute c ← H(d‖r), and send c.
2. Prover: pick y ∈ Z_N, compute a ← y^e mod N, and send a.
3. Verifier: send r and d.
4. Prover: check c = H(d‖r), compute z ← y·w^r mod N, and send z.
5. Verifier: output [z^e ≡ a·X^r (mod N)] ∧ format(N, m, X).

Fig. 4. RSA-Based ONTAP Construction in ROM
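The following self-contained Python sketch walks through one honest run of the Fig. 4 interaction. The toy primes, the hash-based formatting function fmt, the use of SHA-256 as the commitment hash H, and the message encoding are illustrative assumptions only; they are not the formatting mandated for e-passports, and the parameters are far too small for real use.

```python
# Toy sketch of the RSA/GQ-based ONTAP interaction of Fig. 4 (illustrative only).
import hashlib
import secrets

# --- toy RSA key material (Mersenne primes, far too small for real use) ---
p, q = 2147483647, 2305843009213693951
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
t = 16                                    # challenge length: soundness error 2^-t (since e >= 2^t)

def fmt(N, m):
    """Hypothetical deterministic formatting of m into an element of Z_N (public)."""
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % N

def sign(m):
    """Signature split into a public part X and a private witness w = X^d mod N."""
    X = fmt(N, m)
    return X, pow(X, d, N)

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# --- one run of the interactive proof (prover knows w, verifier does not) ---
m = b"machine readable zone of the passport"
X, w = sign(m)

# Verifier commits to its challenge r with randomness d_v
# (d in Fig. 4; renamed here to avoid clashing with the RSA exponent d).
r = secrets.randbelow(2 ** t)
d_v = secrets.token_bytes(16)
c = H(d_v + r.to_bytes(4, "big"))          # message 1: c   (verifier -> prover)

# Prover sends a = y^e mod N for a fresh random y
y = secrets.randbelow(N - 1) + 1
a = pow(y, e, N)                           # message 2: a   (prover -> verifier)

# Verifier opens its commitment                             message 3: r, d_v
assert c == H(d_v + r.to_bytes(4, "big"))  # prover checks the opening
z = (y * pow(w, r, N)) % N                 # message 4: z   (prover -> verifier)

# Verifier's final check: z^e = a * X^r mod N, and X is correctly formatted for m
accept = pow(z, e, N) == (a * pow(X, r, N)) % N and X == fmt(N, m)
print("verifier accepts:", accept)
```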

In the case of the biometric passport, the passport would play the role of the prover and the reader would be the verifier. So, the passport would prove to the reader that it owns a valid witness w for the formatted message X without leaking any transferable evidence.

By using e = 65537 and t = 16, we obtain κ = 2^{−16}. The workload of the prover consists in computing two exponentiations with a 16-bit exponent, one of them being e. Let us say this roughly costs 40 multiplications on average using standard square-and-multiply algorithms. An online security of 2^{16} is pretty good, but maybe some people will not be happy with it. If κ is believed to be too large, we can still execute two instances of the GQ protocol in parallel and obtain κ = 2^{−32} using four exponentiations.

By using e = 3, a single run of GQ with t = 1 leads to κ = 1/2 at a cost of 2.5 modular multiplications on average. By iterating 16 times we get κ = 2^{−16} and 40 multiplications again. So, the cost for the same κ is roughly the same: we need "2.5 multiplications per bit of online security".
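These multiplication counts can be checked with a quick back-of-the-envelope estimate. The helper below is purely illustrative accounting for square-and-multiply exponentiation (no cryptography involved), under the rough assumption that a random exponent has about half its non-leading bits set.

```python
# Rough square-and-multiply cost: (bitlen - 1) squarings, plus one multiplication
# per set bit beyond the leading one (about half the remaining bits on average).

def sq_and_mul_cost(bits, random_exponent, hamming=1):
    """Expected modular multiplications to raise to a `bits`-bit exponent."""
    squarings = bits - 1
    if random_exponent:
        multiplies = (bits - 1) / 2        # rough average number of extra set bits
    else:
        multiplies = hamming - 1           # fixed exponent with known Hamming weight
    return squarings + multiplies

# e = 65537 = 2^16 + 1 (17 bits, Hamming weight 2), plus a random 16-bit challenge r,
# plus the final multiplication y * w^r:
gq_65537 = sq_and_mul_cost(17, False, hamming=2) + sq_and_mul_cost(16, True) + 1
print(f"e=65537, t=16: about {gq_65537:.1f} multiplications for 16 bits of online security")

# e = 3, t = 1: a = y^3 costs 2 multiplications; multiplying by w^r costs 0.5 on
# average (r in {0,1}); 16 iterations give the same 2^-16 soundness.
gq_3 = sq_and_mul_cost(2, False, hamming=2) + 0.5
print(f"e=3, t=1     : about {gq_3:.1f} multiplications per round, {16 * gq_3:.0f} for 16 rounds")
```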

We could also use DSA or any variant such as ECDSA, but with the Schnorr Σ-protocol [24,25] instead of GQ. (See [17].)

3 Epilog: Ali Baba’s Trial and the Fall of Deniability

Deniability is a pretty fragile property though. In a world where computability is fully described by regular Turing machines, deniability works. However, some special tamper proof devices may be sealed by the Caliph. In this model, deniability might collapse as shown in [18,19]. What happened is that the thieves used devices sealed by the Caliph.

Indeed, the Caliph provides programmable sealed devices which can be configured by users with any software, but with the property that the hash of the loaded software is permanently displayed by the device. That is, the device owner could load it with his favorite software s and the device would always display communications from s attached to H(s). So, another user could be convinced that the displayed message is the result of executing s on a trusted platform. Such devices can, for instance, be used in payment terminals where the vendor loads some open software and invites customers to type their credentials. Customers can check that the device is genuine and running the publicly certified software.

Unfortunately, the existence of these devices changes the setup assumptions in the computational model. Namely, there are now devices executing some program which cannot be interrupted or rewound. These devices could also be used maliciously to break the ONTAP protocol. Indeed, the 40 thieves can load a trusted device with the code s of the verifier and display its view. Then, showing the device after the protocol succeeds would prove that trusted hardware has executed the verifier code in an honest way and seen the displayed accepting view, which includes the instance x. So, the thieves impersonate the cave, replay messages to and from the device, and just wait until Ali Baba wants to enter the cave. After Ali Baba executes the protocol, the device ends up displaying a reference x to Ali Baba owning a certificate in an accepting view, showing evidence that a prover successfully proved possession of a witness for x. The 40 thieves can then go to the Caliph and show the device as a proof that there exists a valid certificate for x, that is, that Ali Baba is granted access to the cave. This convicts Ali Baba (and Scheherazade as well, which is embarrassing for the Caliph).

One can easily realize that no ONTAP protocol in the standard model can survive this kind of scenario. There are other cryptographic notions collapsing in this model.

Namely, the notion of invisibility in undeniable signatures (a device could convert a signature into some evidence that the signature is valid after running the verifier protocol), the notion of deniability of protocols in general (a device could prove that some protocol has been successfully executed), and the notion of receipt-freeness in electronic voting (a device could prove that someone cast a given ballot). On the other hand, some new types of protocols become feasible. We have already seen a proof of having seen a view. We can also make a proof of ignorance. A trusted agent can prove that a user ignores a secret key by showing that the device created the public key but never released the secret one. This could be used to prove that a user did not sign a document in a group signature, which would break the anonymity notion. Similarly, it could be used to prove that a user never decrypted some communication (namely, that he did not open a sealed envelope), or did not cheat in some games.

We could still fix classical protocols by making honest participants themselves use trusted agents. One way to construct ONTAP protocols in the trusted agent model would consist of having the prover use a trusted agent. In this situation, zero-knowledge actually becomes trivial: one can use a trusted agent which receives an (x, w) pair, checks that (x, w) ∈ R holds, and displays x. The device becomes a zero-knowledge proof of knowledge of w. It is deniable in the sense that the user keeps the device and can deny having a device showing this statement.

However, people in the Caliph's realm certainly would not like holding a trusted device to become a necessity of everyday life. People would no longer be free to run private transactions. A consequence of Ali Baba's trial is that trusted devices have been forbidden, so as to support the freedom of the people.

This was Scheherazade's story, adapted for an audience of people addicted to cell phones and popular music and video players. Hopefully, this would never happen in our civilized countries. (See Montesquieu revisited in [29].)

Acknowledgements. The author would like to thank Jean-Jacques Quisquater for having told him (nearly) 1001 fascinating stories about cryptography in 1990. This paper is dedicated to him.

References

1. Machine Readable Travel Documents. PKI for Machine Readable Travel Documents offering ICC Read-Only Access. Version 1.1. International Civil Aviation Organization (2004), http://www.icao.int/mrtd/download/technical.cfm
2. Baek, J., Safavi-Naini, R., Susilo, W.: Universal Designated Verifier Signature Proof (or How to Efficiently Prove Knowledge of a Signature). In: Roy, B. (ed.) ASIACRYPT 2005. LNCS, vol. 3788, pp. 644–661. Springer, Heidelberg (2005)
3. Bellare, M., Palacio, A.: GQ and Schnorr Identification Schemes: Proofs of Security against Impersonation under Active and Concurrent Attacks. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 162–177. Springer, Heidelberg (2002)
4. Bellare, M., Ristov, T.: Hash Functions from Sigma Protocols and Improvements to VSH. In: Pieprzyk, J. (ed.) ASIACRYPT 2008. LNCS, vol. 5350, pp. 125–142. Springer, Heidelberg (2008)
5. Brassard, G., Chaum, D., Crépeau, C.: Minimum Disclosure Proofs of Knowledge. Journal of Computer and System Sciences 37, 156–189 (1988)
6. Camenisch, J.L., Michels, M.: Confirmer Signature Schemes Secure against Adaptive Adversaries (Extended Abstract). In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 243–258. Springer, Heidelberg (2000)
7. Chaum, D.: Designated Confirmer Signatures. In: De Santis, A. (ed.) EUROCRYPT 1994. LNCS, vol. 950, pp. 86–91. Springer, Heidelberg (1995)
8. Chaum, D., van Antwerpen, H.: Undeniable Signatures. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 212–217. Springer, Heidelberg (1990)
9. Desmedt, Y.: Subliminal-Free Authentication and Signature (Extended Abstract). In: Günther, C.G. (ed.) EUROCRYPT 1988. LNCS, vol. 330, pp. 23–33. Springer, Heidelberg (1988)
10. Cramer, R., Damgård, I.B., MacKenzie, P.D.: Efficient Zero-Knowledge Proofs of Knowledge without Intractability Assumptions. In: Imai, H., Zheng, Y. (eds.) PKC 2000. LNCS, vol. 1751, pp. 354–373. Springer, Heidelberg (2000)
11. Dolev, D., Dwork, C., Naor, M.: Nonmalleable Cryptography. SIAM Review 45(4), 727–784 (2003)
12. Fiat, A., Shamir, A.: How to Prove Yourself: Practical Solutions to Identification and Signature Problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987)
13. Goldreich, O., Micali, S., Wigderson, A.: Proofs that Yield Nothing but Their Validity or All Languages in NP Have Zero-Knowledge Proof Systems. Journal of the ACM 38(3), 690–728 (1991)
14. Guillou, L.C., Quisquater, J.-J.: A Practical Zero-Knowledge Protocol Fitted to Security Microprocessor Minimizing Both Transmission and Memory. In: Günther, C.G. (ed.) EUROCRYPT 1988. LNCS, vol. 330, pp. 123–128. Springer, Heidelberg (1988)
15. Guillou, L.C., Quisquater, J.-J.: A Paradoxical Identity-Based Signature Scheme Resulting from Zero-Knowledge. In: Goldwasser, S. (ed.) CRYPTO 1988. LNCS, vol. 403, pp. 216–231. Springer, Heidelberg (1990)
16. Jakobsson, M., Sako, K., Impagliazzo, R.: Designated Verifier Proofs and Their Applications. In: Maurer, U.M. (ed.) EUROCRYPT 1996. LNCS, vol. 1070, pp. 143–154. Springer, Heidelberg (1996)
17. Monnerat, J., Pasini, S., Vaudenay, S.: Efficient Deniable Authentication for Signatures: Application to Machine-Readable Travel Document. In: Abdalla, M., Pointcheval, D., Fouque, P.-A., Vergnaud, D. (eds.) ACNS 2009. LNCS, vol. 5536, pp. 272–291. Springer, Heidelberg (2009)
18. Mateus, P., Vaudenay, S.: On Privacy Losses in the Trusted Agent Model. Presented at the EUROCRYPT 2009 Conference (2009), http://eprint.iacr.org/2009/286.pdf
19. Mateus, P., Vaudenay, S.: On Tamper-Resistance from a Theoretical Viewpoint. In: Clavier, C., Gaj, K. (eds.) CHES 2009. LNCS, vol. 5747, pp. 411–428. Springer, Heidelberg (2009)
20. Monnerat, J., Vaudenay, S., Vuagnoux, M.: About Machine-Readable Travel Documents: Privacy Enhancement Using (Weakly) Non-Transferable Data Authentication. In: International Conference on RFID Security 2007, pp. 13–26. University of Malaga, Malaga (2008)
21. Okamoto, T., Ohta, K.: How to Utilize the Randomness of Zero-Knowledge Proofs. In: Menezes, A., Vanstone, S.A. (eds.) CRYPTO 1990. LNCS, vol. 537, pp. 456–475. Springer, Heidelberg (1991)
22. Pass, R.: On Deniability in the Common Reference String and Random Oracle Model. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 316–337. Springer, Heidelberg (2003)
23. Quisquater, J.-J., Quisquater, M., Quisquater, M., Quisquater, M., Guillou, L., Guillou, M.A., Guillou, G., Guillou, A., Guillou, G., Guillou, S., Berson, T.A.: How to Explain Zero-Knowledge Protocols to Your Children. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 628–631. Springer, Heidelberg (1990)
24. Schnorr, C.-P.: Efficient Identification and Signatures for Smart Cards. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 239–252. Springer, Heidelberg (1990)
25. Schnorr, C.-P.: Efficient Signature Generation by Smart Cards. Journal of Cryptology 4, 161–174 (1991)
26. Shahandashti, S.F., Safavi-Naini, R., Baek, J.: Concurrently-Secure Credential Ownership Proofs. In: ACM Symposium on Information, Computer and Communications Security (ASIACCS 2007), pp. 161–172. ACM Press, Singapore (2007)
27. Steinfeld, R., Bull, L., Wang, H., Pieprzyk, J.: Universal Designated-Verifier Signatures. In: Laih, C.-S. (ed.) ASIACRYPT 2003. LNCS, vol. 2894, pp. 523–542. Springer, Heidelberg (2003)
28. Vaudenay, S.: E-Passport Threats. IEEE Security & Privacy 5(6), 61–64 (2007)
29. Vaudenay, S.: La Fracture Cryptographique. Focus Science, Presses Polytechniques et Universitaires Romandes (2010)
30. Vaudenay, S., Vuagnoux, M.: About Machine-Readable Travel Documents. Journal of Physics: Conference Series 77(012006) (2007), http://www.iop.org/EJ/article/1742-6596/77/1/012006/jpconf7i_77_012006.pdf

Autotomic Signatures

David Naccache and David Pointcheval

École normale supérieure
Département d'informatique, Groupe de cryptographie
45, rue d'Ulm, F-75230 Paris Cedex 05, France
david.{naccache,pointcheval}@ens.fr

Abstract. Digital signature security is classically defined as an interaction between a signer Ssk, a verifier Vpk and an attacker A. A submits adaptively to Ssk a sequence of messages m1, . . . , mq to which Ssk replies with the signatures U = {σ1, . . . , σq}. Given U, A attempts to produce a forgery, i.e. a pair (m, σ) such that Vpk(m, σ) = true and σ ∉ U.

The traditional approach consists in hardening Ssk against a large query bound q. Interestingly, this is one specific way to prevent A from winning the forgery game. This work explores an alternative option.

Rather than hardening Ssk, we weaken A by preventing him from influencing Ssk's input: upon receiving mi, Ssk will generate a fresh ephemeral signature key-pair (ski, pki), use ski to sign mi, erase ski, and output the signature and a certificate on pki computed using the long-term key sk. In other words, Ssk will only use his permanent secret sk to sign inputs which are beyond A's control (namely, freshly generated public keys). As the ski are ephemeral, q = 1 by construction.

We show that this paradigm, called autotomic signatures, transforms weakly secure signature schemes (secure against generic attacks only) into strongly secure ones (secure against adaptively chosen-message attacks).

As a by-product of our analysis, we show that blending public key information with the signed message can significantly increase security.

1 Introduction

The security of cryptographic signatures is traditionally modeled as an interaction between an attacker A, a signer S and a verifier V. A adaptively submits to S a sequence of messages m1, . . . , mq to which S replies with the (corresponding) signatures σ1, . . . , σq.

As the interaction ceases, A attempts to produce a forgery (m, σ) such that¹:

Vpk(m, σ) = true and σ ∉ {σ1, . . . , σq}

The traditional approach consists in endeavoring to harden S against a large query bound q. Namely, design (S, V) in a way allowing q to be increased non-polynomially at will.

¹ As is customary, we denote by (sk, pk) the key pair of S.


Interestingly, this is only one specific way to prevent A from forging. This paper explores an alternative approach that prevents A from adaptively influencing S's input.

The idea is the following: to sign a message mi, S will:

– Generate a fresh ephemeral signature key-pair (ski, pki).

– Use ski to sign mi. Let σi = Sski(mi) be the corresponding signature.

– Erase ski and output (σi, pki, ci), where ci = Ssk(pki) is a certificate on pki.

The verifier will check that:

Vpki(m, σi) = true and Vpk(pki, ci) = true

In other words, S will only use sk to sign "sterilized" input which is beyond A's control (freshly generated public keys). As each ephemeral secret key ski is used only once, q = 1 by construction.
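The self-contained sketch below illustrates the wrapper. Two choices here are illustrative assumptions, not taken from this paper: the ephemeral scheme is a Lamport one-time signature over SHA-256 (a natural fit, since each ephemeral key signs exactly one message), and the long-term certification key is a toy hash-then-sign textbook RSA with fixed Mersenne-prime factors. The only structural point being demonstrated is that sk ever signs freshly generated, adversary-independent public keys.

```python
# Minimal sketch of the autotomic paradigm (toy building blocks, illustrative only).
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

# --- ephemeral scheme: Lamport one-time signatures (randomized keygen) --------
def lamport_keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(x) for x in pair] for pair in sk]
    return pk, sk

def lamport_sign(sk, msg: bytes):
    bits = int.from_bytes(H(msg), "big")
    return [sk[i][(bits >> (255 - i)) & 1] for i in range(256)]

def lamport_verify(pk, msg: bytes, sig) -> bool:
    bits = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(bits >> (255 - i)) & 1] for i in range(256))

def encode_pk(pk) -> bytes:
    return b"".join(h for pair in pk for h in pair)

# --- long-term scheme: toy hash-then-sign textbook RSA (tiny Mersenne primes) -
P, Q, E = 2147483647, 2305843009213693951, 65537
N = P * Q
D = pow(E, -1, (P - 1) * (Q - 1))

def rsa_sign(msg: bytes) -> int:
    return pow(int.from_bytes(H(msg), "big") % N, D, N)

def rsa_verify(msg: bytes, sig: int) -> bool:
    return pow(sig, E, N) == int.from_bytes(H(msg), "big") % N

# --- the autotomic wrapper -----------------------------------------------------
def autotomic_sign(m: bytes):
    pk_i, sk_i = lamport_keygen()        # fresh ephemeral key pair (ski, pki)
    sigma_i = lamport_sign(sk_i, m)      # sigma_i = S_ski(m_i)
    del sk_i                             # erase ski: it is never used again (q = 1)
    c_i = rsa_sign(encode_pk(pk_i))      # c_i = S_sk(pk_i), certificate on pki
    return sigma_i, pk_i, c_i

def autotomic_verify(m: bytes, sigma_i, pk_i, c_i) -> bool:
    return lamport_verify(pk_i, m, sigma_i) and rsa_verify(encode_pk(pk_i), c_i)

print(autotomic_verify(b"hello", *autotomic_sign(b"hello")))   # True
```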

We show that this paradigm, called autotomic signatures, suffices to transform weakly secure signature schemes (generic-attack secure) into strongly secure ones (adaptively chosen-message secure).

Autotomy² (or self-amputation) is the act whereby an animal (S) severs an appendage (ski, pki) as a self-defense mechanism designed to elude a predator's (A) grasp, the lost body part being regenerated later (ski+1, pki+1). Typically, lizards and geckos captured by the tail will shed part of the tail and thus be able to flee. Hence the name chosen for this paradigm.

2 Formal Framework

In [10], Goldwasser, Micali and Rivest defined existential unforgeability against adaptive chosen-message attacks (EUF-CMA) for digital signatures. EUF-CMA is today's de facto standard definition for digital signature security.

[10] defines a scenario in which the adversary A, given a target user's public key pk, is asked to produce a new message + signature pair (a forgery) valid with respect to pk. To do so, A is granted access to a signature oracle S (in practice, the legitimate signer himself) responding with signatures to challenge messages of A's choosing.

Formally, given a security parameter k, a signature scheme is a set of three algorithms (K, S, V):

– A probabilistic key generation algorithm K which, on input 1^k, outputs a pair (pk, sk) of matching public and private keys.

– A (generally probabilistic) signing algorithm S, which receives a message m and sk, and outputs a signature σ = Ssk(m).

– A (generally deterministic) verification algorithm V, which receives a candidate signature σ, a message m and a public key pk, and returns a bit Vpk(m, σ) representing the validity of σ as a signature of m with respect to pk, i.e.:

σ = Ssk(m) ⇒ Vpk(m, σ) = true

² Αυτοτομία, from αυτός (autós = self) and τέμνειν (temnein = to cut).

Attacks against signature schemes are traditionally classified according to A's goals and resources. The most prevalent goals in the literature are:

Total Break         = A outputs sk.
Universal Forgery   = A signs any message.
Selective Forgery   = A signs a message chosen before pk is known.
Existential Forgery = A signs some message.

It is easy to see that:

Total Break ⇒ Universal Forgery ⇒ Selective Forgery ⇒ Existential Forgery

Remark 1. In the above, A should be read as "an efficient A succeeding with a significant probability". The terms "efficiency" and "significant probability"

admit various formal definitions.

A signature scheme capable of preventing any adversary from generating existential forgeries is called existentially unforgeable (EUF).

Literature abounds on the resources at A's command. In this paper we focus on two specific resource settings: no-message attacks and known-message attacks.

In a no-message attack the adversary uses only pk to produce the forgery, whereas in known-message attacks A is given a list of valid message + signature pairs to work with.

Banishing existential forgeries (i.e. EUF security) removes the lesser form of threat and hence offers the highest form of security (in other words, A is unable to sign even one, very specific, new message).

Nonetheless, a stronger security notion, termed strong unforgeability (SUF)³ [21], was recently defined for probabilistic signature schemes. In a SUF-secure scheme, A cannot even create new signatures of messages legitimately signed by S.

Finally, constraints on A's modus operandi categorize known-message attacks into three subcases:

Attack                                   Abbrev.   Message choice process
Random-Message Attacks                   RMA       by S, at random
Generic Chosen-Message Attacks           GCMA      by A, independently of pk and S's previous answers
Adaptive Chosen-Message Attacks          CMA       by A, as a function of pk and S's previous answers

Clearly, CMA is the most stringent modus operandi: security against CMA implies security against GCMA, which in turn implies security against RMA.

Restricting A to one signature requires even weaker security definitions integrating one-timeness. We thus extend GCMA and CMA as follows:

Definition 2 (OTCMA). One-Time CMA, denoted OTCMA, is a CMA where A is limited to a single signature query.

³ Also called non-malleability.
