Securing group key exchange against strong corruptions and key registration attacks

Source: http://www.ssi.gouv.fr/IMG/pdf/BreMan08b.pdf

Int. J. Applied Cryptography, Vol. 1, No. 2, 2008

Securing group key exchange against strong corruptions and key registration attacks

Emmanuel Bresson
DCSSI Crypto Lab, 51, bld de La Tour-Maubourg, 75700 Paris 07 SP, France
E-mail: emmanuel(à)bresson.org

Mark Manulis*
UCL Crypto Group, Place du Levant 3, 1348 Louvain-la-Neuve, Belgium
E-mail: mark.manulis(à)uclouvain.be
*Corresponding author

Abstract: In Group Key Exchange (GKE) protocols, users usually extract the group key using some auxiliary (ephemeral) secret information generated during the execution. Strong corruptions are attacks by which an adversary can reveal these ephemeral secrets, in addition to the possibly used long-lived keys. Undoubtedly, the security impact of strong corruptions is serious, and thus specifying appropriate security requirements and designing secure GKE protocols appears to be an interesting yet challenging task – the aim of our article. We start by investigating the current setting of strong corruptions and derive some refinements, such as opening attacks, which reveal the ephemeral secrets of users without their long-lived keys. This allows us to consider even stronger attacks against honest, but ‘opened’ users. Further, we define strong security goals for GKE protocols in the presence of such powerful adversaries and propose a 3-round GKE protocol, named TDH1, which remains immune to their attacks under standard cryptographic assumptions. Our security definitions allow adversaries to register users and specify their long-lived keys, and thus, in particular, capture attacks of malicious insiders for the appropriate security goals such as mutual authentication, key confirmation, contributiveness, key control and key-replication resilience.

Keywords: authenticated group key exchange; GKE; contributiveness; insider attacks; key registration; mutual authentication; MA; strong corruptions; tree Diffie–Hellman; TDH1.

Reference to this paper should be made as follows: Bresson, E. and Manulis, M.
(2008) ‘Securing group key exchange against strong corruptions and key registration attacks’, Int. J. Applied Cryptography, Vol. 1, No. 2, pp.91–107.

Biographical notes: Emmanuel Bresson received his PhD at the École normale supérieure in Paris. He works as a cryptography expert for government teams. His main research subjects involve key exchange mechanisms and authentication for multi-party protocols with provable security. He has published his work in many international conference papers and security-focused journals.

Mark Manulis received his PhD in Computer Science from the Ruhr University Bochum in 2007. His research focuses on security and cryptography related to key management, authentication, anonymity and privacy in distributed applications and (wireless) communications.

1 Introduction

A Group Key Exchange (GKE) protocol provides participants with a common secret group key. The main (semantic) security requirement, called Authenticated Key Exchange (AKE; Bresson, Chevassut and Pointcheval, 2001, 2002a), aims to ensure that the established key is indistinguishable from a random one by any outsider adversary. The second requirement, called Mutual Authentication (MA; Bresson, Chevassut and Pointcheval, 2001), aims to ensure that all legitimate protocol participants, and only they, have actually computed identical session group keys. These security requirements have been extensively studied in the literature (see the recent survey in Manulis, 2006). In the most basic scenarios, all users are somehow protected, that is, the adversary has no control over them and is restricted to attacks carried out through the network (which nevertheless include impersonation attacks, where the adversary talks on the network by pretending to be a legitimate user).
In order to take into account further real-life threats on users, the notion of forward secrecy is usually considered. Forward secrecy means that the established session key remains secure ‘in the future’, that is, remains indistinguishable from random even if the adversary learns the used long-lived keys at some later point. The notion is motivated by the fact that, by nature, long-lived keys have a greater chance of being leaked to an attacker than ephemeral secrets. The next known kind of corruptions, referred to as strong corruptions in Shoup (1999), Steiner (2002) and Bresson, Chevassut and Pointcheval (2002a), provides the adversary with even more information: the adversary gets the user’s ephemeral secrets in addition to the long-lived keys, but is not allowed to get the established session group key. Shoup (1999) explains why such a separation makes sense: session keys are typically controlled by the higher-level applications that will use them, while internal, ephemeral secrets are specific to the GKE protocol execution and can be erased once the protocol is finished. Actually, in GKE it seems impossible to obtain secrecy when ephemeral secrets are revealed during the protocol session: if the adversary (even ‘passively’) can learn all intermediate key material, then he will likely be able to compute the final group key. On the other hand, in dynamic groups there are many cases where ephemeral secrets of a particular session are subsequently re-used (in addition to some refreshed data) to update the group key. It is then important to ask how the knowledge of ephemeral secrets in a corrupted session impacts the security of other sessions (past and future). This is precisely where the notion of strong forward/backward secrecy arises. At this point, we make precise the corruption types considered in this article. First, we consider users who are corrupted by the adversary.
We assume that such users are corrupted in a passive mode (rather than active), i.e. the adversary can only ‘read’ the secrets held by the attacked user (whether these secrets are ephemeral or long-lived). Through the knowledge of the long-lived key, the adversary can (typically) inject signed messages on behalf of the user while preventing the original user’s messages from being delivered. In fact, this allows an active participation of the adversary during the protocol execution, and thus we say the adversary is active; but this refers to his ability to control the network, not the user’s behaviour. On the other hand, we also wish to capture security threats coming from users that are fully controlled by the adversary. Therefore, we allow the adversary to introduce new users and to register their long-lived keys. The adversary that corrupts or adds users is adaptive (as opposed to static) in the sense that he chooses which users to corrupt or to introduce based on the information gained so far, at any stage of the protocol execution. Secondly, when considering user corruptions, in order to further refine the security definitions, our intention is to separate the long-lived key from the internal state which contains the ephemeral secrets, and to specify when the adversary can learn them. Through this separation, we explicitly allow the adversary to reveal ephemeral secrets without revealing the long-lived key; we call this opening attacks. They are the balanced complement of weak corruption attacks, where long-lived keys are revealed but ephemeral secrets are not. We note that under opening attacks, there is hope to prevent the adversary from actively participating in the protocol on behalf of the opened parties, since he does not receive their long-lived keys.
Finally, we notice that the strong corruption model in its current form is the best (or worst) of two worlds: if the adversary corrupts, then he obtains the long-lived keys and the ephemeral data; if he does not corrupt, then he obtains nothing. Separating the attacks into two distinct modes allows us to refine and opt for stronger security definitions. Considering an adversary that corrupts and introduces users allows us to address security threats against GKE protocols that may arise in the presence of malicious participants/insiders – corrupted or introduced users whose long-lived keys are known to the adversary. The adversary acting as malicious participants might be able, via opening attacks, to obtain information from the internal states of the honest users; the goal of the adversary is then to influence their behaviour. Usually, the AKE requirement is defined from the perspective of some (fresh) session, and thus makes sense only if the adversary is restricted neither to participate on behalf of a user nor to obtain any ephemeral secret in that session, i.e. during the protocol session all active users must be honest and not opened. On the other hand, the MA requirement remains meaningful even without such limitations. Even if achieving MA without AKE is of low interest for key exchange protocols, it is still legitimate to ask whether achieving MA under strong corruptions during the attacked session is possible – especially since the MA requirement still makes sense in the presence of malicious participants and may also be useful for protocols other than key exchange. Furthermore, the consideration of malicious insiders raises attacks related to key control and contributiveness: for instance, think of a participant who can force the same key to be obtained in two different sessions (e.g. key-replication; Krawczyk, 2005).
Here, we recall that the question of who controls the value of the group key marks the important difference between GKE and group key transport protocols (Bresson and Manulis, 2007). In GKE protocols, it is essential that the key is computed from inputs (contributions) of all participants, such that even a strict subset of participants cannot enforce the final value of the group key. Especially when considering asynchronous communication and malicious participants, who can choose their own contributions arbitrarily and may additionally reveal internal states of honest participants at any stage of the protocol execution through opening attacks, preventing key control and ensuring contributiveness for the honest users appears to be a challenging task.

1.1 Related work

1.1.1 Original definitions

The AKE- and MA-security requirements (without strong corruptions and only for honest users) were originally given by Bresson et al. (2001); see Katz and Yung (2003), Dutta, Barua and Sarkar (2004) and Kim, Lee and Lee (2004) for variants, and Bresson, Manulis and Schwenk (2007) for some flaws. Bresson, Chevassut and Pointcheval (2002a) and Bresson et al. (2001) modelled strong corruptions, but for AKE-security only, following the ideas of Shoup (1999) and Canetti and Krawczyk (2001) for two-party protocols, for which such strong AKE-security has recently been modelled in LaMacchia, Lauter and Mityagin (2007). Katz and Shin (2005) extended the definition of MA-security by assuming misbehaving (malicious) protocol participants, and they provided a concrete generic solution (compiler) to prevent these attacks, however, without considering opening attacks against ephemeral secrets, or key control and contributiveness.
The significance of security against malicious participants was also recognised by Choo, Boyd and Hitchcock (2005) through unknown key-share attacks, by which an active adversary tries to make an honest protocol participant believe that the group key is shared with one party when it is in fact shared with another party.

1.1.2 On key control and contributiveness

Mitchell, Ward and Wilson (1998), see also Boyd and Mathuria (2003), gave an informal definition of key control, describing attacks where participants try to influence the resulting value of the key. Also informally, Ateniese, Steiner and Tsudik (1998) proposed the notion of contributiveness, meaning that all participants must equally contribute to the computation of the key and guarantee its freshness (see Steiner, 2002); these definitions emphasise the difference between key distribution and key exchange (Menezes, van Oorschot and Vanstone, 1996). Following these requirements, Bresson and Catalano (2004) considered the (weaker) case where participants are honest but have a biased source of randomness, so that an adversary can possibly gain extra information about the key. Deepening this, Bohli, Vasco and Steinwandt (2007) gave definitions of key control and contributiveness considering the (stronger) case where participants deliberately wish to influence the resulting value of the group key. Still, their definitions are based on the model from Bresson et al. (2001) and thus do not consider strong corruptions. Finally, Krawczyk (2005) mentioned that a key exchange protocol should prevent key-replication attacks, whose goal is to influence the acceptance of the same key in different protocol sessions.

1.1.3 Other work close to ours

Independent of our work, Desmedt et al. (2006) considered a property of non-malleability for GKE protocols, which is close to key control and contributiveness.
Their security goal, called shielded-insider privacy, aims to prevent attacks where an outsider adversary, upon communication with some malicious participants prior to the protocol execution, obtains information about the later computed group key. In order to ensure shielded-insider privacy, they use Pedersen’s (1991) commitments; however, in the case of strong corruptions, the committed secrets can still be revealed to the adversary (due to opening attacks), so that malicious participants would still be able to bias the computation. In our model, we do not consider this scenario explicitly, but focus on the (in)ability of the adversary representing malicious participants to predict the resulting value of the later established group key. Recently, Manulis (2006) analysed several existing models for GKE protocols with respect to strong corruptions: he pointed out that security against strong corruptions is currently considered in a rather restrictive way, namely only for strong forward secrecy within AKE-security. Moreover, none of the available game-based security models is complete enough to unify the most important definitions of AKE-security, MA-security, and key control and contributiveness.

1.2 Contributions and organisation

We solve most of the problems put in light above by revisiting the GKE security model from the perspective of strong corruptions and key registration attacks. Further, we design a provably secure GKE protocol that resists these attacks.

1.2.1 Security model and stronger definitions

As our first contribution, in Section 2 we provide the following:

• We model a powerful adversary who is given access to strong corruptions, by describing an appropriate game-based security model for GKE protocols, thus significantly extending the ideas from Bresson, Chevassut and Pointcheval (2002a).

• We formalise strong AKE-security by considering opening attacks that may occur in earlier and later protocol sessions.
• In our definition of strong MA-security, we consider an adversary that acts as malicious participants during the attacked session and opens all other (honest) users; due to the opening attacks, our definition is stronger than the related one from Katz and Shin (2005).

• We formalise strong contributiveness as security against attacks that enforce any value chosen by the adversary as a group key (this includes key-replication; Krawczyk, 2005); since the adversary can act as malicious participants and open all other (honest) participants, our requirement is stronger compared to Bohli, Vasco and Steinwandt (2007).

• We strengthen the GKE security model by allowing the adversary to introduce users and register their long-lived keys; this is similar to the recent models in two-party key exchange (LaMacchia, Lauter and Mityagin, 2007; Menezes and Ustaoglu, 2008) and is the main difference to the extended abstract of this article, which appeared in Bresson and Manulis (2008), and also to many previous GKE security models.

1.2.2 Group Key Exchange protocol with strong security

As a second contribution, in Section 3 we describe a 3-round GKE protocol, named TDH1, and prove that it provides strong versions of AKE-security, MA-security and contributiveness, while the deployed techniques can be seen as general for many GKE protocols. TDH1 tolerates the following numbers of malicious insiders (out of n participants in total): for MA-security up to n–2, for contributiveness up to n–1, whereby all remaining honest users might be opened! Our security proofs do not rely on the Random Oracle Model (ROM; Bellare and Rogaway, 1993). The AKE-security of TDH1 is based on the Tree Decisional Diffie–Hellman (TDDH) assumption, introduced by Kim, Perrig and Tsudik (2004a,b).
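To give some intuition for tree-based Diffie–Hellman before the formal treatment, the following toy Python sketch (our own illustration with placeholder group parameters, not the actual TDH1 protocol) derives a root secret along a full binary tree: each leaf holds a user's ephemeral exponent, and each internal node's secret is the Diffie–Hellman value of its two children, computable by either side from the sibling's public blinded value.

```python
# Illustrative sketch (not TDH1 itself): deriving a group secret along a
# full binary tree of Diffie-Hellman exchanges, in the style of the
# tree-based schemes of Kim, Perrig and Tsudik. Toy parameters only.
import secrets

p = 0xFFFFFFFFFFFFFFC5  # toy 64-bit prime; real protocols use much larger groups
g = 5

def node_secret(tree):
    """Recursively derive the secret of a tree node.

    A leaf is a user's ephemeral exponent x; an internal node's secret is
    g^(x_left * x_right) mod p, computed as (g^x_right)^x_left, so each
    side only needs the sibling's *public* blinded value g^x.
    """
    if isinstance(tree, int):          # leaf: ephemeral secret exponent
        return tree
    left, right = tree
    x_l = node_secret(left)
    x_r = node_secret(right)
    blinded_r = pow(g, x_r, p)         # public value contributed by the right subtree
    return pow(blinded_r, x_l, p)      # = g^(x_l * x_r) mod p

# Four users arranged in a complete binary tree; the root secret would be
# the basis of the group key (a real protocol adds confirmation steps).
users = [secrets.randbelow(p - 2) + 1 for _ in range(4)]
tree = ((users[0], users[1]), (users[2], users[3]))
k_root = node_secret(tree)
```

Note that both subtrees reach the same root value, since (g^x_l)^x_r = (g^x_r)^x_l mod p; this symmetry is what lets every leaf compute the root secret from public blinded values alone.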
We give a formal definition of the underlying TDDH problem and show its polynomial equivalence to the standard Decisional Diffie–Hellman (DDH) problem (Boneh, 1998) by a proof which addresses arbitrary full binary trees, i.e. trees where each node has exactly zero or two children (note that Kim, Perrig and Tsudik, 2004a,b addressed only a subset, namely linear and complete trees).

2 Strong security definitions for Group Key Exchange

We start by (re)stating existing definitions and classical notations using the game-based approach. Note that another way (which we do not consider here) to define security requirements is to use the simulation-based approach, e.g. Katz and Shin (2005), but see Remark 1.

2.1 Protocol execution and participants

2.1.1 Users, instance oracles and long-lived keys

Let 𝒰 be a set of at most N users. Each user U_i ∈ 𝒰 holds a long-lived key LL_i and has several instances, called oracles and denoted Π_i^s for s ∈ ℕ, participating in distinct concurrent executions. (When we do not refer to a specific user U_i, we use the index U, e.g. Π_U^s.)

2.1.2 Internal states

Every Π_U^s maintains an internal state information state_U^s which is composed of all ephemeral secret information used during the protocol execution. The long-lived key LL_U is, by nature, excluded from it (moreover, the long-lived key is specific to the user, not to the oracle). An oracle Π_U^s is unused until initialisation (by which it is given the long-lived key LL_U). It then becomes a group member, associated to a particular session, and turns into the stand-by state, where it waits for an invocation to execute the protocol. When the protocol starts, the oracle learns its partner id pid_U^s (and possibly its session id sid_U^s) and turns into a processing state, where it sends, receives and processes messages. During that stage, the internal state information state_U^s is maintained.
After having computed k_U^s, oracle Π_U^s accepts and terminates the execution of the protocol operation (possibly after some additional auxiliary steps), meaning that it will not send or receive further messages. If the protocol fails, Π_U^s terminates without accepting, and k_U^s is set to an undefined value.

2.1.3 Session group key, session and partner IDs, group members

Every session is identified by a unique, publicly known session id sid_U^s. In each session, each oracle Π_U^s gets a value pid_U^s that contains the identities of the participating users (including U) and computes the session group key k_U^s ∈ {0,1}^κ, where κ is a security parameter. By G(Π_i^s) := {Π_j^t | U_j ∈ pid_i^s and sid_j^t = sid_i^s}, we denote the group of oracle Π_i^s, and we say that Π_i^s and Π_j^t are partnered if Π_j^t ∈ G(Π_i^s) and Π_i^s ∈ G(Π_j^t). Sometimes, we simply write G to denote the group of oracles participating in the same protocol session. Each oracle in G is then called a group member. Note that the oracles in G may be ordered, e.g. lexicographically based on the user identities.

Definition 1. A GKE protocol P consists of a key generation algorithm KeyGen and a protocol Setup:

P.KeyGen(1^κ). On input a security parameter 1^κ, each user in 𝒰 is provided with a long-lived key LL_U.

P.Setup(S). On input a set S of n unused oracles, a new group G is created and set to be S. A probabilistic interactive protocol is executed between the oracles in G such that all oracles accept with the session group key and terminate.

A protocol is said to be correct if, when no adversary is present, all participants compute the same key. Note that our definition is independent of the communication channel, e.g. (asymmetric) broadcast, multi-cast or unicast.

2.2 Strong adversarial model

We now consider an adversary 𝒜, a Probabilistic Polynomial-Time (PPT) algorithm having complete control over the network.
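The partnering relation defined above is just a mutual-membership test. As a small illustration (with hypothetical pid/sid lookup tables, not notation from the model), it can be coded as:

```python
# Minimal sketch of the partnering predicate: two oracles are partnered
# iff each lies in the other's group. Oracles are (user, instance) pairs;
# pid and sid are hypothetical lookup tables filled in during a session.
def group_of(oracle, pid, sid, oracles):
    """G(oracle): all oracles of users in oracle's pid that share its sid."""
    return {(u, t) for (u, t) in oracles
            if u in pid[oracle] and sid[(u, t)] == sid[oracle]}

def partnered(o1, o2, pid, sid, oracles):
    """o1 and o2 are partnered iff each belongs to the other's group."""
    return (o2 in group_of(o1, pid, sid, oracles)
            and o1 in group_of(o2, pid, sid, oracles))

# Two oracles of users A and B running the same session "s1":
oracles = {("A", 0), ("B", 0)}
pid = {("A", 0): {"A", "B"}, ("B", 0): {"A", "B"}}
sid = {("A", 0): "s1", ("B", 0): "s1"}
```

The symmetry of the check mirrors the definition: membership in one direction only (e.g. matching pid but a different sid on one side) is not enough to make two oracles partnered.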
As described in the following, 𝒜 can add users to the set 𝒰 and interact with protocol participants via queries to their oracles. Note that our security model (similar to Bresson, Chevassut and Pointcheval, 2002a; Katz and Shin, 2005; Bohli, Vasco and Steinwandt, 2007) does not deal with the issues of denial-of-service and fault-tolerance; our security definitions aim to prevent honest participants from accepting a group key biased by malicious insiders.

AddUser(U, Λ). If U ∉ 𝒰, then U, with the long-lived (public) key contained in Λ, is added to 𝒰; Λ may also contain some further information.

Execute(S). 𝒜 eavesdrops on an honest execution of P.Setup between a chosen set of oracles and is given the resulting transcript of P.Setup(S).

Send(Π_U^s, m). 𝒜 sends message m to oracle Π_U^s and receives the response Π_U^s would have generated after having (honestly) processed message m. The response may be empty if m is incorrect. The adversary can have Π_U^s invoke P.Setup with the oracles in S via a query of the form Send(‘start’, Π_U^s, S): 𝒜 gets the first message that Π_U^s would generate in this case.

RevealKey(Π_U^s). 𝒜 is given the session group key k_U^s, provided Π_U^s has accepted.

RevealState(Π_U^s). 𝒜 is given the internal state information state_U^s, which includes the ephemeral secrets.

Corrupt(U). 𝒜 is given the long-lived key LL_U.

Test(Π_U^s). 𝒜 tests the semantic security of k_U^s. Formally, if Π_U^s has accepted, a bit b is privately flipped and 𝒜 is given k_U^s if b = 1 and a random string if b = 0.

The adversary has two ways of learning LL_U: by asking for it – Corrupt(U) – or by registering it – AddUser(U, Λ). For simplicity, in all definitions of security, unless otherwise stated, we treat U as corrupted if either of these queries has occurred.

Remark 1.
The separation of the queries RevealState and Corrupt/AddUser explicitly provides the possibility for the opening attacks mentioned in the introduction. By asking the RevealState query to an instance oracle Π_U^s, the adversary reads out its internal state, but cannot impersonate the honest U in the protocol execution, unless a Corrupt(U) query is asked (in which case all instance oracles Π_U^s become malicious insiders through possible impersonation actions of 𝒜). Thus, just opening a user does not make him malicious. In contrast, simulation-based security models (e.g. Universal Composability/Reactive Simulatability) typically handle strong corruptions as follows: upon corrupting a user, the adversary learns all information known to that user and controls him thereafter. Obviously, in the simulation-based models, opening attacks (which strengthen the adversary) are currently not modelled.

2.3 Strong AKE-security

In the case of strong AKE-security, one must also consider the knowledge of the adversary about the long-lived keys and ephemeral secrets of session participants. If the adversary obtains a long-lived key before the session is started, then he can impersonate a user and thus learn the session key. And if the adversary is allowed to obtain long-lived keys before the session is finished, then he should be restricted from actively using these keys during that time (Katz and Yung, 2003). On the other hand, the adversary should be allowed to reveal ephemeral secrets of participants before the session starts¹ and after the session is finished (defined as strong forward and weak backward secrecy in Bresson, Manulis and Schwenk, 2007). Note that if one allows long-lived key corruptions in later sessions, revealing ephemeral secrets during the attacked session would not make sense.
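As a small illustration of the bookkeeping behind Remark 1 (a hypothetical harness, not part of the model itself): Corrupt and AddUser both mark a user as corrupted, while RevealState alone leaves him honest but ‘opened’. This could be tracked as follows:

```python
# Hypothetical bookkeeping sketch for the corruption rules above:
# Corrupt and AddUser both mark a user as corrupted, while RevealState
# only marks the queried oracle as opened (the user stays honest).
class CorruptionTracker:
    def __init__(self):
        self.corrupted = set()   # users whose long-lived key the adversary knows
        self.opened = set()      # oracles whose ephemeral state was revealed

    def add_user(self, user):
        """AddUser(U, Lambda): adversary-registered users count as corrupted."""
        self.corrupted.add(user)

    def corrupt(self, user):
        """Corrupt(U): the long-lived key is handed to the adversary."""
        self.corrupted.add(user)

    def reveal_state(self, user, instance):
        """RevealState(Pi_U^s): an opening attack on a single oracle."""
        self.opened.add((user, instance))

    def is_malicious(self, user):
        return user in self.corrupted

    def is_opened(self, user, instance):
        return (user, instance) in self.opened

tracker = CorruptionTracker()
tracker.reveal_state("U1", 0)   # opening attack: U1 remains honest
tracker.corrupt("U2")           # U2 becomes a malicious insider
```

The point of the separation is visible in the two sets: an entry in `opened` never implies an entry in `corrupted`, so an opened oracle cannot be impersonated by the adversary.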
In order to model the described requirements for the adversarial knowledge, we define the notion of oracle freshness, extending the definitions given in Bresson, Chevassut and Pointcheval (2002a) and Katz and Yung (2003) by conditions concerning key registration and opening attacks.

Definition 2 (Oracle Freshness). In the execution of P, the oracle Π_U^s is fresh if all of the following hold:

1 no U_i ∈ pid_U^s has been added by 𝒜 via a corresponding AddUser query

2 no U_i ∈ pid_U^s is asked for a Corrupt query prior to a query of the form Send(Π_j^t, m) with U_j ∈ pid_U^s, until Π_U^s and its partners accept

3 neither Π_U^s nor any of its partners is asked for a RevealState query until Π_U^s and its partners accept

4 neither Π_U^s nor any of its partners is asked for a RevealKey query after having accepted.

We say that a session is fresh if all participating oracles are fresh. We note that the above definition ensures that if at least one oracle participating in a session is fresh, then the whole session is fresh too, because the freshness of one oracle requires the freshness of all its partners. This notion of oracle freshness simplifies the following definition of strong AKE-security for GKE protocols.

Definition 3 (Strong AKE-Security). Let P be a correct GKE protocol and b a uniformly chosen bit. Consider an adversary 𝒜 against the AKE-security of P. We define the adversarial game Game^{ake-b}_{𝒜,P}(κ) as follows:

• after initialisation, 𝒜 interacts with instance oracles via queries

• at some point, 𝒜 asks a Test query to a fresh oracle Π_U^s which has accepted

• 𝒜 continues interacting with instance oracles

• when 𝒜 terminates, it outputs a bit, which we define to be the output of the game.

We define Adv^{ake}_{𝒜,P}(κ) := |2 Pr[Game^{ake-b}_{𝒜,P}(κ) = b] − 1| and denote by Adv^{ake}_P(κ) the maximum advantage over all PPT adversaries 𝒜. We say that a GKE protocol P provides strong AKE-security if this advantage is negligible.
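To make the advantage measure concrete, here is a toy, self-contained estimate of the quantity |2 Pr[b' = b] − 1| for a trivial guessing adversary (an illustration only: the keys and the adversary are placeholders, and a real security proof bounds the advantage analytically rather than empirically):

```python
# Toy illustration of the AKE advantage: run the indistinguishability
# experiment many times against a "random guess" adversary and estimate
# Adv = |2 Pr[b' = b] - 1|, which should be close to 0.
import secrets

def ake_experiment(adversary):
    """One run of the Test experiment: hand out the real key or a random
    string depending on a hidden bit b, and check the adversary's guess."""
    b = secrets.randbelow(2)
    real_key = secrets.token_bytes(16)
    random_key = secrets.token_bytes(16)
    challenge = real_key if b == 1 else random_key
    return adversary(challenge) == b

def estimate_advantage(adversary, runs=10_000):
    wins = sum(ake_experiment(adversary) for _ in range(runs))
    return abs(2 * wins / runs - 1)

# A trivial adversary that ignores the challenge and guesses at random:
adv = estimate_advantage(lambda challenge: secrets.randbelow(2))
```

Since the sketched keys are independent of b from the guessing adversary's viewpoint, its winning probability is 1/2 and the estimated advantage concentrates near zero, matching the intuition that a negligible advantage means ‘no better than guessing’.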
We again stress that (strong) AKE-security makes sense for adversaries that are not able to corrupt users and act on their behalf during the attacked session, or to reveal any ephemeral secrets used in that session – this is guaranteed by the freshness property.

2.4 Strong MA-security

We say that Π_U^s is a malicious participant/insider if the adversary has previously asked Corrupt(U) or AddUser(U, Λ). In all other cases, Π_U^s is honest. The following definition of MA-security unifies the requirements of mutual authentication, key confirmation and unknown key-share resilience. It considers malicious participants and allows opening attacks against all honest users at any protocol stage.

Definition 4 (Strong MA-Security). Let P be a correct GKE protocol and 𝒜 an adversary who is allowed to query Send, Execute, RevealKey, RevealState, Corrupt and AddUser. We denote this interaction as Game^{ma}_{𝒜,P}(κ). We say that 𝒜 wins if at some point there exists an honest user U_i whose instance oracle Π_i^s has accepted with k_i^s, and another user U_j ∈ pid_i^s that is uncorrupted at the time Π_i^s accepts, such that

1 there is no instance oracle Π_j^t with (pid_j^t, sid_j^t) = (pid_i^s, sid_i^s), or

2 there is an instance oracle Π_j^t with (pid_j^t, sid_j^t) = (pid_i^s, sid_i^s) that has accepted with k_j^t ≠ k_i^s.

The maximum probability of this event is denoted Succ^{ma}_P(κ); we say that a GKE protocol P provides strong MA-security if this probability is negligible.

2.5 Strong contributiveness

The following definition models attacks related to key control, contributiveness and unpredictability of group keys in the presence of malicious participants. Informally, we consider an active adversary 𝒜 that can add, corrupt and open participants at any stage of the protocol execution, in such a way that there exists at least one honest oracle (which may nevertheless be opened!) that accepts a session group key chosen by the adversary.
This subsumes key-replication attacks (Krawczyk, 2005), by which honest users are forced to accept a group key from another session.

Definition 5 (Strong Contributiveness). Let P be a correct GKE protocol and 𝒜 an adversary operating in two stages (prepare and attack) and having access to the queries Send, Execute, RevealKey, RevealState, Corrupt and AddUser. We define the game Game^{con}_{𝒜,P}(κ) as follows:

• 𝒜 (prepare) interacts with instance oracles via queries

• 𝒜 (prepare) outputs k̃ ∈ {0,1}^κ and some state information ζ

• the following sets are built: Θ_us, consisting of all honest used oracles; Θ_std, consisting of all honest oracles that are in the stand-by state²; and Ψ, consisting of the session ids sid_i^t for every Π_i^t ∈ Θ_us

• 𝒜 (attack, ζ) interacts with instance oracles via queries

• at the end of this stage, 𝒜 outputs (s, U).

The adversary 𝒜 wins in Game^{con}_{𝒜,P}(κ) if all of the following hold:

1 Π_U^s is honest, has terminated accepting k_U^s = k̃, Π_U^s ∉ Θ_us \ Θ_std, and sid_U^s ∉ Ψ

2 there are at most n–1 corrupted users U_i having oracles Π_i^t partnered with Π_U^s.

We define Succ^{con}_{𝒜,P}(κ) := Pr[𝒜 wins in Game^{con}_{𝒜,P}(κ)] and denote by Succ^{con}_P(κ) the maximum probability of this event over all PPT adversaries 𝒜; we say that P provides strong contributiveness if this probability is negligible in κ.

The first requirement ensures that Π_U^s belongs to an honest user. The set Θ_us \ Θ_std consists of all oracles that, at the end of the prepare stage, have already terminated or remain in the processing state. Thus, requiring Π_U^s ∉ Θ_us \ Θ_std prevents the case where 𝒜, while being a session participant, outputs k̃ for a still running protocol execution, which is then accepted by an oracle Π_U^s participating in the same execution (this is not an attack, since participants do not compute group keys synchronously). Similarly, the condition sid_U^s ∉ Ψ prevents 𝒜, while being in the attack stage, from outputting (s, U) such that Π_U^s has accepted with k̃ already in the prepare stage.
Finally, since every session id is unique, sid_U^s ∉ Ψ holds if at least one new session has been executed with Π_U^s in the attack stage. The second requirement allows 𝒜 to corrupt at most n–1 (out of n in total) participants in the session where Π_U^s accepts with k̃. Note also that U must be honest, but 𝒜 is allowed to reveal the internal state of Π_U^s during the execution of the attack stage (this is because our model separates LL_U from state_U^s). The goal of the adversary is to influence the honest participants to accept the chosen key. Our game appears