This disclosure relates generally to hardware verification, and in particular but not exclusively, to binding authentication to protect against tampering and subversion by substitution.
The unique properties of PUFs provide several advantages to cryptographic constructions. In general, PUFs may provide some or all of three main advantages: (1) eliminating private key storage, (2) providing tamper detection, and (3) establishing a hardware root-of-trust. Private key storage can be eliminated by evaluating a PUF to dynamically regenerate a value unique to an identified piece of hardware having that PUF. As to tamper detection, a PUF's unclonable properties (e.g., wire delays, resistance) may be such that modification to the PUF irreversibly alters the PUF's mapping from challenges (inputs) to responses (outputs) after enrollment (however, not against malicious modifications before enrollment, e.g., Becker et al., “Stealthy Dopant-Level Hardware Trojans,” Cryptographic Hardware and Embedded Systems—CHES 2013, volume 8086 of Lecture Notes in Computer Science, pages 197-214, Springer, 2013). These PUF properties may be used to produce a hardware-unique, tamper-protected value from which a hardware root-of-trust can be established.
Rührmair et al. (“Modeling Attacks on Physical Unclonable Functions,” Proceedings of the 17th ACM conference on Computer and communications security, CCS '10, pages 237-249, ACM, 2010) define three distinct classes of PUF devices: (1) Weak PUFs, which support only a small number of challenges (in the extreme case a single challenge) and whose responses are used chiefly to derive a secret key; (2) Strong PUFs, which support a challenge space too large to be fully read out in limited time and whose challenge-response behavior is difficult to predict; and (3) Controlled PUFs, which embed a Strong PUF within control logic that restricts how challenges may be applied and how responses are revealed.
PUF output is noisy in that it varies slightly despite evaluating the same input. This is generally addressed with fuzzy extraction, a method developed to eliminate noise in biometric measurements. (See Juels et al., “A Fuzzy Commitment Scheme,” Proceedings of the 6th ACM conference on Computer and Communications Security, CCS '99, pages 28-36, ACM, 1999). Fuzzy extraction may in part be employed within a device having a PUF such as within an auxiliary control unit, such that the output is constant for a fixed input. Fuzzy extraction (or reverse fuzzy extraction) may for example employ a “secure sketch,” as described by Juels et al., to store a sensitive value pipriv to be reconstructed and a helper string helperi for recovering pipriv. A secure sketch SS for input string O, where ECC is a binary (n, k, 2t+1) error correcting code of length n capable of correcting t errors and pipriv←{0, 1}k is a k-bit value, may for example be defined as SS(O; pipriv)=O⊕ECC(pipriv), with the helper string set to helperi=SS(O; pipriv). The original value pipriv then may be reproduced given the helper string helperi and an input O′ within a maximum Hamming distance t of O using a decoding scheme D for the error-correcting code ECC and O′, as D(helperi⊕O′)=D(O⊕ECC(pipriv)⊕O′)=pipriv.
A physical unclonable function Pd: {0, 1}κ1→{0, 1}κ2 bound to a device d maps a challenge c drawn from its input space to a response P(c). The security of such a function is typically characterized through games played between an adversary and the PUF device over subsets of the challenge-response space. In a prediction game, the adversary adaptively queries the PUF on challenges of its choosing and must then output the response P(c) for a challenge c it has not queried; the game proceeds in rounds of such queries followed by the adversary's prediction. In an indistinguishability game, the adversary must distinguish the PUF's output from a random value, outputting a guess bit b; this game proceeds similarly, and the PUF is considered secure when the adversary's advantage over random guessing is negligible.
Literature on physical unclonable functions evaluates the properties of PUF hardware design (e.g., Gassend et al., “Silicon Physical Random Functions,” Proceedings of the 9th ACM conference on Computer and communications security, CCS '02, pages 148-160, ACM, 2002; Katzenbeisser et al., “PUFs: Myth, Fact or Busted? A Security Evaluation of Physically Unclonable Functions (PUFs) Cast in Silicon,” Cryptographic Hardware and Embedded Systems—CHES '12, pages 283-301, Springer, 2012; Ravikanth, Physical one-way functions, Ph.D. thesis, 2001; Rührmair et al., “Applications of High-Capacity Crossbar Memories in Cryptography,” IEEE Trans. Nanotechnol., volume 10, no. 3:489-498, 2011; Suh et al., “Physical Unclonable Functions for Device Authentication and Secret Key Generation,” Proceedings of the 44th annual Design Automation Conference, DAC '07, pages 9-14, ACM, 2007; Yu et al., “Recombination of Physical Unclonable Functions,” GOMACTech, 2010), provides formal theoretical models of PUF properties, and designs protocols around those definitions (cf. Armknecht et al., “A Formalization of the Security Features of Physical Functions,” Proceedings of the 2011 IEEE Symposium on Security and Privacy, SP '11, pages 397-412, IEEE Computer Society, 2011; Brzuska et al., “Physically Uncloneable Functions in the Universal Composition Framework,” Advances in Cryptology—CRYPTO 2011—31st Annual Cryptology Conference, volume 6841 of Lecture Notes in Computer Science, page 51, Springer, 2011; Frikken et al., “Robust Authentication using Physically Unclonable Functions,” Information Security, volume 5735 of Lecture Notes in Computer Science, pages 262-277, Springer, 2009; Handschuh et al., “Hardware Intrinsic Security from Physically Unclonable Functions,” Towards Hardware-Intrinsic Security, Information Security and Cryptography, pages 39-53, Springer, 2010; Kirkpatrick et al., “PUF ROKs: A Hardware Approach to Read-Once Keys,” Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, ASIACCS '11, pages 155-164, ACM, 2011; Paral et al., “Reliable and Efficient PUF-based Key Generation using Pattern Matching,” IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), pages 128-133, 2011; Rührmair et al., “PUFs in Security Protocols: Attack Models and Security Evaluations,” 2013 IEEE Symposium on Security and Privacy, volume 0:286-300, 2013; van Dijk et al., “Physical Unclonable Functions in Cryptographic Protocols: Security Proofs and Impossibility Results,” Cryptology ePrint Archive, Report 2012/228, 2012; Wu et al., “On Foundation and Construction of Physical Unclonable Functions,” 2010; Yu et al., “Lightweight and Secure PUF Key Storage using Limits of Machine Learning,” Proceedings of the 13th international conference on Cryptographic Hardware and Embedded Systems, CHES'11, pages 358-373, Springer, 2011).
Prior art PUF-based protocols fall into two broad categories: (1) a simple challenge-response provisioning process like the one described below in Protocol 1, or (2) cryptographic augmentation of a device's PUF response such that the raw PUF output never leaves the device. These approaches may require external entities to handle auxiliary information (e.g., challenges and their associated helper data) that is unsupported or superfluous in existing public key cryptography standards, and/or involve a hardware device authenticating to a challenge applied during an initial enrollment process, and/or are premised on the hardware device always recovering essentially the same response to a given challenge.
While a given challenge-response pair reflects the hardware state of a device when the pair was collected, the device will age and its hardware state drift over time. As the PUF hardware ages, the number of errors present in the responses may increase. Maiti et al. (“The Impact of Aging on an FPGA-Based Physical Unclonable Function,” International Conference on Field Programmable Logic and Applications (FPL), pages 151-156, 2011) study the effects of simulated aging on PUF hardware by purposefully stressing the devices beyond normal operating conditions. By varying both temperature and voltage, the authors were able to show a drift in the intra-PUF variation that, over time, will lead to false negatives. Maiti et al. note that the error drift strictly affected the intra-PUF error rate distribution, which tends towards the maximum entropy rate of 50%. After enough time elapses, the hardware device may no longer be able to recover the proper response for the enrolled challenge.
For example, assume that a specific challenge ci is issued to a device during enrollment, with the device returning a public token {commitmenti, helperi} that links the device's hardware identity with the challenge ci. To be authenticated, the device uses the pair {ci, helperi} to recover its private identity pipriv.
Kirkpatrick et al. (“Software Techniques to Combat Drift in PUF-based Authentication Systems,” Workshop on Secure Component and System Identification, 2010) describe a method for detecting hardware aging drift, and responding by updating the device's challenge-commitment pair stored on an external server. This approach requires that the server maintain auxiliary information in the form of challenge-commitment pairs, however, and that a periodic protocol be executed between the server and the device.
Another challenge facing PUF-based systems is side channel attacks, which seek to observe and analyze auxiliary environmental variables to deduce information about the sensitive PUF output. For example, electromagnetic (EM) analysis (e.g., Merli et al., “Semi-invasive EM Attack on FPGA RO PUFs and Countermeasures,” Proceedings of the Workshop on Embedded Systems Security, WESS '11, pages 2:1-2:9, ACM, 2011; Merli et al., “Side-Channel Analysis of PUFs and Fuzzy Extractors,” Trust and Trustworthy Computing, volume 6740 of Lecture Notes in Computer Science, pages 33-47, Springer, 2011; Schuster, Side-Channel Analysis of Physical Unclonable Functions (PUFs), Master's thesis, Technische Universitat Munchen, 2010) extracts PUF output bits by observing changing EM fields during device operation. Another side channel attack methodology is (simple or differential) power analysis (e.g., Karakoyunlu et al., “Differential template attacks on PUF enabled cryptographic devices,” IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6, 2010; Kocher et al., “Introduction to Differential Power Analysis,” Cryptography Research, Inc., 2011; Kocher et al., “Differential Power Analysis,” Proceedings of the 19th Annual International Cryptology Conference on Advances in Cryptology, CRYPTO '99, pages 388-397, Springer, 1999; Rührmair et al., “Power and Timing Side Channels for PUFs and their Efficient Exploitation,” 2013), where power traces are collected from a device and analyzed to extract sensitive information (e.g., PUF output bits). Over many observations of a device recovering essentially the same response to a fixed challenge, an adversary can discover the sensitive PUF output.
While it is known that the effectiveness of side channel attacks may in some systems be reduced by introducing randomness (Coron, “Resistance Against Differential Power Analysis For Elliptic Curve Cryptosystems,” Cryptographic Hardware and Embedded Systems, volume 1717 of Lecture Notes in Computer Science, pages 292-302, Springer, 1999), disguising sensitive values in this way may leave some vulnerability since the underlying values remain static and/or introduce additional complexity and/or processing overhead.
In one embodiment of the invention, a reconfigurable physical unclonable function (‘RPUF’ or ‘reconfigurable PUF’) is used with one parameter to recover sensitive values (e.g., a secret or a share of a secret) and a different parameter to encode and store values (e.g., challenge-helper pairs) correlated to the sensitive values. In another embodiment, a pair of RPUFs is used instead of a single RPUF, with one RPUF used to recover sensitive values and the other RPUF used to encode and store correlated values.
In another embodiment, the desired expiration of values can be enforced by employing multiple RPUFs in place of a single PUF. When the device is powered on, one (or more than one, but less than all of) of the RPUFs is selected (preferably randomly) and transitioned from its previous configuration to a new (e.g., random) configuration, invalidating any correlated values (e.g., challenge-helper pairs) previously constructed using the old state of that RPUF. The RPUF that was not reconfigured is then used to recover the sensitive value(s) (e.g., secret or shares thereof) using the remaining correlated value(s) (e.g., challenge-helper pair(s)).
The present invention is described with reference to the example of an embodiment utilizing elliptic curve cryptography (including the associated terminology and conventions), but the inventive concept and teachings herein apply equally to various other cryptographic schemes such as ones employing different problems like discrete logarithm or factoring (in which regard the teachings of U.S. Pat. No. 8,918,647 are incorporated here by reference), and the invention is not limited by the various additional features described herein that may be employed with or by virtue of the invention.
Threshold Cryptography
Threshold cryptography involves distributing cryptographic operations among a set of participants such that operations are only possible with the collaboration of a quorum of participants. A trusted dealer generates a master asymmetric key pair ⟨𝒫pub, 𝒫priv⟩ for the set of participants pi∈𝒫, |𝒫|=n. The private key is then split among the n participants, with each participant receiving a share of 𝒫priv. This constitutes a (t, n) sharing of 𝒫priv, such that a quorum of at least t participants must combine their private shares in order to perform operations using the master private key.
While other secret sharing schemes can be used with the present invention (e.g., Blakley, “Safeguarding cryptographic keys,” Proceedings of the 1979 AFIPS National Computer Conference, pages 313-317, AFIPS Press, 1979), an example will be described employing Shamir's polynomial interpolation construction (“How to Share a Secret,” Commun. ACM, volume 22, no. 11:612-613, 1979), which can be used for sharing a secret. A polynomial f(•) of degree t−1 is defined, where the coefficients ci remain private: f(x)=c0+c1x+ . . . +ct−1xt−1 mod q. Without knowledge of the coefficients, f(•) can be evaluated when at least t points of f(•) are known by applying Lagrange's polynomial interpolation approach. A private key 𝒫priv can be set as the free coefficient c0 (i.e., f(0)=𝒫priv), and a set of shares of the private key distributed to the participants (cf., e.g., Ertaul, “ECC Based Threshold Cryptography for Secure Data Forwarding and Secure Key Exchange in MANET (I),” NETWORKING 2005, Networking Technologies, Services, and Protocols; Performance of Computer and Communication Networks; Mobile and Wireless Communications Systems, volume 3462 of Lecture Notes in Computer Science, pages 102-113, Springer, 2005). To split the private key 𝒫priv among n participants pi∈𝒫, 1≦i≦n, the dealer computes pi's ⟨public, private⟩ key pair as ⟨ri·G mod q, ri⟩ such that ri=f(i), i≠0. Here, G∈E/Fp is a base point of order q for elliptic curve E, and (P)x (resp. (P)y) refers to the x (resp. y) coordinate of point P on curve E. (The modulus that operations are performed under may be omitted where it is apparent from context.) The public keys are made available to all participants, while the private keys are distributed securely to each participant (e.g., using the device's public key and ElGamal encryption). All participants are also given access to (cj·G), 0≦j≦t−1, which allows them to verify their secret key and the public keys of other participants by checking that:

ri·G=c0·G+i(c1·G)+i2(c2·G)+ . . . +it−1(ct−1·G)
This constitutes a (t, n) verifiable secret sharing (VSS) (e.g., Feldman, “A Practical Scheme for Non-interactive Verifiable Secret Sharing,” Proceedings of the 28th Annual Symposium on Foundations of Computer Science, SFCS '87, pages 427-438, IEEE Computer Society, 1987; Pedersen, “Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing,” Advances in Cryptology, CRYPTO 91, volume 576 of Lecture Notes in Computer Science, pages 129-140, Springer, 1992) of the private key 𝒫priv, as participants are able to verify the legitimacy of their share with respect to a globally-known public key.
Now, given access to any t shares {(i, ri)}1≦i≦t, where f(•) has degree t−1 and t≦n, the shares (i, ri) may be combined through Lagrange polynomial interpolation to evaluate f(x):

f(x)=Σi=1t ri·λi(x) mod q, where λi(x)=Πj=1, j≠it (x−j)/(i−j)
This allows any quorum of t participants (a subset of 𝒫 of size t≦n) to combine their shares {(i, ri)}1≦i≦t and recover the polynomial's free coefficient c0=f(0), which is the master asymmetric private key 𝒫priv. Although the Lagrange form is used for the interpolating polynomial, other approaches (e.g., using a monomial basis or the Newton form) may be substituted. Similarly, although the exemplary construction evaluates f(•) rather than recovering the coefficients, the latter may alternatively be accomplished using a Vandermonde matrix representation and solving the system of linear equations.
The interpolating polynomial recovered from fewer than t points generally does not match the original polynomial: for example, for a polynomial of degree three, an interpolating polynomial generated from three points differs from one generated from four points, and as the degree of the polynomial is only three, any four points result in a perfect interpolation of the original polynomial. More generally, when the size of the set of points k meets or exceeds the threshold (i.e., k≧t, where the polynomial has degree t−1), the original polynomial, and hence the shared secret f(0), is recovered exactly.
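To make the sharing and recovery concrete, the following sketch implements a (t, n) Shamir sharing and Lagrange recovery of f(0) over a prime field. It is an illustration only: the prime q, the parameters (t, n)=(3, 5), and the dealer-side generation of f(•) are assumptions for the example and do not reflect the dealerless, PUF-backed construction described later.

```python
import random

q = 2**127 - 1          # a prime modulus (illustrative choice)
t, n = 3, 5             # threshold t, number of participants n

def share(secret, t, n):
    """Dealer defines f(x) = c0 + c1*x + ... + c_{t-1}*x^{t-1} mod q with c0 = secret."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q
    return [(i, f(i)) for i in range(1, n + 1)]          # shares r_i = f(i), i != 0

def lagrange_at_zero(points):
    """Evaluate f(0) from any t points (i, r_i) by Lagrange interpolation mod q."""
    total = 0
    for i, ri in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = (num * (-j)) % q                   # (0 - j)
                den = (den * (i - j)) % q
        total = (total + ri * num * pow(den, -1, q)) % q # r_i * lambda_i(0)
    return total

secret = random.randrange(q)                             # e.g., a master private key
shares = share(secret, t, n)
assert lagrange_at_zero(shares[:t]) == secret            # any t shares recover f(0)
assert lagrange_at_zero(shares[1:1 + t]) == secret       # a different quorum works too
```

Any t of the shares reproduce f(0) exactly, while any t−1 shares remain consistent with every possible value of the secret.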
While an exemplary embodiment may use elliptic curve cryptography, it will be readily apparent that various other cryptographic frameworks (e.g., ElGamal, RSA, NTRU, etc.) could be employed. A number of threshold cryptographic operations can be carried out within this framework, using a variety of methods such as threshold encryption, decryption, and signing, threshold zero knowledge proofs of knowledge, threshold signcryption, and distributed key generation. Other elliptic curve mechanisms such as Massey-Omura, Diffie-Hellman, Menezes-Vanstone, Koyama-Maurer-Okamoto-Vanstone, Ertaul, Demytko, etc. could likewise be employed.
An entity in possession of a device's enrollment information {pipub, ci, helperi} can thus encrypt a message m such that only the target device is able to recover it, using a method such as ElGamal encryption: the sender selects y∈[1, q−1] uniformly at random and sends the ciphertext (yG, m+(y·pipub)x); only the device, which can dynamically regenerate its private key pipriv using the PUF, can compute (pipriv·(yG))x=(y·pipub)x and thereby recover m.
Then, if a group of at least t participants of 𝒫 (where |𝒫|=n and t≦n) wish to decrypt an encryption (yG, m+(yrG)x) of a message m∈[1, p−1] using group private key r, threshold ElGamal decryption (e.g., per Ertaul) can be used as follows: each participating pi uses its share ri to compute a partial decryption Si=λi·ri·(yG), where λi is pi's Lagrange coefficient for the participating quorum; the partial decryptions are summed, ΣSi=(Σλi·ri)·(yG)=r·(yG)=yrG; and the message is recovered as m=(m+(yrG)x)−(yrG)x.
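The combination step can be illustrated with a minimal sketch. Since a full elliptic-curve implementation is beyond the scope of an example, the sketch below uses a small multiplicative subgroup of Zp* (p=2q+1) in place of the elliptic curve group, replaces (yrG)x with the group element h^y mod p as the additive pad, and uses a dealer to create the sharing; all of these are simplifying assumptions, and the group parameters are toy values.

```python
import random

# Illustrative group parameters standing in for the elliptic-curve group:
# p = 2q + 1 with q prime; g generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4
t, n = 2, 3

def lagrange_at_zero(indices, i):
    """lambda_i(0) mod q for the participating quorum `indices`."""
    num = den = 1
    for j in indices:
        if j != i:
            num = (num * (-j)) % q
            den = (den * (i - j)) % q
    return (num * pow(den, -1, q)) % q

# Dealer-style (t, n) sharing of the group private key r (illustration only; the
# document's construction generates shares without a dealer or a stored key).
coeffs = [random.randrange(q) for _ in range(t)]
r = coeffs[0]
shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q for i in range(1, n + 1)}
h = pow(g, r, p)                      # group public key (analogue of rG)

# Encryption to the group: (g^y, m + (h^y mod p)) -- analogue of (yG, m + (yrG)_x).
m = 1234
y = random.randrange(1, q)
c1, c2 = pow(g, y, p), (m + pow(h, y, p)) % p

# Threshold decryption by a quorum: each participant publishes c1^(lambda_i * r_i);
# the product of the partials equals c1^r, which removes the pad.
quorum = [1, 3]
partials = [pow(c1, (lagrange_at_zero(quorum, i) * shares[i]) % q, p) for i in quorum]
pad = 1
for d in partials:
    pad = (pad * d) % p
assert pad == pow(c1, r, p)           # combined without ever reconstructing r
assert (c2 - pad) % p == m            # recovered message
```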
Likewise, a group of at least t participants of 𝒫 (where |𝒫|=n and t≦n) can use a threshold signature scheme (e.g., Chen et al., “An efficient threshold group signature scheme,” IEEE Region 10 Conference TENCON, volume B, pages 13-16 Vol. 2, 2004; Hua-qun et al., “Verifiable (t, n) Threshold Signature Scheme based on Elliptic Curve,” Wuhan University Journal of Natural Sciences, volume 10, no. 1:165-168, 2005; Ibrahim et al., “A Robust Threshold Elliptic Curve Digital Signature providing a New Verifiable Secret Sharing Scheme,” IEEE 46th Midwest Symposium on Circuits and Systems, volume 1, pages 276-280 Vol. 1, 2003; Kim et al., “Threshold Signature Schemes for ElGamal Variants,” Computer Standards and Interfaces, volume 33, no. 4:432-437, 2011; Shao, “Repairing Efficient Threshold Group Signature Scheme,” International Journal of Network Security, 2008) to generate a signature representing all of 𝒫 for a message m. In these schemes, h(•) or H(•) denotes a cryptographic hash function; each participant computes a partial signature Si over m using its share ri and broadcasts Si to an appointed secretary (chosen for convenience, and who need not be trusted), and the secretary combines the partial signatures into a signature verifiable under the group public key 𝒫pub.
The participants of a group of at least t members of 𝒫 (where |𝒫|=n and t≦n) can also collaborate to demonstrate possession of a shared private key 𝒫priv=r∈[1, q−1] using a threshold Zero Knowledge Proof of Knowledge (e.g., Sardar et al., “Zero Knowledge Proof in Secret Sharing Scheme Using Elliptic Curve Cryptography,” Global Trends in Computing and Communication Systems, volume 269 of Communications in Computer and Information Science, pages 220-226, Springer, 2012) as follows: each participating pi selects a random yi∈[1, q−1] and publishes a commitment Bi=yi·G, and the commitments are combined as B=ΣBi. Next, each participant pi calculates e, Mi as follows: e is computed with the hash function h(•) over the public parameters, the combined commitment B, and a nonce supplied by the verifier, and Mi=yi+ri·λi·e mod q, where λi is pi's Lagrange coefficient for the participating quorum; the responses are combined as M=ΣMi. If B=M·G−e·𝒫pub, the verifier accepts the threshold zero knowledge proof as valid, and rejects the proof otherwise.
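A corresponding sketch of the verification equation, again over a toy multiplicative subgroup rather than the elliptic-curve group (so that B=M·G−e·𝒫pub becomes g^M=B·h^e), is shown below. The dealer-generated sharing, the SHA-256 derivation of e, and the group constants are illustrative assumptions.

```python
import random, hashlib

# Same illustrative mod-p subgroup as the sketch above (stand-in for the EC group).
p, q, g = 2039, 1019, 4
t, n = 2, 3

def lam(indices, i):                                   # lambda_i(0) mod q
    num = den = 1
    for j in indices:
        if j != i:
            num, den = (num * -j) % q, (den * (i - j)) % q
    return num * pow(den, -1, q) % q

coeffs = [random.randrange(q) for _ in range(t)]       # dealer sharing, illustration only
r = coeffs[0]
shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q for i in range(1, n + 1)}
h = pow(g, r, p)                                       # shared public key (analogue of P_pub)

# Threshold proof of knowledge of r by quorum {1, 2}, with a verifier-supplied nonce N.
quorum, N = [1, 2], random.randrange(q)
ys = {i: random.randrange(q) for i in quorum}          # per-participant randomness y_i
B = 1
for i in quorum:
    B = (B * pow(g, ys[i], p)) % p                     # B = prod g^{y_i}
e = int.from_bytes(hashlib.sha256(f"{g},{B},{h},{N}".encode()).digest(), "big") % q
M = sum(ys[i] + lam(quorum, i) * shares[i] * e for i in quorum) % q

# Verifier check: g^M == B * h^e, the multiplicative analogue of B = M*G - e*P_pub.
assert pow(g, M, p) == (B * pow(h, e, p)) % p
```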
The process of signcrypting (e.g., Changgen et al., “Threshold Signcryption Scheme based on Elliptic Curve Cryptosystem and Verifiable Secret Sharing,” International Conference on Wireless Communications, Networking and Mobile Computing, volume 2, pages 1182-1185, 2005; Zheng, “Digital Signcryption or How to Achieve Cost(Signature & Encryption)<<Cost(Signature)+Cost(Encryption),” Advances in Cryptology, CRYPTO '97, volume 1294 of Lecture Notes in Computer Science, pages 165-179, Springer, 1997; Zheng et al., “How to Construct Efficient Signcryption Schemes on Elliptic Curves,” Inf. Process. Lett., volume 68, no. 5:227-233, 1998) a message facilitates performing both signing and encrypting the message at a cost less than computing each separately. Given a message m∈[1, q−1] and a receiver pR with public key pRpub, a signcryption of m for pR can be generated in a threshold manner analogous to the threshold signature and decryption operations above. With this, the recipient pR has both verified the group's signature over message m and decrypted m.
Distributed Key Generation
Standard threshold cryptographic operations (e.g., those discussed above) traditionally require the presence of a trusted dealer to define a generating polynomial f(•), select a secret r, and distribute shares of r to all participants pi∈𝒫. Distributed key generation protocols (e.g., Ibrahim; Pedersen, “A Threshold Cryptosystem without a Trusted Party,” Advances in Cryptology, EUROCRYPT 91, volume 547 of Lecture Notes in Computer Science, pages 522-526, Springer, 1991; Tang, “ECDKG: A Distributed Key Generation Protocol Based on Elliptic Curve Discrete Logarithm,” Technical Report 04-838, Department of Computer Science, University of Southern California, 2004) remove the necessity of a trusted dealer, and allow a set of participants 𝒫 to generate shares of a secret where no one knows the shared secret r. This can be accomplished in the present context as follows: each participant pi acts as a dealer for its own randomly chosen contribution, defining a random polynomial fi(•) of degree t−1, privately sending the share fi(j) to each other participant pj (e.g., encrypted under pj's public key), and publishing commitments (cik·G) to its coefficients; each participant's share of the jointly generated secret r=Σifi(0) is then ri=Σjfj(i), and no party ever learns r itself. Similarly, each participant pj≠i∈𝒫 verifies that their share is consistent with the other shares by checking the values received from pi against pi's published commitments:

fi(j)·G=ci0·G+j(ci1·G)+ . . . +jt−1(ci,t−1·G)
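A minimal sketch of such a dealerless generation, using a toy multiplicative subgroup in place of the elliptic-curve group and exponentiation in place of scalar multiplication for the coefficient commitments, is shown below. The commitment check mirrors the verification equation above; the final reconstruction of r is included only to demonstrate correctness and is exactly what a real deployment never performs.

```python
import random

# Illustrative mod-p subgroup standing in for the elliptic-curve group (g of prime order q).
p, q, g = 2039, 1019, 4
t, n = 2, 3
participants = range(1, n + 1)

# 1. Each participant acts as a dealer for its own random degree-(t-1) polynomial f_i.
polys = {i: [random.randrange(q) for _ in range(t)] for i in participants}
f = lambda i, x: sum(c * pow(x, k, q) for k, c in enumerate(polys[i])) % q

# 2. Each participant publishes commitments to its coefficients (analogue of c_ik * G) ...
commits = {i: [pow(g, c, p) for c in polys[i]] for i in participants}

# ... and sends f_i(j) privately to every other participant j.
sent = {(i, j): f(i, j) for i in participants for j in participants}

# 3. Each recipient j verifies every received share against the sender's commitments:
#    g^{f_i(j)} == prod_k (g^{c_ik})^{j^k}  (the consistency check above).
for (i, j), s in sent.items():
    rhs = 1
    for k, C in enumerate(commits[i]):
        rhs = (rhs * pow(C, pow(j, k, q), p)) % p
    assert pow(g, s, p) == rhs

# 4. Participant j's share of the joint secret r = sum_i f_i(0) is r_j = sum_i f_i(j);
#    nobody ever computes r itself.  The joint public key is the product of the
#    constant-term commitments.
shares = {j: sum(sent[(i, j)] for i in participants) % q for j in participants}
pubkey = 1
for i in participants:
    pubkey = (pubkey * commits[i][0]) % p

# Sanity check (illustration only -- this reconstructs r, which a real deployment never does).
def lam(indices, i):
    num = den = 1
    for j2 in indices:
        if j2 != i:
            num, den = (num * -j2) % q, (den * (i - j2)) % q
    return num * pow(den, -1, q) % q

r = sum(lam([1, 2], j) * shares[j] for j in [1, 2]) % q
assert pow(g, r, p) == pubkey
```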
The distributed key generation protocol is preferably secure against an adversary that attempts to bias the output distribution, as in the attack described by Gennaro et al. (“Secure Distributed Key Generation for Discrete-Log Based Cryptosystems,” Advances in Cryptology, EUROCRYPT 99, volume 1592 of Lecture Notes in Computer Science, pages 295-310, Springer, 1999). (Gennaro et al. (“Secure Applications of Pedersen's Distributed Key Generation Protocol,” Topics in Cryptology, CT-RSA 2003, volume 2612 of Lecture Notes in Computer Science, pages 373-390, Springer, 2003) later concluded that many threshold operations may be performed securely despite an adversary's ability to bias the output distribution). Similarly, threshold constructions are preferably secure against both static as well as adaptive malicious adversaries (Abe et al., “Adaptively Secure Feldman VSS and Applications to Universally-Composable Threshold Cryptography,” Advances in Cryptology, CRYPTO 2004, volume 3152 of Lecture Notes in Computer Science, pages 317-334, Springer, 2004; Jarecki et al., “Adaptively Secure Threshold Cryptography: Introducing Concurrency, Removing Erasures,” Advances in Cryptology, EUROCRYPT 2000, volume 1807 of Lecture Notes in Computer Science, pages 221-242, Springer, 2000; Libert et al., “Adaptively Secure Forward-Secure Non-interactive Threshold Cryptosystems,” Information Security and Cryptology, volume 7537 of Lecture Notes in Computer Science, pages 1-21, Springer, 2012).
PUF-Enabled Threshold Cryptography
The core functionality of a PUF is extracting a unique mapping between the challenge (input) domain and the response (output) range. As the mapping from challenges to responses is unique for each PUF-enabled device, collecting a set of challenge-response pairs (CRPs) through a provisioning process allows the device to be verified in the future. Protocol 1 illustrates the naïve provisioning process that underlies many PUF-enabled protocols.
Protocol 1: Challenge-Response Provisioning
for each challenge to be enrolled do
  Server s: select a challenge c∈{0, 1}κ1 uniformly at random and issue it to PUF device d
  Device d: evaluate the PUF and return the response r=P(c), r∈{0, 1}κ2
  Server s: store the challenge-response pair (c, r)
end for
Authentication proceeds by issuing a challenge for which the response is known to the server, and verifying that the response is t-close to the expected response. However, this lightweight naïve protocol has many limitations. During enrollment, a large number of challenge-response pairs must be collected, as each pair can only be used once for authentication. If an adversary observed the response, it could masquerade as the device. Similarly, the challenge-response database is sensitive, as an adversary could apply machine learning to fully characterize the PUF mapping [Rührmair I]. These issues can be entirely eliminated by applying cryptographic constructs around the PUF functionality.
In the example of an embodiment employing elliptic curve cryptography, Algorithms 1 and 2 below can be used to allow a PUF-enabled device to locally store and retrieve a sensitive value without storing any sensitive information in non-volatile memory. Algorithm 1 illustrates the storing of a sensitive value 𝒱i using a PUF, and Algorithm 2 illustrates the dynamic regeneration of 𝒱i. The challenge ci and helper data helperi can be public, as neither reveals anything about the sensitive value 𝒱i. While the present example uses encryption of 𝒱i by exclusive-or, ⊕, 𝒱i could also be used as a key to other encryption algorithms (e.g., AES) to enable storage and retrieval of arbitrarily sized values.
Algorithm 1: PUF-Store
Require: an elliptic curve E over a finite field of order n, and a group generator G
Goal: Store sensitive value 𝒱i
for PUF Device d do
  Select challenge ci
  O = PUF output on challenge ci
  helperi = O ⊕ ECC(𝒱i)
  Write {ci, helperi} to non-volatile memory
end for

Algorithm 2: PUF-Retrieve
Goal: Retrieve sensitive value 𝒱i
for PUF Device d do
  Read {ci, helperi} from non-volatile memory
  O′ = PUF output on challenge ci
  𝒱i ← D((ECC(𝒱i) ⊕ O) ⊕ O′)
end for
Whenever O and O′ are t-close, the error correcting code ECC can be passed to a decoding algorithm D which will recover the sensitive value 𝒱i.
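The following sketch illustrates the mechanics of Algorithms 1 and 2 with a simulated noisy PUF and a simple repetition code standing in for ECC and its decoder D; the simulated PUF, the repetition code, and the parameter choices are assumptions made for the example and are not the hardware construction described here.

```python
import random

K, REP = 16, 9                    # value length in bits, repetition factor of the toy ECC
N = K * REP                       # PUF output length in bits

def puf_eval(challenge, noise=0.02):
    """Simulated noisy PUF: a fixed per-challenge mapping plus independent bit flips."""
    base = random.Random(challenge)               # the device-unique hidden mapping
    bits = [base.randint(0, 1) for _ in range(N)]
    return [b ^ (random.random() < noise) for b in bits]

def ecc_encode(bits):                             # stand-in ECC: REP-fold repetition code
    return [b for b in bits for _ in range(REP)]

def ecc_decode(bits):                             # stand-in decoder D: per-group majority vote
    return [int(sum(bits[i * REP:(i + 1) * REP]) > REP // 2) for i in range(K)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def puf_store(value_bits, challenge):
    """Algorithm 1 sketch: helper_i = ECC(V_i) XOR O; only (c_i, helper_i) is kept in memory."""
    return xor(ecc_encode(value_bits), puf_eval(challenge))

def puf_retrieve(challenge, helper):
    """Algorithm 2 sketch: V_i = D(helper_i XOR O') whenever O' remains t-close to O."""
    return ecc_decode(xor(helper, puf_eval(challenge)))

value = [random.randint(0, 1) for _ in range(K)]      # sensitive value V_i
challenge = 12345                                     # public challenge c_i
helper = puf_store(value, challenge)                  # public helper data
print(puf_retrieve(challenge, helper) == value)       # True while the simulated PUF noise
                                                      # stays within the code's correction margin
```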
Using Algorithm 3, a local device can perform an enrollment protocol using the PUF. This allows each PUF circuit to generate a local public key pipub, which is useful for bootstrapping more complex key setup algorithms (e.g., the distributed key generation protocol in Algorithm 4). When the key setup algorithm is performed internal to the device (rather than externally among a set of distinct devices), this bootstrap process may not be necessary.
Algorithm 3: Enrollment
for Device d do
  Select challenge ci (a group element) and derive the private key pipriv from the PUF output as in Algorithm 1
  Compute and publish the public key pipub=pipriv·G
  Store the corresponding challenge-helper pair {ci, helperi} in non-volatile memory
end for
In accordance with the invention, PUF-based cryptographic primitives are adapted to secret sharing to permit threshold cryptography founded on a PUF or other root of trust. Using the example of an embodiment employing elliptic curve cryptography, distributed key generation is used to generate a number of shares (for example, two: r1, r2) of a master private key (𝒫priv=(r1+r2) mod q), which itself is never generated or constructed. (It is also possible to work directly with a message (e.g., as described by Ertaul) rather than a private key). The protocol is summarized in Algorithm 4: PUF-DKG, where an exemplary implementation would choose (t, n) as (2, 2).
Using Algorithms 1 and 2 for storing and retrieving a sensitive value, and Algorithm 4 for performing the initial distributed key generation protocol, arbitrary PUF-enabled threshold cryptographic operations (e.g., decryption, digital signatures, zero knowledge proofs) can now be performed. Algorithm 5 describes how to evaluate an arbitrary threshold cryptographic operation that requires as input a participant's share ri. Note that the recovered share ri has already been multiplied by the appropriate Lagrange term, so that the partial outputs computed over the shares can be combined directly.
Algorithm 5
Goal: Perform threshold operation 𝒪
Assume: PUF-DKG (Algorithm 4) has been executed by PUF Device d
for Server s do
  Issue Command 𝒪 and Auxiliary Information Aux
end for
for PUF Device d do
  for each share ri do
    Recover share ri via PUF-Retrieve
    Perform local threshold operation 𝒪(ri, Aux)
  end for
  Combine threshold operations 𝒪 ← Combine({𝒪(ri, Aux)}0≦i≦n)
  Return result 𝒪 to Server
end for
This enables any threshold cryptographic operation (e.g., decryption, digital signature generation, zero knowledge proofs) to be performed by a PUF-enabled participant without ever generating, reconstructing, or storing their private key. Further, from an external perspective (e.g., the server), the PUF-enabled device simply implements standard public key cryptographic protocols. That is, the server never issues a challenge or stores helper data, and its interaction with the device is indistinguishable from any standard public key cryptography device.
By internalizing the challenge-response functionality of the PUF, and utilizing Algorithms 1 and 2 to locally store and recover a value (e.g., a cryptographic key), arbitrary (e.g., symmetric or asymmetric) cryptographic operations can be performed without need for issuing or storing auxiliary (e.g., challenges or helper data) information. While one embodiment described herein advantageously strengthens the construction through both distributed key generation and threshold cryptography, neither is necessary to support arbitrary cryptographic operations through localized storage and retrieval of a value using a device's PUF functionality according to the present invention.
Although threshold cryptography typically considers distributing operations across physically-distinct nodes, in one embodiment of the present invention, threshold cryptography may be applied within a single device. As an example, a device may be equipped, e.g., with two PUF circuits (e.g., ring oscillator, arbiter, SRAM) and provided with the ability to execute at least two instructions at the same time (e.g., through multiple CPU cores). One embodiment of such a device may comprise a Xilinx Artix 7 field programmable gate array (FPGA) platform, equipped, e.g., with 215,000 logic cells, 13 megabits of block random access memory, and 700 digital signal processing (DSP) slices. In an embodiment employing elliptic curve cryptography, for example, the hardware mathematics engine may be instantiated in the on-board DSP slices, with the PUF construction positioned within the logic cells, and a logical processing core including an input and output to the PUF and constructed to control those and the device's external input and output and to perform algorithms (sending elliptic curve and other mathematical calculations to the math engine) such as those described above. The FPGA may have one or more PUF circuits implemented in separate areas of the FPGA fabric. Simultaneous execution may be accomplished by instantiating multiple software CPUs, e.g., a MicroBlaze processor. An embodiment of the present invention with only one PUF circuit would simply execute operations over each share sequentially, rather than querying the multiple PUF circuits in parallel. Each PUF core can run the enrollment protocol (Algorithm 3) to generate its local key pair ⟨pipub=pipriv·G, pipriv⟩ and locally store its public enrollment information, and the cores can then together run the distributed key generation protocol (Algorithm 4) and perform all cryptographic operations over a private key that is never actually constructed. When threshold cryptography is applied within a single device, it may not be necessary to run the enrollment algorithm (Algorithm 3) to generate an asymmetric key pair as all computations are performed internal to the device.
Algorithm 6 describes how a dual-PUF device can compute cryptographic operations in a threshold manner by constructing a (2, 2) threshold sharing within the device using distributed key generation. That is, the two parts establish a private key known to neither part through distributed key generation and publicize the corresponding public key 𝒫pub. All operations targeted at the device are now performed in a threshold manner through internal collaboration (with each part retrieving its share ri and performing a local threshold operation, and the results being combined to complete the threshold operation 𝒪), while the input/output behavior of the device remains unchanged to external systems.
Algorithm 6
Goal: Perform threshold operation 𝒪 at time τ
One-Time Setup Stage
for each PUF core pi, i∈{0, 1} do
  Run Algorithm 3: Enrollment; publicize pipub
end for
Run Algorithm 4: PUF-DKG; publicize 𝒫pub
Evaluation Stage
for each PUF core pi, i∈{0, 1} do
  Recover share ri(τ) ← PUF-Retrieve(ci(τ), helperi(τ))
  pi(𝒪) ← 𝒪(ri(τ)), PUF core local threshold share
end for
return 𝒪 ← Combine({p0(𝒪), p1(𝒪)})
Thus, rather than being constrained to a mapping between a challenge issued to the device and its response (which to an extent may be a function of the challenge), a multi-PUF device di can have a single static external identity, pipub. The challenge-response functionality of each PUF core is used to maintain each share of the device's private identity, pipriv, which is never generated or constructed. This renders a side channel attack more difficult for a remote adversary, which now must observe and resolve multiple values simultaneously generated within the device. Each part retrieves its share ri(τ) and performs a local threshold operation, and the shares are combined to complete the operation 𝒪.
Various share refresh protocols (e.g., Frankel et al., “Optimal-Resilience Proactive Public-Key Cryptosystems,” 38th Annual Symposium on Foundations of Computer Science, pages 384-393, 1997; Herzberg et al., “Proactive Public Key and Signature Systems,” Proceedings of the 4th ACM Conference on Computer and Communications Security, CCS '97, pages 100-110, ACM, 1997; Herzberg et al., “Proactive Secret Sharing Or: How to Cope With Perpetual Leakage,” Advances in Cryptology, CRYPTO 95, volume 963 of Lecture Notes in Computer Science, pages 339-352, Springer, 1995) allow each of a set of players pi∈𝒫 to refresh their share ri(τ) of an original secret r at time period τ into a new share ri(τ+1) such that the resulting set of new shares {ri(τ+1)}i∈[1 . . . n] remains a sharing of the original secret. This protocol does not require reconstruction of the master secret r, so a mobile adversary would have to compromise t players in a fixed time period τ in order to recover the shared secret. Assuming a polynomial f(•) of degree (t−1) represents a shared secret r=f(0) amongst n participants each having a share ri=f(i), and denoting encryption for player pj as ENCj(•) and decryption by pj as DECj(•), the set of players pi∈𝒫 can refresh their sharing of r using such a protocol as follows: each player pi selects a random update polynomial δi(•) of degree t−1 with δi(0)=0, privately sends uij=ENCj(δi(j)) to each other player pj, and each player pj applies DECj(•) to the values it receives and sets its refreshed share to rj(τ+1)=rj(τ)+Σiδi(j) mod q.
Thus, the refreshed set of shares {ri(τ+1)}i∈[1 . . . n] remains a sharing of the master private key 𝒫priv, and yet knowledge of t−1 or fewer shares from time period τ is useless in time period τ+1.
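A sketch of this refresh step is shown below (with a dealer used only to set up the demonstration sharing, and the private ENCj/DECj channels omitted); it checks that a quorum of refreshed shares still recovers r while a mixture of old and new shares does not.

```python
import random

q = 2**61 - 1                     # a prime modulus (illustrative)
t, n = 3, 5
participants = range(1, n + 1)

def poly(coeffs, x):
    return sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q

def recover(points):              # Lagrange interpolation of f(0) from t points
    total = 0
    for i, ri in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num, den = (num * -j) % q, (den * (i - j)) % q
        total = (total + ri * num * pow(den, -1, q)) % q
    return total

# Time period tau: a (t, n) sharing r_i = f(i) of a secret r (dealer used only for the demo setup).
secret = random.randrange(q)
f_coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
shares = {i: poly(f_coeffs, i) for i in participants}

# Refresh: each p_i chooses a random update polynomial delta_i with delta_i(0) = 0 and
# privately sends delta_i(j) to each p_j; nobody learns or reconstructs r.
deltas = {i: [0] + [random.randrange(q) for _ in range(t - 1)] for i in participants}
new_shares = {j: (shares[j] + sum(poly(deltas[i], j) for i in participants)) % q
              for j in participants}

assert recover([(i, new_shares[i]) for i in [1, 2, 3]]) == secret   # still shares the same r
mixed = [(1, shares[1]), (2, new_shares[2]), (3, new_shares[3])]
assert recover(mixed) != secret   # stale tau-period shares are useless in period tau+1
                                  # (the check fails only with negligible probability)
```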
As outlined in Algorithm 7, participants can update their share ri(τ) in time period τ to a new share ri(τ+1) in the next time period such that the set of shares {ri}i∈[1 . . . n] remains a sharing of the master private key 𝒫priv.
The hardware device performs Algorithm 7 at Share Refresh 11 in the accompanying figure. The share refresh may be combined with the threshold operations themselves: upon receiving a command 𝒪 and auxiliary information Aux, the device recovers each share ri(τ), performs the local threshold operation 𝒪(ri(τ), Aux) (in the dual-PUF construction, each PUF core pi computes pi(𝒪)←𝒪(ri(τ)) at time τ, after the setup stage has publicized 𝒫pub), refreshes and re-stores the share for time period τ+1, and combines the partial results as 𝒪←Combine({𝒪(ri, Aux)}0≦i≦n), or equivalently 𝒪←Combine({p0(𝒪), p1(𝒪)}) in the dual-PUF case.
Referring for example to a single-PUF embodiment as shown in the accompanying figure, the share update proceeds in stages: first, each participant generates its share update information for the next time period as set forth in Algorithm 10.
Next, each participant verifies the update information received from other participants and applies the update to its share as set forth in Algorithm 11.
As each threshold operation over a share can be performed independently of the other shares, the device need only recover one share at a time. This process is illustrated in Algorithm 12. Upon receiving a command 𝒪 and its associated auxiliary information Aux, the device first performs Algorithm 10 to prepare for the share update. Next, the device iteratively performs threshold operations over each share. A share is recovered by reading a challenge-helper pair from non-volatile memory, and using the PUF to regenerate the corresponding share. After performing a threshold operation over the share, the share update is applied using Algorithm 11, which generates the updated share for the new time period (τ+1). After computing the threshold operations over each share, the threshold operations are combined to form the result 𝒪, which is returned to the server.
Algorithm 12
Goal: Perform threshold operation 𝒪 and update shares for time period τ+1
for Server s do
  Issue Command 𝒪 and Auxiliary Information Aux
end for
for PUF Device d do
  Run Algorithm 10 to prepare the share update
  for each share ri do
    Read the challenge-helper pair {ci, helperi} from non-volatile memory
    Recover share ri via PUF-Retrieve
    Perform local threshold operation 𝒪(ri, Aux)
    Apply Algorithm 11 to update the share to ri(τ+1)
    Store ri(τ+1) via PUF-Store with a new challenge-helper pair
  end for
  Combine threshold operations 𝒪 ← Combine({𝒪(ri, Aux)}0≦i≦n)
  Return result 𝒪 to Server
end for
In one embodiment, a (2, 2) threshold system is constructed internally to the device. Algorithm 13 illustrates an example of a single-PUF (2, 2) threshold construction of the more general Algorithm 12. The device has the share set {r0, r1}, and iteratively computes a threshold operation over each share to produce the set {p0(𝒪), p1(𝒪)}. Once both threshold operations are complete and the shares have been updated and stored, the two threshold operations are combined into the final output 𝒪.
Algorithm 13
Goal: Perform threshold operation 𝒪 at time τ
Assume: PUF-DKG (Algorithm 4) has been executed and 𝒫pub publicized
for each share ri, i∈{0, 1} do
  Recover share ri(τ) ← PUF-Retrieve(ci(τ), helperi(τ))
  pi(𝒪) ← 𝒪(ri(τ)), Local threshold operation
  Update the share to ri(τ+1) (Algorithm 11)
  Store ri(τ+1) via PUF-Store with a new challenge-helper pair
end for
return 𝒪 ← Combine({p0(𝒪), p1(𝒪)})
The flow of Algorithm 13, a specific single-PUF (2, 2) threshold construction of the more general Algorithm 12, is illustrated in the accompanying figure: the final output 𝒪 is constructed by combining the two local threshold operations that were performed over each share.
The device has a constant identity ⟨𝒫pub, 𝒫priv⟩, yet all operations 𝒪 that require 𝒫priv are performed without ever reconstructing 𝒫priv and with values that change after each operation is executed. As each part uses the PUF-Store and PUF-Retrieve algorithms to maintain its share, the (challenge, helper) pair is updated after each operation when PUF-Store is executed. Each share is refreshed for the new time period τ+1, and is stored by generating a new random challenge ci(τ+1) and setting the updated helper to helperi(τ+1)←ECC(ri(τ+1))⊕PUF(ci(τ+1)). Staggering the threshold operations such that the share regeneration, threshold operation, and share storing occur consecutively (rather than concurrently) precludes the simultaneous recovery of more than one updated share. Any tampering while one share exists would (assuming tampering pushes PUF output beyond error correction limits) prevent recovery of another share, in which case the device cannot perform operations over its private key.
An adversary applying a side channel attack against such an embodiment therefore must extract t or more shares from a period of observation that cannot exceed the period of refreshment. In other words, the adversary must compromise t devices in a given time period τ since any shares from time period τ are useless in time period τ+1. The difficulty of a side channel attack thus can be increased by updating more frequently (even after each operation). (Increasing refresh frequency also may multiply the difficulty inherent in side channel attacks on multiple-PUF device embodiments that are not staggered, wherein a remote adversary must observe and resolve multiple PUF values simultaneously generated in the device).
Also, whereas the longevity of systems using a fixed challenge/helper and response is directly limited by the hardware's increase in error rate due to aging, by continuously updating the pair in each time period, the error rate can be nominally reset to zero. That is, periodically refreshing the pair (ci(τ), helperi(τ)) during each time period τ links the PUF output to the current state of the hardware, eliminating the hardware drift from previous time periods.
Dynamic Membership
The dynamic nature of shares in this construct also permits an embodiment in which the number of participants n participating in a group can be varied dynamically so that participants may join or leave the set of participants in the (t, n) threshold system. In this case, up to n−t participants can be removed from the set simply by leaving them out of the next share refresh protocol. To add a participant pj to the set of participants, each current participant pi generates an extra share nij from their share update polynomial δi(•).
In some embodiments employing dynamic membership (in a (t, n) threshold system) and multi-PUF device(s), the device(s) may be configured to perform a local self-test to ensure it is not nearing the point where it can no longer recover its shares due to hardware aging. A secondary threshold, lower than the error correcting code's correction limit, may be set on the number of bit errors observed while recovering a share: when the observed error count reaches this secondary threshold, the device refreshes the share and writes a new challenge-helper pair, re-linking the stored values to the current hardware state before the share becomes unrecoverable.
The internal self-test procedure may be easily extended to the setting where multiple PUF-enabled devices are used as part of a larger system (e.g., a processing center as discussed below). When one PUF-enabled device fails to recover its share, it can be replaced with a new device. The remaining and correctly functioning PUF-enabled devices run the share update algorithm and increase n by sending the new device shares as well. This allows systems composed of multiple PUF-enabled devices to continue acting as a single entity, as failing devices can be immediately replaced and provisioned with shares of the global (t, n) threshold system. However, the present invention is equally compatible with other cryptographic bases (e.g., discrete logarithm), and need not employ threshold cryptography.
Threshold Symmetric Operations
In addition to asymmetric operations, symmetric cryptographic operations may also be performed in a threshold manner (e.g., Nikova et al., “Threshold Implementations Against Side-Channel Attacks and Glitches,” Information and Communications Security, volume 4307 of Lecture Notes in Computer Science, pages 529-545, Springer Berlin Heidelberg, 2006; Moradi et al., “Pushing the Limits: A Very Compact and a Threshold Implementation of AES,” Advances in Cryptology—EUROCRYPT 2011, volume 6632 of Lecture Notes in Computer Science, pages 69-88, Springer Berlin Heidelberg, 2011; Bilgin et al., “A More Efficient AES Threshold Implementation,” Cryptology ePrint Archive, Report 2013/697, 2013). Thus all cryptographic operations, asymmetric and symmetric, can be performed over threshold shares rather than the private key. As with the refreshing process described for shares of an asymmetric private key, the shares of a symmetric key may also be refreshed.
Reconfigurable PUFs
A reconfigurable PUF (‘RPUF’) can be altered to generate a new challenge-response mapping that is different from (and ideally unrelated to) its prior mapping; the reconfiguration may be reversible (as may be the case in logical RPUFs) or irreversible (as in physical RPUFs). (See, e.g., Majzoobi et al., “Techniques for Design and Implementation of Secure Reconfigurable PUFs,” ACM Transactions on Reconfigurable Technology Systems, volume 2, no. 1:5:1-5:33, 2009; Kursawe et al., “Reconfigurable Physical Unclonable Functions—Enabling technology for tamper-resistant storage,” IEEE International Workshop on Hardware-Oriented Security and Trust, 2009. HOST '09., pages 22-29, 2009; Katzenbeisser et al., “Recyclable PUFs: logically reconfigurable PUFs,” Journal of Cryptographic Engineering, volume 1, no. 3:177-186, 2011; Eichhorn et al., “Logically Reconfigurable PUFs: Memory-based Secure Key Storage,” Proceedings of the Sixth ACM Workshop on Scalable Trusted Computing, STC '11, pages 59-64, ACM, 2011; Chen, “Reconfigurable physical unclonable function based on probabilistic switching of RRAM,” Electronics Letters, volume 51, no. 8:615-617, 2015; Horstmeyer et al., “Physically secure and fully reconfigurable data storage using optical scattering,” IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2015, pages 157-162, 2015; Lao et al., “Reconfigurable architectures for silicon Physical Unclonable Functions,” IEEE International Conference on Electro/Information Technology (EIT), 2011, pages 1-7, 2011; Zhang et al., “Exploiting Process Variations and Programming Sensitivity of Phase Change Memory for Reconfigurable Physical Unclonable Functions,” IEEE Transactions on Information Forensics and Security, volume 9, no. 6:921-932, 2014).
Embodiments of the invention may employ RPUFs so as to periodically change an authenticatable device's working values, such as with a single RPUF configured with one parameter to recover sensitive values and another parameter to encode and store correlated values, or with one RPUF to recover sensitive values and another RPUF to encode and store correlated values. Embodiments of the invention may also employ redundant RPUFs to enforce the desired invalidation of a device's working values (such as challenge-helper pairs) correlated to sensitive values (a secret or shares thereof).
In one embodiment, an authenticatable device may be provided with a single reconfigurable PUF e.g., a logically-reconfigurable PUF having a reversible configuration process, and e.g., a (2, 2) threshold sharing employed. The PUF configuration is controlled by a parameter, which may be stored locally on the device. Using parameter a to recover one share, a new parameter b is chosen (preferably randomly), the PUF is reconfigured, and the refreshed share is translated into a correlated challenge-helper pair for storage using the PUF configured with parameter b. The PUF is then reconfigured using parameter a to recover the second share, which is subsequently refreshed and translated into a correlated challenge-helper pair for storage using the PUF configured with parameter b. Now, original PUF parameter a is deleted, and the next round will select a new parameter c to replace parameter b. Alternatively, a separate RPUF PUFi could be employed for each share si, where, e.g., i∈{0, 1} for a (2, 2) threshold sharing. Using parameter ai to recover share si using PUFi, parameter ai may then be deleted. A new parameter bi is chosen (preferably randomly), PUFi is reconfigured using bi, and the refreshed share si is translated into a correlated challenge-helper pair for storage using PUFi configured with parameter bi. As another alternative, rather than a threshold sharing construction, the value stored and recovered could instead be an undivided value (e.g., a secret).
In another embodiment, an authenticatable device can be provided with two reconfigurable PUF circuits PUF-A and PUF-B preferably having a non-reversible reconfiguration process, and a (2, 2) threshold sharing employed. After each share is recovered using PUF-A and refreshed, it is translated into a correlated challenge-helper pair for storage using PUF-B. Once both refreshed shares have been stored using PUF-B, the reconfiguration process is applied to PUF-A, such that PUF-A now exhibits a new PUF mapping. The next time the shares are recovered, the same procedure is performed using PUF-B for recovery and PUF-A for encoding and storage. Alternatively, a separate pair of RPUFs {PUF-Ai, PUF-Bi} could be employed for each share si, where, e.g., i∈{0, 1} for a (2, 2) threshold sharing. Share si is recovered using PUF-Ai. Share si is refreshed, and the reconfiguration process is applied to PUF-Ai, such that PUF-Ai now exhibits a new PUF mapping. Share si is then stored using PUF-Bi. The next time share si is recovered, the same procedure is performed using PUF-Bi for recovery and PUF-Ai for encoding and storage. As another alternative, rather than a threshold sharing construction, the value stored and recovered could instead be an undivided value (e.g., a secret).
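The following sketch simulates one recover-and-re-encode cycle of the parameterized RPUF approach described above, modeling a reconfigurable PUF as a keyed hash so that changing the configuration parameter changes the challenge-response mapping. The hash-based PUF model, the (2, 2) additive sharing, and the omission of error correction and share refresh are all simplifying assumptions for illustration.

```python
import hashlib, random, secrets

q = 2**61 - 1   # prime modulus for the toy (2, 2) additive sharing (illustrative)

def rpuf(param, challenge):
    """Simulated reconfigurable PUF: the challenge-response mapping is selected by `param`.
    (Noise-free for brevity, so no error-correcting code is applied; a real device would
    use ECC as in Algorithms 1 and 2, and reconfiguration would be a physical/logical act.)"""
    return int.from_bytes(hashlib.sha256(f"{param}|{challenge}".encode()).digest()[:8], "big")

def store(param, value):
    c = secrets.randbits(32)                      # fresh public challenge
    return c, value ^ rpuf(param, c)              # (challenge, helper) pair

def retrieve(param, c, helper):
    return helper ^ rpuf(param, c)

# Setup: a (2, 2) additive sharing r0 + r1 = priv (mod q), encoded under parameter a.
priv = random.randrange(q)
r0 = random.randrange(q)
r1 = (priv - r0) % q
param_a = secrets.token_hex(8)
stored = [store(param_a, r0), store(param_a, r1)]

# One cycle: recover each share with parameter a, then re-encode it under a freshly
# chosen parameter b (the share refresh step is omitted to keep the sketch minimal).
param_b = secrets.token_hex(8)
new_stored = [store(param_b, retrieve(param_a, c, h)) for c, h in stored]
# Parameter a is now deleted; pairs encoded under it no longer decode to the shares ...
old_c, old_h = stored[0]
assert retrieve(param_b, old_c, old_h) != r0
# ... while the re-encoded pairs still reconstruct the same private value.
assert sum(retrieve(param_b, c, h) for c, h in new_stored) % q == priv
```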
With reference to the accompanying figures, the same flow is applied for each share si. (Again, rather than a threshold sharing construction as illustrated, the value stored and recovered could instead be an undivided value, e.g., a secret.)
Algorithm 14
Require: an elliptic curve E over a finite field of order n, and a group generator G
for PUF Device d do
  Select b∈{0, 1} uniformly at random (e.g., using the TRNG)
  Reconfigure PUFb, invalidating any challenge-helper pairs encoded under its prior mapping
  for each share ri do
    Read the challenge-helper pair {ci, helperi} encoded under the unmodified PUF1−b
    O′ ← output of PUF1−b on challenge ci
    ri ← D((ECC(ri) ⊕ O) ⊕ O′)
    Refresh ri and store new challenge-helper pairs for it under both PUF1−b and the reconfigured PUFb
  end for
end for
When Algorithm 14 is run during the device's initial power-on, there are no challenge-helper pairs stored in non-volatile memory. Thus, the fact that PUFb will be selected for reconfiguration (cf. Algorithm 14) is inconsequential at initial power-on, as there are not yet any stored challenge-helper pairs that the reconfiguration could invalidate.
Referring again to the accompanying figure, upon receiving a command 𝒪 and auxiliary information Aux, the device recovers each share ri(τ) as in Algorithm 14, performs the local threshold operation 𝒪(ri(τ), Aux), and re-encodes the refreshed share using the reconfigured PUF. The local results over the shares are combined into the final cryptographic output 𝒪←Combine({𝒪(ri(τ), Aux)}0≦i≦n), so that the device's external input/output behavior again remains that of a standard public key system.
The TRNG and variables not stored in non-volatile memory are preferably protected by the tamper sensitivity property of the PUF, so that an adversary cannot bias the TRNG or alter the bit b selected on power-on. In that regard, reconfigurable PUFs have been demonstrated as a viable protection mechanism for non-volatile memory (see, e.g., Kursawe et al., supra).
It is noted that if the device of the foregoing embodiment loses power before storing in memory updated challenge-helper pairs using both the unmodified and reconfigured PUF, when it is powered up the unmodified PUF may be selected for reconfiguration and the shares will be unrecoverable. To preclude that possibility, a pair of backup PUFs may be employed, with a set of corresponding challenge-helper pairs generated for each of the backup PUFs. If the primary pair of PUFs are unable to regenerate the shares (e.g., the device performs a test by comparing its regenerated public key against the original stored in non-volatile memory), the backup pair is invoked. The same general approach that is used for the primary PUF pair is followed, where the device randomly reconfigures one of the backup PUFs before attempting share reconstruction.
More specifically, on power-up, the device proceeds as in the primary flow described above but using the backup pair of PUFs: one backup PUF is randomly selected and reconfigured, the shares are recovered using the unmodified backup PUF and its stored challenge-helper pairs, and the local results over the backup shares ri(τ) are combined into the final cryptographic output 𝒪.
In one embodiment, instantiating four physically reconfigurable PUFs (P-RPUFs) can be achieved using phase change memory (PCM), which is a candidate replacement for Flash and DRAM and, if adopted, would be common to many architectures. A P-RPUF can be instantiated using PCM (Kursawe et al., “Reconfigurable Physical Unclonable Functions—Enabling technology for tamper-resistant storage,” IEEE International Workshop on Hardware-Oriented Security and Trust, 2009. HOST '09., pages 22-29, 2009), and four P-RPUFs can be instantiated on one device by dividing the entire memory space into four blocks.
Scalability
Standard PUF protocols are inherently linked to a specific hardware device (indeed, this is their goal), which can impose a constraint on the ability to readily scale a system to support an arbitrary processing load.
This application is a continuation-in-part of U.S. patent application Ser. No. 14/746,054 filed Jun. 22, 2015, which was in turn a continuation-in-part of Ser. No. 14/704,914 filed May 5, 2015, and claims the benefit of the priority of and incorporates by reference the contents of provisional U.S. Patent Applications Ser. No. 62/150,586 filed Apr. 21, 2015, Ser. No. 62/128,920 filed Mar. 5, 2015, and Ser. No. 61/988,848 filed May 5, 2014.
Number | Name | Date | Kind |
---|---|---|---|
8446250 | Kursawe | May 2013 | B2 |
8458489 | Beckmann | Jun 2013 | B2 |
8912817 | Wang | Dec 2014 | B2 |
8918647 | Wallrabenstein | Dec 2014 | B1 |
20060023887 | Agrawal et al. | Feb 2006 | A1 |
20060045262 | Orlando | Mar 2006 | A1 |
20090083833 | Ziola et al. | Mar 2009 | A1 |
20100176920 | Kursawe et al. | Jul 2010 | A1 |
20120183135 | Paral et al. | Jul 2012 | A1 |
20130246809 | Beckmann et al. | Sep 2013 | A1 |
20140140513 | Brightley et al. | May 2014 | A1 |
Number | Date | Country |
---|---|---|
2 320 344 | Jul 2011 | EP |
Entry |
---|
Majzoobi et al., “Techniques for Design and Implementation of Secure Reconfigurable PUFs,” ACM Transactions on Reconfigurable Technology Systems, 2:1, pp. 5:1-5:33 (2009). |
Kursawe et al., “Reconfigurable Physical Unclonable Functions—Enabling technology for tamper-resistant storage,” Hardware-Oriented Security and Trust, HOST '09, IEEE International Workshop, pp. 22-29 (2009). |
Katzenbeisser et al., “Recyclable PUFs: logically reconfigurable PUFs,” Journal of Cryptographic Engineering, 1:3, pp. 177-186 (2011). |
Eichhorn et al., “Logically Reconfigurable PUFs: Memory-based Secure Key Storage,” Proceedings of the Sixth ACM Workshop on Scalable Trusted Computing, STC '11, pp. 59-64 (ACM 2011). |
Maes et al., “Intrinsic PUFs from flip-flops on reconfigurable devices,” 3rd Benelux workshop on information and system security (WISSec 2008), vol. 17. |
Zhang et al., “Exploiting Process Variations and Programming Sensitivity of Phase Change Memory for Reconfigurable Physical Unclonable Functions,” IEEE Transactions on Information Forensics and Security, vol. 9, No. 6, pp. 921-932 (2014). |
Horstmeyer et al., “Physically secure and fully reconfigurable data storage using optical scattering,” IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pp. 157-162 (2015). |
Lao et al., “Reconfigurable architectures for silicon physical unclonable functions,” IEEE International conference on Electro/Information Technology (EIT), pp. 1-7 (2011). |
International Search Report and Written Opinion dated Jun. 3, 2016 for Application No. PCT/US2016/021275. |
Asaeda et al., Structuring Proactive Secret Sharing in Mobile Ad-hoc Networks. 2006 1st International Symposium on Wireless Pervasive Computing. Jan. 18, 2006. 6 pages. |
Number | Date | Country | |
---|---|---|---|
20170149572 A1 | May 2017 | US |
Number | Date | Country | |
---|---|---|---|
61988848 | May 2014 | US | |
62128920 | Mar 2015 | US | |
62150586 | Apr 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14746054 | Jun 2015 | US |
Child | 15176766 | US | |
Parent | 14704914 | May 2015 | US |
Child | 14746054 | US |