This invention relates to coding of information using a pseudo-random source.
A pseudo-random source (PRS) of values can be used, for example, in applications in which the pseudo-random value can be regenerated but cannot be predicted, or in which such prediction would be very difficult or time consuming. In some examples, the pseudo-random value depends on an input value, often referred to as a “challenge” value. In some examples, the pseudo-random values comprise bits that are generated by circuitry that implements a function that depends on device-specific characteristics, for example, based on device-to-device fabrication variation among a set of devices that are fabricated in a common manner, for example, according to the same semiconductor masks and fabrication conditions. Some examples of such functions have been referred to as Physical Unclonable Functions (PUFs). Device-specific quantities can be generated in ways that depend on such device-specific characteristics. Examples of the device-specific characteristics include time delay along electrical signal paths and voltage thresholds of active semiconductor devices. In a number of previous approaches, the device-specific quantities are binary, for example, based on a comparison of pairs of underlying device-specific characteristics. For example, US Pat. Pub. 2003/0204743A1, titled “Authentication of Integrated Circuits,” describes an approach in which a device-specific bit is generated according to the relative delay along two delay paths. As another example, US Pat. Pub. 2007/0250938A1, titled “Signal Generator Based Device Security,” describes an approach in which oscillation frequencies are compared to determine device-specific bits.
In some techniques, regeneratable pseudo-random bits are used to encode a hidden value so that the encoding (e.g., the exclusive OR of the pseudo-random bits and the hidden value) can be disclosed without directly disclosing the hidden value, and so that the device can use the encoding and re-generated pseudo-random bits to re-generate the hidden value. In some examples, error correction techniques are used to account for differences between the initially generated pseudo-random bits and the re-generated pseudo-random bits. For instance, an error correction syndrome may be calculated for the pseudo-random bits and stored along with an XOR mask. The degree to which information about the hidden value is “leaked” through knowledge of the error correction syndrome and XOR mask can depend on the statistical characteristics of the pseudo-random values, for instance on the bias characteristics of the pseudo-random values.
In one aspect, in general, an approach uses a series of pseudo-random quantities to encode a hidden value or set of values. In some examples, the pseudo-random quantities each represent a degree of comparison of device-specific characteristics. In some examples, the pseudo-random quantities are derived from biometric information of organic (e.g., human) or inorganic sources (e.g., manufacturing variations of surfaces). The hidden value is encoded using indexes into the series of pseudo-random quantities, for example, based on numerically ordering the series of quantities. In some examples, a possibly noisy version of the pseudo-random quantities is re-generated and used to re-generate (decode) the hidden value. In some examples, this decoding of the hidden value does not require additional error correction mechanisms.
In another aspect, in general, an encoding of first data is accepted as data representing a set of one or more indices formed based on a first series of quantities. The first series of quantities is based on a pseudo-random source and the data representing the indices is insufficient to reproduce the first data. A second series of quantities based on the pseudo-random source is generated. The set of one or more indices identifies quantities in the second series. The set of one or more indices and the second series of quantities are combined to reproduce first data.
Aspects may include one or more of the following features.
The first data include multiple elements, and reproducing each element of the first data includes combining a subset of the indices and a subset of the second series of quantities based on the pseudo-random source to reproduce the element. In some examples, the subsets of quantities used to reproduce different elements are disjoint.
In another aspect, in general, a decoder includes an input for receiving an encoding of first data as data representing a set of one or more indices. The decoder also includes a pseudo-random source for generating a series of quantities. A combination module in the decoder is used to combine the set of one or more indices and the series of quantities to reproduce first data.
Aspects may include one or more of the following features.
The encoding of the first data includes error correction data, and the decoder further includes an error corrector for application to the encoding prior to processing by the combination module.
In another aspect, in general, a method includes generating a first series of quantities based on a pseudo-random source, each quantity being represented as a multiple bit representation. First data is accepted for encoding, and the first data is encoded as a first set of one or more indices into the series of generated values according to a mapping function from the generated values to functions of index positions in the series.
Aspects may include one or more of the following features.
The mapping function depends on a numerical ordering of the quantities in the first series.
The method further includes generating a second series of quantities based on the pseudo-random source, the quantities in the first series corresponding to the quantities in the second series. The first set of one or more indices and the second series of quantities are combined to reproduce the first data.
Generating the first series of quantities includes generating said quantities according to a challenge value, and generating the second series of quantities includes generating said quantities according to the challenge value.
The pseudo-random source depends on device-specific characteristics that vary among like devices formed according to a common design.
The pseudo-random source depends on biometric characteristics and/or on characteristics of an organic or an inorganic source.
The pseudo-random source may include multiple separate sources. For instance, one separate source may depend on device specific characteristics while another separate source may depend on biometric characteristics.
Each of the series of quantities represents a degree of comparison of device-specific values.
Each quantity includes a polarity and a magnitude of the comparison.
Aspects can include one or more of the following advantages.
The encoding scheme provides low information leakage by taking advantage of the randomness of the pseudo-random sequence, by using both the polarity and the confidence information in each output value, and/or by introducing a non-linear mapping between the data bits to be encoded and the index-based outputs.
When the output of the PRS is viewed as a series of soft bits, the index-based encoding effectively forms a soft-decision encoder. The soft-decision encoder (an encoder that takes as input “soft” bits) is made possible by using index-based encoding, and brings about advantages that are evident in the description contained in this document.
Even if either the pseudo-random sequence or data source (consisting of polarity information), or both, are biased, this information is not directly leaked via the index-based outputs.
One approach to computing an error correction syndrome is to exclusive-OR the PRS bits with parity bits from an encoder (herein referred to as the conventional syndrome generation method). To the extent that the PRS (PUF) exhibits bias, the product of the PRS bias and the parity bias is leaked into the syndrome, which is public information. As an example, if a particular PRS has a bias of 0.125 towards 0 (i.e., around ⅝ of the bits are 0), and if the n−k parity bits also have a 0.125 bias towards 0, the syndrome has a 0.03125 bias towards 0. Using an index-based syndrome, even if both the PRS output and the parity are biased, or very heavily biased, the product of the biases is not leaked through the syndrome. Decoupling the security of the syndrome from the bias characteristics of the PUF output (not possible with the conventional syndrome generation method) allows, for example, for more modular design techniques.
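As an illustration of the bias arithmetic in the preceding example, the following short Python fragment (the function name is illustrative only) computes the bias of the exclusive OR of two independent bits, each biased by 0.125 towards 0:

```python
# Illustrative check: the XOR of two independent bits, each biased by 0.125
# towards 0 (i.e., P(0) = 0.625), is itself biased by 0.03125 towards 0.
def xor_bias(bias_a: float, bias_b: float) -> float:
    """Return the bias (deviation of P(0) from 0.5) of a XOR b."""
    p0_a, p0_b = 0.5 + bias_a, 0.5 + bias_b
    p0_xor = p0_a * p0_b + (1 - p0_a) * (1 - p0_b)  # XOR is 0 when the bits agree
    return p0_xor - 0.5

print(xor_bias(0.125, 0.125))  # 0.03125: the bias leaked through a conventional syndrome
```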
In a degenerate case in which the PRS outputs one-bit values, index-based encoding still achieves the desired effect by randomly selecting the address (index) of a bit in the pseudo-random bit sequence that matches the data bit, and writing out that index. If none of the bits match, a random mismatching bit is selected. If bit-exact reproduction is desired, further error correction techniques can be applied.
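A minimal sketch of this degenerate one-bit case, assuming a policy of choosing uniformly at random among matching positions (the function and variable names are illustrative, not taken from the specification):

```python
import random

def encode_bit_index(prs_bits, b):
    """Degenerate one-bit case: output the index of a PRS bit equal to b.

    If no bit matches, a mismatching index is chosen at random; exact
    reproduction then relies on further error correction.
    """
    matches = [i for i, r in enumerate(prs_bits) if r == b]
    candidates = matches if matches else list(range(len(prs_bits)))
    return random.choice(candidates)

def decode_bit_index(regenerated_bits, index):
    """Re-generate the hidden bit by reading the indexed (possibly noisy) bit."""
    return regenerated_bits[index]

# Example: the index, not the hidden bit itself, is what is written out.
prs = [0, 1, 1, 0, 1, 0, 0, 1]
idx = encode_bit_index(prs, 1)
print(idx, decode_bit_index(prs, idx))
```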
Using NIST's statistical tests for randomness, index-based syndrome values have been tested to be random using representative test sequences as input. In some examples, correlation tests show similar results, in that 95% of the correlation values are within two standard errors of the ideal unbiased correlation value, and the few outliers do not stray much further than two standard errors from the ideal. Index-based encoding can be a form of “soft-decision” encoding that takes advantage of the multi-bit valued PRS output to, among other effects, decorrelate the syndrome from parity or PUF bias.
The use of index-based outputs as a means of error correction reduces the complexity of encoding and/or decoding as compared to conventional error correction approaches.
In some use cases, the combination provides a degree of error correction that is not practical using conventional error correction alone. Coding gain can be achieved using index-based encoding, allowing the combined decoder to correct errors in conditions with higher noise densities, thus allowing the ECC decoder to operate on smaller block sizes and thereby reducing ECC complexity.
In some examples, the coding scheme operates on pseudo-random sources, which are possibly noisy, in a way that is challengeable (degenerate cases include the challenge being fixed) and that has real-valued outputs (polarity and magnitude information; or, in some degenerate cases, only polarity information). The PRS may include biometric readings, a splotch of paint, optical or magnetic readings, a piece of paper or fabric, device-specific signatures from an integrated circuit, or a variety of other characteristics that can be modeled as a pseudo-random source, which is possibly noisy. In some examples, the PRS outputs real values in the sense that each output is more than a single hard bit (polarity) (although in degenerate cases the PRS may output only a single-bit value and multiple readings are taken to synthesize a “real” value). That is, confidence/magnitude information is present as well. Coding of information can be performed directly from the PRS or from a recombined variant, such as a recombination PUF.
In some examples, the PRS depends on one or more of biometric readings, measurements of physical characteristics such as paint splotch patterns, speckle patterns, optical or magnetic readings, a piece of paper or fabric, or device-specific signatures from an integrated circuit, each of which can be modeled as a direct, or possibly noisy, observation of a pseudo-random source.
Advantages of index-based coding can include the syndrome revealing minimal information about the embedded secret. In the conventional XOR method, a biased PUF may leak information about the secret. Specifically, the product of the PUF bias and the secret bias may be leaked into an error correction syndrome, which reduces the brute-force effort needed to guess the secret. PUF bias thus leaks secret information as a first-order effect. In at least some examples of the present approach, first-order information is not leaked even if the PUF or the secret or both are biased, when index-based coding is used.
A further advantage of one or more embodiments is that there is a processing gain associated with well-chosen mapping functions for the index-based syndrome, which can result in an exponential reduction in ECC complexity.
Furthermore, a one-to-many mapping of data bits to syndrome values is possible, further enhancing security. Further security may also be gained by using iterative chaining techniques.
Other features and advantages of the invention are apparent from the following description, and from the claims.
The encoder 600 includes a “syndrome” encoder 610, which applies one of a family of functions $P^{(B)}(\cdot)$, indexed by the value $B$ being encoded, to the sequence of values $R = (R_0, \ldots, R_{q-1})$. That is, for a one-bit input (i.e., 0 or 1), there are two functions, $P^{(0)}(\cdot)$ and $P^{(1)}(\cdot)$. Each function takes as input the sequence of pseudo-random values $R = (R_0, \ldots, R_{q-1})$ and provides an $s$-bit index as an output, for instance where $q \le 2^s$ such that $s$ is sufficiently large to uniquely specify an index in the range 0 to $q-1$. Note that the $s$-bit index can be represented using a variety of encoding approaches, for example, as an explicit $s$-bit number, or as an alternate representation that can be translated into an index, including direct addressing, indirect addressing, relative addressing, encoding of differential distance, etc.
Note that in other embodiments, more generally, the input $B$ can take on one of more than two values, for example, one of eight values. In such a case, one of eight functions $P^{(B)}(\cdot)$, indexed by $B$, is applied to the sequence.
One example of an index-based encoding function with a binary input is based on the indices of the extreme values in the sequence.
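A plausible form of such a function, assuming the convention that a 1 is encoded by the index of the maximum value and a 0 by the index of the minimum value (this assignment is an assumption consistent with the decoding discussion below), is

$$P^{(1)}(R_0, \ldots, R_{q-1}) = \arg\max_i R_i, \qquad P^{(0)}(R_0, \ldots, R_{q-1}) = \arg\min_i R_i.$$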
The decoder 700 includes a syndrome decoder 710, which accepts the index value $P$ and outputs an estimate $\hat{B}$, which in normal operation is expected to re-generate the original value $B$. In some examples, this re-generation is done by first applying a regeneration function $B^{(P)}(\cdot)$ to the sequence of values $\tilde{R} = (\tilde{R}_0, \ldots, \tilde{R}_{q-1})$ to produce a “soft” reconstruction of the value $B$, followed by a hard decision $H(\cdot)$, which outputs the one-bit re-generation of $B$.
One example of the regeneration function $B^{(P)}(\cdot)$, which is compatible with the maximum and minimum encoding function shown above in the case that the values $R_i$ are distributed about zero, is

$$B^{(P)}(\tilde{R}_0, \ldots, \tilde{R}_{q-1}) = \tilde{R}_P,$$

followed by a hard decision $H(\cdot)$ on the sign of this value.
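A plausible form of the hard decision, under the assumption that a non-negative regenerated value decodes to 1 and a negative value decodes to 0 (consistent with the polarity discussion that follows), is

$$H(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0. \end{cases}$$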
Note that these encoding and decoding functions can be understood to be compatible based on the observation that, in encoding, the device-specific value that is maximum is the most positive, and therefore the re-generation of that value is expected to remain at least positive, even if it is not the maximum of the regenerated sequence. Similarly, the minimum value in encoding is expected to remain negative when it is regenerated.
Note that these encoding, decoding, and hard decision functions are only one example. Other examples may not correspond to the maximum and minimum values of the sequence in encoding. For example, the encoding functions could correspond to the index of the second largest versus the second smallest value, or the index of the median versus the value most different from the median. Also, in some embodiments, each data bit may be encoded with a tuple of multiple indices, or groups of bits may each be encoded with a tuple of indices. In an example of encoding using a pair (i.e., a two-tuple), the output may comprise the pair of indices representing the two values that are most arithmetically different versus the pair of indices of the values that are closest to equal. As introduced above, in some examples $B$ can take on more than two values (i.e., be represented using multiple bits), and in such examples a multibit value can be represented by a set of multiple indices.
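As an illustrative sketch of the maximum/minimum example described above (not of the pair-based or multibit variants), the following Python fragment encodes a single bit as an index and decodes it from a possibly noisy regeneration; the function names, example values, and sign convention are assumptions for illustration only:

```python
def encode(values, b):
    """Index-based encoding of one bit: the index of the maximum value
    encodes b = 1, and the index of the minimum value encodes b = 0.
    The polarity-to-bit assignment is an assumed convention."""
    indices = range(len(values))
    if b == 1:
        return max(indices, key=lambda i: values[i])
    return min(indices, key=lambda i: values[i])

def decode(regenerated, index):
    """Read the indexed (possibly noisy) value and hard-decide on its sign."""
    return 1 if regenerated[index] >= 0 else 0

R = [3, -7, 12, -2, 5, -9]        # enrollment-time PRS outputs (signed comparisons)
R_noisy = [2, -6, 9, 1, 4, -8]    # regenerated, possibly noisy outputs
p = encode(R, 1)                  # index of the maximum value: 2
print(p, decode(R_noisy, p))      # 2 1
```

Note that only the index p is disclosed; the noisy regenerated value at that index is expected to retain its polarity, which is what the hard decision exploits.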
In some examples, the decoding function is

$$B^{(P)}(\tilde{R}_0, \ldots, \tilde{R}_{q-1}) = \Pr\!\left(B = 1 \mid P, \tilde{R}_0, \ldots, \tilde{R}_{q-1}\right),$$

based on a probabilistic model of the encoding process, thereby generating a “soft bit” re-generation of the original data. In another example, soft bits can be generated by extracting the polarity and magnitude of $\tilde{R}_P$.
As discussed above, the encoding of a single bit value using an $s$-bit index introduces a degree of error resilience. In some examples in which multiple data bits are to be encoded, further redundancy, and with it further error resilience, is introduced into the $n$-bit sequence $B$, for example, by using fewer than $n$ information-bearing bits, with the remaining bits providing redundancy. For example, $k$ information-bearing bits are augmented with $n-k$ redundancy bits using conventional Error Correction Code (ECC) techniques.
Also as introduced above, in some implementations of decoding, “soft bits” are recovered, such that for a sequence of $n$ encoded bits, a sequence of $n$ soft bits $\tilde{B} = (\tilde{B}_0, \ldots, \tilde{B}_{n-1})$ is first recovered, and then a soft error correction approach is applied to the entire sequence of soft bits to yield the reconstructed, error-corrected values $\hat{B} = (\hat{B}_0, \ldots, \hat{B}_{n-1})$.
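A hedged sketch of this combination follows, using a simple repetition code as a stand-in for the conventional ECC mentioned above; the block structure, the soft-decision rule, and all names are illustrative assumptions rather than the specific construction of any embodiment:

```python
def index_encode(values, b):
    """Encode one bit as the index of the maximum (b = 1) or minimum (b = 0) value."""
    indices = range(len(values))
    return max(indices, key=lambda i: values[i]) if b == 1 else \
           min(indices, key=lambda i: values[i])

def encode_with_ecc(info_bits, prs_blocks, reps=3):
    """Repetition-encode k information bits into n = k * reps coded bits, then
    index-encode each coded bit using its own disjoint block of PRS values."""
    coded = [b for b in info_bits for _ in range(reps)]
    return [index_encode(block, b) for block, b in zip(prs_blocks, coded)]

def decode_with_ecc(indices, regen_blocks, reps=3):
    """Recover one soft value per coded bit (the regenerated value at the stored
    index), then apply soft-decision repetition decoding per information bit."""
    soft = [block[p] for block, p in zip(regen_blocks, indices)]
    return [1 if sum(soft[i:i + reps]) >= 0 else 0
            for i in range(0, len(soft), reps)]

# Example: 2 information bits, 3x repetition, six disjoint blocks of PRS values.
blocks = [[3, -2, 7, -5], [1, -8, 4, -3], [6, -1, -4, 2],
          [-7, 2, 5, -2], [9, -3, 1, -6], [-2, 8, -5, 3]]
noisy  = [[2, -1, 6, -4], [0, -7, 5, -2], [5, -2, -3, 1],
          [-6, 1, 4, -1], [8, -2, 2, -5], [-1, 7, -4, 2]]
idx = encode_with_ecc([1, 0], blocks)
print(decode_with_ecc(idx, noisy))  # [1, 0]
```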
Another approach combines a number of techniques described above:
Encoder:
Decoder:
Other embodiments do not necessarily use an input challenge. For example, the device-specific values can be based only on device characteristics, or can be based on a fixed challenge that is integrated into the device.
As introduced above, a variety of pseudo-random sources, which permit noisy regeneration, can be used with the index-based coding and decoding. Examples include biometric readings (e.g., iris scans, fingerprints, etc.) or human-generated passwords. In some examples, the pseudo-random source that is used is generated from a combination of sources, for example, based in part on “uncloneable” characteristics of a device (e.g., a silicon PUF) and in part on biometric readings.
The values being encoded and later regenerated (e.g., the values $B$ above) can be used for a variety of authentication and/or cryptographic functions, including key generation.
In some examples, a device may implement an index-based encoder or an index-based decoder, but not necessarily both. For instance, the device may include the PRS and provide the outputs of the PRS to an enrollment function, which is not necessarily hosted in the device. Later, the device, using the same PRS, can regenerate a value encoded by the enrollment function.
In some examples, the encoding function is based on a model of the PUF rather than physical application of the particular challenge to the PUF. For instance, in an enrollment phase, parameters of a physical instance of a PUF are extracted, for example, based on a set of measurements of outputs for a limited set of challenge inputs. These parameters are known to the encoding system, which uses them to predict the sequence of outputs $R = (R_0, \ldots, R_{q-1})$ that will be generated by the device at decoding time with a particular challenge. This sequence is used to determine the index output that encodes the hidden value $B$. At decoding time, one approach is to regenerate the sequence of values as $\tilde{R} = (\tilde{R}_0, \ldots, \tilde{R}_{q-1})$, from which the estimate of the hidden value is determined. Note, however, that it may not be necessary for the PUF to actually generate the multibit values $\tilde{R} = (\tilde{R}_0, \ldots, \tilde{R}_{q-1})$. For example, using a reconstruction function that depends only on the sign of the regenerated value at the stored index does not require a multibit output. In this example, it is suitable for the PUF to output the sign as a one-bit output, even though the encoding was based on a simulation of the full multibit output.
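Under the assumption that the hard decision operates only on polarity (an illustrative assumption consistent with the decoding described earlier), such a sign-only reconstruction might be written as

$$\hat{B} = H\!\big(\operatorname{sign}(\tilde{R}_P)\big),$$

which depends only on the one-bit sign of the regenerated value at index $P$.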
In some examples, the values $R_i$ are not necessarily represented in digital form. For instance, they may be accepted as analog signals and either converted to a digital form for determining the index outputs, or processed directly in their analog form (e.g., in an analog signal processing circuit).
Implementations of approaches described above may use software, hardware, or a combination of software and hardware. Software may include instructions stored on a machine-readable medium for causing a general-purpose or special-purpose processor to implement steps of the approaches. The hardware may include special-purpose hardware (e.g., application specific integrated circuits) and/or programmable gate arrays.
In some examples, the PUF and syndrome encoder and/or decoder are implemented in a device, such as an RFID or a secure processor. The decoded data may be used as or used to form a cryptographic key or for other cryptographic or security (e.g., authentication) functions. In some examples, the syndrome encoder is implemented in a different device than the pseudo-random source.
It is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the invention, which is defined by the scope of the appended claims. Other embodiments are within the scope of the following claims.
This application claims the benefit of U.S. Provisional Applications No. 61/231,424, filed Aug. 5, 2009, and No. 61/295,374, filed Jan. 15, 2010, which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4944009 | Micali et al. | Jul 1990 | A |
4985614 | Pease | Jan 1991 | A |
5177352 | Carson | Jan 1993 | A |
5180901 | Hiramatsu | Jan 1993 | A |
5204902 | Reeds | Apr 1993 | A |
5247577 | Bailey | Sep 1993 | A |
5276738 | Hirsch | Jan 1994 | A |
5297207 | Degele | Mar 1994 | A |
5307412 | Vobach | Apr 1994 | A |
5375169 | Scheidt | Dec 1994 | A |
5388157 | Austin | Feb 1995 | A |
5420928 | Aiello et al. | May 1995 | A |
5438622 | Normile et al. | Aug 1995 | A |
5528231 | Patarin | Jun 1996 | A |
5768382 | Schneier | Jun 1998 | A |
5818738 | Effing | Oct 1998 | A |
5862094 | Kawabata | Jan 1999 | A |
5883956 | Le | Mar 1999 | A |
5920628 | Indeck | Jul 1999 | A |
5963104 | Buer | Oct 1999 | A |
6026293 | Osborn | Feb 2000 | A |
6161213 | Lofstrom | Dec 2000 | A |
6233339 | Kawano | May 2001 | B1 |
6246254 | Choukalos | Jun 2001 | B1 |
6289292 | Charlton | Sep 2001 | B1 |
6289453 | Walker | Sep 2001 | B1 |
6289455 | Kocher | Sep 2001 | B1 |
6301695 | Burnham | Oct 2001 | B1 |
6305005 | Burnham | Oct 2001 | B1 |
6324676 | Burnham | Nov 2001 | B1 |
6363485 | Adams | Mar 2002 | B1 |
6386456 | Chen | May 2002 | B1 |
6402028 | Graham, Jr. | Jun 2002 | B1 |
6529793 | Beffa | Mar 2003 | B1 |
6535016 | Choukalos | Mar 2003 | B2 |
6848049 | Tailliet | Jan 2005 | B1 |
6973187 | Gligor et al. | Dec 2005 | B2 |
7472105 | Staddon et al. | Dec 2008 | B2 |
7568113 | Linnartz | Jul 2009 | B2 |
7577850 | Barr | Aug 2009 | B2 |
20010032318 | Yip | Oct 2001 | A1 |
20010033012 | Kommerling | Oct 2001 | A1 |
20020065574 | Nakada | May 2002 | A1 |
20020095594 | Dellmo | Jul 2002 | A1 |
20020106087 | Lotspiech | Aug 2002 | A1 |
20020107798 | Hameau | Aug 2002 | A1 |
20020128983 | Wrona | Sep 2002 | A1 |
20020150252 | Wong | Oct 2002 | A1 |
20020188857 | Orlando | Dec 2002 | A1 |
20020199110 | Kean | Dec 2002 | A1 |
20030204731 | Pochuev | Oct 2003 | A1 |
20030204743 | Devadas | Oct 2003 | A1 |
20030219121 | Van Someren | Nov 2003 | A1 |
20040032950 | Graunke | Feb 2004 | A1 |
20040136529 | Rhelimi et al. | Jul 2004 | A1 |
20040148509 | Wu | Jul 2004 | A1 |
20040268117 | Olivier et al. | Dec 2004 | A1 |
20050051351 | De Jongh | Mar 2005 | A1 |
20060227974 | Haraszti | Oct 2006 | A1 |
20070036353 | Reznik et al. | Feb 2007 | A1 |
20070038871 | Kahlman | Feb 2007 | A1 |
20070039046 | Van Dijk | Feb 2007 | A1 |
20070044139 | Tuyls | Feb 2007 | A1 |
20080044027 | Van Dijk | Feb 2008 | A1 |
20080059809 | Van Dijk | Mar 2008 | A1 |
20080106605 | Schrijen | May 2008 | A1 |
20080222415 | Munger et al. | Sep 2008 | A1 |
20090161872 | O'Brien et al. | Jun 2009 | A1 |
20090292921 | Braun et al. | Nov 2009 | A1 |
20100073147 | Guajardo Merchan et al. | Mar 2010 | A1 |
20100185865 | Yeap et al. | Jul 2010 | A1 |
20100211787 | Bukshpun et al. | Aug 2010 | A1 |
20100306221 | Lokam et al. | Dec 2010 | A1 |
20110033041 | Yu et al. | Feb 2011 | A1 |
20110055585 | Lee | Mar 2011 | A1 |
Number | Date | Country |
---|---|---|
2344429 | Mar 2000 | CA |
19843424 | Mar 2000 | DE |
1100058 | May 2001 | EP |
1341214 | Sep 2003 | EP |
Entry |
---|
Arazi, B. “Interleaving Security and Efficiency Considerations in the Design of Inexpensive IC Cards”. IEEE Proceedings on Computers and Digital Techniques. vol. 141, Issue 5. Sep. 1994. pp. 265-270. |
Hon-Sum Wong et al. “Three Dimensional “Atomistic” Simulation of Discrete Random Dopant Distribution Effect in Sub-0.1 μm MOSFET's”. IEDM, 29(2):705-708, 1993. |
Bennett Yee, “Using Secure Coprocessors,” Carnegie Mellon University, Pittsburgh, PA. May 1994. |
Ross Anderson et al. “Low Cost Attacks on Tamper Resistant Devices” Cambridge University, Cambridge, England. Apr. 1997. |
Milor et al., “Logic Product Speed Evaluation and Forecasting During the Early Phases of Process Technology Development Using Ring Oscillator Data,” 2nd International Workshop on Statistical Metrology, 1997, pp. 20-23. |
Ross Anderson et al. “Tamper Resistance—a Cautionary Note”. Cambridge University, Cambridge, England Nov. 1996. |
Tuyls et al., “Information-Theoretic Security Analysis of Physical Uncloneable Functions,” Proceedings ISIT 2004 (Chicago), p. 141. |
Omura, J.K., Novel Applications of Cryptography in Digital Communications, IEEE Comm. Mag., May 1990, pp. 21-29. |
Srinivas Devadas et al., “Synthesis of Robust Delay-Fault Testable Circuits Practice” Massachusetts Institute of Technology, Cambridge, MA Mar. 1992. |
Srinivas Devadas et al., “Synthesis of Robust Delay-Fault Testable Circuits. Theory” Massachusetts Institute of Technology, Cambridge, MA Jan. 1992. |
Sean W Smith et al. “Building a High-Performance, Programmable Secure Coprocessor”. IBM T.J. Watson Research Center, Yorktown Heights, NY. Oct. 16, 1998. |
Duane S. Boning et al., “Models of Process Variations in Device and Interconnect,” Massachusetts Institute of Technology, Cambridge, MA Aug. 23, 1999. |
Ravikanth, Pappu Srinivasa “Physical One-Way Functions”. Massachusetts Institute of Technology, Cambridge, MA. Mar. 2001. |
Blaise Gassend et al., “Silicon Physical Unknown Functions and Secure Smartcards,” Massachusetts Institute of Technology, Cambridge, MA May 13, 2002. |
Blaise Gassend et al. “Controlled Physical Unknown Functions: Applications to Secure Smartcards and Certified Execution,” Massachusetts Institute of Technology, Cambridge, Jun. 10, 2002. |
Blaise Gassend et al., “Silicon Physical Random Functions”, MIT , Proceedings of the Computer and Communication Security Conference, Nov. 2002, Memo 456. |
Blaise Gassend, “Physical Random Functions,” Massachusetts Institute of Technology, Cambridge, MA Feb. 2003. |
Gassend, B.L.P., Physical Random Functions, Thesis at the Massachusetts Institute of Technology, pp. 1-89 (Feb. 1, 2003) XP002316843. |
Daihyun Lim, “Extracting Secret Keys from Integrated Circuits,” Massachusetts Institute of Technology, Cambridge, MA, May 2004. |
Lee et al., “A Technique to Build a Secret Key in Integrated Circuits for Identification and Authentication Applications,” Massachusetts Institute of Technology (CSAIL), Jun. 2004. |
Xilinx (Ralf Krueger) “Using High Security Features in Virtex-II Series FPGAs” www.xilinx.com; [printed Jul. 8, 2004]. |
Ranasinghe et al., “Security and Privacy Solutions for Low-Cost RFID Systems,” (2004). |
Tuyls, Pim and Lejla Batina, “RFID-Tags for Anti-Counterfeiting,” Topics in Cryptography, vol. 3860/2006, No. LNCS3860, (Feb. 13, 2005) XP002532233. |
Tuyls et al., “Security Analysis of Physical Uncloneable Functions,” Proc. 9th Conf. on Financial Cryptography and Data Security, Mar. 2000, LNCS 3570, pp. 141-155. |
G. Edward Suh, et al., “Design and Implementation of the AEGIS Single-Chip Secure Processor Using Physical Random Functions,” In the proceedings of the 32nd International Symposium on Computer Architecture, Madison, Wisconsin, Jun. 2005, (Memo-483). |
Skoric et al., “Robust Key Extraction from Physical Uncloneable Functions,” Proc. Applied Cryptography and Network Security 2005, LNCS 3531, pp. 407-422. |
Ulrich Ruhrmair “SIMPL Systems: On a Public Key Variant of Physical Unclonable Functions” Cryptology ePrint Archive, Report 2009/255. |
Number | Date | Country | |
---|---|---|---|
20110033041 A1 | Feb 2011 | US |
Number | Date | Country | |
---|---|---|---|
61295374 | Jan 2010 | US | |
61231424 | Aug 2009 | US |