Cryptography protects data from unwanted access. Cryptography typically involves mathematical operations on data (encryption) that make the original data (plaintext) unintelligible (ciphertext). Reverse mathematical operations (decryption) restore the original data from the ciphertext. Cryptography covers a wide variety of applications beyond encrypting and decrypting data. For example, cryptography is often used in authentication (i.e., reliably determining the identity of a communicating agent), the generation of digital signatures, and so forth.
Current cryptographic techniques rely heavily on intensive mathematical operations. For example, many schemes use a type of modular arithmetic known as modular exponentiation, which involves raising a large number to some power and reducing it with respect to a modulus (i.e., taking the remainder when the result is divided by a given modulus). Mathematically, modular exponentiation can be expressed as g^e mod M where e is the exponent and M the modulus.
Conceptually, multiplication and modular reduction are straightforward operations. However, the numbers used in these systems are often very large and significantly surpass the native wordsize of a processor. For example, a cryptography protocol may require modular operations on numbers 1024 to 4096 bits in length or more, while many processors have native wordsizes of only 32 or 64 bits. Performing operations on such large numbers can be very expensive in terms of both time and computational resources.
As described above, a wide variety of cryptographic operations involve multiplication of very large numbers and/or modular reduction. Described herein are a variety of techniques that can reduce the burden of these compute-intensive operations and speed operation of cryptographic systems. These techniques can also be applied in more general-purpose, non-cryptographic computing settings. One such technique involves improving the efficiency of a technique to multiply large numbers known as Karatsuba multiplication. Another technique involves improving the efficiency of modular reduction.
Karatsuba Multiplication
A wide variety of approaches have been developed to perform multiplication of two numbers. A common approach, known as schoolbook multiplication, involves segmenting the operands and performing multiplication operations on the smaller segments. As an example, two n-bit wide numbers A and B can be expressed as a set of smaller sized sub-segments such as:
A = a1·2^s + a0 [1]
B = b1·2^s + b0 [2]
where the a0 and b0 terms represent the s least significant bits of A and B and a1 and b1 represent the remaining more significant bits. In this notation, the subscript x in ax and bx represents the ordinal of a segment within a number (e.g., a0 represents the least significant bits of A, a1 the next most significant bits, and so forth).
Using conventional schoolbook multiplication, the product of A and B can be computed using four smaller multiplications:
A×B = a1b1·2^(2s) + (a0b1 + b0a1)·2^s + a0b0 [3]
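The following is a minimal sketch of equation [3] in Python, whose arbitrary-precision integers stand in for the multi-word arithmetic a real implementation would use; the segment size s and operand values are arbitrary illustrations.

```python
# Schoolbook multiplication of two numbers segmented into s-bit halves
# per equation [3]: four segment multiplications.
def schoolbook_2term(A, B, s):
    mask = (1 << s) - 1
    a1, a0 = A >> s, A & mask          # high and low s-bit segments of A
    b1, b0 = B >> s, B & mask          # high and low s-bit segments of B
    return (a1 * b1 << (2 * s)) + ((a0 * b1 + b0 * a1) << s) + a0 * b0

A, B = 0x1234_5678_9ABC_DEF0, 0x0FED_CBA9_8765_4321
assert schoolbook_2term(A, B, 32) == A * B
```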
A multiplication technique known as Karatsuba multiplication can reduce the number of segment multiplications. For example, for A and B above, the result of the:
(a0b1+b0a1) [4]
terms in [3] can be computed as:
[(a0+a1)(b0+b1)]−a1b1−a0b0 [5]
Since a1b1 and a0b0 form other terms in equation [3], using the values of a1b1 and a0b0 in equation [5] does not represent additional computational cost. Substituting equation [5] for equation [4] in equation [3], Karatsuba multiplication of A×B can be computed as:
A×B = a1b1·2^(2s) + ([(a0+a1)(b0+b1)] − a1b1 − a0b0)·2^s + a0b0 [6]
This substitution replaces two segment multiplications with a single multiplication and two additions (along with subtractions of the already-computed a1b1 and a0b0 terms). Since multiplying large segments is far more expensive than adding them, this represents a significant gain in computational efficiency in most cases.
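A sketch of two-term Karatsuba multiplication per equation [6] follows; again the operand values and segment size are arbitrary illustrations.

```python
# Two-term Karatsuba multiplication (equation [6]): three segment
# multiplications instead of schoolbook's four.
def karatsuba_2term(A, B, s):
    mask = (1 << s) - 1
    a1, a0 = A >> s, A & mask
    b1, b0 = B >> s, B & mask
    p_hi = a1 * b1
    p_lo = a0 * b0
    p_mid = (a0 + a1) * (b0 + b1) - p_hi - p_lo   # replaces a0*b1 + b0*a1
    return (p_hi << (2 * s)) + (p_mid << s) + p_lo

A, B = 0x1234_5678_9ABC_DEF0, 0x0FED_CBA9_8765_4321
assert karatsuba_2term(A, B, 32) == A * B
```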
In the example above, Karatsuba multiplication was applied to numbers segmented into two segments (i.e., “two-term Karatsuba multiplication”). Karatsuba multiplication, however, can also be applied to other numbers of segments. For example, a three-term Karatsuba multiplication can be defined for numbers A and B as:
A = a2·2^(2s) + a1·2^s + a0 [7]
B = b2·2^(2s) + b1·2^s + b0 [8]
A×B = a2b2·2^(4s) + a1b1·2^(2s) + a0b0 + [(a2+a1)(b2+b1) − a2b2 − a1b1]·2^(3s) + [(a2+a0)(b2+b0) − a2b2 − a0b0]·2^(2s) + [(a0+a1)(b0+b1) − a0b0 − a1b1]·2^s [9]
where each A and B are divided into three s-bit segments.
Like the two-term Karatsuba multiplication [6], the three-term Karatsuba multiplication [9] replaces multiplications between different ordinal segments (e.g., axby) with multiplications of like ordinal segments (e.g., axbx) and additions of segments of the same number (e.g., ax+ay). Equations have also been defined for five-term Karatsuba multiplication. These Karatsuba equations share the property that they require, at most, (t^2+t)/2 multiplications where t is the number of terms.
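The sketch below illustrates equation [9] with six segment multiplications, consistent with the (t^2+t)/2 bound for t=3; the segment size and operands are arbitrary illustrations.

```python
# Three-term Karatsuba (equation [9]): A and B are split into three s-bit
# segments and multiplied with six segment multiplications rather than nine.
def karatsuba_3term(A, B, s):
    mask = (1 << s) - 1
    a0, a1, a2 = A & mask, (A >> s) & mask, A >> (2 * s)
    b0, b1, b2 = B & mask, (B >> s) & mask, B >> (2 * s)
    p22, p11, p00 = a2 * b2, a1 * b1, a0 * b0
    t21 = (a2 + a1) * (b2 + b1) - p22 - p11   # a2*b1 + a1*b2
    t20 = (a2 + a0) * (b2 + b0) - p22 - p00   # a2*b0 + a0*b2
    t10 = (a0 + a1) * (b0 + b1) - p00 - p11   # a0*b1 + a1*b0
    return ((p22 << (4 * s)) + (t21 << (3 * s)) + ((p11 + t20) << (2 * s))
            + (t10 << s) + p00)

A, B = 0xABCDEF_123456_789ABC, 0x13579B_DF0246_8ACE12
assert karatsuba_3term(A, B, 24) == A * B
```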
Karatsuba multiplication can be implemented using recursion. For example, in a two-term Karatsuba multiplication of:
A×B = a1b1·2^(2s) + ((a0+a1)(b0+b1) − a1b1 − a0b0)·2^s + a0b0 [6]
each smaller segment multiplication can, in turn, be performed using Karatsuba. For example, performing Karatsuba multiplication of A×B can involve Karatsuba multiplication of a1b1, a0b0, and (a0+a1)(b0+b1). These multiplications may in turn involve Karatsuba multiplication of even smaller sub-segments. For example, determining a1b1 may involve segmenting a1 and b1 into multiple sub-segments.
A potential problem with this approach, however, is the different sized operands generated. That is, the (a0+a1) term and the (b0+b1) term may both generate carries from the add operations. The subsequent multiplication of the results of (a0+a1) and (b0+b1) may spill into an additional native word. This can undermine much of the efficiency of a Karatsuba implementation.
To address the “carry” problem, the most significant bits of the operands can be handled separately from the rest. That is, an (n+1)-bit operand A can be treated as a most significant bit ah plus an n-bit lower portion A[a1:a0] made up of two s-bit segments a1 and a0 (where n = 2s); B is likewise treated as bh plus B[b1:b0]. Karatsuba multiplication can then be performed on the s-sized segments using:
2^(2s)·a1b1 + 2^s·[(a1+a0)(b1+b0) − a1b1 − a0b0] + a0b0 [10]
The results can then be adjusted based on the values of the most significant bits ah and bh. For example, the result can be increased by
2^n·ah·B[b1:b0] [11]
and
2^n·bh·A[a1:a0] [12]
In other words, if ah is “1”, the result is increased by the n-bits of b1:b0 shifted by n bits. Similarly, if bh is “1”, the result is increased by the n-bits of a1:a0 shifted by n bits. These adjustments can be implemented as addition operations, for example:
result = result + 2^n·ah·B[b1:b0]
result = result + 2^n·bh·A[a1:a0]
or as branches followed by adds:
if (ah) then result = result + 2^n·B[b1:b0]
if (bh) then result = result + 2^n·A[a1:a0]
Finally, if both ah and bh are “1”, the result is increased by 2^(2n) (i.e., by ah·bh·2^(2n)). This can be implemented using a branch, for example:
if (ah·bh) then result = result + 2^(2n)
This combination of additions and one or more branch statements can prevent carries from propagating down into lower levels of the recursion.
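The following is a minimal sketch, not the patent's implementation, of how the carry handling above might be folded into a recursive Karatsuba routine: the carry bits ah and bh of each pair of segment sums are split off so that every recursive multiply again sees fixed-width operands (the segment width at each level plays the role of n in equations [11] and [12]). Operand widths are assumed to be powers of two, and the recursion threshold is arbitrary; the threshold choice is discussed below.

```python
import random

def karatsuba(A, B, n, threshold=64):
    if n <= threshold:
        return A * B                         # small operands: native/schoolbook multiply
    s = n // 2
    mask = (1 << s) - 1
    a1, a0 = A >> s, A & mask
    b1, b0 = B >> s, B & mask
    p_hi = karatsuba(a1, b1, s, threshold)   # a1*b1
    p_lo = karatsuba(a0, b0, s, threshold)   # a0*b0
    sa, sb = a0 + a1, b0 + b1                # (s+1)-bit sums
    ah, bh = sa >> s, sb >> s                # carry bits of the sums
    sa, sb = sa & mask, sb & mask            # s-bit lower portions
    p_mid = karatsuba(sa, sb, s, threshold)  # multiply the s-bit portions
    if ah:
        p_mid += sb << s                     # adjustment as in [11]
    if bh:
        p_mid += sa << s                     # adjustment as in [12]
    if ah and bh:
        p_mid += 1 << (2 * s)                # both carry bits set
    p_mid -= p_hi + p_lo
    return (p_hi << (2 * s)) + (p_mid << s) + p_lo

A, B = random.getrandbits(1024), random.getrandbits(1024)
assert karatsuba(A, B, 1024) == A * B
```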
Karatsuba multiplication is particularly desirable when the length of the operands is much larger than the native wordsize of a processor. For example, the processor may have a native wordsize of only s bits while the operands are many times longer. When the operand size n approaches s, the efficiency of Karatsuba decreases and schoolbook multiplication becomes more attractive. Thus, an implementation may recurse using Karatsuba until the segments shrink to some threshold size near the native wordsize and then complete the remaining multiplications using schoolbook multiplication. The particular threshold may vary between implementations.
As described above, different Karatsuba equations have been defined for different numbers of terms (e.g., 2, 3, and 5). A canonical Karatsuba decomposition operates on a number whose length is one of the following six forms:
n = 2^k
n = 3·2^k
n = 3^2·2^k
n = 3^3·2^k
n = 3^4·2^k
n = 5·2^k
where n is the length of a number and k is an integer.
To optimize Karatsuba decomposition, a number may be padded with zeros to conform to a larger canonical form. In order to discern which canonical Karatsuba decomposition to use, the work, w, associated with each candidate form can be computed and the form with the smallest work selected.
The values of w may be computed for different values of n. The results may, for example, be used to form a lookup table indicating the amount to pad a given number based on the lowest w value for a given n.
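The sketch below only enumerates the six canonical forms and pads a number up to the smallest canonical length that will hold it; this "smallest adequate length" rule is merely a stand-in, since the work formula w used for the actual selection is not reproduced here, and the function names are illustrative.

```python
# Enumerate canonical Karatsuba lengths (2^k, 3*2^k, 3^2*2^k, 3^3*2^k,
# 3^4*2^k, 5*2^k) up to a limit, and pad a number to a canonical length.
def canonical_lengths(limit):
    lengths = set()
    for c in (1, 3, 9, 27, 81, 5):
        k = 0
        while c << k <= limit:
            lengths.add(c << k)
            k += 1
    return sorted(lengths)

def pad_to_canonical(n):
    # Stand-in rule: smallest canonical length >= n (a work-based lookup
    # table, as described above, would replace this).
    return min(L for L in canonical_lengths(2 * n) if L >= n)

print(pad_to_canonical(1024))   # 1024 = 2^10 is already canonical
print(pad_to_canonical(1025))   # padded up to the next canonical length (1152)
```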
Modular Reduction Using Folding
In addition to multiplication, many cryptography schemes involve modular reduction (e.g., computation of N mod M). To diminish the expense of modular reduction operations, some systems use a technique known as Barrett modular reduction. Essentially, Barrett computes an estimate of a quotient,
q = floor( floor(N/2^m)·μ / 2^(n−m) ) [13]
where m is the width of modulus M and μ is a constant determined by:
μ = floor( 2^n / M ) [14]
where n is the width of number N. The value of N mod M can then be determined by computing N−qM, followed by a final subtraction by M if necessary to ensure the final value is less than M. Contributing to Barrett's efficiency is the ability to access a pre-computed value for μ. That is, the value of μ can be determined based only on the size of N without access to a particular value of N.
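A sketch of a Barrett-style reduction along the lines of equations [13] and [14] follows; the modulus, operand, and width values are illustrative, and the final correction is written as a small loop for safety since the quotient estimate may fall short by a few multiples of M.

```python
# Barrett reduction in the spirit of equations [13]-[14]. The constant mu
# depends only on M and the width n allotted to N, so it can be precomputed.
def barrett_setup(M, n):
    return (1 << n) // M                     # mu = floor(2^n / M)

def barrett_reduce(N, M, mu, n, m):
    q = ((N >> m) * mu) >> (n - m)           # estimate of floor(N / M)
    R = N - q * M
    while R >= M:                            # a final subtraction or two, as needed
        R -= M
    return R

M = 0xC96F_12A3_D5E8_87B1                    # illustrative 64-bit modulus
m, n = M.bit_length(), 128                   # N assumed to fit in n bits
mu = barrett_setup(M, n)
N = 0x1234_5678_9ABC_DEF0_0FED_CBA9_8765_4321
assert barrett_reduce(N, M, mu, n, m) == N % M
```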
Techniques such as Barrett's modular reduction can lessen the expense of a modular reduction. The expense can be reduced further by first “folding” the number N into a smaller number N′ that is congruent to N modulo M. In greater detail, a folding point f divides N into a set of higher-order bits NH (the bits at or above the folding point) and lower-order bits NL such that N = NH·2^f + NL.
Based on the folding point, N′ can be determined as:
N′ = NH·(2^f mod M) + NL [15]
The smaller N′ can then be used to perform a modular reduction, for example, using the classical Barrett technique.
The determination of N′ involves the term 2^f mod M (referred to as M′). The value of 2^f mod M can be pre-computed without regard to a particular N value. Pre-computing this value for various values of M and f speeds real-time computation of N′ by shifting expensive operations to a less time-critical period. The pre-computed values for the various values of M and f can be stored in a table in memory for fast access. The multiplication of NH by (2^f mod M) may be performed using Karatsuba multiplication, for example, as described above.
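A minimal sketch of the folding step of equation [15] follows; the modulus, folding point, and operand below are arbitrary illustrations, and M′ is precomputed as described above.

```python
# Fold N about folding point f using the precomputed constant M' = 2^f mod M.
def fold(N, M, f, M_prime):
    NH, NL = N >> f, N & ((1 << f) - 1)      # split N at the folding point
    return NH * M_prime + NL                 # NH*M' could itself use Karatsuba

M = (1 << 512) - 569                         # illustrative 512-bit modulus
f = 768                                      # folding point of 1.5x the modulus width
M_prime = pow(2, f, M)                       # precomputed 2^f mod M
N = 1234567 << 900                           # an N wider than the folding point
N_prime = fold(N, M, f, M_prime)
assert N_prime % M == N % M                  # congruence is preserved
assert N_prime.bit_length() < N.bit_length() # and N' is a smaller number
```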
To illustrate, consider a modulus M that is 2s bits wide and a number N that is 4s bits wide folded at f = 3s; the most significant s bits of N form NH and the remaining 3s bits form NL. After determination of N′, N′ mod M can be computed using classical Barrett reduction. In this case, the Barrett reduction is computed as:
R = N′ − floor( floor(N′/2^(2s))·μ / 2^s )·M [16]
where μ is determined as floor(2^(3s)/M). Like the value of M′, the value of μ can be pre-computed for a variety of values of s and M. This pre-computation can, again, time-shift expensive operations to periods where real-time operation is not required.
The resultant R may be larger than the modulus M. In this comparatively rare case, a subtraction of R = R − M may be used to ensure R < M.
A single folding operation can significantly improve the efficiency and real-time performance of modular reduction. Folding can also be applied iteratively, with each iteration folding the result of the previous one about a new folding point and further reducing the width of the number before the final Barrett reduction. For example, with two iterations the folding point moves from 2^(1.5m) for the first iteration to 2^(1.25m) for the second. More generally, the folding point for a given iteration i may be determined as 2^((1+2^−i)m).
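The sketch below applies the folding-point schedule 2^((1+2^−i)m) for two iterations; the modulus and starting value are illustrative, and N is assumed to start at roughly twice the modulus width (e.g., as the product of two values already reduced modulo M).

```python
# Iterated folding: the folding point for iteration i is 2^((1 + 2^-i)*m),
# where m is the modulus width (1.5m, then 1.25m, ...).
def iterative_fold(N, M, iterations):
    m = M.bit_length()
    for i in range(1, iterations + 1):
        f = (m * ((1 << i) + 1)) >> i        # (1 + 2^-i)*m in integer arithmetic
        M_prime = pow(2, f, M)               # 2^f mod M, precomputable per (M, f)
        NH, NL = N >> f, N & ((1 << f) - 1)
        N = NH * M_prime + NL                # each fold preserves N mod M
    return N

M = (1 << 512) - 569                         # illustrative 512-bit modulus
N0 = 987654321 << 990                        # about 1020 bits, just under 2*m
N2 = iterative_fold(N0, M, 2)
assert N2 % M == N0 % M
print(N0.bit_length(), "->", N2.bit_length())  # width shrinks toward ~1.25*m
```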
While additional folding iterations can further reduce the size of the number being reduced, each iteration also adds its own computation; beyond some point additional foldings provide diminishing returns, and the number of iterations used may vary between implementations.
Sample Implementation of Modular Exponentiation
The techniques described above can be used to perform a variety of cryptographic operations. For example, the Karatsuba multiplication and folding techniques described above can be combined to perform modular exponentiation.
Again, modular exponentiation involves determining g^e mod M. Performing modular exponentiation is at the heart of a variety of cryptographic algorithms. For example, in RSA, a public key is formed by a public exponent, e-public, and a modulus, M. A private key is formed by a private exponent, e-private, and the modulus M. To encrypt a message (e.g., a packet or packet payload), the following operation is performed:
ciphertext = cleartext^(e-public) mod M [17]
To decrypt a message, the following operation is performed:
cleartext = ciphertext^(e-private) mod M [18]
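A toy illustration of equations [17] and [18] follows, using the classic textbook RSA parameters (p=61, q=53, e=17, d=2753); these values are far too small to be secure and serve only to show the two operations round-tripping.

```python
# Toy RSA round trip per equations [17] and [18]; parameters are illustrative
# and insecure. Real moduli are 1024-4096 bits or more.
p, q = 61, 53
M = p * q                        # modulus: 3233
e_public = 17
e_private = 2753                 # e_public * e_private = 1 mod (p-1)*(q-1)

cleartext = 65                   # message must be smaller than the modulus
ciphertext = pow(cleartext, e_public, M)    # equation [17]
recovered = pow(ciphertext, e_private, M)   # equation [18]
assert recovered == cleartext
```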
One procedure for performing modular exponentiation processes the bits of the exponent e in sequence from left to right. Starting with an initial value of A=1, the procedure squares the value for each “0” bit encountered (i.e., A=A·A). For each “1” bit, the procedure both squares the value and multiplies by g (i.e., A=A·A·g). The end result can be used in a modular reduction operation. For example, to determine 3^1010b mod 5, the procedure operates as follows where g=3, e=“1010”, and M=5: the first “1” bit yields A=1·1·3=3, the “0” bit yields A=3·3=9, the next “1” bit yields A=9·9·3=243, and the final “0” bit yields A=243·243=59049; the result is then 59049 mod 5=4.
Instead of performing the modular reduction at the end, when a very large number may have been accumulated, modular reduction may be interleaved within the multiplication operations, such as after processing every exponent bit or every few exponent bits. For example, to compute 3^1010b mod 5, the procedure may proceed as follows: the first “1” bit yields A=(1·1·3) mod 5=3, the “0” bit yields A=(3·3) mod 5=4, the next “1” bit yields A=(4·4·3) mod 5=3, and the final “0” bit yields A=(3·3) mod 5=4, again giving a result of 4 while keeping the intermediate values small.
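A sketch of the left-to-right square-and-multiply procedure with the reduction interleaved after every exponent bit follows; Python's "%" stands in for the reduction, where an implementation might use the folding/Barrett reduction described above.

```python
# Left-to-right square-and-multiply modular exponentiation with an
# interleaved reduction after each exponent bit.
def mod_exp(g, e, M):
    A = 1
    for bit in bin(e)[2:]:                   # scan exponent bits left to right
        A = (A * A) % M                      # square for every bit
        if bit == '1':
            A = (A * g) % M                  # also multiply by g for a '1' bit
    return A

assert mod_exp(3, 0b1010, 5) == 4            # the 3^1010b mod 5 example above
```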
Regardless of the particular implementation, using the Karatsuba multiplication technique described above for both the squaring and the multiplication by g can significantly speed modular exponentiation. Additionally, using folding, the reduction operations consume significantly fewer processing resources.
Additional computational efficiency can be obtained by storing repeatedly used values. For instance, in the example, the value of g is involved in two different multiplications. In a real-world example with a 2048-bit exponent, the number of multiplications using g will be much larger. To improve the efficiency of Karatsuba multiplication involving g, the different values of gi = (gH(i) + gL(i)) can be stored in a table for repeated use, where i represents the depth of Karatsuba recursion. This caching can save a significant number of cycles that would otherwise redundantly perform the same additions. Caching other frequently used values such as M′ and μ used in folding may also enhance performance if modular reduction occurs multiple times using the same modulus.
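The following minimal sketch caches only the top-level (gH + gL) sum for a fixed multiplicand g that is reused across many multiplications; a fuller implementation would, as described above, extend the table through the recursion depths.

```python
import random

# Build a multiplier specialized to a fixed operand g: the (gH + gL) sum is
# computed once and reused by every subsequent one-level Karatsuba multiply.
def make_fixed_multiplier(g, n):
    s = n // 2
    mask = (1 << s) - 1
    gh, gl = g >> s, g & mask
    g_sum = gh + gl                          # cached once, reused every call
    def multiply(a):
        ah, al = a >> s, a & mask
        p_hi, p_lo = ah * gh, al * gl
        p_mid = (ah + al) * g_sum - p_hi - p_lo
        return (p_hi << (2 * s)) + (p_mid << s) + p_lo
    return multiply

g = random.getrandbits(2048)
mul_by_g = make_fixed_multiplier(g, 2048)
a = random.getrandbits(2048)
assert mul_by_g(a) == a * g
```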
An additional optimization may be used when performing multiplication of unevenly sized numbers, such as multiplication of a 1k-bit number by a 2k-bit number. Such multiplications may occur in determining Barrett's qM value and in determining NH·(2^f mod M). To take advantage of Karatsuba, a 1k×2k multiplication can be broken up into two 1k×1k operations, such as q·mh and q·ml, where mh and ml are the high and low halves of the 2k-bit operand. Since q is used in both operations, the value of (qh+ql) need not be determined twice but may instead be stored for further use.
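A short sketch of this split follows; the operand sizes are illustrative, and each of the two k×k products could itself be computed with Karatsuba, sharing the (qh+ql) sum as noted above.

```python
import random

# Split an uneven 1k x 2k multiplication (e.g., q times a 2k-bit M) into
# two 1k x 1k multiplications and recombine the partial products.
def mul_1k_by_2k(q, M, k=1024):
    mh, ml = M >> k, M & ((1 << k) - 1)      # high and low halves of the 2k-bit operand
    return ((q * mh) << k) + q * ml          # two k x k products

q, M = random.getrandbits(1024), random.getrandbits(2048)
assert mul_1k_by_2k(q, M) == q * M
```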
Again, the above is merely an example and the Karatsuba and folding techniques can be used to perform a wide variety of other cryptographic operations as well as other general purpose mathematical applications.
The techniques can be implemented in a variety of ways and in a variety of systems. For example, the techniques may be implemented in dedicated digital or analog hardware (e.g., by expressing the techniques described above in a hardware description language such as Verilog(tm)), in firmware, and/or in an ASIC (Application Specific Integrated Circuit) or Programmable Gate Array (PGA). The techniques may also be implemented as computer programs, disposed on a computer readable medium, for processor execution. For example, the processor may be a general purpose processor.
Other embodiments are within the scope of the following claims.