The invention relates to performing residue number system calculations and, in particular, to reduced-complexity algorithms and hardware designs for performing such calculations. This invention is in the field of Residue Number Systems (RNS) and their applications.
The RNS uses a set ℳ of pairwise co-prime positive integers called the "moduli":
ℳ = {m1, m2, . . . , mr, . . . , mK} where mr > 1 ∀ r ∈ [1, K] and gcd(mi, mj) = 1 for i ≠ j (B-1)
Note that |ℳ| = the number of moduli = K (also referred to as the number of "channels" in the literature).
We use the term "component-modulus" to refer to any single individual modulus (e.g., mr) in the set ℳ.
For the sake of convenience, we also impose an additional ordering constraint: mi<mj if i<j
Total-modulus M ≜ m1 × m2 × . . . × mK. Typically
M >> K (B-2)
Any integer Z ∈ [0, M−1] can be uniquely represented by the ordered tuple (or vector) of residues:
∀ Z ∈ [0, M−1]; Z ≡ Z̄ = [z1, z2, . . . , zK] where zr = (Z mod mr), r = 1, . . . , K (B-3)
Conversion from residues back to an integer is done using the "Chinese Remainder Theorem" as follows:
Z = (ZT mod M) (B-4) where
ZT ≜ Σ_{r=1}^{K} pr · Mr (B-5)
pr = ((zr · wr) mod mr), r = 1, . . . , K where (B-6)
outer-weights Mr ≜ M/mr and
inner-weights wr ≜ (Mr^(−1) mod mr)
are constants for a given ℳ (B-7)
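The forward conversion (B-3) and the brute-force CRT reconstruction (B-4)-(B-7) can be sketched in a few lines; this is a minimal illustration with an arbitrarily chosen small moduli set and hypothetical helper names, not the reduced-complexity method of this invention:

```python
from math import prod

# Illustrative brute-force CRT per (B-4)-(B-7); the moduli set and the
# helper names are arbitrary choices made for this sketch.
moduli = [2, 3, 5, 7]            # pairwise co-prime component-moduli
M = prod(moduli)                 # total-modulus M = 210

def to_residues(Z):
    """Forward conversion (B-3): integer -> residue tuple."""
    return [Z % m for m in moduli]

def crt_reconstruct(residues):
    """Evaluate Z_T fully, then reduce mod M to retrieve Z (slow for long words)."""
    outer = [M // m for m in moduli]                            # M_r = M / m_r
    inner = [pow(Mr, -1, m) for Mr, m in zip(outer, moduli)]    # w_r = M_r^-1 mod m_r
    p = [(z * w) % m for z, w, m in zip(residues, inner, moduli)]  # (B-6)
    Z_T = sum(pr * Mr for pr, Mr in zip(p, outer))              # (B-5)
    return Z_T % M                                              # (B-4)

assert crt_reconstruct(to_residues(123)) == 123
```

The three-argument `pow` with exponent −1 (Python 3.8+) supplies the modular inverses that form the inner-weights.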
The Residue Number Systems (abbreviated "RNS") have been around for a while [4]. The underlying Residue Domain representation (or simply Residue Representation, abbreviated "RR") has some unique attributes (explained below) that make it attractive for signal processing. It is therefore not surprising that the early work in this area was contributed by the signal-processing community.
Thereafter, from the late 1970s through the mid 1980s, the field of cryptology was revolutionized by the invention of the three most fundamental and widely used cryptology algorithms, viz., Diffie-Hellman, RSA, and Elliptic-curves. In the beginning, the aforementioned cryptology algorithms were not easy to implement in hardware. However, as semiconductor device sizes kept shrinking, the hardware that could be integrated on a single chip kept becoming larger as well as faster. As a result, today (in 2011) it is possible to easily realize the cryptographic (abbreviated "crypto") algorithms in hardware. The word-lengths used in crypto methods are substantially larger than the word-lengths required by other applications; typical crypto word-lengths today are at least 256 bits or higher. It turns out that the same attributes of the Residue Number Systems that are attractive for signal processing are also beneficial when implementing cryptographic algorithms at long word-lengths. Consequently, the cryptology and computer-arithmetic communities also started researching RNS. This coincidental convergence of goals (to research and improve RNS, now shared by the signal-processing, cryptology, and computational/computer-arithmetic communities) has in turn led to a resurgence of interest as well as activity in the RNS [5].
§1.1 Advantages of the Residue Domain Representation
The main advantage of the Residue Number (RN) system is that in the Residue-Domain (RD), the operations {±, ×, ≟} can be implemented on a per-component/channel basis, wherein the processing required in any single channel is completely independent of the processing required in any other channel. In other words, these operations can be implemented fully in parallel on a per-channel basis as follows:
Z̄ = X̄ ⊙ Ȳ ⟺ [zr = (xr ⊙ yr) mod mr], r = 1, . . . , K where ⊙ ∈ {±, ×, ≟} (1)
Note that equality of two numbers can be checked by comparing their residues, which can be done in parallel in all channels. In other words, in the RD, the most fundamental operations, viz., addition/subtraction, equality-check, and multiplication, can all be performed in parallel in each channel independently of any other channel(s). This independence of channels implies that each of the above operations can be implemented with O(n) computing effort (operations/steps), where
┌lg M┐ = n (2)
i.e., n is the number of bits required to represent the total-modulus M, or the overall range of the RNS.
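A minimal sketch of Eqn (1), with an arbitrarily chosen moduli set: each channel's result depends only on its own residues and modulus, which is precisely what permits the fully parallel realization described above.

```python
# Minimal sketch of Eqn (1): +, -, x, and equality-check performed
# independently per channel; the moduli set is an illustrative choice.
moduli = [3, 5, 7, 11]

def rns(Z):
    return [Z % m for m in moduli]

def channel_op(x_bar, y_bar, op):
    # Each z_r depends only on x_r, y_r, and m_r, so the K channels
    # could run fully in parallel in hardware.
    return [op(xr, yr) % m for xr, yr, m in zip(x_bar, y_bar, moduli)]

X, Y = 47, 15
add = channel_op(rns(X), rns(Y), lambda a, b: a + b)
mul = channel_op(rns(X), rns(Y), lambda a, b: a * b)
assert add == rns(X + Y)          # 62 < M = 1155, no wrap-around
assert mul == rns(X * Y)          # 705 < 1155
assert rns(X) == rns(X)           # equality-check = residue-wise compare
```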
In contrast, in the regular integer domain, a multiplication is a convolution of the digit-strings representing the two numbers being multiplied. A convolution is substantially more expensive than add/subtract operations. (Addition/subtraction fundamentally requires O(n) operations and can be implemented in O(lg n) delay using the "carry-look-ahead" method and its variants. Naive paper-and-pencil multiplication requires O(n²) operations. Asymptotically fastest multiply methods use transforms, such as the floating-point FFT (Fast Fourier Transform) or number-theoretic transforms, to convert the convolution in the original domain into a point-wise product in the transform domain, so that the number of operations required turns out to be ≈ O(n lg n); for further details, please refer to [2].)
Thus, performing the multiplications in the RD is substantially faster as well as cheaper (in terms of interconnect length and therefore h/w area as well as power consumption). Consequently, wherever multiplication is heavily used, adopting the RR can lead to smaller and faster realizations that also consume less power. For example:
(i) Filtering is heavily used in signal processing. Most of the effort in filtering is in the repeated multiply and add operations. It is therefore not surprising that the first practical use of the RNS was in synthesizing fast filters for signal processing.
(ii) Multiplication (note that squaring is a special case of multiplication) also gets used heavily in long-wordlength cryptology algorithms. Therefore, RD implementations of cryptological algorithms are also smaller, faster, and consume lower power.
§1.2 Disadvantages of the Residue Domain Representation
Together with the advantages also come some disadvantages of the residue domain: compared with the "easy operations" above, several fundamental operations are substantially more difficult to realize in the RD [4, 6-8]:
Reconstruction in the regular format by directly using the CRT turns out to be a slow operation. Note that a straightforward/brute-force application of the CRT entails directly implementing Equation (B-4). Accordingly, ZT is fully evaluated first, and then a division by the modulus M is carried out to retrieve the remainder (Z). For long word-lengths (e.g., in cryptography applications), the final division by M is unacceptably slow and inefficient.
Re-construction by evaluating the mixed-radix representation takes advantage of the "mixed-radix" representation associated with every residue-domain representation [6], wherein a number is represented by an ordered set of digits. The value of the number is a weighted sum where the weights are positional (just like the weights of a normal single-radix decimal or binary representation). As a result, a digit-by-digit comparison starting with the most-significant digit is feasible. However, to the best of our knowledge, it takes O(K²) sequential operations (albeit on small operands of about the same size as the component-moduli mi) in the residue-domain. The inherently sequential nature of this method makes it slow.
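For concreteness, the sequential conversion to mixed-radix digits can be sketched as follows (a standard Garner-style formulation under my own naming, not necessarily the exact method of [6]); the nested loop makes the O(K²) small-operand steps and the inherently sequential data dependence visible:

```python
# Hedged sketch of mixed-radix conversion: digits are obtained
# sequentially, channel by channel, which is what makes this method slow.
moduli = [3, 5, 7, 11]           # illustrative moduli set

def mixed_radix_digits(residues):
    """Return digits [d1..dK] with Z = d1 + d2*m1 + d3*m1*m2 + ..."""
    digits = list(residues)
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            # subtract digit i, then divide by m_i (modular inverse) in channel j
            digits[j] = ((digits[j] - digits[i])
                         * pow(moduli[i], -1, moduli[j])) % moduli[j]
    return digits

def from_mixed_radix(digits):
    value, weight = 0, 1
    for d, m in zip(digits, moduli):
        value += d * weight
        weight *= m               # positional weights m1, m1*m2, ...
    return value

Z = 1000
assert from_mixed_radix(mixed_radix_digits([Z % m for m in moduli])) == Z
```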
Moreover, at a first glance, it appears that for magnitude-comparison and division, the operands need to be fully reconstructed in the form of a unique digit string representing an integer either in the regular or the mixed-radix-format.
§1.3 Related Prior Art
§1.3.1 Base-Extension or Change
In a sign-magnitude representation or a radix-complement (such as the two's complement) representation, a 32-bit integer can be easily extended into a 64-bit value. The corresponding operation in the RNS is considerably more involved. Related prior work in this area falls under two categories, each of which is briefly explained next.
§1.3.1.A Deploying a Redundant Modulus
Shenoy and Kumaresan [9] start by re-expressing the CRT in a slightly different form:
Z = ZT − α·M where 0 ≤ α ≤ K−1
In the above equation, α = C is the only unknown. It is clear that knowledge of (Z mod me), i.e., the residue/remainder of Z w.r.t. one extra/redundant modulus me, is sufficient to determine the value of C. They assume the availability of such an extra residue, which lets them evaluate α = C.
This base-extension method has been widely adopted in the literature. For example, the algorithms for modular multiplication developed by Bajard et al. [10, 11] perform their computations in two independent RNS systems and change base from one to the other using the Shenoy-Kumaresan method. This is done so as to avoid a full reconstruction at intermediate steps. As a result, they end up requiring a base-conversion in each step and, consequently, their algorithm requires O(K) units of delay when O(K) dedicated processing elements are available (where K = the total number of moduli or channels in the RNS system).
§1.3.1.B Iterative Determination of C
Another base-extension algorithm related to our work is described in [12-15]. They show a method to evaluate an approximate estimate of C in a recursive, bit-by-bit (i.e., one bit-at-a-time) manner and then derive conditions under which the approximation is error-free. This method is at the heart of their base-extension algorithm.
The recursive structure of this method makes it relatively slow and cumbersome.
The idea of using the "fractional-representation" of the CRT has been around for a while. For instance, Vu [16, 17] proposed using a fractional interpretation of the CRT in the mid 1980s. However, he ends up using a very high (actually the FULL) precision: ┌lg(K·M)┐ bits (see equations (13) and (14) in reference [17]).
§1.3.2 Sign Detection and Magnitude Comparison
In the RNS, the total range is divided into positive and negative intervals of (to the extent possible) the same length. For example, if the set of RNS moduli is ℳ = {2, 3, 5, 7} then
M = 210 and the overall range of the RNS is [−105, +104], where,
the numbers 1 through 104 represent +ve numbers, and
the numbers 105 through 209 represent −ve numbers from −105 to −1, respectively.
In general, every negative value −α in the representable negative range satisfies the relation
(−α) mod M ≡ M − α (3)
Sign detection in the RNS is not straightforward, rather, it has been known to be relatively difficult to realize in the Residue Domain.
Likewise, comparison of the magnitudes of two numbers represented as residue tuples is also not straightforward (whether or not negative numbers are included in the representation). For instance, with the same simple moduli set ℳ = {2, 3, 5, 7} above, note that 18 ≡ (0, 0, 3, 4) and 99 ≡ (1, 0, 4, 1), while 79 ≡ (1, 1, 4, 2) and the negative number −101 ≡ (1, 1, 4, 4).
In other words, the tuples of remainders corresponding to +ve and −ve numbers cannot be easily distinguished.
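The ambiguity can be reproduced in a few lines (an illustrative sketch; note that Python's % operator already reduces a negative integer into [0, m) per channel, exactly as in relation (3)):

```python
# Sketch: with the moduli set {2, 3, 5, 7} (M = 210), the residue tuples
# of positive and negative values are indistinguishable by inspection.
moduli = [2, 3, 5, 7]

def rns(Z):
    # Python's % maps a negative Z to (Z mod m) in [0, m), per relation (3)
    return [Z % m for m in moduli]

print(rns(79))      # -> [1, 1, 4, 2]  (positive)
print(rns(-101))    # -> [1, 1, 4, 4]  (negative: -101 mod 210 = 109)
# Nothing in either tuple reveals which value is negative.
```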
Prior Work on Sign Detection and Magnitude Comparison
The sign-detection operation has been known to be relatively difficult to realize in the RNS for a while (early works date back to the 1960s, for example [4, 18]). Recent works related to sign detection in the RNS have tended to focus on using moduli having special forms [19, 20], which limits their applicability. The idea of "core-functions" was introduced in [21] in the context of coding theory. RNS sign-detection algorithms based on the idea of using "core-functions" have been published [22-24]. However, these methods are unnecessarily complicated and appear to be useful only with moduli having special properties [24], limiting their applicability.
Lu and Chiang [25, 26] introduced a method to use the least-significant bit (lsb) to keep track of the sign. However, tracking the lsb of arbitrary (potentially all possible) numbers is not an easy task. In their quest to keep track of the lsb, Lu and Chiang first proposed an exhaustive method in their first publication [25], which turns out to be infeasible for all but small toy examples because the size of their look-up table was the same as the total range M. In the follow-up publication, they abandoned the exhaustive look-up approach [26] and ended up unnecessarily using the full precision, just as Vu does in his work [17].
§1.3.3 Scaling or Division by a Constant
In general, “scaling” includes both multiplication as well as division by a fixed constant, (viz., the scaling factor Sf). Early versions of signal processors often deployed a fixed-point format which necessitated scaling to cover a wider dynamic range of input values. Consequently, scaling has been heavily used in signal processing. It is therefore not surprising that the early work in realizing the scaling operation in the residue-domain comes from the signal-processing community [27, 28].
Shenoy and Kumaresan [29, 30] introduced a scaling method that works only if the constant divisor has the special form
D = md1 · md2 · . . . · mds wherein s < K and
{md1, md2, . . . , mds} ⊂ ℳ = the set of RNS moduli (4)
i.e., the divisor is a factor of the overall modulus M. This restriction renders their method inapplicable in most cryptographic algorithms, because the modulus (a.k.a. the constant divisor) N is either a large prime number (as in elliptic-curve methods) or a product of two large primes (as in RSA). In either case, it does not share a factor with the total modulus M.
All methods and apparatuses for scaling in the RNS that have been published thus far [23, 31-33], including more recent ones [33-35], are either limited to special moduli or are more involved than necessary, because they all attempt to estimate the remainder first, subtract it off, and only then arrive at the quotient, which is the quantity of interest in scaling. Consequently, none of these methods or apparatuses is even remotely similar to the new algorithm that I have invented for RNS division by a constant.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the claimed subject matter. This summary is not an extensive overview, and is not intended to identify key or critical elements, or to delineate any scope of the disclosure or claimed subject matter. The sole purpose of this summary is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In one exemplary aspect, a method for performing reconstruction using a residue number system is disclosed. A set of moduli is selected. A reconstruction coefficient is estimated based on the selected set of moduli. A reconstruction operation is performed using the reconstruction coefficient.
In another exemplary aspect, an apparatus for performing reconstruction using a residue number system includes means for selecting a set of moduli, means for estimating a reconstruction coefficient based on the selected set of moduli, and means for performing a reconstruction operation using the reconstruction coefficient.
In yet another exemplary aspect, a computer program product is disclosed, comprising a non-volatile, computer-readable medium storing computer-executable instructions for performing reconstruction using a residue number system, the instructions comprising code for selecting a set of moduli, estimating a reconstruction coefficient based on the selected set of moduli, and performing a reconstruction operation using the reconstruction coefficient.
In yet another exemplary aspect, a method for performing division using a residue number system comprises selecting a set of moduli, determining a reconstruction coefficient, and determining a quotient using an exhaustive pre-computation and a look-up strategy that covers all possible inputs.
In yet another exemplary aspect, a method of computing a modular exponentiation in a residue number system includes iterating, without converting to a regular integer representation, by performing modular multiplications and modular squaring and computing the modular exponentiation as a result of the iterations.
In this section, first, I explain my moduli-selection method. After that, each new algorithm is illustrated in detail. Since the "RPPR" algorithm is used in all the others, it is explained in more detail than the other algorithms.
Notations-1 Math Functions, Symbols
The symbol ≡ means "equivalent-to", whereas the symbol ≜ means "is defined as".
LHS ≜ the Left-Hand Side of a relation; RHS ≜ the Right-Hand Side
a mod b ≜ the remainder when integer a is divided by integer b
ulp ≡ weight or value of a unit or a "1" in the least-significant place
gcd ≜ greatest common divisor (also known as highest common factor or hcf)
lg ≡ log-to-base-2, ln ≡ log-to-base-e, log ≡ log-to-base-10
floor function: └x┘ = the largest integer ≤ x ≡ Round to the nearest integer toward −∞
ceiling function: ┌x┐ = the smallest integer ≥ x ≡ Round to the nearest integer toward +∞
truncation: trunc(x) = only the integer-part of x ≡ Round toward 0
O( ) ≡ Order-of or the big-O function as defined in the algorithms literature (for example, see [2]). |•| ≡ cardinality if the argument is a set; ≡ absolute value if the argument is an integer.
"RR" is an abbreviation for "Residue Representation", "RD" is an abbreviation for "Residue Domain"; the "integer-domain" refers to the set of all integers ℤ.
≟ denotes the "equality-check" operation.
Notations-2 Algorithm Pseudo-Code
The pseudo-code syntax closely resembles MAPLE [3] syntax.
Lines beginning with # as well as everything between /* and */ are comments.
All entities/variables with a bar on top are vectors/ordered-tuples (e.g., Z̄ = [z1, z2, . . . , zK]).
Operations that can be implemented in parallel in all channels are shown inside a square/rectangular box, for example [zr = (xr ⊙ yr) mod mr], r = 1, . . . , K
Definition 1: We define “Reconstruction-Remainders” to be the component-wise values p1, p2, . . . , pK defined by relations (B-6) above.
Note that Equation (B-4) can be re-written as ZT = Z + Q·M (B-8.1) or equivalently as Z = ZT − Q·M (B-8.2) where,
Q = the quotient when ZT is divided by M, and
0 ≤ Q ≤ K−1 (B-9)
Definition 2: We define the coefficient of M (which is denoted by the variable Q) in Equations (B-8*) to be the "Reconstruction-Coefficient" and henceforth denote it by the dedicated symbol "C".
Definition 3: Full reconstruction of the integer corresponding to a residue-tuple refers to the process of retrieving the entire unique digit-string representing that integer in a non-redundant, weighted-positional format (such as two's complement or decimal or the mixed-radix format).
D-4: Full-precision ≡ dT base-b digits; where dT = ┌logb(K·M)┐ ≈ ┌logb M┐ ≜ nb; since M >> K (B-10)
If the base b = 2 then nb = n2 is the bit-length required to represent the total modulus or the overall range of the RNS; it is therefore also denoted simply by the variable n without any subscript.
Definition 5: Any method/algorithm that simply determines the value of C without attempting to fully reconstruct Z is referred to as a "Partial-Reconstruction" (PR).
Evaluating C yields an exact equality (Eqn. (B-8.2)) for the target integer Z, without any "mod" (i.e., remaindering) operations in it (unlike the statement of the CRT, Eqn. (B-4)). For most operations (especially division by a constant), such an exact equality for Z suffices, i.e., there is no need to fully reconstruct Z. This is why Partial-Reconstruction (i.e., evaluating C) is an important enabling step underlying most other operations.
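The role of the reconstruction coefficient can be checked exhaustively for a toy moduli set. This is an illustrative sketch (moduli chosen arbitrarily); the point is that C in (B-8.2) is a small integer in [0, K−1], so determining it is far cheaper than a full reconstruction:

```python
from math import prod

# Illustrative check of Definition 2 and (B-9): the reconstruction
# coefficient C = (Z_T - Z)/M always lies in [0, K-1].
moduli = [3, 5, 7, 11]
K, M = len(moduli), prod(moduli)
outer = [M // m for m in moduli]                              # M_r = M/m_r
inner = [pow(Mr, -1, m) for Mr, m in zip(outer, moduli)]      # w_r

for Z in range(M):
    p = [((Z % m) * w) % m for m, w in zip(moduli, inner)]    # (B-6)
    Z_T = sum(pr * Mr for pr, Mr in zip(p, outer))            # (B-5)
    C = (Z_T - Z) // M            # exact: Z = Z_T - C*M  (B-8.2)
    assert Z_T % M == Z and 0 <= C <= K - 1
```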
§4.1 Moduli Selection
Note that a modulus of value mr needs a table with (mr − 1) entries to cover all possible values of the reconstruction-remainder pr w.r.t. mr (excluding the value 0). Therefore, the total number of memory locations required by all moduli is Σ_{r=1}^{K} (mr − 1).
Thus, in order to minimize the memory needed, each component modulus should be as small as it can be.
Therefore, in order to cover a range [0, R], we select the smallest K consecutive prime numbers, starting with either 2 or 3, such that their product exceeds R: ℳ = {m1, m2, . . . , mK} = {2, 3, . . . , the K-th prime number}, where m1 · m2 · . . . · mK > R.
This selection leads to the following two analytically tractable approximations:
The notation defines mK to be the K-th prime number. In other words, K is the index of the prime number whose value is mK. Consequently, K and mK can be related to each other via the well-known "prime-counting" function π(x) [36], so that
K = π(mK) ≈ mK/ln(mK) (8)
The overall modulus M becomes the well-known "primorial" function [37], which for any positive integer N is denoted as "N#" and defined as the product of all prime numbers ≤ N.
(Note that the definition as well as the notation for the primorial is analogous to the well-known "factorial" function (N!).) The primorial function satisfies the well-known identities [37, 38]
2^N < (N#) < 4^N = 2^(2N) and (10)
(N#) ≈ O(e^N) for large N (11)
As a result, to be able to represent n-bit numbers (i.e., the range [0, 2^n − 1]) in the residue domain using all available prime numbers (starting with 2), the total modulus satisfies
M ≈ exp(mK) > 2^n and therefore (12)
mK ≈ ln(M) = ln(2^n) = n·ln 2 (13)
Substituting this value of mK in Eqn. (8), K, the number of moduli required to cover all n-bit-long numbers, can be approximated as:
K ≈ mK/ln(mK) = (n·ln 2)/ln(n·ln 2) (14)
These analytic expressions are extremely important because they imply:
A.1 K < mK << M (which follows from relations (13) and (14) above). Moreover, both (15)
A.2 the maximum modulus mK as well as the number of moduli K grow logarithmically w.r.t. M, (16)
i.e., linearly w.r.t. the wordlength n (since n = ┌lg M┐).
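The selection rule and the attributes A.1/A.2 can be checked numerically; in this sketch the helper names (`is_prime`, `smallest_prime_moduli`) are my own: consecutive primes are taken until their product exceeds the target range, mK tracks n·ln 2, and K grows even more slowly.

```python
from math import prod, log

# Illustrative check of the selection rule and attributes A.1/A.2.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def smallest_prime_moduli(R):
    """Take consecutive primes 2, 3, 5, ... until their product exceeds R."""
    moduli, candidate = [], 2
    while prod(moduli) <= R:          # prod([]) == 1, so the loop always starts
        if is_prime(candidate):
            moduli.append(candidate)
        candidate += 1
    return moduli

for n in (32, 64, 128, 256):
    m = smallest_prime_moduli(2**n - 1)
    # K and m_K stay tiny compared with M = 2^n; m_K tracks n*ln(2)
    print(n, len(m), m[-1], round(n * log(2), 1))
```

For a 32-bit range this reproduces the K = 10 set {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}, and for 256 bits the K = 44 set ending in 193 used in the examples below.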
§4.1.1 Moduli Selection Enables an Exhaustive Look-Up Strategy That Covers All Possible Inputs
The attributes A.1 and A.2 make it possible to exhaustively deploy pre-computation and lookup because they guarantee that the total amount of memory required grows as a low degree polynomial of the wordlength n.
In other words, the main novelty in my method of moduli selection, and its real significance, is the fact that I leverage the selection to enable an exhaustive pre-computation and look-up strategy that covers all possible input cases. This exhaustive pre-computation and look-up in turn makes my algorithms extremely simple, efficient, and therefore ultrafast: I deploy the maximum amount of pre-computation possible and perform as much of the task ahead of time as possible, so that there is not much left to be done dynamically at run-time. (A perfect example of this is the new "Quotient First Scaling" algorithm for RNS division by a constant divisor, which is explained in detail in Section §4.5 below.)
In other words, the “minimization” of the total number of look-up table entries is the best possible scenario, but it is not necessary to obtain the major benefits that are illustrated for the first time in this invention. There is a lot more flexibility in selecting the moduli as long as they do not make it infeasible to deploy the exhaustive precomputation strategy.
Consider a concrete example: Suppose the claims section says “select moduli so as to minimize the total amount of look-up table memory required.”
If the desired range is all 32-bit numbers, then the set of moduli ={2,3,5,7, 11,13, 17,19, 23,29} minimizes the total number of look-up table entries required.
Now, one can replace any component modulus from the above set (for example, the modulus 29) with another prime number (such as 31, 37, or even 101). The resulting moduli set does not minimize the total number of look-up table entries required, but it is sufficiently close and would not make much of a difference in the ability to deploy the exhaustive pre-computation strategy. In the strict sense, however, the modified moduli set does not satisfy the "minimization" criterion, and this fact might be used to wiggle around having to acknowledge the use of the intellectual property claimed by this patent.
We would therefore like to clarify that the spirit of this part of the invention (i.e., the moduli selection method) can be better captured by the following description:
Select the set of moduli ℳ so as to simultaneously bound both
(i) mK = the maximum value in the set of moduli ℳ by a low-degree polynomial in the wordlength n, as well as
(ii) K = the total number of moduli in the set = |ℳ| (also known as the number of RNS channels) by another low-degree polynomial in the wordlength n
(both polynomials could be identical, which is a special case). Following usual practices, we consider any polynomial of degree ≤ 16 to be a low-degree polynomial.
In closing, we would like to point out some additional benefits of our moduli selection.
§4.2 The Reduced Precision Partial Reconstruction (“RPPR”) Algorithm
This is a fundamental algorithm that underlies all other algorithms to follow. To speed up the Partial-Reconstruction, we combine the information contained in both the integer as well as the fractional domains. Dividing Eqn. (B-8.1) by M, we express the CRT in the form:
ZT/M = Σ_{r=1}^{K} (pr/mr) = C + Z/M; and since 0 ≤ Z/M < 1, C = └ Σ_{r=1}^{K} fr ┘ where fr ≜ pr/mr (18)
Relation (18) states that C can be approximately estimated as the integer part of a sum of at most K proper fractions fr, r=1, . . . , K. (proper fractions because the numerator pr is a remainder w.r.t. mr; and therefore it is strictly less than mr).
To speed up such an estimation of C, we leverage pre-computation and look-up: for each modulus mr, we pre-calculate the value of each of the (mr − 1) fractions, i.e., all possible fractions that can occur, and store them in the look-up table for that modulus (denoted by Tmr).
(If pr = 0, then the table entry is 0, which need not be explicitly stored.)
The important point is that the look-up table for each modulus mr can be accessed independently of (and therefore in parallel with) the look-up table for any other modulus ms where r ≠ s.
The fractional values (obtained from the tables) are then added up as illustrated in
§4.2.1 Derivation of the Algorithm and Novel Aspects Therein
① First, note that we need to estimate the integer part of a sum of fractions, i.e., we need to be able to accurately evaluate the most-significant digits/portion of the sum as illustrated in
In other words, using the rational-domain interpretation allows us to focus on values that represent the “most-significant” bits/digits of the target and therefore approximation methods can be invoked.
② The implication is that the precision of the individual fractional values that get added need not be very high. All that is required is that the fractions
be calculated to enough precision so that when they are all added together, the error is small enough so as not to reach up-to and affect the least significant digit of the integer part (to the extent possible).
Let the radix/base of the number representation be b and let wf be the number of fractional (radix-b) digits required. Then, for each fraction fi we generate an upper and a lower bound as follows:
fi = 0.d1d2 . . . dwf dwf+1 . . . (21)
Truncation of fi to wf digits yields an under-estimate: f̂i ≤ fi (23)
and 0 ≤ (fi − f̂i) < ulp (24)
However, a ceiling or rounding-toward-+∞ to retain wf fractional digits adds a ulp to the least-significant digit (lsd), yielding an over-estimate: f̂i + ulp ≥ fi (25)
To understand the upper limit in relation (29) above, note that each non-zero pi makes the corresponding over-estimate higher than the under-estimate by a ulp, as per Eqns (21), (23) and (26).
Let Ŝlow ≜ Σ_{i=1}^{K} f̂i and Îlow ≜
└Ŝlow┘ = integer part of Ŝlow (31)
Ŝhigh ≜ Ŝlow + nz·ulp and Îhigh ≜
└Ŝhigh┘ = integer part of Ŝhigh (32)
(where nz is the number of non-zero pi values).
Taking the "floor" of each expression in the inequalities in relations (29) above, substituting the floors from Eqns (31) and (32), and using the identity └x┘ ≤ x < └x┘ + 1, it follows that Îlow ≤ C ≤ Îhigh. Whenever Îlow = Îhigh, the value of C is determined exactly,
since both upper and lower bounds converge to the same value. In practice (numerical simulations), this case is encountered in an overwhelmingly large fraction of numerical examples. Moreover,
Îhigh ≤ Îlow + 1 (39)
In other words, even in the uncommon/worst cases, wherein Îlow ≠ Îhigh, relation (39) demonstrates that the estimate of C can be quickly narrowed down to a pair of consecutive integers.
③ It is intuitively clear that further "disambiguation" between these choices needs at least one bit of extra information. This information is obtained from the value (Z mod me), where me is the extra modulus. For efficiency, me should be as small as possible. Accordingly, our method leads to only two scenarios:
[a] if M is odd, then me = 2 is sufficient for disambiguation;
[b] otherwise, if 2 is included in the set of moduli ℳ, then me = 4 is sufficient for disambiguation;
me ∈ {2,4} (this is analytically proved in [39]) (40)
Note that when ℳ includes "2" as a modulus, the residue tuple already contains the value (z1 = Z mod 2), i.e., the least-significant bit of the binary representation of Z. The value (Z mod 4) therefore conveys only one extra bit of information beyond what the residue tuple conveys.
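Steps ①-③ can be combined into a compact sketch. Hedged: the fixed-point width `wF`, the table layout, and the function names here are my own illustrative choices (and `wF = 8` is larger than strictly necessary); the fractions are truncated, and the two candidate values are disambiguated with the extra residue:

```python
from math import prod

# Hedged sketch of the RPPR estimate plus the disambiguation step.
moduli = [3, 5, 7, 11]
K, M = len(moduli), prod(moduli)          # M = 1155 is odd, so case [a]: m_e = 2
m_e = 2
outer = [M // m for m in moduli]
inner = [pow(Mr, -1, m) for Mr, m in zip(outer, moduli)]
wF = 8                                    # fractional bits; TRUNCATION rounding
tables = [{p: (p << wF) // m for p in range(m)} for m in moduli]  # f-hat per channel

def rppr(residues, z_e):
    """Return the reconstruction coefficient C from residues and (Z mod m_e)."""
    p = [(z * w) % m for z, w, m in zip(residues, inner, moduli)]
    S_low = sum(tables[r][p[r]] for r in range(K))      # parallel look-ups
    nz = sum(1 for pr in p if pr)                       # non-zero remainders
    I_low, I_high = S_low >> wF, (S_low + nz) >> wF     # floors of S_low, S_high
    if I_low == I_high:
        return I_low                                    # bounds converge: exact
    # Narrowed to {I_low, I_low + 1}: disambiguate with the extra residue.
    # Only (Z_T mod m_e) is needed, never the full Z_T.
    ZT_mod = sum((pr % m_e) * (Mr % m_e) for pr, Mr in zip(p, outer)) % m_e
    return next(C for C in (I_low, I_low + 1)
                if (ZT_mod - C * M) % m_e == z_e)       # Z = Z_T - C*M  (B-8.2)

# Exhaustive check over the whole range of this toy RNS.
for Z in range(M):
    res = [Z % m for m in moduli]
    p = [(z * w) % m for z, w, m in zip(res, inner, moduli)]
    C_true = (sum(pr * Mr for pr, Mr in zip(p, outer)) - Z) // M
    assert rppr(res, Z % m_e) == C_true
```

Since M is odd here, the two candidates C and C+1 always differ modulo 2, so exactly one matches the extra residue.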
④ It is reasonable to assume that for primary/external inputs the extra-info is available. The exhaustive pre-computations can also assume that the extra-info is available. Starting with these, we generate the extra bit of information (either explicitly or implicitly) for every intermediate value we calculate/encounter. This is done in a separate dedicated channel. Let
W = *(X) where * is a unary operation. Then, (41)
(W mod me) = [*(X mod me)] mod me for * ∈ {left-shift, power} (42)
If the operation is a right shift, then finding the remainder of the shifted value w.r.t. me is slightly more involved, but it can be evaluated using a method identical to "Quotient_First_Scaling", i.e., "Divide_by_Constant", which is explained in detail in Section §4.5 below.
Likewise, let
Z = X ⊙ Y where ⊙ is a binary operation. Then, (43)
(Z mod me) = [(X mod me) ⊙ (Y mod me)] mod me for ⊙ ∈ {±, ×} (44)
Finally, since division is fundamentally a sequence of shift and add/subtract operations, as long as we keep track of the remainder of every intermediate value w.r.t. me, we can also derive the values of (Quotient mod me) and (Remainder mod me). Thus all the basic arithmetic operations are covered.
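A minimal sketch of relations (41)-(44), showing that the extra-information channel needs only me-sized arithmetic (the names here are illustrative):

```python
# Sketch of Eqns (41)-(44): the residue w.r.t. the extra modulus m_e is
# maintained in its own channel without ever reconstructing the operands.
m_e = 4

def track_binary(x_e, y_e, op):
    """(X op Y) mod m_e from only (X mod m_e) and (Y mod m_e); op in {+, -, *}."""
    return op(x_e, y_e) % m_e

X, Y = 12345, 6789
for op in (lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b):
    assert track_binary(X % m_e, Y % m_e, op) == op(X, Y) % m_e  # Eqn (44)

# Unary left-shift (Eqn 42): (X << k) mod m_e from (X mod m_e) alone.
k = 5
assert ((X % m_e) << k) % m_e == (X << k) % m_e
```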
§4.2.2 Analytical Results
Result 1 Pre-conditions: Let the radix of the original (non-redundant, weighted and positional, i.e., usual) number representation be denoted by the symbol “b” (note that b=10 yields the decimal representation, b=2 gives the binary representation). Suppose integer Z=[z1, z2, . . . , zK] is being partially re-constructed and the extra-bit-of-information, i.e., the value of (Z mod me) is also available. Let
and pi values are the reconstruction-remainders defined in Equation (B-6)
Result 1: In order to narrow the estimate of the Reconstruction Coefficient C down to two successive integers, viz., Î or (Î+1), it is sufficient to carry out the summation of the fractions (whose values can be obtained from the look-up tables) in a fixed-point format with no more than a total of wT radix-b digits, wherein
wT = wI + wF where (52)
wI = Number of digits allocated to the Integer part, and (53)
wF = Number of digits allocated to hold the Fractional part (54)
where the precisions (i.e., the digit lengths) of the integer and fractional parts satisfy the conditions:
R1.1 wI = ┌logb K┐ (55)
R1.2 wF = ┌logb (K·Δuuzf)┐ where, K = number of moduli = |ℳ|, and (56)
Δuuzf ≡ "Unicity Uncertainty Zone Factor", which satisfies Δuuzf ≥ 2 (57)
R1.3 The Rounding mode adopted in the look-up-tables (when limiting the pre-computed values of the fractions to the target-precision) as well as during the summation of fractions as per equation (51) must be TRUNCATION, i.e., discard excess bits.
For the proof of the above result as well as all other analytical results stated below, please refer to [39].
Result 2: In order to disambiguate between the two possible values of the Reconstruction Coefficient i.e., select the correct value (Î) or (Î+1), a small amount of extra information is sufficient.
R2.1 In particular, (prior) knowledge of the remainder of Z (the integer being partially reconstructed), w.r.t. one extra component modulus me that satisfies
gcd(M, me) < me is sufficient for the disambiguation. (58)
R2.2 For computational efficiency, the minimum value of me that satisfies (58) should be selected.
Such a selection gives rise to the following two canonical cases:
§4.2.3 RPPR Algorithm: Illustrative Examples
The pre-computations and Look-up-tables needed for the partial reconstruction are illustrated next.
§4.2.3.A First an Example with Small Values, to Bootstrap the Concepts
Let the set of moduli be ℳ = {3, 5, 7, 11}, so that K = 4,
M = 1155, and let me = 2.
= {3, 5, 7, 11}. This table uses the value of ρr as the address to look-up
explicit value of ρr is needed. In this case
Then, the look-up table for the RPPR algorithm is shown in Table 1.
The table consists of 4 subtables (one per-channel/modulus) that are independently accessible in parallel.
For each value of mr, the table consists of a row that simply stores the approximate pre-computed values of the fractions
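The construction of these per-channel fraction tables can be sketched as follows. This is a minimal illustration in Python (not the patent's hardware tables); the entry stored for address pr is the fraction pr/mr truncated to wF = 1 radix-10 digit, and the inner weights wr = ((1/Mr) mod mr) follow the definitions of Equations (B-6) and (B-7):

```python
moduli = [3, 5, 7, 11]
M = 3 * 5 * 7 * 11            # total modulus M = 1155
b, wF = 10, 1                 # radix and fractional digits for the toy example

# inner weights w_r = ((1/M_r) mod m_r), with outer weights M_r = M/m_r
inner = {m: pow(M // m, -1, m) for m in moduli}

# Table 1: for channel m_r, the row addressed by p_r stores trunc(p_r/m_r * b**wF)
table1 = {m: {p: (p * b**wF) // m for p in range(1, m)} for m in moduli}

print(inner)        # w > 1 only for moduli 7 and 11
print(table1[11])   # sub-table for channel m = 11
```

Note that, consistent with the remark after Table 2 below, the inner weights exceed unity only for the moduli 7 and 11 in this toy RNS.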
§4.2.3.B Further Optimization: Skip the Computation of pr Values
Note that instead of explicitly calculating pr and then using it as an index into a table, the residue zr could be directly used as an index into a table that stores the appropriate values of the precomputed fractions.
The resulting table is illustrated in Table 2. This table is a further-optimized version in which the residue zr is used directly as the address in the rth row (i.e., sub-table) for component-modulus mr.
Note that the only difference between this table and Table 1 is a permutation of the entries in the sub-tables for those moduli mi for which the "inner-weights" are larger than unity (in this case wi > 1 for the last two rows, corresponding to moduli 7 and 11).
In this toy example the number of fractional digits required for intermediate computations is 1, which is not a sizable reduction from the full precision of 3 digits. The following non-trivial long-wordlength example demonstrates the truly drastic reduction in precision that is afforded by our novel algorithm.
§4.2.3.C A Nontrivial Long Word-Length Example
Now consider the partial reconstruction of numbers with a word-length = 256 bits. Here the range is R = 2^256.
In this case, the first 44 prime-numbers are required to cover the entire range. Therefore K = 44 and ℳ = {2, 3, 5, 7, 11, . . . , 181, 191, 193}, me = 4, and the product of all the component-moduli is
M = 198962376391690981640415251545285153602734402721821058212203976095413910572270
mK = m44 = 44th prime number = 193
The word-length required to represent M is
n = ┌log10(M)┐ = ┌77.298┐ = 78 decimal digits (60)
and the word-length required for ZT is
dT = ┌log10(M×44)┐ = ┌78.9┐ = 79 digits (61)
Hence, any conventional full re-construction method to evaluate C requires at least a few operations on 79 digit-long integers.
In contrast, our new partial-reconstruction method requires a drastically smaller precision as well as a drastically smaller number of simple operations (only additions) to accurately evaluate the reconstruction coefficient C. As per Result R1.2 above, the number of fractional digits required in the look-up-tables as well as in intermediate computations is only
wF = ┌log10(44×2)┐ = 2 decimal fractional digits. (62)
When addition of such fractions is considered, the integer part that can accrue requires no more than 2 additional digits (since we are adding at most K = 44 values, and each is a proper fraction, their sum must be less than 44 and therefore requires no more than 2 decimal digits to store the integer part).
Therefore, the total number of digits required in all intermediate calculations is as small as 4, which is a drastic reduction from 79. (In general, the precision is reduced from the O(lg M) required by conventional methods to the much smaller O(lg lg M) required by our method.)
Another extremely important point: by appropriate scaling, all the fixed point fractional values in the table can be converted into integers. Correspondingly, all the fixed-point computations (additions and subtractions of these fractions) are also scaled and can therefore be realized as integer-only operations.
The obvious scaling factor is 10^wF. The resulting look-up-table that contains the scaled integers as its entries is illustrated in Table 3 (for ℳ = {2, 3, 5, 7, 11, . . . , 191, 193}; the fixed-point fractions are stored as scaled integers).
Note that un-scaling requires a division. But since the scaling factor is a power of the radix of the underlying number representation, un-scaling can be achieved simply by a shift that discards the wF least-significant digits, i.e., by truncation of integers. Thus, with the scaling, floating-point computations are entirely avoided.
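As a minimal sketch (Python, toy RNS {3, 5, 7, 11}; variable names are illustrative), the scaled-integer accumulation and shift-based un-scaling look like:

```python
from fractions import Fraction

moduli = [3, 5, 7, 11]
M = 1155
wF = 1                      # fractional decimal digits kept (toy example)
SCALE = 10 ** wF

def remainders(Z):
    # p_r = (z_r * w_r) mod m_r, with w_r = ((1/(M/m_r)) mod m_r)
    return [((Z % m) * pow(M // m, -1, m)) % m for m in moduli]

Z = 641
ps = remainders(Z)
scaled = [(p * SCALE) // m for p, m in zip(ps, moduli)]  # integer table entries
I_hat = sum(scaled) // SCALE    # un-scaling = drop the wF low digits

C_exact = int(sum(Fraction(p, m) for p, m in zip(ps, moduli)))
assert C_exact in (I_hat, I_hat + 1)    # Result 1: at most two candidates
print(I_hat, C_exact)                   # prints: 1 1
```

All arithmetic after the table look-ups is integer-only, exactly as the scaling argument above requires.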
Next we formally specify the algorithm and simultaneously illustrate it for two examples:
1. Example 1: find C for the value X = 641 in the small-wordlength case.
2. Example 2: find C for the value X = 1 in the long-wordlength case.
Right below every step of the algorithm, the computations actually performed for each of the two examples are also illustrated inside "comment-blocks".
§4.2.4 Specification of the Algorithm via Maple-Style Pseudo-Code
RPPR := proc( [z1, . . . , zK], Z mod me )
/* constants: M, me, and all pre-computed constants (ex., wj = ((1/Mj) mod mj), the per-channel fraction tables, . . . ) */
Î := integer part of the truncated sum of the looked-up fractions;
/* Example 1: Î = 1; the check below confirms the correct value of C = 1, and it is returned */
/* Example 2: need to disambiguate between {25, 26} using the extra info */
/* here T ≡ (p1×M1 + . . . + pK×MK); (T mod me) is accumulated from the pre-stored constants (Mr mod me) */
if ( (T mod me) = {((Î × M) mod me) + (Z mod me)} mod me ) then
C := Î;
else
C := Î + 1;
end if;
/* Example 2: ((26 × 2) + 1) mod 4 = 1 mod 4 */
return C;
end proc;
The correctness of the algorithm can be proved by invoking Results 1, 2 and the other equations and identities presented in this document. In addition, the algorithm has been implemented in Maple (software) and exhaustively verified for small wordlengths (up to 16 bits). A large number of random cases (>10^5) for long wordlengths (up to 2^20 ≈ a million bits) were also run and verified to yield the correct result.
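The control flow of the algorithm can be modeled in Python as follows. This is a minimal sketch for Example 1 (X = 641 in the RNS {3, 5, 7, 11} with me = 2); the table look-ups are folded into the summation, and `rppr` is an illustrative name:

```python
moduli = [3, 5, 7, 11]
me, M = 2, 1155
Ms = [M // m for m in moduli]                        # outer weights M_r
ws = [pow(Mr, -1, m) for Mr, m in zip(Ms, moduli)]   # inner weights w_r

def rppr(residues, Z_mod_me, wF=1):
    # Step 1: reconstruction remainders p_r = (z_r * w_r) mod m_r
    ps = [(z * w) % m for z, w, m in zip(residues, ws, moduli)]
    # Step 2: truncated fixed-point summation of the fractions p_r/m_r
    I_hat = sum((p * 10**wF) // m for p, m in zip(ps, moduli)) // 10**wF
    # Step 3: disambiguation; T = sum(p_r*M_r) satisfies T = Z + C*M
    T_mod = sum(p * Mr for p, Mr in zip(ps, Ms)) % me
    for C in (I_hat, I_hat + 1):
        if (Z_mod_me + C * M) % me == T_mod:
            return C
    raise AssertionError("unreachable by Result 1")

X = 641
print(rppr([X % m for m in moduli], X % me))   # prints: 1  (Example 1)
```

Since M is odd and me = 2, the two consecutive candidate coefficients always differ modulo me, so the check in Step 3 resolves the ambiguity, as Result 2 asserts.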
§4.2.5 RPPR Architecture
In addition, each channel is also capable of accessing its own look-up-table(s) (independent of other channels). Finally, there is a dedicated channel corresponding to the extra modulus me = 2 or me = 4 that evaluates Z mod me for every non-primary integer Z (non-primary refers to a value that is not an external input and is not one of the pre-computed values).
We would like to emphasize that the schematic diagram is independent of whether the actual blocks in it are realized in hardware or software. The parallelism inherent in the RNS is independent of whether it is realized in h/w or s/w. This should be contrasted with some other speed-up techniques (such as rendering additions/subtractions constant-time by deploying redundant representations) that are applicable only in hardware [40].
§4.2.6 Delay Models and Assumptions
In order to arrive at concrete estimates of delay, we assume a fully dedicated h/w implementation. Each channel has its own integer ALU that can perform all operations modulo any specified modulus. Among all the channels, the K-th one that performs all operations modulo-mK requires the maximum wordlength since mK is the largest component-modulus.
The maximum channel wordlength is: nK = ┌lg mK┐ ≈ O(lg n) ≈ O(lg lg M) (63)
Note that this is drastically smaller than the wordlength nc required for conventional binary representation, which is roughly O(n), the number of bits required to represent M.
In accordance with the literature, we make the following assumptions about delays of hardware modules
<A-1> A carry-look-ahead adder can add/subtract two operands within a delay that is logarithmic w.r.t. the wordlength(s) of the operands.
<A-2> Likewise, a fast hardware multiplier (which is essentially a fast multi-operand accumulation tree followed by a fast carry-lookahead-adder and therefore) also requires a delay that is logarithmic w.r.t. the wordlength of the operands.
More generally, a fast-multi-operand addition of K numbers each of which is n-bits long requires a delay of
O(lgK)+O(lg(n+lgK)) which becomes ≈O(lgn) in our case. (64)
<A-3> Assuming that the address-decoder is implemented in the form of a "tree-decoder", a look-up table with E entries requires ≈ O(lg E) delay to access any of its entries.
<A-4> We assume that dedicated shifter(s) is(are) available. A multi-stage shifter (also known as a "barrel" shifter [1, 6]) implements shift(s) of arbitrary (i.e., variable) numbers of bit/digit positions, where the delay is ≈ O(lg(maximum_shift_distance_in_digits)) units.
§4.2.7 Estimation of the Total Delay
The preceding assumptions, together with Equation (63), imply that the delay ΔCH of all operations within individual channels can be approximated as
ΔCH ≈ lg nK ≈ O(lg lg n) ≈ O(lg lg lg M) (65)
which is very small.
The delay estimation is summarized in Table 4.
As seen in the table, the dominant delay is in Step 3, the accumulation of values read from the per-channel look-up tables.
Therefore, the overall delay is ≈ O(lg n) ≈ O(lg lg M).
§4.2.8 Memory Required by the PR Algorithm
The r-th channel associated with modulus mr has its own look-up table with (mr−1) entries (since the case where the remainder is "0" need not be stored). Hence, the number of storage locations needed is
(m1−1) + (m2−1) + . . . + (mK−1) ≈ O(n²) locations (66)
Each location stores a fractional value that is no longer than wF digits ≈ O(lg n) bits. Therefore, total storage (in bits) = O(n²) (locations) × O(lg n) (bits per location) ≈ O(n² lg n) bits (67)
There are several important points to note:
1. Although the above estimate makes it look as though it is a single chunk of memory, in reality it is not: each channel has its own memory that is independently accessible. The implications are:
2. The address-selector (aka the decoder) circuitry is substantially smaller and therefore faster (than if the memory were one single block).
3. It is a READ-ONLY memory; the pre-computed values are loaded only once and never need to be written again. In a dedicated VLSI implementation, it would therefore be possible to utilize highly optimized, smaller and faster cells.
§4.3 Base Change/Extension
Those familiar with the art will realize that once the "RPPR" algorithm yields an exact equality for the operand (being partially re-constructed), a base-extension or change is straightforward.
Without loss of generality, the algorithm is illustrated via an example which extends a randomly generated 32-bit long unsigned integer to a 64-bit integer (without changing the value), which requires an extension of the residue tuple as shown below.
In order to cover the (single-precision) range [0, 2^32], the moduli set required is ℳ32 = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}, so that K = |ℳ32| = 10, and the total product is M32 = 6469693230. The reconstruction weights are
[Mi, i = 1, . . . , 10] = [3234846615, 2156564410, 1293938646, 924241890, 588153930, 497668710, 380570190, 340510170, 281291010, 223092870]
and the inner weights are
[wi, i = 1, . . . , 10] = [1, 1, 1, 3, 1, 11, 4, 9, 11, 12]
In order to cover the double-precision range [0, 2^64], the extended moduli set required is ℳext = ℳ64 = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53}, so that Kext = |ℳext| = 16, and the total product is Mext = M64 = 32589158477190044730.
Let the single-precision operand be Z = 1355576195 ≡ [z1, z2, . . . , z10]. By the CRT,
Z = (3234846615×p1 + 2156564410×p2 + . . . + 223092870×p10) − 6469693230×C (68)
where pi = (zi×wi) mod mi is the reconstruction remainder for channel i, for i = 1, . . . , 10, and C is the Reconstruction Coefficient.
The only unknown in Eqn (68) is C, which is evaluated by the RPPR algorithm.
Note that (3234846615 mod 31), . . . , (6469693230 mod 31) are all constants for a given RNS and can be pre-calculated and stored. In general, we always assume that whichever values can be pre-computed are actually pre-computed. Thus
θ(i,j) = (Mi mod mj) for i = 1, . . . , K and j = K+1, . . . , Kext (69)
are all pre-computed and stored, so that the operation of evaluating the remainder w.r.t. an extra modulus mj reduces to
(Z mod mj) = {(θ(1,j)×p1 + . . . + θ(K,j)×pK) − Δj×C} mod mj (70)
where, Δj ≡ (M32 mod mj)
Next, we specify the algorithm exactly in Maple-style pseudo-code.
§4.3.1 Specification of the Algorithm via Maple-Style Pseudo-Code
Base_Extension := proc( [z1, . . . , z10], Z mod me )
/* constants: ℳ32 = {2, 3, . . . , 29}, |ℳ32| = K32 = 10,
ℳ64 = {2, 3, . . . , 53}, |ℳ64| = K64 = 16,
wj = ((1/Mj) mod mj), the θ(i,j) constants, M32 and M64, etc. */
C := Reduced_Precision_Partial_Reconstruction( [z1, . . . , z10], Z mod me );
for j from K32 + 1 to K64 do
zj := {(θ(1,j)×p1 + . . . + θ(10,j)×p10) − (M32 mod mj)×C} mod mj;
end do;
return [z1, . . . , z16];
end proc;
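The base-extension flow above can be sketched in Python as follows. This is a minimal model of the idea (not the production algorithm): for brevity the reconstruction coefficient C is computed exactly rather than via RPPR, and `extend` is an illustrative name:

```python
from math import prod

M32_set = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
M64_set = M32_set + [31, 37, 41, 43, 47, 53]
M32 = prod(M32_set)                         # 6469693230

Ms = [M32 // m for m in M32_set]            # outer weights M_i
ws = [pow(Mr, -1, m) for Mr, m in zip(Ms, M32_set)]   # inner weights w_i

def extend(residues, C):
    # residues: [z1..z10]; C: the reconstruction coefficient (from RPPR in
    # the real flow; supplied exactly here for the sketch)
    ps = [(z * w) % m for z, w, m in zip(residues, ws, M32_set)]
    out = []
    for mj in M64_set[len(M32_set):]:
        theta = [Mr % mj for Mr in Ms]      # pre-computable constants, Eqn (69)
        delta = M32 % mj                    # pre-computable constant
        out.append((sum(t * p for t, p in zip(theta, ps)) - delta * C) % mj)
    return out

Z = 1355576195
res = [Z % m for m in M32_set]
ps = [(z * w) % m for z, w, m in zip(res, ws, M32_set)]
C = (sum(p * Mr for p, Mr in zip(ps, Ms)) - Z) // M32
assert extend(res, C) == [Z % mj for mj in M64_set[10:]]
```

Every quantity other than C and the pr values is a pre-computable constant, which is exactly what makes the extension cheap.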
§4.4 Sign and Overflow Detection by Interval Separation (SODIS)
The next canonical operation I have sped up is sign-detection. The new algorithms for sign-detection in the RNS are illustrated in this section.
As illustrated in
Q: Where then is the problem in sign detection?
A: (i) Note that a "re-construction" of the overall magnitude is necessary; and (ii) in conventional signed residue representations,
Fmax− = Fmax+ + 1 (71)
In other words, ALL the unsigned integers in the range [0, M−1] are utilized; not a single value is wasted.
An unfortunate by-product of this quest for representational efficiency is that the consecutive integers Fmax+ and Fmax− end up having opposite signs. Consequently, re-construction must be able to distinguish between consecutive integers, i.e., the resolution of the re-construction must be full. This, in turn, requires that all fractional computations be carried out to the full precision, thereby rendering them slow.
The main question therefore is whether it is possible to make do with the drastically reduced precision we wish to deploy, and if so, then how?
The answer is to insert a sufficiently large "separation-zone" between the positive and negative regions, as illustrated in the figure.
In this representation (with total range Me):
both "0" as well as Me correspond to the actual magnitude 0 (73)
unsigned integers {1, . . . , Fmax+} represent +ve values, and (74)
unsigned integers {Fmax−, . . . , (Me−1)} represent −ve values {−(Me−Fmax−), . . . , −1}, respectively, wherein (75)
unsigned integer Fmax+ represents the maximum positive magnitude allowed, and (76)
unsigned integer Fmax− represents the maximum −ve magnitude allowed = (Me−Fmax−) (77)
The interval between Fmax+ and Fmax− is the separation zone (78)
Most practical/useful number representations try to equalize the number of +ve values and the number of −ve values included in order to attain maximal amount of symmetry. This yields the constraint
Fmax− ≈ Me − Fmax+ (79)
Intuitively, it is clear that equal lengths should be allocated to
(1) the +ve interval
(2) the −ve interval and
(3) the separation-zone between the opposite polarity intervals.
For this to be possible, the extended modulus Me must satisfy
Me > 3·Fmax+ (80)
Finally, the attainment of maximal possible symmetry dictates that, to the extent possible, the separation-zone must be symmetrically split across the mid-point of the range [0, (Me−1)].
With the separation interval in place, note that all +ve numbers Z+ within the range [1, Fmax+], when represented as fractions of the total magnitude Me, now satisfy
(Z+/Me) ≤ (Fmax+/Me) < 1/3 (81)
Likewise, all −ve numbers Z−, when represented as fractions of the total magnitude Me, satisfy
(Z−/Me) ≥ (Fmax−/Me) > 2/3 (82)
But Eqn (19) (repeated here for the sake of convenience) states that the fractional part of the accumulated sum of fractions equals the operand's magnitude as a fraction of the total range.
In other words, the separation interval enables the evaluation of the sign of the operand under consideration by examining one (or at most two) most significant digits of the accumulated sum of fractions.
Recall that in the partial reconstruction, the integer part of the sum-of-fractions was of crucial importance.
It is quite striking that when the interval separation is properly leveraged as illustrated herein, the most significant digit(s) of the fractional part also convey equally valuable information, viz., the sign of the operand.
As per Equations (81) and (82), the natural choice of the detection boundaries T+ and T− is specified by the relations T+ = (Fmax+/Me) and T− = (Fmax−/Me).
However, note that even if the "detection boundaries" T+ (for +ve numbers) and T− (for −ve numbers) are moved slightly into the "separation zone", as illustrated in the figure, no in-range operand is misclassified.
For the ease of computation, we therefore set T+ = 1/3 and T− = 2/3.
§4.4.1 Specification of Sign-Detection Algorithm via Maple-Style Pseudo-Code
Sign_Detect := proc( [z1, . . . , zK], Z mod 4 )
/* first determine the Reconstruction Coefficient C for the input and the truncated sum of the fractions */
if (the truncated sum of fractions = 0) then return "Z = 0" end if;
Î := integer part of the truncated sum of fractions;
if ( (T mod 4) = {((Î × Me) mod 4) + (Z mod 4)} mod 4 ) then
C := Î;
else
C := Î + 1;
end if;
/* the sign is then read off by comparing the leading fractional digit(s) against T+ and T− */
return (sign, C, Approx_overflow_estimate);
end proc;
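The threshold test at the heart of the SODIS idea can be sketched in Python as follows. This is a toy model (a small Me and an assumed positive-range limit), using the eased boundaries T+ = 1/3 and T− = 2/3 discussed above:

```python
from fractions import Fraction

Me = 2310                      # toy extended range: 2 * 3 * 5 * 7 * 11
Fmax_pos = Me // 3 - 1         # largest allowed +ve magnitude (assumption)
Fmax_neg = Me - Fmax_pos       # unsigned code of the most negative value

def sign_by_fraction(code):
    # 'code' is the unsigned integer representing the signed operand;
    # code/Me models the fractional part of the accumulated sum of fractions
    f = Fraction(code, Me)
    if code == 0:
        return 0
    if f < Fraction(1, 3):
        return +1              # Eqn (81): positive region
    if f > Fraction(2, 3):
        return -1              # Eqn (82): negative region
    raise ValueError("code lies in the separation zone")

assert sign_by_fraction(Fmax_pos) == +1
assert sign_by_fraction(Fmax_neg) == -1     # represents -(Me - Fmax_neg)
```

In the real algorithm only one or two of the most significant fractional digits are inspected; exact rationals are used here purely to keep the sketch unambiguous.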
§ 4.4.2 Overflow Detection
Since we are dealing with sign-detection of integers, an underflow of "magnitude" simply results in the value "0"; no other action needs to be taken in case of magnitude underflow.
However, a magnitude overflow must be detected and flagged. Let A and B be the operands and let ⊙ denote some operation; then "overflow of magnitude" includes both the cases
case 1: (A ⊙ B) > Posedge = Fmax+ and (90)
case 2: (A ⊙ B) < Negedge = −(Me−Fmax−) (91)
or, dividing both sides by Me, the corresponding conditions on the fractional values.
However, recall that the decision boundaries T+ and T− are shifted by a small amount into the "separation region". As a result, whenever input values in the range (Fmax+, T+·Me] or in the range [T−·Me, Fmax−) are encountered, they will be wrongly classified as being within the designated range even though they are actually outside it. The only solution to this problem is to separately evaluate the sign of either (Z−Fmax+) or (Z−Fmax−) to explicitly check for overflow.
§4.4.2.A Specification of Overflow Detection Algorithm via Maple-Style Pseudo-Code
Once these building blocks are specified, the overall SODIS algorithm is specified next.
§4.4.2.B Specification of Sign and Overflow Detection Algorithm via Maple-Style Pseudo-Code
Those familiar with the art shall realize that, using the algorithms presented in this section, a comparison of two numbers, say A and B, can be realized extremely fast, without ever leaving the residue domain, in a straightforward manner by detecting the sign of (A−B).
§4.5 The Quotient First Scaling (QFS) Algorithm for Dividing by a Constant
Assume that a double-length (i.e., 2n-bit) dividend X is to be divided by an n-bit divisor D, which is a constant, i.e., it is known ahead of time. The double-length value X is variable/dynamic: it is either an external input or, more typically, it is the result of a squaring or a multiplication of two n-bit integers. It is assumed that the extra bit of information, i.e., the value of (X mod me), is available. Given positive integers X and D, a division entails computing the quotient Q and a remainder R such that X = Q×D + R, with 0 ≤ R < D.
To derive the Division algorithm, start with the alternative form of the Chinese Remainder Theorem (CRT) which expresses the target integer via an exact integer equality of the form illustrated in Equations (B-8.*). Express the double-length Dividend X as
where the exact value of Reconstruction Coefficient C
To implement Division, evaluate the quotient Q as follows:
The exact integer-quotient can be written as
Since exact values of Qr and QRC are pre-computed and looked-up, the value of QI in Eqn (103) above is exact. However, since we use approximate precomputed values of the fractions truncated to drastically small precision, the value of Qf calculated via Eqn (104) above is approximate. As a result, the value of Q that is calculated is also approximate. We indicate approximate estimates by a hat on top, which yields the relations:
Our selection of moduli (explained in detail in Section 4.1 above) leads to the fact that the number of memory-locations required for an exhaustive look-up turns out to be a small-degree (quadratic) polynomial in n = lg M. This amount of memory can be easily integrated in h/w modules in today's technology for word-lengths up to about 2^17 ≈ 0.1 million bits (which should cover all word-lengths of interest today as well as in the foreseeable future).
Note that the quotient terms involving the Reconstruction Coefficient C can also be pre-computed and stored for all possible values of the Reconstruction Coefficient C (viz., C ∈ [1, 2, . . . , K]).
§4.5.1 Further Novel Optimizations
① Store the pre-computed Quotient values directly as residue tuples
Note that the quotient values themselves could be very large (about the same word-length as the divisor D). However, we need not store these long strings of quotient digits, since in many applications (such as modular exponentiation) the quotient is only an intermediate variable required to calculate the remainder. Obviously, the extra bit of information conveyed by (Qr mod me) is also pre-computed and stored together with the tuple representing the exact integer quotient.
The total memory required to store either the full-length long-integer value or the residues w.r.t. the component moduli as a tuple is about the same. By opting to store only the residue-tuples, we eliminate the delay required to convert integer quotient values into residues, without impacting the memory requirements significantly.
Thus, the QFS algorithm needs 2 distinct Quotient_Tables.
§4.5.2 Quotient-Tables Explained via a Small Numerical Example
I believe that the tables can be best illustrated by a concrete small example. Assume that the divisor D = 209 = 11×19 (i.e., D is representable as an 8-bit number). The dividends of interest are therefore up-to-16-bit-long numbers. In this case the moduli turn out to be [2, 3, 5, 7, 11, 13, 17]. Even if the first two moduli (viz., 2 and 3) are dropped, the product still exceeds the desired range [0, 2^16]. Therefore we select
ℳ = {5, 7, 11, 13, 17}, K = 5, M = 85085 and the extra-modulus me = 2 (108)
To realize division by this divisor D=209, the first table required is shown in Table 5.
This table is referred to as "Quotient_Table_1" (or also as the "Quotient_Tuples_Table"). It stores all possible values of the Quotients required to evaluate the first term (the sum) in Eqn (103). The entries (rows) corresponding to each component-modulus mr constitute a sub-table of all possible values pr can assume for that value of mr. For the sake of clarity, we have used a "double-line" to separate one sub-table from the next.
To illustrate the pre-computations, we explain the last sub-table in Quotient_Table_1, corresponding to the component-modulus m5 = 17.
This sub-table has 16 rows. The first row corresponds to p5 = 1, the second row corresponds to p5 = 2, and so on. Now we explain each entry in the penultimate row of Table 5 (this row corresponds to p5 = 15).
The value in the 3rd column titled "Quotient . . . " lists the quotient, i.e.,
Q5 = ⌊(p5×M5)/D⌋ = ⌊(15×5005)/209⌋ = 359
The next entry (within the angled-brackets 〈 〉) simply lists the value of (Q5 mod 2) = 〈1〉.
The last column stores the fixed-point fractional remainder values scaled by the multiplying factor b^wF = 10^2. Here the fractional remainder is ((15×5005)/209 − 359) = 0.2105 . . . , which when truncated to two decimal places yields 0.21. Accordingly, 0.21×10^2 = 21, and this is the value stored in the last column.
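The entries of this row can be re-derived mechanically; a short Python check of the arithmetic above:

```python
moduli = [5, 7, 11, 13, 17]
M, D = 85085, 209
M5 = M // 17                    # outer weight for channel m5 = 17 (= 5005)
p5 = 15                         # the penultimate row of the sub-table

Q5 = (p5 * M5) // D             # "Quotient" column
extra_bit = Q5 % 2              # the angled-bracket entry <Q5 mod 2>
tuple_Q5 = [Q5 % m for m in moduli]   # residue tuple of Q5
rem = (p5 * M5) % D             # remainder 44, i.e. 44/209 = 0.2105...
scaled = (rem * 10**2) // D     # truncated to wF = 2 digits and scaled

assert (Q5, extra_bit, tuple_Q5, scaled) == (359, 1, [4, 2, 7, 8, 2], 21)
```

The residue tuple [4, 2, 7, 8, 2] computed here is exactly the column-4 entry discussed below.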
(Table 5: Quotient_Table_1 for ℳ = [5, 7, 11, 13, 17] and divisor D = 209. In it, each row lists pr, the quotient Qr, the extra bit 〈Qr mod 2〉, the residue-tuple of Qr, and the scaled fractional remainder.)
We would like to point out that the actual full-wordlength-long integer values of the quotients Qr (listed in column 3 of the table) need not be (and hence are not) stored in a real (h/w or s/w) implementation of the algorithm; the full decimal Qr values were included in column 3 of Table 5 merely for the sake of illustration. In an actual implementation, only the extra information, i.e., the (Qr mod 2) values (shown inside the angled braces 〈 〉 in column 3), and the residue-domain tuples representing Qr (as shown in column 4 of the table) are stored. For example, in the penultimate row, the actual quotient value "359" need not be stored; only 〈359 mod 2〉 = 〈1〉 would be stored, together with the tuple of residues of 359 w.r.t. the component moduli = [359 mod 5, . . . , 359 mod 17] = [4, 2, 7, 8, 2], as shown in column 4 therein.
Next we explain Table 6, which shows Quotient_Table_2 (also referred to as the "Quotient_Rc_Table"). Its rows are indexed by all possible values of the Reconstruction Coefficient C ∈ [1, 2, . . . , K], and its columns list 〈Qc mod 2〉, the residue-tuple of Qc, and the scaled fractional remainder derived from (C×M − Qc×D).
This table covers all possible values of the Reconstruction Coefficient C. For each value of C, the extra bit 〈Qc mod 2〉 (shown in column 2) and the tuple of residues of Qc w.r.t. the component-moduli are stored, as illustrated in the third column of the table. The last column stores the fixed-point fractional remainder values scaled by the factor 10^wF.
Another nontrivial distinction of Quotient_Table_2 from all previous tables is the fact that the fractional values in the last column are always rounded up (the mathematical expression uses the "ceiling" function). Note that the last term in Equations (96) and (98) has a negative sign. As a result, when rounding these fractional remainders, we must "over-estimate" them, so that when this value is subtracted to obtain the final quotient estimate, we never over-estimate the quotient. In other words, the use of the "ceiling" function is necessary to ensure that we are always "under-estimating" the total quotient.
§4.5.3 Specification (Pseudo-Code) of the QFS Algorithm
Like the RPPR algorithm, we illustrate the division algorithm with 2 examples:
(i) first with small-sized operands (dividend X = 3249, divisor D = 209), so that the reader can replicate the calculations by hand/calculator if needed.
(ii) The 2nd numerical example is a realistic long-wordlength case.
Instead of separating the pseudo-code and the numerical illustration, we have woven the numerical illustration of each step of the algorithm for the running (small) example into the pseudo-code as comment blocks.
QFS := proc( [x1, . . . , xK], X mod me )
/* inputs: the residue-tuple of the dividend X, and the extra information (X mod me), where me ∈ {2, 4} */
/* constants: D, M, Mr, wr, r = 1, 2, . . . , K, the Quotient_Tables, etc. */
C := Reduced_Precision_Partial_Reconstruction( [x1, . . . , xK], X mod me );
/* look up Quotient_Table_1 using each pr, and Quotient_Table_2 using C */
/* In Example 1: the tuple looked-up for channel 1 is [2, 1, 8, 6, 9] (the residues of Q1 = 162), and so on */
Q̂I := (sum of the looked-up quotient tuples) − (the tuple looked-up for C); /* channel-wise */
Q̂f := (sum of the looked-up scaled fractional values) − (the scaled value looked-up for C);
/* un-scaling Q̂f: only a shift followed by truncation suffices */
Q̂ := Q̂I + (integer part of the un-scaled Q̂f); set the flag Q_is_exact;
return (Q̂, Q̂ mod me, Q_is_exact);
end proc;
/* In Example 1: using the CRT, it can be verified that the returned Q̂ is exact. It is also easy to independently check that the remainder it implies is less than D, verifying the returned value of flag Q_is_exact */
We would like to clarify some important issues regarding the QFS algorithm.
Once the quotient estimate Q̂ is obtained, the corresponding extra information for the remainder follows as
R̂ mod me := [(X mod me) − (Q̂ mod me)×(D mod me)] mod me (110)
If the variable/flag "Q_is_exact" is set to the value "1", then Q̂ = Q, i.e., the estimate equals the exact integer quotient. In practice (numerical experiments), this happens in an overwhelmingly large number of cases.
Otherwise, the flag Q_is_exact = 0, indicating that the algorithm could not determine whether or not Q̂ is exact (because of the drastically reduced precision used to store the pre-computed fractions). In this case Q̂ could be exact, i.e., Q̂ = Q, or Q̂ = (Q−1), i.e., Q̂ can under-estimate Q by an ulp.
Further disambiguation between these two values is possible by calculating the estimated remainder R̂ and checking whether (R̂−D) is +ve or −ve.
Let the exact integer remainder be denoted by R. It is clear that the estimated integer remainder R̂ can have only two possible values:
if Q̂ is exact, then R̂ = R, i.e., R̂ is also exact; or (111)
if Q̂ = (Q−1), then R̂ = X − (Q−1)×D = (X − Q×D) + D = R + D (112)
In other words, in the relatively infrequent case (Q_is_exact = 0), performing a sign-detection on (R̂−D) is guaranteed to identify the correct Q and R in all cases: if (R̂−D) is +ve, then it is clear that Q̂ underestimated Q by an ulp; otherwise Q̂ = Q.
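The whole QFS flow for the small example can be sketched in Python. This is a minimal model: the table look-ups are computed inline, and C is obtained exactly (the real algorithm uses RPPR); note the TRUNCATION of the Table-5 fractions versus the CEILING of the Table-6 fraction:

```python
moduli = [5, 7, 11, 13, 17]
M, D, wF = 85085, 209, 2
SCALE = 10 ** wF
Ms = [M // m for m in moduli]                        # outer weights M_r
ws = [pow(Mr, -1, m) for Mr, m in zip(Ms, moduli)]   # inner weights w_r

X = 3249
ps = [((X % m) * w) % m for m, w in zip(moduli, ws)]
# reconstruction coefficient (via RPPR in the real algorithm)
C = (sum(p * Mr for p, Mr in zip(ps, Ms)) - X) // M

# Quotient_Table_1 look-ups: exact integer quotients + TRUNCATED scaled fractions
QI = sum((p * Mr) // D for p, Mr in zip(ps, Ms))
Qf = sum(((p * Mr) % D) * SCALE // D for p, Mr in zip(ps, Ms))
# Quotient_Table_2 look-up for C: scaled fraction rounded UP (ceiling)
Qc = (C * M) // D
Qc_f = -((-(((C * M) % D) * SCALE)) // D)            # ceiling division

Q_hat = QI - Qc + (Qf - Qc_f) // SCALE               # combine, then un-scale
R_hat = X - Q_hat * D
assert Q_hat in (X // D, X // D - 1)                 # never an over-estimate
print(Q_hat, R_hat)                                  # prints: 15 114
```

Here Q̂ = 15 and R̂ = 114 are exact, consistent with the flag behavior described above.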
§4.5.4 Estimation of the Delay of and the Memory Required for the QFS Algorithm
§4.5.4.A Delay Model and Latency Estimation
We assume a dedicated h/w implementation of all channels (including the extra channels). Within each channel the look-up tables are also implemented in h/w (note that the tables need not be "writable"). All tables are independently readable in parallel with a latency of O(lg n). Likewise, since each component-modulus is small and the number of channels (K) is also small, we assume that a dedicated adder-tree is available in each channel for the accumulations modulo the component-modulus for that channel. The latency of the accumulations can also be shown to be logarithmic in the word-length, i.e., O(lg n). Likewise, we assume that a fast multistage or barrel shifter is available per channel, so that the delay of "variable" shifts is also O(lg n).
Since the maximum latency of any of the blocks is O(lgn), the overall/total latency of the h/w implementation is estimated to be O(lgn).
§4.5.4.B Memory Requirements
In addition to the reconstruction table, we also need the Quotient Tables. The total number of entries in both parts of the Quotient table is O(K²/2) + O(K−1) = O(K²). In this case, each table entry has K+1 components, wherein each component is no bigger than O(lglgK) bits. Consequently, the total storage (in bits) that is required is ≈ O(K³lglgK) bits ≈ O(n³lglgn) bits.
§4.6 Modular Exponentiation Entirely within the Residue Domain
Modular exponentiation refers to evaluating (XY mod D). In many instances, in addition to D, the exponent Y is also known ahead of time (ex: in the RSA method, Y is the public or private-key). Our method does not need Y to be a constant, but we assume that it is a primary/external input to the algorithm and hence available in any desired format (in particular, we require the exponent Y as a binary integer, i.e., a string of w-bits).
To the best of our knowledge, one of the fastest methods to perform modular exponentiation expresses the exponent Y as a polynomial in radix 2, parenthesized as shown in Eqn (114) above (known as "Horner's method" of evaluating a polynomial). Since the coefficient of the leading term in (113) must be non-zero (i.e., yw−1 = 1), the modular exponentiation starts with the initial value Ans := X² mod D. If yw−2 ≠ 0, then the result is multiplied (modulo-D) by X and is then squared. This operation is repeatedly performed in a loop as shown below:
The obvious speedup mechanism is to deploy the QFS algorithm to realize each modular-reduction, aka, remaindering operation. (the remaindering operations needed in modular-exponentiation are tagged with the label “mod_red_n” inside a box at the end of the corresponding line in the maple-style pseudo-code above).
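The loop structure just described can be sketched in Python (with ordinary `% D` reductions standing in for the QFS-based remaindering; `mod_exp` is an illustrative name):

```python
def mod_exp(X, Y, D):
    # assumes Y >= 2; the leading bit y_{w-1} = 1 is absorbed by the
    # initial squaring, per the Horner parenthesization
    bits = bin(Y)[2:]
    ans = (X * X) % D                 # Ans := X^2 mod D           (mod_red_1)
    for bit in bits[1:-1]:
        if bit == '1':
            ans = (ans * X) % D       # conditional multiply       (mod_red)
        ans = (ans * ans) % D         # square                     (mod_red)
    if bits[-1] == '1':
        ans = (ans * X) % D           # final conditional multiply (mod_red)
    return ans

assert mod_exp(7, 13, 209) == pow(7, 13, 209)
```

Each tagged `mod_red` line is exactly where the QFS algorithm is deployed in our method.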
§4.6.1 Further Optimization: Avoiding Sign-Detection at the End of QFS
Result 3: Directly using the estimate Q̂ to evaluate the remainder yields a value R̂ that is in the same residue class w.r.t. D as the exact remainder R.
Proof: Immediately follows from the definition of the residue class:
Definition 1: Integers p and q are in the same residue class w.r.t. D if (p mod D = q mod D).
Eqns (111) and (112) show that R̂ ∈ {R, R+D}; in either case it is in the same residue class as the exact integer remainder R.
Next, we show that as long as the range of the RNS system is sufficiently large, it is possible to use incorrect values for the remainder at intermediate steps of modular exponentiation, (as long as they are in the proper residue class); and still generate the correct final result.
Result 4: If the inputs X1 and X2 to the QFS algorithm are in the same residue class w.r.t. the (constant/known) divisor D, then the remainder estimates R̂1 and R̂2, evaluated using the quotient estimates Q̂1 and Q̂2 returned by the QFS algorithm, both satisfy the constraints
R̂1 can assume only one of the two values: R̂1 = R or R̂1 = R + D (115)
R̂2 can assume only one of the two values: R̂2 = R or R̂2 = R + D (116)
where R is the correct/exact integer remainder. (This holds even if the "Q_is_exact" flag is set to 0, indicating that the algorithm could not determine whether or not the quotient estimate equals the exact quotient.)
Result 5: If the range of the RNS is sufficiently large, then there is no need for a sign-detection at the end of the QFS algorithm in order to identify the correct remainder in intermediate steps during the modular-exponentiation operation.
Proof: Assume that at the end of some intermediate step i, Q̂ = (Q−1), thereby causing
Ansi := R̂i = Ri + D instead of the correct value Ansi := R̂i = Ri; (117)
Then, as seen in the pseudo-code for modular exponentiation (which is illustrated in Section §4.6.2), the next operation is either a modular-square or a modular-multiplication:
Ans(i+1) := (Ansi)² mod D or Ans(i+1) := (Ansi×X) mod D (118)
i.e.,
Ans(i+1) := (Ri+D)² mod D or Ans(i+1) := ((Ri+D)×X) mod D (119)
instead of the correct values
Ans(i+1) := Ri² mod D or Ans(i+1) := (Ri×X) mod D
However, note that
Ri² is in the same residue class w.r.t. D as (Ri+D)², and (120)
[(Ri+D)×X] is in the same residue class w.r.t. D as (Ri×X) (121)
Therefore, from Result 4 above, it follows that on either path (modular-square or modular-product-by-X) the answers obtained at the end of the next step satisfy the exact same constraints, specified by Equations (115) and (116), independent of whether the answers (remainders) at the end of the previous step were exact or had an extra D in them; which shows that performing a sign-detection on the Q̂ returned by the QFS algorithm is not necessary.
Result 6: A single-precision RNS range ≥ 3D, and correspondingly a double-precision range ≥ 9D², is sufficient to obviate the need for a sign-detection.
Proof: Since the correct remainder satisfies the constraints 0 ≤ R < D, it is clear that the erroneous remainder value (R+D) satisfies
0 < D ≤ (R+D) < 2D (122)
As a result, the estimated remainder could be as high as almost 2D. We therefore set the single-precision range-limit to be 3D, so that the full double-length values could be as large as (3D)² = 9D². Accordingly, we select the K smallest consecutive prime numbers such that their product exceeds 9D². With this big a range, neither a modular-square nor a modular-multiplication using an inexact remainder causes overflow, as per constraint (122) above.
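The residue-class and range claims of Results 4-6 can be checked mechanically; a short Python verification over every possible exact remainder (D = 209 and X = 123 are illustrative values):

```python
D = 209
X = 123
for R in range(D):
    R_bad = R + D                    # the inexact estimate of Eqns (115)-(116)
    # same residue class after the next square or multiply-by-X step:
    assert (R_bad * R_bad) % D == (R * R) % D    # Eqn (120)
    assert (R_bad * X) % D == (R * X) % D        # Eqn (121)
    # and the double-length intermediate stays inside the 9*D^2 range:
    assert R_bad ** 2 < 9 * D * D                # via Eqn (122)
print("all residue-class checks passed")
```

This is, of course, only a spot check of the algebraic identities, not a substitute for the proofs above.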
§4.6.2 The ME-FWRD Algorithm: Maple-Style Pseudo-Code
[Maple-style pseudo-code listing not fully reproduced here; the surviving fragments show procedure calls whose arguments include 9D², D_mod_me, and (X mod me).]
Correctness of the algorithm follows from the analytical results presented so far. Moreover the algorithm was implemented in Maple and extensively tested on a large number of cases.
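The structure of the ME-FWRD loop can be sketched in Python as follows; `reduce_inexact` is a hypothetical stand-in for the QFS remaindering step, modeled here as always under-estimating the quotient by 1 (the worst case), so that every intermediate remainder carries an extra +D, precisely the situation covered by Results 5 and 6:

```python
def me_fwrd_sketch(X, Y, D):
    """Left-to-right square-and-multiply computing X**Y mod D.

    Illustrative sketch only: the patent's Maple listing operates on RNS
    channels and uses the QFS quotient estimate; reduce_inexact stands in
    for that step and deliberately leaves remainders off by +D.
    """
    def reduce_inexact(v):
        # Stand-in for QFS remaindering: the quotient estimate can be
        # low by 1 (never high).  We model that worst case by always
        # under-estimating, so the returned remainder lies in [D, 2D)
        # whenever v >= D.  All values stay below 9D^2 (Result 6).
        q_hat = max(v // D - 1, 0)
        return v - q_hat * D

    ans = 1
    for bit in bin(Y)[2:]:               # scan exponent bits MSB first
        ans = reduce_inexact(ans * ans)  # modular square (inexact)
        if bit == '1':
            ans = reduce_inexact(ans * X)  # modular multiply by X (inexact)
    return ans % D          # single exact reduction at the very end

assert me_fwrd_sketch(7, 560, 561) == pow(7, 560, 561)
```

Because every inexact reduction preserves the residue class w.r.t. D, one exact reduction at the end suffices; no sign-detection is performed inside the loop.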
§4.6.3 Delay Estimation of the Proposed Modular-Exponentiation Algorithm
Pre-computation costs are not considered (they represent one-time fixed costs).
(i) The main/dominant delay is determined by the delay of the loop.
Assuming that the exponent Y is about as big as D, the number of times the exponentiation loop is executed is lg Y ≈ O(n).
(ii) Determination of the quotient estimate is the most time-consuming operation in each iteration of the loop, and it requires O(lg n) delay (as explained in Section §4.5.4-A).
As a result, each iteration of the loop requires O(lg n) delay.
(iii) Therefore, the total delay is O(n lg n).
The memory requirements are exactly the same as those of the QFS algorithm: ≈ O(n³ lg lg n) bits, as shown above (in Section §4.5.4-B).
§4.6.4 Some Remarks about the ME-FWRD Algorithm
① In a remaindering operation, it is possible to under-estimate the quotient, but it is not acceptable to over-estimate the quotient even by one ulp, for the following reason:
if Q̂ is an over-estimate, then R̂ = X − Q̂ × D < 0, and therefore gets evaluated as (123)
R̂ ≡ (M − |R̂|), where M is the total-modulus of the RNS; this is not in the same residue class w.r.t. D as the correct remainder R (124)
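Remark ① can be verified numerically. In the toy setup below (all values are illustrative assumptions), an under-estimated quotient leaves the remainder in the correct residue class, while an over-estimate of just one ulp wraps the negative result around the total-modulus M and lands in the wrong residue class:

```python
# Why over-estimating the quotient is fatal, in a hypothetical toy RNS.
M = 3 * 5 * 7 * 11 * 13   # total-modulus of a toy RNS (M = 15015)
D = 97                    # divisor
X = 5000                  # dividend
Q = X // D                # exact quotient
R = X % D                 # correct remainder

# Under-estimate by 1: remainder R + D, same residue class w.r.t. D.
R_under = X - (Q - 1) * D
assert R_under % D == R

# Over-estimate by 1: R_hat = X - (Q+1)*D < 0; the RNS cannot see the
# sign and represents it as M - |R_hat|, a wrong residue class w.r.t. D.
R_over = (X - (Q + 1) * D) % M
assert R_over == M - (D - R)
assert R_over % D != R
```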
② The algorithm always works in full (double) precision mode. In the RNS, an increased word-length simply requires some more channels. In a dedicated h/w implementation, all the channels can execute concurrently, fully leveraging the parallelism inherent in the system. Hence, the incremental delay (as a result of doubling the word-length) is minimal: since doubling the word-length adds one level to each adder/accumulation-tree (within each RNS-channel), the incremental delay is ≈ O(1).
§4.7 Convergence Division via Reciprocation to Handle Arbitrary, Dynamic Divisors
Let X be a 2n-bit dividend:
X = Xu·2ⁿ + Xl (125)
where Xu is the upper half (the more-significant n bits) and Xl is the lower half. Let D be an n-bit-long divisor. Then, the quotient Q is
Since Xl and D are both n-bit-long numbers (so that Xl < 2ⁿ and D ≥ 2ⁿ⁻¹), Xl < 2D (127)
The remaining term is
In the inequality above, the lower bound ½ follows from the fact that the leading bit of an n-bit-long number D is 1 (if not, the word-length of D would be smaller than n). Also note that the maximum value of the n-bit integer D is (2ⁿ − 1), which yields the upper bound 1 on Df.
From the last equality it is clear that in order to correctly evaluate └XuDf
To evaluate Df
Note that the integer Yint = (2ⁿ·Y) = (2ⁿ − D) (133)
(134)
Substituting Df in terms of Y, yields
In the last set of equalities, since the numerator and the denominator of each successive expression are both multiplied by the same factor, of the form (1 + Y^(2^i)), the original value of the reciprocal does not change. Also note that each successive multiplication by a factor of this form doubles the number of leading ones in the denominator. As a result, the denominator of the successive expressions in Eqn (135) approaches the value 1 from below (it becomes 0.11111 . . . ).
It is well known [6] that when the number of leading ones in the denominator exceeds the word-length (i.e., n bits), the error ε in the numerator also satisfies the bound |ε| < 2⁻ⁿ, and the iterations can be stopped. In other words, when the iteration count t leads to the satisfaction of the constraint
the iterations can be stopped and the approximation
can be used. Thus, the number of iterations in a convergence division is O(lg n).
In contrast, any digit-serial division fundamentally requires O(n) steps.
It turns out that the above convergence method is equivalent to Newton-style convergence iterations (for details, please see any textbook on Computer Arithmetic [6,7]). Newton's method is quadratic, which means that the error
εₙ₊₁ after the (n+1)th iteration ≈ O((εₙ)²) (139)
which in turn implies that the number of accurate bits doubles after each iteration (which is why convergence-division is the method of choice in high-speed implementations).
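The quadratic (bit-doubling) behavior is easy to observe with the standard Newton reciprocal iteration y ← y·(2 − d·y); the sketch below uses Python's decimal module at an arbitrarily chosen precision:

```python
from decimal import Decimal, getcontext

# Quadratic convergence of the Newton reciprocal iteration
#     y_{i+1} = y_i * (2 - d * y_i),   y -> 1/d
# The error obeys e_{i+1} = d * (e_i)**2, so the number of accurate
# bits roughly doubles every iteration.
getcontext().prec = 60
d = Decimal('0.8125')        # denominator in [1/2, 1), as in the text
y = Decimal(1)               # crude initial estimate
for i in range(6):
    y = y * (2 - d * y)
    err = abs(y - 1 / d)
    print(i, err)            # error shrinks quadratically each step
assert abs(y - 1 / d) < Decimal(10) ** -30
```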
From (138) it is clear that
Dfinv ≈ (1 + Y)(1 + Y²) . . . (1 + Y^(2^t))
Accordingly, the products need to be accumulated so as to yield a precision of 2n bits at the end. Since a product of two n-bit numbers (which includes a square) can be up to 2n bits long, the lower half of the double-length product must be discarded, retaining only the n most-significant bits at every step. Each such retention of the n most-significant bits is tantamount to division by a constant, viz., 2ⁿ. Thus the QFS algorithm needs to be invoked at every step in the convergence-division method. In general, the SODIS algorithm is also needed at each step. Those familiar with the art will appreciate that, using all the preceding algorithms unveiled herein, an ultrafast convergence division via reciprocation can be realized without ever having to leave the residue domain at any intermediate step.
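The whole flow can be sketched outside the residue domain in plain Python integers; every n-bit truncating shift below models the division-by-2ⁿ step that, inside the RNS, would be carried out by the QFS algorithm. The function name and the fixed-point layout are illustrative assumptions:

```python
def converge_divide(X, D, n):
    """Convergence division via reciprocation: floor(X / D) for a 2n-bit
    dividend X and an n-bit divisor D whose leading bit is set, i.e.
    2**(n-1) <= D < 2**n.

    Values are kept in n-bit fixed point: a real value v is stored as the
    integer floor(v * 2**n), and every double-length product is truncated
    back by a shift of n bits (the step performed by the QFS algorithm in
    the residue domain).  Illustrative sketch only.
    """
    t = max(1, n.bit_length())   # O(lg n) iterations suffice
    Y = (1 << n) - D             # Yint = 2**n - D, per Eqn (133)
    num = X << n                 # numerator in fixed point
    den = D                      # fixed-point Df; approaches 0.111...1
    for _ in range(t):
        F = (1 << n) + Y         # factor (1 + Y**(2**i)) in fixed point
        num = (num * F) >> n     # retain the n most-significant bits
        den = (den * F) >> n
        Y = (Y * Y) >> n         # next power: Y**(2**(i+1))
    q = num >> (2 * n)           # num ~ X * 2**(2n) / D (under-estimate)
    while X - q * D >= D:        # fix up the ulps lost to truncation
        q += 1
    return q

assert converge_divide(123456789, 40000, 16) == 123456789 // 40000
```

Truncation only ever under-estimates the accumulated product, so the final quotient estimate is at most floor(X/D) and the short fix-up loop suffices.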
§5 Application Scenarios
The difficulty of implementing some basic canonical operations such as base-extension, sign-detection, scaling, and division has prevented the widespread adoption of the RNS. The algorithms and apparata unveiled herein streamline and expedite all these fundamental operations, thereby removing all roadblocks and unleashing the full potential of the RNS. Since a number representation is a fundamental attribute that underlies all computing systems, expediting all arithmetic operations using the RNS can potentially affect all scenarios that include computing systems. The scenarios that are most directly impacted are listed below.
5.1 Distinctions and Novel Aspects of this Invention
In some designs, the operation of selecting the set of moduli is performed so as to enable an exhaustive pre-computation and look-up strategy that covers all possible inputs. In some designs, the determination of the reconstruction coefficient may be performed in hardware such that the determination is upper-bounded by a delay of O(log n), where n is an integer representing the wordlength.
It is noted that in one or more exemplary embodiments described herein, the functions and modules described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
As utilized in the subject disclosure, the terms “system,” “module,” “component,” “interface,” and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. Components can include circuitry, e.g., processing unit(s) or processor(s), that enables at least part of the functionality of the components or other component(s) functionally connected (e.g., communicatively coupled) thereto. As an example, a component may be, but is not limited to being, a process running on a processor, a processor, a machine-readable storage medium, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
The aforementioned systems have been described with respect to interaction between several components and modules. It can be appreciated that such systems, modules and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components also can be implemented as components communicatively coupled to other components rather than included within parent component(s). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components and may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
Moreover, aspects of the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or computing components to implement various aspects of the claimed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., card, stick, key drive). Additionally, it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data, such as those used in transmitting and receiving voice mail or in accessing a network such as a cellular network. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope or spirit of what is described herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the described embodiments are intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
This patent claims the benefit of priority from U.S. Provisional Patent Application Ser. No. 61/311815, entitled “Ultrafast Residue Number System Using Intermediate Fractional Approximations That Are Rounded Directionally, Scaled and Computed,” filed on Mar. 9, 2010, incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
61311815 | Mar 2010 | US