The present invention relates to processors for carrying out encryption and decryption operations that require long length operands. In particular the processors are for communication systems and more particularly, but not exclusively, for cable set-top boxes, satellite set-top boxes, DTVs, modems and home gateways, which are increasingly required to encrypt and decrypt data using symmetric and asymmetric ciphers that are based on such long length operands.
The devices above, hereinafter set top boxes or STBs, are used to receive data from cable or satellite links, from a home network, digital still or video cameras, or any other kind of network device. The STB may also send data to the home network, digital still or video camera, or any other kind of network device. The data include a number of compressed and uncompressed video, audio, still image and data channels, and may be either scrambled or unscrambled.
Public key cryptography is a form of cryptography which generally allows users to communicate securely without having prior access to a shared secret key. This is done by using a pair of cryptographic keys, designated as public key and private key, which are related mathematically. A message encrypted with one key of the pair can only be decrypted with the other key, and vice versa.
In public key cryptography, the private key is kept secret, while the public key may be widely distributed. In a sense, one key “locks” the message, while the other is required to unlock it. It should not be feasible to deduce the private key of a pair given the public key, and in high quality algorithms no such technique is known.
There are a number of ways in which public key systems may be used, including:
Typically, public key techniques are much more computationally intensive than purely symmetric algorithms, but the judicious use of these techniques enables a wide variety of applications. In particular, key agreement means that the computationally intensive public key technique is only used to a minimal extent at the start of the transaction and less demanding techniques can be used later on.
Authentication between network devices is usually performed using asymmetric ciphers, after which a symmetric encryption key is generated and exchanged via a secure channel.
RSA is a well-known algorithm for public-key encryption. It is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys. The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. The RSA problem is defined as the task of taking e-th roots modulo a composite n: recovering a value m such that m^e = c mod n, where (e, n) is an RSA public key and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. RSA keys are typically 1024-2048 bits long, and it is generally presumed that RSA is secure if n is sufficiently large.
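By way of illustration only, the following toy C sketch shows the square-and-multiply modular exponentiation that underlies the m^e mod n operation; the 32-bit types and the name modexp are assumptions made for the example, since real RSA operands are 1024-2048 bits long and require the multi-word arithmetic discussed below.

#include <stdint.h>

/* Toy computation of (m^e) mod n by square-and-multiply; assumes n > 1.
 * With real RSA moduli every multiplication and reduction below would
 * itself be a long-operand operation. */
static uint32_t modexp(uint32_t m, uint32_t e, uint32_t n)
{
    uint64_t result = 1;
    uint64_t base = m % n;
    while (e > 0) {
        if (e & 1u)
            result = (result * base) % n;   /* multiply step */
        base = (base * base) % n;           /* square step */
        e >>= 1;
    }
    return (uint32_t)result;
}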
Those skilled in the art will appreciate that the computational task of enciphering and deciphering messages using symmetric and asymmetric cryptography requires an efficient processing unit. Preferably such a device should be able to execute the necessary tasks in real time.
A known general purpose microcontroller architecture, which is well described in the art, may be considered for the implementation of encryption/decryption algorithms. Such a known microcontroller may contain multiple CPUs and ALUs, each or all of which are implemented on a single silicon die.
As described in the art, a general purpose microprocessor (sometimes abbreviated μP) is a programmable digital electronic component that incorporates the functions of a central processing unit (CPU) on a single integrated circuit (IC). The arithmetic logic unit (ALU) is a digital circuit that performs arithmetic operations (such as addition, subtraction, etc.) and logic operations (such as exclusive OR) between two numbers. The ALU is a fundamental building block of the central processing unit of a computer. Modern general purpose microprocessors incorporate complex ALUs. These ALUs can perform most 32-bit or 64-bit operations in a single cycle.
Long arithmetic operations, such as those required in asymmetric encryption and decryption, are composed of multiple shift/add/divide procedures, which are repeated over and over, each time producing partial results and a carry. An implementation of an encryption algorithm, such as RSA, on a general purpose microcontroller involves complex data supply software (multiple fetch/store operations) and thus results in low utilization of the ALU resources. Therefore, a general purpose processor is not adequate for these types of calculations in performance constrained environments. Those skilled in the art will appreciate that the implementation of complex encryption/decryption algorithms on a general purpose CPU suffers from low throughput and inefficient memory bandwidth. Additionally, such an implementation, when performing long arithmetic instructions, occupies the CPU resources, and thus the CPU may not be used for other tasks. An additional disadvantage of such an implementation is the high cost of integration, software development, qualification, time to market, etc. Another drawback is the high power dissipation, low fault tolerance and short lifetime (MTBF). On top of that, software implementations are susceptible to a potential security breach through a so-called “side channel attack”, a method of breaking secure systems and recovering secrets through, for example, power consumption analysis of microprocessor-based ciphers.
As a consequence, such operations are implemented in hardware using specialized ALU units that are specifically designed for the extra long operand length. However, such devices still give rise to large memory access bandwidth requirements.
According to one aspect of the present invention there is provided an arithmetic and logic unit for carrying out arithmetic or logic operations on long operands, said long operands having an operand word length, the unit comprising:
an operation unit comprising circuitry for carrying out selectable ones of a plurality of pre-defined arithmetic or logical operations on a first number of bits determined by said operand word length;
a fetch and write unit comprising direct memory access circuitry for fetching a second number of bits of operand data by direct access from an external memory and for writing results to memory, said second number being set by a predetermined memory access width;
said second number being smaller than an operand length, and said fetch and write unit being controllable to carry out fetch operations for a further second number of bits of a long operand while a current part of said operand is being processed in said operation unit, thereby to hide memory access latency.
According to a second aspect of the present invention there is provided a multi-word arithmetic device for executing modular arithmetic on multi-word integers, in accordance with instructions from an external device, the multi-word arithmetic device comprising:
an operation unit comprising a processing location, the operational unit being configured for carrying out processing on bits at said processing location, the processing comprising selectable ones of a plurality of pre-defined arithmetic or logical operations, the processes being defined for a first number of bits determined by said operand word length;
a fetch and write unit comprising direct memory access circuitry for fetching a second number of bits of operand data by direct access from an external memory and for writing results to memory, said second number being set by a predetermined memory access width;
said second number being smaller than or equal to said first predetermined number, and said direct memory access circuitry being configured to deliver said second number of bits directly to said processing location.
According to a fourth aspect of the present invention there is provided an arithmetic and logic unit for carrying out arithmetic or logic operations on long operands having an operand word length, the unit comprising:
an operation unit comprising a processing location, the operational unit being configured for carrying out processing on bits at said processing location, the processing comprising selectable ones of a plurality of pre-defined arithmetic or logical operations, the processes being defined for a first number of bits determined by said operand word length;
a fetch and write unit comprising direct memory access circuitry for fetching a second number of bits of operand data by direct access from an external memory and for writing results to memory, said second number being set by a predetermined memory access width and being smaller than said first number;
and wherein said operation unit comprises a dedicated register for each one of said plurality of predefined arithmetic or logical operations, thereby to allow more than one of said plurality of predefined arithmetic or logical operations to be carried out in parallel.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware or by software on any operating system of any firmware or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
In the drawings:
The present embodiments comprise a system and a method for performing long operand arithmetic calculations of the kind required by public key and asymmetric ciphers, and the implementation of a Very Long Data Word Arithmetic Logic Unit, hereinafter VLALU, for a Security Processor device. These ciphers include, but are not limited to, RSA, Elliptic Curve Cryptography, and ACE.
The present embodiments may use direct memory access by the very long data word ALU unit, and may further use processing time to hide memory latency.
The principles and operation of a system and method according to the present invention may be better understood with reference to the drawings and accompanying description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. In addition, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
Reference is now made to
The operation unit 200 comprises circuitry for carrying out any of several pre-defined arithmetic or logical operations on a first number of bits determined by the operand word length. The first number of bits is preferably smaller than the operand word length but a multiple of the number of bits fetched by the individual fetch operation.
The DMA unit 300 is a fetch and write unit which comprises direct memory access circuitry for fetching a second number of bits of operand data by direct access from an external memory and for writing results to memory. The second number is the number of bits that can be obtained in a single fetch instruction, and is typically set by the memory access width. The memory access width is defined by the width of the system bus, the memory architecture and the definition of the fetch instruction.
The second number is smaller than the operand length for any practical computational device. As explained in the introduction, in some prior art systems multiple fetches may be carried out until the entire operand is present, and only then can the calculation begin. Such a procedure, however, takes time, and in addition sufficient register space is required on the unit to store the entire operand, increasing the amount of expensive real estate on the chip needed by unit 100. The present embodiments operate on parts of the operand at a time, and generate partial results and carries. The rest of the result is produced when the remainder of the operand arrives. Thus register space is saved on the unit 100 since at no time is the entire operand stored on the unit. In an embodiment the partial results are placed back in the memory after processing so that the complete result is also never stored on the unit.
The fetch and write unit is free to carry out additional fetch operations for a further second number of bits of a long operand while a current part of the operand is being processed in the operation unit. As a result fetching and processing are carried out in parallel and the memory access latency is hidden.
Likewise the results, as produced, may be stored directly back into the memory, again by direct memory access. Storage of results in the memory saves register space on the chip, and memory latency is hidden by carrying out the storage operation at the same time as generation of the next part result.
In one embodiment the number of bits fetched in a single operation, the second predetermined number, is the number of bits that is processed each time. In another embodiment the first predetermined number is selected so that the time required for the arithmetic or logical operations is greater than, or more preferably just large enough to mask, the time required for memory fetch and write operations. As a result the memory latency is most effectively hidden.
Preferably, the first number is selected to optimize between chip area utilization, the time required for fetch and write operations, and power utilization. The larger the first number, the more bits are processed in parallel, so the larger the processor has to be, thus using more power. More fetch operations are also needed to feed the processor with enough bits to operate on, and the operations themselves take longer. Longer operations may imply multi-cycle operations or lengthening of the clock cycle. Lengthening of the clock cycle slows the entire chip down. A smaller first number means that fewer memory fetches are needed and each operation is quicker, but more operations are needed overall to cover the entire operand.
In one case the operation is addition and the fetch and write unit is controllable in the case of addition to fetch each second predetermined number of bits of a current operand prior to processing, and there are two or more prefetch registers each to store a part of the operand from a single fetch until required for processing. At no point is the entire operand stored however.
In another case the operation is multiplication, and the fetch and write unit is controllable to fetch the second predetermined number of bits of each of two multiplication operands, and to complete multiplication sub-operations on all bits of one of the two multiplication operands before fetching new bits of the other of the two multiplication operands.
Other cases are division and modulus operations.
In one embodiment the unit 100 comprises a single temporary register which is shared between all of the operations. In this case it is not possible to carry out two different operations in parallel. However, in another embodiment each operation has its own dedicated temporary register. Different operations may therefore be carried out on the same unit 100 in parallel, at the cost of a slight increase in chip area. In this case there may be conflicts between the operations for use of the DMA unit 300. There is therefore provided a direct memory access arbiter 301 for arbitrating between operations to dynamically assign direct memory access between the operations and thus avoid bus conflict.
In one aspect of the invention the bits fetched are placed directly at a processing register or in a prefetch register to wait for further fetch operations to be completed so that sufficient bits are available for processing. However at no point is the entire operand stored.
Reference is made to
The VLALU device 100 further comprises a configuration and monitoring interface 110, and a direct memory access (DMA) interface 111. The VLALU device 100 can operate independently. Alternatively, an external controller may use the configuration and monitoring interface 110 to configure the VLALU device 100 and to monitor its status.
As discussed earlier, long arithmetic operations are typically used for the implementation of symmetric/asymmetric encryption and decryption. In particular, the following five basic operations are used: addition, subtraction, multiplication, division and modulo.
The 2's complement addition operation is performed iteratively on smaller parts of the operands. The sum is then derived by concatenating all the intermediate additions, in the following manner:
First, the 2's complement representations of the two operands (a, b) are divided into nk sub-blocks of one bit each, as follows:
a = {a_{nk−1}, . . . , a_2, a_1, a_0}
b = {b_{nk−1}, . . . , b_2, b_1, b_0}
The operands can also be represented by n sub-blocks, each with k bits, as follows:
a(i) = {a_{(i+1)k−1}, . . . , a_{ik}}
b(i) = {b_{(i+1)k−1}, . . . , b_{ik}}
For simplicity, m/k (the memory access width divided by the adder operand width) is assumed to be an integer. The constant k preferably reflects the maximum number of bits allowed in the Adder/Subtractor unit 101. High values of k result in higher performance per cycle, since the calculations are performed on more bits simultaneously. Lower values of k result in a smaller area taken up by the unit on the chip and a lower delay per operation. That is to say, the resulting addition takes less time since there are fewer bit positions for the carry to propagate through, but each operation covers less of the operand and so more operations are needed per operand.
The number of sub-blocks n is calculated by dividing the total number of bits in the operands by the number of bits of the adder, k. If the division does not result in an integer, the result is rounded up to the nearest integer.
The addition is performed by calculating the following formula for each sub-block i, starting at i=0 and c0=0:
{c_{i+1}, s(i)} = a(i) + b(i) + c_i
The intermediate result is stored in a temporary register t, whose width is (m+1) bits. Exactly m/k intermediate results are required to fill the register t. Each addition uses the carry bit generated by the previous addition iteration. Following m/k additions, the least significant m bits of t are written to memory. The most significant bit of t is the carry bit for the next addition.
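As a minimal illustration of the scheme just described, the following C sketch performs the iterative addition with carry, assuming the long operands are held in memory as arrays of 32-bit words with the least significant word first; for simplicity the sketch collapses the adder width k and the memory access width m into one machine word, and the name vlalu_add is hypothetical.

#include <stdint.h>
#include <stddef.h>

/* s = a + b over n_words-word operands. Each iteration produces the
 * partial sum s(i) and the carry c_{i+1} for the next iteration, and the
 * final carry bit is returned, as in the formula above. */
static uint32_t vlalu_add(uint32_t *s, const uint32_t *a,
                          const uint32_t *b, size_t n_words)
{
    uint32_t carry = 0;                              /* c_0 = 0 */
    for (size_t i = 0; i < n_words; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;  /* {c_{i+1}, s(i)} */
        s[i] = (uint32_t)t;                          /* least significant bits */
        carry = (uint32_t)(t >> 32);                 /* carry for the next step */
    }
    return carry;
}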
When used in conjunction with the DMA controller 104, the adder requires two read operations of m bits to two operand registers, one write operation of m bits for the result, and m/k additions for every m/k sub-blocks. By varying m, an optimal trade-off between memory access overhead, silicon area and power consumption can be reached. Therefore, the performance (bits per cycle) of the addition command is as follows:
In one of the preferred embodiments of the invention, DMA may be used to prefetch the operands prior to their use. This would require at least two shadow registers, each of m bits in size. The shadow registers are also referred to herein as prefetch registers. The benefit from prefetching the operands is the ability to reach full utilization for the adder, in that calculations may proceed on one part of the operand while another part is being fetched. The performance is thus:
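The shadow-register scheme may be pictured with the following single-threaded C sketch, in which the DMA prefetch of the next m bits is modelled by a plain memcpy into the idle half of a double buffer; in the hardware the prefetch and the additions proceed concurrently, which is what hides the memory latency. The buffer size, the name add_with_prefetch and the assumption that the operand length is a multiple of the fetch width are illustrative only.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define WORDS_PER_FETCH 4  /* one m-bit fetch expressed as 32-bit words (assumed) */

/* sum = a + b, consuming one fetch-sized chunk while the next chunk is
 * "prefetched" into the idle shadow buffer. Assumes n_words is a non-zero
 * multiple of WORDS_PER_FETCH. Returns the final carry bit. */
static uint32_t add_with_prefetch(uint32_t *sum, const uint32_t *a,
                                  const uint32_t *b, size_t n_words)
{
    uint32_t buf_a[2][WORDS_PER_FETCH], buf_b[2][WORDS_PER_FETCH];
    int cur = 0;
    uint64_t carry = 0;

    memcpy(buf_a[cur], a, sizeof buf_a[cur]);        /* prime the shadow registers */
    memcpy(buf_b[cur], b, sizeof buf_b[cur]);

    for (size_t off = 0; off < n_words; off += WORDS_PER_FETCH) {
        int next = cur ^ 1;
        if (off + WORDS_PER_FETCH < n_words) {
            /* prefetch the next m bits; in hardware this overlaps with the
             * additions below instead of preceding them */
            memcpy(buf_a[next], a + off + WORDS_PER_FETCH, sizeof buf_a[next]);
            memcpy(buf_b[next], b + off + WORDS_PER_FETCH, sizeof buf_b[next]);
        }
        for (size_t i = 0; i < WORDS_PER_FETCH; i++) {
            carry += (uint64_t)buf_a[cur][i] + buf_b[cur][i];
            sum[off + i] = (uint32_t)carry;          /* partial result to memory */
            carry >>= 32;                            /* keep only the carry bit */
        }
        cur = next;
    }
    return (uint32_t)carry;
}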
A general purpose processor would achieve much lower performance, due to the data supply overhead. Each k-bit operation requires at least two load instructions, one add operation, one store operation, three instructions for updating the memory pointers and two instructions for flow control. If all of the instructions take only one cycle to complete, the resulting performance is:
The following example shows the mathematical operation of the adder:
k=2; m=4; a=12h; b=36h
a(2)=1; a(1)=0; a(0)=2
b(2)=3; b(1)=1; b(0)=2
{c_1, s(0)} = 2+2+0 = 4 = {c_1=1, s(0)=0}
{c_2, s(1)} = 0+1+1 = 2 = {c_2=0, s(1)=2}
{c_3, s(2)} = 1+3+0 = 4 = {c_3=1, s(2)=0}
{c_4, s(3)} = 0+0+1 = 1 = {c_4=0, s(3)=1}
s = a+b = 1·2^6 + 0·2^4 + 2·2^2 + 0·2^0 = 64+8 = 72 = 48h
Herein, “h” indicates use of hexadecimal notation.
The 2's complement subtraction operation is performed in a substantially similar manner to the addition operation, with the following changes:
{c_{i+1}, s(i)} = a(i) + ~b(i) + c_i
c_0 = 1
where ~b(i) denotes the bit-wise complement of b(i).
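Under the same assumptions as the addition sketch above (32-bit words, least significant word first, hypothetical function name), the subtraction differs only in complementing b and starting with a carry of 1:

#include <stdint.h>
#include <stddef.h>

/* s = a - b in 2's complement: a plus the bit-wise complement of b with an
 * initial carry c_0 = 1. A final carry of 1 indicates that no borrow
 * occurred, i.e. a >= b when the operands are interpreted as unsigned. */
static uint32_t vlalu_sub(uint32_t *s, const uint32_t *a,
                          const uint32_t *b, size_t n_words)
{
    uint32_t carry = 1;                                     /* c_0 = 1 */
    for (size_t i = 0; i < n_words; i++) {
        uint64_t t = (uint64_t)a[i] + (uint32_t)~b[i] + carry;
        s[i] = (uint32_t)t;
        carry = (uint32_t)(t >> 32);
    }
    return carry;
}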
The following example shows the mathematical operation of the subtractor:
k=2; m=4; a=48h; b=12h
a(3)=1; a(2)=0; a(1)=2; a(0)=0
b(3)=0; b(2)=1; b(1)=0; b(0)=2
~b(3)=3; ~b(2)=2; ~b(1)=3; ~b(0)=1
{c_1, s(0)} = 0+1+1 = 2 = {c_1=0, s(0)=2}
{c_2, s(1)} = 2+3+0 = 5 = {c_2=1, s(1)=1}
{c_3, s(2)} = 0+2+1 = 3 = {c_3=0, s(2)=3}
{c_4, s(3)} = 1+3+0 = 4 = {c_4=1, s(3)=0}
s = a−b = 0·2^6 + 3·2^4 + 1·2^2 + 2·2^0 = 48+4+2 = 54 = 36h
The 2's complement multiplication of two large numbers can be a complex and time-consuming task. The VLALU implements the multiplication operation by using an undersized operand multiplier iteratively. In one embodiment of the invention, the multiplier may multiply k bits by l bits. In the preferred embodiment of the invention, the multiplier is symmetric, and may multiply k bits by k bits, resulting in a 2k-bit product. In the preferred embodiment, the multiplication operation is performed in the following manner:
First, the two operands are divided into n sub-blocks of m bits each, as follows:
a ≡ {a_{nm−1}, . . . , a_2, a_1, a_0}
b ≡ {b_{nm−1}, . . . , b_2, b_1, b_0}
a(i) ≡ {a_{(i+1)m−1}, . . . , a_{im}}
b(i) ≡ {b_{(i+1)m−1}, . . . , b_{im}}
a%(i) ≡ {a_{(i+1)k−1}, . . . , a_{ik}}
b%(i) ≡ {b_{(i+1)k−1}, . . . , b_{ik}}
The constant k preferably reflects the maximum number of bits allowed for the multiplier of the Multiplier unit 102. High values of k result in higher performance per cycle, whereas lower values of k result in smaller area and lower delay. That is to say, as with the adder, smaller parts of the operand are taken so that each operation is quicker but more operations are required.
The number of sub-blocks n is calculated by dividing the total number of bits in the operands by the memory width, m. If the division does not result in an integer, the result is rounded up to the nearest integer.
The calculation starts by setting the 2nm-bit product result in memory to zero. Then, the multiplication is performed by calculating the following formula for each m-bit sub-block i, j, starting at i=j=0:
p(i)(j) = a(i)·b(j)
Calculating p(i)(j) involves first reading the two m-bit operands a(j) and b(i) from memory into two m-bit temporary registers. Then, the operands are multiplied using a k-bit multiplier iteratively. Each intermediate 2k-bit result is added to a 2m-bit temporary register that holds p(i)(j). Finally, the 2m-bit result is added to the 2nm-bit multiplication result in memory. p(i)(j) is calculated for all possible values of i and j. In order to reduce the number of reads of the m-bit operands, all values of i are traversed before another value of j is used. This is similar to a nested "for" loop in the "C" programming language, where i is the counter of the inner loop and j is the counter of the outer loop.
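The traversal order described above corresponds to the following C sketch of the schoolbook multiplication, which, like the earlier sketches, collapses the m-bit blocks into 32-bit machine words and accumulates each p(i)(j) into the product at word offset i+j; the k-bit inner multiplier is illustrated separately below, and the name vlalu_mul is hypothetical.

#include <stdint.h>
#include <stddef.h>

/* p (2*n_words words, cleared first) = a * b, where a and b are each
 * n_words 32-bit words, least significant word first. The inner loop runs
 * over i (blocks of a) and the outer loop over j (blocks of b), so each
 * block of b is fetched only once per outer pass, as described above. */
static void vlalu_mul(uint32_t *p, const uint32_t *a,
                      const uint32_t *b, size_t n_words)
{
    for (size_t i = 0; i < 2 * n_words; i++)
        p[i] = 0;                                   /* zero the 2nm-bit result */

    for (size_t j = 0; j < n_words; j++) {          /* outer loop: j */
        uint32_t carry = 0;
        for (size_t i = 0; i < n_words; i++) {      /* inner loop: i */
            uint64_t t = (uint64_t)a[i] * b[j]      /* p(i)(j) = a(i)*b(j) */
                       + p[i + j] + carry;          /* accumulate into the result */
            p[i + j] = (uint32_t)t;
            carry = (uint32_t)(t >> 32);
        }
        p[j + n_words] = carry;                     /* propagate the final carry */
    }
}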
Multiplying two m-bit operands is performed using a k-bit multiplier iteratively. The m-bit operands a(j) and b(i) are divided again into smaller k-bit operands:
Reference is made to
A temporary register t is used to store the result of the multiplication of the various undersized or part operands in a(j) and b(i). First, the temporary register is cleared to zero. Then, for every i% and j% in the range 0 ≤ i%, j% ≤ m/k−1, the following multiplication is performed:
p_{i%,j%} = a_{i%}·b_{j%}
where a_{i%} and b_{j%} denote the i%-th and j%-th k-bit sub-blocks of the two m-bit operands currently being multiplied. After each k-bit multiplication, the intermediate 2k-bit product p_{i%,j%} is shifted left by (i%+j%)·k bits and added to the temporary register t.
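A possible C model of this inner step, taking m = 32 and k = 16 purely for illustration, builds one m-bit by m-bit sub-product from four k-bit multiplications, shifting each intermediate 2k-bit product left by (i%+j%)·k bits before accumulating it into the 2m-bit temporary register t; the function name is again hypothetical.

#include <stdint.h>

/* Returns the 2m-bit product of two m-bit blocks (m = 32 here) using only
 * a k-bit multiplier (k = 16 here), following the procedure above. */
static uint64_t mul_with_k_bit_multiplier(uint32_t a_block, uint32_t b_block)
{
    uint64_t t = 0;                                        /* 2m-bit register t */
    for (unsigned jq = 0; jq < 2; jq++) {                  /* j% over b's k-bit parts */
        for (unsigned iq = 0; iq < 2; iq++) {              /* i% over a's k-bit parts */
            uint32_t ak = (a_block >> (16 * iq)) & 0xFFFFu;    /* a_{i%} */
            uint32_t bk = (b_block >> (16 * jq)) & 0xFFFFu;    /* b_{j%} */
            uint64_t pk = (uint64_t)ak * bk;               /* 2k-bit partial product */
            t += pk << (16 * (iq + jq));                   /* shift by (i%+j%)*k bits */
        }
    }
    return t;                                              /* equals a_block * b_block */
}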
The performance of the multiplier for one m-bit subproduct is as follows:
The above algorithm requires two m-bit registers for each partial operand, and one 2m-bit register for the partial product p(i)(j). Additionally, the algorithm requires a k-bit multiplier and a 2k-bit adder.
In the preferred embodiment of the invention, mitigating memory latency is possible by using smaller k and/or larger m values. By using prefetching and delayed writes, memory overhead is completely avoided. This is possible when the time required for calculations is greater than the time required for memory overhead:
Usually the multiply and add operations have a throughput of one result per cycle. In this case, the condition becomes:
The performance without memory overhead is therefore:
For the example values of
and t_{multiplier,k} = t_{add,2k} = 1, the performance is
A general purpose processor would achieve much lower performance, due to the data supply overhead. Each k-bit partial product operation would require at least two load instructions to load two k-bit values, one multiply operation, one add operation, one store operation, three instructions for updating the memory pointers and two instructions for flow control. The resulting performance is:
For an m-bit product, the above computation should be repeated
times for the different k-bit operands, resulting in the following performance:
The following example shows the mathematical operation of the multiplier:
k=2; m=4; a=18=12h; b=72=48h
a(1)=1; a(0)=2
a_1(1)=0; a_0(1)=1; a_1(0)=0; a_0(0)=2
b(1)=4; b(0)=8
b_1(1)=1; b_0(1)=0; b_1(0)=2; b_0(0)=0
p_{0,0}(0)(0) = a_0(0)·b_0(0) = 2·0 = 0
p_{1,0}(0)(0) = a_1(0)·b_0(0) = 0·0 = 0
p_{0,1}(0)(0) = a_0(0)·b_1(0) = 2·2 = 4
p_{1,1}(0)(0) = a_1(0)·b_1(0) = 0·2 = 0
p(0)(0) = 0·2^4 + 4·2^2 + 0·2^2 + 0·2^0 = 16
p(1)(0) = 0·2^4 + 2·2^2 + 0·2^2 + 0·2^0 = 8
p(0)(1) = 0·2^4 + 2·2^2 + 0·2^2 + 0·2^0 = 8
p(1)(1) = 0·2^4 + 1·2^2 + 0·2^2 + 0·2^0 = 4
ab = 4·2^8 + 8·2^4 + 8·2^4 + 16·2^0 = 1024 + 128 + 128 + 16 = 1296
Reference is now made to
The dividend, the divisor, the quotient and the remainder are divided into single bit variables, as follows:
X = {X_{n−1}, . . . , X_1, X_0}
D = {D_{n−1}, . . . , D_1, D_0}
q = {q_{n−1}, . . . , q_1, q_0}
r = {r_{n−1}, . . . , r_1, r_0}
The algorithm starts with i = nx−nd+1, where nx and nd denote the numbers of bits of X and D respectively, and Y(i) = {X_{nx−1}, . . . , X_{i−1}}, that is, the nd most significant bits of the dividend X. The first stage of the division is the comparison stage, in which Y(i) is compared with the divisor D: if Y(i) is greater than or equal to D, the quotient bit qi becomes 1, otherwise it becomes 0.
The second stage of the division is the subtraction stage. In the case of qi=1, the divisor D is subtracted from Y(i). However, if qi=0, nothing is done at this stage.
The third stage of the division is the shifting stage. At this stage, Y(i) is shifted left by one bit, the least significant bit of Y(i−1) becomes X_{i−2}, and i is decreased:
Y(i−1) = {X(i), X_{i−2}}
After all the bits of X have been shifted into Y, the variable Y(0) holds the remainder result, r.
r=Y(0)
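The three stages may be modelled by the following C sketch of a bit-serial restoring division, in which a single 64-bit working variable stands in for the multi-word Y(i); one bit of the dividend X is shifted in per iteration, the comparison decides the quotient bit, and the subtraction is performed only when that bit is 1. The 32-bit operand width and the name long_divide are illustrative assumptions.

#include <stdint.h>

/* Computes q = x / d and r = x % d for d != 0 by the comparison,
 * subtraction and shifting stages described above. */
static void long_divide(uint32_t x, uint32_t d, uint32_t *q, uint32_t *r)
{
    uint64_t y = 0;                            /* Y(i): the working remainder */
    *q = 0;
    for (int i = 31; i >= 0; i--) {
        y = (y << 1) | ((x >> i) & 1u);        /* shifting stage: next bit of X */
        if (y >= d) {                          /* comparison stage */
            y -= d;                            /* subtraction stage */
            *q |= (uint32_t)1 << i;            /* quotient bit becomes 1 */
        }                                      /* otherwise the quotient bit stays 0 */
    }
    *r = (uint32_t)y;                          /* Y(0) holds the remainder */
}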
All versions of Y(i) are stored in a memory structure in the same location. For long operands, a memory structure of m-bit width is used to store the operands. Thus, in the case of long operands, Y(i) may span many m-bit boundaries.
In the preferred embodiment of the invention, the location of the most significant bit of D, D_{nd−1}, is determined before the division begins.
In the comparison stage, Y(i) is compared with D. Since the operands may span many m-bit words, more than one cycle may be required to determine the result of the comparison.
The comparison begins with the m most significant bits of Y(i) and D. If the m most significant bits of Y(i) are larger than the m most significant bits of D, the quotient bit qi becomes 1. If they are smaller, the quotient bit qi becomes 0. In the case that they are equal, another comparison is made for the next m bits of Y(i) and D, and so on. The probability of requiring another comparison for random numbers is 2^−m. Therefore, for every m bits, the comparison requires two loads from memory and a comparator of m bits.
The subtraction stage is performed by subtracting D from Y(i). Since D and Y(i) reside in the same m-word aligned offset in memory, no additional multiplexers are needed. The subtraction may take more than one cycle, depending on the number of m-boundaries within D.
Each shift operation involves reading the next bit out of X, and shifting Y(i) left by one bit. This is done by an m-bit shift register.
In another preferred embodiment of the invention, Y(i) is never shifted, but rather it is stored at discontinuous locations. This embodiment requires that the most significant bits of Y(i) reside in an nd-bit temporary variable in the memory structure, while the least significant bits reside in the original dividend, X. As a result, performance is increased, since no shift is required. However, the increased performance is achieved at the expense of complexity, which results in additional chip area for the comparison and the subtracting stages.
Additional hardware may be used to increase the speed of the divider. In one of the preferred embodiments of the invention, the most significant non zero bit of Y(i) is detected. Then, the shifter parameter s is calculated by the difference between the location of the most significant bit of D and the location of the most significant bit of Y(i). Hardware can be conserved if s is limited to a small value. The shifting is then done using two steps. The first involves reading the s most significant bits of X. This may involve more than one read cycle since these s bits may span more than one m-bit memory word. Then, Y(i) is shifted left, starting at the least significant bits. Depending on the amount of hardware dedicated to the shifter and the parameter s, the shifting for every m-bit word may take a different number of cycles.
The following example, shown schematically in
m=4;X=73=49h;D=3;
Y(6)=X(6)=X[6..5]={10b}=2
Y(6) < D → q5=0, X(6)=Y(6)=2
Y(5)={X(6),X4}={10b,0b}=4
Y(5) ≥ D → q4=1, X(5)=Y(5)−D=1
Y(4)={X(5),X3}={1b,1b}=3
Y(4) ≥ D → q3=1, X(4)=Y(4)−D=0
Y(3)={X(4),X2}={0b,0b}=0
Y(3) < D → q2=0, X(3)=Y(3)=0
Y(2)={X(3),X1}={0b,0b}=0
Y(2) < D → q1=0, X(2)=Y(2)=0
Y(1)={X(2),X0}={0b,1b}=1
Y(1) < D → q0=0, X(1)=Y(1)=1
Y(0)={X(1)}=1
q={0,1,1,0,0,0}=24
r=Y(0)=1
Modulo Operation
The modulo operation is executed in a substantially similar manner to the division operation above; however, the remainder of the division operation is the result of the modulo operation.
In the preferred embodiment of the invention, the long arithmetic unit VLALU shares a temporary register of 2m bits among its various operations. The operations are therefore mutually exclusive, meaning that only one operation can be performed at any given time.
In another preferred embodiment of the invention, each operation uses its own temporary register and has its own interface to the memory structure. The DMA Controller unit 104 preferably comprises an arbiter to arbitrate access to the memory structure between the various operations. Again, this provides increased performance at the expense of chip area.
The Adder/Subtractor unit 101
The adder/subtractor unit 101 preferably comprises a finite field adder/subtractor of length k, preferably implemented such that it may complete the addition/subtraction operation within a single machine clock cycle. The finite field adder/subtractor unit may receive its operand inputs from, and may deposit its results to, the following options:
Internal shadow or temporary storage.
Shadow or temporary storage of the DMA Controller & Local Storage unit 104.
External memory (via the DMA Controller & local storage unit 104).
Any combination of the above.
The unit may be configured via the Configuration & monitoring interface 110.
The Multiplier unit 102 may comprise a finite field multiplier of length k, preferably implemented to complete the multiplication operation within a single machine clock cycle. The multiplier unit may receive its operand inputs from, and deposit its results to, the following options:
Internal shadow or temporary storage.
Shadow or temporary storage of the DMA Controller & Local Storage unit 104.
External memory (via the DMA Controller & local storage unit 104).
Any combination of the above.
The unit may be configured via the Configuration & monitoring interface 110.
The Divider/Modulo Unit 103
The Divider/Modulo unit 103 may comprise a finite field adder/subtractor of length k, preferably implemented such that it may complete the subtraction operation within a single machine clock cycle. The divider/modulo unit may receive its operand inputs from, and deposit its results to, the following options:
Internal shadow or temporary storage.
Shadow or temporary storage of the DMA Controller & Local Storage unit 104.
External memory (via the DMA Controller & local storage unit 104).
Any combination of the above.
The unit may be configured via the Configuration & monitoring interface 110.
Asymmetric encryption and decryption usually involve operations such as multiplication and division with large operands. These operands may comprise thousands of bits each, making it impractical to store such large operands in internal registers in an area-constrained design. Therefore, the VLALU device 100 preferably fetches its operands from a random access memory, as well as from a smaller register array.
The VLALU device 100 may access data stored in external memory using an internal or external DMA controller, such as the DMA Controller & Local Storage unit 104. The addresses of the operands are provided to the DMA controller, and subsequently the data are fetched from the external memory and provided to the VLALU. The operands are then stored into the Local Storage 104, or into internal registers of the Adder/Subtractor 101, Multiplier 102, or Divider/Modulo 103, respectively. Storing is done in the same manner, i.e., the internal or external DMA controller is provided with the target address, and the output of the Adder/Subtractor 101, or Multiplier 102, or Divider/Modulo 103, or the content of the Local Storage 104, is stored in the external memory.
Such a DMA controller may comprise input and output FIFO (first-in, first-out) memory, efficient arbitration and bus mastering capabilities and the like.
An example of a memory allocation for storing m-bit data words is provided in
Those skilled in the art will appreciate that the VLALU device 100 may perform one or more of the operations below in parallel:
Adding two long operands.
Subtracting two long operands.
Multiplying two long operands.
Dividing two long operands.
Finding the modulus of two long operands.
Fetching operands from local or external storage.
Storing results to local or external storage.
Any combination of the above.
A VLALU device as described above is highly useful for cryptography operations of the kind described above.
It is expected that during the life of this patent many relevant devices and systems will be developed and the scope of the terms herein, particularly of the terms cryptology, compression, accumulator, multiplier, divider, adder, subtractor, ALU and modulo, is intended to include all such new technologies a priori.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents, and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.