This invention relates to an improved lookup table addressing system and method.
As computer speeds increased from 33 MHz to 1.0 GHz and beyond, computer operations could no longer be completed in one cycle. As a result, the technique of pipelining was adopted to make the most efficient use of the higher processor performance and to improve throughput. Presently, deep pipelining uses as many as 15 stages or more. Generally, in a pipelined computing system there are several parallel building blocks working simultaneously, each of which takes care of a different part of the whole process. For example, there is a compute unit (CU) that does the computation, an address unit including a data address generator (DAG) that fetches and stores the data in memory according to the selected address modes, and a sequencer or control circuit that decodes and distributes the instructions. The DAG is the only component that can address the memory. Thus, in a deeply pipelined system, if an instruction is dependent on the result of a previous one, a pipeline stall occurs: the pipeline stops and waits for the offending instruction to finish before resuming work. For example, if, after a computation, the output of the CU is needed by the DAG for the next data fetch, it cannot be delivered directly to the DAG to be conditioned for a data fetch: it must propagate through the pipeline before it can be processed by the DAG to do the next data fetch. This is so because only the DAG has access to the memory and can convert the compute unit result to an address pointer to locate the desired data. In multi-tasking general purpose computers this stall may not be critical, but in real time computer systems such as those used in, e.g., cell phones and digital cameras, these stalls are a problem. See U.S. patent application entitled IMPROVED PIPELINE DIGITAL SIGNAL PROCESSOR, by Wilson et al. (AD-432J), filed on even date herewith and herein incorporated in its entirety by this reference.
In one application, bit permutation is used to effect data encryption. This can be done in the CU, but the arithmetic logic units (ALUs) in the CU are optimized for 16, 32, or 64 bit operations and are not efficient for bit-by-bit permutation. For example, if the permutation is done by the ALU, each bit requires three cycles of operation: mask, shift, and OR. Thus, permuting a single 32 bit word requires 96 cycles or more.
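For illustration only, the following is a minimal C sketch of such a bit-by-bit permutation in the ALU; the permutation map `perm` and the function name are assumptions introduced for this example, not details from the text.

```c
#include <stdint.h>

/* Bit-by-bit permutation: output bit i receives input bit perm[i].
 * Each bit costs roughly a mask, a shift and an OR, so a 32-bit word
 * takes on the order of 96 operations, as noted above. */
uint32_t permute_bit_by_bit(uint32_t word, const uint8_t perm[32])
{
    uint32_t result = 0;
    for (int i = 0; i < 32; i++) {
        uint32_t bit = (word >> perm[i]) & 1u;  /* shift and mask out one source bit */
        result |= bit << i;                     /* OR it into its destination position */
    }
    return result;
}
```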
In another approach, instead of performing the permutations in the ALU, the permutation values can be stored in a lookup table located in external storage. Now, however, the R register in the ALU must deliver the word, e.g., 32 bits, to a pointer (P) register in the DAG, which can address the external memory lookup table. But this requires an enormous lookup table (LUT), i.e., 2^32 bits or more than 33.5 megabytes of memory. To overcome this, the 32 bit word in the R register in the ALU can be processed, e.g., as four bytes (8 bits each) or eight nibbles (4 bits each). This reduces the memory size required: for four bytes, four tables of 256 entries, each entry of 32 bits, are needed (a 4 Kbyte LUT), and for eight nibbles, eight tables of sixteen entries, each entry of 32 bits, are needed (a 512 byte LUT). But this, too, creates problems: now four (byte) or eight (nibble) transfers from the ALU to the DAG's P register are required for a single 32 bit word. Each transfer in turn causes a number of pipeline stalls, as discussed, supra.
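A sketch of this conventional nibble-wise lookup, using the 512 byte layout mentioned above (eight tables of sixteen 32-bit entries, one table per nibble position), is shown below. The table and function names are hypothetical, and each entry is assumed to hold its nibble's result bits already mapped into their final positions in the 32-bit output, so the partial results can simply be XORed together; in the conventional scheme each of the eight lookups requires its own transfer of a nibble from the CU's R register to the DAG's P register, which is what provokes the repeated stalls.

```c
#include <stdint.h>

/* Hypothetical table layout: 8 tables x 16 entries x 32 bits = 512 bytes. */
extern const uint32_t nibble_lut[8][16];

uint32_t permute_by_nibbles(uint32_t word)
{
    uint32_t result = 0;
    for (int n = 0; n < 8; n++) {
        uint32_t nibble = (word >> (4 * n)) & 0xFu;  /* one section of the data word */
        result ^= nibble_lut[n][nibble];             /* accumulate the partial result */
    }
    return result;
}
```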
In a separate but related problem, linear feedback shift registers (LFSRs), used, e.g., for CRCs, scramblers, de-scramblers, and trellis encoding, are widely employed in communication systems. The LFSR operations can be performed by the CU one bit at a time using mask/shift/OR cycles as explained above, with the same problems. Alternatively, a specific hardware block, e.g., an ASIC or FPGA, that solves the LFSR problem using 4, 8, or 16 bits per cycle can be used. Both the mask/shift/OR approach in the CU and the ASIC approach can be eliminated by using an external lookup table or tables, but with all the aforesaid shortcomings.
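To illustrate the lookup table alternative for LFSR-type operations, the following is the standard, textbook table-driven CRC-32 (reflected, polynomial 0xEDB88320): the table is built with a bit-at-a-time LFSR step, after which the CRC can be advanced one byte per lookup rather than one bit per mask/shift/XOR cycle. This is generic code offered only as background, not code from the patent.

```c
#include <stdint.h>
#include <stddef.h>

static uint32_t crc_table[256];

/* Build the 256-entry table using the bit-at-a-time LFSR step. */
void crc32_init(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int b = 0; b < 8; b++)
            c = (c & 1u) ? (c >> 1) ^ 0xEDB88320u : (c >> 1);
        crc_table[i] = c;
    }
}

/* Advance the CRC one byte at a time with a single table lookup per byte. */
uint32_t crc32_update(uint32_t crc, const uint8_t *data, size_t len)
{
    while (len--)
        crc = (crc >> 8) ^ crc_table[(crc ^ *data++) & 0xFFu];
    return crc;
}

/* Typical use: crc32_init(); crc = crc32_update(0xFFFFFFFFu, buf, n) ^ 0xFFFFFFFFu; */
```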
It is therefore an object of this invention to provide an improved lookup table addressing system and method.
It is a further object of this invention to provide such an improved lookup table addressing system and method which minimizes pipeline stall between compute unit and data address generator.
It is a further object of this invention to provide such an improved lookup table addressing system and method which optimizes the size of the lookup table.
It is a further object of this invention to provide such an improved lookup table addressing system and method which accelerates linear feedback shift register operations without additional dedicated hardware, e.g. ASIC or FPGA.
It is a further object of this invention to provide such an improved lookup table addressing system and method which is faster and requires less power.
It is a further object of this invention to provide such an improved lookup table addressing system and method which can reuse existing processor components.
It is a further object of this invention to provide such an improved lookup table addressing system and method which accelerates permutation operations without added hardware, e.g. ASIC, FPGA.
It is a further object of this invention to provide such an improved lookup table addressing system and method which is fully scalable to accommodate larger memory requirements.
It is a further object of this invention to provide such an improved lookup table addressing system and method which is adaptable for a variety of different applications e.g., encryption, permutation, and linear feedback shift register implementation including CRC, scrambling, de-scrambling and trellis.
The invention results from the realization that an improved lookup table addressing system and method which minimizes pipeline stall, optimizes lookup table size, is faster, uses less power, reuses existing processing components, and is scalable and adaptable for a variety of different applications can be achieved by transferring a data word from a compute unit to an input register in a data address generator; providing at least one deposit-increment index register in the data address generator having a table base field for identifying the location of the set of tables in memory, a table index field for identifying the location of a specific one of the tables in the set, and a displacement field; and depositing a section of the data word into the displacement field of the deposit-increment index register for identifying the location of a specific entry in the tables.
The subject invention, however, in other embodiments, need not achieve all these objectives and the claims hereof should not be limited to structures or methods capable of achieving these objectives.
This invention features a lookup table addressing system having a set of lookup tables in an external memory and including a data address generator having an input register for receiving a data word from a compute unit and a deposit-increment index register having a table base field for identifying the location of the set of tables in memory and a table index field for identifying the location of a specific one of the tables in the set. A displacement field identifies the location of a specific entry in that specific table; the data address generator is configured to deposit a section of the data word into the displacement field to access the specific entry.
In a preferred embodiment the entries include the partial results of the corresponding section of the data word. The compute unit may include an accumulator register, a lookup table destination register and a combining circuit; the compute unit may be configured to accumulate the partial results from all of the sections of the data word to obtain the final result. The destination register can be any of the compute unit's data register files. The data address generator may include a plurality of pointer registers and the deposit-increment index register may be implemented by one of the pointer registers. The data address generator may also include a plurality of pointer registers and the input register may be implemented by one of the pointer registers. The index field of the deposit-increment index register may be configured to increment to identify the next table in the set. The partial result may include the data bits of the corresponding section and the data address generator may be further configured to map those bits to a predetermined output word. The destination word and the data word may have an equal number of bits. The destination word and the data word may have an unequal number of bits. The combining circuit may be an exclusive OR circuit. The combining circuit may be a summing circuit. The data address generator may include a second index register and the data address generator may be configured to deposit a second section of the data word into the displacement field of the second deposit-increment index register. The data address generator may be configured to preload the index register to a known table address. The known table address may be a start address. The section may be a bit field. The bit field may be a byte. The bit field may be a nibble.
This invention also features a lookup table addressing method for servicing a set of lookup tables in an external memory including transferring a data word from a compute unit to an input register in a data address generator. There is provided at least one deposit-increment index register in the data address generator including a table base field for identifying the location of the set of tables in memory, a table index field for identifying the location of a specific one of the tables in the set, and a displacement field for identifying the location of a specific entry in the tables; a section of the data word is deposited into the displacement field.
In a preferred embodiment the entries may include partial results of the corresponding section of the data word. The partial results from all sections of the data word may be accumulated to obtain the final result. The table index field may be incremented in the data address generator to identify the next table in the set. The partial result may include data bits, and the method may also include mapping those bits to a predetermined output word. The output word and the data word may have an equal number of bits. The output word and the data word may have an unequal number of bits. Accumulating may include exclusive-ORing. Accumulating may include summing. The method may include depositing a second section of the data word into another index register displacement field for identifying the location of another specific entry in parallel with the first. It may include preloading the index register to a known table address. The index register may be preloaded to the starting address. The section may be a bit field. The bit field may be a nibble or a byte.
Other objects, features and advantages will occur to those skilled in the art from the following description of a preferred embodiment and the accompanying drawings, in which:
Aside from the preferred embodiment or embodiments disclosed below, this invention is capable of other embodiments and of being practiced or being carried out in various ways. Thus, it is to be understood that the invention is not limited in its application to the details of construction and the arrangements of components set forth in the following description or illustrated in the drawings. If only one embodiment is described herein, the claims hereof are not to be limited to that embodiment. Moreover, the claims hereof are not to be read restrictively unless there is clear and convincing evidence manifesting a certain exclusion, restriction, or disclaimer.
There is shown in
The LUT deposit-increment index register 20 generates the effective memory address as a function of table base bit field 38, table index bit field 40, displacement bit field 36 and zero bit field 41. In operation, a data word from CU 14 is delivered to the DAG data input word register 18. One section of it, for example a first nibble 34, is deposited directly into displacement bit field 36. The table base bit field 38 identifies the starting location of the set of tables 32 in external memory 16. The index field 40 identifies the location of the particular table 32-1 through 32-8 in table set 32, and the zero field 41 accommodates the LUT entry width: if a thirty-two bit LUT access is used the zero field will contain two zeros, or one zero if a sixteen bit LUT access is used. The section or nibble 34 deposited in displacement field 36 is the address displacement of the specific entry in a particular table, for example, entry 42. Assuming that the system is being used to permute one nibble at a time of a 32 bit word transferred to the DAG input register 18, then entry 42 will contain four bits plus a mapping location into a 32 bit word. The four bits are a permutation of the bits in nibble 34 deposited in displacement field 36. These four bits and the information which maps their location in a 32 bit word are delivered to a 32 bit LUT destination register 26 in CU 14. The four bits from specific entry 42 will be loaded into four of those thirty-two locations in accordance with the mapping information in entry 42. This partial result is combined by combining circuit or GF(2) adder 30 (XOR) with the contents of accumulator register 28. Since this is the initial cycle of operation, register 28 contains zero. Thus, after the outputs of the two registers are combined in adder 30, the accumulated result in register 28 is nothing more than the contents of LUT destination register 26. Next, incrementing circuit 22 increments the table index 40 value by one and feeds it back as the new table index so that the system moves to Table 2, 32-2. At the same time the next section of the data word in input register 18, the nibble in section 44, is delivered to displacement field 36. This now identifies another specific entry 46 in Table 2 which is mapped into LUT destination register 26. The output from register 26 is once again combined by GF(2) adder (XOR) 30 with the contents of register 28 and the combined result is accumulated and stored in register 28. Now register 28 contains the data from specific entry 42, mapped into a 32 bit word format, combined with the data from specific entry 46, whose four bits are mapped to four other positions in the 32 bit word format. This continues until all eight nibbles in the thirty-two bit word present in register 18 have been processed. At that point incrementing circuit 22 has reached eight and preload circuit 24 will preload the table base back to the beginning of the set of tables. Preload circuit 24 could in fact preload table base field 38 to any particular place.
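The following behavioral C model is offered only as an illustration of the address formation and the accumulate loop just described. The field widths (a 4-bit displacement, a table index counting the eight tables, a 2-bit zero field for 32-bit entries), the struct and function names, and the assumption that each LUT entry already holds its four result bits mapped into their final 32-bit positions are assumptions introduced for this sketch, not details of the hardware.

```c
#include <stdint.h>

/* Model of deposit-increment index register 20: table_base stands in for
 * field 38, table_index for field 40 and displacement for field 36. */
typedef struct {
    uint32_t table_base;    /* byte offset of the set of tables in LUT memory */
    uint32_t table_index;   /* which table, 32-1 through 32-8 */
    uint32_t displacement;  /* entry within the selected table */
} deposit_increment_index_reg;

uint32_t effective_address(const deposit_increment_index_reg *p)
{
    /* table base | table index | displacement | two zero bits (32-bit entries) */
    return p->table_base + (((p->table_index << 4) | p->displacement) << 2);
}

/* One pass over a 32-bit data word, a nibble at a time, XOR-accumulating the
 * mapped partial results as GF(2) adder 30 and accumulator register 28 do.
 * "memory" is a word-addressed view of the LUT storage. */
uint32_t lookup_and_accumulate(uint32_t data_word,
                               deposit_increment_index_reg *p,
                               const uint32_t *memory)
{
    uint32_t accumulator = 0;                             /* register 28 starts at zero */
    for (int n = 0; n < 8; n++) {
        p->displacement = (data_word >> (4 * n)) & 0xFu;  /* deposit the next nibble */
        accumulator ^= memory[effective_address(p) >> 2]; /* fetch entry, combine it */
        p->table_index++;                                 /* increment to the next table */
    }
    p->table_index = 0;   /* models preload circuit 24 returning to the first table */
    return accumulator;
}
```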
The advantages of the invention can be seen by contrasting it with conventional operations. In a conventional operation the data word is moved a nibble or a byte at a time from the R register in the CU to the P input register in the DAG. In deep pipeline operations this means that there will be several stalls for each nibble or byte so transferred. In addition, the operations of depositing the nibble or byte data into the displacement field and incrementing to the next table have to be explicitly performed by the DAG. In contrast, with this invention the entire data word is transferred at once from the R register in the CU to the P input register in the DAG, so the several stalls that have to be endured are endured only once for the entire data word rather than once for each of the eight nibbles or each of the four bytes. In addition, the operations of depositing the nibble or byte data into the displacement field and incrementing to the next table each time can now be done automatically by the DAG's own circuits.
In the DAG there may be more than one input register and more than one deposit-increment index register, e.g., input register 18a and deposit-increment index register 20a. Input registers 18 and 18a can actually be a single register which services both deposit-increment index register 20 and deposit-increment index register 20a. There can also be additional increment circuits 22a and preload circuits 24a.
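Continuing the hypothetical C model above (and reusing its deposit_increment_index_reg type and effective_address function), a sketch of how two such index registers, e.g., 20 and 20a, each feeding its own LUT destination register, might service two nibbles per step so the data word is covered in four steps instead of eight; the even/odd interleaving shown is an assumption.

```c
uint32_t lookup_two_at_a_time(uint32_t data_word,
                              deposit_increment_index_reg *p0,
                              deposit_increment_index_reg *p1,
                              const uint32_t *memory)
{
    uint32_t acc0 = 0, acc1 = 0;              /* two destination/accumulator paths */
    p0->table_index = 0;                      /* walks tables 32-1, 32-3, 32-5, 32-7 */
    p1->table_index = 1;                      /* walks tables 32-2, 32-4, 32-6, 32-8 */
    for (int n = 0; n < 8; n += 2) {
        p0->displacement = (data_word >> (4 * n)) & 0xFu;        /* even nibble */
        p1->displacement = (data_word >> (4 * (n + 1))) & 0xFu;  /* odd nibble  */
        acc0 ^= memory[effective_address(p0) >> 2];
        acc1 ^= memory[effective_address(p1) >> 2];
        p0->table_index += 2;                 /* each register skips to its next table */
        p1->table_index += 2;
    }
    return acc0 ^ acc1;                       /* combine the two partial accumulations */
}
```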
In that case, using a second LUT destination register 26a in CU 14, the operation may be carried out twice as fast. With the same data word installed in input registers 18 and 18a, the system can look at nibble 34 in register 18 and deliver it to displacement field 36 in register 20 while nibble 44a in register 18a is delivered to displacement field 36a. Thus while DAG 12 in
Permuting a thirty-two bit input register such as 18 is done, for example, by dividing the input register into eight nibbles (groups of 4 bits) and combining the partial results of all of the permuted nibbles. The first nibble, such as nibble 34 in register 18 of
In an alternative construction there may be two P index registers, 20b, 20bb,
Although thus far the invention has been explained only with respect to a permutation operation, it can be used in a number of other applications to great advantage. For example, in
The invention may also be used to great advantage in connection with linear feedback shift registers (LFSRs) such as Galois Field Linear Transformer (GFLT) LFSR 110 in
There is shown in
In operation, at each cycle of the clock, column 150,
Galois field linear transformer trellis system 110a,
On the next or second clock cycle designated clock cycle, 0, Chart II,
For further explanation see U.S. patent application Ser. No. 10/753,301, filed Jan. 7, 2004, entitled GALOIS FIELD LINEAR TRANSFORMER TRELLIS SYSTEM by Stein et al. herein incorporated in its entirety by this reference.
One advantage of the use of this invention in this environment is that the need for a thirty-two by thirty-two matrix of exclusive OR gates or a lookup table of 2^32 capacity can be avoided. This is taught in
The invention may be conveniently implemented in a processor such as a digital signal processor DSP 200,
One implementation of lookup table addressing method 300,
Although specific features of the invention are shown in some drawings and not in others, this is for convenience only as each feature may be combined with any or all of the other features in accordance with the invention. The words “including”, “comprising”, “having”, and “with” as used herein are to be interpreted broadly and comprehensively and are not limited to any physical interconnection. Moreover, any embodiments disclosed in the subject application are not to be taken as the only possible embodiments.
In addition, any amendment presented during the prosecution of the patent application for this patent is not a disclaimer of any claim element presented in the application as filed: those skilled in the art cannot reasonably be expected to draft a claim that would literally encompass all possible equivalents, many equivalents will be unforeseeable at the time of the amendment and are beyond a fair interpretation of what is to be surrendered (if anything), the rationale underlying the amendment may bear no more than a tangential relation to many equivalents, and/or there are many other reasons the applicant can not be expected to describe certain insubstantial substitutes for any claim element amended.
Other embodiments will occur to those skilled in the art and are within the following claims.
Number | Name | Date | Kind |
---|---|---|---|
3303477 | Voigt | Feb 1967 | A |
3805037 | Ellison | Apr 1974 | A |
3959638 | Blum et al. | May 1976 | A |
4757506 | Heichler | Jul 1988 | A |
5031131 | Mikos | Jul 1991 | A |
5062057 | Blacken et al. | Oct 1991 | A |
5101338 | Fujiwara et al. | Mar 1992 | A |
5260898 | Richardson | Nov 1993 | A |
5287511 | Robinson et al. | Feb 1994 | A |
5351047 | Behlen | Sep 1994 | A |
5386523 | Crook et al. | Jan 1995 | A |
5530825 | Black et al. | Jun 1996 | A |
5537579 | Hiroyuki | Jul 1996 | A |
5666116 | Bakhmutsky | Sep 1997 | A |
5675332 | Limberg | Oct 1997 | A |
5689452 | Cameron | Nov 1997 | A |
5696941 | Jung | Dec 1997 | A |
5710939 | Ballachino et al. | Jan 1998 | A |
5819102 | Reed et al. | Oct 1998 | A |
5832290 | Gostin et al. | Nov 1998 | A |
5937438 | Raghunath et al. | Aug 1999 | A |
5961640 | Chambers et al. | Oct 1999 | A |
5970241 | Deao et al. | Oct 1999 | A |
5996057 | Scales, III et al. | Nov 1999 | A |
5996066 | Yung | Nov 1999 | A |
6009499 | Koppala | Dec 1999 | A |
6029242 | Sidman | Feb 2000 | A |
6061749 | Webb et al. | May 2000 | A |
6067609 | Meeker et al. | May 2000 | A |
6094726 | Gonion et al. | Jul 2000 | A |
6134676 | VanHuben et al. | Oct 2000 | A |
6138208 | Dhong et al. | Oct 2000 | A |
6151705 | Santhanam | Nov 2000 | A |
6223320 | Dubey et al. | Apr 2001 | B1 |
6230179 | Dworkin et al. | May 2001 | B1 |
6263420 | Tan et al. | Jul 2001 | B1 |
6272452 | Wu et al. | Aug 2001 | B1 |
6285607 | Sinclair | Sep 2001 | B1 |
6332188 | Garde et al. | Dec 2001 | B1 |
6430672 | Dhong et al. | Aug 2002 | B1 |
6480845 | Egolf et al. | Nov 2002 | B1 |
6539477 | Seawright | Mar 2003 | B1 |
6587864 | Stein et al. | Jul 2003 | B2 |
6757806 | Shim | Jun 2004 | B2 |
6771196 | Hsiun | Aug 2004 | B2 |
6829694 | Stein et al. | Dec 2004 | B2 |
7173985 | Diaz-Manero et al. | Feb 2007 | B1 |
7187729 | Nagano | Mar 2007 | B2 |
7243210 | Pedersen et al. | Jul 2007 | B2 |
7424597 | Lee et al. | Sep 2008 | B2 |
7728744 | Stein et al. | Jun 2010 | B2 |
7882284 | Wilson et al. | Feb 2011 | B2 |
20030085822 | Scheuermann | May 2003 | A1 |
20030103626 | Stein et al. | Jun 2003 | A1 |
20030133568 | Stein et al. | Jul 2003 | A1 |
20030149857 | Stein et al. | Aug 2003 | A1 |
20030196072 | Chinnakonda et al. | Oct 2003 | A1 |
20030229769 | Montemayor | Dec 2003 | A1 |
20040145942 | Leijten-Nowak | Jul 2004 | A1 |
20040193850 | Lee et al. | Sep 2004 | A1 |
20040210618 | Stein et al. | Oct 2004 | A1 |
20050086452 | Ross | Apr 2005 | A1 |
20050228966 | Nakamura | Oct 2005 | A1 |
20050267996 | O'Connor et al. | Dec 2005 | A1 |
20060143554 | Sudhakar et al. | Jun 2006 | A1 |
20060271763 | Pedersen et al. | Nov 2006 | A1 |
20070044008 | Chen et al. | Feb 2007 | A1 |
20070089043 | Chae et al. | Apr 2007 | A1 |
20070094483 | Wilson et al. | Apr 2007 | A1 |
20070277021 | O'Connor et al. | Nov 2007 | A1 |
20080010439 | Stein et al. | Jan 2008 | A1 |
20080244237 | Wilson et al. | Oct 2008 | A1 |
20090089649 | Wilson et al. | Apr 2009 | A1 |
Number | Date | Country |
---|---|---|
04092921 | Mar 1992 | JP |
06110852 | Apr 1994 | JP |
2001210357 | Aug 2001 | JP |
02290494 | Oct 2002 | JP |
05513541 | May 2005 | JP |
WO-96010226 | Apr 1996 | WO |
WO-03067364 | Aug 2003 | WO |
Number | Date | Country
---|---|---
20070094474 A1 | Apr 2007 | US |