This invention relates to a processor with a permutable address mode and to a method implemented between the storage device and the arithmetic unit.
Earlier computers or processors had but one compute unit, so processing of images, for example, proceeded one pixel at a time, where one pixel is eight bits (one byte). With the growth of image sizes came the need for high-performance, heavily pipelined vector processors. A vector processor is a processor that can operate on an entire vector in one instruction. Single Instruction Multiple Data (SIMD) is another form of vector-oriented processing which can apply parallelism at the pixel level. This method is suitable for imaging operations where there is no dependency on the result of previous operations. Since an SIMD processor can solve similar problems in parallel on different sets of data, it can be characterized as n times faster than a single compute unit processor, where n is the number of compute units in the SIMD. For SIMD operation the memory fetch has to present data to each compute unit every cycle or the n-times speed advantage is underutilized. Typically, for example, in a thirty-two bit (four byte) machine, data is loaded over two buses from memory into rows in two thirty-two bit (four byte) registers where the bytes are in four adjacent columns, each byte having a compute unit associated with it. A single instruction can then instruct all compute units to perform, in the machine's native mode, the same operation on the data in the registers, byte by byte in the same column, and store the thirty-two bit result in memory in one cycle. In 2D image processing applications, for example, this works well for vertical edge filtering. But for horizontal edge filtering, where the data is stored in columns, all the registers have to be loaded before the operation can begin, and after completion the results have to be stored a byte at a time. This is time consuming and inefficient and becomes more so as the number of compute units increases.
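By way of illustration only, and not as a description of any particular hardware, the following C sketch models how a thirty-two bit word carries four byte lanes and how one operation can act on all four lanes at once, which is the advantage the SIMD layout exploits for row-oriented (vertical edge) filtering. The function name add_packed_bytes and the pixel values are assumptions of the sketch, not taken from the invention.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: add the four byte lanes of two 32-bit words
 * without letting carries cross lane boundaries, the way a four-lane SIMD
 * unit processes four adjacent pixels of an image row in one operation. */
static uint32_t add_packed_bytes(uint32_t a, uint32_t b)
{
    uint32_t low = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu); /* add low 7 bits of each lane */
    return low ^ ((a ^ b) & 0x80808080u);                 /* restore bit 7 of each lane  */
}

int main(void)
{
    uint32_t row_a = 0x0A141E28u;  /* four pixels from one image row  */
    uint32_t row_b = 0x01020304u;  /* four pixels from the row below  */
    printf("%08X\n", add_packed_bytes(row_a, row_b));     /* prints 0B16212C */
    return 0;
}
```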
SIMD or vector processing machines also encounter problems in accommodating “little endian” and “big endian” data types. “Little endian” and “big endian” refer to which bytes are most significant in multi-byte types and describe the order in which a sequence of bytes is stored in processor memory. In a little-endian system, the least significant byte in the sequence is stored at the lowest storage address (first). A big-endian system does the opposite: it stores the most significant byte in the sequence at the lowest storage address. Currently, systems service all levels from user interface to operating system to encryption to low-level signal processing. This leads to “mixed endian” applications because the higher levels of user interface and operating system are usually done in “little endian,” whereas the signal processing and encryption are done in “big endian.” Programmers must, therefore, provide instructions to transform from one to the other before the data is processed, or configure the processing to work with the data in the form it is presented.
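The following C fragment is a software illustration, not part of the claimed processor, of the byte orderings just described: it probes the host byte order and performs the thirty-two bit byte swap that a mixed endian application would otherwise have to carry out in extra instructions before processing. The names is_little_endian and byte_swap32 are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: detect the host byte order at run time and swap a
 * 32-bit word between little-endian and big-endian representations. */
static int is_little_endian(void)
{
    uint32_t probe = 1u;
    return *(const uint8_t *)&probe == 1u;  /* is the LSB stored at the lowest address? */
}

static uint32_t byte_swap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
           ((x << 8) & 0x00FF0000u) | (x << 24);
}

int main(void)
{
    uint32_t v = 0x11223344u;
    printf("host is %s-endian\n", is_little_endian() ? "little" : "big");
    printf("%08X swapped -> %08X\n", v, byte_swap32(v));  /* prints 44332211 */
    return 0;
}
```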
Another problem encountered in SIMD operations is that the data actually has to be spread, shuffled, or permuted for presentation to the next step in the algorithm. This requires a separate step, which involves a pipeline stall, before the data is in the format called for by the next step in the algorithm.
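As a hypothetical software illustration of that extra step, the short C routine below reorders a buffer in a pass of its own before any computation can start; the deinterleave function and its even/odd ordering are assumptions chosen only to show that the shuffle is a separate, blocking pass.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration of the extra shuffle step: before the next
 * stage of an algorithm can run, the data must be reordered in a pass of
 * its own (here a simple even/odd de-interleave of bytes), and the compute
 * stage must wait until the whole buffer has been rewritten. */
static void deinterleave(uint8_t *dst, const uint8_t *src, size_t pairs)
{
    for (size_t i = 0; i < pairs; ++i) {
        dst[i]         = src[2 * i];      /* even-indexed bytes first   */
        dst[pairs + i] = src[2 * i + 1];  /* then the odd-indexed bytes */
    }
}
```

The invention described below removes this kind of pass from the compute path by performing the reordering in the address mode, on the load or the store itself.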
It is therefore an object of this invention to provide an improved processor and method with a permutable address mode.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which improves the efficiency of vector oriented processors such as SIMDs.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which effects permutations in the address mode external to the arithmetic unit thereby avoiding pipeline stall.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which can unify data presentation thereby unifying problem solution, reducing programming effort and time to market.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which can unify data presentation thereby unifying problem solution, utilizing more arithmetic units and storing results faster.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode in which the data can be permuted on the load to the machine's native form to efficiently utilize the arithmetic units and then permuted back to its original form on the store, which makes load, solution and store operations faster and more efficient.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which easily accommodates mixed endian modes.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which enables fast, easy, and efficient reordering of the data between compute operations.
It is a further object of this invention to provide such an improved processor and method with a permutable address mode which enables data in any form to be reordered to a native domain form of the machine for fast, easy processing and then if desired to be reordered back to its original form.
The invention results from the realization that a processor and method can be enabled to process a number of different data formats by loading a data word from a storage device, reordering it to a format compatible with the native order of the vector oriented arithmetic unit before it reaches the arithmetic unit, and vector processing the data word in the arithmetic unit. See U.S. Pat. No. 5,961,628, entitled LOAD AND STORE UNIT FOR A VECTOR PROCESSOR, by Nguyen et al., and VECTOR VS. SUPERSCALAR AND VLIW ARCHITECTURES FOR EMBEDDED MULTIMEDIA BENCHMARKS, by Christoforos Kozyrakis and David Patterson, in the Proceedings of the 35th International Symposium on Microarchitecture, Istanbul, Turkey, November 2002, 11 pages, herein incorporated in their entirety by these references.
The subject invention, however, in other embodiments, need not achieve all these objectives and the claims hereof should not be limited to structures or methods capable of achieving these objectives.
This invention features a processor with a permutable address mode including an arithmetic unit having a register file, at least one load bus and at least one store bus interconnecting the register file with a storage device, and a permutation circuit in at least one of the buses for reordering the data elements of a word transferred between the register file and the storage device.
In a preferred embodiment both the load and store buses may include a permutation circuit. There may be two load buses and each of them may include a permutation circuit. The permutation circuit may include a map circuit for reordering the data elements of a word transferred between the register file and storage device and/or a transpose circuit for reordering the data elements of a word transferred between the register file and storage device. The register file may include at least one register. The map circuit may include at least one map register. The map register may include a field for every data element. The map register may be loadable from the arithmetic unit. The map registers may be default loaded with a big endian/little endian map. The data elements may be bytes.
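The following C sketch models, in software only, the map register idea described above: each field of the map names the source data element that feeds the corresponding destination element, and the default map performs the big endian/little endian reordering. The type map_reg, the constant MAP_BIG_LITTLE and the function map_apply are names invented for this sketch; in the invention the reordering is performed by the permutation circuit on the bus, not by instructions in the arithmetic unit.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t field[4]; } map_reg;            /* one field per data element */

static const map_reg MAP_BIG_LITTLE = { {3, 2, 1, 0} };  /* default endian-swap map    */

/* Route each source byte of a 32-bit word to the destination lane named
 * by the corresponding map field. */
static uint32_t map_apply(uint32_t word, map_reg m)
{
    uint32_t out = 0;
    for (int dst = 0; dst < 4; ++dst) {
        uint32_t src_byte = (word >> (8 * m.field[dst])) & 0xFFu;
        out |= src_byte << (8 * dst);                    /* place byte in its new lane */
    }
    return out;
}

int main(void)
{
    printf("%08X\n", map_apply(0x11223344u, MAP_BIG_LITTLE)); /* prints 44332211 */
    return 0;
}
```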
This invention also features a method of accommodating a processor to process a number of different data formats including loading a data register with a word in a first format from a storage device, reordering it to a second format compatible with the native order of the vector oriented arithmetic unit before it reaches the arithmetic unit data register file, and vector processing the data register in said arithmetic unit. In a preferred embodiment the result of vector processing may be stored in a second storage device. The stored result may be reordered to the first format. The second storage device and the first storage device may be included in the same storage.
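A minimal software model of the claimed method, with illustrative names only, follows: the word is reordered from its stored (first) format to the machine's native (second) format as it is loaded, processed by the vector unit, and the result reordered back to the first format as it is stored. The functions reorder and process are stand-ins for the permutation circuit and the vector operation, not descriptions of them.

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t reorder(uint32_t x)      /* stand-in for the permutation circuit */
{
    return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
           ((x << 8) & 0x00FF0000u) | (x << 24);
}

static uint32_t process(uint32_t native) /* stand-in for the vector operation */
{
    return native + 0x01010101u;         /* e.g. add one to every byte lane   */
}

int main(void)
{
    uint32_t stored = 0x11223344u;       /* word in its first (memory) format */
    uint32_t native = reorder(stored);   /* permuted on the load              */
    uint32_t result = process(native);   /* computed in native order          */
    uint32_t back   = reorder(result);   /* permuted back on the store        */
    printf("%08X -> %08X -> %08X -> %08X\n", stored, native, result, back);
    return 0;
}
```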
Other objects, features and advantages will occur to those skilled in the art from the following description of a preferred embodiment and the accompanying drawings, in which:
Aside from the preferred embodiment or embodiments disclosed below, this invention is capable of other embodiments and of being practiced or being carried out in various ways. Thus, it is to be understood that the invention is not limited in its application to the details of construction and the arrangements of components set forth in the following description or illustrated in the drawings. If only one embodiment is described herein, the claims hereof are not to be limited to that embodiment. Moreover, the claims hereof are not to be read restrictively unless there is clear and convincing evidence manifesting a certain exclusion, restriction, or disclaimer.
There is shown in
Arithmetic unit 14,
For example, as shown in
A little endian transformation is accomplished in a similar fashion,
The big endian and little endian mapping shown in
Data register 92,
One application of this invention illustrating its great versatility and benefit is described with respect to
In contrast, for horizontal filtering,
Although in the examples thus far the invention has been explained in terms of the manipulation of bytes, this is not a necessary limitation of the invention. Other data elements, larger or smaller, could be used, and typically multiples of bytes are used. In one application, for example, two bytes or sixteen bits may be the data element. Thus, with the permutable address mode the efficiency of vector oriented processing, such as SIMD, is greatly enhanced. The permutations are particularly effective because they occur in the address mode external to the arithmetic unit. They thereby avoid pipeline stall and do not interfere with the operation of the arithmetic units. The conversion or permutation is done on the fly under the control of the DAG 16 and sequencer 18 during the address mode of operation, either loading or storing. The invention allows a unified data presentation which thereby unifies the problem solving. This not only reduces the programming effort but also the time to market for new equipment. This unified data presentation in the native domain of the processor also makes faster use of the arithmetic units and faster storing, as just explained. It makes for easy accommodation of big endian, little endian or mixed endian operations. It enables data in any form to be reordered to a native domain form of the machine for fast processing and, if desired, it can then be reordered back to its original form or some other form for use in subsequent arithmetic operations or for permanent or temporary storage in memory.
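As a sketch of the larger data element case mentioned above, the following C fragment applies the same map idea to two-byte (sixteen bit) elements, treating a sixty-four bit word as four halfword lanes; the function map_halfwords and the chosen values are assumptions of the illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: the data element is a 16-bit halfword rather than a
 * byte, and the map fields select halfword lanes of a 64-bit word. */
static uint64_t map_halfwords(uint64_t word, const uint8_t field[4])
{
    uint64_t out = 0;
    for (int dst = 0; dst < 4; ++dst) {
        uint64_t src = (word >> (16 * field[dst])) & 0xFFFFu;
        out |= src << (16 * dst);                 /* place halfword in its new lane */
    }
    return out;
}

int main(void)
{
    const uint8_t reverse[4] = {3, 2, 1, 0};      /* reverse the four halfword lanes */
    printf("%016llX\n",
           (unsigned long long)map_halfwords(0x1111222233334444ULL, reverse));
    /* prints 4444333322221111 */
    return 0;
}
```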
One implementation of a map circuit 54a, b, c is shown in
The method according to this invention is shown in
Although specific features of the invention are shown in some drawings and not in others, this is for convenience only as each feature may be combined with any or all of the other features in accordance with the invention. The words “including”, “comprising”, “having”, and “with” as used herein are to be interpreted broadly and comprehensively and are not limited to any physical interconnection. Moreover, any embodiments disclosed in the subject application are not to be taken as the only possible embodiments.
In addition, any amendment presented during the prosecution of the patent application for this patent is not a disclaimer of any claim element presented in the application as filed: those skilled in the art cannot reasonably be expected to draft a claim that would literally encompass all possible equivalents, many equivalents will be unforeseeable at the time of the amendment and are beyond a fair interpretation of what is to be surrendered (if anything), the rationale underlying the amendment may bear no more than a tangential relation to many equivalents, and/or there are many other reasons the applicant cannot be expected to describe certain insubstantial substitutes for any claim element amended.
Other embodiments will occur to those skilled in the art and are within the following claims.