1. Field of the Invention
The present invention relates in general to the field of computer systems, and in particular, to an apparatus and method for performing multi-dimensional computations based on an intra-add operation.
2. Description of the Related Art
To improve the efficiency of multimedia applications, as well as other applications with similar characteristics, a Single Instruction, Multiple Data (SIMD) architecture has been implemented in computer systems to enable one instruction to operate on several operands simultaneously, rather than on a single operand. In particular, SIMD architectures take advantage of packing many data elements within one register or memory location. With parallel hardware execution, multiple operations can be performed on separate data elements with one instruction, resulting in significant performance improvement.
Currently, the SIMD addition operation only performs “vertical” or inter-register addition, where pairs of data elements, for example, a first element Xn (where n is an integer) from one operand, and a second element Yn from a second operand, are added together. An example of such a vertical addition operation is shown in
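The vertical addition described above can be sketched in Python. This is a simulation of the behavior, not the patent's hardware: each packed operand is modeled as a list of data elements, and corresponding elements are summed pairwise.

```python
def vertical_add(x, y):
    """Element-wise ("vertical" or inter-register) addition of two
    packed operands, modeled as equal-length lists of data elements."""
    assert len(x) == len(y)
    return [xi + yi for xi, yi in zip(x, y)]

# Two 4-element packed operands: each Xn is added to the matching Yn.
vertical_add([1, 2, 3, 4], [10, 20, 30, 40])  # -> [11, 22, 33, 44]
```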
Although many applications currently in use can take advantage of such a vertical add operation, there are a number of important applications that require rearrangement of the data elements before the vertical add operation can be applied to realize the application.
For example, a matrix multiplication operation is shown below.
To multiply the matrix A by a vector X to obtain the resulting vector Y, instructions are used to: 1) store the columns of the matrix A as packed operands (this typically requires rearrangement of data because the coefficients of the matrix A are stored so that its rows, not its columns, are accessed as packed data operands); 2) store a set of operands that each have a different one of the vector X coefficients in every data element; and 3) use vertical multiplication, where each data element in the vector X (i.e., X4, X3, X2, X1) must first be multiplied with the data elements in each column (for example, [A14, A24, A34, A44]) of the matrix A. The results of the multiplication operations are then added together through three vertical add operations such as that shown in
Exemplary Code Based on Vertical-Add Operations:
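The original assembly listing is not reproduced here; the following is a hypothetical Python sketch of the vertical-add-based sequence, following steps 1 through 3 above: the row-stored matrix is first rearranged into packed columns, each X coefficient is broadcast into its own packed operand, and the products are reduced by repeated vertical adds.

```python
def matvec_vertical(a_rows, x):
    """Compute Y = A * X using only vertical operations.
    a_rows: matrix A stored by rows; x: the vector X."""
    n = len(x)
    # Step 1: rearrange the row-stored coefficients into packed columns.
    cols = [[a_rows[r][c] for r in range(n)] for c in range(n)]
    # Step 2: broadcast each X coefficient into its own packed operand.
    bcast = [[x[c]] * n for c in range(n)]
    # Step 3: vertical multiplies of each column with its broadcast
    # coefficient, then vertical adds of the partial products.
    prods = [[cols[c][r] * bcast[c][r] for r in range(n)] for c in range(n)]
    acc = prods[0]
    for p in prods[1:]:
        acc = [ai + pi for ai, pi in zip(acc, p)]  # vertical add
    return acc
```

For a 4x4 matrix this performs exactly three vertical adds, as the text describes.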
Accordingly, there is a need in the technology for providing an apparatus and method which efficiently performs multi-dimensional computations based on a “horizontal” or intra-add operation. There is also a need in the technology for a method and operation for increasing code density by eliminating the need for the rearrangement of data elements and the corresponding rearrangement operations.
A method and apparatus for including in a processor instructions for performing intra-add operations on packed data is described. In one embodiment, a processor is coupled to a memory. The memory has stored therein a first packed data. The processor performs operations on data elements in the first packed data to generate a plurality of data elements in a second packed data in response to receiving an instruction. At least two of the plurality of data elements in the second packed data store the result of an intra-add operation on data elements in the first packed data.
The invention is illustrated by way of example, and not limitation, in the figures. Like references indicate similar elements.
In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, it is understood that the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the invention.
One aspect of the present invention is a processor including instructions for performing horizontal or intra-addition operations on packed data. In one embodiment, two pairs of data elements (e.g., X3 and X2, and X1 and X0) located within a single storage area (e.g., Source1) are added together using a horizontal or an intra-add operation. In an alternate embodiment, data elements from each of two storage areas (e.g., Source1 and Source2) are added and stored as data elements of a resulting packed data, as shown in FIG. 2.
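The two-source intra-add of FIG. 2 can be sketched as follows. This is a Python simulation assuming four data elements per operand: adjacent pairs within Source1 are summed, adjacent pairs within Source2 are summed, and the four sums form the result.

```python
def intra_add(src1, src2):
    """Horizontal (intra-register) add: sum adjacent pairs within each
    source operand; the result holds Source1's pair sums in its lower
    elements and Source2's pair sums in its upper elements."""
    def pair_sums(src):
        return [src[i] + src[i + 1] for i in range(0, len(src), 2)]
    return pair_sums(src1) + pair_sums(src2)

# [X0+X1, X2+X3, Y0+Y1, Y2+Y3]
intra_add([1, 2, 3, 4], [10, 20, 30, 40])  # -> [3, 7, 30, 70]
```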
Another aspect of the present invention involves a method and apparatus for performing matrix multiplication using a horizontal or intra-addition operation. In one embodiment, each 32-bit data element from a 1×2 vector is multiplied with corresponding 32-bit data elements from each row of a 2×2 matrix, as shown in
The operation of a further example of a matrix multiplication operation based on intra-add operations is shown in
A first pair of intra-add operations is then performed on the initial resulting data elements (IResult1+IResult2, IResult3+IResult4), as shown in
Although the examples illustrated in
Exemplary Code Based on Horizontal-Add Operations
PFADDM represents the Intra-add instruction of the present invention.
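The PFADDM assembly listing itself is not reproduced here; the following is a hypothetical Python sketch of the horizontal-add approach. Each row of the matrix A is multiplied element-wise with the vector X, and the products within each row are then reduced by repeated intra-add passes, with no rearrangement of A's row-stored coefficients.

```python
def matvec_intra(a_rows, x):
    """Compute Y = A * X using vertical multiplies followed by
    intra-add (horizontal) reductions within each row of products."""
    def intra_pairs(v):
        # One intra-add pass: sum adjacent pairs within the packed data.
        return [v[i] + v[i + 1] for i in range(0, len(v), 2)]
    result = []
    for row in a_rows:
        prods = [ai * xi for ai, xi in zip(row, x)]  # vertical multiply
        while len(prods) > 1:
            prods = intra_pairs(prods)  # intra-add reduction
        result.append(prods[0])
    return result
```

Because the rows are consumed as stored, the rearrangement operations needed by the vertical-add sequence are eliminated, which is the code-density benefit described above.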
Although the discussions above pertain to a horizontal-add or intra-add instruction, alternative embodiments could, in addition to or in place of the intra-add instruction, have an intra-subtract instruction or operation. In this case, one of a pair of data elements within a packed data is subtracted from the second element of the pair to accomplish the intra-subtract operation.
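By analogy, the intra-subtract variant can be sketched as below. The ordering within each pair (which element is subtracted from which) is left to the embodiment; this sketch assumes the lower element is subtracted from the higher.

```python
def intra_sub(src):
    """Horizontal subtract: within each adjacent pair of data elements,
    subtract the lower-indexed element from the higher-indexed one."""
    return [src[i + 1] - src[i] for i in range(0, len(src), 2)]

intra_sub([1, 4, 10, 3])  # -> [3, -7]
```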
In addition, although the discussions above pertain to packed operands that have four data elements, alternative embodiments may involve packed operands that have as few as two data elements (i.e., data elements that are double-wide).
As shown in
Execution unit 120 is used for executing instructions received by processor 110. In addition to recognizing instructions typically implemented in general purpose processors, execution unit 120 recognizes instructions in packed instruction set 122 for performing operations on packed data formats. Packed instruction set 122 includes instructions for supporting intra-add and multiply operations. In addition, packed instruction set 122 may also include other packed instructions.
Execution unit 120 is coupled to register file 130 by internal bus 160. Register file 130 represents a storage area on processor 110 for storing information, including data. It is understood that the aspects of the invention are the described intra-add instruction set and a code sequence for performing matrix multiplication on packed data. According to these aspects of the invention, the storage area used for storing the packed data is not critical. Execution unit 120 is coupled to cache 140 and decoder 150. Cache 140 is used to cache data and/or control signals (such as instructions) from, for example, main memory 104. Decoder 150 is used for decoding instructions received by processor 110 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 120 performs the appropriate operations. Decoder 150 may be implemented using any number of different mechanisms (e.g., a look-up table, a hardware implementation, a PLA, etc.).
Generally, a data element is an individual piece of data that is stored in a single register (or memory location) with other data elements of the same length. The number of data elements stored in a register is the number of bits supported by the packed data format (e.g., 64 bits for integer packed data) divided by the length in bits of a data element. While any number of packed data formats can be used,
Three integer packed data formats are illustrated in FIG. 6: packed byte 401, packed word 402, and packed doubleword 403. While in one embodiment, each of the packed data formats in
In one embodiment of the invention, the SRC1 register contains packed data (Source1), the SRC2 register contains packed data (Source2) and the DEST register will contain the result (Result) of performing the horizontal add instruction on Source1 and Source2. In the first step of the horizontal add instruction, one or more pairs of data elements from Source1 are summed together. Similarly, one or more pairs of data elements from Source2 are summed together. The results of the instruction are then stored in the DEST register.
The process S800 then advances to process step S804, where the decoder 150 accesses registers in register file 130, given the SRC1 602 and SRC2 603 addresses. Register file 130 provides the execution unit 120 with the packed data stored in the SRC1 602 register (Source1) and the packed data stored in the SRC2 603 register (Source2).
The process S800 proceeds to process step S806, where the decoder 150 enables the execution unit 120 to perform the instruction. Next, the process S800 performs the following series of steps, as shown in process step S808 and FIG. 2. Source1 bits thirty-one through zero are added to Source1 bits sixty-three through thirty-two, generating a first 32-bit result (Result[31:0]). Source1 bits ninety-five through sixty-four are added to Source1 bits one hundred-and-twenty-seven through ninety-six, generating a second 32-bit result (Result[63:32]). Source2 bits thirty-one through zero are added to Source2 bits sixty-three through thirty-two, generating a third 32-bit result (Result[95:64]). Source2 bits ninety-five through sixty-four are added to Source2 bits one hundred-and-twenty-seven through ninety-six, generating a fourth 32-bit result (Result[127:96]).
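The bit-level behavior described in process step S808 can be modeled as follows. This sketch treats each 128-bit source as a Python integer, extracts the 32-bit fields by shifting and masking, sums adjacent fields modulo 2**32 (a simplifying assumption about overflow; saturating variants are also possible), and packs the four sums into the 128-bit result.

```python
MASK32 = (1 << 32) - 1

def field(v, lo):
    """Extract the 32-bit field whose least-significant bit is `lo`."""
    return (v >> lo) & MASK32

def intra_add_128(src1, src2):
    """Model of the intra-add instruction on two 128-bit operands."""
    r0 = (field(src1, 0) + field(src1, 32)) & MASK32   # Result[31:0]
    r1 = (field(src1, 64) + field(src1, 96)) & MASK32  # Result[63:32]
    r2 = (field(src2, 0) + field(src2, 32)) & MASK32   # Result[95:64]
    r3 = (field(src2, 64) + field(src2, 96)) & MASK32  # Result[127:96]
    return r0 | (r1 << 32) | (r2 << 64) | (r3 << 96)
```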
The process S800 advances to process step S810, where the results of the intra-add instruction are stored in DEST. The process S800 then terminates. Of course, the method of
In one embodiment, the intra-add instructions can execute on multiple data elements in the same number of clock cycles as an inter-add operation on unpacked data. To achieve execution in the same number of clock cycles, parallelism is used.
The intra-adder 930 receives inputs from Source1[127:0], Source2[127:0], and Enable 920. The intra-adder 930 includes four adder circuits 932, 934, 936 and 938. Adder 932 receives inputs from Source2[127:64], adder 934 receives inputs from Source2[63:0], adder 936 receives inputs from Source1[127:64], and adder 938 receives inputs from Source1[63:0]. When enabled, the adders 932, 934, 936 and 938 each sum their respective pair of 32-bit inputs and generate a 32-bit output. The results of the addition by adder 932 (i.e., Result[127:96]), by adder 934 (i.e., Result[95:64]), by adder 936 (i.e., Result[63:32]), and by adder 938 (i.e., Result[31:0]) are combined into the 128-bit Result and communicated to the Result Register 940.
In one embodiment, the computer system 100 shown in
The intra-add operation facilitates the efficient performance of multi-dimensional computations. It further increases code density by eliminating the need for the rearrangement of data elements and the corresponding rearrangement operations.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application is a continuation and claims the benefit of application Ser. No. 09/053,401, filed Mar. 31, 1998, now U.S. Pat. No. 6,418,529.
Number | Name | Date | Kind |
---|---|---|---|
3711692 | Batcher | Jan 1973 | A |
3723715 | Chen et al. | Mar 1973 | A |
4161784 | Cushing et al. | Jul 1979 | A |
4189716 | Krambeck | Feb 1980 | A |
4393468 | New | Jul 1983 | A |
4418383 | Doyle et al. | Nov 1983 | A |
4498177 | Larson | Feb 1985 | A |
4630192 | Wassel et al. | Dec 1986 | A |
4707800 | Montrone et al. | Nov 1987 | A |
4771379 | Ando et al. | Sep 1988 | A |
4785393 | Chu et al. | Nov 1988 | A |
4785421 | Takahashi et al. | Nov 1988 | A |
4901270 | Galbi et al. | Feb 1990 | A |
4989168 | Kuroda et al. | Jan 1991 | A |
5095457 | Jeong | Mar 1992 | A |
5187679 | Vassiliadis et al. | Feb 1993 | A |
5201056 | Daniel et al. | Apr 1993 | A |
5327369 | Ashkenazi | Jul 1994 | A |
5339447 | Balmer | Aug 1994 | A |
5390135 | Lee et al. | Feb 1995 | A |
5418736 | Widigen et al. | May 1995 | A |
5442799 | Murakami et al. | Aug 1995 | A |
5448703 | Amini et al. | Sep 1995 | A |
5517626 | Archer et al. | May 1996 | A |
5530661 | Garbe et al. | Jun 1996 | A |
5537601 | Kimura et al. | Jul 1996 | A |
5586070 | Purcell | Dec 1996 | A |
5677862 | Peleg et al. | Oct 1997 | A |
5678009 | Bains et al. | Oct 1997 | A |
5721697 | Lee | Feb 1998 | A |
5721892 | Peleg et al. | Feb 1998 | A |
5815421 | Dulong et al. | Sep 1998 | A |
5819117 | Hansen | Oct 1998 | A |
5822232 | Dulong et al. | Oct 1998 | A |
5859790 | Sidwell | Jan 1999 | A |
5862067 | Mennemeier et al. | Jan 1999 | A |
5875355 | MacKenzie et al. | Feb 1999 | A |
5880984 | Burchfiel et al. | Mar 1999 | A |
5880985 | Makineni et al. | Mar 1999 | A |
5883824 | Lee et al. | Mar 1999 | A |
5887186 | Nakanishi | Mar 1999 | A |
5901301 | Matsuo et al. | May 1999 | A |
5918062 | Oberman et al. | Jun 1999 | A |
5983257 | Dulong et al. | Nov 1999 | A |
6006316 | Dinkjian | Dec 1999 | A |
6014684 | Hoffman | Jan 2000 | A |
6014735 | Chennupaty et al. | Jan 2000 | A |
6041404 | Roussel et al. | Mar 2000 | A |
6115812 | Abdallah et al. | Sep 2000 | A |
6122725 | Roussel et al. | Sep 2000 | A |
6211892 | Huff et al. | Apr 2001 | B1 |
6212618 | Roussel | Apr 2001 | B1 |
6230253 | Roussel et al. | May 2001 | B1 |
6230257 | Roussel et al. | May 2001 | B1 |
6288723 | Huff et al. | Sep 2001 | B1 |
Number | Date | Country |
---|---|---|
0 318 957 | Nov 1998 | EP |
WO 9723821 | Jul 1997 | WO |
Number | Date | Country | |
---|---|---|---|
20030050941 A1 | Mar 2003 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09053401 | Mar 1998 | US |
Child | 10193645 | US |