Claims
- 1. A data processor comprising:
an instruction execution pipeline comprising N processing stages; and
an instruction issue unit capable of issuing into said instruction execution pipeline instructions fetched from an instruction cache associated with said data processor, each of said fetched instructions comprising from one to S syllables, said instruction issue unit comprising:
a first buffer comprising S storage locations capable of receiving and storing said one to S syllables associated with said fetched instructions, each of said S storage locations capable of storing one of said one to S syllables of each fetched instruction;
a second buffer comprising S storage locations capable of receiving and storing said one to S syllables associated with said fetched instructions, each of said S storage locations capable of storing one of said one to S syllables of each fetched instruction; and
a controller capable of determining if a first one of said S storage locations in said first buffer is full, wherein said controller, in response to a determination that said first one of said S storage locations is full, causes a corresponding syllable in an incoming fetched instruction to be stored in a corresponding one of said S storage locations in said second buffer.
- 2. The data processor as set forth in claim 1 wherein S=4.
- 3. The data processor as set forth in claim 1 wherein S=8.
- 4. The data processor as set forth in claim 1 wherein S is a multiple of four.
- 5. The data processor as set forth in claim 1 wherein each of said one to S syllables comprises 32 bits.
- 6. The data processor as set forth in claim 1 wherein each of said one to S syllables comprises 16 bits.
- 7. The data processor as set forth in claim 1 wherein each of said one to S syllables comprises 64 bits.
- 8. The data processor as set forth in claim 1 wherein said controller is capable of determining when all of the syllables in one of said fetched instructions are present in said first buffer, wherein said controller, in response to a determination that all of said syllables are present, causes all of said syllables to be transferred from said first buffer to said instruction execution pipeline.
- 9. The data processor as set forth in claim 8 wherein said controller is capable of determining if a syllable in said first one of said S storage locations in said first buffer has been transferred from said first buffer to said instruction execution pipeline, wherein said controller, in response to a determination that said syllable has been transferred, causes said corresponding syllable stored in said corresponding one of said S storage locations in said second buffer to be transferred to said first one of said S storage locations in said first buffer.
- 10. The data processor as set forth in claim 9 further comprising a switching circuit controlled by said controller and operable to transfer syllables from said second buffer to said first buffer.
- 11. A processing system comprising:
a data processor;
a memory coupled to said data processor;
a plurality of memory-mapped peripheral circuits coupled to said data processor for performing selected functions in association with said data processor, wherein said data processor comprises:
an instruction execution pipeline comprising N processing stages; and
an instruction issue unit capable of issuing into said instruction execution pipeline instructions fetched from an instruction cache associated with said data processor, each of said fetched instructions comprising from one to S syllables, said instruction issue unit comprising:
a first buffer comprising S storage locations capable of receiving and storing said one to S syllables associated with said fetched instructions, each of said S storage locations capable of storing one of said one to S syllables of each fetched instruction;
a second buffer comprising S storage locations capable of receiving and storing said one to S syllables associated with said fetched instructions, each of said S storage locations capable of storing one of said one to S syllables of each fetched instruction; and
a controller capable of determining if a first one of said S storage locations in said first buffer is full, wherein said controller, in response to a determination that said first one of said S storage locations is full, causes a corresponding syllable in an incoming fetched instruction to be stored in a corresponding one of said S storage locations in said second buffer.
- 12. The processing system as set forth in claim 11 wherein S=4.
- 13. The processing system as set forth in claim 11 wherein S=8.
- 14. The processing system as set forth in claim 11 wherein S is a multiple of four.
- 15. The processing system as set forth in claim 11 wherein each of said one to S syllables comprises 32 bits.
- 16. The processing system as set forth in claim 11 wherein each of said one to S syllables comprises 16 bits.
- 17. The processing system as set forth in claim 11 wherein each of said one to S syllables comprises 64 bits.
- 18. The processing system as set forth in claim 11 wherein said controller is capable of determining when all of the syllables in one of said fetched instructions are present in said first buffer, wherein said controller, in response to a determination that all of said syllables are present, causes all of said syllables to be transferred from said first buffer to said instruction execution pipeline.
- 19. The processing system as set forth in claim 18 wherein said controller is capable of determining if a syllable in said first one of said S storage locations in said first buffer has been transferred from said first buffer to said instruction execution pipeline, wherein said controller, in response to a determination that said syllable has been transferred, causes said corresponding syllable stored in said corresponding one of said S storage locations in said second buffer to be transferred to said first one of said S storage locations in said first buffer.
- 20. The processing system as set forth in claim 19 further comprising a switching circuit controlled by said controller and operable to transfer syllables from said second buffer to said first buffer.
- 21. For use in a data processor comprising an instruction execution pipeline comprising N processing stages, a method of issuing into the instruction execution pipeline instructions fetched from an instruction cache associated with the data processor, each of the fetched instructions comprising from one to S syllables, the method comprising the steps of:
storing in a first buffer comprising S storage locations the one to S syllables associated with the fetched instructions, each of the S storage locations capable of storing one of the one to S syllables of each fetched instruction;
determining if a first one of the S storage locations in the first buffer is full; and
in response to a determination that the first one of the S storage locations is full, storing a corresponding syllable in an incoming fetched instruction in a corresponding one of S storage locations in a second buffer, wherein the second buffer comprises S storage locations, each of the S storage locations in the second buffer capable of storing one of the one to S syllables of each fetched instruction.
- 22. The method as set forth in claim 21 wherein S is a multiple of four.
- 23. The method as set forth in claim 21 wherein each of the one to S syllables comprises one of: a) 16 bits, b) 32 bits, and c) 64 bits.
- 24. The method as set forth in claim 21 further comprising the steps of:
determining when all of the syllables in one of the fetched instructions are present in the first buffer; and
in response to a determination that all of the syllables are present, transferring all of the syllables from the first buffer to the instruction execution pipeline.
- 25. The method as set forth in claim 24 further comprising the steps of:
determining if a syllable in the first one of the S storage locations in the first buffer has been transferred from the first buffer to the instruction execution pipeline; and
in response to a determination that the syllable has been transferred, transferring the corresponding syllable stored in the corresponding one of the S storage locations in the second buffer to the first one of the S storage locations in the first buffer.
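The controller behavior recited in claims 1, 8-10 and 21-25 can be summarized in the following minimal C sketch. It is illustrative only and is not part of the claimed subject matter or the disclosed circuitry: it assumes the S=4, 32-bit-syllable embodiment of claims 2 and 5, and the type and function names (issue_unit_t, accept_syllable, try_issue, the dispatch callback) are hypothetical names chosen for the example.

```c
/* Minimal sketch of the double-buffered instruction issue unit of claims
 * 1, 8-10 and 21-25.  Assumes S = 4 (claim 2) and 32-bit syllables
 * (claim 5); all names are illustrative, not from the disclosure. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define S 4                          /* storage locations per buffer */

typedef uint32_t syllable_t;         /* one 32-bit syllable */

typedef struct {
    syllable_t slot[S];
    bool       full[S];              /* per-location occupancy flags */
} buffer_t;

typedef struct {
    buffer_t first;                  /* "first buffer"  of claim 1 */
    buffer_t second;                 /* "second buffer" of claim 1 */
} issue_unit_t;

/* Controller action for one incoming syllable: if its storage location in
 * the first buffer is already full, store it in the corresponding location
 * of the second buffer (claim 1); otherwise store it in the first buffer. */
static void accept_syllable(issue_unit_t *iu, unsigned idx, syllable_t syl)
{
    if (iu->first.full[idx]) {
        iu->second.slot[idx] = syl;
        iu->second.full[idx] = true;
    } else {
        iu->first.slot[idx] = syl;
        iu->first.full[idx] = true;
    }
}

/* When every syllable of the current instruction is present in the first
 * buffer, transfer it to the pipeline (claim 8), then let the switching
 * circuit refill the drained locations from the second buffer (claims 9
 * and 10).  Caller guarantees 1 <= nsyll <= S. */
static bool try_issue(issue_unit_t *iu, unsigned nsyll,
                      void (*dispatch)(const syllable_t *, unsigned))
{
    for (unsigned i = 0; i < nsyll; i++)
        if (!iu->first.full[i])
            return false;            /* instruction not yet complete */

    dispatch(iu->first.slot, nsyll); /* issue to the execution pipeline */

    for (unsigned i = 0; i < nsyll; i++) {
        if (iu->second.full[i]) {    /* refill from the second buffer */
            iu->first.slot[i] = iu->second.slot[i];
            iu->first.full[i] = true;
            iu->second.full[i] = false;
        } else {
            iu->first.full[i] = false;
        }
    }
    return true;
}

static void print_bundle(const syllable_t *syl, unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        printf("issue slot %u: 0x%08x\n", i, syl[i]);
}

int main(void)
{
    issue_unit_t iu = {0};

    accept_syllable(&iu, 0, 0x11111111u); /* older bundle, location 0 */
    accept_syllable(&iu, 1, 0x22222222u); /* older bundle, location 1 */
    accept_syllable(&iu, 0, 0x33333333u); /* location 0 full: parked in second buffer */

    try_issue(&iu, 2, print_bundle);      /* issues older bundle, refills location 0 */
    return 0;
}
```

In this sketch the second buffer acts as an overflow stage: a syllable of a newly fetched instruction whose storage location in the first buffer is still occupied waits in the second buffer, and the refill loop in try_issue plays the role of the switching circuit of claims 10 and 20.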
CROSS-REFERENCE TO RELATED APPLICATIONS
The present invention is related to those disclosed in the following United States Patent Applications:
[0001] 1) Ser. No. [Docket No. 00-BN-051], filed concurrently herewith, entitled “SYSTEM AND METHOD FOR EXECUTING VARIABLE LATENCY LOAD OPERATIONS IN A DATA PROCESSOR”;
[0002] 2) Ser. No. [Docket No. 00-BN-052], filed concurrently herewith, entitled “PROCESSOR PIPELINE STALL APPARATUS AND METHOD OF OPERATION”;
[0003] 3) Ser. No. [Docket No. 00-BN-053], filed concurrently herewith, entitled “CIRCUIT AND METHOD FOR HARDWARE-ASSISTED SOFTWARE FLUSHING OF DATA AND INSTRUCTION CACHES”;
[0004] 4) Ser. No. [Docket No. 00-BN-054], filed concurrently herewith, entitled “CIRCUIT AND METHOD FOR SUPPORTING MISALIGNED ACCESSES IN THE PRESENCE OF SPECULATIVE LOAD INSTRUCTIONS”;
[0005] 5) Ser. No. [Docket No. 00-BN-055], filed concurrently herewith, entitled “BYPASS CIRCUITRY FOR USE IN A PIPELINED PROCESSOR”;
[0006] 6) Ser. No. [Docket No. 00-BN-056], filed concurrently herewith, entitled “SYSTEM AND METHOD FOR EXECUTING CONDITIONAL BRANCH INSTRUCTIONS IN A DATA PROCESSOR”;
[0007] 7) Ser. No. [Docket No. 00-BN-057], filed concurrently herewith, entitled “SYSTEM AND METHOD FOR ENCODING CONSTANT OPERANDS IN A WIDE ISSUE PROCESSOR”;
[0008] 8) Ser. No. [Docket No. 00-BN-058], filed concurrently herewith, entitled “SYSTEM AND METHOD FOR SUPPORTING PRECISE EXCEPTIONS IN A DATA PROCESSOR HAVING A CLUSTERED ARCHITECTURE”;
[0009] 9) Ser. No. [Docket No. 00-BN-059], filed concurrently herewith, entitled “CIRCUIT AND METHOD FOR INSTRUCTION COMPRESSION AND DISPERSAL IN WIDE-ISSUE PROCESSORS”;
[0010] 10) Ser. No. [Docket No. 00-BN-066], filed concurrently herewith, entitled “SYSTEM AND METHOD FOR REDUCING POWER CONSUMPTION IN A DATA PROCESSOR HAVING A CLUSTERED ARCHITECTURE”.
[0011] The above applications are commonly assigned to the assignee of the present invention. The disclosures of these related patent applications are hereby incorporated by reference for all purposes as if fully set forth herein.