For the purpose of illustrating the invention, the drawings show aspects of one or more embodiments of the invention. However, it should be understood that the present invention is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:
Referring now to
System 10 includes a cache 12 for providing and storing instructions, a fetcher 14 with a data latch 15 for fetching instructions from the cache, a decoder 16 with a localized control cache (LCC) unit 30 for decoding instructions received from the fetcher, and an executor 18 with a data latch 19 for executing the instructions. System 10 also includes a writer 20 with a data latch 21 for writing the instructions back to the cache, and an LCC state machine 22 which tracks the address values of instructions and controls all the components of the system. All the components of system 10 discussed above are coupled via coupling circuitry (not shown) to allow communications and exchange of data and signals, as is well known in the art. Decoder 16 may also be referred to as a logic cone which performs the decoding functions. Data latches 15, 19 and 21 generally save data for only one cycle, with no data caching or storing capability. Cache 12 may also include a program counter register, an instruction register, and data registers (none of these registers are shown) for providing instructions to and storing instructions from system 10.
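By way of illustration only, the following Python sketch gives a behavioral model of the pipeline arrangement described above: four stages separated by single-cycle data latches, with an LCC state machine observing instruction addresses. All class, function, and variable names are assumptions made for clarity and are not part of the disclosure.

```python
# Illustrative behavioral model of system 10: fetch -> decode -> execute -> write,
# with single-cycle data latches between stages and an LCC state machine that
# tracks instruction addresses. Names are assumptions, not part of the disclosure.

class OneCycleLatch:
    """Models data latches 15, 19 and 21: holds a value for one cycle only."""
    def __init__(self):
        self.value = None

    def capture(self, value):
        self.value = value            # overwritten every cycle; no caching

class LCCStateMachine:
    """Tracks the address values of instructions (state machine 22)."""
    def __init__(self):
        self.address_trace = []

    def observe(self, address):
        self.address_trace.append(address)

class System10:
    def __init__(self, cache):
        self.cache = cache                        # cache 12 (a simple list here)
        self.fetch_latch = OneCycleLatch()        # data latch 15
        self.execute_latch = OneCycleLatch()      # data latch 19
        self.write_latch = OneCycleLatch()        # data latch 21
        self.state_machine = LCCStateMachine()    # LCC state machine 22

    def cycle(self, pc):
        """One simplified pass through the four pipeline stages."""
        instruction = self.cache[pc]              # fetcher 14
        self.fetch_latch.capture(instruction)
        self.state_machine.observe(pc)
        decoded = ("decoded", instruction)        # decoder 16 / logic cone
        self.execute_latch.capture(decoded)
        result = ("result", decoded)              # executor 18
        self.write_latch.capture(result)
        return result                             # writer 20 writes back

if __name__ == "__main__":
    system = System10(cache=["ld r1", "add r2", "st r3"])
    for pc in range(3):
        print(system.cycle(pc))
```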
Referring now to
Referring now to
System 100 performs in substantially the same manner as system 10, i.e., it performs the pipeline stages of fetching, decoding, executing and writing. However, each BEC unit 108 contains a plurality of shadow latches (not shown) that can store and cache instructions. Accordingly, system 100 can store a plurality of different loops in the plurality of BEC units 108 that can be accessed via state machine 114. BEC units 108 have a configuration similar to that of LCC units 30 and 130, as illustrated in
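As a rough illustration of this arrangement, the following Python sketch models several BEC units, each with its own bank of shadow latches, holding different cached loops selectable by state machine 114. The class names, the five-latch bank size, and the keying of loops by their starting address are assumptions made for clarity and are not taken from the drawings.

```python
# Illustrative model: several BEC units, each with its own bank of shadow
# latches, so that several distinct loops can be cached at once and selected
# by state machine 114. Keying loops by start address is an assumption.

class BECUnit:
    def __init__(self, num_shadow_latches=5):
        self.shadow_latches = [None] * num_shadow_latches   # cached decode values
        self.loop_start = None

    def store_loop(self, loop_start, decoded_values):
        self.loop_start = loop_start
        for i, value in enumerate(decoded_values[:len(self.shadow_latches)]):
            self.shadow_latches[i] = value

class StateMachine114:
    def __init__(self, bec_units):
        self.bec_units = bec_units

    def find_cached_loop(self, loop_start):
        """Return the BEC unit holding the loop beginning at loop_start, if any."""
        for unit in self.bec_units:
            if unit.loop_start == loop_start:
                return unit
        return None

bec_units = [BECUnit() for _ in range(4)]
bec_units[0].store_loop(0x40, ["ctl_N3", "ctl_N4", "ctl_N5"])
state_machine = StateMachine114(bec_units)
print(state_machine.find_cached_loop(0x40).shadow_latches[:3])
```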
Additionally, a circular queue structure 124 is provided on each element of the pipeline stages (e.g., on fetcher 104, power efficient decoder 106, BEC unit 108, and writer 110) for communication with state machine 114, which uses circular queue control logic, described in more detail below, to operate the processor with localized caching in the plurality of shadow latches 38 in each BEC unit. The circular queue control logic allows a localized copy of the instructions, generally the decoded instructions, to replace the random logic generation of the same control signals. The circular queue control logic utilizes a start pointer, a stop pointer, a flush, a partial flush, and a don't care state to detect and retrieve loops, as is well known in the art. The instruction loop may be user-defined or function-dependent upon execution, where the same sequences of instructions are performed.
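For illustration only, the following Python sketch models the circular queue bookkeeping described above, with a start pointer, a stop pointer, full and partial flush operations, and a don't care state. The method names and the treatment of don't care entries as slots skipped during replay are assumptions made for clarity, not a claimed implementation.

```python
# Illustrative model of the circular queue control logic: a start pointer, a
# stop pointer, full and partial flush operations, and a don't-care state for
# entries that should not be replayed. Method names are assumptions.

DONT_CARE = object()   # sentinel marking a slot in the don't-care state

class CircularQueue:
    def __init__(self, depth):
        self.entries = [None] * depth
        self.start = 0          # start pointer: first cached entry of the loop
        self.stop = 0           # stop pointer: one past the last cached entry
        self.depth = depth

    def push(self, decoded):
        """Cache one decoded control word, wrapping around the queue."""
        self.entries[self.stop % self.depth] = decoded
        self.stop += 1

    def mark_dont_care(self, index):
        """Put one cached slot into the don't-care state so replay skips it."""
        self.entries[index % self.depth] = DONT_CARE

    def replay(self):
        """Yield cached entries from start to stop, skipping don't-care slots."""
        for i in range(self.start, self.stop):
            entry = self.entries[i % self.depth]
            if entry is not DONT_CARE:
                yield entry

    def flush(self):
        """Full flush: discard everything and reset both pointers."""
        self.entries = [None] * self.depth
        self.start = self.stop = 0

    def partial_flush(self, keep_last):
        """Partial flush: keep only the most recent keep_last entries."""
        self.start = max(self.start, self.stop - keep_last)

queue = CircularQueue(depth=5)
for word in ["ctl_N3", "ctl_N4", "ctl_N5"]:
    queue.push(word)
print(list(queue.replay()))      # ['ctl_N3', 'ctl_N4', 'ctl_N5']
```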
Operation of the circular queue control logic for power efficient decoding performed by LCC state machine 22 is illustrated in a flowchart in
Control logic detects the return of a code sequence by detecting any branch/jump instructions. When the conditional values are true, a loop will occur and is detected again at step 54. Decoder 16 is then deactivated, and the sequence is now processed via state machine 22 through multiplexer 34, which outputs control to the plurality of shadow latches 38 to reuse instruction streams or loops at step 58. The decode values are now retrieved from the plurality of shadow latches 38, and the previous control inputs at the start of the decode cycle are locked down, or clock gated. For the entire loop control sequence, no decode functions are performed, resulting in zero AC power for the skipped decode cycles. The process may continue at step 62, when the caching stops; the process can then return to step 52 or 54 and repeat, or go to the reset mode at step 50.
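For illustration only, the following Python sketch shows the effect of step 58: once the loop is cached, the decode values are replayed from the shadow latches and the conventional decode path runs for zero cycles. The function names and the representation of control words as tuples are assumptions made for clarity.

```python
# Illustrative model of step 58: when the loop is cached, the decode values are
# replayed from the shadow latches and the decoder is never exercised, which is
# what yields zero AC power for the skipped decode cycles. Names are assumptions.

def decode_conventionally(instruction):
    """Stand-in for the random-logic decoder 16 (the logic cone)."""
    return ("ctl", instruction)

def run_loop(loop_instructions, shadow_latches, loop_cached):
    """Return the control words for one pass of the loop and the decode count."""
    decoder_active_cycles = 0
    control_words = []
    for i, instruction in enumerate(loop_instructions):
        if loop_cached:
            control_words.append(shadow_latches[i])   # step 58: replay from latches
        else:
            decoder_active_cycles += 1                # conventional decode path
            control_words.append(decode_conventionally(instruction))
    return control_words, decoder_active_cycles

loop = ["N3", "N4", "N5"]
cached = [decode_conventionally(i) for i in loop]     # filled during step 56
print(run_loop(loop, cached, loop_cached=True))       # decoder runs for 0 cycles
```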
An overflow condition is where the cache depth is greater than the loop depth. Conversely, an underflow condition exists when the loop depth is greater than the cache depth. The overflow condition occurs when the loop has been completely stored while some of shadow latches 38 remain open or unused. When state machine 22 uses a history/event trace to detect a request for the loop stored in shadow latches 38, the state machine commands the shadow latches to reuse the instruction streams at step 58. Thus, latch 36 is disabled and bypassed, and the instructions are obtained from latch 38a through multiplexer 34 to latch 32, then from latch 38b through multiplexer 34 to latch 32, and so on. Additionally, during step 58, state machine 22 will deactivate latch 36, decoder 16, executor 18, and writer 20.
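The following Python sketch, offered as illustration only, distinguishes the overflow and underflow conditions by comparing loop depth with cache depth and shows the replay order for an overflow loop (latch 38a, then 38b, and so on, through multiplexer 34). The handling of an exact fit and the list representation of the latch bank are assumptions made for clarity.

```python
# Illustrative model of the overflow case: the cache depth (number of shadow
# latches) exceeds the loop depth, so the whole loop fits and the remaining
# latches stay unused. The "exact fit" label below is an added assumption.

def classify(loop_depth, cache_depth):
    """Classify the relationship between loop size and shadow-latch count."""
    if cache_depth > loop_depth:
        return "overflow"       # whole loop cached; some latches remain unused
    if loop_depth > cache_depth:
        return "underflow"      # loop larger than the shadow-latch bank
    return "exact fit"          # loop exactly fills the shadow latches

def replay_overflow(shadow_latches, loop_depth):
    """Replay order for an overflow loop: 38a, 38b, ... through multiplexer 34."""
    # Only the first loop_depth latches hold valid decode values.
    return [shadow_latches[i] for i in range(loop_depth)]

latch_bank = ["ctl_N3", "ctl_N4", "ctl_N5", None, None]   # 38a..38e, two unused
print(classify(loop_depth=3, cache_depth=5))               # 'overflow'
print(replay_overflow(latch_bank, loop_depth=3))
```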
In underflow conditions, where the instruction stages or steps (loop depth) exceed the number of queues (cache depth) available in shadow latches 38, state machine 22 selects an underflow path for those cycles, and those excess cycles or instructions are not cached. State machine 22 detects a request for the loop stored in shadow latches 38, and the state machine commands the shadow latches to reuse the instruction streams at step 58. During step 58, state machine 22 will deactivate decoder 16, executor 18, and writer 20, as previously discussed. Shadow latches 38 will perform the stored instructions, and then the excess instructions (non-shadowed cycles) will be handled by the last shadow latch 38e, which may be designated as an underflow latch and has been designated by state machine 22 to handle all the remaining instruction steps of the loop. In underflow conditions, the excess instructions are decoded conventionally; that is, the non-shadowed cycles activate decoder 16, 106 or the logic cone to decode the function. When the loop returns to the start, the contents of shadow latches 38 are used, until the excess cycles are reached.
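As an illustration of one plausible reading of this underflow hand-off, the following Python sketch replays the first cached cycles from the shadow latches while the excess, non-shadowed cycles pass through the designated underflow latch to the conventional decoder. Reserving the last latch position for the underflow role and the function names used here are assumptions made for clarity.

```python
# Illustrative model of the underflow case: the first cached cycles replay from
# shadow latches 38a..38d, while the excess (non-shadowed) cycles pass through
# the designated underflow latch 38e to the conventional decoder. Reserving the
# last latch position for the underflow role is an assumption for clarity.

def decode_conventionally(instruction):
    """Stand-in for decoder 16 (the logic cone), re-activated for excess cycles."""
    return ("ctl", instruction)

def run_underflow_loop(loop_instructions, cached_values, bank_size=5):
    """Replay cached values, then decode the non-shadowed cycles conventionally."""
    shadowed = min(len(cached_values), bank_size - 1)   # 38e reserved as underflow latch
    control_words = []
    for i, instruction in enumerate(loop_instructions):
        if i < shadowed:
            control_words.append(cached_values[i])      # decoder stays clock gated
        else:
            # Non-shadowed cycle: underflow latch 38e hands the instruction to
            # the decoder, which runs only for these excess instructions.
            control_words.append(decode_conventionally(instruction))
    return control_words

loop = ["N3", "N4", "N5", "N6", "N7", "N8", "N9"]       # loop depth 7
cached = [("ctl", i) for i in loop[:4]]                  # values held in 38a..38d
print(run_underflow_loop(loop, cached))
```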
While the preceding discussion of operation was provided with respect to system 10 having LCC units 30, those skilled in the art will appreciate that the description also applies to other embodiments of the invention featuring LCC units 130 or BEC units 108.
Referring now to
Chart 74 illustrates the process over time according to one embodiment of the present disclosure. Chart 74 depicts an overflow condition where the queue depth has already been configured, as may occur in step 52. At step 54, state machine 22, 114 detects a loop and begins caching, as occurs at step 56. In this illustrative example, three instructions, N3, N4, and N5, make up the loop. Loops with a greater or lesser number of instructions can be utilized while still keeping within the scope and spirit of the present invention. At the end of the caching, state machine 22, 114 detects that the loop has been requested, and thus the loop, cached in the plurality of shadow latches 38, is activated, as indicated in step 58. In the illustrative embodiment of
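For illustration only, the following Python sketch walks through the chart 74 scenario for the three-instruction loop N3, N4, N5: the first pass is decoded and cached (step 56), and later passes are replayed from the shadow latches with the decoder gated (step 58). The number of passes and the printed layout are arbitrary choices made for illustration, not taken from the drawings.

```python
# Illustrative walk-through of the chart 74 scenario: the three-instruction
# loop N3, N4, N5 is decoded and cached on its first pass (step 56), then
# replayed from the shadow latches on later passes (step 58) with the decoder
# gated. The number of passes shown is an arbitrary choice for illustration.

loop = ["N3", "N4", "N5"]
shadow_latches = []
timeline = []

for pass_number in range(3):                  # three trips around the loop
    for instruction in loop:
        if pass_number == 0:
            shadow_latches.append(("ctl", instruction))    # step 56: cache
            timeline.append((instruction, "decoder active"))
        else:
            _ = shadow_latches[loop.index(instruction)]    # step 58: replay
            timeline.append((instruction, "decoder gated"))

for instruction, decoder_state in timeline:
    print(f"{instruction}: {decoder_state}")
```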
Chart 74 would operate in a similar manner for underflow conditions. Thus, the stored instructions would be executed in the same manner, with underflow latch 38e handling the conventionally decoded remaining steps or stages in the loop.
Exemplary embodiments have been disclosed above and illustrated in the accompanying drawings. It will be understood by those skilled in the art that various changes, omissions and additions may be made to that which is specifically disclosed herein without departing from the spirit and scope of the present disclosure.