The present invention relates to the field of computer processors. More particularly, it relates to the out-of-order issuance and execution of loop instructions in a processor, where the processor may be a general-purpose microprocessor, a digital-signal processor, a single-instruction-multiple-data processor, a vector processor, a graphics processor, or another type of processor which executes instructions.
Processors have become increasingly complex, chasing small increments in performance at the expense of power consumption and semiconductor chip area. The approach in out-of-order (OOO) superscalar microprocessors has remained basically the same for the last 25-30 years, with excessive power dissipation arising from dynamic scheduling of instructions for execution from reservation stations or central windows. Designing an OOO superscalar microprocessor has consequently become a huge undertaking. Hundreds of instructions are issued to the execution pipeline, where their data dependencies are resolved and the instructions are arbitrated for execution by a large number of functional units. The result data from the functional units are again arbitrated for the write buses to write back to the register file. If the data cannot be written back to the register file, then the result data are kept in temporary registers and a complicated stalling procedure is performed for the execution pipeline.
Execution of loops is important in many applications where the number of iterations can be in the hundreds or thousands. In dynamic scheduling, it is difficult to track a loop in the execution pipeline. The loop may contain load instructions with load data coming from external memory, causing the microprocessor to stall and thereby degrading performance. The load data from external memory often result from a data cache miss or from non-cacheable load data. Another impact on performance is that the latency of fetching external data is large and, when external memory requests cannot be pipelined, the number of outstanding requests is limited.
Thus, there is a need for an OOO superscalar microprocessor which efficiently executes loops, consumes less power, has a simpler design, and is scalable with consistently high performance.
The disclosed embodiments provide a processor with a time counter and a method for statically dispatching instructions to an execution pipeline with preset execution times based on a time count from the counter.
An approach to microprocessor design employs static scheduling of instructions. A disclosed static scheduling algorithm is based on the assumption that a new instruction has a perfect view of all previous instructions in the execution pipeline, and thus it can be scheduled for execution at an exact time in the future, e.g., with reference to a time count from a counter. Assuming an instruction has 2 source operands and 1 destination operand, the instruction can be executed out-of-order when the following conditions are met: (1) no data dependency, (2) availability of read buses to read data from the register file, (3) availability of a functional unit to execute the instruction, and (4) availability of a write bus to write result data back to the register file.
All the above requirements are associated with time: (1) a time when all data dependencies are resolved, (2) a time at which the read buses are available to read source operands from a register file, (3) a subsequent time at which the functional unit is available to execute the instruction, and (4) a further subsequent time at which the write bus is available to write result data back to the register file.
In one embodiment a time counter increments every clock cycle and the resulting count is used to statically schedule instruction execution. Instructions have known throughput and latency times, and thus can be scheduled for execution based on the time count. For example, a multiply instruction with a throughput time of 1 and a latency time of 3 can be scheduled to execute when its data dependency is resolved. If the time count is 5 and the multiply has no data dependency at time 8, then the available read buses are scheduled to read data from the register file at time 8, the available multiply unit is scheduled to execute the multiply instruction at time 9, and the available write bus is scheduled to write the result data from the multiply unit to the register file at time 11. The multiply instruction is dispatched to the multiply execution queue with these preset execution times. The read buses, the multiply unit, and the write bus are scheduled to be busy at the preset times. The maximum time count is designed to accommodate the largest future time at which execution of an instruction can be scheduled. In some embodiments, the maximum time count is 64 and no instruction can be scheduled to execute more than 64 cycles in the future. In another embodiment, a superscalar microprocessor with quad-issue can have 256 instructions in the execution pipeline. With static scheduling of instructions based on the time count, the complexity of dynamic scheduling is eliminated, the arbitration of resources is reduced, and the hundreds of comparators for data dependency are eliminated. The basic out-of-order execution of instructions operates similarly to that of a conventional out-of-order processor, but static scheduling of instructions with a time count is more efficient. The elimination of the extra components means the processor consumes less power. Instructions are efficiently executed out-of-order at preset times, retaining performance comparable to traditional dynamic approaches. The number of issued instructions is scalable from scalar to superscalar.
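The scheduling arithmetic in this example can be stated directly. The following Python sketch is a minimal model of time-count-based scheduling, assuming a single dependency-ready time per instruction and execution one count after the operand read; it is an illustration under those assumptions, not the patented circuit:

```python
MAX_TIME = 64  # scheduling horizon: nothing is scheduled more than 64 cycles ahead

def schedule(time_count, dependency_ready_time, latency):
    """Return (read_time, execute_time, write_time) for one instruction."""
    # Operands can be read once all data dependencies are resolved,
    # and no earlier than the next time count.
    read_time = max(time_count + 1, dependency_ready_time)
    execute_time = read_time + 1       # functional unit starts one count later
    write_time = read_time + latency   # result is written back after the latency
    assert write_time - time_count <= MAX_TIME, "beyond the scheduling horizon"
    return read_time, execute_time, write_time

# The multiply example from the text: at time count 5, a multiply with
# latency 3 whose data dependency resolves at time 8 reads at 8,
# executes at 9, and writes back at 11.
print(schedule(5, 8, 3))  # -> (8, 9, 11)
```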
A basic block is defined herein as a code sequence with no branches in except to the entry, and no branches out except at the exit. A loop is defined as a basic block where the branch target address of the exit point is the same as the entry address of the same basic block. A loop may be detected as an out-of-order execution loop when the loop iterations can be executed out-of-order. Certain types of loops can be detected for out-of-order execution, for example, accumulation of load data, counting of matching data, or copying of memory data, in which the ordering of the data is not required. For accumulation loops, a load instruction can fetch data out-of-order as long as all data are accumulated. In one embodiment, processor resources are reserved for out-of-order loop execution and for subsequent instructions after the loop, which can be executed concurrently with the loop. In another embodiment, a data processing loop is stalled in the instruction issue unit to wait for load data. For execution, a loop is broken into two smaller loops: one to load data and one to process data. In one embodiment, a load data loop fetches data out-of-order and sends the load data to the data processing loop.
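As a software analogy, the following Python sketch models the split into a load data loop and a data processing loop; the shuffle is an assumption standing in for out-of-order load completion, and the final check confirms that accumulation is insensitive to that ordering:

```python
import random

def load_data_loop(memory, indices):
    # Loads may complete in any order (e.g., cache hits overtake misses);
    # a shuffle stands in for out-of-order load completion.
    order = list(indices)
    random.shuffle(order)
    for i in order:
        yield memory[i]          # send each load datum to the processing loop

def data_processing_loop(load_data):
    total = 0
    for value in load_data:      # accumulation is order-independent
        total += value
    return total

memory = [3, 1, 4, 1, 5, 9, 2, 6]
assert data_processing_loop(load_data_loop(memory, range(len(memory)))) == sum(memory)
```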
Aspects of the present invention are best understood from the following description when read with the accompanying figures.
The following description provides different embodiments for implementing aspects of the present invention. Specific examples of components and arrangements are described below to simplify the explanation. These are merely examples and are not intended to be limiting. For example, the description of a first component coupled to a second component includes embodiments in which the two components are directly connected, as well as embodiments in which an additional component is disposed between the first and second components. In addition, the present disclosure repeats reference numerals in various examples. This repetition is for the purpose of clarity and does not in itself dictate a relationship between the various embodiments.
In one embodiment a processor is provided, typically implemented as a microprocessor, that schedules instructions to be executed at a preset time based on a time count from a time counter. In such a processor the instructions are scheduled to be executed using the known throughput and latency of each instruction. For example, in one embodiment, the ALU instructions have throughput and latency times of 1, the multiply instructions have a throughput time of 1 and a latency time of 2, the load instructions have a throughput time of 1 and a latency time of 3 (based on a data cache hit), and the divide instructions have throughput and latency times of 32.
According to an embodiment the microprocessor 10 also includes a time counter unit 90 which stores a time count incremented, in one embodiment, every clock cycle. The time counter unit 90 is coupled to the clock unit 15 and uses the “clk” signal to increment the time count.
In one embodiment the time count represents the time in clock cycles when an instruction in the instruction issue unit 55 is scheduled for execution. For example, if the current time count is 5 and an instruction is scheduled to be executed in 22 cycles, then the instruction is sent to the execution queue 70 with the execution time count of 27. When the time count increments to 26, the execution queue 70 issues the instruction to the functional unit 75 for execution in the next cycle (time count 27). The time counter unit 90 is coupled to the register scoreboard 40, the time-resource matrix 50, the read control 62, the write control 64, and the plurality of execution queues 70. The register scoreboard 40 resolves data dependencies in the instructions. The time-resource matrix 50 checks availability of the various resources, which in one embodiment include the read buses 66, the functional units 75, the load-store unit 80, and the write buses 68. The read control unit 62, the write control unit 64, and the execution queues 70 receive the scheduled times from the instruction issue unit 55. The read control unit 62 is set to read the source operands from the register file 60 on specific read buses 66 at a preset time. The write control unit 64 writes the result data from a functional unit 75, the load-store unit 80, or the data cache 85 to the register file 60 on a specific write bus 68 at a preset time. The execution queue 70 is set to dispatch an instruction to a functional unit 75 or the load-store unit 80 at a preset time. In each case, the preset time is the time set up by the decode/issue unit. The preset time is a future time based on the time count, so when the time counter 90 counts up to the preset time, the specified action happens, where the specified action is reading data from the register file 60, writing data to the register file 60, or issuing an instruction to a functional unit 75 for execution. The instruction issue unit 55 determines that an instruction is free of data dependencies and that the resources are available at the preset times for the instruction to be executed in the execution pipeline.
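The following toy model illustrates the preset-time behavior described above, using the same numbers (time count 5, execution 22 cycles later at count 27, issue at count 26); the class and field names are illustrative, not the patent's:

```python
class ExecutionQueueEntry:
    def __init__(self, name, execute_time):
        self.name = name
        self.execute_time = execute_time   # preset time from the issue unit

    def tick(self, time_count):
        # Issue to the functional unit one count before the preset time,
        # so execution happens exactly at the preset time count.
        if time_count == self.execute_time - 1:
            print(f"{self.name} issued; executes at time count {self.execute_time}")

entry = ExecutionQueueEntry("inst0", execute_time=27)   # scheduled at count 5 + 22
for time_count in range(5, 28):
    entry.tick(time_count)   # fires once, at time count 26
```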
In the microprocessor system 10 the instruction fetch unit 20 fetches the next instruction(s) from the instruction cache 24 to send to the instruction decode unit 30. More than one instruction can be fetched per clock cycle by the instruction fetch unit 20, depending on the configuration of microprocessor 10. For higher performance, microprocessor 10 fetches more instructions per clock cycle for the instruction decode unit 30. For low-power and embedded applications, microprocessor 10 might fetch only a single instruction per clock cycle for the instruction decode unit 30. If the instructions are not in the instruction cache 24 (commonly referred to as an instruction cache miss), then the instruction fetch unit 20 sends a request to external memory (not shown) to fetch the required instructions. The external memory may consist of hierarchical memory subsystems, for example, an L2 cache, an L3 cache, read-only memory (ROM), dynamic random-access memory (DRAM), flash memory, or a disk drive. The external memory is accessible by both the instruction cache 24 and the data cache 85. The instruction fetch unit 20 is also coupled to the branch prediction unit 22 for prediction of the next instruction address when a branch is detected and predicted by the branch prediction unit 22. The branch prediction unit 22 includes a branch target buffer (BTB) 26 that stores a plurality of the entry-point addresses, branch types, offsets to exit-point addresses, and target addresses of the basic blocks, which will be discussed in detail later. The instruction fetch unit 20, the instruction cache 24, and the branch prediction unit 22 are described here for completeness of the description of microprocessor 10. In other embodiments, other instruction fetch and branch prediction methods can be used to supply instructions to the instruction decode unit 30 of microprocessor 10.
The instruction decode unit 30 is coupled to the instruction fetch unit 20 to receive new instructions and is also coupled to the register scoreboard 40. The instruction decode unit 30 decodes the instructions for instruction type, instruction throughput and latency times, and register operands. The register operands, as an example, may consist of 2 source operands and 1 destination operand. The operands refer to registers in the register file 60. The source and destination registers are used here to represent the source and destination operands of the instruction. The source registers support solving read-after-write (RAW) data dependencies. If a later instruction has the same source register as the destination register of an earlier instruction, then the later instruction has a RAW data dependency. The later instruction must wait for completion of the earlier instruction before it can start execution. The register scoreboard 40 is used to keep track of the completion times of the destination registers of the earlier instructions. In the preferred embodiment, the completion time is maintained with reference to the time count 90.
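A minimal software model of such a scoreboard, assuming a simple array of per-register pending write times (the method names are illustrative), shows how a RAW-dependent instruction derives its earliest read time:

```python
class RegisterScoreboard:
    def __init__(self, num_regs=32):
        self.write_time = [0] * num_regs    # pending completion time per register

    def set_write_time(self, dest_reg, write_time):
        self.write_time[dest_reg] = write_time

    def earliest_read_time(self, src_regs, time_count):
        # A RAW-dependent instruction waits for its latest producer.
        return max([time_count + 1] + [self.write_time[r] for r in src_regs])

sb = RegisterScoreboard()
sb.set_write_time(dest_reg=5, write_time=24)         # earlier instruction writes r5 at 24
print(sb.earliest_read_time([5, 7], time_count=5))   # -> 24
```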
Each of the units shown in the block diagram can be implemented in integrated circuit form. The integrated circuitry employed to implement these units may take a variety of physical forms. In other embodiments, the units shown in the block diagram can instead be implemented as software executed on a general-purpose or special-purpose computing system.
The aforementioned implementations of software executed on a general-purpose or special-purpose computing system may take the form of a computer-implemented method for implementing a microprocessor, and also of a computer program product for implementing a microprocessor, where the computer program product is stored on a non-transitory computer-readable storage medium and includes instructions for causing the computer system to execute a method. The aforementioned program modules and/or code segments may be executed on a suitable computing system to perform the functions disclosed herein. Such a computing system will typically include one or more processing units, memory, and non-transitory storage to execute computer-executable instructions.
To be executable OOO, a loop must include no more than one load instruction. Moreover, there must be sufficient resources for execution of the loop and for execution of a non-loop instruction concurrently with the loop. An example of such a loop is described next.
In this example, the operation in the loop that can be executed OOO is the load from the load data loop, whose data can be fetched out-of-order. The load data are processed immediately by the data processing loop. The loop count is stored in register x13. The load data loop is set up in the execution queue 70 with the loop count to fetch the load data independently from the data processing loop. The resources for executing the OOO loop may consist of some of the read buses 66, the write buses 68, the functional units 75, and the load-store ports of the load-store unit 80. In one embodiment, the load data and accumulated result data are written to the register file 60 once, on the last iteration of the loop. In another embodiment, the data processing loop is stalled in the instruction issue unit 55 until valid data are received from the load-store unit 80. This loop example will be referenced throughout the description below.
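The source does not reproduce the loop's instruction sequence. The RISC-V-style sequence in the comments below is an assumed reconstruction consistent with the registers named in this description (loop count in x13, load data in register 12, accumulation in register 10), followed by a functional equivalent in Python:

```python
# Assumed RISC-V-style loop body (a reconstruction, not quoted from the source):
#
#   loop: lw   x12, 0(x11)     # load data              (load data loop)
#         addi x11, x11, 4     # address increment      (load data loop)
#         add  x10, x10, x12   # accumulate             (data processing loop)
#         addi x13, x13, -1    # loop count decrement
#         bnez x13, loop       # branch back to the entry point

def accumulate(memory, base, count):
    x10, x11, x13 = 0, base, count
    while x13 != 0:
        x12 = memory[x11]   # lw   (word-indexed here for simplicity)
        x11 += 1            # addi x11, x11, 4
        x10 += x12          # add  x10, x10, x12
        x13 -= 1            # addi x13, x13, -1
    return x10              # written to the register file once, at loop exit

print(accumulate([10, 20, 30, 40], base=0, count=4))   # -> 100
```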
The OOO loop can be more complicated than the example above. Because the ordering of load data is not important, the loads can be fetched out-of-order. A data cache miss can return data at a later time, while a later load with a data cache hit sends data to the data processing loop at an earlier time. For non-cacheable load instructions, routing through many nodes of the memory subsystem can return data with different load latencies. The external memory data are returned at arbitrary times and can be sent to the data processing loop out-of-order. Because the ordering of external load data is not necessary, the load-store unit 80 can send many external memory requests without any identification (ID) to keep track of the order of load requests. The count of load requests is tracked by the load-store unit 80. In the example, the load and address increment instructions are at the beginning of the loop, which simplifies the loop detection unit 45, allowing it to separate the load data loop and the data processing loop. A compiler could be used to structure the loop as in the example above.
In the loop example, one of the source registers is the same as the destination register for each of the loop count increment instruction, the address increment instruction, and the accumulate instruction. The accumulate instruction adds a first register value to a second register value and writes the result back to the first register. The increment/decrement instruction adds positive/negative immediate data to a first register value and writes the result back to the first register. In these cases, a self-forwarding path is built into the functional unit or the address generation unit for the OOO loop operations. The result data are routed back to the first source-operand input of the functional unit. The second source operand comes from the immediate data or from a second source register.
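A minimal sketch of such a self-forwarding unit, assuming an add operation and purely illustrative names: the result feeds back to the first operand input, so each iteration consumes only the second operand, and the register file is touched only at the start and end of the loop:

```python
class SelfForwardingALU:
    def __init__(self, first_operand):
        # The first operand is read from the register file once, before the loop.
        self.first_operand = first_operand

    def step(self, second_operand):
        result = self.first_operand + second_operand
        self.first_operand = result   # self-forwarding path back to operand A
        return result

alu = SelfForwardingALU(first_operand=0)
for value in (3, 1, 4):
    alu.step(value)            # accumulate without a register-file round trip
print(alu.first_operand)       # -> 8; written back once, on the last iteration
```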
In one embodiment, the loop type can be configured for OOO execution. For example, memory copy, floating-point accumulation, and vector accumulation operations can be configured to be executed as OOO loops. The rounding of a floating-point multiply-accumulate would ultimately be within an acceptable error margin; however, a user may choose to exclude floating-point operations from OOO loops. The loop detection unit 45 includes a control and status register (CSR) 145 to disable/enable detection of loops as OOO loops. The load-store unit 80 includes physical memory registers (PMR) 180 to further disable/enable a memory region for OOO loop execution. The PMR 180 may include physical memory attributes (PMA) and physical memory protection (PMP) for memory regions, which may include disable/enable bits for OOO execution of certain loop types. The load/store addresses access the PMR 180 in the first iteration of the loop and can disable/enable OOO loop detection by the loop detection unit 45.
In one embodiment, the load-store unit 80 includes an address generation unit (not shown) that calculates addresses before accessing the data cache 85.
In one embodiment, the branch prediction unit 22 is implemented in accordance with a basic-block algorithm.
In another embodiment, the OOO execution loop is marked as such in the branch target buffer of the branch prediction unit 22. The loop entry in the branch target buffer has sufficient information so that the OOO loop begins execution immediately without going through the loop detection unit 45. Assuming the number of instructions in the loop is less than 16, 4 bits are needed for the offset from the entry point of the basic block to locate the instructions in the loop. The OOO loop entry in the branch target buffer includes the offsets of the self-increment and accumulative instructions and the resources required for the OOO loop.
The registers may be assigned to an OOO loop operation as designated by the loop bit field 48 of the register scoreboard 40. In the loop example, registers 10 and 12 are assigned to the OOO loop operation and their loop bits 48 are set.
In another embodiment, the write times 46 of registers 10 and 12 are incremented every clock cycle as long as the loop bits 48 are set. For example, for register 12, when the time count is 28, the valid bit 42 remains set and the write time 46 is incremented as long as the loop bit 48 is set. On the last iteration of the loop, the load-store execution queue 70 resets the loop bit 48 to allow writing back to the register file 60. Similarly, the ALU execution queue 70 resets the loop bit 48 for register 10. The write time 46 functionality is otherwise the same as for non-loop entries. If the load data is not in the data cache 85, then the load-store unit 80 modifies the write time with the L2-cache latency time. It is important only for the last load data to update register 12 of the register file 60. The write time 46 is incremented based on the write control unit 64 unless there is a data cache miss, in which case the L2-cache latency is used. A read bus 66 is reserved for sending data from the load-store unit 80, as part of the load data loop, to the data processing loop.
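The sliding write time can be modeled in a few lines. In this sketch (field names follow the description; the per-cycle tick granularity is an assumption), the write time advances one count per cycle while the loop bit is set and freezes once the loop bit is cleared on the last iteration:

```python
class ScoreboardEntry:
    def __init__(self, write_time, loop_bit):
        self.valid = True
        self.write_time = write_time
        self.loop_bit = loop_bit

    def tick(self):
        if self.valid and self.loop_bit:
            self.write_time += 1   # slide forward: the loop still owns the register

entry = ScoreboardEntry(write_time=28, loop_bit=True)
for _ in range(3):
    entry.tick()
print(entry.write_time)   # -> 31
entry.loop_bit = False    # last iteration: the execution queue clears the loop bit
```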
The write time of a destination register is the read time for a subsequent instruction with a RAW data dependency on the same destination register.
An instruction reads source operand data at the read time, executes with a functional unit 75 at the execute time, and writes the result data back to the register file 60 at the write time. The write time is recorded in the write time field 46 of the register scoreboard 40. With 2 source registers, the instruction selects the later write time from the register scoreboard 40 as its read time. In one embodiment, the load and store instructions are executed in order, to simplify their data dependencies. A load instruction may read from the same address as a previous store instruction, in which case the data are forwarded from the store instruction instead of being read from the data cache 85. The load-store unit keeps track of the order of load and store instructions for correct processing of data dependencies. Keeping the order of load and store instructions in the execution queue 70 allows the addresses to be calculated in order for any data dependency. The load-store execution queue (one of the plurality of execution queues 70) keeps the latest busy time of the load or store instructions. The read time of a load or store instruction is determined from the write times 46 of the source registers from the register scoreboard 40 or the latest busy time of the load-store execution queue 70. In another embodiment, the resources (read buses, write buses, and functional units) also have associated latest busy times, which are needed for an OOO loop. The resources are reserved and used every clock cycle for OOO loop operations until completion of the loop. The OOO loop can start only after the latest busy time of all necessary resources. The execute time is the read time plus 1 time count, at which point the functional unit 75 or the load-store unit 80 starts executing the instruction. The write time of the instruction is the read time plus the instruction latency time. If the instruction latency time is 1 (e.g., an ALU instruction), then the write time and execution time of the instruction are the same.
Each instruction has an execution latency time. For example, the add instruction has a latency time of 1, the multiply instruction has a latency time of 2, and the load instruction has a latency time of 3, assuming a data cache hit. If the current time count is 5 and the source registers of an add instruction receive write times of 22 and 24 from the register scoreboard 40, then the read time count is set at 24. In this case, the execution and write time counts are both 25 for the add instruction.
The read buses column 51 corresponds to the plurality of read buses 66 of the processor.
In one embodiment, resources are assigned to the instructions in order. The source registers of the add instruction will receive data from read buses 1 and 2, ALU 2 is used for execution of the add instruction, and write bus 2 is used to write back data from ALU 2. The counts in a row are reset when the time count reaches that row.
In one embodiment, some resources are reserved for the OOO loop operations. For example, the last two read buses (read buses 2 and 3) out of four read buses are used for the OOO loop operations. The OOO loop operation cannot start until after the latest busy time of read buses 2 and 3. During the OOO loop operation, the time-resource matrix 50 adjusts the maximum available read buses to two for issuing subsequent instructions after the OOO loop. The instructions after the OOO loop can be executed concurrently with the OOO loop operations. The loop detection unit 45 determines the resources needed for the OOO loop operations, which may include the read buses, the write buses, and functional units. The number of reserved resources limits which loops the loop detection unit 45 can detect as OOO loops. The earliest time to start the OOO loop operation is based on the latest busy times of all required loop resources. The resource latest busy times are considered in addition to the write times 46 of the source registers from the register scoreboard 40 to determine the first read time of the OOO loop operation, as sketched below.
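The first read time of the OOO loop can be modeled as a simple maximum over the constraints named above; the function below is a sketch under that assumption, with illustrative names:

```python
def loop_start_time(time_count, resource_busy_until, source_write_times):
    # The OOO loop may begin only after every reserved resource is past its
    # latest busy time and every source register has been written.
    return max([time_count + 1] + resource_busy_until + source_write_times)

# Read buses 2 and 3 are reserved; bus 3 is busy until time count 12.
print(loop_start_time(time_count=5,
                      resource_busy_until=[9, 12],   # latest busy times
                      source_write_times=[8]))       # from the register scoreboard
# -> 12
```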
A write bus is not needed by the OOO loop operations, and no write bus is reserved for the OOO loop operations until the last iteration of the loop. The execution queue 70 checks the write buses column 52 of the time-resource matrix 50 on the last iteration of the loop to write back to the register file 60. In another embodiment, a write bus is reserved as part of the required resources, to be used by the last iteration of the OOO loop. All the required resources for the OOO loop operation are removed from the time-resource matrix 50, and the time-resource matrix 50 adjusts the maximum available resources accordingly.
All available resources for the required times are read from the time-resource matrix 50 and sent to the instruction issue unit 55 to decide when to issue an instruction to the execution queue 70. If the resources are available at the required times, then the instruction can be scheduled and sent to the execution queue 70. The issued instruction updates the register scoreboard 40 with the write time and updates the time-resource matrix 50 to reduce the available resource values. All resources must be available at the required time counts for the instruction to be dispatched to the execution queue 70. If all resources are not available, then the required time counts are incremented by one and the time-resource matrix is checked again, as soon as the same cycle or the next cycle. The particular numbers of read buses 66, write buses 68, and functional units 75 described here are by way of example; other embodiments may include different numbers of each.
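The check-and-retry behavior can be sketched as a search over future time counts. The resource names and the dictionary representation of the matrix below are modeling assumptions, not the hardware structure:

```python
def find_issue_time(avail, read_time, latency, horizon=64):
    """avail[t] maps a resource name to the number of free units at time t."""
    for shift in range(horizon):
        r = read_time + shift   # read the source operands
        x = r + 1               # functional unit starts one count later
        w = r + latency         # write the result back
        if (avail[r]['read_bus'] > 0 and avail[x]['alu'] > 0
                and avail[w]['write_bus'] > 0):
            for t, res in ((r, 'read_bus'), (x, 'alu'), (w, 'write_bus')):
                avail[t][res] -= 1   # mark the resource busy at that time
            return r, x, w
    return None                      # cannot schedule within the horizon

avail = {t: {'read_bus': 2, 'alu': 1, 'write_bus': 1} for t in range(128)}
avail[9]['alu'] = 0                  # the ALU is already taken at time count 9
print(find_issue_time(avail, read_time=8, latency=2))   # -> (9, 10, 11)
```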
Note that the destination register can be, but need not be, kept with the instruction. The write control unit 64 is responsible for directing the result data from a functional unit 75 to a write bus 68 to write to the register file 60. The execution queues 70 are only responsible for sending instructions to the functional units 75 or the load-store unit 80. The read time field 77, which holds the read time of the instruction, is synchronized with the read control unit 62. When the read time 77 is the same as the time count 90, as detected by the comparators 78, the instruction is issued to the functional units 75 or the load-store unit 80.
In an embodiment, each functional unit 75 has its own execution queue 70. In another embodiment, an execution queue 70 dispatches instructions to multiple functional units 75. In this case, another field (not shown) can be added to the execution queue 70 to indicate the functional unit number for dispatching of instructions.
The loop detection unit 45 detects an OOO executable loop based on the number of available resources and on certain restrictions. The reserved resources are locked up during loop operation, so sufficient resources must remain to execute instructions outside of the loop. For example, if there is only one MUL functional unit and the loop contains a MUL instruction, then the loop cannot be designated as an OOO executable loop.
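A sketch of this sufficiency check, assuming per-resource unit counts and requiring at least one unit of each needed kind to remain free for concurrent non-loop instructions (names are illustrative):

```python
def can_execute_ooo(loop_needs, totals):
    # The loop qualifies only if reserving its resources still leaves at
    # least one unit of each needed kind for concurrent non-loop instructions.
    return all(totals.get(res, 0) - needed >= 1
               for res, needed in loop_needs.items())

totals = {'mul': 1, 'alu': 4, 'read_bus': 4, 'write_bus': 2}
print(can_execute_ooo({'mul': 1}, totals))   # -> False: the only MUL unit
print(can_execute_ooo({'alu': 2}, totals))   # -> True
```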
A number of read buses 66, a number of write buses 68, a number of functional units 75, and/or a load-store port of the load-store unit 80 are reserved for OOO loop operations. The execution queue 70 keeps track of the loop count and dispatches loop instructions to the functional units 75 and/or the load-store unit 80.
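A sketch of such a loop-tracking execution queue (illustrative names; the single-cycle re-dispatch is an assumption based on the self-forwarding description above):

```python
class LoopExecutionQueue:
    def __init__(self, instruction, loop_count):
        self.instruction = instruction
        self.remaining = loop_count

    def tick(self):
        # Re-dispatch the same loop instruction every cycle until the loop
        # count is exhausted; flag the last iteration so the result is
        # written back to the register file.
        if self.remaining == 0:
            return None
        self.remaining -= 1
        return self.instruction, self.remaining == 0

q = LoopExecutionQueue("accumulate", loop_count=3)
while (dispatch := q.tick()) is not None:
    inst, last = dispatch
    print(inst, "(write back to register file)" if last else "")
```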
The foregoing explanation described features of several embodiments so that those skilled in the art may better understand the scope of the invention. Those skilled in the art will appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments herein. Such equivalent constructions do not depart from the spirit and scope of the present disclosure. Numerous changes, substitutions and alterations may be made without departing from the spirit and scope of the present invention.
Although illustrative embodiments of the invention have been described in detail with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope of the invention as defined by the appended claims.
This application claims priority to U.S. provisional patent application Ser. No. 63/368,282, filed Jul. 13, 2022, and entitled “Out-Of-Order Execution Of Loop Instructions in a Microprocessor,” which application is hereby incorporated by reference in its entirety. This application is related to the following U.S. patent application which is hereby incorporated by reference in its entirety: U.S. patent application Ser. No. 17/588,315, filed Jan. 30, 2022, and entitled “Microprocessor with Time Counter for Statically Dispatching Instructions.”