PROCESSING CORE HAVING SHARED FRONT END UNIT

Information

  • Patent Application
  • Publication Number
    20190171462
  • Date Filed
    November 26, 2018
  • Date Published
    June 06, 2019
Abstract
A processor having one or more processing cores is described. Each of the one or more processing cores has front end logic circuitry and a plurality of processing units. The front end logic circuitry is to fetch respective instructions of threads and decode the instructions into respective micro-code and input operand and resultant addresses of the instructions. Each of the plurality of processing units is to be assigned at least one of the threads, is coupled to said front end unit, and has a respective buffer to receive and store microcode of its assigned at least one of the threads. Each of the plurality of processing units also comprises: i) at least one set of functional units corresponding to a complete instruction set offered by the processor, the at least one set of functional units to execute its respective processing unit's received microcode; ii) registers coupled to the at least one set of functional units to store operands and resultants of the received microcode; iii) data fetch circuitry to fetch input operands for the at least one functional units' execution of the received microcode.
Description
FIELD OF INVENTION

The field of invention pertains to the computing sciences generally, and, more specifically, to a processing core having a shared front end unit.


BACKGROUND


FIG. 1 shows the architecture of an exemplary multi-core processor 100. As observed in FIG. 1, the processor includes: 1) multiple processing cores 101_1 to 101_N; 2) an interconnection network 102; 3) a last level caching system 103; 4) a memory controller 104 and an I/O hub 105. Each of the processing cores contains one or more instruction execution pipelines for executing program code instructions. The interconnection network 102 serves to interconnect each of the cores 101_1 to 101_N to each other as well as to the other components 103, 104, 105. The last level caching system 103 serves as a last layer of cache in the processor before instructions and/or data are evicted to system memory 108.


The memory controller 104 reads/writes data and instructions from/to system memory 108. The I/O hub 105 manages communication between the processor and “I/O” devices (e.g., non-volatile storage devices and/or network interfaces). Port 106 stems from the interconnection network 102 to link multiple processors so that systems having more than N cores can be realized. Graphics processor 107 performs graphics computations. Power management circuitry (not shown) manages the performance and power states of the processor as a whole (“package level”) as well as aspects of the performance and power states of the individual units within the processor such as the individual cores 101_1 to 101_N, graphics processor 107, etc. Other functional blocks of significance (e.g., phase locked loop (PLL) circuitry) are not depicted in FIG. 1 for convenience.



FIG. 2 shows an exemplary embodiment 200 of one of the processing cores of FIG. 1. As observed in FIG. 2, each core includes two instruction execution pipelines 250, 260. Each instruction execution pipeline 250, 260 includes its own respective: i) instruction fetch stage 201; ii) data fetch stage 202; iii) instruction execution stage 203; and, iv) write back stage 204. The instruction fetch stage 201 fetches “next” instructions in an instruction sequence from a cache, or, system memory (if the desired instructions are not within the cache). Instructions typically specify operand data and an operation to be performed on the operand data. The data fetch stage 202 fetches the operand data from local operand register space, a data cache or system memory. The instruction execution stage 203 contains a set of functional units, any one of which is called upon to perform the particular operation called out by any one instruction on the operand data that is specified by the instruction and fetched by the data fetch stage 202. The write back stage 204 “commits” the result of the execution, typically by writing the result into local register space coupled to the respective pipeline.


In order to avoid the unnecessary delay of an instruction that does not have any dependencies on earlier “in flight” instructions, many modern instruction execution pipelines have enhanced data fetch and write back stages to effect “out-of-order” execution. Here, the respective data fetch stage 202 of pipelines 250, 260 is enhanced to include data dependency logic 205 to recognize when an instruction does not have a dependency on an earlier in flight instruction, and, permit its issuance to the instruction execution stage 203 “ahead of”, e.g., an earlier instruction whose data has not yet been fetched.


Moreover, the write-back stage 204 is enhanced to include a re-order buffer 206 that re-orders the results of out-of-order executed instructions into their correct order, and, delays their retirement to the physical register file until a correctly ordered consecutive sequence of instruction execution results has retired.


The enhanced instruction execution pipeline is also observed to include instruction speculation logic 207 within the instruction fetch stage 201. The speculation logic 207 guesses at what conditional branch direction or jump the instruction sequence will take and begins to fetch the instruction sequence that flows from that direction or jump. The speculative instructions are then processed by the remaining stages of the execution pipeline.





FIGURES

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1 shows a processor (prior art);



FIG. 2 shows an instruction execution pipeline (prior art);



FIG. 3 shows a processing core having a shared front end unit;



FIG. 4 shows a method performed by the processing core of FIG. 3;



FIG. 5 shows a processor whose respective cores have a shared front end unit;



FIG. 6 shows a computing system composed of processors whose respective cores have a shared front end unit.





DETAILED DESCRIPTION

The number of logic transistors manufactured on a semiconductor chip can be viewed as the semiconductor chip's fixed resource for processing information. A characteristic of the processor and processing core architecture discussed above with respect to FIGS. 1 and 2 is that an emphasis is placed on reducing the latency of the instructions that are processed by the processor. Said another way, the fixed resources of the processor design of FIGS. 1 and 2, such as the out-of-order execution enhancements made to each of the pipelines, have been devoted to running a thread through the pipeline with minimal delay.


The dedication of logic circuitry to the speed-up of currently active threads is achieved, however, at the expense of the total number of threads that the processor can simultaneously process at any instant of time. Said another way, if the logic circuitry units of a processor were emphasized differently, the processor might be able to simultaneously process more threads than the processor of FIG. 1 whose processing cores are designed according to the architecture of FIG. 2. For example, if the logic circuitry resources of the out-of-order execution enhancements were removed, the “freed up” logic circuitry could be re-utilized to instantiate more execution units within the processor. With more execution units, the processor could simultaneously execute more instructions and therefore more threads.



FIG. 3 shows an embodiment of an architecture of a processing core 300 that can be instantiated multiple times (e.g., once for each processing core) within a multi-core processor. The processing core architecture of FIG. 3 is designed with more execution units than is typical for a standard processing core so as to increase the overall throughput of the processing core (i.e., increase the number of threads that the processing core can simultaneously process). As observed in FIG. 3, the processing core architecture includes a shared front end unit 301 coupled to a plurality of processing units 302_1 to 302_N. Each of the processing units 302_1 to 302_N, in an embodiment, contains at least one set of functional units (e.g., at least one set of functional units 303) capable of supporting an entire instruction set, such as an entire x86 instruction set or other general purpose instruction set (as opposed to a more limited specific purpose instruction set such as the typical instruction set of a digital signal processor (DSP) or accelerator).


As observed in FIG. 3, the shared front end unit 301 fetches and receives the instructions to be processed by the processing core 300, decodes the received instructions, and dispatches the decoded instructions to their appropriate processing unit. In an embodiment, the shared front end unit fetches all instructions for all of the threads being executed by all of the general purpose processing units of the processing core.


A particular thread is assigned to a particular processing unit, and, each processing unit, as described in more detail below, is multi-threaded (i.e., can simultaneously and/or concurrently process more than one thread). Thus, if each processing unit can simultaneously/concurrently execute up to M hardware threads and there are N processing units, the processing core can simultaneously/concurrently execute up to MN hardware threads. Here, the product MN may be greater than the typical number of hardware threads that can be simultaneously executed in a typical processing core (e.g., greater than 8 or 16 at current densities).


Referring to the shared front end unit 301, the shared front end unit contains program control logic circuitry 311 to identify and fetch appropriate “next” instructions for each thread. Here, the program control logic circuitry 311 includes an instruction pointer 312_1 to 312_MN for each thread and instruction fetch circuitry 313. Note that FIG. 3 indicates that there are MN instruction pointers to reflect support for MN different hardware threads. For each hardware thread, the instruction fetch circuitry 313 looks first to an instruction cache 314 for the instruction identified within the thread's instruction pointer. If the sought-for instruction is not found within the instruction cache 314, it is fetched from program memory 315. In various implementations, blocks of instructions may be stored and fetched from cache and/or memory on a per hardware thread basis.


The individual hardware threads may be serviced by the instruction fetch circuitry 313 on a time-sliced basis (e.g., a fair round-robin approach). Further still, the instruction fetch circuitry 313 may be parallelized into similar/same blocks that fetch instructions for different hardware threads in parallel (e.g., each parallel block of instruction fetch circuitry services a different subset of instruction pointers).
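
As an illustration only, the following C++ sketch models per-thread instruction pointers serviced on a round-robin, time-sliced basis, with an instruction-cache lookup that falls back to program memory on a miss; all structure and member names are hypothetical and not part of the disclosure.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// Hypothetical model: one instruction pointer per hardware thread,
// serviced in a fair round-robin order by shared fetch circuitry.
struct FetchUnit {
    std::vector<uint64_t> instruction_pointer;          // one per hardware thread (MN total)
    std::unordered_map<uint64_t, uint32_t> icache;      // address -> instruction word
    std::unordered_map<uint64_t, uint32_t> program_mem; // backing program memory

    explicit FetchUnit(size_t num_threads) : instruction_pointer(num_threads, 0) {}

    // Fetch the next instruction for one thread: look in the cache first,
    // then fall back to program memory (and fill the cache).
    uint32_t fetch(size_t thread) {
        uint64_t addr = instruction_pointer[thread];
        auto hit = icache.find(addr);
        uint32_t word = (hit != icache.end()) ? hit->second
                                              : (icache[addr] = program_mem.at(addr));
        instruction_pointer[thread] += 4;   // advance to the "next" instruction
        return word;
    }
};

int main() {
    FetchUnit fe(2);                         // two hardware threads for illustration
    fe.program_mem = {{0, 0xA1}, {4, 0xA2}};
    fe.instruction_pointer[1] = 4;           // thread 1 starts at a different address
    for (int slot = 0; slot < 2; ++slot)     // time-sliced, round-robin servicing
        std::cout << "thread " << slot << " fetched 0x" << std::hex
                  << fe.fetch(slot) << std::dec << "\n";
}
```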


Because, however, the individual hardware threads may be processed slower than a traditional processor (e.g., because per thread latency reduction circuitry has not been instantiated in favor of more processing units as described above), it is conceivable that some implementations may not require parallel instruction fetch capability, or, at least include less than N parallel instruction fetch channels (e.g., N/2 parallel instruction fetch blocks). Accordingly, in any of these cases, certain components of the front end unit 301 are shared by at least two of the processing units 302_1 to 302_N.


In a further embodiment, the program control logic circuitry 311 also includes an instruction translation look-aside buffer (ITLB) circuit 316 for each hardware thread. As is understood in the art, an ITLB translates the instruction addresses received from program memory 315 into actual addresses in physical memory where the instructions actually reside.


After an instruction has been fetched it is decoded by an instruction decoder 317. In an embodiment there is an instruction decoder for each processing unit (i.e., there are N decoders). Again, e.g., where the number of processing units N has been increased at the expense of executing threads with lower latency, there may be more than one processing unit per instruction decoder. Conceivably there may even be one decoder for all the processing units.


An instruction typically specifies: i) an operation to be performed in the form of an “opcode”; ii) the location where the input operands for the operation can be found (register and/or memory space); and, iii) the location where the resultant of the operation is to be stored (register and/or memory space). In an embodiment, the instruction decoder 317 decodes an instruction not only by breaking the instruction down into its opcode and input operand/resultant storage locations, but also, converting the opcode into a sequence of micro-instructions.


As is understood in the art, micro-instructions are akin to a small software program (microcode) that an execution unit will execute in order to perform the functionality of an instruction. Thus, an instruction opcode is converted to the microcode that corresponds to the functional operation of the instruction. Typically, the opcode is entered as a look-up parameter into a circuit 318 configured to behave like a look-up table (e.g., a read only memory (ROM) configured as a look-up table). The look-up table circuit 318 responds to the input opcode with the microcode for the opcode's instruction. Thus, in an embodiment, there is a ROM for each processing unit in the processing core (or, again, there is more than one processing unit per micro-code ROM because the per-thread latency of the processing units has been diluted compared to a traditional processor).
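
The opcode-to-microcode look-up can be sketched as a table keyed by opcode, under the assumption of string-named micro-operations; the names below are illustrative only, the actual look-up circuit 318 being a ROM rather than software.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical microcode "ROM": each architectural opcode maps to the
// sequence of micro-operations that implements it.
const std::map<uint8_t, std::vector<std::string>> kMicrocodeRom = {
    {0x01, {"uop_load_a", "uop_load_b", "uop_add", "uop_store"}},  // illustrative add
    {0x02, {"uop_load_a", "uop_neg", "uop_store"}},                // illustrative negate
};

struct DecodedInstruction {
    std::vector<std::string> microcode;  // looked up from the ROM
    uint16_t src_addr;                   // input operand location (register/memory)
    uint16_t dst_addr;                   // resultant location (register/memory)
};

DecodedInstruction decode(uint8_t opcode, uint16_t src, uint16_t dst) {
    return DecodedInstruction{kMicrocodeRom.at(opcode), src, dst};
}

int main() {
    DecodedInstruction d = decode(0x01, /*src=*/0x10, /*dst=*/0x20);
    for (const std::string& uop : d.microcode) std::cout << uop << "\n";
}
```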


The microcode for a decoded instruction is then dispatched along with the decoded instruction's register/memory addresses of its input operands and resultants to the processing unit that has been assigned to the hardware thread that the decoded instruction is a component of. Note that the respective micro-code for two different instructions of two different hardware threads running on two different processing units may be simultaneously dispatched to their respective processing units.
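
A minimal sketch of the dispatch step, assuming a simple thread-to-processing-unit assignment table (the disclosure does not prescribe any particular data structure):

```cpp
#include <deque>
#include <iostream>
#include <string>
#include <vector>

struct MicrocodePacket { int thread_id; std::string microcode; };

// Hypothetical dispatch: thread_to_unit[] records which processing unit each
// hardware thread was assigned to; each unit has its own microcode buffer.
int main() {
    std::vector<int> thread_to_unit = {0, 0, 1, 1};        // 4 threads over 2 units
    std::vector<std::deque<MicrocodePacket>> unit_buffer(2);

    // Two decoded instructions of two different hardware threads may be
    // dispatched to their respective processing units at the same time.
    MicrocodePacket a{0, "uop_add"}, b{2, "uop_mul"};
    unit_buffer[thread_to_unit[a.thread_id]].push_back(a);
    unit_buffer[thread_to_unit[b.thread_id]].push_back(b);

    for (size_t u = 0; u < unit_buffer.size(); ++u)
        std::cout << "unit " << u << " holds " << unit_buffer[u].size()
                  << " micro-op(s)\n";
}
```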


In an embodiment, as discussed above, each processing unit 302_1 to 302_N can simultaneously and/or concurrently execute more than one hardware thread. For instance, each processing unit may have X sets of execution units (where X=1 or greater), where each set of execution units is capable of supporting an entire instruction set such as an entire x86 instruction set. Alternatively or in combination, each processing unit can concurrently (as opposed to simultaneously) execute multiple software threads. Here, concurrent execution, as opposed to simultaneous execution, corresponds to the execution of multiple software threads within a period of time by alternating processing resources amongst the software threads supported by the processing unit (e.g., servicing each of the software threads in a round-robin fashion). Thus, in an embodiment, over a window of time, a single processing unit may concurrently execute multiple software threads by switching the software threads and their associated state information in/out of the processing unit as hardware threads of the processing unit.


As observed in FIG. 3, each processing unit has a microcode buffer 320 to store the microcode that has been dispatched from the instruction decoder 317. The microcode buffer 320 may be partitioned so that separate FIFO queuing space exists for each hardware thread supported by the processing unit. The input operand and resultant addresses are also queued in an aligned fashion or otherwise associated with the respective microcode of their instruction.
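
One possible (hypothetical) model of the microcode buffer 320, partitioned into one FIFO per hardware thread with the operand/resultant addresses queued alongside the microcode:

```cpp
#include <cstdint>
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Each queued entry keeps the microcode aligned with its instruction's
// input operand and resultant addresses.
struct BufferEntry {
    std::string microcode;
    uint16_t src_addr;
    uint16_t dst_addr;
};

// Hypothetical microcode buffer of one processing unit, partitioned so that
// each hardware thread the unit supports has its own FIFO queuing space.
class MicrocodeBuffer {
public:
    explicit MicrocodeBuffer(size_t threads_per_unit) : fifo_(threads_per_unit) {}
    void push(size_t thread_slot, BufferEntry e) { fifo_[thread_slot].push(std::move(e)); }
    BufferEntry pop(size_t thread_slot) {
        BufferEntry e = fifo_[thread_slot].front();
        fifo_[thread_slot].pop();
        return e;
    }
private:
    std::vector<std::queue<BufferEntry>> fifo_;  // one FIFO per hardware thread
};

int main() {
    MicrocodeBuffer buf(2);                    // unit supports 2 hardware threads
    buf.push(0, {"uop_add", 0x10, 0x20});
    buf.push(1, {"uop_mul", 0x30, 0x40});
    std::cout << buf.pop(1).microcode << "\n"; // threads drain independently
}
```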


Each processing unit includes register space 321 coupled to its internal functional unit set(s) 303 for keeping the operand/resultant data of the thread(s) the functional unit set(s) 303 are responsible for executing. If a single functional unit set is to concurrently execute multiple hardware threads, the register space 321 for the functional unit set 303 may be partitioned such that there is one register set partition for each hardware thread the functional unit set 303 is to concurrently execute. As such, the functional unit set 303 “operates out of” a specific register partition for each unique hardware thread that the functional unit set is concurrently executing.
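
A sketch of register space 321 partitioned per hardware thread, so that a functional unit set reads and writes only the partition of the thread it is currently executing (sizes and names are assumptions):

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical partitioned register file: one bank of registers for each
// hardware thread a functional unit set concurrently executes.
class PartitionedRegisterFile {
public:
    PartitionedRegisterFile(size_t threads, size_t regs_per_thread)
        : bank_(threads, std::vector<uint64_t>(regs_per_thread, 0)) {}

    uint64_t read(size_t thread_slot, size_t reg) const { return bank_[thread_slot][reg]; }
    void write(size_t thread_slot, size_t reg, uint64_t v) { bank_[thread_slot][reg] = v; }

private:
    std::vector<std::vector<uint64_t>> bank_;  // [thread partition][register]
};

int main() {
    PartitionedRegisterFile rf(/*threads=*/2, /*regs_per_thread=*/16);
    rf.write(0, 3, 42);   // thread 0's r3
    rf.write(1, 3, 7);    // thread 1's r3 lives in a separate partition
    std::cout << rf.read(0, 3) << " " << rf.read(1, 3) << "\n";  // prints: 42 7
}
```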


As observed in FIG. 3, each processing unit 302_1 to 302_N includes register allocation logic 322 to allocate registers for the instructions of each of the respective hardware threads that the processing unit is concurrently and/or simultaneously executing. Here, for implementations having more than one functional unit set per processing unit, there may be multiple instances of micro-code buffer circuitry 320 and register allocation circuitry 322 (e.g., one instance for each functional unit set of the processing unit), or, there may be one micro-code buffer and register allocation circuit that feeds more than one functional unit set (i.e., one micro-code buffer 320 and register allocation circuit 322 for two or more functional unit sets). The register allocation logic circuitry 322 includes data fetch logic to fetch operands (that are called out by the instructions) from register space 321 associated with the functional unit that the operands' respective instructions are targeted to. The data fetch logic circuitry may be coupled to system memory 323 to fetch data operands from system memory 323 explicitly.
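
The operand fetch performed by the data fetch logic can be sketched as follows, under the assumption that an operand address names either a register in the thread's partition or a location in system memory 323:

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// Hypothetical operand address: either register space or system memory.
struct OperandAddr { bool in_memory; uint64_t where; };

// Data fetch logic: read the operand from the thread's register partition,
// or fetch it explicitly from system memory.
uint64_t fetch_operand(const OperandAddr& a,
                       const std::vector<uint64_t>& thread_registers,
                       const std::unordered_map<uint64_t, uint64_t>& system_memory) {
    return a.in_memory ? system_memory.at(a.where) : thread_registers[a.where];
}

int main() {
    std::vector<uint64_t> regs(16, 0);
    regs[5] = 11;
    std::unordered_map<uint64_t, uint64_t> mem = {{0x1000, 99}};
    std::cout << fetch_operand({false, 5}, regs, mem) << "\n";      // from register space
    std::cout << fetch_operand({true, 0x1000}, regs, mem) << "\n";  // explicit memory fetch
}
```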


In an embodiment, each functional unit set 303 includes: i) an integer functional unit cluster that contains functional units for executing integer mathematical/logic instructions; ii) a floating point functional unit cluster containing functional units for executing floating point mathematical/logic instructions; iii) a SIMD functional unit cluster that contains functional units for executing SIMD mathematical/logic instructions; and, iv) a memory access functional unit cluster containing functional units for performing data memory accesses (for integer and/or floating point and/or SIMD operands and/or results). The memory access functional unit cluster may contain one or more data TLBs to perform virtual to physical address translation for its respective threads.
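
As an illustrative sketch only, routing a micro-operation to one of the four clusters named above can be modeled with a simple classification; the disclosure does not define any micro-op encoding.

```cpp
#include <iostream>
#include <string>

// Hypothetical micro-op classes mirroring the four functional unit clusters.
enum class UopClass { Integer, FloatingPoint, Simd, MemoryAccess };

std::string route_to_cluster(UopClass c) {
    switch (c) {
        case UopClass::Integer:       return "integer functional unit cluster";
        case UopClass::FloatingPoint: return "floating point functional unit cluster";
        case UopClass::Simd:          return "SIMD functional unit cluster";
        case UopClass::MemoryAccess:  return "memory access cluster (with data TLB)";
    }
    return "unknown";
}

int main() {
    std::cout << route_to_cluster(UopClass::Simd) << "\n";
    std::cout << route_to_cluster(UopClass::MemoryAccess) << "\n";
}
```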


Micro-code for a particular instruction issues from its respective microcode buffer 320 to the appropriate functional unit along with the operand data that was fetched for the instruction by the fetch circuitry associated with the register allocation logic 322. Results of the execution of the functional units are written back to the register space 321 associated with the execution units.


In a further embodiment, each processing unit contains a data cache 324 that is coupled to the functional units of the memory access cluster. The functional units of the memory access cluster are also coupled to system memory 323 so that they can fetch data from memory. Notably, each register file partition described above may be further partitioned into separate integer, floating point and SIMD register space that is coupled to the corresponding functional unit cluster.


According to one scenario, operating system and/or virtual machine monitor (VMM) software assigns specific software threads to a specific processing unit. The shared front end logic 301 and/or operating system/VMM is able to dynamically assign a software thread to a particular processing unit or functional unit set to activate the thread as a hardware thread. In various embodiments, each processing unit includes “context switching” logic (not shown) so that each processing unit can be assigned more software threads than it can simultaneously or concurrently support as hardware threads. That is, the number of software threads assigned to the processing unit can exceed the number of “active” hardware threads the processing unit is capable of presently executing (either simultaneously or concurrently) as evidenced by the presence of context information of a thread within the register space of the processing unit.


Here, for instance, when a software thread becomes activated as a hardware thread, its context information (e.g., the values of its various operands and control information) is located within the register space 321 that is coupled to the functional unit set 303 that is executing the thread's instructions. If a decision is made to transition the thread from an active to inactive state, the context information of the thread is read out of this register space 321 and stored elsewhere (e.g., system memory 323). With the register space of the thread now being “freed up”, the context information of another “inactive” software thread whose context information resides, e.g., in system memory 323, can be written into the register space 321. As a consequence, the other thread converts from “inactive” to “active” and its instructions are executed as a hardware thread going forward.
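
The active/inactive transition can be sketched as saving the outgoing thread's context from the register partition to system memory and loading the incoming thread's saved context in its place (all structures hypothetical):

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

using Context = std::vector<uint64_t>;   // operand values + control state of a thread

// Hypothetical switch of a hardware-thread slot from one software thread to
// another: save the active thread's context out to system memory, then load
// the previously inactive thread's context into the freed register partition.
void context_switch(Context& register_partition,
                    int outgoing_thread, int incoming_thread,
                    std::unordered_map<int, Context>& saved_contexts /* e.g., system memory */) {
    saved_contexts[outgoing_thread] = register_partition;     // outgoing becomes inactive
    register_partition = saved_contexts.at(incoming_thread);  // incoming becomes active
}

int main() {
    Context partition = {1, 2, 3};                              // software thread 7 is active
    std::unordered_map<int, Context> saved = {{9, {4, 5, 6}}};  // thread 9 parked in memory
    context_switch(partition, /*outgoing=*/7, /*incoming=*/9, saved);
    std::cout << "active r0 = " << partition[0] << "\n";        // now thread 9's state: 4
}
```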


As discussed above, the “room” for the logic circuitry to entertain a large number of hardware threads may come at the expense of minimizing the latency of any particular thread. As such, any of the mechanisms and associated logic circuitry for “speeding-up” a hardware thread's execution may not be present in the shared front end or processing unit circuitry. Such eliminated blocks may include any one or more of: 1) speculation logic (e.g., branch prediction logic); 2) out-of-order execution logic (e.g., register renaming logic and/or a re-order buffer and/or data dependency logic); 3) superscalar logic to dynamically effect parallel instruction issuance for a single hardware thread.


A multi-core processor built with multiple instances of the processing core architecture of FIG. 3 may include any/all of the surrounding features discussed above with respect to FIG. 1.



FIG. 4 shows a flow chart describing a methodology of the processing core described above. According to the methodology of FIG. 4, first and second instructions of different hardware threads are fetched 401 by a shared front-end unit. The instructions are decoded and respective microcode and operand/resultant addresses for the instructions are issued to different processing units from the shared front-end unit 402. The respective processing units fetch data for their respective operands and issue the received microcode and respective operands to respective functional units 403. The functional units then execute their respective instructions 404.
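
The four operations of FIG. 4 can be tied together in a compact sketch; the functions below are placeholders for the circuitry described above, not an API defined by the disclosure, and the numeric step comments map only approximately to reference numerals 401-404.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Decoded { std::vector<std::string> microcode; uint64_t src_addr; };

// Placeholder steps mirroring FIG. 4; bodies are illustrative only.
uint32_t fetch_instruction(int thread)  { return 0xA1 + thread; }        // 401: fetch
Decoded  decode_instruction(uint32_t)   { return {{"uop_add"}, 0x10}; }  // 402: decode
void     dispatch(int unit, const Decoded& d,
                  std::vector<std::vector<Decoded>>& unit_buffers) {     // 402: issue to unit
    unit_buffers[unit].push_back(d);
}
uint64_t execute(const Decoded& /*uops*/, uint64_t operand) {            // 403/404: operand + execute
    return operand + 1;
}

int main() {
    std::vector<std::vector<Decoded>> unit_buffers(2);
    // Two instructions of two different hardware threads, handled by the
    // shared front end and sent to two different processing units.
    for (int thread = 0; thread < 2; ++thread) {
        Decoded d = decode_instruction(fetch_instruction(thread));
        dispatch(/*unit=*/thread, d, unit_buffers);
    }
    for (int unit = 0; unit < 2; ++unit)
        std::cout << "unit " << unit << " result: "
                  << execute(unit_buffers[unit].front(), /*fetched operand=*/41) << "\n";
}
```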



FIG. 5 shows an embodiment of a processor 500 having multiple processing cores 501_1 through 501_N, each having a respective shared front end unit 511_1, 511_2, . . . 511_N (with respective instruction TLB 516_1, 516_2, . . . 516_N) and respective processing units with corresponding micro-code buffers (e.g., micro-code buffers 520_1, 520_2, etc. within the processing units of core 501_1). Each core also includes one or more caching levels 550_1, 550_2, . . . 550_N to cache instructions and/or data of each processing unit individually and/or a respective core as a whole. The cores 501_1, 501_2, . . . 501_N are coupled to one another through an interconnection network 502 that also couples the cores to one or more caching levels (e.g., last level cache 503) that cache instructions and/or data for the cores 501_1, 501_2 . . . 501_N, and a memory controller 504 that is coupled to, e.g., a “slice” of system memory. Other components such as any of the components of FIG. 1 may also be included in FIG. 5.



FIG. 6 shows an embodiment of a computing system, such as a computer, implemented with multiple processors 600_1 through 600_z having the features discussed above in FIG. 5. The multiple processors 600_1 through 600_z are connected to each other through a network that also couples the processors to a plurality of system memory units 608_1, 608_2, a non-volatile storage unit 610 (e.g., a disk drive) and an external (e.g., Internet) network interface 611.


In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A processor having one or more processing cores, each of said one or more processing cores comprising: front end logic circuitry to fetch respective instructions of threads and decode said instructions into respective micro-code and input operand and resultant addresses of said instructions; a plurality of processing units, each of said processing units to be assigned at least one of said threads, each processing unit coupled to said front end unit and having a respective buffer to receive and store microcode of its assigned at least one of said threads, each of said plurality of processing units comprising: i) at least one set of functional units corresponding to a complete instruction set offered by said processor, said at least one set of functional units to execute its respective processing unit's received microcode; ii) registers coupled to said at least one set of functional units to store operands and resultants of said received microcode; iii) data fetch circuitry to fetch input operands for said at least one functional units' execution of said received microcode.
  • 2. The processor of claim 1 wherein said functional units are not coupled to any logic circuitry used to perform out-of-order execution of said received micro-code.
  • 3. The processor of claim 2 wherein said processor includes N processing units.
  • 4. The processor of claim 1 wherein said functional units are not coupled to any logic circuitry to perform speculative execution of said received micro-code.
  • 5. The processor of claim 4 wherein said processor includes N processing units.
  • 6. The processor of claim 1 wherein said processor does not include circuitry for any of said threads to issue instructions in parallel for any one of said threads.
  • 7. The processor of claim 6 wherein said processor includes N processing units.
  • 8. A method performed by a processor, comprising: performing at least one of a) and b) below with same logic circuitry of a processing core of said processor: a) fetching first and second instructions of two different threads; b) decoding said first and second instructions into respective units of microcode, input operand address information and resultant address information; dispatching said respective units of microcode and address information to two different processing units; and, at each processing unit performing the following for its respective one of said two threads: storing its respective thread's microcode; fetching input operand data with a received input operand address; executing received microcode upon said fetched input operand with functional unit circuitry that is part of a set of functional units that support a complete general purpose instruction set.
  • 9. The method of claim 8 where a first of said processing units is a first processing unit and a second of said processing units is an Nth processing unit.
  • 10. The method of claim 9 wherein software assigns a first of said threads to said first processing unit and a second of said threads to said Nth processing unit.
  • 11. The method of claim 8 wherein both said threads are not processed with any speculative execution logic circuitry.
  • 12. The method of claim 8 wherein both said threads are not processed with any out-of-order execution logic circuitry.
  • 13. The method of claim 8 wherein both said threads do not issue their respective instructions in parallel.
  • 14. A processor, comprising: at least two processing cores each having: a front end unit to fetch all respective instructions of all threads processed by its processing core and decode said instructions into respective micro-code and input operand and resultant addresses of said instructions; said front end unit coupled to all general purpose processing units of its processing core, each of said processing units to be assigned at least one of said threads, each processing unit coupled to said front end unit to receive microcode and input operand and resultant addresses of its assigned at least one of said threads, each of said plurality of processing units comprising: i) at least one set of functional units corresponding to a complete general purpose instruction set offered by said processor, said at least one set of functional units to execute its respective processing unit's received microcode; ii) registers coupled to said at least one set of functional units to store operands and resultants of said received microcode; iii) data fetch circuitry to fetch input operands for said at least one functional units' execution of said received microcode; an interconnection network coupled to said plurality of processing units; a last level cache coupled to said interconnection network.
  • 15. The processor of claim 14 wherein said functional units are not coupled to any logic circuitry used to perform out-of-order execution of said received micro-code.
  • 16. The processor of claim 15 wherein said processor includes N processing units.
  • 17. The processor of claim 14 wherein said functional units are not coupled to any logic circuitry to perform speculative execution of said received micro-code.
  • 18. The processor of claim 17 wherein said processor includes N processing units.
  • 19. The processor of claim 14 wherein said processor does not include circuitry for any of said threads to issue instructions in parallel for any one of said threads.
  • 20. The processor of claim 19 wherein said processor includes N processing units.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present patent application is a continuation application claiming priority from U.S. patent application Ser. No. 13/730,719, filed Dec. 28, 2012, and titled: “Processing Core Having Shared Front End Unit”, which is incorporated herein by reference in its entirety.

Continuations (1)
Number Date Country
Parent 13730719 Dec 2012 US
Child 16200203 US