Independent mapping of threads

Abstract
Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed that includes a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices, and the parallel execution slices independently execute the one or more threads.
Description
BACKGROUND OF THE INVENTION

The present invention relates generally to the field of processors, and more particularly to the independent mapping of threads over parallel slices in a processor core.


Register files are a means of memory storage within a central processing unit of a computing device and may be used for storing architectural state information, among other information and data. The architectural state information may include instructions, operands, and addresses. Each register file may store the architected state and future state information for one or more threads, or sequences of programmed instructions.


In some processors, the register file is a centralized component of the execution units, and is able to supply all of the register file data to its connected execution units in both single thread (ST) and simultaneous multi-threading (SMT) modes. In these processors, if the register file grows too large, a second level of storage may be added.


SUMMARY

According to one embodiment of the present invention, a processor core is provided, the processor core comprising: a plurality of dispatch queues, wherein the plurality of dispatch queues are configured to receive instructions associated with a plurality of threads; an even plurality of parallel execution slices, wherein each of the even plurality of parallel execution slices includes a register file; and a routing network configured to direct an output of the plurality of dispatch queues to the even plurality of parallel execution slices, wherein the even plurality of parallel execution slices are configured to independently execute a plurality of threads.


According to another embodiment of the present invention, a method is provided, the method comprising: receiving, from a dispatch queue, a first number of threads; allocating, by a network, the received first number of threads to a first even plurality of parallel execution slices; executing the first number of threads on the first even plurality of parallel execution slices; writing results of the execution of the first number of threads to a set of register files associated with the first even plurality of parallel execution slices; receiving, from the dispatch queue, a second number of threads; allocating, by the network, the received second number of threads to a second even plurality of parallel execution slices; executing the second number of threads on the second even plurality of execution slices; and writing results of the execution of the second number of threads to a set of register files associated with the second even plurality of parallel execution slices.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram illustrating an example of a processor core environment, in accordance with an embodiment of the present invention;



FIG. 2 is a flowchart depicting operational steps for mapping different numbers of threads, each containing an architected register state, to physical register files, in accordance with an embodiment of the present invention;



FIG. 3 is an example of a register file structure depicting the visible architected state on a processor slice pair to execute a set of instructions, in accordance with an embodiment of the present invention; and



FIGS. 4A-D are examples of sharing resources when different numbers of threads are mapped to execution slices, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

In some processors, such as processors having a multi-slice processor architecture, many execution slices are connected in parallel, making a centralized register file more difficult to implement. The number of ports and entries needed for a centralized register file is extensive, requiring a large number of wires to be routed to the distributed execution slices. Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor.


The present invention will now be described in detail with reference to the Figures. FIG. 1 is a block diagram illustrating an example of a processor core environment, generally designated 100, in accordance with one embodiment of the present invention. Modifications to processor core environment 100 may be made by those skilled in the art without departing from the scope of the invention as recited by the claims.


In this exemplary embodiment, processor core environment 100 includes two clusters: cluster 102, containing four processor slices (slices 106A-D); and cluster 104, containing four processor slices (slices 106E-H). In other embodiments, processor core environment 100 may contain any number of clusters (a grouping of processor slices) which support the functionality of processor core environment 100. In this exemplary embodiment, each slice 106A-H is a 64 bit processor slice, and two slices (a “superslice”) are the smallest unit which maintains the minimum functionality of the processor core. In other embodiments, any number of processor slices which maintain the minimum functionality of the processor core may be implemented.


Each slice 106A-H contains ES (execution slices) 108A-H and LS (load store) slices 114A-H, respectively. Each slice 106A-H receives instructions from dispatch queue 125 via dispatch routing network 120. Dispatch queue 125 may be one or more dispatch queues which receive multiple instruction streams and store the instruction streams until the control logic of the processor core instructs the instruction streams to be sent to execution slices 108A-H, via dispatch routing network 120. Execution slices 108A-H perform sequencing and execution of logical, mathematical, and other operations as needed to execute the instruction streams. The execution results are sent to each of execution slices 108A-H by write back bus 130. Execution slices 108A-H are coupled to load store slices 114A-H, and results of execution by load store slices 114A-H are sent to each of execution slices 108A-H by write back bus 130. Write back bus 130 may provide a dedicated bus for the execution results of each of execution slices 108A-H, alongside a dedicated bus for each of load store slices 114A-H, or the bus may be shared between execution slices 108A-H and load store slices 114A-H, in which case arbitration is required to prevent result write collisions at write back bus 130. Load store slices 114A-H perform operations to maintain cache consistency. Each of load store slices 114A-H is coupled by load store routing network 140, which provides conversion transfers for execution of SIMD instructions, processing of instructions with data width greater than the width of one of load store slices 114A-H, and other operations requiring translation or realignment of data between load store slices 114A-H.
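
By way of illustration only, the flow described above can be pictured with the following minimal Python sketch. It is not part of the disclosed hardware, and every class, field, and instruction encoding in it is an assumption made for exposition: instructions leave a dispatch queue, a routing network steers each instruction to an execution slice, and the write back bus broadcasts the result so that every register file sees it.

    from collections import deque

    class ExecutionSlice:
        """Stand-in for one of execution slices 108A-H."""
        def __init__(self, name):
            self.name = name
            self.register_file = {}          # architected state visible to this slice

        def execute(self, instr):
            # Stand-in for sequencing and logical/mathematical execution.
            return instr["target"], instr["op"](self.register_file)

    class WriteBackBus:
        """Stand-in for write back bus 130: broadcasts one result to many slices."""
        def broadcast(self, result, slices):
            target, value = result
            for s in slices:                 # every listed slice sees the result
                s.register_file[target] = value

    dispatch_queue = deque([{"target": "r1", "op": lambda rf: rf.get("r0", 0) + 1}])
    slices = [ExecutionSlice("ES" + str(i)) for i in range(8)]
    bus = WriteBackBus()

    instr = dispatch_queue.popleft()         # routing network selects an originating slice
    result = slices[0].execute(instr)
    bus.broadcast(result, slices)            # now every register file holds r1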


Execution slices 108A-H each contain history buffers 110A-H and register files 112A-H, respectively. History buffers 110A-H are used for recovery, and provide restore capabilities for register targets of instructions executed by execution slices 108A-H. Register files 112A-H are physical processor registers which store the architected states and future states for instruction streams. In this exemplary embodiment, the architected state when mapping two instruction threads to two processor slices is 96 total registers per thread.
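
The recovery role of the history buffers can be illustrated with a short, purely schematic Python sketch; the class and method names are assumptions, not taken from the disclosure. Before a register target is overwritten, its prior value is pushed to the history buffer, from which the architected state can later be restored.

    class RegisterFileWithHistory:
        """Schematic pairing of a register file 112 with a history buffer 110."""
        def __init__(self, num_registers=96):   # e.g., 96 architected registers per thread
            self.regs = [0] * num_registers
            self.history = []                    # history buffer: (index, old_value) entries

        def write(self, index, value):
            self.history.append((index, self.regs[index]))  # back up the register target
            self.regs[index] = value

        def restore(self):
            # Undo writes in reverse order to recover the prior architected state.
            while self.history:
                index, old = self.history.pop()
                self.regs[index] = old

    rf = RegisterFileWithHistory()
    rf.write(3, 42)
    rf.restore()
    assert rf.regs[3] == 0                       # architected state rolled back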



FIG. 2 is a flowchart depicting operational steps for mapping different numbers of threads, each containing an architected register state, to physical register files 112A-H, in accordance with an embodiment of the present invention.


In step 202, the resources of a processor core receive a number of instruction sequences (threads). In this exemplary embodiment, the number of threads to map onto the processor core is determined by factors external to the processor core, namely the software running on the processor and the environment in which that software runs. In this exemplary embodiment, the software determines whether one, two, four, or eight threads are mapped to physical register files 112A-H, and no supportable thread count is more likely to be selected than any other. In other embodiments, the software may decide to map any number of threads that can be supported by the microarchitecture and resources of the processor core.


In step 204, the received number of threads is allocated to one or more processor slices 106A-H within the processor core. In this exemplary embodiment, the threads are allocated to the slices of a processor core depending on the number of threads received and the available resources (depicted in detail in FIGS. 4A-D). In other embodiments, any number of received threads may be allocated in any manner which supports the independent execution of the received threads in the processor core.
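
One possible allocation policy, matching the groupings later shown in FIGS. 4A-D, is sketched below in Python; the function name and return layout are illustrative assumptions, not the claimed control logic.

    def allocate_threads(num_threads, num_slices=8):
        """Map each thread to the slice indices whose register files will hold it."""
        if num_threads == 1:                         # FIG. 4A: one thread on all slices
            groups = [list(range(num_slices))]
        elif num_threads == 2:                       # FIG. 4B: one thread per four-slice set
            groups = [[0, 1, 2, 3], [4, 5, 6, 7]]
        elif num_threads == 4:                       # FIG. 4C: thread pairs share four slices
            groups = [[0, 1, 2, 3], [0, 1, 2, 3], [4, 5, 6, 7], [4, 5, 6, 7]]
        elif num_threads == 8:                       # FIG. 4D: thread pairs share a superslice
            groups = [[2 * (i // 2), 2 * (i // 2) + 1] for i in range(8)]
        else:
            raise ValueError("unsupported thread count")
        return dict(enumerate(groups))

    print(allocate_threads(2))                       # {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}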


In step 206, each allocated thread is executed and the results of execution are written to the physical register files to which the thread is allocated. In this exemplary embodiment, a thread is executed on one processor slice by sending load instructions to load store slices 114A-H. The load data results of the load execution are written to write back bus 130, so that the results of the execution are seen by each register file on each execution slice. The execution results are then written to each register file on the processor slices to which the thread is mapped (depicted further in FIGS. 4A-D). In other embodiments, the thread state may be loaded into the register file slices in some other manner, for example, from a separate thread state memory from which the thread state can be loaded directly, instead of using load instructions executed by load store slices 114A-H.
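
Under the same assumptions as the allocation sketch above, step 206 can be pictured as executing on one mapped slice and fanning the result out only to the register files of the slices to which the thread is mapped; the helper name and register keying are hypothetical.

    def execute_and_write_back(thread, mapping, register_files, load_result):
        originating = mapping[thread][0]          # execute on one of the mapped slices
        for s in mapping[thread]:                 # write back only to mapped register files
            register_files[s]["r0", thread] = load_result
        return originating

    register_files = [{} for _ in range(8)]
    mapping = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}  # e.g., two threads, as in FIG. 4B
    execute_and_write_back(0, mapping, register_files, load_result=7)
    assert all(rf[("r0", 0)] == 7 for rf in register_files[:4])   # mapped slices see it
    assert all(not rf for rf in register_files[4:])               # unmapped slices do not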


In step 208, the resources of the processor core determine whether the software program requests to add an additional thread. For example, an event such as an interrupt may occur, causing the software to request that an additional thread or threads be added. If an additional thread or threads are added, then, in step 210, the resources of the processor core allocate register file entries for the newly added thread(s) (as in step 204) and initialize or restore those register file entries. For example, if the program is running a single thread (ST) and the software decides to add a second thread (SMT2), then the second thread is allocated to a predetermined set of register file entries. New register file entries may be created for the added thread, or may be restored from previous register file entries stored in the memory, for the newly added second thread to proceed with execution in SMT2 mode.


If, in step 208, the resources of the processor core determine that a thread has not been added, then, in step 212, the resources of the processor core determine whether an existing thread is removed. For example, the software may decide to put one or more of the threads into a ‘power save’ mode, causing the one or more threads to go dormant after saving the architectural state. If, in step 212, the resources of the processor core determine that a thread has been removed by the software program, then, in step 214, the processor core allocates register file entries for the new, reduced number of threads, and the data of the remaining threads is copied to the newly freed register file entries. For example, if two threads are running (one thread mapped to register files 112A-D on slices 106A-D and the second thread mapped to register files 112E-H on slices 106E-H) and the software decides that one thread should be removed, then one thread remains and the architected state of the remaining thread is mapped to each register file 112A-H on slices 106A-H. If, in step 212, the resources of the processor core determine that a thread has not been removed by the software program, then the operational steps end, and execution may begin, or continue, on the currently mapped thread(s). In other embodiments, the addition and removal of threads may be combined into a single step, in which the removed thread's register file space is directly allocated to an added thread, by combining steps 208, 210, 212, and 214.
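
A compact, purely illustrative sketch of these transitions (steps 208-214) follows; the remap helper is a simplification of the allocation of step 204, and all names are assumed rather than taken from the disclosure.

    def remap(num_threads):
        # Simplified stand-in for step 204: split eight slices evenly among threads.
        width = 8 // num_threads
        return {t: list(range(t * width, (t + 1) * width)) for t in range(num_threads)}

    def add_thread(active, new_thread):
        # Step 210: allocate register file entries for the added thread and
        # initialize or restore its state before execution proceeds.
        active = active + [new_thread]
        return active, remap(len(active))

    def remove_thread(active, old_thread):
        # Step 214: re-allocate for the reduced count; the remaining threads'
        # data is copied into the newly freed register file entries.
        active = [t for t in active if t != old_thread]
        return active, remap(len(active))

    active, mapping = add_thread([0], 1)         # ST -> SMT2
    print(mapping)                               # {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
    active, mapping = remove_thread(active, 1)   # back to a single thread
    print(mapping)                               # {0: [0, 1, 2, 3, 4, 5, 6, 7]}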


Accordingly, by performing the operational steps of FIG. 2, a processor core manages the transitions between different numbers of active instruction threads running on its processor slices upon receiving requests to add or remove one or more threads.



FIG. 3 is an example of a register file structure depicting the visible architected state on a processor slice pair (“superslice”) to execute a set of instructions, in accordance with an embodiment of the present invention.


As depicted in FIG. 3, one processor slice pair is shown, in which each of execution slice 301 and execution slice 302 is an individual processor slice. In this exemplary embodiment, two threads are mapped to the register files of two execution slices (i.e., registers 305A-F on execution slice 301 and registers 306A-D on execution slice 302). Write bus 304 comprises eight write ports mapped to registers 305A-F, registers 306A-D, history buffer 303, and execution slices 301 and 302. Registers 305A, 305C, and 305E are exemplary registers for the architected state of a first thread, and registers 305B, 305D, and 305F are exemplary registers for the architected state of a second thread. In this exemplary embodiment, registers 305A-F are all visible to both execution slices 301 and 302, and supply operands for 64 bit wide instructions executed on either execution slice 301 or execution slice 302. Read bus 307 comprises six read ports, of which three go to execution slice 301 and the other three go to execution slice 302 (i.e., the results of thread execution are sent to both execution slice 301 and execution slice 302). Read bus 309 comprises two read ports which read out a copy of registers 305A-F to history buffer 303 during 64 bit wide instruction execution. Registers 306A and 306C are exemplary registers for the architected state of a first thread, and registers 306B and 306D are exemplary registers for the architected state of a second thread. In this exemplary embodiment, registers 306A-D are visible only to execution slice 302 and support 128 bit wide instructions executed over both execution slices 301 and 302, as registers 306A-D contain the second set of 64 bits used to execute 128 bit wide instructions (the first set of 64 bits comes from registers 305A-F). Read bus 308 comprises three read ports, all of which go to execution slice 302, as only execution slice 302 sees this information (i.e., registers 306A-D). Read bus 310 comprises two read ports, which read out a copy of registers 306A-D to history buffer 303 during 128 bit wide instruction execution. Additionally, a copy of registers 305A-F is read out to history buffer 303 via read bus 309 during the execution of 128 bit wide instructions. In this exemplary embodiment, 64 bit wide instructions are executed on either execution slice 301 or execution slice 302, while 128 bit wide instructions are executed on both execution slice 301 and execution slice 302. In other embodiments, the system may be adapted to execute instructions of any size on either execution slice (301 or 302) and/or may be programmed to execute some or all instructions on execution slice 301 and some or all instructions on execution slice 302.
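
The 64 bit/128 bit split can be made concrete with a small Python sketch; the register names and the choice of which file holds the high half are assumptions made for illustration. A 64 bit operation reads only the file visible to both slices, while a 128 bit operation concatenates that value with the second 64 bits held in the file visible only to execution slice 302.

    regs_301 = {"r0": 0x1111222233334444}   # first 64 bits (registers 305A-F, both slices)
    regs_302 = {"r0": 0x5555666677778888}   # second 64 bits (registers 306A-D, slice 302)

    def read_64(reg):
        # 64 bit wide instruction: either slice can execute using only this file.
        return regs_301[reg]

    def read_128(reg):
        # 128 bit wide instruction: both slices participate; slice 302's file
        # supplies the second 64 bit half (assumed here to be the high half).
        return (regs_302[reg] << 64) | regs_301[reg]

    assert read_128("r0") >> 64 == regs_302["r0"]
    assert read_128("r0") & 0xFFFFFFFFFFFFFFFF == read_64("r0")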


Accordingly, by implementing the dual register file structure depicted in FIG. 3, one register file is used to execute 64 bit operations, and the pair of register files are used to execute 128 bit operations, reducing the overall amount of power and area consumed.



FIGS. 4A-D are examples of sharing resources when different numbers of threads are mapped to execution slices 108A-H, in accordance with an embodiment of the present invention.



FIG. 4A depicts an example of mapping a single thread to all execution slices 108A-H. In this exemplary embodiment, when one thread (thread 400) is received, thread 400 is mapped to each register file of the eight execution slices 108A-H. A sequence of instructions may be executed by execution slices 108A-H, or load store slices 114A-H. Write back bus 130 takes the results produced by the execution of the one thread on one of execution slices 108A-H (the originating slice) and sends the results to each other execution slice, so that every execution slice 108A-H sees the results of the thread execution. For example, execution slice 108A may execute thread 400, and the results of the execution will be written to each of execution slices 108A-H, via write back bus 130. If the sequence of instructions is executed on one or more of load store slices 114A-H, which are connected by load store routing network 140 to provide transfers for the execution of non-aligned load stores and SIMD instructions, then load store routing network 140 writes the result of the execution of the one thread to each slice, using write back bus 130, so that each of execution slices 108A-H sees the results of the execution.



FIG. 4B depicts an example of mapping two threads, each to a dedicated set of execution slices. In this exemplary embodiment, when two threads (thread 401 and thread 402) are received, thread 401 is mapped to each register file on execution slices 108A-D, and thread 402 is mapped to each register file on execution slices 108E-H. Thread 401 is executed on one processor slice, and the results are distributed to each of the other slices to which thread 401 is mapped. In this exemplary embodiment, write back bus 130 is cut, so that execution slices 108A-D communicate only with each other and execution slices 108E-H communicate only with each other. This eliminates the need for write back bus arbitration between execution slices 108A-D and execution slices 108E-H, as each set of execution slices may use separated parts of write back bus 130. Load store routing network 140 is also cut in a similar manner to write back bus 130, so that load store slices 114A-D communicate only with each other and load store slices 114E-H communicate only with each other. In this exemplary embodiment, threads 401 and 402 each use 50% of the total execution units.



FIG. 4C depicts an example of mapping four threads, each pair of the four threads mapped to a dedicated set of execution slices. In this exemplary embodiment, when four threads (threads 404, 405, 406, and 407) are received, the four threads are mapped to each register file in two pairs (i.e., two architected states are each mapped to a set of four slices). For example, threads 404 and 405 are mapped to each register file on execution slices 108A-D and threads 406 and 407 are mapped to each register file on execution slices 108E-H. In this exemplary embodiment, any of the threads can use resources of the processor slices to which it is mapped. This dynamic sharing is determined by structure availability and dependencies. In this exemplary embodiment, the results of a thread execution are shared with the other slices to which the thread is mapped. Write back bus 130 is cut, so that execution slices 108A-D communicate only with each other and execution slices 108E-H communicate only with each other. Load store routing network 140 is also cut in a similar manner to write back bus 130. In this exemplary embodiment, the two thread pairs (threads 404, 405 and threads 406, 407) each use 50% of the total execution units.



FIG. 4D depicts an example of mapping eight threads, each pair of the eight threads mapped to a dedicated set of execution slices. In this exemplary embodiment, when eight threads (threads 408, 409, 410, 411, 412, 413, 414, and 415) are received, the eight threads are mapped to each register file in four pairs. For example, threads 408 and 409 are mapped to each register file on execution slices 108A-B, threads 410 and 411 are mapped to each register file on execution slices 108C-D, threads 412 and 413 are mapped to each register file on execution slices 108E-F, and threads 414 and 415 are mapped to each register file on execution slices 108G-H. In this exemplary embodiment, write back bus 130 is cut, so that execution slices 108A-D communicate only with each other, and execution slices 108E-H communicate only with each other. Load store routing network 140 is also cut in a similar manner to write back bus 130. In this exemplary embodiment, each register file stores two threads, and the four thread pairs each use 25% of the total execution units. In other embodiments, write back bus 130 and load store routing network 140 may each be cut any number of times that is supported by the architecture of a processor core and allows for the necessary communication between the components. For example, a cut in write back bus 130 may be advantageous when the cut separates busses commonly shared by multiple slices, which would otherwise require arbitration before writing results to write back bus 130 and load store routing network 140.
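
One way to picture the “cutting” of write back bus 130 (and, analogously, load store routing network 140) is to derive bus segments directly from the thread mapping, so that slices in different segments never contend for the same wires and no cross-segment arbitration is needed. The Python sketch below is an assumption-laden illustration, not the disclosed control logic.

    def cut_bus(mapping):
        # Each distinct slice group in the mapping becomes an independent bus segment.
        segments = {tuple(slices) for slices in mapping.values()}
        return sorted(list(seg) for seg in segments)

    print(cut_bus({0: [0, 1, 2, 3, 4, 5, 6, 7]}))       # FIG. 4A: one segment, all slices
    print(cut_bus({0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}))  # FIG. 4B: two segments, no arbitration between them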


The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience and thus, the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.


The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.


The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device, such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.


Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.


Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.


Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.


These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.


The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.


The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein was chosen to best explain the principles of the embodiment, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims
  • 1. A processor core comprising: a plurality of dispatch queues, wherein the plurality of dispatch queues are configured to receive instructions associated with a first number of threads; a first even plurality of parallel execution slices, wherein each of the first even plurality of parallel execution slices includes a corresponding first set of register files; a second even plurality of parallel execution slices, wherein each of the second even plurality of parallel execution slices includes a corresponding second set of register files; and a routing network configured to direct an output of the plurality of dispatch queues to the first even plurality of parallel execution slices and the second even plurality of parallel execution slices, wherein the first even plurality of parallel execution slices are configured to independently execute a first number of threads and the second even plurality of parallel execution slices are configured to independently execute a second number of threads that differ from the first number of threads, wherein the first even plurality of parallel execution slices write results of the execution of the first number of threads to the first set of register files, and wherein the second even plurality of parallel execution slices writes results of the execution of the second number of threads to the second set of register files, wherein the routing network determines whether the first number of threads is greater than the second number of threads and allocates data associated with the second number of threads to the second set of register files.
  • 2. The processor core of claim 1, wherein the first and second even plurality of parallel execution slices are arranged as pairs of parallel execution slices, and wherein each parallel execution slice of the pair of parallel execution slices is configured to share information associated with the corresponding first or second set of register files.
  • 3. The processor core of claim 2, wherein a register file associated with a first parallel execution slice of the pair of parallel execution slices is configured to store a first set of bits and wherein a register file associated with a second parallel execution slice of the pair of parallel execution slices is configured to store a second set of bits.
  • 4. The processor core of claim 3, wherein the information associated with the register file shared by each parallel execution slice of the pair of parallel execution slices is configured to support instructions of doubled size as the instructions of one of the parallel execution slices.
  • 5. The processor core of claim 2, wherein the pair of parallel execution slices is configured to execute instructions on at least one of the parallel execution slices of the pair of parallel execution slices.
  • 6. The processor core of claim 1, wherein the routing network is configured to dynamically control operations between the plurality of dispatch queues and the first and second even plurality of parallel execution slices based, at least in part, on: resource availability in the first and second even plurality of parallel execution slices; and execution requirements for the first and second plurality of threads.
  • 7. The processor core of claim 1, wherein the processor core is configured to manage a transition between receiving the first number of threads and the second number of threads, and wherein the processor core is configured to map the first number of threads to the first plurality of parallel execution slices and the second number of threads to the second plurality of parallel execution slices.
  • 8. A processor core comprising: a plurality of dispatch queues, wherein the plurality of dispatch queues are configured to receive instructions associated with a first number of threads; a first even plurality of parallel execution slices, wherein each of the first even plurality of parallel execution slices includes a corresponding first set of register files; a second even plurality of parallel execution slices, wherein each of the second even plurality of parallel execution slices includes a corresponding second set of register files; and a routing network configured to direct an output of the plurality of dispatch queues to the first even plurality of parallel execution slices and the second even plurality of parallel execution slices, wherein the first even plurality of parallel execution slices are configured to independently execute a first number of threads and the second even plurality of parallel execution slices are configured to independently execute a second number of threads that differ from the first number of threads, wherein the first even plurality of parallel execution slices write results of the execution of the first number of threads to the first set of register files, and wherein the second even plurality of parallel execution slices writes results of the execution of the second number of threads to the second set of register files, wherein the routing network determines whether instructions associated with the first number of threads comprise 64 bit wide instructions and responsive to determining that instructions associated with the first number of threads comprise 64 bit wide instructions, executes the instructions associated with the first number of threads on a first parallel execution slice of a pair of parallel execution slices and writes results of instruction execution to the first parallel execution slice of the pair of parallel execution slices and a second parallel execution slice of the pair of parallel execution slices.
  • 9. The processor core of claim 8, wherein the first and second even plurality of parallel execution slices are arranged as pairs of parallel execution slices, and wherein each parallel execution slice of the pair of parallel execution slices is configured to share information associated with the corresponding first or second set of register files.
  • 10. The processor core of claim 9, wherein a register file associated with a first parallel execution slice of the pair of parallel execution slices is configured to store a first set of bits and wherein a register file associated with a second parallel execution slice of the pair of parallel execution slices is configured to store a second set of bits.
  • 11. The processor core of claim 10, wherein the information associated with the register file shared by each parallel execution slice of the pair of parallel execution slices is configured to support instructions of doubled size as the instructions of one of the parallel execution slices.
  • 12. The processor core of claim 9, wherein the pair of parallel execution slices is configured to execute instructions on at least one of the parallel execution slices of the pair of parallel execution slices.
  • 13. The processor core of claim 8, wherein the routing network is configured to dynamically control operations between the plurality of dispatch queues and the first and second even plurality of parallel execution slices based, at least in part, on: resource availability in the first and second even plurality of parallel execution slices; and execution requirements for the first and second plurality of threads.
  • 14. The processor core of claim 8, wherein the processor core is configured to manage a transition between receiving the first number of threads and the second number of threads, and wherein the processor core is configured to map the first number of threads to the first plurality of parallel execution slices and the second number of threads to the second plurality of parallel execution slices.
  • 15. A processor core comprising: a plurality of dispatch queues, wherein the plurality of dispatch queues are configured to receive instructions associated with a first number of threads; a first even plurality of parallel execution slices, wherein each of the first even plurality of parallel execution slices includes a corresponding first set of register files; a second even plurality of parallel execution slices, wherein each of the second even plurality of parallel execution slices includes a corresponding second set of register files; and a routing network configured to direct an output of the plurality of dispatch queues to the first even plurality of parallel execution slices and the second even plurality of parallel execution slices, wherein the first even plurality of parallel execution slices are configured to independently execute a first number of threads and the second even plurality of parallel execution slices are configured to independently execute a second number of threads that differ from the first number of threads, wherein the first even plurality of parallel execution slices write results of the execution of the first number of threads to the first set of register files, and wherein the second even plurality of parallel execution slices writes results of the execution of the second number of threads to the second set of register files, wherein the routing network determines whether instructions associated with the first number of threads comprise 64 bit wide instructions and responsive to determining that instructions associated with the first number of threads comprise 64 bit wide instructions, executes the instructions associated with the first number of threads on a first parallel execution slice of a pair of parallel execution slices and writes results of instruction execution to the first parallel execution slice of the pair of parallel execution slices and a second parallel execution slice of the pair of parallel execution slices, and wherein the routing network, responsive to determining that instructions associated with the first number of threads do not comprise 64 bit wide instructions, determines whether instructions associated with the first number of threads comprise 128 bit wide instructions and responsive to determining that instructions associated with the first number of threads comprise 128 bit wide instructions, executes the instructions associated with the first number of threads on a first parallel execution slice and a second parallel execution slice of a pair of parallel execution slices and writes results of instruction execution to the first parallel execution slice of the pair of parallel execution slices and the second parallel execution slice of the pair of parallel execution slices.
  • 16. The processor core of claim 15, wherein the first and second even plurality of parallel execution slices are arranged as pairs of parallel execution slices, and wherein each parallel execution slice of the pair of parallel execution slices is configured to share information associated with the corresponding first or second set of register files.
  • 17. The processor core of claim 16, wherein a register file associated with a first parallel execution slice of the pair of parallel execution slices is configured to store a first set of bits and wherein a register file associated with a second parallel execution slice of the pair of parallel execution slices is configured to store a second set of bits.
  • 18. The processor core of claim 17, wherein the information associated with the register file shared by each parallel execution slice of the pair of parallel execution slices is configured to support instructions of doubled size as the instructions of one of the parallel execution slices.
  • 19. The processor core of claim 16, wherein the pair of parallel execution slices is configured to execute instructions on at least one of the parallel execution slices of the pair of parallel execution slices.
  • 20. The processor core of claim 15, wherein the routing network is configured to dynamically control operations between the plurality of dispatch queues and the first and second even plurality of parallel execution slices based, at least in part, on: resource availability in the first and second even plurality of parallel execution slices; and execution requirements for the first and second plurality of threads.
Parent Case Info

The present Application is a Continuation of U.S. patent application Ser. No. 14/869,305, filed on Sep. 29, 2015 and published as U.S. Patent Application Publication No. 20160092276 on Mar. 31, 2016, which is a Continuation of U.S. patent application Ser. No. 14/501,152 filed on Sep. 30, 2014 and issued as U.S. Pat. No. 9,720,696 on Aug. 1, 2017. The present Application claims priority to both of the above-referenced U.S. Patent Applications under 35 U.S.C. § 120. The disclosures of the above-referenced parent U.S. Patent Applications are incorporated herein by reference.

US Referenced Citations (136)
Number Name Date Kind
4858113 Saccardi Aug 1989 A
5055999 Frank et al. Oct 1991 A
5095424 Woffinden et al. Mar 1992 A
5471593 Branigin Nov 1995 A
5475856 Kogge Dec 1995 A
5553305 Gregor et al. Sep 1996 A
5630149 Bluhm May 1997 A
5680597 Kumar et al. Oct 1997 A
5822602 Thusoo Oct 1998 A
6026478 Dowling Feb 2000 A
6044448 Agrawal et al. Mar 2000 A
6073215 Snyder Jun 2000 A
6073231 Bluhm et al. Jun 2000 A
6092175 Levy et al. Jul 2000 A
6112019 Chamdani et al. Aug 2000 A
6119203 Snyder et al. Sep 2000 A
6138230 Hervin et al. Oct 2000 A
6145054 Mehrotra et al. Nov 2000 A
6170051 Dowling Jan 2001 B1
6212544 Borkenhagen et al. Apr 2001 B1
6237081 Le et al. May 2001 B1
6311261 Chamdani et al. Oct 2001 B1
6336183 Le et al. Jan 2002 B1
6356918 Chuang et al. Mar 2002 B1
6381676 Aglietti et al. Apr 2002 B2
6425073 Roussel et al. Jul 2002 B2
6463524 Delaney et al. Oct 2002 B1
6487578 Ranganathan Nov 2002 B2
6549930 Chrysos et al. Apr 2003 B1
6564315 Keller et al. May 2003 B1
6728866 Kahle et al. Apr 2004 B1
6732236 Favor May 2004 B2
6847578 Ranganathan Jan 2005 B2
6868491 Moore Mar 2005 B1
6883107 Rodgers et al. Apr 2005 B2
6944744 Ahmed et al. Sep 2005 B2
6948051 Rivers et al. Sep 2005 B2
6954846 Leibholz et al. Oct 2005 B2
6978459 Dennis et al. Dec 2005 B1
7020763 Saulsbury et al. Mar 2006 B2
7024543 Grisenthwaite et al. Apr 2006 B2
7086053 Long et al. Aug 2006 B2
7093105 Webb, Jr. et al. Aug 2006 B2
7100028 McGrath et al. Aug 2006 B2
7114163 Hardin et al. Sep 2006 B2
7124160 Saulsbury et al. Oct 2006 B2
7155600 Burky et al. Dec 2006 B2
7191320 Hooker et al. Mar 2007 B2
7263624 Marchand et al. Aug 2007 B2
7290261 Burky et al. Oct 2007 B2
7302527 Barrick et al. Nov 2007 B2
7386704 Schulz et al. Jun 2008 B2
7395419 Gonion Jul 2008 B1
7398374 Delano Jul 2008 B2
7469318 Chung et al. Dec 2008 B2
7478198 Latorre et al. Jan 2009 B2
7478225 Brooks et al. Jan 2009 B1
7490220 Balasubramonian et al. Feb 2009 B2
7512724 Dennis et al. Mar 2009 B1
7565652 Janssen et al. Jul 2009 B2
7600096 Parthasarathy et al. Oct 2009 B2
7669035 Young et al. Feb 2010 B2
7669036 Brown et al. Feb 2010 B2
7694112 Barowski et al. Apr 2010 B2
7707390 Ozer et al. Apr 2010 B2
7721069 Ramchandran et al. May 2010 B2
7793278 Du et al. Sep 2010 B2
7836317 Marchand et al. Nov 2010 B2
7889204 Hansen et al. Feb 2011 B2
7890735 Tran Feb 2011 B2
7926023 Okawa et al. Apr 2011 B2
7975134 Gonion Jul 2011 B2
7987344 Hansen et al. Jul 2011 B2
8041928 Burky et al. Oct 2011 B2
8046566 Abernathy et al. Oct 2011 B2
8074224 Nordquist et al. Dec 2011 B1
8099556 Ghosh et al. Jan 2012 B2
8103852 Bishop et al. Jan 2012 B2
8108656 Katragadda et al. Jan 2012 B2
8135942 Abernathy et al. Mar 2012 B2
8140832 Mejdrich et al. Mar 2012 B2
8141088 Morishita et al. Mar 2012 B2
8166282 Madriles et al. Apr 2012 B2
8219783 Hara Jul 2012 B2
8250341 Schulz et al. Aug 2012 B2
8335892 Minkin et al. Dec 2012 B1
8386751 Ramchandran et al. Feb 2013 B2
8412914 Gonion Apr 2013 B2
8464025 Yamaguchi et al. Jun 2013 B2
8489791 Byrne et al. Jul 2013 B2
8555039 Rychlik Oct 2013 B2
8578140 Yokoi Nov 2013 B2
8656401 Venkataramanan et al. Feb 2014 B2
8683182 Hansen et al. Mar 2014 B2
8700877 Shebanow et al. Apr 2014 B2
8713263 Bryant Apr 2014 B2
8732438 Caprioli et al. May 2014 B2
8850121 Ashcraft et al. Sep 2014 B1
8966232 Tran Feb 2015 B2
8984264 Karlsson et al. Mar 2015 B2
9223709 O'Bleness et al. Dec 2015 B1
9262174 Fetterman et al. Feb 2016 B2
9519484 Stark Dec 2016 B1
9665372 Eisen et al. May 2017 B2
9672043 Eisen et al. Jun 2017 B2
9690585 Eisen et al. Jun 2017 B2
9690586 Eisen et al. Jun 2017 B2
9720696 Chu et al. Aug 2017 B2
9740486 Boersma et al. Aug 2017 B2
9760375 Boersma et al. Sep 2017 B2
20020194251 Richter et al. Dec 2002 A1
20030120882 Granston et al. Jun 2003 A1
20040111594 Feiste et al. Jun 2004 A1
20040216101 Burky et al. Oct 2004 A1
20050138290 Hammarlund et al. Jun 2005 A1
20060095710 Pires Dos Reis Moreira May 2006 A1
20070022277 Iwamura et al. Jan 2007 A1
20080133885 Glew Jun 2008 A1
20090037698 Nguyen Feb 2009 A1
20090113182 Abernathy et al. Apr 2009 A1
20100100685 Kurosawa et al. Apr 2010 A1
20110161616 Tarjan et al. Jun 2011 A1
20120110271 Boersma et al. May 2012 A1
20120246450 Abdallah Sep 2012 A1
20140215189 Airaud et al. Jul 2014 A1
20140244239 Nicholson et al. Aug 2014 A1
20150134935 Blasco May 2015 A1
20160092276 Chu et al. Mar 2016 A1
20160103715 Sethia et al. Apr 2016 A1
20160202986 Ayub et al. Jul 2016 A1
20160202988 Ayub et al. Jul 2016 A1
20160202989 Eisen et al. Jul 2016 A1
20160202990 Brownscheidle et al. Jul 2016 A1
20160202991 Eisen et al. Jul 2016 A1
20160202992 Brownscheidle et al. Jul 2016 A1
20170168837 Eisen et al. Jun 2017 A1
Foreign Referenced Citations (8)
Number Date Country
101021778 Aug 2007 CN
101676865 Mar 2010 CN
101876892 Nov 2010 CN
102004719 Apr 2011 CN
H11252179 Sep 1999 JP
2006114036 Apr 2006 JP
2007172610 Jul 2007 JP
2013521557 Aug 2015 JP
Non-Patent Literature Citations (12)
Entry
“Method and system for Implementing “Register Threads” in a Simultaneously-Multithreaded (SMT) Processor Core”, An IP.com Prior Art Database Technical Disclosure, Authors et al.: Disclosed Anonymously, IP.com No. IPCOM000199825D, IP.com Electronic Publication: Sep. 17, 2010, pp. 1-4, <http:ip.com/IPCOM/000199825>.
Gebhart et al., “A Hierarchical Thread Scheduler and Register File for Energy-Efficient Throughput Processors”, ACM Transactions on Computer Systems, vol. 30, No. 2, Article 8, Publication date: Apr. 2012, pp. 8:1-8:38, © 2012 ACM, <http://doi.acm.org/10.1145/2166879.2166882>.
List of IBM Patents or Patent Applications Treated as Related, 3 pages.
Pechanek, et al., “ManArray Processor Interconnection Network: An Introduction”, Euro-Par '99 Parallel Processing, Lecture Notes in Computer Science, 5th International Euro-Par Conference, Aug. 31-Sep. 3, 1999 Proceedings, pp. 761-765, vol. 1685, Springer Berlin Heidelberg, Toulouse, France.
Pechanek, et al., “The ManArray Embedded Processor Architecture”, Proceedings of the 26th Euromicro Conference, IEEE Computer Society, Sep. 5-7, 2000, pp. 348-355, vol. 1, Maastricht.
Czajkowski, et al., “Resource Management for Extensible Internet Servers”, Proceedings of the 8th ACM SIGOPS European Workshop on Support for Composing Distributed Applications, Sep. 1998, pp. 33-39, ACM, Portugal.
Bridges, et al., “A CPU Utilization Limit for Massively Parallel MIMD Computers”, Fourth Symposium on the Frontiers of Massively Parallel Computing, Oct. 19-21, 1992, pp. 83-92, IEEE, VA, US.
Office Action in U.S. Appl. No. 14/501,152 dated Sep. 30, 2016, 20 pages. (pp. 1-20 in pdf).
Notice of Allowance in U.S. Appl. No. 14/501,152 dated Mar. 24, 2017, 10 pages. (pp. 1-10 in pdf).
Office Action in U.S. Appl. No. 14/896,305 dated May 18, 2017, 24 pages. (pp. 1-24 in pdf).
Notice of Allowance in U.S. Appl. No. 14/869,305 dated Sep. 8, 2017, 13 pages. (pp. 1-13 in pdf).
Partial Translation of Office Action for JP Application 2017-530696; Teruyama, “Instruction window buffers with issued instruction buffers,” Toshiba technology bulletin, pp. 25-28, vol. 20(14), published No. 2002-0600, Toshiba Corp., Mar. 18, 2002.
Related Publications (1)
Number Date Country
20180067746 A1 Mar 2018 US
Continuations (2)
Number Date Country
Parent 14869305 Sep 2015 US
Child 15805267 US
Parent 14501152 Sep 2014 US
Child 14869305 US