The invention relates generally to multiprocessing systems and more specifically to multiple thread synchronization activities on one or more processing elements (real, virtual, or otherwise).
Multiprocessing systems continue to become increasingly important in computing systems for many applications, including general purpose processing systems and embedded control systems. In the design of such multiprocessing systems, an important architectural consideration is scalability. In other words, as more hardware resources are added to a particular implementation the machine should produce higher performance. Not only do embedded implementations require increased processing power, many also require the seemingly contradictory attribute of providing low power consumption. In the context of these requirements, particularly for the embedded market, solutions are implemented as “Systems on Chip” or “SoC.” The assignee of the present application, MIPS Technologies, Inc., offers a broad range of solutions for such SoC multiprocessing systems.
In multiprocessing systems, loss in scaling efficiency may be attributed to many different issues, including long memory latencies and waits due to synchronization. The present invention addresses improvements to synchronization among threads in a multithreaded multiprocessing environment, particularly when individual threads may be active on one or more of multiple processors, active on a single processor but distributed among multiple thread contexts, or resident in memory (virtualized threads).
Synchronization in a multithreaded system refers to the activities and functions of such a multiplicity of threads that coordinate use of shared system resources (e.g., system memory and interface FIFOs) through variables storing “state” bits for producer/consumer communication and mutual exclusion (MUTEX) tasks. Important considerations for implementing any particular synchronization paradigm include designing and implementing structures and processes that provide for deadlock-free operation while being very efficient in terms of time, system resources, and other performance measurements.
Details regarding the MIPS processor architecture are provided in the following document, which is incorporated by reference in its entirety for all purposes: D. Sweetman, See MIPS Run, Morgan Kaufmann Publishers, Inc. (1999).
The difficulty of finding a hardware synchronization solution for a RISC processor is compounded by the nature of the RISC paradigm. A CISC paradigm is easier, in some ways, to adapt to particular problems because the instruction set may be extended virtually without limit, as instructions and operands in an instruction pipeline may be of variable length. A designer who wants to implement a special hardware synchronization instruction set is able to add new synchronization instructions easily, as many CISC instruction sets already contemplate extensions to basic instruction sets. However, that solution is generally not available to designers working with RISC instruction sets. Most RISC instruction sets are filled or nearly filled, with any vacancies judiciously filled only after many factors are extensively considered and evaluated. What is needed is a system for extending or enhancing existing instruction sets, with such a solution particularly useful in the RISC environment, but not exclusively so, as the CISC environment may also benefit from instruction set extension.
The present invention has been made in consideration of the above situation, and has as an object to provide a system, method, computer program product, and propagated signal which, in a specific embodiment, efficiently enables deadlock-free inter-thread synchronization among a plurality of threads that may be active on multiple processors, active on a single processor but distributed among multiple thread contexts, and/or resident in memory (virtualized threads). In a more generalized description of the preferred embodiment, the object is a system, method, computer program product, and propagated signal which efficiently enables extension of instructions and classes of instructions.
A preferred embodiment of the present invention includes a shared resource access control system having a gating storage responsive to a plurality of control bits, the control bits derived from an access reference identifying the shared resource, the gating storage including a plurality of sets of views with each set of views including a first view and a second view, with the gating storage producing a particular one view from a particular one set responsive to the control bits; and a controller, coupled to the gating storage, for controlling access to the shared resource using the particular one view.
Another preferred embodiment of the present invention includes a shared resource access control method, the method including: applying an access instruction for a data storage location to a memory system, the memory system including a plurality of data storage locations, each data storage location associated with a set of views including a first view and a second view, with the memory system producing a particular one view from a particular one set of views associated with the data storage location responsive to a set of control bits derived from an address identifying the data storage location; producing the particular one view from the particular one set of views; and controlling access to the data storage location using the particular one view.
Preferred embodiments of the present invention also include both a computer program product having a computer readable medium carrying program instructions for accessing a memory when executed using a computing system, the executed program instructions executing a method, as well as a propagated signal on which is carried computer-executable instructions which, when executed by a computing system, perform a method, the method including applying an access instruction for a data storage location to a memory system, the memory system including a plurality of data storage locations, each data storage location associated with a set of views including a first view and a second view, with the memory system producing a particular one view from a particular one set of views associated with the data storage location responsive to a set of control bits derived from an address identifying the data storage location; producing the particular one view from the particular one set of views; and controlling access to the data storage location using the particular one view.
An alternate preferred embodiment includes an apparatus for extending a load/store instruction having a target address, the apparatus including a memory system having a view associated with a data storage location identified by a tag derived from the target address, the data storage location associated with the load/store instruction, the memory system responsive to the target address to produce a particular view for the load/store instruction from the memory system; and a controller, coupled to the data storage location and to the memory system, for implementing a load/store method for the load/store instruction using the particular view.
Other preferred embodiments include a method, and both a computer program product and a propagated signal carrying computer-executable instructions for extending an instruction using an instruction rule when executed by a computing system, the computer-executable instructions implementing a method. The method includes producing, responsive to the target address, a particular view for the load/store instruction from a memory system, the memory system having a view associated with a data storage location identified by a tag derived from the target address, the data storage location associated with the load/store instruction; and implementing a load/store method for the load/store instruction using the particular view.
The present invention relates to multiple thread synchronization activities on one or more processing elements. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.
Data-driven programming models map well to multithreaded architectures. For example, threads of execution are able to read data from memory-mapped I/O FIFOs, and may be suspended for as long as it takes for a FIFO to fill, while other threads continue to execute. When the data is available, the load completes, and the incoming data may be processed directly in the load destination register without requiring any I/O interrupt service, polling, or software task scheduling.
However, many architecture models have no provision for restartably interrupting a memory operation once a memory management unit has processed it. It would thus be impossible for a thread context of a blocked thread to be used by an exception handler, or for an operating system to swap out and re-assign such a thread context. The preferred embodiment of the present invention therefore introduces a specific implementation of a concept of “Gating Storage”—memory (or memory-like devices) that are tagged in the TLB (with extended bits or direct physical decode) as potentially requiring abort and restart of loads or stores. The abort/restart capability may require explicit support from the processor/memory interface protocols. It should be noted that in some implementations the memory is not tagged in a memory management unit; rather the memory may be direct mapped or otherwise identified.
The gating storage, as described herein, is a special case of a more generalized concept. As described herein, the gating storage may be conceptualized as a physical address subspace with special properties. In the specific examples described above and as set forth in more detail below, this gating storage serves as an Inter-Thread Communication (ITC) storage for enabling thread-to-thread and thread-to-I/O synchronization, particularly for load/store instructions. Each 64-bit location or “cell” within this gating storage space appears at multiple consecutive addresses, or “views”, distinguished by “view” bits (e.g., bits [6:3], though other implementations may use more, fewer, or different bits) of the load/store target address. Each view may have distinct semantics for the same instruction. A fundamental property of the gating storage is that loads and stores may block, pending the state of the cell as defined by the view referenced by the load or store. Any blocked loads and stores resume execution when the actions of other threads of execution, or possibly those of external devices, result in the completion requirements being satisfied. As gating storage references, blocked ITC loads and stores can be precisely aborted and restarted by systems software.
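By way of illustration only, the view selection just described might be expressed in C roughly as follows; the base address name itc_base, the treatment of bits [6:3] as the view field, and the placement of the cell index immediately above those bits are assumptions for this sketch, not a required encoding.

    #include <stdint.h>

    /* Assumed layout, for illustration only: bits [6:3] of the load/store
     * target address select the view; higher-order bits select the 64-bit
     * cell within the gating storage space.                                */
    #define ITC_VIEW_SHIFT  3u
    #define ITC_VIEW_MASK   0xFu   /* four view bits, address bits [6:3]    */
    #define ITC_CELL_SHIFT  7u     /* first address bit above the view bits */

    /* Address of view 'view' (0-15) of cell 'cell' within a gating storage
     * region whose base virtual address is 'itc_base' (hypothetical name). */
    static inline uintptr_t itc_view_address(uintptr_t itc_base,
                                             unsigned cell, unsigned view)
    {
        return itc_base
             | ((uintptr_t)cell << ITC_CELL_SHIFT)
             | ((uintptr_t)(view & ITC_VIEW_MASK) << ITC_VIEW_SHIFT);
    }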
This structure has several motivations.
For example, in some views, cells within gating storage space may be “Empty” or “Full”. A load from a cell that is Empty causes the thread issuing the load to be suspended until the cell is written to by a store from another thread. A store to a cell, when Full, causes the thread issuing the store to be suspended until a previous value has been consumed by a load.
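A minimal C sketch of this producer/consumer usage follows; the pointer name ef_view, and the assumption that ordinary volatile loads and stores through an Empty/Full view map onto the blocking behavior described above, are illustrative only.

    #include <stdint.h>

    /* 'ef_view' is assumed to point at the Empty/Full view of one cell.
     * Under the semantics described above, the load blocks (the issuing
     * thread is suspended) while the cell is Empty, and the store blocks
     * while the cell is Full; no polling or interrupt service appears.    */
    static void producer(volatile uint64_t *ef_view, uint64_t value)
    {
        *ef_view = value;   /* blocks until the cell is Empty, then marks it Full */
    }

    static uint64_t consumer(volatile uint64_t *ef_view)
    {
        return *ef_view;    /* blocks until the cell is Full, then marks it Empty */
    }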
Such gating storage may define independent Empty and Full conditions, rather than a single Empty/Full bit, in order to allow for FIFO buffered gating storage. In a classical Empty/Full memory configuration, Empty would simply be the negation of Full. A FIFO cannot be both Empty and Full, but it may be neither Empty nor Full when it contains some data yet could accept more.
It is possible that one view in an implementation is a standard empty/full synchronization construct for producers and consumers. Another view may implement classical “P/V” semaphores by blocking loads even of “full” cells when the value of the cell is zero. Other views might implement atomic semaphore “get” and “put”, or fetch-and-increment or fetch-and-decrement operations without blocking, among other types, variations, and implementations of synchronization constructs.
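As a further hedged sketch only (the pointer name pv_view and the precise increment/decrement behavior of such a view are assumptions consistent with, but not dictated by, the description above), a classical P/V semaphore view might be exercised as:

    #include <stdint.h>

    /* 'pv_view' is assumed to be a P/V semaphore view of a cell holding a
     * counting semaphore: a load blocks while the count is zero and
     * otherwise returns after decrementing the count; a store increments
     * the count.  These semantics are assumed for illustration.           */
    static void semaphore_P(volatile uint64_t *pv_view)
    {
        (void)*pv_view;     /* "P": blocks while the cell value is zero */
    }

    static void semaphore_V(volatile uint64_t *pv_view)
    {
        *pv_view = 1;       /* "V": increments the semaphore count      */
    }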
As discussed above, a load/store target address may designate gating storage through a direct decode or use of special TLB entries. References to virtual memory pages whose TLB entries are tagged as gating storage resolve not to standard memory, but to a store with special attributes. Each page maps a set of 1-64 64-bit storage locations, called “cells”, each of which may be accessed in one of a multiplicity of ways, called “views”, using standard load and store instructions. The view is encoded in the low order (and untranslated) view bits of a generated virtual address for the load/store target address. As included in the preferred embodiment of the present invention, a fundamental property of the gating storage is that it synchronizes execution streams. Loads and stores to/from a memory location in gating storage, as implemented, block until the state of the cell corresponds to the required conditions for completion in the selected view. A blocked load or store may be precisely aborted when necessary, and restarted by the controlling operating system when appropriate.
Each cell of the gating storage has Empty and Full Boolean states associated with it. The cell views are then defined in terms of these states.
Each storage cell could thus be described by the C structure:
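(The structure itself is not reproduced here; the following is a sketch consistent with the surrounding description, in which only the bypass_cell and ctl_cell member names are taken from the text below, and the remaining member names and ordering are illustrative assumptions.)

    #include <stdint.h>

    typedef struct {
        uint64_t bypass_cell;     /* raw data access, no Empty/Full side effects */
        uint64_t ctl_cell;        /* Empty/Full and implementation state bits    */
        uint64_t ef_cell;         /* Empty/Full synchronized view                */
        uint64_t ef_try_cell;     /* Empty/Full non-blocking "try" view          */
        uint64_t pv_cell;         /* blocking semaphore "P"/"V" view             */
        uint64_t pv_try_cell;     /* non-blocking semaphore "try" view           */
        uint64_t impl_cell[10];   /* remaining implementation-dependent views    */
    } ITC_cell;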
where all sixteen of the elements reference the same sixty-four bits of underlying storage data. References to this storage may have access types of less than sixty-four bits (e.g., SW/LW, SH/LH, SB/LB), with the same Empty/Full protocol being enforced on a per-access basis. Store/Load pairs of the same data type to a given ITC address will always reference the same data, but the byte and halfword ordering within words, and the word ordering within 64-bit doublewords, may be implementation- and endianness-dependent, i.e. a SW followed by a LB from the same ITC address is not guaranteed to be portable. While the design of ITC storage allows references to be expressed in terms of C language constructs, compiler optimizations may generate sequences that break ITC protocols, and great care must be taken if ITC is directly referenced as “memory” in a high-level language.
Systems that do not support 64-bit loads and stores need not implement all 64 bits of each cell as storage. When only 32 bits of storage are instantiated per cell, that storage must be visible in the least significant 32-bit word of each view, regardless of the endianness of the processor, while the results of referencing the most significant 32 bits of each view are implementation-dependent. Ignoring the 2**2 bit of the address on each access can satisfy this requirement. In this way a C language cast from a uint64 to a uint32 reference will acquire the data on both big-endian and little-endian CPU configurations. When more than 32 bits of Control view information are required in a 32-bit ITC store, the additional control bits should be referenced using one of the implementation-dependent views. Empty and Full bits are distinct so that decoupled multi-entry data buffers, such as FIFOs, can be mapped into ITC storage.
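A brief sketch of that cast, assuming an implementation that instantiates only 32 bits per cell and ignores the 2**2 address bit as described above (the function and parameter names are illustrative):

    #include <stdint.h>

    /* Read the instantiated 32 bits of a cell through a 64-bit view pointer.
     * Because the implementation is assumed to ignore the 2**2 address bit,
     * this access reaches the same storage on big-endian and little-endian
     * configurations.                                                       */
    static uint32_t itc_read32(volatile uint64_t *view)
    {
        return *(volatile uint32_t *)view;
    }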
The gating storage may be saved and restored by copying the {bypass_cell, ctl_cell} pair to and from general storage. While the full data width, 64 or 32 bits, of bypass_cell must be preserved, strictly speaking, only the least significant bits of the ef_state need to be manipulated. In the case of multi-entry data buffers (e.g. FIFOs), each cell must be read using an Empty/Full view until the Control view shows the cell to be Empty to drain the buffer on a copy. The FIFO state can then be restored by performing a series of Empty/Full stores to an equivalent FIFO cell starting in an Empty state. Implementations may provide depth counters in the implementation-specific bits of the Control view to optimize this process. Software must ensure that no other accesses are made to ITC cells during the save and restore processes.
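The drain and restore steps of that save/restore sequence might be sketched as follows; the position of the Empty bit in the Control view, the buffer bound, and all function and pointer names are assumptions for illustration rather than a defined software interface.

    #include <stdint.h>
    #include <stddef.h>

    #define ITC_CTL_EMPTY 0x1u  /* assumed position of the Empty bit in the Control view */

    /* Drain one FIFO cell for a context save: read through the Empty/Full
     * view until the Control view reports the cell Empty, saving each value.
     * Returns the number of entries drained (bounded by 'max').             */
    static size_t itc_drain_fifo(volatile uint64_t *ef_view,
                                 volatile uint64_t *ctl_view,
                                 uint64_t *save_buf, size_t max)
    {
        size_t n = 0;
        while (n < max && !(*ctl_view & ITC_CTL_EMPTY))
            save_buf[n++] = *ef_view;     /* each load consumes one FIFO entry */
        return n;
    }

    /* Restore: store the saved entries back through the Empty/Full view of
     * an equivalent FIFO cell that starts in an Empty state.                */
    static void itc_restore_fifo(volatile uint64_t *ef_view,
                                 const uint64_t *save_buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            *ef_view = save_buf[i];
    }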
The “physical address space” of gating storage may be made global across all VPEs and processors in a multiprocessor system as shown and described above, such that a thread is able to synchronize on a cell on a different VPE from the one on which it is executing. Global gating storage addresses could be derived from a CPUNum field of an EBase register of each VPE. CPUNum includes ten bits that correspond to the ten significant bits of storage address into the gating storage. Processors or cores designed for uniprocessor applications need not export a physical interface to the gating storage, and may treat the gating storage as a processor-internal resource.
This style of implementation allows a gating storage system “hit” where data memory 110 is already in the state desired, to have the same timing as a cache hit, but it presupposes tight integration with an instruction generation source, for example with a processor core. Less closely coupled implementations of gating storage system 100, where the gating storage block is instantiated more like a scratchpad RAM or an I/O device supporting a gating storage protocol, would be less core-intrusive, but may also stall the pipeline even on a “hit”.
View directory 105 is a special memory for views, also referred to herein as access method functions. An entry of directory 105 includes a view memory entry 210. An entry of memory 110, also referred to herein as a data storage location, holds data at an address derived from target address 200 and may include entry controls/flags/tags 220.
Each entry 210 includes a multiplicity of eight-byte views, any particular one of which is selectable by the value of the view bits [6:3] of the instruction operand. Further discussion of these views appears below; however, for now it is sufficient to understand that each of the views is used to alter/enhance/modify the effect of the operand and/or the effect of an instruction upon its operand, or the method by which a processor operates upon the instruction and the instruction operand. In the example set forth herein, the instruction is a load/store instruction, and the instruction operand is a memory location decoded into gating storage 100 such that sixteen views are available to redefine some aspect of the operation of the load/store instruction relative to this memory location. Specifically, in the preferred embodiment, the views define possible synchronization constructs, functions, or methods that may be used in accessing the particular memory location, such as using an Empty/Full primitive or a P/V semaphore. An inter-thread communication control unit (ITU), e.g., control logic 125 or the ITU as described in the incorporated patent application, accesses a memory location consistent with the desired synchronization construct selected by the appropriate view. In other cases, the constructs, functions, or methods may be other than synchronization-related for load/store instructions. In some cases, other instructions may be processed through a memory system having access method functions applied dependent upon an associated operand.
Gating storage is an attribute of memory which may optionally be supported by processors implementing embodiments of the present invention. The user-mode load/store semantics of gating storage are identical with those of normal memory, except that completion of the operation may be blocked for unbounded periods of time. The distinguishing feature of gating storage is that outstanding load or store operations can be aborted and restarted. Preferably it is a TLB-mediated property of a virtual page whether or not a location is treated as gating storage (though other mechanisms may be implemented to identify gating storage locations).
When a load or store operation is performed on gating storage, no instructions beyond the load/store in program order are allowed to alter software-visible states of the system until a load result or store confirmation is returned from storage. In the event that an exception is taken using the thread context of an instruction stream which is blocked on a load/store to gating storage, or in the event where such a thread is halted by setting a ThreadStatus.H bit of the associated thread context, the pending load/store operation is aborted.
When a load or store is aborted, the abort is signaled to the storage subsystem, such that the operation unambiguously either completes or is abandoned without any side effects. When a load operation is abandoned, any hardware interlocks on the load dependence are released, so that the destination register may be used as an operand source, with its preload value.
After an aborted and abandoned load/store, a program counter as seen by the exception program counter register and the branch delay state as seen by a Cause.BD bit are set such that an execution of an exception return (ERET) by the instruction stream associated with the thread context, or a clearing of the thread context halted state, causes a re-issue of the gating load/store. Gating storage accesses are never cached, and multiple stores to a gating storage address are never merged by a processor.
While the preceding description provides a complete description of a specific implementation of a gating storage for inter-thread communication in the synchronization of load/store instructions, the present invention has a broader implementation as well. In the more generalized case, gating storage provides a simple and efficient mechanism to extend an instruction set (particularly advantageous to processors implementing RISC instruction sets). This aspect of the present invention uses an operand of an instruction to modify, enhance, substitute, or otherwise affect an instruction using hardware features. In the load/store example used throughout this discussion, the gating storage adds multiple load/store commands to the basic instruction set, each of the added commands being a variant of the basic command but including a hardware-managed instruction method that implements a wide variety of synchronization constructs in the process of completing a load or a store. Normal loads/stores are still available when the operand does not fall within the gating storage. However, modifying an instruction by use of special memory having special instruction functions/methods triggered from the operand may be used to extend many types of instructions in many different ways.
The inherently restricted number and complexity of instruction operand encodings in a “RISC” instruction set is augmented by adding computational semantics to basic instructions (e.g., RISC storage access instructions such as loads and stores), by using an instruction context (e.g., a portion of the storage address of the load or store as an opcode extension) to express a calculation or control function to be executed. This provides that an instruction may have a default instruction method and one or more variations that are implemented responsive to the instruction context. The preferred embodiment of the present invention described above provides for a standard load/store instruction to be extended using specifically chosen synchronization functions to be used instead of the standard instruction method when the target address is a data storage location in the gating storage. The preferred embodiment implements many “flavors” of the alternate synchronizing instruction method through the views (which may be referred to as access method functions) that tune a particular synchronizing load/store using the desired synchronization method.
A problem addressed by this “extension” aspect of the present invention is that some implementations of MIPS Technologies, Inc. processors required a range of synchronizing operations that could not reasonably be directly encoded in an extension to the MIPS32/MIPS64 instruction set. The present invention uses a memory-like space for designated interthread communication storage (ITC) that allowed a potentially very large number of synchronized, shared variables. A number of operations were to be available for each location: synchronized loads/stores, semaphore operations, bypass accesses to data and control information, etc., and it was desirable for synchronized loads and stores to be available for the full range of memory data types supported by the MIPS32/MIPS64 architecture: byte, halfword, word, and doubleword.
Rather than invent new instructions that perform distinct operations on the memory address expressed to the instruction, the preferred embodiment treats loads and stores to the designated ITC memory space as “load-plus-operation” and “store-plus-operation” instructions, where the “operation” is determined by decoding of a subset of the bits of the effective address of the load or store instruction. In the case of the preferred embodiment, this has evolved from using a pair of bits (2**4 and 2**3) as a four-element opcode space, performing Empty/Full synchronization, “forcing”, “bypass”, and “control” operations on the ITC variable referenced by the higher-order address bits, to the current scheme where four bits, 2**6 through 2**3, are used to create a 16-opcode space, in which the system defines “bypass”, “control”, Empty/Full synchronization, Empty/Full “try” operations, Blocking semaphore “P” and “V” operations, and Semaphore “try” operations. This is but one example of how an instruction may be extended using a context of the instruction to determine an applicable instruction method to be used. Other extensions are possible for load/store instructions, and other extensions are possible for other instructions, particularly those having an associated operand. However, other instructions may be extended by using some other contextual information to differentiate between instances in which a default instruction method is to be used and instances in which an alternate instruction method is to be used. In the present context, instruction methods are the procedures implemented by a processor in executing an instruction. The extension aspect of the present invention provides for a different set of procedures to be used when executing the same instruction when a context of the instruction indicates that a different implementation should be used.
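For illustration of this decoding scheme only, a sketch of the 4-bit operation field and one possible assignment of the named operations follows; the numeric values and identifier names are assumptions, not a normative encoding of the 16-opcode space.

    #include <stdint.h>

    /* Extract the 4-bit "operation" field, address bits 2**6 through 2**3,
     * from the effective address of a load/store that targets ITC space.   */
    static inline unsigned itc_operation(uintptr_t effective_address)
    {
        return (unsigned)((effective_address >> 3) & 0xFu);
    }

    /* Hypothetical assignment of the 16-entry opcode space; only the kinds
     * of operations named in the text are listed, and the values shown are
     * illustrative rather than normative.                                   */
    enum itc_op {
        ITC_OP_BYPASS  = 0,   /* raw data access, no synchronization side effects */
        ITC_OP_CONTROL = 1,   /* Empty/Full and implementation state              */
        ITC_OP_EF_SYNC = 2,   /* blocking Empty/Full synchronization              */
        ITC_OP_EF_TRY  = 3,   /* non-blocking Empty/Full "try"                    */
        ITC_OP_PV_SYNC = 4,   /* blocking semaphore "P"/"V"                       */
        ITC_OP_PV_TRY  = 5    /* non-blocking semaphore "try"                     */
        /* remaining values reserved / implementation-dependent                   */
    };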
The invention described in this application may, of course, be embodied in hardware; e.g., within or coupled to a Central Processing Unit (“CPU”), microprocessor, microcontroller, System on Chip (“SOC”), or any other programmable device. Additionally, embodiments may be embodied in software (e.g., computer readable code, program code, instructions and/or data disposed in any form, such as source, object or machine language) disposed, for example, in a computer usable (e.g., readable) medium configured to store the software. Such software enables the function, fabrication, modeling, simulation, description and/or testing of the apparatus and processes described herein. For example, this can be accomplished through the use of general programming languages (e.g., C, C++), GDSII databases, hardware description languages (HDL) including Verilog HDL, VHDL, AHDL (Altera HDL) and so on, or other available programs, databases, and/or circuit (i.e., schematic) capture tools. Such software can be disposed in any known computer usable medium including semiconductor, magnetic disk, optical disc (e.g., CD-ROM, DVD-ROM, etc.) and as a computer data signal embodied in a computer usable (e.g., readable) transmission medium (e.g., carrier wave or any other medium including digital, optical, or analog-based medium). As such, the software can be transmitted over communication networks including the Internet and intranets. Embodiments of the invention embodied in software may be included in a semiconductor intellectual property core (e.g., embodied in HDL) and transformed to hardware in the production of integrated circuits. Additionally, implementations of the present invention may be embodied as a combination of hardware and software.
In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention.
A “computer-readable medium” for purposes of embodiments of the present invention may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device. The computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
A “processor” or “process” includes any human, hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor may perform its functions in “real time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems.
Reference throughout this specification to “one embodiment”, “an embodiment”, or “a specific embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention and not necessarily in all embodiments. Thus, respective appearances of the phrases “in one embodiment”, “in an embodiment”, or “in a specific embodiment” in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any specific embodiment of the present invention may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered as part of the spirit and scope of the present invention.
Embodiments of the invention may be implemented by using a programmed general purpose digital computer, application specific integrated circuits, programmable logic devices, field programmable gate arrays, or optical, chemical, biological, quantum or nanoengineered systems, components and mechanisms. In general, the functions of the present invention may be achieved by any means as is known in the art. Distributed, or networked systems, components and circuits may be used. Communication, or transfer, of data may be wired, wireless, or by any other means.
It will also be appreciated that one or more of the elements depicted in the drawings/figures may also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope of the present invention to implement a program or code that may be stored in a machine-readable medium or transmitted using a carrier wave to permit a computer to perform any of the methods described above.
Additionally, any signal arrows in the drawings/Figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted. Furthermore, the term “or” as used herein is generally intended to mean “and/or” unless otherwise indicated. Combinations of components or steps will also be considered as being noted, where terminology is foreseen as rendering the ability to separate or combine unclear.
As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
The foregoing description of illustrated embodiments of the present invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the present invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the present invention in light of the foregoing description of illustrated embodiments of the present invention and are to be included within the spirit and scope of the present invention.
Thus, while the present invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features without departing from the scope and spirit of the invention as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the present invention. It is intended that the invention not be limited to the particular terms used in following claims and/or to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include any and all embodiments and equivalents falling within the scope of the appended claims.
The above-described arrangements of apparatus and methods are merely illustrative of applications of the principles of this invention and many other embodiments and modifications may be made without departing from the spirit and scope of the invention as defined in the claims.
These and other novel aspects of the present invention will be apparent to those of ordinary skill in the art upon review of the drawings and the remaining portions of the specification. Therefore, the scope of the invention is to be determined solely by the appended claims.
This application is a continuation-in-part (CIP) of the following co-pending Non-Provisional U.S. patent applications, which are hereby expressly incorporated by reference in their entireties for all purposes:

Ser. No. | Filing Date | Title
---|---|---
10/929,342 | 27 Aug. 2004 | INTEGRATED MECHANISM FOR SUSPENSION AND DEALLOCATION OF COMPUTATIONAL THREADS OF EXECUTION IN A PROCESSOR
10/929,102 | 27 Aug. 2004 | MECHANISMS FOR DYNAMIC CONFIGURATION OF VIRTUAL PROCESSOR RESOURCES
10/928,746 | 27 Aug. 2004 | APPARATUS, METHOD, AND INSTRUCTION FOR INITIATION OF CONCURRENT INSTRUCTION STREAMS IN A MULTITHREADING MICROPROCESSOR
10/929,097 | 27 Aug. 2004 | MECHANISMS FOR SOFTWARE MANAGEMENT OF MULTIPLE COMPUTATIONAL CONTEXTS

This application is a continuation-in-part (CIP) of the following co-pending Non-Provisional U.S. patent applications, which are hereby expressly incorporated by reference in their entireties for all purposes:

Ser. No. | Filing Date | Title
---|---|---
10/684,350 | 10 Oct. 2003 | MECHANISMS FOR ASSURING QUALITY OF SERVICE FOR PROGRAMS EXECUTING ON A MULTITHREADED PROCESSOR
10/684,348 | 10 Oct. 2003 | INTEGRATED MECHANISM FOR SUSPENSION AND DEALLOCATION OF COMPUTATIONAL THREADS OF EXECUTION IN A PROCESSOR

Each of the applications identified in the first paragraph is a continuation-in-part (CIP) of each of the following co-pending Non-Provisional U.S. patent applications, which are hereby expressly incorporated by reference in their entireties for all purposes:

Ser. No. | Filing Date | Title
---|---|---
10/684,350 | 10 Oct. 2003 | MECHANISMS FOR ASSURING QUALITY OF SERVICE FOR PROGRAMS EXECUTING ON A MULTITHREADED PROCESSOR
10/684,348 | 10 Oct. 2003 | INTEGRATED MECHANISM FOR SUSPENSION AND DEALLOCATION OF COMPUTATIONAL THREADS OF EXECUTION IN A PROCESSOR

Each of the co-pending Non-Provisional U.S. patent applications identified in the first two paragraphs above claim the benefit of the following U.S. Provisional Applications, which are hereby expressly incorporated by reference in their entireties for all purposes:

Ser. No. | Filing Date | Title
---|---|---
60/499,180 | 28 Aug. 2003 | MULTITHREADING APPLICATION SPECIFIC EXTENSION
60/502,358 | 12 Sep. 2003 | MULTITHREADING APPLICATION SPECIFIC EXTENSION TO A PROCESSOR ARCHITECTURE
60/502,359 | 12 Sep. 2003 | MULTITHREADING APPLICATION SPECIFIC EXTENSION TO A PROCESSOR ARCHITECTURE

This application is related to the following Non-Provisional U.S. patent applications:

Ser. No. (Client Ref.) | Filing Date | Title
---|---|---
10/955,231 | 30 Sep. 2004 | A SMART MEMORY BASED SYNCHRONIZATION CONTROLLER FOR A MULTI-THREADED MULTIPROCESSOR SOC

All of the above-referenced related patent applications and priority patent applications are hereby expressly incorporated by reference in their entireties for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
4817051 | Chang | Mar 1989 | A |
4860190 | Kaneda et al. | Aug 1989 | A |
5159686 | Chastain et al. | Oct 1992 | A |
5295265 | Ducateau et al. | Mar 1994 | A |
5428754 | Baldwin | Jun 1995 | A |
5499349 | Nikhil et al. | Mar 1996 | A |
5511192 | Shirakihara | Apr 1996 | A |
5515538 | Kleiman | May 1996 | A |
5659786 | George et al. | Aug 1997 | A |
5727203 | Hapner et al. | Mar 1998 | A |
5758142 | McFarling et al. | May 1998 | A |
5799188 | Manikundalam et al. | Aug 1998 | A |
5812811 | Dubey et al. | Sep 1998 | A |
5835748 | Orenstein et al. | Nov 1998 | A |
5867704 | Tanaka et al. | Feb 1999 | A |
5892934 | Yard | Apr 1999 | A |
5933627 | Parady | Aug 1999 | A |
5944816 | Dutton et al. | Aug 1999 | A |
5949994 | Dupree et al. | Sep 1999 | A |
5961584 | Wolf | Oct 1999 | A |
6061710 | Eickemeyer et al. | May 2000 | A |
6088787 | Predko | Jul 2000 | A |
6128720 | Pechanek et al. | Oct 2000 | A |
6175916 | Ginsberg et al. | Jan 2001 | B1 |
6189093 | Ekner et al. | Feb 2001 | B1 |
6205543 | Tremblay et al. | Mar 2001 | B1 |
6223228 | Ryan et al. | Apr 2001 | B1 |
6240531 | Spilo et al. | May 2001 | B1 |
6253306 | Ben-Meir et al. | Jun 2001 | B1 |
6286027 | Dwyer, III et al. | Sep 2001 | B1 |
6330656 | Bealkowski et al. | Dec 2001 | B1 |
6330661 | Torii | Dec 2001 | B1 |
6401155 | Saville et al. | Jun 2002 | B1 |
6591379 | LeVine et al. | Jul 2003 | B1 |
6643759 | Andersson et al. | Nov 2003 | B2 |
6668308 | Barroso et al. | Dec 2003 | B2 |
6671791 | McGrath | Dec 2003 | B1 |
6675192 | Emer et al. | Jan 2004 | B2 |
6687812 | Shimada | Feb 2004 | B1 |
6697935 | Borkenhagen et al. | Feb 2004 | B1 |
6738796 | Mobini | May 2004 | B1 |
6779065 | Murty et al. | Aug 2004 | B2 |
6877083 | Arimilli et al. | Apr 2005 | B2 |
6889319 | Rodgers et al. | May 2005 | B1 |
6920634 | Tudor | Jul 2005 | B1 |
6922745 | Kumar et al. | Jul 2005 | B2 |
6925550 | Sprangle et al. | Aug 2005 | B2 |
6971103 | Hokenek et al. | Nov 2005 | B2 |
6986140 | Brenner et al. | Jan 2006 | B2 |
6993598 | Pafumi et al. | Jan 2006 | B2 |
7020879 | Nemirovsky et al. | Mar 2006 | B1 |
7065094 | Petersen et al. | Jun 2006 | B2 |
7069421 | Yates et al. | Jun 2006 | B1 |
7073042 | Uhlig et al. | Jul 2006 | B2 |
7093106 | Ambekar et al. | Aug 2006 | B2 |
7127561 | Hill et al. | Oct 2006 | B2 |
7134124 | Ohsawa et al. | Nov 2006 | B2 |
7152170 | Park | Dec 2006 | B2 |
7181600 | Uhler | Feb 2007 | B1 |
7185183 | Uhler | Feb 2007 | B1 |
7185185 | Joy et al. | Feb 2007 | B2 |
7203823 | Albuz et al. | Apr 2007 | B2 |
7216338 | Barnett et al. | May 2007 | B2 |
7321965 | Kissell | Jan 2008 | B2 |
7376954 | Kissell | May 2008 | B2 |
7386636 | Day et al. | Jun 2008 | B2 |
7424599 | Kissell | Sep 2008 | B2 |
7428732 | Sandri et al. | Sep 2008 | B2 |
20020083173 | Musoll et al. | Jun 2002 | A1 |
20020083278 | Noyes | Jun 2002 | A1 |
20020091915 | Parady | Jul 2002 | A1 |
20020103847 | Potash | Aug 2002 | A1 |
20020147760 | Torii | Oct 2002 | A1 |
20020174318 | Stuttard et al. | Nov 2002 | A1 |
20030014471 | Ohsawa et al. | Jan 2003 | A1 |
20030074545 | Uhler | Apr 2003 | A1 |
20030079094 | Rajwar et al. | Apr 2003 | A1 |
20030093652 | Song | May 2003 | A1 |
20030105796 | Sandri et al. | Jun 2003 | A1 |
20030115245 | Fujisawa | Jun 2003 | A1 |
20030126416 | Marr et al. | Jul 2003 | A1 |
20030225816 | Morrow et al. | Dec 2003 | A1 |
20040015684 | Peterson | Jan 2004 | A1 |
20040139306 | Albuz et al. | Jul 2004 | A1 |
20050050305 | Kissell | Mar 2005 | A1 |
20050050395 | Kissell | Mar 2005 | A1 |
20050120194 | Kissell | Jun 2005 | A1 |
20050125629 | Kissell | Jun 2005 | A1 |
20050125795 | Kissell | Jun 2005 | A1 |
20050240936 | Jones et al. | Oct 2005 | A1 |
20050251613 | Kissell | Nov 2005 | A1 |
20050251639 | Vishin et al. | Nov 2005 | A1 |
20060161421 | Kissell | Jul 2006 | A1 |
20060161921 | Kissell | Jul 2006 | A1 |
20060190945 | Kissell | Aug 2006 | A1 |
20060190946 | Kissell | Aug 2006 | A1 |
20060195683 | Kissell | Aug 2006 | A1 |
20060206686 | Banerjee et al. | Sep 2006 | A1 |
20070043935 | Kissell | Feb 2007 | A2 |
20070044105 | Kissell | Feb 2007 | A2 |
20070044106 | Kissell | Feb 2007 | A2 |
20070106887 | Kissell | May 2007 | A1 |
20070106988 | Kissell | May 2007 | A1 |
20070106989 | Kissell | May 2007 | A1 |
20070106990 | Kissell | May 2007 | A1 |
20080140998 | Kissell | Jun 2008 | A1 |
Number | Date | Country |
---|---|---|
0725334 | Aug 1996 | EP |
0917057 | May 1999 | EP |
1089173 | Apr 2001 | EP |
8-249195 | Sep 1996 | JP |
2007-504536 | Mar 2007 | JP |
WO0153935 | Jul 2001 | WO |
WO 03019360 | Mar 2003 | WO |
WO 2005022385 | Mar 2005 | WO |
Number | Date | Country | |
---|---|---|---|
20050251613 A1 | Nov 2005 | US | |
20070186028 A2 | Aug 2007 | US |
Number | Date | Country | |
---|---|---|---|
60499180 | Aug 2003 | US | |
60502358 | Sep 2003 | US | |
60502359 | Sep 2003 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10929342 | Aug 2004 | US |
Child | 10954988 | US | |
Parent | 10929102 | Aug 2004 | US |
Child | 10929342 | US | |
Parent | 10928746 | Aug 2004 | US |
Child | 10929102 | US | |
Parent | 10929097 | Aug 2004 | US |
Child | 10928746 | US | |
Parent | 10684350 | Oct 2003 | US |
Child | 10929097 | US | |
Parent | 10684348 | Oct 2003 | US |
Child | 10684350 | US | |
Parent | 10684350 | Oct 2003 | US |
Child | 10929342 | US | |
Parent | 10684348 | Oct 2003 | US |
Child | 10684350 | US | |
Parent | 10684350 | Oct 2003 | US |
Child | 10929102 | US | |
Parent | 10684348 | Oct 2003 | US |
Child | 10684350 | US | |
Parent | 10684350 | Oct 2003 | US |
Child | 10928746 | US | |
Parent | 10684348 | Oct 2003 | US |
Child | 10684350 | US | |
Parent | 10684350 | Oct 2003 | US |
Child | 10929097 | US | |
Parent | 10684348 | Oct 2003 | US |
Child | 10684350 | US |