The present invention relates to scheduling in computer systems, and more particularly to synchronizing the scheduling of programs running as streams on multiple processors.
This application is related to U.S. patent application Ser. No. 10/643,744, entitled “Multistreamed Processor Vector Packing Method and Apparatus”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,577, entitled “System and Method for Processing Memory Instructions”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,742, entitled “Decoupled Store Address and Data in a Multiprocessor System”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,586, entitled “Decoupled Scalar Vector Computer Architecture System and Method”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,585, entitled “Latency Tolerant Distributed Shared Memory Multiprocessor Computer”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,754, entitled “Relaxed Memory Consistency Model”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,758, entitled “Remote Translation Mechanism for a Multinode System”, filed on even date herewith; and to U.S. patent application Ser. No. 10/643,741, entitled “Multistream Processing Memory-and Barrier-Synchronization Method and Apparatus”, filed on even date herewith, each of which is incorporated herein by reference.
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings hereto: Copyright © 2003, Cray, Inc. All Rights Reserved.
Through all the changes that have occurred since the beginning of the computer age, there has been one constant: the need for speed. In general, this need has been satisfied in one or both of two ways. The first involves making the hardware faster. For example, each new generation of hardware, be it processors, disks, memory systems, network systems or bus architectures, is typically faster than the preceding generation. Unfortunately, developing faster hardware is expensive, and there are physical limitations to how fast a given architecture can be made to run.
The second method involves performing tasks simultaneously through parallel processing. In parallel processing, two or more processors execute portions of a software application simultaneously. Parallel processing can be particularly advantageous when a problem can be broken into multiple pieces that have few interdependencies.
While parallel processing has resulted in faster systems, certain problems arise in parallel processing architectures. One problem that arises is that the parallel processors often share resources, and contention for these shared resources must be managed. A second problem is that events affecting the application may occur and one or more of the parallel processes may need to be informed of the event. For example, an exception event may occur when an invalid arithmetic operation occurs. Each parallel processing unit of an application may need to know of the exception.
As a result, there is a need in the art for the present invention.
The above-mentioned shortcomings, disadvantages and problems are addressed by the present invention, which will be understood by reading and studying the following specification.
One aspect of the systems and methods includes scheduling program units that are part of a process executed within an operating system. Additionally, at least one thread associated with the process is started within the operating system. Further, a plurality of streams within the thread are selected for execution on a multiple processor unit. Upon the occurrence of a context shifting event, one of the streams enters a kernel mode. If the first stream to enter kernel mode must block, then execution of the other streams of the plurality is also blocked.
The present invention describes systems, clients, servers, methods, and computer-readable media of varying scope. In addition to the aspects and advantages of the present invention described in this summary, further aspects and advantages of the invention will become apparent by reference to the drawings and by reading the detailed description that follows.
In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the present invention.
Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
In the Figures, the same reference number is used throughout to refer to an identical component which appears in multiple Figures. Signals and connections may be referred to by the same reference number or label, and the actual meaning will be clear from its use in the context of the description. Further, the same base reference number (e.g. 120) is used in the specification and figures when generically referring to the actions or characteristics of a group of identical components. A numeric index introduced by a decimal point (e.g. 120.1) is used when a specific component among the group of identical components performs an action or has a characteristic.
The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
In some embodiments, a multiple processor unit 102 includes four processors 104.1-104.4 and four cache memory controllers 106. Although each multiple processor unit is shown in
In one embodiment, the hardware environment is included within the Cray X1 computer system, which represents the convergence of the Cray T3E and the traditional Cray parallel vector processors. The Cray X1 computer system is a highly scalable, cache coherent, shared-memory multiprocessor that uses powerful vector processors as its building blocks, and implements a modernized vector instruction set. In these embodiments, multiple processor unit 102 is a Multi-streaming processor (MSP). It is to be noted that
Application 202 may be configured to run as multiple program units. In some embodiments, a program unit comprises a thread 206. Typically, each thread 206 may be executed in parallel. In some embodiments, an application may have up to four threads, and the operating environment assigns each thread to be executed on a different multiple processor unit 102. In some embodiments, the threads 206 of an application may be distributed across more than one multiple processor unit 102. For example, thread 206.1 may be assigned to multiple processor unit 102.1 and thread 206.2 of an application 202 may be assigned to multiple processor unit 102.2.
In addition, a thread 206 may be executed as multiple streams 210. Each stream 210 is assigned a processor 104 on the multiple processor unit 102 assigned to the thread. Typically a thread will be executed as multiple streams when there are vector operations that can take place in parallel, or when there have been sections of scalar code that have been identified as being able to execute in parallel. Each stream comprises code that is capable of being executed by the assigned processor 104 substantially independently and in parallel with the other processors 104 on the multiple processor unit 102.
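The division of a thread's work into parallel streams can be sketched as follows. This is an illustrative model only: the streams described above are hardware contexts on the processors of a multiple processor unit, which are modeled here as Python threads, and `run_as_streams` and the squaring workload are hypothetical stand-ins.

```python
import threading

def run_as_streams(work_items, num_streams=4):
    """Split independent work items across streams (modeled here as threads).

    Each stream executes its slice in parallel, analogous to sections of
    scalar code identified as able to execute in parallel on the
    processors of a multiple processor unit.
    """
    results = [None] * len(work_items)

    def stream_body(stream_id):
        # Each stream handles every num_streams-th item (a simple static schedule),
        # so the streams proceed substantially independently.
        for i in range(stream_id, len(work_items), num_streams):
            results[i] = work_items[i] ** 2  # stand-in for vector/scalar work

    streams = [threading.Thread(target=stream_body, args=(s,))
               for s in range(num_streams)]
    for t in streams:
        t.start()
    for t in streams:
        t.join()
    return results
```

Because the work items have no interdependencies, no synchronization between streams is needed until all of them join.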
In some embodiments, each application 202 has an application context 204 and each thread 206 has a thread context 208. Application context 204 and thread context 208 are used by the operating environment 200 to manage the state of an application and thread, and may be used to save and restore the state of the application as the application or thread is moved on or off a processor 104. In some embodiments, application context 204 includes information such as the memory associated with the application, file information regarding open files and other operating system information associated with the application. Thread context 208 includes information such as the register state for the thread, a signal state for the thread and a thread identification. The signal state includes information such as what signals are currently being handled by the thread and what signals are pending for the thread. Other thread context information includes a thread ID that may be used to identify and interact with the thread, and a set of stream register state data. The stream register state data comprises register data for the processor executing the stream.
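The thread context described above can be sketched as a data structure. The field names below are hypothetical illustrations of the categories the text lists (register state, signal state, thread ID, and per-stream register state), not the actual layout used by the operating environment.

```python
from dataclasses import dataclass, field

@dataclass
class StreamRegisterState:
    """Register data for the processor executing one stream (illustrative)."""
    registers: dict = field(default_factory=dict)

@dataclass
class ThreadContext:
    """Per-thread state: register state, signal state, thread ID, and a set
    of stream register state data, as described in the text."""
    thread_id: int
    register_state: dict = field(default_factory=dict)
    signals_handled: set = field(default_factory=set)   # signals currently being handled
    signals_pending: set = field(default_factory=set)   # signals pending for the thread
    stream_states: list = field(default_factory=list)   # one StreamRegisterState per stream

# A thread executing as four streams carries one register-state record per stream.
ctx = ThreadContext(thread_id=1)
ctx.stream_states = [StreamRegisterState() for _ in range(4)]
```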
Certain events require synchronization among the threads running as part of an application. For example, an event requiring a context shift for the application or thread may occur, and other threads running as part of the application may need to be informed or may need to handle the event.
The method begins when an application is started within an operating system (block 310). Typically the application will be scheduled on one of the processors in the system as one of many processes executing within an operating environment.
Next, the application indicates that threads should be started (block 320). In some embodiments, the operating system arranges for the threads to be scheduled on one of the available multiple processor units.
Next, the system identifies streams within a thread and schedules the streams on one of the processors on a multiple processor unit (block 330). As noted above, a stream comprises code (vector or scalar) that can be executed in parallel on the processor.
During the execution of one or more of the threads and/or streams within the thread, a context shifting event may occur (block 340). There are multiple reasons for context shift events; the quantity and type of context shifting events will depend on the operating environment. Typically the context shift will require an elevated privilege for the thread or stream. In some embodiments, the elevated privilege is achieved by entering kernel mode.
In some embodiments of the invention, the context shifting event is a “signal.” A signal in Unicos/mp and other UNIX variants is typically an indication that some type of exceptional event has occurred. Examples of such events include a floating point exception, raised when an invalid floating point operation is attempted, and a memory access exception, raised when a process or thread attempts to access memory that does not exist or is not mapped to the process. Other types of signals are possible and known to those of skill in the art. Additionally, it should be noted that in some operating environments, a signal may be referred to as an exception.
In alternative embodiments, the context shifting event may be a non-local goto. For example, in Unicos/mp and other UNIX variants, a combination of “setjmp( )” and “longjmp( )” function calls can establish a non-local goto. In essence, the “setjmp” call establishes the location to go to, and the “longjmp” call causes the process or thread to branch to that location. The goto is non-local because it causes execution of the thread or process to continue at a point outside the scope of the currently executing function. A context shift is required because the processor registers must be set to reflect the new process or thread execution location.
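The “setjmp”/“longjmp” calls belong to C; as a rough analogy only, the same non-local transfer of control can be sketched in Python using exception unwinding, where raising plays the role of “longjmp” and the enclosing handler plays the role of the “setjmp” point. The names below are invented for illustration.

```python
class NonLocalExit(Exception):
    """Carries a value back to an outer, setjmp-like resume point."""
    def __init__(self, value):
        self.value = value

def longjmp_like(value):
    # Analogous to longjmp(): abandon the current function scope and
    # resume execution at the enclosing handler.
    raise NonLocalExit(value)

def deep_work():
    # Execution leaves this function without returning normally, just as
    # longjmp() bypasses the usual call/return flow.
    longjmp_like(42)
    return "unreachable"

def run():
    try:                        # analogous to establishing the setjmp() point
        deep_work()
        return "normal path"
    except NonLocalExit as e:   # control re-enters here after the jump
        return e.value
```

As in the C case, the runtime must restore enough state (here, the interpreter's call stack; in the patent's setting, the processor registers) for execution to continue at the new location.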
In further alternative embodiments, the context shifting event may be a system call. Typically a system call requires that the process or thread enter a privileged mode in order to execute the system call. In Unicos/mp and UNIX variants, the system call must typically execute in kernel mode, while normally a process or thread executes in user mode. In order to execute in kernel mode, a context shift is required.
Those of skill in the art will appreciate that other context shifting events are possible and within the scope of the invention.
Upon receiving indication of a context shifting event, the first stream that enters kernel mode sets a lock to prevent other streams executing on processors in multiple processor unit 102 from also entering kernel mode (block 341). Methods of setting and clearing locks are known in the art and are typically provided by the operating environment.
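The first-stream-wins behavior of block 341 can be sketched with a non-blocking lock acquisition. This is a minimal model, not the operating environment's actual locking primitive; `try_enter_kernel` and its arguments are hypothetical.

```python
import threading

kernel_lock = threading.Lock()

def try_enter_kernel(stream_id, entered):
    """Only the first stream to arrive acquires the lock and enters kernel mode.

    Later arrivals fail the non-blocking acquire and are held out of the
    kernel, mirroring block 341.
    """
    if kernel_lock.acquire(blocking=False):
        entered.append(stream_id)   # this stream proceeds in kernel mode
        return True
    return False                    # other streams are prevented from entering
```

For example, if stream 0 calls `try_enter_kernel` first, it succeeds; a subsequent call by stream 1 fails until the lock is released.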
The stream that enters kernel mode will typically be executing using a kernel stack. As the stream is executing in kernel mode, it may or may not need to block within the kernel to wait for the availability of a resource (block 342). If the stream does not need to block within the kernel, the other streams executing on other processors of multiple processor unit 102 continue to operate in user (non-privileged) mode (block 350). An example of a case where a stream entering the kernel may not need to block is when the stream needs to interact with a TLB (Translation Lookaside Buffer). Typically the code executed in the kernel for this type of operation is fairly short, and does not have the potential for interfering with other streams or processes.
However, if the stream executing in kernel mode needs to block, then the other streams executing on other processors are also blocked (block 344). In some embodiments, a hardware interrupt may be sent to the other processors to indicate that they should block.
In some embodiments, the streams being blocked execute instructions to save their current context into thread context stream register state data associated with their stream (block 346). In some embodiments, the streams need to execute kernel code in order to save their context. In these embodiments, the first stream to enter the kernel executes using the kernel stack. The subsequent streams are allowed to enter the kernel, but execute on auxiliary stacks.
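The sequence of blocks 344–346 can be sketched as follows. The hardware interrupt sent to the other processors is modeled by a `threading.Event`, and `StreamGroup` and the register dictionaries are illustrative inventions; the real mechanism operates on processor state, not Python objects.

```python
import threading

class StreamGroup:
    """Sketch of blocks 344-346: when the stream in kernel mode must block,
    the remaining streams are signaled to block and each saves its context."""

    def __init__(self):
        self.block_requested = threading.Event()  # stands in for the hardware interrupt
        self.saved_contexts = {}                  # stream id -> saved register state
        self._lock = threading.Lock()

    def kernel_stream_blocks(self):
        # The first stream, already in the kernel, cannot proceed:
        # notify the remaining streams that they must block too.
        self.block_requested.set()

    def stream_body(self, stream_id, registers):
        # A non-kernel stream waits for the block request, then saves its
        # context into the thread's stream register state data.
        self.block_requested.wait()
        with self._lock:
            self.saved_contexts[stream_id] = dict(registers)

group = StreamGroup()
streams = [threading.Thread(target=group.stream_body, args=(s, {"pc": s}))
           for s in (1, 2, 3)]
for t in streams:
    t.start()
group.kernel_stream_blocks()   # the kernel-mode stream (stream 0) must block
for t in streams:
    t.join()
```

After the event is set, each of the other streams records its state and blocks, matching the description of the subsequent streams entering the kernel only to save context.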
Systems and methods for scheduling threads in a parallel processing environment have been disclosed. The systems and methods described provide advantages over previous systems.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the present invention.
The terminology used in this application is meant to include all of these environments. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. Therefore, it is manifestly intended that this invention be limited only by the following claims and equivalents thereof.
Number | Name | Date | Kind |
---|---|---|---|
3881701 | Schoenman et al. | May 1975 | A |
RE28577 | Schmidt | Oct 1975 | E |
4380786 | Kelly | Apr 1983 | A |
4414624 | Summer et al. | Nov 1983 | A |
4541046 | Nagashima et al. | Sep 1985 | A |
4733348 | Hiraoka et al. | Mar 1988 | A |
4771391 | Blasbalg | Sep 1988 | A |
4868818 | Madan et al. | Sep 1989 | A |
4888679 | Fossum et al. | Dec 1989 | A |
4933933 | Dally et al. | Jun 1990 | A |
5008882 | Peterson et al. | Apr 1991 | A |
5012409 | Fletcher et al. | Apr 1991 | A |
5031211 | Nagai et al. | Jul 1991 | A |
5036459 | Den Haan et al. | Jul 1991 | A |
5068851 | Bruckert et al. | Nov 1991 | A |
5072883 | Vidusek | Dec 1991 | A |
5105424 | Flaig et al. | Apr 1992 | A |
5157692 | Horie et al. | Oct 1992 | A |
5161156 | Baum et al. | Nov 1992 | A |
5170482 | Shu et al. | Dec 1992 | A |
5175733 | Nugent | Dec 1992 | A |
5197130 | Chen et al. | Mar 1993 | A |
5218601 | Chujo et al. | Jun 1993 | A |
5218676 | Ben-ayed et al. | Jun 1993 | A |
5220804 | Tilton et al. | Jun 1993 | A |
5239545 | Buchholz | Aug 1993 | A |
5276899 | Neches | Jan 1994 | A |
5280474 | Nickolls et al. | Jan 1994 | A |
5297738 | Lehr et al. | Mar 1994 | A |
5311931 | Lee | May 1994 | A |
5313628 | Mendelsohn et al. | May 1994 | A |
5313645 | Rolfe | May 1994 | A |
5331631 | Teraslinna | Jul 1994 | A |
5333279 | Dunning | Jul 1994 | A |
5341482 | Cutler et al. | Aug 1994 | A |
5341504 | Mori et al. | Aug 1994 | A |
5347450 | Nugent | Sep 1994 | A |
5353283 | Tsuchiya | Oct 1994 | A |
5365228 | Childs et al. | Nov 1994 | A |
5375223 | Meyers et al. | Dec 1994 | A |
5418916 | Hall et al. | May 1995 | A |
5430850 | Papadopoulos et al. | Jul 1995 | A |
5430884 | Beard et al. | Jul 1995 | A |
5434995 | Oberlin et al. | Jul 1995 | A |
5435884 | Simmons et al. | Jul 1995 | A |
5437017 | Moore et al. | Jul 1995 | A |
5440547 | Easki et al. | Aug 1995 | A |
5446915 | Pierce | Aug 1995 | A |
5456596 | Gourdine | Oct 1995 | A |
5472143 | Bartels et al. | Dec 1995 | A |
5497480 | Hayes et al. | Mar 1996 | A |
5517497 | LeBoudec et al. | May 1996 | A |
5530933 | Frink et al. | Jun 1996 | A |
5546549 | Barrett et al. | Aug 1996 | A |
5548639 | Ogura et al. | Aug 1996 | A |
5550589 | Shiojiri et al. | Aug 1996 | A |
5555542 | Ogura et al. | Sep 1996 | A |
5560029 | Papadopoulos et al. | Sep 1996 | A |
5606696 | Ackerman et al. | Feb 1997 | A |
5613114 | Anderson et al. | Mar 1997 | A |
5640524 | Beard et al. | Jun 1997 | A |
5649141 | Yamazaki | Jul 1997 | A |
5721921 | Kessler et al. | Feb 1998 | A |
5740967 | Simmons et al. | Apr 1998 | A |
5765009 | Ishizaka | Jun 1998 | A |
5781775 | Ueno | Jul 1998 | A |
5787494 | Delano et al. | Jul 1998 | A |
5796980 | Bowles | Aug 1998 | A |
5812844 | Jones et al. | Sep 1998 | A |
5835951 | McMahan | Nov 1998 | A |
5860146 | Vishin et al. | Jan 1999 | A |
5860602 | Tilton et al. | Jan 1999 | A |
5897664 | Nesheim et al. | Apr 1999 | A |
5946717 | Uchibori | Aug 1999 | A |
5951882 | Simmons et al. | Sep 1999 | A |
5978830 | Nakaya et al. | Nov 1999 | A |
5987571 | Shibata et al. | Nov 1999 | A |
5995752 | Chao et al. | Nov 1999 | A |
6003123 | Carter et al. | Dec 1999 | A |
6014728 | Baror | Jan 2000 | A |
6016969 | Tilton et al. | Jan 2000 | A |
6047323 | Krause | Apr 2000 | A |
6088701 | Whaley et al. | Jul 2000 | A |
6101590 | Hansen | Aug 2000 | A |
6105113 | Schimmel | Aug 2000 | A |
6161208 | Dutton et al. | Dec 2000 | A |
6247169 | DeLong | Jun 2001 | B1 |
6269390 | Boland | Jul 2001 | B1 |
6269391 | Gillespie | Jul 2001 | B1 |
6308250 | Klausler | Oct 2001 | B1 |
6308316 | Hashimoto et al. | Oct 2001 | B1 |
6339813 | Smith et al. | Jan 2002 | B1 |
6356983 | Parks | Mar 2002 | B1 |
6366461 | Pautsch et al. | Apr 2002 | B1 |
6389449 | Nemirovsky et al. | May 2002 | B1 |
6490671 | Frank et al. | Dec 2002 | B1 |
6496902 | Faanes et al. | Dec 2002 | B1 |
6519685 | Chang | Feb 2003 | B1 |
6553486 | Ansari | Apr 2003 | B1 |
6591345 | Seznec | Jul 2003 | B1 |
6615322 | Arimilli et al. | Sep 2003 | B2 |
6684305 | Deneau | Jan 2004 | B1 |
6782468 | Nakazato | Aug 2004 | B1 |
6816960 | Koyanagi | Nov 2004 | B2 |
6910213 | Hirono et al. | Jun 2005 | B1 |
6922766 | Scott | Jul 2005 | B2 |
6925547 | Scott et al. | Aug 2005 | B2 |
6931510 | Damron | Aug 2005 | B1 |
6952827 | Alverson et al. | Oct 2005 | B1 |
6976155 | Drysdale et al. | Dec 2005 | B2 |
7028143 | Barlow et al. | Apr 2006 | B2 |
7089557 | Lee | Aug 2006 | B2 |
7103631 | van der Veen | Sep 2006 | B1 |
7111296 | Wolrich et al. | Sep 2006 | B2 |
7137117 | Ginsberg | Nov 2006 | B2 |
7143412 | Koenen | Nov 2006 | B2 |
7162713 | Pennello | Jan 2007 | B2 |
7191444 | Alverson et al. | Mar 2007 | B2 |
7334110 | Faanes et al. | Feb 2008 | B1 |
20020078122 | Joy et al. | Jun 2002 | A1 |
20020091747 | Rehg et al. | Jul 2002 | A1 |
20020169938 | Scott et al. | Nov 2002 | A1 |
20020172199 | Scott et al. | Nov 2002 | A1 |
20030005380 | Nguyen et al. | Jan 2003 | A1 |
20030097531 | Arimilli et al. | May 2003 | A1 |
20040044872 | Scott | Mar 2004 | A1 |
20040064816 | Alverson et al. | Apr 2004 | A1 |
20040162949 | Scott et al. | Aug 2004 | A1 |
20050044128 | Scott et al. | Feb 2005 | A1 |
20050044339 | Sheets | Feb 2005 | A1 |
20050044340 | Sheets et al. | Feb 2005 | A1 |
20050125801 | King | Jun 2005 | A1 |
Number | Date | Country |
---|---|---|
0353819 | Feb 1990 | EP |
0473452 | Mar 1992 | EP |
0475282 | Mar 1992 | EP |
0501524 | Sep 1992 | EP |
0570729 | Nov 1993 | EP |
WO-8701750 | Mar 1987 | WO |
WO-8808652 | Nov 1988 | WO |
WO-9516236 | Jun 1995 | WO |
WO-96102831 | Apr 1996 | WO |
WO-9632681 | Oct 1996 | WO |