1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for administering connection identifiers for collective operations in a parallel computer.
2. Description of Related Art
The development of the EDVAC computer system of 1948 is often cited as the beginning of the computer era. Since that time, computer systems have evolved into extremely complicated devices. Today's computers are much more sophisticated than early systems such as the EDVAC. Computer systems typically include a combination of hardware and software components, application programs, operating systems, processors, buses, memory, input/output devices, and so on. As advances in semiconductor processing and computer architecture push the performance of the computer higher and higher, more sophisticated computer software has evolved to take advantage of the higher performance of the hardware, resulting in computer systems today that are much more powerful than just a few years ago.

Parallel computing is an area of computer technology that has experienced advances. Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain results faster. Parallel computing is based on the fact that the process of solving a problem usually can be divided into smaller tasks, which may be carried out simultaneously with some coordination.
Parallel computers execute parallel algorithms. A parallel algorithm can be split up to be executed a piece at a time on many different processing devices, and then put back together again at the end to get a data processing result. Some algorithms are easy to divide up into pieces. Splitting up the job of checking all of the numbers from one to a hundred thousand to see which are primes could be done, for example, by assigning a subset of the numbers to each available processor, and then putting the list of positive results back together. In this specification, the multiple processing devices that execute the individual pieces of a parallel program are referred to as ‘compute nodes.’ A parallel computer is composed of compute nodes and other processing nodes as well, including, for example, input/output (‘I/O’) nodes, and service nodes.
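For illustration only, the following self-contained C sketch shows one way the prime-checking job described above might be divided among compute nodes, using the Message Passing Interface (‘MPI’) library discussed below; the range bounds and helper names are choices made for this example.

```c
/* Each rank tests a contiguous slice of 1..100000 for primality and
 * the partial counts are combined at rank 0. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

static int is_prime(long n) {
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long limit = 100000;
    long chunk = (limit + size - 1) / size;   /* numbers per rank */
    long lo = rank * chunk + 1;
    long hi = (lo + chunk - 1 < limit) ? lo + chunk - 1 : limit;

    long local = 0, total = 0;
    for (long n = lo; n <= hi; n++)
        local += is_prime(n);

    /* put the list of positive results back together: a many-to-one reduce */
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("primes up to %ld: %ld\n", limit, total);

    MPI_Finalize();
    return 0;
}
```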
Parallel algorithms are valuable because it is faster to perform some kinds of large computing tasks via a parallel algorithm than it is via a serial (non-parallel) algorithm, because of the way modern processors work. It is far more difficult to construct a computer with a single fast processor than one with many slow processors with the same throughput. There are also certain theoretical limits to the potential speed of serial processors. On the other hand, every parallel algorithm has a serial part and so parallel algorithms have a saturation point. After that point adding more processors does not yield any more throughput but only increases the overhead and cost.
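This saturation point is commonly quantified by Amdahl's law, stated here for reference although the specification does not invoke it by name: if a fraction p of an algorithm can be parallelized and the remaining 1 − p is serial, the speedup on N processors is at most

S(N) = 1 / ((1 − p) + p/N),

which approaches 1/(1 − p) as N grows. With a five percent serial part, for example, no number of processors yields more than a 20× speedup; past the saturation point, additional processors add only overhead and cost.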
Parallel algorithms are also designed to optimize one more resource: the data communications requirements among the nodes of a parallel computer. There are two ways parallel processors communicate: shared memory or message passing. Shared memory processing needs additional locking for the data and imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.
Message passing processing uses high-speed data communications networks and message buffers, but this communication adds transfer overhead on the data communications networks as well as additional memory needs for message buffers and latency in the data communications among nodes. Designs of parallel computers use specially designed data communications links so that the communication overhead is small, but it is the parallel algorithm that determines the volume of the traffic.
Many data communications network architectures are used for message passing among nodes in parallel computers. Compute nodes may be organized in a network as a ‘torus’ or ‘mesh,’ for example. Also, compute nodes may be organized in a network as a tree. A torus network connects the nodes in a three-dimensional mesh with wrap-around links. Every node is connected to its six neighbors through this torus network, and each node is addressed by its x,y,z coordinate in the mesh. In such a manner, a torus network lends itself to point to point operations. In a tree network, the nodes typically are connected into a binary tree: each node has a parent and two children (although some nodes may have zero children or only one child, depending on the hardware configuration). Although a tree network typically is inefficient in point to point communication, a tree network does provide high bandwidth and low latency for certain collective operations, message passing operations in which all compute nodes participate simultaneously, such as, for example, an allgather operation. In computers that use a torus and a tree network, the two networks typically are implemented independently of one another, with separate routing circuits, separate physical links, and separate message buffers.
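For illustration only, the following C sketch shows this torus addressing scheme; the 8×8×8 dimensions and the helper names are assumptions made for the example, not the geometry of any particular machine.

```c
#include <stdio.h>

#define X 8
#define Y 8
#define Z 8

/* linearize an (x, y, z) coordinate into a node rank */
static int torus_rank(int x, int y, int z) {
    return (z * Y + y) * X + x;
}

/* the six neighbors of (x, y, z), with wrap-around at the mesh edges */
static void torus_neighbors(int x, int y, int z, int out[6]) {
    out[0] = torus_rank((x + 1) % X, y, z);      /* +x */
    out[1] = torus_rank((x + X - 1) % X, y, z);  /* -x */
    out[2] = torus_rank(x, (y + 1) % Y, z);      /* +y */
    out[3] = torus_rank(x, (y + Y - 1) % Y, z);  /* -y */
    out[4] = torus_rank(x, y, (z + 1) % Z);      /* +z */
    out[5] = torus_rank(x, y, (z + Z - 1) % Z);  /* -z */
}

int main(void) {
    int n[6];
    torus_neighbors(0, 0, 0, n);  /* even a corner node has six neighbors */
    for (int i = 0; i < 6; i++)
        printf("neighbor %d: rank %d\n", i, n[i]);
    return 0;
}
```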
In parallel computers, administering collective operations among many hundreds or thousands of compute nodes often presents challenges to the science of automated computing machinery. Each collective operation must be assigned a unique connection identifier. In many parallel computers, the number of available connection identifiers is limited. Present techniques of allocating such a scarce resource among compute nodes, groups of compute nodes, and the like are often inequitable and inefficient.
Methods, apparatus, and products for administering connection identifiers for collective operations in a parallel computer are disclosed. Administering connection identifiers in accordance with embodiments of the present invention includes, prior to calling a collective operation, determining, by a first compute node of a communicator to receive an instruction to execute the collective operation, whether a value stored in a global connection identifier (‘ConnID’) utilization buffer exceeds a predetermined threshold, the value stored in the global ConnID utilization buffer representing a number of connection identifiers in use. If the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold, administering connection identifiers in accordance with embodiments of the present invention includes: calling the collective operation with a next available ConnID including retrieving, from an element of a ConnID buffer, the next available ConnID and locking the element of the ConnID buffer from access by other compute nodes. If the value stored in the global ConnID utilization buffer exceeds the predetermined threshold, administering connection identifiers in accordance with embodiments of the present invention includes: repeatedly determining whether the value stored in the global ConnID utilization buffer exceeds the predetermined threshold until the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold.
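For illustration only, the following self-contained C sketch outlines that control flow. All names in it (global_utilization, conn_id_locked, THRESHOLD, and so on) are hypothetical stand-ins; the specification leaves the concrete buffers, thresholds, and atomic primitives to particular embodiments, which would operate on shared or remote memory rather than local variables.

```c
#include <stdio.h>

#define MAX_CONN_IDS 32          /* e.g., a 5-bit ConnID field */
#define THRESHOLD    30          /* hypothetical predetermined threshold */

static int global_utilization;            /* stands in for the global ConnID utilization buffer */
static int conn_id_locked[MAX_CONN_IDS];  /* stands in for per-element locks on the ConnID buffer */

/* called by the first compute node of the communicator to receive the
 * instruction to execute a collective operation */
static int acquire_conn_id(void) {
    /* repeatedly test utilization until it no longer exceeds the threshold */
    while (global_utilization > THRESHOLD)
        ;   /* a real node would block on, e.g., a DMA completion */

    /* retrieve the next available ConnID, locking its buffer element
     * against access by other compute nodes */
    for (int id = 0; ; id = (id + 1) % MAX_CONN_IDS) {
        if (!conn_id_locked[id]) {
            conn_id_locked[id] = 1;   /* in hardware: an atomic test-and-set */
            global_utilization++;
            return id;
        }
    }
}

static void release_conn_id(int id) {
    conn_id_locked[id] = 0;
    global_utilization--;
}

int main(void) {
    int id = acquire_conn_id();
    printf("collective operation runs with ConnID %d\n", id);
    release_conn_id(id);
    return 0;
}
```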
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, apparatus, and products for administering connection identifiers for collective operations in a parallel computer in accordance with embodiments of the present invention are described with reference to the accompanying drawings, beginning with
The compute nodes (102) are coupled for data communications by several independent data communications networks including a Joint Test Action Group (‘JTAG’) network (104), a global combining network (106) which is optimized for collective operations, and a torus network (108) which is optimized for point to point operations. The global combining network (106) is a data communications network that includes data communications links connected to the compute nodes so as to organize the compute nodes as a tree. Each data communications network is implemented with data communications links among the compute nodes (102). The data communications links provide data communications for parallel operations among the compute nodes of the parallel computer. The links between compute nodes are bi-directional links that are typically implemented using two separate directional data communications paths.
In addition, the compute nodes (102) of the parallel computer are organized into at least one operational group (132) of compute nodes for collective parallel operations on parallel computer (100). An operational group of compute nodes is the set of compute nodes upon which a collective parallel operation executes. Collective operations are implemented with data communications among the compute nodes of an operational group. Collective operations are those functions that involve all the compute nodes of an operational group. A collective operation is an operation, a message-passing computer program instruction that is executed simultaneously, that is, at approximately the same time, by all the compute nodes in an operational group of compute nodes. Such an operational group may include all the compute nodes in a parallel computer (100) or a subset of all the compute nodes. Collective operations are often built around point to point operations. A collective operation requires that all processes on all compute nodes within an operational group call the same collective operation with matching arguments. A ‘broadcast’ is an example of a collective operation for moving data among compute nodes of an operational group. A ‘reduce’ operation is an example of a collective operation that executes arithmetic or logical functions on data distributed among the compute nodes of an operational group. An operational group may be implemented as, for example, an MPI ‘communicator.’
‘MPI’ refers to ‘Message Passing Interface,’ a prior art parallel communications library, a module of computer program instructions for data communications on parallel computers. Examples of prior-art parallel communications libraries that may be improved for use with systems according to embodiments of the present invention include MPI and the ‘Parallel Virtual Machine’ (‘PVM’) library. PVM was developed by the University of Tennessee, The Oak Ridge National Laboratory, and Emory University. MPI is promulgated by the MPI Forum, an open group with representatives from many organizations that define and maintain the MPI standard. MPI at the time of this writing is a de facto standard for communication among compute nodes running a parallel program on a distributed memory parallel computer. This specification sometimes uses MPI terminology for ease of explanation, although the use of MPI as such is not a requirement or limitation of the present invention.
Some collective operations have a single originating or receiving process running on a particular compute node in an operational group. For example, in a ‘broadcast’ collective operation, the process on the compute node that distributes the data to all the other compute nodes is an originating process. In a ‘gather’ operation, for example, the process on the compute node that receives all the data from the other compute nodes is a receiving process. The compute node on which such an originating or receiving process runs is referred to as a logical root.
Most collective operations are variations or combinations of four basic operations: broadcast, gather, scatter, and reduce. The interfaces for these collective operations are defined in the MPI standards promulgated by the MPI Forum. Algorithms for executing collective operations, however, are not defined in the MPI standards. In a broadcast operation, all processes specify the same root process, whose buffer contents will be sent. Processes other than the root specify receive buffers. After the operation, all buffers contain the message from the root process.
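For illustration only, a minimal broadcast in standard MPI follows; the buffer size and contents are arbitrary choices for the example.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    double buf[4] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                 /* the root fills its send buffer */
        for (int i = 0; i < 4; i++) buf[i] = i + 0.5;
    }

    /* every process calls the same collective with matching arguments;
     * afterward, every buffer contains the message from the root */
    MPI_Bcast(buf, 4, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    printf("rank %d: buf[0] = %g\n", rank, buf[0]);
    MPI_Finalize();
    return 0;
}
```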
In a scatter operation, the logical root divides data on the root into segments and distributes a different segment to each compute node in the operational group. In a scatter operation, all processes typically specify the same receive count. The send arguments are only significant to the root process, whose buffer actually contains sendcount*N elements of a given data type, where N is the number of processes in the given group of compute nodes. The send buffer is divided and dispersed to all processes (including the process on the logical root). Each compute node is assigned a sequential identifier termed a ‘rank.’ After the operation, the root has sent sendcount data elements to each process in increasing rank order. Rank 0 receives the first sendcount data elements from the send buffer. Rank 1 receives the second sendcount data elements from the send buffer, and so on.
A gather operation is a many-to-one collective operation that is a complete reverse of the description of the scatter operation. That is, a gather is a many-to-one collective operation in which elements of a datatype are gathered from the ranked compute nodes into a receive buffer in a root node.
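For illustration only, the following standard MPI fragment performs a scatter and then the reverse gather described above; the sendcount of two and the buffer contents are arbitrary choices for the example.

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;
    const int sendcount = 2;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = NULL, *gatherbuf = NULL;
    int segment[2];

    if (rank == 0) {   /* send arguments are significant only at the root */
        sendbuf   = malloc(sendcount * size * sizeof(int));
        gatherbuf = malloc(sendcount * size * sizeof(int));
        for (int i = 0; i < sendcount * size; i++) sendbuf[i] = i;
    }

    /* rank 0 receives elements 0..1, rank 1 receives 2..3, and so on */
    MPI_Scatter(sendbuf, sendcount, MPI_INT,
                segment, sendcount, MPI_INT, 0, MPI_COMM_WORLD);

    /* the complete reverse: segments return to the root in rank order */
    MPI_Gather(segment, sendcount, MPI_INT,
               gatherbuf, sendcount, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) { free(sendbuf); free(gatherbuf); }
    MPI_Finalize();
    return 0;
}
```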
A reduce operation is also a many-to-one collective operation that includes an arithmetic or logical function performed on two data elements. All processes specify the same ‘count’ and the same arithmetic or logical function. After the reduction, all processes have sent count data elements from compute node send buffers to the root process. In a reduction operation, data elements from corresponding send buffer locations are combined pair-wise by arithmetic or logical operations to yield a single corresponding element in the root process's receive buffer. Application specific reduction operations can be defined at runtime. Parallel communications libraries may support predefined operations. MPI, for example, provides the following predefined reduction operations:

MPI_MAX maximum
MPI_MIN minimum
MPI_SUM sum
MPI_PROD product
MPI_LAND logical and
MPI_BAND bitwise and
MPI_LOR logical or
MPI_BOR bitwise or
MPI_LXOR logical exclusive or
MPI_BXOR bitwise exclusive or
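For illustration only, a minimal reduction using the predefined MPI_SUM operation follows; each rank contributes three integers and the root receives their pair-wise sums.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int send[3] = {rank, 2 * rank, 3 * rank};  /* this rank's contribution */
    int recv[3] = {0, 0, 0};

    /* corresponding send-buffer locations are combined pair-wise;
     * all processes specify the same count and the same function */
    MPI_Reduce(send, recv, 3, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sums: %d %d %d\n", recv[0], recv[1], recv[2]);

    MPI_Finalize();
    return 0;
}
```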
In addition to compute nodes, the parallel computer (100) includes input/output (‘I/O’) nodes (110, 114) coupled to compute nodes (102) through the global combining network (106). The compute nodes in the parallel computer (100) are partitioned into processing sets such that each compute node in a processing set is connected for data communications to the same I/O node. Each processing set, therefore, is composed of one I/O node and a subset of compute nodes (102). The ratio between the number of compute nodes to the number of I/O nodes in the entire system typically depends on the hardware configuration for the parallel computer. For example, in some configurations, each processing set may be composed of eight compute nodes and one I/O node. In some other configurations, each processing set may be composed of sixty-four compute nodes and one I/O node. Such examples are for explanation only, however, and not for limitation. Each I/O node provides I/O services between compute nodes (102) of its processing set and a set of I/O devices. In the example of
The parallel computer (100) of
The system of
In some embodiments, a header in each data communications packet relating to a collective operation includes a connection identifier in the form of a bit pattern or set of bits uniquely identifying a particular collective operation. In some parallel computers, the number of possible, or said another way, ‘available,’ connection identifiers is limited. The number of available connection identifiers may be limited for various reasons. One reason, for example, is that in some parallel computers, the size of each packet header, and more specifically, the size of connection identifier is limited in size or set to a fixed number of bits. In some parallel computers, for example, the portion of a packet header representing the connection identifier is limited to 5 bits in length, allowing for a maximum of 32 concurrent and unique connection identifiers.
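For illustration only, the following C sketch shows how a fixed 5-bit ConnID field limits the identifier space to 2^5 = 32 values; the header word layout here is a hypothetical example, not the packet format of any particular parallel computer.

```c
#include <stdint.h>
#include <stdio.h>

#define CONN_ID_BITS 5
#define CONN_ID_MASK ((1u << CONN_ID_BITS) - 1)   /* 0x1F: IDs 0..31 */

/* pack a ConnID into the low bits of a packet-header word */
static uint32_t header_set_conn_id(uint32_t header, uint32_t conn_id) {
    return (header & ~CONN_ID_MASK) | (conn_id & CONN_ID_MASK);
}

/* recover the ConnID from an incoming packet header */
static uint32_t header_get_conn_id(uint32_t header) {
    return header & CONN_ID_MASK;
}

int main(void) {
    uint32_t header = header_set_conn_id(0xABCD0000u, 17);
    printf("ConnID = %u of %u possible\n",
           header_get_conn_id(header), CONN_ID_MASK + 1);
    return 0;
}
```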
In parallel computers of the prior art having multiple communicators—groups of processes or, as in the example of
Even when two communicators, each being allocated an equal number of connection identifiers, are operating in a parallel computer, the allocation may be inequitable in operation rather than numerically. Consider, for example, that the parent communicator executes far fewer collective operations than the subcommunicator. In this example, the subcommunicator may be forced to wait for one collective operation to complete and release a connection identifier before processing a subsequent collective operation. At the same time, however, the parent communicator, executing far fewer collective operations than the subcommunicator, may have one or more connection identifiers available. That is, in some embodiments one communicator may have more need for a larger number of connection identifiers than another.
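For illustration only, the following standard MPI fragment derives a subcommunicator from a parent communicator; any collective operation on either communicator would occupy a connection identifier while in flight. The parity-based split is an arbitrary choice for the example.

```c
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Comm subcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* split the parent communicator into two halves by rank parity */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subcomm);

    /* collectives on subcomm and on MPI_COMM_WORLD each need their own
     * connection identifier while in flight */
    MPI_Barrier(subcomm);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}
```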
In contrast to such prior art techniques for allocating connection identifiers, and as mentioned above, the system of
If the value stored in the global ConnID utilization buffer (216) does not exceed the predetermined threshold (212), the first node (210) may call the collective operation with a next available ConnID (222). In the system of
If the value stored in the global ConnID utilization buffer exceeds the predetermined threshold, the first node (210) in the example of
The master node (214) and first node (210) are depicted as separate nodes in the example of
The arrangement of nodes, networks, and I/O devices making up the exemplary system illustrated in
Administering connection identifiers for collective operations in a parallel computer according to embodiments of the present invention may be generally implemented on a parallel computer that includes a plurality of compute nodes. In fact, such computers may include thousands of such compute nodes. Each compute node is in turn itself a kind of computer composed of one or more computer processors (or processing cores), its own computer memory, and its own input/output adapters. For further explanation, therefore,
Also stored in RAM (156) is a messaging module (160), a library of computer program instructions that carry out parallel communications among compute nodes, including point to point operations as well as collective operations. Application program (158) executes collective operations by calling software routines in the messaging module (160). A library of parallel communications routines may be developed from scratch for use in systems according to embodiments of the present invention, using a traditional programming language such as the C programming language, and using traditional programming methods to write parallel communications routines that send and receive data among nodes on two independent data communications networks. Alternatively, existing prior art libraries may be improved to operate according to embodiments of the present invention. Examples of prior-art parallel communications libraries include the ‘Message Passing Interface’ (‘MPI’) library and the ‘Parallel Virtual Machine’ (‘PVM’) library.
The messaging module (160) in the example of
If the value (218) stored in the global ConnID utilization buffer (216) does not exceed the predetermined threshold (212), the messaging module (160) of the compute node (152) calls the collective operation with a next available ConnID (222). In the example of
If the value stored in the global ConnID utilization buffer exceeds the predetermined threshold, the messaging module (160) of the compute node (152) in the example of
In the example
Although administering ConnIDs in accordance with embodiments of the present invention is described in the example of
Also stored in RAM (156) is an operating system (162), a module of computer program instructions and routines for an application program's access to other resources of the compute node. It is typical for an application program and parallel communications library in a compute node of a parallel computer to run a single thread of execution with no user login and no security issues because the thread is entitled to complete access to all resources of the node. The quantity and complexity of tasks to be performed by an operating system on a compute node in a parallel computer therefore are smaller and less complex than those of an operating system on a serial computer with many threads running simultaneously. In addition, there is no video I/O on the compute node (152) of
The exemplary compute node (152) of
The data communications adapters in the example of
The data communications adapters in the example of
The data communications adapters in the example of
The data communications adapters in the example also include a Global Combining Network Adapter (188) that couples example compute node (152) for data communications to a network (106) that is optimal for collective message passing operations on a global combining network configured, for example, as a binary tree. The Global Combining Network Adapter (188) provides data communications through three bidirectional links: two to children nodes (190) and one to a parent node (192).
Example compute node (152) includes two arithmetic logic units (‘ALUs’). ALU (166) is a component of each processing core (164), and a separate ALU (170) is dedicated to the exclusive use of Global Combining Network Adapter (188) for use in performing the arithmetic and logical functions of reduction operations. Computer program instructions of a reduction routine in parallel communications library (160) may latch an instruction for an arithmetic or logical function into instruction register (169). When the arithmetic or logical function of a reduction operation is a ‘sum’ or a ‘logical or,’ for example, Global Combining Network Adapter (188) may execute the arithmetic or logical operation by use of the ALU (166) in a processing core (164) or, typically much faster, by use of the dedicated ALU (170).
The example compute node (152) of
For further explanation,
For further explanation,
For further explanation,
For further explanation,
In the example of
For further explanation,
If the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold, the method of
If the value stored in the global ConnID utilization buffer exceeds the predetermined threshold, the method of
For further explanation,
The method of
By using a DMA engine, such as the DMA engine (197) in the example of
In the method of
If the fetched value does not exceed the predetermined threshold, the method of
For further explanation,
The method of
The method of
For further explanation,
The method of
In the method of
The method of
If there is a ConnID stored in the predefined memory location of the master node after retrieving the next available ConnID, the method of
If there is no ConnID stored in the predefined memory location of the master node after retrieving the next available ConnID, the method of
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
This application is a continuation application of and claims priority from U.S. patent application Ser. No. 12/847,573, filed on Jul. 30, 2010, and U.S. patent application Ser. No. 13/661,527, filed on Oct. 26, 2012.