1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, systems, and products for broadcasting a message in a parallel computer.
2. Description of Related Art
The development of the EDVAC computer system of 1948 is often cited as the beginning of the computer era. Since that time, computer systems have evolved into extremely complicated devices. Today's computers are much more sophisticated than early systems such as the EDVAC. Computer systems typically include a combination of hardware and software components, application programs, operating systems, processors, buses, memory, input/output devices, and so on. As advances in semiconductor processing and computer architecture push the performance of the computer higher and higher, more sophisticated computer software has evolved to take advantage of the higher performance of the hardware, resulting in computer systems today that are much more powerful than just a few years ago.
Parallel computing is an area of computer technology that has experienced advances. Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain results faster. Parallel computing is based on the fact that the process of solving a problem usually can be divided into smaller tasks, which may be carried out simultaneously with some coordination.
Parallel computers execute parallel algorithms. A parallel algorithm can be split up to be executed a piece at a time on many different processing devices and then put back together again at the end to get a data processing result. Some algorithms are easy to divide into pieces. Splitting up the job of checking all of the numbers from one to a hundred thousand to see which are primes could be done, for example, by assigning a subset of the numbers to each available processor and then putting the list of positive results back together. In this specification, the multiple processing devices that execute the individual pieces of a parallel program are referred to as ‘compute nodes.’ A parallel computer is composed of compute nodes and other processing nodes as well, including, for example, input/output (‘I/O’) nodes and service nodes.
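As an illustration of this division of labor, the following is a minimal sketch in C using the prior-art MPI library discussed later in this specification. MPI is used here only for explanation; the interleaved assignment of candidates to nodes, and the reduction to a count rather than a list, are simplifications chosen for brevity.

```c
#include <mpi.h>
#include <stdio.h>

/* Trial-division primality test for a single candidate. */
static int is_prime(long n) {
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char *argv[]) {
    int rank, size;
    long local_count = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each compute node checks an interleaved subset of 1..100000. */
    for (long n = 1 + rank; n <= 100000; n += size)
        local_count += is_prime(n);

    /* Put the partial results back together on one node. */
    MPI_Reduce(&local_count, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%ld primes found\n", total);

    MPI_Finalize();
    return 0;
}
```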
Parallel algorithms are valuable because it is faster to perform some kinds of large computing tasks via a parallel algorithm than via a serial (non-parallel) algorithm, because of the way modern processors work. It is far more difficult to construct a computer with a single fast processor than one with many slow processors having the same aggregate throughput. There are also certain theoretical limits to the potential speed of serial processors. On the other hand, every parallel algorithm has a serial part, and so parallel algorithms have a saturation point: after that point, adding more processors yields no more throughput, only more overhead and cost.
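Although the specification does not name it, this saturation effect is conventionally quantified by Amdahl's law: if a fraction s of an algorithm is inherently serial and only the remaining 1 - s can be spread across P processors, the overall speedup is bounded as

```latex
S(P) = \frac{1}{\,s + \frac{1-s}{P}\,},
\qquad
\lim_{P \to \infty} S(P) = \frac{1}{s}.
```

For example, if five percent of the work is serial (s = 0.05), no number of additional processors can push the speedup beyond a factor of twenty.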
Parallel algorithms are also designed to optimize one more resource: the data communications requirements among the nodes of a parallel computer. There are two ways parallel processors communicate: shared memory or message passing. Shared memory processing requires additional locking for the data, imposes the overhead of additional processor and bus cycles, and serializes some portion of the algorithm.
Message passing processing uses high-speed data communications networks and message buffers, but this communication adds transfer overhead on the data communications networks as well as additional memory for message buffers and latency in the data communications among nodes. Designs of parallel computers use specially designed data communications links so that the communication overhead is small, but it is the parallel algorithm that determines the volume of the traffic.
Many data communications network topologies are used for message passing among nodes in parallel computers. Such network topologies may include, for example, a tree, a rectangular mesh, and a torus. In a tree network, the nodes typically are connected into a binary tree: each node typically has a parent and two children, although some nodes may have zero children or only one child, depending on the hardware configuration. A tree network typically supports communications in which data from one compute node migrates through tiers of the tree network to a root compute node or in which data is multicast from the root to all of the other compute nodes in the tree network. In such a manner, the tree network lends itself to collective operations such as, for example, reduction operations or broadcast operations. The tree network, however, does not lend itself to, and is typically inefficient for, point-to-point operations.
A rectangular mesh topology connects compute nodes in a three-dimensional mesh, and every node is connected with up to six neighbors through this mesh network. Each compute node in the mesh is addressed by its x, y, and z coordinates. A torus network connects the nodes in a manner similar to the three-dimensional mesh topology, but adds wrap-around links in each dimension such that every node is connected to its six neighbors through this torus network. In computers that use both a torus and a tree network, the two networks typically are implemented independently of one another, with separate routing circuits, separate physical links, and separate message buffers. Other network topologies often used to connect the nodes of a network include a star, a ring, and a hypercube. While the tree network generally lends itself to collective operations, a mesh or a torus network generally lends itself well to point-to-point communications. Although in general each type of network is optimized for certain communications patterns, those communications patterns may generally be supported by any type of network.
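For explanation, the wrap-around adjacency of such a torus can be expressed with a few lines of modular arithmetic. The coordinate-to-rank mapping below, with x varying fastest, is an assumption made for illustration, not a scheme disclosed in this specification.

```c
/* Coordinates of a node on an X * Y * Z torus. */
typedef struct { int x, y, z; } Coord;

/* Map a coordinate to a node rank; x varies fastest (assumed layout). */
int torus_rank(Coord c, int X, int Y, int Z) {
    (void)Z;  /* the Z bound is not needed for the rank computation */
    return (c.z * Y + c.y) * X + c.x;
}

/* Ranks of the six neighbors of the node at (x, y, z); the modulo
 * arithmetic supplies the wrap-around links that distinguish a torus
 * from a plain rectangular mesh. */
void torus_neighbors(Coord c, int X, int Y, int Z, int out[6]) {
    out[0] = torus_rank((Coord){(c.x + 1) % X, c.y, c.z}, X, Y, Z);
    out[1] = torus_rank((Coord){(c.x + X - 1) % X, c.y, c.z}, X, Y, Z);
    out[2] = torus_rank((Coord){c.x, (c.y + 1) % Y, c.z}, X, Y, Z);
    out[3] = torus_rank((Coord){c.x, (c.y + Y - 1) % Y, c.z}, X, Y, Z);
    out[4] = torus_rank((Coord){c.x, c.y, (c.z + 1) % Z}, X, Y, Z);
    out[5] = torus_rank((Coord){c.x, c.y, (c.z + Z - 1) % Z}, X, Y, Z);
}
```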
As mentioned above, the tree network is optimized for collective operations. Some collective operations have a single originating or receiving process running on a particular compute node in an operational group. For example, in a ‘broadcast’ collective operation, the process on the compute node that distributes the data to all the other compute nodes is an originating process. In a ‘gather’ operation, for example, the process on the compute node that receives all the data from the other compute nodes is a receiving process. The compute node on which such an originating or receiving process runs is referred to as a logical root.
The collective tree network supports efficient collective operations because of the low latency associated with propagating a logical root's message to all of the other nodes in the collective tree network. The low latency for such data transfers results from the collective tree network's ability to multicast data from the physical root of the tree to the leaf nodes of the tree. The physical root of the collective tree network is the node at the top of the physical tree topology and is physically configured to have only child nodes, without a parent node. In contrast, the leaf nodes are nodes at the bottom of the tree topology and are physically wired to have only a parent node, without any child nodes. Currently, when the logical root is ready to broadcast a message to the other nodes in the operational group, the logical root must first send the entire message to the physical root of the tree network, which in turn multicasts the entire message down the tree network to all the nodes in the operational group. The drawback to this current mechanism is that the initial step of sending the entire message from the logical root to the physical root before any of the other nodes receive the message may delay the propagation of the message to all of the nodes in the operational group.
Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of nodes connected together using a multicast data communications network optimized for collective operations. One node is configured as a physical root. The nodes are organized into at least one operational group of nodes for collective parallel operations, and one node is assigned to be a logical root. Broadcasting a message in a parallel computer includes: transmitting, by the logical root to all of the nodes in the operational group directly connected to the logical root, a message; and for each node in the operational group except the logical root: receiving, by that node, the message; if that node is the physical root, then transmitting, by that node, the message to all of the child nodes of the physical root except the child node from which the message was received; if that node received the message from the parent node for that node and if that node is not a leaf node, then transmitting, by that node, the message to all of the child nodes of that node; and if that node received the message from a child node and if that node is not the physical root, then transmitting, by that node, the message to all of the child nodes of that node except the child node from which the message was received and transmitting the message to the parent node of that node.
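Expressed as code, the per-node forwarding rule just described might look as follows. This is only a sketch: the Node and Message types and the link_send() primitive are hypothetical stand-ins for whatever link-level transmit mechanism a particular parallel computer provides; they are not an interface disclosed by this specification.

```c
typedef struct Node Node;
struct Node {
    Node *parent;        /* NULL only at the physical root      */
    Node *children[2];   /* binary tree: up to two children     */
    int   num_children;  /* 0 at a leaf node                    */
};

typedef struct { const void *data; int len; } Message;

/* Assumed transmit primitive: send 'msg' on the link to node 'to'. */
static void link_send(Node *to, const Message *msg) { (void)to; (void)msg; }

/* Rule applied by each node in the operational group (other than the
 * logical root) when the broadcast message arrives from node 'from'. */
void forward_broadcast(Node *self, Node *from, const Message *msg)
{
    if (self->parent == NULL) {
        /* Physical root: send down every branch except the one the
         * message climbed up. */
        for (int i = 0; i < self->num_children; i++)
            if (self->children[i] != from)
                link_send(self->children[i], msg);
    } else if (from == self->parent) {
        /* Received from the parent: simply continue down the tree;
         * at a leaf node the loop body never runs. */
        for (int i = 0; i < self->num_children; i++)
            link_send(self->children[i], msg);
    } else {
        /* Received from a child: continue up to the parent and down
         * the other branches at the same time. */
        link_send(self->parent, msg);
        for (int i = 0; i < self->num_children; i++)
            if (self->children[i] != from)
                link_send(self->children[i], msg);
    }
}
```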
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, systems, and computer program products for broadcasting a message in a parallel computer according to embodiments of the present invention are described with reference to the accompanying drawings.
The compute nodes (102) are coupled for data communications by several independent data communications networks including a Joint Test Action Group (‘JTAG’) network (104), a global combining network (106) which is optimized for collective operations, and a rectangular mesh or torus network (108) which is optimized for point-to-point operations. The rectangular mesh or torus network (108) is characterized by at least two dimensions. The global combining network (106) is a multicast data communications network that includes data communications links connected to the compute nodes so as to organize the compute nodes as a tree. Each data communications network is implemented with data communications links among the compute nodes (102). The data communications links provide data communications for parallel operations among the compute nodes of the parallel computer. The links between compute nodes are bi-directional links that are typically implemented using two separate directional data communications paths.
In addition, the compute nodes (102) of the parallel computer are organized into at least one operational group (132) of compute nodes for collective parallel operations on the parallel computer (100). An operational group of compute nodes is the set of compute nodes upon which a collective parallel operation executes. Collective operations are implemented with data communications among the compute nodes of an operational group. Collective operations are those functions that involve all the compute nodes of an operational group. A collective operation is an operation, that is, a message-passing computer program instruction, that is executed simultaneously, at approximately the same time, by all the compute nodes in an operational group of compute nodes. Such an operational group may include all the compute nodes in a parallel computer (100) or a subset of all the compute nodes. Collective operations are often built around point-to-point operations. A collective operation requires that all processes on all compute nodes within an operational group call the same collective operation with matching arguments. A ‘broadcast’ is an example of a collective operation for moving data among compute nodes of an operational group. A ‘reduce’ operation is an example of a collective operation that executes arithmetic or logical functions on data distributed among the compute nodes of an operational group. An operational group may be implemented as, for example, an MPI ‘communicator.’
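For explanation only, the following sketch shows an operational group realized as an MPI communicator derived from MPI_COMM_WORLD; the even/odd split is an arbitrary illustration, and every process in the resulting group makes the same collective call with matching arguments.

```c
#include <mpi.h>

/* Form an operational group (an MPI communicator) containing only the
 * even-ranked compute nodes, then run a collective operation over it. */
int main(int argc, char *argv[]) {
    int world_rank;
    MPI_Comm even_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes with the same 'color' land in the same new communicator;
     * odd ranks pass MPI_UNDEFINED and receive MPI_COMM_NULL. */
    int color = (world_rank % 2 == 0) ? 0 : MPI_UNDEFINED;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &even_comm);

    if (even_comm != MPI_COMM_NULL) {
        int value = world_rank;
        /* All processes in the operational group call the same
         * collective operation with matching arguments. */
        MPI_Bcast(&value, 1, MPI_INT, 0, even_comm);
        MPI_Comm_free(&even_comm);
    }

    MPI_Finalize();
    return 0;
}
```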
‘MPI’ refers to ‘Message Passing Interface,’ a prior-art parallel communications library, that is, a module of computer program instructions for data communications on parallel computers. Examples of prior-art parallel communications libraries that may be improved for use with systems according to embodiments of the present invention include MPI and the ‘Parallel Virtual Machine’ (‘PVM’) library. PVM was developed by the University of Tennessee, Oak Ridge National Laboratory, and Emory University. MPI is promulgated by the MPI Forum, an open group with representatives from many organizations that defines and maintains the MPI standard. MPI at the time of this writing is a de facto standard for communication among compute nodes running a parallel program on a distributed-memory parallel computer. This specification sometimes uses MPI terminology for ease of explanation, although the use of MPI as such is not a requirement or limitation of the present invention.
Some collective operations have a single originating or receiving process running on a particular compute node in an operational group. For example, in a ‘broadcast’ collective operation, the process on the compute node that distributes the data to all the other compute nodes is an originating process. In a ‘gather’ operation, for example, the process on the compute node that receives all the data from the other compute nodes is a receiving process. The compute node on which such an originating or receiving process runs is referred to as a logical root.
Most collective operations are variations or combinations of four basic operations: broadcast, gather, scatter, and reduce. The interfaces for these collective operations are defined in the MPI standards promulgated by the MPI Forum. Algorithms for executing collective operations, however, are not defined in the MPI standards. In a broadcast operation, all processes specify the same root process, whose buffer contents will be sent. Processes other than the root specify receive buffers. After the operation, all buffers contain the message from the root process.
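These broadcast semantics can be illustrated with the standard MPI interface. This is only a sketch; rank 0 is arbitrarily chosen as the root.

```c
#include <mpi.h>
#include <string.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    char buf[64] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)  /* only the root's buffer contents are sent */
        strcpy(buf, "message from the root process");

    /* All processes specify the same root (rank 0); processes other
     * than the root supply buf purely as a receive buffer. */
    MPI_Bcast(buf, sizeof buf, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* After the operation, every buffer contains the root's message. */
    printf("rank %d now holds: %s\n", rank, buf);

    MPI_Finalize();
    return 0;
}
```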
In a scatter operation, the logical root divides data on the root into segments and distributes a different segment to each compute node in the operational group. In a scatter operation, all processes typically specify the same receive count. The send arguments are only significant to the root process, whose buffer actually contains sendcount*N elements of a given data type, where N is the number of processes in the given group of compute nodes. The send buffer is divided and dispersed to all processes (including the process on the logical root). Each compute node is assigned a sequential identifier termed a ‘rank.’ After the operation, the root has sent sendcount data elements to each process in increasing rank order. Rank 0 receives the first sendcount data elements from the send buffer. Rank 1 receives the second sendcount data elements from the send buffer, and so on.
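A short MPI sketch of these scatter semantics, with a sendcount of two chosen arbitrarily:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;
    const int sendcount = 2;  /* elements delivered to each rank */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* The send buffer is significant only at the root, and must hold
     * sendcount * N elements for the N processes in the group. */
    int *sendbuf = NULL;
    if (rank == 0) {
        sendbuf = malloc(sendcount * size * sizeof(int));
        for (int i = 0; i < sendcount * size; i++)
            sendbuf[i] = i;
    }

    int recvbuf[2];  /* every process specifies the same receive count */
    MPI_Scatter(sendbuf, sendcount, MPI_INT,
                recvbuf, sendcount, MPI_INT, 0, MPI_COMM_WORLD);

    /* Segments arrive in increasing rank order: rank 0 gets elements
     * 0..1, rank 1 gets elements 2..3, and so on. */
    printf("rank %d received %d %d\n", rank, recvbuf[0], recvbuf[1]);

    free(sendbuf);
    MPI_Finalize();
    return 0;
}
```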
A gather operation is a many-to-one collective operation that is a complete reverse of the description of the scatter operation. That is, a gather is a many-to-one collective operation in which elements of a datatype are gathered from the ranked compute nodes into a receive buffer in a root node.
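And the reverse pattern, again as an MPI sketch: each rank contributes a single element, collected at the root in rank order. The buffer size of 1024 is an arbitrary assumption about the maximum group size.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int contribution = rank * rank;  /* one element per compute node */
    int gathered[1024];              /* significant only at the root;
                                        assumes at most 1024 processes */

    /* The root's receive buffer collects elements ordered by the
     * rank of the sending process. */
    MPI_Gather(&contribution, 1, MPI_INT,
               gathered, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("from rank %d: %d\n", i, gathered[i]);

    MPI_Finalize();
    return 0;
}
```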
A reduce operation is also a many-to-one collective operation that includes an arithmetic or logical function performed on two data elements. All processes specify the same ‘count’ and the same arithmetic or logical function. After the reduction, all processes have sent count data elements from compute node send buffers to the root process. In a reduction operation, data elements from corresponding send buffer locations are combined pair-wise by arithmetic or logical operations to yield a single corresponding element in the root process's receive buffer. Application-specific reduction operations can be defined at runtime. Parallel communications libraries may support predefined operations. MPI, for example, provides the following pre-defined reduction operations:

MPI_MAX | maximum
MPI_MIN | minimum
MPI_SUM | sum
MPI_PROD | product
MPI_LAND | logical and
MPI_BAND | bitwise and
MPI_LOR | logical or
MPI_BOR | bitwise or
MPI_LXOR | logical exclusive or
MPI_BXOR | bitwise exclusive or
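A sketch of both flavors of reduction in MPI follows: a pre-defined operation (MPI_SUM), and an application-specific operation registered at runtime with MPI_Op_create. The absolute-value maximum is an arbitrary illustration.

```c
#include <mpi.h>
#include <stdio.h>

/* An application-specific reduction defined at run time: pair-wise
 * maximum of absolute values (illustrative only). */
void absmax(void *in, void *inout, int *len, MPI_Datatype *dt) {
    int *a = in, *b = inout;
    (void)dt;
    for (int i = 0; i < *len; i++) {
        int x = a[i] < 0 ? -a[i] : a[i];
        int y = b[i] < 0 ? -b[i] : b[i];
        b[i] = x > y ? x : y;
    }
}

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int send[2] = { rank, -rank };  /* same 'count' on every process */
    int sums[2], maxes[2];

    /* Pre-defined operation: corresponding elements are combined
     * pair-wise into the root's receive buffer. */
    MPI_Reduce(send, sums, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* Run-time defined operation registered with MPI_Op_create. */
    MPI_Op op;
    MPI_Op_create(absmax, 1 /* commutative */, &op);
    MPI_Reduce(send, maxes, 2, MPI_INT, op, 0, MPI_COMM_WORLD);
    MPI_Op_free(&op);

    if (rank == 0)
        printf("sum of ranks: %d, max |rank|: %d\n", sums[0], maxes[0]);

    MPI_Finalize();
    return 0;
}
```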
In addition to compute nodes, the parallel computer (100) includes input/output (‘I/O’) nodes (110, 114) coupled to compute nodes (102) through the global combining network (106). The compute nodes in the parallel computer (100) are partitioned into processing sets such that each compute node in a processing set is connected for data communications to the same I/O node. Each processing set, therefore, is composed of one I/O node and a subset of the compute nodes (102). The ratio of compute nodes to I/O nodes in the entire system typically depends on the hardware configuration for the parallel computer. For example, in some configurations, each processing set may be composed of eight compute nodes and one I/O node. In some other configurations, each processing set may be composed of sixty-four compute nodes and one I/O node. Such examples are for explanation only, however, and not for limitation. Each I/O node provides I/O services between the compute nodes (102) of its processing set and a set of I/O devices.
As described in more detail below in this specification, the parallel computer (100) operates generally for broadcasting a message in a parallel computer according to embodiments of the present invention.
The arrangement of nodes, networks, and I/O devices making up the exemplary system described here is for explanation only, not for limitation of the present invention. Data processing systems capable of broadcasting a message in a parallel computer according to embodiments of the present invention may include additional nodes, networks, devices, and architectures, as will occur to those of skill in the art.
Broadcasting a message in a parallel computer according to embodiments of the present invention may be generally implemented on a parallel computer that includes a plurality of compute nodes. In fact, such computers may include thousands of such compute nodes. Each compute node is in turn itself a kind of computer composed of one or more computer processors (or processing cores), its own computer memory, and its own input/output adapters.
Stored in RAM (156) is an application (158), a module of computer program instructions that carries out parallel, user-level data processing using parallel algorithms. Also stored in RAM (156) is a messaging module (160), a library of computer program instructions that carry out parallel communications among compute nodes, including point to point operations as well as collective operations. Application (158) executes point to point and collective operations by calling software routines in the messaging module (160). A library of parallel communications routines may be developed from scratch for use in systems according to embodiments of the present invention, using a traditional programming language such as the C programming language, and using traditional programming methods to write parallel communications routines that send and receive data among nodes on two independent data communications networks. Alternatively, existing prior art libraries may be improved to operate according to embodiments of the present invention. Examples of prior-art parallel communications libraries include the ‘Message Passing Interface’ (‘MPI’) library and the ‘Parallel Virtual Machine’ (‘PVM’) library.
The application (158) or the messaging module (160) may include computer program instructions for broadcasting a message in a parallel computer according to embodiments of the present invention. The application (158) or the messaging module (160) may operate generally for broadcasting a message in a parallel computer according to embodiments of the present invention by: transmitting, by a logical root to all of the compute nodes in the operational group directly connected to the logical root, a message for broadcasting to all of the compute nodes in the operational group; and for each compute node in the operational group except the logical root: receiving, by that compute node, the message for broadcasting to all of the compute nodes in the operational group; if that compute node is the physical root, then transmitting, by that compute node, the message to all of the child nodes of the physical root except the child node from which the message was received; if that compute node received the message from the parent node for that compute node and if that compute node is not a leaf node, then transmitting, by that compute node, the message to all of the child nodes of that compute node; and if that compute node received the message from a child node and if that compute node is not the physical root, then transmitting, by that compute node, the message to all of the child nodes of that compute node except the child node from which the message was received and transmitting the message to the parent node of that compute node.
Also stored in RAM (156) is an operating system (162), a module of computer program instructions and routines for an application program's access to other resources of the compute node. It is typical for an application program and parallel communications library in a compute node of a parallel computer to run a single thread of execution with no user login and no security issues because the thread is entitled to complete access to all resources of the node. The tasks to be performed by an operating system on a compute node in a parallel computer therefore are fewer and less complex than those of an operating system on a serial computer with many threads running simultaneously. In addition, there is no video I/O on the compute node (152), another factor that decreases the demands on the operating system.
Example compute node (152) includes two arithmetic logic units (‘ALUs’). ALU (166) is a component of each processing core (164), and a separate ALU (170) is dedicated to the exclusive use of the Global Combining Network Adapter (188) for performing the arithmetic and logical functions of reduction operations. Computer program instructions of a reduction routine in the parallel communications library (160) may latch an instruction for an arithmetic or logical function into an instruction register (169). When the arithmetic or logical function of a reduction operation is a ‘sum’ or a ‘logical or,’ for example, the Global Combining Network Adapter (188) may execute the arithmetic or logical operation by use of the ALU (166) in a processing core (164) or, typically much faster, by use of the dedicated ALU (170).
Consider, for example, an operational group of fifteen compute nodes, numbered 0 through 14 and organized as a binary tree in which compute node 0 is the physical root (202) and compute node 1 is the logical root. Compute node 1 begins the broadcast by transmitting the message to all of the nodes directly connected to it: its parent, compute node 0, and its child nodes, compute nodes 3 and 4. When compute node 3 receives the message from compute node 1, compute node 3 transmits the message to both of its child nodes, compute nodes 7 and 8. When compute node 4 receives the message from compute node 1, compute node 4 transmits the message to both of its child nodes, compute nodes 9 and 10. When compute node 0, the physical root (202), receives the message from compute node 1, compute node 0 transmits the message to its other child node, compute node 2. Upon receiving the message from compute node 0, compute node 2 transmits the message to both of its child nodes, compute nodes 5 and 6. When compute node 5 receives the message from compute node 2, compute node 5 transmits the message to both of its child nodes, compute nodes 11 and 12. When compute node 6 receives the message from compute node 2, compute node 6 transmits the message to both of its child nodes, compute nodes 13 and 14.
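The numbering used in this example follows the conventional array layout of a binary tree, so the parent and children of any node can be computed from its rank rather than stored. The small sketch below is an observation about the example's numbering, not a mechanism disclosed by this specification.

```c
/* Binary-tree rank arithmetic consistent with the example above:
 * compute node 0 is the physical root, node 1's children are 3 and 4,
 * node 2's children are 5 and 6, and so on. */
int parent_of(int rank)      { return rank == 0 ? -1 : (rank - 1) / 2; }
int left_child_of(int rank)  { return 2 * rank + 1; }  /* caller checks bounds */
int right_child_of(int rank) { return 2 * rank + 2; }
```

With fifteen nodes, ranks 7 through 14 have no in-range children and are therefore the leaf nodes of the tree.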
In another example, suppose the logical root is a leaf node, compute node 7. Compute node 7 begins the broadcast by transmitting the message to the only node directly connected to it, its parent, compute node 3. When compute node 3 receives the message from compute node 7, compute node 3 transmits the message to its other child node, compute node 8, and to its parent node, compute node 1. Upon receiving the message from compute node 3, compute node 1 transmits the message to its other child node, compute node 4, and to its parent node, compute node 0. When compute node 4 receives the message from compute node 1, compute node 4 transmits the message to both of its child nodes, compute nodes 9 and 10. When compute node 0, the physical root (202), receives the message from compute node 1, compute node 0 transmits the message to its other child node, compute node 2. Upon receiving the message from compute node 0, compute node 2 transmits the message to both of its child nodes, compute nodes 5 and 6. When compute node 5 receives the message from compute node 2, compute node 5 transmits the message to both of its child nodes, compute nodes 11 and 12. When compute node 6 receives the message from compute node 2, compute node 6 transmits the message to both of its child nodes, compute nodes 13 and 14.
In a further example, suppose the logical root coincides with the physical root, compute node 0. Compute node 0 begins the broadcast by transmitting the message to the nodes directly connected to it, its child nodes, compute nodes 1 and 2. When compute node 1 receives the message from compute node 0, compute node 1 transmits the message to both of its child nodes, compute nodes 3 and 4. Upon receiving the message from compute node 1, compute node 3 transmits the message to both of its child nodes, compute nodes 7 and 8. When compute node 4 receives the message from compute node 1, compute node 4 transmits the message to both of its child nodes, compute nodes 9 and 10. Upon receiving the message from compute node 0, compute node 2 transmits the message to both of its child nodes, compute nodes 5 and 6. When compute node 5 receives the message from compute node 2, compute node 5 transmits the message to both of its child nodes, compute nodes 11 and 12. When compute node 6 receives the message from compute node 2, compute node 6 transmits the message to both of its child nodes, compute nodes 13 and 14.
Exemplary embodiments of the present invention are described largely in the context of a fully functional parallel computer system for broadcasting a message in a parallel computer. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on computer readable media for use with any suitable data processing system. Such computer readable media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web as well as wireless transmission media such as, for example, networks implemented according to the IEEE 802.11 family of specifications. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
This application is a continuation application of and claims priority from U.S. patent application Ser. No. 12/060,492 filed on Apr. 1, 2008.
This invention was made with Government support under Contract No. B554331 awarded by the Department of Energy. The Government has certain rights in this invention.