1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer.
2. Description of Related Art
The development of the EDVAC computer system of 1948 is often cited as the beginning of the computer era. Since that time, computer systems have evolved into extremely complicated devices. Today's computers are much more sophisticated than early systems such as the EDVAC. Computer systems typically include a combination of hardware and software components, application programs, operating systems, processors, buses, memory, input/output devices, and so on. As advances in semiconductor processing and computer architecture push the performance of the computer higher and higher, more sophisticated computer software has evolved to take advantage of the higher performance of the hardware, resulting in computer systems today that are much more powerful than just a few years ago.
Parallel computing is an area of computer technology that has experienced advances. Parallel computing is the simultaneous execution of the same application (split up and specially adapted) on multiple processors in order to obtain results faster. Parallel computing is based on the fact that the process of solving a problem usually can be divided into smaller jobs, which may be carried out simultaneously with some coordination.
Parallel computers execute parallel algorithms. A parallel algorithm can be split up to be executed a piece at a time on many different processing devices, and then put back together again at the end to get a data processing result. Some algorithms are easy to divide up into pieces. Splitting up the job of checking all of the numbers from one to a hundred thousand to see which are primes could be done, for example, by assigning a subset of the numbers to each available processor, and then putting the list of positive results back together. In this specification, the multiple processing devices that execute the individual pieces of a parallel program are referred to as ‘compute nodes.’ A parallel computer is composed of compute nodes and other processing nodes as well, including, for example, input/output (‘I/O’) nodes, and service nodes.
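As a concrete sketch of this kind of decomposition (illustrative only, with hypothetical names, and not part of any embodiment described here), each processor can be handed an equal subrange of the candidate numbers:

```c
/* Illustrative sketch: divide the range 1..N among P processors.
 * Processor p tests only its own subrange for primality; the lists
 * of positive results are then concatenated at the end. */
#include <stdio.h>

static int is_prime(long n) {
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

/* Called on each processor with its identifier p in 0..P-1. */
void check_subrange(int p, int P, long N) {
    long lo = p * N / P + 1;      /* first candidate for this processor */
    long hi = (p + 1) * N / P;    /* last candidate for this processor  */
    for (long n = lo; n <= hi; n++)
        if (is_prime(n))
            printf("processor %d: %ld is prime\n", p, n);
}
```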
Parallel algorithms are valuable because, owing to the way modern processors work, it is faster to perform some kinds of large computing jobs via a parallel algorithm than via a serial (non-parallel) algorithm. It is far more difficult to construct a computer with a single fast processor than one with many slow processors having the same aggregate throughput. There are also certain theoretical limits to the potential speed of serial processors. On the other hand, every parallel algorithm has a serial part, so every parallel algorithm has a saturation point: beyond that point, adding more processors yields no more throughput, only more overhead and cost.
Parallel algorithms are also designed to optimize one more resource: the data communications requirements among the nodes of a parallel computer. There are two ways parallel processors communicate: shared memory and message passing. Shared memory processing needs additional locking for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.
Message passing processing uses high-speed data communications networks and message buffers, but this communication adds transfer overhead on the data communications networks as well as additional memory needed for message buffers and latency in the data communications among nodes. Designs of parallel computers use specially designed data communications links so that the communication overhead is small, but it is the parallel algorithm that determines the volume of the traffic.
Many data communications network architectures are used for message passing among nodes in parallel computers. Compute nodes may be organized in a network as a ‘torus’ or ‘mesh,’ for example. Also, compute nodes may be organized in a network as a tree. A torus network connects the nodes in a three-dimensional mesh with wrap-around links. Every node is connected to its six neighbors through this torus network, and each node is addressed by its x,y,z coordinate in the mesh. In a tree network, the nodes typically are connected into a binary tree: each node has a parent and two children (although some nodes may have only one child or no children, depending on the hardware configuration). In computers that use a torus and a tree network, the two networks typically are implemented independently of one another, with separate routing circuits, separate physical links, and separate message buffers.
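On such a torus, neighbor addresses follow from simple modular arithmetic, so that links wrap around at the faces of the mesh; a minimal sketch with hypothetical names:

```c
/* Illustrative sketch: the six neighbors of node (x, y, z) on an
 * X x Y x Z torus. Adding the dimension before the modulo keeps
 * C's % operator non-negative at the wrap-around faces. */
typedef struct { int x, y, z; } coord_t;

void torus_neighbors(coord_t n, int X, int Y, int Z, coord_t out[6]) {
    out[0] = (coord_t){ (n.x + 1) % X,     n.y, n.z };  /* +x */
    out[1] = (coord_t){ (n.x - 1 + X) % X, n.y, n.z };  /* -x */
    out[2] = (coord_t){ n.x, (n.y + 1) % Y,     n.z };  /* +y */
    out[3] = (coord_t){ n.x, (n.y - 1 + Y) % Y, n.z };  /* -y */
    out[4] = (coord_t){ n.x, n.y, (n.z + 1) % Z     };  /* +z */
    out[5] = (coord_t){ n.x, n.y, (n.z - 1 + Z) % Z };  /* -z */
}
```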
A torus network lends itself to point to point operations, but a tree network typically is inefficient in point to point communication. A tree network, however, does provide high bandwidth and low latency for certain collective operations, message passing operations where all compute nodes participate simultaneously, such as, for example, an allgather.
There is at this time a general trend in computer processor development to move from multi-core to many-core processors: from dual-, tri-, quad-, hexa-, octo-core chips to ones with tens or even hundreds of cores. In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose heterogeneous cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. This trend is impacting the supercomputing world as well, where large transistor count chips are more efficiently used by replicating cores, rather than building chips that are very fast but very inefficient in terms of power utilization.
At the same time, the network link speed and number of links into and out of a compute node are dramatically increasing. IBM's BlueGene/Q™ supercomputer, for example, will have a five-dimensional torus network, which implements ten bidirectional data communications links per compute node—and BlueGene/Q will support many thousands of compute nodes. To keep these links filled with data, DMA engines are employed, but increasingly, the HPC community is interested in latency. In traditional supercomputers with pared-down operating systems, there is little or no multi-tasking within compute nodes. When a data communications link is unavailable, a task typically blocks or ‘spins’ on a data transmission, in effect, idling a processor until a data transmission resource becomes available. In the trend for more powerful individual processors, such blocking or spinning has a bad effect on latency.
Methods, parallel computers, and computer program products are disclosed for data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Such data communications include: receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (‘RTS’) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of example embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of example embodiments of the invention.
Example methods, computers, and computer program products for data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with
The parallel computer (100) in the example of
In addition, the compute nodes (102) of the parallel computer are organized into at least one operational group (132) of compute nodes for collective parallel operations on the parallel computer (100). An operational group of compute nodes is the set of compute nodes upon which a collective parallel operation executes. Collective operations are implemented with data communications among the compute nodes of an operational group. Collective operations are those functions that involve all the compute nodes of an operational group. A collective operation is an operation, a message-passing computer program instruction, that is executed simultaneously, that is, at approximately the same time, by all the compute nodes in an operational group of compute nodes. Such an operational group may include all the compute nodes in a parallel computer (100) or a subset of all the compute nodes. Collective operations are often built around point-to-point operations. A collective operation requires that all processes on all compute nodes within an operational group call the same collective operation with matching arguments. A ‘broadcast’ is an example of a collective operation for moving data among compute nodes of an operational group. A ‘reduce’ operation is an example of a collective operation that executes arithmetic or logical functions on data distributed among the compute nodes of an operational group. An operational group may be implemented as, for example, an MPI ‘communicator.’
‘MPI’ refers to ‘Message Passing Interface,’ a prior art applications messaging module or parallel communications library, an application-level messaging module of computer program instructions for data communications on parallel computers. Such an application messaging module is disposed in an application messaging layer in a data communications protocol stack. Examples of prior-art parallel communications libraries that may be improved for use with parallel computers that process data communications in a PAMI of a parallel computer according to embodiments of the present invention include IBM's MPI library, the ‘Parallel Virtual Machine’ (‘PVM’) library, MPICH, OpenMPI, and LAM/MPI. MPI is promulgated by the MPI Forum, an open group with representatives from many organizations, that defines and maintains the MPI standard. MPI at the time of this writing is a de facto standard for communication among compute nodes running a parallel program on a distributed memory parallel computer. This specification sometimes uses MPI terminology for ease of explanation, although the use of MPI as such is not a requirement or limitation of the present invention.
Most collective operations are variations or combinations of four basic operations: broadcast, gather, scatter, and reduce. In a broadcast operation, all processes specify the same root process, whose buffer contents will be sent. Processes other than the root specify receive buffers. After the operation, all buffers contain the message from the root process.
A scatter operation, like the broadcast operation, is also a one-to-many collective operation. All processes specify the same receive count. The send arguments are only significant to the root process, whose buffer actually contains sendcount * N elements of a given datatype, where N is the number of processes in the given group of compute nodes. The send buffer is divided equally and dispersed to all processes (including the root itself). Each compute node is assigned a sequential identifier termed a ‘rank.’ After the operation, the root has sent sendcount data elements to each process in increasing rank order. Rank 0 receives the first sendcount data elements from the send buffer. Rank 1 receives the second sendcount data elements from the send buffer, and so on.
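These scatter semantics correspond to the standard MPI_Scatter call; a minimal sketch in C against the standard MPI API (the fixed buffer size is an assumption of the example):

```c
#include <mpi.h>

/* Minimal sketch of the scatter described above: the root's buffer
 * holds sendcount * N elements; each rank receives sendcount of them,
 * in increasing rank order. Assumes sendcount <= 64. */
void scatter_example(int *sendbuf, int sendcount, MPI_Comm comm) {
    int rank, recvbuf[64];
    MPI_Comm_rank(comm, &rank);
    MPI_Scatter(sendbuf, sendcount, MPI_INT,  /* significant at root only */
                recvbuf, sendcount, MPI_INT,  /* every rank receives      */
                0 /* root */, comm);
    /* recvbuf now holds elements [rank*sendcount .. (rank+1)*sendcount-1]
     * of the root's send buffer. */
}
```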
A gather operation is a many-to-one collective operation that is a complete reverse of the description of the scatter operation. That is, a gather is a many-to-one collective operation in which elements of a datatype are gathered from the ranked compute nodes into a receive buffer in a root node.
A reduce operation is also a many-to-one collective operation that includes an arithmetic or logical function performed on two data elements. All processes specify the same ‘count’ and the same arithmetic or logical function. After the reduction, all processes have sent count data elements from compute node send buffers to the root process. In a reduction operation, data elements from corresponding send buffer locations are combined pair-wise by arithmetic or logical operations to yield a single corresponding element in the root process's receive buffer. Application-specific reduction operations can be defined at runtime. Parallel communications libraries may support predefined operations. MPI, for example, provides the following predefined reduction operations: MPI_MAX (maximum), MPI_MIN (minimum), MPI_SUM (sum), MPI_PROD (product), MPI_LAND (logical AND), MPI_BAND (bitwise AND), MPI_LOR (logical OR), MPI_BOR (bitwise OR), MPI_LXOR (logical exclusive OR), and MPI_BXOR (bitwise exclusive OR).
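A sum reduction over one integer per rank, for instance, can be sketched with the standard MPI API as follows:

```c
#include <mpi.h>

/* Minimal sketch of the reduce described above: each rank contributes
 * one integer, and the pair-wise sums land in the root's receive buffer. */
void reduce_example(int my_value, MPI_Comm comm) {
    int total = 0;
    MPI_Reduce(&my_value, &total, 1, MPI_INT, MPI_SUM, 0 /* root */, comm);
    /* On rank 0, 'total' now holds the sum of my_value across all ranks;
     * on other ranks, 'total' is not significant. */
}
```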
In addition to compute nodes, the example parallel computer (100) includes input/output (‘I/O’) nodes (110, 114) coupled to compute nodes (102) through one of the data communications networks (174). The I/O nodes (110, 114) provide I/O services between compute nodes (102) and I/O devices (118, 120, 122). I/O nodes (110, 114) are connected for data communications to I/O devices (118, 120, 122) through a local area network (‘LAN’) (130). Computer (100) also includes a service node (116) coupled to the compute nodes through one of the networks (104). Service node (116) provides services common to pluralities of compute nodes: loading programs into the compute nodes, starting program execution on the compute nodes, retrieving results of program operations on the compute nodes, and so on. Service node (116) runs a service application (124) and communicates with users (128) through a service application interface (126) that runs on computer terminal (122).
As the term is used here, a parallel active messaging interface or ‘PAMI’ (218) is a system-level messaging layer in a protocol stack of a parallel computer that is composed of data communications endpoints each of which is specified with data communications parameters for a thread of execution on a compute node of the parallel computer. The PAMI is a ‘parallel’ interface in that many instances of the PAMI operate in parallel on the compute nodes of a parallel computer. The PAMI is an ‘active messaging interface’ in that data communications messages in the PAMI are active messages, ‘active’ in the sense that such messages implement callback functions to advise of message dispatch and instruction completion and so on, thereby reducing the quantity of acknowledgment traffic, and the like, burdening the data communication resources of the PAMI.
Each data communications endpoint of a PAMI is implemented as a combination of a client, a context, and a task. A ‘client’ as the term is used in PAMI operations is a collection of data communications resources dedicated to the exclusive use of an application-level data processing entity, an application or an application messaging module such as an MPI library. A ‘context’ as the term is used in PAMI operations is composed of a subset of a client's collection of data processing resources, context functions, and a work queue of data transfer instructions to be performed by use of the subset through the context functions operated by an assigned thread of execution. In at least some embodiments, the context's subset of a client's data processing resources is dedicated to the exclusive use of the context. A ‘task’ as the term is used in PAMI operations refers to a canonical entity, an integer or object-oriented programming object, that represents in a PAMI a process of execution of the parallel application. That is, a task is typically implemented as an identifier of a particular instance of an application executing on a compute node, a compute core on a compute node, or a thread of execution on a multi-threading compute core on a compute node.
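In outline, then, an endpoint can be viewed as the triple of these three values. The following sketch is purely illustrative, with hypothetical types standing in for the PAMI data structures:

```c
/* Illustrative sketch of an endpoint as the (client, context, task)
 * triple described above; real PAMI types would be opaque handles. */
typedef unsigned client_id_t;   /* a collection of data communications resources */
typedef unsigned context_id_t;  /* a subset of those resources plus a work queue */
typedef unsigned task_id_t;     /* canonical id of a process of execution        */

typedef struct {
    client_id_t  client;
    context_id_t context;
    task_id_t    task;
} endpoint_t;
```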
In the example of
In addition to the send instruction mentioned above, which readers will recognize as a rendezvous send, data communications instructions processed by the parallel computer here include eager send instructions, receive instructions, DMA PUT instructions, DMA GET instructions, and so on. Some data communications instructions, typically GETs and PUTs, are one-sided DMA instructions in that there is no cooperation required from a target processor, no computation on the target side to complete such a PUT or GET, because data is transferred directly to or from memory on the other side of the transfer. In this setting, the term ‘target’ is used for either PUT or GET. A PUT target receives data directly into its RAM from an origin endpoint. A GET target provides data directly from its RAM to the origin endpoint. Thus readers will recognize that the designation of an endpoint as an origin endpoint for a transfer is a designation of the endpoint that initiates execution of a DMA transfer instruction—rather than a designation of the direction of the transfer: PUT instructions transfer data from an origin endpoint to a target endpoint. GET instructions transfer data from a target endpoint to an origin endpoint.
The origin endpoint and the target endpoint, or first target endpoint in transfers that use pluralities of target endpoints, can be any two endpoints on any of the compute nodes (102), including two endpoints on the same compute node. A sequence of data communications instructions resides in a work queue of a context and results in data transfers between two endpoints, an origin endpoint and a target endpoint—although as seen here, a target endpoint can function as a first target endpoint among a plurality of target endpoints for a data transfer. Data communications instructions are ‘active’ in the sense that the instructions implement callback functions to advise of instruction dispatch and instruction completion, thereby reducing the quantity of acknowledgment traffic required on the network. Each such instruction effects a data transfer, from an origin endpoint to a target endpoint, through some form of data communications resources, networks, shared memory segments, adapters, DMA controllers, and the like.
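The shape of such an active instruction can be sketched as follows; all names and signatures here are hypothetical, intended only to make the origin and target roles and the callbacks concrete:

```c
#include <stddef.h>

/* Illustrative sketch of an 'active' one-sided PUT instruction. */
typedef struct { unsigned client, context, task; } endpoint_t;
typedef void (*done_fn_t)(void *cookie);

typedef struct {
    endpoint_t origin;       /* endpoint that initiates the transfer      */
    endpoint_t target;       /* endpoint whose RAM receives the data      */
    void      *send_buffer;  /* data moves directly from here ...         */
    void      *recv_buffer;  /* ... into here, with no target computation */
    size_t     bytes;
    done_fn_t  done;         /* callback advising completion, in place of */
    void      *cookie;       /* explicit acknowledgment traffic           */
} put_instruction_t;

/* Posting enqueues the instruction on a context's work queue; an advance
 * function later drains the queue and effects the actual transfer. */
extern void post_put(unsigned context, const put_instruction_t *put);
```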
The arrangement of compute nodes, networks, and I/O devices making up the example parallel computer illustrated in
Data communications in a PAMI according to embodiments of the present invention is generally implemented on a parallel computer that includes a plurality of compute nodes. In fact, such computers may include thousands of such compute nodes, with a compute node typically executing at least one instance of a parallel application. Each compute node is in turn itself a computer composed of one or more computer processors, its own computer memory, and its own input/output (‘I/O’) adapters. For further explanation, therefore,
Also stored in RAM (156) is an application messaging module (216), a library of computer program instructions that carry out application-level parallel communications among compute nodes, including point-to-point operations as well as collective operations. Although the application program can call PAMI routines directly, the application program (158) often executes point-to-point data communications operations by calling software routines in the application messaging module (216), which in turn is improved according to embodiments of the present invention to use PAMI functions to implement such communications. An application messaging module can be developed from scratch to use a PAMI according to embodiments of the present invention, using a traditional programming language such as the C programming language or C++, for example, and using traditional programming methods to write parallel communications routines that send and receive data among PAMI endpoints and compute nodes through data communications networks or shared-memory transfers. In this approach, the application messaging module (216) exposes a traditional interface, such as MPI, to the application program (158) so that the application program can gain the benefits of a PAMI with no need to recode the application. As an alternative to coding from scratch, therefore, existing prior-art application messaging modules that already implement a traditional interface may be improved to use the PAMI. Examples of prior-art application messaging modules that can be improved to process data communications in a PAMI according to embodiments of the present invention include such parallel communications libraries as the traditional ‘Message Passing Interface’ (‘MPI’) library, the ‘Parallel Virtual Machine’ (‘PVM’) library, MPICH, and the like.
Also represented in RAM in the example of
Also represented in RAM (156) in the example of
In the example of
Also stored in RAM (156) in the example compute node of
The example compute node (152) of
The data communications adapters in the example of
The data communications adapters in the example of
The data communications adapters in the example of
The data communications adapters in the example of
The example compute node (152) includes a number of arithmetic logic units (‘ALUs’). ALUs (166) are components of processors (164), and a separate ALU (170) is dedicated to the exclusive use of collective operations adapter (188) for use in performing the arithmetic and logical functions of reduction operations. Computer program instructions of a reduction routine in an application messaging module (216) or a PAMI (218) may latch an instruction for an arithmetic or logical function into instruction register (169). When the arithmetic or logical function of a reduction operation is a ‘sum’ or a ‘logical OR,’ for example, collective operations adapter (188) may execute the arithmetic or logical operation by use of an ALU (166) in a processor (164) or, typically much faster, by use of the dedicated ALU (170).
The example compute node (152) of
For further explanation,
For further explanation,
For further explanation,
For further explanation,
In the example of
For further explanation,
The application layer (208) provides communications among instances of a parallel application (158) running on the compute nodes (222, 224) by invoking functions in an application messaging module (216) installed on each compute node. Communications among instances of the application occur through messages passed between the instances of the application. Applications may communicate messages by invoking functions of an application programming interface (‘API’) exposed by the application messaging module (216). In this approach, the application messaging module (216) exposes a traditional interface, such as an API of an MPI library, to the application program (158) so that the application program can gain the benefits of a PAMI, reduced network traffic, callback functions, and so on, with no need to recode the application. Alternatively, if the parallel application is programmed to use PAMI functions, the application can call the PAMI functions directly, without going through the application messaging module.
The example protocol stack of
The protocol stack of
For further explanation,
The PAMI (218) in this example includes PAMI clients (302, 304), tasks (286, 298), contexts (290, 292, 310, 312), and endpoints (288, 300). A PAMI client is a collection of data communications resources (294, 295, 314) dedicated to the exclusive use of an application-level data processing entity, an application or an application messaging module such as an MPI library. Data communications resources assigned in collections to PAMI clients are explained in more detail below with reference to
Again referring to the example of
The PAMI (218) includes contexts (290, 292, 310, 312). A ‘context’ as the term is used in PAMI operations is composed of a subset of a client's collection of data processing resources, context functions, and a work queue of data transfer instructions to be performed by use of the subset through the context functions operated by an assigned thread of execution. That is, a context represents a partition of the local data communications resources assigned to a PAMI client. Every context within a client has equivalent functionality and semantics. Context functions implement contexts as threading points that applications use to optimize concurrent communications. Communications initiated by a local process, an instance of a parallel application, use a context object to identify the specific threading point that will be used to issue a particular communication independent of communications occurring in other contexts. In the example of
Context functions, explained here with regard to references (472-482) on
Posts and advances (480, 482 on
In at least some embodiments, a context's subset of a client's data processing resources is dedicated to the exclusive use of the context. In the example of
For further explanation, here is an example pseudocode Hello World program for an application using a PAMI:
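The program can be sketched along the following lines, consistent with the commentary that follows; the two initialization functions are those named below, and everything else in the sketch (the opaque types, the teardown calls) is assumed for illustration:

```c
/* Pseudocode sketch of a PAMI 'Hello World' program. The two
 * initialization functions follow the commentary below; the opaque
 * types and the teardown calls are assumptions of this sketch. */
#include <stdio.h>
#include <stddef.h>

typedef void *pami_client_t;    /* assumed opaque handle */
typedef void *pami_context_t;   /* assumed opaque handle */

extern int PAMI_Client_initialize(const char *name, pami_client_t *client);
extern int PAMI_Context_createv(pami_client_t client, void *configuration,
                                size_t nconfigs, pami_context_t *context,
                                size_t ncontexts);
extern int PAMI_Context_destroy(pami_context_t context);  /* assumed */
extern int PAMI_Client_finalize(pami_client_t client);    /* assumed */

int main(void)
{
    pami_client_t  client;
    pami_context_t context;

    /* Establish a client and a context for the application named "PAMI". */
    PAMI_Client_initialize("PAMI", &client);
    PAMI_Context_createv(client, NULL, 0, &context, 1);

    printf("Hello World\n");

    PAMI_Context_destroy(context);
    PAMI_Client_finalize(client);
    return 0;
}
```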
This short program is termed ‘pseudocode’ because it is an explanation in the form of computer code, not a working model, not an actual program for execution. In this pseudocode example, an application initializes a client and a context for an application named “PAMI.” PAMI_Client_initialize and PAMI_Context_createv are initialization functions (316) exposed to applications as part of a PAMI's API. These functions, in dependence upon the application name “PAMI,” pull from a PAMI configuration (318) the information needed to establish a client and a context for the application. The application uses this segment:
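In terms of the sketch above, that segment is the pair of initialization calls:

```c
/* The initialization segment: the application name "PAMI" selects the
 * PAMI configuration from which the client and context are established. */
PAMI_Client_initialize("PAMI", &client);
PAMI_Context_createv(client, NULL, 0, &context, 1);
```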
For further explanation of data communications resources assigned in collections to PAMI clients,
The DMA controllers (225, 226) in the example of
For further explanation, here is an example use case, a description of the overall operation of an example PUT DMA transfer using the DMA controllers (225, 226) and network (108) in the example of
The example of
The overall operation of an example PUT DMA transfer with the DMA controllers (225) and the network (108) in the example of
The DMA engine (225) then transfers by its transmit and receive threads (502, 504) through the network (108) the data descriptor (234) as well as the transfer data (494). The DMA engine (228), upon receiving by its receive thread (504) the data descriptor and the transfer data, places the transfer data (494) into the RAM (156) of the target application and inserts into the DMA controller's receive FIFO (232) a data descriptor (236) that specifies the target endpoint and the location of the transfer data (494) in RAM (156). The target application (159) calls an advance function (483) on a context (513) of the target endpoint (354). The advance function (483) checks the communications resources assigned to its context for incoming messages, including checking the receive FIFO (232) of the DMA controller (225) for data descriptors that specify the target endpoint (354). The advance function (483) finds the data descriptor for the PUT transfer and advises the target application (159) that its transfer data has arrived. Again, a GET-type DMA transfer works in a similar manner, with some differences described in more detail below, including, of course, the fact that transfer data flows in the opposite direction. And typical SEND transfers also operate similarly, some with rendezvous protocols, some with eager protocols, with data transmitted in packets over a network through non-DMA network adapters rather than DMA controllers.
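The data descriptors and the receive-side polling described above can be sketched as follows; the structures and the function are hypothetical, illustrating only the flow of a descriptor through a FIFO:

```c
#include <stddef.h>

/* Illustrative sketch of a data descriptor in a DMA controller's
 * injection or receive FIFO, as described above. */
typedef struct {
    unsigned target_endpoint;  /* endpoint the transfer is addressed to */
    void    *address;          /* location of the transfer data in RAM  */
    size_t   bytes;            /* size of the transfer data             */
} dma_descriptor_t;

typedef struct {
    dma_descriptor_t slots[256];
    unsigned head, tail;       /* ring-buffer indices */
} dma_fifo_t;

/* What an advance function does for its context: scan the receive FIFO
 * for descriptors addressed to this endpoint and advise the application
 * that its transfer data has arrived. */
void advance_receives(dma_fifo_t *recv_fifo, unsigned my_endpoint,
                      void (*deliver)(void *address, size_t bytes)) {
    while (recv_fifo->head != recv_fifo->tail) {
        dma_descriptor_t *d = &recv_fifo->slots[recv_fifo->head];
        if (d->target_endpoint == my_endpoint)
            deliver(d->address, d->bytes);
        recv_fifo->head = (recv_fifo->head + 1) % 256;
    }
}
```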
By use of an architecture like that illustrated and described with reference to
For further explanation,
Each endpoint (338, 340, 342, 344) in the example of
For efficient utilization of storage in an environment where multiple tasks of a client reside on the same physical compute node, an application may choose to write an endpoint table (288, 300 on
Endpoints (342, 344) on compute node (153) serve respectively two application instances (157, 159). The tasks (334, 336) in endpoints (342, 344) are different. The task (334) in endpoint (342) is identified by the task ID (249) of application (157), and the task (336) in endpoint (344) is identified by the task ID (251) of application (159). The clients (304, 305) in endpoints (342, 344) are different, separate clients. Client (304) in endpoint (342) associates data communications resources (e.g., 294, 296, 314 on
Contrasted with the PAMIs (218) on compute node (153), the PAMI (218) on compute node (152) serves only one instance of a parallel application (158) with two endpoints (338, 340). The tasks (332, 333) in endpoints (338, 340) are the same, because they both represent a same instance of a same application (158); both tasks (332,333) therefore are identified, either with a same variable value, references to a same object, or the like, by the task ID (250) of application (158). The clients (302, 303) in endpoints (338, 340) are optionally either different, separate clients or the same client. If they are different, each associates a separate collection of data communications resources. If they are the same, then each client (302, 303) in the PAMI (218) on compute node (152) associates a same set of data communications resources and is identified with a same value, object reference, or the like. Contexts (290, 292) in endpoints (338, 340) are different, separate contexts. Context (290) in endpoint (338) operates on behalf of application (158) a subset of the data communications resources of client (302) regardless whether clients (302, 303) are the same client or different clients, and context (292) in endpoint (340) operates on behalf of application (158) a subset of the data communications resources of client (303) regardless whether clients (302, 303) are the same client or different clients. Thus the tasks (332, 333) are the same; the clients (302, 303) can be the same; and the endpoints (338, 340) are distinguished at least by different contexts (290, 292), each of which operates on behalf of one of the threads (251-254) of application (158), identified typically by a context offset or a threading point.
Endpoints (338, 340) being as they are on the same compute node (152) can effect DMA data transfers between endpoints (338, 340) through DMA controller (225) and a segment of shared local memory (227). In the absence of such shared memory (227), endpoints (338, 340) can effect DMA data transfers through the DMA controller (225) and the network (108), even though both endpoints (338, 340) are on the same compute node (152). DMA transfers between endpoint (340) on compute node (152) and endpoint (344) on another compute node (153) go through DMA controllers (225, 226) and either a network (108) or a segment of shared remote memory (346). DMA transfers between endpoint (338) on compute node (152) and endpoint (342) on another compute node (153) also go through DMA controllers (225, 226) and either a network (108) or a segment of shared remote memory (346). The segment of shared remote memory (346) is a component of a Non-Uniform Memory Access (‘NUMA’) architecture, a segment in a memory module installed anywhere in the architecture of a parallel computer except on a local compute node. The segment of shared remote memory (346) is ‘remote’ in the sense that it is not installed on a local compute node. A local compute node is ‘local’ to the endpoints located on that particular compute node. The segment of shared remote memory (346), therefore, is ‘remote’ with respect to endpoints (338, 340) on compute node (152) if it is in a memory module on compute node (153) or anywhere else in the same parallel computer except on compute node (152).
Endpoints (342, 344) being as they are on the same compute node (153) can effect DMA data transfers between endpoints (342, 344) through DMA controller (226) and a segment of shared local memory (348). In the absence of such shared memory (348), endpoints (342, 344) can effect DMA data transfers through the DMA controller (226) and the network (108), even though both endpoints (342, 344) are on the same compute node (153). DMA transfers between endpoint (344) on compute node (153) and endpoint (340) on another compute node (152) go through DMA controllers (226, 225) and either a network (108) or a segment of shared remote memory (346). DMA transfers between endpoint (342) on compute node (153) and endpoint (338) on another compute node (152) go through DMA controllers (226, 225) and either a network (108) or a segment of shared remote memory (346). Again, the segment of shared remote memory (346) is ‘remote’ with respect to endpoints (342, 344) on compute node (153) if it is in a memory module on compute node (152) or anywhere else in the same parallel computer except on compute node (153).
For further explanation,
The method of
The method of
The method of
The RTS message (394) specifies a dispatch callback function (396) to be called upon dispatch, that is, upon receipt of the RTS by the first target endpoint (354). The RTS is transmitted by action of an advance function called on a context of the origin endpoint (352). The RTS is received by action of an advance function, called by an application on a context of the first target endpoint (354), checking its data communications resources for incoming messages, discovering the RTS, and executing the dispatch callback (396) specified by the RTS.
The method of
The method of
A dispatch callback (396) of the RTS (394) posts (266) the receive instructions (542) to work queues (282) in contexts of the plurality of target endpoints that carry out the actual data transfers, and advance functions (482) of the contexts (512) in the plurality of target endpoints execute the receive instructions. The receive instructions can be implemented as canonical rendezvous receives, of course, with data packets coursing back and forth across a packet-switching network, but it is probably preferred, when DMA functionality is available, to implement the receive instructions (542) with DMA GET-type instructions, either through segments of shared memory or across a network, conveying data transfers (405, 406, 407) from segments (1, 2, 3) of the source buffer (546) directly from PAMI memory of origin endpoint (352) to the target buffer (548), which is PAMI memory of the first target endpoint.
Each advance function (482) in the plurality of target endpoints in this example executes in a separate thread (251) of execution. In embodiments, one thread can advance all contexts, more than one thread can advance more than one context, or, as here, a separate thread is assigned to advance each context separately. In environments with sufficient resources, it is probably preferred for maximizing the advantages of parallelism that each advance function is run on a separate hardware thread or even a separate compute core, thereby literally running exactly in parallel, at exactly the same time instead of merely in separate quanta of time on a same core, hardware thread, or software thread.
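The third arrangement, a separate thread advancing each context, can be sketched as follows with hypothetical names:

```c
#include <pthread.h>

/* Illustrative sketch: one thread per context. Because contexts
 * partition a client's resources, each thread makes communications
 * progress on its own context with no lock shared between threads. */
typedef void *context_t;                                     /* assumed handle */
extern void context_advance(context_t ctx, int iterations);  /* assumed */

static void *progress_thread(void *arg) {
    context_t ctx = arg;
    for (;;)
        context_advance(ctx, 1);  /* drain work queue, poll resources */
    return NULL;
}

void start_progress_threads(context_t *contexts, pthread_t *threads, int n) {
    for (int i = 0; i < n; i++)
        pthread_create(&threads[i], NULL, progress_thread, contexts[i]);
}
```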
When the SEND dispatch callback (396) posts the receive instructions to the plurality of target endpoints (564), the callback (396) includes as an instruction parameter the number of target endpoints (564) participating in the transfer and resets an atomic counter (412) at a memory location accessible by all of the participating target endpoints (564), also advising through an instruction parameter the memory address of the atomic counter (412). The counter is said to be ‘atomic’ because it increments and returns its new counter value in a single atomic operation, preventing race conditions in reading the counter value. The SEND dispatch callback (396) also configures each of the posted receive instructions with its own done callback (550, 552, 408), so that each target endpoint (564) can determine upon completing its portion of the transfer whether the entire transfer is complete. When the overall transfer is complete, the counter value will be ‘3,’ corresponding to the number of target endpoints participating in the overall transfer. Each target endpoint (564) executes (542) its own separate portion (405, 406, 407) of the overall transfer and then increments-and-reads (554, 556, 558) the atomic counter (412).
In this example, the sub-transfers are completed in order. Target endpoint (354) completes its transfer (405), calls its done callback (550) which increments-and-reads (554) atomically the counter value, finds the counter value to be ‘1,’ and simply exits. Target endpoint (560) completes its transfer (406), calls its done callback (552) which increments-and-reads (556) atomically the counter value, finds the counter value to be ‘2,’ and simply exits. Target endpoint (562) completes its transfer (407), calls its done callback (408) which increments-and-reads (558) atomically the counter value, finds the counter value to be ‘3’ (signifying completion of the overall transfer), and calls the transfer done callback (397), a previously registered done callback for the overall transfer. The transfer done callback (397) notifies (410) the receiving application (159) of overall transfer completion for the SEND (390) and returns an acknowledgement message (416) to the origin endpoint (352). An advance function of the origin endpoint, routinely monitoring its assigned data communications resources, finds the incoming acknowledgement message (416) and calls (420) its corresponding SEND done callback function (391). The SEND done callback function (391) advises (421) the originating application (158) of completion of the overall SEND (390) data transfer.
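The completion counting in this example can be sketched with GCC-style atomic builtins; the structure and names are illustrative only:

```c
/* Illustrative sketch of the per-portion done callbacks: each target
 * endpoint increments-and-reads the shared atomic counter in a single
 * operation, so exactly one endpoint observes the final value and
 * fires the overall transfer-done callback. */
typedef struct {
    int   counter;                   /* the atomic counter (412)     */
    int   n_endpoints;               /* participants in the transfer */
    void (*transfer_done)(void *);   /* registered overall callback  */
    void  *cookie;
} transfer_state_t;

void portion_done_callback(transfer_state_t *s) {
    int value = __atomic_add_fetch(&s->counter, 1, __ATOMIC_ACQ_REL);
    if (value == s->n_endpoints)
        s->transfer_done(s->cookie);  /* overall transfer complete */
    /* otherwise simply exit; another portion is still in flight */
}
```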
For further explanation,
The method of
Also in the method of
Example embodiments of the present invention are described largely in the context of a fully functional parallel computer that processes data communications in a PAMI. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
As will be appreciated by those of skill in the art, aspects of the present invention may be embodied as method, apparatus or system, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment or an embodiment combining software and hardware aspects (firmware, resident software, micro-code, microcontroller-embedded code, and the like) that may all generally be referred to herein as a “circuit,” “module,” “system,” or “apparatus.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer readable media may be utilized. Such a computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described in this specification with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of computer apparatus, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
This invention was made with Government support under Contract No. B554331 awarded by the Department of Energy. The Government has certain rights in this invention.
Number | Name | Date | Kind |
---|---|---|---|
4933840 | Sera et al. | Jun 1990 | A |
4933846 | Humphrey et al. | Jun 1990 | A |
5050162 | Golestani | Sep 1991 | A |
5083265 | Valiant | Jan 1992 | A |
5136582 | Firoozmand | Aug 1992 | A |
5193179 | Laprade et al. | Mar 1993 | A |
5218676 | Ben-Ayed et al. | Jun 1993 | A |
5319638 | Lin | Jun 1994 | A |
5347450 | Nugent | Sep 1994 | A |
5437042 | Culley et al. | Jul 1995 | A |
5448698 | Wilkes | Sep 1995 | A |
5453978 | Sethu et al. | Sep 1995 | A |
5488608 | Flammer, III | Jan 1996 | A |
5617537 | Yamada et al. | Apr 1997 | A |
5680116 | Hashimoto et al. | Oct 1997 | A |
5689509 | Gaytan et al. | Nov 1997 | A |
5721921 | Kessler et al. | Feb 1998 | A |
5758075 | Graziano et al. | May 1998 | A |
5781775 | Ueno | Jul 1998 | A |
5790530 | Moh et al. | Aug 1998 | A |
5796735 | Miller et al. | Aug 1998 | A |
5802278 | Isfeld et al. | Sep 1998 | A |
5802366 | Row et al. | Sep 1998 | A |
5835482 | Allen | Nov 1998 | A |
5928351 | Horie et al. | Jul 1999 | A |
5933425 | Iwata | Aug 1999 | A |
5954794 | Fishler et al. | Sep 1999 | A |
5959995 | Wicki et al. | Sep 1999 | A |
5961659 | Benner | Oct 1999 | A |
5995503 | Crawley et al. | Nov 1999 | A |
6070189 | Bender et al. | May 2000 | A |
6072781 | Feeney et al. | Jun 2000 | A |
6081506 | Buyukkoc et al. | Jun 2000 | A |
6085303 | Thorson et al. | Jul 2000 | A |
6105122 | Muller et al. | Aug 2000 | A |
6161198 | Hill et al. | Dec 2000 | A |
6337852 | Desnoyers et al. | Jan 2002 | B1 |
6356951 | Gentry, Jr. | Mar 2002 | B1 |
6486983 | Beshai et al. | Nov 2002 | B1 |
6519310 | Chapple | Feb 2003 | B2 |
6591310 | Johnson | Jul 2003 | B1 |
6601089 | Sistare et al. | Jul 2003 | B1 |
6711632 | Chow et al. | Mar 2004 | B1 |
6735662 | Connor | May 2004 | B1 |
6744765 | Dearth et al. | Jun 2004 | B1 |
6748413 | Bournas | Jun 2004 | B1 |
6754732 | Dixon et al. | Jun 2004 | B1 |
6801927 | Smith et al. | Oct 2004 | B1 |
6847911 | Huckaby et al. | Jan 2005 | B2 |
6847991 | Kurapati | Jan 2005 | B1 |
6857030 | Webber | Feb 2005 | B2 |
6901052 | Buskirk et al. | May 2005 | B2 |
6977894 | Achilles et al. | Dec 2005 | B1 |
6981074 | Oner et al. | Dec 2005 | B2 |
7031305 | Yu et al. | Apr 2006 | B1 |
7054958 | Iyer et al. | May 2006 | B2 |
7089289 | Blackmore et al. | Aug 2006 | B1 |
7111092 | Mitten et al. | Sep 2006 | B1 |
7120916 | Firth et al. | Oct 2006 | B1 |
7155541 | Ganapathy et al. | Dec 2006 | B2 |
7155560 | McGrew et al. | Dec 2006 | B2 |
7237036 | Boucher et al. | Jun 2007 | B2 |
7319695 | Agarwal et al. | Jan 2008 | B1 |
7406086 | Deneroff et al. | Jul 2008 | B2 |
7418470 | Howard et al. | Aug 2008 | B2 |
7464138 | Le et al. | Dec 2008 | B2 |
7533197 | Leonard et al. | May 2009 | B2 |
7552312 | Archer et al. | Jun 2009 | B2 |
7805546 | Archer et al. | Sep 2010 | B2 |
7827024 | Archer et al. | Nov 2010 | B2 |
7836143 | Blocksome et al. | Nov 2010 | B2 |
7890670 | Archer et al. | Feb 2011 | B2 |
8250164 | Archer et al. | Aug 2012 | B2 |
8286188 | Brief | Oct 2012 | B1 |
20030093485 | Dougall et al. | May 2003 | A1 |
20030195991 | Masel et al. | Oct 2003 | A1 |
20030233497 | Shih | Dec 2003 | A1 |
20040001508 | Zheng et al. | Jan 2004 | A1 |
20040057380 | Biran et al. | Mar 2004 | A1 |
20040078405 | Bhanot et al. | Apr 2004 | A1 |
20040218631 | Ganfield | Nov 2004 | A1 |
20050002334 | Chao et al. | Jan 2005 | A1 |
20050018682 | Ferguson et al. | Jan 2005 | A1 |
20050033874 | Futral et al. | Feb 2005 | A1 |
20050068946 | Beshai | Mar 2005 | A1 |
20050078669 | Oner | Apr 2005 | A1 |
20050091334 | Chen et al. | Apr 2005 | A1 |
20050100035 | Chiou et al. | May 2005 | A1 |
20050108425 | Rabinovitch | May 2005 | A1 |
20050114561 | Lu et al. | May 2005 | A1 |
20050198113 | Mohamed et al. | Sep 2005 | A1 |
20050213570 | Stacy et al. | Sep 2005 | A1 |
20050289235 | Suematsu et al. | Dec 2005 | A1 |
20060002424 | Gadde | Jan 2006 | A1 |
20060045005 | Blackmore et al. | Mar 2006 | A1 |
20060045109 | Blackmore et al. | Mar 2006 | A1 |
20060047771 | Blackmore et al. | Mar 2006 | A1 |
20060056405 | Chang et al. | Mar 2006 | A1 |
20060059257 | Collard et al. | Mar 2006 | A1 |
20060075057 | Gildea et al. | Apr 2006 | A1 |
20060150010 | Stiffler et al. | Jul 2006 | A1 |
20060161733 | Beckett et al. | Jul 2006 | A1 |
20060161737 | Martin et al. | Jul 2006 | A1 |
20060190640 | Yoda et al. | Aug 2006 | A1 |
20060195336 | Greven et al. | Aug 2006 | A1 |
20060206635 | Alexander et al. | Sep 2006 | A1 |
20060218429 | Sherwin et al. | Sep 2006 | A1 |
20060227774 | Hoenicke | Oct 2006 | A1 |
20060230119 | Hausauer et al. | Oct 2006 | A1 |
20060253619 | Torudbakken et al. | Nov 2006 | A1 |
20070041383 | Banikazemi et al. | Feb 2007 | A1 |
20070165672 | Keels et al. | Jul 2007 | A1 |
20070169176 | Cook et al. | Jul 2007 | A1 |
20070169179 | Narad | Jul 2007 | A1 |
20070198519 | Dice et al. | Aug 2007 | A1 |
20080016249 | Ellis et al. | Jan 2008 | A1 |
20080022079 | Archer et al. | Jan 2008 | A1 |
20080101295 | Tomita et al. | May 2008 | A1 |
20080109573 | Leonard et al. | May 2008 | A1 |
20080222317 | Go et al. | Sep 2008 | A1 |
20080267066 | Archer et al. | Oct 2008 | A1 |
20080270563 | Blocksome et al. | Oct 2008 | A1 |
20080273543 | Blocksome et al. | Nov 2008 | A1 |
20080281997 | Archer et al. | Nov 2008 | A1 |
20080281998 | Archer et al. | Nov 2008 | A1 |
20080301327 | Archer et al. | Dec 2008 | A1 |
20080301704 | Archer et al. | Dec 2008 | A1 |
20080313341 | Archer et al. | Dec 2008 | A1 |
20090006662 | Chen et al. | Jan 2009 | A1 |
20090006808 | Blumrich et al. | Jan 2009 | A1 |
20090006810 | Almasi et al. | Jan 2009 | A1 |
20090007141 | Blocksome et al. | Jan 2009 | A1 |
20090019190 | Blocksome | Jan 2009 | A1 |
20090022156 | Blocksome | Jan 2009 | A1 |
20090031001 | Archer et al. | Jan 2009 | A1 |
20090031002 | Blocksome | Jan 2009 | A1 |
20090031055 | Archer et al. | Jan 2009 | A1 |
20090125604 | Chang et al. | May 2009 | A1 |
20090154486 | Archer et al. | Jun 2009 | A1 |
20090210586 | Tanabe | Aug 2009 | A1 |
20090248894 | Archer et al. | Oct 2009 | A1 |
20090248895 | Archer et al. | Oct 2009 | A1 |
20090254920 | Truschin et al. | Oct 2009 | A1 |
20090276582 | Furtek et al. | Nov 2009 | A1 |
20100005189 | Archer et al. | Jan 2010 | A1 |
20100036940 | Carey et al. | Feb 2010 | A1 |
20100082848 | Blocksome et al. | Apr 2010 | A1 |
20100232448 | Sugumar et al. | Sep 2010 | A1 |
20100268852 | Archer et al. | Oct 2010 | A1 |
20110197204 | Archer et al. | Aug 2011 | A1 |
20110265098 | Dozsa et al. | Oct 2011 | A1 |
20120079035 | Archer et al. | Mar 2012 | A1 |
20120079133 | Archer et al. | Mar 2012 | A1 |
20120117137 | Blocksome et al. | May 2012 | A1 |
20120117138 | Blocksome et al. | May 2012 | A1 |
20120117211 | Blocksome et al. | May 2012 | A1 |
20120117281 | Blocksome et al. | May 2012 | A1 |
20120137294 | Archer et al. | May 2012 | A1 |
20120144400 | Davis et al. | Jun 2012 | A1 |
20120144401 | Faraj | Jun 2012 | A1 |
20120151485 | Archer et al. | Jun 2012 | A1 |
20120179736 | Blocksome et al. | Jul 2012 | A1 |
20120179760 | Blocksome et al. | Jul 2012 | A1 |
20120185679 | Archer et al. | Jul 2012 | A1 |
20120185873 | Archer et al. | Jul 2012 | A1 |
20120210094 | Blocksome et al. | Aug 2012 | A1 |
20120254344 | Archer et al. | Oct 2012 | A1 |
20130018947 | Archer et al. | Jan 2013 | A1 |
20130061244 | Davis et al. | Mar 2013 | A1 |
20130061245 | Faraj | Mar 2013 | A1 |
20130061246 | Archer et al. | Mar 2013 | A1 |
20130066938 | Archer et al. | Mar 2013 | A1 |
20130067111 | Archer et al. | Mar 2013 | A1 |
20130067206 | Archer et al. | Mar 2013 | A1 |
20130073751 | Blocksome et al. | Mar 2013 | A1 |
20130073752 | Blocksome | Mar 2013 | A1 |
20130074097 | Archer et al. | Mar 2013 | A1 |
20130081059 | Archer et al. | Mar 2013 | A1 |
20130091510 | Archer et al. | Apr 2013 | A1 |
20130097263 | Blocksome et al. | Apr 2013 | A1 |
20130097404 | Blocksome et al. | Apr 2013 | A1 |
20130097614 | Blocksome et al. | Apr 2013 | A1 |
20130110901 | Blocksome et al. | May 2013 | A1 |
20130117403 | Archer et al. | May 2013 | A1 |
20130117761 | Archer et al. | May 2013 | A1 |
20130117764 | Archer et al. | May 2013 | A1 |
20130124666 | Archer et al. | May 2013 | A1 |
20130125135 | Archer et al. | May 2013 | A1 |
20130125140 | Archer et al. | May 2013 | A1 |
20130174180 | Blocksome et al. | Jul 2013 | A1 |
20130185465 | Blocksome | Jul 2013 | A1 |
Entry |
---|
Kumar et al., A Network on Chip Architecture and Design Methodology, IEEE Computer Society Annual Symposium on VLSI, 2002. |
Final Office Action, U.S. Appl. No. 11/776,707, USPTO Mail Date Jan. 6, 2011. |
Final Office Action, U.S. Appl. No. 11/740,361, USPTO Mail Date Oct. 4, 2010. |
Office Action, U.S. Appl. No. 11/755,501, USPTO Mail Date Nov. 26, 2010. |
Office Action, U.S. Appl. No. 12/702,661, USPTO Mail Date Dec. 14, 2012. |
Office Action, U.S. Appl. No. 12/956,903, USPTO Mail Date Mar. 19, 2013. |
Ron Brightwell, Keith D. Underwood, “An Analysis of NIC Resource Usage for Offloading MPI,” ipdps, vol. 9, pp. 183a, 18th International Parallel and Distributed Processing Symposium (IPDPS'04)—Workshop 8, 2004. |
Keith D. Underwood, Ron Brightwell, “The Impact of MPI Queue Usage on Message Latency,” icpp, pp. 152-160, 2004 International Conference on Parallel Processing (ICPP'04), 2004. |
Keith D. Underwood, K. Scott Hemmert, Arun Rodrigues, Richard Murphy, Ron Brightwell, “A Hardware Acceleration Unit for MPI Queue Processing,” ipdps, vol. 1, pp. 96b, 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05)—Papers, 2005. |
Knudson, B., et al., “IBM System Blue Gene Solution: Blue Gene/P Application Development,” IBM Redbooks, Aug. 2009, pp. 1-406, Fourth Edition, International Technical Support Organization, Rochester, Minnesota. |
Blocksome, M., et al., “Optimizing MPI Collectives using Efficient Intra-node Communication Techniques over the BlueGene/P Supercomputer,” Computer Science IBM Research Report, Dec. 2010, pp. 1-25, IBM Systems and Technology Group, Rochester, Minnesota. |
Final Office Action, U.S. Appl. No. 11/740,361, USPTO Mail Date Sep. 29, 2011. |
Office Action, U.S. Appl. No. 11/776,718, USPTO Mail Date Oct. 19, 2011. |
U.S. Appl. No. 11/776,707, filed Jul. 12, 2007, Blocksome. |
U.S. Appl. No. 11/739,948, filed Apr. 25, 2007, Blocksome, et al. |
U.S. Appl. No. 11/740,361, filed Apr. 26, 2007, Archer, et al. |
U.S. Appl. No. 11/746,333, filed May 9, 2007, Archer, et al. |
U.S. Appl. No. 11/754,765, filed May 29, 2007, Archer, et al. |
U.S. Appl. No. 11/764,302, filed Jun. 18, 2007, Archer, et al. |
U.S. Appl. No. 11/755,501, filed May 30, 2007, Archer, et al. |
U.S. Appl. No. 11/829,325, filed Jul. 27, 2007, Archer, et al. |
U.S. Appl. No. 11/829,334, filed Jul. 27, 2007, Archer, et al. |
U.S. Appl. No. 11/776,718, filed Jul. 12, 2007, Blocksome. |
U.S. Appl. No. 11/829,339, filed Jul. 27, 2007, Blocksome. |
Office Action, U.S. Appl. No. 11/829,325, USPTO Mail Date May 26, 2009. |
Office Action, U.S. Appl. No. 11/739,948, USPTO Mail Date Aug. 27, 2009. |
Office Action, U.S. Appl. No. 11/776,718, USPTO Mail Date Sep. 1, 2009. |
RCE, U.S. Appl. No. 11/740,361, USPTO Mail Date Jan. 30, 2012. |
Notice of Allowance, U.S. Appl. No. 11/755,501, USPTO Mail Date Jun. 9, 2011. |
Final Office Action, U.S. Appl. No. 11/776,718, USPTO Mail Date Mar. 30, 2012. |
Office Action, U.S. Appl. No. 13/676,700, USPTO Mail Date Jun. 5, 2013. |
Kumar et al., "The Deep Computing Messaging Framework: Generalized Scalable Message Passing on the Blue Gene/P Supercomputer," Proceedings of the 22nd Annual International Conference on Supercomputing (ICS '08), Jun. 2008, pp. 94-103, ACM New York, USA. |
Banikazemi et al., “MPI-LAPI: An Efficient Implementation of MPI for IBM RS/6000 SP Systems”, IEEE Transactions on Parallel and Distributed Systems, Oct. 2001, vol. 12, Issue 10, pp. 1081-1093, IEEE Xplore Digital Library (online publication), IEEE.org, USA, DOI: 10.1109/71.963419. |
Myricom, "Myrinet Express (MX): A High-Performance, Low-Level, Message-Passing Interface for Myrinet," Myricom.com (online publication), Version 1.2, Oct. 2006, pp. 1-65, Myricom Inc., USA. |
Dinan et al., “Hybrid Parallel Programming With MPI and Unified Parallel C”, Proceedings of the 7th ACM International Conference on Computing Frontiers (CF'10), May 2010, pp. 177-186, ACM New York, USA. |
Dozsa et al., “Enabling Concurrent Multithreaded MPI Communication on Multicore Petascale Systems”, Proceedings of the 17th European MPI Users' Group Meeting Conference on Recent Advances in the Message Passing Interface (EuroMPI'10), Apr. 2010, pp. 11-20 (reprinted pp. 1-9), Springer-Verlag Berlin, Heidelberg. |
Foster et al., "Managing Multiple Communication Methods in High-Performance Networked Computing Systems," Journal of Parallel and Distributed Computing, vol. 40, Issue 1, Jan. 1997, pp. 1-25 (online publication), ScienceDirect.com, USA. |
Robinson et al., “A Task Migration Implementation of the Message-Passing Interface”, Proceedings of the 5th IEEE International Symposium on High Performance Distributed Computing (HPDC'96), May 1996, pp. 61-68, IEEE Computer Society, Washington DC, USA. |
Final Office Action, U.S. Appl. No. 12/892,192, USPTO Mail Date May 2, 2013. |
Office Action, U.S. Appl. No. 12/892,192, USPTO Mail Date Sep. 30, 2013. |
Notice of Allowance, U.S. Appl. No. 13/007,860, USPTO Mail Date Jul. 3, 2013. |
Office Action, U.S. Appl. No. 12/892,153, USPTO Mail Date Apr. 25, 2013. |
Final Office Action, U.S. Appl. No. 12/892,153, USPTO Mail Date Aug. 14, 2013. |
Office Action, U.S. Appl. No. 12/985,611, USPTO Mail Date Aug. 2, 2013. |
Office Action, U.S. Appl. No. 13/007,848, USPTO Mail Date May 15, 2013. |
Final Office Action, U.S. Appl. No. 13/007,848, USPTO Mail Date Sep. 13, 2013. |
Notice of Allowance, U.S. Appl. No. 12/963,671, USPTO Mail Date Sep. 18, 2013. |
Final Office Action, U.S. Appl. No. 12/940,198, USPTO Mail Date Aug. 14, 2013. |
Final Office Action, U.S. Appl. No. 12/940,259, USPTO Mail Date Aug. 14, 2013. |
Final Office Action, U.S. Appl. No. 12/940,282, USPTO Mail Date Sep. 10, 2013. |
Notice of Allowance, U.S. Appl. No. 12/940,300, USPTO Mail Date Apr. 29, 2013. |
Notice of Allowance, U.S. Appl. No. 12/963,694, USPTO Mail Date Jun. 18, 2013. |
Office Action, U.S. Appl. No. 12/985,651, USPTO Mail Date Aug. 5, 2013. |
Notice of Allowance, U.S. Appl. No. 13/290,670, USPTO Mail Date Mar. 27, 2013. |
Notice of Allowance, U.S. Appl. No. 13/290,642, USPTO Mail Date May 1, 2013. |
Office Action, U.S. Appl. No. 13/292,293, USPTO Mail Date Jul. 19, 2013. |
Office Action, U.S. Appl. No. 13/659,370, USPTO Mail Date Oct. 21, 2013. |
Final Office Action, U.S. Appl. No. 13/668,503, USPTO Mail Date Jul. 11, 2013. |
Office Action, U.S. Appl. No. 13/671,762, USPTO Mail Date May 13, 2013. |
Final Office Action, U.S. Appl. No. 13/671,762, USPTO Mail Date Sep. 13, 2013. |
Office Action, U.S. Appl. No. 13/673,188, USPTO Mail Date Jul. 25, 2013. |
Final Office Action, U.S. Appl. No. 13/678,799, USPTO Mail Date Aug. 30, 2013. |
Final Office Action, U.S. Appl. No. 13/677,507, USPTO Mail Date Aug. 22, 2013. |
Office Action, U.S. Appl. No. 13/690,168, USPTO Mail Date Aug. 15, 2013. |
Notice of Allowance, U.S. Appl. No. 13/681,903, USPTO Mail Date Sep. 30, 2013. |
Office Action, U.S. Appl. No. 13/680,772, USPTO Mail Date Aug. 15, 2013. |
Office Action, U.S. Appl. No. 13/710,066, USPTO Mail Date Jul. 19, 2013. |
Notice of Allowance, U.S. Appl. No. 13/709,305, USPTO Mail Date Aug. 27, 2013. |
Final Office Action, U.S. Appl. No. 13/711,108, USPTO Mail Date Jul. 5, 2013. |
Notice of Allowance, U.S. Appl. No. 13/711,108, USPTO Mail Date Sep. 19, 2013. |
Notice of Allowance, U.S. Appl. No. 13/784,198, USPTO Mail Date Sep. 20, 2013. |
Final Office Action, U.S. Appl. No. 12/956,903, USPTO Mail Date Nov. 6, 2013. |
Notice of Allowance, U.S. Appl. No. 13/292,293, USPTO Mail Date Nov. 7, 2013. |
Watson, R., “DMA controller programming in C,” C Users Journal, Nov. 1993, pp. 35-50 (10 Total Pages), v11 n11, R & D Publications, Inc., Lawrence, KS, USA. ISSN: 0898-9788. |
Notice of Allowance, U.S. Appl. No. 12/702,661, USPTO Mail Date May 15, 2013. |
Notice of Allowance, U.S. Appl. No. 13/666,604, USPTO Mail Date Sep. 25, 2013. |
Office Action, U.S. Appl. No. 13/666,604, USPTO Mail Date May 30, 2013. |
Office Action, U.S. Appl. No. 13/671,055, USPTO Mail Date Jul. 31, 2013. |
Office Action, U.S. Appl. No. 13/769,715, USPTO Mail Date Jul. 31, 2013. |
Almasi, G., et al., "MPI on BlueGene/L: Designing an Efficient General Purpose Messaging Solution for a Large Cellular System," Recent Advances in Parallel Virtual Machine and Message Passing Interface, Proceedings of the 10th European PVM/MPI Users' Group Meeting, Venice, Italy, Sep. 29-Oct. 2, 2003, pp. 352-361, Springer Berlin Heidelberg. DOI: 10.1007/978-3-540-39924-7_49. |
Almasi, G., et al., "Architecture and Performance of the BlueGene/L Message Layer," Recent Advances in Parallel Virtual Machine and Message Passing Interface, Proceedings of the 11th European PVM/MPI Users' Group Meeting, Budapest, Hungary, Sep. 19-22, 2004, pp. 405-414, Springer Berlin Heidelberg. DOI: 10.1007/978-3-540-30218-6_55. |
Pritchard, J., "COM and CORBA Side by Side: Architectures, Strategies, and Implementations," Jul. 1999, pp. 74-84, Addison Wesley Longman, Inc., Reading, Massachusetts, USA. ISBN 0-201-37945-7. |
Zukowski, J., et al., "Mastering Java 1.2," 1998 (month unknown), pp. 900-903, SYBEX, San Francisco, California, USA. ISBN: 0-7821-2180-2. |
Fink, T., "Integrating MPI Components into Metacomputing Applications," Recent Advances in Parallel Virtual Machine and Message Passing Interface, Proceedings of the 7th European PVM/MPI Users' Group Meeting, Balatonfured, Hungary, Sep. 10-13, 2000, pp. 208-215, Springer Berlin Heidelberg. DOI: 10.1007/3-540-45255-9_30. |
Moreira, J., et al., “The Blue Gene/L Supercomputer: A Hardware and Software Story,” International Journal of Parallel Programming, Jun. 2007, pp. 181-206, vol. 35, No. 3, Springer Science+Business Media, LLC, USA. DOI: 10.1007/s10766-007-0037-2. |
Ribler, R., et al., "The Autopilot performance-directed adaptive control system," Future Generation Computer Systems, Sep. 1, 2001, pp. 175-187, vol. 18, No. 1, Elsevier Science Publishers B.V., Amsterdam, The Netherlands. DOI: 10.1016/S0167-739X(01)00051-6. |
Zhang, Y., et al., "Automatic Performance Tuning for J2EE Application Server Systems," Web Information Systems Engineering (WISE 2005), Proceedings of the 6th International Conference on Web Information Systems Engineering, New York, NY, USA, Nov. 20-22, 2005, pp. 520-527, Springer Berlin Heidelberg. DOI: 10.1007/11581062_43. |
Chung, I., et al., "Automated Cluster-Based Web Service Performance Tuning," Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC-13 '04), Jun. 4-6, 2004, pp. 36-44, IEEE Computer Society Digital Library. ISBN: 0-7803-2175-4. |
Hondroudakis, A., et al., "An Empirically Derived Framework for Classifying Parallel Program Performance Tuning Problems," Proceedings of the SIGMETRICS Symposium on Parallel and Distributed Tools (SPDT '98), Aug. 3-4, 1998, pp. 112-123, ACM, New York, NY, USA. DOI: 10.1145/281035.281047. |
Gara, A., et al., "Overview of the Blue Gene/L system architecture," IBM Journal of Research & Development, Mar. 2005, pp. 195-212, vol. 49, No. 2/3, IEEE Xplore Digital Library. DOI: 10.1147/rd.492.0195. |
Adiga, N., et al., "Blue Gene/L torus interconnection network," IBM Journal of Research & Development, Mar. 2005, pp. 265-276, vol. 49, Issue 2, IBM Corp., Riverton, NJ, USA, ACM Digital Library. DOI: 10.1147/rd.492.0265. |
Barnett, M., et al., "Broadcasting on Meshes with Worm-Hole Routing," Second Revised Version, Dec. 1995, pp. 1-22, CiteSeerX (Online Publication). URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.5075&rep=rep1&type=ps. |
Notice of Allowance, U.S. Appl. No. 12/985,651, USPTO Mail Date Feb. 20, 2014. |
Notice of Allowance, U.S. Appl. No. 13/659,370, USPTO Mail Date Mar. 13, 2014. |
Number | Date | Country |
---|---|---|
20120137294 A1 | May 2012 | US |