High-performance computing (HPC) applications typically execute calculations on computing clusters that include many individual computing nodes connected by a high-speed network fabric. Typical computing clusters may include hundreds or thousands of individual nodes. Each node may include several processors, processor cores, or other parallel computing resources. A typical computing job therefore may be executed by a large number of individual processes distributed across each computing node and across the entire computing cluster.
Processes within a job may communicate data with each other using a message-passing communication paradigm. In particular, many HPC applications may use a message passing interface (MPI) library to perform message-passing operations such as sending or receiving messages. MPI is a widely used message-passing standard maintained by the MPI Forum and has been implemented for numerous programming languages, operating systems, and HPC computing platforms. In use, each process is given an MPI rank, typically an integer, that identifies the process during MPI execution. The MPI rank is similar to a network address and may be used by the processes to send and receive messages. MPI supports operations including two-sided send and receive operations, collective operations such as reductions and barriers, and one-sided communication operations such as get and put.
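For context, the following is a minimal sketch of two-sided message passing using the standard MPI C bindings, in which rank 0 sends an integer to rank 1; the message tag and payload values are arbitrary and chosen only for illustration.

    /* Minimal two-sided MPI exchange: rank 0 sends an integer to rank 1.
     * Illustrative sketch; compile with an MPI compiler wrapper (e.g. mpicc). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank identifies this process */
        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }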
Many HPC applications are increasingly performing calculations using a shared-memory multiprocessing model. For example, HPC applications may use a shared memory multiprocessing application programming interface (API) such as OpenMP. As a result, many current HPC application processes are multi-threaded. Increasing the number of processor cores or threads per HPC process may improve node resource utilization and thereby increase computation performance. Many system MPI implementations are thread-safe or may otherwise be executed in multithreaded mode. However, performing multiple MPI operations concurrently may reduce overall performance through increased overhead. For example, typical MPI implementations assign a single MPI rank to each process regardless of the number of threads executing within the process. Multithreaded MPI implementations may also introduce other threading overhead, for example overhead associated with thread synchronization and shared communication state. In some implementations, to avoid threading overhead, multithreaded applications may funnel all MPI communications to a single thread; however, that single thread may not be capable of fully utilizing available networking resources.
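As a concrete illustration of the threading modes discussed above, the following sketch requests full multithreaded support from the MPI library with MPI_Init_thread and falls back to funneling MPI calls through a single thread if only a lower thread level is provided; error handling is omitted.

    /* Sketch: requesting multithreaded MPI support. MPI_THREAD_MULTIPLE allows
     * any thread to call MPI concurrently (with the synchronization overhead
     * noted above), while MPI_THREAD_FUNNELED restricts MPI calls to the thread
     * that initialized the library. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            /* Fall back: funnel all MPI communications through a single thread. */
            printf("MPI library provides thread level %d only\n", provided);
        }
        MPI_Finalize();
        return 0;
    }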
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one of A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
Referring now to
Referring now to
Referring back to
Each processor 120 may be embodied as any type of processor capable of performing the functions described herein. Each illustrative processor 120 is a multi-core processor; however, in other embodiments each processor 120 may be embodied as a single- or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Each processor 120 illustratively includes four processor cores 122, each of which is an independent processing unit capable of executing programmed instructions. In some embodiments, each of the processor cores 122 may be capable of hyperthreading; that is, each processor core 122 may support execution on two or more logical processors or hardware threads. Although each of the illustrative computing nodes 102 includes two processors 120 having four processor cores 122 in the embodiment of
The memory 126 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 126 may store various data and software used during operation of the computing node 102 such as operating systems, applications, programs, libraries, and drivers. The memory 126 is communicatively coupled to the processor 120 via the I/O subsystem 124, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 126, and other components of the computing node 102. For example, the I/O subsystem 124 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 124 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processors 120, the memory 126, and other components of the computing node 102, on a single integrated circuit chip. The data storage device 128 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
The communication subsystem 130 of the computing node 102 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing nodes 102 and/or other remote devices over the network 104. The communication subsystem 130 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., InfiniBand®, Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication. The communication subsystem 130 may include one or more network adapters and/or network ports that may be used concurrently to transfer data over the network 104.
As discussed in more detail below, the computing nodes 102 may be configured to transmit and receive data with each other and/or other devices of the system 100 over the network 104. The network 104 may be embodied as any number of various wired and/or wireless networks. For example, the network 104 may be embodied as, or otherwise include, a switched fabric network, a wired or wireless local area network (LAN), a wired or wireless wide area network (WAN), a cellular network, and/or a publicly-accessible, global network such as the Internet. As such, the network 104 may include any number of additional devices, such as additional computers, routers, and switches, to facilitate communications among the devices of the system 100.
Referring now to
The host process module 302 is configured to manage relationships between processes and threads executed by the computing node 102. As shown, the host process module 302 includes a host process 304, and the host process 304 may establish a number of threads 306. The illustrative host process 304 establishes two threads 306a, 306b, but it should be understood that numerous threads 306 may be established. For example, in some embodiments, the host process 304 may establish one thread 306 for each hardware thread supported by the computing node 102 (e.g., sixteen threads 306 in the illustrative embodiment). The host process 304 may be embodied as an operating system process, managed executable process, application, job, or other program executed by the computing node 102. Each of the threads 306 may be embodied as an operating system thread, managed executable thread, application thread, worker thread, lightweight thread, or other program executed within the process space of the host process 304. Each of the threads 306 may share the memory space of the host process 304.
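As a minimal illustration of the threading model described above, the following sketch uses OpenMP to establish one worker thread per hardware thread of the illustrative node; the thread count of sixteen matches the illustrative two-processor, four-core, hyperthreaded configuration and is an assumption introduced only for illustration.

    /* Sketch: a host process establishing one worker thread per hardware thread
     * using OpenMP. Compile with OpenMP support (e.g. -fopenmp with GCC/Clang). */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        #pragma omp parallel num_threads(16)
        {
            int tid = omp_get_thread_num();
            printf("worker thread %d started\n", tid);
            /* ... each thread would obtain its own MPI endpoint here ... */
        }
        return 0;
    }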
The host process module 302 is further configured to create message passing interface (MPI) endpoints for each of the threads 306 and to assign each of the threads 306 to a proxy process 312 (described further below). The MPI endpoints may be embodied as any MPI rank, network address, or identifier that may be used to route messages to particular threads 306 executing within the host process 304. The MPI endpoints may not distinguish among threads 306 executing within different host processes 304; for example, the MPI endpoints may be embodied as local MPI ranks within the global MPI rank of the host process 304.
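The following sketch illustrates one way such an endpoint identifier might be represented as a pairing of the host process's global MPI rank with a per-thread local rank; the endpoint_id structure and make_endpoint() helper are assumptions introduced for illustration and are not types defined by the MPI standard.

    /* Sketch: an endpoint identifier pairing the host process's global MPI rank
     * with a per-thread local rank, so messages can be routed to a particular
     * thread. The struct and helper are illustrative assumptions. */
    #include <mpi.h>

    typedef struct {
        int global_rank;  /* MPI rank of the host process */
        int local_rank;   /* index of the thread within the host process */
    } endpoint_id;

    endpoint_id make_endpoint(MPI_Comm comm, int thread_index)
    {
        endpoint_id ep;
        MPI_Comm_rank(comm, &ep.global_rank);
        ep.local_rank = thread_index;
        return ep;
    }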
The message passing module 308 is configured to receive MPI operations addressed to the MPI endpoints of the threads 306 and communicate those MPI operations to the associated proxy process 312. MPI operations may include any message passing operation, such as sending messages, receiving messages, collective operations, or one-sided operations. The message passing module 308 may communicate the MPI operations using any available intra-node communication technique, such as shared-memory communication.
The proxy process module 310 is configured to perform the MPI operations forwarded by the message passing module 308 using a number of proxy processes 312. Similar to the host process 304, each of the proxy processes 312 may be embodied as an operating system process, managed executable process, application, job, or other program executed by the computing node 102. Each of the proxy processes 312 establishes an execution environment, address space, and other resources that are independent of other proxy processes 312 of the computing node 102. As described above, each of the proxy processes 312 may be assigned to one of the threads 306. The illustrative proxy process module 310 establishes two proxy processes 312a, 312b, but it should be understood that numerous proxy processes 312 may be established. Although illustrated as including one proxy process 312 for each thread 306, it should be understood that in some embodiments one proxy process 312 may be shared by several threads 306, host processes 304, or other jobs, and that a thread 306 may interact with several proxy processes 312.
Referring now to
In the illustrative API stack 400, the host process 304 establishes instances of the MPI proxy library 402, the MPI library 404, and the intra-node communication library 406 that are shared by all of the threads 306. For example, each of the libraries 402, 404, 406 may be loaded into the address space of the host process 304 using an operating system dynamic loader or dynamic linker. Each of the threads 306 interfaces with the MPI proxy library 402. The MPI proxy library 402 may implement the same programmatic interface as the MPI library 404. Thus, the threads 306 may submit ordinary MPI operations (e.g., send operations, receive operations, collective operations, or one-sided communication operations) to the MPI proxy library 402. The MPI proxy library 402 may pass many MPI operations directly through to the MPI library 404. The MPI library 404 may be embodied as a shared instance of a system MPI library 404. In some embodiments, the MPI library 404 of the host process 304 may be configured to execute in thread-safe mode. Additionally, although the proxy processes 312 are illustrated as external to the MPI library 404, in some embodiments the MPI library 404 may create or otherwise manage the proxy processes 312 internally. Additionally or alternatively, in some embodiments the proxy processes 312 may be created externally as a system-managed resource.
The MPI proxy library 402 may intercept and redirect some MPI operations to the intra-node communication library 406. For example, the MPI proxy library 402 may implement an MPI endpoints extension interface that allows distinct MPI endpoints to be established for each of the threads 306. Message operations directed toward those MPI endpoints may be redirected to the intra-node communication library 406. The intra-node communication library 406 communicates with the proxy processes 312, and may use any form of efficient intra-node communication, such as shared-memory communication.
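One plausible interception mechanism is the MPI profiling interface, under which the proxy library defines the MPI entry points and forwards pass-through operations to the corresponding PMPI_ routines. The following sketch assumes this approach; the endpoint_comm variable and forward_to_proxy() stub stand in for the intra-node communication library 406 and are assumptions introduced only for illustration.

    /* Sketch of pass-through interception via the MPI profiling interface: the
     * proxy library defines MPI_Send, redirects operations on an endpoint
     * communicator to the intra-node layer, and passes everything else straight
     * to the underlying library via PMPI_Send. */
    #include <mpi.h>

    /* Communicator registered by the proxy layer for endpoint traffic
     * (assumption: set elsewhere when endpoints are created). */
    static MPI_Comm endpoint_comm = MPI_COMM_NULL;

    /* Assumed helper: enqueue the operation for the assigned proxy process. */
    static int forward_to_proxy(const void *buf, int count, MPI_Datatype type,
                                int dest, int tag, MPI_Comm comm)
    {
        (void)buf; (void)count; (void)type; (void)dest; (void)tag; (void)comm;
        return MPI_SUCCESS;   /* stub: real version writes to the shared queue */
    }

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        if (comm == endpoint_comm)                             /* endpoint traffic */
            return forward_to_proxy(buf, count, type, dest, tag, comm);
        return PMPI_Send(buf, count, type, dest, tag, comm);   /* pass-through */
    }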
Each of the proxy processes 312 establishes an instance of the MPI library 404. For example, the proxy process 312a establishes the MPI library 404a, the proxy process 312b establishes the MPI library 404b, and so on. The MPI library 404 established by each proxy process 312 may be the same system MPI library 404 established by the host process 304. In some embodiments, the MPI library 404 of each proxy process 312 may be configured to execute in single-threaded mode. Each MPI library 404 of the proxy processes 312 uses the communication subsystem 130 to communicate with remote computing nodes 102. In some embodiments, concurrent access to the communication subsystem 130 by multiple proxy processes 312 may be managed by an operating system, virtual machine monitor (VMM), hypervisor, or other control structure of the computing node 102 (not shown). Additionally or alternatively, in some embodiments one or more of the proxy processes 312 may be assigned isolated, reserved, or otherwise dedicated network resources of the communication subsystem 130, such as dedicated network adapters, network ports, or network bandwidth. Although illustrated as establishing an instance of the MPI library 404, in other embodiments each proxy process 312 may use any other communication library or other method to perform MPI operations. For example, each proxy process 312 may establish a low-level network API other than the MPI library 404.
Although the MPI proxy library 402 and the MPI library 404 are illustrated as implementing the MPI standard as established by the MPI Forum, it should be understood that in other embodiments the API stack 400 may include any middleware library for interprocess and/or internode communication in high-performance computing applications. Additionally, in some embodiments the threads 306 may interact with a communication library that implements a different interface from the underlying communication library. For example, rather than a proxy library, the threads 306 may interact with an adapter library that forwards calls to the proxy processes 312 and/or to the MPI library 404.
Referring now to
In block 504a, the computing node 102 assigns the thread 306a to a proxy process 312a. As part of assigning the thread 306a to the proxy process 312a, the computing node 102 may initialize an intra-node communication link between the thread 306a and the proxy process 312a. The computing node 102 may also perform any other initialization required to support MPI communication using the proxy process 312a, for example, initializing a global MPI rank for the proxy process 312a. In some embodiments, in block 506a, the computing node 102 may pin the proxy process 312a and the thread 306a to execute on the same processor core 122. Executing on the same processor core 122 may improve intra-node communication performance, for example by allowing data transfer using a shared cache memory. The computing node 102 may use any technique for pinning the proxy process 312a and/or the thread 306a, including assigning the proxy process 312a and the thread 306a to hardware threads executed by the same processor core 122, setting operating system processor affinity, or other techniques. Additionally, although illustrated as assigning the threads 306 to the proxy processes 312 in parallel, it should be understood that in some embodiments the threads 306 may be assigned to the proxy processes 312 in a serial or single-threaded manner, for example by the host process 304.
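As one example of such pinning, the following Linux-specific sketch binds the calling thread to a given processor core using pthread_setaffinity_np(); the choice of core index is assumed to be made so that a thread 306 and its assigned proxy process 312 share the same processor core 122.

    /* Sketch: pinning the calling thread to a given processor core on Linux. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        /* Applies to the calling thread; a proxy process could instead use
         * sched_setaffinity() on its own PID for the same effect. */
        return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
    }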
In block 508a, the computing node 102 receives an MPI operation called by the thread 306a on the associated MPI endpoint. The MPI operation may be embodied as any message passing command, including a send, a receive, a ready-send (i.e., a send that may be started only when the matching receive has already been posted), a collective operation, a one-sided communication operation, or other command. As shown in
In block 510a, the computing node 102 communicates the MPI operation from the thread 306a to the proxy process 312a. The computing node 102 may use any technique for intra-node data transfer. To improve performance, the computing node 102 may use an efficient or high-performance technique to avoid unnecessary copies of data in the memory 126. For example, the computing node 102 may communicate the MPI operation using a shared memory region of the memory 126 that is accessible to both the thread 306a and the proxy process 312a. In some embodiments, the thread 306a and the proxy process 312a may communicate using a lock-free command queue stored in the shared memory region. In some embodiments, the computing node 102 may allow the thread 306a and/or the proxy process 312a to allocate data buffers within the shared memory region, which may further reduce data copies. As illustrated in
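The following sketch shows one possible lock-free single-producer/single-consumer command queue, using C11 atomics, that could be placed in such a shared memory region; the mpi_command layout and queue depth are assumptions introduced for illustration.

    /* Sketch of a lock-free single-producer/single-consumer command queue that
     * could live in a memory region shared by a thread 306 (producer) and its
     * proxy process 312 (consumer). */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define QUEUE_DEPTH 64              /* power of two */

    typedef struct {
        int    op;                      /* e.g. send, receive, collective */
        int    dest;                    /* destination MPI rank/endpoint */
        int    tag;
        void  *buffer;                  /* points into the shared region */
        size_t length;
    } mpi_command;

    typedef struct {
        _Atomic size_t head;            /* advanced by the consumer */
        _Atomic size_t tail;            /* advanced by the producer */
        mpi_command    slots[QUEUE_DEPTH];
    } command_queue;

    bool queue_push(command_queue *q, const mpi_command *cmd)
    {
        size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
        if (tail - head == QUEUE_DEPTH)
            return false;               /* queue full */
        q->slots[tail % QUEUE_DEPTH] = *cmd;
        atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
        return true;
    }

    bool queue_pop(command_queue *q, mpi_command *out)
    {
        size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
        if (head == tail)
            return false;               /* queue empty */
        *out = q->slots[head % QUEUE_DEPTH];
        atomic_store_explicit(&q->head, head + 1, memory_order_release);
        return true;
    }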
In block 512a, the computing node 102 performs the MPI operation using the proxy process 312a. As illustrated in
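Building on the command queue sketch above, the following illustrates how a proxy process might initialize its own MPI library instance in single-threaded mode and service forwarded commands; the attach_shared_queue() helper and the operation encoding are assumptions introduced only for illustration.

    /* Sketch of the proxy-process side. Assumes the command_queue and
     * mpi_command types and queue_pop() from the sketch above. */
    #include <mpi.h>

    extern command_queue *attach_shared_queue(void);   /* assumed helper: maps
                                                         * the shared region */

    int proxy_main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SINGLE, &provided);

        command_queue *q = attach_shared_queue();
        mpi_command cmd;
        for (;;) {
            if (!queue_pop(q, &cmd))
                continue;               /* poll; a real proxy might back off */
            if (cmd.op < 0)             /* assumed encoding: negative = shutdown */
                break;
            if (cmd.op == 0)            /* assumed encoding: 0 = send */
                MPI_Send(cmd.buffer, (int)cmd.length, MPI_BYTE,
                         cmd.dest, cmd.tag, MPI_COMM_WORLD);
            /* ... other operation types handled similarly ... */
        }

        MPI_Finalize();
        return 0;
    }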
Referring back to block 502, as described above, execution of the method 500 proceeds in parallel to blocks 504a, 504b. The blocks 504b, 508b, 510b, 512b correspond to the blocks 504a, 508a, 510a, 512a, respectively, but are executed by the computing node 102 using the thread 306b and the proxy process 312b rather than the thread 306a and the proxy process 312a. In other embodiments, the method 500 may similarly execute blocks 504, 508, 510, 512 in parallel for many threads 306 and proxy processes 312. The computing node 102 may perform numerous MPI operations in parallel, originating from many threads 306 and performed by many proxy processes 312.
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 includes a computing device for multi-threaded message passing, the computing device comprising a host process module to (i) create a first message passing interface endpoint for a first thread of a plurality of threads established by a host process of the computing device and (ii) assign the first thread to a first proxy process; a message passing module to (i) receive, during execution of the first thread, a first message passing interface operation associated with the first message passing interface endpoint and (ii) communicate the first message passing interface operation from the first thread to the first proxy process; and a proxy process module to perform the first message passing interface operation by the first proxy process.
Example 2 includes the subject matter of Example 1, and wherein the first message passing interface operation comprises a send operation, a receive operation, a ready-send operation, a collective operation, a synchronization operation, an accumulate operation, a get operation, or a put operation.
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to perform the first message passing interface operation by the first proxy process comprises to communicate by the first proxy process with a remote computing device using a communication subsystem of the computing device.
Example 4 includes the subject matter of any of Examples 1-3, and wherein to communicate using the communication subsystem of the computing device comprises to communicate using network resources of the communication subsystem, wherein the network resources are dedicated to the first proxy process.
Example 5 includes the subject matter of any of Examples 1-4, and wherein the network resources comprise a network adapter, a network port, or an amount of network bandwidth.
Example 6 includes the subject matter of any of Examples 1-5, and wherein to assign the first thread to the first proxy process comprises to pin the first thread and the first proxy process to a processor core of the computing device.
Example 7 includes the subject matter of any of Examples 1-6, and wherein the message passing module is further to return an operation result from the first proxy process to the first thread in response to performance of the first message passing interface operation.
Example 8 includes the subject matter of any of Examples 1-7, and wherein to communicate the first message passing interface operation from the first thread to the first proxy process comprises to communicate the first message passing interface operation using a shared memory region of the computing device.
Example 9 includes the subject matter of any of Examples 1-8, and wherein to communicate the first message passing interface operation using the shared memory region comprises to communicate the first message passing interface operation using a lock-free command queue of the computing device.
Example 10 includes the subject matter of any of Examples 1-9, and wherein to perform the first message passing interface operation comprises to perform the first message passing interface operation by the first proxy process using a first instance of a message passing interface library established by the first proxy process.
Example 11 includes the subject matter of any of Examples 1-10, and wherein the first instance of the message passing interface library comprises a first instance of the message passing interface library established in a single-threaded mode of execution.
Example 12 includes the subject matter of any of Examples 1-11, and wherein to receive the first message passing interface operation comprises to intercept the first message passing interface operation targeted for a shared instance of the message passing interface library established by the host process.
Example 13 includes the subject matter of any of Examples 1-12, and wherein the host process module is further to (i) create a second message passing interface endpoint for a second thread of the plurality of threads established by the host process of the computing device and (ii) assign the second thread to a second proxy process; the message passing module is further to (i) receive, during execution of the second thread, a second message passing interface operation associated with the second message passing interface endpoint and (ii) communicate the second message passing interface operation from the second thread to the second proxy process; and the proxy process module is further to perform the second message passing interface operation by the second proxy process.
Example 14 includes the subject matter of any of Examples 1-13, and wherein the host process module is further to (i) create a second message passing interface endpoint for the first thread and (ii) assign the first thread to a second proxy process; the message passing module is further to (i) receive, during the execution of the first thread, a second message passing interface operation associated with the second message passing interface endpoint and (ii) communicate the second message passing interface operation from the first thread to the second proxy process; and the proxy process module is further to perform the second message passing interface operation by the second proxy process.
Example 15 includes the subject matter of any of Examples 1-14, and wherein the host process module is further to (i) create a second message passing interface endpoint for a second thread of the plurality of threads established by the host process of the computing device and (ii) assign the second thread to the first proxy process; the message passing module is further to (i) receive, during execution of the second thread, a second message passing interface operation associated with the second message passing interface endpoint and (ii) communicate the second message passing interface operation from the second thread to the first proxy process; and the proxy process module is further to perform the second message passing interface operation by the first proxy process.
Example 16 includes a method for multi-threaded message passing, the method comprising creating, by a computing device, a first message passing interface endpoint for a first thread of a plurality of threads established by a host process of the computing device; assigning, by the computing device, the first thread to a first proxy process; receiving, by the computing device while executing the first thread, a first message passing interface operation associated with the first message passing interface endpoint; communicating, by the computing device, the first message passing interface operation from the first thread to the first proxy process; and performing, by the computing device, the first message passing interface operation by the first proxy process.
Example 17 includes the subject matter of Example 16, and wherein receiving the first message passing interface operation comprises receiving a send operation, a receive operation, a ready-send operation, a collective operation, a synchronization operation, an accumulate operation, a get operation, or a put operation.
Example 18 includes the subject matter of any of Examples 16 and 17, and wherein performing the first message passing interface operation by the first proxy process comprises communicating from the first proxy process to a remote computing device using a communication subsystem of the computing device.
Example 19 includes the subject matter of any of Examples 16-18, and wherein communicating using the communication subsystem of the computing device comprises communicating using network resources of the communication subsystem, wherein the network resources are dedicated to the first proxy process.
Example 20 includes the subject matter of any of Examples 16-19, and wherein the network resources comprise a network adapter, a network port, or an amount of network bandwidth.
Example 21 includes the subject matter of any of Examples 16-20, and wherein assigning the first thread to the first proxy process comprises pinning the first thread and the first proxy process to a processor core of the computing device.
Example 22 includes the subject matter of any of Examples 16-21, and further including returning, by the computing device, an operation result from the first proxy process to the first thread in response to performing the first message passing interface operation.
Example 23 includes the subject matter of any of Examples 16-22, and wherein communicating the first message passing interface operation from the first thread to the first proxy process comprises communicating the first message passing interface operation using a shared memory region of the computing device.
Example 24 includes the subject matter of any of Examples 16-23, and wherein communicating the first message passing interface operation using the shared memory region comprises communicating the first message passing interface operation using a lock-free command queue of the computing device.
Example 25 includes the subject matter of any of Examples 16-24, and wherein performing the first message passing interface operation comprises performing the first message passing interface operation by the first proxy process using a first instance of a message passing interface library established by the first proxy process.
Example 26 includes the subject matter of any of Examples 16-25, and wherein performing the first message passing interface operation by the first proxy process comprises performing the first message passing interface operation by the first proxy process using the first instance of the message passing interface library established in a single-threaded mode of execution.
Example 27 includes the subject matter of any of Examples 16-26, and wherein receiving the first message passing interface operation comprises intercepting the first message passing interface operation targeted for a shared instance of the message passing interface library established by the host process.
Example 28 includes the subject matter of any of Examples 16-27, and further including creating, by the computing device, a second message passing interface endpoint for a second thread of the plurality of threads established by the host process of the computing device; assigning, by the computing device, the second thread to a second proxy process; receiving, by the computing device while executing the second thread, a second message passing interface operation associated with the second message passing interface endpoint; communicating, by the computing device, the second message passing interface operation from the second thread to the second proxy process; and performing, by the computing device, the second message passing interface operation by the second proxy process.
Example 29 includes the subject matter of any of Examples 16-28, and further including creating, by the computing device, a second message passing interface endpoint for the first thread; assigning, by the computing device, the first thread to a second proxy process; receiving, by the computing device while executing the first thread, a second message passing interface operation associated with the second message passing interface endpoint; communicating, by the computing device, the second message passing interface operation from the first thread to the second proxy process; and performing, by the computing device, the second message passing interface operation by the second proxy process.
Example 30 includes the subject matter of any of Examples 16-29, and further including creating, by the computing device, a second message passing interface endpoint for a second thread of the plurality of threads established by the host process of the computing device; assigning, by the computing device, the second thread to the first proxy process; receiving, by the computing device while executing the second thread, a second message passing interface operation associated with the second message passing interface endpoint; communicating, by the computing device, the second message passing interface operation from the second thread to the first proxy process; and performing, by the computing device, the second message passing interface operation by the first proxy process.
Example 31 includes a computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 16-30.
Example 32 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 16-30.
Example 33 includes a computing device comprising means for performing the method of any of Examples 16-30.
Example 34 includes a computing device for multi-threaded message passing, the computing device comprising means for creating a first message passing interface endpoint for a first thread of a plurality of threads established by a host process of the computing device; means for assigning the first thread to a first proxy process; means for receiving, while executing the first thread, a first message passing interface operation associated with the first message passing interface endpoint; means for communicating the first message passing interface operation from the first thread to the first proxy process; and means for performing the first message passing interface operation by the first proxy process.
Example 35 includes the subject matter of Example 34, and wherein the means for receiving the first message passing interface operation comprises means for receiving a send operation, a receive operation, a ready-send operation, a collective operation, a synchronization operation, an accumulate operation, a get operation, or a put operation.
Example 36 includes the subject matter of any of Examples 34 and 35, and wherein the means for performing the first message passing interface operation by the first proxy process comprises means for communicating from the first proxy process to a remote computing device using a communication subsystem of the computing device.
Example 37 includes the subject matter of any of Examples 34-36, and wherein the means for communicating using the communication subsystem of the computing device comprises means for communicating using network resources of the communication subsystem, wherein the network resources are dedicated to the first proxy process.
Example 38 includes the subject matter of any of Examples 34-37, and wherein the network resources comprise a network adapter, a network port, or an amount of network bandwidth.
Example 39 includes the subject matter of any of Examples 34-38, and wherein the means for assigning the first thread to the first proxy process comprises means for pinning the first thread and the first proxy process to a processor core of the computing device.
Example 40 includes the subject matter of any of Examples 34-39, and further including means for returning an operation result from the first proxy process to the first thread in response to performing the first message passing interface operation.
Example 41 includes the subject matter of any of Examples 34-40, and wherein the means for communicating the first message passing interface operation from the first thread to the first proxy process comprises means for communicating the first message passing interface operation using a shared memory region of the computing device.
Example 42 includes the subject matter of any of Examples 34-41, and wherein the means for communicating the first message passing interface operation using the shared memory region comprises means for communicating the first message passing interface operation using a lock-free command queue of the computing device.
Example 43 includes the subject matter of any of Examples 34-42, and wherein the means for performing the first message passing interface operation comprises means for performing the first message passing interface operation by the first proxy process using a first instance of a message passing interface library established by the first proxy process.
Example 44 includes the subject matter of any of Examples 34-43, and wherein the means for performing the first message passing interface operation by the first proxy process comprises means for performing the first message passing interface operation by the first proxy process using the first instance of the message passing interface library established in a single-threaded mode of execution.
Example 45 includes the subject matter of any of Examples 34-44, and wherein the means for receiving the first message passing interface operation comprises means for intercepting the first message passing interface operation targeted for a shared instance of the message passing interface library established by the host process.
Example 46 includes the subject matter of any of Examples 34-45, and further including means for creating a second message passing interface endpoint for a second thread of the plurality of threads established by the host process of the computing device; means for assigning the second thread to a second proxy process; means for receiving, while executing the second thread, a second message passing interface operation associated with the second message passing interface endpoint; means for communicating the second message passing interface operation from the second thread to the second proxy process; and means for performing the second message passing interface operation by the second proxy process.
Example 47 includes the subject matter of any of Examples 34-46, and further including means for creating a second message passing interface endpoint for the first thread; means for assigning the first thread to a second proxy process; means for receiving, while executing the first thread, a second message passing interface operation associated with the second message passing interface endpoint; means for communicating the second message passing interface operation from the first thread to the second proxy process; and means for performing the second message passing interface operation by the second proxy process.
Example 48 includes the subject matter of any of Examples 34-47, and further including means for creating a second message passing interface endpoint for a second thread of the plurality of threads established by the host process of the computing device; means for assigning the second thread to the first proxy process; means for receiving, while executing the second thread, a second message passing interface operation associated with the second message passing interface endpoint; means for communicating the second message passing interface operation from the second thread to the first proxy process; and means for performing the second message passing interface operation by the first proxy process.