A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Embodiments relate generally to data processing, and more particularly, to improving performance of ring allreduce processing in computing systems.
Allreduce is a commonly used message passing interface (MPI) operation for data parallelism in training deep learning models across multiple computation units. Allreduce as used in training deep learning models often operates on very large messages and is constrained by the number of computation units that can be used. For a typical neural network workload, a message size is approximately 400 megabytes (MB), and only one core of a computation unit (such as a graphics processing unit (GPU) or a central processing unit (CPU)) can be dedicated to each MPI operation without sacrificing too much computing capability for the workload itself.
In some computing networks (such as 10 gigabit Ethernet (10 GbE)), a ring allreduce method is suitable for handling very large messages. This is partly because the ring allreduce method always has the same sender node and receiver node in each step, making network traffic predictable. However, due to limits of computer architectures, the ring allreduce method is not optimal in terms of network bandwidth utilization. In each step of the reduce-scatter stage, two chunks of a message must be reduced together. During the reduce time, there is no network traffic between nodes: each computation step has a data dependency on a communication result that must complete before the computation starts, and communication between nodes cannot resume until the computation finishes. Thus, no overlap of communication and computation is possible, thereby negatively affecting system performance.
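For illustration only, one step of the conventional reduce-scatter stage may be sketched in C with MPI as follows; the buffer names, the summation operator, and the use of a blocking MPI_Sendrecv are assumptions of this sketch rather than part of any embodiment. The structure makes the serialization visible: the reduction cannot begin until the receive completes, and the next transfer cannot begin until the reduction completes.

    /* One node's reduce-scatter stage in a conventional ring allreduce
     * (illustrative sketch). Assumed context: float data reduced by
     * summation, the message pre-split into nranks chunks of clen
     * elements in buf, a scratch chunk tmp, and ring neighbors
     * next/prev on communicator comm. */
    for (int step = 0; step < nranks - 1; step++) {
        int send_idx = (rank - step + nranks) % nranks;
        int recv_idx = (rank - step - 1 + nranks) % nranks;

        /* Communication: the network link is busy while the cores idle. */
        MPI_Sendrecv(&buf[send_idx * clen], clen, MPI_FLOAT, next, 0,
                     tmp, clen, MPI_FLOAT, prev, 0,
                     comm, MPI_STATUS_IGNORE);

        /* Computation: the cores are busy while the link idles; the next
         * send cannot be posted until this reduction finishes. */
        for (int i = 0; i < clen; i++)
            buf[recv_idx * clen + i] += tmp[i];
    }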
So that the manner in which the above-recited features of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of their scope. The figures are not to scale. In general, the same reference numbers will be used throughout the drawings and accompanying written description to refer to the same or like parts.
Implementations of the disclosure provide a double buffer technique for removing the dependencies between the communication and computation steps of a known ring allreduce method. In embodiments of the present invention, the communication and computation steps are overlapped and the computation steps do not result in additional processing overhead. This results in improved processing time for allreduce operations, thereby also improving bandwidth utilization.
Many parallel applications require access to reduced results across all processes rather than at just a root process. Just as MPI_Allgather is the all-to-all complement of MPI_Gather, MPI_Allreduce reduces values and distributes the results to all processes. The function prototype has the following format:
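    /* Standard MPI C binding (MPI-3 adds const to sendbuf). */
    int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count,
                      MPI_Datatype datatype, MPI_Op op, MPI_Comm comm);

Each process contributes count elements of the given datatype in sendbuf, and every process receives the element-wise reduction under op in recvbuf.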
The function MPI_Allreduce is identical to MPI_Reduce except that it does not need a root process identifier (since the results are distributed to all processes).
Embodiments of the present invention overcome this deficiency. Instead of evenly separating a message into N chunks (where N is the number of nodes, N being a natural number) and sending/reducing these chunks in a ring fashion, as illustrated above in
In embodiments, all nodes split the payload of the message evenly into chunks, wherein the number of chunks is equal to two times the number of nodes (2*N). The chunks are numbered from 0 to 2*N−1, where N is the number of nodes. All nodes are arranged in a virtual ring (e.g., node 0 sends chunks to node 1, node 1 sends chunks to node 2, . . . node N−1 sends chunks to node 0). Each node starts from a different even-numbered chunk of the message. For example, node 0 starts from chunk 0, node 1 starts from chunk 2, . . . node N−1 starts from chunk 2*(N−1). In a first initialization step, each node sends a chunk to the next node in the ring, and each node receives a chunk from a previous node in the ring. This populates the first chunk of a double buffer technique at each node (e.g., chunk number 0 at node 0, chunk number 2 at node 1, etc.). At a second initialization step, each node sends a new chunk (e.g., the chunk numbered one less than the current chunk, modulo 2*N) to the next node in the ring. In parallel, the next node reduces the chunk received at the first step with the local chunk of the same index in the send buffer. This populates the second chunk of the double buffer technique at each node. Once each node is populated with two chunks, each node in parallel sends the chunk just reduced to the next node and receives a new chunk from the previous node. This is repeated until all chunks have been fully reduced. Finally, each node passes fully reduced chunks along the ring until the fully reduced chunks have been propagated to all nodes.
At block 308, the current node sends a chunk at the current index in the send buffer of the current node to the next node in the ring. At block 310, the current node receives a chunk from the previous node in the ring and stores the received chunk at the current index of the receive buffer of the current node. This populates a second chunk at the current node during a second initialization step. At block 312, the current node reduces the chunk in the send buffer at the previous index of the receive buffer and the chunk in the receive buffer at the previous index of the receive buffer and stores the result at the previous index of the receive buffer. Processing continues with block 314 of
At block 314 of
An example double buffer ring allreduce process running on each node represented as pseudocode according to embodiments is shown below in Table 1. This process works for any number of nodes greater than one.
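As an additional illustrative aid, the process may be sketched in C with MPI as follows. The function name, the use of float data reduced by summation, the nonblocking send/receive calls, and the requirement that the element count divide evenly into 2*N chunks are assumptions of this sketch only; the pseudocode of Table 1 may differ in its details. The key property matches the description above: at every step of the reduce-scatter stage, the chunk received in the previous step is reduced while the current transfers are in flight, so communication and computation overlap.

    #include <mpi.h>

    /* Element-wise summation of one chunk: dst += src. */
    static void reduce_chunk(float *dst, const float *src, int clen) {
        for (int i = 0; i < clen; i++)
            dst[i] += src[i];
    }

    /* Wrap a chunk index into the range 0..nchunks-1. */
    static int wrap(int x, int nchunks) {
        return ((x % nchunks) + nchunks) % nchunks;
    }

    /* Double buffer ring allreduce (illustrative sketch). Assumes more
     * than one node and count divisible by 2*N. */
    void double_buffer_ring_allreduce(const float *sendbuf, float *recvbuf,
                                      int count, MPI_Comm comm) {
        int rank, n;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &n);
        int nchunks = 2 * n;               /* message split into 2*N chunks */
        int clen = count / nchunks;
        int next = (rank + 1) % n;
        int prev = (rank + n - 1) % n;

        /* Reduce-scatter: 2*N-2 pipelined steps. At step t this node sends
         * chunk 2*rank-t: its own starting chunks at t = 0 and t = 1 (the
         * two initialization steps), and thereafter the chunk it reduced at
         * step t-1. While the transfers are in flight, the chunk received
         * at step t-1 is reduced with the send-buffer chunk of the same
         * index. */
        for (int t = 0; t <= 2 * n - 3; t++) {
            int s = wrap(2 * rank - t, nchunks);      /* index to send    */
            int r = wrap(2 * rank - t - 2, nchunks);  /* index to receive */
            const float *src = (t < 2) ? &sendbuf[s * clen]
                                       : &recvbuf[s * clen];
            MPI_Request req[2];
            MPI_Isend(src, clen, MPI_FLOAT, next, 0, comm, &req[0]);
            MPI_Irecv(&recvbuf[r * clen], clen, MPI_FLOAT, prev, 0,
                      comm, &req[1]);
            if (t >= 1) {                             /* overlapped reduce */
                int p = wrap(2 * rank - t - 1, nchunks);
                reduce_chunk(&recvbuf[p * clen], &sendbuf[p * clen], clen);
            }
            MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
        }

        /* Reduce the last chunk received; this node now holds fully
         * reduced chunks 2*rank+1 and 2*rank+2 (modulo 2*N) in its
         * receive buffer. */
        int last = wrap(2 * rank + 1, nchunks);
        reduce_chunk(&recvbuf[last * clen], &sendbuf[last * clen], clen);

        /* Final stage: circulate the fully reduced chunks around the ring
         * for another 2*N-2 steps until every node holds all of them. */
        for (int u = 0; u <= 2 * n - 3; u++) {
            int s = wrap(2 * rank + 2 - u, nchunks);
            int r = wrap(2 * rank - u, nchunks);
            MPI_Sendrecv(&recvbuf[s * clen], clen, MPI_FLOAT, next, 1,
                         &recvbuf[r * clen], clen, MPI_FLOAT, prev, 1,
                         comm, MPI_STATUS_IGNORE);
        }
    }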
In some embodiments, the computing device is to implement double buffer ring allreduce processing, as provided in
The computing device 500 may additionally include one or more of the following: cache 562, a graphical processing unit (GPU) 512 (which may be the hardware accelerator in some implementations), a wireless input/output (I/O) interface 520, a wired I/O interface 530, memory circuitry 540, power management circuitry 550, non-transitory storage device 560, and a network interface 570 for connection to a network 572. The following discussion provides a brief, general description of the components forming the illustrative computing device 500. Example, non-limiting computing devices 500 may include a desktop computing device, blade server device, workstation, or similar device or system.
In embodiments, the processor cores 518 are capable of executing machine-readable instruction sets 514, reading data and/or instruction sets 514 from one or more storage devices 560 and writing data to the one or more storage devices 560. Those skilled in the relevant art will appreciate that the illustrated embodiments as well as other embodiments may be practiced with other processor-based device configurations, including portable electronic or handheld electronic devices, for instance smartphones, portable computers, wearable computers, consumer electronics, personal computers (“PCs”), network PCs, minicomputers, server blades, mainframe computers, and the like. For example, machine-readable instruction sets 514 may include instructions to implement double buffer ring allreduce processing, as provided in
The processor cores 518 may include any number of hardwired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements that are disposed partially or wholly in a PC, server, or other computing system capable of executing processor-readable instructions.
The computing device 500 includes a bus or similar communications link 516 that communicably couples and facilitates the exchange of information and/or data between various system components including the processor cores 518, the cache 562, the graphics processor circuitry 512, one or more wireless I/O interfaces 520, one or more wired I/O interfaces 530, one or more storage devices 560, and/or one or more network interfaces 570. The computing device 500 may be referred to in the singular herein, but this is not intended to limit the embodiments to a single computing device 500, since in certain embodiments, there may be more than one computing device 500 that incorporates, includes, or contains any number of communicably coupled, collocated, or remote networked circuits or devices.
The processor cores 518 may include any number, type, or combination of currently available or future developed devices capable of executing machine-readable instruction sets.
The processor cores 518 may include (or be coupled to) but are not limited to any current or future developed single- or multi-core processor or microprocessor, such as: one or more systems on a chip (SOCs); central processing units (CPUs); digital signal processors (DSPs); graphics processing units (GPUs); application-specific integrated circuits (ASICs); programmable logic units; field programmable gate arrays (FPGAs); and the like. Unless described otherwise, the construction and operation of the various blocks shown in
The system memory 540 may include read-only memory (“ROM”) 542 and random-access memory (“RAM”) 546. A portion of the ROM 542 may be used to store or otherwise retain a basic input/output system (“BIOS”) 544. The BIOS 544 provides basic functionality to the computing device 500, for example by causing the processor cores 518 to load and/or execute one or more machine-readable instruction sets 514. In embodiments, at least some of the one or more machine-readable instruction sets 514 cause at least a portion of the processor cores 518 to provide, create, produce, transition, and/or function as a dedicated, specific, and particular machine, for example a word processing machine, a digital image acquisition machine, a media playing machine, a gaming system, a communications device, a smartphone, a neural network, a machine learning model, or similar devices.
The computing device 500 may include at least one wireless input/output (I/O) interface 520. The at least one wireless I/O interface 520 may be communicably coupled to one or more physical output devices 522 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wireless I/O interface 520 may communicably couple to one or more physical input devices 524 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The at least one wireless I/O interface 520 may include any currently available or future developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to: BLUETOOTH®, near field communication (NFC), and similar.
The computing device 500 may include one or more wired input/output (I/O) interfaces 530. The at least one wired I/O interface 530 may be communicably coupled to one or more physical output devices 522 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wired I/O interface 530 may be communicably coupled to one or more physical input devices 524 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The wired I/O interface 530 may include any currently available or future developed I/O interface. Example wired I/O interfaces include but are not limited to: universal serial bus (USB), IEEE 1394 (“FireWire”), and similar.
The computing device 500 may include one or more communicably coupled, non-transitory, data storage devices 560. The data storage devices 560 may include one or more hard disk drives (HDDs) and/or one or more solid-state storage devices (SSDs). The one or more data storage devices 560 may include any current or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 560 may include, but are not limited to, any current or future developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more electro-resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof. In some implementations, the one or more data storage devices 560 may include one or more removable storage devices, such as one or more flash drives, flash memories, flash storage units, or similar appliances or devices capable of communicable coupling to and decoupling from the computing device 500.
The one or more data storage devices 560 may include interfaces or controllers (not shown) communicatively coupling the respective storage device or system to the bus 516. The one or more data storage devices 560 may store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logical structures, and/or other data useful to the processor cores 518 and/or graphics processor circuitry 512 and/or one or more applications executed on or by the processor cores 518 and/or graphics processor circuitry 512. In some instances, one or more data storage devices 560 may be communicably coupled to the processor cores 518, for example via the bus 516 or via one or more wired communications interfaces 530 (e.g., Universal Serial Bus or USB); one or more wireless communications interfaces 520 (e.g., Bluetooth®, Near Field Communication or NFC); and/or one or more network interfaces 570 (IEEE 802.3 or Ethernet, IEEE 802.11, or Wi-Fi®, etc.).
Processor-readable instruction sets 514 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 540. Such instruction sets 514 may be transferred, in whole or in part, from the one or more data storage devices 560. The instruction sets 514 may be loaded, stored, or otherwise retained in system memory 540, in whole or in part, during execution by the processor cores 518 and/or graphics processor circuitry 512.
The computing device 500 may include power management circuitry 550 that controls one or more operational aspects of the energy storage device 552. In embodiments, the energy storage device 552 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In embodiments, the energy storage device 552 may include one or more supercapacitors or ultracapacitors. In embodiments, the power management circuitry 550 may alter, adjust, or control the flow of energy from an external power source 554 to the energy storage device 552 and/or to the computing device 500. The power source 554 may include, but is not limited to, a solar power system, a commercial electric grid, a portable generator, an external energy storage device, or any combination thereof.
For convenience, the processor cores 518, the graphics processor circuitry 512, the wireless I/O interface 520, the wired I/O interface 530, the storage device 560, and the network interface 570 are illustrated as communicatively coupled to each other via the bus 516, thereby providing connectivity between the above-described components. In alternative embodiments, the above-described components may be communicatively coupled in a different manner than illustrated in
Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing computing device 500, for example, are shown in
The machine-readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine-readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein.
In another example, the machine-readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine-readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine-readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example process of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended.
The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
The following examples pertain to further embodiments. Example 1 is an apparatus to perform ring allreduce operations. The apparatus of Example 1 comprises instructions to send a chunk of a message in a receive buffer at a current index of a send buffer to a next node in a virtual ring of nodes; receive a chunk of the message from a previous node in the virtual ring of nodes and store the chunk at the current index of the receive buffer; reduce a chunk in a send buffer at a previous index of the receive buffer and a chunk in the receive buffer at a previous index of the receive buffer and store a result at the previous index of the receive buffer; repeat the sending, receiving and storing, and reducing and storing steps until all chunks of the message are reduced; and send reduced chunks to the next node and receive reduced chunks from the previous node.
In Example 2, the subject matter of Example 1 can optionally include wherein at a first initialization step, send a chunk at the current index of the send buffer to the next node, receive a chunk from the previous node and store the received chunk at the current index of the receive buffer, and update the current index of the send buffer and the current index of the receive buffer.
In Example 3, the subject matter of Example 2 can optionally include wherein at a second initialization step, send a chunk at the current index of the send buffer to the next node, receive a chunk from the previous node and store the received chunk at the current index of the receive buffer, and reduce a chunk in the send buffer at the previous index of the receive buffer and a chunk in the receive buffer at the previous index of the receive buffer and store a result at the previous index of the receive buffer.
In Example 4, the subject matter of Example 1 can optionally include wherein reducing chunks comprises performing a ring allreduce operation on the chunks.
In Example 5, the subject matter of Example 1 can optionally include wherein the message is comprised of 2*N chunks, where N is a number of nodes in the virtual ring.
In Example 6, the subject matter of Example 1 can optionally include wherein the send buffer comprises 2*N entries and the receive buffer comprises 2*N entries, where N is a number of nodes in the virtual ring.
Example 7 is a method for performing ring allreduce operations. The method of Example 7 can include sending a chunk of a message in a receive buffer at a current index of a send buffer to a next node in a virtual ring of nodes; receiving a chunk of the message from a previous node in the virtual ring of nodes and storing the chunk at the current index of the receive buffer; reducing a chunk in a send buffer at a previous index of the receive buffer and a chunk in the receive buffer at a previous index of the receive buffer and storing a result at the previous index of the receive buffer; repeating the sending, receiving and storing, and reducing and storing steps until all chunks of the message are reduced; and sending reduced chunks to the next node and receiving reduced chunks from the previous node.
In Example 8, the subject matter of Example 7 can optionally include wherein at a first initialization step, sending a chunk at the current index of the send buffer to the next node, receiving a chunk from the previous node and storing the received chunk at the current index of the receive buffer, and updating the current index of the send buffer and the current index of the receive buffer.
In Example 9, the subject matter of Example 8 can optionally include wherein at a second initialization step, sending a chunk at the current index of the send buffer to the next node, receiving a chunk from the previous node and storing the received chunk at the current index of the receive buffer, and reducing a chunk in the send buffer at the previous index of the receive buffer and a chunk in the receive buffer at the previous index of the receive buffer and storing a result at the previous index of the receive buffer.
In Example 10, the subject matter of Example 7 can optionally include wherein reducing chunks comprises performing a ring allreduce operation on the chunks.
In Example 11, the subject matter of Example 7 can optionally include wherein the message is comprised of 2*N chunks, where N is a number of nodes in the virtual ring.
In Example 12, the subject matter of Example 7 can optionally include wherein the send buffer comprises 2*N entries and the receive buffer comprises 2*N entries, where N is a number of nodes in the virtual ring.
Example 13 is at least one non-transitory machine-readable storage medium for storing instructions for performing ring allreduce operations. The at least one non-transitory machine-readable storage medium of Example 13 comprises instructions that, when executed, cause at least one processor to at least: send a chunk of a message in a receive buffer at a current index of a send buffer to a next node in a virtual ring of nodes; receive a chunk of the message from a previous node in the virtual ring of nodes and store the chunk at the current index of the receive buffer; reduce a chunk in a send buffer at a previous index of the receive buffer and a chunk in the receive buffer at a previous index of the receive buffer and store a result at the previous index of the receive buffer; repeat the sending, receiving and storing, and reducing and storing steps until all chunks of the message are reduced; and send reduced chunks to the next node and receive reduced chunks from the previous node.
In Example 14, the subject matter of Example 13 can optionally include instructions that when executed further cause the at least one processor to at a first initialization step, send a chunk at the current index of the send buffer to the next node, receive a chunk from the previous node and store the received chunk at the current index of the receive buffer, and update the current index of the send buffer and the current index of the receive buffer.
In Example 15, the subject matter of Example 14 can optionally include instructions that when executed further cause the at least one processor to at a second initialization step, send a chunk at the current index of the send buffer to the next node, receive a chunk from the previous node and store the received chunk at the current index of the receive buffer, and reduce a chunk in the send buffer at the previous index of the receive buffer and a chunk in the receive buffer at the previous index of the receive buffer and store a result at the previous index of the receive buffer.
In Example 16, the subject matter of Example 13 can optionally include wherein reducing chunks comprises performing a ring allreduce operation on the chunks.
In Example 17, the subject matter of Example 13 can optionally include wherein the message is comprised of 2*N chunks, where N is a number of nodes in the virtual ring.
In Example 18, the subject matter of Example 13 can optionally include wherein the send buffer comprises 2*N entries and the receive buffer comprises 2*N entries, where N is a number of nodes in the virtual ring.
Example 19 is an apparatus to perform ring allreduce operations. The apparatus of Example 19 comprises means for sending a chunk of a message in a receive buffer at a current index of a send buffer to a next node in a virtual ring of nodes; means for receiving a chunk of the message from a previous node in the virtual ring of nodes and storing the chunk at the current index of the receive buffer; means for reducing a chunk in a send buffer at a previous index of the receive buffer and a chunk in the receive buffer at a previous index of the receive buffer and storing a result at the previous index of the receive buffer; means for repeating the sending, receiving and storing, and reducing and storing steps until all chunks of the message are reduced; and means for sending reduced chunks to the next node and receiving reduced chunks from the previous node.
In Example 20, the subject matter of Example 19 can optionally include wherein at a first initialization step, means for sending a chunk at the current index of the send buffer to the next node, means for receiving a chunk from the previous node and storing the received chunk at the current index of the receive buffer, and means for updating the current index of the send buffer and the current index of the receive buffer.
In Example 21, the subject matter of Example 20 can optionally include wherein at a second initialization step, means for sending a chunk at the current index of the send buffer to the next node, means for receiving a chunk from the previous node and storing the received chunk at the current index of the receive buffer, and means for reducing a chunk in the send buffer at the previous index of the receive buffer and a chunk in the receive buffer at the previous index of the receive buffer and storing a result at the previous index of the receive buffer.
In Example 22, the subject matter of Example 19 can optionally include wherein means for reducing chunks comprises means for performing a ring allreduce operation on the chunks.
In Example 23, the subject matter of Example 19 can optionally include wherein the message is comprised of 2*N chunks, where N is a number of nodes in the virtual ring.
In Example 24, the subject matter of Example 19 can optionally include wherein the send buffer comprises 2*N entries and the receive buffer comprises 2*N entries, where N is a number of nodes in the virtual ring.
The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Persons skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the features set forth in the appended claims.
This application claims, under 35 U.S.C. § 371, the benefit of and priority to International Application No. PCT/CN2020/132818, filed Nov. 30, 2020, titled METHOD OF RING ALLREDUCE PROCESSING, the entire content of which is incorporated herein by reference.