A direct memory access (DMA) engine is a module that coordinates direct memory access transfers of data between devices (e.g., input/output interfaces and display controllers) and memory, or between different locations in memory, within a computer system. A DMA engine is often located on a processor, such as a central processing unit (CPU) or a graphics processing unit (GPU), and receives commands from an application running on the processor. Based on the commands, the DMA engine reads data from a DMA source (e.g., a first buffer defined in memory) and writes data to a DMA destination (e.g., a second buffer defined in memory).
The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.
Conventional processors include one or more direct memory access engines to read and write blocks of data stored in a system memory. The direct memory access engines relieve processor cores from the burden of managing transfers. In response to data transfer requests from the processor cores, the direct memory access engines provide requisite control information to the corresponding source and destination such that data transfer operations can be executed without delaying computation code, thus allowing communication and computation to overlap in time. With the direct memory access engine asynchronously handling the formation and communication of control information, processor cores are freed to perform other tasks while awaiting satisfaction of the data transfer requests.
Distributed architectures, in which physically or logically separated processing units are operated in a coordinated fashion via a high-performance interconnection, are increasingly common alternatives to monolithic processing architectures. One example of such a distributed architecture is a chiplet architecture, which captures the advantages of fabricating some portions of a processing unit at smaller nodes while allowing other portions to be fabricated at nodes having larger dimensions if the other portions do not benefit from the reduced scales of the smaller nodes. In some cases, the number of direct memory access engines is higher in chiplet-based systems relative to an equivalent monolithic, non-chiplet-based design.
To increase system performance by improving utilization of direct memory access engines, the systems and techniques described herein provide for software-managed routing and splitting of DMA transfer commands across multiple DMA engines.
Fabric 104 is representative of any communication interconnect that complies with any of various types of protocols utilized for communicating among the components of the computing system 100. Fabric 104 provides the data paths, switches, routers, and other logic that connect the processing units 102, I/O interfaces 106, memory controller(s) 108, display controller 110, and other device(s) 112 to each other. Fabric 104 handles the request, response, and data traffic, as well as probe traffic to facilitate coherency. Fabric 104 also handles interrupt request routing and configuration access paths to the various components of computing system 100. Additionally, fabric 104 handles configuration requests, responses, and configuration data traffic. In some embodiments, fabric 104 is bus-based, including shared bus configurations, crossbar configurations, and hierarchical buses with bridges. In other embodiments, fabric 104 is packet-based, and hierarchical with bridges, crossbar, point-to-point, or other interconnects. From the point of view of fabric 104, the other components of computing system 100 are referred to as “clients”. Fabric 104 is configured to process requests generated by various clients and pass the requests on to other clients.
Memory controller(s) 108 are representative of any number and type of memory controllers coupled to any number and type of memory device(s). For example, the type of memory device(s) coupled to memory controller(s) 108 include Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), NAND Flash memory, NOR flash memory, Ferroelectric Random Access Memory (FeRAM), or others. Memory controller(s) 108 are accessible by processors 102, I/O interfaces 106, display controller 110, and other device(s) 112 via fabric 104. I/O interfaces 106 are representative of any number and type of I/O interfaces (e.g., peripheral component interconnect (PCI) bus, PCI-Extended (PCI-X), PCIE (PCI Express) bus, gigabit Ethernet (GBE) bus, universal serial bus (USB)). Various types of peripheral devices are coupled to I/O interfaces 106. Such peripheral devices include (but are not limited to) displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth. Other device(s) 112 are representative of any number and type of devices (e.g., multimedia device, video codec).
In various embodiments, each of the processors 102 is a parallel processor (e.g., vector processors, graphics processing units (GPUs), general-purpose GPUs (GPGPUs), non-scalar processors, highly-parallel processors, artificial intelligence (AI) processors, inference engines, machine learning processors, other multithreaded processing units, and the like). Each parallel processor 102 is constructed as a multi-chip module (e.g., a semiconductor die package) including two or more base integrated circuit dies (described in more detail below with respect to
In various embodiments, the host processor 114 executes a number of processes, such as executing one or more application(s) 116 that generate commands and executing a user mode driver 118 (or other drivers, such as a kernel mode driver). In various embodiments, the one or more applications 116 include applications that utilize the functionality of the processors 102, such as applications that generate work in the system 100 or an operating system (OS). An application 116 may include one or more graphics instructions that instruct the processors 102 to render a graphical user interface (GUI) and/or a graphics scene. For example, the graphics instructions may include instructions that define a set of one or more graphics primitives to be rendered by the processors 102. Although various embodiments of DMA transfer command routing are described below in the context of runtime user mode drivers, it should be recognized that software-managed routing of DMA transfer commands is not limited to such contexts. In various embodiments, the methods and architectures described are applicable to any of a variety of software managers such as kernel mode drivers, operating systems, hypervisors, and the like without departing from the scope of this disclosure.
In some embodiments, the application 116 utilizes a graphics application programming interface (API) 120 to invoke a user mode driver 118 (or a similar GPU driver). User mode driver 118 issues one or more commands to the one or more processors 102 for performing compute operations (e.g., rendering one or more graphics primitives into displayable graphics images). Based on the instructions issued by application 116 to the user mode driver 118, the user mode driver 118 formulates one or more commands that specify one or more operations for processors 102 to perform. In some embodiments, the user mode driver 118 is a part of the application 116 running on the host processor 114. For example, the user mode driver 118 may be part of a gaming application running on the host processor 114. Similarly, a kernel mode driver (not shown) may be part of an operating system running on the host processor 114.
As described in more detail with respect to
Software executing at the host processor 114 (such as runtime user mode driver 118) performs software-managed coordination of DMA transfer command execution across the various processors 102. In various embodiments, as described below with respect to
Referring now to
In some embodiments, each of the processors 204A-N includes one or more processing devices. In one embodiment, at least one of processors 204A-N includes one or more general purpose processing devices, such as CPUs. In some implementations, such processing devices are implemented using processor 102 as shown and described with respect to
The I/O interfaces 206 include one or more I/O interfaces (e.g., peripheral component interconnect (PCI) bus, PCI-Extended (PCI-X), PCIE (PCI Express) bus, gigabit Ethernet (GBE) bus, universal serial bus (USB), and the like). In some implementations, I/O interfaces 206 are implemented using input driver 112 and/or output driver 114 as shown and described with respect to
In various embodiments, each processor includes a cache subsystem with one or more levels of caches. In some embodiments, each of the processors 204A-N includes a cache (e.g., level three (L3) cache) which is shared among multiple processor cores of a core complex. The memory controller 210 includes at least one memory controller accessible by processors 204A-N, such as accessible via intra-die interconnect 208. In various embodiments, memory controller 210 includes one or more of any suitable type of memory controller. Each of the memory controllers are coupled to (or otherwise in communication with) and control access to any number and type of memory devices (not shown). In some implementations, such memory devices include dynamic random access memory (DRAM), static random access memory (SRAM), NAND Flash memory, NOR flash memory, ferroelectric random access memory (FeRAM), or any other suitable memory device. The intra-die interconnect 208 includes any computer communications medium suitable for communication among the devices shown in
In various embodiments, as described below with respect to
It should be recognized that although the graphics processing stacked die chiplets 304 are described below in the particular context of parallel accelerated processor (e.g., GPU) terminology for ease of illustration and description, in various embodiments, the architecture described is applicable to any of a variety of types of parallel processors (such as previously described more broadly with reference to
In various embodiments, the processor multi-chip module 302 includes an inter-chip data fabric 306 that operates as a high-bandwidth die-to-die interconnect between chiplets (e.g., between any combination of the first graphics processing stacked die chiplet 304A, the second graphics processing stacked die chiplet 304B, and the third graphics processing stacked die chiplet 304C). In some embodiments, the processor multi-chip module 302 includes one or more processor cores 308 (e.g., CPUs and/or GPUs, or processor core dies) formed over each of the chiplets 304A-304C. Additionally, in various embodiments, each of the chiplets 304A-304C includes one or more levels of cache memory 310 and one or more memory PHYs (not shown) for communicating with external system memory modules 312, such as dynamic random access memory (DRAM) modules.
Each of the chiplets 304A-304C also includes one or more DMA engines 314. In various embodiments, the one or more DMA engines 314 coordinate DMA transfers of data between devices and memory (or between different locations in memory) within system 300. The one or more DMA engines 314 also coordinate moving of data between the multiple devices/accelerators while computations are performed on other data at, for example, the processor cores 308. In some embodiments, the one or more DMA engines 314 are part of a DMA controller (not shown), but the terms DMA engine and DMA controller are used interchangeably herein. The DMA engines 314, in response to commands, operate to transfer data into and out of, for example, the one or more memory modules 312 without involvement of the processor cores 308. Similarly, the DMA engines 314, in some embodiments, perform intra-chip data transfers. As will be appreciated, the DMA engines 314 relieve the processor cores from the burden of managing data transfers and, in various embodiments, are used as a global data transfer agent to handle various data transfer requirements from software, such as memory-to-memory data copying.
The one or more DMA engines 314 provide for fetching and decoding of command packets from application/agent queues and respective DMA buffers to perform the desired data transfer operations as specified by DMA commands, also known as descriptors. DMA commands include memory flow commands that transfer or control the transfer of memory locations containing data or instructions (e.g., read/get or write/put commands for transferring data in or out of memory). The DMA command descriptors indicate, in various embodiments, a source address from which to read the data, a transfer size, and a destination address to which the data are to be written for each data transfer operation. The descriptors are commonly organized in memory as a linked list, or chain, in which each descriptor contains a field indicating the address in the memory of the next descriptor to be executed. Alternatively, in various embodiments, the descriptors are organized as an array of fixed-size commands with valid bits, and the one or more DMA engines 314 stop upon reaching an invalid command. The last descriptor in the list has a null pointer in the "next descriptor" field, indicating to the DMA engine that there are no more commands to be executed and that the DMA engine should become idle once it has reached the end of the chain.
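To make the descriptor chain concrete, the following C sketch models one possible layout and the chain walk described above; the structure fields, the `dma_execute_chain` helper, and the use of `memcpy` in place of actual engine hardware are illustrative assumptions, not the layout of any particular DMA engine.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical descriptor layout: a source, a destination, a transfer
 * size, and a pointer to the next descriptor; NULL marks end of chain. */
struct dma_descriptor {
    uint64_t src_addr;              /* address to read data from        */
    uint64_t dst_addr;              /* address to write data to         */
    uint32_t transfer_size;         /* number of bytes to move          */
    uint32_t status;                /* set by the engine when executed  */
    struct dma_descriptor *next;    /* NULL "next descriptor" ends chain */
};

/* Walk the chain as described above: execute each descriptor, mark it
 * executed, and go idle at the null "next" field. A memcpy stands in
 * for the hardware copy. */
static void dma_execute_chain(struct dma_descriptor *d)
{
    for (; d != NULL; d = d->next) {
        memcpy((void *)(uintptr_t)d->dst_addr,
               (const void *)(uintptr_t)d->src_addr, d->transfer_size);
        d->status = 1;              /* signal the application: executed */
    }
}

int main(void)
{
    char src[16] = "chiplet payload", dst[16] = {0};
    struct dma_descriptor tail = { (uintptr_t)src, (uintptr_t)dst,
                                   sizeof src, 0, NULL };
    dma_execute_chain(&tail);
    printf("%s\n", dst);            /* prints "chiplet payload"         */
    return 0;
}
```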
In response to data transfer requests from the processor cores, the DMA engines 314 provide the requisite control information to the corresponding source and destination so that the data transfer requests are satisfied. Because the DMA engines 314 handle the formation and communication of the control information, processor cores are freed to perform other tasks while awaiting satisfaction of the data transfer requests. In various embodiments, each of the DMA engines 314 includes one or more specialized auxiliary processor(s) that transfer data between locations in memory, and/or between peripheral input/output (I/O) devices and memory, without intervention of processor core(s) or CPUs.
In some embodiments, demand for DMA is handled by placing DMA commands generated by one or more of the processor cores 308 in memory mapped IO (MMIO) locations such as at DMA buffer(s) 316 (also interchangeably referred to as DMA queues for holding DMA transfer commands). In various embodiments, the DMA buffer is a hardware structure into which read or write instructions are transferred such that the DMA engines 314 can read DMA commands out of it (rather than, e.g., needing to go to DRAM memory). To perform data transfer operations, in various embodiments, the DMA engines 314 receive instructions (e.g., DMA transfer commands/data transfer requests) generated by the processor cores 308 by accessing a sequence of commands in the DMA buffer(s) 316. The DMA engines 314 then retrieve the DMA commands (also known as descriptors) from the DMA buffer(s) 316 for processing. In some embodiments, the DMA commands specify, for example, a start address for direct virtual memory access (DVMA) and I/O bus accesses, and a transfer length up to a given maximum.
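As one illustration of such a command queue, the sketch below models a fixed-depth DMA buffer with producer and consumer indices; the depth, the field names, and the single-producer/single-consumer simplification are assumptions for illustration only.

```c
#include <stdint.h>

/* Hypothetical MMIO-resident command queue ("DMA buffer"): cores enqueue
 * commands at the write index, the engine consumes at the read index,
 * so commands need not round-trip through DRAM. */
#define QUEUE_DEPTH 64u                 /* illustrative power-of-two depth */

struct dma_cmd { uint64_t src, dst; uint32_t len; };

struct dma_queue {
    struct dma_cmd slots[QUEUE_DEPTH];
    volatile uint32_t wr;               /* advanced by producing cores   */
    volatile uint32_t rd;               /* advanced by the DMA engine    */
};

static int queue_push(struct dma_queue *q, struct dma_cmd c)
{
    if (q->wr - q->rd == QUEUE_DEPTH)
        return -1;                      /* full: caller must retry later */
    q->slots[q->wr % QUEUE_DEPTH] = c;
    q->wr++;
    return 0;
}

static int queue_pop(struct dma_queue *q, struct dma_cmd *out)
{
    if (q->rd == q->wr)
        return -1;                      /* empty: engine goes idle       */
    *out = q->slots[q->rd % QUEUE_DEPTH];
    q->rd++;
    return 0;
}
```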
Although the DMA buffer(s) 316 are illustrated in
The DMA engines 314 access DMA transfer commands (or otherwise receive commands) from the DMA buffer(s) 316 over a bus (not shown). Based on the received instructions, in some embodiments, the DMA engines 314 read and buffer data from any memory (e.g., memory modules 312) via the data fabric 306, and write the buffered data to any memory via the data fabric 306. In some implementations, a DMA source and DMA destination are physically located on different devices (e.g., different chiplets). Similarly, in multi-processor systems, the DMA source and DMA destination are located on different devices associated with different processors in some cases. In such cases, the DMA engine 314 resolves virtual addresses to obtain physical addresses, and issues remote read and/or write commands to effect the DMA transfer. For example, in various embodiments, based on the received instructions, DMA engines 314 send a message to a data fabric device with instructions to effect a DMA transfer.
During DMA, the one or more processor cores 308 queue DMA commands in the DMA buffer(s) 316 and can signal their presence to the DMA engines 314. For example, in some embodiments, an application program running on the system 300 prepares an appropriate chain of descriptors in memory accessible to the DMA engine (e.g., DMA buffers 316) to initiate a chain of DMA data transfers. The processor cores 308 then send a message (or other notification) to the DMA engine 314 indicating the memory address of the first descriptor in the chain, which is a request to the DMA engine to start execution of the descriptors. The application typically sends the message to the "doorbell" of the DMA engine—a control register with a certain bus address that is specified for this purpose. Sending such a message to initiate DMA execution is known as "ringing the doorbell" of the DMA engine 314. The DMA engine 314 responds by reading and executing the first descriptor. It then updates a status field of the descriptor to indicate to the application that the descriptor has been executed. The DMA engine 314 follows the "next" field through the entire linked list, marking each descriptor as executed, until it reaches the null pointer in the last descriptor. After executing the last descriptor, the DMA engine 314 becomes idle and is ready to receive a new list for execution.
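The doorbell handshake can be pictured with the short C sketch below, reusing the `dma_descriptor` chain from the earlier sketch; modeling the doorbell as a plain `volatile` variable (rather than a memory-mapped control register at a specified bus address) and the function names are simplifying assumptions.

```c
#include <stdint.h>

struct dma_descriptor;                      /* as in the earlier sketch */
void dma_execute_chain(struct dma_descriptor *d);

/* Hypothetical doorbell: a control register holding the memory address
 * of the first descriptor in the chain; zero means no work is queued. */
static volatile uint64_t doorbell;

/* Application side: "ringing the doorbell" is a single store of the
 * first descriptor's address to the register. */
static void ring_doorbell(struct dma_descriptor *first)
{
    doorbell = (uint64_t)(uintptr_t)first;
}

/* Engine side: on a ring, fetch the chain head, execute every descriptor
 * through the linked list, then return to idle awaiting a new list. */
static void dma_engine_service(void)
{
    uint64_t head = doorbell;
    if (head == 0)
        return;                             /* idle: no work queued     */
    doorbell = 0;                           /* accept the request       */
    dma_execute_chain((struct dma_descriptor *)(uintptr_t)head);
}
```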
In various embodiments, such as illustrated in
In this manner, the interconnect 318 forms part of a data fabric which facilitates communication among components of multi-processor computing system 300. Further, the inter-chip data fabric 306 extends the data fabric over the various communicably coupled graphics processing stacked die chiplets 304 and I/O interfaces (not shown) which also form part of the data fabric. In various embodiments, the interconnect 318 includes any computer communications medium suitable for communication among the devices shown in
As will be appreciated, the inter-chip data fabric 306 and/or the interconnects 318 often have such a high bandwidth (such as in modern architectures with a larger number of buses and interconnects between system components, particularly in high performance computing and machine learning systems) that a single DMA engine is not capable of saturating available data fabric bandwidth. In various embodiments, and as described in more detail below, the system 300 utilizes the increased number of DMA engines 314 (e.g., one per chiplet 304 as illustrated in the embodiment of
During operation, in response to notifications (e.g., doorbell rings), the DMA engine 314 reads and executes the DMA transfer commands (with their associated parameters) from the DMA buffers 316 to execute data transfer operations and packet transfers. In various embodiments, the operation parameters (e.g., DMA command parameters) include the base address, the stride, the element size, and the number of elements to communicate, for both the sender and the receiver sides. In particular, the DMA engines 314 are configured such that multiple DMA engines 314 across multiple dies (e.g., MCMs 302) or chiplets 304 read the same location containing the packet with DMA transfer parameters. Subsequently, as described in more detail below, system software (e.g., user mode driver 118 of
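A strided transfer described by such parameters might be modeled as below; the struct name and address helpers are hypothetical, chosen only to mirror the parameters listed above.

```c
#include <stdint.h>

/* Hypothetical packet of DMA transfer parameters for a strided copy,
 * carrying the fields named above for both sender and receiver sides. */
struct dma_xfer_params {
    uint64_t src_base, dst_base;        /* base addresses               */
    uint32_t src_stride, dst_stride;    /* bytes between elements       */
    uint32_t elem_size;                 /* bytes per element            */
    uint32_t num_elems;                 /* number of elements to move   */
};

/* Element i sits at base + i * stride on each side; the engine issues
 * num_elems copies of elem_size bytes each. */
static uint64_t src_addr_of(const struct dma_xfer_params *p, uint32_t i)
{
    return p->src_base + (uint64_t)i * p->src_stride;
}

static uint64_t dst_addr_of(const struct dma_xfer_params *p, uint32_t i)
{
    return p->dst_base + (uint64_t)i * p->dst_stride;
}
```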
Referring now to
In various embodiments, the system 400 executes any of various types of software applications. In some embodiments, as part of executing a software application (not shown), the host processor 402 of system 400 launches tasks to be executed at the PAPs 406. For example, when a software application executing at the host processor 402 requires graphics processing, the host processor 402 provides graphics commands and graphics data in a command buffer in the system memory 404 (e.g., implemented as dynamic random access memory (DRAM), static random access memory (SRAM), nonvolatile RAM, or other type of memory) for subsequent receipt and processing by the PAPs 406. Depending on the embodiment, software commands are generated by a user application, a user mode driver, or another software application.
The system 400 includes three (N=3) communicably coupled PAPs 406. As shown, the system 400 includes a first PAP 406A, a second PAP 406B, and a third PAP 406C that are communicably coupled together with one or more I/O interfaces 408 that provide a communications interface between, for example, the host processor 402, the system memory 404, and the one or more PAPs 406. Such as previously described with respect to
In various embodiments, the system 400 also includes one or more inter-chip data fabrics 410 that operate as high-bandwidth die-to-die interconnects between PAPs. Additionally, in various embodiments, each of the PAPs 406 includes one or more levels of cache memory 418 and one or more memory PHYs (not shown) for communicating with external system memory modules, such as system memory module 404. When considered as a whole, the main memory (e.g., system memory module 404) communicably coupled to the multiple PAPs 406 and their local caches form the shared memory for the system 400. As will be appreciated, each PAP 406 only has a direct physical connection to a portion of the whole shared memory system.
In various embodiments, each PAP 406 includes one or more DMA engines 412 (e.g., a first DMA engine 412A and a second DMA engine 412B positioned at the first PAP 406A). In various embodiments, the DMA engines 412 coordinate DMA transfers of data between devices and memory (or between different locations in memory) within system 400. The DMA engines 412 also coordinate moving of data between the multiple devices/accelerators while computations are performed on other data at, for example, processor cores (not shown) of the PAPs 406. In some embodiments, the DMA engines 412 are part of a DMA controller (not shown), but the terms DMA engine and DMA controller are used interchangeably herein. The DMA engines 412, in response to commands, operate to transfer data into and out of, for example, device memory (e.g., cache memory 418 of each PAP 406, PAP associated memory modules, system memory module 404, and the like) without involvement of the processor cores. Similarly, the DMA engines 412, in some embodiments, perform intra-device transfers.
It should be recognized that although described here in the particular context of GPU terminology for ease of illustration and description, in various embodiments, the architecture described is applicable to any of a variety of types of parallel processors (such as previously described more broadly with reference to
In various embodiments, the user space and software applications executing at the host processor 402 (such as user mode driver 414) have a more holistic view and understanding of system resource constraints, particularly relative to individual system components such as each individual PAP 406. In some instances, for example, the user mode driver 414 determines with a degree of confidence exceeding a predetermined threshold that compute operations running at the one or more PAPs 406 (e.g., currently performing a large amount of compute work) are scheduled such that a conventional DMA transfer will complete in time, before the requested data is needed. In such circumstances, increasing DMA throughput so that DMA transfers complete faster would not improve system operations (and therefore would not be computationally profitable or energy efficient). The system 400 therefore determines to perform conventional DMA and/or turns on fewer DMA engines for performing DMA transfers.
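A minimal sketch of the kind of check the driver might apply here appears below; the completion-time estimate, the confidence model, and every name are illustrative assumptions, not taken from the systems described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical inputs to the driver's decision: predicted size and rate
 * of a conventional single-engine transfer, the deadline by which the
 * compute work needs the data, and a scheduler confidence estimate. */
struct transfer_estimate {
    uint64_t bytes;
    uint64_t bytes_per_us_one_engine;   /* measured single-engine rate  */
    uint64_t deadline_us;               /* when the data will be needed */
    uint32_t confidence_pct;            /* confidence in the schedule   */
};

/* Split (recruit more engines) only when the conventional transfer is
 * not confidently predicted to complete in time. */
static bool should_split(const struct transfer_estimate *e,
                         uint32_t confidence_threshold_pct)
{
    if (e->bytes_per_us_one_engine == 0)
        return true;                    /* no rate known: assume too slow */
    uint64_t eta_us = e->bytes / e->bytes_per_us_one_engine;
    bool completes_in_time = eta_us <= e->deadline_us &&
                             e->confidence_pct >= confidence_threshold_pct;
    return !completes_in_time;
}
```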
However, as will be appreciated, there are various circumstances in which the system 400 benefits from performing software-managed splitting of DMA transfer commands. As illustrated in
In another embodiment, the DMA command 416 is a command from software to move data from memory of the first PAP 406A (e.g., cache memory 418A) into memory of the second PAP 406B (e.g., cache memory 418B). Conventionally, because the PAPs 406 are not configured to coordinate with each other due to being separate devices, such a DMA transfer is performed by turning on one of the DMA engines 412 (e.g., DMA engine 412A) at the first PAP 406A, reading the data out of its own memory, and writing to its peer (i.e., the second PAP 406B) via a communications path such as the I/O interfaces 408 or the inter-chip data fabric 410A between the first PAP 406A and the second PAP 406B.
To perform software-managed routing of the DMA command 416, the user mode driver 414 instead turns on DMA engine 412A at the first PAP 406A to copy the requested data from local memory to its peer (e.g., to DMA engine 412C at the second PAP 406B). The user mode driver 414 also turns on DMA engine 412C at the second PAP 406B to copy from the peer DMA engine 412C into its local memory at the second PAP 406B. Such operations are conventionally not possible because, being on different devices, the DMA engines have no knowledge of each other's existence. Similarly, it should be recognized that the resources recruited for performing the DMA transfer operations are not limited to DMA engines at the source and destination devices. In other embodiments, DMA engines at an idle third PAP 406C are recruited by the user mode driver 414 to read data out of the first PAP 406A for storage into the second PAP 406B. That is, a device that is not involved in either side of the transfer is, in various embodiments, recruited to perform the DMA transfer due to its availability to perform work.
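The two-stage, driver-orchestrated copy might look like the following sketch, where `engine_copy` is a stand-in for a real engine's transfer primitive and the staging buffer is assumed to be visible to both PAPs; only the host driver knows that both engines exist.

```c
#include <string.h>

/* Illustrative model: each PAP's DMA engine is driven independently by
 * the host driver, the only agent aware of both engines. */
struct dma_engine { int pap_id; };

/* Hypothetical primitive: the engine copies 'len' bytes it can reach.
 * A memcpy stands in for the actual hardware transfer. */
static void engine_copy(struct dma_engine *e, void *dst,
                        const void *src, size_t len)
{
    (void)e;                    /* a real engine would DMA, not memcpy */
    memcpy(dst, src, len);
}

/* Driver-managed two-stage transfer from PAP A's memory to PAP B's:
 * stage 1 on A's engine pushes into a buffer B can see; stage 2 on
 * B's engine pulls the data into B's local memory. */
static void routed_copy(struct dma_engine *eng_a, struct dma_engine *eng_b,
                        void *b_local, void *staging,
                        const void *a_local, size_t len)
{
    engine_copy(eng_a, staging, a_local, len);  /* A -> peer-visible   */
    engine_copy(eng_b, b_local, staging, len);  /* peer -> B local     */
}
```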
In other embodiments, such as described in more detail below, software determines routing of DMA transfer commands based on network congestion status and bandwidth availability (instead of looking at processing hardware contention such as described in
In various embodiments, the system 500 executes any of various types of software applications. In some embodiments, as part of executing a software application (not shown), the host processor 502 of system 500 launches tasks to be executed at the PAPs 506. For example, when a software application executing at the host processor 502 requires graphics processing, the host processor 502 provides graphics commands and graphics data in a command buffer in the system memory 504 (e.g., implemented as dynamic random access memory (DRAM), static random access memory (SRAM), nonvolatile RAM, or other type of memory) for subsequent receipt and processing by the PAPs 506. Depending on the embodiment, software commands are generated by a user application, a user mode driver, or another software application.
The system 500 includes three (N=3) communicably coupled PAPs 506. As shown, the system 500 includes a first PAP 506A, a second PAP 506B, and a third PAP 506C that are communicably coupled together with one or more I/O interfaces 508 that provide a communications interface between, for example, the host processor 502, the system memory 504, and the one or more PAPs 506. Such as previously described with respect to
In various embodiments, the system 500 also includes one or more inter-chip data fabrics 510 that operate as high-bandwidth die-to-die interconnects between PAPs. Additionally, in various embodiments, each of the PAPs 506 includes one or more levels of cache memory 518 and one or more memory PHYs (not shown) for communicating with external system memory modules, such as system memory module 504. When considered as a whole, the main memory (e.g., system memory module 504) communicably coupled to the multiple PAPs 506 and their local caches form the shared memory for the system 500. As will be appreciated, each PAP 506 only has a direct physical connection to a portion of the whole shared memory system.
In various embodiments, each PAP 506 includes one or more DMA engines 512 (e.g., a first DMA engine 512A and a second DMA engine 512B positioned at the first PAP 506A). In various embodiments, the DMA engines 512 coordinate DMA transfers of data between devices and memory (or between different locations in memory) within system 500. The DMA engines 512 also coordinate moving of data between the multiple devices/accelerators while computations are performed on other data at, for example, processor cores (not shown) of the PAPs 506. In some embodiments, the DMA engines 512 are part of a DMA controller (not shown), but the terms DMA engine and DMA controller are used interchangeably herein. The DMA engines 512, in response to commands, operate to transfer data into and out of, for example, device memory (e.g., cache memory 518 of each PAP 506, PAP associated memory modules, system memory module 504, and the like) without involvement of the processor cores. Similarly, the DMA engines 512, in some embodiments, perform intra-device transfers.
It should be recognized that although described here in the particular context of GPU terminology for ease of illustration and description, in various embodiments, the architecture described is applicable to any of a variety of types of parallel processors (such as previously described more broadly with reference to
In various embodiments, the user space and software applications executing at the host processor 502 (such as user mode driver 514) have a more holistic view and understanding of system resource constraints, particularly relative to individual system components such as each individual PAP 506. Further, software is better able (relative to hardware) to observe the global state of system operations and can take advantage of data path links that are currently idle, less congested with traffic, or not oversubscribed at any given time.
As shown, system 500 includes a plurality of PAPs 506 (e.g., the first PAP 506A, the second PAP 506B, and the third PAP 506C) communicably coupled to a shared I/O interface 508 (e.g., a point-to-point PCIE system). Additionally, the system 500 includes direct connections between the PAPs 506 that are not shared by that I/O interface 508, such that each hardware device is only aware of its direct links. For example, the first inter-chip data fabric 510A is a direct, unshared link between the first PAP 506A and the second PAP 506B that is unknown to the third PAP 506C. Similarly, the second inter-chip data fabric 510B is a direct, unshared link between the second PAP 506B and the third PAP 506C that is unknown to the first PAP 506A.
A command from software, such as DMA command 516, is targeted to the first PAP 506A to move data from memory of the first PAP 506A to the memory of the third PAP 506C. However, the I/O interface 508 is currently congested (e.g., saturated with network traffic) and therefore unable to timely transport the data associated with the DMA command 516. In various embodiments, because software such as the user mode driver 514 is able to understand the DMA transfer command at run time and to recognize that there are multiple paths over which data associated with the DMA command 516 can be routed, the user mode driver 514 is configured to recruit various other resources to perform the reads or writes as instructed by the DMA command 516.
As those skilled in the art will appreciate, given a sufficiently large network system, there can be multiple indirect links by which communications can take one or more extra hops across the communications network to complete the DMA transfer. Software (e.g., user mode driver 514) monitors network congestion across the entire communications network including, for example, the I/O interface 508, the inter-chip data fabrics, and other communication paths (not shown) between components of system 500. In one embodiment, the user mode driver 514 recruits the second PAP 506B to assist in the DMA transfer by instructing DMA engine 512A at the first PAP 506A to read the requested data out of its own memory and write the requested data to the second PAP 506B via inter-chip data fabric 510A (instead of the congested I/O interface 508). Subsequently, the user mode driver 514 instructs DMA engine 512C at the second PAP 506B to write the requested data to the third PAP 506C via inter-chip data fabric 510B (instead of the congested I/O interface 508).
In this manner, the user mode driver 514 routes DMA traffic across globally less congested communications links and completes the data transfer requested by DMA command 516 in two hops by recruiting usage of the inter-chip data fabrics 510. As will be appreciated, each hardware device only knows of its direct connections to its neighbors and can only choose its least congested local link (which may be a globally sub-optimal decision, because the least congested local link may still be more congested than other non-direct links and paths through the system). Software is better suited to tailor DMA transfer command routing and/or splitting policies to applications at run time. In some embodiments, software decides, for example, that for certain copies of data, or for copies when a system component is in a certain use state (e.g., currently busy or network congested), to split or route a DMA transfer command differently than if the system component were idle.
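One way to picture the driver's global choice is the sketch below, which compares a hypothetical utilization estimate for the shared interface against the worst leg of the two-hop fabric path; the link-load table and the percentage model are purely illustrative assumptions.

```c
#include <stdint.h>

/* Hypothetical link-load table kept by the driver, which (unlike any
 * single PAP) can see every path: the shared I/O interface plus the
 * direct die-to-die fabrics. Loads are utilization percentages. */
enum route { ROUTE_DIRECT_IO, ROUTE_TWO_HOP_VIA_PEER };

struct link_loads {
    uint32_t io_interface_pct;      /* shared PCIE-like interface      */
    uint32_t fabric_src_peer_pct;   /* source PAP <-> intermediate PAP */
    uint32_t fabric_peer_dst_pct;   /* intermediate PAP <-> destination */
};

/* Take the extra hop when both fabric legs are less congested than the
 * shared interface; no individual device could make this comparison,
 * since each only sees its own direct links. */
static enum route pick_route(const struct link_loads *l)
{
    uint32_t two_hop_worst = l->fabric_src_peer_pct > l->fabric_peer_dst_pct
                           ? l->fabric_src_peer_pct
                           : l->fabric_peer_dst_pct;
    return two_hop_worst < l->io_interface_pct
         ? ROUTE_TWO_HOP_VIA_PEER
         : ROUTE_DIRECT_IO;
}
```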
The method 600 begins at block 602 with software determining whether the system would benefit from increased DMA traffic throughput. For example, such as previously described in more detail with respect to
However, as will be appreciated, there are various circumstances in which the system benefits from performing software-managed routing and/or splitting of DMA transfer commands. Accordingly, the method 600 continues at block 606 with recruiting of one or more system components to assist in completing the data transfer requested by a DMA transfer command. For example, such as illustrated in
In another embodiment, the DMA command 416 is a command from software to move data from memory of the first PAP 406A (e.g., cache memory 418A) into memory of the second PAP 406B (e.g., cache memory 418B). To perform software-managed routing of the DMA command 416, the user mode driver 414 instead recruits other system components by turning on DMA engine 412A at the first PAP 406A and copying the requested data from local memory to its peer (e.g., to DMA engine 412C at the second PAP 406B). The user mode driver 414 also turns on DMA engine 412C at the second PAP 406B to copy from the peer DMA engine 412C into its local memory at the second PAP 406B. In other embodiments, DMA engines at an idle third PAP 406C are recruited by the user mode driver 414 to read data out of the first PAP 406A for storage into the second PAP 406B. That is, a device that is not involved in either side of the transfer is, in various embodiments, recruited to perform the DMA transfer due to its availability to perform work.
In other embodiments, such as illustrated with respect to
It should be recognized that the software-managed recruiting of system components to assist in DMA transfers and the routing of DMA transfer commands has primarily been described here in the context of routing whole packets of transfer commands for ease of description and illustration. However, those skilled in the art will recognize that the software-managed execution of DMA transfers is not limited to that particular level of execution granularity and that a single DMA transfer command may be split by software to be performed by two or more DMA engines at various system locations based on system congestion and/or resource contention without departing from the scope of this disclosure.
In some embodiments, for example, the user mode driver 514 splits a single DMA transfer command/job description into two or more smaller workloads and recruits different system resources to execute those smaller workloads. With reference to
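A sketch of such a split appears below; the chunking policy (equal contiguous ranges, one sub-command per recruited engine) and all names are assumptions for illustration, and each resulting sub-command could be queued to a different DMA or compute engine's buffer.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sub-command produced by splitting one DMA copy. */
struct dma_subcmd {
    uint64_t src, dst;
    size_t   len;
};

/* Split a single transfer of 'total' bytes into roughly equal chunks,
 * one per recruited engine. 'out' must have room for num_engines
 * entries and num_engines must be nonzero. Returns the number of
 * sub-commands actually produced. */
static size_t split_command(uint64_t src, uint64_t dst, size_t total,
                            struct dma_subcmd *out, size_t num_engines)
{
    size_t chunk = (total + num_engines - 1) / num_engines;  /* ceil   */
    size_t n = 0;
    for (size_t off = 0; off < total; off += chunk, n++) {
        size_t len = (total - off < chunk) ? (total - off) : chunk;
        out[n] = (struct dma_subcmd){ src + off, dst + off, len };
    }
    return n;
}
```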
Additionally, it should be recognized that although primarily described here in the context of software recruitment of DMA engines to assist in executing DMA transfers, the software-managed recruiting of system components is applicable to various other devices or system components without departing from the scope of this disclosure. In some embodiments, the user mode driver also turns on one or more compute engines, CPUs, or other processors to perform the data reads and writes. Those devices still perform DMA, just no longer at a copy-only DMA engine. For example, in one embodiment, software-managed recruiting of system components to assist in DMA transfers includes splitting a single DMA transfer command/job description into two or more smaller workloads, and assigning a first of the smaller workloads, for a portion of the transfer, to be executed by a DMA engine on one PAP device. At least a second of the smaller workloads, with its respective portion of the transfer, is assigned by the user mode driver to a compute engine on a different PAP device. In this manner, the software runtime selects from a broad system component portfolio (including processor cores, DMA engines, compute engines, and the like) to optimize for performance and minimize energy use.
At block 608, after transferring one or more portions of the data transfer (dependent upon whether the DMA transfer command is split into multiple smaller workloads), an indication is generated that signals completion of the data transfer requested by the DMA transfer command. For example, such as illustrated in
Accordingly, as discussed herein, the software-managed coordination of routing or splitting a whole DMA transfer packet, and the performance of the DMA transfer by recruited system components, provide for work specified by a transfer command to be assigned to DMA engines (or other available processing and network bus resources) such that total bandwidth usage goes up without requiring any changes to device hardware (e.g., without each individual DMA engine needing to get bigger or have more capabilities), thereby increasing overall DMA throughput and data fabric bandwidth usage. In this manner, system software is able to obtain increased performance out of existing hardware without requiring the multiple DMA engines to have paths for communicating with each other or requiring hardware/firmware to synchronize between them.
A computer readable storage medium may include any non-transitory storage medium, or combination of non-transitory storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.