Demand for accelerator devices has continued to increase as accelerator devices are used in a growing range of technological areas, such as machine learning and genomics. Typical architectures for accelerator devices, such as field programmable gate arrays (FPGAs), cryptography accelerators, graphics accelerators, and/or compression accelerators (referred to herein as “accelerator devices,” “accelerators,” or “accelerator resources”) capable of accelerating the execution of a set of operations in a workload (e.g., processes, applications, services, etc.) may allow static assignment of specified amounts of shared resources of the accelerator device (e.g., high bandwidth memory, data storage, etc.) among different portions of the logic (e.g., circuitry) of the accelerator device. Typically, the workload is allocated the required processor(s), memory, and accelerator device(s) for the duration of the workload. The workload may use its allocated accelerator device at any point in time; however, in many cases, the accelerator devices remain idle, wasting resources.
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards (“sleds”) on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives utilization information for the various resources, predicts resource utilization for different types of workloads based on past resource utilization, and dynamically reallocates the resources based on this information.
The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.
In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand) via optical signaling media of an optical fabric. As reflected in
MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.
MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as—or similar to—dual-mode optical switching infrastructure 514 of
Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of
Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250 W), as described above with reference to
As shown in
In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250 W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump or two switch jumps away in the spine-leaf network architecture described above with reference to
In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 may include—without limitation—software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.
In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide QoS management capabilities for cloud services 1140. The embodiments are not limited in this context.
Referring now to
While two accelerator sleds 1202, one compute sled 1206, and one memory sled 1208 are shown in
As shown in
In the illustrative embodiment, each accelerator sled 1202 includes the micro-orchestrator logic unit 1220 and two accelerator devices 1222, and each accelerator device 1222 includes two kernels (e.g., each a set of circuitry and/or executable code usable to implement a set of functions) 1224. It should be appreciated that, in other embodiments, each accelerator sled 1202 may include a different number of accelerator devices 1222, and each accelerator device 1222 may include a different number of kernels 1224. It should be appreciated that some accelerator sleds 1202 (e.g., the accelerator sled 1202a) may include an inter-accelerator communication interface 1226 (e.g., a high speed serial interface (HSSI)) between the accelerator devices 1222. The inter-accelerator communication interface 1226 is configured to communicatively connect the accelerator devices 1222 of the accelerator sled 1202a to share data. For example, accelerator devices 1222 concurrently executing tasks that share a data set may access the same data set at the same time in order to read from and/or write to different parts of the data set. To do so, the accelerator devices 1222 of the accelerator sled 1202a share the data set via the inter-accelerator communication interface 1226. Alternatively, in some embodiments, the accelerator devices 1222 may share data that is present in shared memory (e.g., a shared virtual memory 1282). It should be appreciated that, in such embodiments, the accelerator devices 1222 need not be on the same accelerator sled 1202 to concurrently execute tasks that share the data.
The memory sled 1208, in the illustrative embodiment, includes a memory device 1280, which further includes the shared virtual memory 1282. As described above, the shared virtual memory 1282 may hold data that can be accessed by any of the accelerator devices 1222 capable of utilizing shared virtual memory. For example, the accelerator devices 1222 on an accelerator sled 1202 (e.g., the accelerator sled 1202b) that does not have an inter-accelerator communication interface 1226 (e.g., a HSSI) may use the shared data stored in the shared virtual memory 1282 to execute tasks in parallel. Additionally, the accelerator devices 1222 on different accelerator sleds 1202 (e.g., accelerator sleds 1202a, 1202b) may use the shared data stored in the shared virtual memory 1282 to execute the tasks in parallel. It should be appreciated that, in some embodiments, the accelerator devices 1222 that are communicatively connected to each other via the HSSI 1226 may share data via the shared virtual memory 1282 and/or the inter-accelerator communication interface 1226, as dictated by a micro-orchestrator logic unit 1220 and/or the orchestrator server 1204.
Referring now to
The compute engine 1310 may be embodied as any type of device or collection of devices capable of performing the various compute functions as described below. In some embodiments, the compute engine 1310 may be embodied as a single device such as an integrated circuit, an embedded system, a field programmable gate array (FPGA), a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. Additionally, in some embodiments, the compute engine 1310 may include, or may be embodied as, a CPU 1312 and memory 1314. The CPU 1312 may be embodied as any type of processor capable of performing the functions described herein. For example, the CPU 1312 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit.
In some embodiments, the CPU 1312 may include a micro-orchestrator logic unit 1220 which may be embodied as any device or circuitry capable of determining the capabilities (e.g., functions that each accelerator device 1222 is capable of accelerating, the present load on each accelerator device 1222, whether each accelerator device 1222 is capable of accessing shared memory or communicating through an inter-accelerator communication interface, etc.) of the accelerator devices 1222 on the accelerator sled 1202, and dynamically dividing a job into tasks to be performed by one or more of the accelerator devices 1222 as a function of the determined capabilities of the accelerator devices 1222 and identified tasks within the job that would benefit from different types of acceleration (e.g., cryptographic acceleration, compression acceleration, parallel execution, etc.). For example, the micro-orchestrator logic unit 1220 may be embodied as a co-processor, embedded circuit, ASIC, FPGA, and/or other specialized circuitry. The micro-orchestrator logic unit 1220 may receive a request directly from a compute sled 1206 executing an application with a job to be accelerated. Alternatively, the micro-orchestrator logic unit 1220 may receive a request from the orchestrator server 1204 with a job to be accelerated. Subsequently, the micro-orchestrator logic unit 1220 may analyze the code (e.g., microcode) of the requested job to identify one or more functions (e.g., tasks) in the code that may be parallelized. Based on the analysis of the requested job, the micro-orchestrator logic unit 1220 may divide the functions of the job into multiple tasks that may be executed in parallel on one or more accelerator devices 1222 on the accelerator sled 1202. Additionally or alternatively, in some embodiments, the micro-orchestrator logic unit 1220 may transmit the analysis of the requested job to the orchestrator server 1204 such that the orchestrator server 1204 may determine how to divide the job into multiple tasks to be executed on one or more accelerator devices 1222 on multiple accelerator sleds 1202. In such embodiments, each micro-orchestrator logic unit 1220 of one or more accelerator sleds 1202 may receive assigned tasks from the orchestrator server 1204 to schedule the assigned tasks to available accelerator device(s) 1222 based on the job analysis and the configuration of the respective accelerator sled 1202.
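By way of a non-limiting illustration, the following Python sketch shows one way the job-division logic described above might be expressed. All of the names (AcceleratorDevice, Task, divide_job), the device attributes, and the load threshold are hypothetical assumptions made for illustration; the embodiments above do not prescribe any particular implementation.

```python
# Illustrative sketch only: assign each task in a job to a capable,
# lightly loaded accelerator device, as a micro-orchestrator might.
from dataclasses import dataclass, field

@dataclass
class AcceleratorDevice:
    device_id: int
    functions: set        # acceleration types offered, e.g. {"crypto", "compress"}
    load: float           # present utilization, 0.0 to 1.0
    shared_memory: bool   # able to map the shared virtual memory
    hssi: bool            # attached to an inter-accelerator interface

@dataclass
class Task:
    name: str
    kind: str                               # type of acceleration required
    depends_on: list = field(default_factory=list)

def divide_job(tasks, devices, load_threshold=0.8):
    """Map each task to the least-loaded device offering the required
    acceleration type; raise if no capable device is available."""
    assignment = {}
    for task in tasks:
        candidates = [d for d in devices
                      if task.kind in d.functions and d.load < load_threshold]
        if not candidates:
            raise RuntimeError(f"no accelerator available for {task.name}")
        assignment[task.name] = min(candidates, key=lambda d: d.load).device_id
    return assignment
```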
In some embodiments, the orchestrator server 1204 may determine that only a portion of the job received from the compute sled 1206 can be accelerated based on the job analysis and the configuration of the respective accelerator sled 1202. In such embodiments, the orchestrator server 1204 may inform the requesting compute sled 1206 that the requested job cannot be accelerated as a whole and/or only a portion of the job can be accelerated. The orchestrator server 1204 may subsequently receive instructions from the compute sled 1206 indicative of how to execute the job. For example, the compute sled 1206 may return a simplified job that can be accelerated as a whole or request to perform the portion of the job that can be accelerated on one or more accelerator sleds 1202.
As discussed above, an accelerator sled 1202 may not include an inter-accelerator communication interface (e.g., a HSSI) between the accelerator devices 1222 (e.g., the accelerator sled 1202b). In such embodiments, an accelerator device 1222 on an accelerator sled 1202 may communicate with another accelerator device 1222 on the same accelerator sled 1202 via the shared virtual memory 1282 of the memory sled 1208 to execute parallel tasks that share data. Additionally or alternatively, one or more accelerator devices 1222 on an accelerator sled 1202 may communicate with one or more accelerator devices 1222 on a different accelerator sled 1202 via the shared virtual memory 1282 to execute parallel tasks that share data. As such, it should be appreciated that determining whether the tasks of the requested job may be executed in parallel on multiple kernels 1224 is based at least in part on the configuration of each accelerator device 1222.
The main memory 1314 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product.
In some embodiments, 3D crosspoint memory (e.g., Intel 3D XPoint™ memory) may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the main memory 1314 may be integrated into the CPU 1312. In operation, the memory 1314 may store various data and software used during operation of the accelerator sled 1202 such as operating systems, applications, programs, libraries, and drivers.
The compute engine 1310 is communicatively coupled to other components of the accelerator sled 1202 via the I/O subsystem 1320, which may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 1312, the micro-orchestrator logic unit 1220, the memory 1314, and other components of the accelerator sled 1202. For example, the I/O subsystem 1320 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1320 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the CPU 1312, the micro-orchestrator logic unit 1220, the memory 1314, and other components of the accelerator sled 1202, on a single integrated circuit chip.
The communication circuitry 1330 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the accelerator sled 1202 and another compute device (e.g., the orchestrator server 1204, a compute sled 1206, a memory sled 1208, and/or the client device 1210 over the network 1212). The communication circuitry 1330 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The illustrative communication circuitry 1330 may include a network interface controller (NIC) 1332, which may also be referred to as a host fabric interface (HFI). The NIC 1332 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the accelerator sled 1202 to connect with another compute device (e.g., the orchestrator server 1204, a compute sled 1206, a memory sled 1208, and/or the client device 1210). In some embodiments, the NIC 1332 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1332 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1332. In such embodiments, the local processor of the NIC 1332 may be capable of performing one or more of the functions of the CPU 1312 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1332 may be integrated into one or more components of the accelerator sled 1202 at the board level, socket level, chip level, and/or other levels. Additionally or alternatively, the accelerator sled 1202 may include one or more peripheral devices. Such peripheral devices may include any type of peripheral device commonly found in a compute device such as a display, speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.
The accelerator subsystem 1340 may be embodied as any type of device or collection of devices configured to reduce an amount of time required to process a requested job received directly or indirectly from a compute sled 1206 executing a workload (e.g., an application). To do so, in the illustrative embodiment, the accelerator subsystem 1340 includes the accelerator devices 1222, each of which may be embodied as any type of device configured to execute scheduled tasks of the requested job to be accelerated. Each accelerator device 1222 may be embodied as a single device such as an integrated circuit, an embedded system, a FPGA, a SOC, an ASIC, reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. In some embodiments, the accelerator subsystem 1340 may include a high-speed serial interface (HSSI) 1342. As discussed above, the HSSI 1342, in the illustrative embodiment, is an inter-accelerator communication interface (e.g., the inter-accelerator communication interface 1226) that facilitates communication between accelerator devices 1222 on the same accelerator sled 1202 (e.g., the accelerator sled 1202a).
Referring now to
In the illustrative environment 1400, the network communicator 1402, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the accelerator sled 1202, respectively. To do so, the network communicator 1402 is configured to receive and process data from one system or computing device (e.g., the orchestrator server 1204, a compute sled 1206, a memory sled 1208, etc.) and to prepare and send data to a system or computing device (e.g., the orchestrator server 1204, a compute sled 1206, a memory sled 1208, etc.). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1402 may be performed by the communication circuitry 1330, and, in the illustrative embodiment, by the NIC (e.g., an HFI) 1332.
The accelerator determiner 1404, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to determine a configuration of each accelerator device 1222 on the respective accelerator sled 1202. To do so, the accelerator determiner 1404 is configured to determine the features of hardware components of each accelerator device 1222. For example, the accelerator determiner 1404 determines a configuration of each accelerator device 1222, including a number of kernels 1224 of each accelerator device 1222, whether the respective accelerator device 1222 is communicatively coupled to an inter-accelerator communication interface, such as a HSSI 1226, and/or whether the respective accelerator device 1222 is capable of utilizing the shared virtual memory 1282 (e.g., is capable of mapping memory addresses of the shared virtual memory 1282 as local memory addresses). The accelerator determiner 1404 is further configured to generate accelerator configuration data for each accelerator device 1222 (e.g., any data indicative of the features of each accelerator device 1222), and the accelerator configuration data is stored in an accelerator configuration database 1412. As discussed in detail below, the accelerator configuration data is used to determine how to divide the requested job to be accelerated into multiple tasks that may be executed in parallel on one or more accelerator devices 1222.
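For illustration only, the accelerator configuration data described above might take a shape such as the following sketch; the field names, and the assumption that each device object exposes kernels, hssi, and shared_memory attributes, are hypothetical and are not specified by the embodiments above.

```python
# Illustrative sketch of per-device configuration records as the
# accelerator determiner 1404 might store them in database 1412.
from dataclasses import dataclass

@dataclass
class AcceleratorConfig:
    device_id: int
    kernel_count: int      # number of kernels 1224 on the device
    has_hssi: bool         # coupled to an inter-accelerator interface 1226
    supports_svm: bool     # can map shared virtual memory 1282 locally

def build_config_database(devices):
    """Return one AcceleratorConfig per device, keyed by device id."""
    return {d.device_id: AcceleratorConfig(
                device_id=d.device_id,
                kernel_count=len(d.kernels),
                has_hssi=d.hssi,
                supports_svm=d.shared_memory)
            for d in devices}
```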
The job analyzer 1406, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to analyze a requested job to be accelerated. Specifically, the job analyzer 1406 is configured to analyze the code of the requested job to determine how to divide functions in the code of the requested job into multiple tasks that may be executed on one or more accelerator devices 1222. Specifically, the job analyzer 1406 analyzes the requested job to determine whether the tasks may be concurrently executed on one or more accelerator devices 1222. It should be appreciated that the job analyzer 1406 may determine that the requested job includes tasks that are capable of being executed in parallel (e.g., either by sharing a data set or because they do not use the output of the other concurrently executed tasks as input) and tasks that are to be executed in sequence (e.g., because they use the final output of another task as an input data set).
For example, the job analyzer 1406 may determine that two tasks (e.g., task A and task B) may share the data associated with a requested job (i.e., read from or write to different parts of the same data set at the same time, such as encoding or decoding different sections of an image or other data set). In such a case, the job analyzer 1406 determines that the two tasks should be executed in parallel in order to reduce an amount of time required to complete the requested job. The job analyzer 1406 may also determine that multiple tasks do not share a data set and that the output of one of the tasks does not depend on the output of the other task, or vice versa. As such, the job analyzer 1406 may determine that these tasks may also be executed in parallel because they are independent of each other in terms of the data sets that they operate on. On the other hand, if the job analyzer 1406 determines that the output of one task (e.g., task B) depends on the output of another task (e.g., uses the final output of task A as an input), the job analyzer 1406 determines that the order of operations of those tasks is material to achieving a correct output; therefore, task A should be executed prior to the execution of task B. As discussed below, the job analysis is used to schedule the tasks across one or more accelerator devices 1222 for efficient execution of the requested job.
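A minimal sketch of this dependency analysis follows, assuming (purely for illustration) that a job is represented as a mapping from each task name to the set of task names whose final output it consumes. Tasks with no unmet dependencies fall into the same wave and may execute in parallel; a dependent task lands in a later wave.

```python
# Illustrative sketch: group tasks into waves of concurrently
# executable tasks based on their data dependencies.
def parallel_groups(tasks):
    """`tasks` maps a task name to the set of task names whose final
    output it consumes; returns a list of concurrently runnable waves."""
    remaining = dict(tasks)
    waves = []
    while remaining:
        # A task is ready once none of its producers remain unscheduled.
        ready = {t for t, deps in remaining.items()
                 if not (deps & set(remaining))}
        if not ready:
            raise ValueError("cyclic dependency in job")
        waves.append(ready)
        for t in ready:
            del remaining[t]
    return waves

# Tasks A and B depend on nothing and may run in parallel (one wave);
# task C consumes B's final output and runs in a later wave.
print(parallel_groups({"A": set(), "B": set(), "C": {"B"}}))
# -> [{'A', 'B'}, {'C'}]
```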
The task scheduler 1408, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to schedule the tasks of the requested job across one or more accelerator devices 1222. The task scheduler 1408 is configured to maximize the parallelization of the tasks to minimize the execution time of the tasks based on the job analysis performed by the job analyzer 1406. In some cases, if the tasks of the requested job share a data set, the task scheduler 1408 is configured to schedule those tasks to be executed in parallel on one or more accelerator devices 1222 in order to reduce an amount of time required to process the requested job. For example, if a requested job includes compression of an image stored in the shared virtual memory 1282, the task scheduler 1408 divides the job into multiple tasks and schedules the tasks across one or more accelerator devices 1222 in parallel. The one or more accelerator devices 1222 then concurrently execute the compression tasks on different parts of the same image data stored in the shared virtual memory 1282. It should be appreciated that the outputs of each task executed on the accelerator devices 1222 are combined to obtain a correct output of the job. In some embodiments, the task scheduler 1408 may further identify the communication mechanism (e.g., an inter-accelerator communication interface 1226, shared virtual memory 1282, etc.) based on the accelerator configuration data in the accelerator configuration database 1412 for the present accelerator sled 1202. It should be appreciated that if the parallel tasks that share the data are scheduled on the accelerator devices 1222 on the same accelerator sled 1202, the accelerator devices 1222 may establish communication between the accelerator devices 1222 via the HSSI 1226, if the accelerator sled 1202 includes the HSSI 1226, or via the shared virtual memory 1282 to share data and produce a correct output.
If, however, the tasks are independent of one another and do not share data, the task scheduler 1408 may schedule those independent tasks on one or more accelerator devices 1222 on one or more accelerator sleds 1202 to be executed in parallel or at different times. Alternatively or additionally, some of the tasks of the requested job may not be executed in parallel if one or more tasks depend on one or more outputs of one or more previous tasks. In that case, the task scheduler 1408 schedules the dependent tasks to be sequentially executed in a correct order to achieve a correct output. For example, if the received job includes data that was compressed and then encrypted, the input of the decompression task depends on the output of the decryption task (i.e., the decrypted, still-compressed data); therefore, the decryption task should be executed prior to the execution of the decompression task to obtain the correct output. In such a case, the task scheduler 1408 schedules the tasks on one or more accelerator devices 1222 such that the decryption task is executed first, followed by the execution of the decompression task. As such, the task scheduler 1408 is configured to examine the characteristics of the tasks determined by the job analyzer 1406 to efficiently schedule the tasks on one or more accelerator devices 1222. By analyzing the data dependencies of the tasks identified by the job analyzer 1406, the task scheduler 1408 is configured to schedule the tasks to maximize the parallelization of the tasks and to minimize the execution time of the job.
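As a worked illustration of this sequencing constraint, the sketch below compresses a payload and then "encrypts" it; the zlib codec and the toy XOR cipher are stand-ins chosen for illustration, not techniques specified above. Decrypting before decompressing recovers the payload, while the reverse order fails.

```python
# Illustrative sketch: dependent tasks must run in the correct order.
import zlib

KEY = 0x5A

def xor_cipher(data: bytes) -> bytes:
    # Toy symmetric "cipher" for illustration only; not real encryption.
    return bytes(b ^ KEY for b in data)

payload = b"accelerated job payload" * 4
received = xor_cipher(zlib.compress(payload))  # compressed, then encrypted

# Correct order: the decryption task runs first, then decompression.
assert zlib.decompress(xor_cipher(received)) == payload

# Reversed order fails: the encrypted bytes are not a valid zlib stream.
try:
    zlib.decompress(received)
except zlib.error:
    print("decompressing before decrypting fails; order is material")
```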
The task communicator 1410, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound communications to and from accelerator devices 1222 of the accelerator sled 1202 and/or with accelerator devices 1222 of other accelerator sleds 1202 in the system 1200. Between the accelerator devices 1222 on the same accelerator sled 1202, at least a portion of the functionality of the task communicator 1410 may be performed by the accelerator subsystem 1340, and, in the illustrative embodiment, by the HSSI 1342. Similarly, the task communicator 1410 may access data in the shared virtual memory 1282 to enable a task executed on the present accelerator sled 1202 to access a data set that is also being accessed by a concurrently executing task that utilizes the data set in the shared memory 1282 (e.g., on the same accelerator sled 1202 or on a different accelerator sled 1202). Additionally, the task communicator 1410 may receive output from one task as input for another task and/or provide the output of a task for use as input by another task.
Referring now to
In block 1512, the accelerator sled 1202 determines whether an accelerated job request has been received. The accelerated job request includes a job to be accelerated. In some embodiments, the accelerated job request may be directly received from the compute sled 1206 executing an application. In other embodiments, the accelerator sled 1202 may receive an accelerated job request indirectly from the compute sled 1206 via the orchestrator server 1204. If the accelerator sled 1202 determines that an accelerated job request has not been received, the method 1500 loops back to block 1502 to continue determining the configuration of available accelerator devices 1222 and monitoring for an accelerated job request. If, however, the accelerator sled 1202 determines that an accelerated job request has been received, the method 1500 advances to block 1514.
In block 1514, the accelerator sled 1202 performs a job analysis on the requested job to be accelerated to determine how to divide the job into parallel tasks based on the configuration of the available accelerator devices 1222. To do so, in block 1516, the accelerator sled 1202 analyzes the code of the job and, in block 1518, identifies functions in the code that can be parallelized. The accelerator sled 1202 may determine whether the order and/or timing of execution of certain tasks is material to obtaining a correct output of the job. For example, the accelerator sled 1202 determines that the tasks may be executed in parallel if the tasks share the same data or if the tasks do not depend on any outputs of previous tasks as inputs. Additionally, the accelerator sled 1202 also identifies the tasks that cannot be executed in parallel. For example, the accelerator sled 1202 may identify the tasks that rely on outputs of previous tasks, which are to be sequentially executed in a correct order to achieve a correct output.
In block 1520, the accelerator sled 1202 determines whether an orchestrator server authorization is required. In other words, the accelerator sled 1202 determines whether an authorization from the orchestrator server 1204 is required to schedule the tasks to one or more accelerator devices 1222 on the accelerator sled 1202. This is because, in some embodiments, the orchestrator server 1204 may determine, based on the job analysis received from one or more accelerator sleds 1202, how the requested job should be divided into multiple tasks that are executed on different accelerator devices 1222 on multiple accelerator sleds 1202 as described in detail below. In some embodiments, the accelerator sled 1202 may query the orchestrator server 1204 whether an authorization is required. Alternatively, in other embodiments, the accelerator sled 1202 may determine whether an orchestrator server authorization is required based on the configuration of the accelerator sled 1202 and/or the job analysis of the requested job.
If the accelerator sled 1202 determines that the orchestrator server authorization is not required, the method 1500 advances to block 1522 shown in
Subsequently, in block 1526, the accelerator sled 1202 schedules the tasks to one or more available accelerator devices 1222 based on the job analysis. As described above, the accelerator sled 1202 has analyzed the requested job to be accelerated to identify the tasks that can be executed in parallel to operate on the same data set, the tasks that may or may not be executed in parallel (e.g., use independent data sets), and the tasks that are to be executed in sequence because of their dependency on the final output of other tasks. Based on the job analysis, the accelerator sled 1202 is configured to schedule the tasks on one or more accelerator devices 1222 to maximize the parallelization of the tasks to reduce a total execution time of the job. To do so, in some embodiments, the accelerator sled 1202 may enable parallel execution of tasks in block 1528. In such embodiments, the parallel execution of tasks on one or more accelerator devices 1222 on the accelerator sled 1202 is achieved by the inter-accelerator communication interface 1226 between the one or more accelerator devices 1222 on the accelerator sled 1202. It should be appreciated, however, that the parallel execution of tasks may alternatively be achieved by virtualization of the shared data using the shared virtual memory 1282. Additionally, in some embodiments, in block 1530, the accelerator sled 1202 may confirm the availability of one or more accelerator devices 1222 to execute one or more tasks (e.g., that the accelerator device 1222 is not presently experiencing a load above a predefined threshold amount, that one or more kernels of the accelerator device 1222 are not presently executing a task, etc.).
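An availability check corresponding to block 1530 might resemble the following sketch; the load threshold, and the assumption that a device object exposes a load value and per-kernel busy flags, are illustrative only.

```python
# Illustrative sketch of the availability confirmation in block 1530.
LOAD_THRESHOLD = 0.8  # assumed predefined threshold

def is_available(device) -> bool:
    """A device is available if its present load is under the threshold
    and at least one of its kernels is not executing a task."""
    under_load = device.load < LOAD_THRESHOLD
    idle_kernel = any(not kernel.busy for kernel in device.kernels)
    return under_load and idle_kernel

def available_devices(devices):
    return [d for d in devices if is_available(d)]
```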
In block 1532, the accelerator sled 1202 may execute multiple scheduled tasks in parallel. For example, in some embodiments, in block 1534, if the accelerator devices 1222 of the accelerator sled 1202 are configured to execute tasks that share the same data, one of the accelerator devices 1222 of the accelerator sled 1202 may communicate with the other accelerator device 1222 of the same accelerator sled 1202 using the inter-accelerator communication interface 1226 (e.g., the HSSI) or the shared virtual memory 1282, based on the configuration of the accelerator sled 1202, to share the data. Subsequently, in block 1536, the accelerator sled 1202 combines the outputs of the tasks and, in block 1538, obtains an output of the job. It should be appreciated that if the tasks are scheduled on the accelerator devices 1222 on the accelerator sled 1202 that has the HSSI 1226 (e.g., the accelerator sled 1202a), the accelerator devices 1222 may communicate with one another via the HSSI 1226 to combine the outputs of the tasks to obtain the output of the job. If, however, the tasks are scheduled on the accelerator devices 1222 on the same accelerator sled 1202 (e.g., the accelerator sled 1202b) that does not have a HSSI 1226, the accelerator devices 1222 may communicate with one another via the shared virtual memory 1282 to combine the outputs of the tasks to obtain the output of the job.
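The image-compression case discussed above can be sketched as follows; a process pool stands in for the accelerator devices 1222, and the chunking scheme is an illustrative assumption. Each task compresses a different part of the same data set, and the per-task outputs are combined, in order, to form the output of the job.

```python
# Illustrative sketch of blocks 1532-1538: parallel compression tasks
# over one data set, with outputs combined into the job output.
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk)

def accelerate_compression(image: bytes, n_tasks: int = 4) -> list:
    size = (len(image) + n_tasks - 1) // n_tasks
    chunks = [image[i:i + size] for i in range(0, len(image), size)]
    with ProcessPoolExecutor(max_workers=n_tasks) as pool:
        # Tasks execute concurrently on different parts of the image.
        return list(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    image = bytes(range(256)) * 64
    parts = accelerate_compression(image)          # per-task outputs
    restored = b"".join(zlib.decompress(p) for p in parts)
    assert restored == image                       # combined job output
```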
Referring back to block 1520 in
Subsequently, in block 1540, the accelerator sled 1202 determines whether an authorization from the orchestrator server 1204 has been received. If the authorization is not received during a predefined period of time, the accelerator sled 1202 determines that the accelerator devices 1222 of the accelerator sled 1202 have not been selected to execute the tasks by the orchestrator server 1204, and the method 1500 advances to the end.
If, however, the accelerator sled 1202 determines that the authorization has been received, the method 1500 advances to block 1542. In block 1542, the accelerator sled 1202 receives one or more assigned tasks to be executed on the accelerator sled 1202. If the accelerator sled 1202 determines that the assigned task(s) is not received during a predefined period of time in block 1544, the method 1500 advances to the end. If, however, the accelerator sled 1202 determines that the assigned task(s) has been received, the method 1500 advances to block 1546.
In block 1546, the accelerator sled 1202 determines one or more available accelerator devices 1222 of the respective accelerator sled 1202. Specifically, the accelerator sled 1202 determines available kernels 1224 of each accelerator device 1222 in block 1548 to determine how to schedule the assigned tasks across the multiple kernels 1224 of the accelerator devices 1222 on multiple accelerator sleds 1202.
Subsequently, in block 1550, the accelerator sled 1202 schedules the assigned tasks to the accelerator devices 1222 based on the job analysis to maximize the parallelization of the tasks to reduce a total execution time of the job. As discussed above, the job analysis indicates whether the tasks may be executed in parallel or must be executed in a particular sequence. To do so, in block 1552, the accelerator sled 1202 may enable parallel execution of independent tasks. In such embodiments, the parallel execution of tasks on multiple accelerator devices 1222 on different accelerator sleds 1202 is achieved by sharing a data set using the shared virtual memory 1282 (e.g., mapping memory addresses of the shared virtual memory 1282 as local memory for each accelerator device 1222). It should be appreciated that the inter-accelerator communication interface 1226 is limited to providing communication between the accelerator devices 1222 on the same accelerator sled 1202. In some embodiments, in block 1554, the accelerator sled 1202 may confirm that the accelerator devices 1222 on the respective accelerator sled 1202 are available to concurrently execute the multiple tasks.
Subsequently, in block 1556, the assigned tasks are executed in parallel on multiple accelerator devices 1222, potentially on different accelerator sleds 1202. For example, if the assigned tasks share the same data, the multiple accelerator devices 1222 on different accelerator sleds 1202 may concurrently execute the assigned tasks using the shared virtual memory 1282 to read from and/or write to the shared data present in the shared virtual memory 1282. In some embodiments, some of the assigned tasks may be assigned to the accelerator devices 1222 on a single accelerator sled 1202 that may or may not have an inter-accelerator communication interface 1226 (e.g., HSSI). If the tasks are assigned to the accelerator devices 1222 of the accelerator sled 1202 that has the HSSI 1226, in block 1558, the accelerator devices 1222 on that accelerator sled 1202 may concurrently execute the assigned tasks using the HSSI 1226 to share the data. If, however, the tasks are assigned to an accelerator sled 1202 that does not have a HSSI 1226 (e.g., the accelerator sled 1202b), the accelerator devices 1222 may concurrently execute the assigned tasks using the shared virtual memory 1282 to read from and/or write to the shared data stored in the shared virtual memory 1282, as indicated in block 1560. Subsequently, in block 1562, the accelerator sled 1202 combines the outputs of the assigned tasks of the corresponding accelerator device 1222 to obtain an output of the job.
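For illustration, the sketch below mimics data sharing through a common memory region analogous to the shared virtual memory 1282: two concurrently executing tasks write to disjoint portions of one shared data set rather than exchanging messages. The multiprocessing shared-memory module is merely a stand-in for fabric-attached memory.

```python
# Illustrative sketch: concurrent tasks sharing one data set through a
# common memory region (an analogy for shared virtual memory 1282).
from multiprocessing import Process, shared_memory

def task(shm_name: str, start: int, stop: int, value: int):
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[start:stop] = bytes([value]) * (stop - start)  # write own slice
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=8)
    workers = [Process(target=task, args=(shm.name, 0, 4, 1)),
               Process(target=task, args=(shm.name, 4, 8, 2))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(bytes(shm.buf))  # b'\x01\x01\x01\x01\x02\x02\x02\x02'
    shm.close()
    shm.unlink()
```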
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
Example 1 includes a compute device comprising one or more accelerator devices; and a compute engine to determine a configuration of each accelerator device of the compute device, wherein the configuration is indicative of parallel execution features present in each accelerator device; receive, from a requester device remote from the compute device, a job to be accelerated; divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices as a function of a job analysis of the job and the configuration of each accelerator device; schedule the tasks to the one or more accelerator devices based on the job analysis; execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks; and combine task outputs from the accelerator devices that executed the tasks to obtain an output of the job.
Example 2 includes the subject matter of Example 1, and wherein the compute engine includes a micro-orchestrator logic unit.
Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to determine the configuration of each accelerator device comprises to determine, by the micro-orchestrator logic unit of the compute device, the configuration of each accelerator device.
Example 4 includes the subject matter of any of Examples 1-3, and wherein the compute engine is further to determine an availability of one or more of the accelerator devices, and wherein each of the accelerator devices is not a general purpose processor.
Example 5 includes the subject matter of any of Examples 1-4, and wherein to determine the availability of one or more of the accelerator devices comprises to determine one or more available kernels of each accelerator device.
Example 6 includes the subject matter of any of Examples 1-5, and wherein to schedule the tasks to the one or more accelerator devices comprises to schedule parallel execution of the tasks.
Example 7 includes the subject matter of any of Examples 1-6, and wherein to schedule the tasks to the one or more accelerator devices comprises to confirm that the one or more accelerator devices are available to simultaneously execute multiple tasks that share data.
Example 8 includes the subject matter of any of Examples 1-7, and wherein to execute the tasks comprises to concurrently execute two or more of the tasks on two or more of the accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 9 includes the subject matter of any of Examples 1-8, and wherein the compute engine is further to determine whether an authorization is required from an orchestrator server to execute the tasks on the compute device; transmit, in response to a determination that the authorization is required, the job analysis to the orchestrator server; receive an authorization from the orchestrator server; and receive the tasks to be accelerated.
Example 10 includes the subject matter of any of Examples 1-9, and wherein the compute device is a first compute device, and wherein to execute the tasks comprises to determine one or more accelerator devices of a second compute device to concurrently execute the tasks that share data; and execute one or more of the tasks on the one or more accelerator devices of the first compute device as one or more other tasks of the job are concurrently executed with one or more accelerator devices of the second compute device using a shared virtual memory.
Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine the one or more accelerator devices of the second compute device comprises to receive information regarding one or more accelerator devices of the second compute device from the orchestrator server.
Example 12 includes the subject matter of any of Examples 1-11, and wherein to execute the tasks comprises to execute the tasks simultaneously on the one or more accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 13 includes a method comprising determining, by a compute device, a configuration of each of one or more accelerator devices of the compute device, wherein the configuration is indicative of parallel execution features present in each accelerator device; receiving, from a requester device remote from the compute device and by the compute device, a job to be accelerated; dividing, by the compute device, the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices as a function of a job analysis of the job and the configuration of each accelerator device; scheduling, by the compute device, the tasks to the one or more accelerator devices based on the job analysis; executing, by the compute device, the tasks on the one or more accelerator devices for the parallelization of the multiple tasks; and combining, by the compute device, task outputs from the accelerator devices that executed the tasks to obtain an output of the job.
Example 14 includes the subject matter of Example 13, and wherein dividing the job as a function of the job analysis and the configuration of each accelerator device comprises performing, with a micro-orchestrator unit of the compute device, the job analysis.
Example 15 includes the subject matter of any of Examples 13 and 14, and wherein determining the configuration of each accelerator device comprises determining, by a micro-orchestrator logic unit of the compute device, the configuration of each accelerator device.
Example 16 includes the subject matter of any of Examples 13-15, and further including determining, by the compute device, an availability of one or more of the accelerator devices, wherein each of the accelerator devices is not a general purpose processor.
Example 17 includes the subject matter of any of Examples 13-16, and wherein determining the availability of one or more of the accelerator devices comprises determining one or more available kernels of each accelerator device.
Example 18 includes the subject matter of any of Examples 13-17, and wherein scheduling the tasks to the one or more accelerator devices comprises scheduling parallel execution of the tasks.
Example 19 includes the subject matter of any of Examples 13-18, and wherein scheduling the tasks to the one or more accelerator devices comprises confirming that the one or more accelerator devices are available to simultaneously execute multiple tasks that share data.
Example 20 includes the subject matter of any of Examples 13-19, and wherein executing the tasks comprises concurrently executing two or more of the tasks on two or more accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 21 includes the subject matter of any of Examples 13-20, and further including determining, by the compute device, whether an authorization is required from an orchestrator server to execute the tasks on the compute device; transmitting, by the compute device and in response to a determination that the authorization is required, the job analysis to the orchestrator server; receiving, by the compute device, an authorization from the orchestrator server; and receiving, by the compute device, the tasks to be accelerated.
Example 22 includes the subject matter of any of Examples 13-21, and wherein the compute device is a first compute device, and wherein executing the tasks comprises determining, by the first compute device, one or more accelerator devices of a second compute device that is remote to the first compute device to concurrently execute the tasks that share data; and executing one or more of the tasks on the one or more accelerator devices of the first compute device as one or more other tasks of the job are concurrently executed with one or more accelerator devices of the second compute device using a shared virtual memory.
Example 23 includes the subject matter of any of Examples 13-22, and wherein determining the one or more accelerator devices of the second compute device comprises receiving information regarding one or more accelerator devices of the second compute device from the orchestrator server.
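As a single-host stand-in for the split execution of Example 22, the sketch below uses Python's multiprocessing shared memory in place of the shared virtual memory spanning two compute devices; a real system would run the second half on a remote device, which this local sketch cannot show.

```python
from multiprocessing import Process, shared_memory


def run_task(shm_name, start, stop):
    # Each "accelerator" writes its task output in place in the shared range.
    shm = shared_memory.SharedMemory(name=shm_name)
    for i in range(start, stop):
        shm.buf[i] = (shm.buf[i] + 1) % 256
    shm.close()


if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=8)
    # The "local" device handles the first half of the tasks while the
    # "remote" device concurrently handles the second half, over the same
    # shared address range.
    local = Process(target=run_task, args=(shm.name, 0, 4))
    remote = Process(target=run_task, args=(shm.name, 4, 8))
    local.start(); remote.start(); local.join(); remote.join()
    print(bytes(shm.buf))  # combined job output: b'\x01' * 8
    shm.close(); shm.unlink()
```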
Example 24 includes the subject matter of any of Examples 13-23, and wherein executing the tasks comprises executing the tasks simultaneously on the one or more accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 25 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute device to perform the method of any of Examples 13-24.
Example 26 includes a compute device comprising means for performing the method of any of Examples 13-24.
Example 27 includes a compute device comprising one or more accelerator devices; and micro-orchestrator logic circuitry to determine a configuration of each accelerator device of the compute device, wherein the configuration is indicative of parallel execution features present in each accelerator device; receive, from a requester device remote from the compute device, a job to be accelerated; divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices as a function of a job analysis of the job and the configuration of each accelerator device; schedule the tasks to the one or more accelerator devices based on the job analysis; execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks; and combine task outputs from the accelerator devices that executed the tasks to obtain an output of the job.
Example 28 includes the subject matter of Example 27, and wherein the micro-orchestrator logic circuitry includes a micro-orchestrator logic unit.
Example 29 includes the subject matter of any of Examples 27 and 28, and wherein to determine the configuration of each accelerator device comprises to determine, by the micro-orchestrator logic unit of the compute device, the configuration of each accelerator device.
Example 30 includes the subject matter of any of Examples 27-29, and wherein the micro-orchestrator logic circuitry is further to determine an availability of one or more of the accelerator devices, wherein each of the accelerator devices is not a general purpose processor.
Example 31 includes the subject matter of any of Examples 27-30, and wherein to determine the availability of one or more of the accelerator devices comprises to determine one or more available kernels of each accelerator device.
Example 32 includes the subject matter of any of Examples 27-31, and wherein to schedule the tasks to the one or more accelerator devices comprises to schedule parallel execution of the tasks.
Example 33 includes the subject matter of any of Examples 27-32, and wherein to schedule the tasks to the one or more accelerator devices comprises to confirm that the one or more accelerator devices are available to simultaneously execute multiple tasks that share data.
Example 34 includes the subject matter of any of Examples 27-33, and wherein to execute the tasks comprises to concurrently execute two or more of the tasks on two or more of the accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 35 includes the subject matter of any of Examples 27-34, and wherein the micro-orchestrator logic circuitry is further to determine whether an authorization is required from an orchestrator server to execute the tasks on the compute device; transmit, in response to a determination that the authorization is required, the job analysis to the orchestrator server; receive an authorization from the orchestrator server; and receive the tasks to be accelerated.
Example 36 includes the subject matter of any of Examples 27-35, and wherein the compute device is a first compute device, and wherein to execute the tasks comprises to determine one or more accelerator devices of a second compute device that is remote from the first compute device to concurrently execute the tasks that share data; and execute one or more of the tasks on the one or more accelerator devices of the first compute device while one or more other tasks of the job are concurrently executed with one or more accelerator devices of the second compute device using a shared virtual memory.
Example 37 includes the subject matter of any of Examples 27-36, and wherein to determine the one or more accelerator devices of the second compute device comprises to receive information regarding one or more accelerator devices of the second compute device from the orchestrator server.
Example 38 includes the subject matter of any of Examples 27-37, and wherein to execute the tasks comprises to execute the tasks simultaneously on the one or more accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 39 includes a compute device comprising circuitry for determining a configuration of each of one or more accelerator devices of the compute device, wherein the configuration is indicative of parallel execution features present in each accelerator device; circuitry for receiving, from a requester device remote from the compute device, a job to be accelerated; means for dividing the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices as a function of a job analysis of the job and the configuration of each accelerator device; means for scheduling the tasks to the one or more accelerator devices based on the job analysis; circuitry for executing the tasks on the one or more accelerator devices for the parallelization of the multiple tasks; and means for combining task outputs from the accelerator devices that executed the tasks to obtain an output of the job.
Example 40 includes the subject matter of Example 39, and wherein the means for dividing the job as a function of the job analysis and the configuration of each accelerator device comprises means for performing the job analysis.
Example 41 includes the subject matter of any of Examples 39 and 40, and wherein the circuitry for determining the configuration of each accelerator device comprises micro-orchestrator logic circuitry for determining the configuration of each accelerator device.
Example 42 includes the subject matter of any of Examples 39-41, and further including circuitry for determining an availability of one or more of the accelerator devices, wherein each of the accelerator devices is not a general purpose processor.
Example 43 includes the subject matter of any of Examples 39-42, and wherein the circuitry for determining the availability of one or more of the accelerator devices comprises circuitry for determining one or more available kernels of each accelerator device.
Example 44 includes the subject matter of any of Examples 39-43, and wherein the means for scheduling the tasks to the one or more accelerator devices comprises means for scheduling parallel execution of the tasks.
Example 45 includes the subject matter of any of Examples 39-44, and wherein the means for scheduling the tasks to the one or more accelerator devices comprises means for confirming that the one or more accelerator devices are available to simultaneously execute multiple tasks that share data.
Example 46 includes the subject matter of any of Examples 39-45, and wherein the circuitry for executing the tasks comprises means for concurrently executing two or more of the tasks on two or more accelerator devices of the compute device with a high speed serial interface (HSSI).
Example 47 includes the subject matter of any of Examples 39-46, and further including means for determining whether an authorization is required from an orchestrator server to execute the tasks on the compute device; circuitry for transmitting, in response to a determination that the authorization is required, the job analysis to the orchestrator server; circuitry for receiving an authorization from the orchestrator server; and circuitry for receiving the tasks to be accelerated.
Example 48 includes the subject matter of any of Examples 39-47, and wherein the compute device is a first compute device, and wherein the circuitry for executing the tasks comprises means for determining one or more accelerator devices of a second compute device that is remote from the first compute device to concurrently execute the tasks that share data; and means for executing one or more of the tasks on the one or more accelerator devices of the first compute device while one or more other tasks of the job are concurrently executed with one or more accelerator devices of the second compute device using a shared virtual memory.
Example 49 includes the subject matter of any of Examples 39-48, and wherein the means for determining the one or more accelerator devices of the second compute device comprises means for receiving information regarding one or more accelerator devices of the second compute device from the orchestrator server.
Example 50 includes the subject matter of any of Examples 39-49, and wherein the circuitry for executing the tasks comprises circuitry for executing the tasks simultaneously on the one or more accelerator devices of the compute device with a high speed serial interface (HSSI).
Number | Date | Country | Kind |
---|---|---|---|
201741030632 | Aug 2017 | IN | national |
The present application is a continuation of U.S. patent application Ser. No. 17/321,186, filed May 14, 2021, which is a continuation of U.S. patent application Ser. No. 15/721,829, filed Sep. 30, 2017, now U.S. Pat. No. 10,963,176, which claims the benefit of Indian Provisional Patent Application No. 201741030632, filed Aug. 30, 2017, and of U.S. Provisional Patent Application No. 62/427,268, filed Nov. 29, 2016. The entire specifications of these applications are hereby incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
6085295 | Ekanadham et al. | Jul 2000 | A |
6104696 | Kadambi et al. | Aug 2000 | A |
6353885 | Herzi et al. | Mar 2002 | B1 |
6367018 | Jain | Apr 2002 | B1 |
7415022 | Shiri Kadambi et al. | Aug 2008 | B2 |
7739677 | Kekre et al. | Jun 2010 | B1 |
7835388 | Hu | Nov 2010 | B2 |
7962736 | Polyudov | Jun 2011 | B1 |
8248928 | Wang et al. | Aug 2012 | B1 |
8812765 | Dai et al. | Aug 2014 | B2 |
9026765 | Marshak et al. | May 2015 | B1 |
9042402 | Loganathan et al. | May 2015 | B1 |
9253055 | Nelke et al. | Feb 2016 | B2 |
9612767 | Huang et al. | Apr 2017 | B2 |
9733980 | Khan et al. | Aug 2017 | B1 |
9859918 | Gopal | Jan 2018 | B1 |
9929747 | Gopal | Mar 2018 | B2 |
9936613 | Adiletta | Apr 2018 | B2 |
9954552 | Gopal | Apr 2018 | B2 |
9973207 | Gopal | May 2018 | B2 |
10033404 | Cutter | Jul 2018 | B2 |
10034407 | Miller et al. | Jul 2018 | B2 |
10045098 | Adiletta et al. | Aug 2018 | B2 |
10070207 | Adiletta et al. | Sep 2018 | B2 |
10085358 | Adiletta et al. | Sep 2018 | B2 |
10091904 | Miller | Oct 2018 | B2 |
10116327 | Cutter | Oct 2018 | B2 |
10191684 | Gopal et al. | Jan 2019 | B2 |
10234833 | Ahuja | Mar 2019 | B2 |
10263637 | Gopal | Apr 2019 | B2 |
10268412 | Guilford | Apr 2019 | B2 |
10313769 | Miller | Jun 2019 | B2 |
10334334 | Miller | Jun 2019 | B2 |
10348327 | Adiletta | Jul 2019 | B2 |
10349152 | Adiletta | Jul 2019 | B2 |
10356495 | Adiletta | Jul 2019 | B2 |
10368148 | Kumar | Jul 2019 | B2 |
10390114 | Schmisseur | Aug 2019 | B2 |
10397670 | Gorius | Aug 2019 | B2 |
10411729 | Miller | Sep 2019 | B2 |
10448126 | Gilsdorf | Oct 2019 | B2 |
10461774 | Balle | Oct 2019 | B2 |
10469252 | Schmisseur | Nov 2019 | B2 |
10474460 | Adiletta | Nov 2019 | B2 |
10476670 | Schmisseur | Nov 2019 | B2 |
10489156 | Munoz | Nov 2019 | B2 |
10542333 | Miller | Jan 2020 | B2 |
10963176 | Balle et al. | Mar 2021 | B2 |
10990309 | Bernat et al. | Apr 2021 | B2 |
11029870 | Balle et al. | Jun 2021 | B2 |
11128553 | Adiletta et al. | Sep 2021 | B2 |
11630702 | Kumar et al. | Apr 2023 | B2 |
20030026525 | Alvarez | Feb 2003 | A1 |
20030028594 | Laschkewitsch et al. | Feb 2003 | A1 |
20040205304 | McKenney et al. | Oct 2004 | A1 |
20050135231 | Bellovin | Jun 2005 | A1 |
20060036719 | Bodin et al. | Feb 2006 | A1 |
20060059492 | Fellenstein et al. | Mar 2006 | A1 |
20060168337 | Stahl et al. | Jul 2006 | A1 |
20060184670 | Beeson et al. | Aug 2006 | A1 |
20060239270 | Yao et al. | Oct 2006 | A1 |
20070147400 | Hu | Jun 2007 | A1 |
20080075071 | Beshai | Mar 2008 | A1 |
20080229318 | Franke | Sep 2008 | A1 |
20090198792 | Wittenschlaeger | Aug 2009 | A1 |
20100191823 | Archer et al. | Jul 2010 | A1 |
20110185125 | Jain et al. | Jul 2011 | A1 |
20110228767 | Singla et al. | Sep 2011 | A1 |
20110296231 | Dake | Dec 2011 | A1 |
20120054770 | Krishnamurthy et al. | Mar 2012 | A1 |
20120099863 | Xu et al. | Apr 2012 | A1 |
20120207139 | Husted et al. | Aug 2012 | A1 |
20120230343 | Schrum, Jr. | Sep 2012 | A1 |
20120303885 | Jeddeloh | Nov 2012 | A1 |
20130159638 | Koinuma et al. | Jun 2013 | A1 |
20130179485 | Chapman et al. | Jul 2013 | A1 |
20130232495 | Rossbach et al. | Sep 2013 | A1 |
20130297769 | Chang et al. | Nov 2013 | A1 |
20130325998 | Hormuth et al. | Dec 2013 | A1 |
20140012961 | Pope | Jan 2014 | A1 |
20140047272 | Breternitz et al. | Feb 2014 | A1 |
20140047341 | Breternitz et al. | Feb 2014 | A1 |
20140359044 | Davis et al. | Dec 2014 | A1 |
20150007182 | Rossbach et al. | Jan 2015 | A1 |
20150229529 | Engebretsen | Aug 2015 | A1 |
20150281065 | Liljenstolpe | Oct 2015 | A1 |
20150333824 | Swinkels et al. | Nov 2015 | A1 |
20150334867 | Faw et al. | Nov 2015 | A1 |
20150381426 | Roese et al. | Dec 2015 | A1 |
20160050194 | Rachmiel | Feb 2016 | A1 |
20160087847 | Krithivas et al. | Mar 2016 | A1 |
20160118121 | Kelly et al. | Apr 2016 | A1 |
20160127191 | Nair | May 2016 | A1 |
20160147592 | Guddeti | May 2016 | A1 |
20160162281 | Hokiyama | Jun 2016 | A1 |
20160164739 | Skalecki | Jun 2016 | A1 |
20160231939 | Cannata et al. | Aug 2016 | A1 |
20160234580 | Clarke et al. | Aug 2016 | A1 |
20160306677 | Hira et al. | Oct 2016 | A1 |
20170046179 | Teh et al. | Feb 2017 | A1 |
20170070431 | Nidumolu et al. | Mar 2017 | A1 |
20170093756 | Bernat et al. | Mar 2017 | A1 |
20170116004 | Devegowda et al. | Apr 2017 | A1 |
20170150621 | Breakstone et al. | May 2017 | A1 |
20170185786 | Ylinen et al. | Jun 2017 | A1 |
20170199746 | Nguyen et al. | Jul 2017 | A1 |
20170223436 | Moynihan et al. | Aug 2017 | A1 |
20170257970 | Alleman et al. | Sep 2017 | A1 |
20170279705 | Lin et al. | Sep 2017 | A1 |
20170315798 | Shivanna et al. | Nov 2017 | A1 |
20170317945 | Guo et al. | Nov 2017 | A1 |
20170329860 | Jones | Nov 2017 | A1 |
20180014306 | Adiletta | Jan 2018 | A1 |
20180014757 | Kumar | Jan 2018 | A1 |
20180017700 | Adiletta | Jan 2018 | A1 |
20180024578 | Ahuja | Jan 2018 | A1 |
20180024739 | Schmisseur | Jan 2018 | A1 |
20180024740 | Miller | Jan 2018 | A1 |
20180024752 | Miller | Jan 2018 | A1 |
20180024756 | Miller | Jan 2018 | A1 |
20180024764 | Miller | Jan 2018 | A1 |
20180024771 | Miller | Jan 2018 | A1 |
20180024775 | Miller | Jan 2018 | A1 |
20180024776 | Miller | Jan 2018 | A1 |
20180024838 | Nachimuthu | Jan 2018 | A1 |
20180024860 | Balle | Jan 2018 | A1 |
20180024861 | Balle | Jan 2018 | A1 |
20180024864 | Wilde | Jan 2018 | A1 |
20180024867 | Gilsdorf | Jan 2018 | A1 |
20180024932 | Nachimuthu et al. | Jan 2018 | A1 |
20180024947 | Miller | Jan 2018 | A1 |
20180024957 | Nachimuthu | Jan 2018 | A1 |
20180024958 | Nachimuthu | Jan 2018 | A1 |
20180024960 | Wagh | Jan 2018 | A1 |
20180025299 | Kumar | Jan 2018 | A1 |
20180026652 | Cutter | Jan 2018 | A1 |
20180026653 | Cutter | Jan 2018 | A1 |
20180026654 | Gopal | Jan 2018 | A1 |
20180026655 | Gopal | Jan 2018 | A1 |
20180026656 | Gopal | Jan 2018 | A1 |
20180026800 | Munoz | Jan 2018 | A1 |
20180026835 | Nachimuthu | Jan 2018 | A1 |
20180026849 | Guim | Jan 2018 | A1 |
20180026851 | Adiletta | Jan 2018 | A1 |
20180026868 | Guim | Jan 2018 | A1 |
20180026882 | Gorius | Jan 2018 | A1 |
20180026904 | Van De Groenendaal | Jan 2018 | A1 |
20180026905 | Balle et al. | Jan 2018 | A1 |
20180026906 | Balle | Jan 2018 | A1 |
20180026907 | Miller | Jan 2018 | A1 |
20180026908 | Nachimuthu | Jan 2018 | A1 |
20180026910 | Balle et al. | Jan 2018 | A1 |
20180026912 | Guim | Jan 2018 | A1 |
20180026913 | Balle | Jan 2018 | A1 |
20180026918 | Kumar | Jan 2018 | A1 |
20180027055 | Balle et al. | Jan 2018 | A1 |
20180027057 | Balle | Jan 2018 | A1 |
20180027058 | Balle et al. | Jan 2018 | A1 |
20180027059 | Miller | Jan 2018 | A1 |
20180027060 | Metsch | Jan 2018 | A1 |
20180027062 | Bernat | Jan 2018 | A1 |
20180027063 | Nachimuthu | Jan 2018 | A1 |
20180027066 | Van De Groenendaal et al. | Jan 2018 | A1 |
20180027067 | Guim | Jan 2018 | A1 |
20180027093 | Guim | Jan 2018 | A1 |
20180027312 | Adiletta | Jan 2018 | A1 |
20180027313 | Adiletta | Jan 2018 | A1 |
20180027376 | Kumar | Jan 2018 | A1 |
20180027679 | Schmisseur | Jan 2018 | A1 |
20180027680 | Kumar | Jan 2018 | A1 |
20180027682 | Adiletta | Jan 2018 | A1 |
20180027684 | Miller | Jan 2018 | A1 |
20180027685 | Miller | Jan 2018 | A1 |
20180027686 | Adiletta | Jan 2018 | A1 |
20180027687 | Adiletta | Jan 2018 | A1 |
20180027688 | Adiletta | Jan 2018 | A1 |
20180027703 | Adiletta | Jan 2018 | A1 |
20180067857 | Wang et al. | Mar 2018 | A1 |
20180077235 | Nachimuthu et al. | Mar 2018 | A1 |
20180150240 | Bernat | May 2018 | A1 |
20180150256 | Kumar | May 2018 | A1 |
20180150293 | Nachimuthu | May 2018 | A1 |
20180150298 | Balle | May 2018 | A1 |
20180150299 | Balle | May 2018 | A1 |
20180150330 | Bernat | May 2018 | A1 |
20180150334 | Bernat et al. | May 2018 | A1 |
20180150343 | Bernat | May 2018 | A1 |
20180150372 | Nachimuthu | May 2018 | A1 |
20180150391 | Mitchel | May 2018 | A1 |
20180150419 | Steinmacher-Burow | May 2018 | A1 |
20180150471 | Gopal | May 2018 | A1 |
20180150644 | Khanna | May 2018 | A1 |
20180151975 | Aoki | May 2018 | A1 |
20180152200 | Guilford | May 2018 | A1 |
20180152201 | Gopal | May 2018 | A1 |
20180152202 | Gopal | May 2018 | A1 |
20180152317 | Chang | May 2018 | A1 |
20180152366 | Cornett | May 2018 | A1 |
20180152383 | Burres | May 2018 | A1 |
20180152540 | Niell | May 2018 | A1 |
20180205392 | Gopal | Jul 2018 | A1 |
20180266510 | Gopal | Sep 2018 | A1 |
20190014396 | Adiletta | Jan 2019 | A1 |
20190021182 | Adiletta | Jan 2019 | A1 |
20190034102 | Miller | Jan 2019 | A1 |
20190034383 | Schmisseur | Jan 2019 | A1 |
20190034490 | Yap | Jan 2019 | A1 |
20190035483 | Schmisseur | Jan 2019 | A1 |
20190042090 | Raghunath | Feb 2019 | A1 |
20190042091 | Raghunath | Feb 2019 | A1 |
20190042122 | Schmisseur | Feb 2019 | A1 |
20190042126 | Sen | Feb 2019 | A1 |
20190042136 | Nachimuthu | Feb 2019 | A1 |
20190042234 | Bernat | Feb 2019 | A1 |
20190042277 | Nachimuthu | Feb 2019 | A1 |
20190042408 | Schmisseur | Feb 2019 | A1 |
20190042611 | Yap | Feb 2019 | A1 |
20190044809 | Willis | Feb 2019 | A1 |
20190044849 | Ganguli | Feb 2019 | A1 |
20190044859 | Sundar | Feb 2019 | A1 |
20190052457 | Connor | Feb 2019 | A1 |
20190062053 | Jensen | Feb 2019 | A1 |
20190065083 | Sen | Feb 2019 | A1 |
20190065112 | Schmisseur | Feb 2019 | A1 |
20190065172 | Nachimuthu | Feb 2019 | A1 |
20190065212 | Kumar | Feb 2019 | A1 |
20190065231 | Schmisseur | Feb 2019 | A1 |
20190065253 | Bernat | Feb 2019 | A1 |
20190065260 | Balle | Feb 2019 | A1 |
20190065261 | Narayan | Feb 2019 | A1 |
20190065281 | Bernat et al. | Feb 2019 | A1 |
20190065290 | Custodio | Feb 2019 | A1 |
20190065401 | Dormitzer | Feb 2019 | A1 |
20190065415 | Nachimuthu | Feb 2019 | A1 |
20190067848 | Aoki | Feb 2019 | A1 |
20190068444 | Grecco | Feb 2019 | A1 |
20190068464 | Bernat | Feb 2019 | A1 |
20190068466 | Chagam | Feb 2019 | A1 |
20190068509 | Hyatt | Feb 2019 | A1 |
20190068521 | Kumar | Feb 2019 | A1 |
20190068523 | Chagam | Feb 2019 | A1 |
20190068693 | Bernat | Feb 2019 | A1 |
20190068696 | Sen | Feb 2019 | A1 |
20190068698 | Kumar | Feb 2019 | A1 |
20190069433 | Balle | Feb 2019 | A1 |
20190069434 | Aoki | Feb 2019 | A1 |
20190129874 | Huang et al. | May 2019 | A1 |
20190196824 | Liu | Jun 2019 | A1 |
20190307014 | Adiletta | Oct 2019 | A1 |
20190342642 | Adiletta | Nov 2019 | A1 |
20190342643 | Adiletta | Nov 2019 | A1 |
20190387291 | Adiletta | Dec 2019 | A1 |
20200007511 | Van de Groenendaal et al. | Jan 2020 | A1 |
20200226027 | Krasner et al. | Jul 2020 | A1 |
20200241926 | Guim Bernat | Jul 2020 | A1 |
20200341810 | Ranganathan et al. | Oct 2020 | A1 |
20210209035 | Galbi et al. | Jul 2021 | A1 |
20210377140 | Adiletta | Dec 2021 | A1 |
Number | Date | Country |
---|---|---|
1468007 | Jan 2004 | CN |
1816003 | Aug 2006 | CN |
105721358 | Jun 2016 | CN |
105979007 | Sep 2016 | CN |
2002158733 | May 2002 | JP |
2018102414 | Jun 2018 | WO |
2018111228 | Jun 2018 | WO |
2020190801 | Sep 2020 | WO |
Entry |
---|
International Preliminary Report on Patentability for PCT Application No. PCT/US2017/063765, dated Jun. 4, 2019. |
International Search Report and Written Opinion for PCT Application No. PCT/US2017/063756, dated Feb. 23, 2018. |
International Search Report and Written Opinion for PCT Application No. PCT/US2021/051801, dated Jan. 3, 2022. |
Notice of Allowance for U.S. Appl. No. 16/344,582, dated Jan. 12, 2021. |
Notice of Allowance for U.S. Appl. No. 17/246,388, dated Dec. 13, 2022. |
Notice of Allowance for U.S. Appl. No. 17/404,749, dated Nov. 9, 2022. |
Office Action for U.S. Appl. No. 15/396,014 dated Nov. 4, 2019. |
Office Action for U.S. Appl. No. 15/396,014, dated May 10, 2022. |
Office Action for U.S. Appl. No. 15/396,014, dated May 14, 2020. |
Office Action for U.S. Appl. No. 15/396,014, dated Nov. 3, 2022. |
Office Action for U.S. Appl. No. 16/344,582, dated Jul. 22, 2020. |
Office Action for U.S. Appl. No. 16/513,345, dated Jan. 31, 2020. |
Office Action for U.S. Appl. No. 17/221,541, dated Oct. 25, 2022. |
Office Action for U.S. Appl. No. 17/246,388, dated Jul. 21, 2022. |
“Directory-Based Cache Coherence”, Parallel Computer Architecture and Programming, CMU 15-418/15-618, Spring 2019. |
“Secure In-Field Firmware Updates for MSP MCUs”, Texas Instruments, Application Report, Nov. 2015. |
Denneman, Frank, “Numa Deep Dive Part 3: Cache Coherency”, https://frankdenneman.nl/2016/07/11/numa-deep-dive-part-3-cache-coherency/, Jul. 11, 2016. |
Gorman, Mel, et al., “Optimizing Linux for AMD EPYC 7002 Series Processors with SUSE Linux Enterprise 15 SP1”, SUSE Best Practices, Nov. 2019. |
Ronciak, John A., et al., “Page-Flip Technology for use within the Linux Networking Stack”, Proceedings of the Linux Symposium, vol. Two, Jul. 2004. |
Shade, L.K., “Implementing Secure Remote Firmware Updates”, Embedded Systems Conference Silicon Valley, 2011, May 2011. |
Artail et al. “Speedy Cloud: Cloud Computing with Support for Hardware Acceleration Services”, 2017 IEEE, pp. 850-865. |
Asiatici et al. “Virtualized Execution Runtime for FPGA Accelerators in the Cloud”, 2017 IEEE, pp. 1900-1910. |
Caulfield et al. "A Cloud-Scale Acceleration Architecture", Microsoft Corp., Oct. 2016, 13 pages. |
Diamantopoulos et al. “High-level Synthesizable Dataflow Map Reduce Accelerator for FPGA-coupled Data Centers”, 2015 IEEE, pp. 1-8. |
Ding et al. “A Unified OpenCL-flavor Programming Model with Scalable Hybrid Hardware Platform on FPGAs”, 2014 IEEE, 7 pages. |
Fahmy et al. “Virtualized FPGA Accelerators for Efficient Cloud Computing”, 2015 IEEE, pp. 430-435. |
Non-Published commonly owned U.S. Appl. No. 17/214,605, filed Mar. 26, 2021, 75 pages, Intel Corporation. |
Non-Published commonly owned U.S. Appl. No. 17/221,541, filed Apr. 2, 2021, 60 pages, Intel Corporation. |
Summons to attend oral proceedings pursuant to Rule 115(1) for European Patent Application No. 18191345.0, dated Aug. 23, 2022. |
Communication pursuant to Article 94(3) for European Patent Application No. 18191345.0, dated Apr. 16, 2021. |
Corrected Notice of Allowability for U.S. Appl. No. 17/015,479, dated Jun. 3, 2021. |
Extended European Search Report for European Patent Application No. 18191345.0, dated Feb. 11, 2019. |
Extended European Search Report for European Patent Application No. 20217841.4, dated Apr. 16, 2021. |
Final Office Action for U.S. Appl. No. 15/719,770, dated Jun. 24, 2020. |
International Preliminary Report on Patentability for PCT Application No. PCT/US2017/038552, dated Jan. 22, 2019. |
International Search Report and Written Opinion for PCT Application No. PCT/US2017/038552, dated Oct. 11, 2017, 3 pages. |
Notice of Allowance for Chinese Patent Application No. 201780038785.3, dated Jul. 13, 2022. |
Notice of Allowance for U.S. Appl. No. 15/395,203, dated Apr. 10, 2018. |
Notice of Allowance for U.S. Appl. No. 15/719,770, dated Jun. 17, 2021. |
Notice of Allowance for U.S. Appl. No. 15/719,770, dated Mar. 1, 2021. |
Notice of Allowance for U.S. Appl. No. 15/721,829, dated Jan. 22, 2021. |
Notice of Allowance for U.S. Appl. No. 15/721,829, dated Sep. 11, 2020. |
Notice of Allowance for U.S. Appl. No. 16/055,602, dated Feb. 13, 2020. |
Notice of Allowance for U.S. Appl. No. 16/055,602, dated Oct. 28, 2019. |
Notice of Allowance for U.S. Appl. No. 16/513,345, dated Jun. 5, 2020. |
Notice of Allowance for U.S. Appl. No. 16/513,345, dated May 19, 2020. |
Notice of Allowance for U.S. Appl. No. 16/513,371, dated Jan. 31, 2020. |
Notice of Allowance for U.S. Appl. No. 16/513,371, dated Jun. 5, 2020. |
Notice of Allowance for U.S. Appl. No. 16/513,371, dated May 20, 2020. |
Notice of Allowance for U.S. Appl. No. 17/015,479, dated May 26, 2021. |
Office Action for U.S. Appl. No. 15/395,203, dated Dec. 1, 2017. |
Office Action for U.S. Appl. No. 15/719,770, dated Dec. 27, 2019. |
Office Action for U.S. Appl. No. 15/721,829 dated Dec. 23, 2019. |
Office Action for U.S. Appl. No. 15/721,829, dated May 13, 2020. |
Office Action for U.S. Appl. No. 16/055,602 dated Mar. 27, 2019. |
Office Action for U.S. Appl. No. 16/055,602, dated Aug. 15, 2019. |
Office Action for U.S. Appl. No. 17/015,479, dated Feb. 12, 2021. |
Office Action for U.S. Appl. No. 17/404,749, dated Jul. 27, 2022. |
Translation of Office Action and Search Report for Chinese Patent Application No. 201780038785.3, dated Feb. 18, 2022. |
Burdeniuk, et al., “An Event-Assisted Sequencer to Accelerate Matrix Algorithms”, 2010 IEEE, pp. 158-163. |
Fahmy, Suhaib, et al., "Virtualized FPGA Accelerators for Efficient Cloud Computing", 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (Cloudcom), IEEE, Nov. 30, 2015, pp. 430-435. |
Office Action for U.S. Appl. No. 17/221,541, dated Mar. 15, 2023. |
Office Action from European Patent Application No. 20217841.4 dated May 25, 2023, 9 pgs. |
Non-Final Office Action from U.S. Appl. No. 17/221,541 dated Aug. 1, 2023, 15 pgs. |
Non-Final Office Action from U.S. Appl. No. 18/103,739 dated Jun. 29, 2023, 11 pgs. |
Non-Final Office Action from U.S. Appl. No. 18/116,957 dated Jun. 30, 2023, 27 pgs. |
Office Action from Chinese Patent Application No. 202110060921.7 dated Sep. 11, 2023, 5 pgs. |
Number | Date | Country | |
---|---|---|---|
20220179575 A1 | Jun 2022 | US |
Number | Date | Country | |
---|---|---|---|
62427268 | Nov 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17321186 | May 2021 | US |
Child | 17681025 | US | |
Parent | 15721829 | Sep 2017 | US |
Child | 17321186 | US |