This disclosure relates to devices for processing packets of information, for example, in the fields of networking and storage.
Conventional computing devices typically include components such as a central processing unit (CPU), a graphics processing unit (GPU), random access memory, storage, and a network interface card (NIC), such as an Ethernet interface, to connect the computing device to a network. Typical computing devices are processor centric such that overall computing responsibility and control is centralized with the CPU. As such, the CPU performs processing tasks, memory management tasks such as shifting data between local caches within the CPU, the random access memory, and the storage, and networking tasks such as constructing and maintaining networking stacks, and sending and receiving data from external devices or networks. Furthermore, the CPU is also tasked with handling interrupts, e.g., from user interface devices. Demands placed on the CPU have continued to increase over time, although performance improvements in development of new CPUs have decreased over time.
General purpose CPUs are normally not designed for high-capacity network and storage workloads, which are typically packetized. In general, CPUs are relatively poor at performing packet stream processing, because such traffic is fragmented in time and does not cache well. Nevertheless, server devices typically use CPUs to process packet streams.
As one example, CPUs modeled on the x86 architecture encounter inefficiencies in various areas, including interfacing to hardware (e.g., interrupts, completions, doorbells, and other PCI-e communication overhead), software layering (e.g., kernel to user switching cost), locking and synchronization (e.g., overhead of protection of state and serialization of access at various processing steps), buffer management (e.g., the load on CPU and memory of allocating and freeing memory and meta-data, as well as managing and processing buffer lists), packet processing (e.g., costs in interrupts, thread scheduling, managing hardware queues, and maintaining linked lists), protocol processing (e.g., access control lists (ACL), flow lookup, header parsing, state checking, and manipulation for transport protocols), memory systems (e.g., data copy, memory, and CPU bandwidth consumption), and cache effects (e.g., cache pollution due to volume of non-cacheable data).
In general, this disclosure describes a new processing architecture that utilizes a data processing unit (DPU). Unlike conventional compute models that are centered around a central processing unit (CPU), example implementations described herein leverage a DPU that is specially designed and optimized for a data-centric computing model in which the data processing tasks are centered around, and the primary responsibility of, the DPU. The DPU may be viewed as a highly programmable, high-performance input/output (I/O) and data-processing hub designed to aggregate and process network and storage (e.g., solid state drive (SSD)) I/O to and from multiple other components and/or devices. This frees resources of the CPU, if present, for computing-intensive tasks.
For example, various data processing tasks, such as networking, security, and storage, as well as related work acceleration, distribution and scheduling, and other such tasks, are the domain of the DPU. In some cases, an application processor (e.g., a separate processing device, server, storage device or even a local CPU and/or local graphics processing unit (GPU) of the compute node hosting the DPU) may programmatically interface with the DPU to configure the DPU as needed and to offload any data-processing intensive tasks. In this manner, an application processor can reduce its processing load, such that the application processor can perform those computing tasks for which the application processor is well suited, and offload data-focused tasks for which the application processor may not be well suited (such as networking, storage, and the like) to the DPU.
As described herein, the DPU may be optimized to perform input and output (I/O) tasks, such as storage and retrieval of data to and from storage devices (such as solid state drives), networking, and the like. For example, the DPU may be configured to execute a large number of data I/O processing tasks relative to a number of instructions that are processed. As various examples, the DPU may be provided as an integrated circuit mounted on a motherboard of a compute node (e.g., computing device or compute appliance) or a storage node, installed on a card connected to the motherboard, such as via a Peripheral Component Interconnect-Express (PCI-e) bus, or the like. Additionally, storage devices (such as SSDs) may be coupled to and managed by the DPU via, for example, the PCI-e bus (e.g., on separate cards). The DPU may support one or more high-speed network interfaces, such as Ethernet ports, without the need for a separate network interface card (NIC), and may include programmable hardware specialized for network traffic.
The DPU may be highly programmable such that the DPU may expose hardware primitives for selecting and programmatically configuring data processing operations, allowing the CPU to offload various data processing tasks to the DPU. The DPU may be optimized for these processing tasks as well. For example, the DPU may include hardware implementations of high-performance data processing tasks, such as cryptography, compression (including decompression), regular expression processing, lookup engines, or the like.
In some cases, the data processing unit may include a coherent cache memory implemented in circuitry, a non-coherent buffer memory implemented in circuitry, and a plurality of processing cores implemented in circuitry, each connected to the coherent cache memory and the non-coherent buffer memory. In other cases, the data processing unit may include two or more processing clusters, each of the processing clusters comprising a coherent cache memory implemented in circuitry, a non-coherent buffer memory implemented in circuitry, and a plurality of processing cores implemented in circuitry, each connected to the coherent cache memory and the non-coherent buffer memory. In either case, each of the processing cores may be programmable using a high-level programming language, e.g., C, C++, or the like.
In one example, this disclosure is directed to a device comprising one or more storage devices, and a data processing unit communicatively coupled to the storage devices. The data processing unit comprises a networking unit configured to control input and output of data between the data processing unit and a network, a plurality of programmable processing cores configured to perform processing tasks on the data, and one or more host units configured to at least one of control input and output of the data between the data processing unit and one or more application processors or control storage of the data with the storage devices.
In another example, this disclosure is directed to a system comprising a rack holding a plurality of devices that each includes one or more storage devices and at least one data processing unit communicatively coupled to the storage devices. The data processing unit comprises a networking unit configured to control input and output of data between the data processing unit and a network, a plurality of programmable processing cores configured to perform processing tasks on the data, and one or more host units configured to at least one of control input and output of the data between the data processing unit and one or more application processors or control storage of the data with the storage devices.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
In some examples, data center 10 may represent one of many geographically distributed network data centers. In the example of
This disclosure describes a new processing architecture in which a data processing unit (DPU) is utilized within one or more nodes. Unlike conventional compute models that are centered around a central processing unit (CPU), example implementations described herein leverage a DPU that is specially designed and optimized for a data-centric computing model in which the data processing tasks are centered around, and the primary responsibility of, the DPU. The DPU may be viewed as a highly programmable, high-performance input/output (I/O) and data-processing hub designed to aggregate and process network and storage (e.g., solid state drive (SSD)) I/O to and from multiple other components and/or devices.
In the illustrated example of
In accordance with the techniques described in this disclosure, one or more of the devices held in CPU rack 20, GPU rack 22, and/or DPU rack 24 may include DPUs. These DPUs, for example, may be responsible for various data processing tasks, such as networking, security, and storage, as well as related work acceleration, distribution and scheduling, and other such tasks. In some cases, the DPUs may be used in conjunction with application processors to offload any data-processing intensive tasks and free the application processors for computing-intensive tasks. In other cases, where control plane tasks are relatively minor compared to the data-processing intensive tasks, the DPUs may take the place of the application processors.
For example, as further explained below, CPU rack 20 hosts a number of CPU blades 21 or other compute nodes that are designed for providing a high-speed execution environment. That is, each CPU blade may contain a number of multi-core processors specially tailored to provide high-performance application execution. Similarly, GPU rack 22 may host a number of GPU blades 23 or other compute nodes that are designed to operate under the direction of a CPU or a DPU for performing complex mathematical and graphical operations better suited for GPUs. SSD rack 26 may host a number of SSD blades 27 or other storage nodes that contain permanent storage devices designed for storage and retrieval of data.
In general, in accordance with the techniques described herein, various compute nodes within data center 10, such as any of CPU blades 21, GPU blades 23, and DPU blades 25, may include DPUs to perform data centric tasks within data center 10. In addition, various storage nodes within data center 10, such as any of SSD blades 27, may interact with DPUs within CPU blades 21, GPU blades 23, or DPU blades 25 to store data for the data centric tasks performed by the DPUs. As described herein, the DPU is optimized to perform input and output (I/O) tasks, such as storage and retrieval of data to and from storage devices (such as SSDs), networking, and the like. For example, the DPU may be configured to execute a large number of data I/O processing tasks relative to a number of instructions that are processed. The DPU may support one or more high-speed network interfaces, such as Ethernet ports, without the need for a separate network interface card (NIC), and may include programmable hardware specialized for network traffic. The DPU may be highly programmable such that the DPU may expose hardware primitives for selecting and programmatically configuring data processing operations. The DPU may be optimized for these processing tasks as well. For example, the DPU may include hardware implementations of high-performance data processing tasks, such as cryptography, compression (and decompression), regular expression processing, lookup engines, or the like.
In the example shown in
One or more of the devices in the different racks 20, 22, 24, or 26 may be configured to operate as storage systems and application servers for data center 10. For example, CPU rack 20 holds a plurality of CPU blades (“CPUs A-N”) 21 that each includes at least a CPU. One or more of CPU blades 21 may include a CPU, a DPU, and one or more storage devices, e.g., SSDs, communicatively coupled via PCI-e links or buses. In this implementation, the DPU is configured to retrieve data from the storage devices on behalf of the CPU, store data to the storage devices on behalf of the CPU, and retrieve data from network 7 on behalf of the CPU. One or more of CPU blades 21 may also include a GPU communicatively coupled to at least the DPU. In this case, the DPU is also configured to send offloaded processing tasks (e.g., graphics intensive processing tasks, or other tasks that may benefit from the highly parallel processing nature of a graphics processing unit) to the GPU. An example implementation of one of CPU blades 21 is described in more detail below with respect to compute node 100A of
In some examples, at least some of CPU blades 21 may not include their own DPUs, but instead are communicatively coupled to a DPU on another one of CPU blades 21. In other words, one DPU may be configured to control I/O and other data processing tasks for two or more CPUs on different ones of CPU blades 21. In still other examples, at least some of CPU blades 21 may not include their own DPUs, but instead are communicatively coupled to a DPU on one of DPU blades 25 held in DPU rack 24. In this way, the DPU may be viewed as a building block for building and scaling out data centers, such as data center 10.
As another example, GPU rack 22 holds a plurality of GPU blades (“GPUs A-M”) 23 that each includes at least a GPU. One or more of GPU blades 23 may include a GPU, a DPU, and one or more storage devices, e.g., SSDs, communicatively coupled via PCI-e links or buses. In this implementation, the DPU is configured to control input and output of data with network 7, feed the data from at least one of network 7 or the storage devices to the GPU for processing, and control storage of the data with the storage devices. An example implementation of one of GPU blades 23 is described in more detail below with respect to compute node 100B of
In some examples, at least some of GPU blades 23 may not include their own DPUs, but instead are communicatively coupled to a DPU on another one of GPU blades 23. In other words, one DPU may be configured to control I/O tasks to feed data to two or more GPUs on different ones of GPU blades 23. In still other examples, at least some of GPU blades 23 may not include their own DPUs, but instead are communicatively coupled to a DPU on one of DPU blades 25 held in DPU rack 24.
As a further example, DPU rack 24 holds a plurality of DPU blades (“DPUs A-X”) 25 that each includes at least a DPU. One or more of DPU blades 25 may include a DPU and one or more storage devices, e.g., SSDs, communicatively coupled via PCI-e links or buses such that DPU blades 25 may alternatively be referred to as “storage blades.” In this implementation, the DPU is configured to control input and output of data with network 7, perform programmable processing tasks on the data, and control storage of the data with the storage devices. An example implementation of one of DPU blades 25 is described in more detail below with respect to compute node 101 of
As illustrated in
In general, DPUs may be included on or communicatively coupled to any of CPU blades 21, GPU blades 23, DPU blades 25, and/or SSD blades 27 to provide computation services and storage facilities for applications and data associated with customers 11. In this way, the DPU may be viewed as a building block for building and scaling out data centers, such as data center 10.
In the illustrated example of
The DPUs or any of the devices within racks 20, 22, 24, and 26 that include at least one DPU may also be referred to as access nodes. In other words, the term DPU may be used herein interchangeably with the term access node. As access nodes, the DPUs may utilize switch fabric 14 to provide full mesh (any-to-any) interconnectivity such that any of the devices in racks 20, 22, 24, 26 may communicate packet data for a given packet flow to any other of the devices using any of a number of parallel data paths within the data center 10. For example, the DPUs may be configured to spray individual packets for packet flows between the DPUs and across some or all of the multiple parallel data paths in the data center switch fabric 14 and reorder the packets for delivery to the destinations so as to provide full mesh connectivity.
Although racks 20, 22, 24, and 26 are described in
Additional example details of various example access nodes are described in U.S. Provisional Patent Application No. 62/559,021, filed Sep. 15, 2017, entitled “Access Node for Data Centers,” the entire content of which is incorporated herein by reference. More details on data center network architectures and interconnected access nodes are available in U.S. patent application Ser. No. 15/939,227, filed Mar. 28, 2018, entitled “Non-Blocking Any-to-Any Data Center Network with Packet Spraying Over Multiple Alternate Data Paths,” the entire content of which is incorporated herein by reference.
A new data transmission protocol referred to as a Fabric Control Protocol (FCP) may be used by the different operational networking components of any of the DPUs of the devices within racks 20, 22, 24, 26 to facilitate communication of data across switch fabric 14. FCP is an end-to-end admission control protocol in which, in one example, a sender explicitly requests permission from a receiver to transfer a certain number of bytes of payload data. In response, the receiver issues a grant based on its buffer resources, QoS, and/or a measure of fabric congestion. In general, FCP enables spray of packets of a flow to all paths between a source and a destination node, and may provide resilience against request/grant packet loss, adaptive and low latency fabric implementations, fault recovery, reduced or minimal protocol overhead cost, support for unsolicited packet transfer, support for FCP capable/incapable nodes to coexist, flow-aware fair bandwidth distribution, transmit buffer management through adaptive request window scaling, receive buffer occupancy based grant management, improved end-to-end QoS, security through encryption and end-to-end authentication, and/or improved ECN marking support. More details on the FCP are available in U.S. Provisional Patent Application No. 62/566,060, filed Sep. 29, 2017, entitled “Fabric Control Protocol for Data Center Networks with Packet Spraying Over Multiple Alternate Data Paths,” the entire content of which is incorporated herein by reference.
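To make the request/grant exchange concrete, the following C sketch models a receiver-side admission decision. The message layouts, field names, and the buffer/congestion arithmetic are illustrative assumptions, not the FCP wire format defined in the referenced application.

```c
/*
 * Illustrative sketch only: hypothetical message layouts for an FCP-style
 * request/grant exchange; the field names and sizes are assumptions made
 * here for explanation.
 */
#include <stdint.h>
#include <stdio.h>

struct fcp_request {           /* sender -> receiver */
    uint32_t flow_id;          /* identifies the packet flow */
    uint32_t request_bytes;    /* payload bytes the sender wants to transfer */
};

struct fcp_grant {             /* receiver -> sender */
    uint32_t flow_id;
    uint32_t granted_bytes;    /* limited by receiver buffer and congestion */
};

/* Receiver-side admission decision: grant no more than the buffer space
 * left after accounting for a congestion-based reserve. */
static struct fcp_grant fcp_admit(const struct fcp_request *req,
                                  uint32_t free_buffer_bytes,
                                  uint32_t congestion_reserve)
{
    struct fcp_grant g = { .flow_id = req->flow_id, .granted_bytes = 0 };
    if (free_buffer_bytes > congestion_reserve) {
        uint32_t avail = free_buffer_bytes - congestion_reserve;
        g.granted_bytes = req->request_bytes < avail ? req->request_bytes : avail;
    }
    return g;
}

int main(void)
{
    struct fcp_request req = { .flow_id = 7, .request_bytes = 64 * 1024 };
    struct fcp_grant g = fcp_admit(&req, 48 * 1024, 8 * 1024);
    printf("flow %u granted %u of %u bytes\n",
           (unsigned)g.flow_id, (unsigned)g.granted_bytes,
           (unsigned)req.request_bytes);
    return 0;
}
```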
In the example of
In some examples, SDN controller 18 operates to configure the DPUs of the devices within racks 20, 22, 24, 26 to logically establish one or more virtual fabrics as overlay networks dynamically configured on top of the physical underlay network provided by switch fabric 14. For example, SDN controller 18 may learn and maintain knowledge of the DPUs and establish a communication control channel with each of the DPUs. SDN controller 18 uses its knowledge of the DPUs to define multiple sets (groups) of two or more DPUs to establish different virtual fabrics over switch fabric 14. More specifically, SDN controller 18 may use the communication control channels to notify each of the DPUs for a given set which other DPUs are included in the same set. In response, the DPUs dynamically set up FCP tunnels with the other DPUs included in the same set as a virtual fabric over switch fabric 14. In this way, SDN controller 18 defines the sets of DPUs for each of the virtual fabrics, and the DPUs are responsible for establishing the virtual fabrics. As such, underlay components of switch fabric 14 may be unaware of virtual fabrics. In these examples, the DPUs interface with and utilize switch fabric 14 so as to provide full mesh (any-to-any) interconnectivity between DPUs of any given virtual fabric. In this way, the devices within racks 20, 22, 24, 26 connected to any of the DPUs forming a given one of virtual fabrics may communicate packet data for a given packet flow to any other of the devices within racks 20, 22, 24, 26 coupled to the DPUs for that virtual fabric using any of a number of parallel data paths within switch fabric 14 that interconnect the DPUs of that virtual fabric. More details of DPUs or access nodes operating to spray packets within and across virtual overlay networks are available in U.S. Provisional Patent Application No. 62/638,788, filed Mar. 5, 2018, entitled “Network Access Node Virtual Fabrics Configured Dynamically over an Underlay Network,” the entire content of which is incorporated herein by reference.
Although not shown, data center 10 may also include, for example, one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or intrusion prevention devices, servers, computer terminals, laptops, printers, databases, wireless mobile devices such as cellular phones or personal digital assistants, wireless access points, bridges, cable modems, application accelerators, or other network devices.
In the example of
DPU 102A may be configured according to the various techniques of this disclosure. In general, DPU 102A is a high-performance input/output (I/O) hub designed to aggregate and process network and storage (e.g., SSD) I/O to and from multiple other components and/or devices. For example, DPU 102A may be configured to execute a large number of data I/O processing tasks relative to a number of instructions that are processed. In other words, the ratio of I/O tasks that are processed by DPU 102A to a number of instructions that are executed by DPU 102A is high, such that DPU 102A comprises a processor that is highly I/O intensive.
In the example of
As further described herein, in various examples DPU 102A is a highly programmable I/O processor, with a plurality of processing cores (as discussed below, e.g., with respect to
As a network interface subsystem, DPU 102A may implement full offload with minimum/zero copy and storage acceleration for compute node 100A. DPU 102A can thus form a nexus between various components and devices, e.g., CPU 104, storage device 114, GPU 106, and network devices of network 120A. For example, DPU 102A may support a network interface to connect directly to network 120A via Ethernet link 116 without a separate network interface card (NIC), as needed between a CPU and a network in conventional architectures.
In general, software programs executable on CPU 104 can perform instructions to offload some or all data-intensive processing tasks associated with the software program to DPU 102A. As noted above, DPU 102A includes processing cores that can be programmed (i.e., can execute software code), as well as specific hardware units configured specifically to implement various data-intensive operations, such as compression, cryptographic functions, and regular expression processing and application to data sets.
Each of the processing cores of DPU 102A may be programmable using a high-level programming language, e.g., C, C++, or the like. In general, the various hardware implementations of processes provided by DPU 102A may be associated with software libraries in the high-level programming language that may be utilized to construct software applications for execution by CPU 104 that, by way of the interfaces, invoke and leverage the functionality of DPU 102A. Thus, a programmer can write a software program in the programming language and use function or procedure calls associated with the hardware implementations of various processes of DPU 102A to perform these functions, and when CPU 104 executes the software program, CPU 104 offloads performance of these functions/procedures to DPU 102A.
Additionally, or alternatively, CPU 104 may offload other software procedures or functions to DPU 102A, to be executed by processing cores of DPU 102A. Furthermore, CPU 104 may offload software procedures or functions to GPU 106 via DPU 102A (e.g., computer graphics processes). In this manner, DPU 102A represents a dynamically programmable processing unit that can execute software instructions, as well as provide hardware implementations of various procedures or functions for data-processing tasks, which may improve performance of these procedures or functions.
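The following C sketch illustrates the offload pattern described above from the host side. The function names (dpu_open, dpu_compress, dpu_close) and the device path are hypothetical, and the stubs merely stand in for a vendor-supplied library so the example compiles; the point is that a program executing on CPU 104 makes an ordinary function call and the library routes the data-intensive work to the DPU.

```c
/*
 * Minimal sketch, assuming a hypothetical offload library. A real DPU
 * driver library would define its own API; the stubs below only make the
 * example self-contained and compilable.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int fd; } dpu_handle;          /* opaque device handle */

/* --- placeholder library stubs (a real library would talk to the DPU) --- */
static dpu_handle *dpu_open(const char *path)   { (void)path; return calloc(1, sizeof(dpu_handle)); }
static void        dpu_close(dpu_handle *h)     { free(h); }
static int dpu_compress(dpu_handle *h, const void *src, size_t n,
                        void *dst, size_t *dst_n)
{
    (void)h;                                    /* stub: copy instead of compress */
    memcpy(dst, src, n);
    *dst_n = n;
    return 0;
}

/* --- host-side application code: a normal-looking call that offloads --- */
int main(void)
{
    const char msg[] = "offload me";
    char out[64];
    size_t out_len = sizeof(out);

    dpu_handle *h = dpu_open("/dev/dpu0");      /* hypothetical device node */
    if (h == NULL)
        return 1;
    int rc = dpu_compress(h, msg, sizeof(msg), out, &out_len);
    printf("compress rc=%d, %zu bytes\n", rc, out_len);
    dpu_close(h);
    return rc;
}
```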
In the example of
DPU 102B may be configured according to the various techniques of this disclosure. DPU 102B may operate substantially similar to DPU 102A described above with respect to
In the example of
As an example, in the case of artificial intelligence (AI) processing, control plane functions include executing control tasks to instruct a GPU to perform certain types of computationally intensive processing, and executing I/O tasks to feed a large amount of data to the GPU for processing. In general, I/O processing tasks that control data movement between GPUs and storage devices are more important for facilitating AI processing than the relatively minor control tasks. Therefore, in the example of AI processing, it makes sense to use DPU 102B in place of a CPU. In the example of
As shown, in this example, storage node 101 may include at least one DPU and at least one storage device, e.g., SSD. As another example, with respect to
In the example of
DPU 102C may be configured according to the various techniques of this disclosure. DPU 102C may operate substantially similar to DPU 102A of
In the example of
In the illustrated example of
In this example, DPU 130 represents a high performance, hyper-converged network, storage, and data processor and input/output hub. Cores 140 may comprise one or more of MIPS (microprocessor without interlocked pipeline stages) cores, ARM (advanced RISC (reduced instruction set computing) machine) cores, PowerPC (performance optimization with enhanced RISC-performance computing) cores, RISC-V (RISC five) cores, or CISC (complex instruction set computing or x86) cores. Each of cores 140 may be programmed to process one or more events or activities related to a given data packet such as, for example, a networking packet or a storage packet. Each of cores 140 may be programmable using a high-level programming language, e.g., C, C++, or the like.
As described herein, the new processing architecture utilizing a data processing unit (DPU) may be especially efficient for stream processing applications and environments. For example, stream processing is a type of data processing architecture well suited for high performance and high efficiency processing. A stream is defined as an ordered, unidirectional sequence of computational objects that can be of unbounded or undetermined length. In a simple embodiment, a stream originates in a producer and terminates at a consumer, and is operated on sequentially. In some embodiments, a stream can be defined as a sequence of stream fragments; each stream fragment including a memory block contiguously addressable in physical address space, an offset into that block, and a valid length. Streams can be discrete, such as a sequence of packets received from the network, or continuous, such as a stream of bytes read from a storage device. A stream of one type may be transformed into another type as a result of processing. For example, TCP receive (Rx) processing consumes segments (fragments) to produce an ordered byte stream. The reverse processing is performed in the transmit (Tx) direction. Independently of the stream type, stream manipulation requires efficient fragment manipulation, where a fragment is as defined above.
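The fragment definition above maps naturally onto a small data structure. The following C sketch is an illustrative rendering, with hypothetical field and function names, of a stream fragment (a contiguously addressable block, an offset, and a valid length) and of a stream as an ordered sequence of fragments.

```c
/*
 * Sketch of the stream-fragment abstraction defined above. Names are
 * illustrative, not taken from the disclosure.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct stream_fragment {
    void    *block;      /* contiguously addressable memory block */
    size_t   offset;     /* offset of the first valid byte in the block */
    size_t   length;     /* number of valid bytes starting at offset */
};

/* A stream can then be represented as an ordered sequence of fragments,
 * e.g., one fragment per received packet for a discrete (packet) stream. */
static size_t stream_total_bytes(const struct stream_fragment *frags, size_t n)
{
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += frags[i].length;
    return total;
}

int main(void)
{
    uint8_t pkt0[128] = {0}, pkt1[256] = {0};
    struct stream_fragment frags[2] = {
        { .block = pkt0, .offset = 14, .length = 100 },  /* skip a 14-byte header */
        { .block = pkt1, .offset = 14, .length = 200 },
    };
    printf("stream carries %zu payload bytes\n", stream_total_bytes(frags, 2));
    return 0;
}
```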
In some examples, the plurality of cores 140 may be capable of processing a plurality of events related to each data packet of one or more data packets, received by networking unit 142 and/or host units 146, in a sequential manner using one or more “work units.” In general, work units are sets of data exchanged between cores 140 and networking unit 142 and/or host units 146 where each work unit may represent one or more of the events related to a given data packet of a stream. As one example, a Work Unit (WU) is a container that is associated with a stream state and used to describe (i.e. point to) data within a stream (stored). For example, work units may dynamically originate within a peripheral unit coupled to the multi-processor system (e.g. injected by a networking unit, a host unit, or a solid state drive interface), or within a processor itself, in association with one or more streams of data, and terminate at another peripheral unit or another processor of the system. The work unit is associated with an amount of work that is relevant to the entity executing the work unit for processing a respective portion of a stream. In some examples, one or more processing cores of a DPU may be configured to execute program instructions using a work unit (WU) stack.
In some examples, in processing the plurality of events related to each data packet, a first one of the plurality of cores 140, e.g., core 140A, may process a first event of the plurality of events. Moreover, first core 140A may provide to a second one of the plurality of cores 140, e.g., core 140B, a first work unit of the one or more work units. Furthermore, second core 140B may process a second event of the plurality of events in response to receiving the first work unit from first core 140A. Work units, including their structure and functionality, are described in more detail below with respect to
DPU 130 may act as a combination of a switch/router and a number of network interface cards. For example, networking unit 142 may be configured to receive one or more data packets from and transmit one or more data packets to one or more external devices, e.g., network devices. Networking unit 142 may perform network interface card functionality, packet switching, and the like, and may use large forwarding tables and offer programmability. Networking unit 142 may expose Ethernet ports for connectivity to a network, such as network 7 of
Memory controller 144 may control access to memory unit 134 by cores 140, networking unit 142, and any number of external devices, e.g., network devices, servers, external storage devices, or the like. Memory controller 144 may be configured to perform a number of operations to perform memory management in accordance with the present disclosure. For example, memory controller 144 may be capable of mapping accesses from one of the cores 140 to either of coherent cache memory 136 or non-coherent buffer memory 138. In some examples, memory controller 144 may map the accesses based on one or more of an address range, an instruction or an operation code within the instruction, a special access, or a combination thereof.
In some examples, memory controller 144 may be capable of mapping a virtual address to a physical address for non-coherent buffer memory 138 by performing a number of operations. For instance, memory controller 144 may map to non-coherent buffer memory 138 using a translation lookaside buffer (TLB) entry for a discrete stream of data packets. Moreover, memory controller 144 may map to a stream handle using the TLB entry for a continuous stream of data packets. In other examples, memory controller 144 may be capable of flushing modified cache lines associated with non-coherent buffer memory 138 after use by a first one of cores 140, e.g., core 140A. Moreover, memory controller 144 may be capable of transferring ownership of non-coherent buffer memory 138 to a second one of cores 140, e.g., core 140B, after the flushing.
In some examples, memory controller 144 may be capable of transferring ownership of a cache segment of the plurality of segments from first core 140A to second core 140B by performing a number of operations. For instance, memory controller 144 may hold onto a message generated by first core 140A. Additionally, memory controller 144 may flush the segment upon first core 140A completing an event using the segment. Furthermore, memory controller 144 may provide the message to second core 140B in response to both of: (1) there being no outstanding write operations for the segment, and (2) the segment not being flushed currently.
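The following C sketch models, at a purely conceptual level, the ownership hand-off just described: a message destined for the second core is held until the segment has been flushed and has no outstanding writes. The types and functions are illustrative stand-ins rather than the actual interface of memory controller 144.

```c
/*
 * Conceptual sketch of the non-coherent buffer segment hand-off: core A
 * finishes its event, its dirty lines are flushed, and only then is the
 * notification message released to core B. Not the hardware interface.
 */
#include <stdbool.h>
#include <stdio.h>

struct segment {
    int  owner_core;
    int  outstanding_writes;   /* writes not yet visible in buffer memory */
    bool flush_in_progress;
};

struct message { int dest_core; const char *payload; };

static void flush_segment(struct segment *s)
{
    s->flush_in_progress = true;
    s->outstanding_writes = 0;      /* model: flush drains pending writes */
    s->flush_in_progress = false;
}

/* Deliver the held message only when the segment is clean and idle. */
static bool try_deliver(struct segment *s, const struct message *m)
{
    if (s->outstanding_writes == 0 && !s->flush_in_progress) {
        s->owner_core = m->dest_core;
        printf("core %d now owns the segment, message: %s\n",
               m->dest_core, m->payload);
        return true;
    }
    return false;
}

int main(void)
{
    struct segment seg = { .owner_core = 0, .outstanding_writes = 3 };
    struct message msg = { .dest_core = 1, .payload = "work unit ready" };

    (void)try_deliver(&seg, &msg);  /* held: writes still outstanding */
    flush_segment(&seg);            /* core 0 done with its event */
    (void)try_deliver(&seg, &msg);  /* delivered: ownership moves to core 1 */
    return 0;
}
```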
More details on the bifurcated memory system included in the DPU are available in U.S. patent application Ser. No. 15/949,892, filed Apr. 10, 2018, and titled “Relay Consistent Memory Management in a Multiple Processor System,” the entire content of which is incorporated herein by reference. Additional details regarding the operation and advantages of the DPU are described below with respect to
In general, DPU 150 represents a high performance, hyper-converged network, storage, and data processor and input/output hub. As illustrated in
In this example, DPU 150 represents a high performance, programmable multi-processor architecture that may provide solutions to various problems with existing processors (e.g., x86 architecture processors). As shown in
In general, work units are sets of data exchanged between processing clusters 156, networking unit 152, host units 154, central cluster 158, and external memory 170. Each work unit may represent a fixed length (e.g., 32-bytes) data structure including an action value and one or more arguments. In one example, a 32-byte work unit includes four sixty-four (64) bit words, a first word having a value representing the action value and three additional words each representing an argument. The action value may include a work unit handler identifier that acts as an index into a table of work unit functions to dispatch the work unit, a source identifier representing a source virtual processor or other unit (e.g., one of host units 154, networking unit 152, external memory 170, or the like) for the work unit, a destination identifier representing the virtual processor or other unit that is to receive the work unit, an opcode representing fields that are pointers for which data is to be accessed, and signaling network routing information.
The arguments of a work unit may be typed or untyped, and in some examples, one of the typed arguments acts as a pointer used in various work unit handlers. Typed arguments may include, for example, frames (having values acting as pointers to a work unit stack frame), flows (having values acting as pointers to state, which is relevant to the work unit handler function), and packets (having values acting as pointers to packets for packet and/or block processing handlers).
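The 32-byte work unit layout described above can be sketched as a C structure. The division of the 64-bit action word into sub-fields is not specified here, so the shift positions and field widths below are assumptions chosen only to make the example concrete.

```c
/*
 * Sketch of a 32-byte work unit: four 64-bit words, the first holding the
 * action value and the remaining three holding arguments (e.g., frame,
 * flow, and packet pointers). Sub-field widths are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

/* Word 0: the action value, encoding a handler identifier, source and
 * destination identifiers, an opcode, and routing information. */
static uint64_t wu_action(uint16_t handler_id, uint16_t source_id,
                          uint16_t dest_id, uint8_t opcode, uint8_t routing)
{
    return ((uint64_t)handler_id << 48) | ((uint64_t)source_id << 32) |
           ((uint64_t)dest_id   << 16) | ((uint64_t)opcode    <<  8) |
           (uint64_t)routing;
}

struct work_unit {
    uint64_t action;   /* word 0: handler id, source, destination, opcode, routing */
    uint64_t frame;    /* word 1: e.g., pointer to a work unit stack frame */
    uint64_t flow;     /* word 2: e.g., pointer to flow state */
    uint64_t packet;   /* word 3: e.g., pointer to packet data */
};

int main(void)
{
    struct work_unit wu = {
        .action = wu_action(3, 10, 11, 0x1, 0x0),
        .frame  = 0, .flow = 0, .packet = 0,
    };
    printf("work unit size: %zu bytes, action word: 0x%016llx\n",
           sizeof(wu), (unsigned long long)wu.action);   /* prints 32 bytes */
    return 0;
}
```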
A flow argument may be used as a prefetch location for data specific to a work unit handler. A work unit stack is a data structure to help manage the event-driven, run-to-completion programming model of an operating system executed by data processing unit 150. An event-driven model typically means that state that might otherwise be stored as function-local variables must be stored as state outside the programming language stack. Run-to-completion also implies that functions may be dissected to insert yield points. The work unit stack may provide the convenience of familiar programming constructs (call/return, call/continue, long-lived stack-based variables) to the execution model of DPU 150.
A frame pointer of a work unit may have a value that references a continuation work unit to invoke a subsequent work unit handler. Frame pointers may simplify implementation of higher level semantics, such as pipelining and call/return constructs. More details on work units, work unit stacks, and stream processing by data processing units are available in U.S. Provisional Patent Application No. 62/589,427, filed Nov. 21, 2017, entitled “Work Unit Stack Data Structures in Multiple Core Processor System,” and U.S. patent application Ser. No. 15/949,692, entitled “Efficient Work Unit Processing in a Multicore System,” filed Apr. 10, 2018, the entire content of each of which is incorporated herein by reference.
DPU 150 may deliver significantly improved efficiency over x86 processors for targeted use cases, such as storage and networking input/output, security and network function virtualization (NFV), accelerated protocols, and as a software platform for certain applications (e.g., storage, security, and data ingestion). DPU 150 may provide storage aggregation (e.g., providing direct network access to flash memory, such as SSDs) and protocol acceleration. DPU 150 provides a programmable platform for storage virtualization and abstraction. DPU 150 may also perform firewall and network address translation (NAT) processing, stateful deep packet inspection, and cryptography. The accelerated protocols may include TCP, UDP, TLS, IPSec (e.g., accelerates AES variants, SHA, and PKC), RDMA, and iSCSI. DPU 150 may also provide quality of service (QoS) and isolation containers for data, and provide LLVM binaries.
DPU 150 may support software including network protocol offload (TCP/IP acceleration, RDMA and RPC); initiator and target side storage (block and file protocols); high level (stream) application APIs (compute, network and storage (regions)); fine grain load balancing, traffic management, and QoS; network virtualization and network function virtualization (NFV); and firewall, security, deep packet inspection (DPI), and encryption (IPsec, SSL/TLS).
In one particular example, DPU 150 may expose Ethernet ports of 100 Gbps, of which a subset may be used for local consumption (termination) and the remainder may be switched back to a network fabric via Ethernet interface 164. For each of host units 154, DPU 150 may expose a ×16 PCI-e interface 166. DPU 150 may also offer a low network latency to flash memory (e.g., SSDs) that bypasses local host processor and bus.
In the example of
In some examples, central cluster 158 may include three conceptual processing units (not shown in
Central cluster 158 may also include a plurality of processing cores, e.g., MIPS (microprocessor without interlocked pipeline stages) cores, ARM (advanced RISC (reduced instruction set computing) machine) cores, PowerPC (performance optimization with enhanced RISC-performance computing) cores, RISC-V (RISC five) cores, or CISC (complex instruction set computing or x86) cores. Central cluster 158 may be configured with two or more processing cores that each include at least one virtual processor. In one specific example, central cluster 158 is configured with four processing cores, each including two virtual processors, and executes a control operating system (such as a Linux kernel). The virtual processors are referred to as “virtual processors,” in the sense that these processors are independent threads of execution of a single core. However, it should be understood that the virtual processors are implemented in digital logic circuitry, i.e., in requisite hardware processing circuitry.
DPU 150 may be configured according to architectural principles of using a most energy efficient way of transporting data, managing metadata, and performing computations. DPU 150 may act as an input/output (I/O) hub that is optimized for executing short instruction runs (e.g., 100 to 400 instruction runs) or micro-tasks efficiently.
DPU 150 may provide high performance micro-task parallelism using the components thereof through work management. For example, DPU 150 may couple a low latency dispatch network with a work queue interface at each of processing clusters 156 to reduce delay from work dispatching to start of execution of the work by processing clusters 156. The components of DPU 150 may also operate according to a run-to-completion work flow, which may eliminate software interrupts and context switches. Hardware primitives may further accelerate work unit generation and delivery. DPU 150 may also provide low synchronization, in that the components thereof may operate according to a stream-processing model that encourages flow-through operation with low synchronization and inter-processor communication. The stream-processing model may further structure access by multiple processors (e.g., processing cores of processing clusters 156) to the same data and resources, avoid simultaneous sharing, and therefore, reduce contention. A processor may relinquish control of data referenced by a work unit as the work unit is passed to the next processing core in line. Furthermore, DPU 150 may provide a dedicated signaling/dispatch network, as well as a high capacity data network, and implement a compact work unit representation, which may reduce communication cost and overhead.
DPU 150 may also provide memory-related enhancements over conventional architectures. For example, DPU 150 may encourage a processing model that minimizes data movement, relying as much as possible on passing work by reference. DPU 150 may also provide hardware primitives for allocating and freeing buffer memory, as well as for virtualizing the memory space, thereby providing hardware-based memory management. By providing a non-coherent memory system for stream data, DPU 150 may eliminate detrimental effects of coherency that would otherwise result in surreptitious flushes or invalidates of memory, or artifactual communication and overhead. DPU 150 also provides a high bandwidth data network that allows unfettered access to memory and peripherals such that any stream data update can be done through main memory, and stream cache-to-stream cache transfers are not required. DPU 150 may be connected through a high bandwidth interface to external memory 170.
DPU 150 may also provide features that reduce processing inefficiencies and cost. For example, DPU 150 may provide a stream processing library (i.e., a library of functions available to programmers for interfacing with DPU 150) to be used when implementing software to be executed by DPU 150. That is, the stream processing library may provide one or more application programming interfaces (APIs) for directing processing tasks to DPU 150. In this manner, the programmer can write software that accesses hardware-based processing units of DPU 150, such that a CPU can offload certain processing tasks to hardware-based processing units of DPU 150. The stream processing library may handle message passing on behalf of programs, such that meta-data and state are pushed to the cache and stream memory associated with the core where processing occurs. In this manner, DPU 150 may reduce cache misses, that is, stalls due to memory accesses. DPU 150 may also provide lock-free operation. That is, DPU 150 may be implemented according to a message-passing model that enables state updates to occur without the need for locks, or for maintaining the stream cache through coherency mechanisms. DPU 150 may also be implemented according to a stream operating model, which encourages data unit driven work partitioning and provides an intuitive framework for determining and exploiting parallelism. DPU 150 also includes well-defined hardware models that process intensive operations such as cyclical redundancy checks (CRC), cryptography, compression, and the like.
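The following C sketch illustrates the run-to-completion, pass-by-reference handler chaining described above. The handler table, the queue, and the handler names are hypothetical illustrations of how a stream processing library might chain work unit handlers; they are not the library's actual API.

```c
/*
 * Illustrative sketch: work unit handlers that run to completion and pass
 * packet data between pipeline stages by reference rather than by copy.
 */
#include <stdint.h>
#include <stdio.h>

struct wu { uint16_t handler_id; void *packet; uint32_t len; };

typedef void (*wu_handler)(struct wu *w);

/* Tiny stand-in for a work unit queue: handlers enqueue follow-on work
 * units with wu_send(), and the loop in main() dispatches them in order. */
#define QUEUE_DEPTH 8
static struct wu queue[QUEUE_DEPTH];
static int q_head, q_tail;

static void wu_send(struct wu w) { queue[q_tail++ % QUEUE_DEPTH] = w; }

static void parse_handler(struct wu *w)
{
    printf("parse %u bytes at %p\n", (unsigned)w->len, w->packet);
    /* Pass the same packet on by reference (no copy); only the handler id
     * changes, selecting the next stage of the pipeline. */
    wu_send((struct wu){ .handler_id = 1, .packet = w->packet, .len = w->len });
}

static void forward_handler(struct wu *w)
{
    printf("forward %u bytes at %p\n", (unsigned)w->len, w->packet);
}

/* Handler table indexed by handler id (cf. the action value's handler index). */
static wu_handler handler_table[] = { parse_handler, forward_handler };

int main(void)
{
    uint8_t pkt[64] = {0};
    wu_send((struct wu){ .handler_id = 0, .packet = pkt, .len = sizeof(pkt) });

    while (q_head != q_tail) {                   /* run each work unit to completion */
        struct wu w = queue[q_head++ % QUEUE_DEPTH];
        handler_table[w.handler_id](&w);
    }
    return 0;
}
```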
In general, DPU 150 may satisfy a goal of minimizing data copy and data movement within the chip, with most of the work done by reference (i.e., passing pointers to the data between processors, e.g., processing cores within or between processing clusters 156). DPU 150 may support two distinct memory systems: a traditional, coherent memory system with a two-level cache hierarchy, and a non-coherent buffer memory system optimized for stream processing. The buffer memory may be shared and cached at the L1 level, but coherency is not maintained by hardware of DPU 150. Instead, coherency may be achieved through machinery associated with the stream processing model, in particular, synchronization of memory updates vs. memory ownership transfer. DPU 150 uses the non-coherent memory for storing packets and other data that would not cache well within the coherent memory system.
In the example of
A general-purpose operating system, such as Linux or Unix, can run on one or more of processing clusters 156. In some examples, central cluster 158 may be configured differently from processing clusters 156 (which may be referred to as stream processing clusters). For example, central cluster 158 may execute the operating system kernel (e.g., Linux kernel) as a control plane. Processing clusters 156 may function in run-to-completion thread mode. That is, processing clusters 156 may operate in a tight loop fed by work queues associated with each virtual processor in a cooperative multi-tasking fashion. Processing clusters 156 may further include one or more hardware accelerator units to accelerate networking, matrix multiplication, cryptography, compression, regular expression interpretation, timer management, direct memory access (DMA), and copy, among other tasks.
Networking unit 152 includes a forwarding pipeline implemented using flexible engines (e.g., a parser engine, a look-up engine, and a rewrite engine) and supports features of IP transit switching. Networking unit 152 may also use processing cores (e.g., MIPS cores, ARM cores, PowerPC cores, RISC-V cores, or CISC or x86 cores) to support control packets and low-bandwidth features, such as packet-multicast (e.g., for OSI Layers 2 and 3). DPU 150 may act as a combination of a switch/router and a number of network interface cards. The processing cores of networking unit 152 (and/or of processing clusters 156) may perform network interface card functionality, packet switching, and the like, and may use large forwarding tables and offer programmability.
Host units 154, processing clusters 156, central cluster 158, networking unit 152, and external memory 170 may be communicatively interconnected via three types of links. Direct links 162 (represented as dashed lines in
In this manner, processing clusters 156, host units 154, central cluster 158, networking unit 152, and external memory 170 are interconnected using two or three main network-on-chip (NoC) fabrics. These internal fabrics may include a data network fabric formed by grid links 160, and one or more control network fabrics including one or more of a signaling network formed by hub-and-spoke links 162, a coherency network formed by hub-and-spoke links 163, and a broadcast network formed by hub-and-spoke links 165. The signaling network, coherency network, and broadcast network are formed by direct links similarly arranged in a star-shaped network topology. Alternatively, in other examples, only the data network and one of the signaling network or the coherency network may be included. The data network is a two-dimensional mesh topology that carries data for both coherent memory and buffer memory systems. In one example, each grid link 160 provides a 512b wide data path in each direction. In one example, each direct link 162 and each direct link 163 provides a 128b wide bidirectional data path. The coherency network is a logical hub and spoke structure that carries cache coherency transactions (not including data). The signaling network is a logical hub and spoke structure that carries buffer memory requests and replies (not including data), synchronization and other commands, and work units and notifications.
DPU 150 includes various resources, i.e., elements in limited quantities that are consumed during performance of various functions. Example resources include work unit queue sizes, virtual processor cycles, accelerator cycles, bandwidth of external interfaces (e.g., host units 154 and networking unit 152), memory (including buffer memory, cache memory, and external memory), transient buffers, and time. In general, each resource can be translated to either time or space (e.g., memory). Furthermore, although certain resources can be reclaimed (such as memory), other resources (such as processing cycles and bandwidth) cannot be reclaimed.
In some examples, a broadcast network is formed by direct links 162 (or other, separate links that directly connect central cluster 158 to the other components, e.g., processing clusters 156, host units 154, networking unit 152, and external memory 170). Various components within DPU 150 (such as processing clusters 156, host units 154, networking unit 152, and external memory 170) may use the broadcast network to broadcast a utilization status of their corresponding resources to central cluster 158. Central cluster 158 may include an event queue manager (EQM) unit that stores copies of these utilization statuses for use when assigning various work units to these elements. Alternatively, in other examples, any of processing clusters 156 may include the EQM unit.
The utilization statuses may be represented as normalized color values (NCVs). Virtual processors may check the NCV of a desired resource to determine if the virtual processors can accept a work unit. If the NCV is above an allowable threshold for an initial work unit, each of the virtual processors places a corresponding flow in a pending state and sends an enqueue (NQ) event to the EQM. A flow is a sequence of computations that belong to a single ordering class. Each flow may be associated with a unique flow identifier (ID) that can be used to look up an entry for the flow in a global flow table (GFT). The flow entry may be linked to all reusable resources consumed by the flow so that these resources can be found and recovered when needed.
In response, the EQM enqueues the event into the specified event queue and monitors the NCV of the corresponding resource. If the NCV is below a desired dequeue (DQ) threshold, the EQM dequeues a calculated number of events from the head of the event queue. The EQM then translates these dequeued events into high-priority work unit messages and sends these work unit messages to their specified virtual processor destinations. The virtual processors use these dequeued events to determine if a flow can be transitioned from the pending state to an active state. For activated flows (i.e., those placed in the active state), the virtual processors may send a work unit to the desired resource. Work units that result from a reactivation are permitted to transmit if the NCV is below a threshold that is higher than the original threshold used to make the Event NQ decision as discussed above.
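The following C sketch illustrates the NCV-based admission behavior described above in simplified form. The 0-255 NCV scale, the threshold values, and the function names are assumptions for illustration only.

```c
/*
 * Simplified sketch of NCV-based flow control: a virtual processor checks a
 * resource's normalized color value (NCV) before sending work, parks the
 * flow as pending when the resource is too busy, and the event queue
 * manager (EQM) reactivates pending flows once the NCV drops below the
 * dequeue threshold. Thresholds and scale are illustrative assumptions.
 */
#include <stdio.h>

enum flow_state { FLOW_ACTIVE, FLOW_PENDING };

struct flow { unsigned id; enum flow_state state; };

#define NQ_THRESHOLD 200   /* above this, new work is not admitted */
#define DQ_THRESHOLD 128   /* below this, the EQM reactivates pending flows */

/* Virtual processor side: decide whether to send work for this flow. */
static void vp_submit(struct flow *f, unsigned ncv)
{
    if (ncv > NQ_THRESHOLD) {
        f->state = FLOW_PENDING;            /* enqueue an NQ event to the EQM */
        printf("flow %u pending (NCV=%u)\n", f->id, ncv);
    } else {
        printf("flow %u sends work unit (NCV=%u)\n", f->id, ncv);
    }
}

/* EQM side: when the resource drains, reactivate the pending flow. */
static void eqm_poll(struct flow *f, unsigned ncv)
{
    if (f->state == FLOW_PENDING && ncv < DQ_THRESHOLD) {
        f->state = FLOW_ACTIVE;             /* dequeue event -> high-priority WU */
        printf("flow %u reactivated (NCV=%u)\n", f->id, ncv);
    }
}

int main(void)
{
    struct flow f = { .id = 42, .state = FLOW_ACTIVE };
    vp_submit(&f, 230);   /* resource congested: flow parked as pending */
    eqm_poll(&f, 100);    /* resource drained: flow reactivated */
    vp_submit(&f, 100);   /* now the work unit can be sent */
    return 0;
}
```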
DPU 150 (and more particularly, networking unit 152, host units 154, processing clusters 156, and central cluster 158) uses the signaling network formed by direct links 162 to transport non-coherent buffer memory requests and replies, and work requests and notifications for inter-processor and interface unit communication (e.g., communication between processors of processing clusters 156 or processors of networking unit 152 and central cluster 158). DPU 150 (and more particularly, processing clusters 156 and central cluster 158) also uses the coherency network formed by direct links 163 to transport cache coherence requests and responses. Cores of processing clusters 156 and central cluster 158 may operate on a number of work queues in a prioritized manner. For example, each core may include one or more virtual processors, e.g., one to four virtual processors, and each virtual processor may operate on one to four work queues.
The signaling network formed by direct links 162 is a non-blocking, switched, low latency fabric that allows DPU 150 to reduce delay between event arrival (e.g., arrival of a packet on a network interface of networking unit 152 coupled to Ethernet lanes 164, arrival of a work request on one of PCI-e lanes 166 at one of host units 154, or arrival of remote procedure calls (RPCs) between processing cores of processing clusters 156 and/or central cluster 158) and start of execution by one of the cores. “Synchronization” refers to the proper sequencing and correct ordering of operations within DPU 150.
The coherency network formed by direct links 163 provides services including inter-cluster cache coherence (e.g., for request and/or reply traffic for write updates, read misses, and flush operations).
Central cluster 158 is a logical central reflection point on both the signaling network formed by direct links 162 and the coherency network formed by direct links 163 that provides ordering for data sent within the signaling network and the coherency network, respectively. Central cluster 158 generally performs tasks such as handling a global cache directory and processing synchronization and coherence transactions, ensuring atomicity of synchronized operations, and maintaining a wall-clock time (WCT) that is synchronized with outside sources (e.g., using precision time protocol (PTP), IEEE 1588). Central cluster 158 is configured to address several billion synchronization/coherence messages per second. Central cluster 158 may be subdivided into sub-units where necessary for capacity to handle aggregated traffic. Alternatively, in other examples, any of processing clusters 156 may perform the tasks described herein as being performed by central cluster 158.
As shown in
DPU 150 (and more particularly, networking unit 152, host units 154, processing clusters 156, and central cluster 158) uses the data network formed by grid links 160 to transport buffer memory blocks to/from L1 buffer caches of cores within processing clusters 156 and central cluster 158. DPU 150 also uses the data network to transport cluster level buffer memory data, off-chip DRAM memory data, and data for external interfaces (e.g., interfaces provided by host units 154 and networking unit 152). DPU 150 also uses the data network to transport coherent memory lines to and from L2 caches of processing clusters 156, interface DMA engines, and off-chip DRAM memory.
“Messaging” may refer to work units and notifications for inter-processor and interface unit communication (e.g., between processing cores and/or processors of processing clusters 156, central cluster 158, host units 154, and networking unit 152). Central cluster 158 may include a central dispatch unit (CDU) (not shown) that is responsible for work unit (WU) queuing and flow control, work unit and completion notification dispatch, and load balancing and processor selection (e.g., selection of processors for performing work units among processing cores of processing clusters 156 and/or central cluster 158). The CDU may allow ordering of work units with respect to other messages of central cluster 158.
The CDU of central cluster 158 may also perform credit-based flow control, to manage the delivery of work units. The CDU may maintain a per-virtual-processor output queue plus per-peripheral unit queue of work units that are scheduled by the CDU, as the destination virtual processors allow, as a flow control scheme and to provide deadlock avoidance. The CDU may allocate each virtual processor of cores of processing clusters 156 a fixed amount of storage credits, which are returned when space is made available. The work queues may be relatively shallow. The CDU may include a work scheduling system that manages work production to match the consumption rate (this does not apply to networking unit 152, and may be performed via scheduling requests for storage). Processing clusters 156 switch work units destined for virtual processors within a common one of processing clusters 156 locally within the processing cluster's work unit queue system.
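The credit-based flow control described above can be sketched as a simple counter per destination queue, as in the following illustrative C example; the credit count and function names are assumptions, not the CDU's actual mechanism.

```c
/*
 * Sketch of credit-based work unit flow control: a destination virtual
 * processor is granted a fixed number of storage credits, a work unit is
 * dispatched only when a credit is available, and the credit is returned
 * when the destination frees the corresponding queue slot.
 */
#include <stdbool.h>
#include <stdio.h>

#define WU_CREDITS 4            /* assumed fixed storage credits per virtual processor */

struct vp_queue { int credits; int queued; };

static bool cdu_dispatch(struct vp_queue *vp)
{
    if (vp->credits == 0)
        return false;           /* hold the work unit: destination queue is full */
    vp->credits--;
    vp->queued++;
    return true;
}

static void vp_consume(struct vp_queue *vp)
{
    if (vp->queued > 0) {
        vp->queued--;
        vp->credits++;          /* credit returned when space is made available */
    }
}

int main(void)
{
    struct vp_queue vp = { .credits = WU_CREDITS, .queued = 0 };
    int sent = 0;
    for (int i = 0; i < 6; i++)             /* try to dispatch six work units */
        sent += cdu_dispatch(&vp) ? 1 : 0;
    printf("dispatched %d of 6 before running out of credits\n", sent);
    vp_consume(&vp);                        /* destination frees one slot */
    printf("after one completion, dispatch %s\n",
           cdu_dispatch(&vp) ? "succeeds" : "fails");
    return 0;
}
```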
In general, central cluster 158 ensures that the ordering of messages of the same type (e.g., coherence, synchronization, or work units) seen on an output towards a cluster or peripheral is the same as the order in which the messages were seen at each input to central cluster 158. Ordering is not specified between multiple messages received from different inputs by central cluster 158. Alternatively, in other examples, any of processing clusters 156 may include the CDU and perform the tasks described herein as being performed by central cluster 158.
Networking unit 152 may expose Ethernet lanes 164 for connectivity to a network, such as network 7 of
Networking unit 152 connects to an Ethernet network via Ethernet lanes 164 and interfaces to the data network formed by grid links 160 and the signaling network formed by direct links 162, i.e., the data and signaling internal fabrics. Networking unit 152 provides a Layer 3 (i.e., OSI networking model Layer 3) switch forwarding path, as well as network interface card (NIC) assistance.
As NIC assistance, networking unit 152 may perform various stateless assistance processes, such as checksum offload for Internet protocol (IP), e.g., IPv4 or IPv6, transmission control protocol (TCP), and/or user datagram protocol (UDP). Networking unit 152 may also perform assistance processes for receive side-scaling (RSS), large send offload (LSO), large receive offload (LRO), virtual local area network (VLAN) manipulation, and the like. On the Ethernet media access control (MAC) side, in one example, networking unit 152 may use multiple combination units, each with four 25 Gb HSS lanes that can be configured as 1×40/100 G, 2×50 G, or 4×25/10/1 G. Networking unit 152 may also support Internet protocol security (IPsec), with a number of security associations (SAs). Networking unit 152 may include cryptographic units for encrypting and decrypting packets as necessary, to enable processing of the IPsec payload.
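For reference, the computation that IP/TCP/UDP checksum offload removes from the host CPU is the standard Internet checksum (the RFC 1071 one's-complement sum); a generic C reference implementation is shown below. This is not the networking unit's hardware design, only the well-known algorithm being offloaded.

```c
/*
 * Standard Internet checksum: one's-complement sum of 16-bit words with
 * carry folding, as used by IPv4, TCP, and UDP headers.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint16_t inet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {
        sum += ((uint32_t)p[0] << 8) | p[1];   /* 16-bit big-endian words */
        p += 2;
        len -= 2;
    }
    if (len == 1)
        sum += (uint32_t)p[0] << 8;            /* pad an odd trailing byte */

    while (sum >> 16)                          /* fold carries back in */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}

int main(void)
{
    /* Worked example: the checksum of these eight bytes is 0x220D. */
    const uint8_t bytes[] = { 0x00, 0x01, 0xf2, 0x03, 0xf4, 0xf5, 0xf6, 0xf7 };
    printf("checksum = 0x%04X\n", inet_checksum(bytes, sizeof(bytes)));
    return 0;
}
```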
Networking unit 152 may also include a flexible network packet parsing unit. The packet parsing unit may be configured according to a specialized, high-performance implementation for common formats, including network tunnels (e.g., virtual extensible local area network (VXLAN), network virtualization using generic routing encapsulation (NVGRE), generic network virtualization encapsulation (GENEVE), multiprotocol label switching (MPLS), or the like). Networking unit 152 may also include an OSI Layer 3 (L3) switch that allows cut-through Ethernet to Ethernet switching, using a local memory (not shown) of networking unit 152, as well as host-to-host switching.
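The following minimal sketch illustrates the kind of tunnel parsing such a unit performs, walking outer Ethernet/IPv4/UDP headers to recover a VXLAN network identifier (VNI); it assumes an untagged outer frame and IPv4 without options or fragmentation, and the function name parse_vxlan is illustrative.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def parse_vxlan(frame: bytes):
    """Return (vni, inner_frame) for a VXLAN-encapsulated Ethernet frame, else None."""
    eth_type = struct.unpack_from("!H", frame, 12)[0]
    if eth_type != 0x0800:                         # outer header is not IPv4
        return None
    ihl = (frame[14] & 0x0F) * 4                   # IPv4 header length in bytes
    if frame[14 + 9] != 17:                        # protocol field is not UDP
        return None
    udp_off = 14 + ihl
    dst_port = struct.unpack_from("!H", frame, udp_off + 2)[0]
    if dst_port != VXLAN_UDP_PORT:
        return None
    vxlan_off = udp_off + 8
    if not frame[vxlan_off] & 0x08:                # VNI-valid flag must be set
        return None
    vni = int.from_bytes(frame[vxlan_off + 4:vxlan_off + 7], "big")
    inner_frame = frame[vxlan_off + 8:]
    return vni, inner_frame
```

A hardware parser would of course cover many more header combinations (VLAN tags, IPv6, NVGRE, GENEVE, MPLS) with a configurable parse graph rather than fixed offsets.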
One or more hardware direct memory access (DMA) engine instances (not shown) may be attached to three data network ports of networking unit 152, which are coupled to respective grid links 160. The DMA engines of networking unit 152 are configured to fetch packet data for transmission. The packet data may be in on-chip or off-chip buffer memory (e.g., within buffer memory of one of processing clusters 156 or external memory 170), or in host memory.
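At a very high level, the sketch below models a DMA gather of packet data from scattered buffers into a contiguous transmit packet; GatherEntry and dma_gather are illustrative names, and Python bytearrays stand in for cluster buffer memory, external memory, or host memory.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GatherEntry:
    """One element of a scatter/gather list: which memory, where, and how much."""
    memory: bytearray
    offset: int
    length: int

def dma_gather(entries: List[GatherEntry]) -> bytes:
    """Assemble a contiguous transmit packet from scattered buffer fragments."""
    return b"".join(bytes(e.memory[e.offset:e.offset + e.length]) for e in entries)


if __name__ == "__main__":
    cluster_buf = bytearray(b"\x00HEADER\x00\x00")
    host_buf = bytearray(b"....PAYLOAD....")
    pkt = dma_gather([GatherEntry(cluster_buf, 1, 6), GatherEntry(host_buf, 4, 7)])
    print(pkt)   # b'HEADERPAYLOAD'
```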
Host units 154 provide interfaces to respective PCI-e bus lanes 166. This allows DPU 150 to operate as an endpoint or as a root (in dual mode). For example, DPU 150 may connect to a host system (e.g., an x86 server) as an endpoint device, and DPU 150 may connect as a root to endpoint devices, such as SSD disks, as shown in
In the example of
Each of host units 154 may also include a respective hardware DMA engine (not shown). Each DMA engine is configured to fetch data and buffer descriptors from host memory and to deliver data and completions to host memory. Each DMA engine also sends messages to the PCI controller to trigger interrupt generation. Additional functionality may be provided by core processing units of host units 154 that execute software which consumes streams of buffer descriptors, e.g., to generate DMA addresses for payload placement and/or to generate completion addresses.
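The following sketch models the descriptor/completion flow just described: buffer descriptors are consumed in order, payload bytes are placed at the described locations, and one completion is produced per descriptor used. Interrupt generation is omitted, and all names (BufferDescriptor, Completion, place_payload) are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BufferDescriptor:
    addr: int          # offset of a receive buffer in host memory (illustrative)
    length: int        # capacity of that buffer in bytes

@dataclass
class Completion:
    addr: int          # where the payload fragment was placed
    bytes_written: int

def place_payload(descriptors: List[BufferDescriptor], payload: bytes,
                  host_memory: bytearray) -> List[Completion]:
    """Consume descriptors in order, copy the payload into host memory,
    and emit one completion per descriptor used."""
    completions, offset = [], 0
    for desc in descriptors:
        if offset >= len(payload):
            break
        chunk = payload[offset:offset + desc.length]
        host_memory[desc.addr:desc.addr + len(chunk)] = chunk
        completions.append(Completion(desc.addr, len(chunk)))
        offset += len(chunk)
    return completions


if __name__ == "__main__":
    host = bytearray(64)
    ring = [BufferDescriptor(0, 8), BufferDescriptor(16, 8)]
    print(place_payload(ring, b"0123456789AB", host))   # 8 bytes at 0, then 4 bytes at 16
```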
Processing clusters 156 and central cluster 158 may implement data protection mechanisms to protect data stored in on- or off-chip memory, such as in buffers or in external memory 170. Such data protection mechanisms may reduce or eliminate the probability of silent data corruption (SDC) from single-bit soft errors (which may occur due to radiation, cosmic rays, internally generated alpha particles, noise, etc.) and from escaped multi-bit errors.
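As a simplified example of such protection, the sketch below implements an extended Hamming(8,4) code, a single-error-correcting, double-error-detecting (SECDED) scheme applied here to 4-bit values; real memory protection typically uses much wider codes (e.g., SECDED over 64-bit words), and the function names are illustrative.

```python
def hamming84_encode(nibble: int) -> list:
    """Encode 4 data bits into an extended Hamming(8,4) SECDED codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]                 # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                 # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                 # covers positions 4,5,6,7
    code = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    overall = 0
    for b in code:
        overall ^= b
    return code + [overall]                 # position 8: overall parity bit

def hamming84_decode(code: list):
    """Correct any single-bit error; return None on a detected double-bit error."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s4 << 2)
    overall = 0
    for b in c:
        overall ^= b
    if syndrome and overall:                # single-bit error: flip it back
        c[syndrome - 1] ^= 1
    elif syndrome and not overall:          # double-bit error: detected, uncorrectable
        return None
    return c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)


if __name__ == "__main__":
    word = hamming84_encode(0b1011)
    word[5] ^= 1                            # inject a single-bit soft error
    print(bin(hamming84_decode(word)))      # 0b1011, corrected
```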
DPU 150 may execute various types of applications. Examples of such applications are classified below according to three axes: layering, consumption model, and stream multiplexing. Three example layers of software/applications within the context of DPU 150 include access software, internal software, and applications. Access software represents system software, such as drivers and protocol stacks. Such access software is typically part of the kernel and runs in root/privileged mode, although in some cases, protocol stacks may be executed in user space. Internal software includes further system software and libraries, such as storage initiator/target software, that execute on top of the access software. Traditionally, internal software is executed in kernel space. Applications represent user applications that execute in user space. Consumption models can be broadly classified on a spectrum, with a protocol processing model (header consumption) at one end and a byte processing model (data consumption) at the other end. Typically, system software sits near the protocol processing end of the spectrum, while user applications make up the majority of software at the byte processing end.
Table 1 below categorizes example software/applications according to the various layers and consumption models discussed above:
In this manner, DPU 150 may offer improvements over conventional processing systems with respect to work management, memory management, and/or processor execution.
In general, accelerators 188 perform acceleration for various data-processing functions, such as look-ups, matrix multiplication, cryptography, compression, regular expressions, or the like. That is, accelerators 188 may comprise hardware implementations of look-up engines, matrix multipliers, cryptographic engines, compression engines, regular expression interpreters, or the like. For example, accelerators 188 may include a lookup engine that performs hash table lookups in hardware to provide a high lookup rate. The lookup engine may be invoked through work units from external interfaces and virtual processors of cores 182, and generates lookup notifications through work units. Accelerators 188 may also include one or more cryptographic units to support various cryptographic processes, such as any or all of Advanced Encryption Standard (AES) 128, AES 256, Galois/Counter Mode (GCM), block cipher mode (BCM), Secure Hash Algorithm (SHA) 128, SHA 256, SHA 512, public key cryptography, elliptic curve cryptography, RSA, or the like. One or more of such cryptographic units may be integrated with networking unit 152 (
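The sketch below models how a lookup accelerator might be driven by work units: a lookup request arrives as a work unit and the result is returned as a lookup notification work unit addressed back to the requesting virtual processor. The WorkUnit and LookupEngine names and the opcode strings are illustrative, and an ordinary Python dictionary stands in for the hardware hash table.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class WorkUnit:
    """Illustrative work unit: an opcode plus arguments."""
    opcode: str
    args: Dict[str, Any] = field(default_factory=dict)

class LookupEngine:
    """Models a lookup accelerator invoked through work units and responding
    with lookup notification work units."""

    def __init__(self, table: Dict[bytes, Any]):
        self.table = table                      # stands in for a hardware hash table

    def handle(self, wu: WorkUnit) -> WorkUnit:
        key = wu.args["key"]
        result = self.table.get(key)            # the lookup performed "in hardware"
        return WorkUnit("LOOKUP_NOTIFY",
                        {"key": key, "result": result, "dest_vp": wu.args["src_vp"]})


if __name__ == "__main__":
    engine = LookupEngine({b"10.0.0.1": "flow-42"})
    notify = engine.handle(WorkUnit("LOOKUP_REQ", {"key": b"10.0.0.1", "src_vp": 7}))
    print(notify)
```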
Core 190 also includes an L1 buffer cache 198, which may be 16 KB. Core 190 may use L1 buffer cache 198 for non-coherent data, such as packets or other data managed by software through the stream processing mode. L1 buffer cache 198 may store data for short-term caching, such that the data is available for fast access.
When one of virtual processors 192, such as virtual processor 192A, accesses memory, virtual processor 192A uses L1 data cache 196 or L1 buffer cache 198, based on the physical memory address issued by a memory management unit (not shown). DPU 150 (
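The following sketch illustrates the kind of address-based steering described above, selecting between the coherent L1 data cache and the non-coherent L1 buffer cache based on the physical address produced by the memory management unit; the address window constants are invented for the example and are not specified by this disclosure.

```python
# Illustrative physical-address window; the real mapping is implementation-defined.
BUFFER_WINDOW_BASE = 0x2000_0000
BUFFER_WINDOW_SIZE = 16 * 1024          # e.g., a 16 KB buffer-cache-backed region

def select_cache(phys_addr: int) -> str:
    """Return which L1 cache services an access: stream/packet buffers go to the
    non-coherent buffer cache, all other addresses to the coherent data cache."""
    if BUFFER_WINDOW_BASE <= phys_addr < BUFFER_WINDOW_BASE + BUFFER_WINDOW_SIZE:
        return "L1 buffer cache (non-coherent, short-term stream data)"
    return "L1 data cache (coherent)"


if __name__ == "__main__":
    print(select_cache(0x2000_0040))    # buffer cache
    print(select_cache(0x8000_1000))    # data cache
```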
Various examples have been described. These and other examples are within the scope of the following claims.
This application is a continuation of U.S. patent application Ser. No. 16/031,921, filed Jul. 10, 2018, which claims the benefit of U.S. Provisional Appl. No. 62/530,691, filed Jul. 10, 2017, and U.S. Provisional Appl. No. 62/559,021, filed Sep. 15, 2017, the entire content of each of which is incorporated herein by reference.