Zero copy socket splicing

Information

  • Patent Grant
  • Patent Number
    11,363,124
  • Date Filed
    Friday, October 30, 2020
  • Date Issued
    Tuesday, June 14, 2022
Abstract
Some embodiments provide a novel method for splicing Transmission Control Protocol (TCP) sockets on a computing device that processes a kernel of an operating system. The method receives a set of packets at a first TCP socket of the kernel. The method stores the set of packets at a kernel memory location and sends the set of packets directly from the kernel memory location out through a second TCP socket of the kernel.
Description
BACKGROUND

In the field of computing, data transfers from one computer to another take up a significant amount of computing time. One factor that makes this problem worse is that in some operations, such as virtual computing, data may need to be accessed by multiple separate processes on a particular physical machine (e.g., a host machine of a data center, standalone computer, etc.). In the prior art, different processes may each need their own copy of a set of data. In such circumstances, data used by multiple processes on the same machine will be copied, sometimes multiple times, from one memory location (accessible by a first process) to another memory location (accessible by a second process) on the same machine. Such copying may slow down the transmission and/or processing of the data. For example, in a prior art socket splicing operation, incoming data on a receiving socket is copied from a first memory location used by the receiving socket to a second, intermediate memory location. The data is then copied from the intermediate memory location to a third memory location used by a transmitting socket. Each additional copy operation slows down the transmission of the data.


In some of the prior art, Berkeley Sockets (a.k.a. BSD Sockets) are often used for inter-process communication and are the de-facto standard API for I/O (a convenient API for user-space I/O). With BSD Sockets, splicing TCP sockets requires performing two I/O operations (one read operation and one write operation) per I/O buffer. Additional performance costs include memory copying, which consumes several CPU cycles and hurts other processes by “polluting” the shared L3 cache and putting additional pressure on the memory channels. The performance costs also include additional system calls and a slow network stack. These performance costs of BSD Sockets reduce effective throughput on high-speed Ethernet because network speeds have outstripped those of the CPU and memory. Thus, operations that require extra CPU and memory use become a bottleneck for data transmission. Because the network transmits data faster than a single CPU can feed data into the network, more than a single CPU core is required simply to saturate a network link.


Attempts have been made to eliminate these performance costs by creating network systems that bypass the kernel of a computer in the network transmission path, such as with DPDK and Netmap. The kernel bypass methods attempt to avoid the performance penalties associated with BSD Sockets. However, by bypassing the kernels, these methods lose the use of network infrastructure that already exists inside the kernel. Without the existing kernel infrastructure, the kernel bypass methods require a substitute for that network. Thus, the developers of such kernel bypass methods also need to re-develop existing network infrastructure of the kernels (e.g., IP, TCP, ICMP, IGMP). Therefore, there is a need in the art for a dedicated memory allocator for I/O operations that inherently facilitates zero-copy I/O operations and exceptionless system calls rather than merely bypassing the kernel.


BRIEF SUMMARY

Modern computers use a bifurcated structure that includes a core operating system (the kernel) and applications that access that kernel operating in a user-space. Some data is used by both the kernel and by applications in the user-space. The prior art copies the data from memory locations used by the kernel to separate memory locations used by applications of the user-space. Unlike that prior art, some embodiments provide a novel method for performing zero-copy operations using a dedicated memory allocator for I/O operations (MAIO). Zero-copy operations are operations that allow separate processes (e.g., a kernel-space process and a user-space process, two sockets in a kernel-space, etc.) to access the same data without copying the data between separate memory locations. The term “kernel-space process,” as used herein, encompasses any operation or set of operations by the kernel, including operations that are part of a specific process, operations called by a specific process, or operations independent of any specific process.


To enable the zero-copy operations that share data between user-space processes and kernel-space processes without copying the data, the method of some embodiments provides a user-space process that maps a pool of dedicated kernel memory pages to a virtual memory address space of user-space processes. The method allocates a virtual region of the memory for zero-copy operations. The method allows access to the virtual region by both the user-space process and a kernel-space process. The MAIO system of the present invention greatly outperforms the standard copying mechanism and performs at least on par with, and in many cases better than, existing zero-copy techniques while preserving the ubiquitous BSD Sockets API.


In some embodiments, the method only allows a single user to access a particular virtual region. In some embodiments, the allocated virtual region implements a dedicated receiving (RX) ring for a network interface controller (NIC). The dedicated RX ring may be limited to a single tuple (e.g., a single combination of source IP address, source port address, destination IP address, destination port address, and protocol). The dedicated RX ring may alternately be limited to a defined group of tuples.


In the method of some embodiments, the allocated virtual region implements a dedicated transmission (TX) ring for a NIC. Similar to the case in which the virtual region implements an RX ring, the dedicated TX ring may be limited to a single tuple or a defined group of tuples.


The kernel has access to a finite amount of memory. Allocating that memory for use in zero-copy operations prevents the allocated memory from being used for other kernel functions. If too much memory is allocated, the kernel may run out of memory. Accordingly, in addition to allocating virtual memory, the user-space process of some embodiments may also de-allocate memory to free it for other kernel uses. Therefore, the user-space process of some embodiments identifies virtual memory, already allocated to zero-copy operations, to be de-allocated. In some cases, a user-space process may not de-allocate enough memory. Therefore, in some embodiments, when the amount of memory allocated by the user-space process is more than a threshold amount, the kernel-space process de-allocates at least part of the memory allocated by the user-space process. In some embodiments, either in addition to or instead of the kernel-space process de-allocating memory, when the amount of memory allocated by the user-space process is more than a threshold amount, the kernel-space process prevents the user-space process from allocating more memory.


In some embodiments, the kernel-space process is a guest kernel-space process on a guest virtual machine operating on a host machine. The method may additionally allow access to the virtual region by a user-space process of the host machine and/or a kernel-space process of the host.


Zero-copy processes can also be used for TCP splicing. Some embodiments provide a method of splicing TCP sockets on a computing device (e.g., a physical computer or a virtual computer) that processes a kernel of an operating system. The method receives a set of packets at a first TCP socket of the kernel, stores the set of packets at a kernel memory location, and sends the set of packets directly from the kernel memory location out through a second TCP socket of the kernel. In some embodiments, the receiving, storing, and sending are performed without a system call. Some embodiments preserve standard BSD Sockets API but provide seamless zero-copy I/O support.


Packets may sometimes come into the receiving socket faster than the transmitting socket can send them on, causing a memory buffer to fill. If the memory buffer becomes completely full and packets continue to be received, packets would have to be discarded rather than sent. The capacity of a socket to receive packets without its buffer being overwhelmed is called the “receive window size.”


In some embodiments, when the buffer is full beyond a threshold level, the method sends an indicator of a reduced size of the receive window to the original source of the set of packets. In more severe cases, in some embodiments, when the buffer is full, the method sends an indicator to the original source of the set of packets that the receive window size is zero. In general, the buffer will be filled by the receiving socket and emptied (partially or fully) by the transmitting socket. That is, memory in the buffer will become available as the transmitting socket sends data out and releases the buffer memory that held that data. Accordingly, the method of some embodiments sends multiple indicators to the original source of the packets as the buffer fullness fluctuates. For example, when the transmitting socket empties the buffer, the method of some embodiments sends a second indicator that the receive window size is no longer zero.


In some embodiments, the set of packets is a first set of packets and the method waits for the first set of packets to be sent by the second TCP socket before allowing a second set of packets to be received by the first TCP socket. In some such embodiments, the kernel memory location identifies a set of memory pages; the method frees the memory pages with a driver completion handler after the data stored in the memory pages is sent.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matter is not to be limited by the illustrative details in the Summary, Detailed Description, and Drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 conceptually illustrates a process that allocates memory as a shared memory pool for user-space and kernel-space processes.



FIG. 2 conceptually illustrates a process for allocating a virtual region of memory for zero-copy operations.



FIG. 3 conceptually illustrates kernel memory allocated as a virtual memory address space in a user-space.



FIG. 4 conceptually illustrates system calls using dedicated ring buffers.



FIG. 5 illustrates a zero-copy memory accessible by the user-spaces and kernel-spaces of both a guest machine and a host machine.



FIG. 6 illustrates a dedicated memory allocation I/O system operating on a multi-tenant host.



FIG. 7 conceptually illustrates a process 700 of some embodiments for allocating and de-allocating kernel memory for shared memory access with kernel-space and user-space processes.



FIG. 8 conceptually illustrates a process 800 for zero-copy TCP splicing.



FIG. 9 conceptually illustrates zero-copy TCP splicing between two kernel sockets.



FIG. 10 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Modern computers use a bifurcated structure that includes a core operating system (the kernel) and applications that access that kernel operating in a user-space. Some data is used by both the kernel and by applications in the user-space. The prior art copies the data from memory locations used by the kernel to separate memory locations used by applications of the user-space. Unlike that prior art, some embodiments provide a novel method for performing zero-copy operations using a dedicated memory allocator for I/O operations (MAIO). Zero-copy operations are operations that allow separate processes (e.g., a kernel-space process and a user-space process, two sockets in a kernel-space, etc.) to access the same data without copying the data between separate memory locations. The term “kernel-space process,” as used herein, encompasses any operation or set of operations by the kernel, whether these operations are part of a specific process or independent of any specific process.


Some embodiments provide a novel method for performing zero-copy operations using dedicated memory allocated for I/O operations. FIG. 1 conceptually illustrates a process 100 that allocates memory as a shared memory pool for user-space and kernel-space processes. FIG. 2 conceptually illustrates a process 200 for allocating a virtual region of memory for zero-copy operations. The process 100 of FIG. 1 and process 200 of FIG. 2 will be described by reference to FIG. 3. FIG. 3 conceptually illustrates kernel memory allocated as a virtual memory address space in a user-space. FIG. 3 includes a kernel-space 310 with kernel memory 320 and user-space 330 with virtual memory 340. Kernel memory 320 includes allocated memory pages 325 which in turn include memory 327 allocated for zero-copy operations. A user-space process 350 runs in user-space 330 and a kernel-space process 360 runs in kernel-space 310.


The process 100 of FIG. 1 prepares memory for sharing data between user-space processes and kernel-space processes without copying the data. The process 100 allocates (at 105) a set of memory locations as a shared memory pool. In some embodiments, the memory pool is allocated from kernel memory. An example of this is shown in FIG. 3, with memory pages 325 allocated as shared memory pages. The process 100 (of FIG. 1) then maps (at 110) a pool of the dedicated kernel memory to a virtual memory address space of user-space processes. FIG. 3 illustrates such a mapping with the allocated memory pages 325 mapped to virtual memory 340. Although the embodiment of FIG. 3 shows the allocated memory pages 325 mapped to a single virtual memory space, in some embodiments the allocated memory may be mapped to multiple virtual memory address spaces (e.g., for multiple processes in a single user-space, processes in multiple user-spaces, processes owned by multiple tenants of a datacenter, etc.).
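

As one concrete illustration of operation 110, the sketch below shows how a user-space process might map such a dedicated kernel pool into its own virtual address space on a Linux-like system. The device node name ("/dev/maio") and the pool size are assumptions made only for this example; the embodiments described here do not prescribe a particular interface.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MAIO_POOL_SIZE (64 * 1024 * 1024)   /* assumed pool size: 64 MB */

    int main(void)
    {
        /* Hypothetical device node through which the kernel exposes the pool. */
        int fd = open("/dev/maio", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the dedicated kernel pages once; afterwards the kernel and this
         * process reach the same physical pages, so per-I/O copies are avoided. */
        void *pool = mmap(NULL, MAIO_POOL_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pool == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        printf("shared pool mapped at %p\n", pool);

        munmap(pool, MAIO_POOL_SIZE);
        close(fd);
        return 0;
    }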


After the memory is mapped, the process 100 then provides (at 115) the memory location identifier to a kernel-space process to allow the kernel-space process to access the virtual memory region. The process 100 also provides (at 120) a memory location identifier to a user-space process to access the virtual-memory region.


Although the process 100 is shown as providing the memory location identifier to the kernel-space process first, one of ordinary skill in the art will understand that other embodiments provide the memory location identifier to the kernel-space process after providing it to the user-space process. Additionally, in some embodiments, the features of either operation 115 or operation 120 may be combined with the features of operation 110 into a single operation, in which the mapping is performed by a kernel-space or user-space operation that creates the memory location identifier of operations 115 or 120 in the course of a mapping operation similar to operation 110. In some embodiments, the location identifier may supply an identifier of a memory location in kernel-space at which the memory begins and/or a corresponding memory location in a virtual memory for the user-space at which the memory begins. In embodiments in which the kernel-space and the user-space each use separate address locations for the same physical memory location, the location identifier or identifiers exchanged between the user-space process and the kernel allow the kernel to identify the address of a page in the kernel-space memory based on a memory page address, in the virtual memory, supplied to the kernel by the user-space process. Similarly, in some embodiments, the user-space process may translate address locations between the virtual memory addresses and the kernel-space memory addresses.
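

The address translation described above reduces to simple offset arithmetic once each side knows where the shared region begins in its own address space. The following sketch assumes the exchanged location identifiers are just those two base addresses; the structure, field, and function names are illustrative only.

    #include <stddef.h>
    #include <stdint.h>

    struct maio_region_map {
        uintptr_t user_base;    /* start of the region in the user virtual mapping  */
        uintptr_t kernel_base;  /* start of the same region in kernel address space */
        size_t    length;       /* length of the shared region in bytes             */
    };

    /* Translate a user-space pointer into the matching kernel address, or
     * return 0 if the pointer falls outside the shared region. */
    static uintptr_t user_to_kernel(const struct maio_region_map *m,
                                    const void *uptr)
    {
        uintptr_t u = (uintptr_t)uptr;
        if (u < m->user_base || u >= m->user_base + m->length)
            return 0;
        return m->kernel_base + (u - m->user_base);
    }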


Once the process 100 maps a pool of dedicated kernel memory pages to a virtual memory address space of user-space processes, some embodiments provide a process for allocating a virtual region of that dedicated kernel memory for zero-copy operations. FIG. 2 conceptually illustrates a process 200 for allocating a virtual region of memory for zero-copy operations. The process 200 receives (at 205) a memory location identifier of an allocated pool of memory shared by kernel-processes and user-space processes. In some embodiments, the memory location identifier is received from a user-space process or kernel-space process that allocates the memory (e.g., in operation 110 of FIG. 1).


The process 200 allocates (at 210) a virtual region of memory from the identified memory location for use in a zero-copy memory operation. The process 200 provides (at 215) an identifier of the allocated memory for zero-copy memory operations to a kernel-space process and a user-space process. In FIG. 3, the zero-copy memory is accessible by both user-space process 350 and kernel-space process 360. Although process 200 is described as being performed by a user-space process, one of ordinary skill in the art will understand that in some embodiments a kernel-space process allocates the memory for zero-copy memory operations instead of the user-space process allocating the memory. Similarly, in some embodiments, both user-space processes and kernel-space processes can allocate memory for zero-copy memory operations.
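

A minimal sketch of operation 210 follows, assuming the shared pool has already been mapped as in FIG. 1 and that regions are handed out by a simple page-aligned bump allocator. The patent does not prescribe any particular allocation strategy, so the structure and names here are illustrative.

    #include <stddef.h>
    #include <stdint.h>

    struct maio_pool {
        uint8_t *base;     /* start of the mapped shared pool */
        size_t   size;     /* total pool size in bytes        */
        size_t   used;     /* bytes handed out so far         */
    };

    /* Carve a page-aligned virtual region for zero-copy use out of the pool.
     * The returned pointer is reachable by both user- and kernel-space via
     * the shared mapping; returns NULL if the pool is exhausted. */
    static void *maio_alloc_region(struct maio_pool *p, size_t len)
    {
        const size_t page = 4096;
        len = (len + page - 1) & ~(page - 1);
        if (p->used + len > p->size)
            return NULL;
        void *region = p->base + p->used;
        p->used += len;
        return region;
    }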


Zero-copy operations between kernel-space and user-space are useful in multiple processes. One such process is receiving and transmitting data in I/O operations. In existing systems, the direct and indirect costs of system calls impact user-space I/O performance. Some embodiments of the present invention avoid these costs by offloading the I/O operation to one or more dedicated kernel threads which perform the I/O operation using kernel sockets, rather than requiring user-space processes to perform the I/O operations. In some embodiments, a dedicated ring memory buffer (sometimes called an RX ring) is used for receiving data at a network interface controller (NIC) and a second dedicated ring memory buffer is used for transmitting data from the NIC. The dedicated RX ring may be limited to a single tuple (e.g., a single combination of source IP address, source port address, destination IP address, destination port address, and protocol). The dedicated RX ring may alternately be limited to a defined group of tuples. Similarly, in some embodiments an allocated virtual region implements a dedicated transmission ring memory buffer (sometimes called a TX ring) for a NIC. As in the case in which the virtual region implements an RX ring, the dedicated TX ring may be limited to a single tuple or a defined group of tuples.
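

The "tuple" to which a dedicated RX or TX ring may be limited can be pictured as the usual five-field connection key. The struct and match helper below are an illustrative sketch only; the embodiments do not define specific field names.

    #include <stdbool.h>
    #include <stdint.h>

    struct flow_tuple {
        uint32_t src_ip, dst_ip;      /* IPv4 addresses, network byte order */
        uint16_t src_port, dst_port;  /* TCP/UDP ports, network byte order  */
        uint8_t  protocol;            /* e.g., 6 for TCP                    */
    };

    /* Returns true if two tuples describe the same flow, i.e., the flow a
     * dedicated ring is restricted to. */
    static bool tuple_match(const struct flow_tuple *a, const struct flow_tuple *b)
    {
        return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
               a->src_port == b->src_port && a->dst_port == b->dst_port &&
               a->protocol == b->protocol;
    }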


An example of such dedicated RX and TX rings is shown in FIG. 4. FIG. 4 conceptually illustrates send and receive threads using dedicated ring buffers. FIG. 4 includes device drivers 400 and a network stack 410 operating in kernel-space, dedicated transmission ring memory buffers 415 which receive data 420 from kernel system calls (i.e., system calls sending messages from the kernel to the user-space), and dedicated receiving ring memory buffers 425 which transmit data 430 through kernel system calls (i.e., system calls receiving messages at the kernel from the user-space).


Although the dedicated transmission memory buffer ring 415 is shown as two separate items, one in the kernel-space and one straddling a dashed line separating user-space from kernel-space, they are the same memory buffer ring shown from two different perspectives, not two separate entities. Kernel processes and user processes each have access to the transmission memory buffer ring 415, and the data 420 sent from the kernel with system calls 417 in the user-space is all data stored in the transmission memory buffer ring 415. In addition to storing data 420 for MAIO pages, in some embodiments, the dedicated transmission ring may be used to store data 422 for a kernel buffer without needing any special care for data separation.


As with dedicated memory buffer ring 415, although the dedicated receiving memory buffer ring 425 is shown as two separate items, one in the kernel-space and one straddling a dashed line separating user-space from kernel-space, they are also a single memory buffer ring shown from two different perspectives, not two separate entities. Kernel processes and user processes each have access to the receiving memory buffer ring 425, and the data 430 received by the kernel with system calls 427 from the user-space is all data stored in the receiving memory buffer ring 425.


Some embodiments use dedicated threads with the ring buffers. This has multiple advantages. For example, it reduces the need for some system calls which would otherwise slow down the data transmission: when sending data, some embodiments do not require a send_msg system call, but instead use an I/O descriptor (e.g., a struct msghdr and int flags) written to a shared memory ring buffer. Additionally, splitting (between the kernel-space process and the user-space process) the responsibility for performing I/O preserves the existing socket API, facilitates exceptionless system calls, and allows for better parallel programming. Furthermore, bifurcated I/O (splitting the responsibility for performing the I/O) enables the separation of the application computations and the TCP computations onto different CPU cores. In some embodiments, dedicated kernel threads are also used to perform memory operations (e.g., retrieving memory buffers back from the user).
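

A rough sketch of such an exception-less submission path is shown below: the user-space side posts an I/O descriptor into a single-producer/single-consumer ring that a dedicated kernel thread drains. The descriptor layout, ring size, and function names are assumptions for illustration, not the structures of any particular embodiment.

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define RING_SLOTS 256               /* must be a power of two */

    struct io_desc {
        uint64_t buf_offset;             /* offset of the data in the shared pool */
        uint32_t len;                    /* number of bytes to send               */
        uint32_t flags;                  /* analogous to sendmsg's int flags      */
    };

    struct tx_ring {
        _Atomic uint32_t head;           /* advanced by the producer (user space)    */
        _Atomic uint32_t tail;           /* advanced by the consumer (kernel thread) */
        struct io_desc   slots[RING_SLOTS];
    };

    /* Producer side: post a descriptor without a system call.
     * Returns 0 on success, -1 if the ring is currently full. */
    static int tx_ring_post(struct tx_ring *r, const struct io_desc *d)
    {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SLOTS)
            return -1;                   /* no free slot; caller may retry later */
        r->slots[head & (RING_SLOTS - 1)] = *d;
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return 0;
    }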


Although the embodiment of FIG. 4 shows receiving and transmitting only through zero-copy operations, in other embodiments, both zero-copy and standard send and receive operations are supported. For example, some embodiments provide support for standard I/O operations for apps with small I/O needs (e.g., where the copying of only a small amount of data reduces or eliminates the savings from zero-copy operations). In standard mode, the sent buffer is copied to a new MAIO buffer before being sent. In some embodiments the common memory is allocated using a NIC driver. In some embodiments, the NIC driver dedicates the memory using an application device queue (ADQ). Various embodiments may map the kernel-space memory to the virtual (user-space) memory after the NIC driver dedicates the memory for user space, after the NIC driver dedicates the memory to kernel-space, or in some embodiments the NIC driver may perform the mapping of the kernel-space memory to the virtual memory as well as dedicating the memory to a user-space process using an ADQ.


The previous figure illustrated the use of the present invention in a computer system with a single user-space and a single kernel-space. However, the invention is not limited to such systems. In some embodiments, the invention operates on a guest machine (e.g., a virtual machine operating on a physical host machine). In some such embodiments, both the host system and the guest system are designed to use zero-copy operations and are both able to access the shared memory. FIG. 5 illustrates a zero-copy memory accessible by the user-spaces and kernel-spaces of both a guest machine and a host machine. FIG. 5 includes a host kernel-space 500, a host user-space 502, a guest kernel-space 504, and a guest user-space 506. A kernel-space process 530 operates in the guest-kernel-space 504 and receives data from a user-space process 520 through a dedicated memory ring buffer 510. Similarly, another kernel-space process 550 operates in the guest-kernel-space 504 and receives data from a user-space process 560 through a dedicated memory ring buffer 540.


The embodiment of FIG. 5 includes only a single guest machine, eliminating security issues that might arise from exposing data from one guest machine, owned by a first tenant, to a second guest machine owned by a second tenant. However, even when multiple tenants have guest machines on the same host machine, the present invention still provides security for the tenants' data.


In order to protect data when user-processes now seemingly have access to sensitive kernel memory, the present invention provides entirely separate allocated memory to different tenants. That is, in some embodiments, the method limits access to the virtual region allocated for zero-copy operations to a single user. Thus, the kernel memory a particular user has access to contains only data that the particular user would normally have access to. FIG. 6 illustrates a dedicated memory allocation I/O system operating on a multi-tenant host. FIG. 6 includes a host kernel-space 600 and a host user-space 602. Tenant 1 has a guest machine with a guest kernel-space 604, and a guest user-space 606. A kernel-space process 620 operates in the guest-kernel-space 604 and receives data from a user-space process 630 through a dedicated memory ring buffer 610. Tenant 2 has a guest machine with a guest kernel-space 644, and a guest user-space 646. A kernel-space process 650 operates in the guest-kernel-space 644 and receives data from a user-space process 660 through a dedicated memory ring buffer 640. Memory ring 610 is used exclusively for tenant 1, while memory ring 640 is used exclusively for tenant 2. Accordingly, no data can leak from tenant 1 to tenant 2 or vice versa through the dedicated memory ring buffers.


Some embodiments provide additional security features. For example, in some embodiments, shared pages are only ever used by the kernel to hold I/O data buffers and not any metadata or any other data needed by the kernel. That is, the user-space process can only ever see the information that a user-space process has written or data bound to user-space which would be received by the user in a standard operation, even if a zero-copy operation were not used. In some embodiments, in addition to the message data, the kernel-process is privy to transport headers as well. In some embodiments, where the NIC supports Header/Data splitting, the kernel-process places the headers onto non-shared buffers for additional security. In contrast, in embodiments where all potential receiving memory ring buffers are shared, the MAIO would potentially expose all traffic to a single observer. In the absence of driver support for keeping different tenant data separate, the usefulness of MAIO in such embodiments should be limited to those cases when any user with access is trusted (e.g., sudo).


Kernel memory allocated to zero-copy operations is not available for other kernel functions. If allocated memory is not released back to the kernel while new memory continues to be allocated, the kernel may run out of memory for those other functions. Therefore, in addition to allocating virtual memory, the user-space process of some embodiments may de-allocate memory. That is, the user-space process may identify virtual memory, previously allocated to zero-copy operations, to be de-allocated.


Under some circumstances, a user-process may not properly de-allocate memory. Accordingly, in some embodiments, when the amount of memory allocated by the user-space process is more than a threshold amount, the kernel-space process takes corrective action. In some embodiments, when the amount of memory allocated by the user-space process is more than a threshold amount, the kernel-space process prevents the user-space process from allocating more memory. FIG. 7 conceptually illustrates a process 700 of some embodiments for allocating and de-allocating kernel memory for shared memory access with kernel-space and user-space processes. The process 700 receives (at 705) a request from a user-space process for a pool of dedicated kernel memory to be accessed by both kernel-space and user-space processes. The process 700 determines (at 710) whether the user-space process has more than a threshold amount of kernel memory dedicated to that user-space process. In some embodiments, the threshold is a fixed amount; in other embodiments, the threshold is variable based on available (free) system resources, the relative priority of various user-processes, etc. In some embodiments, the threshold is determined on a per-process basis; in other embodiments, the threshold may be determined on a per-guest-machine basis or a per-tenant basis.


When the process 700 determines (at 710) that the user-process has more than the threshold amount of memory, the process 700 uses (at 715) a standard memory allocation (e.g., the driver of the NIC uses a standard memory allocation) and refuses to designate a pool of kernel memory for the user-space process. For example, this occurs when a user-space process hoards MAIO buffers without releasing them to the kernel, thus starving the kernel of needed memory. In some embodiments, when the driver of the NIC reverts to standard memory allocation, this renders the user-space process unable to receive, while other processes and kernel functionality remain intact. After operation 715, the process 700 moves on to operation 725.


When the process 700 determines (at 710) that the user-process does not have more than the threshold amount of memory, the process 700 designates (at 720) a pool of dedicated kernel memory for the user-space process to share with kernel-space processes. After operation 720, the process 700 moves on to operation 725.


The process 700 determines (at 725) whether it has received (e.g., from the user-space process) a request to de-allocate a pool of dedicated kernel memory. When the process 700 has received a request to de-allocate a pool of dedicated kernel memory, the process 700 de-allocates (at 730) that pool of kernel memory, freeing that pool to be allocated for shared use with other user-space processes or for use in other kernel operations. The process then returns to operation 705 when it receives a new request for a pool of memory. When the process 700 determines (at 725) that it has not received a request to de-allocate a pool of dedicated kernel memory, the process 700 returns to operation 705.
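

The quota check in operations 710-720 can be summarized by the small accounting sketch below. The per-process quota field and helper names are assumptions; as noted above, the threshold may instead be computed per guest machine or per tenant, or vary with free system resources.

    #include <stdbool.h>
    #include <stddef.h>

    struct maio_account {
        size_t dedicated_bytes;   /* kernel memory already dedicated to this process */
        size_t quota_bytes;       /* threshold beyond which requests are refused     */
    };

    /* Operations 710/720: may another 'request' bytes be dedicated, or should
     * the kernel fall back to standard (copying) allocation? */
    static bool maio_may_dedicate(const struct maio_account *acct, size_t request)
    {
        return acct->dedicated_bytes + request <= acct->quota_bytes;
    }

    /* Operation 730: account for a pool the user-space process has released. */
    static void maio_release(struct maio_account *acct, size_t released)
    {
        acct->dedicated_bytes = released >= acct->dedicated_bytes
                                ? 0 : acct->dedicated_bytes - released;
    }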


The process 700 may be used to prevent memory hoarding by a user process in circumstances when zero-copy solutions with a shared static buffer are considered dangerous because these shared pages can be exhausted and cannot be swapped out. However, some modern systems have hundreds of GB of RAM and such systems may not be exhausted during typical operation. In such systems, the user-space process might not reach a threshold level requiring the kernel to refuse further memory allocation. In other embodiments, the kernel-space process itself de-allocates memory allocated to the user-space process rather than merely denying new allocations.


Although the previous description involved zero-copy operations used between kernel-space processes and user-space processes, zero-copy processes can also be used in kernel-space to kernel-space operations. One example of such kernel-to-kernel operations is TCP splicing. TCP splicing is a method of splicing two socket connections inside a kernel, so that the data relayed between the two connections can be run at near router speeds.


In older prior art, TCP splicing involved user-space processes as well as kernel-space processes. In more recent prior art, a process called an “eBPF callback” is called when a packet is received. The eBPF callback forwards the received packet to a predefined socket. However, the prior art eBPF callback is problematic because the callback is invoked in a non-process context. That is, the eBPF callback process has no way to determine whether the predefined socket to which the callback is forwarding the packet is ready to handle a packet. Therefore, when the destination socket cannot send (e.g., due to a closed send or receive window), there is no feedback process that can tell the original sender to wait for the window to open. Without this option, the notion of “back-pressure” (narrowing a receive window to tell the system that is the original source of the packets to slow or stop transmission until the transmitting socket can send the packets that already arrived) is infeasible. Back-pressure is paramount for socket splicing where the two connected lines are of different widths.


In contrast to the prior art eBPF callback, the present invention allows backpressure in the form of feedback to the original source when the transmitting socket is not ready to receive more packets. Some embodiments provide a method of splicing TCP sockets on a computing device (e.g., a physical computer or a virtual computer) that processes a kernel of an operating system. The method receives a set of packets at a first TCP socket of the kernel, stores the set of packets at a kernel memory location, and sends the set of packets directly from the kernel memory location out through a second TCP socket of the kernel. The method provides back-pressure that prevents the original source of the packets from sending packets to the receiving socket faster than the transmitting socket of the splice can send them onward. In some embodiments, the receiving, storing, and sending are performed without a system call.



FIG. 8 conceptually illustrates a process 800 for zero-copy TCP splicing. The process 800 will be described by reference to FIG. 9 which conceptually illustrates zero-copy TCP splicing between two kernel sockets. FIG. 9 includes receiving socket 910 which receives data packets 915 and stores them in memory buffer 920 and transmitting socket 930 which transmits the data packets from the memory buffer 920 without any intermediate copying of the data.


The process 800 of FIG. 8 receives (at 805), at a first TCP socket (e.g., such as receiving socket 910 of FIG. 9) of a kernel, a set of data packets (e.g., such as data packets 915 of FIG. 9). The process 800 of FIG. 8 stores (at 810) the data packets in a kernel memory location. For example, memory buffer 920 of FIG. 9. The process 800 of FIG. 8 sends (at 815) the set of packets directly from the kernel memory location out through a second TCP socket of the kernel. For example, transmitting socket 930 of FIG. 9. In some embodiments, the kernel memory location identifies a set of memory pages of a particular set of data, and the method frees the memory pages with a driver completion handler after the data stored in the memory pages is sent (at 815).


In some cases, the transmitting socket 930 may not be able to transmit packets as quickly as the receiving socket 910 is able to receive them. When that occurs, the receiving socket 910 adds packets to the memory buffer 920 faster than the transmitting socket 930 can clear the packets by sending them. Thus, the memory buffer 920 fills up. Accordingly, the process 800 determines (at 820) whether the buffer fullness has crossed a threshold level. This can happen in one of two ways, by the fullness increasing past a first threshold or decreasing past a second threshold. One of ordinary skill in the art will understand that in some embodiments the first and second thresholds will be the same and in other embodiments the thresholds will be different.


When the buffer becomes full beyond a first threshold level, the process 800 sends (at 825) an indicator from the first TCP socket (e.g., receiving socket 910 of FIG. 9) to a source of the set of packets (not shown). The indicator communicates that the size of a receive window of the first TCP socket has been adjusted downward. After the window size is reduced, the process 800 returns to operation 805 and loops through operations 805-820 until the buffer fullness passes another threshold at 820. When the original source of the packets receives such an indicator, it slows down transmission of new packets to the receiving socket 910. If this adjustment reduces the rate of incoming packets below the rate at which the transmitting socket sends them onward, then the buffer will gradually empty while the process 800 loops through operations 805-820.


The reduction of the rate of incoming packets will eventually result in the buffer dropping below a threshold (on subsequent passes through the loop). At that point, the process 800 then sends (at 825) an indicator increasing the size of the receiving window. Once the indicator is sent, the process 800 returns to operation 805 and continues to loop through operations 805-820, occasionally returning to operation 825 to adjust the size of the receive window up or down as needed before returning to the loop again.


While the adjustments are intended to keep the packets arriving at a rate that always leaves adequate space in the buffer, in some cases, the buffer may become nearly or entirely full. In such cases, the process 800 sends (at 825) an indicator to the original source of the set of packets, that the receive window size is zero, stopping the transmission of packets to the receiving socket entirely until the transmitting socket clears enough space in the buffer. Subsequent passes through the loop send (at 815) packets, but do not receive or store new ones until the buffer has enough space to resume receiving and the process 800 sends (at 825) an indicator that the receive window is open again.
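

The window-adjustment decisions of operations 820-825 amount to advertising a receive window that shrinks as the splice buffer fills and closes entirely when the buffer is full. The sketch below captures that logic; the three-quarters threshold and the halved window are arbitrary illustrative choices, not values required by the embodiments.

    #include <stddef.h>
    #include <stdint.h>

    struct splice_buf {
        size_t capacity;   /* total size of the kernel buffer                  */
        size_t used;       /* bytes received but not yet sent by the TX socket */
    };

    /* Receive window the first (receiving) TCP socket should advertise to the
     * original source of the packets. */
    static uint32_t advertised_window(const struct splice_buf *b)
    {
        size_t free_space = b->capacity - b->used;

        if (free_space == 0)
            return 0;                           /* buffer full: close the window */
        if (b->used * 4 > b->capacity * 3)      /* more than 3/4 full: throttle  */
            return (uint32_t)(free_space / 2);  /* advertise a reduced window    */
        return (uint32_t)free_space;            /* otherwise advertise the space */
    }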


Although the above described figures disclose the elements of some embodiments, some embodiments may include other elements. For example, in some embodiments, the memory allocator uses a pool of dedicated compound memory pages (i.e., __GFP_COMP). In some embodiments, the allocator is based partly on two mechanisms: a page frag mechanism over 64 KB buffers, and these buffers in turn are allotted by a magazine allocator. This allocation scheme efficiently allocates variable-size buffers in the kernel. Variable-size allocation is useful to support variable sizes of MTU and HW offloads (e.g., HW GRO). To facilitate zero-copy, these pages are mapped once to the virtual memory address space of the privileged user-space process. The user-space process accesses MAIO buffers in two ways in some embodiments: (1) zero-copy send, in which the user-space process has to mmap the MAIO buffer (mmap is a Unix system call that maps files or devices into memory), or perform a similar operation appropriate to the operating system on which the invention is implemented, and then allocate a virtual region for its own use (the allocated region's size is a multiple of 64 KB in some embodiments); and (2) zero-copy receive, in which the user-space process performs a zero-copy receive operation to get MAIO buffers. The user-space process of some embodiments can return memory to the kernel via an exception-less mechanism.
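

The variable-size allocation scheme described above (page frags carved out of 64 KB buffers that themselves come from a magazine allocator) can be sketched in user-space terms as follows. Here malloc merely stands in for the magazine allocator, and freeing of exhausted chunks (handled in the kernel by reference counting and completion handlers) is omitted.

    #include <stdint.h>
    #include <stdlib.h>

    #define CHUNK_SIZE (64 * 1024)   /* backing buffers are 64 KB, as described above */

    struct frag_alloc {
        uint8_t *chunk;    /* current 64 KB chunk being carved up */
        size_t   offset;   /* next free byte within that chunk    */
    };

    /* Hand out a variable-size buffer carved from the current 64 KB chunk,
     * fetching a new chunk when the current one is exhausted.  Exhausted
     * chunks are not freed here; in the kernel they would be reference
     * counted and released once their buffers are consumed. */
    static void *frag_alloc_buf(struct frag_alloc *fa, size_t len)
    {
        if (len > CHUNK_SIZE)
            return NULL;                       /* oversized requests need another path */
        if (fa->chunk == NULL || fa->offset + len > CHUNK_SIZE) {
            fa->chunk = malloc(CHUNK_SIZE);    /* stand-in for the magazine allocator  */
            if (fa->chunk == NULL)
                return NULL;
            fa->offset = 0;
        }
        void *buf = fa->chunk + fa->offset;
        fa->offset += len;
        return buf;
    }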


With respect to Zero-copy support for kernel sockets, some embodiments expand the existing Linux TCP API with a tcp_read_sock_zcopy for RX and add a new msg flag, SOCK_KERN_ZEROCOPY, for tcp_sendmsg_locked in TX. With respect to receiving, some embodiments provide a new function, tcp_read_sock_zcopy, based on existing infrastructure (i.e., tcp_read_sock). It is used by tcp_splice_read to collect buffers from a socket without copying. When kernel memory is used for I/O (e.g., for TCP socket splicing), enabling zero-copy is less complicated when compared to zero-copy from user-space. The pages are already pinned in memory and there is no need for a notification on TX completion. The pages are reference counted, and can be freed by the device driver completion handler (do_tcp_sendpages). Instead of modifying the behavior of tcp_sendmsg_locked, it is also possible to use do_tcp_sendpages, which is used in splicing. Ironically, do_tcp_sendpages accepts only one page fragment (i.e., struct page, size and offset) per invocation and does not work with a scatter-gather list, which tcp_sendmsg_locked supports. Although the above description refers to TCP, one of ordinary skill in the art will understand that the inventions described herein also apply to other standards such as UDP, etc.
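

For contrast, the closest mechanism ordinarily available to user space on Linux is splice(2), which avoids copying data through user space but still requires a pipe as an intermediary and a pair of system calls per chunk; the zero-copy kernel-socket splice described above needs neither. The sketch below shows that conventional approach only for comparison, not the method of the embodiments.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Relay up to 'len' bytes from socket rx_fd to socket tx_fd via a pipe.
     * Returns the number of bytes pushed to tx_fd, or -1 on error.  A
     * production loop would drain the pipe completely and retry short writes. */
    static ssize_t relay_once(int rx_fd, int tx_fd, size_t len)
    {
        int pipefd[2];
        ssize_t n;

        if (pipe(pipefd) < 0)
            return -1;

        n = splice(rx_fd, NULL, pipefd[1], NULL, len, SPLICE_F_MOVE);
        if (n > 0)
            n = splice(pipefd[0], NULL, tx_fd, NULL, (size_t)n, SPLICE_F_MOVE);

        close(pipefd[0]);
        close(pipefd[1]);
        return n;
    }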



FIG. 10 conceptually illustrates an electronic system 1000 with which some embodiments of the invention are implemented. The electronic system 1000 can be used to execute any of the control, virtualization, or operating system applications described above. The electronic system 1000 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 1000 includes a bus 1005, processing unit(s) 1010, a system memory 1025, a read-only memory 1030, a permanent storage device 1035, input devices 1040, and output devices 1045.


The bus 1005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1000. For instance, the bus 1005 communicatively connects the processing unit(s) 1010 with the read-only memory 1030, the system memory 1025, and the permanent storage device 1035.


From these various memory units, the processing unit(s) 1010 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.


The read-only-memory (ROM) 1030 stores static data and instructions that are needed by the processing unit(s) 1010 and other modules of the electronic system. The permanent storage device 1035, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1000 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1035.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1035, the system memory 1025 is a read-and-write memory device. However, unlike storage device 1035, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory 1025 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1025, the permanent storage device 1035, and/or the read-only memory 1030. From these various memory units, the processing unit(s) 1010 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 1005 also connects to the input and output devices 1040 and 1045. The input devices 1040 enable the user to communicate information and select commands to the electronic system. The input devices 1040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1045 display images generated by the electronic system 1000. The output devices 1045 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.


Finally, as shown in FIG. 10, bus 1005 also couples electronic system 1000 to a network 1065 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 1000 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.


This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.


VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.


Hypervisor kernel network interface modules, in some embodiments, are non-VM DCNs that include a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.


It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method of splicing Transmission Control Protocol (TCP) sockets on a computing device that processes a kernel of an operating system, the method comprising: receiving a set of packets at a first TCP socket of the kernel; storing the set of packets at a kernel memory location; sending the set of packets directly from the kernel memory location out through a second TCP socket of the kernel, the kernel memory location comprising a location of a buffer of a particular size; sending an indicator of a receive window size of the first TCP socket to a source of the set of packets; when the buffer is full beyond a threshold level, sending an indicator of a reduced size of the receive window to the source of the set of packets.
  • 2. A method of splicing Transmission Control Protocol (TCP) sockets on a computing device that processes a kernel of an operating system, the method comprising: receiving a set of packets at a first TCP socket of the kernel; storing the set of packets at a kernel memory location; sending the set of packets directly from the kernel memory location out through a second TCP socket of the kernel, the kernel memory location comprising a location of a buffer of a particular size; sending an indicator of a receive window size of the first TCP socket to a source of the set of packets; when the buffer is full, sending an indicator that the receive window size is zero to the source of the set of packets.
  • 3. The method of claim 2, wherein the indicator is a first indicator, the method further comprising: emptying the buffer by sending packets from the second TCP socket; and sending a second indicator that the receive window size is no longer zero.
  • 4. The method of claim 1 wherein the set of packets is a first set of packets, the method further comprising waiting for the first set of packets to be sent by the second TCP socket before allowing a second set of packets to be received by the first TCP socket.
  • 5. The method of claim 4, wherein the kernel memory location comprises a set of memory pages, the method further comprising freeing the memory pages with a driver completion handler after the pages are sent.
  • 6. The method of claim 1, wherein: the first TCP socket writes data comprising the set of packets to the kernel memory location upon first receiving the set of packets, and the second TCP socket reads the data from the same kernel memory location.
  • 7. The method of claim 1, wherein the receiving, storing, and sending are performed without a system call.
  • 8. A non-transitory machine readable medium storing a program which when executed by at least one processing unit splices sockets on a computing device that processes a kernel of an operating system, the program comprising sets of instructions for: receiving a set of packets at a first socket of the kernel; storing the set of packets at a kernel memory location; sending the set of packets directly from the kernel memory location out through a second socket of the kernel, the kernel memory location comprising a location of a buffer of a particular size; sending an indicator of a receive window size of the first socket to a source of the set of packets; when the buffer is full beyond a threshold level, sending an indicator of a reduced size of the receive window to the source of the set of packets.
  • 9. A non-transitory machine readable medium storing a program which when executed by at least one processing unit splices sockets on a computing device that processes a kernel of an operating system, the program comprising sets of instructions for: receiving a set of packets at a first socket of the kernel; storing the set of packets at a kernel memory location; sending the set of packets directly from the kernel memory location out through a second socket of the kernel, the kernel memory location comprising a location of a buffer of a particular size; sending an indicator of a receive window size of the first socket to a source of the set of packets; when the buffer is full, sending an indicator that the receive window size is zero to the source of the set of packets.
  • 10. The non-transitory machine readable medium of claim 9, wherein the indicator is a first indicator, the program further comprising sets of instructions for: emptying the buffer by sending packets from the second socket; and sending a second indicator that the receive window size is no longer zero.
  • 11. The non-transitory machine readable medium of claim 8 wherein the set of packets is a first set of packets, the program further comprising a set of instructions for waiting for the first set of packets to be sent by the second socket before allowing a second set of packets to be received by the first socket.
  • 12. The non-transitory machine readable medium of claim 11, wherein the kernel memory location comprises a set of memory pages, the program further comprising a set of instructions for freeing the memory pages with a driver completion handler after the pages are sent.
  • 13. The non-transitory machine readable medium of claim 8, wherein: the first socket writes data comprising the set of packets to the kernel memory location upon first receiving the set of packets, and the second socket reads the data from the same kernel memory location.
  • 14. The non-transitory machine readable medium of claim 8, wherein the sets of instructions for receiving, storing, and sending are performed without a system call.
10608844 Cidon et al. Mar 2020 B2
10637889 Ermagan et al. Apr 2020 B2
10666460 Cidon et al. May 2020 B2
10686625 Cidon et al. Jun 2020 B2
10749711 Mukundan et al. Aug 2020 B2
10778466 Cidon et al. Sep 2020 B2
10778528 Mayya et al. Sep 2020 B2
10805114 Cidon et al. Oct 2020 B2
10805272 Mayya et al. Oct 2020 B2
10819564 Turabi et al. Oct 2020 B2
10826775 Moreno et al. Nov 2020 B1
10841131 Cidon et al. Nov 2020 B2
10911374 Kumar et al. Feb 2021 B1
10938693 Mayya et al. Mar 2021 B2
10951529 Duan et al. Mar 2021 B2
10958479 Cidon et al. Mar 2021 B2
10959098 Cidon et al. Mar 2021 B2
10992558 Silva et al. Apr 2021 B1
10992568 Michael et al. Apr 2021 B2
10999100 Cidon et al. May 2021 B2
10999137 Cidon et al. May 2021 B2
10999165 Cidon et al. May 2021 B2
11005684 Cidon May 2021 B2
11018995 Cidon et al. May 2021 B2
11044190 Ramaswamy et al. Jun 2021 B2
11050588 Mayya et al. Jun 2021 B2
11050644 Hegde et al. Jun 2021 B2
11071005 Shen et al. Jul 2021 B2
11089111 Markuze et al. Aug 2021 B2
11095612 Oswal et al. Aug 2021 B1
11102032 Cidon et al. Aug 2021 B2
11108851 Kurmala et al. Aug 2021 B1
11115347 Gupta et al. Sep 2021 B2
11115426 Pazhyannur et al. Sep 2021 B1
11115480 Markuze et al. Sep 2021 B2
11121962 Michael et al. Sep 2021 B2
11121985 Cidon et al. Sep 2021 B2
11128492 Sethi et al. Sep 2021 B2
11153230 Cidon et al. Oct 2021 B2
11171885 Cidon et al. Nov 2021 B2
20020085488 Kobayashi Jul 2002 A1
20020087716 Mustafa Jul 2002 A1
20020198840 Banka et al. Dec 2002 A1
20030061269 Hathaway Mar 2003 A1
20030088697 Matsuhira May 2003 A1
20030112766 Riedel et al. Jun 2003 A1
20030112808 Solomon Jun 2003 A1
20030126468 Markham Jul 2003 A1
20030161313 Jinmei et al. Aug 2003 A1
20030189919 Gupta et al. Oct 2003 A1
20030202506 Perkins et al. Oct 2003 A1
20030219030 Gubbi Nov 2003 A1
20040059831 Chu et al. Mar 2004 A1
20040068668 Lor et al. Apr 2004 A1
20040165601 Liu et al. Aug 2004 A1
20040224771 Chen et al. Nov 2004 A1
20050078690 DeLangis Apr 2005 A1
20050154790 Nagata et al. Jul 2005 A1
20050172161 Cruz et al. Aug 2005 A1
20050265255 Kodialam et al. Dec 2005 A1
20060002291 Alicherry et al. Jan 2006 A1
20060114838 Mandavilli et al. Jun 2006 A1
20060171365 Borella Aug 2006 A1
20060182034 Klinker et al. Aug 2006 A1
20060182035 Vasseur Aug 2006 A1
20060193247 Naseh et al. Aug 2006 A1
20060193252 Naseh et al. Aug 2006 A1
20070064604 Chen et al. Mar 2007 A1
20070064702 Bates et al. Mar 2007 A1
20070083727 Johnston et al. Apr 2007 A1
20070091794 Filsfils et al. Apr 2007 A1
20070103548 Carter May 2007 A1
20070115812 Hughes May 2007 A1
20070121486 Guichard et al. May 2007 A1
20070130325 Lesser Jun 2007 A1
20070162619 Aloni et al. Jul 2007 A1
20070162639 Chu Jul 2007 A1
20070177511 Das et al. Aug 2007 A1
20070237081 Kodialam et al. Oct 2007 A1
20070260746 Mirtorabi et al. Nov 2007 A1
20070268882 Breslau et al. Nov 2007 A1
20080002670 Bugenhagen et al. Jan 2008 A1
20080049621 McGuire et al. Feb 2008 A1
20080080509 Khanna et al. Apr 2008 A1
20080095187 Jung et al. Apr 2008 A1
20080117930 Chakareski et al. May 2008 A1
20080144532 Chamarajanagar et al. Jun 2008 A1
20080181116 Kavanaugh et al. Jul 2008 A1
20080219276 Shah Sep 2008 A1
20080240121 Xiong et al. Oct 2008 A1
20090013210 McIntosh et al. Jan 2009 A1
20090125617 Klessig et al. May 2009 A1
20090141642 Sun Jun 2009 A1
20090154463 Hines et al. Jun 2009 A1
20090247204 Sennett et al. Oct 2009 A1
20090274045 Meier et al. Nov 2009 A1
20090276657 Wetmore et al. Nov 2009 A1
20090303880 Maltz et al. Dec 2009 A1
20100008361 Guichard et al. Jan 2010 A1
20100017802 Lojewski Jan 2010 A1
20100046532 Okita Feb 2010 A1
20100061379 Parandekar et al. Mar 2010 A1
20100080129 Strahan et al. Apr 2010 A1
20100088440 Banks et al. Apr 2010 A1
20100091823 Retana et al. Apr 2010 A1
20100107162 Edwards et al. Apr 2010 A1
20100118727 Draves et al. May 2010 A1
20100165985 Sharma et al. Jul 2010 A1
20100191884 Holenstein et al. Jul 2010 A1
20100223621 Joshi et al. Sep 2010 A1
20100290422 Haigh et al. Nov 2010 A1
20100309841 Conte Dec 2010 A1
20100309912 Mehta et al. Dec 2010 A1
20100322255 Hao et al. Dec 2010 A1
20100332657 Elyashev et al. Dec 2010 A1
20110007752 Silva et al. Jan 2011 A1
20110032939 Nozaki et al. Feb 2011 A1
20110040814 Higgins Feb 2011 A1
20110075674 Li et al. Mar 2011 A1
20110107139 Middlecamp et al. May 2011 A1
20110110370 Moreno et al. May 2011 A1
20110141877 Xu et al. Jun 2011 A1
20110142041 Imai Jun 2011 A1
20110153909 Dong Jun 2011 A1
20110255397 Kadakia et al. Oct 2011 A1
20120008630 Ould-Brahim Jan 2012 A1
20120027013 Napierala Feb 2012 A1
20120136697 Peles et al. May 2012 A1
20120157068 Eichen et al. Jun 2012 A1
20120173694 Yan et al. Jul 2012 A1
20120173919 Patel et al. Jul 2012 A1
20120182940 Taleb et al. Jul 2012 A1
20120221955 Raleigh et al. Aug 2012 A1
20120227093 Shatzkamer et al. Sep 2012 A1
20120250682 Vincent et al. Oct 2012 A1
20120250686 Vincent et al. Oct 2012 A1
20120281706 Agarwal et al. Nov 2012 A1
20120300615 Kempf et al. Nov 2012 A1
20120317291 Wolfe Dec 2012 A1
20130019005 Hui et al. Jan 2013 A1
20130021968 Reznik et al. Jan 2013 A1
20130044764 Casado et al. Feb 2013 A1
20130051237 Ong Feb 2013 A1
20130051399 Zhang et al. Feb 2013 A1
20130054763 Merwe et al. Feb 2013 A1
20130086267 Gelenbe et al. Apr 2013 A1
20130103834 Dzerve et al. Apr 2013 A1
20130117530 Kim et al. May 2013 A1
20130124718 Griffith et al. May 2013 A1
20130124911 Griffith et al. May 2013 A1
20130124912 Griffith et al. May 2013 A1
20130128889 Mathur et al. May 2013 A1
20130142201 Kim et al. Jun 2013 A1
20130170354 Takashima et al. Jul 2013 A1
20130173788 Song Jul 2013 A1
20130182712 Aguayo et al. Jul 2013 A1
20130191688 Agarwal et al. Jul 2013 A1
20130238782 Zhao et al. Sep 2013 A1
20130242718 Zhang Sep 2013 A1
20130254599 Katkar et al. Sep 2013 A1
20130258839 Wang et al. Oct 2013 A1
20130266015 Qu et al. Oct 2013 A1
20130266019 Qu et al. Oct 2013 A1
20130283364 Chang et al. Oct 2013 A1
20130286846 Atlas et al. Oct 2013 A1
20130297611 Moritz et al. Nov 2013 A1
20130297770 Zhang Nov 2013 A1
20130301469 Suga Nov 2013 A1
20130301642 Radhakrishnan et al. Nov 2013 A1
20130308444 Sem-Jacobsen et al. Nov 2013 A1
20130315242 Wang et al. Nov 2013 A1
20130315243 Huang et al. Nov 2013 A1
20130329548 Nakil et al. Dec 2013 A1
20130329601 Yin et al. Dec 2013 A1
20130329734 Chesla et al. Dec 2013 A1
20130346470 Obstfeld et al. Dec 2013 A1
20140019604 Twitchell, Jr. Jan 2014 A1
20140019750 Dodgson et al. Jan 2014 A1
20140040975 Raleigh et al. Feb 2014 A1
20140064283 Balus et al. Mar 2014 A1
20140092907 Sridhar et al. Apr 2014 A1
20140108665 Arora et al. Apr 2014 A1
20140112171 Pasdar Apr 2014 A1
20140115584 Mudigonda et al. Apr 2014 A1
20140123135 Huang et al. May 2014 A1
20140126418 Brendel et al. May 2014 A1
20140156818 Hunt Jun 2014 A1
20140156823 Liu et al. Jun 2014 A1
20140164560 Ko et al. Jun 2014 A1
20140164617 Jalan et al. Jun 2014 A1
20140173113 Vemuri et al. Jun 2014 A1
20140173331 Martin et al. Jun 2014 A1
20140181824 Saund et al. Jun 2014 A1
20140208317 Nakagawa Jul 2014 A1
20140219135 Li et al. Aug 2014 A1
20140223507 Xu Aug 2014 A1
20140229210 Sharifian et al. Aug 2014 A1
20140244851 Lee Aug 2014 A1
20140258535 Zhang Sep 2014 A1
20140269690 Tu Sep 2014 A1
20140279862 Dietz et al. Sep 2014 A1
20140280499 Basavaiah et al. Sep 2014 A1
20140317440 Biermayr et al. Oct 2014 A1
20140337500 Lee Nov 2014 A1
20140341109 Cartmell et al. Nov 2014 A1
20140372582 Ghanwani et al. Dec 2014 A1
20150003240 Drwiega et al. Jan 2015 A1
20150016249 Mukundan et al. Jan 2015 A1
20150029864 Raileanu et al. Jan 2015 A1
20150046572 Cheng et al. Feb 2015 A1
20150052247 Threefoot et al. Feb 2015 A1
20150052517 Raghu et al. Feb 2015 A1
20150056960 Egner et al. Feb 2015 A1
20150058917 Xu Feb 2015 A1
20150088942 Shah Mar 2015 A1
20150089628 Lang Mar 2015 A1
20150092603 Aguayo et al. Apr 2015 A1
20150096011 Watt Apr 2015 A1
20150124603 Ketheesan et al. May 2015 A1
20150134777 Onoue May 2015 A1
20150139238 Pourzandi et al. May 2015 A1
20150146539 Mehta et al. May 2015 A1
20150163152 Li Jun 2015 A1
20150169340 Haddad et al. Jun 2015 A1
20150172121 Farkas et al. Jun 2015 A1
20150172169 DeCusatis et al. Jun 2015 A1
20150188823 Williams et al. Jul 2015 A1
20150189009 Bemmel Jul 2015 A1
20150195178 Bhattacharya et al. Jul 2015 A1
20150201036 Nishiki et al. Jul 2015 A1
20150222543 Song Aug 2015 A1
20150222638 Morley Aug 2015 A1
20150236945 Michael et al. Aug 2015 A1
20150236962 Veres et al. Aug 2015 A1
20150244617 Nakil et al. Aug 2015 A1
20150249644 Xu Sep 2015 A1
20150271056 Chunduri et al. Sep 2015 A1
20150271104 Chikkamath et al. Sep 2015 A1
20150271303 Neginhal et al. Sep 2015 A1
20150312142 Barabash et al. Oct 2015 A1
20150312760 O'Toole Oct 2015 A1
20150317169 Sinha et al. Nov 2015 A1
20150334025 Rader Nov 2015 A1
20150334696 Gu et al. Nov 2015 A1
20150341271 Gomez Nov 2015 A1
20150349978 Wu et al. Dec 2015 A1
20150350907 Timariu et al. Dec 2015 A1
20150363221 Terayama et al. Dec 2015 A1
20150363733 Brown Dec 2015 A1
20150372943 Hasan et al. Dec 2015 A1
20150372982 Herle et al. Dec 2015 A1
20150381407 Wang et al. Dec 2015 A1
20150381493 Bansal et al. Dec 2015 A1
20160035183 Buchholz et al. Feb 2016 A1
20160036924 Koppolu et al. Feb 2016 A1
20160036938 Aviles et al. Feb 2016 A1
20160037434 Gopal et al. Feb 2016 A1
20160072669 Saavedra Mar 2016 A1
20160072684 Manuguri et al. Mar 2016 A1
20160080502 Yadav et al. Mar 2016 A1
20160105353 Cociglio Apr 2016 A1
20160105392 Thakkar et al. Apr 2016 A1
20160105471 Nunes et al. Apr 2016 A1
20160105488 Thakkar et al. Apr 2016 A1
20160117185 Fang et al. Apr 2016 A1
20160134461 Sampath et al. May 2016 A1
20160134528 Lin et al. May 2016 A1
20160134591 Liao et al. May 2016 A1
20160142373 Ossipov May 2016 A1
20160150055 Choi May 2016 A1
20160164832 Bellagamba et al. Jun 2016 A1
20160164914 Madhav et al. Jun 2016 A1
20160173338 Wolting Jun 2016 A1
20160191363 Haraszti et al. Jun 2016 A1
20160191374 Singh et al. Jun 2016 A1
20160192403 Gupta et al. Jun 2016 A1
20160197834 Luft Jul 2016 A1
20160197835 Luft Jul 2016 A1
20160198003 Luft Jul 2016 A1
20160210209 Verkaik et al. Jul 2016 A1
20160218947 Hughes et al. Jul 2016 A1
20160218951 Masseur et al. Jul 2016 A1
20160255169 Kovvuri et al. Sep 2016 A1
20160261493 Li Sep 2016 A1
20160261495 Xia et al. Sep 2016 A1
20160261639 Xu Sep 2016 A1
20160269298 Li et al. Sep 2016 A1
20160269926 Sundaram Sep 2016 A1
20160285736 Gu Sep 2016 A1
20160308762 Teng et al. Oct 2016 A1
20160315912 Mayya et al. Oct 2016 A1
20160323377 Einkauf et al. Nov 2016 A1
20160328159 Coddington et al. Nov 2016 A1
20160330111 Manghirmalani et al. Nov 2016 A1
20160352588 Subbarayan et al. Dec 2016 A1
20160353268 Senarath et al. Dec 2016 A1
20160359738 Sullenberger et al. Dec 2016 A1
20160366187 Kamble Dec 2016 A1
20160371153 Dornemann Dec 2016 A1
20160380886 Blair et al. Dec 2016 A1
20160380906 Hodique et al. Dec 2016 A1
20170005986 Bansal et al. Jan 2017 A1
20170012870 Blair et al. Jan 2017 A1
20170019428 Cohn Jan 2017 A1
20170026283 Williams et al. Jan 2017 A1
20170026355 Mathaiyan et al. Jan 2017 A1
20170034046 Cai et al. Feb 2017 A1
20170034129 Sawant et al. Feb 2017 A1
20170048296 Ramalho Feb 2017 A1
20170053258 Carney et al. Feb 2017 A1
20170055131 Kong et al. Feb 2017 A1
20170063674 Maskalik et al. Mar 2017 A1
20170063782 Jain et al. Mar 2017 A1
20170063794 Jain et al. Mar 2017 A1
20170064005 Lee Mar 2017 A1
20170075710 Prasad et al. Mar 2017 A1
20170093625 Pera et al. Mar 2017 A1
20170097841 Chang et al. Apr 2017 A1
20170104653 Badea et al. Apr 2017 A1
20170104755 Arregoces et al. Apr 2017 A1
20170109212 Gaurav et al. Apr 2017 A1
20170118173 Arramreddy et al. Apr 2017 A1
20170123939 Maheshwari et al. May 2017 A1
20170126516 Tiagi et al. May 2017 A1
20170126564 Mayya et al. May 2017 A1
20170134186 Mukundan et al. May 2017 A1
20170134520 Abbasi et al. May 2017 A1
20170139789 Fries et al. May 2017 A1
20170155557 Desai et al. Jun 2017 A1
20170163473 Sadana et al. Jun 2017 A1
20170171310 Gardner Jun 2017 A1
20170181210 Nadella et al. Jun 2017 A1
20170195161 Ruel et al. Jul 2017 A1
20170195169 Mills et al. Jul 2017 A1
20170201585 Doraiswamy et al. Jul 2017 A1
20170207976 Rovner et al. Jul 2017 A1
20170214545 Cheng et al. Jul 2017 A1
20170214701 Hasan Jul 2017 A1
20170223117 Messerli et al. Aug 2017 A1
20170237710 Mayya et al. Aug 2017 A1
20170257260 Govindan et al. Sep 2017 A1
20170257309 Appanna Sep 2017 A1
20170264496 Ao et al. Sep 2017 A1
20170279717 Bethers et al. Sep 2017 A1
20170279803 Desai et al. Sep 2017 A1
20170280474 Vesterinen et al. Sep 2017 A1
20170288987 Pasupathy et al. Oct 2017 A1
20170289002 Ganguli et al. Oct 2017 A1
20170295264 Touitou Oct 2017 A1
20170302565 Ghobadi et al. Oct 2017 A1
20170310641 Jiang et al. Oct 2017 A1
20170310691 Vasseur et al. Oct 2017 A1
20170317974 Masurekar et al. Nov 2017 A1
20170337086 Zhu et al. Nov 2017 A1
20170339054 Yadav et al. Nov 2017 A1
20170339070 Chang et al. Nov 2017 A1
20170364419 Lo Dec 2017 A1
20170366445 Nemirovsky et al. Dec 2017 A1
20170366467 Martin et al. Dec 2017 A1
20170374174 Evens et al. Dec 2017 A1
20180006995 Bickhart et al. Jan 2018 A1
20180007123 Cheng et al. Jan 2018 A1
20180013636 Seetharamaiah et al. Jan 2018 A1
20180014051 Phillips et al. Jan 2018 A1
20180020035 Boggia et al. Jan 2018 A1
20180034668 Mayya et al. Feb 2018 A1
20180041425 Zhang Feb 2018 A1
20180062914 Boutros et al. Mar 2018 A1
20180062917 Chandrashekhar et al. Mar 2018 A1
20180063036 Chandrashekhar et al. Mar 2018 A1
20180063193 Chandrashekhar et al. Mar 2018 A1
20180063233 Park Mar 2018 A1
20180069924 Tumuluru et al. Mar 2018 A1
20180074909 Bishop et al. Mar 2018 A1
20180077081 Lauer et al. Mar 2018 A1
20180077202 Xu Mar 2018 A1
20180084081 Kuchibhotla et al. Mar 2018 A1
20180097725 Wood et al. Apr 2018 A1
20180114569 Strachan et al. Apr 2018 A1
20180131608 Jiang et al. May 2018 A1
20180131615 Zhang May 2018 A1
20180131720 Hobson et al. May 2018 A1
20180145899 Rao May 2018 A1
20180159796 Wang et al. Jun 2018 A1
20180159856 Gujarathi Jun 2018 A1
20180167378 Kostyukov et al. Jun 2018 A1
20180176073 Dubey et al. Jun 2018 A1
20180176082 Katz et al. Jun 2018 A1
20180176130 Banerjee et al. Jun 2018 A1
20180213472 Ishii et al. Jul 2018 A1
20180219765 Michael et al. Aug 2018 A1
20180219766 Michael et al. Aug 2018 A1
20180234300 Mayya et al. Aug 2018 A1
20180260125 Botes et al. Sep 2018 A1
20180262468 Kumar et al. Sep 2018 A1
20180270104 Zheng et al. Sep 2018 A1
20180278541 Wu et al. Sep 2018 A1
20180295101 Gehrmann Oct 2018 A1
20180295529 Jen et al. Oct 2018 A1
20180302286 Mayya et al. Oct 2018 A1
20180302321 Manthiramoorthy et al. Oct 2018 A1
20180307851 Lewis Oct 2018 A1
20180316606 Sung et al. Nov 2018 A1
20180351855 Sood et al. Dec 2018 A1
20180351862 Jeganathan et al. Dec 2018 A1
20180351863 Vairavakkalai et al. Dec 2018 A1
20180351882 Jeganathan et al. Dec 2018 A1
20180373558 Chang et al. Dec 2018 A1
20180375744 Mayya et al. Dec 2018 A1
20180375824 Mayya et al. Dec 2018 A1
20180375967 Pithawala et al. Dec 2018 A1
20190013883 Vargas et al. Jan 2019 A1
20190014038 Ritchie Jan 2019 A1
20190020588 Twitchell, Jr. Jan 2019 A1
20190020627 Yuan Jan 2019 A1
20190028552 Johnson et al. Jan 2019 A1
20190036810 Michael et al. Jan 2019 A1
20190046056 Khachaturian et al. Feb 2019 A1
20190058657 Chunduri et al. Feb 2019 A1
20190058709 Kempf et al. Feb 2019 A1
20190068470 Mirsky Feb 2019 A1
20190068493 Ram et al. Feb 2019 A1
20190068500 Hira Feb 2019 A1
20190075083 Mayya et al. Mar 2019 A1
20190103990 Cidon et al. Apr 2019 A1
20190103991 Cidon et al. Apr 2019 A1
20190103992 Cidon et al. Apr 2019 A1
20190103993 Cidon et al. Apr 2019 A1
20190104035 Cidon et al. Apr 2019 A1
20190104049 Cidon et al. Apr 2019 A1
20190104050 Cidon et al. Apr 2019 A1
20190104051 Cidon et al. Apr 2019 A1
20190104052 Cidon et al. Apr 2019 A1
20190104053 Cidon et al. Apr 2019 A1
20190104063 Cidon et al. Apr 2019 A1
20190104064 Cidon et al. Apr 2019 A1
20190104109 Cidon et al. Apr 2019 A1
20190104111 Cidon et al. Apr 2019 A1
20190104413 Cidon et al. Apr 2019 A1
20190109769 Jain et al. Apr 2019 A1
20190140889 Mayya et al. May 2019 A1
20190140890 Mayya et al. May 2019 A1
20190158371 Dillon et al. May 2019 A1
20190158605 Markuze et al. May 2019 A1
20190199539 Deng et al. Jun 2019 A1
20190220703 Prakash et al. Jul 2019 A1
20190238364 Boutros et al. Aug 2019 A1
20190238446 Barzik et al. Aug 2019 A1
20190238449 Michael et al. Aug 2019 A1
20190238450 Michael et al. Aug 2019 A1
20190238483 Marichetty et al. Aug 2019 A1
20190268421 Markuze et al. Aug 2019 A1
20190268973 Bull et al. Aug 2019 A1
20190280962 Michael et al. Sep 2019 A1
20190280963 Michael et al. Sep 2019 A1
20190280964 Michael et al. Sep 2019 A1
20190306197 Degioanni Oct 2019 A1
20190313907 Khachaturian et al. Oct 2019 A1
20190319847 Nahar et al. Oct 2019 A1
20190334813 Raj et al. Oct 2019 A1
20190342219 Liu et al. Nov 2019 A1
20190356736 Narayanaswamy et al. Nov 2019 A1
20190364099 Thakkar et al. Nov 2019 A1
20190372888 Michael et al. Dec 2019 A1
20190372889 Michael et al. Dec 2019 A1
20190372890 Michael et al. Dec 2019 A1
20200014615 Michael et al. Jan 2020 A1
20200014616 Michael et al. Jan 2020 A1
20200014661 Mayya et al. Jan 2020 A1
20200021514 Michael et al. Jan 2020 A1
20200021515 Michael et al. Jan 2020 A1
20200036624 Michael et al. Jan 2020 A1
20200059420 Abraham Feb 2020 A1
20200059459 Abraham et al. Feb 2020 A1
20200092207 Sipra et al. Mar 2020 A1
20200097327 Beyer et al. Mar 2020 A1
20200099659 Cometto et al. Mar 2020 A1
20200106696 Michael et al. Apr 2020 A1
20200106706 Mayya et al. Apr 2020 A1
20200119952 Mayya et al. Apr 2020 A1
20200127905 Mayya et al. Apr 2020 A1
20200127911 Gilson et al. Apr 2020 A1
20200153736 Liebherr et al. May 2020 A1
20200169473 Rimar et al. May 2020 A1
20200177503 Hooda et al. Jun 2020 A1
20200204460 Schneider et al. Jun 2020 A1
20200213212 Dillon et al. Jul 2020 A1
20200213224 Cheng et al. Jul 2020 A1
20200218558 Sreenath et al. Jul 2020 A1
20200235990 Janakiraman et al. Jul 2020 A1
20200235999 Mayya et al. Jul 2020 A1
20200236046 Jain et al. Jul 2020 A1
20200244721 S et al. Jul 2020 A1
20200252234 Ramamoorthi et al. Aug 2020 A1
20200259700 Bhalla et al. Aug 2020 A1
20200267184 Vera-Schockner Aug 2020 A1
20200280587 Janakiraman et al. Sep 2020 A1
20200287819 Theogaraj et al. Sep 2020 A1
20200287976 Theogaraj et al. Sep 2020 A1
20200296011 Jain et al. Sep 2020 A1
20200296026 Michael et al. Sep 2020 A1
20200314006 Mackie et al. Oct 2020 A1
20200314614 Moustafa et al. Oct 2020 A1
20200322287 Connor et al. Oct 2020 A1
20200336336 Sethi et al. Oct 2020 A1
20200344143 Faseela et al. Oct 2020 A1
20200344163 Gupta et al. Oct 2020 A1
20200351188 Arora et al. Nov 2020 A1
20200366530 Mukundan et al. Nov 2020 A1
20200366562 Mayya et al. Nov 2020 A1
20200382345 Zhao et al. Dec 2020 A1
20200382387 Pasupathy et al. Dec 2020 A1
20200412576 Kondapavuluru et al. Dec 2020 A1
20200413283 Shen et al. Dec 2020 A1
20210006482 Hwang et al. Jan 2021 A1
20210006490 Michael et al. Jan 2021 A1
20210029088 Mayya et al. Jan 2021 A1
20210036888 Makkalla et al. Feb 2021 A1
20210036987 Mishra et al. Feb 2021 A1
20210067372 Cidon et al. Mar 2021 A1
20210067373 Cidon et al. Mar 2021 A1
20210067374 Cidon et al. Mar 2021 A1
20210067375 Cidon et al. Mar 2021 A1
20210067407 Cidon et al. Mar 2021 A1
20210067427 Cidon et al. Mar 2021 A1
20210067442 Sundararajan et al. Mar 2021 A1
20210067461 Cidon et al. Mar 2021 A1
20210067464 Cidon et al. Mar 2021 A1
20210067467 Cidon et al. Mar 2021 A1
20210067468 Cidon et al. Mar 2021 A1
20210105199 H et al. Apr 2021 A1
20210112034 Sundararajan et al. Apr 2021 A1
20210126853 Ramaswamy et al. Apr 2021 A1
20210126854 Guo et al. Apr 2021 A1
20210126860 Ramaswamy et al. Apr 2021 A1
20210144091 H et al. May 2021 A1
20210160813 Gupta et al. May 2021 A1
20210184952 Mayya et al. Jun 2021 A1
20210184966 Ramaswamy et al. Jun 2021 A1
20210184983 Ramaswamy et al. Jun 2021 A1
20210194814 Roux et al. Jun 2021 A1
20210226880 Ramamoorthy et al. Jul 2021 A1
20210234728 Cidon et al. Jul 2021 A1
20210234775 Devadoss et al. Jul 2021 A1
20210234786 Devadoss et al. Jul 2021 A1
20210234804 Devadoss et al. Jul 2021 A1
20210234805 Devadoss et al. Jul 2021 A1
20210235312 Devadoss et al. Jul 2021 A1
20210235313 Devadoss et al. Jul 2021 A1
20210266262 Subramanian et al. Aug 2021 A1
20210279069 Salgaonkar Sep 2021 A1
20210328835 Mayya et al. Oct 2021 A1
20210377156 Michael et al. Dec 2021 A1
20210392060 Silva et al. Dec 2021 A1
20210392070 Tootaghaj et al. Dec 2021 A1
20210399920 Sundararajan et al. Dec 2021 A1
Foreign Referenced Citations (18)
Number Date Country
104956329 Sep 2015 CN
1912381 Apr 2008 EP
3041178 Jul 2016 EP
3509256 Jul 2019 EP
2574350 Feb 2016 RU
03073701 Sep 2003 WO
2012167184 Dec 2012 WO
2016061546 Apr 2016 WO
2017083975 May 2017 WO
2019070611 Apr 2019 WO
2019094522 May 2019 WO
2020012491 Jan 2020 WO
2020018704 Jan 2020 WO
2020101922 May 2020 WO
2020112345 Jun 2020 WO
2021040934 Mar 2021 WO
2021118717 Jun 2021 WO
2021150465 Jul 2021 WO
Non-Patent Literature Citations (52)
Entry
Non-Published Commonly Owned U.S. Appl. No. 17/233,427, filed Apr. 16, 2021, 124 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/361,292, filed Jun. 28, 2021, 35 pages, Nicira, Inc.
Sarhan, Soliman Abd Elmonsef, et al., “Data Inspection in SDN Network,” 2018 13th International Conference on Computer Engineering and Systems (ICCES), Dec. 18-19, 2018, 6 pages, IEEE, Cairo, Egypt.
Xie, Junfeng, et al., "A Survey of Machine Learning Techniques Applied to Software Defined Networking (SDN): Research Issues and Challenges," IEEE Communications Surveys & Tutorials, Aug. 23, 2018, 38 pages, vol. 21, Issue 1, IEEE.
Huang, Cancan, et al., “Modification of Q.SD-WAN,” Rapporteur Group Meeting—Doc, Study Period 2017-2020, Q4/11-DOC1 (190410), Study Group 11, Apr. 10, 2019, 19 pages, International Telecommunication Union, Geneva, Switzerland.
Non-published Commonly Owned U.S. Appl. No. 17/187,913, filed Mar. 1, 2021, 27 pages, Nicira, Inc.
Del Piccolo, Valentin, et al., “A Survey of Network Isolation Solutions for Multi-Tenant Data Centers,” IEEE Communications Society, Apr. 20, 2016, vol. 18, No. 4, 37 pages, IEEE.
Fortz, Bernard, et al., "Internet Traffic Engineering by Optimizing OSPF Weights," Proceedings IEEE INFOCOM 2000, Conference on Computer Communications, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Mar. 26-30, 2000, 11 pages, IEEE, Tel Aviv, Israel.
Francois, Frederic, et al., “Optimizing Secure SDN-enabled Inter-Data Centre Overlay Networks through Cognitive Routing,” 2016 IEEE 24th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), Sep. 19-21, 2016, 10 pages, IEEE, London, UK.
Michael, Nithin, et al., “HALO: Hop-by-Hop Adaptive Link-State Optimal Routing,” IEEE/ACM Transactions on Networking, Dec. 2015, 14 pages, vol. 23, No. 6, IEEE.
Mishra, Mayank, et al., “Managing Network Reservation for Tenants in Oversubscribed Clouds,” 2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Aug. 14-16, 2013, 10 pages, IEEE, San Francisco, CA, USA.
Mudigonda, Jayaram, et al., “NetLord: A Scalable Multi-Tenant Network Architecture for Virtualized Datacenters,” Proceedings of the ACM SIGCOMM 2011 Conference, Aug. 15-19, 2011, 12 pages, ACM, Toronto, Canada.
Non-Published Commonly Owned U.S. Appl. No. 16/662,363, filed Oct. 24, 2019, 129 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,379, filed Oct. 24, 2019, 123 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,402, filed Oct. 24, 2019, 128 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,427, filed Oct. 24, 2019, 165 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,489, filed Oct. 24, 2019, 165 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,510, filed Oct. 24, 2019, 165 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,531, filed Oct. 24, 2019, 135 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,570, filed Oct. 24, 2019, 141 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,587, filed Oct. 24, 2019, 145 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,591, filed Oct. 24, 2019, 130 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/721,964, filed Dec. 20, 2019, 39 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/721,965, filed Dec. 20, 2019, 39 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/792,908, filed Feb. 18, 2020, 48 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/792,909, filed Feb. 18, 2020, 49 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,294, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,301, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,308, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,314, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,323, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,397, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/068,603, filed Oct. 12, 2020, 37 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/072,764, filed Oct. 16, 2020, 33 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/072,774, filed Oct. 16, 2020, 34 pages, VMware, Inc.
Non-Published Commonly Owned Related U.S. Appl. No. 17/085,893 with similar specification, filed Oct. 30, 2020, 34 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 15/803,964, filed Nov. 6, 2017, 15 pages, The Mode Group.
Non-Published Commonly Owned U.S. Appl. No. 16/216,235, filed Dec. 11, 2018, 19 pages, The Mode Group.
Non-Published Commonly Owned U.S. Appl. No. 16/818,862, filed Mar. 13, 2020, 198 pages, The Mode Group.
Ray, Saikat, et al., “Always Acyclic Distributed Path Computation,” University of Pennsylvania Department of Electrical and Systems Engineering Technical Report, May 2008, 16 pages, University of Pennsylvania ScholarlyCommons.
Webb, Kevin C., et al., “Blender: Upgrading Tenant-Based Data Center Networking,” 2014 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Oct. 20-21, 2014, 11 pages, IEEE, Marina del Rey, CA, USA.
Yap, Kok-Kiong, et al., “Taking the Edge off with Espresso: Scale, Reliability and Programmability for Global Internet Peering,” SIGCOMM '17: Proceedings of the Conference of the ACM Special Interest Group on Data Communication, Aug. 21-25, 2017, 14 pages, Los Angeles, CA.
Guo, Xiangyi, et al., U.S. Appl. No. 62/925,193, filed Oct. 23, 2019, 26 pages.
Lasserre, Marc, et al., “Framework for Data Center (DC) Network Virtualization,” RFC 7365, Oct. 2014, 26 pages, IETF.
Lin, Weidong, et al., “Using Path Label Routing in Wide Area Software-Defined Networks with Open Flow,” 2016 International Conference on Networking and Network Applications, Jul. 2016, 6 pages, IEEE.
Non-Published Commonly Owned U.S. Appl. No. 17/467,378, filed Sep. 6, 2021, 157 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/474,034, filed Sep. 13, 2021, 349 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/542,413, filed Dec. 4, 2021, 173 pages, VMware, Inc.
Cox, Jacob H., et al., "Advancing Software-Defined Networks: A Survey," IEEE Access, Oct. 12, 2017, 40 pages, vol. 5, IEEE, retrieved from https://ieeexplore.ieee.org/document/8066287.
Ming, Gao, et al., “A Design of SD-WAN-Oriented Wide Area Network Access,” 2020 International Conference on Computer Communication and Network Security (CCNS), Aug. 21-23, 2020, 4 pages, IEEE, Xi'an, China.
Barozet, Jean-Marc, "Cisco SD-WAN as a Managed Service," BRKRST-2558, Jan. 27-31, 2020, 98 pages, Cisco, Barcelona, Spain, retrieved from https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2020/pdf/BRKRST-2558.pdf.
Barozet, Jean-Marc, "Cisco SDWAN," Deep Dive, Dec. 2017, 185 pages, Cisco, retrieved from https://www.coursehero.com/file/71671376/Cisco-SDWAN-Deep-Divepdf/.
Related Publications (1)
Number Date Country
20220038557 A1 Feb 2022 US
Provisional Applications (1)
Number Date Country
63059113 Jul 2020 US