In recent years, there has been an increase in the use of hardware offload units to assist functions performed by programs executing on host computers. Examples of such hardware offload units include FPGAs, GPUs, smart NICs, etc. Such hardware offload units have improved the performance and efficiency of host computers by offloading some of the operations that are typically performed by the host computer's CPU to the hardware offload unit.
Some embodiments of the invention provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is used not only to access the set of external storages but also to forward packets not related to the set of external storages or the distributed storage service. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set.
The method in some embodiments presents the external storage as a local storage of the host computer to a set of one or more programs executing on the host computer. In some embodiments, the local storage is a virtual disk, while the set of programs are a set of machines (e.g., virtual machines or containers) executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer (e.g., a virtual disk layer) on the NIC to create a local storage construct. In some embodiments, the emulated local storage (e.g., the virtual disk) does not represent any storage on the NIC, while in other embodiments, the emulated local storage also represents one or more storages on the NIC.
The method forwards read/write (R/W) requests to the set of external storages when receiving R/W requests from the set of programs to the virtual disk, and provides responses to the R/W requests after receiving responses from the set of external storages to the forwarded read/write requests. In some embodiments, the method translates the R/W requests from a first format for the local storage to a second format for the set of external storages before forwarding the requests to the external storage through the network fabric storage driver. The method also translates responses to these requests from the second format to the first format before providing the responses to a NIC interface of the host computer in order to provide these responses to the set of programs.
In some embodiments, the NIC interface is a PCIe (peripheral component interconnect express) interface, and the first format is an NVMe (non-volatile memory express) format. The second format in some of these embodiments is an NVMeOF (NVMe over fabric) format and the network fabric storage driver is an NVMeOF driver. In other embodiments, the second format is a remote DSAN (distributed storage area network) format and the network fabric storage driver is a remote DSAN driver. The NIC in some embodiments includes a general purpose central processing unit (CPU) and a memory that stores a program (e.g., a NIC operating system) for execution by the CPU to access the set of external storages and to present the set of external storages as a local storage. In some embodiments, the NIC also includes an application-specific integrated circuit (ASIC), which processes packets forwarded to and from the host computer, with at least a portion of this processing including the translation of the R/W requests and responses to these requests. The ASIC in some embodiments is a hardware offload unit of the NIC.
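The format translation described above can be sketched as follows. This is a minimal, hypothetical illustration of rewriting an R/W request from a local (NVMe-style) first format into a fabric (NVMeOF-style) second format and translating the response back; all field names and the target address are illustrative, not from any real NVMe implementation.

```python
# Hypothetical sketch: translate a local-format R/W request into a
# fabric-format request before handing it to the network fabric storage
# driver, and translate the response back for the NIC interface.
def translate_request(nvme_request, fabric_target):
    """First format (local NVMe) -> second format (NVMe over fabric)."""
    return {
        "transport": "nvme-of",           # second format marker
        "target": fabric_target,          # external storage endpoint (assumed)
        "opcode": nvme_request["opcode"], # read or write, carried through
        "lba": nvme_request["lba"],
        "length": nvme_request["length"],
    }

def translate_response(fabric_response):
    """Second format (fabric) -> first format for the NIC interface."""
    return {
        "transport": "nvme-pcie",
        "status": fabric_response["status"],
        "data": fabric_response.get("data"),
    }

req = {"opcode": "read", "lba": 4096, "length": 512}
fabric_req = translate_request(req, "192.0.2.10:4420")
local_resp = translate_response({"status": 0, "data": b"\x00" * 512})
```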
In addition to providing an emulation layer that creates and presents an emulated local storage to the set of programs on the host, the method of some embodiments has the NIC execute a DSAN service for the local storage to improve its operation and provide additional features for this storage. One example of a DSAN service is the vSAN service offered by VMware, Inc. The features of the DSAN service in some embodiments include (1) data efficiency processes, such as deduplication operations, compression operations, and thin provisioning, (2) security processes, such as end-to-end encryption, and access control operations, (3) data and life cycle management, such as storage vMotion, snapshot operations, snapshot schedules, cloning, disaster recovery, backup, long term storage, (4) performance optimizing operations, such as QoS policies (e.g., max and/or min I/O regulating policies), and (5) analytic operations, such as collecting performance metrics and usage data for virtual disk (I/O, latency, etc.).
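The data efficiency operations in category (1) above can be illustrated as stages chained over a write request. The sketch below is an assumption-laden stand-in, not any real vSAN implementation: zlib stands in for a real compression service, and a content hash stands in for a real deduplication store.

```python
import hashlib
import zlib

def deduplicate(store, block):
    """Store the block only if its content is new; return its content key."""
    key = hashlib.sha256(block).hexdigest()
    if key not in store:
        store[key] = block
    return key

def compress(block):
    # Placeholder for a real DSAN compression operation.
    return zlib.compress(block)

def dsan_write_path(store, payload):
    """Illustrative DSAN write path: compress, then deduplicate."""
    return deduplicate(store, compress(payload))

store = {}
k1 = dsan_write_path(store, b"A" * 4096)
k2 = dsan_write_path(store, b"A" * 4096)  # duplicate write adds nothing new
```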
These services are highly advantageous for improving performance, resiliency and security of the host's storage access that is facilitated through the NIC. For instance, the set of host programs that access the emulated local storage have no insight that data is being accessed on remote storages through network communications. Neither these programs nor other programs executing on the host in some embodiments encrypt their storage access, as the storage being accessed appears to be local to these programs. Hence, it is highly beneficial to use the DSAN services for the R/W requests and responses (e.g., its security processes to encrypt the R/W requests and responses) exchanged between the host and the set of external storages that are made to appear as the local storage.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, the Drawings and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings.
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments of the invention provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is used not only to access the set of external storages but also to forward packets not related to the set of external storages or the distributed storage service. The NICs are sometimes referred to herein as smart NICs as they perform multiple types of services and operations. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol (e.g., NVMeOF) to access the external storage set.
The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the local storage is a virtual disk, while the set of programs are a set of machines (e.g., virtual machines or containers) executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer (e.g., a virtual disk layer) to create a local storage construct that presents the set of external storages as a local storage of the host computer. In some embodiments, the emulated local storage (e.g., the virtual disk) does not represent any storage on the NIC, while in other embodiments, the emulated local storage also represents one or more storages on the NIC.
The method forwards read/write (R/W) requests to the set of external storages when receiving R/W requests from the set of programs to the virtual disk, and provides responses to the R/W requests after receiving responses from the set of external storages to the forwarded read/write requests. In some embodiments, the method translates the R/W requests from a first format for the local storage to a second format for the set of external storages before forwarding the requests to the external storage through the network fabric storage driver. The method also translates responses to these requests from the second format to the first format before providing the responses to a NIC interface of the host computer in order to provide these responses to the set of programs.
In some embodiments, the NIC interface is a PCIe interface, and the first format is an NVMe format. The second format in some of these embodiments is an NVMeOF format and the network fabric storage driver is an NVMeOF driver. The NIC in some embodiments includes a general purpose central processing unit (CPU) and a memory that stores a program (e.g., a NIC operating system) for execution by the CPU to access the set of external storages and to present the set of external storages as a local storage. The NIC in some embodiments is implemented as a system on chip (SoC) with multiple other circuit components. For instance, in some embodiments, the NIC also includes an application-specific integrated circuit (ASIC), which processes packets forwarded to and from the host computer, with at least a portion of this processing including the translation of the R/W requests and responses to these requests. This ASIC in some embodiments is a hardware offload unit (HOU) of the NIC, and performs special operations (e.g., packet processing operations, response/request reformatting operations, etc.).
In addition to providing an emulation layer that creates and presents an emulated local storage to the set of programs on the host, the method of some embodiments has the NIC execute a distributed storage area network (DSAN) service for the local storage to improve its operation and provide additional features for this storage. One example of a DSAN service is the vSAN service offered by VMware, Inc.
The DSAN services are highly advantageous for improving performance, resiliency and security of the host's storage access that is facilitated through the NIC. For instance, the set of host programs that accesses the emulated local storage has no insight that data is being accessed on remote storages through network communications. Neither these programs nor other programs executing on the host in some embodiments encrypt their storage access, as the storage being accessed appears to be local to these programs. Hence, it is highly beneficial to use the DSAN services for the R/W requests and responses (e.g., its security processes to encrypt the R/W requests and responses) exchanged between the host and the set of external storages that are made to appear as the local storage.
Although the description of some embodiments refers to emulations of NVMe storage and the NVMe storage protocol, in other embodiments other storage protocols may be emulated instead of, or in addition to, NVMe. Similarly, although the description refers to PCIe buses, in other embodiments, other system buses are used instead of or in addition to a PCIe bus. Although certain drivers and protocols are shown as being used by external storages in various embodiments, other embodiments use other drivers or protocols for external storage. The smart NICs described herein are described as having operating software. In some embodiments, this operating software is an operating system that has direct control over the smart NIC without an intervening program or hypervisor. In other embodiments, the operating software is a hypervisor that runs on top of another operating system of the smart NIC. Still other embodiments use just a hypervisor and no other operating system on the smart NIC.
The smart NIC in some embodiments is a system on chip (SoC) with a CPU, FPGA, memory, IO controller, a physical NIC, and other hardware components. The smart NIC has an operating system (OS) 120 that includes an NVMe driver 122 and a series of storage processing layers 124-127. The discussion below collectively refers to the software executing on the smart NIC as the smart NIC OS 120. However, in some embodiments, the smart NIC OS is a hypervisor, while in other embodiments a hypervisor executes on top of the smart NIC OS and some or all of the storage processing layers are part of this hypervisor. In the discussion below, the components that are attributed to the smart NIC OS 120 are components of the hypervisor 114 that serves as the smart NIC OS or executes on top of the smart NIC OS in some embodiments. In other embodiments, these are components of a smart NIC OS that is not a hypervisor. In still other embodiments, some of these components belong to the smart NIC OS, while other components belong to the hypervisor executing on the smart NIC OS.
The NVMe driver 122 is a driver for the PCIe bus 150. This driver relays NVMe formatted R/W requests from the host hypervisor 114 to the storage processing layers, and relays responses to these requests from the storage processing layers to the host hypervisor 114. The storage processing layers include an NVMeOF driver 124, a core storage service 125, a DSAN service 126, and a virtual device service 127. The virtual device service includes an NVMe emulator 128.
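The layering just described can be sketched as a chain in which each storage processing layer hands the request to the layer below it, with responses returning along the same path. The sketch below is illustrative only; the layer names mirror the reference numerals in the text, and the ordering shown is one plausible arrangement, not a definitive one.

```python
# Illustrative sketch of the storage processing layer stack: the virtual
# device service (127) sits on top, and the NVMeOF driver (124) at the
# bottom forwards to external storage. Behavior is stubbed.
class Layer:
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower  # next layer down the stack, or None

    def handle(self, request, trace):
        trace.append(self.name)  # record the traversal order
        if self.lower is None:
            # Bottom of the stack: stand-in for forwarding to external storage.
            return {"status": "ok", "request": request}
        return self.lower.handle(request, trace)

nvmeof_driver = Layer("NVMeOF driver 124")
core_storage = Layer("core storage service 125", nvmeof_driver)
dsan_service = Layer("DSAN service 126", core_storage)
virtual_device = Layer("virtual device service 127", dsan_service)
```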
The smart NIC OS 120 uses the NVMeOF driver 124 in some embodiments to access one or more external storages 140. Specifically, the smart NIC OS 120 emulates a local NVMe storage 160 to represent several external storages 140 to the machines (e.g., VM 112) executing on the host. From the host point of view, the VM 112 operates on the emulated local storage 160 as if it were a local NVMe storage connected through the PCIe bus 150.
To access the external storages 140, the smart NIC (e.g., the NVMeOF driver) uses one or more of its shared ports 130. The shared ports are not only used to access the external storages 140, but are also used for other purposes (e.g., to forward packets to and from destinations other than the external storages). The NVMeOF driver 124 handles the NVMeOF protocols needed for communicating with the external storages 140 through network fabric (e.g., through routers).
The smart NICs illustrated in
The core storage service 125 provides one or more core storage operations. One example of such operations is a set of adapter services that allow the smart NIC to emulate one or more storage adapters, with each adapter logically connecting to one or more external storages 140 and facilitating a different communication mechanism (e.g., transport mechanism) for communicating with the external storages.
Through this interface, an administrator in some embodiments can specify one or more adapters to use to access an external storage, or a set of two or more external storages. In some embodiments, more than one adapter is specified for an external storage when the administrator wants to specify a multipath pluggable storage architecture (PSA) approach to accessing the storage. Once the administrator specifies an adapter, a network manager that provides the interface sends the definition of the specified adapter to a network controller, which then configures the smart NIC to implement and configure a new driver, or reconfigure an existing driver, to access the external storage according to the adapter's specified definition. Different methods for configuring a smart NIC in some embodiments are described below.
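The configuration flow just described can be sketched as follows: an administrator-specified adapter definition travels from the network manager to the network controller, which configures a new driver (or reconfigures an existing one) on the smart NIC. All class and field names below are hypothetical illustrations of this flow, not any real management API.

```python
# Hedged sketch of the adapter-configuration flow: manager -> controller
# -> smart NIC driver configuration.
class SmartNic:
    def __init__(self):
        self.drivers = {}  # adapter name -> driver configuration

    def configure_driver(self, adapter):
        # Implements a new driver, or reconfigures an existing one,
        # according to the adapter's specified definition.
        self.drivers[adapter["name"]] = adapter

class NetworkController:
    def __init__(self, nic):
        self.nic = nic

    def apply(self, adapter):
        self.nic.configure_driver(adapter)

class NetworkManager:
    """Provides the administrator-facing interface."""
    def __init__(self, controller):
        self.controller = controller

    def define_adapter(self, name, transport, targets):
        adapter = {"name": name, "transport": transport, "targets": targets}
        self.controller.apply(adapter)  # push the definition down
        return adapter

nic = SmartNic()
manager = NetworkManager(NetworkController(nic))
manager.define_adapter("adapter1", "nvme-tcp", ["192.0.2.10"])
```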
The DSAN service 126 provides one or more DSAN operations to improve the operation of the emulated local storage 160 and provide additional features for this storage. These operations are needed because the emulated local storage is not really local but rather an emulation of one or more external storages. As such, the DSAN service 126 addresses one or more issues that can arise in accessing such a virtual "local" storage.
For instance, in some embodiments, the DSAN service provides data resiliency and I/O control that are not generally needed when a host machine is accessing a physical local storage over NVMe. A local drive is not subject to interception over a network and is not prone to packet duplication in the manner of packets sent over a network. These issues arise from emulating the local storage using external storage accessed over a network; the DSAN layer 126 therefore resolves such issues before the data is presented to the higher layers.
In some embodiments, the DSAN operations include (1) data efficiency processes, such as deduplication operations, compression operations, and thin provisioning, (2) security processes, such as end-to-end encryption, and access control operations, (3) data and life cycle management, such as storage vMotion, snapshot operations, snapshot schedules, cloning, disaster recovery, backup, long term storage, (4) performance optimizing operations, such as QoS policies (e.g., max and/or min I/O regulating policies), and (5) analytic operations, such as collecting performance metrics and usage data for virtual disk (I/O, latency, etc.).
One example of a DSAN service 126 is the vSAN service offered by VMware, Inc. In some such embodiments, the smart NIC includes a local physical storage that can serve as a vSAN storage node. In other embodiments, the smart NIC does not have a local physical storage, or has such a storage but this data storage cannot participate as a vSAN storage node. In such embodiments, the smart NIC serves as a remote vSAN client node, and its vSAN layer is a vSAN proxy that uses one or more remote vSAN nodes that perform some or all of the vSAN operations and then direct the vSAN proxy what to do.
In other embodiments, the DSAN service of the smart NIC does not use a remote vSAN client protocol to communicate with the other vSAN nodes. For instance, as shown in
The virtual device service 127 has an NVMe emulator 128 that emulates the local NVMe storage 160 to represent the set of external storages 140 that are accessed through the NVMeOF driver 124 and the intervening network. As part of this emulation, the virtual device layer 127 maps outgoing NVMe access commands to external storage access commands, and incoming external storage responses to NVMe responses. When multiple external storages are used, this mapping involves mapping between a storage location in the emulated local storage 160 and a storage location in one or more external storages 140. One example of a virtual device emulator that can be used for the NVMe emulator is the virtual device emulator of the vSphere software of VMware, Inc.
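The location mapping mentioned above can be sketched as a translation from a logical block address (LBA) on the emulated local disk to a pair of (external storage, LBA on that storage). The simple round-robin striping scheme below is purely illustrative; real emulators may use very different layouts.

```python
# Illustrative LBA mapping: the emulated disk is divided into fixed-size
# extents that are striped round-robin across the external storages.
def map_lba(lba, extent_blocks, externals):
    """Map an emulated-disk LBA to (external storage id, LBA on it)."""
    extent = lba // extent_blocks               # which extent the LBA falls in
    storage = externals[extent % len(externals)]  # round-robin placement
    # LBA within the chosen external storage: how many earlier extents
    # landed on this storage, times the extent size, plus the offset.
    local_lba = (extent // len(externals)) * extent_blocks + lba % extent_blocks
    return storage, local_lba
```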
Part of the NVMe emulator's operation also involves this emulator using the hardware offload unit (e.g., an ASIC) of the smart NIC to convert the NVMe access commands from an NVMe-PCIe format to an NVMe format, and to convert the external storage responses received at the emulator 128 from the NVMe format to an NVMe-PCIe format (e.g., to remove PCIe header information from outgoing commands, and to add PCIe header information to incoming responses). This is further described below by reference to
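The header conversion the hardware offload unit performs is sketched below as a byte-level strip/add of a fixed-size PCIe-style header around an NVMe command. The fixed header length is a hypothetical simplification; real PCIe transaction layer packet layouts differ.

```python
# Hypothetical sketch of the NVMe-PCIe <-> NVMe conversion: remove PCIe
# header information from outgoing commands, add it to incoming responses.
PCIE_HEADER_LEN = 16  # illustrative fixed size, not a real TLP header

def strip_pcie_header(pcie_frame):
    """NVMe-PCIe format -> NVMe format (outgoing commands)."""
    return pcie_frame[PCIE_HEADER_LEN:]

def add_pcie_header(nvme_response, header=b"\x00" * PCIE_HEADER_LEN):
    """NVMe format -> NVMe-PCIe format (incoming responses)."""
    return header + nvme_response
```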
The host OS 100, the hypervisor 114 or the VM 112 in some embodiments has its own driver (not shown) for sending and receiving data through the PCIe bus 150. The host OS 100, the hypervisor 114 or the VM 112 treats the virtual local storage 160 as a physical local storage, without having to deal with the operations that the smart NIC performs to send data to and receive data from the set of external storages 140.
DSAN services 126 (such as the remote DSAN client of
In
In some embodiments, a smart NIC is able to employ HOU drivers that are adapted to the smart NIC OS (e.g., HOU drivers supplied along with the smart NIC operating software or subsequently downloaded, etc.) as the interface with the smart NIC HOU. The HOU drivers that are adapted to run directly on a particular type of operating software are referred to as being “native” to that operating software. In
More generally, the smart NIC of some embodiments uses a VM to perform other processes and/or support other protocols that are not natively supported by the smart NIC. For instance,
At the direction of the HOU interface 520 (also called the HOU handler), the HOU 715 performs storage command and response processing operations needed to implement the third party storage protocol and to convert between the command and response formats of the host's local storage (e.g., its NVMe local storage) and the third party external storage 712. As shown, the third party storage interface 725 passes storage access commands to, and receives storage access responses from, a shared port 720 of the NIC.
Next, at 820, the third party storage interface 725 strips off the PCIe headers and passes the NVMe command back to the HOU handler 520. To do this, the third party storage interface 725 uses the HOU in some embodiments. The HOU handler next uses (at 825) the smart NIC HOU to change the format of the NVMe command to a command that comports with the third party storage 712, and passes (at 830) this command to the third party storage 712 through a shared port of the smart NIC. In some embodiments, the command is passed to the third party storage 712 as one or more packets transmitted through the network fabric.
At 915, the HOU interface 520 gets the storage-access response and provides it to the third party storage interface 725, which then converts (at 920) the storage-access response from a third party format to an NVMe format and passes the storage-access response back to the HOU interface 520. Next, at 925, the HOU interface encapsulates the NVMe storage-access response with a PCIe header, and the encapsulated response is passed to the host's local storage controller along the PCIe bus 150. The local storage controller then removes (at 930) the PCIe header, and provides the NVMe storage-access response to a workload VM or an application running on the host.
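The response path in operations 915-930 above can be sketched as a short pipeline: the storage-access response is converted from the third party format to NVMe, encapsulated with a PCIe header for the trip over the PCIe bus, and decapsulated by the local storage controller before reaching the VM. Formats are modeled as simple tagged dictionaries; everything here is illustrative.

```python
# Illustrative sketch of the storage-access response path (915-930).
def third_party_to_nvme(response):
    """Operation 920: convert the third party format to NVMe."""
    return {"format": "nvme", "payload": response["payload"]}

def encapsulate_pcie(nvme_response):
    """Operation 925: add a PCIe header for the trip over the PCIe bus."""
    return {"format": "nvme-pcie", "inner": nvme_response}

def decapsulate_pcie(pcie_response):
    """Operation 930: local storage controller removes the PCIe header."""
    return pcie_response["inner"]

raw = {"format": "third-party", "payload": b"data"}
delivered = decapsulate_pcie(encapsulate_pcie(third_party_to_nvme(raw)))
```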
As described with respect to
One advantage of the approach of
In the example of
The virtual device emulator 1057 is used to emulate a local virtual disk from several external storages 1040 for one or more VMs 112. As mentioned above, the vSphere software's virtual device layer is used to implement the virtual device emulator of the host hypervisor or smart NIC hypervisor in some embodiments. In some embodiments, the same or different PCIe drivers 1060 are used to access different external storages 1040 that are used to emulate one virtual disk. The DSAN module 1056 performs DSAN services like those described above for the emulated local storages.
In some embodiments, the host hypervisor and smart NIC hypervisor can be configured to provide different storage services for different workload VMs 112. For instance, the storage access commands and responses for one workload VM are processed by the storage services 1055-57, while the storage access commands and responses for another workload VM skip these storage services. Similarly, the storage access commands and responses of one workload VM are processed by the storage services 125-127 of the smart NIC as shown in
At 1120, the HOU driver 1022 passes the NVMe command to the kernel NVMe module 1028, which maps this packet to an NVMeOF transport controller. The kernel NVMe module 1028 in some embodiments is transport agnostic, and can be configured to use any one of a number of different NVMe transport drivers. As part of this operation, the kernel NVMe 1028 identifies the NVMeOF controller (i.e., NVMe RDMA controller 1024 or NVMe TCP controller 1026) that needs to receive this NVMe command. This identification is based on the NVMe command parameters that identify the transport protocol to use. These command parameters are provided by the host's multipath PSA layer 1055.
The kernel module (at 1125) passes the NVMe command to the identified NVMeOF controller, which then generates one or more NVMeOF packets to forward (at 1130) the NVMe command to the destination external storage through a shared port of the smart NIC. As mentioned above, both NVMe RDMA 1024 and NVMe TCP 1026 are provided by the smart NIC OS 1020 for accessing remote external storages 1040 through the shared port(s) 130 of the smart NIC. In some embodiments, the kernel NVMe 1028 works like a multiplexer that provides NVMe storage access to the HOU driver 1022 using different transports, such as NVMe RDMA 1024 and NVMe TCP 1026, at the same time. After 1130, the process 1100 ends.
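The multiplexer behavior just described can be sketched as a transport-agnostic dispatcher that routes each NVMe command to the RDMA or TCP controller named by the command's transport parameter. The controller classes below are stubs; all names are illustrative stand-ins for the kernel NVMe module 1028 and controllers 1024/1026.

```python
# Illustrative sketch of the kernel NVMe module as a transport multiplexer.
class TransportController:
    def __init__(self, name):
        self.name = name
        self.sent = []  # commands forwarded by this controller

    def forward(self, command):
        # A real controller would build NVMeOF packets and forward them
        # through a shared port of the smart NIC; here we just record.
        self.sent.append(command)
        return f"{self.name}:{command['opcode']}"

class KernelNvmeMux:
    """Transport-agnostic dispatcher, in the spirit of kernel NVMe 1028."""
    def __init__(self):
        self.controllers = {}

    def register(self, transport, controller):
        self.controllers[transport] = controller

    def submit(self, command):
        # Dispatch based on the command's transport parameter.
        return self.controllers[command["transport"]].forward(command)

mux = KernelNvmeMux()
rdma = TransportController("nvme-rdma")
tcp = TransportController("nvme-tcp")
mux.register("rdma", rdma)
mux.register("tcp", tcp)
```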
At 1220, the kernel NVMe 1028 maps the received NVMe command to the HOU driver 1022, as the NVMe command needs to go to the host. In some embodiments, the kernel NVMe 1028 creates a record when it processes an egress packet at 1125 and uses this record to perform its mapping at 1220. In some embodiments, the kernel NVMe 1028 provides the NVMe command to the HOU driver 1022 with the controller of the emulated local storage 160 as the command's destination. At 1225, the HOU driver 1022 then encapsulates the NVMe command with a PCIe header by using the smart NIC's HOU and then sends the NVMe command along the host PCIe to the local storage controller of the emulated local storage 160. The host PCIe then provides (at 1230) the NVMe command to the local storage controller through the NVMe PCIe driver 1060. This controller then removes (at 1230) the PCIe header and provides the NVMe command to the destination VM 112. The process 1200 then ends.
In some embodiments, the smart NICs are used as storage access accelerators.
In some embodiments, the hypervisor 1314 also includes the DSAN service layer 1313, which provides distributed storage services for the emulated local NVMe storage. As mentioned above, the distributed storage services in some embodiments account for the VM 1312 having no knowledge regarding the plurality of external storages being used to emulate the local storage. These DSAN services improve this emulated storage's operation and provide additional features for it. Examples of such features in some embodiments include (1) data efficiency processes, such as deduplication operations, compression operations, and thin provisioning, (2) security processes, such as end-to-end encryption, and access control operations, (3) data and life cycle management, such as storage vMotion, snapshot operations, snapshot schedules, cloning, disaster recovery, backup, long term storage, (4) performance optimizing operations, such as QoS policies (e.g., max and/or min I/O regulating policies), and (5) analytic operations, such as collecting performance metrics and usage data for virtual disk (I/O, latency, etc.). One example of a DSAN service is the vSAN service offered by VMware vSphere software. The DSAN service layer 1313 also includes a multipathing PSA layer in some embodiments.
The DSAN service module 1313 receives and sends storage related NVMe commands from and to the kernel NVMe module 1315. The kernel NVMe module 1315 interacts with either the NVMe RDMA driver 1316 or NVMe TCP driver 1317 to receive and send these NVMe commands. These drivers exchange these NVMe commands with the smart NIC OS 1320 through one or more virtual functions (VFs) 1322 defined for these drivers on the smart NIC OS.
In some embodiments, the smart NIC OS can present the smart NIC as multiple physical functions (PFs) connected to the host computer. The PCIe bus 150, in some embodiments, allows for the creation of these PFs. A PF, in some embodiments, can be further virtualized as multiple virtual functions (VFs). More specifically, in some embodiments, physical functions and virtual functions refer to ports exposed by a smart NIC using a PCIe interface to connect to the host computer over the PCIe bus. A PF refers to an interface of the smart NIC that is recognized as a unique resource with a separately configurable PCIe interface (e.g., separate from other PFs on a same smart NIC). In some embodiments, each PF is executed by the processing units (e.g., microprocessors) of the host computer.
The VF refers to a virtualized interface that is not fully configurable as a separate PCIe resource, but instead inherits some configuration from the PF with which it is associated while presenting a simplified configuration space. VFs are provided, in some embodiments, to provide a passthrough mechanism that allows compute nodes executing on a host computer to receive data messages from the smart NIC without traversing a virtual switch of the host computer. The VFs, in some embodiments, are provided by virtualization software executing on the smart NIC. In some embodiments, each VF is executed by the processing units (e.g., microprocessors) of the smart NIC.
The VFs and PFs, in some embodiments, are deployed to support storage and compute virtualization modules. For example, a PF or VF can be deployed to present a storage or compute resource provided by the smart NIC as a local device (i.e., a device connected to the host computer by a PCIe bus). The definition of such VFs is further described below.
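The PF/VF relationship described above can be sketched as follows: a virtual function inherits configuration from its parent physical function while presenting a simplified, separately tracked configuration space. The classes and field names below are hypothetical illustrations, not any real SR-IOV API.

```python
# Hedged sketch of PF/VF configuration inheritance.
class VirtualFunction:
    """Simplified config space that falls back to the parent PF."""
    def __init__(self, parent, overrides):
        self.parent = parent
        self.overrides = overrides  # the VF's own simplified settings

    def config(self, key):
        # Inherit from the PF unless the VF overrides the value.
        return self.overrides.get(key, self.parent.pci_config.get(key))

class PhysicalFunction:
    """Fully configurable PCIe interface of the smart NIC."""
    def __init__(self, pci_config):
        self.pci_config = dict(pci_config)
        self.vfs = []

    def create_vf(self, overrides=None):
        vf = VirtualFunction(self, overrides or {})
        self.vfs.append(vf)
        return vf

pf = PhysicalFunction({"mtu": 1500, "link": "25G"})
vf = pf.create_vf({"mtu": 9000})  # overrides MTU, inherits link speed
```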
The PF 1370 on the host has the corresponding VF 1322 on the smart NIC. The PF 1370 represents a shared NIC port to the NVMeOF drivers 1316 and 1317, which run on the host and convert the NVMe storage access commands to network packets. These drivers use this representative port 1370 to forward storage access packets to an external storage through the VF 1322 of the smart NIC 1320, and to receive storage access response packets from the external storage 1340 through the VF 1322 of the smart NIC 1320.
When the VF 1322 does not know how to process a packet (e.g., when it receives a first packet of a new flow for which it does not have a forwarding rule), the VF passes the packet through a "slow-path" that includes the virtual switch 1326 of the virtualization layer 1327, which then determines how to forward the packet and provides the VF with a forwarding rule for forwarding the packet. On the other hand, when the VF 1322 knows how to process a packet (e.g., when the VF receives another packet of a flow that it has previously processed and/or for which it has a forwarding rule), the VF passes the packet through a "fast-path," e.g., passes a packet of a previously processed flow directly to the NIC driver 1325 for forwarding to an external storage 1340. Accordingly, in the example illustrated in
In some embodiments, the VF 1322 uses the smart NIC HOU 505 to perform its fast path forwarding. When the HOU is not programmed with the flow-processing rules needed to process a new flow, the VF 1322 in some embodiments passes the packet to the virtualization layer 1327, which either identifies the flow-processing rule from a rule cache or passes the packet to a manager (executing on the smart NIC or on an external computer) that then determines the flow processing rule, and passes this rule back to the virtualization layer to use to forward the packet and to program the HOU. Once programmed, the VF can use the HOU to process subsequent packets of this flow.
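The fast-path/slow-path split described above can be sketched with a simple flow table: the first packet of a flow misses the table and takes the slow path through the virtual switch, which installs a rule; later packets of the same flow hit the table and are forwarded directly. Everything below is an illustrative stand-in for the HOU's flow-processing rules.

```python
# Illustrative sketch of fast-path forwarding with slow-path fallback.
class VirtualSwitch:
    """Slow path: decides forwarding for flows the fast path cannot handle."""
    def decide(self, flow):
        return f"port-for-{flow}"  # stand-in for real rule computation

class FastPath:
    def __init__(self, slow_path):
        self.rules = {}            # programmed flow-processing rules
        self.slow_path = slow_path
        self.slow_hits = 0         # how many packets took the slow path

    def forward(self, flow, packet):
        if flow not in self.rules:
            # First packet of a new flow: consult the virtual switch and
            # program the rule (analogous to programming the HOU).
            self.slow_hits += 1
            self.rules[flow] = self.slow_path.decide(flow)
        return (self.rules[flow], packet)

vf = FastPath(VirtualSwitch())
first = vf.forward("flow-1", b"p0")   # slow path, installs the rule
second = vf.forward("flow-1", b"p1")  # fast path, rule already programmed
```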
The NVMe RDMA 1316 or NVMe TCP 1317 module running on the host converts (at 1420) the NVMe command to one or more NVMe network packets (NVMeOF packets) and passes the packets to a PF 1370 of the PCIe bus 150. At 1425, the PF 1370 adds PCIe header information to the NVMe network packets, and then passes the packets along the PCIe bus 150. The PCIe bus 150 creates a mapping between the PF 1370 and the VF module 1322 running on the smart NIC. Hence, the VF module 1322 receives each NVMeOF packet through the PCIe bus 150.
At 1430, the VF module 1322 then transfers the NVMeOF packet either directly through the fast path to the NIC driver 1325, or indirectly to the NIC driver 1325 through the slow path that involves the virtual switch 1326. The NIC driver 1325 then forwards the NVMeOF packet through a shared port of the smart NIC, so that this packet can be forwarded through intervening network fabric (e.g., intervening switches/routers) to reach its destination external storage 1340. In some embodiments, the fast-path processing of the VF 1322 allows the VF to directly pass the packet to the shared port of the smart NIC. The process then ends.
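Operations 1420 through 1430 can be illustrated with a minimal sketch of the host-to-external-storage path. The packet fields, chunk size, and helper names below are assumptions made for illustration, not details from this document:

```python
# Hypothetical sketch of the NVMe command path: convert the command into
# NVMeOF packets (1420), add PCIe header information at the PF (1425),
# and forward each packet through the VF toward the NIC driver (1430).

def nvme_to_nvmeof_packets(command, mtu=3):
    """Split an NVMe command's payload into a sequence of NVMeOF packets."""
    payload = command["data"]
    chunks = [payload[i:i + mtu] for i in range(0, len(payload), mtu)]
    return [{"opcode": command["opcode"], "seq": n, "data": c}
            for n, c in enumerate(chunks)]

def pf_add_pcie_header(packet, pf_id=1370):
    # The PF adds PCIe header information before the packet crosses the bus.
    return {"pcie_src": pf_id, **packet}

def vf_forward(packet, fast_path=True):
    # Fast path: straight to the NIC driver; the slow path would traverse
    # the virtual switch first.
    route = "nic_driver" if fast_path else "virtual_switch->nic_driver"
    return {"route": route, **packet}

cmd = {"opcode": "WRITE", "data": b"abcdefg"}
wire = [vf_forward(pf_add_pcie_header(p)) for p in nvme_to_nvmeof_packets(cmd)]
```

Reassembling the `data` fields of the emitted packets recovers the original payload, mirroring how the external storage reconstructs the command from the NVMeOF packet stream.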
For the PF 1580, the smart NIC OS in
The PF 1580 provides (at 1615) the set of network packets that contains the NVMe command (with data) to the VF2 1523, which is a high speed network adapter provided by the smart NIC 1320. As described above for VF 1322 and operation 1430 of
The smart NIC operating system in some embodiments is provided with the host-computer hypervisor program as part of a single downloaded package. For instance, some embodiments provide a method for provisioning a smart NIC with a smart NIC operating system for enabling resource sharing on the smart NIC connected to a host computer. The method, in some embodiments, is performed by the host computer and begins when the host computer receives (1) a host-computer hypervisor program for enabling resource sharing on the host computer and (2) the smart NIC operating system. In some embodiments, the host-computer hypervisor program includes the smart NIC hypervisor program. The host computer then installs the host-computer hypervisor program and provides the smart NIC operating system to the smart NIC for the smart NIC to install on the smart NIC. One of ordinary skill in the art will appreciate that a hypervisor program is used as an example of virtualization software (e.g., software enabling resource sharing for a device executing the software).
The smart NIC, in some embodiments, is a NIC that includes (i) an application-specific integrated circuit (ASIC), (ii) a general purpose central processing unit (CPU), and (iii) memory. The ASIC, in some embodiments, is an I/O ASIC that handles the processing of packets forwarded to and from the computer and is at least partly controlled by the CPU. The CPU executes a NIC operating system in some embodiments that controls the ASIC and can run other programs, such as API translation logic to enable the compute manager to communicate with a bare metal computer. The smart NIC also includes a configurable peripheral component interconnect express (PCIe) interface in order to connect to the other physical components of the bare metal computer system (e.g., the x86 CPU, memory, etc.). Via this configurable PCIe interface, the smart NIC can present itself to the bare metal computer system as a multitude of devices, including a packet processing NIC, a hard disk (using non-volatile memory express (NVMe) over PCIe), or other devices.
Although not necessary for managing a bare metal computer, the NIC operating system of some embodiments is capable of executing a virtualization program (similar to a hypervisor) that enables sharing resources (e.g., memory, CPU resources) of the smart NIC among multiple machines (e.g., VMs) if those VMs execute on the computer. The virtualization program can provide compute virtualization services and/or network virtualization services similar to a managed hypervisor. These network virtualization services, in some embodiments, include segregating data messages into different private (e.g., overlay) networks that are defined over the physical network (shared between the private networks), forwarding the data messages for these private networks (e.g., performing switching and/or routing operations), and/or performing middlebox services for the private networks.
The host-computer hypervisor program and the smart NIC operating system, in some embodiments, are programs that do not have previous versions installed on the computer or the smart NIC. In other embodiments, the host-computer hypervisor program and the smart NIC operating system received by the host computer are update programs for previously installed versions of the host-computer hypervisor program and the smart NIC operating system. After a host-computer hypervisor program and the smart NIC operating system are received, the host computer, in some embodiments, receives an additional program for updating the smart NIC operating system and provides the received program to the smart NIC for the smart NIC to update the smart NIC operating system.
In some embodiments, after receiving the host-computer hypervisor program and the smart NIC operating system, the host computer detects (or determines) that the host computer is connected to the smart NIC. In some embodiments, the connection is made over a standard PCIe connection and the smart NIC is detected as a peripheral device that supports the installation of the smart NIC operating system. The host computer provides, based on the detection, the smart NIC operating system to the smart NIC for the smart NIC to install. In some embodiments, the smart NIC operating system is sent to the smart NIC along with an instruction to the smart NIC to install the smart NIC operating system.
In some embodiments, the host computer includes a local controller that receives the host-computer hypervisor program and the smart NIC operating system. The local controller, in some embodiments, provides the host-computer hypervisor program and the smart NIC operating system to a compute agent that installs the host-computer hypervisor program on the host computer to enable the host computer to share resources among a set of compute nodes (e.g., virtual machines, containers, Pods, etc.). The host-computer hypervisor program and the smart NIC operating system are particular examples of virtualization software that is used, in some embodiments, to enable resource sharing for the host computer and smart NIC, respectively.
As mentioned above, the smart NIC in some embodiments includes a set of ASICs, a general purpose CPU, and a memory. The set of ASICs, in some embodiments, includes an ASIC for processing packets forwarded to and from the host computer as well as other ASICs for accelerating operations performed by the smart NIC on behalf of the host computer (e.g., encryption, decryption, storage, security, etc.). The smart NIC operating system, in some embodiments, includes virtualization programs for network virtualization, compute virtualization, and storage virtualization. The virtualization programs, in some embodiments, enable sharing the resources of the smart NIC among multiple tenants of a multi-tenant datacenter.
The network virtualization program provides network virtualization services on the smart NIC. The network virtualization services, in some embodiments, include forwarding operations (e.g., network switching operations and network routing operations). The forwarding operations are performed, in some embodiments, on behalf of multiple logically separate networks implemented over a shared network of a datacenter. Forwarding packets for different logical networks, in some embodiments, includes segregating packets for each logically separate network into the different logically separate networks. Forwarding operations for the different logical networks, in some embodiments, are implemented as different processing pipelines that perform different sets of operations. The different sets of operations include, in some embodiments, different logical packet forwarding operations (e.g., logical switching, logical routing, logical bridging, etc.) and different middlebox services (e.g., a firewall service, a load balancing service, etc.).
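A minimal sketch of the per-logical-network pipelines described above, assuming invented stage names: each logical network identifier maps to a different ordered set of forwarding operations and middlebox services, and packets are segregated into those pipelines by that identifier.

```python
# Hypothetical sketch: packets carrying a logical-network identifier are
# demultiplexed into per-network processing pipelines whose stages differ
# (logical forwarding operations and middlebox services). Stage and key
# names are illustrative assumptions.

def logical_switch(pkt): pkt["ops"].append("switch"); return pkt
def logical_router(pkt): pkt["ops"].append("route"); return pkt
def firewall(pkt):       pkt["ops"].append("firewall"); return pkt
def load_balancer(pkt):  pkt["ops"].append("lb"); return pkt

# Different logical networks get different sets of operations.
PIPELINES = {
    "vni-100": [logical_switch, firewall],
    "vni-200": [logical_switch, logical_router, load_balancer],
}

def process(pkt):
    pkt["ops"] = []
    # Segregate the packet into its logical network's pipeline.
    for stage in PIPELINES[pkt["vni"]]:
        pkt = stage(pkt)
    return pkt

a = process({"vni": "vni-100"})
b = process({"vni": "vni-200"})
```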
The compute virtualization program, in some embodiments, provides virtualized compute resources (virtual machines, containers, Pods, etc.) that execute over the compute virtualization program. The storage virtualization program, in some embodiments, provides storage virtualization services on the smart NIC. The virtualized storage, in some embodiments, includes one or more of virtual storage area networks (vSANs), virtual volumes (vVOLs), and other virtualized storage solutions. The virtualized storage appears to the connected host computer as a local storage, in some embodiments, even when the physical resources that are the backend of the virtualized storage are provided by a distributed set of storages of multiple physical host computers.
The process 1800, in some embodiments, is performed by a host computer (e.g., host computer 1710), which in some embodiments is an x86 server provided by a datacenter provider. The process 1800 begins by receiving (at 1810) a host-computer virtualization program (e.g., host-computer hypervisor program 1715) that includes a smart NIC operating system (e.g., smart NIC hypervisor program 1745). The host-computer virtualization program (e.g., host-computer hypervisor program 1715) and smart NIC operating system (e.g., smart NIC hypervisor program 1745), in some embodiments, are installer programs that install virtualization software (e.g., a software virtualization layer or a virtualization OS). The host-computer virtualization program, in some embodiments, is received from a network controller computer to configure the host computer to support virtualized compute nodes, storage, network cards, etc., to be implemented on the host computer for a virtual or logical network associated with the network controller computer.
Stage 1701 of
After receiving (at 1810) the host-computer virtualization program and the smart NIC operating system, the host computer then installs (at 1820) the received host-computer virtualization program (e.g., host-computer hypervisor program 1715) on the host computer. The virtualization program, in some embodiments, is a hypervisor such as ESXi™ provided by VMware, Inc. or other virtualization programs. As shown in stage 1702 of
After, or as part of, installing (at 1820) the host-computer virtualization program, the host computer detects (at 1830) that the smart NIC operating system is included in the host-computer virtualization program. In some embodiments, detecting (at 1830) that the smart NIC operating system is incorporated in the host-computer virtualization program triggers a set of operations to program any virtualization-capable smart NICs connected to the host computer. The set of operations, in some embodiments, includes an operation to detect whether a virtualization-capable smart NIC is connected to the host computer.
The host computer determines (at 1840) that a virtualization-capable smart NIC is connected to the host computer. In some embodiments, determining (at 1840) that a virtualization-capable smart NIC is connected to the host computer is part of the installation process for the host-computer virtualization program. Determining (at 1840) that a virtualization-capable smart NIC is connected to the host computer, in some embodiments, is based on a set of components exposed to the host computer by the smart NIC. In some embodiments, the host-computer virtualization program (e.g., an ESXi™ installer) queries a baseboard management controller (BMC) of the host computer to determine (at 1840) that the smart NIC is compatible with the smart NIC operating system (e.g., a smart NIC operating system (OS) such as ESXio™). In some embodiments, a virtualization-capable smart NIC is identified to the connected host computer during a previously performed process that configures the connection between the host computer and the smart NIC.
After determining (at 1840) that a virtualization-capable smart NIC is connected to the host computer, the host computer provides (at 1850) the smart NIC operating system to the smart NIC for the smart NIC to install a virtualization layer to enable the smart NIC to share resources on the smart NIC.
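Operations 1810 through 1850 of process 1800 can be sketched as follows. The class and method names here are hypothetical, invented purely to show the ordering of the operations:

```python
# Hedged sketch of process 1800: receive the bundled programs (1810),
# install the host virtualization program (1820), detect the bundled
# smart NIC OS (1830), determine a capable smart NIC is attached (1840),
# and hand the NIC OS to the smart NIC for installation (1850).

class HostComputer:
    def __init__(self, smart_nic=None):
        self.smart_nic = smart_nic  # dict modeling an attached smart NIC
        self.installed = []

    def provision(self, bundle):
        # 1810: receive the host virtualization program with bundled NIC OS.
        hypervisor, nic_os = bundle["hypervisor"], bundle["nic_os"]
        # 1820: install the host-computer virtualization program.
        self.installed.append(hypervisor)
        # 1830: detect that a smart NIC OS is included in the bundle, and
        # 1840: determine a virtualization-capable smart NIC is attached.
        if nic_os and self.smart_nic and self.smart_nic["capable"]:
            # 1850: provide the smart NIC OS to the smart NIC to install.
            self.smart_nic["os"] = nic_os
        return self

host = HostComputer(smart_nic={"capable": True, "os": None})
host.provision({"hypervisor": "hypervisor-installer", "nic_os": "nic-os-image"})
```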
In some embodiments, providing (at 1850) the smart NIC operating system for the smart NIC to install the smart NIC operating system includes multiple sub-operations.
After configuring the smart NIC to enable booting from an image stored on the host computer, the smart NIC operating system is staged (at 1920) on the host computer for the smart NIC to use in an initial boot-up process. The host-computer virtualization program, in some embodiments, invokes BMC APIs to stage (at 1920) the smart NIC operating system (e.g., ESX.io) in BMC storage as an image file (e.g., as an ISO, DD, tgz, or zip file) for the smart NIC to perform the initial boot-up of the smart NIC operating system.
At 1930, the process 1900 provides the smart NIC virtualization program for storage on partitioned memory.
The smart NIC operating system (e.g., ESX.io bootloader and system modules) is then stored (at 2130) in the local partitioned storage. In some embodiments, the smart NIC operating system is copied to the smart NIC local storage for storing (at 2130) from the host computer based on a process of the smart NIC operating system. In other embodiments, the host-computer virtualization program detects that the smart NIC has booted from the image and partitioned the storage and provides the smart NIC operating system to the smart NIC for storage (at 2130).
The smart NIC operating system then verifies (at 2140) that the installation was successful. In some embodiments, verifying (at 2140) that the installation was successful includes verifying that the smart NIC device and functions are successfully enumerated. The verification (at 2140), in some embodiments, is based on a set of post-installation scripts. In some embodiments, the verification includes a communication to the host-computer virtualization program installation process that the installation on the smart NIC was successful.
The host computer BMC then configures (at 1940) the smart NIC to boot from the local copy of the smart NIC operating system.
The host computer then completes (at 1950) the installation of the host-computer virtualization program and reboots the host computer and the smart NIC. These operations (1940 and 1950) are reflected in process 2100 of
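The staging and boot-up ordering described in operations 1920 through 1950 and 2130 through 2140 can be sketched as a single sequence. Every name in this sketch is an assumption; it only illustrates the order of the steps:

```python
# Illustrative ordering: stage the NIC OS image in BMC storage (1920),
# boot the smart NIC from that image, partition local storage, copy the
# OS to local storage (2130), verify the installation (2140), switch the
# boot source to the local copy (1940), then reboot (1950).

def provision_nic_os(bmc_storage, nic):
    events = []
    bmc_storage["image"] = "nic-os.img"        # 1920: stage image via BMC
    nic["booted_from"] = bmc_storage["image"]  # initial boot from BMC image
    nic["partitions"] = ["boot", "system"]     # partition local storage
    nic["local_os"] = bmc_storage["image"]     # 2130: copy OS to local store
    assert nic["local_os"] is not None         # 2140: verify installation
    nic["boot_source"] = "local"               # 1940: boot from local copy
    events.append("reboot")                    # 1950: reboot host and NIC
    return events

nic, bmc = {}, {}
events = provision_nic_os(bmc, nic)
```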
As illustrated in
As used in this document, physical functions (PFs) and virtual functions (VFs) refer to ports exposed by a smart NIC using a PCIe interface to connect to a host computer (or set of host computers) over a PCIe bus. A PF refers to an interface of the smart NIC that is recognized as a unique resource with a separately configurable PCIe interface (e.g., separate from other PFs on a same smart NIC). The VF refers to a virtualized interface that is not fully-configurable as a separate PCIe resource, but instead inherits some configuration from the PF with which it is associated while presenting a simplified configuration space. VFs are provided, in some embodiments, to provide a passthrough mechanism that allows compute nodes executing on a host computer to receive data messages from the smart NIC without traversing a virtual switch of the host computer. The VFs, in some embodiments, are provided by virtualization software executing on the smart NIC. The VFs and PFs, in some embodiments, are deployed to support the storage and compute virtualization modules 2263 and 2261. For example, a PF or VF can be deployed to present a storage or compute resource provided by the smart NIC as a local device (i.e., a device connected to the host computer by a PCIe bus).
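The PF/VF relationship described above can be modeled with a small sketch, assuming invented field names: a VF answers configuration queries from its own simplified set of overrides and falls back to its parent PF's fully configurable space for everything else.

```python
# Hypothetical model of PF/VF configuration inheritance. The config keys
# (mtu, link_speed) are illustrative, not taken from any PCIe register map.

class PF:
    """A physical function: a separately configurable PCIe interface."""
    def __init__(self, pf_id, config):
        self.pf_id = pf_id
        self.config = config  # fully configurable space

class VF:
    """A virtual function: inherits from its PF, overrides a simplified subset."""
    def __init__(self, vf_id, parent, overrides=None):
        self.vf_id = vf_id
        self.parent = parent
        self.overrides = overrides or {}  # simplified configuration space

    def setting(self, key):
        # A VF inherits any setting it does not itself override.
        return self.overrides.get(key, self.parent.config.get(key))

pf = PF(1370, {"mtu": 9000, "link_speed": "100G"})
vf = VF(1322, pf, overrides={"mtu": 1500})
```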
The smart NIC 2240 also includes a local memory 2246 and a set of general purpose CPUs 2044 that are used to install (and support) the virtualization layer 2330, which enables resource sharing of elements on the I/O portion and a compute portion of the smart NIC (e.g., the CPUs 2044, memory 2246, etc.). As shown, smart NIC operating system 2245 is stored in memory 2246 (and more specifically, in memory partition 2246a) which communicates with the CPUs 2044 to execute the smart NIC operating system 2245 to install the NIC operating system 2260 (e.g., ESX.io). In some embodiments, the memory 2246 is an embedded multi-media controller (eMMC) memory that includes flash memory and a flash memory controller. The memory 2246 and the CPUs 2044 communicate, in some embodiments, with other elements of the smart NIC 2240 over an internal PCIe bus 2043.
Smart NIC 2240 also includes an I/O ASIC 2047 (among a set of additional ASICs or field-programmable gate arrays (FPGAs) not shown) that can be used to accelerate data message forwarding or other networking functions (encryption, security operations, storage operations, etc.). A set of physical ports 2041 that provides connections to a physical network and interacts with the I/O ASIC 2047 is also included in smart NIC 2240. The I/O ASIC and physical ports that are depicted in
The host computer and smart NIC, in some embodiments, are elements of a datacenter that implements virtual networks for multiple tenants. In some embodiments, the virtual networks implemented in the datacenter include one or more logical networks including one or more logical forwarding elements, such as logical switches, routers, gateways, etc. In some embodiments, a logical forwarding element (LFE) is defined by configuring several physical forwarding elements (PFEs), some or all of which execute on host computers or smart NICs along with deployed compute nodes (e.g., VMs, Pods, containers, etc.). The PFEs, in some embodiments, are configured to implement two or more LFEs to connect two or more different subsets of deployed compute nodes. The virtual network, in some embodiments, is a software-defined network (SDN) such as that deployed by NSX-T™ and includes a set of SDN managers and SDN controllers. In some embodiments, the set of SDN managers manage the network elements and instruct the set of SDN controllers to configure the network elements to implement a desired forwarding behavior for the SDN.
As shown, the set of SDN controller computers 2370 send a host-computer hypervisor program 2315 to a local controller 2390 of host computer 2310 through smart NIC 2340 (using physical port (PP) 2341 and a PCIe bus 2342). In some embodiments, the host-computer hypervisor program 2315 is an installer program executed by the compute resources 2321 of host computer 2310 to install a virtualization layer 2330 (e.g., a hypervisor such as ESXi™ provided by VMware, Inc.) to enable the physical resources 2320 of host computer 2310 (including compute, network and storage resources 2321, 2322, and 2323) to be shared among multiple virtualized machines.
Local controller 2390 receives the host-computer hypervisor program 2315 and provides it to the physical resources 2320 (e.g., runs the host-computer hypervisor program 2315 using the compute resources 2321 of the host computer 2310). Based on the host-computer hypervisor program 2315, a virtualization layer 2330 is installed on the host computer 2310 (shown using dashed lines to distinguish between hardware and software of the host computer 2310). While virtualization layer 2330 is shown as including a compute virtualization module 2261, a network virtualization module 2262, and a storage virtualization module 2263, in some embodiments, a virtualization layer 2330 supports only a subset of these functions, supports additional functions, or supports a different combination of functions. As described above in relation to
The I/O ASIC 2047 of the smart NIC 2440 and the host computer hypervisor 2430, in some embodiments, implement separate processing pipelines for the separate tenants (e.g., the separate logical networks). Data messages, e.g., ingressing data messages T1 and T2, are segregated into the different processing pipelines of the different logical networks of the different tenants, in some embodiments, based on logical network identifiers (e.g., virtual local area network (VLAN) or virtual extensible LAN (VXLAN) identifiers).
Network virtualization 2562 provides a virtualized PCIe interface that presents the PCIe bus 2542 as including a set of physical functions (PFs 2570a-n) as defined above and, for a set of physical functions, a set of virtual functions 2571. Both the host computer hypervisor 2530 and NIC OS 2560 execute a virtual switch 2532 that provides logical routing and logical switching operations for compute nodes (virtual machines, containers, Pods, etc.). In some embodiments, a virtual switch 2573 on the smart NIC 2540 provides logical forwarding operations for compute nodes on both the smart NIC 2540 and on the host computer 2510. In some embodiments, the virtual switch 2573 interacts with the I/O ASIC 2047 to perform data message processing offload (e.g., flow processing offload) on behalf of the host computer 2510.
The bus 2805 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 2800. For instance, the bus 2805 communicatively connects the processing unit(s) 2810 with the read-only memory 2830, the system memory 2825, and the permanent storage device 2835.
From these various memory units, the processing unit(s) 2810 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
The read-only-memory (ROM) 2830 stores static data and instructions that are needed by the processing unit(s) 2810 and other modules of the electronic system. The permanent storage device 2835, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 2800 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2835.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 2835, the system memory 2825 is a read-and-write memory device. However, unlike storage device 2835, the system memory is a volatile read-and-write memory, such as a random-access memory. The system memory 2825 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 2825, the permanent storage device 2835, and/or the read-only memory 2830. From these various memory units, the processing unit(s) 2810 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 2805 also connects to the input and output devices 2840 and 2845. The input devices 2840 enable the user to communicate information and select commands to the electronic system. The input devices 2840 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 2845 display images generated by the electronic system 2800. The output devices 2845 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
Hypervisor kernel network interface modules, in some embodiments, are non-VM DCNs that include a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, several examples were provided above by reference to specific distributed storage processes, such as vSAN. One of ordinary skill will realize that other embodiments use other distributed storage services (e.g., vVol offered by VMware, Inc.). The vSAN and vVol services of some embodiments are further described in U.S. Pat. Nos. 8,775,773 and 9,665,235, which are hereby incorporated by reference. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
20170295033 | Cherian et al. | Oct 2017 | A1 |
20180024964 | Mao et al. | Jan 2018 | A1 |
20180032249 | Makhervaks | Feb 2018 | A1 |
20180088978 | Li et al. | Mar 2018 | A1 |
20180095872 | Dreier et al. | Apr 2018 | A1 |
20180109471 | Chang et al. | Apr 2018 | A1 |
20180152540 | Niell et al. | May 2018 | A1 |
20180260125 | Botes | Sep 2018 | A1 |
20180262599 | Firestone | Sep 2018 | A1 |
20180278684 | Rashid | Sep 2018 | A1 |
20180309641 | Wang et al. | Oct 2018 | A1 |
20180309718 | Zuo | Oct 2018 | A1 |
20180329743 | Pope | Nov 2018 | A1 |
20180331976 | Pope | Nov 2018 | A1 |
20180336346 | Guenther | Nov 2018 | A1 |
20180337991 | Kumar | Nov 2018 | A1 |
20180349037 | Zhao | Dec 2018 | A1 |
20180359215 | Khare et al. | Dec 2018 | A1 |
20190042506 | Evey et al. | Feb 2019 | A1 |
20190044809 | Willis et al. | Feb 2019 | A1 |
20190044866 | Chilikin et al. | Feb 2019 | A1 |
20190075063 | McDonnell et al. | Mar 2019 | A1 |
20190132296 | Jiang et al. | May 2019 | A1 |
20190158396 | Yu et al. | May 2019 | A1 |
20190173689 | Cherian et al. | Jun 2019 | A1 |
20190200105 | Cheng et al. | Jun 2019 | A1 |
20190235909 | Jin et al. | Aug 2019 | A1 |
20190278675 | Bolkhovitin | Sep 2019 | A1 |
20190280980 | Hyoudou | Sep 2019 | A1 |
20190286373 | Karumbunathan | Sep 2019 | A1 |
20190306083 | Shih et al. | Oct 2019 | A1 |
20200028800 | Strathman et al. | Jan 2020 | A1 |
20200042234 | Krasner et al. | Feb 2020 | A1 |
20200042389 | Kulkarni | Feb 2020 | A1 |
20200042412 | Kulkarni | Feb 2020 | A1 |
20200136996 | Li | Apr 2020 | A1 |
20200259731 | Sivaraman et al. | Aug 2020 | A1 |
20200278892 | Nainar et al. | Sep 2020 | A1 |
20200278893 | Niell et al. | Sep 2020 | A1 |
20200319812 | He et al. | Oct 2020 | A1 |
20200328192 | Zaman | Oct 2020 | A1 |
20200382329 | Yuan | Dec 2020 | A1 |
20200401320 | Pyati | Dec 2020 | A1 |
20200412659 | Ilitzky et al. | Dec 2020 | A1 |
20210019270 | Li et al. | Jan 2021 | A1 |
20210026670 | Krivenok | Jan 2021 | A1 |
20210058342 | McBrearty | Feb 2021 | A1 |
20210232528 | Kutch et al. | Jul 2021 | A1 |
20210266259 | Renner, III et al. | Aug 2021 | A1 |
20210314232 | Nainar et al. | Oct 2021 | A1 |
20210357242 | Ballard et al. | Nov 2021 | A1 |
20210377166 | Brar et al. | Dec 2021 | A1 |
20210377188 | Ghag et al. | Dec 2021 | A1 |
20210392017 | Cherian et al. | Dec 2021 | A1 |
20210409317 | Seshan et al. | Dec 2021 | A1 |
20220043572 | Said et al. | Feb 2022 | A1 |
20220100432 | Kim et al. | Mar 2022 | A1 |
20220100491 | Voltz et al. | Mar 2022 | A1 |
20220100542 | Voltz | Mar 2022 | A1 |
20220100544 | Voltz | Mar 2022 | A1 |
20220100545 | Cherian et al. | Mar 2022 | A1 |
20220100546 | Cherian et al. | Mar 2022 | A1 |
20220103478 | Ang et al. | Mar 2022 | A1 |
20220103487 | Ang et al. | Mar 2022 | A1 |
20220103488 | Wang et al. | Mar 2022 | A1 |
20220103629 | Cherian et al. | Mar 2022 | A1 |
20220150055 | Cui et al. | May 2022 | A1 |
20220164451 | Bagwell | May 2022 | A1 |
20220197681 | Rajagopal | Jun 2022 | A1 |
20220206908 | Brar et al. | Jun 2022 | A1 |
20220206962 | Kim et al. | Jun 2022 | A1 |
20220206964 | Kim et al. | Jun 2022 | A1 |
20220210229 | Maddukuri et al. | Jun 2022 | A1 |
20220231968 | Rajagopal | Jul 2022 | A1 |
20220272039 | Cardona et al. | Aug 2022 | A1 |
20220335563 | Elzur | Oct 2022 | A1 |
20230004508 | Liu et al. | Jan 2023 | A1 |
Number | Date | Country |
---|---|---|
2672100 | Jun 2008 | CA |
2918551 | Jul 2010 | CA |
101258725 | Sep 2008 | CN |
101540826 | Sep 2009 | CN |
102018004046 | Nov 2018 | DE |
1482711 | Dec 2004 | EP |
3598291 | Jan 2020 | EP |
202107297 | Feb 2021 | TW |
WO-2005099201 | Oct 2005 | WO |
2007036372 | Apr 2007 | WO |
WO-2010008984 | Jan 2010 | WO |
2016003489 | Jan 2016 | WO |
WO-2020027913 | Feb 2020 | WO |
2021030020 | Feb 2021 | WO |
2022066267 | Mar 2022 | WO |
2022066268 | Mar 2022 | WO |
2022066270 | Mar 2022 | WO |
2022066271 | Mar 2022 | WO |
2022066531 | Mar 2022 | WO |
Entry |
---|
Non-Published Commonly Owned Related International Patent Application PCT/US2021/042120 with similar specification, filed Jul. 17, 2021, 70 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/461,908, filed Aug. 30, 2021, 60 pages, Nicira, Inc. |
Anwer, Muhammad Bilal, et al., “Building a Fast, Virtualized Data Plane with Programmable Hardware,” Aug. 17, 2009, 8 pages, VISA'09, ACM, Barcelona, Spain. |
Author Unknown, “Network Functions Virtualisation; Infrastructure Architecture; Architecture of the Hypervisor Domain,” Draft ETSI GS NFV-INF 004 V0.3.1, May 28, 2014, 50 pages, France. |
Koponen, Teemu, et al., “Network Virtualization in Multi-tenant Datacenters,” Technical Report TR-2013-001E, Aug. 2013, 22 pages, VMware, Inc., Palo Alto, CA, USA. |
Le Vasseur, Joshua, et al., “Standardized but Flexible I/O for Self-Virtualizing Devices,” Month Unknown 2008, 7 pages. |
Non-Published Commonly Owned U.S. Appl. No. 17/091,663 (G806), filed Nov. 6, 2020, 29 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/107,561, filed Nov. 30, 2020, 39 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/107,568, filed Nov. 30, 2020, 39 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/114,975, filed Dec. 8, 2020, 52 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/114,994, filed Dec. 8, 2020, 51 pages, VMware, Inc. |
Non-Published Commonly Owned Related U.S. Appl. No. 17/145,318 with similar specification, filed Jan. 9, 2021, 70 pages, VMware, Inc. |
Non-Published Commonly Owned Related U.S. Appl. No. 17/145,319 with similar specification, filed Jan. 9, 2021, 70 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,321, filed Jan. 9, 2021, 49 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,322, filed Jan. 9, 2021, 49 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,329, filed Jan. 9, 2021, 50 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,334, filed Jan. 9, 2021, 49 pages, VMware, Inc. |
Peterson, Larry L., et al., “OS Support for General-Purpose Routers,” Month Unknown 1999, 6 pages, Department of Computer Science, Princeton University. |
Pettit, Justin, et al., “Virtual Switching in an Era of Advanced Edges,” In Proc. 2nd Workshop on Data Center—Converged and Virtual Ethernet Switching (DCCAVES), Sep. 2010, 7 pages, vol. 22. ITC. |
Spalink, Tammo, et al., “Building a Robust Software-Based Router Using Network Processors,” Month Unknown 2001, 14 pages, ACM, Banff, Canada. |
Turner, Jon, et al., “Supercharging PlanetLab—High Performance, Multi-Application Overlay Network Platform,” SIGCOMM-07, Aug. 27-31, 2007, 12 pages, ACM, Kyoto, Japan. |
Author Unknown, “8.6 Receive-Side Scaling (RSS),” Month Unknown 2020, 2 pages, Red Hat, Inc. |
Herbert, Tom, et al., “Scaling in the Linux Networking Stack,” Jun. 2, 2020, retrieved from https://01.org/linuxgraphics/gfx-docs/drm/networking/scaling.html. |
Non-Published Commonly Owned U.S. Appl. No. 16/890,890 (F967), filed Jun. 2, 2020, 39 pages, VMware, Inc. |
Stringer, Joe, et al., “OVS Hardware Offloads Discussion Panel,” Nov. 7, 2016, 37 pages, retrieved from http://openvswitch.org/support/ovscon2016/7/1450-stringer.pdf. |
Author Unknown, “An Introduction to SmartNICs,” The Next Platform, Mar. 4, 2019, 4 pages, retrieved from https://www.nextplatform.com/2019/03/04/an-introduction-to-smartnics/. |
Author Unknown, “In-Hardware Storage Virtualization—NVMe SNAP™ Revolutionizes Data Center Storage Composable Storage Made Simple,” Month Unknown 2019, 3 pages, Mellanox Technologies, Sunnyvale, CA, USA. |
Author Unknown, “Package Manager,” Wikipedia, Sep. 8, 2020, 10 pages. |
Author Unknown, “VMDK,” Wikipedia, May 17, 2020, 3 pages, retrieved from https://en.wikipedia.org/w/index.php?title=VMDK&oldid=957225521. |
Author Unknown, “vSphere Managed Inventory Objects,” Aug. 3, 2020, 3 pages, retrieved from https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-4D4B3DF2-D033-4782-A030-3C3600DE5A7F.html, VMware, Inc. |
Grant, Stewart, et al., “SmartNIC Performance Isolation with FairNIC: Programmable Networking for the Cloud,” SIGCOMM '20, Aug. 10-14, 2020, 13 pages, ACM, Virtual Event, USA. |
Liu, Ming, et al., “Offloading Distributed Applications onto SmartNICs using iPipe,” SIGCOMM '19, Aug. 19-23, 2019, 16 pages, ACM, Beijing, China. |
Suarez, Julio, “Reduce TCO with Arm Based SmartNICs,” Nov. 14, 2019, 12 pages, retrieved from https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/reduce-tco-with-arm-based-smartnics. |
Author Unknown, “vSAN Planning and Deployment,” Update 3, Aug. 20, 2019, 85 pages, VMware, Inc., Palo Alto, CA, USA. |
Author Unknown, “What is End-to-End Encryption and How does it Work?,” Mar. 7, 2018, 4 pages, Proton Technologies AG, Geneva, Switzerland. |
Harris, Jim, “Accelerating NVME-oF* for VMs with the Storage Performance Development Kit,” Flash Memory Summit, Aug. 2017, 18 pages, Intel Corporation, Santa Clara, CA. |
Perlroth, Nicole, “What is End-to-End Encryption? Another Bull's-Eye on Big Tech,” The New York Times, Nov. 19, 2019, 4 pages, retrieved from https://nytimes.com/2019/11/19/technology/end-to-end-encryption.html. |
PCT International Search Report and Written Opinion of Commonly Owned International Patent Application PCT/US2021/042120, dated Nov. 2, 2021, 10 pages. International Searching Authority (EPO). |
Angeles, Sara, “Cloud vs. Data Center: What's the difference?” Nov. 23, 2018, 1 page, retrieved from https://www.businessnewsdaily.com/4982-cloud-vs-data-center.html. |
Author Unknown, “Middlebox,” Wikipedia, Nov. 19, 2019, 1 page, Wikipedia.com. |
Doyle, Lee, “An Introduction to smart NICs and their Benefits,” Jul. 2019, 2 pages, retrieved from https://www.techtarget.com/searchnetworking/tip/An-introduction-to-smart-NICs-and-ther-benefits. |
Number | Date | Country | |
---|---|---|---|
20220103490 A1 | Mar 2022 | US |
Number | Date | Country | |
---|---|---|---|
63084513 | Sep 2020 | US |