More operations normally associated with a server are being pushed to programmable network interface controllers (NICs), including flow processing for virtualized compute nodes. As these programmable NICs become more prevalent and perform more flow processing on behalf of virtualized networks, optimizations to that flow processing will enhance the functionality of programmable NICs. Accordingly, it is desirable to optimize the flow processing offloaded to a programmable NIC.
Some embodiments of the invention provide a method for configuring a physical network card or physical network controller (pNIC) to provide flow processing offload (FPO) for a host computer connected to the pNIC. The host computer hosts a set of compute nodes (e.g., virtual machines, Pods, containers, etc.) in a virtual network. The set of compute nodes are each associated with a set of interfaces (virtual NICs, ports, etc.) that are each assigned a locally-unique virtual port identifier (VPID) by a virtual network controller. The pNIC includes a set of interfaces (physical ports connected to a physical network, peripheral component interconnect express (PCIe) ports, physical functions (PFs), virtual functions (VFs), etc.) that are assigned physical port identifiers (PPIDs) by the pNIC. The method includes providing the pNIC with a set of mappings between VPIDs and PPIDs. The method also includes sending updates to the mappings as compute nodes migrate, connect to different interfaces of the pNIC, are assigned different VPIDs, etc. The method of some embodiments is performed by a flow processing and action generator. In some embodiments, the flow processing and action generator executes on processing units of the host computer, while in other embodiments, the flow processing and action generator executes on a set of processing units of a pNIC that includes flow processing hardware and a set of programmable processing units.
The method further includes providing the pNIC with a set of flow entries for a set of data message flows associated with the set of compute nodes. The set of flow entries, in some embodiments, define one or both of a set of matching criteria and an action using VPIDs. In some embodiments, the action specifies a destination. Each destination, in some embodiments, is specified in terms of a VPID and the pNIC resolves the VPID into a PPID (i.e., egress interface) using the set of mappings. Each flow entry, in some embodiments, is for a particular data message flow and is generated based on a first data message received in the data message flow. The flow entry is generated, in some embodiments, based on the result of data message processing performed by a virtual (e.g., software) switch and provided to the pNIC to allow the pNIC to process subsequent data messages in the data message flow.
In some embodiments, the pNIC stores the set of flow entries and the mappings in network processing hardware to perform flow processing for the set of compute nodes executing on the connected host computer. The flow entries and mapping tables, in some embodiments, are stored in separate memory caches (e.g., content-addressable memory (CAM), ternary CAM (TCAM), etc.) to perform fast lookups. In some embodiments, the pNIC receives data messages at an interface of the pNIC and performs a lookup in the set of flow entries stored by the network processing hardware to identify an action for the data message based on matching criteria associated with the data message. Flow entries, in some embodiments, include a set of criteria for identifying a data message flow and an action that specifies forwarding the data message to an interface identified by a VPID. If a flow entry specifying a VPID as a destination for a received data message exists, the pNIC performs a lookup in the VPID to PPID mappings to identify an interface of the pNIC associated with the VPID. The pNIC then forwards the data message to an interface of the pNIC identified by the PPID mapped to the specified destination VPID.
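The two-stage lookup described above can be illustrated with the following simplified Python sketch. The table layouts, field names, and identifier values are illustrative assumptions for clarity and do not represent the hardware structures of any particular embodiment.

```python
# Illustrative sketch of the two-stage lookup: a flow-entry match that yields a
# destination VPID, followed by a VPID-to-PPID resolution. All table layouts,
# field names, and identifier values are assumptions made for clarity.

FLOW_ENTRIES = {
    # (src_ip, dst_ip, dst_port) -> action naming a destination VPID
    ("198.51.100.5", "198.51.100.7", 443): {"action": "FWD", "dest_vpid": 2225},
}

VPID_TO_PPID = {
    2225: 1111,  # compute-node interface VPID -> PPID of the connected pNIC interface
}

SLOW_PATH = "SLOW_PATH"  # stands in for the interface leading to the virtual switch


def process_data_message(msg):
    """Return (action, egress PPID) for a received data message."""
    entry = FLOW_ENTRIES.get((msg["src_ip"], msg["dst_ip"], msg["dst_port"]))
    if entry is None:
        return (SLOW_PATH, None)          # default entry: send to the virtual switch
    if entry["action"] != "FWD":
        return (entry["action"], None)    # e.g., DROP
    ppid = VPID_TO_PPID.get(entry["dest_vpid"])
    if ppid is None:
        return (SLOW_PATH, None)          # mapping fault: redirect to the slow path
    return ("FWD", ppid)                  # forward out the pNIC interface with this PPID


print(process_data_message(
    {"src_ip": "198.51.100.5", "dst_ip": "198.51.100.7", "dst_port": 443}))  # ('FWD', 1111)
```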
The network processing hardware, in some embodiments, is also programmed with a default flow entry that identifies an interface of the pNIC as a destination for data messages not matching with other flow entries. The identified interface, in some embodiments, is an interface used to forward the data message to a virtual (e.g., software) switch of the flow processing and action generator. The virtual switch, in some embodiments, performs first-data-message processing (e.g. slow path processing) and based on the result of the processing returns a flow entry to the network processing hardware for processing subsequent data messages in the data message flow to which the data message belongs.
Some embodiments provide a method for updating VPID to PPID mappings when a compute node connects to a different interface of the pNIC. Connecting to a different interface of the pNIC occurs, in some embodiments, due to a compute node being migrated to a different interface of the pNIC or even a different host computer that is connected to a different interface of the pNIC when the pNIC provides FPO for multiple host computers. In some embodiments, connecting to a different interface of the pNIC is based on a VM transitioning from a passthrough mode (e.g., connected to a VF) to an emulated mode (e.g., connected to a PF) or vice versa. In such cases, flow entries identifying the VPID of the compute-node interface as a destination are still valid even though the compute-node interface is now connected to a different pNIC interface (i.e., with a different PPID). Data messages matching those flow entries are directed to the pNIC interface currently connected to the compute-node interface based on a lookup in the mapping table identifying the updated mapping of the VPID to the PPID of the currently-connected pNIC interface.
The method, in some embodiments, also addresses cases in which the pNIC includes multiple physical ports (PPs) connected to a physical network for which link aggregation (e.g., LACP, trunking, bundling, teaming, etc.) is enabled. A mapping of a first VPID to a first PPID of a first PP connected to the physical network, in some embodiments, is updated to map the first VPID to a second PPID of a second PP connected to the physical network in the event of (1) a failure of the first PP or (2) an updated load balancing decision to direct the traffic associated with the VPID to the second PP instead of the first PP.
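Under the same illustrative assumptions, the following sketch shows how such a remapping repoints traffic (e.g., after a migration, a VF-to-PF transition, or an uplink failover) without rewriting any flow entries.

```python
# Sketch of repointing a VPID to a different pNIC interface. The dictionary
# layout and identifier values are illustrative assumptions.

vpid_to_ppid = {2225: 1111}   # VPID 2225 currently maps to the interface with PPID 1111


def remap_vpid(mapping, vpid, new_ppid):
    """Replace the PPID a VPID resolves to; flow entries naming the VPID stay valid."""
    old_ppid = mapping.get(vpid)
    mapping[vpid] = new_ppid
    return old_ppid


remap_vpid(vpid_to_ppid, 2225, 1112)   # traffic for VPID 2225 now egresses PPID 1112
```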
In some embodiments, an updated VPID to PPID mapping is required for a compute-node interface that is assigned a new VPID after a change to the configuration of the compute-node interface, even if the vNIC is still connected to the same interface of the pNIC. For any of the updated VPID to PPID mappings, the flow processing and action generator, in some embodiments, sends a set of instructions (e.g., two separate instructions or a single instruction to perform two actions) to remove the invalid VPID to PPID mapping and create a new VPID to PPID mapping for the updated association between a VPID and a PPID. Because the configuration of the compute-node interface has changed, some previous data message flows are no longer valid and any data messages matching flow entries for those data message flows are redirected to the virtual switch of the flow processing and action generator to evaluate based on the new configuration of the compute-node interface. In some embodiments, the redirection to the virtual switch is based on a lookup in the VPID to PPID mapping table returning a ‘fault’ (e.g., a null result or other result indicating that there is no entry for the VPID in the mapping table). In some embodiments, data messages that match a flow entry but fail to match a VPID to PPID mapping are forwarded to the flow processing and action generator along with an identifier for the flow entry that the data message matched in order to allow the flow processing and action generator to instruct the pNIC to remove the invalid flow entry (i.e., a flow entry pointing to a VPID that no longer exists) from the set of flow entries stored by the network processing hardware.
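The fault path just described can be sketched as follows. The table layouts, flow entry identifier, and queue of removal instructions are illustrative assumptions rather than the actual interface between the hardware and the flow processing and action generator.

```python
# Sketch of the fault path: a data message matches a flow entry, the VPID lookup
# fails, and the message is redirected to the flow processing and action
# generator together with the matching flow entry's identifier. The structures
# and identifiers are illustrative assumptions.

vpid_to_ppid = {}                          # VPID 2225 has been invalidated and removed
flow_entries = {17: {"dest_vpid": 2225}}   # flow entry 17 still names the stale VPID

removal_queue = []                         # instructions the generator will send back


def handle_mapping_fault(entry_id):
    """Model the generator's response: queue removal of the stale flow entry."""
    removal_queue.append(("REMOVE_FLOW_ENTRY", entry_id))


def forward(entry_id, msg):
    entry = flow_entries[entry_id]
    ppid = vpid_to_ppid.get(entry["dest_vpid"])
    if ppid is None:
        # mapping fault: punt to the virtual switch with the flow entry identifier
        handle_mapping_fault(entry_id)
        return ("SLOW_PATH", entry_id)
    return ("FWD", ppid)


print(forward(17, {"payload": b"..."}))    # ('SLOW_PATH', 17); removal_queue now non-empty
```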
The flow processing and action generator, in some embodiments, stores information regarding flow entries generated for each VPID identified as a source or destination VPID. When a VPID for a particular compute-node interface is invalidated (e.g., as described above) and a new configuration has taken effect, the flow processing and action generator can identify the flow entries associated with the invalidated VPID and instruct the pNIC to remove the identified flow entries from the set of flow entries stored by the network processing hardware. This process need not be performed before the configuration change can take effect and can be performed as a background process by the flow processing and action generator and the pNIC when processing capacity is available.
Removing the flow entries specifying the invalidated VPID as a destination allows the VPID to be reused without concern for old flows associated with the compute-node interface previously associated with the invalidated VPID being directed to the compute-node interface currently associated with the reused VPID. Additionally, the network processing hardware, in some embodiments, performs a process for aging out flow entries that have not been used (i.e., no data messages matching the flow entry have been received) for a particular amount of time. Accordingly, in such embodiments, the VPIDs may be safely reused, even without the flow processing and action generator instructing the pNIC to remove the invalidated flow entries, after an amount of time based on the particular amount of time (e.g., the particular amount of time plus a timeout for previously active flows directed to the invalidated VPID). In some embodiments, the VPIDs are configured to have more bits than the PPIDs such that the VPID to PPID mapping is sparse (i.e., there are at least as many unused VPIDs as the number of possible PPIDs).
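As a simple illustration of this sparsity condition, the following sketch uses assumed bit widths; the actual widths depend on the embodiment.

```python
# Sketch of the sparsity condition: with more VPID bits than PPID bits there are
# at least as many unused VPIDs as there are possible PPIDs, leaving room to
# retire and reuse identifiers. Bit widths are illustrative assumptions.

PPID_BITS = 16
VPID_BITS = 20

ppid_space = 1 << PPID_BITS   # 65,536 possible PPIDs
vpid_space = 1 << VPID_BITS   # 1,048,576 possible VPIDs

# Even if every PPID had a VPID mapped to it, the unused VPIDs still outnumber
# the whole PPID space.
assert vpid_space - ppid_space >= ppid_space
```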
The mapping table, in some embodiments, is also used to identify VPIDs associated with PPIDs on which a data message is received. A data message received at a PPID is associated with the VPID to which the PPID maps, and the lookup in the set of flow entries is performed based on the VPID as well as a set of other matching criteria. For PPIDs that are associated with multiple VPIDs, e.g., a physical function (PF) of the pNIC connected to an interface of a virtual switch connected to multiple compute-node interfaces each with a different VPID, a data message received at the PF is already associated with a VPID to distinguish the traffic from different sources. Additionally, for VPIDs that map to the PPID identifying the PF connected to the virtual switch, some embodiments include an indication in the mapping table (e.g., a flag bit associated with the mapping entry) that the VPID should be included with the forwarded data message matching the mapping entry.
In some embodiments, the mapping table is not programmed with mappings for VPIDs that connect to a virtual switch, and the network processing hardware is programmed to send any data messages that match a flow entry but fail to match an entry in the mapping table to the pNIC interface connected to the virtual switch (i.e., of the flow processing and action generator) along with the destination VPID specified in the matching flow entry. The virtual switch can then forward the data message based on the destination VPID or other matching criteria of the data message. The virtual switch, in some embodiments, includes a fast path processing pipeline based on stored flow entries as well as a slow path processing pipeline based on the configuration of the virtual network and the characteristics of a received data message.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all of the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments of the invention provide a method for configuring a physical network card or physical network controller (pNIC) to provide flow processing offload (FPO) for a host computer connected to the pNIC. The host computer hosts a set of compute nodes (e.g., virtual machines (VMs), Pods, containers, etc.) in a virtual or logical network. The set of compute nodes are each associated with a set of interfaces (virtual NICs, ports, etc.) that are each assigned a locally-unique virtual port identifier (VPID) by a flow processing and action generator. The pNIC includes a set of interfaces (physical ports connected to a physical network, peripheral component interconnect express (PCIe) ports including physical functions (PFs) and virtual functions (VFs), etc.) that are assigned physical port identifiers (PPIDs) by the pNIC.
As used in this document, physical functions (PFs) and virtual functions (VFs) refer to ports exposed by a pNIC using a PCIe interface. A PF refers to an interface of the pNIC that is recognized as a unique resource with a separately configurable PCIe interface (e.g., separate from other PFs on a same pNIC). A VF refers to a virtualized interface that is not separately configurable and is not recognized as a unique PCIe resource. VFs are provided, in some embodiments, as a passthrough mechanism that allows compute nodes executing on a host computer to receive data messages from the pNIC without traversing a virtual switch of the host computer. The VFs, in some embodiments, are provided by virtualization software executing on the pNIC.
In some embodiments, the virtual network includes one or more logical networks including one or more logical forwarding elements, such as logical switches, routers, gateways, etc. In some embodiments, a logical forwarding element (LFE) is defined by configuring several physical forwarding elements (PFEs), some or all of which execute on host computers along with the deployed compute nodes (e.g., VMs, Pods, containers, etc.). The PFEs, in some embodiments, are configured to implement two or more LFEs to connect two or more different subsets of deployed compute nodes. The virtual network, in some embodiments, is a software-defined network (SDN) such as that deployed by NSX-T™ and includes a set of SDN managers and SDN controllers. In some embodiments, the set of SDN managers manage the network elements and instruct the set of SDN controllers to configure the network elements to implement a desired forwarding behavior for the SDN. The set of SDN controllers, in some embodiments, interact with local controllers on host computers to configure the network elements. In some embodiments, these managers and controllers are the NSX-T managers and controllers licensed by VMware, Inc.
As used in this document, data messages refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message is used in this document to refer to various formatted collections of bits that are sent across a network. The formatting of these bits can be specified by standardized protocols or non-standardized protocols. Examples of data messages following standardized protocols include Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to L2, L3, L4, and L7 layers (or layer 2, layer 3, layer 4, and layer 7) are references, respectively, to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model.
In some embodiments, connections between the vNICs 112a-n and the VFs 133a-n are enabled by VF drivers 118a-n on the host computer 110. Host computer 110 also includes a second set of VMs 113a-m that connect to a virtual switch 115 of the host computer 110. The virtual switch 115 connects to the pNIC 120 through a PF 134m over the PCIe bus 131. In some embodiments, the PFs 134a-m are also virtualized by virtualization software 135 to appear as separate PCIe connected devices to the host computer 110 or a set of connected host devices. VMs and vNICs are just one example of a compute node and an interface that may be implemented in embodiments of the invention.
The pNIC 120 also includes a physical network port 121 that connects the pNIC 120 and the VMs 111a-n and vNICs 112a-n to a physical network. The PCIe bus 131 and physical network port 121 connect to the flow processing offload (FPO) hardware 140 to perform flow processing for the VMs 111a-n and vNICs 112a-n. The FPO hardware 140 includes a flow entry table 143 that stores a set of flow entries for performing flow processing. The flow entries, in some embodiments, specify a set of matching criteria and an action to take for data messages that match the matching criteria. One or both of the set of matching criteria and the action use VPIDs to identify compute-node interfaces. Additional matching criteria, in some embodiments, include header values (e.g., header values related to L2, L3, L4, etc.) of the data message. In some embodiments, the possible actions include dropping the data message or forwarding the data message to a VPID.
The FPO hardware 140 also includes a mapping table 142. Mapping table 142 includes a set of VPID to PPID mappings that are used to resolve the VPIDs specified in flow entries into interfaces of the pNIC 120. The mapping table 142 maps VPIDs to PPIDs, and the PPIDs identify interfaces of the pNIC 120. In some embodiments, the PPIDs are assigned by the pNIC 120, and the VPIDs are assigned and associated with particular interfaces of the pNIC 120 by a flow processing and action generator (not shown). As will be discussed in the examples below, specifying the destinations in terms of VPIDs and using a mapping table to identify an interface of the pNIC allows flow entries to remain valid even as an interface of a compute node changes its association between one interface of the pNIC to an association with another interface of the pNIC.
The FPAG 260 also includes a virtual switch 261, which in turn includes a slow path processor 263 and a flow generator 264. The slow path processor 263 performs slow path processing for data messages for which the FPO hardware 140 does not store a valid flow entry. The results of the slow path processing are then used by the flow generator 264 to generate a flow entry to offload the flow processing to the FPO hardware 140. For example, the slow path processing may indicate that a particular forwarding rule applies to the data message flow and supplies a set of criteria that uniquely identify the flow to which the data message belongs and an action to take for future data messages belonging to that flow. In some embodiments, for a particular forwarding rule that uses a reduced set of criteria, the generated flow entry includes wildcard values in the set of matching criteria specified by the flow entry for those data message characteristics that are not used by the particular forwarding rule to determine the action.
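A simplified sketch of how such a wildcarded flow entry could be derived from a slow-path result follows. The field names, rule format, and example values are illustrative assumptions.

```python
# Sketch of generating a wildcarded flow entry from a slow-path result: only the
# data message characteristics the matched forwarding rule actually used become
# matching criteria; everything else is a wildcard. Field names are assumptions.

FIELDS = ("src_ip", "src_port", "dst_ip", "dst_port", "proto")


def make_flow_entry(msg, used_fields, action):
    """Build a flow entry whose unused criteria are wildcards ('*')."""
    return {
        "match": {f: (msg[f] if f in used_fields else "*") for f in FIELDS},
        "action": action,
    }


msg = {"src_ip": "198.51.100.5", "src_port": 49152,
       "dst_ip": "198.51.100.7", "dst_port": 443, "proto": "tcp"}

# Suppose the rule that matched only looked at the destination address and port:
entry = make_flow_entry(msg, {"dst_ip", "dst_port"}, {"type": "FWD", "dest_vpid": 2225})
# entry["match"] == {'src_ip': '*', 'src_port': '*',
#                    'dst_ip': '198.51.100.7', 'dst_port': 443, 'proto': '*'}
```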
In some embodiments, the virtual network is a software-defined network (SDN) that includes a set of SDN managers and a set of SDN controllers.
The local controller 365, in some embodiments, configures the slow path processor 263 with forwarding rules and additional policies (e.g., firewall policies, encryption policies, etc.) necessary to implement a data message processing pipeline defined for the SDN (or a set of logical forwarding elements of the SDN). The local controller 365, in some embodiments, also provides information received from the pNIC 120 and the SDN controllers 366 to the mapping generator 368 to identify the VPIDs and PPIDs of the different interfaces and the connections between the interfaces to generate VPID to PPID mappings. Additionally, the local controller 365 notifies the mapping generator 368 when a configuration change affects the VPID to PPID mappings to allow the mapping generator 368 to generate a new or updated VPID to PPID mapping and, when applicable, identify a mapping that must be deleted. While FPAG 260 is shown in
The method includes providing the pNIC with a set of mappings between VPIDs and PPIDs.
The process 600 also identifies (at 610) interfaces of the pNIC connected to the identified compute-node interfaces and PPIDs associated with those pNIC interfaces. The PPIDs, in some embodiments, are identified by the flow processing and action generator by querying the pNIC for the PPIDs. In some embodiments, the flow processing and action generator is aware of all the interfaces of the pNIC and their PPIDs and determines the interface of the pNIC to which each compute-node interface connects.
Based on the identified VPIDs for the compute-node interfaces and the PPIDs of the interfaces of the pNIC to which they connect, the flow processing and action generator generates (at 615) a set of mappings between the VPIDs and PPIDs. The generated set of mappings is sent (at 620) to the FPO hardware of the pNIC. In some embodiments, the generated set of mappings is sent to the FPO hardware using a PF of a PCIe connection between the processing units that execute the flow processing and action generator and the FPO hardware. As described above, in some embodiments, the processing units executing the flow processing and action generator are processing units of a host computer, while in other embodiments, the pNIC is an integrated NIC (e.g., a programmable NIC, smart NIC, etc.) that includes the processing units as well as the FPO hardware.
The FPO hardware receives (at 625) the VPID to PPID mappings sent from the flow processing and action generator. The received VPID to PPID mappings are stored (at 630) in a mapping table of the FPO hardware. In some embodiments, the mapping table is stored in a memory cache (e.g., content-addressable memory (CAM), ternary CAM (TCAM), etc.) that can be used to identify PPIDs based on VPIDs or VPIDs based on PPIDs. One of ordinary skill in the art will appreciate that the process 600 describes an initial mapping of VPIDs to PPIDs and that certain operations represent multiple operations or are performed in different orders (e.g., operation 605 may be preceded by operation 610) in different embodiments and that the description of process 600 is not meant to exclude equivalent processes for achieving the same result.
The method also includes sending updates to the mappings as compute nodes migrate, connect to different interfaces of the pNIC, are assigned different VPIDs, etc. One of ordinary skill in the art will appreciate that a modified process 600 for a particular VPID to PPID mapping, in some embodiments, is performed each time the flow processing and action generator detects a change to either a VPID or an association between a VPID and a PPID. For example, operation 605 identifies a specific set of VPIDs that are added, moved, or invalidated by a particular configuration change of the virtual network, and operation 610 identifies a current association of the added or moved set of VPIDs to a set of PPIDs of the pNIC. Generating (at 615) the mapping entries is performed only for the added or moved set of VPIDs mapped to the identified set of PPIDs. Additionally, sending (at 620) the generated mapping for an updated VPID to PPID mapping, in some embodiments, includes sending an instruction to remove a previously sent VPID to PPID mapping that is invalid based on the detected configuration change (invalidating a VPID or moving the VPID to connect to an interface identified by a different PPID).
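The incremental update described above can be sketched as a single instruction set covering both the removal of an invalidated mapping and the installation of the new one. The message format and the apply step are illustrative assumptions rather than the actual interface to the FPO hardware.

```python
# Sketch of an incremental mapping update: one instruction set that removes an
# invalidated mapping and installs the new one. Format and values are assumptions.

def build_mapping_update(old_vpid, new_vpid, new_ppid):
    """One instruction set covering both the removal and the new mapping."""
    ops = []
    if old_vpid is not None:
        ops.append({"op": "REMOVE", "vpid": old_vpid})
    if new_vpid is not None:
        ops.append({"op": "ADD", "vpid": new_vpid, "ppid": new_ppid})
    return ops


def apply_mapping_update(mapping_table, ops):
    """Model the FPO hardware applying the instructions to its mapping table."""
    for op in ops:
        if op["op"] == "REMOVE":
            mapping_table.pop(op["vpid"], None)
        elif op["op"] == "ADD":
            mapping_table[op["vpid"]] = op["ppid"]


table = {2225: 1111}
apply_mapping_update(table, build_mapping_update(old_vpid=2225, new_vpid=2226, new_ppid=1111))
# table == {2226: 1111}
```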
The method further includes providing the pNIC with a set of flow entries for a set of data message flows associated with the set of compute nodes. The set of flow entries, in some embodiments, define one or both of a set of matching criteria and an action using VPIDs. In some embodiments, the action specifies a destination. Each destination, in some embodiments, is specified in terms of a VPID and the pNIC resolves the VPID into a PPID (i.e., egress interface) using the set of mappings. Each flow entry, in some embodiments, is for a particular data message flow and is generated based on a first data message received in the data message flow. The flow entry is generated, in some embodiments, based on the result of data message processing performed by a virtual (e.g., software) switch and provided to the pNIC to allow the pNIC to process subsequent data messages in the data message flow.
The flow processing and action generator processes (at 715) the data message through a processing pipeline to determine an action to take for subsequent data messages in the same data message flow. For example, the processing pipeline, in some embodiments, includes a set of logical forwarding operations along with a set of other operations (e.g., firewall, middlebox services, etc.) that result in either a decision to drop the data messages of the data message flow or identify a destination for data messages of the data message flow (possibly with an encapsulation or decapsulation before forwarding). Identifying the destination for data messages of a data message flow, in some embodiments, includes identifying a VPID of a compute-node interface that is a destination of the data messages of the data message flow.
Based on (1) characteristics of the received data message that identify the data message flow to which it belongs and (2) the action determined to be taken based on processing the data message, the flow processing and action generator generates (at 720) a flow entry for the FPO hardware to use to process subsequent data messages of the data message flow. The flow processing and action generator sends (at 725) the generated flow entry to the FPO hardware. As described above, in some embodiments, the generated flow entry is sent to the FPO hardware using a PF of a PCIe connection between the processing units that execute the flow processing and action generator and the FPO hardware.
The FPO hardware receives (at 730) the flow entry sent from the flow processing and action generator. The received flow entries are stored (at 735) in a set of flow entries (e.g., a flow entry table) of the FPO hardware. In some embodiments, the set of flow entries is stored in a memory cache (e.g., content-addressable memory (CAM), ternary CAM (TCAM), etc.) that can be used to identify a flow entry that specifies a set of matching criteria associated with a received data message.
In some embodiments, the pNIC stores the set of flow entries and the mappings in network processing hardware to perform flow processing for the set of compute nodes executing on the connected host computer. The flow entries and mapping tables, in some embodiments, are stored in separate memory caches (e.g., content-addressable memory (CAM), ternary CAM (TCAM), etc.) to perform fast lookup.
The process 800 determines (at 810) if the received data message matches a flow entry stored by the FPO hardware. In some embodiments, determining whether the FPO hardware stores a flow entry matching the received data message is based on a lookup in a set of stored flow entries based on characteristics of the received data message (e.g., a 5-tuple, header values at different layers of the OSI model, metadata, etc.). If the received data message is determined (at 810) to not match a flow entry, the process 800 proceeds to forward (at 815) the data message to the flow processing and action generator for slow path processing, receive (at 820) a flow entry for the data message flow to which the received data message belongs, and store (at 825) the flow entry for processing subsequent data messages of the data message flow. Operations 815-825 are described in more detail above with the discussion of operations 710, 730, and 735 of
If the received data message is determined to match a flow entry, the process 800 proceeds to determine (at 830) whether the matching flow entry specifies that data messages matching the flow entry be forwarded to a destination VPID. If the process 800 determines that the flow entry specifies that the data message be forwarded to a destination VPID, the process 800 determines (at 835) whether a mapping for the VPID exists in the mapping table. In some embodiments, determining whether a mapping for the VPID exists in the mapping table includes searching a content-addressable memory (CAM) based on the VPID. If the process 800 determines (at 830) that the flow entry does not specify a destination VPID (e.g., the flow entry specifies that the data message should be dropped) or the process 800 determines (at 835) that a mapping for the VPID exists in the mapping table, the action specified in the flow entry is performed (at 800) and the process ends.
If the process 800 determines (at 835) that the VPID is not in the mapping table, the process 800 returns to operations 815-825. In some embodiments, determining that the VPID is not in the mapping table 142 is based on the VPID lookup returning a default result that directs the data message to the interface associated with slow path processing (associated with operations 815-825). In other embodiments, instead of including a default entry in the mapping table 142, the determination that the VPID is not in the mapping table is based on a VPID lookup returning a ‘fault’ (e.g., a null result or other result indicating that there is no entry for the VPID in the mapping table). In some such embodiments, in which there is no default entry in the mapping table 142, the FPO hardware 140 is configured to direct all data messages for which a fault is returned to the virtual switch. As will be described below in reference to
For example, flow entries 951 and 952 specify a VLAN identifier in the sets of matching criteria 950, while flow entry 954 specifies a VXLAN identifier in the set of matching criteria 950. In some embodiments, additional types of metadata that are added internally are also specified, such as in flow entry 955 which specifies a set of VPIDs (i.e., VPIDs 0001-0003) as a metadata criterion (characteristic) that is associated with a data message after a PPID identifying an interface of the pNIC on which the data message is received is translated into a VPID. VPIDs 0001-0003, in some embodiments, are associated with pNIC interfaces connecting to the physical network, such that flow entry 955 only applies to data messages received from the physical network.
In some embodiments, IP addresses are specified in classless inter-domain routing (CIDR) notation to identify an IP prefix representing a range of IP addresses (e.g., a range of IP addresses assigned to a particular application or user group that should or should not be granted access to a certain other application or user group). For example, flow entry 953 specifies a source IP range IP4/28 indicating an IP address “IP4” and a mask length of 28 bits such that any IP address matching the first 28 bits will be a match. Similarly, flow entry 953 specifies a destination IP range IP5/30 indicating an IP address “IP5” and a mask length of 30 bits such that any IP address matching the first 30 bits will be a match. Additionally, the flow entries, in some embodiments, include at least one criterion using a wildcard value (identified by “*”) that is considered a match for any value of the associated characteristic of a received data message. For example, rules 952-956 all specify at least one criterion (e.g., data message characteristic) using a wildcard value.
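A prefix match of the kind used by flow entry 953 can be sketched with the Python standard library as follows. The addresses shown are illustrative documentation-range values; a hardware implementation would perform this comparison in a TCAM rather than in software.

```python
# Sketch of CIDR prefix matching; a /28 entry matches any address whose first
# 28 bits agree with the prefix.
import ipaddress


def prefix_match(addr, cidr):
    """True if addr falls inside the prefix, e.g. '198.51.100.0/28'."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(cidr, strict=False)


print(prefix_match("198.51.100.9", "198.51.100.0/28"))   # True
print(prefix_match("198.51.100.99", "198.51.100.0/28"))  # False
```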
In some embodiments, the flow entries are assigned priorities, such that, for a data message that matches multiple flow entries, an action specified in the flow entry with the highest priority is taken for the data message. Priority, in some embodiments, is determined by the specificity of the matching criteria of the flow entries when generating the flow entry during slow path processing and is included in the generated flow entry. A default rule 956 is specified, in some embodiments, that directs data messages that do not match any higher-priority rules to a VPID (e.g., VPID 5000) associated with slow path processing (e.g., to a virtual switch of the flow processing and action generator).
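The priority-based selection described above can be sketched as follows, including a lowest-priority default entry that points at a slow-path VPID. The entry contents, priority values, and VPID values are illustrative assumptions.

```python
# Sketch of priority-based selection among flow entries, with a lowest-priority
# default entry directing unmatched traffic to a slow-path VPID.

def matches(entry, msg):
    """A data message matches if every non-wildcard criterion agrees."""
    return all(v == "*" or msg.get(k) == v for k, v in entry["match"].items())


def select_entry(entries, msg):
    """Return the matching entry with the highest priority (or None)."""
    hits = [e for e in entries if matches(e, msg)]
    return max(hits, key=lambda e: e["priority"], default=None)


entries = [
    {"priority": 100, "match": {"dst_ip": "198.51.100.7"}, "action": ("FWD", 2225)},
    {"priority": 1,   "match": {},                         "action": ("FWD", 5000)},  # default
]

print(select_entry(entries, {"dst_ip": "198.51.100.7"})["action"])  # ('FWD', 2225)
print(select_entry(entries, {"dst_ip": "203.0.113.9"})["action"])   # ('FWD', 5000)
```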
Each flow entry, in some embodiments, includes an action associated with a data message that matches that flow entry. The actions, in some embodiments, include: a forwarding operation (FWD), a DROP for packets that are not to be forwarded, a header modification (along with a set of modified header values), a replication of the packet (along with a set of associated destinations), a decapsulation (DECAP) for encapsulated packets that require decapsulation before forwarding towards their destination, and an encapsulation (ENCAP) for packets that require encapsulation before forwarding towards their destination. In some embodiments, some actions specify a series of actions. For example, flow entry 954 specifies that a data message with source IP address “IP6,” any source MAC address, a source port “Port6,” a destination IP address “IP7,” a destination MAC address “MAC7,” a destination port “4789,” and metadata indicating that the data message is associated with a VXLAN “VXLAN2,” be decapsulated and forwarded to VPID “3189.” In some embodiments, the identified VPID is a VPID associated with a particular interface of a compute node executing on the host computer. The VPID identified by some flow entries that specify a DECAP action is a VPID for a physical function that connects to a virtual switch of the flow processing and action generator for processing the decapsulated data message through the slow path processing. For other flow entries that specify a DECAP action, the interface identifier (e.g., VPID or PPID) is an identifier for a loopback interface of the FPO hardware to allow the FPO hardware to process the inner data message (the decapsulated data message). In some embodiments, flow entries specifying a DECAP action also explicitly specify further processing of the decapsulated data message by the FPO hardware.
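Applying such a series of actions, e.g., a DECAP followed by a FWD as in flow entry 954, can be sketched as follows. The message model (an outer header wrapping an inner message) and the action encoding are illustrative assumptions.

```python
# Sketch of applying a flow entry's action list in order, e.g. DECAP then FWD.

def apply_actions(msg, actions):
    for act in actions:
        kind = act[0]
        if kind == "DROP":
            return None
        if kind == "DECAP":
            msg = msg["inner"]                          # strip the outer (e.g., VXLAN) header
        elif kind == "ENCAP":
            msg = {"outer": act[1], "inner": msg}       # wrap with the supplied outer header
        elif kind == "FWD":
            return {"egress_vpid": act[1], "msg": msg}  # hand off to the named VPID
    return msg


encapsulated = {"outer": {"vni": "VXLAN2"}, "inner": {"dst_ip": "198.51.100.7"}}
print(apply_actions(encapsulated, [("DECAP",), ("FWD", 3189)]))
# {'egress_vpid': 3189, 'msg': {'dst_ip': '198.51.100.7'}}
```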
Mapping table 942, in some embodiments, is stored in CAM and includes a set of VPID to PPID mappings 971-975 that specify a VPID in a “VPID” field 970, a corresponding PPID in a “PPID” field 980, and a flag bit indicating whether a VPID associated with a data message should be appended to a forwarded data message in an “Append VPID” field 990. The mapping table, as described above in relation to
For VPIDs that are not found in the mapping table 942, some embodiments define a default entry 975 specifying a wildcard 976 in the VPID field 970. In the embodiment illustrated in
In such cases, some embodiments include a flow entry identifier when forwarding the data message to the virtual switch of the flow processing and action generator. The flow entry identifier is stored in a metadata field or is appended to a data message in such a way as to allow the flow processing and action generator to identify that the identified flow entry should be removed from the set of flow entries stored by the FPO hardware. The VPID may be invalid because an associated compute-node interface has changed configuration and been assigned a new VPID, or the associated compute node has been shut down. If the compute-node interface has been assigned a new VPID, the mapping table is provided with a mapping entry that maps the newly assigned VPID to a PPID of an associated interface of the pNIC and the flow entries associated with the invalid VPID will eventually be removed as described above and as further described in relation to
In some embodiments, multiple VPIDs are associated with a single PPID. For example, mapping table entries 972, 974, and 975 are all associated with PPID 1111. In some embodiments, the append VPID field 990 is used to identify data messages for which the destination VPID should be forwarded along with the data message. As described above, PPID 1111 is associated with an interface of the pNIC connected to the virtual switch of the flow processing and action generator. The virtual switch, in some embodiments, provides a single connection to the pNIC for multiple emulated compute nodes, and appending the VPID (e.g., VPID 2225) allows the virtual switch to use local fast-path processing or another form of minimal processing to forward a data message associated with a VPID to its destination. Additionally, on the return path, the data message, in some embodiments, is associated with a VPID and the append VPID flag indicates that the VPID should not be removed before providing the data message to the FPO hardware 940. In other embodiments, VPIDs associated with data messages (e.g., stored in a metadata field of the data message) are kept by default. Appending (or keeping) the VPID on the return path allows the FPO hardware 940 to distinguish between the different compute nodes connected to the pNIC using the same interface.
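The append-VPID behavior can be sketched as follows: when several VPIDs map to the single PPID of the PF connected to the virtual switch, the destination VPID travels with the forwarded data message so the switch can demultiplex it. The table layout, flag, and identifier values are illustrative assumptions.

```python
# Sketch of the append-VPID flag used to demultiplex traffic for several
# compute nodes that share the PF connected to the virtual switch.

# vpid -> (ppid, append_vpid_flag)
mapping_table = {
    2225: (1111, True),    # emulated compute node behind the virtual switch
    2226: (1111, True),    # another emulated compute node, same PF
    3189: (2222, False),   # passthrough compute node on its own VF
}


def deliver(dest_vpid, msg):
    ppid, append_vpid = mapping_table[dest_vpid]
    if append_vpid:
        msg = dict(msg, metadata={"vpid": dest_vpid})   # keep the VPID with the message
    return (ppid, msg)


print(deliver(2225, {"dst_ip": "198.51.100.7"}))
# (1111, {'dst_ip': '198.51.100.7', 'metadata': {'vpid': 2225}})
```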
The process 1400 then identifies (at 1410) a set of flow entries related to the invalidated VPID. In some embodiments, the FPAG stores each flow entry generated specifying a VPID as either a source or destination. Based on the identified, invalidated VPID, the FPAG can identify each entry specifying the invalidated VPID as either a source or destination. In some embodiments, the FPAG does not identify all of the flow entries associated with the invalidated VPID, but instead identifies a flow entry related to the invalid VPID based on a data message received from the FPO hardware. The data message received from the FPO hardware, in some embodiments, includes (e.g., in metadata or as the content of a control message) a flow entry identifier for a flow entry matching a data message received at the FPO hardware that produced a fault (or hit a default rule) from a lookup in the mapping table. One of ordinary skill in the art will appreciate that operations 715-725 (of
The process 1400 then generates (at 1415) a set of instructions to remove the identified flow entries from the FPO hardware. The set of instructions, in some embodiments, are generated as a single instruction to remove multiple flow entries, while in other embodiments, the set of instructions includes a separate instruction to remove each identified flow entry. In some embodiments, the set of instructions are generated as a background process when resources are available.
The set of instructions are sent (at 1420) to the FPO hardware to have the FPO hardware remove the flow entries from its storage. The FPO hardware then removes the invalidated flow entries and the process 1400 ends. In some embodiments, the FPO hardware also only processes the instructions as a background process that does not consume resources needed for other higher-priority processes. In some embodiments, the FPO hardware sends a confirmation that the identified set of flow entries have been removed to allow the FPAG to reuse the invalidated VPID. Process 1400 and processing the instructions at the FPO hardware are able to be performed as background processes because the configuration change can take effect based on the updated VPID to PPID mapping before the invalid flow entries are removed. The flow entries are removed to conserve resources of the FPO hardware and to enable invalidated VPIDs to be reused after flow entries previously generated for the VPID are removed.
The process 1500 then determines (at 1510) that no VPID to PPID mapping exists for the VPID specified as a destination in the matching flow entry. The determination, in some embodiments, is based on a lookup in the mapping table producing a fault or a default mapping being the only match returned. In some embodiments, an identifier of the flow entry that matched the data message is maintained (e.g., forwarded along with the data message) until a non-default destination is identified.
The process 1500 then removes (at 1515) the flow entry from the FPO hardware. In some embodiments, the FPO hardware stores the flow entries along with a bit that indicates whether the flow entry should be automatically invalidated (e.g., deleted) if no non-default match is found in the mapping table. The FPO hardware, in some embodiments, automatically invalidates the flow entry that matched the data message either based on the bit stored along with the flow entry or as a default behavior that is not based on storing a flag bit along with the flow entry, and the process 1500 ends. In some embodiments, invalidating (at 1515) the flow entry includes sending a data message to the FPAG identifying the flow entry as being a flow entry that did not resolve into a destination PPID (i.e., did not produce a non-default match from a lookup in the mapping table). The FPAG then performs process 1400 to generate an instruction that is received by the FPO hardware to invalidate (or remove) the flow entry. Based on the received instruction, the FPO hardware invalidates (or removes) the flow entry and the process 1500 ends.
In some embodiments, the FPO hardware also has an internal process for invalidating (e.g., aging out) flow entries based on the flow entry not having been used for a particular amount of time. The FPO hardware, in some such embodiments, stores data regarding the last time a flow entry matched a data message. If the time elapsed from the last time the flow entry matched a data message is greater than an aging-out threshold time, the flow entry is removed (or invalidated). Accordingly, after a reuse threshold time that is at least as great as the aging-out threshold time, an invalidated VPID can be reused. In some embodiments, the reuse threshold time is set to be equal to or greater than a time an average data message flow would take to time out plus the aging-out time to ensure that the aging-out threshold has been met on the FPO hardware. To further facilitate the reuse of VPIDs, in some embodiments, the VPIDs are defined to have more bits than the PPIDs. The number of bits of the PPID, in some embodiments, is based on how many PFs the pNIC has and how many VFs each PF supports. Assuming a 16-bit PPID, a VPID, in some embodiments, is 18 or 20 bits depending on the desired sparsity of VPID to PPID mappings.
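The aging-out bookkeeping and the reuse threshold described above can be sketched as follows. The threshold values and data structures are illustrative assumptions rather than the parameters of any particular embodiment.

```python
# Sketch of aging-out bookkeeping: entries idle longer than an aging-out
# threshold are removed, and an invalidated VPID is considered reusable only
# after a longer reuse threshold has elapsed.
import time

AGING_OUT_SECONDS = 300       # remove flow entries idle for 5 minutes (assumed)
FLOW_TIMEOUT_SECONDS = 120    # assumed timeout for previously active flows
REUSE_SECONDS = AGING_OUT_SECONDS + FLOW_TIMEOUT_SECONDS

last_hit = {17: time.time() - 400, 18: time.time() - 10}   # flow-entry id -> last match time


def age_out(entries, now=None):
    """Keep only the entries that matched a data message recently enough."""
    now = now or time.time()
    return {eid: t for eid, t in entries.items() if now - t <= AGING_OUT_SECONDS}


def vpid_reusable(invalidated_at, now=None):
    """A retired VPID may be handed out again once the reuse threshold passes."""
    now = now or time.time()
    return now - invalidated_at >= REUSE_SECONDS


print(sorted(age_out(last_hit)))         # [18]; entry 17 has aged out
print(vpid_reusable(time.time() - 500))  # True
```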
In some embodiments, the mapping table includes a set of reverse mappings to identify a VPID associated with a PPID on which a data message is received. The reverse mappings, in some embodiments, are generated using a process similar to process 600 that generates (at 615) mappings of PPIDs to VPIDs as well as VPIDs to PPIDs. The reverse mappings are stored in a separate reverse mapping table, in some embodiments. As discussed above, a particular PPID may be associated with multiple VPIDs. For data messages received from a compute node executing on a host computer, a VPID is appended (or maintained) when providing the data message to the FPO hardware.
In addition to quickly failing over in the case of link failure without the need to rewrite flow entries associated with the failed link, the use of the mapping table also allows load balancing decisions made to distribute data messages over multiple physical ports to be updated without rewriting the associated flow entries. For example, if the bandwidth of a particular physical port in a link aggregation group changes, a set of data messages that was previously sent to the particular physical port, in some embodiments, is redirected to a different physical port by updating a VPID to PPID mapping so that a VPID associated with the particular physical port now maps to the PPID of the different physical port. In some embodiments, each physical port is assigned multiple VPIDs that map to the PPID of the physical port (e.g., physical port 1621a of
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 1705 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1700. For instance, the bus 1705 communicatively connects the processing unit(s) 1710 with the read-only memory 1730, the system memory 1725, and the permanent storage device 1735.
From these various memory units, the processing unit(s) 1710 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 1730 stores static data and instructions that are needed by the processing unit(s) 1710 and other modules of the computer system. The permanent storage device 1735, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1700 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1735.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1735, the system memory 1725 is a read-and-write memory device. However, unlike storage device 1735, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1725, the permanent storage device 1735, and/or the read-only memory 1730. From these various memory units, the processing unit(s) 1710 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 1705 also connects to the input and output devices 1740 and 1745. The input devices 1740 enable the user to communicate information and select requests to the computer system. The input devices 1740 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1745 display images generated by the computer system 1700. The output devices 1745 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices.
Finally, as shown in
Some embodiments include electronic components, such as microprocessors, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Also, while several examples above refer to container Pods, other embodiments use containers outside of Pods. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
5884313 | Talluri et al. | Mar 1999 | A |
5887134 | Ebrahim | Mar 1999 | A |
5974547 | Klimenko | Oct 1999 | A |
6219699 | McCloghrie et al. | Apr 2001 | B1 |
6393483 | Latif et al. | May 2002 | B1 |
6496935 | Fink et al. | Dec 2002 | B1 |
7079544 | Wakayama et al. | Jul 2006 | B2 |
7424710 | Nelson et al. | Sep 2008 | B1 |
7606260 | Oguchi et al. | Oct 2009 | B2 |
7849168 | Utsunomiya | Dec 2010 | B2 |
8442059 | de la Iglesia | May 2013 | B1 |
8660129 | Brendel | Feb 2014 | B1 |
8825900 | Gross et al. | Sep 2014 | B1 |
8856518 | Sridharan et al. | Oct 2014 | B2 |
8931047 | Wanser et al. | Jan 2015 | B2 |
9008085 | Kamble et al. | Apr 2015 | B2 |
9116727 | Benny et al. | Aug 2015 | B2 |
9135044 | Maharana | Sep 2015 | B2 |
9143582 | Banavalikar et al. | Sep 2015 | B2 |
9152593 | Galles | Oct 2015 | B2 |
9154327 | Marino | Oct 2015 | B1 |
9231849 | Hyoudou | Jan 2016 | B2 |
9378161 | Dalal et al. | Jun 2016 | B1 |
9419897 | Cherian et al. | Aug 2016 | B2 |
9460031 | Dalal et al. | Oct 2016 | B1 |
9692698 | Cherian et al. | Jun 2017 | B2 |
10142127 | Cherian et al. | Nov 2018 | B2 |
10162793 | BShara et al. | Dec 2018 | B1 |
10193771 | Koponen et al. | Jan 2019 | B2 |
10567308 | Subbiah | Feb 2020 | B1 |
10997106 | Bandaru | May 2021 | B1 |
11108593 | Cherian et al. | Aug 2021 | B2 |
11221972 | Raman et al. | Jan 2022 | B1 |
11385981 | Silakov et al. | Jul 2022 | B1 |
20030130833 | Brownell et al. | Jul 2003 | A1 |
20030140124 | Burns | Jul 2003 | A1 |
20030145114 | Gertner | Jul 2003 | A1 |
20030200290 | Zimmerman et al. | Oct 2003 | A1 |
20030217119 | Raman et al. | Nov 2003 | A1 |
20050053079 | Havala | Mar 2005 | A1 |
20060029056 | Perera et al. | Feb 2006 | A1 |
20060206603 | Rajan et al. | Sep 2006 | A1 |
20060206655 | Chappell et al. | Sep 2006 | A1 |
20060236054 | Kitamura | Oct 2006 | A1 |
20070174850 | Zur | Jul 2007 | A1 |
20080008202 | Terrell | Jan 2008 | A1 |
20080267177 | Johnson et al. | Oct 2008 | A1 |
20090089537 | Vick et al. | Apr 2009 | A1 |
20090119087 | Ang et al. | May 2009 | A1 |
20090161547 | Riddle et al. | Jun 2009 | A1 |
20090161673 | Breslau et al. | Jun 2009 | A1 |
20100070677 | Thakkar | Mar 2010 | A1 |
20100115208 | Logan | May 2010 | A1 |
20100165874 | Brown et al. | Jul 2010 | A1 |
20100275199 | Smith et al. | Oct 2010 | A1 |
20110060859 | Shukla et al. | Mar 2011 | A1 |
20110219170 | Frost et al. | Sep 2011 | A1 |
20120042138 | Eguchi et al. | Feb 2012 | A1 |
20120072909 | Malik et al. | Mar 2012 | A1 |
20120079478 | Galles et al. | Mar 2012 | A1 |
20120163388 | Goel et al. | Jun 2012 | A1 |
20120167082 | Kumar et al. | Jun 2012 | A1 |
20120259953 | Gertner | Oct 2012 | A1 |
20120278584 | Nagami et al. | Nov 2012 | A1 |
20120320918 | Fomin et al. | Dec 2012 | A1 |
20130033993 | Cardona et al. | Feb 2013 | A1 |
20130058346 | Sridharan et al. | Mar 2013 | A1 |
20130061047 | Sridharan et al. | Mar 2013 | A1 |
20130073702 | Umbehocker | Mar 2013 | A1 |
20130145106 | Kan | Jun 2013 | A1 |
20130311663 | Kamath et al. | Nov 2013 | A1 |
20130318219 | Kancherla | Nov 2013 | A1 |
20130318268 | Dalal et al. | Nov 2013 | A1 |
20140003442 | Hernandez et al. | Jan 2014 | A1 |
20140056151 | Petrus et al. | Feb 2014 | A1 |
20140067763 | Jorapurkar et al. | Mar 2014 | A1 |
20140074799 | Karampuri et al. | Mar 2014 | A1 |
20140098815 | Mishra et al. | Apr 2014 | A1 |
20140115578 | Cooper et al. | Apr 2014 | A1 |
20140123211 | Wanser et al. | May 2014 | A1 |
20140208075 | McCormick, Jr. | Jul 2014 | A1 |
20140215036 | Elzur | Jul 2014 | A1 |
20140244983 | McDonald et al. | Aug 2014 | A1 |
20140269712 | Kidambi | Sep 2014 | A1 |
20140269754 | Eguchi et al. | Sep 2014 | A1 |
20150007317 | Jain | Jan 2015 | A1 |
20150016300 | Devireddy et al. | Jan 2015 | A1 |
20150019748 | Gross, IV et al. | Jan 2015 | A1 |
20150052280 | Lawson | Feb 2015 | A1 |
20150156250 | Varshney | Jun 2015 | A1 |
20150172183 | DeCusatis et al. | Jun 2015 | A1 |
20150200808 | Gourlay et al. | Jul 2015 | A1 |
20150212892 | Li et al. | Jul 2015 | A1 |
20150215207 | Qin et al. | Jul 2015 | A1 |
20150222547 | Hayut et al. | Aug 2015 | A1 |
20150242134 | Takada | Aug 2015 | A1 |
20150261556 | Jain et al. | Sep 2015 | A1 |
20150261720 | Kagan et al. | Sep 2015 | A1 |
20150358288 | Jain | Dec 2015 | A1 |
20150358290 | Jain | Dec 2015 | A1 |
20150381494 | Cherian et al. | Dec 2015 | A1 |
20150381495 | Cherian et al. | Dec 2015 | A1 |
20160006696 | Donley et al. | Jan 2016 | A1 |
20160134702 | Gertner | May 2016 | A1 |
20160162302 | Warszawski et al. | Jun 2016 | A1 |
20160162438 | Hussain et al. | Jun 2016 | A1 |
20160179579 | Amann | Jun 2016 | A1 |
20160306648 | Deguillard et al. | Oct 2016 | A1 |
20170024334 | Bergsten et al. | Jan 2017 | A1 |
20170093623 | Zheng | Mar 2017 | A1 |
20170099532 | Kakande | Apr 2017 | A1 |
20170161090 | Kodama | Jun 2017 | A1 |
20170161189 | Gertner | Jun 2017 | A1 |
20170214549 | Yoshino | Jul 2017 | A1 |
20170295033 | Cherian et al. | Oct 2017 | A1 |
20180024964 | Mao et al. | Jan 2018 | A1 |
20180032249 | Makhervaks et al. | Feb 2018 | A1 |
20180088978 | Li et al. | Mar 2018 | A1 |
20180095872 | Dreier et al. | Apr 2018 | A1 |
20180109471 | Chang | Apr 2018 | A1 |
20180152540 | Niell | May 2018 | A1 |
20180260125 | Botes et al. | Sep 2018 | A1 |
20180262599 | Firestone | Sep 2018 | A1 |
20180309641 | Wang | Oct 2018 | A1 |
20180309718 | Zuo | Oct 2018 | A1 |
20180329743 | Pope et al. | Nov 2018 | A1 |
20180331976 | Pope et al. | Nov 2018 | A1 |
20180337991 | Kumar et al. | Nov 2018 | A1 |
20180349037 | Zhao et al. | Dec 2018 | A1 |
20180359215 | Khare | Dec 2018 | A1 |
20190042506 | Devey et al. | Feb 2019 | A1 |
20190044809 | Willis | Feb 2019 | A1 |
20190044866 | Chilikin | Feb 2019 | A1 |
20190132296 | Jiang et al. | May 2019 | A1 |
20190158396 | Yu | May 2019 | A1 |
20190173689 | Cherian et al. | Jun 2019 | A1 |
20190200105 | Cheng et al. | Jun 2019 | A1 |
20190235909 | Jin | Aug 2019 | A1 |
20190278675 | Bolkhovitin et al. | Sep 2019 | A1 |
20190280980 | Hyoudou | Sep 2019 | A1 |
20190286373 | Karumbunathan et al. | Sep 2019 | A1 |
20190306083 | Shih et al. | Oct 2019 | A1 |
20200028800 | Strathman et al. | Jan 2020 | A1 |
20200042234 | Krasner et al. | Feb 2020 | A1 |
20200042389 | Kulkarni et al. | Feb 2020 | A1 |
20200042412 | Kulkarni et al. | Feb 2020 | A1 |
20200136996 | Li et al. | Apr 2020 | A1 |
20200259731 | Sivaraman et al. | Aug 2020 | A1 |
20200278893 | Niell et al. | Sep 2020 | A1 |
20200319812 | He et al. | Oct 2020 | A1 |
20200328192 | Zaman et al. | Oct 2020 | A1 |
20200382329 | Yuan | Dec 2020 | A1 |
20200401320 | Pyati et al. | Dec 2020 | A1 |
20200412659 | Ilitzky et al. | Dec 2020 | A1 |
20210019270 | Li et al. | Jan 2021 | A1 |
20210026670 | Krivenok et al. | Jan 2021 | A1 |
20210058342 | McBrearty | Feb 2021 | A1 |
20210232528 | Kutch et al. | Jul 2021 | A1 |
20210266259 | Renner, III | Aug 2021 | A1 |
20210357242 | Ballard | Nov 2021 | A1 |
20210377166 | Brar | Dec 2021 | A1 |
20210377188 | Ghag et al. | Dec 2021 | A1 |
20210392017 | Cherian et al. | Dec 2021 | A1 |
20220043572 | Said et al. | Feb 2022 | A1 |
20220100432 | Kim et al. | Mar 2022 | A1 |
20220100491 | Voltz et al. | Mar 2022 | A1 |
20220100542 | Voltz | Mar 2022 | A1 |
20220100544 | Voltz | Mar 2022 | A1 |
20220100545 | Cherian et al. | Mar 2022 | A1 |
20220100546 | Cherian et al. | Mar 2022 | A1 |
20220103487 | Ang et al. | Mar 2022 | A1 |
20220103488 | Wang et al. | Mar 2022 | A1 |
20220103490 | Kim et al. | Mar 2022 | A1 |
20220103629 | Cherian et al. | Mar 2022 | A1 |
20220150055 | Cui et al. | May 2022 | A1 |
20220197681 | Rajagopal | Jun 2022 | A1 |
20220206962 | Kim et al. | Jun 2022 | A1 |
20220206964 | Kim et al. | Jun 2022 | A1 |
20220272039 | Cardona et al. | Aug 2022 | A1 |
20220335563 | Elzur | Oct 2022 | A1 |
Number | Date | Country |
---|---|---
2672100 | Jun 2008 | CA |
2918551 | Jul 2010 | CA |
101258725 | Sep 2008 | CN |
101540826 | Sep 2009 | CN |
102018004046 | Nov 2018 | DE |
1482711 | Dec 2004 | EP |
3598291 | Jan 2020 | EP |
202107297 | Feb 2021 | TW |
2005099201 | Oct 2005 | WO |
2007036372 | Apr 2007 | WO |
2010008984 | Jan 2010 | WO |
2016003489 | Jan 2016 | WO |
2020027913 | Feb 2020 | WO |
2021030020 | Feb 2021 | WO |
2022066267 | Mar 2022 | WO |
2022066268 | Mar 2022 | WO |
2022066270 | Mar 2022 | WO |
2022066271 | Mar 2022 | WO |
2022066531 | Mar 2022 | WO |
Entry |
---|
Non-Published Commonly Owned Related International Patent Application PCT/US2021/042115 with similar specification, filed Jul. 17, 2021, 52 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/461,908, filed Aug. 30, 2021, 60 pages, Nicira, Inc. |
Author Unknown, “An Introduction to SmartNICs,” The Next Platform, Mar. 4, 2019, 4 pages, retrieved from https://www.nextplatform.com/2019/03/04/an-introduction-to-smartnics/. |
Author Unknown, “In-Hardware Storage Virtualization—NVMe SNAP™ Revolutionizes Data Center Storage: Composable Storage Made Simple,” Month Unknown 2019, 3 pages, Mellanox Technologies, Sunnyvale, CA, USA. |
Author Unknown, “Package Manager,” Wikipedia, Sep. 8, 2020, 10 pages. |
Author Unknown, “VMDK,” Wikipedia, May 17, 2020, 3 pages, retrieved from https://en.wikipedia.org/w/index.php?title=VMDK&oldid=957225521. |
Author Unknown, “vSphere Managed Inventory Objects,” Aug. 3, 2020, 3 pages, retrieved from https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-4D4B3DF2-D033-4782-A030-3C3600DE5A7F.html, VMware, Inc. |
Grant, Stewart, et al., “SmartNIC Performance Isolation with FairNIC: Programmable Networking for the Cloud,” SIGCOMM '20, Aug. 10-14, 2020, 13 pages, ACM, Virtual Event, USA. |
Liu, Ming, et al., “Offloading Distributed Applications onto SmartNICs using iPipe,” SIGCOMM '19, Aug. 19-23, 2019, 16 pages, ACM, Beijing, China. |
PCT International Search Report and Written Opinion of Commonly Owned International Patent Application PCT/US2021/042115, dated Dec. 2, 2021, 14 pages, International Searching Authority (EPO). |
Suarez, Julio, “Reduce TCO with Arm Based SmartNICs,” Nov. 14, 2019, 12 pages, retrieved from https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/reduce-tco-with-arm-based-smartnics. |
Anwer, Muhammad Bilal, et al., “Building A Fast, Virtualized Data Plane with Programmable Hardware,” Aug. 17, 2009, 8 pages, VISA'09, ACM, Barcelona, Spain. |
Author Unknown, “Network Functions Virtualisation; Infrastructure Architecture; Architecture of the Hypervisor Domain,” Draft ETSI GS NFV-INF 004 V0.3.1, May 28, 2014, 50 pages, France. |
Koponen, Teemu, et al., “Network Virtualization in Multi-tenant Datacenters,” Technical Report TR-2013-001E, Aug. 2013, 22 pages, VMware, Inc., Palo Alto, CA, USA. |
Le Vasseur, Joshua, et al., “Standardized but Flexible I/O for Self-Virtualizing Devices,” Month Unknown 2008, 7 pages. |
Non-Published Commonly Owned U.S. Appl. No. 17/091,663, filed Nov. 6, 2020, 29 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/107,561, filed Nov. 30, 2020, 39 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/107,568, filed Nov. 30, 2020, 39 pages, VMware, Inc. |
Non-Published Commonly Owned Related U.S. Appl. No. 17/114,994 with similar specification, filed Dec. 8, 2020, 51 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,318, filed Jan. 9, 2021, 70 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,319, filed Jan. 9, 2021, 70 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,320, filed Jan. 9, 2021, 70 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,321, filed Jan. 9, 2021, 49 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,322, filed Jan. 9, 2021, 49 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,329, filed Jan. 9, 2021, 50 pages, VMware, Inc. |
Non-Published Commonly Owned U.S. Appl. No. 17/145,334, filed Jan. 9, 2021, 49 pages, VMware, Inc. |
Peterson, Larry L., et al., “OS Support for General-Purpose Routers,” Month Unknown 1999, 6 pages, Department of Computer Science, Princeton University. |
Pettit, Justin, et al., “Virtual Switching in an Era of Advanced Edges,” In Proc. 2nd Workshop on Data Center-Converged and Virtual Ethernet Switching (DCCAVES), Sep. 2010, 7 pages, vol. 22, ITC. |
Spalink, Tammo, et al., “Building a Robust Software-Based Router Using Network Processors,” Month Unknown 2001, 14 pages, ACM, Banff, Canada. |
Turner, Jon, et al., “Supercharging PlanetLab—High Performance, Multi-Application Overlay Network Platform,” SIGCOMM '07, Aug. 27-31, 2007, 12 pages, ACM, Kyoto, Japan. |
Author Unknown, “8.6 Receive-Side Scaling (RSS),” Month Unknown 2020, 2 pages, Red Hat, Inc. |
Herbert, Tom, et al., “Scaling in the Linux Networking Stack,” Jun. 2, 2020, retrieved from https://01.org/inuxgraphics/gfx-docs/drm/networking/scaling.html. |
Non-Published Commonly Owned U.S. Appl. No. 16/890,890, filed Jun. 2, 2020, 39 pages, VMware, Inc. |
Stringer, Joe, et al., “OVS Hardware Offloads Discussion Panel,” Nov. 7, 2016, 37 pages, retrieved from http://openvswitch.org/support/ovscon2016/7/1450-stringer.pdf. |
Angeles, Sara, “Cloud vs. Data Center: What's the difference?” Nov. 23, 2018, 1 page, retrieved from https://www.businessnewsdaily.com/4982-cloud-vs-data-center.html. |
Author Unknown, “Middlebox,” Wikipedia, Nov. 19, 2019, 1 page, Wikipedia.com. |
Doyle, Lee, “An Introduction to smart NICs and their Benefits,” Jul. 2019, 2 pages, retrieved from https://www.techtarget.com/searchnetworking/tip/An-introduction-to-smart-NICs-and-ther-benefits. |
Author Unknown, “vSAN Planning and Deployment,” Update 3, Aug. 20, 2019, 85 pages, VMware, Inc., Palo Alto, CA, USA. |
Author Unknown, “What is End-to-End Encryption and How does it Work?,” Mar. 7, 2018, 4 pages, Proton Technologies AG, Geneva, Switzerland. |
Harris, Jim, “Accelerating NVME-oF* for VMs with the Storage Performance Development Kit,” Flash Memory Summit, Aug. 2017, 18 pages, Intel Corporation, Santa Clara, CA. |
Perlroth, Nicole, “What is End-to-End Encryption? Another Bull's-Eye on Big Tech,” The New York Times, Nov. 19, 2019, 4 pages, retrieved from https://nytimes.com/2019/11/19/technology/end-to-end-encryption.html. |
Number | Date | Country
---|---|---
20220103478 A1 | Mar 2022 | US |
Number | Date | Country
---|---|---
63084436 | Sep 2020 | US |