Distributed service chain across multiple clouds

Information

  • Patent Grant
    12132780
  • Patent Number
    12,132,780
  • Date Filed
    Friday, July 7, 2023
  • Date Issued
    Tuesday, October 29, 2024
Abstract
Some embodiments of the invention provide novel methods for performing services on data messages passing through a network connecting one or more datacenters, such as software defined datacenters (SDDCs). The method of some embodiments uses service containers executing on host computers to perform different chains (e.g., ordered sequences) of services on different data message flows. For a data message of a particular data message flow that is received or generated at a host computer, the method in some embodiments uses a service classifier executing on the host computer to identify a service chain that specifies several services to perform on the data message. For each service in the identified service chain, the service classifier identifies a service container for performing the service. The service classifier then forwards the data message to a service forwarding element to forward the data message through the service containers identified for the identified service chain. The service classifier and service forwarding element are implemented in some embodiments as processes that are defined as hooks in the virtual interface endpoints (e.g., virtual Ethernet ports) of the host computer's operating system (e.g., Linux operating system) over which the service containers execute.
Description

Datacenters today use a static, configuration-intensive way to distribute data messages between different application layers and to different service layers. A common approach today is to configure the virtual machines to send packets to virtual IP addresses, and then configure the forwarding elements and load balancers in the datacenter with forwarding rules that direct them to forward VIP-addressed packets to the appropriate application and/or service layers. Another problem with existing message distribution schemes is that today's load balancers are often chokepoints for the distributed traffic. Accordingly, there is a need in the art for a new approach to seamlessly distribute data messages in the datacenter between different application and/or service layers. Ideally, this new approach would allow the distribution scheme to be easily modified without reconfiguring the servers that transmit the data messages.


BRIEF SUMMARY

Some embodiments of the invention provide novel methods for performing services on data messages passing through a network connecting one or more datacenters, such as software defined datacenters (SDDCs). The method of some embodiments uses service containers executing on host computers to perform different chains (e.g., ordered sequences) of services on different data message flows. For a data message of a particular data message flow that is received or generated at a host computer, the method in some embodiments uses a service classifier executing on the host computer to identify a service chain that specifies several services to perform on the data message.


For each service in the identified service chain, the service classifier identifies a service node for performing the service. Some or all of the service nodes in a service chain are service containers in some embodiments. The service classifier then forwards the data message to a service forwarding element to forward the data message through the service nodes identified for the identified service chain. As further described below, the service classifier and service forwarding element are implemented in some embodiments as processes that are defined as hooks in the virtual interface endpoints (e.g., virtual Ethernet ports) of the host computer's operating system (e.g., Linux operating system) over which the service containers execute.


For the particular data message flow, the service classifier in some embodiments identifies a service container for at least one service in the identified service chain by performing a load balancing operation to select a particular service container from a set of two or more candidate service containers for the service. In some embodiments, the service classifier performs this load balancing operation to select one service container from multiple candidate service containers for two or more (e.g., all) of the services in the identified service chain.


For a particular service, the service classifier in some embodiments performs the load balancing operation by directing a load balancer that is specified for the particular service to select a container from the set of candidate service containers for the particular service. In some embodiments, the load balancing operation uses statistics regarding data messages processed by each container in the candidate container set to select one particular container from the set for the particular data message flow.


For the particular data message flow, the service classifier in some embodiments specifies a service path identifier (SPI) that identifies a path through the containers selected for implementing the identified service chain, and provides this service path identifier to the service forwarding element to use to perform its classification operations for forwarding the data messages of this flow. In other embodiments, the service forwarding element does not use the service path identifier for forwarding the data messages of the particular data message flow, but uses MAC redirect for specifying forwarding rules for directing the data messages of this flow between successive service containers in the service path.


Conjunctively with either of these forwarding approaches, some embodiments use the specified service path identifier to select the service path for a reverse data message flow that is sent in response to the particular data message flow (e.g., by the destination of the particular data message flow). This approach ensures that in these embodiments the same set of service containers examine both the initial data message flow in the forward direction and the responsive data message flow in the reverse direction.


In some of the embodiments that use the MAC redirect approach for forwarding data messages to different service containers in the service path, the service forwarding element is implemented (1) by the virtual interface endpoints in the OS namespace that is used to define a virtual forwarding element (e.g., virtual switch or virtual bridge) in the OS, and (2) by a virtual interface endpoint in a container namespace of each service container. These virtual interface endpoints are configured to perform match-action forwarding operations needed for implementing the MAC redirect forwarding.


In some embodiments, these match-action operations include match classification operations that compare the layer 2 (L2) source and/or destination network addresses of the data message and the layer 3 (L3) source and/or destination network addresses of the data message with selection criteria of forwarding rules. The L3 source and/or destination network addresses are used in some embodiments to differentiate egress data messages exiting a subnet from ingress data messages entering a subnet. In some embodiments, the match-action operations include action operations that modify the L2 destination MAC address of the data messages, as these embodiments use MAC redirect to forward the data messages to successive service containers.


The service classifier of some embodiments selects all the service containers for a service chain to be on its host computer. In other embodiments, different service containers for a service chain can operate on different host computers. In some of these embodiments, the different service containers can execute on host computers in different datacenters. To facilitate the forwarding of the data messages between different datacenters for service processing, some embodiments deploy service forwarding proxies in the datacenters.


When a data message's service processing starts in a first datacenter and continues to a second datacenter, the service forwarding proxy in the first datacenter encapsulates the data message with an encapsulating header, and stores in this header the service path identifier (SPI) that identifies the service path for the second datacenter. This SPI in some embodiments is a globally unique SPI that uniquely identifies the service path in each datacenter that has a service container on the service path. In some embodiments, the globally unique SPI includes a UUID (universally unique ID) for each service and a datacenter ID for each service UUID or for each set of service UUIDs in each datacenter.
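The following minimal sketch illustrates one way such a globally unique SPI could be composed; the "/"-separated string encoding, the datacenter IDs, and the helper name are assumptions for illustration, not the patent's actual format.

```python
# Sketch: composing a globally unique SPI from per-service UUIDs tagged
# with the datacenter IDs that host them. The encoding is illustrative.
import uuid

def make_global_spi(hops):
    """hops: (datacenter_id, service_uuid) pairs in service-path order."""
    return "/".join(f"{dc}:{svc}" for dc, svc in hops)

svc_a, svc_b, svc_c = (uuid.uuid4() for _ in range(3))
spi = make_global_spi([("dc1", svc_a), ("dc1", svc_b), ("dc2", svc_c)])
print(spi)  # e.g. dc1:<uuid>/dc1:<uuid>/dc2:<uuid>
```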


Upon receiving the encapsulated data message, the service forwarding proxy in the second datacenter decapsulates the data message (removes the encapsulating header from the data message), extracts the SPI embedded in the removed header, and uses the SPI to identify the next hop service container in the service path that should process the data message in the second datacenter.


In addition to the SPI, the encapsulating header also includes in some embodiments a next-hop service identifier that the service forwarding proxy can use to identify the next service container that should process the data message in the service path. For instance, when the global SPI has the UUID of each service container, the next-hop service identifier is a reference to the service container UUID's location in the global SPI in some embodiments, or is set to this container's UUID in other embodiments. In other embodiments, the encapsulating header does not include a next-hop service identifier, as the service forwarding proxy in the second datacenter is configured to identify the next hop service node just from the received SPI.
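As a rough illustration of this exchange, the sketch below encapsulates a message with an SPI and a next-hop index, and shows a receiving proxy decapsulating it to find the next service hop. The JSON-over-length-prefix header layout is an assumption for readability, not an actual wire format.

```python
# Sketch: encapsulation by the sending proxy and decapsulation by the
# receiving proxy. Header layout is hypothetical.
import json

def encapsulate(payload: bytes, spi: str, next_hop_index: int) -> bytes:
    header = json.dumps({"spi": spi, "next_hop": next_hop_index}).encode()
    return len(header).to_bytes(2, "big") + header + payload

def decapsulate(message: bytes):
    hlen = int.from_bytes(message[:2], "big")
    header = json.loads(message[2:2 + hlen])
    payload = message[2 + hlen:]
    # The receiving proxy reads the next-hop index as a position in the
    # global SPI to find the container that should process the message next.
    hops = header["spi"].split("/")
    return hops[header["next_hop"]], payload

msg = encapsulate(b"data", "dc1:svcA/dc2:svcB", 1)
print(decapsulate(msg))  # ('dc2:svcB', b'data')
```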


Instead of using the SPI to identify the next hop service container, the service forwarding proxy in the second datacenter in other embodiments passes the SPI to a service forwarding element in the second datacenter to use to identify the next hop service container. This forwarding element in some embodiments is the service forwarding element executing on the host computer that executes the next hop service container.


Two service forwarding proxies in two datacenters can be used in some embodiments to forward many data message flows between the two datacenters for service processing. Also, in some embodiments, a service forwarding proxy in a datacenter can forward data messages to, and receive data messages from, multiple other service forwarding proxies in multiple other datacenters to implement service chains that span different sets of datacenters. Each service forwarding proxy in some embodiments includes (1) a forwarding proxy for encapsulating data messages and sending the encapsulated data messages to another service forwarding proxy of another datacenter, and (2) a receiving proxy for receiving encapsulated data messages from another service forwarding proxy of another datacenter and decapsulating the received data messages for processing in its datacenter.


In some embodiments, a datacenter has (1) several service host computers that execute sets of service containers for performing the same service chain on data message flows received at the datacenter, and (2) a set of one or more forwarding elements (e.g., front end load balancers) that randomly or deterministically distribute data message flows to these host computers. Each service host computer then performs a service classification operation on each data message flow that it receives to determine whether it should process the data message flow, or it should redirect the data message flow to another service host computer.


For instance, upon receiving a first data message flow, a first service host computer uses the flow's attribute set (e.g., the flow's five tuple identifier) to perform a first service classification operation that identifies a first set of services to perform on the data message. Based on an identifier for the first set of services, the first service host computer determines that a set of service machines executing on a second service host computer has to perform the first set of services on the first data message flow. It then forwards the data messages of the first data message flow to the second service host computer.


On the other hand, upon receiving a second data message flow, the first service host computer uses the flow's attribute set (e.g., the flow's five tuple identifier) to perform a second service classification operation that identifies a second set of services to perform on the data message. Based on an identifier for the second set of services, the first service host computer determines that a set of service machines executing on the first service host computer has to perform the second set of services on the second data message flow. It then forwards the data messages of the second data message flow to each service machine in the set of service machines on the first service host computer that has to perform a service in the second set of services on the second data message flow.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings.





BRIEF DESCRIPTION OF FIGURES

The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 illustrates a software defined datacenter (SDDC) that uses the service-performance methods of some embodiments to process data messages originating from, and/or received at, the SDDC.



FIG. 2 illustrates how some embodiments implement the service forwarding element and the service classifier within a Linux operating system (OS) of a host computer.



FIG. 3 illustrates a process that the service classifier performs in some embodiments.



FIG. 4 illustrates the service classifier of some embodiments interacting with several other modules to perform service classification.



FIG. 5 presents a process that conceptually illustrates the operation of the service forwarding element in forwarding a data message through a service path identified by the service classifier.



FIG. 6 illustrates that upon receiving a first data message flow, a virtual interface endpoint of the Linux OS of a first service host computer passes the data message to a service classifier that has registered as a hook in a callback mechanism of the OS.



FIG. 7 illustrates the processing of the second data message flow, which a top of rack switch initially forwards to the first service host computer.



FIG. 8 illustrates a process that a service host computer performs in some embodiments, in order to perform service operations on a received data message flow, or to redirect the data message to another service host computer for service processing.



FIG. 9 further illustrates the distributed service chain classification and forwarding architecture of FIGS. 6 and 7.



FIG. 10 presents an example that illustrates the use of such service forwarding proxies.



FIG. 11 illustrates additional attributes of service forwarding proxies in some embodiments.



FIG. 12 presents a process that conceptually illustrates using service containers in different datacenters to perform the services associated with a service chain on a data message.



FIG. 13 conceptually illustrates a computer system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments of the invention provide novel methods for performing services on data messages passing through a network connecting machines in one or more datacenters, such as software defined datacenters (SDDCs). The method of some embodiments uses service containers executing on host computers to perform different chains of services on different data message flows. Service chains include one or more service nodes, each of which performs a service in the service chain. In some embodiments, some or all of the service nodes are service containers.


Containers in some embodiments are constructs that run on top of an operating system (OS) of a host computer. In some embodiments, the host operating system uses namespaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. Examples of containers include Docker containers, rkt containers, and containers executing on top of hypervisors, such as ESXi.


As used in this document, data messages refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to L2, L3, L4, and L7 layers (or layer 2, layer 3, layer 4, layer 7) are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model.



FIG. 1 illustrates an SDDC 100 that uses the service-performance methods of some embodiments to process data messages originating from, and/or received at, the SDDC. In some embodiments, the SDDC is part of a telecommunication network (e.g., a 5G telecommunication network) for which multiple network slices can be defined. A data message flow can be associated with a network slice, and one or more service chains can be defined for each network slice. Each service chain in some embodiments specifies one or more ordered sequences of service operations (e.g., compute operations, forwarding operations, and/or middlebox service operations, etc.) to perform on the data message flows associated with the chain's network slice.


In a 5G telecommunication network, the service operations include virtual network functions (VNFs) that are performed on the data messages. Examples of network slices for a 5G telecommunication network include a mobile broadband slice for processing broadband data, an IoT (Internet of Things) slice for processing IoT data, a telemetry slice for processing telemetry data, a VOIP (voice over IP) slice for voice over IP data, a video conferencing slice for processing video conferencing data, a device navigation slice for processing navigation data, etc.


As shown, the SDDC 100 includes host computers 105, managing servers 110, ingress gateways 115, and egress gateways 120. The ingress/egress gateways 115 and 120 allow data messages to enter and exit the datacenter. In some embodiments, the same set of gateways can act as ingress and egress gateways, as they connect the SDDC to an external network, such as the Internet. In other embodiments, the ingress and egress gateways are different as the ingress gateways connect the SDDC to one network (e.g., a private telecommunication network) while the egress gateways connect the SDDC to another network (e.g., to the Internet). Also, in some embodiments, one or both of these sets of gateways (e.g., the ingress gateways or the egress gateways) connect to two or more networks (e.g., an MPLS network and the Internet).


As further shown, the host computers execute operating systems 130, service containers 135 and software forwarding elements 140. The operating system (OS) 130 in some embodiments is Linux. This OS executes on top of a hypervisor in some embodiments, while it executes natively (without a hypervisor) over the host computer in other embodiments. The service containers 135 and the software forwarding elements 140 are deployed and configured by the managing servers 110 to implement chains of service operations.


The managing servers 110 in some embodiments include managers through which service chains can be defined and managed, and controllers through which the service containers 135 and the software forwarding elements 140 can be configured. In other embodiments, a common set of servers performs both the management and control operations. To operate service chains, the managing servers 110 in some embodiments configure each host computer 105 and its software forwarding element to implement a service classifier 155 and a service forwarding element 160.


For a data message of a particular data message flow that is received at a host computer, the service classifier 155 executing on the host computer 105 identifies a service chain that specifies several services to perform on the data message. The received data message in some cases originates from a source machine executing on the host computer, while in other cases it is forwarded to the host computer by a forwarding element (e.g., a frontend load balancer) operating outside of the host computer.


For each service in the identified service chain, the service classifier 155 identifies a service container 135 to perform the service. In some embodiments, the service classifier 155 of one host computer identifies all service containers for a service chain to be on its host computer. In other embodiments, the service classifier can select service containers on different hosts to perform some or all of the service operations of the identified service chain. The set of service containers that are identified for implementing a service chain represents a service path through the network.


After identifying the service chain and the service containers to implement the service chain (e.g., after identifying the service path), the service classifier 155 passes the data message to the service forwarding element 160 to forward the data message to the service containers identified for the identified service chain. In some embodiments, the service forwarding element 160 executes on the service classifier's host computer. In other embodiments where the service containers of the identified service path can be on different host computers, the service forwarding element 160 is a distributed forwarding element (e.g., a logical forwarding element) that spans the multiple hosts that execute the service containers of the service path.


In some embodiments, the service forwarding element 160 performs L2-match operations with L2 MAC redirect action operations to forward data messages to different service containers in the service path. In other embodiments, the service forwarding element uses service path identifiers (that identify the service paths) to perform its match operations, as further described below.



FIG. 1 illustrates the service classifier 155 selecting two different service paths 142 and 144 for two different data message flows 146 and 148, and the service forwarding element 160 forwarding these data message flows along the service containers in each path. The service forwarding element forwards the data message flow 146 along service containers SC1, SC2 and SC3 for service path 142, while forwarding the data message flow 148 along the service containers SC4 and SC5 for service path 144. The service forwarding element then forwards both of these data message flows out of the SDDC 100. Once a data message is processed by the service containers of a service chain, a service forwarding element in some embodiments can also forward the data message to another host computer, or to another machine, application, or middlebox service operating on the same or a different host computer in the SDDC.



FIG. 2 illustrates how some embodiments implement the service forwarding element and the service classifier within a Linux OS 230 of a host computer. As shown, a service classifier 155 in some embodiments is implemented as a hook function in an ingress-side virtual interface endpoint 204 (e.g., Ethernet port) of the Linux OS 230. This port 204 in some embodiments serves as an interface with a network interface controller (NIC) of the host computer. In some embodiments, the service forwarding element 160 is implemented in part by a Linux bridge 240 inside its root namespace 215, and in part by hook functions in the virtual interface endpoints 206 (e.g., Ethernet ports) of the service containers 235 and in the virtual interface endpoints 208 defined in the Linux namespace.



FIG. 3 illustrates a process 300 that the service classifier 155 performs in some embodiments. The classifier performs this process each time it receives a data message. To perform this process, the service classifier 155 interacts with several other modules executing on its host computer. As shown in FIG. 4, these other modules in some embodiments include container selectors 404 and an SPI generator 406.


As shown, the process 300 starts (at 305) when the service classifier 155 receives a data message for processing. At 310, the service classifier 155 determines whether it has previously processed another data message that is in the same flow as the received data message. If so, it transitions to 330 to pass the received data message to a first service container that is identified by a record that the service classifier previously created and stored for the processed flow in a connection tracker 410, as further described below.


In some embodiments, the record that was previously created in the connection tracker might be for a related flow in the reverse direction. Specifically, in some embodiments, the record that the service classifier creates for a first data message flow in a first direction (e.g., a flow exiting the SDDC) is used by the service classifier to process a second data message flow in a second direction (e.g., a flow entering the SDDC) that is received in response to the first data message flow, as further described below.


The service classifier does this in order to use the same service path (e.g., the same set of service containers) to process the reverse second flow as it did for the initial first flow. In these embodiments, the connection tracker record is for a bi-directional flow, instead of just being for a unidirectional flow. In other embodiments, the service classifier creates two records when processing the first data message flow, one for the forward direction and the other for the reverse direction, as the connection-tracker records in the forward and reverse directions are related but not identical.


When the service classifier 155 determines (at 310) that it has not previously processed another data message in the same flow as the received data message, it uses (at 315) the received data message's attribute set (e.g., its header values) to perform a classification operation to identify a service chain identifier for a service chain that has to be performed on the data message's flow. In some embodiments, the data message's attribute set that is used for the classification match operation is the data message flow's five tuple identifier (e.g., source and destination IP, source and destination port, and protocol), or its seven tuple identifier (i.e., its five tuple identifier plus source and destination MAC addresses).



FIG. 4 shows the service classifier 155 performing its service classification operation by referring to service classification rules 450 that are stored in classification rule storage 455. As shown, each classification rule includes a match tuple 457 and an action tuple 459. The match tuple includes one or more header values (e.g., five or seven tuple identifiers), while the action tuple 459 includes a service chain identifier (SCI).


After matching a data message's attribute set with the match tuple 457 of a service classification rule 450, the service classifier 155 (at 320) retrieves the SCI from the matching service classification rule's action tuple 459 and uses the retrieved SCI to identify a record 465 in an SCI attribute storage 460. Each record 465 in the SCI attribute storage correlates an SCI with an ordered list of services 444 of the service chain identified by the SCI, and a list 446 of container selectors 404 for selecting the containers to perform the services in the chain.
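A minimal sketch of this two-step lookup, with hypothetical rule and chain contents, might look as follows: a rule table maps a flow's attributes to an SCI, and an SCI attribute table maps the SCI to its ordered services and per-service selectors.

```python
# Sketch: five-tuple classification to an SCI, then SCI to chain attributes.
# All rule contents, service names, and selector names are illustrative.
classification_rules = [
    # (match tuple, action tuple holding the service chain identifier)
    {"match": {"dst_port": 443, "proto": "tcp"}, "action": {"sci": "chain-7"}},
]

sci_attributes = {
    # SCI -> ordered list of services and the selector used for each service
    "chain-7": {"services": ["firewall", "load_balancer"],
                "selectors": ["fw_selector", "lb_selector"]},
}

def classify(flow):
    src_ip, dst_ip, src_port, dst_port, proto = flow
    fields = {"src_ip": src_ip, "dst_ip": dst_ip, "src_port": src_port,
              "dst_port": dst_port, "proto": proto}
    for rule in classification_rules:
        if all(fields[k] == v for k, v in rule["match"].items()):
            return rule["action"]["sci"]
    return None

sci = classify(("10.0.0.5", "172.16.0.9", 49152, 443, "tcp"))
print(sci, sci_attributes[sci]["services"])  # chain-7 ['firewall', 'load_balancer']
```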


At 320, the service classifier 155 in some embodiments selects a service container for each service specified in the identified SCI record 465 in the storage 460, by using the container selector 404 specified for the service in the identified SCI record. When multiple candidate service containers exist for performing one service, the specified container selector for that service in some embodiments performs a load balancing operation to select one particular candidate service container for the received data message's flow.


In some embodiments, such a load balancing operation uses statistics (stored in container statistics storage 424) regarding data messages processed by each candidate service container to select the particular service container. As further described below, the service classifier updates the statistics for the containers associated with a service path each time that it processes a data message. In some embodiments, the load balancing operations of the container selectors are designed to distribute the data message load evenly across the candidate service containers, or unevenly based on a weighted distribution scheme.
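As a simple illustration, the following sketch selects the least-loaded candidate container by message count and updates the statistics store; the container names and the least-loaded policy are assumptions (a weighted scheme could instead divide each count by a per-container weight).

```python
# Sketch: a load-balancing container selector driven by per-container
# statistics, updated as each data message is processed. Values are
# illustrative.
container_stats = {"fw-1": {"msgs": 120, "bytes": 9_000},
                   "fw-2": {"msgs": 80,  "bytes": 6_500}}

def select_container(candidates, payload_len):
    # Least-loaded selection by message count.
    chosen = min(candidates, key=lambda c: container_stats[c]["msgs"])
    container_stats[chosen]["msgs"] += 1
    container_stats[chosen]["bytes"] += payload_len
    return chosen

print(select_container(["fw-1", "fw-2"], 1500))  # fw-2
```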


Also, in some embodiments, the container selectors for different services in a service chain work in conjunction to select the containers in a service path, e.g., in embodiments where selection of a first service container for a first service in the service path necessitates the selection of a second service container for a second service in the service path. Such is the case in some embodiments when one service container cannot be part of two different service paths (i.e., when two service paths cannot overlap).


Some embodiments group the containers into pods, with each pod comprising one or more service containers that are guaranteed to be co-located on the same host computer. Each pod in some embodiments is implemented by one virtual machine. In some embodiments, two or more of the service containers for a service path (e.g., all the service containers for the service path) are in the same pod, and two or more pods are candidates for implementing the same service chain. In some of these embodiments, the container selector 404 is a load-balancing pod selector that selects one pod from several pods that are candidates for implementing the service path of a service chain identified by the service classifier 155.


Next, at 325, the service classifier generates an SPI for the service path specified by the containers selected at 320, and stores the generated SPI in the connection tracker 410 for the received data message's flow identifier (e.g., its five or seven tuple identifier). To generate the SPI, the service classifier uses the SPI generator 406. In some embodiments, the SPI generator 406 uses a set of rules to define the SPI for a service path based on the identifiers associated with the containers selected at 320. For instance, the SPI is defined in some embodiments to be a concatenation of the UUIDs (universally unique IDs) of the service path containers. In some embodiments, the UUIDs are concatenated in the order of the service containers in the service path.


The service classifier stores (at 325) the generated SPI in the connection tracker 410 for the received data message's flow identifier so that it can later use this SPI to identify the service path (in the SPI attribute storage 415) for a subsequent data message in the same flow as the currently processed data message. To do this, the service classifier would match the subsequent data message's flow ID (e.g., its five or seven tuple identifier) with the flow ID in a match tuple 492 of a record 494 in the connection tracker 410, and then retrieve the SPI specified by the action tuple 496 of the record with the matching flow ID.
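The sketch below illustrates such a connection tracker, assuming a five-tuple key; it also stores a reversed key so that a responsive flow maps to the same SPI, per the bidirectional-record embodiments described below.

```python
# Sketch: a connection tracker keyed by five-tuple. The first message of a
# flow records the generated SPI, and a reversed key is recorded as well so
# the reply flow resolves to the same service path. Values are illustrative.
connection_tracker = {}

def record_flow(key, spi):
    connection_tracker[key] = spi
    # Reverse key: swap source and destination so the reply flow matches.
    src_ip, dst_ip, src_port, dst_port, proto = key
    connection_tracker[(dst_ip, src_ip, dst_port, src_port, proto)] = spi

fwd = ("10.0.0.5", "172.16.0.9", 49152, 443, "tcp")
record_flow(fwd, "dc1:svcA/dc1:svcB")
rev = ("172.16.0.9", "10.0.0.5", 443, 49152, "tcp")
print(connection_tracker[rev])  # same SPI for the reverse flow
```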


As mentioned above, the service classifier in some embodiments uses the SPI record in the connection tracker 410 to process data messages of a flow that is in response to the flow of the currently processed data message. In some embodiments, the service classifier uses the same SPI record for the forward flow and reverse flow. In other embodiments, the service classifier creates separate connection-tracker records for the forward and reverse flows. Some embodiments use the same SPI for the reverse flow in order to ensure that the same set of service containers examine both the initial data message flow in the forward direction and the responsive data message flow in the reverse direction.


After storing the record(s) in the connection tracker 410, the service classifier transitions to 330. The process also transitions to 330 from 310 when it determines that it has previously processed the received data message's flow; in that case, it identifies the SPI for this flow from the connection tracker, and then uses this SPI to identify the service containers in the service path for the data message.


At 330, the service classifier passes the data message to the service forwarding element to forward to the first service container. In some embodiments, the service classifier provides the specified service path identifier to the service forwarding element to use to perform its classification operations for forwarding the data messages of this flow. In other embodiments, the service forwarding element does not use the service path identifier for forwarding the data messages of the particular data message flow, but rather uses a MAC redirect approach.


In some embodiments, the service classifier specifies the data message's destination MAC address as the MAC address of the first service container and provides this data message to the service forwarding element to forward to the first service container. In other embodiments, the service classifier specifies the data message's destination MAC as a MAC address associated with the service forwarding element, which uses the data message's source MAC address to perform its service forwarding operation, as further described below. In some of these embodiments, the service classifier specifies the source MAC address as a MAC address associated with the start of a particular service path to allow the service forwarding element to identify the first service container for the service path.


At 335, the service classifier increments the statistics of the service containers in the identified service path. As mentioned above, the service classifier maintains these statistics in the statistic storage 424. Different statistics are maintained in different embodiments. Examples of such statistics include the number of data messages, the number of payload bytes forwarded, etc. Hence, in some embodiments, the service classifier increments the statistics by incrementing each service container's message count by one, and/or adding the processed message's payload size to the byte count of each service container in the service path. After 335, the process 300 ends.



FIG. 5 presents a process 500 that conceptually illustrates the operation of the service forwarding element 160 in forwarding a data message through a service path identified by the service classifier 155. This forwarding operation uses MAC redirect and is implemented in part by a Linux bridge 240 inside its root namespace 215, and in part by hook functions in the virtual interface endpoints (e.g., Ethernet ports 206) of the service containers and in the virtual interface endpoints (e.g., Ethernet ports 208) defined in the Linux namespace. These virtual interface endpoints are configured to perform match-action forwarding operations needed for implementing the MAC redirect forwarding.


As shown, the process 500 starts (at 505) when it receives a data message for forwarding through the service path. Next, at 510, the process performs a classification operation to identify the virtual interface endpoint of the Linux bridge associated with the first service node. As mentioned above, the service classifier in some embodiments defines this destination MAC to be the destination MAC of the virtual interface endpoint connected to the first service container. In some of these embodiments, the classification operation (at 510) compares the data message's destination MAC with the match criteria of forwarding rules in a lookup table that associates different destination MAC addresses with different virtual interface endpoint identifiers. Under this approach, the process retrieves the identifier for the next hop virtual interface endpoint from the forwarding rule that has the data message's destination MAC as its match criteria.


In other embodiments, the process 500 performs the classification operation differently. For instance, in some embodiments, the process 500 uses the below-described three classification operations 525-535, which first identify the direction of the service flow, then use the source MAC of the data message to identify the destination MAC of the first service node, and lastly use the identified destination MAC to identify the virtual interface endpoint. In some of these embodiments, the service classifier does not set the data message's destination MAC address to be the MAC address of the first service node, but instead sets this address to be the destination MAC address of the bridge.


Next, at 515, the process forwards the data message to the next service container through the identified virtual interface endpoint. The service container performs its service operation (e.g., middlebox service operation, etc.) on the data message, and then provides (at 520) the data message back to the service forwarding element. In some embodiments, the service container 235, its associated Ethernet port 206, or the associated bridge interface endpoint 208 changes the source MAC address of the data message to be a MAC address associated with the service container (e.g., associated with its Ethernet port 206), as the service forwarding element uses source MAC addresses to perform its next-hop service determination.


The process 500 then performs three classification operations at 525, 530 and 535, which were briefly mentioned above. The first classification operation (at 525) compares the L3 source and/or destination network addresses of the data message with classification rules that are defined to differentiate egress data messages from ingress data messages. For instance, in some embodiments, one classification rule determines whether the data message's source L3 address is in the CIDR of the SDDC subnet in order to determine whether the data message is part of an upstream flow exiting the subnet, while another classification rule determines whether the data message's destination L3 address is in the CIDR of the SDDC subnet in order to determine whether the data message is part of a downstream flow entering the subnet.


In some embodiments, each of these classification rules identifies a different lookup table for performing the second classification operation at 530. Hence, after identifying the direction of the data message's flow (upstream or downstream) in the first classification operation at 525, the process 500 uses the lookup table identified by the first classification operation to perform the second lookup at 530, this time based on the current source MAC address of the data message. In some embodiments, this second classification operation matches the data message's current source MAC address with the match criteria (specified in terms of a source MAC) of a classification rule that provides, in its action tuple, the destination MAC of the next hop along the service path. The source MAC identifies the prior service node in the service chain for the direction identified at 525 (e.g., in the table identified at 525), and hence can be used to identify the next service node in the service chain.


In some embodiments, the second classification operation (at 530) changes the data message's destination MAC address to the MAC address of the next hop in the service path. When the service path has not been completed (i.e., when the last service container has not yet processed the data message), the next hop in the service path is another service container. On the other hand, when the service path has finished (i.e., when the last service container has processed the data message), the next hop in the service path is an egress destination MAC that has been defined for the service path. This egress destination MAC in some embodiments is a MAC address associated with a switch or router that forwards the data message to another destination in the SDDC, or is a MAC address associated with a gateway that forwards the data message out of the SDDC or an SDDC subnet.


After the destination MAC of the data message is redefined at 530, the process performs a third classification operation (at 535) to identify the virtual interface endpoint of the Linux bridge associated with the data message's destination MAC. This classification operation in some embodiments compares the data message's destination MAC with the match criteria of forwarding rules in a lookup table that associates different destination MAC addresses with different virtual interface endpoint identifiers. Under this approach, the process retrieves the identifier for the next hop virtual interface endpoint from the forwarding rule that has the data message's destination MAC as its match criteria.
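Putting the three operations together, the following sketch mimics the pipeline with hypothetical subnet, MAC, and endpoint values: direction from L3 addresses, next-hop destination MAC from the current source MAC, and bridge endpoint from the new destination MAC.

```python
# Sketch of the three match-action stages. All addresses, MACs, and
# endpoint names are illustrative.
import ipaddress

SUBNET = ipaddress.ip_network("10.0.0.0/24")   # hypothetical SDDC subnet

upstream_next_hop = {   # current source MAC -> next hop's destination MAC
    "aa:01": "aa:02",   # after container 1, go to container 2
    "aa:02": "ee:ff",   # after container 2, go to the egress MAC
}
downstream_next_hop = {"aa:02": "aa:01", "aa:01": "ee:00"}

endpoint_for_mac = {"aa:01": "vethx", "aa:02": "vethy",
                    "ee:ff": "egress", "ee:00": "egress"}

def forward(src_ip, dst_ip, src_mac):
    # Stage 1: direction from L3 addresses (upstream if source is in subnet).
    if ipaddress.ip_address(src_ip) in SUBNET:
        table = upstream_next_hop
    elif ipaddress.ip_address(dst_ip) in SUBNET:
        table = downstream_next_hop
    else:
        raise ValueError("flow direction unknown")
    # Stage 2: rewrite the destination MAC to the next hop's MAC.
    dst_mac = table[src_mac]
    # Stage 3: resolve the new destination MAC to a bridge endpoint.
    return dst_mac, endpoint_for_mac[dst_mac]

print(forward("10.0.0.5", "8.8.8.8", "aa:01"))  # ('aa:02', 'vethy')
```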


After 535, the process 500 determines (at 540) whether the virtual interface endpoint identified at 535 is that of another service container. When the identified virtual interface endpoint is not that of another service container, the service path has been completed. The operation 540 in some embodiments is not actually performed by the service forwarding element, but is included only to illustrate the end of the service path in FIG. 5.


When the virtual interface endpoint identified at 535 is that of another service container, the service path forwarding of the process 500 has not finished. Hence, the process returns to 515 to forward the data message to the next service container on the path through its identified virtual interface endpoint. Otherwise, the service-path forwarding process 500 ends. As mentioned above, when the service path finishes, the destination MAC address that was defined in the last iteration through 530 identifies the virtual interface endpoint of the egress port that is defined for the service path. Hence, at the end of the service path in these embodiments, the Linux bridge forwards the data message to this virtual interface endpoint, from where it will be forwarded to its next destination.


The following example by reference to the host computer of FIG. 2 further illustrates the MAC-redirect forwarding of the service forwarding element of some embodiments. In this example, the service path includes the service container 235a followed by the service container 235b, for an upstream data message on which two service operations have to be performed on its way out of the SDDC. When the Linux bridge 240 receives this upstream data message, the data message has a destination MAC address of the vethx interface of the bridge, as it needs to be first processed by the service container 235a.


Hence, the bridge passes the data message to the vethx interface, which in turn forwards it to the service container 235a through the eth0 interface 206 of this service container. The service container performs its service on the data message, and passes it back to the vethx interface through the eth0 interface. In passing the data message back to the vethx interface, the service container or its associated eth0 interface specifies the source MAC address of the data message as the source MAC address of the eth0 interface.


The vethx interface then performs a first classification operation, which, based on the data message's L3 source address being in the ingress CIDR, results in a determination that the data message is in the upstream direction. Based on this determination, the vethx interface performs a second classification operation on an upstream lookup table that matches the current source MAC address with a next hop forwarding rule that identifies the next hop's destination MAC address. After the vethx interface identifies the next hop address to be the MAC address of the vethy interface, the bridge provides the data message to the vethy interface. The vethy interface forwards the data message to the service container 235b through the eth0 interface 206 of this service container. The service container performs its service on the data message, and passes it back to the vethy interface through the eth0 interface. Again, the source MAC address of the data message is changed to the source MAC address of the eth0 interface of the service container 235b.


The vethy interface then performs a first classification operation, which, based on the data message's L3 source address being in the ingress CIDR, results in a determination that the data message is in the upstream direction. Based on this determination, the vethy interface performs a second classification operation on an upstream lookup table that matches the current source MAC address with a next hop forwarding rule that identifies the next hop's destination MAC address. In this case, the next hop address is that of the egress L2 address of the bridge. Hence, after the vethy interface identifies the next hop address to be the egress MAC address of the bridge, the bridge provides the data message to its egress interface for forwarding out of the host computer.


The service forwarding element 160 uses other forwarding methods in other embodiments. For instance, in some embodiments, the service forwarding element uses the SPI for the identified service path and a current hop count to perform its forwarding operations. In some embodiments, the SPI and current hop count are values that the service classifier initially creates and stores on the host computer. For each service hop, the service forwarding element compares the SPI and the current hop count with the match criteria of next hop forwarding rules, which have action tuples that provide the virtual interface endpoint identifier for the virtual interface connected to the next hop. As the service forwarding element forwards the data message through its successive service hops, it adjusts (e.g., decrements) its current hop count to correspond to the next service container position in the service path.
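A minimal sketch of this SPI/hop-count approach, with illustrative rule contents, is shown below: each lookup matches the (SPI, hop count) pair to a virtual interface endpoint, and the hop count is decremented after every hop.

```python
# Sketch: SPI/hop-count forwarding. Rule contents and endpoint names are
# hypothetical.
next_hop_rules = {
    # (spi, hop_count) -> virtual interface endpoint of the next hop
    ("path-9", 2): "vethx",   # first service hop
    ("path-9", 1): "vethy",   # second service hop
    ("path-9", 0): "egress",  # path complete
}

def forward_hop(spi, hop_count):
    endpoint = next_hop_rules[(spi, hop_count)]
    return endpoint, hop_count - 1   # decrement for the next lookup

ep, hc = forward_hop("path-9", 2)
while ep != "egress":
    print("deliver to", ep)          # vethx, then vethy
    ep, hc = forward_hop("path-9", hc)
```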


In some embodiments, the service forwarding element uses the SPI/hop-count approach when the service containers execute on different host computers and/or execute in different datacenters. In some such embodiments, the SPI/hop-count information is embedded in tunnel headers that encapsulate the data messages as they are forwarded between the different host computers and/or different datacenters.


As mentioned above, the SDDC 100 in some embodiments has several host computers that execute sets of service containers for performing the same service chain on data message flows received at the datacenter. In some such embodiments, the host computers only execute service containers that perform operations associated with service chains, and do not execute any other data compute end node (i.e., any other container or virtual machine that is the source or destination machine for a data message flow). As such, these host computers will be referred to below as service host computers. In these embodiments, other host computers in the SDDC 100 execute the machines that serve as the compute end nodes.


In some embodiments, the service classification, forwarding, and service operations are distributed among these service host computers to distribute the service load and to provide fault tolerance in case one or more service host computers fail. A set of one or more frontend forwarding elements (e.g., load balancers) randomly or deterministically distributes data message flows to these service host computers, which then perform a service classification operation on each data message flow that they receive to determine whether they should service process the data message flow, or should redirect it to another service host computer for service processing.



FIGS. 6 and 7 illustrate examples of three service host computers 605, 607 and 609 performing distributed service classification and forwarding operations of some embodiments. Each of these service host computers in some embodiments executes two clusters of service containers for performing two different services. Each cluster in this example includes more than one container. As further described below by reference to FIG. 9, the service classification and forwarding operations are distributed among the service host computers 605, 607 and 609, so that these computers implement the same service classification and forwarding operations (e.g., process the same service classification and forwarding rules) for similar service containers that execute on them.


In the examples of FIGS. 6 and 7, a top-of-rack (TOR) switch 615 selects the first service host computer 605 to process two different data message flows, as part of a load balancing operation that it performs to distribute the load across different host computers that execute service containers that perform service operations. This TOR is part of a cluster of two or more TORs that perform such frontend load balancing operations for a cluster 680 of three service host computers 605, 607 and 609. These frontend load balancing operations are deterministic (e.g., are based on flow-identifier hashes and hash table lookups) in some embodiments, while being random in other embodiments.



FIG. 6 illustrates that upon receiving a first data message flow 622, a virtual interface endpoint 612 of the Linux OS 614 of the first service host computer 605 passes the data message to a service classifier 655 that has registered as a hook in the XDP (eXpress Data Path) callback mechanism of this OS. The service classifier 655 of the first service host computer 605 uses the flow's attribute set (e.g., five or seven tuple identifier) to perform a first service classification operation that identifies a first service chain that specifies a set of services to perform on the data message.


Based on the first service chain's identifier, the service classifier of the first service host computer determines that service containers executing on the first host computer 605 have to perform the first service chain's set of services on the first data message flow 622. For instance, in some embodiments, the service classifier computes a hash value from the service chain identifier and then looks up this hash value in a hash lookup table that correlates hash ranges with different service host computer identifiers. Some embodiments compute the hash value based on other parameters in conjunction with, or instead of, the service chain identifier. Examples of such other parameters include the source network address (e.g., source IP address), source port, SPI, etc.


After its hash lookup identifies the first host computer 605 as the service host computer that should process the received data message flow, the service classifier 655 of the first service host computer 605 selects the service containers 632 and 634 on the first host computer to implement the service path that performs the services in the identified service chain. The service classifier then hands off the data message flow 622 to the service forwarding element 642 executing on the first host computer 605 to sequentially forward the data messages of the first data message flow to the two identified service containers 632 and 634 on the first host computer 605 so that these service containers can perform their service operations on these data messages. After the service processing, the data messages are forwarded to their next hop destination (e.g., to the destination identified by their original layers 3 and 4 header values).



FIG. 7 illustrates the processing of the second data message flow 724, which the TOR 615 also initially forwards to the first service host computer 605. Upon receiving a data message of the second data message flow 724 at the virtual interface endpoint 612, the data message is again forwarded to the service classifier 655, as it is registered as a hook function for this interface. The service classifier 655 then uses the flow's attribute set (e.g., five or seven tuple identifier) to perform a second service classification operation that identifies a second service chain that specifies a second set of services to perform on the data message.


Based on the second service chain's identifier, the first host computer 605 determines that service containers on the second host computer 607 have to perform the second set of services on the second data message flow 724. Again, in some embodiments, the service classifier computes a hash value from the service chain identifier and/or other parameters (such as the source IP address, source port address, SPI, etc.) and then looks up this hash value in a hash lookup table that correlates hash ranges with different service host computer identifiers. The hash lookup in FIG. 7 identifies the second host computer 607 as the service host computer that should process the received data message flow.


Hence, in FIG. 7, the service classifier 655 hands back the data messages of the second flow 724 to the virtual interface endpoint 612 for forwarding to the second host computer 607. Once a data message of the second flow is received at this virtual interface endpoint on the second host, it is passed to the service classifier 755 executing on this host, which then performs a classification operation to identify the second service chain's identifier for this data message.


Based on the second service chain's identifier (e.g., the hash of this identifier), the service classifier 755 on the second host computer 607 determines that service containers on the second host computer 607 have to perform the second set of services on the received data message and its flow 724. The service classifier then identifies the two service containers 736 and 738 on its host that have to implement the service path that performs the services in the identified service chain. It then hands off the received data message of the second flow 724 to the service forwarding element 742 executing on the second host computer 607, which sequentially forwards it to each of the two service containers 736 and 738 on the second host computer 607 so that these service containers can perform their service operations on these data messages. After the service processing, the data messages are forwarded to their next hop destination (e.g., to the destination identified by their original layers 3 and 4 header values).



FIG. 8 illustrates a process 800 that each service host computer (e.g., computers 605, 607 and 609) performs in some embodiments, in order to perform service operations on a received data message flow, or to redirect the data message to another service host computer for service processing. As shown, the process 800 starts (at 805) when the service classifier 155 of a service host computer receives a data message for processing from the virtual interface endpoint 612 of its OS. The process then performs (at 810) a classification operation that matches the received data message's attribute set (e.g., its five or seven tuple identifier) with the match criteria of a service classification rule, and retrieves the SCI from the matching rule's action tuple.


The service classifier then uses (at 815) the retrieved SCI to determine whether service containers executing on its host computer should perform the service operations of the service chain identified by the SCI. To do this, the service classifier in some embodiments computes a hash value from the SCI and one or more other parameters (e.g., source IP address, source port, SPI, etc.) associated with the data message or the identified service chain, and then looks up this hash value in a hash lookup table that correlates hash ranges with different service host computer identifiers. In some embodiments, when a service host computer fails, the hash range associated with that service host computer is automatically assigned to one or more other service host computers, which allows the service classification and forwarding operations of the service host computers to be fault tolerant.
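The failover behavior described above can be sketched as a reassignment of the failed host's hash ranges to the surviving hosts; the round-robin survivor choice below is one illustrative policy, not one prescribed by this document:

```python
# Illustrative sketch only: when a host fails, hand its hash ranges to
# surviving hosts so classification and forwarding remain fault tolerant.
def reassign_ranges(ranges, failed_host):
    survivors = [host for _, host in ranges if host != failed_host]
    reassigned = []
    for index, (upper_bound, host) in enumerate(ranges):
        if host == failed_host:
            host = survivors[index % len(survivors)]  # round-robin pick
        reassigned.append((upper_bound, host))
    return reassigned

ranges = [(0x5555, "host-605"), (0xAAAA, "host-607"), (0xFFFF, "host-609")]
print(reassign_ranges(ranges, "host-607"))
# [(0x5555, 'host-605'), (0xAAAA, 'host-609'), (0xFFFF, 'host-609')]
```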


When the service classifier determines (at 815) that its host's service containers should perform the service operations of the identified service chain, the service classifier performs (at 825) the operations 320-335 of the process 300. On the other hand, when it determines (at 815) that another host's service containers should perform the service operations of the identified service chain, the service classifier hands back (at 820) the data message to the virtual interface endpoint of its host OS for forwarding to the other host computer. After 820 and 825, the process 800 ends.


In some embodiments, the process 800 configures one or more frontend forwarding elements (e.g., frontend load balancing TORs 615) each time that it performs a classification operation for a new data message flow. Specifically, after performing its classification operation at 810, the process 800 sends an in-band or out-of-band data message (i.e., a message sent through the data path or through a control path) that associates the data message's flow identifier (e.g., five or seven tuple identifier) with the identifier of the service host computer that the process identifies (at 815) for performing the service chain on the data message's flow. A frontend forwarding element that receives such a message creates a record in its connection tracker that associates the received flow identifier with the received host identifier, and then uses this record to process subsequent data messages of the flow that it receives after creating the record.
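A minimal sketch of such a connection-tracker record follows, assuming the flow identifier is a five-tuple; the names (ConnectionTracker, learn, lookup) are hypothetical:

```python
# Illustrative sketch only: the frontend's connection-tracker record that
# pins a flow to the service host chosen for it.
from typing import Dict, Optional, Tuple

FlowId = Tuple[str, str, int, int, str]  # src IP, dst IP, src port, dst port, protocol

class ConnectionTracker:
    def __init__(self) -> None:
        self._records: Dict[FlowId, str] = {}

    def learn(self, flow: FlowId, host_id: str) -> None:
        """Record the association reported by the service host."""
        self._records[flow] = host_id

    def lookup(self, flow: FlowId) -> Optional[str]:
        """Subsequent data messages of a known flow skip re-classification."""
        return self._records.get(flow)
```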



FIG. 9 further illustrates the distributed service chain classification and forwarding architecture of FIGS. 6 and 7. This architecture eliminates discrete service chain classifiers and service forwarding elements in a datacenter by replacing them with distributed service classification and forwarding logic on the service host computers 605, 607 and 609 that execute the service containers (e.g., the service containers that implement VNFs in a 5G telecommunication network). The service host computers are also referred to in this document as backend servers.


As shown, a server set 110 provides the same set of service classification rules and service forwarding rules to each of the service host computers 605, 607 and 609, and configures the virtual interface endpoints on these computers to use these rules. By providing the same set of service classification and forwarding rules to each of the service host computers, the server set configures these host computers to implement distributed service classification and forwarding operations, as depicted by the distributed service classifier 955 and distributed forwarding element 960 in FIG. 9. These classification and forwarding operations are distributed because they are performed identically on the service host computers 605, 607 and 609, based on identical sets of classification and forwarding rules on these computers.


In some embodiments, each service host computer (backend server) obtains from the server set 110 (1) service classification rules that correlate flow identifiers with service chain identifiers, (2) a list of service identifiers for each service chain identifier, (3) a list of container identifiers that identify the service containers that are candidates for implementing each service identified on the list of service identifiers, (4) the MAC address of each service container identified on the list of container identifiers, (5) a list of other service host computers for receiving redirected data message flow traffic, (6) a MAC address for each of these other service host computers, (7) a hash function for generating hash values for the received data messages, and (8) a hash lookup table that associates hash values with identifiers of service host computers.
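For illustration only, this per-host configuration data might be represented as follows; all field names and values are hypothetical, with the numbered comments keyed to the items enumerated above:

```python
# Illustrative sketch only: one possible representation of the configuration
# data that each service host computer obtains from the server set.
host_service_config = {
    "classification_rules": [                                     # (1)
        {"match": {"protocol": "tcp", "dst_port": 80}, "sci": "SC-1"},
    ],
    "service_chains": {"SC-1": ["firewall", "load-balancer"]},    # (2)
    "service_candidates": {                                       # (3)
        "firewall": ["ctr-736"],
        "load-balancer": ["ctr-738"],
    },
    "container_macs": {                                           # (4)
        "ctr-736": "02:00:00:00:07:36",
        "ctr-738": "02:00:00:00:07:38",
    },
    "peer_hosts": ["host-605", "host-609"],                       # (5)
    "peer_host_macs": {                                           # (6)
        "host-605": "02:00:00:00:06:05",
        "host-609": "02:00:00:00:06:09",
    },
    "hash_function": "sha256-16bit",                              # (7)
    "hash_ranges": [                                              # (8)
        (0x5555, "host-605"), (0xAAAA, "host-607"), (0xFFFF, "host-609"),
    ],
}
```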


In some embodiments, the server set 110 collects statistics generated by the service classifiers 955 on the service host computers. These statistics are pushed (published) to the server set from the service host computers in some embodiments, while in other embodiments they are pulled (retrieved) from the service host computers by the server set 110. The server set analyzes these statistics and, based on this analysis, adds or removes service host computers from a cluster that performs one or more service chains. Also, in some embodiments, the server set deploys and configures multiple clusters of service host computers and uses different service host computer clusters for different sets of service chains. In some such embodiments, the server set can move one service chain from one service host computer cluster to another service host computer cluster.


The service classifier of some embodiments selects all of a service chain's service containers on its own host computer. In other embodiments, different service containers for a service chain can operate on different host computers. In some of these embodiments, the different service containers can execute on host computers in different datacenters. To facilitate the forwarding of the data messages between different datacenters for service processing, some embodiments deploy service forwarding proxies in the datacenters. A service forwarding proxy in some embodiments is simply another service node in the service chain; its operation is to forward a data message to another service proxy in a subsequent datacenter, or to receive a data message from another service proxy in a previous datacenter.



FIG. 10 presents an example that illustrates the use of such service forwarding proxies. Specifically, this figure illustrates a logical view 1005 of a service chain that is performed by two service containers 1020 and 1022. It also illustrates a multi-cloud implementation 1010 of the service chain, in which the first service container 1020 executes on a first service host computer 1030 in a first datacenter 1040, and the second service container 1022 executes on a second service host computer 1032 in a second datacenter 1042. As further described below, this multi-cloud implementation 1010 uses service forwarding proxies 1050 and 1052 in the first and second datacenters 1040 and 1042 to pass the data messages from the first service container 1020 in the first datacenter 1040 to the second service container 1022 in the second datacenter 1042.


In the example of FIG. 10, the service processing of a data message 1056 starts in the first datacenter 1040 and finishes in the second datacenter 1042. In the first datacenter, a service classifier 1090 executing on the first service host computer 1030 identifies the service chain for the data message and the service containers to implement this service chain. It then generates a SPI that identifies the service path that includes the identified service containers, and then stores the SPI in a memory of the first host computer for later use by the service proxy 1050.


After the service classifier 1090 identifies the service path, it then passes the data message to the first service container 1020 through a service forwarding element 1070 executing on the first host computer 1030. The first service container 1020 then performs its operation and passes the message back to the service forwarding element 1070. Based on its forwarding rules, the service forwarding element then determines that the next service node in the service chain is the service forwarding proxy 1050 for forwarding the data message to another datacenter. In some embodiments, the service forwarding proxy is implemented as a container. In other embodiments, the service forwarding proxy is implemented as a function in the OS, like the service classifier and the service forwarding element, and the service forwarding element passes the data message to the service forwarding proxy through shared memory.


The service forwarding proxy 1050 then encapsulates the data message with an encapsulating header and stores in this header the service path identifier (SPI) that identifies the service path for the second datacenter. This SPI in some embodiments is a globally unique SPI that uniquely identifies the service path in each datacenter that has a service container on the service path. In the example of FIG. 10, the SPI uniquely identifies the service path in both the first and second datacenters 1040 and 1042.


In some embodiments, the service forwarding proxy 1050 performs one or more classification operations to identify the global SPI and the destination address for the service forwarding proxy 1052 in the subsequent datacenter 1042. The service forwarding proxy 1050 encapsulates the data message with an encapsulation header that includes the global SPI and the network address of the service forwarding proxy 1052 (e.g., the layer 3 network address of proxy 1052), and then passes the data message to an intervening network to forward to the service forwarding proxy 1052.


In some embodiments, the globally unique SPI includes a UUID (universally unique identifier) for each service and a datacenter ID for each service UUID. The globally unique SPI in some embodiments is generated by the service classifier 1090 of the first datacenter 1040. In other embodiments, the service classifier 1090 generates a local SPI for the first datacenter 1040, and the service forwarding proxy 1050 converts this local SPI to a globally unique SPI.
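One illustrative encoding of such a globally unique SPI is a string of datacenter-ID/service-UUID pairs in service-path order; the format and the name make_global_spi are hypothetical:

```python
# Illustrative sketch only: building a globally unique SPI from a per-service
# UUID paired with the datacenter ID of that service's container.
import uuid

def make_global_spi(hops):
    """hops: list of (service_uuid, datacenter_id) pairs, in path order."""
    return "/".join(f"{dc}:{svc}" for svc, dc in hops)

spi = make_global_spi([
    (uuid.uuid4(), "dc-1040"),  # first service container, first datacenter
    (uuid.uuid4(), "dc-1042"),  # second service container, second datacenter
])
# e.g. "dc-1040:8c6f.../dc-1042:2b91..."
```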


With the global SPI, the service forwarding proxy 1050 in some embodiments includes in the encapsulating header a next service hop identifier that identifies the next service or the next service container to process the data message. For instance, when the global SPI has the UUID of each service container, the next service hop identifier is a reference to the service container UUID location in the global SPI in some embodiments, or is set to this container's UUID in other embodiments. In still other embodiments, the service forwarding proxy 1050 does not include a next service hop identifier in the encapsulating header.


Upon receiving the encapsulated data message, a service forwarding proxy 1052 in the second datacenter 1042 decapsulates the data message (removes the encapsulating header from the data message), extracts the embedded SPI and next-hop identifier from the removed header, and uses the SPI and next-hop identifier to identify the next hop service container in the service path that should process the data message in the second datacenter. It then looks up the identified service container's network address (e.g., MAC address) in the second datacenter, and then provides the data message to a service forwarding element 1072 executing on the second service host computer 1032 to forward to the service container 1022.
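The receiving proxy's steps can be sketched as follows, assuming a hypothetical length-prefixed JSON encapsulating header and the illustrative SPI string format sketched earlier; none of these encodings are prescribed by this document:

```python
# Illustrative sketch only: decapsulating the data message and resolving the
# next hop service container from the global SPI.
import json
from typing import Optional, Tuple

def decapsulate(encapsulated: bytes) -> Tuple[str, Optional[int], bytes]:
    header_len = int.from_bytes(encapsulated[:2], "big")
    header = json.loads(encapsulated[2:2 + header_len])
    payload = encapsulated[2 + header_len:]
    return header["spi"], header.get("next_hop"), payload

def next_container_mac(spi: str, next_hop: Optional[int],
                       local_dc: str, container_macs: dict) -> str:
    """Find the hop of the SPI that belongs to this datacenter (or the hop
    referenced by next_hop), then look up that container's MAC address."""
    hops = [hop.split(":") for hop in spi.split("/")]
    for index, (dc, service_uuid) in enumerate(hops):
        if dc == local_dc and (next_hop is None or next_hop == index):
            return container_macs[service_uuid]
    raise LookupError("no hop of this service path is in this datacenter")
```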


In other embodiments, the service forwarding proxy 1052 does not need a next-hop identifier, as it is configured to identify the next service node in the service chain based on the global SPI that it extracts from the encapsulating header. The service forwarding proxy 1052 in some of these embodiments performs a classification operation based on the extracted global SPI in order to identify the next hop container. In still other embodiments, the service forwarding proxy 1052 does not use the extracted SPI to identify the next hop service container, but instead passes the SPI (and the next-hop identifier when provided) to the service forwarding element 1072 to use to identify the next hop service container. In these embodiments, the service forwarding elements 1070 and 1072 perform their next hop lookups based on the SPI (and next hop identifiers when provided).


The service path's service processing finishes once the service container 1022 processes the data message. In some embodiments, the service forwarding element 1072 sets the destination MAC address to identify the virtual interface endpoint of the egress port that is defined for the service path. For instance, at the end of the service path in these embodiments, the Linux bridge forwards the data message to this virtual interface endpoint, from where the data message is forwarded to its next destination.


In some embodiments, the service forwarding proxy operates on a different computer than the service host computer that executes the service classifier and/or the service containers. However, in other embodiments (like the embodiment illustrated in FIG. 10), the service forwarding proxy is implemented in a distributed manner, like the service classifier and service forwarding element. Also, in some embodiments, multiple service containers on multiple service host computers in one datacenter implement part of the service path. In some such embodiments, the service forwarding proxy operates on the last service host computer in the datacenter when the service path spans multiple datacenters and a data message flow has to be forwarded to another datacenter to continue with its service processing along the service path.


In some embodiments, the service classifier in a first datacenter (in which the first service container of the service path operates) identifies all the service containers for implementing the service chain, including other service container(s) in any subsequent datacenter(s), as described above by reference to FIG. 10. However, in other embodiments, the initial service classifier only selects the service container(s) in its own datacenter, and leaves the selection of the service container(s) in the other datacenter(s) to the service classifier(s) in the subsequent datacenter(s).


In FIG. 10, each datacenter is shown to include one service container that performs one service operation of a very simple service chain. The service chain can be much larger in other embodiments. For instance, in some embodiments, multiple service containers in one datacenter (e.g., in the first datacenter) perform multiple service operations of a service chain on the data message, before the data message is forwarded to another datacenter. One or more service containers in this other datacenter can then perform one or more of the service operations on the data message, before the data message is forwarded to yet another datacenter for further service processing of the service chain. Each time the data message goes from one datacenter to another, it is encapsulated in some embodiments with a global SPI (and next hop identifier when used) to allow the new datacenter to identify the service path and the next service container in the service path.



FIG. 11 illustrates additional attributes of service forwarding proxies in some embodiments. As shown, two service forwarding proxies in two datacenters (such as proxies 1050 and 1052 in datacenters 1040 and 1042) can be used in some embodiments to forward many data message flows between the two datacenters for service processing. Also, in some embodiments, a service forwarding proxy in a datacenter can forward data messages to, and receive data messages from, multiple other service forwarding proxies in multiple other datacenters to implement service chains that span different sets of datacenters.


For instance, the service forwarding proxy 1050 in the datacenter 1040 encapsulates and forwards data message flows to service forwarding proxy 1052 in the datacenter 1042, and data message flows to service forwarding proxy 1114 in the datacenter 1124. The service forwarding proxy 1050 in the datacenter 1040 also receives and decapsulates data message flows from service forwarding proxy 1052 in the datacenter 1042, and data message flows from service forwarding proxy 1114 in the datacenter 1124.


As shown in FIG. 11, each service forwarding proxy in some embodiments includes (1) a forwarding proxy 1130 for encapsulating data messages and sending the encapsulated data messages to another service forwarding proxy of another datacenter, and (2) a receiving proxy 1132 for receiving encapsulated data messages from another service forwarding proxy of another datacenter and decapsulating the received data messages for processing in its datacenter.



FIG. 12 presents a process 1200 that conceptually illustrates using service containers in different datacenters to perform the services associated with a service chain on a data message. As shown, the process 1200 starts (at 1205) when a host computer receives a data message for service processing. This data message is forwarded in some embodiments to the service host computer (e.g., from a frontend load balancer), while in other embodiments the data message has been generated by a machine (e.g., a container or virtual machine) executing on the host computer.


Next, at 1210, the service classifier 155 executing on the host computer performs a service classification operation to identify (1) a service chain for the data message, (2) a service path to implement the service chain, and (3) a SPI to identify this service path. In some embodiments, the service classifier 155 performs this operation by performing the process 300 of FIG. 3. Also, in some embodiments, the SPI specified by the service classifier is a globally unique SPI across the datacenters, while in other embodiments it is a local SPI that is converted into a global SPI by a service forwarding proxy at a later stage. In some embodiments, the service classifier stores (at 1210) the specified SPI in its host computer memory for later use by its associated service forwarding element and/or service forwarding proxy, as further described below.


For the embodiments illustrated by FIG. 12, the classification operation (at 1210) specifies the data message's destination MAC address as the MAC address of the first service container and provides this data message to a service forwarding element executing on its host computer to forward to the first service container. As mentioned above, the service classifier in some embodiments specifies the data message's destination MAC address to be the MAC address of the service forwarding element, as in these embodiments the service forwarding element performs its service forwarding based on the source MAC of the data message. In some embodiments, the service classifier also specifies the source MAC address as a MAC address associated with the start of a particular service path to allow the service forwarding element to identify the first service container for the service path.


In some embodiments, the service classifier provides the specified service path identifier to the service forwarding element to use to perform its classification operations for forwarding the data messages of this flow. In some of these embodiments, the service classifier provides a next-hop service index (identifying the next service to perform in the service path) that the service forwarding element (1) uses to perform its next-hop determination, and (2) adjusts (e.g., decrements) to perform its subsequent next-hop determinations as it passes the data message to the service containers.


At 1215, the service forwarding element performs a classification operation to identify the virtual interface endpoint of the Linux bridge associated with the next service node.


The classification operation (at 1215) in some embodiments compares the data message's destination MAC with the match criteria of forwarding rules in a lookup table that associates different destination MAC addresses with different virtual interface endpoint identifiers. Under this approach, the process retrieves the identifier for the next hop virtual interface endpoint from the forwarding rule that has the data message's destination MAC as its match criteria.
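A minimal sketch of such a lookup table follows; the MAC addresses and interface names are hypothetical:

```python
# Illustrative sketch only: the destination-MAC to virtual-interface lookup.
NEXT_HOP_ENDPOINTS = {
    "02:00:00:00:07:36": "veth-ctr-736",  # service container 736
    "02:00:00:00:07:38": "veth-ctr-738",  # service container 738
    "02:00:00:00:0e:99": "veth-egress",   # egress endpoint for the service path
}

def lookup_endpoint(dst_mac: str) -> str:
    return NEXT_HOP_ENDPOINTS[dst_mac]
```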


In other embodiments, the process 1200 performs the classification operation (at 1215) differently. For instance, in some embodiments, the process 1200 uses the above-described three classification operations 525-535 of the process 500, which first identify the direction of the service flow, then use the source MAC of the data message to identify the destination MAC of the next service node, and lastly use the identified destination MAC to identify the virtual interface endpoint.


After identifying the virtual interface endpoint connected to the next service container, the service forwarding element forwards (at 1215) the data message to this service container through the identified virtual interface endpoint. The service container performs its service operation (e.g., middlebox service operation, etc.) on the data message, and then provides (at 1220) the data message back to the service forwarding element. In some embodiments, the service container, its associated Ethernet port 206, or the associated bridge interface endpoint 208 changes the source MAC address of the data message to be a MAC address associated with the service container (e.g., associated with its Ethernet port 206), as the service forwarding element uses source MAC addresses to perform its next-hop service determination.


The service forwarding element then performs (at 1225) a set of classification operations. The first classification operation compares the L3 source and destination network addresses of the data message with classification rules that are defined to differentiate egress data messages from ingress data messages. As described above, each of these classification rules identifies a different lookup table for performing the second classification operation in some embodiments.


After identifying the direction of the data message's flow (upstream or downstream) in the first classification operation, the service forwarding element uses the lookup table identified by the first classification operation to perform a second classification operation, this time based on the current source MAC address of the data message. This second classification operation matches the data message's current source MAC address with the match criteria (specified in terms of a source MAC) of a classification rule that provides in its action tuple a next hop identifier that the process can use at 1230 to determine whether the next hop is in the current datacenter or another datacenter.
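The two classification operations can be sketched as follows, assuming a hypothetical local subnet for the direction check and illustrative source-MAC tables; deriving the downstream table by reversing the upstream table is a simplification, not a behavior this document prescribes:

```python
# Illustrative sketch only: first classify the flow direction from the L3
# addresses, then resolve the next hop from the current source MAC.
import ipaddress

UPSTREAM_TABLE = {  # current source MAC -> next hop identifier (a dest MAC)
    "02:00:00:00:00:01": "02:00:00:00:07:36",  # classifier -> container 736
    "02:00:00:00:07:36": "02:00:00:00:07:38",  # container 736 -> container 738
    "02:00:00:00:07:38": "02:00:00:00:0e:99",  # container 738 -> egress
}
DOWNSTREAM_TABLE = {v: k for k, v in UPSTREAM_TABLE.items()}  # reverse path

def next_hop(src_ip: str, src_mac: str) -> str:
    # First classification: flow direction from the L3 source address.
    upstream = ipaddress.ip_address(src_ip) in ipaddress.ip_network("10.0.0.0/24")
    table = UPSTREAM_TABLE if upstream else DOWNSTREAM_TABLE
    # Second classification: next hop from the current source MAC.
    return table[src_mac]
```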


This next hop identifier in some embodiments is a destination MAC of the next hop (e.g., the next service node along the service path or the egress port defined for the service path). In other embodiments, the next hop identifier includes a datacenter identifier that identifies the datacenter for the next hop service node along the service path. In still other embodiments, the next hop identifier is in a different form.


After the classification operations at 1225, the process 1200 determines (at 1230) whether the next hop service node is in the same datacenter. If so, the process performs (at 1232) a set of one or more classification operations to define the data message's destination MAC address as the MAC address of the next hop service node (e.g., service container) and to identify the virtual interface endpoint for this new destination MAC address. This classification operation in some embodiments compares the identified next hop destination MAC with the match criteria of forwarding rules in a lookup table that associates different destination MAC addresses with different virtual interface endpoint identifiers. Under this approach, the process retrieves the identifier for the next hop virtual interface endpoint from the forwarding rule that has the data message's destination MAC as its match criteria.


Next, at 1235, the process determines whether the service path has been completed. If not, the process forwards (at 1237) the data message to the next service node (e.g., next service container), and then transitions to 1220. When the process 1200 determines (at 1235) that the service path has finished, the process 1200 ends. When the service path finishes, the destination MAC address that was defined in the last iteration through 1232 is an egress destination MAC that has been defined for the service path.


This egress destination MAC in some embodiments is a MAC address associated with a switch or router that forwards the data message to its next destination (e.g., another destination in the SDDC, or out of the SDDC, or to a gateway that forwards the data message out of the SDDC). In some embodiments, the egress destination MAC identifies the egress virtual interface endpoint that is defined for the service path. Hence, at the end of the service path in these embodiments, the Linux bridge forwards the data message to the virtual interface endpoint from where it will be forwarded to its next destination. The operations 1230 and 1235 in some embodiments are not actually performed by the service forwarding element but are included only to illustrate the end of a service path in one datacenter or the eventual end of the service path.


When the process determines (at 1230) that the next service node is in another datacenter, the service forwarding element provides (at 1240) the data message to the service forwarding proxy (e.g., a proxy on the same host computer as the service forwarding element). This determination is made differently in different embodiments. For instance, in some embodiments, the process determines that the next service node is in another datacenter when the next hop destination MAC specified at 1225 belongs to the bridge's virtual interface endpoint associated with the service forwarding proxy. In other embodiments, the next hop lookup at 1225 provides another identifier that specifies that the next hop service node is in another datacenter.


Next, at 1245, the service forwarding proxy performs a classification operation based on the received data message's header values (e.g., all or part of the data message's seven tuple identifier) to identify a globally unique SPI that identifies the service path for the next datacenter. As mentioned above, the globally unique SPI in some embodiments is generated by the service classifier of the first datacenter. In other embodiments, the service classifier generates a local SPI for the first datacenter, and the service forwarding proxy converts this local SPI to a globally unique SPI.


With the global SPI, the service forwarding proxy in some embodiments identifies (at 1245) a service hop identifier that identifies the next service or the next service container to process the data message. For instance, when the global SPI has the UUID of each service container, the next service hop identifier is a reference to the next service container UUID in the global SPI in some embodiments, or is set to this container's UUID in other embodiments. The proxy's classification operation at 1245, or another classification operation that this proxy performs at 1245, provides the network address of the service forwarding proxy at the next datacenter.


At 1250, the service forwarding proxy encapsulates the data message with an encapsulating header and stores the identified global SPI in this header. In the embodiments that use the service-hop identifier, the service forwarding proxy also includes (at 1250) the service-hop identifier in the encapsulating header. It then forwards (at 1250) the encapsulated data message to the service forwarding proxy of the next datacenter. The encapsulating header in some embodiments is a tunnel header that is associated with a tunnel that is established between the two service forwarding proxies (e.g., between virtual interfaces executing on host computers on which the service forwarding proxies execute). This tunnel header allows the data message to pass through the intervening network fabric (e.g., the intervening routers and switches) to reach the other service forwarding proxy.
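A sketch of the encapsulation at 1250 follows, reusing the hypothetical length-prefixed JSON header format from the decapsulation sketch earlier; an actual deployment would use a tunnel header such as the one described above, so this sketch only shows the SPI and service-hop identifier being stored:

```python
# Illustrative sketch only: storing the global SPI (and, when used, the
# service-hop identifier) in an encapsulating header before forwarding.
import json
from typing import Optional

def encapsulate(payload: bytes, spi: str, next_hop: Optional[int]) -> bytes:
    header = json.dumps({"spi": spi, "next_hop": next_hop}).encode()
    return len(header).to_bytes(2, "big") + header + payload
```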


At 1255, upon receiving the encapsulated data message, the service forwarding proxy in the other datacenter (referred to as the new datacenter) decapsulates the data message (removes the encapsulating header from the data message), extracts the embedded SPI (and next-hop identifier when included) from the removed header, and uses the extracted parameters (e.g., the SPI) to identify the next hop service container in the service path that should process the data message in the new datacenter.


It then looks up (at 1255) the identified service container's network address (e.g., MAC address) in the new datacenter, and then provides (at 1215) the data message to a service forwarding element executing on its host computer to forward to the service container associated with this network address. Once the service forwarding element receives the data message, the process 1200 then repeats its operations starting with 1215.


In other embodiments, the process 1200 performs its operation at 1255 differently. For instance, in some embodiments, the service forwarding proxy specifies (at 1255) the data message's destination MAC address to be the MAC address of the service forwarding element, as in these embodiments the service forwarding element performs its service forwarding based on the source MAC of the data message. In some of these embodiments, the service forwarding proxy specifies (at 1255) the source MAC address as a MAC address associated with the start of a particular service path to allow the service forwarding element to identify the first service container for the service path.


In still other embodiments, instead of using the SPI to identify the next hop service container, the service forwarding proxy in the new datacenter passes the SPI (and the next-hop identifier when included) to its associated service forwarding element to use to identify the next hop service container. In these embodiments, the service forwarding elements perform their next hop lookups based on the SPI and next hop identifiers. When a service path spans more than two datacenters, the process 1200 will loop through 1240-1255 multiple times, once for each transition to a new datacenter.


Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.


In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.



FIG. 13 conceptually illustrates a computer system 1300 with which some embodiments of the invention are implemented. The computer system 1300 can be used to implement any of the above-described hosts, controllers, and managers. As such, it can be used to execute any of the above described processes. This computer system includes various types of non-transitory machine readable media and interfaces for various other types of machine readable media. Computer system 1300 includes a bus 1305, processing unit(s) 1310, a system memory 1325, a read-only memory 1330, a permanent storage device 1335, input devices 1340, and output devices 1345.


The bus 1305 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1300. For instance, the bus 1305 communicatively connects the processing unit(s) 1310 with the read-only memory 1330, the system memory 1325, and the permanent storage device 1335.


From these various memory units, the processing unit(s) 1310 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 1330 stores static data and instructions that are needed by the processing unit(s) 1310 and other modules of the computer system. The permanent storage device 1335, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1300 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1335.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1335, the system memory 1325 is a read-and-write memory device. However, unlike storage device 1335, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1325, the permanent storage device 1335, and/or the read-only memory 1330. From these various memory units, the processing unit(s) 1310 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 1305 also connects to the input and output devices 1340 and 1345. The input devices enable the user to communicate information and select commands to the computer system. The input devices 1340 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1345 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices.


Finally, as shown in FIG. 13, bus 1305 also couples computer system 1300 to a network 1365 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system 1300 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, instead of selecting service containers to implement a service path, the service classifier of some embodiments selects service virtual machines to implement the service path. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method of performing services, the method comprising: at a first host computer: identifying, at a virtual interface of the first host computer, (i) a service chain for a data message received at the first host computer, the service chain being identified by a service classifier, the service chain comprising a set of two or more services to perform on the data message, and (ii) a service path comprising a plurality of service containers for performing the set of services of the service chain; using a first set of service containers operating on the first host computer to perform a first subset of services in the set of services of the identified service chain; and using a service forwarding proxy to encapsulate the data message with an encapsulating header, to store in the encapsulating header an identifier that identifies the service path, and to forward the encapsulated data message to a second host computer for processing by a second set of service containers to perform a second subset of services in the set of services of the identified service chain, the second host computer being uniquely identified by the identifier.
  • 2. The method of claim 1, wherein the first and second host computers are in one datacenter.
  • 3. The method of claim 1, wherein the first and second host computers are in different datacenters.
  • 4. The method of claim 1 further comprising, after identifying the service chain, forwarding the data message to a service forwarding element to forward the data message to the first set of service containers, said service forwarding element forwarding the data message to the first set of service containers without encapsulating the data message.
  • 5. The method of claim 4, wherein the service forwarding element uses a first type of forwarding to forward the data message to each service container in the first set of containers, while the service forwarding proxy uses a different, second type of forwarding to forward the data message to the second host computer.
  • 6. The method of claim 1 further comprising using, for each service in the identified service chain, a service selector for that service to select a service container to perform the service.
  • 7. The method of claim 1, wherein the identifier is a service path identifier, the method further comprising specifying the service path identifier that uniquely identifies the service path for both the first and second host computers.
  • 8. The method of claim 7, wherein the first and second host computers are part of two different public clouds, the service forwarding proxy is a cross-cloud forwarding proxy, and the service path identifier uniquely identifies the service path in both public clouds.
  • 9. The method of claim 1, the method further comprising: at the service classifier: identifying, for each service in the identified service chain, a service container for performing the service; specifying a first service path identifier that identifies the set of service containers that has been identified for performing the first set of services in the identified service chain.
  • 10. The method of claim 1, wherein the service forwarding proxy forwards data messages associated with a plurality of service chains to the second host computer, and processes data messages received from the second host computer for the plurality of service chains.
  • 11. A non-transitory machine readable medium storing a program for execution by a set of processors of a first host computer in a first datacenter to perform services, the program comprising sets of instructions for: identifying, at a virtual interface of the first host computer, (i) a service chain for a data message received at the first host computer, the service chain being identified by a service classifier, the service chain comprising a set of two or more services to perform on the data message, and (ii) a service path comprising a plurality of service containers for performing the set of services of the service chain; using a first set of service containers operating on the first host computer to perform a first subset of services in the set of services of the identified service chain; using a service forwarding proxy to encapsulate the data message with an encapsulating header, to store in the encapsulating header an identifier that identifies the service path, and to forward the encapsulated data message to a second host computer for processing by a second set of service containers to perform a second subset of services in the set of services of the identified service chain; at the service classifier: identifying, for each service in the identified service chain, a service container for performing the service; specifying a first service path identifier that identifies the set of service containers that has been identified for performing the first set of services in the identified service chain.
  • 12. The non-transitory machine readable medium of claim 11, wherein the first and second host computers are in one datacenter.
  • 13. The non-transitory machine readable medium of claim 11, wherein the first and second host computers are in different datacenters.
  • 14. The non-transitory machine readable medium of claim 11, wherein the program further comprises a set of instructions for, after identifying the service chain, forwarding the data message to a service forwarding element to forward the data message to the first set of service containers, said service forwarding element forwarding the data message to the first set of service containers without encapsulating the data message.
  • 15. The non-transitory machine readable medium of claim 14, wherein the service forwarding element uses a first type of forwarding to forward the data message to each service container in the first set of containers, while the service forwarding proxy uses a different, second type of forwarding to forward the data message to the second host computer.
  • 16. The non-transitory machine readable medium of claim 11, wherein the program further comprises a set of instructions for using, for each service in the identified service chain, a service selector for that service to select a service container to perform the service.
  • 17. The non-transitory machine readable medium of claim 11, wherein the identifier is a service path identifier, and the program further comprises a set of instructions for specifying the service path identifier that uniquely identifies the service path for both the first and second host computers.
  • 18. The non-transitory machine readable medium of claim 17, wherein the first and second host computers are part of two different public clouds, the service forwarding proxy is a cross-cloud forwarding proxy, and the service path identifier uniquely identifies the service path in both public clouds.
  • 19. The non-transitory machine readable medium of claim 11, wherein the program further comprises a set of instructions for, at the service forwarding proxy, converting the first service path identifier into the identifier that is stored in the encapsulating header and that uniquely identifies the service path for the second host computer.
  • 20. The non-transitory machine readable medium of claim 11, wherein the service forwarding proxy forwards data messages associated with a plurality of service chains to the second host computer, and processes data messages received from the second host computer for the plurality of service chains.
CLAIM OF BENEFIT TO PRIOR APPLICATIONS

This application is a continuation application of U.S. patent application Ser. No. 17/492,626, filed Oct. 3, 2021, now published as U.S. Patent Publication 2022/0030058. U.S. patent application Ser. No. 17/492,626 is a continuation application of U.S. patent application Ser. No. 16/668,485, filed Oct. 30, 2019, now issued as U.S. Pat. No. 11,140,218. U.S. patent application Ser. No. 17/492,626, now issued as U.S. Pat. No. 11,722,559, and U.S. patent application Ser. No. 16/668,485, now issued as U.S. Pat. No. 11,140,218 are incorporated herein by reference.

20130163594 Sharma et al. Jun 2013 A1
20130166703 Hammer et al. Jun 2013 A1
20130170501 Egi et al. Jul 2013 A1
20130182608 Maggiari et al. Jul 2013 A1
20130201989 Hu et al. Aug 2013 A1
20130227097 Yasuda et al. Aug 2013 A1
20130227550 Weinstein et al. Aug 2013 A1
20130287026 Davie Oct 2013 A1
20130287036 Banavalikar et al. Oct 2013 A1
20130291088 Shieh et al. Oct 2013 A1
20130297798 Arisoylu et al. Nov 2013 A1
20130301472 Allan Nov 2013 A1
20130311637 Kamath et al. Nov 2013 A1
20130318219 Kancherla Nov 2013 A1
20130322446 Biswas et al. Dec 2013 A1
20130332983 Koorevaar et al. Dec 2013 A1
20130336319 Liu et al. Dec 2013 A1
20130343174 Guichard et al. Dec 2013 A1
20130343378 Veteikis et al. Dec 2013 A1
20140003232 Guichard et al. Jan 2014 A1
20140003422 Mogul et al. Jan 2014 A1
20140010085 Kavunder et al. Jan 2014 A1
20140029447 Schrum, Jr. Jan 2014 A1
20140046997 Dain et al. Feb 2014 A1
20140046998 Dain et al. Feb 2014 A1
20140050223 Foo et al. Feb 2014 A1
20140052844 Nayak et al. Feb 2014 A1
20140059204 Nguyen et al. Feb 2014 A1
20140059544 Koganty et al. Feb 2014 A1
20140068602 Gember et al. Mar 2014 A1
20140092738 Grandhi et al. Apr 2014 A1
20140092906 Kandaswamy et al. Apr 2014 A1
20140092914 Kondapalli Apr 2014 A1
20140096183 Jain et al. Apr 2014 A1
20140101226 Khandekar et al. Apr 2014 A1
20140101656 Zhu et al. Apr 2014 A1
20140108665 Arora et al. Apr 2014 A1
20140115578 Cooper et al. Apr 2014 A1
20140129715 Mortazavi May 2014 A1
20140149696 Frenkel et al. May 2014 A1
20140164477 Springer et al. Jun 2014 A1
20140169168 Jalan et al. Jun 2014 A1
20140169375 Khan et al. Jun 2014 A1
20140195666 Dumitriu et al. Jul 2014 A1
20140207968 Kumar et al. Jul 2014 A1
20140254374 Janakiraman et al. Sep 2014 A1
20140254591 Mahadevan et al. Sep 2014 A1
20140269487 Kalkunte Sep 2014 A1
20140269717 Thubert et al. Sep 2014 A1
20140269724 Mehler et al. Sep 2014 A1
20140280896 Papakostas et al. Sep 2014 A1
20140281029 Danforth Sep 2014 A1
20140282526 Basavaiah et al. Sep 2014 A1
20140301388 Jagadish et al. Oct 2014 A1
20140304231 Kamath et al. Oct 2014 A1
20140307744 Dunbar et al. Oct 2014 A1
20140310391 Sorenson et al. Oct 2014 A1
20140310418 Sorenson, III et al. Oct 2014 A1
20140317677 Vaidya et al. Oct 2014 A1
20140321459 Kumar et al. Oct 2014 A1
20140330983 Zisapel et al. Nov 2014 A1
20140334485 Jain et al. Nov 2014 A1
20140334488 Guichard et al. Nov 2014 A1
20140341029 Allan et al. Nov 2014 A1
20140351452 Bosch et al. Nov 2014 A1
20140362682 Guichard et al. Dec 2014 A1
20140362705 Pan Dec 2014 A1
20140369204 Anand et al. Dec 2014 A1
20140372567 Ganesh et al. Dec 2014 A1
20140372616 Arisoylu et al. Dec 2014 A1
20140372702 Subramanyam et al. Dec 2014 A1
20150003453 Sengupta et al. Jan 2015 A1
20150003455 Haddad et al. Jan 2015 A1
20150009995 Gross, IV et al. Jan 2015 A1
20150016279 Zhang et al. Jan 2015 A1
20150023354 Li et al. Jan 2015 A1
20150026321 Ravinoothala et al. Jan 2015 A1
20150026345 Ravinoothala et al. Jan 2015 A1
20150026362 Guichard et al. Jan 2015 A1
20150030024 Venkataswami et al. Jan 2015 A1
20150052262 Chanda et al. Feb 2015 A1
20150052522 Chanda et al. Feb 2015 A1
20150063102 Mestery et al. Mar 2015 A1
20150063364 Thakkar et al. Mar 2015 A1
20150071285 Kumar et al. Mar 2015 A1
20150071301 Dalal Mar 2015 A1
20150073967 Katsuyama et al. Mar 2015 A1
20150078384 Jackson et al. Mar 2015 A1
20150092551 Moisand et al. Apr 2015 A1
20150092564 Aldrin Apr 2015 A1
20150103645 Shen et al. Apr 2015 A1
20150103679 Tessmer et al. Apr 2015 A1
20150103827 Quinn et al. Apr 2015 A1
20150106802 Ivanov et al. Apr 2015 A1
20150109901 Tan et al. Apr 2015 A1
20150124608 Agarwal et al. May 2015 A1
20150124622 Kovvali et al. May 2015 A1
20150124815 Beliveau et al. May 2015 A1
20150124840 Bergeron May 2015 A1
20150138973 Kumar et al. May 2015 A1
20150139041 Bosch et al. May 2015 A1
20150146539 Mehta et al. May 2015 A1
20150156035 Foo et al. Jun 2015 A1
20150188770 Naiksatam et al. Jul 2015 A1
20150195197 Yong et al. Jul 2015 A1
20150213087 Sikri Jul 2015 A1
20150215819 Bosch et al. Jul 2015 A1
20150222640 Kumar et al. Aug 2015 A1
20150236948 Dunbar et al. Aug 2015 A1
20150237013 Bansal et al. Aug 2015 A1
20150242197 Alfonso et al. Aug 2015 A1
20150244617 Nakil et al. Aug 2015 A1
20150263901 Kumar et al. Sep 2015 A1
20150263946 Tubaltsev et al. Sep 2015 A1
20150271102 Antich Sep 2015 A1
20150280959 Vincent Oct 2015 A1
20150281089 Marchetti Oct 2015 A1
20150281098 Pettit et al. Oct 2015 A1
20150281125 Koponen et al. Oct 2015 A1
20150281179 Raman et al. Oct 2015 A1
20150281180 Raman et al. Oct 2015 A1
20150288671 Chan et al. Oct 2015 A1
20150288679 Ben-Nun et al. Oct 2015 A1
20150295831 Kumar et al. Oct 2015 A1
20150319078 Lee et al. Nov 2015 A1
20150319096 Yip et al. Nov 2015 A1
20150358235 Zhang et al. Dec 2015 A1
20150358294 Kancharla et al. Dec 2015 A1
20150365322 Shatzkamer et al. Dec 2015 A1
20150370586 Cooper et al. Dec 2015 A1
20150370596 Fahs et al. Dec 2015 A1
20150372840 Benny et al. Dec 2015 A1
20150372911 Yabusaki et al. Dec 2015 A1
20150379277 Thota et al. Dec 2015 A1
20150381493 Bansal et al. Dec 2015 A1
20150381494 Cherian et al. Dec 2015 A1
20150381495 Cherian et al. Dec 2015 A1
20160006654 Fernando et al. Jan 2016 A1
20160028640 Zhang et al. Jan 2016 A1
20160043901 Sankar et al. Feb 2016 A1
20160043952 Zhang et al. Feb 2016 A1
20160057050 Ostrom et al. Feb 2016 A1
20160057687 Horn et al. Feb 2016 A1
20160065503 Yohe et al. Mar 2016 A1
20160080253 Wang et al. Mar 2016 A1
20160087888 Jain et al. Mar 2016 A1
20160094384 Jain et al. Mar 2016 A1
20160094389 Jain et al. Mar 2016 A1
20160094451 Jain et al. Mar 2016 A1
20160094452 Jain et al. Mar 2016 A1
20160094453 Jain et al. Mar 2016 A1
20160094454 Jain et al. Mar 2016 A1
20160094455 Jain et al. Mar 2016 A1
20160094456 Jain et al. Mar 2016 A1
20160094457 Jain et al. Mar 2016 A1
20160094631 Jain et al. Mar 2016 A1
20160094632 Jain et al. Mar 2016 A1
20160094633 Jain et al. Mar 2016 A1
20160094642 Jain et al. Mar 2016 A1
20160094643 Jain et al. Mar 2016 A1
20160094661 Jain et al. Mar 2016 A1
20160099948 Ott et al. Apr 2016 A1
20160105333 Lenglet et al. Apr 2016 A1
20160119226 Guichard Apr 2016 A1
20160127306 Wang et al. May 2016 A1
20160127564 Sharma et al. May 2016 A1
20160134528 Lin et al. May 2016 A1
20160149784 Zhang et al. May 2016 A1
20160149816 Roach et al. May 2016 A1
20160149828 Vijayan et al. May 2016 A1
20160162320 Singh et al. Jun 2016 A1
20160164776 Biancaniello Jun 2016 A1
20160164787 Roach et al. Jun 2016 A1
20160164826 Riedel et al. Jun 2016 A1
20160173373 Guichard et al. Jun 2016 A1
20160182684 Connor et al. Jun 2016 A1
20160188527 Cherian et al. Jun 2016 A1
20160197831 Foy et al. Jul 2016 A1
20160197839 Li et al. Jul 2016 A1
20160203817 Formhals et al. Jul 2016 A1
20160205015 Halligan et al. Jul 2016 A1
20160212048 Kaempfer et al. Jul 2016 A1
20160212237 Nishijima Jul 2016 A1
20160218918 Chu et al. Jul 2016 A1
20160226700 Zhang et al. Aug 2016 A1
20160226754 Zhang et al. Aug 2016 A1
20160226762 Zhang et al. Aug 2016 A1
20160232019 Shah et al. Aug 2016 A1
20160248685 Pignataro et al. Aug 2016 A1
20160277210 Lin et al. Sep 2016 A1
20160277294 Akiyoshi Sep 2016 A1
20160294612 Ravinoothala et al. Oct 2016 A1
20160294933 Hong et al. Oct 2016 A1
20160294935 Hong et al. Oct 2016 A1
20160308758 Li et al. Oct 2016 A1
20160308961 Rao Oct 2016 A1
20160337189 Liebhart et al. Nov 2016 A1
20160337249 Zhang et al. Nov 2016 A1
20160337317 Hwang et al. Nov 2016 A1
20160344565 Batz et al. Nov 2016 A1
20160344621 Roeland et al. Nov 2016 A1
20160344803 Batz et al. Nov 2016 A1
20160352866 Gupta et al. Dec 2016 A1
20160366046 Anantharam et al. Dec 2016 A1
20160373364 Yokota Dec 2016 A1
20160378537 Zou Dec 2016 A1
20160380812 Chanda et al. Dec 2016 A1
20170005882 Tao Jan 2017 A1
20170005920 Previdi et al. Jan 2017 A1
20170005923 Babakian Jan 2017 A1
20170005988 Bansal et al. Jan 2017 A1
20170019303 Swamy et al. Jan 2017 A1
20170019329 Kozat et al. Jan 2017 A1
20170019331 Yong Jan 2017 A1
20170019335 Schultz et al. Jan 2017 A1
20170019341 Huang et al. Jan 2017 A1
20170026417 Ermagan et al. Jan 2017 A1
20170033939 Bragg et al. Feb 2017 A1
20170063683 Li et al. Mar 2017 A1
20170063928 Jain et al. Mar 2017 A1
20170064048 Pettit et al. Mar 2017 A1
20170064749 Jain et al. Mar 2017 A1
20170078176 Lakshmikantha et al. Mar 2017 A1
20170078961 Rabii et al. Mar 2017 A1
20170093698 Farmanbar Mar 2017 A1
20170093758 Chanda Mar 2017 A1
20170099194 Wei Apr 2017 A1
20170126497 Dubey et al. May 2017 A1
20170126522 McCann et al. May 2017 A1
20170126726 Han May 2017 A1
20170134538 Mahkonen et al. May 2017 A1
20170142012 Thakkar et al. May 2017 A1
20170147399 Cropper et al. May 2017 A1
20170149582 Cohn et al. May 2017 A1
20170149675 Yang May 2017 A1
20170149680 Liu et al. May 2017 A1
20170163531 Kumar et al. Jun 2017 A1
20170163724 Puri et al. Jun 2017 A1
20170170990 Gaddehosur et al. Jun 2017 A1
20170171159 Kumar et al. Jun 2017 A1
20170180240 Kern et al. Jun 2017 A1
20170195255 Pham et al. Jul 2017 A1
20170208000 Bosch et al. Jul 2017 A1
20170208011 Bosch et al. Jul 2017 A1
20170208532 Zhou Jul 2017 A1
20170214627 Zhang et al. Jul 2017 A1
20170220306 Price et al. Aug 2017 A1
20170230333 Glazemakers et al. Aug 2017 A1
20170230467 Salgueiro et al. Aug 2017 A1
20170237656 Gage Aug 2017 A1
20170250869 Voellmy Aug 2017 A1
20170250902 Rasanen et al. Aug 2017 A1
20170250917 Ruckstuhl et al. Aug 2017 A1
20170251065 Furr et al. Aug 2017 A1
20170257432 Fu et al. Sep 2017 A1
20170264677 Li Sep 2017 A1
20170273099 Zhang et al. Sep 2017 A1
20170279938 You et al. Sep 2017 A1
20170295021 Gutiérrez et al. Oct 2017 A1
20170295033 Cherian et al. Oct 2017 A1
20170295100 Hira et al. Oct 2017 A1
20170310588 Zuo Oct 2017 A1
20170310611 Kumar et al. Oct 2017 A1
20170317887 Dwaraki et al. Nov 2017 A1
20170317926 Penno et al. Nov 2017 A1
20170317936 Swaminathan et al. Nov 2017 A1
20170317954 Masurekar et al. Nov 2017 A1
20170317969 Masurekar et al. Nov 2017 A1
20170318081 Hopen et al. Nov 2017 A1
20170318097 Drew et al. Nov 2017 A1
20170324651 Penno et al. Nov 2017 A1
20170324654 Previdi et al. Nov 2017 A1
20170331672 Fedyk et al. Nov 2017 A1
20170339110 Ni Nov 2017 A1
20170339600 Roeland et al. Nov 2017 A1
20170346764 Tan et al. Nov 2017 A1
20170353387 Kwak et al. Dec 2017 A1
20170359252 Kumar et al. Dec 2017 A1
20170364287 Antony et al. Dec 2017 A1
20170364794 Mahkonen et al. Dec 2017 A1
20170366605 Chang et al. Dec 2017 A1
20170373990 Jeuk et al. Dec 2017 A1
20180004954 Liguori et al. Jan 2018 A1
20180006935 Mutnuru et al. Jan 2018 A1
20180026911 Anholt et al. Jan 2018 A1
20180027101 Kumar et al. Jan 2018 A1
20180041425 Zhang Feb 2018 A1
20180041470 Schultz et al. Feb 2018 A1
20180041524 Reddy et al. Feb 2018 A1
20180063000 Wu et al. Mar 2018 A1
20180063018 Bosch et al. Mar 2018 A1
20180063087 Hira et al. Mar 2018 A1
20180091420 Drake et al. Mar 2018 A1
20180102919 Hao et al. Apr 2018 A1
20180102965 Hari et al. Apr 2018 A1
20180115471 Curcio et al. Apr 2018 A1
20180123950 Garg et al. May 2018 A1
20180124061 Raman et al. May 2018 A1
20180139098 Sunavala et al. May 2018 A1
20180145899 Rao May 2018 A1
20180159733 Poon et al. Jun 2018 A1
20180159801 Rajan et al. Jun 2018 A1
20180159943 Poon et al. Jun 2018 A1
20180176177 Bichot et al. Jun 2018 A1
20180176294 Vacaro et al. Jun 2018 A1
20180183764 Gunda Jun 2018 A1
20180184281 Tamagawa et al. Jun 2018 A1
20180191600 Hecker et al. Jul 2018 A1
20180198692 Ansari et al. Jul 2018 A1
20180198705 Wang et al. Jul 2018 A1
20180198791 Desai et al. Jul 2018 A1
20180203736 Vyas et al. Jul 2018 A1
20180205637 Li Jul 2018 A1
20180213040 Pak et al. Jul 2018 A1
20180219762 Wang et al. Aug 2018 A1
20180227216 Hughes Aug 2018 A1
20180234360 Narayana et al. Aug 2018 A1
20180247082 Durham et al. Aug 2018 A1
20180248713 Zanier et al. Aug 2018 A1
20180248755 Hecker et al. Aug 2018 A1
20180248790 Tan et al. Aug 2018 A1
20180248986 Dalal Aug 2018 A1
20180262427 Jain et al. Sep 2018 A1
20180262434 Koponen et al. Sep 2018 A1
20180278530 Connor et al. Sep 2018 A1
20180288129 Joshi et al. Oct 2018 A1
20180295036 Krishnamurthy et al. Oct 2018 A1
20180295053 Leung et al. Oct 2018 A1
20180302242 Hao et al. Oct 2018 A1
20180309632 Kompella et al. Oct 2018 A1
20180337849 Sharma et al. Nov 2018 A1
20180349212 Liu et al. Dec 2018 A1
20180351874 Abhigyan et al. Dec 2018 A1
20180375684 Filsfils et al. Dec 2018 A1
20190007382 Nirwal et al. Jan 2019 A1
20190020580 Boutros et al. Jan 2019 A1
20190020600 Zhang et al. Jan 2019 A1
20190020684 Qian et al. Jan 2019 A1
20190028347 Johnston et al. Jan 2019 A1
20190028384 Penno et al. Jan 2019 A1
20190028577 D'Souza et al. Jan 2019 A1
20190036819 Kancherla et al. Jan 2019 A1
20190068500 Hira Feb 2019 A1
20190089679 Kahalon et al. Mar 2019 A1
20190097838 Sahoo et al. Mar 2019 A1
20190102280 Caldato et al. Apr 2019 A1
20190108049 Singh et al. Apr 2019 A1
20190116063 Bottorff et al. Apr 2019 A1
20190121961 Coleman et al. Apr 2019 A1
20190124096 Ahuja et al. Apr 2019 A1
20190132220 Boutros et al. May 2019 A1
20190132221 Boutros et al. May 2019 A1
20190140863 Nainar et al. May 2019 A1
20190140947 Zhuang et al. May 2019 A1
20190140950 Zhuang et al. May 2019 A1
20190149512 Sevinc et al. May 2019 A1
20190149516 Rajahalme et al. May 2019 A1
20190149518 Sevinc et al. May 2019 A1
20190166045 Peng et al. May 2019 A1
20190173778 Faseela et al. Jun 2019 A1
20190173850 Jain et al. Jun 2019 A1
20190173851 Jain et al. Jun 2019 A1
20190222538 Yang et al. Jul 2019 A1
20190229937 Nagarajan et al. Jul 2019 A1
20190230126 Kumar et al. Jul 2019 A1
20190238363 Boutros et al. Aug 2019 A1
20190238364 Boutros et al. Aug 2019 A1
20190268384 Hu et al. Aug 2019 A1
20190286475 Mani Sep 2019 A1
20190288915 Denyer et al. Sep 2019 A1
20190288946 Gupta et al. Sep 2019 A1
20190288947 Jain et al. Sep 2019 A1
20190306036 Boutros et al. Oct 2019 A1
20190306086 Boutros et al. Oct 2019 A1
20190342175 Wan et al. Nov 2019 A1
20190377604 Cybulski Dec 2019 A1
20190379578 Mishra et al. Dec 2019 A1
20190379579 Mishra et al. Dec 2019 A1
20200007388 Johnston et al. Jan 2020 A1
20200036629 Roeland et al. Jan 2020 A1
20200059761 Li et al. Feb 2020 A1
20200067828 Liu et al. Feb 2020 A1
20200073739 Rungta et al. Mar 2020 A1
20200076684 Naveen et al. Mar 2020 A1
20200076734 Naveen et al. Mar 2020 A1
20200084141 Bengough et al. Mar 2020 A1
20200084147 Gandhi et al. Mar 2020 A1
20200136960 Jeuk et al. Apr 2020 A1
20200143388 Duchin et al. May 2020 A1
20200145331 Bhandari et al. May 2020 A1
20200162318 Patil et al. May 2020 A1
20200162352 Jorgenson et al. May 2020 A1
20200183724 Shevade et al. Jun 2020 A1
20200195711 Abhigyan et al. Jun 2020 A1
20200204492 Sarva et al. Jun 2020 A1
20200213366 Hong et al. Jul 2020 A1
20200220805 Dhanabalan Jul 2020 A1
20200272493 Lecuyer et al. Aug 2020 A1
20200272494 Gokhale et al. Aug 2020 A1
20200272495 Rolando et al. Aug 2020 A1
20200272496 Mundaragi et al. Aug 2020 A1
20200272497 Kavathia et al. Aug 2020 A1
20200272498 Mishra et al. Aug 2020 A1
20200272499 Feng et al. Aug 2020 A1
20200272500 Feng Aug 2020 A1
20200272501 Chalvadi et al. Aug 2020 A1
20200274757 Rolando et al. Aug 2020 A1
20200274769 Naveen et al. Aug 2020 A1
20200274778 Lecuyer et al. Aug 2020 A1
20200274779 Rolando et al. Aug 2020 A1
20200274795 Rolando et al. Aug 2020 A1
20200274801 Feng et al. Aug 2020 A1
20200274808 Mundaragi et al. Aug 2020 A1
20200274809 Rolando et al. Aug 2020 A1
20200274810 Gokhale et al. Aug 2020 A1
20200274826 Mishra et al. Aug 2020 A1
20200274944 Naveen et al. Aug 2020 A1
20200274945 Rolando et al. Aug 2020 A1
20200287962 Mishra et al. Sep 2020 A1
20200322271 Jain et al. Oct 2020 A1
20200344088 Selvaraj et al. Oct 2020 A1
20200358696 Hu Nov 2020 A1
20200364074 Gunda et al. Nov 2020 A1
20200366526 Boutros et al. Nov 2020 A1
20200366584 Boutros et al. Nov 2020 A1
20200382412 Chandrappa et al. Dec 2020 A1
20200382420 Suryanarayana et al. Dec 2020 A1
20200389401 Enguehard et al. Dec 2020 A1
20210004245 Kamath et al. Jan 2021 A1
20210011812 Mitkar et al. Jan 2021 A1
20210011816 Mitkar et al. Jan 2021 A1
20210029088 Mayya et al. Jan 2021 A1
20210044502 Boutros et al. Feb 2021 A1
20210067439 Kommula Mar 2021 A1
20210073736 Alawi et al. Mar 2021 A1
20210117217 Croteau et al. Apr 2021 A1
20210120080 Mishra et al. Apr 2021 A1
20210135992 Tidemann et al. May 2021 A1
20210136140 Tidemann et al. May 2021 A1
20210136141 Tidemann et al. May 2021 A1
20210136147 Giassa et al. May 2021 A1
20210218587 Mishra et al. Jul 2021 A1
20210227041 Sawant et al. Jul 2021 A1
20210227042 Sawant et al. Jul 2021 A1
20210240734 Shah et al. Aug 2021 A1
20210266295 Stroz Aug 2021 A1
20210271565 Bhavanarushi et al. Sep 2021 A1
20210306240 Boutros et al. Sep 2021 A1
20210311758 Cao et al. Oct 2021 A1
20210311772 Mishra et al. Oct 2021 A1
20210314248 Rolando et al. Oct 2021 A1
20210314252 Rolando et al. Oct 2021 A1
20210314253 Rolando et al. Oct 2021 A1
20210314268 Rolando et al. Oct 2021 A1
20210314277 Rolando et al. Oct 2021 A1
20210314310 Cao et al. Oct 2021 A1
20210314415 Rolando et al. Oct 2021 A1
20210314423 Rolando et al. Oct 2021 A1
20210328913 Nainar et al. Oct 2021 A1
20210349767 Asayag et al. Nov 2021 A1
20210359945 Jain et al. Nov 2021 A1
20210377160 Faseela Dec 2021 A1
20220019698 Durham et al. Jan 2022 A1
20220030058 Tidemann et al. Jan 2022 A1
20220038310 Boutros et al. Feb 2022 A1
20220060467 Montgomery et al. Feb 2022 A1
20220078037 Mishra et al. Mar 2022 A1
20220188140 Jain et al. Jun 2022 A1
20220191304 Jain et al. Jun 2022 A1
20220417150 Jain et al. Dec 2022 A1
20230052818 Jain et al. Feb 2023 A1
20230168917 Kavathia et al. Jun 2023 A1
20230179474 Naveen et al. Jun 2023 A1
20230283689 Sawant et al. Sep 2023 A1
Foreign Referenced Citations (54)
Number Date Country
2358607 Apr 2002 CA
3034809 Mar 2018 CA
1482745 Mar 2004 CN
1689369 Oct 2005 CN
101273650 Sep 2008 CN
101594358 Dec 2009 CN
101729412 Jun 2010 CN
102986172 Mar 2013 CN
103516807 Jan 2014 CN
103795805 May 2014 CN
104471899 Mar 2015 CN
104521195 Apr 2015 CN
105706420 Jun 2016 CN
105847069 Aug 2016 CN
106134137 Nov 2016 CN
107005584 Aug 2017 CN
107078950 Aug 2017 CN
107113208 Aug 2017 CN
107204941 Sep 2017 CN
107210959 Sep 2017 CN
107852368 Mar 2018 CN
107925589 Apr 2018 CN
109213573 Jan 2019 CN
110521169 Nov 2019 CN
107105061 Sep 2020 CN
112181632 Jan 2021 CN
2426956 Mar 2012 EP
2466985 Jun 2012 EP
2222022 Dec 2016 EP
3210345 Aug 2017 EP
3300319 Mar 2018 EP
3991393 May 2022 EP
2005311863 Nov 2005 JP
2015519822 Jul 2015 JP
9918534 Apr 1999 WO
2008095010 Aug 2008 WO
2009065304 May 2009 WO
2013184846 Dec 2013 WO
2014069978 May 2014 WO
2014182529 Nov 2014 WO
2014207725 Dec 2014 WO
2016053373 Apr 2016 WO
2016054272 Apr 2016 WO
2019084066 May 2019 WO
2019147316 Aug 2019 WO
2019157955 Aug 2019 WO
2019168532 Sep 2019 WO
2019226327 Nov 2019 WO
2020046686 Mar 2020 WO
2020171937 Aug 2020 WO
2021041440 Mar 2021 WO
2021086462 May 2021 WO
2021206789 Oct 2021 WO
2022132308 Jun 2022 WO
Non-Patent Literature Citations (32)
Author Unknown, “Datagram,” Jun. 22, 2012, 2 pages, retrieved from https://web.archive.org/web/20120622031055/https://en.wikipedia.org/wiki/datagram.
Author Unknown, “MPLS,” Mar. 3, 2008, 47 pages.
Author Unknown, “Research on Multi-tenancy Network Technology for Datacenter Network,” May 2015, 64 pages, Beijing Jiaotong University.
Author Unknown, “AppLogic Features,” Jul. 2007, 2 pages, 3TERA, Inc.
Author Unknown, “Enabling Service Chaining on Cisco Nexus 1000V Series,” Month Unknown, 2012, 25 pages, CISCO.
Casado, Martin, et al., “Virtualizing the Network Forwarding Plane,” Dec. 2010, 6 pages.
Cianfrani, Antonio, et al., “Translating Traffic Engineering Outcome into Segment Routing Paths: the Encoding Problem,” 2016 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS): GI 2016: 9th IEEE Global Internet Symposium, Apr. 10-14, 2016, 6 pages, IEEE, San Francisco, CA, USA.
Dixon, Colin, et al., “An End to the Middle,” Proceedings of the 12th Conference on Hot Topics in Operating Systems, May 2009, 5 pages, USENIX Association, Berkeley, CA, USA.
Dumitriu, Dan Mihai, et al., (U.S. Appl. No. 61/514,990), filed Aug. 4, 2011, 31 pages.
Greenberg, Albert, et al., “VL2: A Scalable and Flexible Data Center Network,” SIGCOMM '09, Aug. 17-21, 2009, 12 pages, ACM, Barcelona, Spain.
Guichard, J., et al., “Network Service Chaining Problem Statement,” Network Working Group, Jun. 13, 2013, 14 pages, Cisco Systems, Inc.
Halpern, J., et al., “Service Function Chaining (SFC) Architecture,” draft-ietf-sfc-architecture-02, Sep. 20, 2014, 26 pages, IETF.
Halpern, J., et al., “Service Function Chaining (SFC) Architecture,” RFC 7665, Oct. 2015, 32 pages, IETF Trust.
Joseph, Dilip Anthony, et al., “A Policy-aware Switching Layer for Data Centers,” Jun. 24, 2008, 26 pages, Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA.
Karakus, Murat, et al., “Quality of Service (QoS) in Software Defined Networking (SDN): A Survey,” Journal of Network and Computer Applications, Dec. 9, 2016, 19 pages, vol. 80, Elsevier, Ltd.
Kumar, S., et al., “Service Function Chaining Use Cases in Data Centers,” draft-ietf-sfc-dc-use-cases-01, Jul. 21, 2014, 23 pages, IETF.
Li, Qing-Gu, “Network Virtualization of Data Center Security,” Information Security and Technology, Oct. 2012, 3 pages.
Lin, Po-Ching, et al., “Balanced Service Chaining in Software-Defined Networks with Network Function Virtualization,” Computer: Research Feature, Nov. 2016, 9 pages, vol. 49, No. 11, IEEE.
Liu, W., et al., “Service Function Chaining (SFC) Use Cases,” draft-liu-sfc-use-cases-02, Feb. 13, 2014, 17 pages, IETF.
Non-Published Commonly Owned U.S. Appl. No. 18/211,580, filed Jun. 19, 2023, 88 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 18/227,303, filed Jul. 28, 2023, 65 pages, Nicira, Inc.
PCT International Search Report and Written Opinion of commonly owned International Patent Application PCT/US2020/043649, mailed Dec. 18, 2020, 21 pages, International Searching Authority (EPO).
Salsano, Stefano, et al., “Generalized Virtual Networking: An Enabler for Service Centric Networking and Network Function Virtualization,” 2014 16th International Telecommunications Network Strategy and Planning Symposium, Sep. 17-19, 2014, 7 pages, IEEE, Funchal, Portugal.
Sekar, Vyas, et al., “Design and Implementation of a Consolidated Middlebox Architecture,” 9th USENIX Symposium on Networked Systems Design and Implementation, Apr. 25-27, 2012, 14 pages, USENIX, San Jose, CA, USA.
Sherry, Justine, et al., “Making Middleboxes Someone Else's Problem: Network Processing as a Cloud Service,” In Proc. of SIGCOMM '12, Aug. 13-17, 2012, 12 pages, Helsinki, Finland.
Siasi, N., et al., “Container-Based Service Function Chain Mapping,” 2019 SoutheastCon, Apr. 11-14, 2019, 6 pages, IEEE, Huntsville, AL, USA.
Xiong, Gang, et al., “A Mechanism for Configurable Network Service Chaining and Its Implementation,” KSII Transactions on Internet and Information Systems, Aug. 2016, 27 pages, vol. 10, No. 8, KSII.
Author Unknown, “Reference Design: VMware NSX for vSphere (NSX), Network Virtualization Design Guide,” Aug. 21, 2014, 167 pages, VMware, Inc., Palo Alto, CA, retrieved from https://communities.vmware.com/docs/DOC-27683.
Author Unknown, “Service Chaining in OpenStack with NSX,” Dec. 28, 2016, 2 pages, retrieved from https://www.youtube.com/watch?v=xY1uz6PjWlo.
Boucadair, M., “Service Function Chaining (SFC) Control Plane Components & Requirements,” draft-ietf-sfc-control-plane-05, May 11, 2016, 53 pages, IETF.
Fernando, R., et al., “Service Chaining using Virtual Networks with BGP VPNs,” draft-ietf-bess-service-chaining-02, Oct. 31, 2016, 81 pages, IETF.
Non-Published Commonly Owned U.S. Appl. No. 18/370,006, filed Sep. 19, 2023, 50 pages, Nicira, Inc.
Related Publications (1)
Number Date Country
20230362239 A1 Nov 2023 US
Continuations (2)
Number Date Country
Parent 17492626 Oct 2021 US
Child 18219187 US
Parent 16668485 Oct 2019 US
Child 17492626 US