Performing in-line service in public cloud

Information

  • Patent Grant
  • Patent Number
    11,695,697
  • Date Filed
    Monday, September 14, 2020
  • Date Issued
    Tuesday, July 4, 2023
Abstract
Some embodiments provide a novel way to insert a service (e.g., a third party service) in the path of a data message flow between two machines (e.g., two VMs, two containers, etc.) in a public cloud environment. For a particular tenant of the public cloud, some embodiments create an overlay logical network with a logical overlay address space. To perform a service on data messages of a flow between two machines, the logical overlay network passes to the public cloud's underlay network the data messages with their destination addresses (e.g., destination IP addresses) defined in the logical overlay network. The underlay network (e.g., an underlay default downlink gateway) is configured to pass data messages with such destination addresses (e.g., with logical overlay destination addresses) to a set of one or more service machines. The underlay network (e.g., an underlay default uplink gateway) is also configured to pass to the particular tenant's public cloud gateway the processed data messages that are received from the service machine set and that are addressed to logical overlay destination addresses. The tenant's public cloud gateway is configured to forward such data messages to a logical forwarding element of the logical network, which then handles the forwarding of the data messages to the correct destination machine.
Description
BACKGROUND

In private clouds today, numerous vendors provide specialized services, such as deep packet inspection (DPI), firewalls, etc. In private datacenters, administrators typically deploy such specialized services in the path of the traffic that needs them. With the proliferation of public clouds, these vendors also offer virtual appliances and service virtual machines (service VMs) that can be licensed and deployed in public cloud environments. For example, Palo Alto Networks makes its firewall available as a virtual appliance in the Amazon Web Services (AWS) Marketplace.


Such appliances are mostly deployed for traffic that enters and leaves the virtual private cloud (VPC) in the datacenter, and are thus deployed facing the Internet gateway. However, public cloud tenants also want to route the traffic between subsets of endpoints within the virtual datacenter through the service appliances of third party vendors. Currently, this is not possible because cloud providers do not allow the routing of traffic between endpoints in a VPC to be overridden. For instance, in an AWS VPC, the provider routing table has a single entry for the VPC address block, and this entry cannot be modified. More specific routes that overlap with the VPC address block cannot be added.


BRIEF SUMMARY

Some embodiments provide a method for inserting a service (e.g., a third party service) in the path of a data message flow between two machines (e.g., two VMs, two containers, etc.) in a public cloud environment. For a particular tenant of the public cloud, the method in some embodiments creates an overlay logical network with a distinct overlay address space. This overlay logical network is defined on top of the public cloud's underlay network, which, in some embodiments, is the VPC network provided by the cloud provider. The VPC network in some embodiments is also a logical overlay network provided by the cloud provider, while in other embodiments, it is just a portion segregated for the particular tenant from the rest of the provider's physical network (e.g., a portion with a segregated address space for the particular tenant).


To perform one or more services on data message flows between two machines in the VPC network, the method configures one or more logical forwarding elements (e.g., a logical router) of the overlay logical network to forward such data message flows to one or more forwarding elements of the underlay network, so that the underlay network can forward the data messages to service machines that perform the service on the data message flows. For instance, in some embodiments, the method configures a logical interface (LIF) of a logical router of the logical overlay network to forward data messages that are directed to certain destination IP addresses to an underlay default downlink gateway that is specified for the VPC network by the public cloud provider. In some such embodiments, the method defines the underlay default downlink gateway as the next hop (that is accessible through the LIF) for the data messages that are directed to those destination IP addresses.


The method also modifies the route table of the underlay network's forwarding element (e.g., the underlay default gateway) to send data messages destined to some or all logical overlay addresses to one or more service machines that perform one or more services on the data messages. These machines can be standalone service appliances (e.g., third party service appliances, such as firewall appliances of Palo Alto Networks), or they can be service machines (e.g., virtual machines, containers, etc.) executing on host computers. The service machines in some embodiments are within the public cloud, while in other embodiments they can be inside or outside of the public cloud.


A service machine performs one or more services on the data messages that it receives from the underlay forwarding element directly or through an intervening network fabric. After performing its service(s) on a data message, the service machine provides the message to its uplink interface that handles the forwarding of data messages to networks outside of the service machine's network. The method configures a separate route table of the service machine's uplink interface to route the processed data messages to the underlay default uplink gateway, for the particular tenant in the public cloud.


The method also configures the route table of this underlay default uplink gateway to forward data messages that are destined to some or all logical overlay addresses to the particular tenant's cloud gateway. In some embodiments, the particular tenant's cloud gateway processes data messages that enter or exit the tenant's VPC network in the public cloud. The method configures the tenant gateway to route a data message that is addressed to a logical overlay network destination address to the correct destination machine (e.g., correct VM or container) in the particular tenant's VPC.


When the destination machine sends a reply message to a message from a source machine, the destination machine's reply message in some embodiments follows a path similar to that of the received data message. Specifically, in some embodiments, a logical forwarding element (e.g., a logical router) associated with the destination machine forwards the reply message to a forwarding element of the underlay network through a logical forwarding element of the overlay network (e.g., through a LIF of the logical forwarding element that has the underlay default gateway as its next hop for some logical overlay addresses). The underlay network forwarding element (e.g., the underlay default gateway) is configured to send data messages destined to some or all logical overlay addresses to one or more service machines.


After processing the reply message, the service machine again provides the message to its uplink interface, which again has its route table configured to route the processed reply data message to the underlay default uplink gateway. This gateway is again configured to forward the processed data messages to the particular tenant's cloud gateway, which again is configured to route a message that is addressed to a logical overlay network destination address to the correct destination machine (e.g., correct VM or container) in the particular tenant's VPC.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, Drawings, and Claims is needed. Moreover, the claimed subject matter is not to be limited by the illustrative details in the Summary, Detailed Description, and Drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 presents an example that illustrates how some embodiments perform a service as a data message is sent from one machine to another machine in a public cloud environment.



FIG. 2 illustrates examples of virtual private clouds in a public cloud environment.



FIG. 3 presents an example that illustrates how some embodiments direct a reply message to one or more service machines.



FIG. 4 illustrates a process that a set of one or more network controllers of some embodiments perform to configure the logical network elements, the underlay network elements, the service machines, and the tenant gateways.



FIG. 5 presents examples of how the network and the service elements in FIGS. 1 and 3 are configured in some embodiments.



FIG. 6 conceptually illustrates a computer system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments provide a novel way to insert a service (e.g., a third party service) in the path of a data message flow between two machines (e.g., two VMs, two containers, etc.) in a public cloud environment. For a particular tenant of the public cloud, some embodiments create an overlay logical network with a logical overlay address space. To perform a service on data messages of a flow between two machines, the logical overlay network passes to the public cloud's underlay network the data messages with their destination address (e.g., destination IP addresses) defined in the logical overlay network.


The underlay network (e.g., an underlay default downlink gateway) is configured to pass data messages with such destination addresses (e.g., with logical overlay destination addresses) to a set of one or more service machines. The underlay network (e.g., an underlay default uplink gateway) is also configured to pass to the particular tenant's public cloud gateway the processed data messages that are received from the service machine set, and that are addressed to logical overlay destination addresses. The tenant's public cloud gateway is configured to forward such data messages to a logical forwarding element of the logical network, which then handles the forwarding of the data messages to the correct destination machine.


As used in this document, data messages refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to layer 2 (L2), layer 3 (L3), layer 4 (L4), and layer 7 (L7) are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open Systems Interconnection) layer model.



FIG. 1 presents an example that illustrates how some embodiments perform a service (e.g., a third party service) as a data message is sent from one machine to another machine in a public cloud environment. In this example, the machines are VMs 122/132 that execute on host computers in a public cloud 100. These VMs belong to one tenant of the public datacenter 100. The public cloud in some embodiments is at one physical datacenter location, while in other embodiments it spans multiple physical datacenter locations (e.g., is in multiple cities).


As shown, this tenant has a virtual private cloud (VPC) 105 in the public cloud 100. FIG. 2 illustrates the VPCs 205 of the public cloud 100. These VPCs belong to different cloud tenants. As shown, the public cloud has numerous compute resources 210 (e.g., host computers, etc.) and network resources 215 (e.g., software switches and routers executing on the host computers, standalone switches and routers (such as top-of-rack switches), etc.). The network resources in some embodiments include middlebox service appliances and VMs. Each tenant's VPC uses a subset of the public cloud's compute and network resources. Typically, the public cloud provider uses compute virtualization and network virtualization/segmentation techniques to ensure that a tenant with one VPC cannot access the segregated machines and/or network of another tenant with another VPC.


As shown in FIG. 2, the public cloud datacenter 100 allows its tenants to create service VPCs 225. Each tenant's service VPC 225 includes one or more service appliances or machines (e.g., VMs or containers) that perform services (e.g., middlebox services, such as DPI, firewall, network address translation, encryption, etc.) on data messages exchanged between machines within the tenant's VPC as well as data messages entering and exiting the tenant's VPC 205. In some embodiments, all the service appliances and/or machines in a service VPC 225 are provided by third party vendors. In other embodiments, a subset of these appliances and/or machines are provided by third party vendors while the rest are provided by the public cloud provider and/or the tenant.


In the example illustrated in FIG. 1, the tenant VPC 105 uses one service VPC 150 to perform a set of one or more services on the data message that VM 122 sends to VM 132. As shown, the VPC 105 includes a logical overlay network 115. U.S. patent application Ser. No. 15/367,157, filed Dec. 1, 2016, now issued as U.S. Pat. No. 10,333,959, describes how some embodiments define a logical overlay network in a public cloud environment. U.S. patent application Ser. No. 15/367,157, now issued as U.S. Pat. No. 10,333,959, is incorporated herein by reference. The overlay logical network 115 is defined on top of the public cloud's underlay network, which, in some embodiments, is the VPC network provided by the cloud provider. The VPC network in some embodiments is also a logical overlay network provided by the cloud provider, while in other embodiments, it is just a portion segregated for the particular tenant from the rest of the provider's physical network (e.g., a portion with a segregated address space for the particular tenant).


As shown, the logical overlay network 115 includes two logical switches 120 and 125 and a logical router 130. Each logical switch spans multiple software switches on multiple host computers to connect several VMs on these host computers. Similarly, the logical router spans multiple software routers on multiple host computers to connect to the logical switch instances on these host computers. In some embodiments, the VMs connected to the two different logical switches 120 and 125 are on different subnets, while in other embodiments these VMs do not necessarily have to be on two different subnets.


As further shown, the tenant VPC 105 includes at least one tenant gateway 135. This VPC usually has multiple gateways for redundancy, and for handling northbound and southbound traffic out of and into the tenant VPC. For the tenant VPC 105, the public cloud also provides an underlay downlink gateway 140 and an underlay uplink gateway 145. These gateways forward messages out of and into the tenant VPC 105.



FIG. 1 illustrates the path that a data message takes from VM 122 to VM 132 in some embodiments. As shown, the logical switch 120 initially receives this message and forwards it to the logical router 130, as the message's destination is not connected to the logical switch 120. The logical router is configured to forward data message flows between the two subnets associated with the two logical switches to the underlay network, so that the underlay network can forward the data messages to the service VPC to perform a set of one or more services on the data message flows. Some embodiments configure a logical interface (LIF) 155 of the logical router 130 as the interface associated with the underlay network's default downlink gateway 140 that is specified for the VPC network by the public cloud provider. In some such embodiments, the underlay default downlink gateway is the next hop (that is accessible through the LIF) for the data messages from VMs connected to the logical switch 120 to the VMs connected to the logical switch 125.


The underlay default downlink gateway 140 is configured to send the data message destined to some or all logical overlay addresses to the service VPC 150, so that one or more service machines at this VPC can perform one or more services on the data message. Thus, when the logical router 130 provides the data message from VM 122 to the underlay downlink gateway 140, this gateway forwards this message to the service VPC 150.


One or more service machines 152 at the service VPC 150 are configured to perform one or more services on data messages received from the public cloud's underlay gateway 140 (e.g., on data messages that have source and/or destination addresses in the logical overlay address space). Examples of these service operations include typical middlebox service operations such as firewall operations, NAT operations, etc. The service machines 152 can be standalone service appliances (e.g., third party service appliances, such as firewall appliances of Palo Alto Networks), or they can be service machines (e.g., virtual machines, containers, etc.) executing on host computers. In the example illustrated in FIG. 1, the service machines are within a service VPC in the public cloud. In other embodiments, the service machines can be outside of the public cloud. In still other embodiments, the service machines are part of the tenant's VPC but outside of the tenant's logical overlay network 115.


When only one service machine performs a service on the data message from VM 122, the service machine 152 in the example of FIG. 1 provides the processed message to its uplink interface that handles the forwarding of data messages to networks outside of the service machine's VPC. On the other hand, when multiple service machines perform multiple service operations on this data message, the last service machine 152 in the example of FIG. 1 provides the processed message to its uplink interface. In either case, a separate route table of the service machine's uplink interface is configured to route the processed data message to the underlay default uplink gateway 145 for the particular tenant in the public cloud. When a service operation requires the message to be discarded, the message is discarded in some embodiments.


The underlay default uplink gateway 145 is configured to forward data messages that are destined to some or all logical overlay addresses to the particular tenant's cloud gateway 135. Hence, this gateway 145 forwards the data message from VM 122 to the tenant gateway 135. The tenant's cloud gateway 135 is configured to forward to the logical router 130 data messages that have destination addresses in the logical overlay address space.


Accordingly, the tenant gateway 135 forwards the processed data message to the logical router 130, which then forwards it to the logical switch 125. This switch then forwards the data message to the VM 132. For the logical switch 125 to forward the data message to VM 132, the data message is supplied by a logical router instance executing on one host computer to the logical switch instance (i.e., a software switch that implements the logical switch) that executes on the same host computer as VM 132. This logical switch instance then passes the data message to the VM 132.


In some embodiments, a service operation does not need to be performed on a reply message from VM 132 to VM 122. In other embodiments, such a service operation is needed. FIG. 3 presents an example that illustrates how some embodiments direct this reply message to one or more service machines 152 in the service VPC 150. As shown, the path that this reply message takes is identical to the path of the original message from VM 122 to VM 132, except that the reply message (1) initially goes from the VM 132 to the logical switch 125, and then to logical router 130, and (2) after being processed by the service VPC and passed to the underlay uplink gateway 145 and then the tenant gateway, goes from the logical router 130 to the logical switch 120 to reach VM 122. The reply message traverses the same path as the original message when it goes from the logical router 130 to the underlay downlink gateway 140, and then to the service VPC 150, the underlay uplink gateway 145 and then the tenant gateway 135.



FIG. 4 illustrates a process 400 that a set of one or more network controllers of some embodiments perform to configure the logical network elements, the underlay network elements, the service machines, and the tenant gateways. The process 400 configures the network and service elements to direct data message flows, between different machines (e.g., two VMs, two containers, etc.) of a particular tenant of a public cloud, to one or more service machines, so that they can perform one or more services on the data message flows. The process 400 will be explained by reference to FIG. 5, which presents examples of how the network and service elements in FIGS. 1 and 3 are configured in some embodiments.


In some embodiments, the network controller set that performs the process 400 is deployed in a tenant VPC. In other embodiments, the network controller set is deployed in a management VPC in the public cloud. The public cloud provider in some embodiments provides and operates the management VPC, while in other embodiments a third-party provider deploys and operates it. Also, in some embodiments, the network controller set resides in a private datacenter outside of the public cloud environment. Several examples of network controller sets are described in the above-incorporated U.S. patent application Ser. No. 15/367,157.


As shown, the process 400 in some embodiments initially creates (at 405) an overlay logical network with a distinct overlay address space. The logical overlay network 115 is one example of such an overlay logical network. The overlay logical network is defined (at 405) on top of the public cloud's underlay network, which, in some embodiments, is the VPC network provided by the cloud provider. The VPC network in some embodiments is also a logical overlay network provided by the cloud provider, while in other embodiments, it is just a portion segregated for the particular tenant from the rest of the provider's physical network (e.g., a portion with a segregated address space for the particular tenant).
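By way of illustration only (this is an editor's sketch, not part of the patented embodiments), the following Python fragment models step 405: an overlay logical network whose address space is distinct from, and does not overlap, the provider-assigned VPC block. All class names, tenant names, and CIDR blocks here are invented for the example.

```python
# Hypothetical sketch of step 405: an overlay logical network whose address
# space is distinct from the provider's VPC (underlay) block, so the underlay
# can later carry routes for overlay prefixes without running into the
# "no more-specific routes inside the VPC block" restriction noted in the
# Background. All names and CIDR blocks are placeholders.
from dataclasses import dataclass, field
from ipaddress import IPv4Network


@dataclass
class LogicalSwitch:
    name: str
    subnet: IPv4Network  # overlay subnet served by this logical switch


@dataclass
class OverlayLogicalNetwork:
    tenant: str
    underlay_vpc_cidr: IPv4Network  # provider-assigned VPC block
    overlay_cidr: IPv4Network       # distinct logical overlay address space
    switches: list = field(default_factory=list)

    def add_switch(self, name: str, subnet: IPv4Network) -> LogicalSwitch:
        # An overlay subnet must fall inside the overlay space and must not
        # overlap the underlay VPC block.
        assert subnet.subnet_of(self.overlay_cidr)
        assert not subnet.overlaps(self.underlay_vpc_cidr)
        switch = LogicalSwitch(name, subnet)
        self.switches.append(switch)
        return switch


net = OverlayLogicalNetwork(
    tenant="tenant-1",
    underlay_vpc_cidr=IPv4Network("172.31.0.0/16"),  # placeholder VPC block
    overlay_cidr=IPv4Network("10.10.0.0/16"),        # placeholder overlay space
)
subnet1 = net.add_switch("logical-switch-120", IPv4Network("10.10.1.0/24"))
subnet2 = net.add_switch("logical-switch-125", IPv4Network("10.10.2.0/24"))
```

The two placeholder subnets stand in for the subnets of the logical switches 120 and 125 and are reused in the sketches that follow.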


Next, at 410, the process configures one or more logical forwarding elements (e.g., a logical router) of the overlay logical network to forward such data message flows to one or more forwarding elements of the underlay network, so that the underlay network can forward the data messages to service machines that perform the service on the data message flows. For instance, in some embodiments, the process configures a logical interface of a logical router of the logical overlay network to forward data messages that are directed to certain destination IP addresses to an underlay default downlink gateway that is specified for the VPC network by the public cloud provider.


In some such embodiments, the process defines the underlay default downlink gateway as the next hop (that is accessible through the LIF) for the data messages that are directed to certain destination IP addresses. FIG. 5 illustrates a route table 505 of the logical router 130. As shown, the route table 505 has a first route record 552 that identifies the underlay default downlink gateway 140 as the next hop for any message from any logical IP address of subnet 1 (of the first logical switch 120) to any logical IP address of subnet 2 (of the second logical switch 125), when the data message is received on the logical router's ingress port X that is associated with the logical switch 120.


The process 400 also configures (at 415) one or more logical forwarding elements of the overlay logical network to forward data messages after they have been processed by the service machine(s) to their destination in the logical address space. FIG. 5 illustrates the route table 505 having a second route record 554 that identifies an interface (LIF B) associated with second logical switch 125 as the output port for any message from any logical IP address of subnet 1 (of the first logical switch 120) to any logical IP address of subnet 2 (of the second logical switch 125), when the data message is received on the logical router's ingress port Y that is associated with the tenant gateway 135.
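The two route records can be pictured as a small match-action table. The following sketch is hypothetical (the port names, subnets, and next-hop labels are placeholders keyed to the figure's reference numbers), but it captures how records 552 and 554 match the same subnet pair and differ only in ingress port and resulting action:

```python
# Hypothetical rendering of the logical router's route table 505.
from ipaddress import ip_address, ip_network

ROUTE_TABLE_505 = [
    {   # record 552: subnet1-to-subnet2 traffic entering on ingress port X
        # (from logical switch 120) is punted to the underlay downlink gateway.
        "ingress_port": "X",
        "src": ip_network("10.10.1.0/24"),  # placeholder for subnet 1
        "dst": ip_network("10.10.2.0/24"),  # placeholder for subnet 2
        "action": {"next_hop": "underlay-downlink-gw-140"},
    },
    {   # record 554: the same flow, re-entering on ingress port Y from the
        # tenant gateway after service processing, is output on LIF B toward
        # the second logical switch.
        "ingress_port": "Y",
        "src": ip_network("10.10.1.0/24"),
        "dst": ip_network("10.10.2.0/24"),
        "action": {"output_lif": "LIF-B"},
    },
]


def lookup(table, ingress_port, src, dst):
    """Return the action of the first record matching the data message."""
    for rec in table:
        if (rec["ingress_port"] == ingress_port
                and ip_address(src) in rec["src"]
                and ip_address(dst) in rec["dst"]):
            return rec["action"]
    return None


# The first leg of the FIG. 1 path resolves to the underlay downlink gateway:
assert lookup(ROUTE_TABLE_505, "X", "10.10.1.5", "10.10.2.7") == {
    "next_hop": "underlay-downlink-gw-140"}
```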


Next, at 420, the process 400 configures the route table of an underlay network's forwarding element (e.g., the underlay default downlink gateway) to send data messages destined to some or all logical overlay addresses to one or more service machines that perform one or more services on the data messages. FIG. 5 illustrates a route table 510 of the underlay downlink gateway 140. As shown, the route table 510 has a route record that identifies the service machine cluster 1 as the next hop for any message from any logical IP address of subnet 1 (of the first logical switch 120) to any logical IP address of subnet 2 (of the second logical switch 125).


At 425, the process also configures the route table of an underlay network's forwarding element (e.g., the underlay default uplink gateway) to forward data messages that are destined to some or all logical overlay addresses to the particular tenant's cloud gateway (e.g., the tenant gateway 135). FIG. 5 illustrates a route table 525 of the underlay uplink gateway 145. As shown, the route table 525 has a route record that identifies the tenant gateway 135 as the next hop for any message from any logical IP address of subnet 1 (of the first logical switch 120) to any logical IP address of subnet 2 (of the second logical switch 125).
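For a concrete, though hypothetical, rendering of steps 420 and 425, the sketch below assumes an AWS-style underlay (per the Background's AWS example) and models the downlink and uplink gateways simply as two provider route tables edited through boto3. Every resource ID and region is a placeholder, and other clouds would expose different configuration surfaces.

```python
# Hedged sketch of steps 420 and 425 on an AWS-style underlay. Because the
# overlay prefix (10.10.2.0/24 in the earlier placeholder numbering) lies
# outside the VPC's own CIDR block, a route for it can be installed without
# violating the provider restriction described in the Background.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Step 420: the downlink gateway's route table (510) sends overlay-addressed
# traffic to a service machine, here modeled as the service appliance's
# network interface.
ec2.create_route(
    RouteTableId="rtb-downlink-140",             # placeholder
    DestinationCidrBlock="10.10.2.0/24",         # placeholder overlay subnet 2
    NetworkInterfaceId="eni-service-cluster-1",  # placeholder service appliance ENI
)

# Step 425: the uplink gateway's route table (525) returns service-processed
# traffic addressed to overlay destinations to the tenant's cloud gateway.
ec2.create_route(
    RouteTableId="rtb-uplink-145",               # placeholder
    DestinationCidrBlock="10.10.2.0/24",
    NetworkInterfaceId="eni-tenant-gw-135",      # placeholder tenant gateway ENI
)
```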


The service machines can be standalone service appliances (e.g., third party service appliances, such as firewall appliances of Palo Alto Networks), or they can be service machines (e.g., virtual machines, containers, etc.) executing on host computers. The service machines in some embodiments are within the public cloud, while in other embodiments they can be inside or outside of the public cloud.


At 430, the process 400 configures one or more service machines with one or more service rules that direct the service machines to perform one or more services on the data messages that they receive from the underlay forwarding element directly or through intervening network fabric. Each such service rule is defined in terms of one or more logical overlay addresses. For instance, in some embodiments, each service rule has a rule match identifier that is defined in terms of one or more flow identifiers, and one or more of these flow identifiers in the configured service rules are expressed in terms of logical overlay addresses. FIG. 5 illustrates one example of such a service rule. Specifically, it shows a firewall rule table 515 with a firewall rule for a firewall service appliance 560. The firewall rule specifies that any data message from logical IP address N of VM 122 of subnet 1 to logical IP address O of VM 132 of subnet 2 should be allowed.
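A minimal sketch of such a rule table follows; the addresses stand in for the logical IPs of VMs 122 and 132 (the addresses labeled N and O above), and the field names are, like everything else in the sketch, purely illustrative:

```python
# Hypothetical rendering of firewall rule table 515: one allow rule whose
# match identifiers are expressed in logical overlay addresses.
from ipaddress import ip_address

FIREWALL_TABLE_515 = [
    {
        "src": ip_address("10.10.1.5"),  # placeholder for VM 122's logical IP (address N)
        "dst": ip_address("10.10.2.7"),  # placeholder for VM 132's logical IP (address O)
        "action": "allow",
    },
]


def filter_message(src, dst, default="drop"):
    """Apply the first matching rule; unmatched traffic takes the default action."""
    for rule in FIREWALL_TABLE_515:
        if rule["src"] == ip_address(src) and rule["dst"] == ip_address(dst):
            return rule["action"]
    return default


assert filter_message("10.10.1.5", "10.10.2.7") == "allow"
```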


After performing its service(s) on a data message, the service machine provides the message to its uplink interface that handles the forwarding of data messages to networks outside of the service machine's network. The process 400 configures (at 430) a route table of the service machine's uplink interface to route the processed data messages to the underlay default uplink gateway for the particular tenant in the public cloud. FIG. 5 illustrates a route table 520 of the uplink interface of the firewall appliance 560. As shown, the route table 520 has a route record that identifies the underlay uplink gateway 145 as the next hop for any message from any logical IP address of subnet 1 (of the first logical switch 120) to any logical IP address of subnet 2 (of the second logical switch 125).
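If the service machine is assumed to be a Linux-based appliance (an assumption not made by the patent, which covers appliances generally), the uplink interface's route table 520 could be rendered with iproute2, as in this sketch; the gateway address, prefix, and interface name are placeholders:

```python
# Hypothetical sketch of route table 520 on a Linux-based service appliance:
# send processed overlay-bound traffic to the underlay uplink gateway. A
# hardware or third-party appliance would use its own configuration CLI.
import subprocess

UPLINK_GW_145 = "172.31.0.45"     # placeholder underlay address of the uplink gateway
OVERLAY_SUBNET_2 = "10.10.2.0/24" # placeholder overlay destination prefix

subprocess.run(
    ["ip", "route", "replace", OVERLAY_SUBNET_2,
     "via", UPLINK_GW_145, "dev", "eth1"],  # eth1: placeholder uplink interface
    check=True,
)
```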


Next, at 435, the process configures the route table of the tenant gateway to route a data message that is addressed to a logical overlay network destination address to the correct destination machine (e.g., correct VM or container) in the particular tenant's VPC. FIG. 5 illustrates a route table 530 of the tenant gateway 135. As shown, the route table 530 has a route record that identifies the logical router 130 as the next hop for any message from any logical IP address of subnet 1 (of the first logical switch 120) to any logical IP address of subnet 2 (of the second logical switch 125). After 435, the process 400 ends.
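Putting the five configured tables together, the end-to-end behavior reduces to a chain of next-hop lookups. The following self-contained sketch (element names are placeholders keyed to the reference numbers of FIGS. 1 and 5) reproduces the forward path of FIG. 1 for subnet1-to-subnet2 overlay traffic:

```python
# Each entry records which element the named table hands the data message to
# for subnet1-to-subnet2 overlay traffic, per configuration steps 410-435.
NEXT_HOP = {
    "logical-router-130(port X)": "underlay-downlink-gw-140",  # route record 552
    "underlay-downlink-gw-140": "service-machine-152",         # route table 510
    "service-machine-152": "underlay-uplink-gw-145",           # uplink route table 520
    "underlay-uplink-gw-145": "tenant-gw-135",                 # route table 525
    "tenant-gw-135": "logical-router-130(port Y)",             # route table 530
    "logical-router-130(port Y)": "logical-switch-125",        # route record 554
    "logical-switch-125": "VM-132",                            # L2 delivery to destination
}


def trace(start):
    """Follow next hops until reaching an element with no further route."""
    path = [start]
    while path[-1] in NEXT_HOP:
        path.append(NEXT_HOP[path[-1]])
    return path


print(" -> ".join(trace("logical-router-130(port X)")))
# logical-router-130(port X) -> underlay-downlink-gw-140 -> service-machine-152
# -> underlay-uplink-gw-145 -> tenant-gw-135 -> logical-router-130(port Y)
# -> logical-switch-125 -> VM-132
```

The reply path of FIG. 3 would traverse the same middle hops, entering the chain from the logical router's other LIF and ending at VM 122.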


Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.


In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.



FIG. 6 conceptually illustrates a computer system 600 with which some embodiments of the invention are implemented. The computer system 600 can be used to implement any of the above-described hosts, controllers, and managers. As such, it can be used to execute any of the above-described processes. This computer system includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. Computer system 600 includes a bus 605, processing unit(s) 610, a system memory 625, a read-only memory 630, a permanent storage device 635, input devices 640, and output devices 645.


The bus 605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 600. For instance, the bus 605 communicatively connects the processing unit(s) 610 with the read-only memory 630, the system memory 625, and the permanent storage device 635.


From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 630 stores static data and instructions that are needed by the processing unit(s) 610 and other modules of the computer system. The permanent storage device 635, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data, even when the computer system 600 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 635.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 635, the system memory 625 is a read-and-write memory device. However, unlike the storage device 635, the system memory 625 is a volatile read-and-write memory, such as a random-access memory. The system memory 625 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 625, the permanent storage device 635, and/or the read-only memory 630. From these various memory units, the processing unit(s) 610 retrieve instructions to execute, and data to process, in order to execute the processes of some embodiments.


The bus 605 also connects to the input and output devices 640 and 645. The input devices enable the user to communicate information and select commands to the computer system. The input devices 640 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 645 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRTs) or liquid crystal displays (LCDs). Some embodiments include devices, such as touchscreens, that function as both input and output devices.


Finally, as shown in FIG. 6, bus 605 also couples computer system 600 to a network 665 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system 600 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit, and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, several figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method for providing a service for data messages exchanged between machines in a virtual private cloud (VPC) in a public cloud, the method comprising: establishing first and second VPCs for first and second tenants in the public cloud; configuring a first router for the first VPC and a second router for the second VPC respectively to connect a set of machines in each VPC, each VPC's router comprising at least one logical interface (LIF); configuring each VPC's router to direct, through the at least one LIF of the VPC's router, a subset of data messages exchanged between the VPC's machines to a shared network of the public cloud; and configuring the shared network to forward the subset of data messages, received from the at least one LIF of each VPC's router, to a common set of service machines that are deployed in a service VPC for use by multiple tenant VPCs, the common set of service machines performing a set of one or more services on a particular received data message of the subset of data messages and providing the particular received data message back to the shared network to forward the particular received data message to the VPC router that provided the particular received data message for the VPC router to forward the particular received data message to a destination machine in the VPC corresponding to the respective router with the at least one LIF.
  • 2. The method of claim 1, wherein configuring the shared network comprises configuring at least one downlink gateway of the public cloud to forward the subset of data messages forwarded by at least one VPC's router to the common set of service machines.
  • 3. The method of claim 2, wherein the at least one downlink gateway forwards to the common set of service machines the subset of data messages that it receives from each of at least two configured routers for each of at least two VPCs.
  • 4. The method of claim 1, wherein the common set of service machines processes each data message forwarded by the shared network from each VPC's router.
  • 5. The method of claim 1 further comprising configuring a VPC gateway of each VPC to forward data messages processed by the common set of service machines to the respective VPC's configured router.
  • 6. The method of claim 5, wherein configuring the shared network comprises configuring at least one uplink gateway of the public cloud to forward data messages of at least one VPC that are processed by the common set of service machines to the at least one VPC's configured router.
  • 7. The method of claim 6, wherein the at least one uplink gateway of the public cloud is configured to forward data messages of at least two VPCs that are processed by the common set of service machines to at least two configured routers of the at least two VPCs.
  • 8. The method of claim 1, wherein each VPC of each tenant has a network that is segregated from other networks of other VPCs of other tenants.
  • 9. The method of claim 8, wherein the network for each tenant's VPC is a logical network defined over the shared network of the public cloud.
  • 10. The method of claim 9, wherein the logical network for each VPC includes at least one logical router and at least one logical switch.
  • 11. A non-transitory machine readable medium storing a program for providing a service for data messages exchanged between machines in a virtual private cloud (VPC) in a public cloud, the program for execution by at least one processing unit of a computer, the program comprising sets of instructions for: establishing first and second VPCs for first and second tenants in the public cloud; configuring a first router for the first VPC and a second router for the second VPC respectively to connect a set of machines in each VPC, each VPC's router comprising at least one logical interface (LIF); configuring each VPC's router to direct, through the at least one LIF of the VPC's router, a subset of data messages exchanged between the VPC's machines to a shared network of the public cloud; and configuring the shared network to forward the subset of data messages, received from the at least one LIF of each VPC's router, to a common set of service machines that are deployed in a service VPC for use by multiple tenant VPCs, the common set of service machines performing a set of one or more services on a particular received data message of the subset of data messages and providing the particular received data message back to the shared network to forward the particular received data message to the VPC router that provided the particular received data message for the VPC router that provided the particular received data message to forward the particular received data message to a destination machine in the VPC corresponding to the respective router with the at least one LIF.
  • 12. The non-transitory machine readable medium of claim 11, wherein the set of instructions for configuring the shared network comprises a set of instructions for configuring at least one downlink gateway of the public cloud to forward the subset of data messages forwarded by at least one VPC's router to the common set of service machines.
  • 13. The non-transitory machine readable medium of claim 12, wherein the at least one downlink gateway forwards to the common set of service machines the subset of data messages that it receives from each of at least two configured routers for each of at least two VPCs.
  • 14. The non-transitory machine readable medium of claim 11, wherein the common set of service machines processes each data message forwarded by the shared network from each VPC's router.
  • 15. The non-transitory machine readable medium of claim 11, wherein the program further comprises a set of instructions for configuring a VPC gateway of each VPC to forward data messages processed by the common set of service machines to the respective VPC's configured router.
  • 16. The non-transitory machine readable medium of claim 15, wherein configuring the shared network comprises configuring at least one uplink gateway of the public cloud to forward data messages of at least one VPC that are processed by the common set of service machines to the at least one VPC's configured router.
  • 17. The non-transitory machine readable medium of claim 16, wherein the at least one uplink gateway of the public cloud is configured to forward data messages of at least two VPCs that are processed by the common set of service machines to at least two configured routers of the at least two VPCs.
  • 18. The non-transitory machine readable medium of claim 11, wherein each VPC of each tenant has a network that is segregated from other networks of other VPCs of other tenants.
  • 19. The non-transitory machine readable medium of claim 18, wherein the network for each tenant's VPC is a logical network defined over the shared network of the public cloud.
CLAIM OF BENEFIT TO PRIOR APPLICATIONS

This application is a continuation application of U.S. patent application Ser. No. 16/109,395, filed Aug. 22, 2018, now published as U.S. Patent Publication 2019/0068500. U.S. patent application Ser. No. 16/109,395 claims the benefit of U.S. Provisional Patent Application 62/550,675, filed Aug. 27, 2017. U.S. patent application Ser. No. 16/109,395, now published as U.S. Patent Publication 2019/0068500, is incorporated herein by reference.

US Referenced Citations (241)
Number Name Date Kind
6108300 Coile et al. Aug 2000 A
6832238 Sharma et al. Dec 2004 B1
7107360 Phadnis et al. Sep 2006 B1
7360245 Ramachandran et al. Apr 2008 B1
7423962 Auterinen Sep 2008 B2
7523485 Kwan Apr 2009 B1
7953895 Narayanaswamy et al. May 2011 B1
8264947 Tavares Sep 2012 B1
8296434 Miller et al. Oct 2012 B1
8432791 Masters Apr 2013 B1
8514868 Hill Aug 2013 B2
8719590 Faibish et al. May 2014 B1
8902743 Greenberg et al. Dec 2014 B2
8958293 Anderson Feb 2015 B1
9137209 Brandwine et al. Sep 2015 B1
9244669 Govindaraju et al. Jan 2016 B2
9356866 Sivaramakrishnan et al. May 2016 B1
9413730 Narayan et al. Aug 2016 B1
9485149 Traina et al. Nov 2016 B1
9519782 Aziz et al. Dec 2016 B2
9590904 Heo et al. Mar 2017 B2
9699070 Davie et al. Jul 2017 B2
9832118 Miller et al. Nov 2017 B1
9860079 Cohn Jan 2018 B2
9860214 Bian Jan 2018 B2
9871720 Tillotson Jan 2018 B1
10135675 Yu et al. Nov 2018 B2
10193749 Hira et al. Jan 2019 B2
10228959 Anderson et al. Mar 2019 B1
10326744 Nossik et al. Jun 2019 B1
10333959 Katrekar et al. Jun 2019 B2
10341371 Katrekar et al. Jul 2019 B2
10348689 Bian Jul 2019 B2
10348767 Lee et al. Jul 2019 B1
10367757 Chandrashekhar et al. Jul 2019 B2
10397136 Hira et al. Aug 2019 B2
10484302 Hira et al. Nov 2019 B2
10491466 Hira et al. Nov 2019 B1
10491516 Ram et al. Nov 2019 B2
10523514 Lee Dec 2019 B2
10567482 Ram et al. Feb 2020 B2
10601705 Hira et al. Mar 2020 B2
10673952 Cohen et al. Jun 2020 B1
10764331 Hoole et al. Sep 2020 B2
10778579 Hira Sep 2020 B2
10805330 Katrekar et al. Oct 2020 B2
10812413 Chandrashekhar et al. Oct 2020 B2
10862753 Hira et al. Dec 2020 B2
10924431 Chandrashekhar et al. Feb 2021 B2
11018993 Chandrashekhar et al. May 2021 B2
11115465 Ram et al. Sep 2021 B2
11196591 Hira et al. Dec 2021 B2
11343229 Jain et al. May 2022 B2
11374794 Hira Jun 2022 B2
20020062217 Fujimori May 2002 A1
20020199007 Clayton et al. Dec 2002 A1
20070186281 McAlister Aug 2007 A1
20070226795 Conti et al. Sep 2007 A1
20070256073 Troung et al. Nov 2007 A1
20080104692 McAlister May 2008 A1
20080225888 Valluri et al. Sep 2008 A1
20090254973 Kwan Oct 2009 A1
20100037311 He et al. Feb 2010 A1
20100112974 Sahai et al. May 2010 A1
20100257263 Casado Oct 2010 A1
20100318609 Lahiri et al. Dec 2010 A1
20110075667 Li Mar 2011 A1
20110075674 Li Mar 2011 A1
20110176426 Lioy et al. Jul 2011 A1
20110317703 Dunbar et al. Dec 2011 A1
20120072762 Atchison Mar 2012 A1
20120082063 Fujita Apr 2012 A1
20120250682 Vincent et al. Oct 2012 A1
20130044636 Koponen et al. Feb 2013 A1
20130044641 Koponen et al. Feb 2013 A1
20130044763 Koponen et al. Feb 2013 A1
20130058208 Pfaff et al. Mar 2013 A1
20130058335 Koponen et al. Mar 2013 A1
20130060928 Shao Mar 2013 A1
20130125230 Koponen et al. May 2013 A1
20130152076 Patel Jun 2013 A1
20130198740 Arroyo et al. Aug 2013 A1
20130263118 Kannan et al. Oct 2013 A1
20130287022 Banavalikar Oct 2013 A1
20130287026 Davie Oct 2013 A1
20130297768 Singh Nov 2013 A1
20130304903 Mick et al. Nov 2013 A1
20130318219 Kancherla Nov 2013 A1
20130346585 Ueno Dec 2013 A1
20140010239 Xu et al. Jan 2014 A1
20140052877 Mao Feb 2014 A1
20140108665 Arora et al. Apr 2014 A1
20140143853 Onodera May 2014 A1
20140156818 Hunt Jun 2014 A1
20140192804 Ghanwani et al. Jul 2014 A1
20140226820 Chopra et al. Aug 2014 A1
20140241247 Kempf Aug 2014 A1
20140245420 Tidwell et al. Aug 2014 A1
20140280488 Voit et al. Sep 2014 A1
20140280961 Martinez et al. Sep 2014 A1
20140317677 Vaidya et al. Oct 2014 A1
20140334495 Stubberfield et al. Nov 2014 A1
20140337500 Lee Nov 2014 A1
20140376560 Senniappan et al. Dec 2014 A1
20150009995 Gross, IV Jan 2015 A1
20150016286 Ganichev et al. Jan 2015 A1
20150016460 Zhang et al. Jan 2015 A1
20150043383 Farkas Feb 2015 A1
20150052522 Chanda et al. Feb 2015 A1
20150052525 Raghu Feb 2015 A1
20150063360 Thakkar et al. Mar 2015 A1
20150063364 Thakkar et al. Mar 2015 A1
20150085870 Narasimha Mar 2015 A1
20150096011 Watt Apr 2015 A1
20150098455 Fritsch Apr 2015 A1
20150098465 Pete et al. Apr 2015 A1
20150103838 Zhang et al. Apr 2015 A1
20150106804 Chandrashekhar et al. Apr 2015 A1
20150124645 Yadav et al. May 2015 A1
20150128245 Brown et al. May 2015 A1
20150138973 Kumar May 2015 A1
20150139238 Pourzandi et al. May 2015 A1
20150163137 Kamble Jun 2015 A1
20150163145 Pettit et al. Jun 2015 A1
20150163192 Jain Jun 2015 A1
20150172075 DeCusatis et al. Jun 2015 A1
20150172183 DeCusatis et al. Jun 2015 A1
20150172331 Raman Jun 2015 A1
20150263899 Tubaltsev et al. Sep 2015 A1
20150263946 Tubaltsev et al. Sep 2015 A1
20150263983 Brennan et al. Sep 2015 A1
20150263992 Kuch et al. Sep 2015 A1
20150264077 Berger et al. Sep 2015 A1
20150271303 Neginhal et al. Sep 2015 A1
20150281098 Pettit et al. Oct 2015 A1
20150281274 Masurekar et al. Oct 2015 A1
20150295731 Bagepalli et al. Oct 2015 A1
20150295800 Bala et al. Oct 2015 A1
20150304117 Dong Oct 2015 A1
20150326469 Kern Nov 2015 A1
20150339136 Suryanarayanan et al. Nov 2015 A1
20150350059 Chunduri Dec 2015 A1
20150350101 Sinha et al. Dec 2015 A1
20150373012 Bartz et al. Dec 2015 A1
20150381493 Bansal Dec 2015 A1
20160014023 He Jan 2016 A1
20160055019 Thakkar et al. Feb 2016 A1
20160072888 Jung et al. Mar 2016 A1
20160094364 Subramaniyam et al. Mar 2016 A1
20160094661 Jain et al. Mar 2016 A1
20160105488 Thakkar et al. Apr 2016 A1
20160124742 Rangasamy et al. May 2016 A1
20160134418 Liu et al. May 2016 A1
20160182567 Sood et al. Jun 2016 A1
20160191304 Muller Jun 2016 A1
20160198003 Luft Jul 2016 A1
20160212049 Davie Jul 2016 A1
20160226967 Zhang Aug 2016 A1
20160274926 Narasimhamurthy et al. Sep 2016 A1
20160308762 Teng et al. Oct 2016 A1
20160337329 Sood et al. Nov 2016 A1
20160352623 Jayabalan et al. Dec 2016 A1
20160352682 Chang et al. Dec 2016 A1
20160352747 Khan et al. Dec 2016 A1
20160364575 Caporal et al. Dec 2016 A1
20160380973 Sullenberger et al. Dec 2016 A1
20170005923 Babakian Jan 2017 A1
20170006053 Greenberg et al. Jan 2017 A1
20170034129 Sawant et al. Feb 2017 A1
20170034198 Powers et al. Feb 2017 A1
20170060628 Tarasuk-Levin et al. Mar 2017 A1
20170078248 Bian Mar 2017 A1
20170091458 Gupta et al. Mar 2017 A1
20170091717 Chandraghatgi et al. Mar 2017 A1
20170093646 Chanda et al. Mar 2017 A1
20170097841 Chang et al. Apr 2017 A1
20170099188 Chang et al. Apr 2017 A1
20170104365 Ghosh et al. Apr 2017 A1
20170111230 Srinivasan et al. Apr 2017 A1
20170118115 Tsuji Apr 2017 A1
20170126552 Pfaff et al. May 2017 A1
20170142012 Thakkar et al. May 2017 A1
20170149582 Cohn May 2017 A1
20170163442 Shen et al. Jun 2017 A1
20170163599 Shen et al. Jun 2017 A1
20170195217 Parasmal et al. Jul 2017 A1
20170195253 Annaluru et al. Jul 2017 A1
20170222928 Johnsen et al. Aug 2017 A1
20170223518 Upadhyaya et al. Aug 2017 A1
20170230241 Neginhal et al. Aug 2017 A1
20170279826 Mohanty et al. Sep 2017 A1
20170289060 Aftab et al. Oct 2017 A1
20170302529 Agarwal et al. Oct 2017 A1
20170302535 Lee Oct 2017 A1
20170310580 Caldwell et al. Oct 2017 A1
20170317972 Bansal et al. Nov 2017 A1
20170324848 Johnsen et al. Nov 2017 A1
20170331746 Qiang Nov 2017 A1
20170359304 Benny et al. Dec 2017 A1
20180006943 Dubey Jan 2018 A1
20180007002 Landgraf Jan 2018 A1
20180013791 Healey et al. Jan 2018 A1
20180026873 Cheng et al. Jan 2018 A1
20180026944 Phillips Jan 2018 A1
20180027012 Srinivasan et al. Jan 2018 A1
20180027079 Ali et al. Jan 2018 A1
20180053001 Folco et al. Feb 2018 A1
20180062880 Yu Mar 2018 A1
20180062881 Chandrashekhar Mar 2018 A1
20180062917 Chandrashekhar Mar 2018 A1
20180062923 Katrekar Mar 2018 A1
20180062933 Hira Mar 2018 A1
20180063036 Chandrashekhar Mar 2018 A1
20180063086 Hira Mar 2018 A1
20180063087 Hira Mar 2018 A1
20180063176 Katrekar et al. Mar 2018 A1
20180063193 Chandrashekhar et al. Mar 2018 A1
20180077048 Kubota et al. Mar 2018 A1
20180083923 Bian Mar 2018 A1
20180115586 Chou et al. Apr 2018 A1
20180139123 Qiang May 2018 A1
20180197122 Kadt et al. Jul 2018 A1
20180336158 Iyer et al. Nov 2018 A1
20190037033 Khakimov et al. Jan 2019 A1
20190068493 Ram et al. Feb 2019 A1
20190068500 Hira Feb 2019 A1
20190068689 Ram et al. Feb 2019 A1
20190097838 Sahoo et al. Mar 2019 A1
20190173757 Hira et al. Jun 2019 A1
20190173780 Hira et al. Jun 2019 A1
20190306185 Katrekar et al. Oct 2019 A1
20200007497 Jain et al. Jan 2020 A1
20200028758 Tollet et al. Jan 2020 A1
20200067733 Hira Feb 2020 A1
20200067734 Hira et al. Feb 2020 A1
20200177670 Ram et al. Jun 2020 A1
20200351254 Xiong et al. Nov 2020 A1
20210105208 Hira Apr 2021 A1
20210258268 Yu et al. Aug 2021 A1
20220255896 Jain Aug 2022 A1
20220329461 Hira Oct 2022 A1
Foreign Referenced Citations (31)
Number Date Country
1792062 Jun 2006 CN
101764752 Jun 2010 CN
102255934 Nov 2011 CN
102577255 Jul 2012 CN
102577270 Jul 2012 CN
103957270 Jul 2014 CN
104272672 Jan 2015 CN
105099953 Nov 2015 CN
103036919 Dec 2015 CN
105144110 Dec 2015 CN
105379227 Mar 2016 CN
106165358 Nov 2016 CN
107210959 Sep 2017 CN
107534603 Jan 2018 CN
107733704 Feb 2018 CN
107959689 Apr 2018 CN
1742430 Jan 2007 EP
3673627 Jul 2020 EP
2014075731 Apr 2014 JP
2015122088 Jul 2015 JP
2015136132 Jul 2015 JP
2015165700 Sep 2015 JP
2015068255 May 2015 WO
2015142404 Sep 2015 WO
2016159113 Oct 2016 WO
2018044341 Mar 2018 WO
2019040720 Feb 2019 WO
2019046071 Mar 2019 WO
2019112704 Jun 2019 WO
2020005540 Jan 2020 WO
2020041074 Feb 2020 WO
Non-Patent Literature Citations (23)
Entry
Lee, Wonhyuk, et al., “Micro-Datacenter Management Architecture for Mobile Wellness Information,” 2014 International Conference on IT Convergence and Security, Oct. 28-30, 2014, 4 pages, IEEE, Beijing, China.
Ling, Lu, et al., “Hybrid Cloud Solution Based on SDN Architecture,” Collection of Cloud Computing Industry Application Cases, Dec. 31, 2016, 20 pages, Issue 2, China Academic Journal Publishing House. [English translation of document generated from www.onlinedoctranslator.com].
Suen, Chun-Hui, et al., “Efficient Migration of Virtual Machines between Public and Private Cloud,” 2011 IEEE Third International Conference on Cloud Computing Technology and Science, Nov. 29-Dec. 1, 2011, 5 pages, IEEE, Athens, Greece.
Zheng, Guo, et al., “Study and Development of Private Cloud,” Scientific View, Sep. 30, 2016, 2 pages [English translation of abstract generated from Google Translate].
Author Unknown, “Network Controller,” Dec. 16, 2014, 4 pages, available at: https://web.archive.org/web/20150414112014/https://technet.microsoft.com/en-us/library/dn859239.aspx.
Black, David, et al., “An Architecture for Data Center Network Virtualization Overlays (NVO3) [draft-ietf-nvo3-arch-08],” Sep. 20, 2016, 34 pages, IETF.
Church, Mark, “Docker Reference Architecture: Designing Scalable, Portable Docker Container Networks,” Article ID: KB000801, Jun. 20, 2017, 36 pages, retrieved from https://success.docker.com/article/networking.
Fernando, Rex, et al., “Service Chaining using Virtual Networks with BGP,” Internet Engineering Task Force, IETF, Jul. 7, 2015, 32 pages, Internet Society (ISOC), Geneva, Switzerland, available at https://tools.ietf.org/html/draft-fm-bess-service-chaining-01.
Firestone, Daniel, “VFP: A Virtual Switch Platform for Host SDN in the Public Cloud,” 14th USENIX Symposium on Networked Systems Design and Implementation, Mar. 27-29, 2017, 15 pages, USENIX, Boston, MA, USA.
International Search Report and Written Opinion of commonly owned International Patent Application PCT/US2018/047570, dated Nov. 7, 2018, 17 pages, International Searching Authority/European Patent Office.
Koponen, Teemu, et al., “Network Virtualization in Multi-tenant Datacenters,” Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI'14), Apr. 2-4, 2014, 15 pages, Seattle, WA, USA.
Lasserre, Marc, et al., “Framework for Data Center (DC) Network Virtualization,” RFC 7365, Oct. 2014, 26 pages, IETF.
Le Bigot, Jean-Tiare, “Introduction to Linux Namespaces—Part 5: NET,” Jan. 19, 2014, 6 pages, retrieved from https://blog.yadutaf.fr/2014/01/19/introduction-to-linux-namespaces-part-5-net.
Merkel, Dirk, “Docker: Lightweight Linux Containers for Consistent Development and Deployment,” Linux Journal, May 19, 2014, 16 pages, vol. 2014—Issue 239, Belltown Media, Houston, USA.
Non-Published commonly owned U.S. Appl. No. 17/114,322, filed Dec. 7, 2020, 50 pages, Nicira Inc.
Singla, Ankur, et al., “Architecture Documentation: OpenContrail Architecture Document,” Jan. 24, 2015, 42 pages, OpenContrail.
Sunliang, Huang, “Future SDN-based Data Center Network,” Nov. 15, 2013, 5 pages, ZTE Corporation, available at http://wwwen.zte.com.cn/endata/magazine/ztetechnologies/2013/no6/articles/201311/t20131115_412737.html.
Wenjie, Zhu (Jerry), “Next Generation Service Overlay Networks,” IEEE P1903 NGSON (3GPP Draft), Aug. 22, 2014, 24 pages, IEEE.
Zhang, Zhe, et al., “Lark: Bringing Network Awareness to High Throughput Computing,” 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 4-7, 2015, 10 pages, IEEE, Shenzhen, China.
Lichtblau, Franziska, et al., “Detection, Classification, and Analysis of Inter-Domain Traffic with Spoofed Source IP Addresses,” IMC '17: Proceedings of the 2017 Internet Measurement Conference, Nov. 2017, 14 pages, Association for Computing Machinery, London, UK.
Lin, Yu-Chuan, et al. “Development of a Novel Cloud-Based Multi-Tenant Model Creation Scheme for Machine Tools,” 2015 IEEE International Conference on Automation Science and Engineering, Aug. 24-28, 2015, 2 pages, IEEE, Gothenburg, Sweden.
Non-Published Commonly Owned Related U.S. Appl. No. 17/731,232, filed Apr. 27, 2022, 37 pages, VMware, Inc.
Non-Published commonly owned U.S. Appl. No. 17/307,983, filed May 4, 2021, 119 pages, Nicira Inc.
Related Publications (1)
Number Date Country
20210105208 A1 Apr 2021 US
Provisional Applications (1)
Number Date Country
62550675 Aug 2017 US
Continuations (1)
Number Date Country
Parent 16109395 Aug 2018 US
Child 17020713 US