Service chains for inter-cloud traffic

Information

  • Patent Grant
  • Patent Number
    11,799,821
  • Date Filed
    Thursday, September 9, 2021
  • Date Issued
    Tuesday, October 24, 2023
Abstract
Systems, methods, and computer-readable media for creating service chains for inter-cloud traffic. In some examples, a system receives domain name system (DNS) queries associated with cloud domains and collects DNS information associated with the cloud domains. The system spoofs DNS entries defining a subset of IPs for each cloud domain. Based on the spoofed DNS entries, the system creates IP-to-domain mappings associating each cloud domain with a respective IP from the subset of IPs. Based on the IP-to-domain mappings, the system programs different service chains for traffic between a private network and respective cloud domains. The system routes, through the respective service chain, traffic having a source associated with the private network and a destination matching the IP in the respective IP-to-domain mapping.
Description
TECHNICAL FIELD

The present technology pertains to service chaining and, more specifically, creating service chains for inter-cloud traffic.


BACKGROUND

Service chaining allows network operators to steer traffic for a given application through various appliances, such as firewalls, WAN optimizers, and Intrusion Prevention Systems (IPSs), which together enforce specific policies and provide a desired functionality for the traffic. The appliances in a service chain can be “chained” together in a particular sequence along the path of the traffic to process the traffic through the sequence of appliances. For example, a network operator may define a service chain including a firewall and a WAN optimizer for traffic associated with an application. When such traffic is received, it is first routed to the firewall in the service chain, which provides firewall capabilities such as deep packet inspection and access control. After the traffic is processed by the firewall, it is routed to the WAN optimizer in the service chain, which can compress the traffic, apply quality-of-service (QoS) policies, or perform other traffic optimization functionalities. Once the traffic is processed by the WAN optimizer, it is routed towards its intended destination.


To implement a service chain, the network operator can program rules or policies for redirecting an application's traffic through a sequence of appliances in the service chain. For example, the network provider can program an access control list (ACL) in the network device's hardware, such as the network device's Ternary Content Addressable Memory (TCAM). The ACL can include entries which together specify the sequence of appliances in the service chain for the application's traffic. The ACL entries can identify specific addresses associated with the application's traffic, such as origin or destination IP addresses associated with the application's traffic, which the network device can use to match an ACL entry to traffic. The network device can then use the ACL entries to route the application's traffic through the sequence of appliances in the service chain.
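For illustration only (the addresses, appliance names, and helper below are hypothetical and not part of the disclosure), an ACL of this kind can be modeled as an ordered list of match-and-redirect entries, with a first-match lookup standing in for the hardware lookup:

```python
# Hypothetical model of hardware ACL entries for service chaining.
# Each entry matches on source and destination IPs and names the ordered
# sequence of appliances the matching traffic is steered through.
ACL = [
    # (source IP, destination IP, service chain)
    ("10.1.0.5", "203.0.113.10", ["firewall", "wan_optimizer"]),
    ("10.1.0.6", "198.51.100.7", ["firewall", "ips"]),
]

def lookup(src_ip, dst_ip, acl=ACL):
    """Return the service chain of the first matching entry, or None."""
    for entry_src, entry_dst, chain in acl:
        if src_ip == entry_src and dst_ip == entry_dst:
            return chain
    return None
```

Here `lookup("10.1.0.5", "203.0.113.10")` selects the firewall-then-WAN-optimizer chain, mirroring how a device matches an ACL entry to traffic before redirecting it.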


Unfortunately, however, programming service chains on the network device for each IP allocated to a cloud provider or service can be prohibitive. Cloud providers typically have a very large number of IP addresses allocated for their domains and services. Moreover, the hardware capacity (e.g., TCAM capacity) on a network device is limited and typically insufficient to implement service chains for each cloud provider IP. This problem is compounded when dealing with inter-cloud traffic which involves an even higher number of IP addresses from both the origin and destination clouds, thus increasing the number of service chain entries necessary to program service chains for the inter-cloud traffic. As a result, network devices generally lack the hardware capacity to implement service chains for each origin and destination cloud IP. Consequently, network operators are frequently unable to program service chains on a network device based on the origin and destination clouds of inter-cloud traffic.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 illustrates a block diagram of an example service chain configuration for application traffic;



FIG. 2A illustrates a first example configuration of service chains for traffic between a cloud consumer and cloud providers;



FIG. 2B illustrates a second example configuration of service chains for traffic between a cloud consumer and cloud providers, including different consumer-side service chains configured based on the respective origin and destination of the traffic;



FIG. 3 illustrates a diagram of an example architecture for configuring a network device to perform service chaining for inter-cloud traffic;



FIG. 4 illustrates example IP-to-domain mappings for programming service chains for inter-cloud traffic;



FIG. 5 illustrates example service chain definitions for inter-cloud traffic and hardware ACL entries programmed on a network device to build service chains according to the service chain definitions;



FIG. 6 illustrates an example method for creating service chains for inter-cloud traffic;



FIG. 7 illustrates an example network device for programming and applying service chains for inter-cloud traffic; and



FIG. 8 illustrates an example computing device architecture.





DETAILED DESCRIPTION

Various aspects of the disclosure are discussed in detail below. Features of one aspect may be applied to each aspect alone or in combination with other aspects. Moreover, while specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description.


As used herein, “one embodiment” or “an embodiment” can refer to the same embodiment or any embodiment(s). Moreover, reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Features described herein with reference to one embodiment can be combined with features described with reference to any embodiment.


The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure and the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative and not intended to limit the scope and meaning of the disclosure or any example term. Likewise, the disclosure is not limited to the specific embodiments or examples described in this disclosure.


Without an intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related functionalities are provided below. Titles or subtitles may be used in the examples for convenience of a reader, and in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of a conflict, the present document and included definitions will control.


Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be recognized from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out herein. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.


Overview

Disclosed are systems, methods, and computer-readable media for creating service chains for inter-cloud traffic. In some examples, a system, such as a switch, can receive an indication of respective service chains to be configured for traffic between a private network site (e.g., private cloud or data center) and respective cloud domains (e.g., public clouds). The indication can be specified by a cloud consumer/customer, and can define the respective service chains, including the services in the respective service chains, and the destination cloud domains associated with the respective service chains.


The system can receive, from one or more endpoints (e.g., servers, applications, devices, etc.) on the private network site, domain name system (DNS) queries associated with respective cloud domains. The system can forward the DNS queries associated with the respective cloud domains to one or more DNS servers and receive one or more DNS resolution results from the one or more DNS servers. The system can send, to the one or more endpoints on the private network site, one or more DNS responses to the DNS queries, which can identify one or more IP addresses associated with the respective cloud domains.


Based on the DNS queries, the system can collect DNS information associated with the respective cloud domains. In some examples, the system can snoop the DNS queries and/or associated DNS resolution results to identify IP information corresponding to the respective cloud domains. Moreover, the system can spoof DNS entries associated with the respective cloud domains. The spoofed DNS entries can define a reduced number of IP addresses for each respective cloud domain. The reduced number of IP addresses is smaller than a total number of IP addresses allocated/registered to the respective cloud domain. In some examples, the reduced number of IP addresses associated with the respective cloud domain can be a subset of the total number of IP addresses allocated to the respective cloud domain. The subset of the total number of IP addresses allocated to the respective cloud domain can be identified or selected from the one or more DNS resolution results.
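The reduction step can be sketched as follows; `spoof_entries`, the domain names, addresses, and subset size are all hypothetical illustrations of pinning each cloud domain to a small subset of its resolved IPs:

```python
# Hypothetical sketch: pin each cloud domain to a small subset of the IPs
# seen in DNS resolution results, so spoofed DNS responses only return
# addresses the network device has programmed.
def spoof_entries(resolved, subset_size=1):
    """Map each domain to at most subset_size of its resolved IPs."""
    return {domain: sorted(ips)[:subset_size]
            for domain, ips in resolved.items()}

# IPs collected from snooped DNS results (hypothetical domains/addresses).
resolved = {
    "storage.cloud-a.example": {"203.0.113.10", "203.0.113.11", "203.0.113.12"},
    "compute.cloud-b.example": {"198.51.100.7", "198.51.100.8"},
}
spoofed = spoof_entries(resolved)
```

Each domain now resolves to a single pinned address, however many IPs are actually allocated to it.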


Based on the spoofed DNS entries, the system can create respective IP-to-domain mappings for the respective cloud domains. Each respective IP-to-domain mapping can associate the respective cloud domain with an IP address from the reduced number of IP addresses associated with the respective cloud domain. The IP address can be, for example, a virtual or private IP address allocated by the system for the respective cloud domain or a public IP registered to the respective cloud domain and identified by snooping the DNS resolution results associated with the DNS queries.
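Inverting the pinned entries yields the IP-to-domain mappings; the helper below (`ip_to_domain`, with hypothetical domains and addresses) is one minimal way to sketch that inversion:

```python
# Hypothetical sketch: invert spoofed DNS entries into IP-to-domain
# mappings, one mapping per (pinned IP, domain) pair.
def ip_to_domain(spoofed_entries):
    mappings = {}
    for domain, ips in spoofed_entries.items():
        for ip in ips:
            mappings[ip] = domain
    return mappings

# Spoofed entries with a reduced IP set per domain (hypothetical values).
spoofed = {
    "storage.cloud-a.example": ["203.0.113.10"],
    "compute.cloud-b.example": ["198.51.100.7"],
}
```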


Based on the respective IP-to-domain mappings, the system can program the respective service chains for traffic between the private network site and the respective cloud domains. Each respective service chain can be programmed for traffic from the private network site, or from a segment of the private network site (e.g., one or more endpoints in the private network site), to a respective cloud domain or cloud domain service.


Moreover, each respective service chain can be programmed on hardware (e.g., TCAM) via one or more policies (e.g., Access Control List entries) configured to route, through the respective service chain, traffic having source information associated with the private network site (e.g., an IP or subnet associated with the private network site and/or one or more endpoints in the private network site) and destination information matching the IP address in the respective IP-to-domain mapping associated with the respective cloud domain. In some cases, programming the respective service chains can include programming respective cloud service names for the respective cloud services and associating at least one of the respective service chains or the IP-to-domain mappings with the respective cloud service names.
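Generating one match-and-redirect policy per mapping can be sketched as below; `program_acls` and its field names are hypothetical stand-ins for the hardware (e.g., TCAM/ACL) programming step:

```python
# Hypothetical sketch: derive one match-and-redirect policy per cloud
# domain from the IP-to-domain mappings and the consumer-defined chains.
def program_acls(site_subnet, mappings, chains):
    """mappings: IP -> domain; chains: domain -> ordered service list."""
    return [
        {"src": site_subnet, "dst": ip, "domain": domain,
         "redirect": chains[domain]}
        for ip, domain in mappings.items()
        if domain in chains
    ]
```

One policy per pinned IP, rather than per allocated cloud IP, is what keeps the entry count within hardware limits.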


When the system receives traffic, it can perform a lookup to determine if the traffic matches any of the programmed service chains. For example, the system can compare header information in the traffic (e.g., 5-tuple including source and destination information) with ACL entries programmed on the system for the respective service chains. Each ACL entry can specify a source (e.g., source IP or subnet), a destination (e.g., destination IP), a protocol, an application or service name, an action for redirecting the traffic to an associated service, etc. The system can thus use the header information in the traffic and the traffic information in the ACL entries to determine which, if any, ACL entries match the traffic and determine what action should be taken for the traffic.
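The lookup step can be sketched with a subnet match on the source (the private network site) and an exact match on the destination IP; `match_entry` and the entry layout are hypothetical:

```python
# Hypothetical sketch of the lookup: subnet match on the source side,
# exact match on the pinned destination IP.
import ipaddress

def match_entry(packet, entries):
    """Return the first entry matching the packet's source/destination."""
    src = ipaddress.ip_address(packet["src"])
    for entry in entries:
        if (src in ipaddress.ip_network(entry["src"])
                and packet["dst"] == entry["dst"]):
            return entry
    return None
```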


When the traffic received has source information associated with the private network site (e.g., an IP or subnet associated with the private network site) and destination information matching the IP address in the respective IP-to-domain mapping associated with the respective cloud domain, the system can route the traffic through the respective service chain based on the one or more policies (e.g., ACL entries) associated with the respective service chain. The system can redirect the traffic to each service in the respective service chain based on the programmed entries or policies for that service chain. Once the traffic has been processed through every service in the service chain, the system can send the traffic to the destination cloud domain.
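The redirection itself can be sketched as processing the packet through each service in order; `route_through_chain` and its packet/service representation are hypothetical:

```python
# Hypothetical sketch: process a matched packet through each service in
# the chain in order, then forward it to the destination cloud.
def route_through_chain(packet, chain, services):
    """services maps a service name to a function processing the packet."""
    for name in chain:
        packet = services[name](packet)
    packet["forwarded_to"] = packet["dst"]  # send on to the destination
    return packet
```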


Description of Example Embodiments

Disclosed herein are techniques for creating service chains for inter-cloud traffic. These techniques allow service chains to be configured based on both the origin cloud or network and the destination cloud or cloud service. The service chains can be configured on network devices for specific inter-cloud traffic using a reduced number of addresses for each cloud domain. As previously mentioned, cloud providers and services typically have a very large number of IP addresses allocated to them. Moreover, network devices have limited storage and memory resources, such as TCAM, which are insufficient to implement service chains for each IP allocated to a cloud provider or service. This problem is compounded when dealing with inter-cloud traffic, which typically involves an even higher number of IP addresses associated with the service chain. As a result, network devices generally do not have sufficient capacity to implement service chains for traffic between each origin and destination cloud IP. Consequently, network operators cannot program service chains on the network device based on the origin and destination clouds of inter-cloud traffic.


To overcome these limitations, the techniques herein can reduce the number of inter-cloud addresses used to program service chains for inter-cloud traffic on the network device. Each cloud domain can be mapped to a reduced number of addresses which can be used to program service chains on the network device for specific inter-cloud traffic without exceeding the hardware capabilities of the network device. The reduced number of addresses thus allows service chains to be programmed on the network device's hardware based on the origin and/or destination clouds or services of the inter-cloud traffic.


The service chains may be programmed on hardware access control lists (ACLs) on the network device. For example, the service chains can be programmed on ACLs in the network device's TCAM. The ACLs can include deterministic entries for each cloud domain and/or service, which define actions to be selectively applied to matching inter-cloud traffic. If the network device receives traffic matching an ACL entry, the network device can route the traffic to a particular service application in a service chain based on the action defined in the ACL entry. The ACL entries and reduced number of inter-cloud addresses allow service chains for inter-cloud traffic to be programmed directly on the network device, despite the limited hardware capabilities of the network device. The technologies herein also provide a paradigm for programming cloud service names used for the service chains natively on the network device.


The disclosure now turns to FIG. 1, which illustrates an example service chain configuration 100 for application traffic. In this example, a service chain 102 is configured to process traffic between endpoint 104 and endpoint 106. The endpoint 104 can include any device or server (physical and/or virtual) on a network, such as a cloud consumer network (e.g., a private cloud or on-premises site), and endpoint 106 can include any device or server (physical and/or virtual) on a different network, such as a public cloud. For example, endpoint 104 can be an application or server on a private cloud and endpoint 106 can be an application or server on a public cloud.


The service chain 102 includes service applications 112, 114, 116, which may be configured to apply specific L4 (Layer 4) through L7 (Layer 7) policies to traffic between endpoint 104 and endpoint 106. The service applications 112, 114, 116 can be implemented via respective virtual machines (VMs), software containers, servers, nodes, clusters of nodes, data centers, etc. Example service applications (112, 114, 116) include, without limitation, firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), WAN Optimizers, Network Address Translation (NAT) systems, virtual routers/switches, load balancers, Virtual Private Network (VPN) gateways, data loss prevention (DLP) systems, web application firewalls (WAFs), application delivery controllers (ADCs), packet capture appliances, secure sockets layer (SSL) appliances, adaptive security appliances (ASAs), etc.


The service applications 112, 114, 116 in the service chain 102 are interconnected via a logical link 108A, which is supported by a physical link 108B through physical infrastructure 110. The physical infrastructure 110 can include one or more networks, nodes, data centers, clouds, hardware resources, physical locations, etc. Traffic from endpoint 104 can be routed to the physical infrastructure 110 through the physical link 108B, and redirected by the physical infrastructure 110 along the logical link 108A and through the service chain 102.



FIG. 2A illustrates a first example configuration 200 of service chains 208A-N, 236A-N for traffic between a cloud consumer 202 and cloud providers 232. The consumer 202 represents a cloud customer or consumer network, such as a private cloud, network, data center, etc. The cloud providers 232 represent public clouds hosting applications, services, and/or resources consumed by the cloud consumer 202. The cloud consumer 202 and cloud providers 232 can communicate via routed core 230. Routed core 230 can represent one or more networks, clouds, data centers, routers, etc. For example, routed core 230 can represent an inter-cloud fabric capable of routing traffic between the cloud consumer 202 and the cloud providers 232.


The consumer 202 includes endpoints 204 which represent applications and/or servers hosted by the consumer 202 (e.g., on the consumer's network(s)). In this example, the endpoints 204 include sales applications 204A, finance applications 204B, and human resources (HR) applications 204N. The applications 204A-N can be hosted on specific servers and/or network segments of the consumer 202.


The configuration 200 includes consumer-side service chains 206 including service chains 208A-N (collectively “208”) configured for traffic from the endpoints 204 to the cloud providers 232. The service chains 208 process traffic between the endpoints 204 and the routed core 230 before the routed core 230 routes that traffic to the cloud providers 232.


The service chains 208 include service applications configured to apply respective L4-L7 policies to traffic from the endpoints 204. For example, service chain 208A includes service applications 210, 212, 214 for traffic associated with the sales applications 204A. In this example, traffic from the sales applications 204A is first processed by service application 210 in the service chain 208A, which can be, for example, a perimeter firewall. The traffic is then processed by service application 212 in the service chain 208A, which can be, for example, a VPN gateway. The traffic is finally processed by service application 214 in the service chain 208A, which can be, for example, an application firewall (e.g., database firewall). Once the traffic is processed by service application 214, it is sent to the routed core 230, which subsequently routes the traffic to a particular cloud from the cloud providers 232.


Similarly, service chain 208B includes service applications 216, 218, 220 for traffic associated with the finance applications 204B. In this example, traffic from the finance applications 204B is first processed by service application 216 in the service chain 208B, which can be, for example, a perimeter firewall. The traffic is then processed by service application 218 in the service chain 208B, which can be, for example, a VPN gateway. The traffic is finally processed by service application 220 in the service chain 208B, which can be, for example, an application firewall. Once the traffic is processed by service application 220, it is sent to the routed core 230, which subsequently routes the traffic to a particular cloud from the cloud providers 232.


Service chain 208N includes service applications 222, 224, 226, 228 for traffic associated with the HR applications 204N. In this example, traffic from the HR applications 204N is first processed by service application 222 in the service chain 208N, which can be, for example, a perimeter firewall. The traffic is then processed by service application 224 in the service chain 208N, which can be, for example, a load balancer. The traffic is next processed by service application 226 in the service chain 208N, which can be, for example, a Web appliance. The traffic is finally processed by service application 228 in the service chain 208N, which can be, for example, an application firewall. Once the traffic is processed by service application 228, it is sent to the routed core 230, which subsequently routes the traffic to a particular cloud from the cloud providers 232.


As illustrated in FIG. 2A, the number, type and sequence of appliances in the service chains 208A-N can vary. Each service chain (208A-N) can be customized for the specific traffic associated with the applications 204A-N. Moreover, the service chains 208A-N can represent a logical path (e.g., 108A) for the traffic from the applications 204A-N, which can be supported by infrastructure (e.g., 110) along a physical path (e.g., 108B).


The configuration 200 also includes provider-side service chains 234 between the routed core 230 and the cloud providers 232. The provider-side service chains 234 can process traffic exchanged between the routed core 230 and the cloud providers 232. The provider-side service chains 234 in this example include service chains 236A-N (collectively “236”). Each of the service chains 236 corresponds to a particular cloud 232A-N.


For example, service chain 236A corresponds to cloud 232A, and includes service applications 238, 240, 242. Service applications 238, 240, 242 process traffic between the routed core 230 and cloud 232A. In this example, service applications 238, 240, 242 represent a perimeter firewall, a load balancer, and an application firewall (e.g., database firewall, Web firewall, etc.).


Service chain 236B corresponds to cloud 232B, and includes service applications 244 and 246. Service applications 244 and 246 process traffic between the routed core 230 and cloud 232B. In this example, service applications 244 and 246 represent a firewall and a load balancer.


Service chain 236C corresponds to cloud 232C, and includes service applications 248 and 250. Service applications 248 and 250 process traffic between the routed core 230 and cloud 232C. In this example, service applications 248 and 250 represent an IPS and a firewall.


Service chain 236N corresponds to cloud 232N, and includes service applications 252, 254, 256. Service applications 252, 254, 256 process traffic between the routed core 230 and cloud 232N. In this example, service applications 252, 254, 256 represent a perimeter firewall, an SSL appliance, and a load balancer.



FIG. 2B illustrates another example configuration 260 of service chains for traffic between cloud consumer 202 and cloud providers 232. In this example, configuration 260 includes different consumer-side service chains 270 configured based on the respective origin (e.g., applications 204A-N on the consumer 202) and destination cloud (e.g., 232A-N) of the traffic. Unlike configuration 200 shown in FIG. 2A, which applies the same service chains to all traffic of a particular consumer endpoint (e.g., applications 204A-N) irrespective of the cloud destination (e.g., 232A-N), configuration 260 can apply different service chains to traffic from the same consumer endpoint depending on the cloud destination associated with the traffic.


The consumer-side service chains 270 in configuration 260 are deterministically applied to traffic based on a match of the traffic origin (e.g., application 204A, 204B, or 204N) and the traffic destination (e.g., cloud 232A, 232B, 232C, or 232N). The different consumer-side service chains 270 are thus configured specifically based on the respective traffic origin at the consumer 202 and destination clouds. The different consumer-side service chains 270 can be programmed on hardware (e.g., TCAM) as described herein, despite the large number of addresses allocated to the consumer 202 and each of the cloud providers 232.


To illustrate, service chain 270A is configured specifically for traffic 262A between sales applications 204A and cloud 232N. In this example, service chain 270A includes service application 210 (e.g., perimeter firewall) and service application 272 (e.g., web application firewall (WAF)). Service chain 270B is configured specifically for traffic 262B between the sales applications 204A and cloud 232C. In this example, service chain 270B includes service application 210 (e.g., perimeter firewall), service application 274 (e.g., VPN gateway), and service application 276 (e.g., application firewall). As illustrated by service chains 270A and 270B, traffic associated with the sales applications 204A can be routed through different service chains depending on the destination cloud associated with the traffic.


Service chain 270C is configured specifically for traffic 264A between finance applications 204B and cloud 232B. In this example, service chain 270C includes service application 216 (e.g., perimeter firewall), service application 278 (e.g., VPN gateway), and service application 280 (e.g., application firewall). Service chain 270D is configured specifically for traffic 264B between the finance applications 204B and cloud 232C. In this example, service chain 270D includes service application 216 (e.g., perimeter firewall) and service application 282 (e.g., IPS).


Service chain 270E is configured specifically for traffic 266A between HR applications 204N and cloud 232B. In this example, service chain 270E includes service application 222 (e.g., perimeter firewall), service application 284 (e.g., WAF), service application 286 (e.g., load balancer), and service application 288 (e.g., application firewall). Service chain 270F is configured specifically for traffic 266B between the HR applications 204N and cloud 232A. In this example, service chain 270F includes service application 222 (e.g., perimeter firewall), service application 290 (e.g., load balancer), and service application 292 (e.g., Web appliance).


As illustrated in FIG. 2B, the number, type and sequence of appliances in the consumer-side service chains 270 can vary. Each service chain (270A-F) can be customized based on the traffic origin (e.g., applications 204A-N at the consumer 202) and the traffic destination (e.g., clouds 232A-N). Moreover, the consumer-side service chains 270 can represent a logical path (e.g., 108A) for the traffic from the applications 204A-N, which can be supported by infrastructure (e.g., 110) along a physical path (e.g., 108B).



FIG. 3 illustrates a diagram of an example architecture 300 for configuring a network device to perform service chaining for inter-cloud traffic. The architecture 300 includes a network device 302, such as a switch, for routing inter-cloud traffic through specific service chains configured and applied based on the traffic origin (e.g., consumer 202) and the traffic destination cloud (e.g., cloud providers 232). In this example, network device 302 is programmed to route traffic between finance applications 204B and cloud 232B through service chain 270D, which includes service applications 216, 282. The service chain 270D can be programmed on hardware of the network device 302. For example, the service chain 270D can be programmed on an ACL in TCAM on the network device 302.


To program the service chain 270D, a management configuration service 304 can communicate with the network device 302 to specify the service chain(s) (e.g., 270D) and endpoints (e.g., 204B and 232B) for the service chain(s). The service chain(s) and endpoints can be defined by the consumer 202. Moreover, the endpoints can reside in different clouds. The network device 302 can then build the service chain(s) to ensure that traffic between specific consumer segments (e.g., endpoints 204) and cloud services (e.g., clouds 232A-N) is redirected to respective L4-L7 service chains.


In some cases, the consumer 202 can access an interface via the management configuration service 304 where the consumer 202 can specify the destination domains (e.g., clouds 232A-N) corresponding to the service chains they want to create for specific application traffic. The consumer 202 can also specify the consumer applications or endpoints associated with the service chains. The consumer applications or endpoints (e.g., applications 204A-N) can be identified based on respective network addressing information. For example, the consumer applications or endpoints can be identified by their corresponding IP subnets.


In this example, the consumer 202 specifies service chain 270D, which includes service applications 216 and 282, and identifies finance applications 204B and cloud 232B for the service chain 270D. The network device 302 will then create, as described below, the service chain 270D and deterministically apply it to traffic between the finance applications 204B and cloud 232B.


The network device 302 can be configured to communicate with DNS server 306 to forward DNS queries from the consumer 202. In some examples, the DNS server 306 can be an OPEN DNS server. When the network device 302 receives a DNS request from the finance applications 204B, it can forward the DNS request to the DNS server 306. The DNS request can identify the domain name of the cloud 232B (and/or a cloud service associated with the cloud 232B), and request an IP address to communicate with the cloud 232B. The DNS server 306 identifies an IP address allocated to the cloud 232B and returns a DNS resolution response identifying the IP address associated with the domain name.


The network device 302 then receives the DNS resolution response from the DNS server 306. The network device 302 can snoop the DNS request and the DNS resolution response to build a cache of domain-to-IP mappings for the cloud 232B. To reduce the number of hardware entries or policies (e.g., TCAM entries) needed to program the service chain 270D on the network device 302, the network device 302 can use a subset of IP addresses for the cloud 232B, rather than creating an entry for each IP of the cloud 232B. As previously explained, the number of IP addresses allocated to the cloud 232B can be very large. Therefore, programming an entry on the network device 302 for each IP of the cloud 232B can be expensive and even prohibitive. Accordingly, the network device 302 can scale the service chaining to a smaller subset of IP addresses.
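The snooping behavior described above can be sketched in pseudocode form. The following is a minimal illustration, not the patented implementation; the function name, the cache shape, and the four-entry cap are assumptions chosen for clarity:

```python
# Hypothetical sketch of DNS snooping to build a domain-to-IP cache, as
# described for network device 302. Only a bounded subset of each cloud's
# IPs is retained so hardware (e.g., TCAM) entries stay manageable.

def snoop_dns(cache, query_domain, resolved_ips, max_ips_per_domain=4):
    """Record a snooped DNS resolution for query_domain, keeping at most
    max_ips_per_domain addresses per domain."""
    known = cache.setdefault(query_domain, [])
    for ip in resolved_ips:
        if ip not in known and len(known) < max_ips_per_domain:
            known.append(ip)
    return cache

cache = {}
snoop_dns(cache, "cloud2.com", ["203.0.113.10", "203.0.113.11"])
snoop_dns(cache, "cloud2.com", ["203.0.113.11", "203.0.113.12"])
# cache["cloud2.com"] now holds a small, stable subset of Cloud 2's addresses
```

Because the cache is bounded per domain, repeated resolutions of a large cloud's rotating address pool do not grow the number of entries the device must program.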


The subset of IP addresses can include one or more IP addresses allocated to the cloud 232B and identified by snooping the DNS requests from the consumer 202, or a single virtual IP (VIP). For example, in some implementations, the network device 302 can allocate a VIP for each cloud and use the VIP to program hardware entries for the respective service chains. The network device 302 can then match traffic from the consumer 202 to a specific service chain based on the traffic source (e.g., IP or subnet associated with the consumer 202 or consumer endpoint 204) and the VIP allocated to the destination cloud. The network device 302 can redirect the traffic to the service applications associated with the specific service chain and perform a destination network address translation (NAT) to then route the traffic to the destination cloud.
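The VIP-plus-NAT approach can be illustrated with a short sketch. This is an assumption-laden illustration, not the device's actual code; the VIP pool range, the class and function names, and the dictionary packet model are all hypothetical:

```python
# Illustrative sketch: allocate one virtual IP (VIP) per destination cloud,
# match hardware entries on that single VIP, and translate it back to a
# real cloud address (destination NAT) after the service chain.
import ipaddress

class VipAllocator:
    def __init__(self, pool_cidr="172.16.1.0/24"):
        self._pool = ipaddress.ip_network(pool_cidr).hosts()
        self.vip_by_cloud = {}

    def vip_for(self, cloud_domain):
        # One stable VIP per cloud; a single TCAM entry can match it.
        if cloud_domain not in self.vip_by_cloud:
            self.vip_by_cloud[cloud_domain] = str(next(self._pool))
        return self.vip_by_cloud[cloud_domain]

def dnat(packet, vip_to_real):
    """Rewrite the VIP destination to a real cloud IP before egress."""
    packet = dict(packet)
    packet["dst"] = vip_to_real[packet["dst"]]
    return packet

alloc = VipAllocator()
vip = alloc.vip_for("cloud2.com")
out = dnat({"src": "10.0.2.5", "dst": vip}, {vip: "203.0.113.10"})
```

The design trade-off here is that a single VIP per cloud minimizes hardware entries, at the cost of requiring the NAT step on egress.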


To illustrate, the network device 302 can allocate a VIP to cloud 232B and use the VIP and an address associated with the finance applications 204B, such as a subnet IP, to program TCAM entries for the service chain 270D. The network device can then use the addresses used to program the service chain 270D, namely the VIP associated with the cloud 232B and the address associated with the finance applications 204B, to match traffic between the finance applications 204B and the cloud 232B with the TCAM entries associated with the service chain 270D and redirect the traffic accordingly.


In other implementations, the network device 302 can spoof the DNS entries associated with the destination cloud (e.g., 232B) and use a small subset of IP addresses allocated to the destination cloud (e.g., 232B) to program the hardware entries (e.g., TCAM entries) for the respective service chain (e.g., 270D). The subset of IP addresses can be determined by snooping the DNS requests as previously mentioned. The network device 302 can then use the subset of IP addresses to match traffic from the consumer 202 (e.g., finance applications 204B) to the destination cloud (e.g., 232B) with the hardware entries for the respective service chain (e.g., 270D) and redirect the traffic accordingly. Once the traffic is processed through the service chain, the network device 302 can route the traffic to the destination cloud (232B). In this example, NAT is not required to route the traffic to the destination cloud. Instead, the network device 302 can route the traffic to the destination cloud using the destination IP address associated with the traffic and the respective hardware entries.
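One way to read the spoofing step is that the device answers DNS queries with only the IPs it has programmed, so consumer traffic naturally targets matchable addresses. The sketch below captures that interpretation; the function and the fallback behavior are assumptions, not the patented logic:

```python
# Hedged sketch: answer a consumer's DNS query with only the subset of the
# cloud's real IPs for which hardware entries have been programmed, so the
# consumer sends traffic to addresses the device can match without NAT.

def spoof_dns_answer(real_ips, programmed_subset):
    """Filter a DNS answer down to programmed IPs; fall back to the real
    answer if none of the resolved IPs have been programmed yet."""
    answer = [ip for ip in real_ips if ip in programmed_subset]
    return answer or list(real_ips)

spoofed = spoof_dns_answer(
    ["198.51.100.1", "198.51.100.2", "198.51.100.3"], {"198.51.100.2"})
```

Since the consumer only ever learns the programmed addresses, its packets carry destinations that match the hardware entries directly, which is why no NAT is required in this variant.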


Having programmed the service chain 270D on hardware based on the subset of IP addresses selected for the cloud 232B (e.g., the VIP assigned by the network device 302 to the cloud 232B or the subset of IP addresses allocated to the cloud 232B and identified based on DNS resolution results), the network device 302 can deterministically redirect traffic between the finance applications 204B and destination cloud 232B to the service chain 270D.


For example, when the network device 302 receives traffic from the finance applications 204B, it can perform a TCAM or hardware lookup using the source and destination address information in the packets. Based on the TCAM or hardware lookup, the network device 302 can find entries matching the source and destination address information in the packets, and redirect the packets through the service chain 270D as specified by the matching entries associated with the service chain 270D. After the packets are processed through the service chain 270D, the network device 302 can send the packets to the destination cloud 232B.



FIG. 4 illustrates example IP-to-domain mappings 400 for programming service chains for inter-cloud traffic. The IP-to-domain mappings 400 can map cloud application names 402 to internal IP mappings 404 and domain names 406. The cloud application names 402 can correspond to respective cloud services associated with the cloud providers 232 and the domain names 406 can correspond to the respective cloud services and cloud providers 232. The internal IP mappings 404 can include the subset of IP addresses allocated by the network device 302 to the domain names 406. For example, the internal IP mappings 404 can include respective VIPs or spoofed DNS entries for the domain names 406 (e.g., the subset of IP addresses associated with the clouds 232A-N).


To illustrate, in FIG. 4, the IP-to-domain mappings 400 include entries 408A-D for clouds 232A, 232B, and 232C. Entry 408A maps Cloud 1 (232A) Email Service to private IP 172.16.1.1 and domain name http://mail.cloud1.com. Entry 408B maps Cloud 1 (232A) Productivity Service to private IP 172.16.1.2 and domain name http://productivity.cloud1.com. Entry 408C maps Cloud 2 (232B) Service to private IP 172.16.1.3 and domain name http://cloud2.com. Entry 408D maps Cloud 3 (232C) Service to private IP 172.16.1.4 and domain name cloud3.com. Entries 408A-D in the IP-to-domain mappings 400 can then be used to program service chains for inter-cloud traffic, as illustrated in FIG. 5.
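The entries 408A-D above can be expressed as a simple lookup table. The following sketch mirrors the figure's contents; the data layout and helper function are illustrative assumptions:

```python
# The IP-to-domain mappings 400 of FIG. 4 as a lookup table.
IP_TO_DOMAIN = [
    {"service": "Cloud 1 Email Service",        "internal_ip": "172.16.1.1", "domain": "http://mail.cloud1.com"},          # 408A
    {"service": "Cloud 1 Productivity Service", "internal_ip": "172.16.1.2", "domain": "http://productivity.cloud1.com"},  # 408B
    {"service": "Cloud 2 Service",              "internal_ip": "172.16.1.3", "domain": "http://cloud2.com"},               # 408C
    {"service": "Cloud 3 Service",              "internal_ip": "172.16.1.4", "domain": "cloud3.com"},                      # 408D
]

def domain_for_ip(ip):
    """Resolve an internal (allocated) IP back to its cloud domain."""
    for row in IP_TO_DOMAIN:
        if row["internal_ip"] == ip:
            return row["domain"]
    return None
```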



FIG. 5 illustrates example service chain definitions 500 for inter-cloud traffic and hardware ACL 510 programmed on network device 302 to build service chains according to the service chain definitions 500. The service chain definitions 500 identify service chains 502, 504 defined for specific inter-cloud traffic.


Service chain 502 includes an indication 502A of the traffic associated with the service chain 502. The indication 502A specifies that the service chain 502 corresponds to traffic between finance applications 204B associated with consumer 202 and cloud 232B (e.g., Cloud 2). The service chain 502 also includes rules 502B for building the service chain 502.


The rules 502B identify actions to be performed for traffic matching 5-tuple A associated with the finance applications 204B and cloud 232B. The 5-tuple A can include the origin IP/subnet of the traffic, the origin port number of the traffic, the destination IP of the traffic, the destination port of the traffic, and the application service or protocol associated with the traffic. In this example, the 5-tuple A includes the IP or IP subnet associated with the finance applications 204B as the source and the IP allocated by the network device 302 to the cloud 232B (e.g., IP 172.16.1.3 from entry 408C in the internal IP mapping 404 shown in FIG. 4) as the destination.


In this example, the rules 502B indicate that traffic matching 5-tuple A associated with the finance applications 204B and cloud 2 (232B) should first be sent to Service 1 (e.g., 216), which in this example is a perimeter firewall. The rules 502B further indicate that traffic matching 5-tuple A should then be sent to Service 2 (e.g., 278), which in this example is a VPN gateway. The rules 502B indicate that traffic matching 5-tuple A should next be sent to Service 3 (e.g., 280), which in this example is an application firewall. The rules 502B finally indicate that after being processed by Service 3 (e.g., 280), the traffic matching 5-tuple A should be sent to the destination (e.g., cloud 232B).


Service chain 504 includes an indication 504A of the traffic associated with the service chain 504. The indication 504A specifies that the service chain 504 corresponds to traffic between finance applications 204B associated with consumer 202 and cloud 232C (e.g., Cloud 3). The service chain 504 also includes rules 504B for building the service chain 504.


The rules 504B identify actions to be performed for traffic matching 5-tuple B associated with the finance applications 204B and cloud 232C (e.g., Cloud 3). The 5-tuple B can include the origin IP/subnet of the traffic, the origin port number of the traffic, the destination IP of the traffic, the destination port of the traffic, and the application service or protocol associated with the traffic. In this example, the 5-tuple B includes the IP or IP subnet associated with the finance applications 204B as the source and the IP allocated by the network device 302 to the cloud 232C (e.g., IP 172.16.1.4 from entry 408D in the internal IP mapping 404 shown in FIG. 4) as the destination.


In this example, the rules 504B indicate that traffic matching 5-tuple B associated with the finance applications 204B and cloud 3 (232C) should first be sent to Service 1 (e.g., 216), which in this example is a perimeter firewall. The rules 504B further indicate that traffic matching 5-tuple B should then be sent to Service 2 (e.g., 282), which in this example is an IPS. The rules 504B finally indicate that after being processed by Service 2 (e.g., 282), the traffic matching 5-tuple B should be sent to the destination (e.g., cloud 232C).
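The service chain definitions 500 can be represented as ordered data. In the sketch below, the finance subnet is a hypothetical placeholder (the patent identifies the source only as the finance applications' IP or subnet), and the service names are shorthand for the appliances named above:

```python
# Service chain definitions 500 as data: each chain maps a reduced match
# (source subnet and allocated destination IP) to an ordered service list.
SERVICE_CHAINS = {
    "502": {  # finance applications 204B -> Cloud 2 (232B)
        "match": {"src": "10.1.0.0/16", "dst": "172.16.1.3"},  # src subnet is hypothetical
        "services": ["perimeter_firewall", "vpn_gateway", "application_firewall"],
    },
    "504": {  # finance applications 204B -> Cloud 3 (232C)
        "match": {"src": "10.1.0.0/16", "dst": "172.16.1.4"},
        "services": ["perimeter_firewall", "ips"],
    },
}

def chain_for(dst_ip):
    """Select the ordered service list for a destination's allocated IP."""
    for chain in SERVICE_CHAINS.values():
        if chain["match"]["dst"] == dst_ip:
            return chain["services"]
    return None
```

Note how the same source can map to entirely different service sequences purely by destination, which is the core of the per-cloud customization described above.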


Hardware ACL 510 (e.g., TCAM ACL) can be programmed on network device 302 consistent with the service chain definitions 500 to build the service chains 502, 504 on the network device 302. In this example, the hardware ACL 510 includes an interface field 512 which defines the interface associated with the received packets, a match source field 514 which defines the source of the packet associated with the ACL entries (e.g., 520A-E), a match destination field 516 which defines the destination of the packet associated with the ACL entries (e.g., 520A-E), and an action field 518 which defines the respective action for each ACL entry (e.g., 520A-E).


The hardware ACL 510 includes ACL entries 520A-E programmed on the network device 302 to build the service chains 502, 504. Entries 520A, 520B, and 520C pertain to service chain 502, and entries 520D and 520E pertain to service chain 504.


In this example, entry 520A identifies the finance applications 204B in the interface field 512. Entry 520A identifies the finance applications 204B as the source of the packets in source field 514, and the cloud 2 (e.g., 232B) IP address (e.g., IP 172.16.1.3 from entry 408C in the internal IP mapping 404 shown in FIG. 4) as the destination of the packets in destination field 516. In the action field 518, entry 520A indicates that packets matching the interface field 512 (e.g., finance applications 204B), the source field 514 (e.g., finance applications 204B), and the destination field 516 (e.g., cloud 232B) should be sent to Service 1 (216), which in this example is a perimeter firewall.


Entry 520B defines the next action in the service chain 502 for processing the packets after the packets pass through the Service 1 (216). Entry 520B identifies Service 1 (216) in the interface field 512, the finance applications 204B as the source of the packets in source field 514, and the cloud 2 (e.g., 232B) IP address (e.g., IP 172.16.1.3 from entry 408C in the internal IP mapping 404 shown in FIG. 4) as the destination of the packets in destination field 516. In the action field 518, entry 520B indicates that packets matching the interface field 512 (e.g., Service 216), the source field 514 (e.g., finance applications 204B), and the destination field 516 (e.g., cloud 232B) should be sent to Service 2 (278), which in this example is the VPN gateway.


Finally, entry 520C defines the next action in the service chain 502 for processing the packets after the packets pass through the Service 2 (278). Entry 520C identifies Service 2 (278) in the interface field 512, the finance applications 204B as the source of the packets in source field 514, and the cloud 2 (e.g., 232B) IP address (e.g., IP 172.16.1.3 from entry 408C in the internal IP mapping 404 shown in FIG. 4) as the destination of the packets in destination field 516. In the action field 518, entry 520C indicates that packets matching the interface field 512 (e.g., Service 278), the source field 514 (e.g., finance applications 204B), and the destination field 516 (e.g., cloud 232B) should be sent to Service 3 (280), which in this example is the application firewall.


As illustrated above, entries 520A-C provide the rules for routing traffic from the finance applications 204B to the cloud 2 (232B) through the service chain 502, as reflected in the service chain definitions 500. Packets matching the entries 520A-C will be routed through each service in the service chain 502 based on the respective actions in the actions field 518. Once the packets are processed through the service chain 502, the network device 302 can send the packets to the destination (e.g., cloud 232B).


As previously mentioned, entries 520D and 520E correspond to service chain 504. Entry 520D identifies the finance applications 204B in the interface field 512. Entry 520D identifies the finance applications 204B as the source of the packets in source field 514, and the cloud 3 (e.g., 232C) IP address (e.g., IP 172.16.1.4 from entry 408D in the internal IP mapping 404 shown in FIG. 4) as the destination of the packets in destination field 516. In the action field 518, entry 520D indicates that packets matching the interface field 512 (e.g., finance applications 204B), the source field 514 (e.g., finance applications 204B), and the destination field 516 (e.g., cloud 232C) should be sent to Service 1 (216), which in this example is the perimeter firewall.


Entry 520E defines the next action in the service chain 504 for processing the packets after the packets pass through the Service 1 (216). Entry 520E identifies Service 1 (216) in the interface field 512, the finance applications 204B as the source of the packets in source field 514, and the cloud 3 (e.g., 232C) IP address (e.g., IP 172.16.1.4 from entry 408D in the internal IP mapping 404 shown in FIG. 4) as the destination of the packets in destination field 516. In the action field 518, entry 520E indicates that packets matching the interface field 512 (e.g., Service 216), the source field 514 (e.g., finance applications 204B), and the destination field 516 (e.g., cloud 232C) should be sent to Service 2 (282), which in this example is the IPS.


As illustrated here, entries 520D-E provide the rules for routing traffic from the finance applications 204B to the cloud 3 (232C) through the service chain 504, as reflected in the service chain definitions 500. Packets matching the entries 520D-E will be routed through each service in the service chain 504 based on the respective actions in the actions field 518. Once the packets are processed through the service chain 504, the network device 302 can send the packets to the destination (e.g., cloud 232C).
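The chaining behavior of entries 520A-E can be simulated in a few lines. The sketch below models each ACL entry as an (interface, source, destination, action) tuple and follows matches hop by hop; the interface names and the iteration limit are illustrative assumptions:

```python
# Sketch of hardware ACL 510: each entry matches (in_interface, src, dst)
# and yields the next hop, chaining packets through the services.
ACL = [
    # (in_interface, src,       dst,           next hop)
    ("finance",  "finance", "172.16.1.3", "service1"),  # 520A -> perimeter firewall
    ("service1", "finance", "172.16.1.3", "service2"),  # 520B -> VPN gateway
    ("service2", "finance", "172.16.1.3", "service3"),  # 520C -> application firewall
    ("finance",  "finance", "172.16.1.4", "service1"),  # 520D -> perimeter firewall
    ("service1", "finance", "172.16.1.4", "service2"),  # 520E -> IPS
]

def walk_chain(src, dst, ingress="finance", limit=10):
    """Follow ACL matches until no entry applies, collecting the service
    path; after the last hop the device would forward to the cloud."""
    path, iface = [], ingress
    for _ in range(limit):
        nxt = next((a for (i, s, d, a) in ACL
                    if (i, s, d) == (iface, src, dst)), None)
        if nxt is None:
            break
        path.append(nxt)
        iface = nxt
    return path
```

Matching on the ingress interface is what lets one (src, dst) pair drive multiple hops: the same packet header selects a different entry at each stage of the chain.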


The entries 520A-E in the hardware ACL 510 thus allow traffic from the same source cloud or network segment (e.g., finance applications 204B and/or consumer 202) to be processed through different service chains depending on the destination cloud of the traffic (e.g., cloud 232B or cloud 232C). The destination information in the destination field 516 of the hardware ACL 510 can include a respective subset of IP addresses allocated for each of the different clouds, such as a single VIP or a spoofed IP address associated with each cloud destination, as previously explained. This enables customized service chains to be programmed on hardware of the network device 302 (e.g., TCAM) for inter-cloud traffic based on both the origin cloud or network and the destination cloud, without requiring a prohibitive number of entries to accommodate every IP allocated to the source and/or the destination cloud.


Having disclosed various system components and concepts, the disclosure now turns to the example method for building service chains for inter-cloud traffic, as shown in FIG. 6. For the sake of clarity, the method is described in terms of the network device 302 and architecture 300, as shown in FIG. 3. The steps outlined herein are non-limiting examples provided for illustration purposes, and can be implemented in any combination thereof, including combinations that exclude, add, or modify certain steps.



FIG. 6 illustrates an example method for building service chains for inter-cloud traffic. At step 602, the network device 302 can receive an indication of respective service chains (e.g., 270) to be configured for traffic between a private network site (e.g., consumer 202) and respective cloud domains (e.g., clouds 232A-N). The indication can be configured by a client (e.g., consumer 202) via the network device 302 or a management service (e.g., 304). The indication can specify a respective sequence of services or appliances (e.g., L4-L7 appliances) for the respective service chains as well as specific cloud endpoints, services, or domains associated with the respective service chains. The indication can specify that traffic to the specific cloud endpoints, services or domains should be redirected and routed through the respective sequence of services or appliances in the respective service chains.


At step 604, the network device 302 can receive, from one or more endpoints (e.g., consumer endpoints 204) on the private network site, domain name system (DNS) queries associated with the respective cloud domains (e.g., clouds 232A-N). At step 606, based on the DNS queries, the network device 302 can collect DNS information associated with the respective cloud domains. For example, the network device 302 can forward the DNS queries to a DNS server (e.g., 306) and snoop the DNS queries and/or DNS resolution results received from the DNS server to identify the DNS information associated with the respective cloud domains. The DNS information can include an IP address registered to a respective cloud domain.


At step 608, the network device 302 can spoof DNS entries associated with the respective cloud domains to yield spoofed DNS entries. The spoofed DNS entries can define a reduced number of IP addresses for each respective cloud domain. The reduced number of IP addresses will be less than a total number of IP addresses registered to the respective cloud domain. In some cases, the reduced number of IP addresses can be a virtual or private IP address spoofed or allocated by the network device 302 to the respective cloud domain. In other cases, the reduced number of IP addresses can be a subset of the IP addresses registered to the respective cloud domain. The subset of the IP addresses can be identified based on the DNS queries. For example, the subset of the IP addresses can be identified by snooping the DNS queries and/or DNS resolution results from the DNS server.


The network device 302 can send to the one or more endpoints in the private network site a DNS response to the DNS queries. In the DNS response, the network device 302 can provide DNS information associated with the respective cloud domains. The DNS information in the DNS response can include the reduced number of IP addresses for each respective cloud domain. The one or more endpoints can use the DNS information in the DNS response to send data traffic to the respective cloud domains.


At step 610, based on the spoofed DNS entries, the network device 302 can create respective IP-to-domain mappings for the respective cloud domains. Each respective IP-to-domain mapping can associate the respective cloud domain with an IP address from the reduced number of IP addresses associated with the respective cloud domain. For example, the respective IP-to-domain mapping can associate the respective cloud domain with a virtual or private IP allocated by the network device 302 to the respective cloud domain, or a subset of IP addresses associated with the respective cloud domain and identified by snooping the DNS queries and/or resolution results from the DNS server.


At step 612, based on the respective IP-to-domain mappings, the network device 302 can program the respective service chains for traffic between the private network site and the respective cloud domains. Each respective service chain can be programmed via one or more policies configured to route, through the respective service chain, traffic having source information associated with the private network site (e.g., an IP or subnet associated with the one or more endpoints in the private network site) and destination information matching the IP address in the respective IP-to-domain mapping associated with the respective cloud domain.


In some cases, the one or more policies can be ACL entries programmed on hardware of the network device 302, such as TCAM on the network device 302. The ACL entries can specify a traffic source (e.g., an IP or subnet associated with the one or more endpoints) and destination (e.g., the IP(s) assigned to clouds 232A, 232B, 232C, and/or 232N) used to recognize when specific traffic should be routed through the respective service chain. Each ACL entry can also specify an action which identifies a routing target for traffic matching the traffic source and destination specified in the ACL entry. The routing target can be a specific service in the service chain or, if the ACL entry specifies an action to be performed after the traffic has been routed through the service chain, the routing target can be the respective cloud domain.


At step 614, in response to receiving traffic (e.g., 262A-B, 264A-B, 266A-B) having source information associated with the private network site (e.g., a source IP or subnet associated with the one or more endpoints) and destination information matching the IP address in the respective IP-to-domain mapping associated with the respective cloud domain (e.g., clouds 232A, 232B, 232C, and/or 232N), the network device 302 can route the traffic through the respective service chain based on the one or more policies (e.g., ACL entries) associated with the respective service chain.


For example, the network device 302 can receive data traffic from an endpoint (e.g., 204A) on the private network site (e.g., 202), and perform a lookup on the network device 302 (e.g., a TCAM lookup) to determine if the data traffic matches any entries on the network device 302 (e.g., ACL entries 520A-E). The network device 302 can compare header information (e.g., a 5-tuple including the source and destination addresses) in the packets of the data traffic with the information in the entries (e.g., a respective interface, source, destination, protocol, etc., specified in the entries) and identify any matching entries that may exist. If the network device 302 identifies a matching entry, the network device 302 performs an action (e.g., 518) specified in the matching entry. The action can instruct the network device 302 to redirect the packets to a specific service in the service chain associated with the matching entry. The network device 302 then redirects the packets to the specific service as specified in the action defined on the matching entry for processing by the specific service.


Once the specific service has processed the packets, the network device 302 can route the packets to the next service in the service chain based on a second matching entry associated with the service chain. When the next service has completed processing the packets, the network device 302 can continue routing the packets to each service in the service chain as defined by any remaining entries for that service chain. Once the packets have been processed through all the services in the service chain, the network device 302 can send the packets to the destination cloud domain (e.g., clouds 232A, 232B, 232C, or 232N).


In this way, the network device 302 can program and apply service chains for traffic based on the source cloud or network and the destination cloud, without having to create a prohibitively large number of ACL entries for each IP address registered or used by the source cloud or network and the destination cloud. Thus, while public clouds may have thousands of registered/allocated IPs, the network device 302 can implement service chains customized for different private cloud and public cloud combinations using a single IP for each public cloud or a small subset of IPs for each public cloud. To further limit the number of IP addresses needed to configure the service chains, the network device 302 can also use subnets or other segment identifiers to identify the private cloud/data center for use in programming and applying service chains for traffic from the private cloud/data center.


The disclosure now turns to FIGS. 7 and 8, which illustrate example hardware components and devices suitable for programming and applying service chains, routing traffic, and performing any other computing operations.



FIG. 7 illustrates an example network device 700 suitable for performing routing/switching operations, programming and applying service chains, etc. Network device 700 includes a master central processing unit (CPU) 704, interfaces 702, and a connection 710 (e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU 704 is responsible for executing packet management, error detection, and/or routing functions.


The CPU 704 can accomplish these functions under the control of software including an operating system and any appropriate application software. CPU 704 may include one or more processors 708 such as a processor from the Intel X86 family of microprocessors, the Motorola family of microprocessors, or the MIPS family of microprocessors. In an alternative embodiment, processor 708 is specially designed hardware for controlling the operations of network device 700. In some cases, a memory 706 (such as non-volatile RAM, a TCAM, and/or ROM) can form part of CPU 704. However, there are many different ways in which memory could be coupled to the system.


The interfaces 702 are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device 700. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM.


The independent processors may control communications and intensive tasks such as packet switching, media control, signal processing, crypto processing, function routing, execution endpoint management, network management, and so forth. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor 704 to efficiently perform routing computations, network diagnostics, security functions, etc.


Although the system shown in FIG. 7 is one specific network device of the present embodiments, it is by no means the only network device architecture on which the present embodiments can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc. is often used. Further, other types of interfaces and media could also be used with the router.


Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 706) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory 706 could also hold various containers and virtualized execution environments and data.


The network device 700 can also include an application-specific integrated circuit (ASIC) 712, which can be configured to perform routing and/or switching operations. The ASIC 712 can communicate with other components in the network device 700 via the bus 710, to exchange data and signals and coordinate various types of operations by the network device 700, such as routing, switching, and/or data storage operations, for example.



FIG. 8 illustrates an example architecture of a system 800, including various hardware computing components which are in electrical communication with each other using a connection 806. System 800 includes a processing unit (CPU or processor) 804 and a system connection 806 that couples various system components including the system memory 820, such as read only memory (ROM) 818 and random access memory (RAM) 816, to the processor 804.


The system 800 can include a cache 802 of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 804. The system 800 can copy data from the memory 820 and/or the storage device 808 to the cache 802 for quick access by the processor 804. In this way, the cache 802 can provide a performance boost that avoids processor 804 delays while waiting for data. These and other modules can control or be configured to control the processor 804 to perform various actions. Other system memory 820 may be available for use as well. The memory 820 can include multiple different types of memory with different performance characteristics.


The processor 804 can include any general purpose processor and a service component, such as service 1 810, service 2 812, and service 3 814 stored in storage device 808, configured to control the processor 804 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 804 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.


To enable user interaction with the computing device 800, an input device 822 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, and so forth. An output device 824 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 800. The communications interface 826 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.


Storage device 808 can be a non-volatile memory, a hard disk, or any other type of computer readable media which can store data for access by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 816, read only memory (ROM) 818, and hybrids thereof. In some cases, storage device 808 can store an execution or runtime environment for executing code, one or more functions for execution via the execution or runtime environment, one or more resources (e.g., libraries, data objects, APIs, etc.), and so forth.


The system 800 can include an integrated circuit 828, such as an application-specific integrated circuit (ASIC) configured to perform various operations. The integrated circuit 828 can be coupled with the connection 806 in order to communicate with other components in the system 800.


The storage device 808 can include software services 810, 812, 814 for controlling the processor 804. In some cases, the software services 810, 812, 814 can include, for example, operating system or kernel services, application services, services associated with one or more functions, etc. Other hardware or software modules are contemplated. The storage device 808 can be connected to the system connection 806. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 804, connection 806, output device 824, and so forth, to carry out the function.


For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.


In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.


Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.


Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.


The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.


Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.


Claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim. For example, claim language reciting “at least one of A and B” means A, B, or A and B.

Claims
  • 1. A method comprising: receiving an indication of one or more service chains to be configured for traffic between a private network and respective cloud domains; collecting domain name system (DNS) information from DNS queries associated with the respective cloud domains; spoofing DNS entries associated with the respective cloud domains, wherein the spoofed DNS entries define a reduced number of Internet Protocol (IP) addresses, identified in the collected DNS information, for the respective cloud domain; creating, based on the spoofed DNS entries, IP-to-domain mappings for the respective cloud domains; programming, based on the respective IP-to-domain mappings, the one or more service chains for traffic between the private network and the respective cloud domains; and in response to receiving traffic having source information associated with the private network and destination information matching an IP address in the respective IP-to-domain mapping, routing the traffic through a respective one of the one or more service chains.
  • 2. The method of claim 1, wherein collecting DNS information comprises: forwarding the DNS queries associated with the respective cloud domains to one or more DNS servers; receiving one or more DNS resolution results from the one or more DNS servers; snooping the one or more DNS resolution results; and identifying, based on the snooping, the DNS information associated with the respective cloud domains.
  • 3. The method of claim 1, wherein the IP address associated with the respective cloud domain in the respective IP-to-domain mapping comprises at least one of a private IP address assigned to the respective cloud domain or a virtual IP address assigned to the respective cloud domain, the at least one of the private IP address or the virtual IP address corresponding to the spoofed DNS entries.
  • 4. The method of claim 1, wherein the reduced number of IP addresses associated with the respective cloud domain comprises a subset of a total number of IP addresses allocated to the respective cloud domain, wherein the IP address in the respective IP-to-domain mapping associated with the respective cloud domain is from the subset of the total number of IP addresses allocated to the respective cloud domain.
  • 5. The method of claim 4, wherein the subset of the total number of IP addresses allocated to the respective cloud domain is selected from the DNS information.
  • 6. The method of claim 1, further comprising: receiving one or more service chain configuration requests identifying the respective service chains to be configured for traffic between the private network site and respective cloud domains, wherein each of the respective service chains comprises a respective sequence of appliances for processing the traffic.
  • 7. The method of claim 1, wherein each respective service chain is programmed via one or more policies configured to route, through the respective service chain, traffic having source information associated with the private network site and destination information matching the IP address in the respective IP-to-domain mapping associated with the respective cloud domain.
  • 8. The method of claim 7, wherein the one or more policies comprise access control list (ACL) entries, the ACL entries comprising a respective ACL entry for each service in the respective service chain, and wherein each respective ACL entry specifies a source address associated with the one or more endpoints in the private network site, a destination address comprising the IP address in the respective IP-to-domain mapping associated with the respective cloud domain, and an instruction to route traffic to the service when a source and destination of the traffic match the source address and the destination address in the respective ACL entry.
  • 9. A system comprising: at least one processor; and at least one memory storing instructions which, when executed by the at least one processor, cause the at least one processor to: receive an indication of one or more service chains to be configured for traffic between a private network and respective cloud domains; collect domain name system (DNS) information from DNS queries associated with the respective cloud domains; spoof DNS entries associated with the respective cloud domains, wherein the spoofed DNS entries define a reduced number of Internet Protocol (IP) addresses, identified in the collected DNS information, for the respective cloud domain; create, based on the spoofed DNS entries, IP-to-domain mappings for the respective cloud domains; program, based on the respective IP-to-domain mappings, the one or more service chains for traffic between the private network and the respective cloud domains; and in response to receiving traffic having source information associated with the private network and destination information matching an IP address in the respective IP-to-domain mapping, route the traffic through a respective one of the one or more service chains.
  • 10. The system of claim 9, further comprising instructions which, when executed by the at least one processor, cause the at least one processor to: forward the DNS queries associated with the respective cloud domains to one or more DNS servers; receive one or more DNS resolution results from the one or more DNS servers; snoop the one or more DNS resolution results; and identify, based on the snooping, the DNS information associated with the respective cloud domains.
  • 11. The system of claim 9, wherein the IP address associated with the respective cloud domain in the respective IP-to-domain mapping comprises at least one of a private IP address assigned to the respective cloud domain or a virtual IP address assigned to the respective cloud domain, the at least one of the private IP address or the virtual IP address corresponding to the spoofed DNS entries.
  • 12. The system of claim 9, wherein the reduced number of IP addresses associated with the respective cloud domain comprises a subset of a total number of IP addresses allocated to the respective cloud domain, wherein the IP address in the respective IP-to-domain mapping associated with the respective cloud domain is from the subset of the total number of IP addresses allocated to the respective cloud domain.
  • 13. The system of claim 12, wherein the subset of the total number of IP addresses allocated to the respective cloud domain is selected from the DNS information.
  • 14. The system of claim 9, further comprising instructions which, when executed by the at least one processor, cause the at least one processor to: receive one or more service chain configuration requests identifying the respective service chains to be configured for traffic between the private network site and respective cloud domains, wherein each of the respective service chains comprises a respective sequence of appliances for processing the traffic.
  • 15. The system of claim 9, wherein each respective service chain is programmed via one or more policies configured to route, through the respective service chain, traffic having source information associated with the private network site and destination information matching the IP address in the respective IP-to-domain mapping associated with the respective cloud domain.
  • 16. The system of claim 15, wherein the one or more policies comprise access control list (ACL) entries, the ACL entries comprising a respective ACL entry for each service in the respective service chain, and wherein each respective ACL entry specifies a source address associated with the one or more endpoints in the private network site, a destination address comprising the IP address in the respective IP-to-domain mapping associated with the respective cloud domain, and an instruction to route traffic to the service when a source and destination of the traffic match the source address and the destination address in the respective ACL entry.
  • 17. At least one non-transitory computer readable medium storing instructions which, when executed by at least one processor, cause the at least one processor to: receive an indication of one or more service chains to be configured for traffic between a private network and respective cloud domains; collect domain name system (DNS) information from DNS queries associated with the respective cloud domains; spoof DNS entries associated with the respective cloud domains, wherein the spoofed DNS entries define a reduced number of Internet Protocol (IP) addresses, identified in the collected DNS information, for the respective cloud domain; create, based on the spoofed DNS entries, IP-to-domain mappings for the respective cloud domains; program, based on the respective IP-to-domain mappings, the one or more service chains for traffic between the private network and the respective cloud domains; and in response to receiving traffic having source information associated with the private network and destination information matching an IP address in the respective IP-to-domain mapping, route the traffic through a respective one of the one or more service chains.
  • 18. The at least one non-transitory computer readable medium of claim 17, further comprising instructions which, when executed by the at least one processor, cause the at least one processor to: forward the DNS queries associated with the respective cloud domains to one or more DNS servers; receive one or more DNS resolution results from the one or more DNS servers; snoop the one or more DNS resolution results; and identify, based on the snooping, the DNS information associated with the respective cloud domains.
  • 19. The at least one non-transitory computer readable medium of claim 17, wherein the IP address associated with the respective cloud domain in the respective IP-to-domain mapping comprises at least one of a private IP address assigned to the respective cloud domain or a virtual IP address assigned to the respective cloud domain, the at least one of the private IP address or the virtual IP address corresponding to the spoofed DNS entries.
  • 20. The at least one non-transitory computer readable medium of claim 17, wherein the reduced number of IP addresses associated with the respective cloud domain comprises a subset of a total number of IP addresses allocated to the respective cloud domain, wherein the IP address in the respective IP-to-domain mapping associated with the respective cloud domain is from the subset of the total number of IP addresses allocated to the respective cloud domain.
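Taken together, the method claims describe a control-plane flow: snoop DNS resolution results for a cloud domain, select a reduced subset of the learned IP addresses as the spoofed DNS answer, record an IP-to-domain mapping, and program one routing policy (e.g., an ACL entry) per service in the chain. The following is a minimal Python sketch of that flow; all class, field, and function names are hypothetical illustrations, not taken from the patent or any implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AclEntry:
    src: str       # source prefix for endpoints in the private network
    dst: str       # spoofed IP address representing the cloud domain
    next_hop: str  # service (appliance) that matching traffic is steered to

@dataclass
class ServiceChainer:
    # domain -> full set of IPs learned by snooping DNS resolution results
    dns_info: dict = field(default_factory=dict)
    # spoofed IP -> domain (the IP-to-domain mappings of the claims)
    ip_to_domain: dict = field(default_factory=dict)
    acls: list = field(default_factory=list)

    def snoop(self, domain: str, resolved_ips: list):
        """Collect DNS information: record IPs seen in resolution results."""
        self.dns_info.setdefault(domain, set()).update(resolved_ips)

    def spoof_entry(self, domain: str) -> str:
        """Pick a reduced subset (here, a single IP) of the collected
        addresses to serve as the spoofed DNS answer for this domain,
        and record the resulting IP-to-domain mapping."""
        ip = sorted(self.dns_info[domain])[0]
        self.ip_to_domain[ip] = domain
        return ip

    def program_chain(self, domain: str, private_src: str, services: list):
        """Install one ACL entry per service in the chain: traffic from
        the private network to the spoofed IP is routed through the
        services in sequence."""
        dst = next(ip for ip, d in self.ip_to_domain.items() if d == domain)
        for svc in services:
            self.acls.append(AclEntry(src=private_src, dst=dst, next_hop=svc))
```

Because clients only ever receive the spoofed subset of addresses, matching on a single destination IP per domain keeps the per-service ACLs small while still steering all traffic for that cloud domain through its chain.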
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. Non-Provisional patent application Ser. No. 16/870,130, filed on May 8, 2020, which is a continuation of U.S. Non-Provisional patent application Ser. No. 16/001,039, filed on Jun. 6, 2018, now U.S. Pat. No. 10,666,612, the full disclosure of each of which is incorporated herein by reference in its entirety.

US Referenced Citations (351)
Number Name Date Kind
3629512 Yuan Dec 1971 A
4769811 Eckberg, Jr. et al. Sep 1988 A
5408231 Bowdon Apr 1995 A
5491690 Alfonsi et al. Feb 1996 A
5557609 Shobatake et al. Sep 1996 A
5600638 Bertin et al. Feb 1997 A
5687167 Bertin et al. Nov 1997 A
6115384 Parzych Sep 2000 A
6167438 Yates et al. Dec 2000 A
6400681 Bertin et al. Jun 2002 B1
6661797 Goel et al. Dec 2003 B1
6687229 Kataria et al. Feb 2004 B1
6799270 Bull et al. Sep 2004 B1
6888828 Partanen et al. May 2005 B1
6993593 Iwata Jan 2006 B2
7027408 Nabkel et al. Apr 2006 B2
7062567 Benitez et al. Jun 2006 B2
7095715 Buckman et al. Aug 2006 B2
7096212 Tribble et al. Aug 2006 B2
7139239 Mcfarland et al. Nov 2006 B2
7165107 Pouyoul et al. Jan 2007 B2
7197008 Shabtay et al. Mar 2007 B1
7197660 Liu et al. Mar 2007 B1
7209435 Kuo et al. Apr 2007 B1
7227872 Biswas et al. Jun 2007 B1
7231462 Berthaud et al. Jun 2007 B2
7333990 Thiagarajan et al. Feb 2008 B1
7443796 Albert et al. Oct 2008 B1
7458084 Zhang et al. Nov 2008 B2
7472411 Wing et al. Dec 2008 B2
7486622 Regan et al. Feb 2009 B2
7536396 Johnson et al. May 2009 B2
7552201 Areddu et al. Jun 2009 B2
7558261 Arregoces et al. Jul 2009 B2
7567504 Darling et al. Jul 2009 B2
7571470 Arregoces et al. Aug 2009 B2
7573879 Narad et al. Aug 2009 B2
7610375 Portolani et al. Oct 2009 B2
7643468 Arregoces et al. Jan 2010 B1
7644182 Banerjee et al. Jan 2010 B2
7647422 Singh et al. Jan 2010 B2
7657898 Sadiq Feb 2010 B2
7657940 Portolani et al. Feb 2010 B2
7668116 Wijnands et al. Feb 2010 B2
7684321 Muirhead et al. Mar 2010 B2
7738469 Shekokar et al. Jun 2010 B1
7751409 Carolan Jul 2010 B1
7793157 Bailey et al. Sep 2010 B2
7814284 Glass et al. Oct 2010 B1
7831693 Lai Nov 2010 B2
7852785 Lund et al. Dec 2010 B2
7860095 Forissier et al. Dec 2010 B2
7860100 Khalid et al. Dec 2010 B2
7895425 Khalid et al. Feb 2011 B2
7899012 Ho et al. Mar 2011 B2
7899861 Feblowitz et al. Mar 2011 B2
7907595 Khanna et al. Mar 2011 B2
7908480 Firestone et al. Mar 2011 B2
7983174 Monaghan et al. Jul 2011 B1
7990847 Leroy et al. Aug 2011 B1
8000329 Fendick et al. Aug 2011 B2
8018938 Fromm et al. Sep 2011 B1
8094575 Vadlakonda et al. Jan 2012 B1
8095683 Balasubramanian Jan 2012 B2
8116307 Thesayi et al. Feb 2012 B1
8166465 Feblowitz et al. Apr 2012 B2
8180909 Hartman et al. May 2012 B2
8191119 Wing et al. May 2012 B2
8195774 Lambeth et al. Jun 2012 B2
8280354 Smith et al. Oct 2012 B2
8281302 Durazzo et al. Oct 2012 B2
8291108 Raja et al. Oct 2012 B2
8305900 Bianconi Nov 2012 B2
8311045 Quinn et al. Nov 2012 B2
8316457 Paczkowski et al. Nov 2012 B1
8355332 Beaudette et al. Jan 2013 B2
8442043 Sharma et al. May 2013 B2
8451817 Cheriton May 2013 B2
8464336 Wei et al. Jun 2013 B2
8479298 Keith et al. Jul 2013 B2
8498414 Rossi Jul 2013 B2
8520672 Guichard et al. Aug 2013 B2
8601152 Chou Dec 2013 B1
8605588 Sankaran et al. Dec 2013 B2
8612612 Dukes et al. Dec 2013 B1
8627328 Mousseau et al. Jan 2014 B2
8645952 Biswas et al. Feb 2014 B2
8676965 Gueta Mar 2014 B2
8676980 Kreeger et al. Mar 2014 B2
8700892 Bollay et al. Apr 2014 B2
8724466 Kenigsberg et al. May 2014 B2
8730980 Bagepalli et al. May 2014 B2
8743885 Khan et al. Jun 2014 B2
8751420 Hjelm et al. Jun 2014 B2
8762534 Hong et al. Jun 2014 B1
8762707 Killian et al. Jun 2014 B2
8769057 Breau et al. Jul 2014 B1
8792490 Jabr et al. Jul 2014 B2
8793400 Mcdysan et al. Jul 2014 B2
8812730 Vos et al. Aug 2014 B2
8819419 Carlson et al. Aug 2014 B2
8825070 Akhtar et al. Sep 2014 B2
8830834 Sharma et al. Sep 2014 B2
8904037 Haggar et al. Dec 2014 B2
8984284 Purdy, Sr. et al. Mar 2015 B2
9001827 Appenzeller Apr 2015 B2
9071533 Hui et al. Jun 2015 B2
9077661 Andreasen et al. Jul 2015 B2
9088584 Feng et al. Jul 2015 B2
9130872 Kumar et al. Sep 2015 B2
9143438 Khan et al. Sep 2015 B2
9160797 Mcdysan Oct 2015 B2
9178812 Guichard et al. Nov 2015 B2
9189285 Ng et al. Nov 2015 B2
9203711 Agarwal et al. Dec 2015 B2
9253274 Quinn et al. Feb 2016 B2
9300585 Kumar et al. Mar 2016 B2
9311130 Christenson et al. Apr 2016 B2
9319324 Beheshti-Zavareh et al. Apr 2016 B2
9325565 Yao et al. Apr 2016 B2
9325735 Xie et al. Apr 2016 B1
9338097 Anand et al. May 2016 B2
9344337 Kumar et al. May 2016 B2
9374297 Bosch et al. Jun 2016 B2
9379931 Bosch et al. Jun 2016 B2
9385950 Quinn et al. Jul 2016 B2
9398486 La Roche, Jr. et al. Jul 2016 B2
9407540 Kumar et al. Aug 2016 B2
9413655 Shatzkamer et al. Aug 2016 B2
9424065 Singh et al. Aug 2016 B2
9436443 Chiosi et al. Sep 2016 B2
9473570 Bhanujan et al. Oct 2016 B2
9479443 Bosch et al. Oct 2016 B2
9491094 Patwardhan et al. Nov 2016 B2
9537836 Mailer et al. Jan 2017 B2
9558029 Behera et al. Jan 2017 B2
9559970 Kumar et al. Jan 2017 B2
9571405 Pignataro et al. Feb 2017 B2
9608896 Kumar et al. Mar 2017 B2
9660909 Guichard et al. May 2017 B2
9723106 Shen et al. Aug 2017 B2
9774533 Zhang et al. Sep 2017 B2
9794379 Kumar et al. Oct 2017 B2
9882776 Aybay et al. Jan 2018 B2
20010023442 Masters Sep 2001 A1
20020131362 Callon Sep 2002 A1
20020156893 Pouyoul et al. Oct 2002 A1
20020167935 Nabkel et al. Nov 2002 A1
20030023879 Wray Jan 2003 A1
20030026257 Xu et al. Feb 2003 A1
20030037070 Marston Feb 2003 A1
20030088698 Singh et al. May 2003 A1
20030110081 Tosaki et al. Jun 2003 A1
20030120816 Berthaud et al. Jun 2003 A1
20030226142 Rand Dec 2003 A1
20040109412 Hansson et al. Jun 2004 A1
20040148391 Shannon, Sr. et al. Jul 2004 A1
20040199812 Earl Oct 2004 A1
20040213160 Regan et al. Oct 2004 A1
20040264481 Darling et al. Dec 2004 A1
20040268357 Joy et al. Dec 2004 A1
20050044197 Lai Feb 2005 A1
20050058118 Davis Mar 2005 A1
20050060572 Kung Mar 2005 A1
20050086367 Conta et al. Apr 2005 A1
20050120101 Nocera Jun 2005 A1
20050152378 Bango et al. Jul 2005 A1
20050157645 Rabie et al. Jul 2005 A1
20050160180 Rabje et al. Jul 2005 A1
20050204042 Banerjee et al. Sep 2005 A1
20050210096 Bishop et al. Sep 2005 A1
20050257002 Nguyen Nov 2005 A1
20050281257 Yazaki et al. Dec 2005 A1
20050286540 Hurtta et al. Dec 2005 A1
20050289244 Sahu et al. Dec 2005 A1
20060005240 Sundarrajan et al. Jan 2006 A1
20060031374 Lu et al. Feb 2006 A1
20060045024 Previdi et al. Mar 2006 A1
20060074502 Mcfarland Apr 2006 A1
20060092950 Arregoces et al. May 2006 A1
20060095960 Arregoces et al. May 2006 A1
20060112400 Zhang et al. May 2006 A1
20060155862 Kathi et al. Jul 2006 A1
20060168223 Mishra et al. Jul 2006 A1
20060233106 Achlioptas et al. Oct 2006 A1
20060233155 Srivastava Oct 2006 A1
20070061441 Landis et al. Mar 2007 A1
20070067435 Landis et al. Mar 2007 A1
20070094397 Krelbaum et al. Apr 2007 A1
20070143851 Nicodemus et al. Jun 2007 A1
20070237147 Quinn et al. Oct 2007 A1
20070250836 Li et al. Oct 2007 A1
20080056153 Liu Mar 2008 A1
20080080509 Khanna et al. Apr 2008 A1
20080080517 Roy et al. Apr 2008 A1
20080170542 Hu Jul 2008 A1
20080177896 Quinn et al. Jul 2008 A1
20080181118 Sharma et al. Jul 2008 A1
20080196083 Parks et al. Aug 2008 A1
20080209039 Tracey et al. Aug 2008 A1
20080219287 Krueger et al. Sep 2008 A1
20080225710 Raja et al. Sep 2008 A1
20080291910 Tadimeti et al. Nov 2008 A1
20090003364 Fendick et al. Jan 2009 A1
20090006152 Timmerman et al. Jan 2009 A1
20090037713 Khalid et al. Feb 2009 A1
20090094684 Chinnusamy et al. Apr 2009 A1
20090204612 Keshavarz-nia et al. Aug 2009 A1
20090271656 Yokota et al. Oct 2009 A1
20090300207 Giaretta et al. Dec 2009 A1
20090305699 Deshpande et al. Dec 2009 A1
20090328054 Paramasivam et al. Dec 2009 A1
20100058329 Durazzo et al. Mar 2010 A1
20100063988 Khalid Mar 2010 A1
20100080226 Khalid Apr 2010 A1
20100165985 Sharma et al. Jul 2010 A1
20100191612 Raleigh Jul 2010 A1
20110023090 Asati et al. Jan 2011 A1
20110032833 Zhang et al. Feb 2011 A1
20110055845 Nandagopal et al. Mar 2011 A1
20110131338 Hu Jun 2011 A1
20110137991 Russell Jun 2011 A1
20110142056 Manoj Jun 2011 A1
20110161494 Mcdysan et al. Jun 2011 A1
20110222412 Kompella Sep 2011 A1
20110255538 Srinivasan et al. Oct 2011 A1
20110267947 Dhar et al. Nov 2011 A1
20120131662 Kuik et al. May 2012 A1
20120147894 Mulligan et al. Jun 2012 A1
20120324442 Barde Dec 2012 A1
20120331135 Alon et al. Dec 2012 A1
20130003735 Chao et al. Jan 2013 A1
20130003736 Szyszko et al. Jan 2013 A1
20130036307 Gagliano Feb 2013 A1
20130040640 Chen et al. Feb 2013 A1
20130044636 Koponen et al. Feb 2013 A1
20130103939 Radpour Apr 2013 A1
20130121137 Feng et al. May 2013 A1
20130124708 Lee et al. May 2013 A1
20130148541 Zhang et al. Jun 2013 A1
20130163594 Sharma et al. Jun 2013 A1
20130163606 Bagepalli et al. Jun 2013 A1
20130238806 Moen Sep 2013 A1
20130272305 Lefebvre et al. Oct 2013 A1
20130311675 Kancherla Nov 2013 A1
20130329584 Ghose et al. Dec 2013 A1
20140010083 Hamdi et al. Jan 2014 A1
20140010096 Kamble et al. Jan 2014 A1
20140036730 Nellikar et al. Feb 2014 A1
20140050223 Foo et al. Feb 2014 A1
20140067758 Boldyrev et al. Mar 2014 A1
20140105062 McDysan et al. Apr 2014 A1
20140136675 Yao et al. May 2014 A1
20140254603 Banavalikar et al. Sep 2014 A1
20140259012 Nandlall et al. Sep 2014 A1
20140279863 Krishnamurthy et al. Sep 2014 A1
20140280836 Kumar et al. Sep 2014 A1
20140317261 Shatzkamer et al. Oct 2014 A1
20140321459 Kumar et al. Oct 2014 A1
20140334295 Guichard et al. Nov 2014 A1
20140344399 Lipstone et al. Nov 2014 A1
20140344439 Kempf et al. Nov 2014 A1
20140362682 Guichard et al. Dec 2014 A1
20140362857 Guichard et al. Dec 2014 A1
20140369209 Khurshid et al. Dec 2014 A1
20140376558 Rao et al. Dec 2014 A1
20150003455 Haddad et al. Jan 2015 A1
20150012584 Lo et al. Jan 2015 A1
20150012988 Jeng et al. Jan 2015 A1
20150029871 Frost et al. Jan 2015 A1
20150032871 Allan et al. Jan 2015 A1
20150052516 French et al. Feb 2015 A1
20150071285 Kumar et al. Mar 2015 A1
20150074276 DeCusatis et al. Mar 2015 A1
20150082308 Kiess et al. Mar 2015 A1
20150085635 Wijnands et al. Mar 2015 A1
20150085870 Narasimha et al. Mar 2015 A1
20150089082 Patwardhan et al. Mar 2015 A1
20150092564 Aldrin Apr 2015 A1
20150103827 Quinn et al. Apr 2015 A1
20150117308 Kant Apr 2015 A1
20150124622 Kovvali et al. May 2015 A1
20150131484 Aldrin May 2015 A1
20150131660 Shepherd et al. May 2015 A1
20150156035 Foo et al. Jun 2015 A1
20150180725 Varney et al. Jun 2015 A1
20150180767 Tam et al. Jun 2015 A1
20150181309 Shepherd et al. Jun 2015 A1
20150188949 Mahaffey et al. Jul 2015 A1
20150195197 Yong et al. Jul 2015 A1
20150222516 Deval et al. Aug 2015 A1
20150222533 Birrittella et al. Aug 2015 A1
20150236948 Dunbar et al. Aug 2015 A1
20150319078 Lee et al. Nov 2015 A1
20150319081 Kasturi et al. Nov 2015 A1
20150326473 Dunbar et al. Nov 2015 A1
20150333930 Aysola et al. Nov 2015 A1
20150334027 Bosch et al. Nov 2015 A1
20150341285 Aysola et al. Nov 2015 A1
20150365324 Kumar et al. Dec 2015 A1
20150365495 Fan et al. Dec 2015 A1
20150381465 Narayanan et al. Dec 2015 A1
20150381557 Fan et al. Dec 2015 A1
20160021026 Aron et al. Jan 2016 A1
20160028604 Chakrabarti et al. Jan 2016 A1
20160028640 Zhang et al. Jan 2016 A1
20160043952 Zhang et al. Feb 2016 A1
20160050132 Zhang Feb 2016 A1
20160080263 Park et al. Mar 2016 A1
20160080496 Falanga et al. Mar 2016 A1
20160099853 Nedeltchev et al. Apr 2016 A1
20160119159 Zhao et al. Apr 2016 A1
20160119253 Kang et al. Apr 2016 A1
20160127139 Tian et al. May 2016 A1
20160134518 Callon et al. May 2016 A1
20160134535 Callon May 2016 A1
20160139939 Bosch et al. May 2016 A1
20160164776 Biancaniello Jun 2016 A1
20160164826 Riedel et al. Jun 2016 A1
20160165014 Nainar et al. Jun 2016 A1
20160173373 Guichard et al. Jun 2016 A1
20160173464 Wang et al. Jun 2016 A1
20160182336 Doctor et al. Jun 2016 A1
20160182342 Singaravelu et al. Jun 2016 A1
20160182684 Connor et al. Jun 2016 A1
20160212017 Li et al. Jul 2016 A1
20160226742 Apathotharanan et al. Aug 2016 A1
20160248685 Pignataro et al. Aug 2016 A1
20160285720 Mäenpää et al. Sep 2016 A1
20160323165 Boucadair et al. Nov 2016 A1
20160352629 Wang et al. Dec 2016 A1
20160380966 Gunnalan et al. Dec 2016 A1
20170019303 Swamy et al. Jan 2017 A1
20170031804 Ciszewski et al. Feb 2017 A1
20170041332 Mahjoub Feb 2017 A1
20170078175 Xu et al. Mar 2017 A1
20170187609 Lee et al. Jun 2017 A1
20170208000 Bosch et al. Jul 2017 A1
20170214627 Zhang et al. Jul 2017 A1
20170237656 Gage et al. Aug 2017 A1
20170250917 Ruckstuhl et al. Aug 2017 A1
20170257386 Kim Sep 2017 A1
20170272470 Gundamaraju et al. Sep 2017 A1
20170279712 Nainar et al. Sep 2017 A1
20170310611 Kumar et al. Oct 2017 A1
20170317932 Paramasivam Nov 2017 A1
20170374088 Pappu Dec 2017 A1
20180026884 Nainar et al. Jan 2018 A1
20180041470 Schultz et al. Feb 2018 A1
20180219783 Pfister et al. Aug 2018 A1
20180352038 Sathyanarayana et al. Dec 2018 A1
Foreign Referenced Citations (12)
Number Date Country
103716123 Apr 2014 CN
103716137 Apr 2014 CN
10381277 May 2014 CN
2731314 May 2014 EP
3160073 Apr 2017 EP
2016149686 Aug 2016 JP
WO 2011029321 Mar 2011 WO
WO 2012056404 May 2012 WO
WO 2015180559 Dec 2015 WO
WO 2015187337 Dec 2015 WO
WO 2016004556 Jan 2016 WO
WO 2016058245 Apr 2016 WO
Non-Patent Literature Citations (62)
Entry
Aldrin, S., et al. “Service Function Chaining Operation, Administration and Maintenance Framework,” Internet Engineering Task Force, Oct. 26, 2014, 13 pages.
Alizadeh, Mohammad, et al., “CONGA: Distributed Congestion-Aware Load Balancing for Datacenters,” SIGCOMM '14, Aug. 17-22, 2014, 12 pages.
Author Unknown, “ANSI/SCTE 35 2007 Digital Program Insertion Cueing Message for Cable,” Engineering Committee, Digital Video Subcommittee, American National Standard, Society of Cable Telecommunications Engineers, © Society of Cable Telecommunications Engineers, Inc. 2007 All Rights Reserved, 140 Philips Road, Exton, PA 19341; 42 pages.
Author Unknown, “AWS Lambda Developer Guide,” Amazon Web Services Inc., May 2017, 416 pages.
Author Unknown, “CEA-708,” from Wikipedia, the free encyclopedia, Nov. 15, 2012; 16 pages http://en.wikipedia.org/w/index.php?title=CEA-708&oldid=523143431.
Author Unknown, “Cisco and Intel High-Performance VNFs on Cisco NFV Infrastructure,” White Paper, Cisco and Intel, Oct. 2016, 7 pages.
Author Unknown, “Cloud Functions Overview,” Cloud Functions Documentation, Mar. 21, 2017, 3 pages; https://cloud.google.com/functions/docs/concepts/overview.
Author Unknown, “Cloud-Native VNF Modelling,” Open Source Mano, © ETSI 2016, 18 pages.
Author Unknown, “Digital Program Insertion,” from Wikipedia, the free encyclopedia, Jan. 2, 2012; 1 page http://en.wikipedia.org/w/index.php?title=Digital_Program_Insertion&oldid=469076482.
Author Unknown, “Dynamic Adaptive Streaming over HTTP,” from Wikipedia, the free encyclopedia, Oct. 25, 2012; 3 pages, http://en.wikipedia.org/w/index.php?title=Dynamic_Adaptive_Streaming_over_HTTP&oldid=519749189.
Author Unknown, “GStreamer and in-band metadata,” from RidgeRun Developer Connection, Jun. 19, 2012, 5 pages; https://developer.ridgerun.com/wiki/index.php/GStreamer_and_in-band_metadata.
Author Unknown, “IEEE Standard for the Functional Architecture of Next Generation Service Overlay Networks, IEEE Std. 1903-2011,” IEEE, Piscataway, NJ, Oct. 7, 2011; 147 pages.
Author Unknown, “ISO/IEC JTC 1/SC 29, Information Technology—Dynamic Adaptive Streaming over HTTP (DASH)—Part 1: Media Presentation Description and Segment Formats,” International Standard © ISO/IEC 2012—All Rights Reserved; Jan. 5, 2012; 131 pages.
Author Unknown, “MPEG-2 Transmission,” © Dr. Gorry Fairhurst, 9 pages [Published on or about Jan. 12, 2012]; http://www.erg.abdn.ac.uk/future-net/digital-video/mpeg2-trans.html.
Author Unknown, “MPEG Transport Stream,” from Wikipedia, the free encyclopedia, Nov. 11, 2012; 7 pages, http://en.wikipedia.org/w/index.php?title=MPEG_transport_stream&oldid=522468296.
Author Unknown, “Network Functions Virtualisation (NFV); Use Cases,” ETSI, GS NFV 001 v1.1.1, Architectural Framework, © European Telecommunications Standards Institute, Oct. 2013, 50 pages.
Author Unknown, “OpenNebula 4.6 User Guide,” Jun. 12, 2014, opennebula.org, 87 pages.
Author Unknown, “Understanding Azure, A Guide for Developers,” Microsoft Corporation, Copyright © 2016 Microsoft Corporation, 39 pages.
Author Unknown, “3GPP TR 23.803 V7.0.0 (Sep. 2005) Technical Specification: Group Services and System Aspects; Evolution of Policy Control and Charging (Release 7),” 3rd Generation Partnership Project (3GPP), 650 Route des Lucioles—Sophia Antipolis Valbonne—France, Sep. 2005; 30 pages.
Author Unknown, “3GPP TS 23.203 V8.9.0 (Mar. 2010) Technical Specification: Group Services and System Aspects; Policy and Charging Control Architecture (Release 8),” 3rd Generation Partnership Project (3GPP), 650 Route des Lucioles—Sophia Antipolis Valbonne—France, Mar. 2010; 116 pages.
Author Unknown, “3GPP TS 23.401 V13.5.0 (Dec. 2015) Technical Specification: 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access (Release 13),” 3GPP, 650 Route des Lucioles—Sophia Antipolis Valbonne—France, Dec. 2015, 337 pages.
Author Unknown, “3GPP TS 23.401 V9.5.0 (Jun. 2010) Technical Specification: Group Services and Systems Aspects; General Packet Radio Service (GPRS) Enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Access (Release 9),” 3rd Generation Partnership Project (3GPP), 650 Route des Lucioles—Sophia Antipolis Valbonne—France, Jun. 2010; 259 pages.
Author Unknown, “3GPP TS 29.212 V13.1.0 (Mar. 2015) Technical Specification: 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; Policy and Charging Control (PCC); Reference points (Release 13),” 3rd Generation Partnership Project (3GPP), 650 Route des Lucioles—Sophia Antipolis Valbonne—France, Mar. 2015; 230 pages.
Author Unknown, “Service-Aware Network Architecture Based on SDN, NFV, and Network Intelligence,” 2014, 8 pages.
Baird, Andrew, et al. “AWS Serverless Multi-Tier Architectures; Using Amazon API Gateway and AWS Lambda,” Amazon Web Services Inc., Nov. 2015, 20 pages.
Bi, Jing, et al., “Dynamic Provisioning Modeling for Virtualized Multi-tier Applications in Cloud Data Center,” 2010 IEEE 3rd International Conference on Cloud Computing, Jul. 5, 2010, pp. 370-377, IEEE Computer Society.
Bitar, N., et al., “Interface to the Routing System (I2RS) for the Service Chaining: Use Cases and Requirements,” draft-bitar-i2rs-service-chaining-01, Feb. 14, 2014, pp. 1-15.
Boucadair, Mohamed, et al., “Differentiated Service Function Chaining Framework,” Network Working Group Internet Draft draft-boucadair-network-function-chaining-03, Aug. 21, 2013, 21 pages.
Bremler-Barr, Anat, et al., “Deep Packet Inspection as a Service,” CoNEXT '14, Dec. 2-5, 2014, pp. 271-282.
Cisco Systems, Inc. “Cisco NSH Service Chaining Configuration Guide,” Jul. 28, 2017, 11 pages.
Cisco Systems, Inc. “Cisco VN-LINK: Virtualization-Aware Networking,” 2009, 9 pages.
Dunbar, et al., “Architecture for Chaining Legacy Layer 4-7 Service Functions,” IETF Network Working Group Internet Draft, draft-dunbar-sfc-legacy-14-17-chain-architecture-03.txt, Feb. 10, 2014; 17 pages.
Ersue, Mehmet, “ETSI NFV Management and Orchestration—An Overview,” Presentation at the IETF#88 Meeting, Nov. 3, 2013, 14 pages.
Farrel, A., et al., “A Path Computation Element (PCE)—Based Architecture,” RFC 4655, Network Working Group, Aug. 2006, 40 pages.
Fayaz, Seyed K., et al., “Efficient Network Reachability Analysis using a Succinct Control Plane Representation,” 2016, ratul.org, pp. 1-16.
Halpern, Joel, et al., “Service Function Chaining (SFC) Architecture,” Internet Engineering Task Force (IETF), Cisco, Oct. 2015, 32 pages.
Hendrickson, Scott, et al. “Serverless Computation with OpenLambda,” Elastic 60, University of Wisconsin, Madison, Jun. 20, 2016, 7 pages; https://www.usenix.org/system/files/conference/hotcloud16/hotcloud16_hendrickson.pdf.
Jiang, Y., et al., “An Architecture of Service Function Chaining,” IETF Network Working Group Internet Draft, draft-jiang-sfc-arch-01.txt, Feb. 14, 2014; 12 pages.
Jiang, Yuanlong, et al., “Fault Management in Service Function Chaining,” Network Working Group, China Telecom, Oct. 16, 2015, 13 pages.
Katsikas, Georgios P., et al., “Profiling and accelerating commodity NFV service chains with SCC,” The Journal of Systems and Software, vol. 127, Jan. 2017, pp. 12-27.
Kumar, Surendra, et al., “Service Function Path Optimization: draft-kumar-sfc-sfp-optimization-00.txt,” Internet Engineering Task Force, IETF; Standard Working Draft, May 10, 2014, 14 pages.
Kumbhare, Abhijit, et al., “Opendaylight Service Function Chaining Use-Cases,” Oct. 14, 2014, 25 pages.
Li, Hongyu, “Service Function Chaining Use Cases”, IETF 88 Vancouver, Nov. 7, 2013, 7 pages.
Mortensen, A., et al., “Distributed Denial of Service (DDoS) Open Threat Signaling Requirements,” DOTS, Mar. 18, 2016, 16 pages; https://tools.ietf.org/pdf/draft-ietf-dots-requirements-01.pdf.
Newman, David, “Review: FireEye fights off multi-stage malware,” Network World, May 5, 2014, 7 pages.
Nguyen, Kim-Khoa, et al. “Distributed Control Plane Architecture of Next Generation IP Routers,” IEEE, 2009, 8 pages.
Penno, Reinaldo, et al. “Packet Generation in Service Function Chains,” draft-penno-sfc-packet-03, Apr. 29, 2016, 25 pages.
Penno, Reinaldo, et al. “Services Function Chaining Traceroute,” draft-penno-sfc-trace-03, Sep. 30, 2015, 9 pages.
Pierre-Louis, Marc-Arthur, “OpenWhisk: A quick tech preview,” DeveloperWorks Open, IBM, Feb. 22, 2016, modified Mar. 3, 2016, 7 pages; https://developer.ibm.com/open/2016/02/22/openwhisk-a-quick-tech-preview/.
Pujol, Pua Capdevila, “Deployment of NFV and SFC scenarios,” EETAC, Master Thesis, Advisor: David Rincon Rivera, Universitat Politecnica De Catalunya, Feb. 17, 2017, 115 pages.
Quinn, P., et al., “Network Service Header,” Network Working Group, Mar. 24, 2015, 42 pages; https://tools.ietf.org/pdf/draft-ietf-sfc-nsh-00.pdf.
Quinn, P., et al., “Network Service Chaining Problem Statement,” draft-quinn-nsc-problem-statement-03.txt, Aug. 26, 2013, 18 pages.
Quinn, Paul, et al., “Network Service Header,” Network Working Group, draft-quinn-sfc-nsh-02.txt, Feb. 14, 2014, 21 pages.
Quinn, Paul, et al., “Network Service Header,” Network Working Group, draft-quinn-nsh-00.txt, Jun. 13, 2013, 20 pages.
Quinn, Paul, et al., “Network Service Header,” Network Working Group Internet Draft draft-quinn-nsh-01, Jul. 12, 2013, 20 pages.
Quinn, Paul, et al., “Service Function Chaining (SFC) Architecture,” Network Working Group Internet Draft draft-quinn-sfc-arch-05.txt, May 5, 2014, 31 pages.
Quinn, Paul, et al., “Service Function Chaining: Creating a Service Plane via Network Service Headers,” IEEE Computer Society, 2014, pp. 38-44.
Wong, Fei, et al., “SMPTE-TT Embedded in ID3 for HTTP Live Streaming, draft-smpte-id3-http-live-streaming-00,” Informational Internet Draft, Jun. 2012, 7 pages; http://tools.ietf.org/html/draft-smpte-id3-http-live-streaming-00.
Yadav, Rishi, “What Real Cloud-Native Apps Will Look Like,” Crunch Network, posted Aug. 3, 2016, 8 pages; https://techcrunch.com/2016/08/03/what-real-cloud-native-apps-will-look-like/.
Zhang, Ying, et al. “StEERING: A Software-Defined Networking for Inline Service Chaining,” IEEE, 2013, 10 pages.
International Search Report and Written Opinion from the International Searching Authority, dated Aug. 5, 2019, 12 pages, for corresponding International Patent Application No. PCT/US19/35172.
Chinese Office Action for Application No. 201980037508.X, dated Jan. 17, 2022, 14 pages.
Related Publications (1)
20220094665 A1, Mar. 2022, US
Continuations (2)
Parent: 16870130, May 2020, US; Child: 17471077, US
Parent: 16001039, Jun. 2018, US; Child: 16870130, US