The present technology pertains to cloud computing, and more specifically to multi-cloud connectivity.
The ubiquity of Internet-enabled devices has created an enormous demand for Internet services and content. We have become a connected society where users are increasingly reliant on network services and content. This Internet-connected revolution has created significant challenges for service and content providers who often struggle to service a high volume of user requests without falling short of user performance expectations. For example, cloud providers typically need large and complex datacenters to keep up with network and content demands from users. These datacenters are generally equipped with server farms configured to host specific services, and include numerous switches and routers configured to route traffic in and out of the datacenters. In many instances, a specific datacenter is expected to handle millions of traffic flows and service requests.
Private networks hosted by private entities, such as enterprises, similarly have increasing demands for computing resources and performance. To satisfy such increasing demands, private networks often purchase compute resources and services from public cloud providers. For example, private entities can create a virtual private cloud on a public cloud provider and connect the virtual private cloud to their private network in order to grow their available compute resources and capabilities. Such virtual private clouds can be created on specific provider clouds or cloud regions. Private entities can select where to establish a virtual private cloud based on proximity, cost, performance, services, etc. Unfortunately, while the number of cloud providers and regions available to private entities continues to grow, the lack of uniformity in the toolsets, architectures, and application programming interfaces (APIs) implemented by different cloud providers and regions has created significant challenges for entities wanting to deploy virtual private clouds on different cloud providers or regions. Often, entities are unable to connect to different cloud providers or regions, or maintain interconnectivity between various cloud providers or regions. In many cases, establishing connectivity between clouds or regions can be extremely difficult, requiring time-consuming configuration efforts and requiring specific network devices or dedicated infrastructure.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
Disclosed herein are systems, methods, and computer-readable media for establishing and managing connectivity between clouds using SRv6. In some examples, a method can involve adding a new virtual private cloud (VPC) to a multi-cloud environment including a private network and one or more VPCs connected to the private network via a segment routing (SR) domain (e.g., an SRv6 overlay) and respective virtual routers on the one or more VPCs and the private network. The method can further include deploying, on the new VPC, a new virtual router configured to route incoming and outgoing traffic for the new VPC and, after deploying the new virtual router on the new VPC, registering the new virtual router at a border gateway protocol (BGP) controller associated with the private network. The method can involve establishing a connection between the BGP controller and the new virtual router and receiving, at the BGP controller and from the new virtual router, topology information associated with the new VPC.
The method can further involve identifying routes in the multi-cloud environment based on one or more paths computed based on the topology information. The one or more paths can be respective paths between the private network, the one or more VPCs, and the new VPC. In some cases, the one or more paths can be best paths computed by a path computation engine based on the topology information. Once the routes are identified, the BGP controller can send, to the new virtual router, routing information for communicating across the multi-cloud environment. The routing information can include the routes, segment routing identifiers, and segment routing policies associated with the multi-cloud environment. The method can further include, based on the routing information, providing interconnectivity between the private network, the one or more VPCs, and the new VPC via the segment routing domain, the respective virtual routers, and the new virtual router. For example, the respective virtual routers and the new virtual router can use the routes, segment routing identifiers, and segment routing policies to route traffic across the multi-cloud environment. The respective virtual routers and the new virtual router can be SRv6-capable nodes which can use the segment routing identifiers to steer traffic to destinations on the segment routing domain.
The disclosed technology provides efficient and user-friendly strategies for establishing and managing connectivity between clouds. The present technology involves systems, methods, and computer-readable media for multi-cloud connectivity using segment routing (SR) over IPv6 (“SRv6” hereinafter) and border gateway protocol (BGP). The present technology will be described as follows. The discussion begins with an overview of SRv6 and IPv6. A description of an example cloud data center architecture and an example multi-cloud environment with an SRv6 overlay, as illustrated in
The approaches herein can utilize segment routing (SR) to steer connection or communication requests towards servers or nodes on different clouds or cloud regions. IPv6 and SR, which are further described below, can be used to steer requests efficiently while limiting state information. The request will be routed to the nodes identified in the SR packet based on the IPv6 and SRv6 headers. The IPv6 header can include a Source Address (SA) and a Destination Address (DA), such as a destination server or node. An SR Header can include a list of SR nodes (e.g., S1, S2, S3, etc.) and a Segment Left (SL) counter which identifies the number of remaining destination servers or nodes.
In an IPv6 environment, such as an IPv6-centric data center, servers can be reached via an IPv6 physical prefix. The servers can run application services in isolated environments, such as virtual machines (VMs) or software containers, which can be assigned an IPv6 virtual address (VIP). In some cases, a virtual switch (e.g., Open vSwitch, vector packet processing, etc.) can be deployed on a server to route packets between physical and virtual interfaces on the server. This allows the network (e.g., data center) to be fully Layer-3 routed, without having to deploy Layer-2 tunnels such as VLANs or VXLANs.
Routing the VIPs corresponding to the different applications running in the data center can be achieved in several manners. In some examples, the virtual switches can run Interior Gateway Protocol (IGP) to propagate direct routes to the VIPs. Other examples may use a mobility protocol, such as Identifier-Locator Addressing for IPv6, wherein edge routers perform the translation between physical and virtual addresses. Moreover, network devices can use Border Gateway Protocol (BGP) to exchange routing information. As will be further explained below, the approaches herein implement segment routing to establish and manage connectivity between clouds.
SR is a source-routing paradigm, initially designed for traffic engineering, which allows for a packet to follow a predefined path, defined by a list of segments, inside an SR domain. The approaches herein leverage an SRv6 architecture and IPv6 connectivity to efficiently create and manage multi-cloud connectivity.
SRv6 and IPv6 can be leveraged together by implementing an IPv6 and SRv6 header in an IPv6 packet. For example, in some cases, an IPv6 extension header can be implemented to identify a list of segments for SR and a counter, Segments Left, indicating the number of remaining segments to be processed until the final destination of the packet is reached. In an SRv6 packet, the IPv6 destination address can be overwritten with the address of the next segment. This way, the packet can traverse routers, including SR-unaware routers, until reaching the next intended SR hop. Upon receipt of an SRv6 packet, an SR-capable router will set the destination address to the address of the next segment and decrement the Segments Left counter. When the packet reaches the last SR hop, the final destination of the packet is copied to the IPv6 destination address field. Depending on the value of a flag in the header, the SRv6 header can be stripped by the last SR hop so that the destination receives a vanilla IPv6 packet.
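To make this behavior concrete, the following is a minimal, self-contained sketch of the endpoint processing described above, using a simplified packet model. The class and field names are illustrative assumptions rather than any particular router's or library's API; the segment list is stored last-segment-first, following the SRv6 convention.

```python
# Hypothetical sketch of basic SRv6 endpoint processing (not a real router
# implementation). Addresses are placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SRv6Packet:
    source: str                  # IPv6 source address
    destination: str             # IPv6 destination address (active segment)
    segments: List[str] = field(default_factory=list)  # last segment first
    segments_left: int = 0       # index of the active segment
    payload: bytes = b""

def process_at_sr_hop(packet: SRv6Packet) -> SRv6Packet:
    """Decrement Segments Left and copy the next segment into the
    destination address; at the last SR hop the final destination is
    already in place (a flag-dependent flavor could strip the header)."""
    if packet.segments_left == 0:
        return packet            # last SR hop: deliver as plain IPv6
    packet.segments_left -= 1
    packet.destination = packet.segments[packet.segments_left]
    return packet

# A packet headed through segments S1 -> S2 -> S3 (placeholder addresses):
pkt = SRv6Packet(source="2001:db8::a", destination="2001:db8::1",
                 segments=["2001:db8::3", "2001:db8::2", "2001:db8::1"],
                 segments_left=2)
pkt = process_at_sr_hop(pkt)     # destination now 2001:db8::2, SL == 1
```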
The cloud 104 can include switches 106-1 through 106-N (collectively “106” hereinafter) and 108-1 through 108-N (collectively “108” hereinafter) configured to route traffic in the cloud data center architecture 100. The switches 106, 108 can include any network device with layer 2 (L2) and/or layer 3 (L3) capabilities. In this example, the switches 106 represent spine switches and the switches 108 represent leaf switches.
The client 102 can connect to the cloud 104 and access application servers 110-1 through 110-N (collectively “110” hereinafter) via the switches 106, 108. The client 102 can be a network, such as a cloud network or data center (e.g., a private cloud, a public cloud, a hybrid cloud, a cloud region or segment, a virtual private cloud, etc.), or any computing device, such as a laptop, a desktop, a tablet computer, a mobile phone, a server, a smart device (e.g., smart television, smart watch, etc.), an internet of things (IoT) device, etc.
The switches 106 can serve as edge devices in the cloud 104, and route traffic to and from the cloud 104. The switches 106 can thus serve as the egress and ingress point for the cloud 104. The switches 106 can also route traffic to the switches 108 in the cloud 104, which can route traffic to other nodes (e.g., appliances, firewalls, load balancers, etc.) and application servers 110 in the cloud 104.
The application servers 110 can represent physical machines and/or resources hosting applications, isolated environments, or services in the cloud 104. For example, the application servers 110 can be physical servers running various applications in the cloud 104. The application servers 110 can run some or all of their applications in isolated environments, such as VMs or software containers. In some cases, an application can be hosted by, and/or run on, multiple application servers 110 in the cloud 104. For example, multiple application servers 110 can run instances of an application (e.g., virtual instances, replicas, parallel instances, mirror instances, etc.).
The application servers 110 can include a physical network interface (e.g., NIC) to communicate with other devices or services (e.g., devices or services in the cloud data center architecture 100). The physical network interface can be assigned a physical prefix or network address for such communications. The application servers 110 can also include one or more virtual interfaces (e.g., vNICs) which can provide virtualized or abstract representations of network interfaces and connections. Virtual interfaces can provide added flexibility and network capabilities, as well as various other benefits or services, such as aggregation of links or data, isolation of data or networks, decoupling of application and system traffic, expansion of network interfaces, network redundancy, dedicated links, and so forth. Virtual interfaces can be assigned virtual addresses (e.g., VIPs) in the cloud 104. The virtual addresses can identify the virtual interfaces as well as any applications or isolated environments associated with the virtual addresses on the application servers 110.
For example, an application can be assigned a virtual address in the cloud 104, which can be used to identify the application in the cloud 104 and route traffic to and from the application. The virtual address can be used to steer traffic to and from a virtual instance of the application running on one or more of the application servers 110. In some cases, the virtual address can be mapped to the same application on multiple application servers 110, and can be used to communicate with an instance of the application on any of the multiple application servers 110.
In some cases, the application servers 110 can include a virtual switch, such as OVS or VPP, which can route traffic to and from the application servers 110. For example, a virtual switch can route traffic between physical and virtual network interfaces on an application server, between applications and/or isolated environments on the application server, and between the application server and devices or applications outside of the application server. To illustrate, an application server can run multiple workloads (e.g., applications in different VMs or containers) assigned to different virtual interfaces and virtual addresses. A virtual switch on the application server can route traffic to and from the different workloads by translating the virtual addresses of the workloads and communicating with the virtual interfaces as well as other network interfaces such as the physical network interface(s) on the application server.
A controller 212 can interact with gateways 216A-G on clouds 104A-G to collect topology information, perform path computation, propagate routes across the clouds 104A-G and/or VPCs 206-210, propagate segment routing identifiers (SIDs) and policies across the clouds 104A-G and/or VPCs 206-210, perform traffic engineering, etc. The controller 212 can be, for example, a BGP controller with a path computation engine. The controller 212 can reside on cloud 104A or any other network or cloud. The gateways 216A-G can be, for example, virtual gateways available at the clouds 104A-G. In some cases, the virtual gateways can include a vector packet processing engine (VPP).
The controller 212 can collect topology information from the clouds 104A-G and/or VPCs 206-210 and propagate forwarding rules, SIDs, and SR policies using one or more protocols such as OSPF (Open Shortest Path First), IS-IS (Intermediate System to Intermediate System), BGP Link-State (BGP-LS), BGP Traffic Engineering (BGP-TE), etc. For example, the controller 212 can collect topology information for the clouds 104A-G and/or VPCs 206-210 from gateways 216A-G using the BGP-LS protocol. The controller 212 can also include a path computation engine (PCE) for computing the best paths between the gateways 216A-G. The controller 212 can use the collected topology and/or cloud information to perform the path computation. The controller 212 can then use BGP-TE to populate reachability information, such as forwarding rules, SIDs, and SR policies, on the gateways 216A-G.
The gateways 216A-G can include a control plane that interfaces with BGP-LS and BGP-TE to receive the forwarding rules, SIDs, and SR policies from the controller 212. The gateways 216A-G can also include a data plane that processes IPv4 and/or IPv6 packets and is able to encapsulate IPv4 or IPv6 packets into, and decapsulate them from, SRv6 packets. Moreover, the gateways 216A-G can include BGP agents 218A-G, such as GoBGP agents, to interact with the controller 212 or any BGP peers. In some cases, the gateways 216A-G can also include an active measurement system based on IP SLA (Internet Protocol Service Level Agreement) to collect network performance information and monitor quality-of-service (QoS) between the gateways 216A-G.
The controller 212 can communicate with the clouds 104A-G via IPv4 or IPv6. The SRv6 overlay 202 can include SRv6-capable nodes that can route traffic over the SRv6 overlay 202 using SRv6, as further explained below.
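The controller's path-computation step can be sketched as follows: given a topology collected from the gateways (e.g., via BGP-LS), the controller computes a best path between two gateways and derives a segment list to push back (e.g., via BGP-TE). The graph representation, cost model, and gateway names below are assumptions made for illustration, not the controller's actual implementation.

```python
# Illustrative best-path computation over a collected topology (plain Dijkstra).
import heapq
from typing import Dict, List, Tuple

Topology = Dict[str, Dict[str, float]]   # node -> {neighbor: link cost}

def best_path(topology: Topology, src: str, dst: str) -> List[str]:
    """Return the lowest-cost path from src to dst, or [] if unreachable."""
    queue: List[Tuple[float, str, List[str]]] = [(0.0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in topology.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return []

# Hypothetical three-gateway topology; the segment list pushed to the
# ingress gateway is the computed path minus the ingress itself.
topology = {"gw-a": {"gw-b": 1.0, "gw-c": 5.0}, "gw-b": {"gw-c": 1.0}, "gw-c": {}}
segment_list = best_path(topology, "gw-a", "gw-c")[1:]   # ["gw-b", "gw-c"]
```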
The list of segments 314 in the SR header 306 can be used by nodes in the SRv6 overlay 202 to steer the packet 300 to the destination nodes (e.g., application servers 110-1 and 110-2) in the list of segments 314. The list of segments 314 identifies each segment (e.g., SRv6-capable node) along a path for the packet. Each SRv6-capable node can maintain a list of SRv6 segments instantiated at the node. The SRv6-capable node can use its list of SRv6 segments to route the packet to the next segment in the list of segments 314.
The segments field 312 can also include a counter 318, known as Segments Left, which identifies the active segment. The value of the counter 318 is decremented by 1 each time the packet 300 is received by an SRv6-capable node as the packet travels through the IPv6 network.
The IPv6 header 304 can include a source address field 310 and a destination address field 308. The source address field 310 can identify the source of the packet 300, such as client 102. The source address field 310 can include a network address of the original source of the packet 300, a return destination for the packet 300, and/or a current source or sender of the packet 300. The source field 310 can also include commands or functions to be implemented by the node identified in the source field 310, as will be further described below.
The destination address field 308 can identify the next segment or node from the list of segments 314. In this example, the destination address field 308 identifies server 110-1 (S1) which is the first destination node in the list of segments 314 for the packet 300. The destination address field 308 can be used to steer the packet 300 to the next destination. The destination field 308 in the IPv6 header 304 can allow the packet 300 to be routed even if the packet 300 traverses SR-unaware nodes.
The destination address field 308 can include a network prefix of the identified node or segment. For example, the destination address field 308 can include the physical prefix of server 110-1 (S1). This can ensure that the packet 300 is transmitted to that node or segment (e.g., server 110-1 (S1)), as the first destination for the packet 300. After the server 110-1 (S1) processes the packet 300, the server 110-1 (S1) can forward the packet 300 to the next segment in the list of segments 314, which in this example is server 110-2 (S2). When forwarding the packet, the server 110-1 (S1) can overwrite the destination address field 308 on the IPv6 header 304 to identify the server 110-2 (S2) as the destination, which ensures that the packet 300 is routed to server 110-2 (S2). Server 110-2 (S2) can then receive the packet 300 based on the destination address field 308. This way, the list of segments 314 in the SR header 306 as well as the destination address field 308 in the IPv6 header 304 can be used to push the packet 300 to the destination nodes in the list of segments 314.
As will be further explained, the list of segments 314 and/or destination address field 308 can include functions or commands (hereinafter “SR functions”) to be implemented by associated nodes or segments. For example, the destination address field 308 can identify application server 110-1 (S1) and include a function to be applied by application server 110-1 (S1), such as a connect function which application server 110-1 (S1) can interpret as a request to connect with an application or node associated with the function. The destination address field 308 can contain the state of the packet 300, including the next destination of the packet, the source or return node, and any commands or functions for such nodes or segments.
Similarly, the list of segments 314 can include commands or functions for the segments in the list of segments 314. For example, the list of segments 314 can include a connect function for each destination node or segment, a force connect function for the last segment in the list of segments 314, one or more parameters for one or more segments (e.g., resource identifier, flow identifier, etc.), state information, and so forth.
SR functions can encode actions to be taken by a node directly in the SR header 306 and/or the IPv6 header 304. SR functions are executed locally by the SRv6-capable nodes. Example SR functions include, without limitation, End (i.e., endpoint function), End.X (i.e., endpoint function with Layer-3 cross-connect), End.T (i.e., endpoint function with specific IPv6 table lookup), End.S (i.e., endpoint in search of a target in table T), End.B6 (i.e., endpoint bound to an SRv6 policy), etc. For example, in an SR header (306) containing s::cj, s::cj denotes the shortest-path to the node s and an x-connect function (function c) to the neighbor j.
In some examples, each node can be assigned an entire IPv6 prefix. Accordingly, the lower-order bytes in the prefix can be used to designate different SR functions. In some cases, the SR functions may depend on the address of the first segment in the list of segments 314 (e.g., the “sender” of the function). To illustrate, when a node whose physical prefix is s receives a packet with the SR header 306 containing (x, ..., s::f, ...), the SR header 306 will trigger node s to perform a function f with argument x, denoted by s.f(x).
The node prefix 326 can include the physical prefix of the next segment or node. The SR function 328 can include a command or function associated with the node prefix 326. In some cases, the third segment 324 can be further segmented into sub-segments which can include arguments for the SR function 328. The arguments can be used to pass specific parameters for the SR function 328.
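As a hedged illustration of the locator-plus-function addressing just described, the sketch below packs a node prefix, an SR function, and an argument into one 128-bit segment address and unpacks them again. The 64/32/32-bit split and all names are assumptions made for the example, not a prescribed encoding.

```python
# Illustrative s::f-style encoding: locator in the high 64 bits, SR function
# and argument in the low-order bytes. The split is an assumption.
import ipaddress
from typing import Tuple

def encode_segment(node_prefix: str, function: int, argument: int = 0) -> str:
    """Pack locator | function | argument into a single segment address."""
    locator = int(ipaddress.IPv6Address(node_prefix))   # high 64 bits
    return str(ipaddress.IPv6Address(locator | (function << 32) | argument))

def decode_segment(segment: str) -> Tuple[str, int, int]:
    """Recover (locator, function, argument) from a segment address."""
    value = int(ipaddress.IPv6Address(segment))
    locator = str(ipaddress.IPv6Address((value >> 64) << 64))
    return locator, (value >> 32) & 0xFFFFFFFF, value & 0xFFFFFFFF

# Node s applies function f with argument x, i.e., s.f(x):
seg = encode_segment("2001:db8:0:1::", function=0xC, argument=7)
locator, function, argument = decode_segment(seg)   # ("2001:db8:0:1::", 12, 7)
```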
The switch 108-N can receive the packet 402 and forward the packet to application server 110-1 (S1) based on the IPv6 and SRv6 headers 404. The SRv6 header in the headers 404 can include a list of segments 410 identifying application servers 110-1, 110-2, 110-3 as the destination segments. The SRv6 header in the headers 404 can also include a segments left (SL) counter 412 identifying the number of remaining segments or hops in the list of segments 410.
The application server 110-1 (S1) can receive the packet 402 from the switch 108-N and process it. The application server 110-1 (S1) can then forward the packet 402 to application server 110-2 (S2), which is the next segment in the list of segments 410, based on the list of segments 410 in the headers 406. The application server 110-1 (S1) can also decrease the SL counter 412 identifying the number of remaining segments or hops in the list of segments 410.
The application server 110-2 (S2) can receive the packet 402 from the application server 110-1 (S1) and process it. The application server 110-2 (S2) can then forward the packet 402 to application server 110-3 (S3), which is the next segment in the list of segments 410, based on the list of segments 410 in the headers 408. The application server 110-2 (S2) can also decrease the SL counter 412 identifying the number of remaining segments or hops in the list of segments 410.
The application server 110-3 (S3) can receive the packet 402 from the application server 110-2 (S2) and process it. The application server 110-3 (S3) is the last segment in the list of segments 410. Accordingly, the application server 110-3 (S3) can decrease the SL counter 412 identifying the number of remaining segments or hops in the list of segments 410, without forwarding the packet to another destination segment.
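A compact trace of this walk, under the simplifying assumption that the packet reaches S1 with SL=2, might look as follows; the server names and counter values are placeholders mirroring the example above.

```python
# Illustrative hop-by-hop trace of the SL counter 412; not real forwarding code.
segments = ["S1", "S2", "S3"]      # list of segments 410, in travel order
sl = 2                             # assumed SL value when S1 receives the packet

for index, server in enumerate(segments):
    print(f"{server} receives packet 402 with SL={sl}")
    sl = max(sl - 1, 0)            # each server decrements the counter
    if index < len(segments) - 1:
        print(f"{server} forwards to {segments[index + 1]} (SL now {sl})")
    else:
        print(f"{server} is the last segment; the packet is not forwarded")
```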
When the client associated with cloud 104A creates the new VPCs (e.g., VPCs 206 and 208) on clouds 104B and 104C, a script or program can be triggered/called to deploy gateways 216B and 216C and agents 218B and 218C on clouds 104B and 104C. Each cloud provider (e.g., clouds 104B and 104C) can have specific application programming interfaces (APIs) to make such deployment possible. The gateways 216B and 216C and agents 218B and 218C can be deployed on VMs or nodes (e.g., servers, software containers, etc.) on clouds 104B and 104C. In some cases, the gateways 216B and 216C and agents 218B and 218C can be deployed in VMs from cloud marketplaces associated with clouds 104B and 104C.
The gateways 216B and 216C can operate as a layer 3 (L3) router for traffic communicated to and from the VPCs (e.g., 206 and 208). Once deployed, the gateways 216B and 216C and agents 218B and 218C can announce and register themselves with the controller 212. The address of the controller 212 can be preconfigured in the case of a SaaS (software as a service) model or can be manually defined by the client if the client wants to operate its own controller.
The controller 212 can check if the newly created gateways (216B-C) are allowed to connect to the controller 212. Various policies can be considered, which may depend on different pricing models (e.g., limitations on the number of simultaneous virtual gateways per BGP controller, etc.). Once the connection with the controller 212 is established, the gateways 216B and 216C can start announcing private networks in the VPCs 206, 208 to the controller 212. This communication can be achieved through BGP-LS, for example.
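The admission-and-announce exchange can be sketched as below. This is a minimal illustration under assumed names and a simple per-controller gateway cap; a real deployment would carry these interactions over BGP sessions rather than direct method calls.

```python
# Hypothetical sketch of gateway registration and prefix announcement;
# the cap models a pricing-based policy on simultaneous gateways.
from typing import Dict, List

class BGPControllerSketch:
    def __init__(self, max_gateways: int = 16):
        self.max_gateways = max_gateways
        self.gateways: Dict[str, List[str]] = {}   # gateway id -> prefixes

    def register(self, gateway_id: str) -> bool:
        """Admit the gateway unless the policy limit is reached."""
        if len(self.gateways) >= self.max_gateways:
            return False
        self.gateways[gateway_id] = []
        return True

    def announce(self, gateway_id: str, prefixes: List[str]) -> None:
        """Record private prefixes announced by a registered gateway
        (in practice this would arrive via BGP-LS)."""
        self.gateways[gateway_id].extend(prefixes)

controller = BGPControllerSketch()
if controller.register("gateway-216B"):
    controller.announce("gateway-216B", ["10.1.0.0/16"])
```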
The controller 212 can then send routes, SIDs, and SR policies to the gateways 216B and 216C. If required, the BGP agents 218B and 218C can also configure security protections, such as the BGPsec protocol. In some cases, a network services orchestrator (NSO) 502 can be deployed in communication with the controller 212 to facilitate orchestration of resources on the various clouds (e.g., clouds 104A-C).
The gateways 216B and 216C can use the routes, SIDs, and SR policies to route traffic to segments in the SRv6 overlay 202. As the gateways 216B and 216C receive packets with SRv6 headers, the gateways 216B and 216C can route the packets to destination nodes and execute SR functions based on the SRv6 headers. In some cases, the approaches herein can implement a new SRv6 function flavor, applicable to the End, End.X, End.T, and SR proxy functions, that modifies the IPv6 source address (SA) upon egressing a cloud (e.g., 104A, 104B, 104C) in order to avoid RPF (Reverse Path Forwarding) check failures.
As illustrated above, the approaches herein provide techniques for easily and efficiently adding new regions or VPCs for a client (e.g., cloud 104A). The process can involve creating a new VPC in a particular cloud or cloud region, loading a virtual gateway on the cloud or cloud region for the new VPC which is pre-staged to register with a controller (e.g., controller 212), and sending routes, SIDs, and SR policies to the virtual gateway via the controller. Moreover, adding a new region or VPC does not affect other existing regions (i.e., no changes to other sites). As the newly created virtual gateway receives its configuration from the controller (212), any-to-any region or VPC routing is enabled without requiring existing regions to intervene in the process.
For example,
The client then loads virtual gateway 216D and BGP agent 218D on the cloud 104D. The virtual gateway 216D and BGP agent 218D can be loaded in one or more VMs, servers, software containers, nodes, etc., on the cloud 104D. For example, the virtual gateway 216D can be deployed via a VM available on a cloud marketplace associated with cloud 104D.
The client can pre-stage the virtual gateway 216D (including the BGP agent 218D) to register with the controller 212. Once the virtual gateway 216D has connected and registered with the controller 212, the controller 212 can collect topology information from the virtual gateway 216D. The controller 212 can collect the topology information using BGP-LS, for example. The controller 212 can compute best paths between the new VPC 210 and the cloud 104A and VPCs 206 and 208. More specifically, the controller 212 can compute paths between the virtual gateways 216A-D at the various locations (e.g., 104A, 206, 208). The controller 212 can use a path computation engine to compute the paths.
The controller 212 then sends routes, SIDs and SR policies to the virtual gateway 216D. The SIDs can correspond to SR-aware nodes in the SRv6 overlay 202. The routes, SIDs and SR policies can be used to route traffic between the virtual gateways 216A-D over the SRv6 overlay 202. This provides interconnectivity between the virtual gateways 216A-D and, therefore, between the cloud 104A and the VPCs 206, 208, and 210. Such interconnectivity includes any-to-any connectivity between the cloud 104A and the VPCs 206, 208, 210. In some examples, the controller 212 can populate the virtual gateway 216D with specific forwarding rules via BGP SR-TE. The virtual gateway 216D can add the forwarding rules to its forwarding table, which the virtual gateway 216D can use to determine where or how to route traffic between the new VPC 210 and the cloud 104A and VPCs 206 and 208.
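To illustrate what the pushed forwarding rules might look like on the new gateway, here is a hedged sketch of a prefix-to-segment-list table with longest-prefix lookup. The rule format, class name, and addresses are assumptions; as described above, the actual rules would be populated via BGP SR-TE.

```python
# Illustrative forwarding table mapping destination prefixes to SID lists;
# not the gateway's real data structures.
import ipaddress
from typing import List

class GatewayForwardingTable:
    def __init__(self):
        self.rules = {}   # ip_network -> SID list (segment addresses)

    def install(self, prefix: str, sid_list: List[str]) -> None:
        """Install a rule pushed by the controller (e.g., via BGP SR-TE)."""
        self.rules[ipaddress.ip_network(prefix)] = sid_list

    def lookup(self, destination: str) -> List[str]:
        """Longest-prefix match the destination against installed rules."""
        address = ipaddress.ip_address(destination)
        matches = [net for net in self.rules
                   if net.version == address.version and address in net]
        if not matches:
            return []
        return self.rules[max(matches, key=lambda net: net.prefixlen)]

table = GatewayForwardingTable()
table.install("10.2.0.0/16", ["2001:db8::216c", "2001:db8::216d"])
table.lookup("10.2.1.5")   # -> SID list steering traffic over the SRv6 overlay
```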
The process for configuring the new VPC 210 and interconnecting the new VPC 210 with the cloud 104A and VPCs 206 and 208 can be automated or substantially automated and can be completed without modifying other existing clouds or regions (e.g., cloud 104A and VPCs 206 and 208).
In some cases, an Internet Edge Router (physical or virtual) can be deployed in the SRv6 overlay 202 to connect the different regions/networks (e.g., cloud 104A and VPCs 206 and 208) to the Internet 214. For example, with reference to
With reference to
With reference to
In addition, VPC 206 includes server 602 (Server 1), VPC 208 includes servers 604 (Server 4) and 606 (Server 5), and VPC 210 includes servers 608 (Server 2) and 610 (Server 3). The virtual gateways 216B, 216C, and 216D can interconnect VPCs 206, 208, and 210 and route packets to the various servers (602, 604, 606, 608, 610) on the VPCs 206, 208, and 210, as shown in traffic flow 600.
In this example, server 602 (Server 1) sends packet 612 to virtual gateway 216B. Packet 612 includes IPv4 header 614 with a source address 616A (SA) corresponding to server 602 (Server 1) and a destination address 616B (DA) corresponding to server 608 (Server 2) on VPC 210. Virtual gateway 216B receives the packet 612 and encapsulates the packet 612 with SRv6 header 622 and IPv6 header 624, to create SRv6 packet 618 for routing through the SRv6 overlay 202. Virtual gateway 216B then sends the packet 618 to the virtual gateway 216C (C2) via the Internet 214 and the SRv6 overlay 202.
The SRv6 header 622 can include a list of segments associated with a service chain configured for the traffic. The list of segments includes segment 628A and segment 628B. Segment 628A represents virtual gateway 216C on VPC 208 and segment 628B represents virtual gateway 216D on VPC 210. In addition, segments 628A and 628B are associated with respective SR functions to be executed by segments 628A and 628B upon receipt of the packet 618. In this example, segment 628A is associated with the function C4, which instructs the virtual gateway 216C (e.g., segment 628A) to x-connect to Server 4 (604), and segment 628B is associated with the function C2, which instructs the virtual gateway 216D (e.g., segment 628B) to x-connect (function C) to Server 2 (608). The list of segments in the SRv6 header 622 also includes counter 630A, which identifies the active segment or the Segments Left. Server 4 (604) can represent the first service in the service chain, and Server 2 (608) can represent the next service in the service chain. Thus, the list of segments (628A, 628B), including the associated SR functions, can be used by the virtual gateways 216B-D to steer the packet 618 through the service chain.
The IPv6 header 624 of packet 618 can include a source address 626A corresponding to the virtual gateway 216B (e.g., C1) and a destination address 626B corresponding to the virtual gateway 216C on VPC 208. The destination address 626B in the IPv6 header 624 can also include a function to be executed by the virtual gateway 216C upon receipt. In this example, the function is C4, which as previously explained instructs the virtual gateway 216C (e.g., segment 628A) to x-connect (function C) to Server 4 (604).
When the virtual gateway 216C (C2) receives the packet 618, it looks at the list of segments and associated functions in the SRv6 header 622 as well as the destination address 626B and associated function on the IPv6 header 624, to determine how to process the packet 618. The virtual gateway 216C (C2) then executes the function C4 defined in the SRv6 header 622 and the IPv6 header 624 for segment 628A, which corresponds to the virtual gateway 216C (C2). Accordingly, based on the function C4, the virtual gateway 216C (C2) sends the packet 618 to Server 4 (604). Server 4 (604) can receive the packet 618 and process it accordingly.
The virtual gateway 216C (C2) also identifies the next segment 628B in the list of segments on the SRv6 header 622, and determines the packet 618 should be routed to virtual gateway 216D. The virtual gateway 216C (C2) modifies the packet 618 for sending to the next segment 628B (virtual gateway 216D). The modified packet 620 includes a new counter 630B holding the new Segments Left value, which is the value of the old counter 630A decremented by 1. The modified packet 620 also includes a modified IPv6 header 632. The modified IPv6 header 632 identifies the source address 626A, which is the same source address included in the IPv6 header 624 of packet 618, and a new destination address 626C (C3) corresponding to the next segment 628B (virtual gateway 216D). The new destination address 626C (C3) is also associated with a function to be executed by the next segment 628B upon receipt. In this example, the function is C2, which instructs the next segment 628B (virtual gateway 216D) to x-connect (function C) to Server 2 (608).
The virtual gateway 216C (C2) then sends the modified packet 620 to the virtual gateway 216D (C3), identified as the next segment 628B in the SRv6 header 622. When the virtual gateway 216D (C3) receives the modified packet 620, it looks at the list of segments and associated functions in the SRv6 header 622 and the destination address 626C and associated function in the IPv6 header 632, to determine how to process the modified packet 620. The virtual gateway 216D (C3) then executes the function C2 defined in the SRv6 header 622 and the IPv6 header 632 for segment 628B, which corresponds to the virtual gateway 216D (C3). Based on the function C2, the virtual gateway 216D (C3) sends the packet 612 to Server 2 (608), which is the final destination in the flow 600. Server 2 (608) can receive the packet 612 and process it accordingly.
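The segment list that ingress gateway 216B (C1) builds for this service chain can be sketched as below. The locator::function addresses are placeholders patterned on the example (C2 with function C4, then C3 with function C2), and the reversed, last-segment-first encoding follows the SRv6 convention discussed earlier.

```python
# Hypothetical construction of the SRv6 header 622 for the service chain in
# flow 600; addresses and layout are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class SRHeaderSketch:
    segments: List[str]    # stored last-segment-first, per SRv6
    segments_left: int     # index of the active segment

def build_service_chain(chain_in_travel_order: List[str]) -> SRHeaderSketch:
    """Reverse the travel-order chain and point SL at the first segment."""
    return SRHeaderSketch(segments=list(reversed(chain_in_travel_order)),
                          segments_left=len(chain_in_travel_order) - 1)

# Segment 628A: gateway C2, x-connect to Server 4 ("C2::C4")
# Segment 628B: gateway C3, x-connect to Server 2 ("C3::C2")
srh = build_service_chain(["2001:db8:c2::c4", "2001:db8:c3::c2"])
active_segment = srh.segments[srh.segments_left]   # goes into the IPv6 DA
```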
As illustrated above, the virtual gateways (e.g., 216B-D) can perform service insertion and SRv6-based service chaining over the SRv6 overlay 202. In some cases, the virtual gateways can perform service chain distribution using BGP-TE. Moreover, in some cases, the virtual gateways can perform service chaining based on BGP policies (e.g., location, inner versus outer traffic, Internet, Intranet, etc.). The virtual gateways can also perform SRv6-based load balancing and performance-based routing. For example, the virtual gateways can measure latencies between themselves. The controller 212 can use the latency measurements to compute the optimal path or VPC to reach a given service identified by an address such as a virtual IP (VIP) or anycast address. The virtual gateways can interconnect VPCs across different cloud providers and/or cloud provider regions. The techniques herein allow a client associated with a private cloud or network to easily deploy new VPCs on different cloud providers or cloud provider regions, and interconnect those VPCs with the client's network and any other VPCs established.
Having described example systems and concepts, the disclosure now turns to the method illustrated in
At step 702, a new VPC (e.g., 210) can be added to a multi-cloud environment (e.g., 200) comprising a private network (e.g., 104A) and one or more VPCs (e.g., 206, 208) connected to the private network via a segment routing domain (e.g., SRv6 overlay 202) and respective virtual routers (e.g., virtual gateways 216A-C) on the one or more VPCs and the private network.
At step 704, a new virtual router (e.g., virtual gateway 216D) can be deployed on the new VPC. The new virtual router can route incoming and outgoing traffic for the new VPC. In some cases, the new virtual router can be deployed via a VM, server, or software container obtained through a cloud market place associated with a cloud provider hosting the new VPC.
At step 706, the new virtual router can be registered at a BGP controller (e.g., 212) in the segment routing domain. In some cases, the new virtual router can be pre-staged to register with the BGP controller upon deployment.
At step 708, the BGP controller can receive, from the new virtual router, topology information associated with the new VPC. In some cases, the BGP controller can receive the topology information via BGP-LS.
At step 710, the BGP controller can identify routes in the multi-cloud environment based on one or more paths computed based on the topology information. The one or more paths can be best paths computed between the private network, the one or more VPCs, and the new VPC. For example, the BGP controller can include a path computation engine which can compute paths based on the topology information.
At step 712, the BGP controller can send, to the new virtual router, routing information for communicating across the multi-cloud environment. The routing information includes the routes, segment routing identifiers (SIDs), and segment routing policies associated with the multi-cloud environment. The segment routing identifiers can correspond to SRv6-capable nodes in the multi-cloud environment, such as the respective virtual routers and the new virtual router. The BGP controller can populate the routing information on the new virtual router using BGP-TE, for example.
Based on the routing information, at step 714, the method can provide interconnectivity between the private network, the one or more VPCs, and the new VPC via the segment routing domain, the respective virtual routers, and the new virtual router. The respective virtual routers and the new virtual router can route traffic between the private network, the one or more VPCs, and the new VPC based on the routes, segment routing identifiers, and segment routing policies, as previously explained.
In some cases, an Internet Edge Router (e.g., 204) can be deployed in the segment routing domain to provide Internet connectivity to the private network, the one or more VPCs, and the new VPC. The BGP controller can send, to the Internet Edge Router, the routing information associated with the multi-cloud environment. The routing information can enable the Internet Edge Router to connect the private network, the one or more VPCs, and the new VPC to the Internet.
The disclosure now turns to
The interfaces 802 are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device 800. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control, signal processing, crypto processing, and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor 804 to efficiently perform routing computations, network diagnostics, security functions, etc.
Although the system shown in
Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 806) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory 806 could also hold various software containers and virtualized execution environments and data.
The network device 800 can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing and/or switching operations. The ASIC can communicate with other components in the network device 800 via the connection 810, to exchange data and signals and coordinate various types of operations by the network device 800, such as routing, switching, and/or data storage operations, for example.
To enable user interaction with the computing device 900, an input device 945 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 935 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 900. The communications interface 940 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 930 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 925, read only memory (ROM) 920, and hybrids thereof.
The storage device 930 can include services 932, 934, 936 for controlling the processor 910. Other hardware or software modules are contemplated. The storage device 930 can be connected to the system connection 905. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 910, connection 905, output device 935, and so forth, to carry out the function.
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
Claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim. For example, claim language reciting “at least one of A and B” means A, B, or A and B.