Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing environments.
In some computing environments, an entity (e.g., a corporation) builds out an infrastructure and runs applications, such as, for example, Web services, “on-premise” within the infrastructure. In these computing environments, computing tasks are performed on the on-premise (or private) computer network. For example, a corporation (or other enterprise customer) can have a computer network formed from resources under its ownership and control. The corporation (or other enterprise customer) can make a private network available to its employees to perform networked computing tasks.
In other computing environments, one entity uses another entity's infrastructure to run applications on the entity's behalf. For example, one entity can run an application on machines in another entity's data center. Running an application in another entity's data center can be referred to as running an application “in the cloud”. When applications are run in the cloud, computing resources and storage resources of the data center are allocated to a user.
In some computing environments, work is performed using both on-premise and cloud resources. In these “hybrid” arrangements, on-premise resources and cloud resources can interoperate to assist in solving a common problem. Hybrid arrangements can exist on a temporary basis, such as, for example, when one entity supplements its own resources with resources from another entity. For example, when on-premise resources are operating at or near capacity or in response to a surge in workload, a user of the on-premise resources can request allocation of cloud resources to perform additional work. When the additional work is completed, the cloud resources can be returned to an available pool of resources for allocation to other users. The user can be charged for use of any allocated resources. Thus, the user of the on-premise resources essentially rents cloud-based resources.
Outsourcing computing workloads to a public cloud can require significant bandwidth between a user's on-premise network and the public cloud. To reach a public cloud, data from an on-premise network typically passes through a gateway between the on-premise network and the network of the cloud provider. However, existing gateway solutions for realizing this cross-premise connectivity fail to meet various requirements, such as, for example, increased performance, multi-tenancy, security, predictability, compatibility with various modes of access, scalability, low cost, and simplicity.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
One embodiment illustrated herein is directed to a method practiced at a computer system including one or more processors and system memory. The computer system includes a shim gateway. The method includes acts for encapsulating a packet from a customer premise for delivery to customer resources within a public cloud data center. The method includes an act of receiving a packet from a customer premise. The packet is received at a customer-specific shim component in the shim gateway. The packet has a VLAN tag. The packet identifies a tenant within a designated virtual network for the customer. The designated virtual network is within the public cloud data center. The method further includes an act of encapsulating the packet into an encapsulated packet. Encapsulation includes mapping the VLAN tag to a destination network address of a tenant gateway for the customer. The tenant gateway is in the designated virtual network. The method further includes an act of forwarding the encapsulated packet to the tenant gateway in the designated virtual network for delivery to the identified tenant.
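By way of illustration only, and not limitation, the following Python sketch models the shim-side mapping just described. The table contents, the stand-in GRE framing, and the helper names (e.g., VLAN_TO_TENANT_GATEWAY, send_to) are illustrative assumptions and are not drawn from any particular implementation.

```python
# Illustrative sketch only: VLAN-tag-to-tenant-gateway mapping and
# (abstracted) GRE encapsulation at a customer-specific shim component.
from dataclasses import dataclass

@dataclass
class Packet:
    vlan_tag: int   # 802.1Q VLAN ID assigned to the customer circuit
    payload: bytes  # original customer frame

# Hypothetical per-customer mapping: VLAN tag -> tenant gateway address.
VLAN_TO_TENANT_GATEWAY = {
    100: "10.1.0.1",  # customer X -> tenant gateway in VNet X
    200: "10.2.0.1",  # customer Y -> tenant gateway in VNet Y
}

def send_to(dest: str, data: bytes) -> None:
    # Stand-in for the transport that carries frames toward the data center.
    print(f"forwarding {len(data)} bytes to {dest}")

def encapsulate_and_forward(packet: Packet) -> None:
    """Map the VLAN tag to the tenant gateway's destination network
    address, wrap the payload in a (stand-in) GRE envelope, and forward."""
    gateway_addr = VLAN_TO_TENANT_GATEWAY[packet.vlan_tag]
    gre_frame = b"GRE" + packet.payload  # placeholder for real GRE framing
    send_to(gateway_addr, gre_frame)

encapsulate_and_forward(Packet(vlan_tag=100, payload=b"customer data"))
```

In this sketch the GRE envelope is a placeholder; an actual shim gateway would construct a conformant GRE header and an outer IP header addressed to the tenant gateway.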
Another embodiment illustrated herein includes a method that may be practiced at a computer system including one or more processors and system memory. The computer system includes a tenant gateway. The method includes acts for delivering an encapsulated packet from a customer premise to customer resources within a public cloud data center. The method includes an act of the tenant gateway receiving an encapsulated packet for delivery to a tenant in a designated virtual network. The encapsulated packet is sent to the tenant gateway from a shim gateway component for the customer using a destination network address for the tenant gateway that was mapped from a VLAN tag. The method further includes an act of the tenant gateway using information in the encapsulated packet to send data from the encapsulated packet to the tenant in the designated virtual network.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The present invention extends to methods, systems, and computer program products for connecting on-premise networks with public clouds. Embodiments of the invention include a cross-premise gateway configured for a public cloud offering. The gateway facilitates cross-premise connectivity between a customer's on-premise networks and a public cloud. The gateway supports scalability, multiple modes of access, multi-tenancy, simplicity, and support for virtualization protocols, such as, for example, Network Virtualization using Generic Routing Encapsulation (“NVGRE”). Accordingly, customers are provided efficient and predictable (e.g., better Service Level Agreements (“SLAs”)) cross-premise connectivity to utilize a public cloud.
Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, edge devices, gateways, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Referring now to
A gateway can be physically located at an anchor site for an ISP or Dedicated Connection Provider. Logically, the gateway can provide multi-tenant and multi-mode access functionality.
Generally, a multi-tenant multi-mode gateway can provide high bandwidth (e.g., 200 GB/s+ per data center) at a reduced cost. A gateway can provide multi-protocol cross-premise connectivity (e.g., via dedicated access or ISPs) using Multiprotocol Label Switching (“MPLS”) (e.g., L3VPN, 6PE, 6VPE, etc.), Ethernet over MPLS (“EoMPLS”), Virtual Private LAN Services (“VPLS”), Locator/ID Separation Protocol (“LISP”), Generic Routing Encapsulation (“GRE”), Layer 2 Tunneling Protocol version 3 (“L2TPv3”), direct circuit handoff, etc. A gateway can provide logical/virtualized multi-tenancy support.
A gateway can provide dynamic routing. For example, this may be done with Border Gateway Protocol (“BGP”)/Extensible Messaging and Presence Protocol (“XMPP”) peering with tenant gateways. Gateway redundancy can also be provided. For example, in some embodiments, redundancy is provided via BGP multi-path/equal-cost multi-path routing (“ECMP”).
A gateway can be programmable to create/delete loopbacks, GRE/NVGRE tunnel endpoints, VPNs, BGP peerings on the router, etc., from the gateway to tenants. Standardized interfaces/APIs and control protocols can assist with on-demand/automated provisioning.
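By way of illustration only, a programmable provisioning interface of this kind might resemble the following Python sketch; the operation names, parameters, and state layout are assumptions rather than any standardized API.

```python
# Illustrative sketch of a programmable gateway provisioning interface;
# operation names and parameters are assumptions.
class GatewayProvisioner:
    def __init__(self) -> None:
        self.tunnel_endpoints: dict[str, str] = {}  # tenant id -> endpoint address
        self.bgp_peers: dict[str, int] = {}         # peer address -> remote ASN

    def create_tunnel_endpoint(self, tenant_id: str, address: str) -> None:
        """Create a GRE/NVGRE tunnel endpoint for a tenant."""
        self.tunnel_endpoints[tenant_id] = address

    def delete_tunnel_endpoint(self, tenant_id: str) -> None:
        """Delete a tenant's tunnel endpoint, if present."""
        self.tunnel_endpoints.pop(tenant_id, None)

    def add_bgp_peering(self, peer_address: str, remote_asn: int) -> None:
        """Configure BGP peering from the gateway to a tenant gateway."""
        self.bgp_peers[peer_address] = remote_asn

provisioner = GatewayProvisioner()
provisioner.create_tunnel_endpoint("customer-x", "10.1.0.1")
provisioner.add_bgp_peering("10.1.0.1", 65001)
```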
As described, a gateway architecture can use a split model. For example, a gateway can be split into a front-end and a back-end. The front-end can be a shim gateway located at a remote anchor or peering site, for example, a site located far from cloud computing data centers. A shim gateway can be a commodity switch or appliance configured for tunnel encapsulation/decapsulation.
The back-end can be tenant gateway virtual machine(s) (VMs) at a cloud computing data center. Tenant gateway VMs can have different arrangements. In some embodiments, tenant gateway VMs serve a single Virtual Network (“VNet”) (a non-multi-tenant arrangement). In other embodiments, tenant gateway VMs serve multiple VNets (a multi-tenant arrangement). In some embodiments, a shim gateway and tenant gateway virtual machines are commonly owned.
A gateway can provide a Virtual Routing and Forwarding (“VRF”)/VLAN-to-VNet translation layer using different mechanisms. In some embodiments, an indirect splicing mechanism uses Generic Routing Encapsulation (“GRE”) tunnels to Virtual Machines (“VMs”). In some embodiments, a direct splicing mechanism uses directory service lookup and VNet-NVGRE encapsulation/decapsulation. The direct mechanism also maps Tenant IDs in NVGRE to VRF instances and vice versa.
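A minimal sketch of the direct mechanism's bidirectional mapping between NVGRE Tenant IDs and VRF instances might look as follows; the tenant identifiers and VRF names are hypothetical.

```python
# Illustrative sketch of the direct-splicing mapping: NVGRE tenant IDs
# to VRF instances and back. All values are hypothetical.
TENANT_TO_VRF = {0x005001: "vrf-customer-x", 0x005002: "vrf-customer-y"}
VRF_TO_TENANT = {vrf: tid for tid, vrf in TENANT_TO_VRF.items()}

def vrf_for_tenant(tenant_id: int) -> str:
    """Map an NVGRE tenant ID to its VRF instance (cloud -> premise)."""
    return TENANT_TO_VRF[tenant_id]

def tenant_for_vrf(vrf: str) -> int:
    """Map a VRF instance back to its NVGRE tenant ID (premise -> cloud)."""
    return VRF_TO_TENANT[vrf]
```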
Shim components (referred to generically as 116) can be configured to send GRE communication to a specified VNet. For example, the shim component 116-X can be configured to forward communication from customer network 102-X to VNet 118-X. GRE communication is forwarded to the corresponding specified VNet (e.g., VNet 118-X, VNet 118-Y, VNet 118-Z, etc.).
At each VNet, corresponding tenant gateways 120-X, 120-Y and 120-Z receive GRE communication. The tenant gateways (referred to generically as 120) are examples of back-ends of the gateway 110. A tenant gateway 120 translates GRE communication into NVGRE communication. The GRE communication and NVGRE communication are examples of a data plane. The tenant gateway 120 can also use addressing information in the GRE communication to locate appropriate tenants (e.g., tenants 122-X, 122-Y, and 122-Z) in the VNet (referred to generically as 118) for receiving the customer data. This is an example of a control plane. An example of using addressing information includes a directory lookup based on IP addresses in the GRE communication. The customer data is then sent to the appropriate tenants (referred to generically as 122) using NVGRE.
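By way of example, the tenant gateway's directory lookup (control plane) and GRE-to-NVGRE translation (data plane) could be modeled as below. The directory contents and the framing stand-ins are assumptions; what is factual is that NVGRE carries a 24-bit Virtual Subnet ID in the GRE key field.

```python
# Illustrative sketch of GRE-to-NVGRE translation at a tenant gateway.
# Directory contents and framing stand-ins are assumptions.

# Hypothetical directory: inner destination IP -> (tenant id, tenant VM address)
DIRECTORY = {
    "192.168.1.10": (0x005001, "10.10.0.4"),
}

def gre_to_nvgre(inner_dest_ip: str, payload: bytes) -> tuple[str, bytes]:
    """Locate the tenant via directory lookup (control plane) and
    re-encapsulate the customer data as NVGRE (data plane)."""
    tenant_id, tenant_addr = DIRECTORY[inner_dest_ip]
    vsid = tenant_id.to_bytes(3, "big")       # 24-bit Virtual Subnet ID
    nvgre_frame = b"NVGRE" + vsid + payload   # placeholder framing
    return tenant_addr, nvgre_frame
```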
Shim components (referred to generically as 116) can be configured to send the NVGRE or GRE communication to the multi-tenant gateway 124, which, in this example, is used as a back-end of the gateway 110. Accordingly, any of shim components 116-X, 116-Y and 116-Z that have customer data can send the customer data to the multi-tenant gateway 124.
When appropriate, the multi-tenant gateway 124 can translate GRE communication into NVGRE communication in the data plane. The multi-tenant gateway 124 can also use addressing information in the GRE or NVGRE communication to locate (e.g., a directory lookup based on IP addresses in the GRE or NVGRE communication) appropriate tenants within an appropriate VNet for receiving the customer data to implement a control plane. The customer data is then sent to the appropriate VNet and onto the appropriate tenants within the appropriate VNet using NVGRE.
Further, each shim component 116-X, 116-Y and 116-Z is compatible with a VNet (referred to generically as 118). Thus, the shim components 116-X, 116-Y and 116-Z can use addressing information in the NVGRE communication to locate (e.g., a directory lookup based on IP addresses in the NVGRE communication) appropriate tenants 122 in the appropriate VNet 118 for receiving the customer data to implement a control plane. The customer data is then sent to the appropriate VNet 118 and onto the appropriate tenants 122 within the appropriate VNet 118 using NVGRE.
The switch performs a customer-circuit-to-VLAN handoff (including tagging of the customer) to the shim gateway 114 installed at a peering or anchor site 126. In the illustrated example, the shim gateway 114 comprises a 10/40 G switch. The shim gateway 114 takes VLAN frames and maps (or encapsulates) them into the VNet domain using GRE. The shim gateway 114 could do direct NVGRE encapsulation if it can look up the directory service for the CA<>PA (customer address to provider address) mapping (thereby bypassing the VNet gateway in the datapath).
While not shown in the illustrated example, the tenant gateways 120-A and 120-B on the data center 106 side can be made multi-tenant. Further, the route exchange between on-premises systems (e.g., systems on Corporation A's or Corporation B's site network) and the cloud (e.g., the data center 106) could be done statically or using BGP.
As illustrated in
VLAN-to-GRE lookup mapping can be performed in a variety of ways, depending on the switching platform (a platform-neutral sketch follows the list below):
(1) For non-OpenFlow switches
(2) For OpenFlow switches
(3) For software (S/W) appliances—using VMSwitch or Open vSwitch.
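However the table is realized on a given platform, the mapping reduces to a keyed rule set. By way of illustration only, the following Python sketch models such a table; the rule structure, field names, and contents are assumptions rather than any particular switch's programming interface.

```python
# Platform-neutral sketch of a VLAN-to-GRE rule table; the rule
# structure and contents are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    tunnel_dest: str  # GRE tunnel endpoint (the tenant gateway)
    tunnel_key: int   # GRE key identifying the customer/VNet

# One rule per customer VLAN, however the table is realized in a
# switch ASIC, in OpenFlow rules, or in a software vswitch.
FLOW_TABLE: dict[int, FlowRule] = {
    100: FlowRule(tunnel_dest="10.1.0.1", tunnel_key=0x64),
    200: FlowRule(tunnel_dest="10.2.0.1", tunnel_key=0xC8),
}

def lookup(vlan_id: int) -> FlowRule:
    """Return the GRE encapsulation parameters for a customer VLAN."""
    return FLOW_TABLE[vlan_id]
```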
Embodiments of the invention include providing redundancy for customer connections to a cloud computing data center.
Accordingly, embodiments of the invention provide increased scalability. The capacity of a gateway can be increased by adding more virtual machines running the connectivity service. Gateways can be integrated with an existing network load-balancer and hence inherit the corresponding benefits, such as resource pooling and high availability. Cross-premise connectivity is supported via the various access modes customers choose, including MPLS and direct circuit.
Embodiments permit multiple customers/tenants to connect to a public cloud using a scalable gateway front-end and multi-tenant back-end infrastructure. Dynamic routing, failover, and resiliency are provided by leveraging BGP. Embodiments of the invention work at layer-2 and hence do not depend on IP routing or VRF (Virtual Routing and Forwarding) technology, lowering complexity significantly.
Accordingly, embodiments of the invention include using any of the described indirect and direct splicing mechanisms with (1) multiple access modes, (2) multi-tenancy using L2 to L3 interconnection (and independent of other mechanisms, such as, VRF), (3) scaling-out and high availability facilitated by load balancing technology, and (4) support for NVGRE.
Embodiments of the invention enable high-speed cross-premise (e.g., customer site to virtual network) interconnection scenarios.
The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed.
Referring now to
The method 1500 further includes an act of encapsulating the packet into an encapsulated packet (act 1502). Encapsulation includes mapping the VLAN tag to a destination network address of a tenant gateway for the customer, where the tenant gateway is in the designated virtual network. Examples of tenant gateways are illustrated at 120, for individual gateways where each gateway is specific to a particular VNet, and at 124, where a multi-tenant gateway is used for a plurality of different VNets.
The method 1500 further includes an act of forwarding the encapsulated packet to the tenant gateway in the designated virtual network for delivery to the identified tenant.
The method 1500 may be practiced where the act of receiving a packet from a customer premise comprises an act of receiving a packet via one of a plurality of access modes supported by the shim gateway.
The method 1500 may be practiced where the act of encapsulating the packet into an encapsulated packet comprises an act of encapsulating the packet using a tunneling protocol. For example, as illustrated above, encapsulation may be accomplished using GRE or NVGRE.
The method 1500 may be practiced where the tenant gateway is a multi-tenant gateway (such as is illustrated at 124). In such embodiments, the act of encapsulating the packet into an encapsulated packet comprises an act of encapsulating the packet such that encapsulation includes mapping the VLAN tag to a destination network address of a multi-tenant gateway. The multi-tenant gateway is in the public cloud data center. The multi-tenant gateway is a gateway for a plurality of different virtual networks, including the designated virtual network. The act of forwarding the encapsulated packet to the tenant gateway in the designated virtual network for delivery to the identified tenant includes an act of forwarding the encapsulated packet to the multi-tenant gateway for delivery to the identified tenant.
The method 1500 may be practiced where communication is facilitated by a high-speed cross premise interconnection.
The method 1500 may be practiced where the act of forwarding the encapsulated packet to the tenant gateway in the designated virtual network for delivery to the identified tenant comprises forwarding the packet to a software load balancer to forward the encapsulated packet to a virtual machine selected from a plurality of virtual machines at the tenant gateway. For example,
The method 1500 may be practiced where the act of encapsulating the packet into an encapsulated packet includes mapping the VLAN tag and a destination address in the packet to a Tenant ID, an electronic address for the designated virtual network, and an electronic address for the tenant.
Referring now to
The method 1600 further includes an act of the tenant gateway using information in the encapsulated packet to send data from the encapsulated packet to the tenant in the designated virtual network (act 1604).
The method 1600 may further include a load balancer determining to send the encapsulated packet to an instance of a virtual machine to load balance packets coming into the designated virtual network.
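One plausible (but assumed) load-balancing policy is to hash the flow so that packets of a given flow consistently reach the same tenant-gateway virtual machine, as in the following Python sketch; the VM pool, the hashed fields, and the policy itself are illustrative assumptions.

```python
# Illustrative (assumed) load-balancing policy: hash the flow so that
# packets of the same flow consistently reach the same gateway VM.
import hashlib

GATEWAY_VMS = ["10.10.0.11", "10.10.0.12", "10.10.0.13"]  # hypothetical pool

def select_vm(src_ip: str, dst_ip: str, gre_key: int) -> str:
    """Deterministically map a flow tuple to one tenant-gateway VM."""
    digest = hashlib.sha256(f"{src_ip}|{dst_ip}|{gre_key}".encode()).digest()
    return GATEWAY_VMS[digest[0] % len(GATEWAY_VMS)]

print(select_vm("203.0.113.7", "10.10.0.4", 0x64))
```

Pinning a flow to one instance in this way spreads load across the pool while keeping each flow's packets on a single virtual machine.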
The method 1600 may be practiced where the act of the tenant gateway receiving an encapsulated packet for delivery to a tenant comprises an act of the tenant gateway receiving a GRE packet or an NVGRE packet.
The method 1600 may be practiced where the act of the tenant gateway using information in the encapsulated packet to send data from the encapsulated packet to the tenant in the designated virtual network comprises an act of converting a GRE packet to an NVGRE packet.
The method 1600 may be practiced where the tenant gateway is a multi-tenant gateway. The multi-tenant gateway is a gateway for multiple virtual networks. In such embodiments, the act of the tenant gateway receiving an encapsulated packet for delivery to a tenant in a designated virtual network comprises an act of the multi-tenant gateway receiving an encapsulated packet for delivery to a tenant in a designated virtual network from among the multiple virtual networks. The encapsulated packet is sent to the multi-tenant gateway using a destination network address for the multi-tenant gateway that was mapped from the VLAN tag. Such embodiments may further comprise an act of the multi-tenant gateway using information in the encapsulated packet to identify the designated virtual network. Such embodiments may further comprise an act of the multi-tenant gateway sending data from the encapsulated packet to the tenant in the designated virtual network.
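A hedged sketch of this multi-tenant delivery path follows. It assumes the tenant identifier arrives in the encapsulation header (for NVGRE, the 24-bit Virtual Subnet ID), and all table contents are hypothetical.

```python
# Illustrative sketch of delivery at a multi-tenant gateway: the tenant
# ID identifies the designated VNet, which then resolves the tenant.
TENANT_TO_VNET = {0x005001: "VNet-X", 0x005002: "VNet-Y"}
TENANT_ADDRESSES = {
    (0x005001, "VNet-X"): "10.10.0.4",
    (0x005002, "VNet-Y"): "10.20.0.7",
}

def deliver(tenant_id: int) -> tuple[str, str]:
    """Identify the designated virtual network from the tenant ID, then
    resolve the tenant's address within that network."""
    vnet = TENANT_TO_VNET[tenant_id]
    tenant_addr = TENANT_ADDRESSES[(tenant_id, vnet)]
    return vnet, tenant_addr

print(deliver(0x005001))  # -> ('VNet-X', '10.10.0.4')
```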
The method 1600 may be practiced where the tenant gateway corresponds to a single designated virtual network.
The method 1600 may be practiced where communication is facilitated by a high-speed cross premise interconnection.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims the benefit of U.S. Provisional Application No. 61/566,166, filed Dec. 2, 2011, titled “CONNECTING ON-PREMISE NETWORKS WITH PUBLIC CLOUDS”, which is incorporated herein by reference in its entirety.