This disclosure generally relates to use of computer systems in cloud computing environments.
Cloud computing environments may provide access to computing resources such as processors, storage devices, and software as services to client systems via communications networks. Cloud computing environments may provide scalable computing resources, with processor and storage capacity being allocated according to demand, and may provide security and privacy to prevent unauthorized access to information. The computing resources may include server computer systems connected via networks, associated data storage devices, and software that implements cloud services, such as infrastructure software for managing cloud resources, and application software that uses cloud resources. Each of the server computer systems may be a node of a network. The cloud's physical resources, such as server computer systems and associated hardware (e.g., storage devices and network routers), may be located in one or more data centers. A cloud may thus be said to be hosted by one or more data centers.
A cloud computing environment may be categorized as a public cloud or a private cloud. A public cloud may provide computing resources to the general public via the public Internet (though communications may be encrypted for information privacy). Examples of public clouds include the Microsoft® Azure™ cloud computing service provided by Microsoft Corporation, the Amazon Web Services™ cloud computing service provided by Amazon.com Inc., and the Google Cloud Platform™ cloud computing service provided by Google LLC. A private cloud may provide computing resources to only particular users via a private network or the Internet, e.g., to only users who are members of or associated with a particular organization, and may use resources in a data center hosted by, e.g., on the premises of, the particular organization, or resources hosted in a data center at another location, which may be operated by another organization. As an example, a private cloud may be implemented by a public cloud provider by, for example, creating an Internet-accessible private cloud for which access is restricted to only specific users. As another example, a private cloud may be implemented by an organization using private cloud software on hardware resources (e.g., in a data center) hosted by the organization itself (or by another organization). The VMware Cloud™ private cloud software, for example, may be used to implement a private cloud.
Cloud computing resources such as computer systems may be provisioned, e.g., allocated, to clients according to requests received from the clients. For example, a client may request access to a specified number of servers with a specified amount of storage and specified operating system and application software. Cloud providers may provision the resources accordingly and may use virtualization techniques to create one or more virtual instances of physical resources such as server computer systems. Each virtual instance may appear, to clients, to be substantially the same as the physical resource, but the virtual instances may be used more efficiently by the cloud provider to fulfill client requests. For example, multiple virtual instances of a physical server may be provided to multiple corresponding users at the same time, and each virtual instance may appear, to its user, to be the same as the physical resource. Virtual instances of a physical server may be created and managed by a hypervisor executing on the physical server. An example hypervisor is the VMware ESXi™ hypervisor provided by VMware Inc. Each virtual instance may be referred to as a virtual machine (VM). An operating system may execute in a virtual machine, and application software may execute in the virtual machine using the operating system.
In particular embodiments, a Private Cloud as a Service (PCaaS) provides units of compute and storage resources referred to as “cloud racks.” A cloud rack may correspond to one or more computer systems, referred to as nodes, and associated storage devices, which are accessible via communication networks. A set of one or more cloud racks may be virtualized using a virtualization platform for executing computational tasks in virtual machines and provided as a network-accessible service referred to as a “private cloud.” Each private cloud may be associated with a set of users, who are permitted to use the computation and storage resources of the private cloud. The cloud racks and related infrastructure, such as network switches, may be installed in one or more data centers. The physical network in each data center may be shared by multiple private clouds, which may be isolated from each other so that data in each private cloud remains private to that cloud. Further, each data center may provide private network communication with public clouds or private clouds at other locations via the public Internet.
In particular embodiments, a private-cloud computing environment may use a set of compute nodes arranged in racks and connected to a network fabric that provides communication between the nodes and, via edge switches, with external networks such as the Internet. Virtual machines located on the nodes may execute user applications and communicate with public clouds via an Internet gateway. The network fabric may be a leaf-spine network, and each rack of nodes may be associated with one or more virtual local-area networks (“VLANs”) that provide communication between nodes and the leaf switches, spine switches, and edge switches. VLANs may use private IP addresses and network isolation so that virtual machines on the VLANs are protected from being accessed via public networks. For resources associated with a private cloud, such as nodes and networking hardware, isolation of communications may be implemented by creating distinct VLANs (and subnets) for distinct racks or private clouds. The isolation may occur at level 2 of the OSI network stack, e.g., at the Ethernet level.
In particular embodiments, a user of a private cloud may associate a private IP address of their choice with their virtual machines. This private IP address may be mapped to a public IP address, which may reside within a public cloud. The user may map one or more public cloud virtual networks to the private cloud. Private-cloud virtual machines may communicate with Internet hosts, e.g., web servers or other host machines, via the public IP addresses of the Internet hosts. Internet hosts may use a public IP address associated with a private cloud virtual machine to communicate with the private-cloud virtual machines.
In particular embodiments, a system may include a plurality of first host machines implementing a public-cloud computing environment, where at least one of the first host machines includes at least one public-cloud virtual machine (VM) that performs network address translation, and a plurality of second host machines implementing a private-cloud computing environment, where at least one of the second host machines includes at least one private-cloud VM, and the public-cloud VM is configured to: receive, via a network tunnel from the private-cloud VM, one or more first packets to be sent to a public Internet Protocol (IP) address of a public network host, translate, using a network address translation (NAT) mapping, a source address of each first packet from a private IP address of the private-cloud VM to an IP address of the public-cloud VM, and send the first packet to the public IP address of the public network host.
In particular embodiments, the public-cloud VM may be further configured to receive a second packet to be sent to the private-cloud VM, translate a destination address of the second packet from the IP address of the public-cloud VM to the private IP address of the private-cloud VM, and send the second packet to the private IP address of the private-cloud VM. The second packet may have been sent by an Internet host to the private-cloud VM as a response to the first packet. The IP address of the public-cloud VM may be a private IP address for a private network of the public-cloud computing environment.
In particular embodiments, the public cloud VM may be further configured to retrieve the private IP address of the private-cloud VM from the NAT mapping, where the private IP address of the private-cloud VM was stored in the NAT mapping when the source address of the request packet was translated from the private IP address of the private-cloud VM to the IP address of the public-cloud VM. The IP address of the public-cloud VM may include a private IP address for a private network of the public-cloud computing environment.
In particular embodiments, the public-cloud VM may be further configured to receive one or more second packets from a public Internet Protocol (IP) network, each second packet having a destination address including an IP address of the public-cloud VM, translate, using a network address translation (NAT) mapping, a destination address of each second packet from the IP address of the public-cloud VM to a private IP address of one of the private-cloud VMs, send the second packet to the private IP address of the private-cloud VM, translate, using a NAT mapping, a source address of a third packet from the IP address of the private-cloud VM to a private IP address of the public-cloud VM, where the private IP address of the public-cloud VM is for a private network of the public-cloud computing environment, and send the third packet to an IP address of the public network host.
In particular embodiments, a firewall for use with private clouds may be defined as a set of rules. Each rule may have a set of fields. The fields may include a priority, protocol, source address, source port, destination address, destination port, allow or deny indicator, and direction indicator (inbound or outbound traffic). Rules may be applied to Internet, VPN, and other types of traffic by specifying a corresponding keyword, e.g., “Internet” as a source or destination address. A set of rules, which may be specified in a firewall table, may be associated with a particular subnet defined to control how network traffic is routed with respect to that subnet. A firewall table may be applied to traffic that arrives at a subnet to control how the traffic will be processed or sent from the subnet. The firewall table may be applied inside the leaf and spine, and to the gateways that send traffic to or from the Internet.
The embodiments disclosed above are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed above. Embodiments according to the disclosure are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
Although
The network 110 may include one or more network links. In particular embodiments, one or more links of the network 110 may include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 150 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 150, or a combination of two or more such links. The links need not necessarily be the same throughout the PCaaS computing environment 100.
In particular embodiments, client system 122 may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate functionalities implemented or supported by client system 122. As an example and not by way of limitation, a client system 122 may include a computer system such as a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, augmented/virtual reality device, other suitable electronic device, or any suitable combination thereof. This disclosure contemplates any suitable client systems 122. A client system 122 may enable a network user at client system 122 to access network 110. A client system 122 may enable its user to communicate with other users at other client systems 122.
In particular embodiments, the leaf-spine network provides network communication between the nodes 210, 212, 214 and edges 203a,b of the leaf/spine/edge portion 202a. The edges 203 may be switches, routers, or other suitable network communication devices. Each of the edges 203 is connected to each of the spines 204. That is, Edge-1 203a is connected to Spine 204a, Spine 204b, and Spine 204n. Further, Edge-2 203b is also connected to Spine 204a, Spine 204b, and Spine 204n. Each of the edges 203 may provide communication with a network external to the data center 200a, e.g., by routing network packets sent to and received from the external network by the spines 204. The edges 203 may be edge routers, for example.
Each column in
In particular embodiments, the nodes 210 in a rack, which may include, e.g., from 1 to a threshold number of nodes, such as 8, 16, 32, 64, 128, and so on, may be members of a cluster in a cloud computing platform. There can be multiple such clusters in a management plane (e.g., VMware vCenter® or the like). The collection of clusters in a management plane may correspond to a private cloud.
In particular embodiments, each rack of nodes 210 may be associated with one or more virtual local-area networks (“VLANs”) that provide communication between nodes 210 and the leaves, spines, and edges. VLANs may use private IP addresses so that virtual machines on the VLANs are protected from being accessed via public networks. An extended form of VLAN, referred to as VxLAN, may be used to increase the number of VLANs that may be created. Without VxLAN, VLAN identifiers are 12 bits long, so the number of VLANs is limited to 4094, which may limit the number of private clouds that can exist. The VxLAN protocol uses a longer logical network identifier that allows more VLANs and increases the number of private clouds that can exist. VLANs that are not VxLANs may be referred to herein as “ordinary” VLANs. The term “VLAN” as used herein may include VxLANs as well as ordinary VLANs.
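As a rough numerical illustration only, the following sketch compares the two identifier spaces; the 24-bit length of the VxLAN network identifier is taken from the general VxLAN specification and is an assumption not recited in this disclosure.

```python
# Illustrative arithmetic comparing identifier spaces; the 24-bit VxLAN VNI
# length is an assumption from the general VxLAN specification.
ordinary_vlan_ids = 2**12 - 2   # 12-bit VLAN IDs; 0 and 4095 are reserved, leaving 4094
vxlan_vnis = 2**24              # 24-bit VNIs allow roughly 16.7 million logical networks

print(ordinary_vlan_ids)  # 4094
print(vxlan_vnis)         # 16777216
```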
In particular embodiments, each rack of nodes 210 may be associated with a set of system-level virtual LANs (VLANs) for use by the cloud computing platform and a set of workload VLANs for use by customer virtual machines. Customer virtual machines may be, for example, virtual machines created in response to requests from customers. Workload VLANs may have routable subnets.
The system-level VLANs may include a management VLAN, a storage (VSAN) VLAN, and a VM migration VLAN. The management VLAN may be used by management entities such as the management plane (e.g., VMware vCenter® or the like), PSC, and DNS. The management VLAN may also be used by the private cloud provider control plane to control the deployment, configuration, and monitoring of virtual machines executing on the nodes. The management VLAN may have a routable subnet. The storage VLAN may be used by a hyper-converged storage layer of the cloud computing platform to distribute and replicate I/O operations. The storage VLAN subnet is not ordinarily routable. The migration VLAN may be used by the cloud computing platform to support movement of VMs between nodes, e.g., as performed by VMware vMotion® or the like. The migration VLAN subnet is not ordinarily routable, though it may be routed when a VM is moving across clusters.
In particular embodiments, since a private cloud includes a set of racks under the same management plane (e.g., VMware vCenter®), the VLAN network configuration described above also applies to the private cloud. The racks in a particular private cloud may share the same set of VLANs.
In particular embodiments, the private cloud provider may enable configuration of User Defined Routes (UDRs). UDRs may be configured using a private-cloud management portal and may be referred to as “Route Tables.” The UDRs may control how network traffic is routed within the private cloud. As an example, a customer may configure UDRs so that traffic for a specific workload VLAN is sent to a specific virtual appliance (e.g., a DPI or an IDS appliance). As another example, the customer may configure UDRs so that traffic from a subnet is sent back to on-premises infrastructure over a Site-to-Site VPN tunnel to comply with a corporate policy mandating that any access to the Internet flow through the corporate firewall. UDRs may be implemented in the data center topology using a Policy Based Routing (PBR) feature.
For resources associated with a private cloud, such as nodes and networking hardware, isolation of communications may be implemented by creating distinct VLANs (and subnets) for distinct racks or private clouds. The isolation may occur at level 2 of the OSI network stack, e.g., at the Ethernet level, since the VLANs may be implemented at level 2. To achieve a multi-tenant network environment with isolation, the private cloud provider may encapsulate network traffic that enters the leaf switches into VxLAN tunnels. Note that the VxLAN tunnels are not necessarily extended to the hosts, since in certain cloud computing environments (e.g., certain VMware environments), VxLAN is not supported at the hypervisor (e.g., ESX) layer. Thus each ESX host may send and receive network traffic on an ordinary level 2 VLAN.
In particular embodiments, a customer may create multiple separate private clouds. For example, the customer may need to isolate departments or workloads from each other, and different sets of users may be responsible for the management and consumption of the respective departments or workloads. Thus, while routing across private clouds is possible, each customer may explicitly prevent or allow such traffic by configuring Network Security Groups (NSGs). NSGs may be configured using a private cloud management portal and may be referred to as “Firewall Tables.” NSGs may be implemented for such east-west traffic within a data center using, for example, access control lists on the network switches.
In particular embodiments, to accommodate an environment in which a data center may include numerous racks, e.g., an average of 15-30 racks per deployment, and the number of customers that may be supported by a leaf-spine system, Ethernet VPN (EVPN) may be used as the control plane to control and dynamically configure the set of ports, VLANs, and tunnels associated with each customer.
In particular embodiments, to provide routing isolation across customers, a dedicated Virtual Routing and Forwarding (VRF) feature may be created in the data center topology for each customer. The VRF feature may provide the ability to route across different private clouds created by a particular customer (if the NSGs permit). The VRF feature may also provide the ability to connect the data centers to networks of other cloud service providers, the Internet, and customer on-premises environments, which may include data centers hosted at customer facilities.
The leaf-spine topology shown in
Additional spines 204, leaves 206, 208, and edges 203 may be added, e.g., to handle increased workloads. Although two rows of leaves 206, 208 are shown, there may be more or fewer rows of leaves in other examples. Other arrangements of nodes are possible. For example, the nodes 210-214 may be located in racks or cabinets in one or more cages at one or more physical locations.
In particular embodiments, each data center 200 may be assigned a unique identifier, e.g., DC-<identifier> where identifier is a different name or number for each data center. As another example, each data center 200 may be named according to the format <Colo-DC-provider-identifier>-<Colo-DC-identifier>-<Floor/Private-Cage/Cabinet-identifier>, e.g., CS-DC-Equinix-SV4-Cage-1250, where Colo-DC-provider-identifier identifies a colocation data center provider, Colo-DC-identifier identifies a data center of the provider, Floor identifies a floor number in a building of the provider, Private-Cage identifies a private cage on the identified floor, and Cabinet-identifier identifies a cabinet in the identified cage.
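A purely illustrative sketch of assembling such a name follows; the helper name and the fixed “CS-DC” prefix (taken from the example above) are assumptions, not part of any defined naming scheme.

```python
# Hypothetical helper assembling a data center name in the format described above;
# the function name and the "CS-DC" prefix (from the example) are assumptions.
def data_center_name(colo_provider: str, colo_dc: str, cage_or_cabinet: str) -> str:
    return f"CS-DC-{colo_provider}-{colo_dc}-{cage_or_cabinet}"

# Reproduces the example name from the text.
print(data_center_name("Equinix", "SV4", "Cage-1250"))  # CS-DC-Equinix-SV4-Cage-1250
```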
In particular embodiments, network connectivity to and from a private cloud data center may be provided when the private cloud infrastructure is hosted in Microsoft Azure™ data centers. Such connectivity may be achieved by connecting a pair (or multiple pairs) of edge switches 203 to the spine switches 204 southbound (e.g., one or more 100 Gbps links) and to Azure ExpressRoute™ devices northbound (e.g., 8 or 16 100 Gbps links from each edge to each ExpressRoute device). For protocol routing, BGP peering may be established between the edge devices 203 (which may be aware of the customer VRFs described above) and the ExpressRoute devices. Tunneling may be performed using dot1q. One or more address blocks may be defined in a Microsoft Azure virtual network (vNet) to be “delegated” to enable use of subnet and routing in a data center 200. Such address blocks may be divided into appropriate subnets as described above and advertised by the edge devices 203 via BGP to the ExpressRoute® network.
The private cloud to which a node is allocated may change over time. For example, when a node is decommissioned because a private cloud is deleted, or the node is removed from the private cloud for other reasons, such as a failure of the node hardware or a reduction in size of the private cloud, the node may be securely erased, prepared for use, and provisioned as a member of another private cloud.
In particular embodiments, each private cloud 116 may be associated with an organization referred to herein as a customer. A customer may be associated with one or more private clouds. Each of the nodes 210, 212, 214 may be associated with a single customer, so processes and data of different customers are not located on the same node. An example data center DC-2 200b includes a leaf/spine/edge component 202b and nodes 220, 222, 224. Each of these nodes may be allocated to, e.g., used by or ready for use by, one of the example private clouds named PC-4, PC-5, PC-6, and PC-7, or may be unallocated and available for allocation to a private cloud. Further, each of the private clouds may be associated with one of the example customers (e.g., organizations, users, or the like), which are named Cust1, Cust2, and Cust3. In the example data center DC-2 200b, nodes 220a and 220b are allocated to private cloud PC-4 and customer Cust1, nodes 220n, and 222a-n are allocated to private cloud PC-6 and customer Cust2, and nodes 224a-n are allocated to private cloud PC-7 and customer Cust3. As can be seen, different nodes in a rack, such as the nodes 220b and 220n, may be allocated to different customers Cust1 and Cust2, respectively. Alternatively, each node in a rack, such as the nodes 224a-n, may be allocated to the same customer, such as Cust3. Further, multiple racks may be allocated to the same customer.
Network communication between the data centers DC-1 and DC-2 may be provided by an external network 230. One or more edge routers of DC-1's leaf/spine/edge 202a portion may communicate with one or more edge routers of the leaf/spine/edge portion 202b via the external network 230 to provide network communication between nodes of DC-1 and nodes of DC-2.
In particular embodiments, the data center 200a may communicate, via a private line 250, with one or more networks 240 of Network Service Providers that have a point of presence (“PoP”) in the data center 200a. Further, the data center 200a may communicate via a private line 250 with one or more customer networks that are in one or more customer data centers 244, each of which may be provided by the same colocation provider as the data center 200a or provided by different colocation providers from the data center 200a. For example, a customer data center 244 may be located on the customer's premises. As another example, the data center 200a may communicate via a private line 250 with networks of one or more cloud service providers 242 that have a PoP in the data center 200a. The cloud service providers 242 may include, for example, public cloud services such as Microsoft Azure cloud services, Google Cloud services, and so on. Communication with cloud service providers may be via the Equinix® Cloud Exchange™ software-defined interconnection, for example. As another example, the data center 200a may communicate with the data center 200b via a private line 250 as an alternative to the external network 230.
In particular embodiments, the private cloud 404 may provide access to on-premises infrastructure 244 of a customer “A” so that customer “A” may use the infrastructure of the data center 200a as an extension of the on-premises infrastructure 244. Such seamless connectivity may enable migration of workloads into the private cloud data center 200a, e.g., through tools provided by the cloud services platform, and migration of workloads back to the on-premises infrastructure 244 should the customer decide to stop using the private cloud service.
The infrastructure of the data center 200a may be a component of an Azure vNet 412 and hence inherit the properties of the vNet 412 in terms of visibility and accessibility. Therefore customer “A” may connect to the infrastructure of the data center 200a in the same manner they connect to Azure vNets, e.g., using the Azure Site-to-Site VPN service or by setting up the ExpressRoute® service.
In the example of
As another example, to integrate the private cloud “B” 406 with Azure services, the infrastructure of data center 200a may be peered with one or more vNets 414 in the Azure Subscriptions of customer “B”. Such peering may be performed using the ExpressRoute connectivity described above.
In particular embodiments, a customer may create private clouds in multiple data centers 200. The data centers 200 may be connected to a public cloud (e.g., Azure). Private clouds deployed in different data centers may communicate with each other using tunnels through the data centers, and this connectivity may be extended using a global IP network, e.g., a Global Underlay Private IP Network.
In particular embodiments, different customers may share the same physical network, but network communications of each customer may be isolated from other customers by providing an isolated subnet for each customer. Subnets of one customer may be isolated from subnets of other customers using Ethernet VPN (“EVPN”). Each subnet may be associated with a private cloud, so the communications associated with one private cloud may be separate from communications associated with other private clouds. In this way, VLANs of different private clouds may be isolated from each other.
Isolation between different customers and isolation for the cloud service provider system network services may be provided by using EVPNs to separate VLANs. The EVPNs may be, e.g., L2 or L3 EVPNs supporting VxLAN. The network switches, e.g., the leaves, spines, and edges, may provide the EVPN features.
An overlay network 502 may be created on the underlay network 504 for each private cloud to separate network traffic of the private cloud from other private clouds. Each overlay network 502 may be a VxLAN overlay. For example, a Pepsi overlay 502a, which has a subnet with a 10/8 prefix (for host addresses such as 10.2.1.1), may be associated with a first customer. A Cola overlay 502b may have a different subnet with a 10/8 prefix (for host addresses such as 10.2.1.1) and may be associated with a second customer. Since different customers have different private clouds, using different overlays for different customers separates the network traffic of different customers to provide data privacy for each customer. EVPN may be used as a control plane to manage the overlays 502.
In particular embodiments, a user of a private cloud may associate a private IP address of their choice with their virtual machines. This private IP address may be mapped to a public IP address, which may reside within a public cloud. The user may map one or more public IP addresses to their resources inside the private cloud. Private-cloud virtual machines may communicate with Internet hosts, e.g., web servers, via public IP addresses associated with the private-cloud virtual machines. Network traffic may be sent from the private cloud to the public cloud using the private IP address, with public IP addresses being used as traffic leaves or enters the public cloud.
In particular embodiments, private-cloud VMs 604 may send data in the form of outgoing packets (OUTP) 606 to a destination public IP address of an Internet host 632. The outgoing packets 606a may be sent through a networking fabric such as a leaf-spine network, from which the outgoing packets 606a may emerge as outgoing packets 606b. The outgoing packets 606b may be sent through a network tunnel 610 to a public cloud 616 in which a public IP gateway 622 may receive and forward the tunneled packets 606b to the destination public IP address as packets 606c. In particular embodiments, as described above with reference to
Note that the three packets 606a, 606b, and 606c shown at different points in
The public cloud 616 may pass the packets 606 to the public IP gateway 622, which may perform the network address translation on the packets 606 and send the packets 606 to the public IP address. As a result of the network address translation, the packets 606c may have public source IP addresses (corresponding to the private-cloud VM 604 that sent the packets 606a), so the packets 606c appear to be from a public IP address to which the host 632 can send responses in the form of incoming packets (INP) 608a. The Internet host 632 may send incoming packets (INP) 608 to the private cloud 602, e.g., as responses to the outgoing packets 606, addressed to the private-cloud virtual machine 604a that corresponds to a public IP address previously received by the host 632. The incoming packets 608 need not be responses, and may instead be, for example, request messages or other messages sent by the Internet host 632. The incoming packets 608a may pass through a communication network, as shown by the path 614, and be received by an edge router 203b, which may send the incoming packets 608b on to the private-cloud VM 604a through the leaf-spine network.
In particular embodiments, the three packets 608a, 608b, and 608c shown at different points in
The following paragraphs describe the outgoing and incoming communications in further detail. In particular embodiments, as introduced above, an application or other program code executing on a VM 604a of a private cloud 602 may send outgoing packets 606a, which may be, e.g., an HTTP request or other network communication, to a public destination, such as a public Internet hostname (e.g., Server.com) or a public IP address (e.g., 128.192.1.1). The outgoing packets 606a may be sent from a network interface of a node 212, on which the VM 604a is executing, to a network, such as a leaf-spine network having spines 204.
The leaf-spine network may send the packets 606b through an edge 203b, e.g., via a static VxLAN, to a public cloud 616 through a tunnel 610. The packets 606b may be sent as shown via a connectivity path 612 through the tunnel 610 to an internal load balancer 620 of the public cloud 616 via a public cloud private virtual network 618, e.g., an Azure vNet or the like. The packets 606b may be received by an Internet gateway, which may send them through the tunnel 610 to the internal load balancer 620. The tunnel 610 may be, e.g., a VxLAN tunnel. The underlying connectivity path 612 for the tunnel 610 may be provided by, for example, Microsoft ExpressRoute or the like.
The private virtual network 618 may provide communication between the internal load balancer 620, a public IP gateway 622 (which may execute on a virtual machine, for example), and one or more virtual machines 626a,b, which may perform processing specified by users or processing related to operation of the public cloud 616, for example. The packets 606b may arrive at the internal load balancer 620, which may send them on to the public IP gateway 622. When the packets 606b arrive at the public IP gateway 622, their source addresses may be the private IP address of the private-cloud VM 604a (“CSIP”), and their destination addresses may be the public IP address of the public network host 632, as specified by the private cloud VM 604 when the outgoing packets 606a were sent.
In particular embodiments, the public IP gateway 622 may use a NAT component 624 to perform network address translation on the outgoing packets 606b. The NAT component may perform the network address translation by translating, using a NAT mapping, the source address of the packet 606 from the private IP address of the private-cloud VM 604a to an IP address of one of the public cloud virtual machines 626 so that the packets 606 appear to be from a public cloud virtual machine 626. The IP address of the public cloud virtual machine 626 may be a public IP address, e.g., an Azure VIP address, or a private address, e.g., an Azure DIP address, which may be private to the virtual network 618, for example. In the latter case, the DIP address may be translated to a VIP address prior to sending the packets 606c to the Internet host 632, e.g., by an external load balancer 630. The NAT mapping used by the NAT component 624 may be a lookup table that maps private-cloud VM addresses to public-cloud VM addresses, and vice-versa.
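A minimal sketch of such a lookup-table NAT mapping is shown below; the class, method names, and address values are hypothetical assumptions (the disclosure does not specify an implementation). The forward entry rewrites the source of outgoing packets, and the stored reverse entry restores the private-cloud address for return traffic.

```python
# Minimal sketch of a NAT mapping as a lookup table; class, method names, and
# the address 10.8.0.4 are hypothetical assumptions, not an API of this disclosure.
class NatMapping:
    def __init__(self):
        self.private_to_public = {}   # private-cloud VM IP -> public-cloud VM IP
        self.public_to_private = {}   # public-cloud VM IP  -> private-cloud VM IP

    def translate_outbound(self, packet: dict, public_cloud_vm_ip: str) -> dict:
        """Rewrite the source address of an outgoing packet (e.g., 606b) so it
        appears to come from a public-cloud VM, and remember the reverse mapping."""
        private_ip = packet["src"]
        self.private_to_public[private_ip] = public_cloud_vm_ip
        self.public_to_private[public_cloud_vm_ip] = private_ip
        return {**packet, "src": public_cloud_vm_ip}

    def translate_inbound(self, packet: dict) -> dict:
        """Rewrite the destination address of an incoming packet (e.g., 608a)
        back to the private-cloud VM recorded in the reverse mapping."""
        return {**packet, "dst": self.public_to_private[packet["dst"]]}

# Example: an outgoing packet from private-cloud VM 10.0.0.5 to an Internet host.
nat = NatMapping()
out_pkt = {"src": "10.0.0.5", "dst": "128.192.1.1"}
print(nat.translate_outbound(out_pkt, public_cloud_vm_ip="10.8.0.4"))
# A response addressed to the public-cloud VM is translated back.
print(nat.translate_inbound({"src": "128.192.1.1", "dst": "10.8.0.4"}))
```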
In particular embodiments, the public IP gateway may receive, from the public cloud VM, a packet to be sent to the private-cloud VM but having as a destination address an IP address of the public-cloud VM. The public IP gateway may translate the destination address of the packet from the IP address of the public-cloud VM to the private IP address of the private-cloud VM and send the packet to the private IP address of the private-cloud VM. The IP address of the public-cloud VM may be a private IP address for a private network of the public-cloud computing environment, so in particular embodiments, the public IP gateway may receive, from the public cloud VM, a packet to be sent to the private-cloud VM but having as a destination address the private IP address of the public-cloud VM. The public IP gateway may translate the destination address of the packet from the private IP address of the public-cloud VM to the IP address of the private-cloud VM and send the packet to the IP address of the private-cloud VM.
In particular embodiments, when an address of a packet is overwritten or replaced by a translation operation, the overwritten or replaced address may be stored, e.g., in a NAT mapping, for subsequent retrieval to recover the original request's IP address. This case may occur when a corresponding message such as a response is sent in the reverse direction. The network address translation may further include translating, using a NAT mapping, the destination address of the first packet from the private IP address of the public-cloud VM 626 to the public IP address of the public network host 632. The NAT mapping may be a lookup table that maps private IP addresses of public-cloud VMs to IP addresses of private-cloud VMs, and vice-versa.
In particular embodiments, the private IP address of the private cloud VM 604a may have been previously stored in a NAT mapping, e.g., when a request packet 608a from an Internet host 632 (for which the current message 606c is a response) was sent to a private cloud VM 604a, and the request packet 608a's source address (which was the public IP address of the public network host 632) was replaced with a different address (such as the private IP address of the private cloud VM 604a). The public IP gateway may retrieve the private IP address of the private-cloud VM from the NAT mapping.
In particular embodiments, the public IP address of the public network host 632 may have been previously stored in a NAT mapping, e.g., when a request packet 608a from an Internet host 632 (for which the current message 606c is a response) was sent to a private cloud VM 604a, and the request packet 608a's source address (which was the public IP address of the public network host 632) was replaced with a different address (such as the private IP address of the private cloud VM 604a).
In particular embodiments, the public IP gateway may receive one or more second packets from a public Internet Protocol (IP) network. Each second packet may have a destination address comprising an IP address of the public-cloud VM. The public IP gateway may translate, using a network address translation (NAT) mapping, a destination address of each second packet from the IP address of the public-cloud VM to a private IP address of one of the private-cloud VMs. The public IP gateway may send the second packet to the private IP address of the private-cloud VM, translate, using a NAT mapping, a source address of a third packet from the IP address of the private-cloud VM to a private IP address of the public-cloud VM, and send the third packet to an IP address of the public network host.
In particular embodiments, the public IP gateway 622 may send the packets 606 to the external load balancer 630 after performing network address translation. The external load balancer 630 may send the packets 606 to the public network host 632 as outgoing packets 606c. The Internet host 632, e.g., Server.com, may then receive the outgoing packets 606c.
In particular embodiments, the Internet host 632 may send data, e.g., an HTTP response, as one or more incoming packets 608a to the external load balancer 630, which may send the incoming packets 608a to the public IP gateway 622. If the public IP addresses of the public cloud VM in the destination address portions of the incoming packets 608a are VIP addresses, the external load balancer 630 may translate them to DIP addresses. The public IP gateway 622 may perform network address translation on the incoming packets 608a by translating, using a network address translation (NAT) mapping, a destination address of the packet 608a from the public IP address of the public cloud VM to a private IP address of one of the private-cloud VMs. The network address translation may further include translating, using a NAT mapping, a source address of the second packet from a public IP address of a public network host to a private IP address of the public cloud VM.
In particular embodiments, gateways may be used to enable communication between private clouds and external networks such as the Internet. Public clouds, e.g., Azure or the like, may be accessed via the Internet. An Internet gateway may reside within the public cloud. The public cloud may have an edge interface, e.g., the Azure edge or the like. Network communication between data centers and the public cloud may be via a circuit connection, e.g., Azure ExpressRoute or the like. Network tunneling may be performed through the circuit. However, the circuit does not allow access to network hosts outside the private cloud, such as Internet hosts. For example, if a private cloud user wants to access a web site and the request traffic comes into the public cloud through the circuit, the request traffic does not reach the web site, since the traffic is dropped by the public cloud.
In particular embodiments, bi-directional network communication may be established, using a VxLAN tunnel with EVPN, from hosts outside the public cloud, e.g., from hosts in the private cloud, through the circuit and public cloud, to network hosts that are external to the public cloud, such as Internet hosts. The tunnel has a destination IP of the public cloud's load balancer, which is not dropped by the public cloud. Inside the tunnel, the traffic has the original IP, e.g., as sent from the private cloud.
Nodes in the private cloud may use a Linux kernel that performs NAT operations on network traffic. A decapsulation module may decapsulate the tunnel by removing the outside header and sending the packets of the original traffic. There may be multiple tunnels, which are monitored for failures. If a tunnel fails, another one may be used. The real IP addresses of the VMs in the public cloud may overlap with addresses in the private cloud. For example, a customer may initiate a request from a VM having address 10.0.0.5 in the private cloud to an Internet host. There may be a host in the underlay in the public cloud that has the same address. The private cloud host addresses may be separated from the public cloud host addresses using an overlay/underlay technique, in which the public cloud network corresponds to an underlay, and the private clouds correspond to overlays. For example, the public IP address 50.40.50.20 may be allocated on demand to a gateway and attached to it.
In particular embodiments, network traffic such as the outgoing packets 606b may be sent to the internal load balancer 620's IP address on the other end of the tunnel 610. The internal load balancer 620 may perform load-balancing on traffic by sending the outgoing packets 606 to particular virtual machines 626. The internal load balancer 620 may select one of the virtual machines 626 to receive packets 606 and change the destination IP address of the packets 606 so that they are sent to the host on which the selected virtual machine 626 is located. For example, if the load balancer 620 selects VM 626a for a packet 606, then the destination address of the packet 606 may be set to the address of the host on which VM 626a is located. To select one of the virtual machines 626, the load balancer 620 may send health probes to the virtual machines 626 to identify operational virtual machines 626 and select one of the virtual machines 626 based on the results of the health probes, among other factors such as a random factor or a hash value based on the source port. The VM 626 may receive the packets at the public cloud and send the packets to the external load balancer 630. The external load balancer 630 may change each packet's source IP to a public IP address of the VM 626, e.g., a VIP address, and send the packet to the destination address specified in the packet (e.g., Rand:80). The public IP address for the packet may be selected from a pool of pre-allocated public IP addresses. When a response packet 608c is sent, if the sending host in the private cloud 602 does not have a public IP address, the response packet 608c may be sent with an internal IP and arrive at a VM 626. The VM 626 may determine that the packet has an internal IP and select and assign a public IP from a pool. If there is already a public IP that corresponds to the internal IP, e.g., of a web server, then the existence of the public IP that corresponds to the internal IP may be detected, and the packet may be sent to a gateway corresponding to the public IP so that the packet is sent using the public IP. A mapping of public IPs may be established by a user, e.g., using a portal, in which case the public IP that corresponds to a packet's address, e.g., the public IP that corresponds to the internal IP 10.0.0.5, may be identified using the mapping.
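A minimal sketch of the selection step follows, with hypothetical names and health probing reduced to a static dictionary; hashing the packet's source port is used here as one of the possible selection factors mentioned above, not as a required policy.

```python
# Illustrative sketch of internal-load-balancer VM selection; names are
# hypothetical, and health probes are simulated with a static dict.
import hashlib

def select_backend_vm(packet: dict, vm_health: dict) -> str:
    """Pick a healthy public-cloud VM, hashing the source port so a given
    flow tends to stick to one VM."""
    healthy = sorted(vm for vm, ok in vm_health.items() if ok)
    if not healthy:
        raise RuntimeError("no operational backend VMs")
    digest = hashlib.sha256(str(packet["src_port"]).encode()).hexdigest()
    return healthy[int(digest, 16) % len(healthy)]

# Example: VMs 626a and 626b both pass their health probes; one is selected
# per source port and the packet's destination is rewritten accordingly.
health = {"vm-626a": True, "vm-626b": True}
pkt = {"src": "10.0.0.5", "dst": "128.192.1.1", "src_port": 49512}
selected = select_backend_vm(pkt, health)
pkt["dst"] = selected
print(selected, pkt)
```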
In particular embodiments, a set of rules, which may be specified in a route table, may be associated with a particular subnet defined to control how network traffic is routed with respect to that subnet. A route table may be applied to traffic that arrives at a subnet to control how the traffic will be processed or sent from the subnet. The route table may be applied inside the leaf and spine, and to the gateways that send traffic to or from the Internet.
In particular embodiments, EVPN may be used to create Layer 2 VLANs as well as Layer 3 VRF (Virtual Routing and Forwarding) instances. Multiple instances of a route table may coexist in a router. Because the routing instances may be independent, the same or overlapping IP addresses can be used without conflicting with each other. Network functionality is improved because network paths can be segmented without requiring multiple routers. The gateway that provides access to the Internet for the public IP is on a subnet for the VMs to route to. The gateway to the Internet is a service deployed in Azure (in a subscription). To get to the gateway, a subnet that extends to the private cloud is needed. This endpoint does not support EVPN, so the EVPN fabric is stitched together with static (flood-and-learn) VxLAN tunnels so that redundancy is preserved (ordinarily, connecting in a static manner would lose the redundancy because there would be no control plane to handle node failures and recoveries). The packet sent to the Internet from the EVPN fabric is routed to one of multiple tunnels to provide redundancy for the connection. Such routing decisions may take place on the leaf switch so that non-operational tunnels are excluded. Certain switch models may not support EVPN and static VxLAN simultaneously, in which case static VxLAN tunnels may be terminated on the spines and bridged to the leaf using ordinary VLANs.
Each route table may have a list of routes, some of which may be provided by users. For example, route table RT1 may have user-configured routes R1 and R2, and route table RT2 may have user-configured routes R3 and R4. Users may specify that the user-configured routes apply to only particular subnets, in which case the user-configured routes for each subnet are not used for other subnets. A default route table may be included in each VRF to provide connectivity to the Internet, VPNs, and the like.
In particular embodiments, route leaking may be implemented using the BGP protocol, which can import and export specified routes and route targets. As part of the import and export processes, route maps can be applied. Route maps may perform filtering to exclude certain routes from the import or export process. The user-defined routes may be filtered out from the import and export processes using the route maps.
In particular embodiments, to define a route table, a user may specify a subnet to which the route table is to be applied. The user may specify one or more route table entries. Each route table entry may include a prefix and a next hop. The next hop may indicate that the traffic is to be dropped, sent to another IP address in the private cloud (also called an appliance ID), sent through a VPN, or sent through the Internet. Each entry in a route table may include a source address or source subnet prefix, a next hop type, and a next hop address. The source subnet prefix may be a route prefix, e.g., 192.168.1.0/24. A route table entry may be applied to incoming traffic having a destination address that matches the destination address or subnet address prefix. The next hop type may be, e.g., “IP address” to indicate that traffic is to be routed to an IP address specified by the next hop address associated with the table entry. The next hop type may alternatively be “Virtual Appliance,” “Virtual Network,” “VPN,” “Virtual Network Gateway,” “Internet,” or “None” to send the traffic to the respective places. For example, specifying “VPN” may cause the traffic to be sent to the gateway associated with the VPN, in which case the gateway address may be determined automatically. Specifying the “None” type causes the traffic to be discarded. The route table entries may specify unidirectional routing. If the reverse direction traffic is to be routed along the reverse sequence of next hops, then route table entries may be specified separately for the reverse traffic. These route table entries may be implemented by creating mappings in the network switches of the data center. For example, Policy Based Routes based on the Route Table may be created in the leaf switches of the data center. The Policy Based Routes may be implemented using PBRs applied to the Switched Virtual Interfaces (SVIs) of the corresponding VLAN. In EVPN, SVIs corresponding to a VLAN may be on multiple leaves, and the policy routes may be applied to the SVIs on each leaf.
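A minimal sketch of such a route table entry and a lookup against it is shown below; the field names, dictionary layout, and longest-prefix-match policy are illustrative assumptions rather than a defined behavior of the route tables described above.

```python
# Hypothetical sketch of user-defined route (UDR) entries and a lookup; field
# names and the longest-prefix-match policy are illustrative assumptions.
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class RouteEntry:
    prefix: str              # e.g., "192.168.1.0/24"
    next_hop_type: str       # "IP address", "Virtual Appliance", "VPN", "Internet", "None", ...
    next_hop: Optional[str] = None

def lookup(route_table, dst_ip: str) -> Optional[RouteEntry]:
    """Return the most specific entry whose prefix contains dst_ip."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [r for r in route_table if addr in ipaddress.ip_network(r.prefix)]
    return max(matches, key=lambda r: ipaddress.ip_network(r.prefix).prefixlen, default=None)

table = [
    RouteEntry("0.0.0.0/0", "Internet"),
    RouteEntry("192.168.1.0/24", "IP address", "10.1.2.3"),  # e.g., a virtual appliance address
    RouteEntry("10.50.0.0/16", "None"),                      # matching traffic is discarded
]
print(lookup(table, "192.168.1.77"))  # routed to next hop 10.1.2.3
print(lookup(table, "10.50.3.9"))     # next hop type "None": dropped
print(lookup(table, "8.8.8.8"))       # default route: Internet
```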
In particular embodiments, there may be two or more spines 204 in a data center network, as shown in
In particular embodiments, at a high level, traffic from the Internet is routed to the leaves 206, e.g., to the gateway that exists on each leaf 206. There is a route that has multiple next hops on a pair of static VxLANs. Each of the static VxLANs may have a gateway on each leaf 206. If a link were to fail, traffic would automatically be routed to the other VxLAN, since the gateway(s) that failed would no longer respond (e.g., to ARP requests). This IP-level active/active redundancy may be provided by the public cloud (e.g., Azure). The redundancy is at the VxLAN level. Since there is a static VxLAN stitching point on the spines 204, a failure of a spine 204 is not handled by the IP redundancy, since the stitching point is at a specific spine, and the spine's redundancy is not ensured by the IP redundancy.
In particular embodiments, to recover from failure of a spine 204a, a different spine 204b may be used. Any communication loss related to the spine going down or network links being lost can be handled. When the spine 204 or link fails, the periodic ARP refresh for each of the gateway addresses on the particular static VxLAN times out, e.g., after a few seconds, and traffic switches over to the other spine. Other mechanisms can be used to detect failure, e.g., BFD (bidirectional forwarding detection) instead of ARP timeout. For example, a fast keepalive technique may be used to switch traffic to another spine 204 in tens of milliseconds.
In particular embodiments, the routing protocol is not used to switch traffic to another spine when a spine fails, since the network is statically routed. Instead, next hop unreachability detection may be used to switch traffic to another spine 204b when a spine 204a or a leaf 206, 208 fails. The logic to detect an unreachable next hop may be performed at the leaves 206, 208 and/or at the public-cloud VMs 626. The switch to another spine 204 may occur when an unreachable next hop is detected. Leaves and edge routers may send traffic only to spines for which next hops are reachable. If an unreachable next hop is detected for a spine, the senders (e.g., leaves, edge routers, and other spines) stop sending traffic to that spine and instead send the traffic to the spines having reachable next hops. A data structure in the ARP table may be used to identify which spines are unreachable.
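A minimal sketch of this exclusion logic follows; the names are hypothetical, and the reachability dictionary merely stands in for the ARP-table data structure mentioned above.

```python
# Illustrative sketch of excluding spines with unreachable next hops; the
# dict stands in for the ARP-table-like structure described above.
def reachable_spines(next_hop_reachable: dict) -> list:
    """Return the spines whose next hops are currently reachable."""
    return [spine for spine, reachable in next_hop_reachable.items() if reachable]

# Example: spine 204a's next hop has stopped responding, so traffic is sent
# only toward spine 204b until 204a recovers.
state = {"spine-204a": False, "spine-204b": True}
print(reachable_spines(state))   # ['spine-204b']
```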
In particular embodiments, the firewall table rules may be implemented using Policy Based Routes (PBR) in the network switches, such as the leaf, spine, and edge switches shown in
In particular embodiments, rules that specify the “Internet” keyword in place of an actual subnet or address may be compiled differently from rules that specify an actual address. Traffic may be sent to the Internet from static VxLAN subnets. The actual subnet from which the traffic originates is not associated with the static VxLANs. The identity of the actual subnet from which the traffic was sent is thus not available from the subnet being used to send the Internet traffic and is instead determined by the ACL rule.
In particular embodiments, firewall table rules that specify the Internet keyword instead of a specific subnet may be compiled differently from rules that specify an actual subnet or address. As introduced above, when a user specifies the Internet keyword in a firewall rule, e.g., as source address = “Internet” or destination address = “Internet”, the traffic to which the rule applies is not immediately identifiable. Thus the rule is not applied directly to the rule's associated subnet on the switch. Instead, the rule is applied (e.g., by the switch) to the static VxLAN that receives Internet traffic, even though the user has specified that the firewall rule is to be associated with a specific source subnet. The actual source subnet of the traffic, which is the one associated with the firewall rule by the user, is thus not immediately identifiable.
In particular embodiments, a firewall for use with private clouds may be defined as a set of rules. Each rule may have a set of fields. The fields may include priority, protocol, source address, source port, destination address, destination port, allow or deny, and direction (inbound or outbound traffic). Rules may be applied to Internet, VPN, and other types of traffic by specifying a corresponding keyword, e.g., “Internet” as a source or destination address.
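A minimal sketch of a firewall rule with the fields listed above, and of evaluating it against a packet, is shown below; the names, the first-match-by-priority policy, the default deny, and the treatment of the “Internet” keyword as a simple wildcard are assumptions made for illustration only.

```python
# Hypothetical firewall-rule representation and evaluation; field names follow
# the fields listed above, but the matching policy is an assumption, and the
# "Internet" keyword is simplified here to a wildcard.
from dataclasses import dataclass

@dataclass
class FirewallRule:
    priority: int
    protocol: str       # e.g., "tcp", "udp", "any"
    src: str            # address/prefix or a keyword such as "Internet"
    src_port: str       # port number or "any"
    dst: str
    dst_port: str
    action: str         # "allow" or "deny"
    direction: str      # "inbound" or "outbound"

def _field_matches(rule_value: str, packet_value: str) -> bool:
    return rule_value in ("any", "Internet") or rule_value == packet_value

def evaluate(rules: list, packet: dict) -> str:
    """Return the action of the highest-priority rule matching the packet."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if (rule.direction == packet["direction"]
                and _field_matches(rule.protocol, packet["protocol"])
                and _field_matches(rule.src, packet["src"])
                and _field_matches(rule.src_port, packet["src_port"])
                and _field_matches(rule.dst, packet["dst"])
                and _field_matches(rule.dst_port, packet["dst_port"])):
            return rule.action
    return "deny"   # assumed default when no rule matches

rules = [FirewallRule(100, "tcp", "192.168.1.10", "any", "Internet", "443", "allow", "outbound")]
pkt = {"direction": "outbound", "protocol": "tcp", "src": "192.168.1.10",
       "src_port": "49512", "dst": "Internet", "dst_port": "443"}
print(evaluate(rules, pkt))   # allow
```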
In particular embodiments, when a firewall rule associated (by the user) with a particular subnet specifies the keyword “Internet” as the source address, instead of applying the rule directly to the switch, an NSG (Network Security Group) rule may be created on the public IP gateway of the public cloud (e.g., Azure). The NSG rule has a condition specifying that the destination address, which is specified as a prefix (e.g., a subnet), contains a local address (e.g., of a specific VM) corresponding to the public IP address of the public IP gateway. The public IP address may be, e.g., the local NIC address of the public IP gateway.
In particular embodiments, when a firewall rule associated (by the user) with a particular subnet specifies the keyword “Internet” as the destination address, the source address of the actual subnet may be identified and applied to traffic evaluated by the particular rule. To identify the source address of the actual subnet, the chain of firewall table rules may be walked (traversed) backwards, starting from the subnet the firewall rule is associated with, to identify the actual source(s) of the traffic. A source address filter for an ACL may be determined based on an intersection between the actual source specified in the rule and the subnet associated (by the user) with the route if the next hop of the route is in the subnet associated with the firewall. An ACL rule may be created in each gateway to the Internet (e.g., on the static VxLANs). Further, there may be other source addresses specified in rules that have “Internet” as the destination. An ACL rule that filters based on the actual source of the traffic may then be generated and stored in each relevant network switch. The ACL rule may be evaluated for all traffic that passes through the switch. The ACL rule may be stored in each leaf switch and/or each spine switch. This process of walking the chain may be performed for existing rules when a firewall configuration is created or changed and at least one of the rules uses the “Internet” keyword for a source or destination address.
In particular embodiments, for a specified firewall rule associated (by the user) with a specified subnet and having a Destination Address=Internet, the ACL(s) on the SVI of the VxLAN to the public IP gateway, and also of the public Internet gateway of the public cloud, may be generated as follows when the destination address is Internet: For a firewall rule referred to as “this rule,” create an ACL statement having as its source the intersection of the Source Address of this rule and this rule's associated subnet. If the next hop of a firewall table rule “UDR” (user-defined route) is in this rule's associated subnet, then create another ACL statement having as its source the intersection of the Source Address of this rule and the subnet associated with the other rule “UDR”, and having as its destination the intersection of the Destination Address of this rule with the Route Prefix of the “UDR” rule.
Next, walk the chain of UDRs and create ACL statements using the intersection of each Route Prefix of the other UDRs in the chain with the Destination Address of this firewall table rule. That is, walk the chain of other UDRs by finding another firewall table rule “UDRi” that satisfies the above condition for “UDR” or has a next hop in the subnet associated with the other rule “UDR”. These steps may be repeated in a recursive manner by identifying one or more additional rules “UDRi+1” that have a next hop in the subnet associated with rule “UDRi” and creating an ACL statement having as its source the intersection of the Source Address of this rule and the subnet associated with the rule “UDRi+1”, and having as its destination the intersection of the Destination Address of this rule with the Route Prefix of the rule “UDRi+1”, until there are no more rules having a next hop in one of the identified subnets.
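The chain-walking procedure described above may be sketched, for illustration only, roughly as follows. The rule field names, helper functions, and dictionary representation are assumptions made for this sketch and are not the disclosed implementation; prefix intersection is taken as the more specific of two prefixes when one contains the other.

```python
import ipaddress

def _net(prefix):
    # strict=False tolerates prefixes whose host bits are set.
    return ipaddress.ip_network(prefix, strict=False)

def intersect(a, b):
    """Intersection of two prefixes; the "Internet" keyword matches anything."""
    if a == "Internet":
        return b
    if b == "Internet":
        return a
    na, nb = _net(a), _net(b)
    if nb.subnet_of(na):
        return str(nb)   # the more specific prefix is the intersection
    if na.subnet_of(nb):
        return str(na)
    return None          # disjoint prefixes

def next_hop_in_subnet(next_hop, subnet):
    return next_hop is not None and ipaddress.ip_address(next_hop) in _net(subnet)

def compile_internet_destination_rule(this_rule, all_rules):
    """Generate ACL statements for a firewall rule whose destination is "Internet".

    Illustrative sketch of the recursive walk over user-defined routes (UDRs); rules
    are assumed to be dicts with source_address, destination_address, associated_subnet,
    route_prefix, next_hop, and action keys.
    """
    acls = [{
        # ACL for traffic leaving the rule's own associated subnet.
        "source": intersect(this_rule["source_address"], this_rule["associated_subnet"]),
        "destination": this_rule["destination_address"],
        "action": this_rule["action"],
    }]
    identified = {this_rule["associated_subnet"]}
    frontier = {this_rule["associated_subnet"]}
    while frontier:  # repeat until no rule has a next hop in an identified subnet
        nxt = set()
        for udr in all_rules:
            if udr is this_rule or udr["associated_subnet"] in identified:
                continue
            if any(next_hop_in_subnet(udr.get("next_hop"), s) for s in frontier):
                src = intersect(this_rule["source_address"], udr["associated_subnet"])
                dst = intersect(this_rule["destination_address"], udr["route_prefix"])
                if src is not None and dst is not None:
                    acls.append({"source": src, "destination": dst,
                                 "action": this_rule["action"]})
                identified.add(udr["associated_subnet"])
                nxt.add(udr["associated_subnet"])
        frontier = nxt
    return acls
```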
Further, for a firewall rule having a Destination Address=Internet, an NSG (Network Security Group) rule may be created on the public IP gateway of the public cloud (e.g., Azure). The NSG rule has a condition specifying that the source address, which may be specified as a prefix (e.g., a subnet), contains a local address (e.g., of a specific VM) corresponding to the public IP address of the public IP gateway.
As an example, a firewall rule specifies “Internet” as the destination address, is associated with subnet 192.168.2.0/24, and has source address 192.168.1.10/32.
In this example, since the firewall rule applies to Internet destinations, as described above it is not compiled to an ACL rule on the switch for the SVI of the associated subnet (192.168.2.0/24). Instead, since the rule is to be applied to Internet-bound traffic, the firewall rule applies to the static VxLANs that process subnet traffic bound for the gateways. If the rule were applied as-is on the static VxLANs to traffic having the source address 192.168.1.10, then traffic received directly from a subnet such as 192.168.1.0/24 and bound directly for the Internet would also be processed and filtered by the rule, which is not the specified behavior for the rule. The rule, as specified, should apply to traffic exiting the 192.168.2.0/24 subnet, or traffic that is chained to that subnet and bound for the Internet. Thus, firewall table rules specifying Internet destinations may be modified to filter the traffic so that the example rule applies only to traffic exiting the 192.168.2.0/24 subnet.
In particular embodiments, a firewall table rule that applies to Internet destinations may be compiled by generating an ACL rule that has a source address based on an intersection between the source address of the firewall table rule and the rule's associated subnet.
ACL Source = Rule Source Address ∩ Rule Subnet
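As a minimal illustration of this intersection, the standard Python ipaddress module may be used; the helper name below is not from this disclosure, and the example prefixes are hypothetical. When one prefix contains the other, the intersection is the more specific prefix; disjoint prefixes have an empty intersection.

```python
import ipaddress

def prefix_intersection(a, b):
    """Return the intersection of two CIDR prefixes, or None if they are disjoint."""
    na = ipaddress.ip_network(a, strict=False)
    nb = ipaddress.ip_network(b, strict=False)
    if nb.subnet_of(na):
        return str(nb)
    if na.subnet_of(nb):
        return str(na)
    return None

# The rule's source address intersected with its associated subnet becomes the ACL source.
print(prefix_intersection("192.168.0.0/16", "192.168.1.0/24"))  # -> 192.168.1.0/24
print(prefix_intersection("10.0.0.0/8", "192.168.1.0/24"))      # -> None (disjoint)
```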
An example firewall table rule is shown in the table below.
The example firewall table rule has source address 192.168.1.0/23 and associated subnet 192.168.1.0/24 (where /24 indicates that the first three octets are used for the subnet prefix). The intersection of the source address and the subnet is 192.168.1.0/24. The result of the intersection may then be used as the source for a generated ACL statement to be applied to a suitable network, such as to the static VxLAN, e.g., by storing the generated ACL in a switch of each VxLAN that carries Internet traffic. The generated ACL statement, named ACL-1, is shown below.
Further, as described above, if the next hop of another route table rule “UDR” (user-defined route) is in this firewall table rule's associated subnet, then create another ACL statement having its source as the intersection of the Source Address of this firewall table rule and the subnet associated with the other rule “UDR”, and having its destination as the intersection of the Destination Address of this firewall table rule with the Route Prefix of the other rule “UDR”.
For example, suppose there is another firewall table rule associated with subnet 192.168.2.0/24 having an entry with a source address of Internet and a next hop of 192.168.1.10, as shown below. This firewall table rule may correspond to the “UDR” rule in the description above.
The next hop of the UDR rule, 192.168.1.10, is in the firewall table rule's associated subnet 192.168.1.0/24. Accordingly, a new ACL statement may be created having as its source the intersection of the Source Address of this firewall table rule (192.168.1.0/23) and the subnet associated with the UDR rule (192.168.2.0/24). The intersection is 192.168.2.0/24, which is used as the new ACL statement's source. Further, the new ACL statement may have as its destination the intersection of the Destination Address of this firewall table rule with the Route Prefix of the “UDR” rule. Thus, the ACL destination may be Internet. The resulting ACL, named ACL-2, is shown below.
This generated ACL statement may be applied to a suitable network, e.g., by storing the generated ACL statement in a switch of each VxLAN that carries Internet traffic. Further, as described above, these steps may be repeated in a recursive manner by identifying one or more additional rules “UDRi+1” that have a next hop in the subnet associated with rule “UDRi” and creating an additional ACL statement having as its source the intersection of the Source Address of this rule and the subnet associated with the rule “UDRi+1” and having as its destination the intersection of the Destination Address of this rule with the Route Prefix of the rule “UDRi+1” until there are no more rules having a next hop in one of the identified subnets. Each such additional ACL statement may be applied to a suitable network, e.g., by storing the generated ACL statement in a switch of each VxLAN that carries Internet traffic.
At step 950, the method may traverse the chain of next hops to identify one or more additional firewall table rules that have a next hop in the subnet associated with at least one of the identified firewall table rules and generate an additional access rule for each such additional firewall table rule. In particular embodiments, the method may identify one or more additional firewall table rules “UDRi+1” that have a next hop in the subnet associated with firewall table rule “UDRi” and generate an additional access rule having as its source the intersection of the Source Address of the first firewall rule and the subnet associated with the rule “UDRi+1” and having as its destination the intersection of the Destination Address of the first firewall rule with the Route Prefix of the rule “UDRi+1”, until there are no more additional firewall table rules having a next hop in one of the identified subnets. At step 960, the method may apply the first, second, and any additional network switch access rules to one or more network switches that send packets to or receive packets from the Internet.
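For illustration only, the generated access rules might be pushed to the relevant switches roughly as sketched below; the switch-programming interface (including the add_acl_entry method) is a hypothetical assumption and is not described by this disclosure.

```python
def apply_access_rules(acl_statements, switches):
    """Store each generated ACL statement on each switch that carries Internet-bound traffic.

    Illustrative sketch only: each element of `switches` (e.g., each leaf and/or spine
    switch) is assumed to expose an add_acl_entry() method; the real switch-management
    interface may differ.
    """
    for switch in switches:
        for acl in acl_statements:
            switch.add_acl_entry(
                source=acl["source"],
                destination=acl["destination"],
                action=acl["action"],
            )
```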
Particular embodiments may repeat one or more steps of the methods described or illustrated herein, where appropriate.
This disclosure contemplates any suitable number of computer systems 1100. This disclosure contemplates computer system 1100 taking any suitable physical form. As an example and not by way of limitation, computer system 1100 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system 1100 may include one or more computer systems 1100; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1100 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 1100 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1100 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
In particular embodiments, computer system 1100 includes a processor 1102, memory 1104, storage 1106, an input/output (I/O) interface 1108, a communication interface 1110, and a bus 1112. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In particular embodiments, processor 1102 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 1102 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1104, or storage 1106; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 1104, or storage 1106. In particular embodiments, processor 1102 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 1102 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 1102 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 1104 or storage 1106, and the instruction caches may speed up retrieval of those instructions by processor 1102. Data in the data caches may be copies of data in memory 1104 or storage 1106 for instructions executing at processor 1102 to operate on; the results of previous instructions executed at processor 1102 for access by subsequent instructions executing at processor 1102 or for writing to memory 1104 or storage 1106; or other suitable data. The data caches may speed up read or write operations by processor 1102. The TLBs may speed up virtual-address translation for processor 1102. In particular embodiments, processor 1102 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 1102 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 1102 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 1102. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
In particular embodiments, memory 1104 includes main memory for storing instructions for processor 1102 to execute or data for processor 1102 to operate on. As an example and not by way of limitation, computer system 1100 may load instructions from storage 1106 or another source (such as, for example, another computer system 1100) to memory 1104. Processor 1102 may then load the instructions from memory 1104 to an internal register or internal cache. To execute the instructions, processor 1102 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 1102 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 1102 may then write one or more of those results to memory 1104. In particular embodiments, processor 1102 executes only instructions in one or more internal registers or internal caches or in memory 1104 (as opposed to storage 1106 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 1104 (as opposed to storage 1106 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 1102 to memory 1104. Bus 1112 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 1102 and memory 1104 and facilitate accesses to memory 1104 requested by processor 1102. In particular embodiments, memory 1104 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory 1104 may include one or more memories 1104, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
In particular embodiments, storage 1106 includes mass storage for data or instructions. As an example and not by way of limitation, storage 1106 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 1106 may include removable or non-removable (or fixed) media, where appropriate. Storage 1106 may be internal or external to computer system 1100, where appropriate. In particular embodiments, storage 1106 is non-volatile, solid-state memory. In particular embodiments, storage 1106 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 1106 taking any suitable physical form. Storage 1106 may include one or more storage control units facilitating communication between processor 1102 and storage 1106, where appropriate. Where appropriate, storage 1106 may include one or more storages 1106. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
In particular embodiments, I/O interface 1108 includes hardware, software, or both, providing one or more interfaces for communication between computer system 1100 and one or more I/O devices. Computer system 1100 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 1100. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 1108 for them. Where appropriate, I/O interface 1108 may include one or more device or software drivers enabling processor 1102 to drive one or more of these I/O devices. I/O interface 1108 may include one or more I/O interfaces 1108, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
In particular embodiments, communication interface 1110 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 1100 and one or more other computer systems 1100 or one or more networks. As an example and not by way of limitation, communication interface 1110 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 1110 for it. As an example and not by way of limitation, computer system 1100 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 1100 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 1100 may include any suitable communication interface 1110 for any of these networks, where appropriate. Communication interface 1110 may include one or more communication interfaces 1110, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
In particular embodiments, bus 1112 includes hardware, software, or both coupling components of computer system 1100 to each other. As an example and not by way of limitation, bus 1112 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 1112 may include one or more buses 1112, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
This U.S. patent application is a continuation of, and claims priority under 35 U.S.C. § 120 from, U.S. patent application Ser. No. 16/167,361, filed on Oct. 22, 2018, which claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application 62/734,993, filed on Sep. 21, 2018. The disclosures of these prior applications are considered part of the disclosure of this application and are hereby incorporated by reference in their entireties.
Number | Date | Country
---|---|---
20220174042 A1 | Jun 2022 | US

Number | Date | Country
---|---|---
62734993 | Sep 2018 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 16167361 | Oct 2018 | US
Child | 17651417 | | US