The present disclosure relates to defining and managing virtual domains, virtual groups, and logical link policies in a multi-tenant virtual domain environment that overlays onto a physical network.
Hardware and software vendors offer virtualization platforms that allow a single physical machine to be partitioned into multiple independent virtual machines. These virtualization platforms have gained acceptance in the industry at both the small-business and enterprise levels. Virtualization technology continues to develop in several directions to meet the demands of modern IT applications, such as network services for multi-tenant environments.
According to one embodiment of the present disclosure, an approach is provided in which a computer system selects a virtual domain from multiple virtual domains, each of which is overlaid onto a physical network and is independent of the physical topology constraints of the physical network. The computer system selects, from the selected virtual domain, a first virtual group that includes one or more first virtual network endpoints. Next, the computer system selects, from the selected virtual domain, a second virtual group that includes one or more second virtual network endpoints. In turn, the computer system creates a logical link policy that includes one or more actions corresponding to sending data between the first virtual group and the second virtual group.
The foregoing is a summary and thus contains, by necessity, simplifications, generalizations, and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the present disclosure, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
The present disclosure may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art, by referencing the accompanying drawings.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present disclosure are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The following detailed description will generally follow the summary of the disclosure, as set forth above, further explaining and expanding the definitions of the various aspects and embodiments of the disclosure as necessary.
Virtual domain 105 overlays physical network 100 and includes four defined virtual groups 110-140 with logical link policies 145-170 that "link" the virtual groups together to send and receive data packets. In one embodiment, an administrator may define virtual groups 110-140 and subsequently populate them with virtual endpoints (e.g., virtual machines). In this embodiment, the number of virtual endpoints included in a virtual group may change dynamically, while the communication rules between the virtual groups are defined by the endpoints' particular virtual group membership (discussed in detail below).
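By way of non-limiting illustration, the relationships among virtual domains, virtual groups, and logical link policies described above might be modeled as follows. The class and field names in this sketch are assumptions made for illustration only; the disclosure does not prescribe any particular representation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class VirtualGroup:
    group_id: int
    name: str
    endpoint_ids: Set[str] = field(default_factory=set)  # membership may change dynamically

@dataclass
class LogicalLinkPolicy:
    policy_id: int
    source_group_id: int
    dest_group_id: int
    actions: List[str]  # e.g., ["compress", "encrypt"]

@dataclass
class VirtualDomain:
    """A virtual domain overlays the physical network; it owns its virtual
    groups and the logical link policies that link those groups together."""
    domain_id: int
    name: str
    groups: Dict[int, VirtualGroup] = field(default_factory=dict)
    policies: Dict[int, LogicalLinkPolicy] = field(default_factory=dict)
```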
In one embodiment, physical network 100 supports multiple tenants. In this embodiment, virtual domains belonging to different tenants are maintained separately. As such, a tenant administrator for virtual domain 105 is aware of entities within virtual domain 105, but is unaware of physical network 100 and of other tenants' virtual domains. In this embodiment, a distributed policy service maintains logical link policies for each virtual domain overlaid on physical network 100 and actualizes the logical link policies in terms of physical network 100 through physical path translations (discussed in detail below).
In one embodiment, a host server includes host modules that communicate with the distributed policy service. In this embodiment, host modules are located on each physical server and at the physical entry points of physical network 100. Host modules intercept a virtual endpoint's egress and ingress data packets; if needed, they send a request to the distributed policy service to resolve the policies related to the traffic, overlay the traffic (e.g., using tunneling or any other overlay mechanism) according to the acquired policy, and send it onto the underlying physical network. As such, physical network 100 is viewed as a carrier for the overlaid traffic.
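The egress flow described above may be illustrated by the following non-limiting sketch of a host module's send path. The names handle_egress and policy_service.resolve, as well as the cache and packet shapes, are all assumptions for this example.

```python
# Hypothetical egress path of a host module: resolve a policy (from a
# local cache or the distributed policy service) and overlay the packet
# before handing it to the underlying physical network.
def handle_egress(packet, local_policy_cache, policy_service, physical_nic):
    key = (packet.src_endpoint_id, packet.dst_endpoint_id)
    policy = local_policy_cache.get(key)
    if policy is None or policy.is_expired():
        # If needed, ask the distributed policy service to resolve the
        # logical link policy into physical path translations.
        policy = policy_service.resolve(src=packet.src_endpoint_id,
                                        dst=packet.dst_endpoint_id)
        local_policy_cache[key] = policy
    # Overlay the traffic (e.g., tunneling) according to the acquired policy.
    physical_nic.send(policy.encapsulate(packet))
```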
Virtual group 1 (230) includes virtual endpoints 232-236. In one embodiment, virtual endpoints 232-236 may reside on a single physical server and share resources. In another embodiment, the virtual endpoints may reside on different physical servers that may be physically distant and defined on different physical subnets.
Virtual group 3 (250) includes virtual endpoints 252 and 254. When virtual endpoint 252 sends data to virtual endpoint 232, the host module that hosts virtual endpoint 252 identifies and utilizes logical link policy 245 to send the data to virtual endpoint 232.
Virtual domain A 200 also includes a mechanism for its virtual endpoints to communicate with endpoints external to itself. Such communication can be of two types: 1) communications initiated by clients in virtual domain A 200 to external servers; and 2) communications initiated by external clients to servers included in virtual domain A 200. In an embodiment pertaining to the first type of communication, an administrator may define a virtual group (e.g., virtual group X 260 and virtual group Y 270) that includes the addresses or domain names of servers that may be contacted from a virtual group included in virtual domain A 200. In this embodiment, link policies (e.g., link policies 255 and 265) are defined to impose constraints on this communication. In an embodiment pertaining to the second type of communication, these communications are handled similarly, with the exception that a group including external endpoints may not include concrete addresses, but rather address ranges or wildcards that allow internal servers to be contacted from any external client. In both types of communication, an external endpoint may be either an overlay network endpoint belonging to a different domain or an endpoint that is not hosted by the overlay network.
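As one non-limiting illustration of the second type of communication, a group of external endpoints might be represented with address ranges, domain names, or wildcards as sketched below; the ExternalVirtualGroup class and its fields are assumptions for this example only.

```python
# Illustrative representation of an external virtual group whose members
# are address ranges, domain names, or a wildcard rather than concrete
# endpoint addresses.
import ipaddress

class ExternalVirtualGroup:
    def __init__(self, name, cidrs=(), domain_names=(), allow_any=False):
        self.name = name
        self.networks = [ipaddress.ip_network(c) for c in cidrs]
        self.domain_names = set(domain_names)
        self.allow_any = allow_any  # wildcard: any external client may connect

    def matches(self, ip=None, domain_name=None):
        if self.allow_any:
            return True
        if ip is not None and any(ipaddress.ip_address(ip) in n
                                  for n in self.networks):
            return True
        return domain_name in self.domain_names

# e.g., external servers reachable from a virtual group in virtual domain A:
external_servers = ExternalVirtualGroup("X", cidrs=["203.0.113.0/24"])
assert external_servers.matches(ip="203.0.113.7")
```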
Hosts 300-325 execute virtual endpoints (virtual machines) and are grouped according to system administrator preferences.
Virtual group 2 (240) includes endpoint 242, which executes on host 300, and endpoint 244, which executes on host 305. Host 300 also includes endpoint 350. Virtual group 3 (250) includes endpoint 252, which executes on host 305, and endpoint 254, which executes on host 325. Host 305 also executes endpoints 360 and 365, which may belong to different virtual domains and/or perform particular functions.
Virtual domain B 210, a different virtual domain than virtual domain A 200 (to which virtual groups 230-250 belong), is also overlaid onto the same physical network. Virtual domain B 210 includes endpoints 370-395, which may or may not belong to the same virtual group within virtual domain B 210. Administrators for virtual domains A and B are able to dynamically manage the virtual endpoints assigned to virtual groups using a domain manager API (discussed in detail below).
An administrator may also create an “internal” logical link policy by defining the source virtual group and the destination virtual group as the same virtual group.
Domain administrator 500 uses domain manager API 510 to send management overlay commands 515 to distributed policy service 520. Domain administrator 500 has control over a particular domain, which includes managing virtual groups within the domain and managing virtual endpoints within the virtual groups (discussed in detail below).
Domain administrator 500 also uses domain manager API 510 to create and manage logical link policies, which govern communications between virtual endpoints and external network servers, clients, and/or peers. The logical link policies are formulated on a virtual level and govern all aspects of network communication, such as connectivity, security, QoS, and monitoring. Common networking notions of switching and routing are not explicit in the policy definitions, but are implicitly defined at a higher level by allowing, disallowing, or restricting communications between sets of virtual machines and other network entities.
Distributed policy service 520 receives management overlay commands 515 and informs one or more host modules 550 of modifications to virtual domain 530. Host modules 550 reside on host 540 and store policy information in local domain tables 555 to support egress and ingress traffic to and from endpoints 560-590. For example, domain administrator 500 may wish to move endpoint 560 to a different virtual group (e.g., from VG1 to VG2). In this example, domain administrator 500 uses domain manager API 510 to issue an "AddMachineToGroup" command. In turn, distributed policy service 520 sends instructions to the corresponding host module (host module 550) to store the changes in local domain tables 555.
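A non-limiting sketch of this propagation appears below: the distributed policy service locates the host module that hosts the moved endpoint and instructs it to record the membership change in its local table. The class and method names are assumptions for illustration.

```python
# Sketch: the distributed policy service pushing an "AddMachineToGroup"
# change down to the affected host module's local table.
class HostModule:
    def __init__(self):
        self.local_table = {}  # endpoint id -> virtual group id

class DistributedPolicyService:
    def __init__(self, hosting_map):
        self.hosting_map = hosting_map  # endpoint id -> HostModule

    def add_machine_to_group(self, domain_id, endpoint_id, new_group_id):
        # domain_id would scope the lookup in a fuller implementation.
        host_module = self.hosting_map[endpoint_id]
        host_module.local_table[endpoint_id] = new_group_id
```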
CREATE commands 605, 610, and 620 take a user-specified NAME parameter for an entity (e.g., the name of a domain, virtual group, or virtual machine) and return a unique ID generated by the system for the new entity. The system then correlates the user-specified NAME with its unique ID in subsequent interactions. For CreateDoveVirtualGroup command 610, the user specifies the domain ID (returned by CreateDomain) and the NAME for the new virtual group. The system identifies the domain by the domain ID and creates a new virtual group within the corresponding domain with the user-specified virtual group name and a system-generated virtual group ID.
CreateDoveVirtualMachine command 620 assigns an existing virtual machine (endpoint), which was created when it was added to the overlay environment, to a domain and virtual group specified by the administrator. In another embodiment, a new virtual machine may be instantiated as part of the CreateDoveVirtualMachine command.
DeleteDoveVirtualGroup command 615 deletes a virtual group that corresponds to a domain and virtual group identifier specified by the administrator. DeleteVirtualMachine command 625 deletes a particular virtual machine (endpoint) in a domain specified by the administrator.
AddMachineToGroup command 630 moves an existing virtual machine to a different virtual group. CreateDovePolicy command 635 creates a policy corresponding to a source virtual group and a destination virtual group that reside within a domain, each of which is specified by the administrator. DeleteDovePolicy command 640 deletes a policy corresponding to a policy identifier utilized in a particular domain specified by a domain identifier. ChangeDovePolicy command 645 changes the policy description (e.g., physical path translations) and increments the policy version number of an existing policy that is utilized in a particular domain.
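The following non-limiting sketch exercises the commands described above against a toy in-memory stand-in for domain manager API 510. The disclosure names the commands but not a programming binding, so the signatures and the stub itself are assumptions for illustration.

```python
import itertools

class DomainManagerAPI:
    """Toy in-memory stand-in for domain manager API 510 (illustration only)."""
    _ids = itertools.count(1)

    def __init__(self):
        self.domains, self.groups, self.vms, self.policies = {}, {}, {}, {}

    def CreateDomain(self, name):
        domain_id = next(self._ids)     # system-generated unique ID
        self.domains[domain_id] = name  # correlated with the user-specified NAME
        return domain_id

    def CreateDoveVirtualGroup(self, domain_id, name):
        group_id = next(self._ids)
        self.groups[group_id] = (domain_id, name)
        return group_id

    def CreateDoveVirtualMachine(self, domain_id, group_id, name):
        vm_id = next(self._ids)
        self.vms[vm_id] = (domain_id, group_id, name)
        return vm_id

    def AddMachineToGroup(self, domain_id, vm_id, group_id):
        _, _, name = self.vms[vm_id]
        self.vms[vm_id] = (domain_id, group_id, name)

    def CreateDovePolicy(self, domain_id, source_group, dest_group, description):
        policy_id = next(self._ids)
        self.policies[policy_id] = dict(domain=domain_id, src=source_group,
                                        dst=dest_group, actions=description,
                                        version=1)
        return policy_id

    def ChangeDovePolicy(self, domain_id, policy_id, description):
        policy = self.policies[policy_id]
        policy["actions"] = description
        policy["version"] += 1  # increments the policy version number

api = DomainManagerAPI()
tenant = api.CreateDomain("tenant-A")
web = api.CreateDoveVirtualGroup(tenant, "web-tier")
db = api.CreateDoveVirtualGroup(tenant, "db-tier")
vm = api.CreateDoveVirtualMachine(tenant, web, "web-vm-1")
# An "internal" policy would pass the same group as both source and destination.
link = api.CreateDovePolicy(tenant, source_group=web, dest_group=db,
                            description=["compress", "encrypt"])
api.ChangeDovePolicy(tenant, link, ["encrypt"])
api.AddMachineToGroup(tenant, vm, db)
```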
On the other hand, if the request is a group creation request, decision 720 branches to the “Group” branch, whereupon processing identifies a domain identifier specified in the request at step 735, and creates a virtual group within the specified domain at step 740. Processing returns a unique virtual group identifier to domain manager API 510 (step 740), and processing ends at 745.
If the request is an endpoint generation request, decision 720 branches to the "Endpoint" branch, whereupon processing identifies a domain identifier and a virtual group identifier included in the request at step 750. Next, at step 755, processing identifies the endpoints (internal and/or external) corresponding to the request. In one embodiment, external endpoints are represented by their globally meaningful IP addresses (or by address ranges or wildcards that allow all addresses) or by domain names. External endpoints are assumed to exist somewhere, and their existence is not verified. In another embodiment, internal endpoints may either exist before the command or be created by the command. If they exist before the command, their platform-specific identifier is passed as a description. If they are created by the command, a virtual machine specification is passed as a parameter. As one skilled in the art can appreciate, the flowchart may branch into several scenarios depending upon the type of endpoint and whether or not it must be created.
Processing assigns the identified endpoint to the domain and virtual group specified in the request (step 760), and processing ends at 765.
If the request is a policy generation request, decision 720 branches to the "Policy" branch, whereupon processing identifies a source virtual group and a destination virtual group included in the request at step 770. Next, at step 775, processing identifies the logical actions (policy description) included in the request, such as whether data should be compressed and/or encrypted. At step 780, processing creates a policy and returns a policy identifier to domain manager API 510. Processing ends at 785.
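A non-limiting sketch of the dispatch performed at decision 720 follows; the request encoding and the handler names are assumptions for illustration.

```python
# Sketch of the request dispatch the flowchart describes (decision 720).
def handle_create_request(request, service):
    kind = request["kind"]
    if kind == "group":
        # Steps 735-740: look up the domain, create the group, return its ID.
        return service.create_group(request["domain_id"], request["name"])
    elif kind == "endpoint":
        # Steps 750-760: identify the domain/group and assign the endpoint.
        return service.assign_endpoint(request["domain_id"],
                                       request["group_id"],
                                       request["endpoint_spec"])
    elif kind == "policy":
        # Steps 770-780: identify groups and logical actions, create a policy.
        return service.create_policy(request["source_group_id"],
                                     request["dest_group_id"],
                                     request["actions"])
    raise ValueError(f"unknown request kind: {kind}")
```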
Endpoint table 820 includes columns 822-832. Column 822 includes a sequence number for each virtual endpoint. Column 824 includes a unique endpoint identifier for each virtual endpoint. Column 826 includes a name of the virtual endpoint. Column 828 includes a unique virtual group identifier to which the virtual endpoint belongs. Column 830 includes the IP address of the virtual endpoint, and column 832 includes the IP address of the physical server that hosts the virtual endpoint.
Virtual policy table 840 includes columns 842-856. Column 842 includes a sequence number for each logical link policy. Column 844 includes a unique logical link policy identifier for each logical link policy. Column 846 includes an administrator-provided name for the particular logical link policy. Columns 848 and 850 include a unique source virtual group identifier and a unique destination virtual group identifier, respectively, to which the policy links. Column 852 includes a tracking number for each logical link policy, which increments as the policy updates (e.g., a version number). Column 854 includes caching properties for particular logical link policies, which may include an expiration date or an amount of time to keep the logical link policy in the cache. Column 856 includes the logical actions for the logical link policies. For example, the first logical link policy (sequence #1) shows that HTTP traffic should pass through a firewall and compression, and HTTPS traffic should pass through IDS and SSL. All other types of data traffic are denied. In one embodiment, columns 854-856 may refer to additional tables that the system maintains, such as a separate table for caching rules and a separate table for the logical actions.
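By way of non-limiting illustration, a row of endpoint table 820 and a row of virtual policy table 840 might look as follows, with field names taken from the column descriptions above and all values invented for the example.

```python
# Illustrative rows mirroring the described table layouts.
endpoint_row = {
    "seq": 1,                          # column 822: sequence number
    "endpoint_id": "ep-0232",          # column 824: unique endpoint identifier
    "name": "web-vm-1",                # column 826: endpoint name
    "group_id": "vg-230",              # column 828: owning virtual group
    "virtual_ip": "10.0.0.32",         # column 830: endpoint's IP address
    "host_physical_ip": "192.0.2.10",  # column 832: hosting server's IP address
}

policy_row = {
    "seq": 1,                         # column 842: sequence number
    "policy_id": "pol-0245",          # column 844: unique policy identifier
    "name": "web-to-db",              # column 846: administrator-provided name
    "source_group_id": "vg-230",      # column 848
    "dest_group_id": "vg-240",        # column 850
    "version": 3,                     # column 852: increments on each update
    "caching": {"ttl_seconds": 300},  # column 854: caching properties
    "actions": {                      # column 856: logical actions
        "HTTP":  ["firewall", "compression"],
        "HTTPS": ["IDS", "SSL"],
        "default": "deny",
    },
}
```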
Policy resolution request 900 includes fields 910-930. As those skilled in the art can appreciate, a policy resolution request may include more or fewer fields than those shown here.
Field 915 includes a session identifier type that identifies the type of session identifier included in field 920, such as a TCP 5-tuple. The session identifier type defines the information that is sent and how it is encoded (including the domain ID). Field 925 includes a policy identifier, and field 930 includes a policy tracking number used when revalidating an existing policy.
Policy resolution response 950 includes fields 960-990. As those skilled in the art can appreciate, a policy resolution response may include more or fewer fields than those shown here.
Field 965 includes a unique policy identifier that corresponds to the policy included in field 990. This unique policy identifier is stored in the host module's local policy table. Field 970 includes a policy tracking number that identifies a “version” of the policy. In one embodiment, the tracking number is updated each time the policy is updated. Field 975 includes caching instructions for the policy, such as how long a policy should be held in cache or the policy's date of expiration. Field 980 includes a destination domain identifier that corresponds to the destination endpoint. The destination domain may or may not be the same as the source domain of the source endpoint.
Field 985 includes the type of policy included in field 990, such as GRE, IPIP, DEP, MPLS, etc. Field 990 includes the policy (physical path translations) whose format is defined by the policy type included in field 985 (e.g., a list of IP addresses, MPLS labels, IP and port remappings, etc.).
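The request and response layouts described above might be rendered as the following non-limiting sketch; the field names follow fields 915-930 and 965-990, while the types and defaults are assumptions for illustration.

```python
# Assumed in-memory shapes for a policy resolution request/response.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PolicyResolutionRequest:
    session_id_type: str                    # field 915, e.g. "tcp-5-tuple"
    session_id: bytes                       # field 920, encoding set by the type
    policy_id: Optional[str] = None         # field 925, set when revalidating
    tracking_number: Optional[int] = None   # field 930, version being revalidated

@dataclass
class PolicyResolutionResponse:
    policy_id: str        # field 965, stored in the host module's local table
    tracking_number: int  # field 970, the policy "version"
    caching: dict         # field 975, e.g. {"ttl_seconds": 300}
    dest_domain_id: str   # field 980, may differ from the source domain
    policy_type: str      # field 985, e.g. "GRE", "IPIP", "MPLS"
    policy: List[str]     # field 990, physical path translations
```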
At step 1015, processing identifies the hosting policy server that supports the host corresponding to host module 1010, such as by accessing a lookup table that maps hosts to policy servers. A determination is made as to whether the receiving policy server supports the host corresponding to host module 1010 (decision 1020). If the receiving policy server is not the supporting policy server, decision 1020 branches to the “No” branch whereupon processing forwards the request to the identified policy server that supports the corresponding host system (step 1025), and processing ends at 1030.
On the other hand, if the receiving policy server is the supporting policy server, decision 1020 branches to "Yes" branch 1035, whereupon processing maps a source endpoint identifier (included in the request) to a source virtual group using endpoint table 820, and likewise maps the destination endpoint identifier to a destination virtual group.
Now that processing has a source virtual group identifier and a destination virtual group identifier, processing locates a logical link policy using the two identifiers at step 1045. At step 1050, processing parses the policy into logical actions (e.g., go through SSL, go through compression, etc.) and, at step 1055, translates the logical actions into physical path translations. For example, when the destination endpoint is determined, the physical IP address of its hosting host module is determined from the endpoint table. This physical address is the simplest resolved policy. In this example, if the policy specifies that traffic must go through a firewall before reaching the destination, the system resolves the firewall's physical IP address in addition to the physical IP address of the host module hosting the destination. In turn, the resolved path is "physical firewall IP address, physical destination IP address," and the host module enforces this path on each data packet that belongs to the data session at hand.
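A non-limiting sketch of this resolution sequence follows; the table shapes and the middlebox address map are assumptions for illustration, and the logical actions are flattened into a simple list of hops.

```python
# Sketch of steps 1035-1055: map endpoints to virtual groups, locate the
# logical link policy, and translate its logical actions into a physical path.
def resolve_policy(src_endpoint_id, dst_endpoint_id,
                   endpoint_table, policy_table, middlebox_ips):
    src_group = endpoint_table[src_endpoint_id]["group_id"]
    dst_group = endpoint_table[dst_endpoint_id]["group_id"]
    policy = policy_table[(src_group, dst_group)]

    # The simplest resolved policy is just the physical IP address of the
    # host module serving the destination endpoint.
    path = []
    for action in policy["actions"]:        # e.g. ["firewall"]
        path.append(middlebox_ips[action])  # e.g. the firewall's physical IP
    path.append(endpoint_table[dst_endpoint_id]["host_physical_ip"])
    return path  # e.g. ["198.51.100.7", "192.0.2.10"]
```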
Processing creates a policy resolution response at step 1060, such as policy resolution response 950 described above.
Northbridge 1115 and Southbridge 1135 connect to each other using bus 1119. In one embodiment, the bus is a Direct Media Interface (DMI) bus that transfers data at high speeds in each direction between Northbridge 1115 and Southbridge 1135. In another embodiment, a Peripheral Component Interconnect (PCI) bus connects the Northbridge and the Southbridge. Southbridge 1135, also known as the I/O Controller Hub (ICH), is a chip that generally implements capabilities that operate at slower speeds than the capabilities provided by the Northbridge. Southbridge 1135 typically provides various busses used to connect various components. These busses include, for example, PCI and PCI Express busses, an ISA bus, a System Management Bus (SMBus or SMB), and/or a Low Pin Count (LPC) bus. The LPC bus often connects low-bandwidth devices, such as boot ROM 1196 and "legacy" I/O devices (using a "super I/O" chip). The "legacy" I/O devices (1198) can include, for example, serial and parallel ports, keyboard, mouse, and/or a floppy disk controller. The LPC bus also connects Southbridge 1135 to Trusted Platform Module (TPM) 1195. Other components often included in Southbridge 1135 include a Direct Memory Access (DMA) controller, a Programmable Interrupt Controller (PIC), and a storage device controller, which connects Southbridge 1135 to nonvolatile storage device 1185, such as a hard disk drive, using bus 1184.
ExpressCard 1155 is a slot that connects hot-pluggable devices to the information handling system. ExpressCard 1155 supports both PCI Express and USB connectivity as it connects to Southbridge 1135 using both the Universal Serial Bus (USB) and the PCI Express bus. Southbridge 1135 includes USB Controller 1140 that provides USB connectivity to devices that connect to the USB. These devices include webcam (camera) 1150, infrared (IR) receiver 1148, keyboard and trackpad 1144, and Bluetooth device 1146, which provides for wireless personal area networks (PANs). USB Controller 1140 also provides USB connectivity to other miscellaneous USB connected devices 1142, such as a mouse, removable nonvolatile storage device 1145, modems, network cards, ISDN connectors, fax, printers, USB hubs, and many other types of USB connected devices. While removable nonvolatile storage device 1145 is shown as a USB-connected device, removable nonvolatile storage device 1145 could be connected using a different interface, such as a Firewire interface, etcetera.
Wireless Local Area Network (LAN) device 1175 connects to Southbridge 1135 via the PCI or PCI Express bus 1172. LAN device 1175 typically implements one of the IEEE 802.11 standards of over-the-air modulation techniques that all use the same protocol to communicate wirelessly between information handling system 1100 and another computer system or device. Optical storage device 1190 connects to Southbridge 1135 using Serial ATA (SATA) bus 1188. Serial ATA adapters and devices communicate over a high-speed serial link. The Serial ATA bus also connects Southbridge 1135 to other forms of storage devices, such as hard disk drives. Audio circuitry 1160, such as a sound card, connects to Southbridge 1135 via bus 1158. Audio circuitry 1160 also provides functionality such as audio line-in and optical digital audio in port 1162, optical digital output and headphone jack 1164, internal speakers 1166, and internal microphone 1168. Ethernet controller 1170 connects to Southbridge 1135 using a bus, such as the PCI or PCI Express bus. Ethernet controller 1170 connects information handling system 1100 to a computer network, such as a Local Area Network (LAN), the Internet, and other public and private computer networks.
While particular embodiments of the present disclosure have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this disclosure and its broader aspects. Therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this disclosure. Furthermore, it is to be understood that the disclosure is solely defined by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. As a non-limiting example, and as an aid to understanding, the following appended claims contain usage of the introductory phrases "at least one" and "one or more" to introduce claim elements. However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim element to disclosures containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an"; the same holds true for the use in the claims of definite articles.