Computer networks are becoming larger and their topologies increasingly complex. Computer networks include both physical and virtual elements that are linked by communication paths. A computer network may simultaneously provide a number of services and support the needs of multiple clients. The configuration of these large computer networks is largely a manual process in which one or more computer technicians determine the desired topology and configurations of the elements within the topology and then individually configure the elements to provide the desired functionality. This process is error prone and can result in significant downtime of the computer networks. The downtime costs for a computer network can be significant for the network owner, for tenants who depend on the network to support their organizations, and for clients who rely on the services provided by the tenants.
The accompanying drawings illustrate various examples of the principles described herein and are a part of the specification. The illustrated examples are merely examples and do not limit the scope of the claims.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
Computer networks are becoming larger and their topologies increasingly complex. Configuring computer networks is often a manual process in which one or more computer technicians determine the desired topology and configurations of the elements within the topology and then individually configure the elements to form the topology and provide the desired functionality. This process is error prone and can result in significant downtime of the computer networks.
The principles described herein enable data center architects to design and create end-to-end virtual slices ("zones") of data center networking infrastructure. The zones are created by logically slicing the data center into separate, easily understood contexts. Each zone supplies one or more end-to-end virtualized networking services. For example, the data center can be optimized to "place and route" these virtual zones while providing secure isolation between them.
A service model made up of abstracted service units is constructed to represent a desired functionality. The service model is tested against a model of a zone and then "compiled" into the actual data center hardware that makes up the zone. Each zone that implements a service model can be optimized using a policy-driven approach for specific applications or tenants. The service model can then be used by monitoring applications as a framework to understand and measure the performance of the delivered service, its capacity, security, and isolation, and the overall infrastructure capacity loading.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. It will be apparent, however, to one skilled in the art that the present apparatus, systems and methods may be practiced without these specific details. Reference in the specification to “an example” or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples.
In this example, a number of server groups are connected to switches that feed routers. The server groups include an SAP server group, an Exchange server group, a Polycom server group, a Lync server group, and a Windows host group. These servers are grouped according to function and illustrated as a single element in the diagram. However, there is no limitation as to how these servers are physically configured or located.
The switches in this example include HP 5900-AF series switches labeled A through F. HP 5900 series switches are high-density, low-latency switches that are part of Hewlett-Packard's FlexFabric solution. As shown in this example, these switches can be deployed at the server access layer of data centers. The network also includes switches labeled as HP S7510, S5820X, and S12508 series switches. A wide variety of other suitable switches could be used.
The firewalls include HP F5000 standalone firewalls that provide a throughput of up to 40 gigabits per second and support virtual private networks. Also included are an MSM320 Access Point, a WA2620 Access Point, a WX3024 wireless switch, an F5 BIG-IP Local Traffic Manager (LTM) for load balancing, and SR8812 routers.
The diagram may represent the entire computer network or only a portion of the computer network. The network may be divided into smaller units called zones. A zone can be defined in a number of ways. For example, a zone may be defined by selecting two elements in the network topology. The two elements may be endpoints, servers, or other elements. Anything that directly links and/or is associated with the two endpoints can be included in the zone. For example, an endpoint may be the access router SR8812(A) and the other element may be the Exchange server group. Any elements that are used in the operation of the Exchange server group or interface with the access router could be automatically included in the zone.
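The endpoint-based zone rule described above can be sketched as a small graph routine. This is a minimal, hypothetical example (the adjacency dict, the element names, and the `zone_between` helper are illustrative, not part of any actual product): under one possible interpretation of the rule, a zone collects every element that lies on some path between the two selected endpoints.

```python
def zone_between(adj, a, b):
    """Collect every node that lies on at least one simple path
    between endpoints a and b (one possible zone rule)."""
    zone = set()

    def dfs(node, path):
        if node == b:
            zone.update(path)  # every node on this path joins the zone
            return
        for nxt in adj[node]:
            if nxt not in path:
                dfs(nxt, path | {nxt})

    dfs(a, {a})
    return zone

# Toy topology echoing the example: access router through the fabric
# to the Exchange server group (names are illustrative).
topology = {
    "SR8812(A)": ["S12508(A)"],
    "S12508(A)": ["SR8812(A)", "S5900(C)"],
    "S5900(C)": ["S12508(A)", "Exchange"],
    "Exchange": ["S5900(C)"],
}
print(sorted(zone_between(topology, "SR8812(A)", "Exchange")))
```

In a real data center the adjacency information would come from network discovery rather than a hand-written dict.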
These zones provide a logical slice of data center infrastructure per application, which massively simplifies management while improving the quality of service. The various zones can be securely isolated from other zones in the network and can be independently reconfigured. For example, if a service supported within a zone fails, the network administrator has no need to examine the entire network for failures or optimization. Because the failed service is isolated within and provided solely by the zone, only that zone needs to be examined. Further, the use of zones may facilitate thin provisioning of networks for just-enough, just-in-time allocation of resources. This can more efficiently utilize the services and resources in layers 4 through 7 of the Open Systems Interconnection model. These layers are, sequentially starting with layer 4, the transport layer, the session layer, the presentation layer, and the application layer. The topology-independent networking with dynamic allocation of network policies described below performs much better than statically configured connectivity and security. For example, the principles described herein enable dynamic placement of workloads and services for greater agility.
In some implementations, the functionality of the network may be pooled into common categories. For example, the pools (102) may include pooling ports, paths/bandwidth, load balancing capacity, applications, hosts, etc. In the example shown in
In this example, the service units are divided into several groups, including a vNet group, a vDev group, a vLink group, a vPort group, a vIP group, a vSecure group, and a vHost/vApp group. The vNet group includes Layer 2 (the data link layer) and Layer 3 (the network layer) service units, and the vDev group includes (from left to right) a router, a switch, a multi-access endpoint, a wireless endpoint, a Multi-tenant Device Context (MDC), a workgroup switch, and an Asynchronous Transfer Mode (ATM) switch.
The vLink group includes service units for L2, L3, and Virtual Private Network (VPN) connections. The vPort group includes a physical port and a Virtual Switch Instance (VSI) port. The vIP group includes a Dynamic Host Configuration Protocol (DHCP) unit, a Domain Name System (DNS) unit, and a Network Address Translation (NAT) unit. The vSecure group includes a firewall unit and a load balancing unit. The vHost group could include a variety of applications and other service units. The description of service units above is only one example. A number of different service units could be included. New service units may be created by defining their properties and including them in a service unit library.
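One way to picture the service unit library described above is as a catalog of typed records, each carrying a group label and a property set. This is a hedged sketch only; the `ServiceUnit` class, the group names reused from the example, and the `throughput_gbps` property are illustrative assumptions, not an actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ServiceUnit:
    """An abstracted service unit: a name, its group, and its properties."""
    name: str
    group: str                 # e.g. "vDev", "vLink", "vIP", "vSecure"
    properties: dict = field(default_factory=dict)


# A small library keyed by unit name; groups follow the example above.
library = {
    "firewall":      ServiceUnit("firewall", "vSecure", {"throughput_gbps": 40}),
    "load_balancer": ServiceUnit("load_balancer", "vSecure"),
    "nat":           ServiceUnit("nat", "vIP"),
    "l2_link":       ServiceUnit("l2_link", "vLink"),
}


def register(lib, unit):
    """Add a newly defined service unit to the library."""
    lib[unit.name] = unit


# Defining a new service unit and adding it to the library.
register(library, ServiceUnit("dns", "vIP", {"cache": True}))
```

Creating a new service unit then reduces to defining its properties and registering it, as the text describes.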
Each service unit can be selected, for example, by clicking on the icon representing the service unit, and dragging the selected service unit into the desired location in the model workspace. Forming connections between the new service unit and the other units joins the new service unit to the service model. In this example, the service model includes a Lync server and an Exchange server connected to a switch. This connection is then routed through several software/virtual elements, including an Intelligent Resilient Framework (IRF, a software virtualization technology for routing configuration and management), and a MDC. The connection is then made to a router which is also connected to a firewall and a load balancer. A wireless access point is connected to the router through a switch. This configuration supplies users connected to the wireless access point with services provided by the Lync and Exchange servers.
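The drag-and-drop construction above amounts to building a graph of service units and connections. The following sketch mirrors the example configuration; the `ServiceModel` class and the unit names are hypothetical stand-ins for whatever internal representation a real tool would use.

```python
class ServiceModel:
    """Minimal sketch of a service model: a set of units plus the
    connections drawn between them."""

    def __init__(self):
        self.units = set()
        self.links = set()

    def add_unit(self, unit):
        """A unit dragged into the workspace joins the model."""
        self.units.add(unit)

    def connect(self, a, b):
        """An undirected connection between two placed units."""
        if a in self.units and b in self.units:
            self.links.add(frozenset((a, b)))


model = ServiceModel()
for unit in ["Lync", "Exchange", "switch", "IRF", "MDC", "router",
             "firewall", "load_balancer", "AP_switch", "wireless_AP"]:
    model.add_unit(unit)

# Connections echoing the example: servers -> switch -> IRF -> MDC -> router,
# with the firewall, load balancer, and wireless path hung off the router.
model.connect("Lync", "switch")
model.connect("Exchange", "switch")
model.connect("switch", "IRF")
model.connect("IRF", "MDC")
model.connect("MDC", "router")
model.connect("router", "firewall")
model.connect("router", "load_balancer")
model.connect("router", "AP_switch")
model.connect("AP_switch", "wireless_AP")
```

The graph form makes the later simulation step natural: each node can be checked against the zone model independently, and the edges define the paths to validate.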
The service units may be created by defining their properties and including them in the service unit library. In
There are a wide range of properties for the service units that can be defined.
In
The zone model (116) is created from the inventory of the computer network system. The system transfers/simulates the service model (110) within the zone model. This simulation can be performed in a variety of ways. For example, each service unit may be individually transferred to the zone model and any errors or incompatibilities between the service unit and the model can be identified. For example, a switch service unit may have a property of supporting VLAN operation, which may not be available on a load balancer service unit in the model of the zone.
Additionally, once the transfer of the individual service units is accomplished and any errors are resolved, the entire service model (110) could be simulated. This would show the performance of the overall service model within the zone model so that system parameters such as latency and data volumes could be determined.
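The per-unit compatibility check can be sketched as a comparison of each service unit's required properties against the capabilities of the corresponding zone elements. The property names and the `validate` helper below are assumptions made for illustration; the VLAN mismatch mirrors the example in the text.

```python
def validate(service_units, zone_elements):
    """Check each service unit's required properties against the
    capabilities of its candidate zone element (hypothetical schema).
    Returns a list of (unit, missing-capabilities) errors."""
    errors = []
    for unit, required in service_units.items():
        candidate = zone_elements.get(unit, {})
        missing = required - candidate.get("capabilities", set())
        if missing:
            errors.append((unit, missing))
    return errors


# A switch unit requires VLAN support; so does the load balancer unit,
# but the zone's load balancer does not offer it (as in the example).
service_units = {"switch": {"vlan"}, "load_balancer": {"vlan"}}
zone_elements = {
    "switch":        {"capabilities": {"vlan", "irf"}},
    "load_balancer": {"capabilities": {"http", "tcp"}},
}
print(validate(service_units, zone_elements))
# the load balancer lacks VLAN support, so it is flagged
```

In a graphical tool, each returned error would drive the visual cue (red icon, flash, or flag) on the offending service unit.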
In the interface, there are several buttons in the upper right hand corner. These buttons are "automatically simulate," "apply," and "deploy." By clicking the "automatically simulate" button, the individual service units are transferred to the zone model (116) and checked. For example, after the "automatically simulate" button is pressed, each individual service unit may be separately shown as moving from the service model (110) into the zone model (116). If there are any issues or incompatibilities detected, these errors can be shown graphically in any of a number of ways. For example, the service unit icon may change to a red color, flash, or display a flag indicating the error.
An illustrative simulation (117) of the service model in the zone model is shown in
After the service model checks out in the zone model, the deploy button can be selected in the upper right hand corner of the screen to actually deploy the service model in the computer network system. The popup screen (118) shows various aspects of the deployment with check marks indicating successful implementation. The deployment may be completely automated or partially automated. For example, the deployment may include transfer or implementation of properties of the various service units to the physical or virtual elements within the computer network. The deployment may utilize a number of techniques, including optimization of the deployment sequence.
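The deployment step, pushing each service unit's properties to its element and recording a per-step check mark, can be sketched as a loop over the model. The `deploy` and `push_config` functions and the property names are illustrative assumptions, not an actual deployment API.

```python
def deploy(service_model, configure):
    """Apply each service unit's properties to its target element,
    recording a per-step result (check mark or failure reason)."""
    results = {}
    for unit, props in service_model.items():
        try:
            configure(unit, props)
            results[unit] = "ok"
        except Exception as exc:
            results[unit] = "failed: %s" % exc
    return results


def push_config(unit, props):
    """Stand-in for pushing properties to a physical or virtual element;
    a real implementation would talk to the device."""
    if props.get("vlan") == "invalid":
        raise ValueError("unsupported VLAN id")


model = {"switch": {"vlan": 100}, "firewall": {"policy": "deny-all"}}
print(deploy(model, push_config))
```

Because simulation has already resolved incompatibilities, most runs should report success for every step; sequencing optimizations (for example, configuring upstream elements before downstream ones) would reorder the loop rather than change its shape.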
The various service models can be saved and stored for later use in a catalog as shown in
Generating a model for a service and then simulating the service on a model of the target network provides a number of advantages. For example, a wide variety of "what if" scenarios can be run on the model of the network. This allows for experimentation and optimization of the services and hardware to be deployed without disrupting the operation of the network. Additionally, because each service model is validated through the simulation process, the actual deployment of the service model will have fewer conflicts and deployment errors. By saving various service models in a library, the accumulated knowledge generated by the network administrators can be effectively captured and reutilized. Additionally, the service models saved in the library can be further optimized and redeployed in a variety of different networks and situations.
The service model is then deployed to the zone represented by the zone model (block 830). The service model can then be cataloged for later redeployment and/or optimization (block 835). The service model can then be used in a monitoring application to monitor the quality of service parameters in the zone (block 840). For example, these quality of service parameters may include bandwidth, latency, latency jitter, and data loss.
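Monitoring the quality of service parameters named above can be sketched as comparing measured samples against per-zone thresholds derived from the service model. The threshold values, parameter names, and the `qos_violations` helper are assumptions for illustration only.

```python
def qos_violations(samples, thresholds):
    """Return the measured QoS parameters that fail their zone's
    threshold predicate (hypothetical monitoring check)."""
    return {name: value for name, value in samples.items()
            if name in thresholds and not thresholds[name](value)}


# Assumed thresholds for one zone: a bandwidth floor plus ceilings
# on latency, latency jitter, and data loss.
thresholds = {
    "bandwidth_mbps": lambda v: v >= 100,
    "latency_ms":     lambda v: v <= 20,
    "jitter_ms":      lambda v: v <= 5,
    "loss_pct":       lambda v: v <= 0.1,
}

samples = {"bandwidth_mbps": 250, "latency_ms": 35,
           "jitter_ms": 2, "loss_pct": 0.05}
print(qos_violations(samples, thresholds))
# only the latency sample exceeds its threshold here
```

Because the service model already names the elements and parameters involved, the monitoring application could derive such thresholds without separate manual configuration, as described below.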
The methods given above are only examples. The principles described could be implemented in a variety of different ways, including adding blocks, combining blocks, reordering blocks, or removing blocks.
The computing system also includes a drag and drop model creation module (915) for creation of a service model. As discussed above, the drag and drop model creation module may include a number of service units represented as icons. By dragging and dropping the service units into a working area and then connecting the various service units, a service model is created to perform the desired function. The drag and drop model creation module (915) receives service units that are dragged and dropped into the workspace and connections that are made between the units. A simulation module (920) validates the service model within the selected zone model and notes any errors, incompatibilities, or other issues. These errors can be graphically displayed in the graphical user interface. After resolving any issues that prevent deployment, the service model can be deployed on the computer network by the deployment module (925) to reconfigure the network and allow the desired functionality to be implemented. In one example, all of the modules described above are implemented by the same application.
After successful implementation, the service model can be stored in a service model catalog (940) in memory (935). The service model can be fed into a monitoring module (945) to define the topology of the service being provided. The service model may also define various quality of service parameters. Use of the service model by the monitoring module allows the monitoring module to automatically monitor quality of service without manual configuration by an operator.
The resource automation management principles described above provide management tools to discover and display data center resources; drag and drop these resources into a zone; define the access policies; and then compile the set of commands that are needed to manage each zone separately. This avoids state-of-the-art manual methods that are time consuming, difficult to modify, and error prone.
The principles described herein may be embodied as a system, method or computer program product. The principles may take the form of an entirely hardware implementation, an implementation combining software and hardware aspects, or an implementation that includes a computer program product that includes one or more computer readable storage medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable storage medium(s) may be utilized. Examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Computer program code for carrying out operations according to the principles described herein may be written in any suitable programming language. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The preceding description has been presented only to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.