The present systems and methods relate to the field of network modeling, simulation, and monitoring, and to dynamically managing service levels on a shared network, including network engineering, network planning, network management, and dynamic allocation of network resources, for predictive problem prevention and problem solving.
Communications networks are increasingly supporting "convergence," in which different application services, e.g., voice, video, business-critical data, best-effort data, etc., with disparate service infrastructures, are supported on a common network infrastructure. A catch-phrase in the networking marketplace today is "triple play" services, which simply means offering voice, video, and data on a common infrastructure. "Application services" in this context is meant to include distributed services with service-specific component elements (service-specific network devices, servers, etc.) at various locations across a shared communication network that collectively deliver functionality to a distinct set of end users.
Each component element of a service provides some functionality that contributes to the overall functionality of the system supporting the service as a whole. The term "application services" is used herein to encompass both Enterprise network environments, where the term "applications" prevails, and the Service Provider environment, where the term "services" prevails. Henceforth, the term "services" will be used to cover either or both environments, since the present systems and methods apply equally to both.
Table 1, given below for expository and illustrative purposes, lists different services that may be supported on a common communications infrastructure (e.g., an IP router network, as assumed in the table). Each service can have its own system architecture, its own physical and "logical" topology of supporting devices, its own end users, its own signaling traffic, its own bearer traffic, its own traffic behaviors, traffic patterns and growth patterns, its own quality of service (QoS) requirements, and its own service-specific behaviors, including dependencies on other services.
The confluence of multiple services, such as those in Table 1, on a common infrastructure creates a markedly complex and dynamic system with myriad interdependencies through shared resources, shared protocols, shared physical bandwidth, etc.
Network modeling and simulation systems (here these terms describe such systems deployed standalone or integral to online network management systems) traditionally have had a "one size fits all" approach to network modeling. Rather than representing any sort of service explicitly, there is one implicit service and all the traffic in the model is associated with it. For example, a traditional voice network analysis system describes traffic in terms of Erlangs and provides voice network analysis using mathematics driven off of Erlang traffic inputs—it has no concept of data traffic or services. A traditional data network analysis system (focused on IP) describes network traffic demands in terms of packet arrival rate and packet length distributions and drives its analysis, be it discrete event simulation-based or analytical queueing-based, off of these traffic descriptions—again without service models. Moreover, without a concept of services, these models, having a purely network focus, have lacked representation of the end systems on which services depend, of the overall concept of the service itself, and of the rules and models necessary to determine whether a given service is operational.
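For instance, the blocking probability that such Erlang-driven mathematics produces can be computed with the classic Erlang B formula. The short sketch below (in Python, purely for exposition; it is not drawn from any particular product) shows the standard iterative form of that calculation.

```python
def erlang_b(offered_erlangs: float, trunks: int) -> float:
    """Blocking probability for `offered_erlangs` of traffic offered to `trunks`
    circuits, computed with the standard iterative form of the Erlang B formula."""
    b = 1.0  # B(E, 0) = 1
    for k in range(1, trunks + 1):
        b = (offered_erlangs * b) / (k + offered_erlangs * b)
    return b

# Example: 40 Erlangs offered to a 50-trunk group.
print(f"Blocking probability: {erlang_b(40.0, 50):.4f}")
```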
These traditional approaches to network modeling and management are not sufficient in a converged environment, where fundamentally different application services with disparate requirements for success ride a common network infrastructure. In such an environment, one option for making management decisions is full discrete event simulation of the entire combined system including network, end systems, and the services they support. But this is simply infeasible computationally for realistically sized networks. Accordingly, there is a need for a more feasible approach to network modeling, especially in the context of “next generation” online management systems that rely on model-based reasoning for their functions.
It is an object of the present system to overcome disadvantages and/or make improvements in the prior art.
The present system uses a set of loosely coupled models, where each model is highly efficient for its own domain, be it a service or a network. In particular, the present system includes devices and methods for managing service levels using the same representations of services and networks for both off-line modeling and simulation systems and on-line systems, the latter including real-time monitoring and management systems that dynamically manage service levels on a shared network by dynamically allocating network resources, for predictive problem prevention and reactive problem solving.
The present devices and methods include a processor which is configured to track services connected to the common network through nodes and links, and changes in service requirements and demand over time; run service models associated with the services under selected conditions, the selected conditions including failure and repair of one of the nodes or links; and propose corrective action and/or a change of network resources of the common network to minimize impact of the failure. The processor may also run network model(s). The models may be executed successively or simultaneously, and outputs of one model may be used as input to other models, including any necessary conversions for compatibility.
The processor may also be configured to dynamically adjust the network resources to minimize the impact of the failure. To aid an operator in deciding whether to reallocate network resources, as may be proposed by the system, a visualization may be provided on a display, where the visualization includes a user interface showing a report with the status/indication of the services and the network resources, and the effects of changing the network resources.
The services may be represented in terms of at least one of a service requirement and a level of service. The interconnection of each service to the other services and to the common network may also be represented. The service, interconnection, and/or network representations may be changed, and the impact of the change on the services and the common network determined. Further, a common model may be formed by embedding a set of rules and evaluation functions into the common model and coupling together selected services and selected elements of the common network that have an impact on each other.
As an illustration, consider a three-tiered Web application with the tiers being: the web server, the application server, and the database server. The present invention represents these entities, as well as the users/subscribers of the application, as explicit objects. Further, it associates with the service a rule (or, in general, a set of rules) which, when executed, results in the determination of the condition (e.g., up, down, degraded, etc.) of the service. In the illustration below, the condition is binary (the service is up or down for a given subscriber), but in general it could be one of an arbitrary enumerated set (e.g., up, minor problems, degraded, down, etc.) or a more general quantitative indication (integer, real number, etc.).
A simple up/down rule for the example multi-tiered web application is the following:
For the web service to be “up” for subscriber X, all of the following must be true:
All of the conditions constituting the rule above must be true for the service to be up. Evaluating each condition may require an evaluation function. Conditions 1 through 3 require “Reachability”, which is evaluated using a complex evaluation function that determines whether a pair of IP addresses can communicate over the network infrastructure (e.g., subscriber desktop IP address to web server IP address) and, if so, along what path. In the present invention, this evaluation is done by running a routing model (flow analysis) of the IP network infrastructure based on data collected from the network.
Reachability also could be determined by directly collecting forwarding tables from the network and “walking” them to see if there is a path, an alternative also supported today.
System status information is similar, although its evaluation function is trivial. Status could be set offline by the user, for instance in a what-if analysis of failing a device, or it could be information received from the operational environment, e.g., a device failure event notification. In either case, online or offline, the same rule is evaluated to determine the status of the service (in general, for each subscriber).
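As a purely illustrative sketch of how such a rule and its inputs might be represented, the fragment below uses hypothetical names throughout: a reachable() helper stands in for the routing/flow analysis (or forwarding-table walk) described above, and a device status table stands in for status that is either set by the user in a what-if analysis or updated from operational event notifications. It is not the invention's actual rule syntax.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ServiceRule:
    """An up/down rule: a named set of boolean conditions, all of which must hold."""
    name: str
    conditions: Dict[str, Callable[[], bool]] = field(default_factory=dict)

    def evaluate(self) -> bool:
        return all(check() for check in self.conditions.values())

def reachable(src_ip: str, dst_ip: str) -> bool:
    # Placeholder: the real evaluation function runs the IP routing model (flow
    # analysis), or walks collected forwarding tables, to decide reachability.
    return True

# Offline: the user fails a device in a what-if study by editing this table.
# Online: the same table is updated from fault-management event notifications.
device_status = {"web_server_1": "up", "app_server_1": "up", "db_server_1": "up"}

web_rule = ServiceRule(
    name="web service up for subscriber X",
    conditions={
        "subscriber can reach web server": lambda: reachable("10.0.0.5", "192.0.2.10"),
        "web server can reach app server": lambda: reachable("192.0.2.10", "192.0.2.20"),
        "app server can reach db server":  lambda: reachable("192.0.2.20", "192.0.2.30"),
        "all three servers are up": lambda: all(s == "up" for s in device_status.values()),
    },
)
print(web_rule.evaluate())  # same rule, whether status came from offline or online data
```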
Note that the system can and does provide an iterative failure analysis in which each device (and indeed each common network device such as a router) can be failed in turn, and the survivability of the service is evaluated.
The above illustration can be considerably expanded through further description of the invention. First, in a more realistic representation of a web-based application, there are typically n web servers for load balancing and redundancy. So Condition (5) could be "at least k of n web servers must be up". Or a range could be defined such that when fewer than k, but more than m, web servers are up, the service is considered to be "degraded", taking the service's condition set beyond binary.
The evaluation functions can be far more complex and useful than simply reachability. For instance, propagation delay can be accumulated along the path the routing analysis computes for each communicating pair (e.g., subscriber desktop to web server), and the total compared to an SLA. The service can then be described as up or down (or degraded) based on thresholds of performance for that SLA.
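A hedged sketch of these richer conditions follows; the particular thresholds, server counts, and delay figures are assumptions chosen only to illustrate the k-of-n and SLA-threshold ideas described above.

```python
def server_tier_status(n_up: int, k: int, m: int) -> str:
    """'up' if at least k web servers are up, 'degraded' if more than m but fewer
    than k are up, otherwise 'down'."""
    if n_up >= k:
        return "up"
    if n_up > m:
        return "degraded"
    return "down"

def sla_status(hop_delays_ms: list, sla_ms: float) -> str:
    """Accumulate propagation delay along the path computed by the routing analysis
    and compare the total against the SLA threshold."""
    return "up" if sum(hop_delays_ms) <= sla_ms else "down"

print(server_tier_status(n_up=3, k=4, m=1))           # -> 'degraded'
print(sla_status([2.1, 5.4, 11.3, 3.0], sla_ms=25))   # -> 'up'
```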
A final and very important point is that the success of one service such as the above, can depend on the success of another service. For example, in the case of the web service example, the subscriber's local DNS (Domain Name Service) could be required to be “up” in order to resolve the address of the web server. The DNS service could have its own Rules and Evaluation Functions described analogously to the above.
In order for this service dependency to be evaluated, the present invention supports cascaded computation of rules and evaluation functions, e.g., the application service rule above, with the addition of the DNS requirement, will automatically trigger the rules and evaluation functions associated with DNS.
Further, the present invention maintains a log of the order in which the rules and evaluation functions were executed and records comprehensive results for each step and the evaluation outcome for each rule. By maintaining this log and record, the system not only supports computation of the top-level status of the service (e.g., the application web service is "down" for subscriber X), but also allows the user of the system to fully report on the traversal of the rules and evaluation functions to understand the nature of the failure (e.g., an SLA violation between the subscriber and the web server, or the subscriber's DNS service is down because of a reachability issue with the DNS server, etc.).
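The cascading and logging behavior can be pictured with a small sketch such as the one below. The class, rule, and log formats are invented for illustration; the point is that evaluating the web service rule automatically triggers the DNS service rule, and every step and outcome is recorded so the failure can be traced afterward.

```python
from typing import Callable, Dict, List

execution_log: List[str] = []

class Service:
    def __init__(self, name: str, conditions: Dict[str, Callable[[], bool]],
                 depends_on: List["Service"] = None):
        self.name = name
        self.conditions = conditions
        self.depends_on = depends_on or []

    def evaluate(self) -> bool:
        # Cascade: dependent services are evaluated first and their results recorded.
        for dep in self.depends_on:
            dep_up = dep.evaluate()
            execution_log.append(f"{self.name}: dependency '{dep.name}' is "
                                 f"{'up' if dep_up else 'down'}")
            if not dep_up:
                execution_log.append(f"{self.name}: DOWN (dependency failure)")
                return False
        for label, check in self.conditions.items():
            result = check()
            execution_log.append(f"{self.name}: condition '{label}' -> {result}")
            if not result:
                execution_log.append(f"{self.name}: DOWN ({label} failed)")
                return False
        execution_log.append(f"{self.name}: UP")
        return True

dns = Service("DNS service", {"subscriber can reach DNS server": lambda: False})
web = Service("web service (subscriber X)",
              {"SLA met subscriber->web server": lambda: True},
              depends_on=[dns])
web.evaluate()
print("\n".join(execution_log))  # full traversal, showing DNS reachability as root cause
```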
As pointed out earlier, all of the above discussion applies equally to the domain of offline modeling and simulation and the domain of network management. In the former, the analysis can be hypothetical; in the latter, the analysis, including all of the evaluation functions (such as a routing model), is applicable to real-world analytics supporting network management that is driven by operational network and service data and traditional management information (e.g., running the above evaluation in response to a notification that a common network router has failed, to determine which subscribers to which services are down/degraded/unaffected).
In this environment, where the service rules and evaluation functions are run in response to real-world events, the system can be configured to issue service-related alarms (e.g., "service X has failed"). Additionally, the logs and results permit the user of the system to understand the precise reason for the failure, i.e., its root cause.
Further, since the system embeds a set of rules and evaluation functions collectively with the ability to model the operational environment, that same set of models can be used to reason about "fixes" to the problems identified. For example, if the root cause of a web application failure for a given subscriber is an SLA violation due to congestion in some part of the network, the user of the system can explore alternative means of routing the subscriber's traffic by changing IP link weight metrics.
This same process can be automated. In one embodiment of the invention, in which the IP network is MPLS-based, the system can be configured to optimize MPLS explicit path-based routing to minimize congestion throughout the network. This automated design action is called in response to a network event, such as a failure notification. The impact of the failure is automatically analyzed as previously described, and if that analysis shows the result to be sufficiently bad, the MPLS rerouting design action is run automatically to locate a set of network changes that repair or ameliorate the problem.
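A minimal sketch of that automated loop is shown below, with every function a placeholder; the real impact analysis and MPLS explicit-path optimization are the models and design actions described elsewhere herein, and the function names and event format are assumptions.

```python
def analyze_impact(event: dict) -> dict:
    # Placeholder: would rerun the loosely coupled network and service models
    # against the failed topology and return per-service status.
    return {"worst_service_status": "degraded"}

def optimize_mpls_explicit_paths(impact: dict) -> list:
    # Placeholder: would compute a set of explicit LSP paths that minimize congestion.
    return []

def apply_or_propose(changes: list) -> None:
    print(f"Proposed {len(changes)} MPLS path changes")

def handle_network_event(event: dict) -> None:
    """Illustrative event handler: analyze the impact of a failure and, if it is bad
    enough, run the MPLS explicit-path rerouting design action automatically."""
    impact = analyze_impact(event)                        # runs rules/evaluation functions
    if impact["worst_service_status"] in ("down", "degraded"):
        changes = optimize_mpls_explicit_paths(impact)    # automated design action
        apply_or_propose(changes)                         # deploy, or propose to an operator

handle_network_event({"type": "link_failure", "element": "NYC-DC link"})
```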
The present system provides a systematic treatment of multiple distributed services with individual and interrelated behaviors in a predictive modeling and simulation context. For example, the present system enables scalable modeling and analysis for proactive problem prevention and reactive problem solving in a network supporting multiple service networks, with an emphasis on, but not exclusive focus on, managing service levels.
The invention is explained in further detail, and by way of example, with reference to the accompanying drawings wherein:
The following are descriptions of illustrative embodiments that when taken in conjunction with the following drawings will demonstrate the above noted features and advantages, as well as further ones. In the following description, for purposes of explanation rather than limitation, specific details are set forth for illustration. However, it will be apparent to those of ordinary skill in the art that other embodiments that depart from these details would still be understood to be within the scope of the appended claims. Moreover, for the purpose of clarity, detailed descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the present system.
It should be expressly understood that the drawings are included for illustrative purposes and do not represent the scope of the present system. In the accompanying drawings, like reference numbers in different drawings designate similar elements.
For purposes of simplifying a description of the present system, as utilized herein regarding the present system, the following terms include meanings as follows: the term "network" is intended to mean the common network infrastructure interconnecting the devices associated with any service and providing resources that may be shared among the services, including dedicated resources that may be dynamically adjusted to prevent or minimize failures and the impacts thereof.
Further, the term "service network" is intended to include the union of the network and all the remaining entities necessary to support the entire service, such as end user devices, gateway devices at technology transition points, signaling devices, backup devices, etc. The term "problem" is intended to include an issue either intrinsic to the common network infrastructure (e.g., router link congestion), intrinsic to a specific service (e.g., how much capacity do I need to grow my VPN service by 30% in the New York market?), or intrinsic to both (e.g., finding an error in a router access control list configuration change that "broke" signaling among voice over Internet Protocol (VoIP) devices). The present systems and methods automatically abstract configuration changes of individual devices used by the common network and/or services from policies related to the common network and services.
Table 1 below shows service illustrations, such as multiple services over a common IP infrastructure:
One component of the present system includes the treatment of each service as a separate conceptual “thread” throughout the entire process of using predictive modeling and simulation to prevent and solve data network and service network problems. The elements and operations that contribute to this end include:
Considerations and operations related to the simultaneously intertwined simulation and modeling for the shared network and multiple service networks include:
1. For the shared network common among services, this produces key metrics relevant to performance engineering, planning and problem solving related to that network. Service level requirements and metrics may be expressed in terms of network performance metrics. It should be noted that one component of the present system includes using simulation and modeling in the shared network context to generate problem-solving information at the granularity of services (i.e., within the context of overall engineering rules for the shared network). For each important or desired statistic on a device, link, tunnel, queue, interface, etc. in the shared network generated by modeling and simulation, the system reports on that statistic based on conformance to engineering targets for the shared network.
Further, the system computes: (1) new measures on the contribution of each service to that statistic (where appropriate), (2) causal effects that are service-related, (3) service impacts (both the direct impact of the statistic on affected services as well as indirect effects where that statistic is input to a service-specific performance or impact analysis function), and (4) service-specific analysis measures. These are all illustrated in an exemplary network problem report 100 shown in
As shown in
In a further or third area 130, information is provided related to the services that have been or may be affected by the current problem (i.e., the congestion in the NYC to DC link shown in the first area 110). Of course, the data may be presented in various ways, such as bar graphs instead of the pie chart 125, and may include further indications, such as being color coded. For example, as shown in the third area 130, the VoIP service may be color coded, such as colored yellow to indicate a potential problem, a relatively minor problem, or reduced quality, such as an MOS of 3.5, which is less than the needed value of 4 as shown in the QoS column for VoIP of Table 1; while the VoD data may be a different color, such as color coded red to indicate the existence of a (more severe or catastrophic) problem, namely, the 10% loss of data or services in VoD between NYC and Atlanta (in this example, the end-to-end VoD traffic flow from NYC to Atlanta is routed over the congested network link between NYC and Wash DC).
2. Simulation and modeling for each service network, end system to end system, which produces key metrics relevant to performance engineering, planning, and problem solving related to that service network. Here another component of the present system includes allowing simulation and modeling of the service in a separate model from the model of the shared network, and using simple causal abstractions to couple the models loosely. It should be noted that there are many embodiments of this approach. One example is the following embodiment with a loosely coupled service model and a network model:
In one instance of a hybrid Time Division Multiplex (TDM) based voice and VoIP network, for example, routing of the TDM-based voice calls occurs with the legacy voice network "seeing" the VoIP network as one (big) TDM switch. In this case, a traditional TDM voice analysis model (e.g., a reduced load approximation) may be used to model TDM-level voice behavior, such as blocking, overflows, etc. This TDM-level model determines the ingress/egress points for voice traffic over the VoIP network. Once the TDM model has been run, the voice calls that ride the VoIP network are converted to IP flows as part of the simulation. The IP router network model is run with the offered load and produces packet statistics such as delay, jitter, and loss, specific to the voice flows/voice services. Finally, the packet-level statistics may be converted back to voice-service-specific measures of quality, such as the standards-based Mean Opinion Score (MOS) model, using the E-Model analysis standard of the International Telecommunication Union (ITU).
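The Erlang-to-IP-flow conversion step can be sketched as follows. The figures reflect standard G.711 packetization (a 64 kb/s codec, 10 ms voice frames, two frames per packet, roughly 40 bytes of IP/UDP/RTP header per packet) and are used only to illustrate the conversion; an actual converter would take its codec and packetization parameters from the service description.

```python
def voice_erlangs_to_ip_flow(erlangs: float,
                             codec_rate_bps: int = 64_000,  # G.711
                             frame_ms: int = 10,
                             frames_per_packet: int = 2,
                             header_bytes: int = 40) -> dict:
    """Convert an offered voice load (Erlangs) into an equivalent IP flow description."""
    payload_bytes = codec_rate_bps / 8 * (frame_ms / 1000) * frames_per_packet  # 160 B
    packet_bytes = payload_bytes + header_bytes                                  # 200 B
    packets_per_sec_per_call = 1000 / (frame_ms * frames_per_packet)             # 50 pps
    bps_per_call = packet_bytes * 8 * packets_per_sec_per_call                   # 80 kb/s
    return {
        "mean_packet_bytes": packet_bytes,
        # Erlangs equal the average number of concurrent calls.
        "packets_per_second": erlangs * packets_per_sec_per_call,
        "offered_bps": erlangs * bps_per_call,
    }

print(voice_erlangs_to_ip_flow(erlangs=120.0))  # roughly 9.6 Mb/s of VoIP bearer traffic
```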
The hybrid TDM-based and IP-based voice network example is one where the rules and evaluation functions that compute the status of the overall VoIP service can be recursive, as follows. For simplicity, assume that all voice traffic originates and terminates in the TDM domain and the IP voice network is used as an embedded core network for long distance transport of voice traffic. To analyze this hybrid environment, first, the TDM voice network analysis (e.g., a reduced load approximation model) is run for the offered voice traffic (say, Erlangs between each city pair in the network). This analysis performs TDM domain routing of the traffic. That routing determines the ingress and egress points on the IP network of voice flows that will traverse it (the IP network appears to be just another "big" voice switch in the TDM analysis—when in fact its trunk interfaces are actually media gateways distributed over a large geographical area). Next, the voice traffic (Erlangs) must be converted to IP flow traffic (using the appropriate CODEC and packetization parameters—e.g., G.711 with 2 voice frames per packet). Next, the VoIP analysis must be run using a separate model—an IP flow analysis with traffic sources and sinks being the media gateways on the edge of the IP router network. After this analysis, it can be, and often is, the case that certain of the IP voice flows cannot be supported, so these are deemed "blocked". In the real network, these calls would be blocked one at a time as they are set up, and the blocking notification would occur in signaling. In the service evaluation environment, they are blocked as a group since they are offered as a group. This information is fed back to the TDM domain model (e.g., 15% of NYC to LA traffic is blocked—i.e., cannot be set up).
This last step leads to the recursion. The traffic that can be set up has changed. So the TDM domain model must be rerun, with the reduced traffic load, and again it will embed the VoIP flow analysis. The recursion repeats until all traffic offered to the VoIP domain is supported, at which point all the routes and performance metrics are known for both the TDM and IP voice domains. The outcome (result) is that a percentage of each source-destination pair's voice traffic flow is supported (i.e., not blocked). This can translate into the service status directly (voice support from NYC to LA is at 85%), or, more likely, it is thresholded with a success/failure rule: "Voice service from NYC to LA is up if less than 1% of its voice traffic is blocked".
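The recursion just described can be summarized in a sketch like the one below, in which both models are stubs; what matters is the structure: run the TDM routing, convert and offer the flows to the VoIP analysis, feed blocking back, repeat until all offered VoIP traffic is supported, then apply the threshold rule.

```python
def run_tdm_model(offered_erlangs: dict) -> dict:
    """Placeholder reduced-load-approximation run: routes the TDM traffic and returns
    the Erlangs offered at each IP-network ingress/egress media-gateway pair."""
    return dict(offered_erlangs)

def run_voip_flow_analysis(gateway_loads: dict) -> dict:
    """Placeholder IP flow analysis: returns the blocked fraction per gateway pair."""
    return {pair: 0.0 for pair in gateway_loads}

def analyze_hybrid_voice(offered_erlangs: dict, block_threshold: float = 0.01) -> dict:
    offered = dict(offered_erlangs)
    while True:
        gateway_loads = run_tdm_model(offered)
        blocked = run_voip_flow_analysis(gateway_loads)   # fraction blocked per pair
        if all(b == 0.0 for b in blocked.values()):
            break                                          # all offered VoIP traffic supported
        # Feed the blocking back: reduce the offered load and rerun the TDM model.
        offered = {pair: load * (1.0 - blocked.get(pair, 0.0))
                   for pair, load in offered.items()}
    # Threshold rule, e.g. "voice service is up if less than 1% of its traffic is blocked".
    return {pair: ("up" if 1.0 - offered[pair] / offered_erlangs[pair] < block_threshold
                   else "down")
            for pair in offered_erlangs}

print(analyze_hybrid_voice({("NYC", "LA"): 500.0, ("NYC", "SF"): 250.0}))
```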
As an additional illustration of the elegance of the solution, consider that the VoIP bearer service (what has been discussed so far) could be dependent on VoIP signaling working, i.e., a VoIP signaling service. So, for example, a set of rules and evaluation functions can describe the fact that for any pair of media gateways on the VoIP network to pass bearer traffic, they must be able to signal, which requires that each of their local softswitches be up, that each media gateway has reachability to its local softswitch, and that the two softswitches have reachability to each other. (Of course, more complicated rules, like SLAs, are appropriate here, since signaling latency is an important issue in this particular environment.) Thus in the recursion described previously, the VoIP flow analysis evaluation cascades with a VoIP signaling analysis. Again with logs and detailed recording of all the outcomes of the steps in the process, the system can elegantly produce a report such as:
"Voice traffic from NYC to LA is 15% blocked because VoIP signaling from the NYC media gateway to the SF media gateway is down. This is due to an IP-unreachability-caused signaling failure between the NYC and SF softswitches, because router Cisco_Chicago is down".
Another component of the service network simulation modeling aspect of the present system includes comparing the requirements of a service (e.g., QoS regarding a latency from one component to another) for successful operation against the actual QoS it receives on the converged network. This element uses a flexible set of abstractions that may capture causal relationships between service behaviors and network behaviors.
As noted, the present system includes providing critical decision analysis of the impact on the shared network of changes in a service and providing cross-service impact analysis of changes in one service on another service, such as:
A. Impact of common infrastructure and service specific configuration changes;
B. Analysis of network and service configuration errors (often caused by inconsistencies between the service and the network);
C. Impact of network failures on services and impact of service failures on other services and the network;
D. Analyzing cascading changes in interrelated QoS configurations and policies on service levels in the above;
E. Analyzing service specific failover and load balancing behaviors (typically ignorant of the underlying communications infrastructure); and
F. Supporting deployment of new services and growth in existing services in all of the analyses named above.
It should be noted that the present system includes not only configuration analysis, network modeling and simulation, and failure analysis, but also analysis that focuses on services in the described context. This includes globally managing or optimizing the network to keep both common infrastructure metrics within engineering tolerances and service-specific metrics within corresponding service level thresholds, as well as visualization and reporting of all of the common infrastructure and service-specific inputs, simulation results, and optimization results from the above analyses and optimizations, so as to effectively manage the available common network infrastructure and individual services in view of the needed and ongoing services.
As noted, the present system uses a set of loosely coupled models of both the services and network domains, where each model is particularly suited to, and very efficient for, its particular domain. The term "loosely coupled" is used to mean that a system of rules and evaluation functions permits the embedding of different modeling techniques within one another and provides for coordination in the overall analysis, including moving data inputs and outputs among the individual models. For example, a service may require that two of its components be reachable across a common IP network. This is a simple rule which embeds an evaluation function (reachability). The evaluation function, however, requires running a complex IP network routing model in order to return its simple (binary—yes or no) answer.
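As a sketch of this embedding (with illustrative class and function names, not an actual interface of the system), the reachability evaluation function below is simple at the rule level but internally runs a stubbed routing model of the IP network before returning its yes/no answer.

```python
class IpRoutingModel:
    """Placeholder for the IP network routing model (flow analysis) built from
    topology and configuration data collected from the network."""
    def compute_path(self, src_ip: str, dst_ip: str):
        # Would perform full route computation; returns the hop list, or None if unreachable.
        return ["r-nyc", "r-dc", "r-atl"]

def is_reachable(src_ip: str, dst_ip: str, model: IpRoutingModel) -> bool:
    """Evaluation function: a binary answer for the rule, produced by embedding a run
    of the far more complex routing model."""
    return model.compute_path(src_ip, dst_ip) is not None

# The rule itself stays simple: two service components must be mutually reachable.
model = IpRoutingModel()
rule_holds = (is_reachable("192.0.2.10", "192.0.2.30", model)
              and is_reachable("192.0.2.30", "192.0.2.10", model))
print(rule_holds)
```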
In addition to using multiple loosely coupled models in the converged multi-services environment, the present system treats a service as a first-class object throughout the entire software infrastructure necessary for network modeling and simulation, including data collection, analyses, visualization and reporting, optimization, etc. Such treatment of a service allows the modeling and simulation systems to support more efficient and effective predictive activities, such as planning and preventative problem solving (e.g., predicting behavior under failures in the process of protecting against those failures), to support troubleshooting network or application service level problems, and to support service level management and optimization.
For such efficient operations, the network model (the union of each service's own devices and traffic, and the common network infrastructure interconnecting all service-specific devices) maintains a complete “set of books”, so to speak, for each service individually, as well as for their common network infrastructure. Each “set of books” may be in a different mathematical language, one in the language for the common infrastructure and one each in the language of the different services. For example, in the voice world, “pin drop” quality may translate into scoring the subjective quality of a telephone call using a standard model called Mean Opinion Score (MOS). In the IP data networking world, the various concerns may include link congestion, packet delay, jitter and loss.
The present system systematically treats application services throughout the predictive network modeling and simulation environment, from initial inputs to application service specific outputs.
For example, first, the system accepts as input a description of the common communications infrastructure shared among services, such as the following and the like:
In addition to the above three inputs, the present modeling and simulation system also accepts as an additional input the description of each service it supports. Dimensions of this additional input and capabilities or descriptions of each service include:
For each service, as appropriate, the system performs the following operations as necessary, whether automatically or in response to user action, changes in service levels, conditions, requirements, traffic, etc., including changes in the network configuration and the resources allocated to the various services:
The present systems and methods provide a mechanism to represent a high-level Service concept to enable users to perform service-oriented analyses, including determination of the impact of network configuration problems/failures on the ability of the network to provide each specific service. As described, the services are represented as configured services in the network model which may receive results of service model analysis runs which may be converted for compatibility with the network model, as necessary.
The present systems and methods, including software applications, provide a mechanism to create, display, and configure services. The analysis of the service is performed in concert with ("as part of," in the sense that the user executes one command) a flow analysis run (now extended to perform service-specific analysis in addition to its original common network infrastructure analysis function), and reports are generated and displayed as part of the particular model, e.g., a particular service or part(s) thereof, including all or parts of the relevant network elements, components, devices, and interconnections.
In one embodiment of the invention, to aid with defining and analyzing services, two types of top-level objects are used: Service Elements and Service User Group objects:
Services and Service User Groups may be visualized in the network browser. An option related to “Services” is added to an ‘arrange-by’ menu in the network browser. This will contain two folders, one for Services and the other for Service User Groups, as shown in
A service analysis includes at least two parts, namely, server status and reachability. Server Status relates to whether a server is up or down (as determined by its ‘condition’ attribute), while reachability indicates whether or not the servers can reach the service's dependent services. For example, if a demand is included as one of the service elements, the routability of the demand is included in the service analysis. The service is considered down if the demand is unroutable. Other characteristics of the demand (such as SLAs) may be used to influence the status of the service. More complex service-specific analyses can be employed here as well: for example, computing the VoIP MOS score of an end-to-end voice service demand, based on packet delay, jitter, and loss that a demand experiences as it traverses the network infrastructure.
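The MOS computation mentioned above can be sketched with a simplified form of the ITU-T E-model: delay and loss are mapped to impairment factors, combined into an R-factor, and converted to an estimated MOS. The constants below follow commonly published simplified E-model forms, with jitter assumed to be absorbed by a de-jitter buffer and added to the one-way delay; a production implementation would follow ITU-T G.107 in full.

```python
def mos_from_network_stats(one_way_delay_ms: float, jitter_ms: float,
                           loss_pct: float, bpl: float = 25.1) -> float:
    """Simplified E-model: map the delay, jitter, and loss a voice demand experiences
    to a MOS. bpl=25.1 is a commonly cited loss-robustness value for G.711 with PLC."""
    d = one_way_delay_ms + jitter_ms          # de-jitter buffer adds to effective delay
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)   # delay impairment
    i_e_eff = 95.0 * loss_pct / (loss_pct + bpl)                   # loss impairment (Ie = 0)
    r = 93.2 - i_d - i_e_eff
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

print(round(mos_from_network_stats(150.0, 30.0, 1.0), 2))   # approximately 4.2
```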
The success or failure of a service user group may also be defined in terms of its ability or inability to access one or more services. This would be relevant for security-related analyses, to determine which clients have access to certain services.
The service analysis also includes using service evaluation function(s). A service object is associated with an ‘Evaluation Function’, which can be specified by the user. This function is evaluated by the Core engine to determine if a service is up or down. Illustratively, the evaluation functions include Boolean statements having Boolean combinations of expressions such as:
Element aliases may also be evaluated. For example, an element alias, such as ‘N1’, may be evaluated by determining if that element is up or down. For nodes, this may be based on a check of the ‘condition’ attribute. For demands, this may be based on whether the demand is routable or not. For services, this may be based on an analysis of the service's evaluation function.
The ‘Is Connected’ function may have the following syntax:
In one embodiment, only the first two parameters (the element aliases) may be required; reasonable default values may be used for the others.
A default evaluation function may also be used where, if there is no evaluation function specified, a default analysis behavior may be used. For example, a service may be considered to be up if all its components are up, and all the servers can reach all the dependent services.
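A sketch of that default behavior follows, with the object model reduced to plain data structures; the 'condition' attribute name follows the description above, while everything else is illustrative.

```python
def default_service_up(servers: list, dependent_services: list, can_reach) -> bool:
    """Default evaluation: the service is up if every component server is up and
    every server can reach every dependent service."""
    if not all(s["condition"] == "up" for s in servers):
        return False
    return all(can_reach(s, dep) for s in servers for dep in dependent_services)

# Illustrative use, with the reachability evaluation stubbed out.
servers = [{"name": "web1", "condition": "up"}, {"name": "web2", "condition": "up"}]
dependent_services = ["database service", "DNS service"]
print(default_service_up(servers, dependent_services, can_reach=lambda s, d: True))
```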
Other application programming interfaces (APIs) may also be used, such as one referred to as Ets API. An Ets_Service API is provided to allow Ets clients to query the network for configured services, perform the services analysis and retrieve status and failure messages from the services analysis.
Various reports and user interfaces (UIs) may be provided. For example,
Other UI items include Topology>Services>Clear Visualization, which will remove any additional 'failure/impacted' icons from the treeview elements in the network browser. Import and export options may also be provided, where Topology>Services>Import allows users to import a service definition from a previously exported service definition (.sdi) file. This will bring up a file-chooser dialog to allow users to select and import the file.
The service elements (nodes, demands, etc.) may be referred to by their hierarchical name so that an exported file may be reliably imported into another network that contains objects of the same name and hierarchy. If an object is missing, it will be skipped and the service definition will not include it. This may be useful in both the modeling and simulation environments and the network/system management contexts, as services may not always be discovered from the operational environment; a degree of manual configuration may therefore be required, which it is then desirable to persist as the discoverable parts of the network and services are repopulated over time as change occurs.
Topology>Services>Export allows users to export their service definition to a text file (extension .sdi), for import into a new version of the network, for example.
A Service Right-Click Menu may also be provided where right-clicking on a service object in the network browser will display the following items in the menu:
A Service User Group Right-Click Menu may also be provided where right-clicking on a service user group object in the network browser may display the following items in the menu:
Service analysis may be initiated by a Flow Analysis run. A new checkbox 'Evaluate Services' may be added to a 'Configure Flow Analysis' dialog. The list of generated flow analysis reports (recall that the flow analysis is what executes the set of models for the common infrastructure and the services) may be enhanced to include service-specific reports. These reports may provide information on the defined services and service user groups, and their status. Drilldown tables may be provided to list the reason(s) for the failures of any service and/or the impacted status of service users. Additional reports may provide such things as consumption of network resources by each service, i.e., reports that more broadly characterize the impact each service has on the network.
A Survivability Analysis feature may also be enhanced to support reporting on services. Thus, users may determine the survivability of services when particular network components fail. Some examples of the service-related survivability analysis reports are shown in
Other features of the present systems and methods include automatic creation of services. For example, a method for automatically creating application level services may be based on packet trace information. The trace of any given application contains information about the different tiers involved. In a modular service structure, each of these tiers may be a separate service, and each of these services may depend on other services as well. For example, assume a trace of a web-based application with three tiers: the user, the web server, and a database server. This information easily translates into a web service and a database service, with the user being a consumer of the web service and the web service being a consumer of the database service. This set of services may be deployed on the modeled network, and each service component (user, web server, and database server) can be represented by one or many network elements. Note that in cases where IP address information is available for components of a service (e.g., its web server, its softswitch, etc.), that information can be used to automatically connect the service elements to the common network infrastructure.
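A sketch of that translation is given below, assuming the packet trace has already been reduced to an ordered list of tiers; the data layout and names are placeholders rather than the actual import format.

```python
def services_from_trace(tiers: list) -> dict:
    """Turn an ordered list of application tiers, e.g. ['user', 'web server',
    'database server'], into a chain of services in which each tier after the
    first is a service consumed by the tier before it."""
    services = {}
    consumers = {}
    for upstream, downstream in zip(tiers, tiers[1:]):
        service_name = f"{downstream} service"
        services.setdefault(service_name, {"elements": [downstream], "depends_on": []})
        consumers.setdefault(service_name, []).append(upstream)
    # Chain the dependencies: the web service consumes the database service, etc.
    names = list(services)
    for a, b in zip(names, names[1:]):
        services[a]["depends_on"].append(b)
    return {"services": services, "consumers": consumers}

# The user consumes the web service; the web service consumes the database service.
print(services_from_trace(["user", "web server", "database server"]))
```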
Further, additional visualizations and reports may be provided. For example, network views may be provided that filter the topology visualization to only display the service-related components of the network. Other visualizations can include displaying the service elements and showing the paths that the traffic between them would traverse (or where traffic is unavailable, similarly, the path that traffic might take, i.e., as a consequence of reachability requirements). Further, such paths could be displayed or otherwise characterized with data collected from the operational network along the path; for example, color-coding the path at each hop based on the link congestion collected from router MIB-II data. Many such visualizations are possible (delay, loss, errors, queue information, etc.).
Additional and Custom Evaluation Functions may also be provided. Illustratively, the custom function (Is_Connected) may be extended to support additional functions which may take into account SLA criteria, for example. Thus, the success/failure status of a service may be tied to specific SLAs. These functions may be based on a plug-in mechanism, thus allowing for customization by the users.
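One way such a plug-in mechanism could look is sketched below (illustrative only; no actual registration interface is implied): a registry maps evaluation-function names referenced in a service's expression to user-supplied callables, so that SLA-aware checks can be added without changing the core engine.

```python
from typing import Callable, Dict

EVALUATION_FUNCTIONS: Dict[str, Callable[..., bool]] = {}

def register_evaluation_function(name: str):
    """Decorator used by plug-ins to make a custom evaluation function available
    to service evaluation expressions under the given name."""
    def wrap(fn: Callable[..., bool]) -> Callable[..., bool]:
        EVALUATION_FUNCTIONS[name] = fn
        return fn
    return wrap

@register_evaluation_function("Meets_SLA")
def meets_sla(src: str, dst: str, max_delay_ms: float) -> bool:
    # Plug-in body: would query the flow-analysis results for the src->dst path delay.
    measured_delay_ms = 42.0   # placeholder value
    return measured_delay_ms <= max_delay_ms

# The core engine can now resolve 'Meets_SLA(...)' in an evaluation expression.
print(EVALUATION_FUNCTIONS["Meets_SLA"]("subscriberX", "web_server_1", 100.0))
```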
As described, the present systems and methods apply equally to the cases: (i) where the common network and service networks are "modeled" in a standalone virtual environment, and (ii) where part or all of the common network and service network information is collected from the operational environment and the "model" includes some data that was collected from the real world. In one embodiment, the present systems and methods continually collect data (events, topology and configuration, performance data, traffic, etc.) from just the common network, for example, and the constructs of the services are an add-on in the management system that allows seeing the impact on a service of a change in the common network. Data may also be collected on some or all of the services to auto-populate the service models and to know the service-related traffic.
The present systems and methods include modeling and simulation (i.e., offline) systems and methods, as well as network management (i.e., online) systems and methods. Further, the present systems and methods combine both offline and online management systems and methods that have services overlays thus providing leading analytics in network management. These analytics involve model-based reasoning combined with online data collection. For example, a simulation model embedded in an online network management system may be used to understand the impact on a service of an event, e.g., received from an online fault management system. All of the information collected may be stored and utilized at a later time to assist in network and services analysis.
The methods of the present system are particularly suited to be carried out by a computer software program, such program containing modules corresponding to one or more of the individual steps or acts described and/or envisioned by the present system. Such program may of course be embodied in a computer-readable medium, such as an integrated chip, a peripheral device or memory, such as the memory 1120 or other memory coupled to the processor 1110.
The computer-readable medium and/or memory 1120 may be any recordable medium (e.g., RAM, ROM, removable memory, CD-ROM, hard drives, DVD, floppy disks or memory cards) or may be a transmission medium (e.g., a network comprising fiber-optics, the world-wide web, cables, or a wireless channel using time-division multiple access, code-division multiple access, or other radio-frequency channel). Any medium known or developed that can store and/or transmit information suitable for use with a computer system may be used as the computer-readable medium and/or memory 1120.
Additional memories may also be used. The computer-readable medium, the memory 1120, and/or any other memories may be long-term, short-term, or a combination of long-term and short-term memories. These memories configure processor 1110 to implement the methods, operational acts, and functions disclosed herein. The memories may be distributed or local and the processor 1110, where additional processors may be provided, may also be distributed or may be singular. The memories may be implemented as electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the term “memory” should be construed broadly enough to encompass any information able to be read from or written to an address in the addressable space accessed by a processor. With this definition, information on a network is still within memory 1120, for instance, because the processor 1110 may retrieve the information from the network for operation in accordance with the present system.
The processor 1110 is capable of providing control signals and/or performing operations in response to input signals from the user input device 1140 and executing instructions stored in the memory 1120. The processor 1110 may be an application-specific or general-use integrated circuit(s). Further, the processor 1110 may be a dedicated processor for performing in accordance with the present system or may be a general-purpose processor wherein only one of many functions operates for performing in accordance with the present system. The processor 1110 may operate utilizing a program portion, multiple program segments, or may be a hardware device utilizing a dedicated or multi-purpose integrated circuit.
Of course, it is to be appreciated that any one of the above embodiments or processes may be combined with one or more other embodiments or processes or be separated in accordance with the present system.
Finally, the above-discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. In addition, the section headings included herein are intended to facilitate a review but are not intended to limit the scope of the present system. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
In interpreting the appended claims, it should be understood that:
This application is a Divisional of Patent Application 11/507,113, filed Aug. 19, 2006, and claims the benefit of U.S. Provisional Patent Application No. 60/709,723, filed Aug. 20, 2005 and U.S. Provisional Patent Application No. 60/821,018, filed Aug. 1, 2006.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11507113 | Aug 2006 | US |
| Child | 12652499 | | US |