1. Field of the Invention
The present invention relates to a multi-domain network and a method for use in a multi-domain network.
2. Description of the Related Art
In a multi-domain network, a multi-connected network of different Autonomous Systems (ASs) is provided. In a multi-provider environment, each AS, or group of ASs, belongs to a different, independent network provider.
Such types of network do not exist at present to any large extent, owing in large part to the ubiquitous hierarchical structure of the current Internet. However, in the near future, multi-domain networks with a flat connection structure will come to the fore in telecommunications. With such a structure, different operators on the same level will connect to each other's networks and offer transport services with different Quality of Service (QoS) guarantees.
However, the above new network structure causes some new problems and issues in the area of routing and resource allocation. It is desirable to address these problems and issues.
According to a first aspect of the present invention there is provided a method for use in a multi-domain network environment, in which each domain of the network collects intra-domain routing information relating to that domain and makes a reduced view of that information available to other domains of the network, and in which each domain of the network uses its own intra-domain routing information together with the reduced-view routing information from the other domains to form a logical view of the network so as to enable that domain to make an end-to-end route selection decision.
The logical view formed at each domain may comprise a plurality of intra-domain links between respective pairs of nodes of that domain.
Each intra-domain link may be a real and direct link between nodes.
The logical view formed at each domain may comprise a plurality of virtual intra-domain links for each other domain, each virtual link representing one or more real links.
The reduced-view routing information made available by a domain may comprise routing information relating to each of the virtual intra-domain links for that domain.
The logical view formed at each domain may comprise a plurality of inter-domain links between respective pairs of domain border routers.
All domain border routers of the network may appear in the logical view.
The domain border routers may be responsible for making the reduced-view information available to other domains of the network.
Each virtual link may be between two different domain border routers associated with the domain concerned.
The logical view formed at each domain may comprise a full-mesh topology in relation to the domain border routers of the other domains.
Each link may be associated with a respective administrative weight for use in the route selection decision.
Each administrative weight may carry information about properties of each real link represented by that administrative weight.
An administrative weight associated with a virtual link may be determined based on a sum of the respective administrative weights of each real link represented by that virtual link.
Each virtual link may represent a shortest path between the two end nodes for that link.
Each weight may comprise a vector of weights.
The domain border routers may be responsible for determining the virtual links and calculating the weights.
A respective scale value may be maintained for each domain, with the weights in each domain being scaled in dependence on the scale value for that domain before use in the route selection decision.
Each virtual link may be associated with a respective weight relating to a primary path for that virtual link and a different respective weight relating to a backup path for that virtual link.
A route may be selected taking account of both the primary path and the backup path of each virtual link on the route.
A shared protection scheme may be applied when calculating the backup path for each primary path.
The traffic capacity of each link may be allocated between a first part for handling primary traffic and a second part for handling backup traffic.
The second part may be shared between intra- and inter-domain protection.
A communication failure occurring on the selected route within a domain may be handled by that domain, independently of the other domains.
A communication failure occurring on the selected route between domains may be handled by an alternative end-to-end protection path.
If a problem is realised during resolution of the selected route, the originating node may be notified and, unless the originating node accepts the problem, a new route may be selected.
The route selection decision may be made according to a shortest path algorithm.
Each domain of the network may be of a type that is not predisposed towards sharing its intra-domain routing information with other domains of the network.
The route selection decision may be one based on Quality of Service.
The intra-domain routing information for each domain may also comprise resource information relating to that domain, so that the logical view of the network formed at each domain may enable that domain to make an end-to-end route selection and resource allocation decision.
At least some domains may belong to different respective operators.
A common intra-domain routing protocol may be used in the network.
According to a second aspect of the present invention there is provided a multi-domain network in which each domain of the network is arranged to collect intra-domain routing information relating to that domain and to make a reduced view of that information available to other domains of the network, and in which each domain of the network is arranged to use its own intra-domain routing information together with the reduced-view routing information from the other domains to form a logical view of the network so as to enable that domain to make an end-to-end route selection decision.
According to a third aspect of the present invention there is provided an apparatus for use in a domain of a multi-domain network, the apparatus being provided by one or more nodes of that domain and comprising means for: collecting intra-domain routing information relating to that domain, making a reduced view of that information available to other domains of the network, and forming a logical view of the network using the collected intra-domain routing information together with reduced-view routing information from the other domains so as to enable an end-to-end route selection decision to be made based on the logical view.
The apparatus may be provided by one or more domain border routers of that domain. The route selection decision may be made by a domain border router or it may be made by another node within the domain, such as a source node.
The apparatus may be provided by a single network node. If, on the other hand, the apparatus is provided by a plurality of network nodes, an appropriate method for exchanging information between them would be provided.
According to a fourth aspect of the present invention there is provided a program for controlling an apparatus to perform a method according to the first aspect of the present invention, or which, when run on an apparatus, causes the apparatus to become an apparatus according to the second or third aspect of the present invention.
The program may be carried on a carrier medium.
The carrier medium may be a storage medium.
The carrier medium may be a transmission medium.
According to a fifth aspect of the present invention there is provided an apparatus programmed by a program according to the fourth aspect of the present invention.
According to a sixth aspect of the present invention there is provided a storage medium containing a program according to the fourth aspect of the present invention.
An embodiment of the present invention concerns multi-domain networks, described briefly above, that have a flat connection structure, and in which different operators on the same level will connect to each other's network and offer transport services with different Quality of Service (QoS) guarantees. An integrated route management/traffic engineering solution is proposed (route selection with resilience) for guaranteeing the effective inter-working of the providers in order to provide end-to-end QoS. Before a detailed description of a network embodying the present invention, an analysis of the current technologies will be provided, since at least part of the invention lies in a clear appreciation of the prior art and the problems associated therewith. Differences between the prior art and embodiments of the present invention are highlighted.
Today, the most widespread intra-domain routing protocol is the Open Shortest Path First (OSPF) protocol, while the Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol in the Internet. OSPF is an effective, robust intra-domain link-state routing protocol, in which the route decision is based on administrative link weights. These weights can be related to link delay or utilization; consequently, OSPF is able to provide QoS routing. During the continuous growth of the Internet, BGP has proven to be a resilient inter-domain routing protocol. The most important strengths of BGP are its scalability and stability even at Internet scale, and its policy-based routing features. Policy-based routing allows each administrative domain at the edge of a BGP connection to manage its inbound and outbound traffic according to its specific preferences and needs.
The afore-mentioned routing protocols/solutions fit the current Internet architecture, but in a multi-operator, multi-domain, multi-service (QoS-featured) architecture they cannot provide the needed network efficiency. The most important bottlenecks of the current solutions, which will appear in next-generation long-haul networks, are as follows:
Firstly, although OSPF is an efficient intra-domain routing protocol and can be used even for intra-domain QoS routing, the routing information cannot be shared through the entire network; it can be used only in the current domain.
Secondly, BGP completely hides the state of intra-domain resources within every AS. This makes it very difficult to select a route that is able to provide sufficient QoS.
Thirdly, in many cases BGP requires tens of minutes to recover from a route or link failure. When providing QoS, such a long recovery time is not acceptable.
Fourthly, even though BGP allows an Autonomous System (AS) to flexibly manage its outbound traffic, it provides an insufficient degree of control to manage and balance how traffic enters the AS across multiple possible paths.
Fifthly, each BGP router only advertises the best route it knows to any given destination prefix. This implies that many alternative paths that could potentially have been used by any source of traffic will be unknown, because of this pruning behaviour inherent in BGP.
The goal of an embodiment of the present invention is not to describe a new routing protocol, but rather to propose a high-level description of a route management solution which can be used for solving Traffic Engineering problems.
From a general point of view, a significant problem in a multi-domain environment is that of spreading the different kinds of network state description information (such as OSPF and BGP link weights, free resources, QoS parameters, failures, etc.). From a routing point of view, different kinds of information aggregation strategies have been investigated, but they cannot be applied here to solve the entire problem.
If a multi-provider environment is considered, then the situation becomes more complicated. The providers can build networks with different topologies; furthermore, they can apply different OSPF weight systems, QoS classes or resource allocation schemes. In this case, a further problem is that the operators do not want to share all information with each other (especially routing and traffic information, which is required to find an optimal end-to-end path).
Especially in the multi-provider case, it is possible that the different providers use different types of resilience strategies in their domains. For example, one provider may apply 1+1 dedicated protection, while the next one towards the destination node applies only some kind of on-line restoration mechanism. As a result, different grades of protection are provided through different sequences of domains. Another problem is that a change in the routing may cause a change in the grade or type of protection. To sum up: the unknown topologies, the unknown traffic volumes (link loads) and the different routing policies from domain to domain make the planning of end-to-end resilience very complicated.
Another problem concerns the inter-domain links: protection against failure of these links requires extra backup capacity reservation in all domains. The main problem here is how these resources can be divided between the providers in a fair way.
In the literature, there are several papers dealing with topology aggregation based routing. A brief survey is provided below of the most important ones, concentrating on the differences between the existing works and proposals embodying the present invention.
Many papers deal with the problem of aggregating the topology; however, the most important reason for the aggregation is the minimization of the number of entries in the routing tables or of the bandwidth needed to carry the routing information update messages over the network (see: [a] Fang Hao, Ellen W. Zegura, "On Scalable QoS Routing: Performance Evaluation of Topology Aggregation", Proceedings of IEEE Infocom 2000, Tel Aviv, Israel, March 2000; and [b] Venkatesh Sarangan, Donna Ghosh and Raj Acharya, "Performance Analysis of Capacity-aware State Aggregation for Inter-domain QoS Routing", Globecom 2004, Dallas, December 2004, pp. 1458-1463). Although aggregation can decrease the number of entries in the routing tables significantly, and some QoS parameters can be considered in the routing decision (so-called QoS routing), in itself it cannot provide end-to-end QoS.
Many proposals are based on a single-domain network environment using the existing hierarchical aggregation capabilities of the Private Network to Network Interface (PNNI) or OSPF protocols (see Yuval Shavitt, "Topology Aggregation for Networks with Hierarchical Structure: A Practical Approach", 36th Annual Allerton Conference on Communication, Control and Computing, September 1998).
Furthermore, the existing aggregation solutions do not consider the inter-working of intra- and inter-domain routing protocols.
Another limitation of the existing solutions is the pre-defined topology of the aggregated network, such as tree, star or full-mesh (see: [a] Yuval Shavitt, "Topology Aggregation for Networks with Hierarchical Structure: A Practical Approach", 36th Annual Allerton Conference on Communication, Control and Computing, September 1998; [b] Fang Hao, Ellen W. Zegura, "On Scalable QoS Routing: Performance Evaluation of Topology Aggregation", Proceedings of IEEE Infocom 2000, Tel Aviv, Israel, March 2000; and [c] Venkatesh Sarangan, Donna Ghosh and Raj Acharya, "Performance Analysis of Capacity-aware State Aggregation for Inter-domain QoS Routing", Globecom 2004, Dallas, December 2004, pp. 1458-1463). Simple topologies, like the different types of stars and trees, cannot be used if end-to-end QoS guarantees are to be provided (with these topologies there is an information loss in the aggregation process). In the article entitled "Routing with Topology Aggregation in Bandwidth-Delay Sensitive Networks" by K-S. Lui, K. Nahrstedt, and S. Chen, IEEE/ACM Transactions on Networking, February 2004, Vol. 12, No. 1, pp. 17-29, the authors propose a scheme for information-loss-free network representation using a star topology expanded with special links, called bypass links. The main drawbacks of the method are that very complicated computations are required for each update and that the aggregated topology can change significantly between updates. As a result, the applicability of the method in a real network environment is limited.
Furthermore, the application of pre-defined topologies is limited in the case of a multi-provider network environment.
In the article entitled "Macro-routing: a new hierarchical routing protocol" by Sanda Dragos and Martin Collier, presented at Globecom 2004, Dallas, Texas, 29 November to 3 December 2004, the authors propose an automatic route-discovery scheme which uses so-called mobile agents to find an appropriate path over a domain, the aggregated topology being built up using these paths. The problem with the method is that the routing information updates require significant time.
It is also important to appreciate that there is no paper in the literature that deals with resilience issues in the case of topology aggregation.
Three main issues are addressed by an embodiment of the present invention:
Firstly, a traffic engineering solution is provided in a multi-domain environment. The main part of this solution in one embodiment consists of a link-state type (link weights) aggregated description of the intra-domain routing information and the flooding mechanism of this information through the network. This aggregated routing information is then combined with the appropriate intra-domain routing information, forming an overall network view in all nodes. A further new feature is that the above routing information does not determine a priori the path of a demand. Any source node can modify the link weights according to its additional information or existing knowledge from previous route selection procedures.
Secondly, an algorithmic solution is proposed for the harmonization of the aggregated weight systems of different domains. The modification of the weight system caused by successful or unsuccessful demand establishment is also part of the weight harmonization methodology.
Thirdly, a resilience methodology is also proposed, which conforms to the above model, but it can also be applied separately.
A more detailed description of an embodiment of the present invention will now be provided, based on the following definitions and assumptions:
The following assumptions are made regarding the network:
In order to select an optimal route in the network, the entire network topology must be known so that appropriate paths can be selected. However, this would result in an unmanageable amount of routing information and, moreover, the operators are often unwilling to provide information about their intra-domain topology and routing strategy.
The solution proposed in an embodiment of the present invention is to use an aggregated virtual network topology to describe the multi-domain area; only this aggregated network topology and routing information is considered in the routing decision.
Some properties of the aggregated topology in an embodiment of the present invention are as follows:
In summary, the basic idea of topology aggregation is as follows. A domain naturally has all topology and routing information about itself; it sees an aggregated view of the other domains in the network; and it advertises an aggregated view of itself on the basis of the above criteria.
The routing is based on the administrative weights of the links (see short-dashed lines for intra-domain links and solid bold lines for inter-domain links in the accompanying drawings); therefore, adequate weights have to be calculated for the aggregated links (see long-dashed lines in the accompanying drawings).
The weight could be the sum of the weights along the "shortest" path between a pair of border routers, where the "shortest" path could represent, for example, the physically shortest non-overloaded path or the least loaded path, according to the routing policy applied in a given domain.
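By way of illustration only, the following Python sketch computes such a virtual link weight as the sum of the administrative weights along a shortest intra-domain path between two border routers, using Dijkstra's algorithm; the graph representation and the function name are illustrative assumptions rather than part of the proposal.

```python
import heapq

def virtual_link_weight(adj, src_dbr, dst_dbr):
    """Weight of the virtual link between two domain border routers,
    taken here as the sum of administrative weights along a shortest
    intra-domain path.  `adj` maps each node to a list of
    (neighbour, administrative_weight) pairs (assumed layout)."""
    dist = {src_dbr: 0}
    heap = [(0, src_dbr)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst_dbr:
            return d                       # shortest-path weight found
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for neighbour, w in adj.get(node, []):
            nd = d + w
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return float("inf")                    # DBRs not connected inside the domain
```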
If the operators would like to provide differentiated QoS services, then a simple number for a weight may not be enough to meet their wishes; instead, it is required that the weights are represented as a vector. This vector could contain the throughput, the delay or any other quality-related parameters between the two border routers of the domain represented by the given link.
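Purely as an illustration of such a vector weight, the following sketch assumes that delay-like components accumulate additively along a path while throughput is limited by the bottleneck link; the field names and composition rules are assumptions, not prescribed by the proposal.

```python
from dataclasses import dataclass

@dataclass
class WeightVector:
    """Illustrative vector weight for a (virtual) link."""
    delay_ms: float         # delay-like components add up along a path
    throughput_mbps: float  # throughput is limited by the bottleneck link

    def combine(self, other: "WeightVector") -> "WeightVector":
        # assumed composition rules when concatenating two links/paths
        return WeightVector(
            delay_ms=self.delay_ms + other.delay_ms,
            throughput_mbps=min(self.throughput_mbps, other.throughput_mbps),
        )
```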
More detailed information concerning the computing of routing information, flooding and updating will now be provided, summarizing the main steps of how to calculate the administrative weights in the aggregated topology and how to use them in the route selection.
In each domain, the domain border routers (DBRs) are responsible for setting up the direct virtual links between each other at the aggregated level, computing the weights of the virtual links on the basis of the intra-domain link utilizations or other policy-based issues, and forwarding the link-state advertisements of the virtual links over the network, similarly to the OSPF flooding mechanism.
For the task of building up the aggregated network topology, in this embodiment the key pieces of equipment are the DBRs; they are responsible for building up peering connections with their neighbour DBRs, for agreeing on the aggregated topology of the domain they belong to, and for distributing the link-state information about the links connected to them.
The main steps of the building up and the maintenance of the aggregated topology are summarized as follows:
The routing information distributed in the network is simply the weight of the links in the aggregated topology. However, several considerations can be taken into account in the way these weights are computed. (Note that more efficient protection requires more information; see the description below under the heading "Protection and resilience schemes".) On the one hand, the weight calculation policy can be different for the real links and for the virtual links and, on the other hand, the applied policy can be different in each domain. Some typical weights are considered below, together with how they are mapped onto the virtual links.
There are two typical weighting groups:
In the case of an inter-domain virtual link, its weight could be one of the above-mentioned link weights. An intra-domain virtual link represents some kind of shortest path between two DBRs, so there are several alternatives for weight calculation. Two main alternatives are:
Because the method according to an embodiment of the present invention is link-state based routing, the synchronization of the distributed link-state databases is an important issue. To synchronize the databases, a flooding mechanism is proposed that is similar to the known OSPF flooding.
The mechanism is as follows on the entire network level:
It is assumed that all nodes' databases are synchronized and that, in one domain, there is a change on one link. This causes an intra-domain OSPF link-state advertisement and flooding process.
During the OSPF flooding, the DBRs' peers (those whose aggregated-level virtual links contain the current link) recognize that the weights on a physical link have changed, so the weights of the corresponding virtual links are no longer valid.
The source DBR(s) of the corresponding virtual link(s) update the virtual link weight(s) according to the methods described above in the part headed "Routing information computing", form a link-state update message and send it to all neighbours. The frequency of the virtual link weight updates needs to fulfil different requirements, as considered below in the description headed "Updating the aggregated link weights".
The link-state update message contains:
If a DBR gets a virtual-link-state update message from one of its neighbours, it repackages the message, puts its router ID into the message, and sends the message out on all interfaces except the one on which the update message was received. At the same time, the current DBR sends an acknowledgement back to the DBR which sent the update message. During this procedure, the DBR-servers in all domains will get at least one copy of the link-state update message. Then the DBR-servers repackage the message and send it directly to all nodes in the current domain. When a node gets the update message, it updates its local database. If there is more than one DBR-server in the domain, then the highest-priority server will send out the information through the domain.
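The forwarding behaviour described above may be sketched as follows; the message fields, the class name and the duplicate-suppression key (origin plus sequence number) are illustrative assumptions under which the sketch is written.

```python
class DBR:
    """Minimal sketch of the described DBR behaviour: acknowledgement,
    duplicate suppression, repackaging with its own router ID, and
    flooding on all interfaces except the incoming one."""

    def __init__(self, router_id):
        self.router_id = router_id
        self.links = {}     # interface name -> (peer DBR, peer's interface name)
        self.seen = set()   # (origin, sequence) pairs already flooded
        self.database = {}  # local virtual-link-state database

    def connect(self, iface, peer, peer_iface):
        self.links[iface] = (peer, peer_iface)

    def acknowledge(self, iface):
        pass  # placeholder: a real implementation would send an ACK on `iface`

    def on_update(self, msg, in_iface):
        self.acknowledge(in_iface)            # ack back to the sending DBR
        key = (msg["origin"], msg["sequence"])
        if key in self.seen:                  # duplicate: already flooded
            return
        self.seen.add(key)
        self.database[msg["virtual_link"]] = msg["weight"]
        relayed = dict(msg, relayed_by=self.router_id)  # "repackage", add own ID
        for iface, (peer, peer_iface) in self.links.items():
            if iface != in_iface:             # every interface except the incoming one
                peer.on_update(relayed, peer_iface)
```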
From the viewpoint of efficient route selection, it is important that the aggregated network view carries up-to-date utilization/delay/etc. information about the real network. For that reason, it is a basic requirement to update the aggregated link weights at the required frequency. On the other hand, it is desirable to avoid an unnecessarily large volume of administrative traffic related to the aggregated network topology.
If the demand arrivals/tear-downs have low intensity, then these relatively infrequent events can trigger the corresponding DBR(s) to start an update process (triggered update). Otherwise, the DBR(s) start an update process at predefined periods.
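One possible reading of this triggering rule is sketched below; the threshold and period values are illustrative assumptions only.

```python
class UpdatePolicy:
    """Sketch of the update triggering rule described above: low-intensity
    demand events trigger an immediate update, otherwise updates are sent
    at a predefined period."""

    def __init__(self, intensity_threshold=0.1, period_s=30.0):
        self.intensity_threshold = intensity_threshold  # events/s regarded as "low"
        self.period_s = period_s                        # predefined update period
        self.last_update = 0.0

    def should_update(self, now, event_rate):
        if event_rate <= self.intensity_threshold:
            return True                                 # low intensity: triggered update
        return now - self.last_update >= self.period_s  # otherwise: periodic update

    def mark_updated(self, now):
        self.last_update = now                          # called after an update is sent
```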
With the knowledge of the virtual topology and the weights, the source node of a demand can calculate the appropriate route by performing a shortest path algorithm.
After the route is selected on the virtual topology, the source of the demand sends a reservation request along the route. This message contains the virtual links, the required capacity (or additional QoS parameters) and the destination node. The resolution process consists of four blocks:
It can be seen that, if the weights of the virtual links are properly calculated, then the route from the source to a DBR of the destination is optimal without seeing the details of the real route. Here, a DBR selection procedure is carried out, which is to choose the closest DBR by default, but this procedure can be extended to be optimal: if it is possible to poll the DBRs of the destination node about the weight of the route between them and the destination node, then the entire route will be optimal.
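The default "closest DBR" selection, optionally refined by polling the destination domain's DBRs, might be sketched as follows; the data structures and the polling callable are assumptions introduced for illustration.

```python
def select_destination_dbr(virtual_dist, dest_dbrs, poll=None):
    """Choose a DBR of the destination domain.  By default the one closest
    on the virtual topology is taken; if the destination's DBRs can be
    polled for the weight of the route between them and the destination
    node, that weight is added so the end-to-end choice becomes optimal.
    `virtual_dist` maps each DBR to its distance from the source on the
    aggregated topology; `poll` is an optional callable (an assumption)."""
    def total_cost(dbr):
        tail = poll(dbr) if poll is not None else 0.0  # polled tail weight, if any
        return virtual_dist[dbr] + tail
    return min(dest_dbrs, key=total_cost)
```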
A particular resolution process can be the combination of the above steps as follows:
Note that inaccuracy problems in the updating procedure of the link weights may mean that, along the selected "virtual" route, there are not enough free resources or the QoS requirements cannot be met in the underlying real network. After a notification step, a resilience process should be carried out in that case, as detailed below in the description headed "Routing inaccuracy problems".
The task of the protection is divided into two parts according to the place of the failure. Since the intra-domain territories are hidden from the outside in the virtual topology, failures arising in these territories have to be handled within the domain (see below in the description headed "Intra-domain traffic protection"). Moreover, this property and the multi-operator environment imply that the operation of the protection and resilience scheme is distributed. On the other hand, the protection of the inter-domain traffic (traffic on inter-domain links) should be solved by an agreement of the domains/operators (see below in the description headed "Inter-domain traffic protection"). This is realized as a parallel resource reservation beside the primary paths; however, the resources can be shared very effectively (for details see the description headed "Resource sharing"). Then some possible routing inaccuracy problems and their prevention are introduced (see the description headed "Routing inaccuracy problems"). Finally, a weight harmonization method is proposed in order to avoid unbalanced, inaccurate routing caused by the different weight calculation policies applied in the different domains (see the description headed "Weight harmonization").
The proposed technique combines the domain-by-domain and the end-to-end protection schemes:
The proposed resource sharing technique (see the description headed “Resource sharing”) guarantees the minimal resource reservation for the protection at a given weighting.
In the case of intra-domain protection, the routing resolution on intra-domain virtual links is extended to provide two independent paths between the DBRs instead of one. In the case of inter-domain link protection, however, the routing resolution remains the same.
A per-domain internal protection scheme is proposed, in which each operator handles intra-domain failures locally, independently of the other operators. In this scenario, it can be assumed that the source node need not take action against the failure (in fact, it is not even informed about a failure of this kind). Furthermore, the connections of the users are able to survive more than one failure if the failures occur in different domains.
Besides a demand's requirements on delay and capacity, the provisioning of QoS can also involve protection and resilience requirements, which in practice means that different values are requested for the primary and the backup paths. Therefore, in a model embodying the present invention, the weight of a link in the virtual topology is given by two fields: one field for the weight of the primary path and another for the weight of the backup path. So, in the case of intra-domain link resolution between two DBRs, two shortest paths are computed based on the two fields: a primary and a backup.
In the route selection process, a route can be chosen in such a way that both the primary and the backup path satisfy the demand. It can still happen that the domain cannot provide a backup path in the real topology when resolving the virtual link. Therefore, in the case of an intra-domain failure, a resolution process should be carried out; however, different paths will probably be selected for the demands using the failed virtual link in question.
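A minimal sketch of checking a candidate route against both weight fields is given below, assuming additive weights and separate bounds for the primary and backup paths; these modelling choices are illustrative only and not fixed by the proposal.

```python
def route_satisfies_demand(route_links, weights, max_primary, max_backup):
    """Check a candidate route on the virtual topology against both fields.
    `weights` maps a link to a (primary_weight, backup_weight) pair; the
    demand is assumed to state separate bounds for primary and backup."""
    primary = sum(weights[link][0] for link in route_links)  # primary-path field
    backup = sum(weights[link][1] for link in route_links)   # backup-path field
    return primary <= max_primary and backup <= max_backup
```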
Inter-Domain Traffic Protection
Demands may also require protection of their traffic in the case of inter-domain link failure. So, in this case, two paths that share no inter-domain link are calculated and reserved by the source node of the demand.
It is a general requirement to keep the reserved resources at a minimum. In the case of intra-domain protection, a shared protection scheme can be applied when calculating the backup paths for each primary path corresponding to each virtual intra-domain link.
In the case of inter-domain protection, the intra-domain resources used should be minimized. Since only a one-failure-at-a-time scenario is considered, the intra-domain backup resources can be freely used for inter-domain protection. In order to provide intra-inter sharing, it has to be indicated during the reservation process that this is the inter-domain backup path of a particular demand ("backup reservation"), so no extra protection is needed and the capacity can be shared with the intra-domain backup paths.
With additional indicators, the inter-domain link protection capacity can also be shared. Then not only the "backup status" has to be indicated, but also the list of the inter-domain links to whose failure the particular inter-domain backup path corresponds. In this case, the capacity reserved for protection purposes in the network can be shared between the inter-domain link failures. The way of sharing the capacity of a link is highlighted in the accompanying drawings:
The first part is reserved for primary traffic regardless of whether the link is an inter-domain or an intra-domain link.
The second part is reserved for the backup traffic and can be totally shared between intra-domain protection and inter-domain protection. Moreover, in both cases the full capacity can be shared between the demands going through or generated inside the particular domain. Furthermore, in the case of inter-domain protection, the capacity can also be shared between the inter-domain link failures, as stated above.
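Under the one-failure-at-a-time assumption, the shared backup part of each link can be sized as the maximum, over the single-failure scenarios, of the backup traffic rerouted onto that link; intra- and inter-domain backup demands draw on the same pool. A minimal sketch follows; the data layout is an assumption made for illustration.

```python
from collections import defaultdict

def shared_backup_capacity(backup_usage):
    """Size the shared backup part of each link.  `backup_usage` maps
    (failed_element, link) -> bandwidth rerouted onto `link` when
    `failed_element` fails; both intra- and inter-domain backup demands
    contribute to the same pool."""
    per_link = defaultdict(lambda: defaultdict(float))
    for (failure, link), bandwidth in backup_usage.items():
        per_link[link][failure] += bandwidth  # backup load on `link` per scenario
    # One failure at a time: the pool need only cover the worst single scenario.
    return {link: max(by_failure.values()) for link, by_failure in per_link.items()}
```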
A distributed, non-real-time link weight management system causes routing inaccuracy problems in practice. In the case of protection, these problems are twofold: besides there being insufficient free resources or unsatisfactory QoS in the underlying real network, these problems may affect both the primary and the backup paths. If such a problem is realized during the routing resolution, the source node is notified about the error (no resources, QoS degradation or protection degradation). Unless the source accepts the error or degradation, new paths are selected on the virtual topology, omitting the defectively resolved links.
Note that the update procedure of the link weights should be restricted to the proper level of the network. The update of the real weights is by definition restricted to each domain. The DBRs then calculate the weights of the virtual links based on the above real weights, and the updating of the virtual links should be limited to a flooding between the DBRs. Finally, the DBR-servers will inform the nodes inside their domains about the weights in the virtual topology.
The routing algorithm depends highly on the weights assigned to the links of the aggregated topologies. These weights are obtained from the controlling parameters of the actual domain topologies. However, neither the way of setting these parameters nor the aggregating algorithms are standardized. This may induce serious inconsistency in the aggregated topology, so that the inter-domain traffic would be routed according to incomparable sets of weights. If the range of the weights used by the domains differs, the routing will be done based on false information.
To overcome the above problem, a scaling method is proposed to handle this possible inconsistency. The routing engine therefore maintains a floating-point scale-value C_d assigned to each domain d ∈ D. The propagated link weights are then scaled by the corresponding scale-value, and the path allocation uses these scaled weights.
It is assumed that the inter-domain links are operated by their destination domain, separately for each direction; their weights are therefore scaled accordingly.
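Applying the scale-values before path allocation might then look as follows; the data layout is an illustrative assumption.

```python
def scaled_weights(links, scale):
    """Scale every propagated link weight by the scale-value C_d of the
    domain d that operates the link (for an inter-domain link, its
    destination domain, per direction).  `links` maps a link to a
    (weight, operating_domain) pair; `scale` maps a domain to C_d."""
    return {link: weight * scale[domain]
            for link, (weight, domain) in links.items()}
```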
In order to avoid unstable weights, the following two normalization conditions apply:
This automatically keeps the scale-values at most 1. The normalization is simply done by dividing all the weights by the value
after each weight modification.
A lower threshold t_min is introduced and the scale-values are maintained to be at least t_min (a practical value for t_min might be around
If some scale-values disobey this condition, the following normalization is applied:
Note that, while this latter operation also preserves the first normalization condition, the same is not true of the first normalization operation, i.e. it can result in weights below the lower threshold. So, the order of the normalization operations is important.
Scale-values are modified by two basic operations: a set H ⊂ D can be promoted or demoted by a given value μ > 1.
Note that not only does the promote operation decrease the scale-values of the promoted domains, but it also increases the others at the same time. The opposite is, of course, true for the demote operation.
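Since the exact promote and demote operations are not reproduced here, the following sketch shows one plausible reading, in which promotion divides the scale-values of the set H by μ and the two normalization conditions are then enforced in the stated order; the threshold value and the clamping rule are assumptions.

```python
T_MIN = 1e-3  # illustrative lower threshold t_min for the scale-values

def normalize(scale):
    """Enforce the two normalization conditions in the stated order: first
    divide every scale-value by the maximum (keeping them at most 1), then
    raise any value below t_min.  The second step preserves the first
    condition; the reverse order would not, as noted above."""
    top = max(scale.values())
    for d in scale:
        scale[d] /= top
    for d in scale:
        scale[d] = max(scale[d], T_MIN)  # simple clamping, an assumed rule

def promote(scale, H, mu):
    """Promote the domains in H by mu > 1: their scale-values shrink and,
    through renormalization, the others may grow relative to them."""
    for d in H:
        scale[d] /= mu
    normalize(scale)

def demote(scale, H, mu):
    """Demote the domains in H by mu > 1: the mirror image of promote."""
    for d in H:
        scale[d] *= mu
    normalize(scale)
```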
The update process of the scale values is based on the success of the demand allocation. Three strategies can be used here.
Note that if an immediate reallocation of the demand were attempted, the same path would be obtained, so the bandwidth allocation would probably fail again. Hence, this scheme can only be applied if new demands appear very frequently in the network. This latter assumption ensures that the scale-values will change significantly between two allocation trials of a certain demand. To sum up, this scheme should rather be considered a theoretical possibility.
Because in a real-life network allocation failures should be significantly less frequent than successful allocations, the proper value of μ_dem is much larger than μ_pro.
Note again that a failure requires a much larger reconfiguration of the scale-values than a successful allocation. Therefore, it is important to use a μ_dem value that is much larger than μ_pro.
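One assumed update policy consistent with these remarks, built on the promote and demote sketches above, is to promote the domains on the path gently after a successful allocation and to demote the domain(s) where a failure occurred much more strongly; this is only one possible strategy, not the prescribed one.

```python
MU_PRO = 1.05  # gentle promotion on success (assumed value)
MU_DEM = 2.0   # much larger demotion on failure, per the remark above (assumed value)

def on_allocation_result(scale, path_domains, success, failed_domains=()):
    """One assumed policy, using promote()/demote() from the sketch above:
    a successful allocation gently promotes the domains on the path, while
    an allocation failure strongly demotes the domains where it occurred."""
    if success:
        promote(scale, set(path_domains), MU_PRO)
    else:
        demote(scale, set(failed_domains), MU_DEM)
```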
The following summarises at least some of the protocol extensions proposed according to an embodiment of the present invention:
In this part of the description, some numerical investigation of the proposed solution is presented.
It is important to see the effect of the topology aggregation and the inaccuracy of the routing (caused by the not-always-up-to-date weights) in order to analyze the efficiency of the routing. The tests used a sample European network topology shown in the accompanying drawings.
Four different cases were analyzed:
The load of the network was set to provide approximately 1% blocking in the reference case, when all topology information is known. It was found that the blocking probability can be kept at an acceptable level even if the updates come only after every 50 events.
The motivation in the construction of the resilience mechanism was twofold: to minimize the recovery time, and to keep the recovery mechanism as simple as possible.
The required recovery time of the domain-by-domain protection is much less than that of the end-to-end protection, since the domain-by-domain protection yields a "per-domain fast reroute" scheme containing many bypasses between the primary and the backup routes. Another important feature of the domain-by-domain protection is that the providers can handle failure situations independently.
Capacity sharing between the intra- and inter-domain traffic protection depends on the length of the inter-domain protection path, measured in the number of visited domains. If this number equals the length of the primary (and the intra-domain protection) path, or the following inequality is true, then the intra- and inter-domain backup traffic can, by definition, share the resources in practice:
where the inner traffic refers to a domain's own traffic and the transit traffic is the traffic of demands which have an intra-domain backup path and go through the particular domain. In other cases, the inter-domain protection needs additional resources compared to the intra-domain protection.
An embodiment of the present invention has one or more of the following advantages:
It will be appreciated that operation of one or more of the above-described components can be controlled by a program operating on the device or apparatus. Such an operating program can be stored on a computer-readable medium, or could, for example, be embodied in a signal such as a downloadable data signal provided from an Internet website. The appended claims are to be interpreted as covering an operating program by itself, or as a record on a carrier, or as a signal, or in any other form.