The present invention relates to packet ring networks and, more particularly, to bandwidth allocation and traffic management thereof.
Managing the data traffic in a ring network, such as an Ethernet Ring, involves two general procedures: determining the applicable bandwidth parameters at each point in the ring; and moderating the actual (real-time) data traffic at each point according to the applicable bandwidth parameters.
Bandwidth parameters take into account factors including, but not limited to: Quality of Service (QoS); Class of Service (CoS); Class Type (CT); ringlet (also denoted as “ring side”, being one of: “inner” and “outer”); and failure protection scenario. Moreover, rates are categorized in terms of Committed Information Rate (CIR) and Excess Information Rate (EIR), as well as combinations thereof in cases of shared resources.
Moderating the real-time traffic is typically done via hardware modules which detect real-time traffic rates, and which buffer and schedule transmission of data packets according to various prior-art strategies and algorithms. A typical goal of traffic management is to minimize network latency, especially for high-priority classes of service, by versatile utilization of bandwidth resources. At the same time, however, it is also desired to avoid traffic congestion, because this can cause failures in sustaining QoS for certain classes. It is therefore highly desirable to know the details of the available bandwidth distribution in order to moderate real-time traffic efficiently while minimizing the probability of congestion.
The available bandwidth distribution (as a function of the factors listed above) typically varies, however, especially in cases of node and/or segment failure. Failure of a single node and/or segment typically has an effect on the available bandwidth throughout the ring, and the effect is typically different from one node to another.
When configuring or reconfiguring a ring network, however, the network elements currently have limited information about the available bandwidth parameters, and therefore cannot configure traffic management in the best way possible.
There is thus a need for, and it would be highly advantageous to have, an improved way of dependably determining the available bandwidth parameters of a ring network and thereby providing efficient traffic management functionality to the network elements thereof. This goal is met by the present invention.
The present invention provides a functionality and method for determining aggregate bandwidth requirements for an Ethernet ring network, based on a priori knowledge of the ring network, including the topology, path utilization, bandwidth sharing, and the failure protection scenarios of the ring network. It is therefore assumed that the topology and failure scenarios are known a priori, and that a priori primary bandwidth allocation data is available from a resource or agent such as a bandwidth broker.
According to embodiments of the present invention, aggregate bandwidth requirements are furnished in an aggregate bandwidth database; only transit bandwidth requirements through each node are determined—add traffic and download traffic are not taken into account; the aggregate bandwidth requirements are separately determined for each node of the ring network; aggregate bandwidth is automatically determined upon configuration or reconfiguration of the ring network; and aggregate bandwidth requirements are determined in a manner that is independent of the actual real-time data traffic rates—i.e., actual real-time data traffic rates do not affect the aggregate bandwidth. The present invention therefore does not require any actual real-time data traffic rate information.
The resulting aggregate bandwidth requirements are available for traffic management and configuring the ring network, to improve the operation, efficiency, and Quality of Service.
Therefore, according to the present invention there is provided a traffic management functionality for an Ethernet ring network having a plurality of nodes and segments, the traffic management functionality including: (a) a priori knowledge based bandwidth responsive functionality for automatically configuring a set of traffic management parameters at at least one of the plurality of nodes; and (b) a priori knowledge based bandwidth change responsive functionality for automatically reconfiguring the set of traffic management parameters at at least one of the plurality of nodes; (c) the bandwidth responsive functionality and the bandwidth change responsive functionality being: (d) based on a priori knowledge of available bandwidths of the plurality of nodes and segments; and (e) independent of traffic rates within the Ethernet ring.
In addition, according to the present invention there is also provided a method for providing to a traffic management module an aggregate bandwidth database for an Ethernet ring having a plurality of nodes, the method including: (a) obtaining a bandwidth allocation database containing an allocated pass-through bandwidth corresponding to each node of the plurality of nodes; (b) providing an aggregate bandwidth database containing at least one field, the field having at least one value initialized to zero and associated with a predetermined set of keys having at least one key selected from: a Class of Service; a Class Type; a ring side; and a protection scenario; (c) for a given node of the plurality of nodes, aggregating to the at least one value the allocated pass-through bandwidth corresponding to the given node according to a predetermined rule; and (d) furnishing the aggregate bandwidth database to the traffic management module.
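By way of a non-limiting illustrative sketch (the function name, the key structure, and all data values below are hypothetical and form no part of the claimed method), steps (a) through (d) may be realized as follows:

```python
# Illustrative sketch of method steps (a)-(d).  The function name, the
# key structure, and all data values are hypothetical assumptions.

def build_aggregate_db(allocation_db, keys, rule):
    """Steps (b)-(c): initialize each keyed field to zero, then aggregate
    the allocated pass-through bandwidths according to the rule."""
    aggregate_db = {key: 0 for key in keys}        # step (b): initialize to zero
    for entry in allocation_db:                    # step (c): aggregate per rule
        key = (entry["cos"], entry["ringlet"], entry["scenario"])
        if key in aggregate_db:
            aggregate_db[key] = rule(aggregate_db[key], entry["kbps"])
    return aggregate_db                            # step (d): furnish the database

# Step (a): a bandwidth allocation database, as a Bandwidth Broker might supply.
allocation_db = [
    {"cos": "AF", "ringlet": "outer", "scenario": "normal", "kbps": 20},
    {"cos": "AF", "ringlet": "outer", "scenario": "normal", "kbps": 30},
    {"cos": "AF", "ringlet": "inner", "scenario": "normal", "kbps": 50},
]
keys = [("AF", "outer", "normal"), ("AF", "inner", "normal")]
result = build_aggregate_db(allocation_db, keys, rule=lambda acc, bw: acc + bw)
print(result[("AF", "outer", "normal")])  # 50
```

In this sketch the rule simply adds rates; a predetermined rule may instead select a maximum for shared resources, as discussed below.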
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
The principles and operation of traffic management functionality and method according to the present invention may be understood with reference to the drawings and the accompanying description.
Ring 100 has multiple nodes, including a node A 107, which is the node containing traffic management module 101 and traffic management functionality 102. Other nodes contain similar modules and functionalities (not shown). A typical other node is a node B 109. Also shown are multiple transmission media segments (“segments”), such as a segment 111 denoted as DA; and a segment 113 denoted as CB. Data packets input through a node into the ring constitute what is commonly called “add” traffic, and this is generally known at any given node. Data packets which pass through a node from one segment of the ring to another, however, constitute what is commonly called “transit” traffic, and this is generally unknown.
In certain embodiments of the present invention, the ring network is an Ethernet ring.
According to embodiments of the present invention, functionalities are based on pre-existing information related to bandwidth capacities in a ring network. Certain embodiments of the present invention rely for this on a predetermined bandwidth allocation database, as is typically provided by a Bandwidth Broker (BWB).
In particular, certain embodiments of the present invention make no use of actual real-time traffic rates within the ring network, and the functionalities thereof are for automatically configuring traffic management parameters for the ring network in a manner that is independent of the traffic rates in the ring.
In embodiments of the present invention, traffic management parameters of the ring network are data transmission bandwidths. A non-limiting example of traffic management configuration is the setting of the parameters for a Weighted Fair Queuing (WFQ) shaper.
The details of the response operation according to an embodiment of the present invention are as follows:
A loop entry point 203 with a loop exit point 213 defines a procedure which iterates on the ring nodes. For each node, a step 205 initializes the value of an aggregate bandwidth database field 215 to zero. The details of an aggregate bandwidth database 219 and the structure thereof are illustrated in the drawings and discussed below.
Next, a loop entry point 207 with a loop exit point 211 defines a procedure which iterates on the fields of bandwidth allocation database 202 which are applicable to node_i. Details of the database fields are discussed below. It is understood that aggregate bandwidth database field 215 is shown as being representative of a general aggregate bandwidth database 219 field denoted as field_j for the purpose of illustrating the procedure, and does not represent any given field in particular. Specifically, as loop entry point 207 iterates over all values of j for bandwidth allocation database 202, all applicable fields of aggregate bandwidth database 219 will be updated as provided by predetermined rules 217.
According to certain embodiments of the present invention, it is assumed that bandwidth responsive functionality 103 has access to: the ring reference topology; and a ring-wide database of bandwidth allocations.
As is common practice in the field, a ring-wide database of bandwidth allocations 202 is constructed and distributed by a Bandwidth Broker (BWB) 201.
In a step 209, the bandwidth parameter values in bandwidth allocation database 202 which are associated with the various fields whose bandwidths are applicable to node_i are aggregated to the value of field_j 215 in accordance with the rules of a predetermined rule set 217.
Predetermined rule set 217 determines conditions including, but not limited to: whether a parameter value in bandwidth allocation database 202 is aggregated to the value of field_j 215 (or not aggregated); and, if so, specifically how a parameter value in bandwidth allocation database 202 is aggregated to the value of field_j 215.
The term “predetermined rule” herein denotes a rule based on factors including, but not limited to: network topology; failure scenario; and bandwidth sharing.
Although a rule's structure is predetermined, it is understood that the factors above (network topology, failure scenario, etc.) may change and are therefore determined at the time the rule is applied. This is illustrated in the LSP example presented below.
The terms “aggregate”, “aggregated”, and related forms herein denote the inclusion of a parameter value into a collective overall amount. Depending on the predetermined rule in effect, aggregation is performed in ways including, but not limited to: addition; and selecting the greater (or maximum) of two or more values.
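A non-limiting sketch of such a rule is given below; the record fields and the particular conditions are illustrative assumptions, showing only the two aggregation modes named above (addition and selection of the maximum) together with the exclusion of non-transit traffic:

```python
# Hypothetical predetermined rule: decide whether an allocation applies,
# then aggregate by addition (independent) or by maximum (shared).
def apply_rule(field_value, entry, node, shared):
    # traffic added or dropped at the node itself is not transit traffic,
    # so it is not aggregated
    if entry["src"] == node or entry["dest"] == node:
        return field_value
    if shared:
        # shared resources: select the greater of the values
        return max(field_value, entry["kbps"])
    # independent allocations: addition
    return field_value + entry["kbps"]

print(apply_rule(40, {"src": "D", "dest": "B", "kbps": 20}, "A", shared=False))  # 60
print(apply_rule(40, {"src": "D", "dest": "B", "kbps": 20}, "A", shared=True))   # 40
print(apply_rule(40, {"src": "A", "dest": "B", "kbps": 20}, "A", shared=False))  # 40
```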
The LSP example given below shows the application of some predetermined rules.
Elements of a ring can be shared among different paths, and, according to embodiments of the present invention, this case is taken into account by the predetermined rules for aggregating bandwidth capacity, as illustrated in the drawings.
After iteration loop 203 has iterated over all applicable nodes, aggregate bandwidth database 219 contains the aggregate bandwidth data for the ring. In embodiments of the present invention, an application can then use this data for ring management or analysis. A non-limiting example of an application is a traffic management module 221, which provides traffic management functionality for an Ethernet ring network.
In an embodiment of the present invention, the application is external to the bandwidth responsive functionality. In another embodiment of the present invention, the application and the bandwidth responsive functionality are contained in a common hardware, software, or hardware/software module. In yet another embodiment of the present invention, the bandwidth responsive functionality contains the application. In a further embodiment of the present invention, the application contains the bandwidth responsive functionality. In a still further embodiment of the present invention, the functionality is contained within a ring network, within a node thereof, or within a network entity or network element (NE) thereof.
An aggregate bandwidth database (such as aggregate bandwidth database 219 in the drawings) is organized as follows.
The term “database” herein denotes any data structure, or part thereof, which is arranged according to a schema for storing and retrieving at least one data value organized by at least one key and contained in machine-readable data storage of any kind, including, but not limited to: computer memory, RAM, ROM, and the like; magnetic and optical storage, and the like; flash memory storage; computer and network data storage devices; or in similar devices, apparatus, or media.
In particular, the term “database” is herein expansively construed to include data organized in tabular format, where data values appear in cells arranged in one or more rows and/or one or more columns serving as keys. Representations of databases in table format herein are understood to correspond to data stored in machine-readable devices and media.
As shown in the drawings:
Keys for an aggregate bandwidth database according to embodiments of the present invention include, but are not limited to: a Class of Service (such as: High Priority; Expedited Forwarding; Assured Forwarding; Best Effort; and Expedited Forwarding multicast), a Class Type (such as: Real Time; T1 Committed Information Rate (CIR); T1 Excess Information Rate (EIR); T2 Committed Information Rate; and T2 Excess Information Rate); a ring side (Outer side, also referred to as “East ringlet”; and Inner side, also referred to as “West ringlet”); and a protection scenario (such as a normal scenario, where all ring nodes and segments are functioning properly; and a failure scenario, where a particular node and/or segment has failed).
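As an illustrative sketch only (the abbreviations are shorthand for the key values named above, and the failure scenarios assume a four-segment ring with segments AB, BC, CD, and DA, as in the example below), the key space can be enumerated as a Cartesian product:

```python
# Illustrative enumeration of the key space; the abbreviations stand for
# the key values named above and the failure scenarios assume a ring with
# segments AB, BC, CD, and DA.
from itertools import product

classes_of_service = ["HP", "EF", "AF", "BE", "EF-mc"]
class_types = ["RT", "T1-CIR", "T1-EIR", "T2-CIR", "T2-EIR"]
ring_sides = ["outer", "inner"]            # East ringlet / West ringlet
scenarios = ["normal", "fail-AB", "fail-BC", "fail-CD", "fail-DA"]

# every field starts at zero, as in initialization step 205
aggregate_db = {
    key: 0
    for key in product(classes_of_service, class_types, ring_sides, scenarios)
}
print(len(aggregate_db))  # 5 * 5 * 2 * 5 = 250
```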
In a non-limiting example of the present invention, an aggregate bandwidth database is represented as follows, for a ring having four nodes (“A”, “B”, “C”, and “D”) and four segments (“AB”, “BC”, “CD”, and “DA”):
The rows and columns of Table 1 are the keys, and the cells are the fields holding the values. The database example shown in Table 1 is initialized and is currently empty (as initialized in step 205 of
It is emphasized that an aggregate bandwidth database can have additional keys and key values. For example, there are additional classes of Assured Forwarding service than are shown in the non-limiting example of Table 1.
A bandwidth allocation database (such as bandwidth allocation database 202 in the drawings) contains the a priori allocated bandwidths for the paths of the ring.
In a non-limiting example, a bandwidth allocation database is represented as follows:
In this non-limiting example, the database keys include the ID, Source (“Src”), Destination (“Dest”), ringlet, and protection method (“none” for unprotected; “SteerR” for steer-revertive) of a particular path (“Tunnel”). Other keys are also possible. The data values of the fields are in kbps, representing the bandwidth allocations.
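A single record of such a database might be represented as follows; the field names, the tunnel ID, and the kbps figures are hypothetical, chosen only to mirror the keys just listed:

```python
# One hypothetical bandwidth allocation record; the tunnel ID and the
# kbps figures are invented for illustration.
allocation_entry = {
    "id": 1,                    # Tunnel ID
    "src": "D", "dest": "B",    # path endpoints
    "ringlet": "outer",         # ring side
    "protection": "SteerR",     # "none" for unprotected, "SteerR" for steer-revertive
    "cir_kbps": 20,             # Committed Information Rate
    "eir_kbps": 40,             # Excess Information Rate
}
print(allocation_entry["protection"])  # SteerR
```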
As is well-known, a bandwidth allocation database of this sort is provided by a Bandwidth Broker (such as Bandwidth Broker 201 in the drawings).
Ring networks in general offer a number of different failure recovery mechanisms, broadly classified as either wrapping or steering. Certain embodiments of the present invention recognize two protection scenarios: non-protected, and steer-revertive protected. In the non-protected case, if a failure occurs on the path designated for the service, then the service itself unconditionally fails. In the case of steer-revertive protection, the ring is temporarily reconfigured to route the traffic through alternative segments and nodes around the failure.
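The two protection scenarios can be sketched as follows for a four-node ring; the ring orientation (the outer ringlet is taken as A to B to C to D to A), the segment naming, and the function names are assumptions for illustration only:

```python
# Minimal sketch of the two protection scenarios on a four-node ring.
# The ring orientation (outer taken as A->B->C->D->A) and all names are
# assumptions for illustration.
NODES = ["A", "B", "C", "D"]

def _segment(i, j):
    # canonical segment names AB, BC, CD, DA follow the ring order
    a, b = (i, j) if (i + 1) % len(NODES) == j else (j, i)
    return NODES[a] + NODES[b]

def path(src, dest, ringlet):
    """Ordered segments traversed from src to dest on the given ringlet."""
    step = 1 if ringlet == "outer" else -1
    segs, i = [], NODES.index(src)
    while NODES[i] != dest:
        j = (i + step) % len(NODES)
        segs.append(_segment(i, j))
        i = j
    return segs

def active_path(src, dest, ringlet, protection, failed_segment):
    """Where the service's traffic actually flows under a failure."""
    p = path(src, dest, ringlet)
    if failed_segment not in p:
        return p                  # failure elsewhere: path unaffected
    if protection == "none":
        return []                 # non-protected: the service fails
    # steer-revertive: temporarily steer onto the opposite ringlet
    other = "inner" if ringlet == "outer" else "outer"
    return path(src, dest, other)

# an LSP from "D" to "B" on the outer ringlet transits node "A"
print(active_path("D", "B", "outer", "SteerR", failed_segment="DA"))  # ['CD', 'BC']
print(active_path("D", "B", "outer", "none", failed_segment="DA"))    # []
```

Under steer-revertive protection, a failure transiently loads the opposite ringlet, which is why the failure scenarios appear as separate keys in the aggregate bandwidth database.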
According to embodiments of the present invention, general predetermined rules provide for excluding traffic that is not transit traffic:
According to an embodiment of the present invention, predetermined rules for LSP rate aggregation include:
A non-limiting example is illustrated in the drawings.
A label switched path (LSP), denoted as LSP1 409, has node “D” 407 as a source and node “B” 403 as a destination. LSP's are unidirectional, and LSP1 409 is directed through node “A” 401 via an outer ringlet 417.
(It is noted that an LSP is sometimes referred to as a “tunnel” in Multi-Protocol Label Switching, and the term “tunnel” is also used herein with reference to LSP's. In addition, the term “path” is herein expansively construed to denote any data route over one or more segments of a network. A path having more than one segment also involves one or more nodes connecting the segments.)
Another LSP, denoted as LSP2 411, also has node “D” 407 as a source and node “B” 403 as a destination. LSP2 411, however, is directed through node “C” 405 via an inner ringlet 419.
Still another LSP, denoted as LSP3 413, also has node “D” 407 as a source and node “B” 403 as a destination. Like LSP2 411, LSP3 413 is directed through node “C” 405 via inner ringlet 419. It is furthermore stipulated that LSP3 is shared with LSP1, as described previously for bandwidth sharing.
Yet another LSP, denoted as LSP4 415, has node “C” 405 as a source and node “B” 403 as a destination, and is also via inner ringlet 419.
In this non-limiting example, an aggregate bandwidth database is generated for node “A” 401. (To generate a complete aggregate bandwidth database according to certain embodiments of the present invention, a loop would iterate over all the nodes of the ring, as shown by loop entry point 203 in the drawings.)
In order to generate the aggregate bandwidth database, the bandwidth allocation database with a priori knowledge of the network is first obtained, such as through a Bandwidth Broker or similar agent. For this non-limiting example, the bandwidth allocation database is shown in Table 3. In addition, the Class of Service for all LSP's in this non-limiting example is given as Assured Forwarding at T1 rates.
As indicated in initialization step 205, the aggregate bandwidth database is first initialized to zero values.
The initialized aggregate bandwidth database is the same as shown in Table 1 above; simplified for the Assured Forwarding CoS at T1 rates, it is:
Table 4, as well as the other aggregate bandwidth database tables for the LSP example, applies only to the outer ringlet of the ring shown in the drawings.
As noted previously, an empty cell in the table format represents a zero value in the corresponding database field. In addition, the table applies to outer ringlet 417.
Next, to iterate through the various fields of the Bandwidth Allocation Database (as indicated by loop entry point 207), the first iteration group uses the LSP1 fields of Table 5:
[Table 5: recoverable cell values (kbps) 20, 40, 50; table layout not preserved.]
The data values shown in Table 5 apply to the “Normal” scenario (i.e., no failures of any part of the ring) of the Aggregate Bandwidth Database as well as the scenarios involving the failure of segment “BC” or segment “CD”. If, however, segment “AB” or segment. “DA” fails, the data from Table 5 does not apply.
These data values correspond to an “iteration group”, because there are, in this case, three applicable fields, shown with underlined values in Table 5. After iterating on the applicable fields of Table 5, the Aggregate Bandwidth Database is:
[Table 6: recoverable cell values (kbps) 20, 40, 50; 20, 40, 50; 20, 40, 50; table layout not preserved.]
In a like manner, the second iteration group uses LSP2 fields:
[Table 7: recoverable cell values (kbps) 30, 50, 200; table layout not preserved.]
In Table 7, fields for LSP1 are also featured for convenient reference, although they are not used in this iteration. The applicable field values in this iteration are underlined in Table 7.
Reference to the drawings shows which scenarios are applicable to LSP2; after this iteration, the Aggregate Bandwidth Database is given in Table 8.
The underlined values in Table 8 are those aggregated from Table 7.
Continuing with the third iteration group using LSP3 fields:
[Table 9: recoverable cell values (kbps) 50, 10, 80; table layout not preserved.]
As before, in Table 9, fields for LSP1 and LSP2 are also featured for convenient reference, although they are not used in this iteration. The applicable field values in this iteration are underlined in Table 9.
In the event of a failure of segment “BC” and/or segment “CD”, LSP3 will be reconfigured to utilize segments “DA” and “AB”. As previously given, LSP3 is shared with LSP1. Therefore, according to the procedure for aggregation as specified previously, the bandwidths thereof are aggregated by taking the maximum of LSP1 and LSP3 in the failure cases of interest:
Here it is seen that the reference information for LSP1 in Table 9 is useful for identifying the arguments of the MAX( ) function. In Table 10, the values from LSP1 are in italics, and the values from LSP3 are underlined. Note that the CIR and EIR values from LSP1 are for the normal case, not the failure-protected case, whereas the CIR and EIR values from LSP3 are for the failure-protected case, because it is LSP3 that is reconfigured in the event of “BC” and/or “CD” failure.
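The shared-resource aggregation just described can be sketched as follows; the helper function and the kbps figures are hypothetical, and only the MAX( ) behavior is drawn from the description:

```python
# Hypothetical helper for the shared-bandwidth rule: the prior
# contribution of the shared group is replaced by the maximum of the
# group's members, rather than summing them.
def aggregate_shared(field_value, prior_shared_kbps, new_shared_kbps):
    return field_value - prior_shared_kbps + max(prior_shared_kbps, new_shared_kbps)

# a field holding 70 kbps, of which 40 kbps came from LSP1:
print(aggregate_shared(70, 40, 55))  # 85 (a shared LSP3 at 55 kbps raises it)
print(aggregate_shared(70, 40, 30))  # 70 (a shared LSP3 at 30 kbps does not)
```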
And finishing with the fourth iteration group using LSP4 fields:
[Table 11: recoverable cell values (kbps) 100, 90, 10; table layout not preserved.]
Once again, in Table 11, fields for LSP1, LSP2, and LSP3 are also featured for convenient reference, although they are not used in this iteration. The applicable field values in this iteration are underlined in Table 11.
It is seen in the drawings that all the applicable LSP fields have now been aggregated.
Therefore, the aggregate bandwidth database for node “A” 401 as used for traffic management is:
According to an embodiment of the present invention, the aggregation rules for VPLS are simplified by aggregating unicast, multicast, and broadcast traffic in the same manner, by considering only the source of the traffic, but not the destination. This simplification is a conservative approach to bandwidth aggregation, representing the worst-case situation: VPLS traffic is aggregated throughout the entire ring (subject to the predetermined rules, of course), even if the VPLS utilizes only a portion of the ring. In addition to simplifying the calculations, this approach allows for the possibility of adding a node to the ring to support expansion of the Virtual LAN.
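The source-only simplification can be sketched as follows; the record fields and values are illustrative assumptions:

```python
# Sketch of the source-only simplification: a VPLS allocation is counted
# at every node except its source, regardless of destination, so unicast,
# multicast, and broadcast traffic are aggregated identically.
def vpls_contribution(aggregating_node, entry):
    if entry["src"] == aggregating_node:
        return 0          # add traffic at the source node is not transit
    return entry["kbps"]  # counted everywhere else on the ring

entries = [
    {"src": "D", "kbps": 50},
    {"src": "A", "kbps": 20},   # sourced at the aggregating node: excluded
    {"src": "B", "kbps": 30},
]
total = sum(vpls_contribution("A", e) for e in entries)
print(total)  # 80
```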
According to an embodiment of the present invention, predetermined rules for VPLS rate aggregation include:
A non-limiting example is illustrated in the drawings. In this non-limiting example, an aggregate bandwidth database is generated for node “A” 501. (The previous comments in the LSP example above, pertaining to the generating of similar aggregate bandwidth databases for the other nodes of ring 500, are also applicable in this case.)
A virtual private LAN service (VPLS), denoted as VPLS1 519, has a device 511 connected to node “D” 507, a device 513 connected to node “A” 501, a device 515 connected to node “B” 503, and a device 517 connected to node “C” 505. The virtual LAN connections 519 between the respective devices 511, 513, 515, and 517 are implemented physically by ring network 500.
Similarly, a VPLS denoted as VPLS2 529 has a device 521 connected to node “D” 507, a device 523 connected to node “A” 501, and a device 525 connected to node “B” 503. The virtual LAN connections 529 between the respective devices 521, 523, and 525 are also implemented physically by ring network 500.
For this non-limiting example, the bandwidth allocation database is shown in Table 14. Note that in all cases, the protection is steer revertive, and as previously discussed, only the VPLS source is taken into account; the VPLS destination is not considered when aggregating rates.
As done previously for the LSP example, and as indicated in initialization step 205, the aggregate bandwidth database is first initialized to zero values.
Next, to iterate through the various fields of the Bandwidth Allocation Database (as indicated by loop entry point 207), the first iteration group uses the first VPLS1 entry from Table 14, shown in Table 16:
[Table 16: recoverable cell values (kbps) 50, 20, 70, 10, 40; table layout not preserved.]
After iterating on the applicable fields of Table 16 according to the predetermined rules for VPLS aggregation as presented above, the Aggregate Bandwidth Database is:
[Table 17: recoverable cell values (kbps) 50, 20, 70; 50, 10, 40; 50, 10, 40; 50, 10, 40; 50, 10, 40; table layout not preserved.]
Proceeding to the second VPLS1 entry from Table 14, the iteration group is shown in Table 18:
This second iteration results in:
It is seen that Table 19 is identical to Table 17, because we are aggregating bandwidth requirements for node “A” 501, and the source in Table 18 is also node “A” 501. As previously presented in the general predetermined rules, the bandwidths for the source node are not added.
Proceeding to the third VPLS1 entry from Table 14, the iteration group is shown in Table 20:
[Table 20: recoverable cell values (kbps) 70, 40, 90, 30, 60; table layout not preserved.]
This third iteration results in:
[Table 21: recoverable cell values (kbps) 70, 30, 60; 70, 30, 60; table layout not preserved.]
Proceeding to the fourth and last VPLS1 entry from Table 14, the iteration group is shown in Table 22:
[Table 22: recoverable cell values (kbps) 80, 50, 100, 40, 70; table layout not preserved.]
This fourth iteration results in:
At this point, there are three more iterations to perform, for the bandwidth allocations of VPLS2.
Proceeding to the first VPLS2 entry from Table 14, the iteration group is shown in Table 24:
This fifth iteration results in:
Proceeding to the second VPLS2 entry from Table 14, the iteration group is shown in Table 26:
This sixth iteration results in:
It is seen that Table 27 is identical to Table 25, because we are aggregating bandwidth requirements for node “A” 501, and the source in Table 26 is also node “A” 501. As previously presented in the general predetermined rules, the bandwidths for the source node are not added.
Finally, proceeding to the third VPLS2 entry from Table 14, the iteration group is shown in Table 28:
[Table 28: recoverable cell values (kbps) 110, 80, 130, 70, 100; table layout not preserved.]
This seventh and final iteration results in:
Therefore, the aggregate bandwidth database for node “A” 501 as used for traffic management is:
A further embodiment of the present invention provides a computer program product for performing a method disclosed in the present application or any variant derived therefrom. A computer program product according to this embodiment includes a set of executable commands for a computer, and is incorporated within tangible and persistent machine-readable data storage including, but not limited to: computer media of any kind, such as magnetic media and optical media; computer memory; semiconductor memory storage; flash memory storage; data storage devices; and a computer or communications network. The terms “perform”, “performing”, etc., and “run”, “running”, when used with reference to a computer program product herein denote the action of a computer when executing the computer program product, as if the computer program product were performing the actions. The term “computer” herein denotes any data processing apparatus capable of, or configured for, executing a set of executable commands to perform a method, including, but not limited to: computers; workstations; servers; gateways; routers; switches; networks; processors; controllers; and other devices capable of processing data.
While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications and other applications of the invention may be made.
Number | Date | Country
---|---|---
20100172242 A1 | Jul 2010 | US