The present application relates to the computer field of “traffic shaping”. Traffic shaping is used to manage the bandwidth on a communications network to meet performance goals. Traffic shaping is a process of optimizing traffic by examining attributes of packets and delaying or dropping certain packets in order to achieve goals such as attaining specific bit rates, attaining specific ratios between different types of traffic, providing fair sharing of bandwidth, or smoothing bursts of traffic.
Traffic shaping deals with datagram traffic (a stream of datagram packets) from one or many source computers to one or many destination computers. A traffic shaper lies between the source and the destination of each packet of the traffic. The shaper prioritizes the traffic and, based on this prioritization, decides for each packet whether it is delivered to the destination and when it is delivered.
In a typical solution, a traffic shaping deployment may include multiple traffic processing devices, each shaping a portion of the traffic. The quantity or makeup of traffic going to each device may not be the same, which means that simply dividing the desired values by the number of devices and allowing each device to shape the traffic independently of the others will not be sufficient to achieve the performance goals.
Deriving the solution for the correct rates and parameters to achieve performance goals is non-trivial. This cannot generally be done manually as the quantity and makeup of traffic in each traffic processing device is constantly changing. As such, there is a need for an improved method, system and apparatus for managing a shaper or shapers.
Embodiments herein are intended to address the need for achieving traffic shaping performance goals by coordinating shaping behavior on multiple traffic processing devices using iterative re-evaluation and updating of the parameters applied to shapers on each traffic processing device.
According to one aspect herein, there is provided a computer based system and method for distributing a global shaper rate implemented across multiple traffic processing devices. In particular, a controller distributes credits according to the demand (amount of traffic, or offered load) of each traffic processing device, in such a way as to achieve global targets, including the shaper rate, strict prioritization of traffic, weighted fair queuing (WFQ) weights and fairness between cloned channels, iteratively updated as changes occur in the quantity and makeup of the traffic across the devices.
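By way of a non-limiting sketch, the demand-proportional distribution described above may be pictured as follows (the function and variable names are illustrative assumptions, not part of the embodiments):

def distribute_rate(global_rate, demands):
    """Split a global shaper rate across devices in proportion to reported demand.

    demands: dict mapping device id -> reported demand (e.g. offered load in bps).
    Returns a dict mapping device id -> rate credited to that device.
    """
    total_demand = sum(demands.values())
    if total_demand == 0:
        # No traffic anywhere: split evenly so idle devices can still send.
        return {dev: global_rate / len(demands) for dev in demands}
    return {dev: global_rate * d / total_demand for dev, d in demands.items()}

# Example: a 100 Mbps global shaper, device A offering 30 Mbps and device B 10 Mbps.
print(distribute_rate(100_000_000, {"A": 30_000_000, "B": 10_000_000}))

Because the demands change continually, a controller would re-run a calculation of this kind on each statistics interval rather than once.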
In an aspect of embodiments herein, there is provided a system for monitoring and modifying the behavior of a plurality of shapers, the system including: a server, residing on a controller; and a plurality of clients in communication with the server, each of the clients residing on one or more traffic processing devices and the clients configured to monitor datagram traffic, wherein the server is configured to receive statistics related to the datagram traffic from the clients and to retain the same; and the server is configured to utilize the statistics to send commands to the clients to modify the behavior of the shapers.
In a particular case, the commands may instruct the clients to provide a new maximum rate for a priority related to at least one of the shapers.
In another particular case, the commands may instruct the clients to provide a new weight for a channel related to at least one of the shapers.
In yet another particular case, the commands may instruct the clients to assign a total rate to each of the multiple shapers.
In another aspect, there is provided a method for monitoring and modifying the behavior of a plurality of shapers, the method including, for each shaper and for each priority of the shaper: a) determining a demand sum for an instance and priority; b) determining a weight sum for the priority; c) determining a rate for the priority; and d) determining a channel weight for the priority, wherein the determining utilizes data provided by a statistics record related to the shaper; and generating a command record for delivery to each shaper to modify the behavior of the shapers. In particular, the command record may be a message containing new parameters for the shaper in order to make the shaper more efficient based on current operating conditions.
In a particular case, the demand sum comprises the sum of demand across all clients for a shaper.
In another particular case, the weight sum comprises the sum of the weights of all channels in a given priority.
In yet another particular case, the rate for a priority comprises an allocated rate for an instance-priority pair, divided by a demand ratio.
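A minimal sketch of these quantities follows (illustrative names only; the reading of the demand ratio as the total demand divided by a single client's demand is an assumption):

def demand_sum(client_demands):
    # Sum of demand across all clients for a shaper (or shaper instance/priority).
    return sum(client_demands)

def weight_sum(channel_weights):
    # Sum of the weights of all channels in a given priority.
    return sum(channel_weights)

def priority_rate(allocated_rate, client_demand, total_demand):
    # Rate for a priority on one client: the rate allocated to the
    # instance-priority pair scaled by this client's share of demand
    # (equivalent to dividing by the demand ratio total_demand / client_demand).
    if total_demand == 0:
        return 0.0
    return allocated_rate * client_demand / total_demand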
Embodiments will now be described, by way of example only, with reference to the attached Figures, wherein:
A user creates a “policy definition” which defines a shaper. A policy definition is the template information for a shaper. A shaper has one or more priorities, each priority having one or more channels. Each shaper may have a unique-by variable associated with it, which defines the shaper instances. Each priority may have a shared-by variable associated with it which defines the channel instances.
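One non-limiting way to picture such a policy definition is as a nested record; the following sketch assumes hypothetical field names and is not an actual policy format:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Channel:
    name: str
    weight: int            # relative weight used for weighted sharing within a priority

@dataclass
class Priority:
    level: int             # 1 = highest priority
    shared_by: Optional[str] = None   # variable whose values define the channel instances
    channels: List[Channel] = field(default_factory=list)

@dataclass
class Shaper:
    name: str
    rate: float            # target rate for the shaper (e.g. bits per second)
    unique_by: Optional[str] = None   # variable whose values define the shaper instances
    priorities: List[Priority] = field(default_factory=list)

# Example: a shaper instantiated per service tier, with VOIP above web traffic.
policy = Shaper("example", rate=100e6, unique_by="service_tier",
                priorities=[Priority(1, "subscriber", [Channel("VOIP", 5)]),
                            Priority(2, "subscriber", [Channel("Web", 3)])])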
A number of schemes may be utilized in shaping traffic to meet performance goals. Schemes may be combined and typically utilize “credits” (each representing a bit of traffic) to determine when a packet may be sent. In every case, credits are created at a constant rate, and a packet is sent if and only if a “channel” has enough credits. Examples of schemes for allocating credits follow:
An example policy definition for creating a shaper comprising two shaper instances as shown in
With reference to
As discussed earlier, a shaper 10 may have many instances such as gold 12 or bronze 14.
The features 36 and 38 refer to subscribers, each being assigned as a value of the shared-by variable. A subscriber may be a single IP address or multiple IP addresses belonging to the same customer. Each subscriber may have multiple channel instances, typically one for each channel. By way of example, Sub 1 (36) of
Within a shaper instance, there may be a plurality of priorities, the exact meaning of each is configurable by the user. In the example of
Each channel is assigned a weight, configurable by the user, and each channel instance cloned from that channel is given that weight. For example, channel instances 20 and 28 (for Sub 1 and Sub n, respectively) are clones of the channel for VOIP 40, of weight five. Channel instances 24 and 32 are instances of a channel 44 for web traffic, of weight three. These channels are shown by way of example; many different channels, with different weights, may be added for specific traffic. In this case differently weighted channels are created for different protocols, but that is not necessarily always so. Some configurations, for example, may provision differently weighted channels for different classes of customers: for instance, a deluxe level of service of weight ten, a normal level of service of weight five, and an economy level of service of weight two.
A channel may have multiple instances, typically one for each unique value of the shared-by variable. This is shown as features Sub 1 (36) to Sub n (38). For example, a channel VOIP 40 is cloned for Sub 1 (36) to create channel instance 20. The same channel is cloned for Sub n (38) to make channel instance 28. Note that the number of channel instances is determined by the shared-by variable, and may not be the same for different shaper instances of the same shaper. The channel instances receive datagram packets and determine if packets are delivered, delayed or dropped according to available credits.
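By way of a non-limiting sketch, a channel instance applying the credit check described above might be modelled as follows (the names and the constant-rate refill with a burst cap are illustrative assumptions rather than a particular scheme):

import time

class ChannelInstance:
    """Accumulates credits at a constant rate; releases a packet only when enough credits exist."""

    def __init__(self, credit_rate_bps, max_credits):
        self.credit_rate = credit_rate_bps   # credits (bits) created per second
        self.max_credits = max_credits       # cap to bound bursts
        self.credits = 0.0
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.credits = min(self.max_credits,
                           self.credits + (now - self.last) * self.credit_rate)
        self.last = now

    def try_send(self, packet_bits):
        """Return True to deliver the packet now; False to delay or drop it."""
        self._refill()
        if self.credits >= packet_bits:
            self.credits -= packet_bits
            return True
        return False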
Referring now to
An implementation consists of four types of modules, based upon a client-server model. As one skilled in the art will appreciate, the function of each module may be distributed or combined between modules. By way of example, we describe an implementation of a basic system.
The modules of
Server 99 receives statistics (100a, 100b) and transmits commands (102a, 102b) to clients (98a, 98b). Each traffic processing device (96a, 96b) receives datagram packets (90a, 90b). Depending on the configuration, a packet may be passed to a channel instance inside a shaper, and depending upon available credits, it may be dropped, delivered or queued for future delivery. The delivery of packets is shown by features 92a and 92b.
Traffic processing devices (96a, 96b) are typically computing devices upon which a software client (98a, 98b) may reside as a separate computing thread. In one embodiment there may be one or more clients, each handling a subset of the traffic going to a traffic processing device. Controller 94 is typically a computing device upon which a software server 99 may reside as a separate computing thread. It will be understood that the traffic processing devices, controller, server and clients may be embodied in hardware or software. In some cases, these elements may be co-located, while in others they may be distributed both physically and logically. Where implemented as software, these elements may be provided as physical computer-readable media containing computer-readable instructions which, when executed on a computing device (which may be a dedicated device), cause the device to perform the functions of the respective element.
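A non-limiting sketch of the client side of this exchange follows (the transport callables and the shaper methods are illustrative assumptions; the actual record formats are described below):

import time

def client_loop(shaper, send_statistics, receive_command, interval_s=1.0):
    """Periodically report local shaper statistics and apply any command received.

    send_statistics / receive_command stand in for whatever transport
    (e.g. a connection to the controller) a given deployment uses.
    """
    while True:
        stats = shaper.collect_statistics()     # demand, current rates, weights
        send_statistics(stats)                  # statistics 100a / 100b
        command = receive_command(timeout=interval_s)
        if command is not None:                 # commands 102a / 102b
            shaper.apply(command)               # new rates, max rates, weights
        time.sleep(interval_s)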
A client (98a, 98b) runs parallel to its traffic processing device (96a, 96b), and serves at least two purposes:
Referring now to
A shaper 10 is defined by a policy as discussed above with reference to Appendix “A”. A shaper 10 may be utilized by a plurality of traffic processing devices such as 96a and 96b (see
Referring now to
The first field of each section is a unique identifier for that section, e.g. each shaper definition is given a shaper ID 110a. Each shaper instance is given an Instance ID 112a. Each priority is given a Priority ID 114a. Each channel is given a Channel ID 116a. The last field (110b, 112d and 114d) of each section, excluding the field 116d, indicates how many instances of the following sub-section are present for that record, e.g. the field 112d indicates the number of priorities present in this statistics record for the shaper instance. The field 116d indicates a maximum bandwidth or load a channel is requesting.
Current rate 112b is the current value for the rate of a shaper instance. Current Max Rate 114b is the current value for the maximum rate of a priority. Current weight 116b is the current value for the weight of a channel and is stored in all the channel instances for that channel. Because each channel instance of a channel generally has the same weight, the current weight 116b represents all of the channel's instances; in other words, statistics are generally sent for a channel, not for individual instances.
Demand metrics 112c, 114c and 116c indicate how much datagram traffic is being handled. For example, a doubling of traffic would effect a doubling of the demand. This can be expressed in various metrics, one being an input bit rate.
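For illustration only, the statistics record may be pictured as the following nested structures, with assumed field names mapped to the features described above:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChannelStats:
    channel_id: int           # 116a
    current_weight: int       # 116b -- one weight representing all instances of the channel
    demand: float             # 116c -- e.g. input bit rate for the channel
    max_request: float        # 116d -- maximum bandwidth/load the channel is requesting

@dataclass
class PriorityStats:
    priority_id: int          # 114a
    current_max_rate: float   # 114b
    demand: float             # 114c
    channels: List[ChannelStats] = field(default_factory=list)      # count = 114d

@dataclass
class InstanceStats:
    instance_id: int          # 112a
    current_rate: float       # 112b
    demand: float             # 112c
    priorities: List[PriorityStats] = field(default_factory=list)   # count = 112d

@dataclass
class StatisticsRecord:
    shaper_id: int            # 110a
    instances: List[InstanceStats] = field(default_factory=list)    # count = 110b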
Referring now to
Section 118 indicates there are two instances of a shaper having a Shaper ID of “0”. Each section 120 describes one of these two instances. Each section 120 includes a current rate 120b, and a demand metric 120c. Current Rate 120b is the rate for a shaper instance (shown here in Mbps). The demand metric 120c is the bits per second requested by the instance and in this example is the sum of the two demand metric fields 122c of priorities 122 associated with instance 120,
Each instance may have multiple priorities 122. Each priority section 122 includes a current max rate 122b and a demand metric 122c. Current max rate 122b in this example is set to infinity. Demand metric 122c is the bits per second requested by the priority.
Each priority may have multiple channel sections 124. Each channel section 124 includes a current weight 124b, which is the current weight of all the channel instances for the specified channel. This is initially the target weight defined for the channel. For example in
Referring now to
The structure of
For each shaper instance of a shaper with ID 130a, a section 132 exists. Section 132 comprises an instance ID 132a to identify the shaper instance. New level setting 132b represents the new rate for the shaper instance. Number of priorities 132c indicates the number of priorities for a shaper instance, each priority having a section 134. Section 134 comprises a field 134a which identifies the priority. Field 134b indicates a new maximum rate for the priority. Field 134c indicates the number of channels associated with priority ID 134a. Finally, section 136 exists for each channel ID 136a and provides a new weight 136b.
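Similarly, for illustration only, the command record may be pictured as follows (assumed field names mapped to the features described above), carrying new settings rather than measurements:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChannelCommand:
    channel_id: int           # 136a
    new_weight: int           # 136b

@dataclass
class PriorityCommand:
    priority_id: int          # 134a
    new_max_rate: float       # 134b
    channels: List[ChannelCommand] = field(default_factory=list)    # count = 134c

@dataclass
class InstanceCommand:
    instance_id: int          # 132a
    new_rate: float           # 132b -- new level setting for the shaper instance
    priorities: List[PriorityCommand] = field(default_factory=list) # count = 132c

@dataclass
class CommandRecord:
    shaper_id: int            # 130a
    instances: List[InstanceCommand] = field(default_factory=list)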
Referring now to
Section 138 indicates there are two instances of a shaper having a Shaper ID of “0”. Each section 140 describes a shaper instance of the shaper. An instance section 140 includes a new rate value 140b which defines what the new rate for the shaper instance should be set to. Each instance section 140 may have multiple priority sections as shown by sections 142. Each priority section 142 includes a new maximum rate 142b to be set for the priority.
Each priority may have multiple channels. Each channel section 144 includes a new weight in field 144b for the channel. The values shown in field 144b may be large numbers (where 1000s are denoted with the symbol ‘k’) or small numbers. In this embodiment, since the absolute values of the weights are insignificant, and only the ratios are significant, a weighting of 250 k to 400 k is the same as a weighting of 25 to 40 (which could be further simplified to 5 to 8).
As statistics (100a, 100b) arrive at the server 99, they are stored in a data structure, which is used for the calculation of commands (102a, 102b). One embodiment of such a data structure follows. Data type details (e.g. int32/int64, signedness, rounding errors) have been omitted.
Data Structure 1
The statistic values and those stored in the data structure are generally the same. These values are manipulated by the server 99, to generate command values. The following Table 1 illustrates an example correlation of the various values.
In the above Table 1, the value X can be substituted for one of: instance, priority or channel. For example “Number of Channels” would be “channel.num_channels”.
The keys to the arrays of each structure are generally the various IDs: Channel ID, Priority ID, Instance ID, Client ID, and Shaper ID.
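By way of a non-limiting sketch, such a data structure may be pictured as nested maps keyed by those IDs (the nesting order and field names are assumptions):

from collections import defaultdict

def nested_dict():
    """stats[shaper_id][client_id][instance_id][priority_id][channel_id] -> field values."""
    return defaultdict(nested_dict)

stats = nested_dict()

# Storing one arriving channel statistic (hypothetical IDs and values):
stats[0]["client-A"][1][1][0] = {"current_weight": 5,
                                 "demand": 2_000_000,
                                 "max_request": 3_000_000}

# On each interval the server would walk this structure to compute command values,
# e.g. summing "demand" over all clients for a shaper to obtain the demand sum.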
Referring now to
The methods illustrated in
To aid the reader in better understanding the flowcharts of
Referring first to
At step 152 a test is made to determine if all shapers have been examined. If there are no more shapers to examine, processing moves to step 154 and ends. If there are still shapers to examine, the process moves to step 156. The process of step 156 is detailed in
Processing then moves from step 158 to step 160 where the value of remaining[i] is set to shaper.rate. After step 160, processing returns to step 156. Once step 156 is completed, processing moves to step 161. At step 161, a weight calculation is made for each priority as shown in
For each priority examined, processing moves to step 166. Once all priorities have been examined, processing returns to step 152. At step 166 the value of allocated_rate[i][p] is set as shown in
Upon completing step 168 processing moves to step 170 which is detailed in
We refer now to
We refer now to
We refer now to
We refer now to
If the value of bw_remaining is positive, processing moves to step 304, where a test is made to determine if there is a channel remaining to examine. If not, processing returns to step 302. If a channel does remain to be examined, processing moves to step 306. At step 306 a test is made to determine if the value of new_channel.new_level is less than the value of channel.load. If the test at step 306 is positive, processing moves to step 308, where the value of new_level is set. If the test at step 306 is negative, processing returns to step 304. Upon completion of step 308, processing moves to step 310, where the value of new_channel.new_level is increased by new_level. Processing then moves to step 312, where a test is made to determine if the value of new_channel.new_level is greater than channel.load. If not, processing moves to step 320. If the test at step 312 is positive, processing moves to step 314, where the value of new_level is set. Processing then moves to step 316, where the value of new_channel.new_level is set. Processing then moves to step 318, where the value of weight_sum[p] is decreased by channel.weight. Processing then continues at step 320, where the value of bw_remaining is decreased by new_level, and processing then returns to step 304.
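For illustration only, the loop described above resembles a weighted redistribution of the remaining bandwidth; the following sketch is one plausible reading of the flowchart, in which the computation of new_level as a weight-proportional share of the remaining bandwidth is an assumption:

def distribute_remaining(bw_remaining, channels, weight_sum):
    """Give unsatisfied channels weight-proportional shares of the remaining bandwidth.

    channels: list of dicts with keys "weight", "load" (demand) and "new_level"
              (bandwidth already granted). Mirrors steps 302-320 in spirit.
    """
    while bw_remaining > 0 and weight_sum > 0:
        progressed = False
        for ch in channels:
            if ch["new_level"] >= ch["load"]:
                continue                                    # already satisfied (step 306 negative)
            # step 308: assumed weight-proportional slice of what is left
            new_level = bw_remaining * ch["weight"] / weight_sum
            ch["new_level"] += new_level                    # step 310
            if ch["new_level"] > ch["load"]:                # step 312: give back the excess
                new_level -= ch["new_level"] - ch["load"]   # step 314
                ch["new_level"] = ch["load"]                # step 316
                weight_sum -= ch["weight"]                  # step 318
            bw_remaining -= new_level                       # step 320
            progressed = True
        if not progressed:
            break
    return bw_remaining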
We refer now to
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.