1. Field of the Invention
The invention relates to a method for controlling data flow from a first network block to a second network block connected via a link providing a certain data rate, and a corresponding network element comprising the first network block and the second network block.
2. Description of the Related Art
This invention is related to equipment or a network architecture that performs data forwarding between a data source and a data sink via a multiplexed transmission interface. There are various types of transmission interfaces for the connection of data sources and data sinks (which may be implemented as physically different modules). Some of them provide a flow control mechanism, some of them do not. The present invention is related to the latter type and is directed to the problem of a missing flow control.
In the following, the considered architecture is described by referring to
The L3 block and the L2 block are interconnected via an Ethernet interface. In order to distinguish data packets from the different L3 sources, a logical multiplexing is done based on the VLAN Ethernet header. The Ethernet interface has a much higher throughput than the aggregated throughput of the Network Interfaces. For this reason, each L3 data packet source is followed by a rate limiter. This rate limiter limits the number of transmitted bytes per time unit, so that the data rate from L3 source to the associated PPP/HDLC block does not exceed the maximum throughput of the network interface. Limiting the data rate is performed in its basic form by inserting time intervals between subsequent packets, for example.
Only the transmit direction (TX) is relevant in this context (from data sources to network interfaces). The receive direction does not exhibit the problem stated below that is addressed by the present invention.
PPP/HDLC processing (transmit direction) in L2 adds bits or bytes (depending on the operational mode) to the payload of the data packets (bit/byte stuffing). The number of added bits or bytes depends on the bit pattern of the payload and cannot be predicted without inspecting the payload of each packet. The effective amount of data to be transmitted on the network interface is increased, or in other words, the effective available throughput of the network interface, as perceived by the L3 block, is reduced.
The problem is that this decrease of effective throughput is not predictable by the L3 source blocks (unless they inspect the payload of each packet, which is a considerable effort). That is, more or less time is needed for transmission on the network interfaces, and this time is not predictable. If the rate limiter only takes into account the number of bytes of the original payload, the network interface will be over-subscribed, and packet loss will occur in the L2 block. If the rate limiter tries to take into account the PPP/HDLC bit/byte stuffing by setting the data rate well below the nominal network interface throughput, capacity is wasted.
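To illustrate why the overhead is payload-dependent and unpredictable without inspection, the following minimal sketch models octet stuffing as used with PPP in HDLC-like framing (RFC 1662), where the flag octet 0x7E and the escape octet 0x7D occurring in the payload are each expanded to two octets. The function name is illustrative, not taken from the description above.

```python
# Octet stuffing per RFC 1662: flag (0x7E) and escape (0x7D) octets in the
# payload are each transmitted as two octets, so the transmitted length
# depends on the payload's bit pattern.
FLAG, ESC = 0x7E, 0x7D

def stuffed_length(payload: bytes) -> int:
    """Length of the payload after byte stuffing (framing flags excluded)."""
    extra = sum(1 for b in payload if b in (FLAG, ESC))
    return len(payload) + extra

# No escapes: transmitted length equals payload length.
# Worst case: every octet is a flag/escape, doubling the transmitted length.
```

A sender that does not inspect the payload cannot know whether a 100-byte packet occupies 100 or up to 200 byte times on the line, which is exactly the unpredictability described above.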
The problem was solved earlier by limiting the data rate in the L3 block to a value low enough so that even with worst-case bit/byte stuffing in the L2 block, the transmit capacity of the network interface is not exceeded. The result is that the transmit capacity of the network interfaces is not used efficiently.
It is noted that this problem does not only exist in the above-described L3/L2 architecture, but may occur in other structures in which a device X supplies data to a device Y via a multiplexed (shared) interface. Device Y processes this data further at a speed that is not exactly predictable (e.g., transmits it via a network interface or the like). The link between the two devices allows a higher data rate than the rate at which the data is further processed in device Y. Device X includes an individual rate limiter (also referred to as rate shaper) function for each processing block of device Y, in order to limit the amount of data transmitted, so that the available transmission capacity of the subsequent interface is never exceeded. Due to unpredictable variations of the available transmit capacity of the interfaces in device Y (resulting, e.g., from stuffing operations and from the addition of variable header information to the data to be transmitted in device Y), the achievable throughput is lower than the available capacity, because the rate shaper in device X belonging to the respective interface in device Y must leave some margin for those unpredictable capacity variations (a typical value is 10% of the available transmission capacity).
Hence, it is an object of the invention to remove the above drawback such that the maximum possible data rate can be fully exploited.
This object is solved by a network element comprising
a first network block and a second network block connected via a link providing a certain data rate, wherein
the first network block comprises at least one data source and at least one data rate limiting means associated to the data source,
the second network block comprises at least one data processing means associated to the data source, and a data flow information obtaining means for obtaining data flow information regarding the data rate of the data processed by the data processing means,
wherein the data rate limiting means of the first network block is adapted to vary the data rate of data sent from the data source depending on the data flow information.
Alternatively, the above object is solved by a method for controlling data flow from a first network block to a second network block connected via a link providing a certain data rate, comprising the steps of
sending data received from a data source of the first network block via the link from the first network block to the second network block,
processing the data received via the link in the second network block,
obtaining data flow information regarding the data rate of the data processed by the data processing means, and
varying the data rate of data sent from the data source of the first network block via the link depending on the data flow information.
Furthermore, the above object is solved by a network block comprising at least one data source, at least one data rate limiting means associated to the data source and a data sending means, wherein the data rate limiting means is adapted to vary the data rate of data sent from the data source depending on data flow information.
As a further alternative, the above object is solved by a network block comprising a data receiving means, at least one data processing means associated to the data, and a data flow information obtaining means for obtaining data flow information regarding the data rate of the data processed by the data processing means, wherein the data flow information obtaining means is adapted to provide the data flow information for varying the data rate.
Hence, according to the invention, information regarding a data rate used in the second network block/element (in the following also referred to as backpressure information) is supplied to the rate limiter in the first network block/element, so that the data rate is varied based on the backpressure information.
Thus, the maximum data rate achievable in the second network block/element by the means that determines the data rate can be fully exploited. For example, in case the second network block provides a network interface and the data processing means prepares the data for it, the maximum interface capacity can be exploited to 100%, without any packet loss.
Moreover, according to the present invention, only the data rate is adapted. That is, depending on the backpressure information, the data rate is increased or decreased, but never set to zero. Hence, the traffic is never interrupted. That is, according to the invention a smooth communication is possible.
It is noted that the terms “network element” or “network block” refer to any kind of “module”, “unit”, “functional block of a system” in a network.
A plurality of data streams may be provided and each data stream may be associated with one data source and one data rate limiting means of the first network block, and with one data processing means and one data flow information obtaining means and one network interface of the second network block.
The link may be a multiplexed link, and the plurality of data streams is transferred via the multiplexed link between the first network block and the second network block. The multiplexed link may be an Ethernet link, and the multiplexing technique applied to the Ethernet link may be Virtual Local Area Network (VLAN) Ethernet.
For obtaining the data flow information, a buffering means and a buffer level detecting means may be used, wherein the data flow information comprises information regarding the buffer filling level.
At least a first threshold may be provided for the buffer filling level, and the data flow information obtaining means may be adapted to include information whether the threshold is exceeded in the data flow information. The information whether the first threshold is exceeded may be included in a data flow message and the data flow message may be sent only when the first threshold is exceeded. The data rate may be decreased in case the first threshold is exceeded.
A second threshold may be provided for the buffer filling level, wherein the data flow information obtaining means is adapted to include information whether the buffer filling level has fallen below the second threshold in the data flow information. The above first and second thresholds may be both applied, wherein the second threshold is lower than the first threshold.
The data rate may be increased in case the buffer filling level has fallen below the second threshold.
The information whether the buffer filling level has fallen below the second threshold may be included in a data flow message, and the data flow message may be sent only when the buffer filling level has fallen below the second threshold.
The invention is described by referring to the enclosed drawings showing only the TX direction, in which:
In the following, preferred embodiments of the present invention are described by referring to the attached drawings.
The general structure of a network element according to the embodiment of the present invention is described in the following by referring to
A network element comprises a L3 block as an example for a first network block 1 and a L2 block as an example for a second network block 2. Both blocks are connected via a data link 3. An example for such a data link is an Ethernet interface. It is noted that this link provides a certain data rate that is larger than the aggregated data rate of the interfaces on the L2 block. The L3 block comprises data sources (e.g., packet sources) 11-1 to 11-n and data rate limiting means 12-1 to 12-n. Each of the data rate limiting means is associated to a particular data source (e.g., 12-1 to 11-1, as indicated in the drawing). It is noted that at least one data source and one data rate limiting means have to be provided. A sending means 13 sends the data over the interface 3.
The L2 block 2 comprises a receiving means 21 which receives data from the interface 3. Data processing means 22-1 to 22-n are provided (correspondingly to the data sources 11-1 to 11-n in the L3 block 1). Furthermore, buffers 23-1 to 23-n each comprising a buffer filling level detecting means are provided. The buffers 23-1 to 23-n are connected to network interfaces 24-1 to 24-n, respectively.
It is noted that one packet source, one rate limiter, one data processing means, one buffer and one interface are respectively associated to each other, so that they conduct one data stream. For example, a first data stream is conducted via the packet source 11-1, the rate limiter 12-1, the data processing means 22-1, the buffer 23-1 and the interface 24-1. The interface 3 is in this example an Ethernet interface, as mentioned above, and the sending means 13 of the L3 block performs a multiplexing of the data streams, whereas the receiving means 21 of the L2 block performs a de-multiplexing of the data streams.
The buffer filling level detectors associated to each buffer 23-1 to 23-n are examples for data flow information obtaining means which obtain data flow information regarding the data rate of the data processing means, e.g., the data rate which can actually be exploited by the interfaces. This information is supplied to the corresponding rate limiters of the L3 block, wherein the rate limiter varies the data rate depending on the data flow information.
The rate limiter varies the data rate by inserting time gaps between subsequent packets, for example. That is, in order to decrease the data rate, the rate limiter extends the gaps between subsequent packets, whereas in order to increase the data rate, the gaps between the subsequent packets are shortened.
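The gap-based rate limiting described above can be sketched as follows. This is an illustrative model only (class and method names are assumptions, not taken from the description): each packet release is delayed so that the spacing between packets corresponds to the configured rate, and lowering the rate lengthens the gaps.

```python
class GapRateLimiter:
    """Limits bytes per time unit by spacing packets with time gaps
    proportional to the packet length (illustrative sketch)."""

    def __init__(self, rate_bytes_per_s: float):
        self.rate = rate_bytes_per_s
        self.next_send = 0.0  # earliest time the next packet may be released

    def send(self, packet: bytes, now: float) -> float:
        """Return the release time of the packet; a lower rate means
        longer gaps between subsequent packets, a higher rate shorter ones."""
        release = max(now, self.next_send)
        # Gap until the next packet: transmission budget for this packet.
        self.next_send = release + len(packet) / self.rate
        return release
```

For example, at 1000 bytes/s, two back-to-back 100-byte packets are released 0.1 s apart; doubling the rate halves that gap.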
The general structure and operation according to the embodiment described above is described in the following in more detail also by referring to
For simplifying the description, the mechanism of only one packet source/rate limiter/network interface is described. All other interfaces work with further instances of the same mechanism. As shown in
The L3 rate limiter (i.e., 12-1 to 12-n) works with two different rates: one is the nominal rate of the network interface (taking into account the predictable part of the PPP/HDLC encapsulation, which is the additional header). Working with this rate ensures that in case of no bit/byte stuffing (because it may not be required due to the payload pattern), the network interface capacity is fully exploited. If there is bit/byte stuffing because of the payload pattern, then the FIFO buffer slowly fills up. When the first threshold th1 is exceeded, information is sent to the L3 block, and the corresponding rate limiter starts to work with a rate that is well below the nominal network interface capacity. This rate is chosen in such a way that even with maximum bit/byte stuffing, the filling level of the FIFO buffer is not increasing, i.e., in non-worst cases, the filling level decreases. When the filling level has fallen below the second threshold th2, which is smaller than the first threshold th1, the L3 block is informed again, and the rate of the rate limiter is, again, set to the nominal rate of the network interface (and the FIFO buffer starts to fill up, and so forth).
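The two-rate toggling with hysteresis described above can be summarized in a short sketch. Class, method and event names are illustrative assumptions; the reduced rate is assumed to be configured below the worst-case stuffing capacity, as stated above.

```python
class TwoRateShaper:
    """Toggles the L3 rate limiter between the nominal interface rate and a
    reduced rate, driven by buffer-threshold backpressure (illustrative)."""

    def __init__(self, nominal_rate: float, reduced_rate: float):
        assert reduced_rate < nominal_rate
        self.nominal = nominal_rate
        self.reduced = reduced_rate
        self.rate = nominal_rate  # start at the nominal interface rate

    def on_backpressure(self, event: str) -> None:
        if event == "th1_exceeded":    # FIFO filling rose above th1
            self.rate = self.reduced   # low rate: the FIFO drains
        elif event == "below_th2":     # FIFO filling fell below th2
            self.rate = self.nominal   # full interface capacity again
```

Because th2 is below th1, the two thresholds form a hysteresis band: small fluctuations of the filling level between th2 and th1 cause no rate change and no message traffic.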
The FIFO buffers and the threshold values are shown in
The information about FIFO buffer filling levels is transported in special messages (“backpressure messages”) from the L2 block to the L3 block. These messages are distinguished from the normal payload packets either by a dedicated value for a VLAN (Virtual Local Area Network) tag in the VLAN Ethernet header, or by using a standard Ethernet header (potentially with a proprietary value for the Ethertype field).
The backpressure messages may contain filling level information for one network interface only, or they may contain filling level information for all network interfaces of the L2 block. The information that is transferred to the L3 block may be either just of the type “th1 exceeded” (in this case, the L2 block compares actual filling level and threshold value), or it may give the actual filling level in number of bytes (in this case, the L3 block compares actual filling level and threshold value).
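A backpressure message of the second type (reporting the actual filling level in bytes) could be laid out as in the following sketch. The Ethertype value 0x88B5 (an IEEE 802 local experimental Ethertype) and the field layout are assumptions for illustration; the description above only requires that the messages be distinguishable from payload packets.

```python
import struct

BP_ETHERTYPE = 0x88B5  # assumption: IEEE 802 "local experimental" Ethertype

def encode_bp(interface_id: int, fill_bytes: int) -> bytes:
    """Pack one (network interface, FIFO filling level) pair into a
    backpressure message payload (big-endian, illustrative layout)."""
    return struct.pack("!HHI", BP_ETHERTYPE, interface_id, fill_bytes)

def decode_bp(msg: bytes) -> tuple[int, int]:
    """Unpack a backpressure message; the L3 block then compares the
    reported filling level against the thresholds th1/th2."""
    ethertype, interface_id, fill_bytes = struct.unpack("!HHI", msg)
    assert ethertype == BP_ETHERTYPE
    return interface_id, fill_bytes
```

Reporting the raw filling level shifts the threshold comparison to the L3 block; reporting only "th1 exceeded"/"below th2" events keeps the comparison in the L2 block and makes the messages smaller.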
This mechanism is summarized in the following by referring to the flowchart shown in
In detail, in step S1 it is checked whether the buffer filling level exceeds the first threshold th1 or falls below the second threshold th2 described above. If the buffer filling level neither exceeds the first threshold nor falls below the second threshold, i.e., is within the range, step S1 is repeated. If the buffer filling level, however, exceeds the first threshold th1 or falls below the second threshold th2, the process proceeds to step S2, in which a backpressure message comprising information that the data rate should be changed is created. This backpressure message is forwarded to the L3 block in step S3, and in more detail to the rate limiter. In step S4, the rate limiter in the L3 block is controlled according to the backpressure information included in the backpressure message, as described above.
It is noted that the process of step S1 is only illustrative. As an alternative, instead of monitoring whether a threshold is exceeded or fallen below, it is also possible to continuously monitor the buffer filling level, e.g., whether the buffer filling level is in a range between the first threshold th1 and the second threshold th2.
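One pass of steps S1 to S4 can be sketched as follows. The recorder class stands in for the L3 rate limiter and, like the function and event names, is an illustrative assumption.

```python
class BackpressureRecorder:
    """Stand-in for the L3 rate limiter: records delivered backpressure
    events (illustrative only)."""
    def __init__(self):
        self.events = []

    def on_backpressure(self, event: str) -> None:
        self.events.append(event)

def monitor_step(fill_level: int, th1: int, th2: int, limiter) -> None:
    """S1: compare the FIFO filling level against th1/th2. When a threshold
    is crossed, S2-S4: create a backpressure message and deliver it to the
    L3 rate limiter. Within [th2, th1], nothing is sent and S1 repeats."""
    if fill_level > th1:
        limiter.on_backpressure("th1_exceeded")
    elif fill_level < th2:
        limiter.on_backpressure("below_th2")
```

Note that while the level stays inside the hysteresis band no message is generated, which keeps the in-band signalling overhead on the Ethernet link low.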
Thus, as described above, according to the invention a mechanism is provided that provides backpressure information to implement flow control for independent data streams transferred via one multiplexed (Ethernet) link in order to overcome the problem underlying the present invention. In particular, separate flow control (backpressure) mechanisms are used for each individual data stream in the multiplexed link. Furthermore, the transmit data rate of each rate limiter (also referred to as rate shaper) is toggled between two configurable rates: the lower one leads to a decrease of the receiver buffer filling level, the higher one to an increase. That is, the rate of each L3 rate limiter is dynamically adapted (toggled between a higher rate and a lower rate), depending on the filling level of the L2 FIFO buffers and the status of the associated thresholds. This information is communicated to the L3 rate limiters by dedicated in-band messages. The result is that the available capacity of the network interfaces is exploited in an optimum way, and no packets are dropped. The invention supports optimal transmit capacity usage, because extra capacity needed, e.g., for stuffing operations does not need to be reserved.
Compared to other known backpressure/flow control solutions, transmission is never stopped. This improves delay variation and jitter behaviour.
The advantage of 100% capacity utilisation without packet loss is not possible with standard Ethernet flow control in cases of logical multiplexing. This allows more freedom in the architectural design of network elements and the use of inexpensive, standardized Ethernet interfaces between separate functional blocks.
It is noted that the invention is not limited to the embodiments described above, which should be considered as illustrative and not limiting. Thus, many variations of the embodiments are possible.
For example, the above embodiment is directed to a L3/L2 structure. However, the invention is not limited to this architecture, but can be applied whenever a first network block supplies data to a second network block at a higher data rate than the rate which the second network block is capable of processing. In particular, the invention is not limited to a network interface of the second network block, but other data processing means are also possible.
In particular, the two network blocks described above can be separate network elements within a network. That is, in this case the invention is directed to a network system comprising two network elements which are connected via a link, wherein the two network elements are independent from each other.
Furthermore, in the above embodiment two thresholds th1 and th2 are applied. However, alternatively only one threshold can be applied. Namely, in case only the upper threshold th1 is used, the data rate is reduced by the rate limiter whenever the buffer filling level exceeds the threshold, and the rate limiter returns to the nominal rate as soon as the buffer filling level no longer exceeds the threshold. This would lead to a higher frequency of backpressure messages and more frequent changes of the data rate; on the other hand, the structure of the buffer can be simplified, since only one threshold has to be monitored.
Moreover, the invention is not limited to a multiplexed Ethernet between the two network blocks concerned, but any suitable link mechanism can be applied.
Furthermore, the invention is not limited to a VLAN structure as described above.
The data processing is not limited to the PPP/HDLC processing, but any kind of “data processing” can be applied in which the amount of data after data processing cannot be predicted by the data source but varies.
Priority Application: Number 04 013-408.2, Date: Jul 2004, Country: EP, Kind: regional