Applications often rely on estimation of network bandwidth to properly manage network usage. For example, real time communication software may manage network usage among different modalities such as audio, video, file and application sharing to achieve optimal end to end user experiences. The encoding rate of the different modalities can be obtained from the network bandwidth estimate. Such applications have relied on packet pair and/or packet train approaches to estimate the bandwidth of bottlenecks for a particular network path. Also, TCP (Transmission Control Protocol) deploys a congestion control algorithm in providing network transport. This TCP algorithm monitors packet loss rates and controls congestion by adjusting sending rates according to the packet loss rates.
It has been found that existing approaches to bandwidth estimation and congestion control have drawbacks, especially when used with real time communication applications. Tools and techniques described herein relate to estimating bandwidth. For example, bandwidth estimates may take into account relative one way delay of packets. Relative one way delay is the one way delay of a packet relative to one or more reference packets, such as relative to a first packet in a stream or a packet with a minimum delay within a window. Calculations of relative one way delay may take into account and correct for clock drift using clock drift estimates. Also, the relative one way delay calculations may use statistical techniques to combine delay features from multiple packets, such as by using moving averages, etc. As another example of the tools and techniques, the bandwidth estimates may take into account whether characteristics of the data stream indicate contention with other data streams.
In one embodiment, the tools and techniques can include determining whether relative one way delay for data packets sent across a computer network in a data stream from a sending node to a receiving node exceeds a delay threshold (such as a threshold delay value and/or a number of consecutive delay values that exceed a threshold delay value). If so, then a delay congestion signal indicating that the relative one way delay exceeds the delay threshold can be generated. The delay congestion signal can be used in calculating an adaptive bandwidth estimate for the data stream.
In another embodiment of the tools and techniques, it can be determined whether a data stream of data packets sent across a computer network from a sending node to a receiving node is in a contention state. The contention state can indicate a state of contention with one or more other data streams, where the other data stream(s) tend to take more than a fair share of the available bandwidth. For example, one or more of the other data streams may be using a rate control technique that differs from the rate control technique for the present data stream, such as a technique that only considers loss rates, and not delays, in controlling congestion. This could allow the other data stream(s) to keep increasing their sending rate(s) even if network delay were increasing. If the data stream is in the contention state, then an adaptive bandwidth estimate can be calculated for the data stream using a first bandwidth estimation technique. This first bandwidth estimation technique may be a technique that allows the data stream to use its fair share of the bandwidth, even in the presence of other contending data streams. If the data stream is not in the contention state, then the bandwidth estimate for the data stream can be calculated using a second bandwidth estimation technique that is different from the first bandwidth estimation technique. The data stream may not be in the contention state even if other data streams are present, and even if there is congestion. For example, the present data stream may not be in a state of contention, even if there is congestion, if all the other data streams are using the same type of bandwidth estimation and congestion control technique as the present data stream.
This Summary is provided to introduce a selection of concepts in a simplified form. The concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Similarly, the invention is not limited to implementations that address the particular techniques, tools, environments, disadvantages, or advantages discussed in the Background, the Detailed Description, or the attached drawings.
Embodiments described herein are directed to techniques and tools for improved adaptive bandwidth estimation. Such improvements may result from the use of various techniques and tools separately or in combination.
Such techniques and tools may include using a relative one way delay congestion signal in estimating an adaptive bandwidth for a data stream. In one implementation, an initial bandwidth estimate may be generated using a technique that does not consider relative one way delay, such as a packet pair or packet train technique that does not consider relative one way delay. The relative one way delay congestion signal may then be used to repeatedly update the initial bandwidth estimate, as occurrences of relative one way delay are calculated and analyzed. Computing relative one way delay can take into account clock differences and clock drift between clocks of the sending and receiving nodes.
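As a rough illustration of how such an initial estimate might be seeded before the delay-based updates begin, the following sketch applies the packet pair idea, approximating the bottleneck bandwidth as the packet size divided by the receive-time gap between back-to-back packets. The function name, the use of a median over several pairs, and the example values are illustrative assumptions rather than details taken from this description.

```python
def packet_pair_estimate(packet_size_bytes, recv_gaps_sec):
    """Rough initial bandwidth estimate (bits per second) from packet pair samples.

    packet_size_bytes: size of each probe packet in a back-to-back pair.
    recv_gaps_sec: receive-time gaps observed between the packets of each pair.
    """
    # Each pair yields one sample: the bits of the second packet divided by the
    # spacing the bottleneck link imposed between the two packets.
    samples = [packet_size_bytes * 8 / gap for gap in recv_gaps_sec if gap > 0]
    if not samples:
        return None
    samples.sort()
    # The median filters out samples distorted by cross traffic.
    return samples[len(samples) // 2]

# Example: 1200-byte probes arriving about 1 ms apart suggest roughly 9-10 Mbps.
print(packet_pair_estimate(1200, [0.001, 0.0011, 0.0009]))
```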
Also, a bandwidth estimation technique may take into account whether characteristics of the data stream indicate contention with another data stream. A different bandwidth estimation technique can be used depending on whether such a contention state is indicated. For example, if no contention is indicated, then the bandwidth estimate may take into account a packet loss rate congestion signal and a relative one way delay congestion signal. If either signal indicates congestion, the bandwidth estimate may be decreased. On the other hand, if contention is indicated, then the bandwidth estimate may be based on the packet loss rate congestion signal, without considering the delay congestion signal. Accordingly, the bandwidth estimate may not be decreased due to increased packet delay that is not accompanied by increased packet loss. This can allow the data stream to obtain its fair share of bandwidth when the data stream is competing with other data streams that do not decrease their bandwidth in response to increased packet delays. In some situations, no rate control or bandwidth estimate adjustment may be performed in the non-contention state. For example, this may be the case when the bandwidth estimate is larger than what the application wishes to use, and no congestion signals of delay or loss are detected. As an example, for applications with bursty traffic patterns, the sending rate may often be below the estimated bandwidth. As another example, the average sending rate of multimedia or video may also be below the estimated bandwidth. In such cases, if no delay or loss congestion signal is detected, the bandwidth estimate may not be updated (increased). Additionally, these uncongested periods where the sending rate is below the estimated bandwidth may be used to obtain the delay and loss thresholds above which congestion is declared. This can be done by learning the delay and loss values seen in the uncongested state.
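A minimal sketch of the threshold-learning idea mentioned above follows, assuming exponentially weighted averages of the delay and loss values observed while the stream is uncongested; the class name, the margin factors, and the smoothing constant are illustrative assumptions.

```python
class CongestionThresholds:
    """Learn delay and loss thresholds from values seen in the uncongested state."""

    def __init__(self, alpha=0.9, delay_margin=2.0, loss_margin=2.0):
        self.alpha = alpha              # weight given to the running average
        self.delay_margin = delay_margin
        self.loss_margin = loss_margin
        self.base_delay = None          # typical uncongested relative delay (seconds)
        self.base_loss = None           # typical uncongested loss rate (fraction)

    def observe_uncongested(self, rowd_sec, loss_rate):
        # Update the baselines only while the sending rate is below the estimate
        # and no delay or loss congestion signal has been generated.
        if self.base_delay is None:
            self.base_delay, self.base_loss = rowd_sec, loss_rate
        else:
            self.base_delay = self.alpha * self.base_delay + (1 - self.alpha) * rowd_sec
            self.base_loss = self.alpha * self.base_loss + (1 - self.alpha) * loss_rate

    def delay_threshold(self):
        # Congestion is declared only when delay rises well above the learned base.
        return None if self.base_delay is None else self.delay_margin * self.base_delay

    def loss_threshold(self):
        return None if self.base_loss is None else max(0.01, self.loss_margin * self.base_loss)

# Example: after one uncongested observation the thresholds sit above the baseline.
thresholds = CongestionThresholds()
thresholds.observe_uncongested(rowd_sec=0.004, loss_rate=0.002)
print(thresholds.delay_threshold(), thresholds.loss_threshold())
```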
The subject matter defined in the appended claims is not necessarily limited to the benefits described herein (e.g., obtaining a fair share of bandwidth from competing data streams, effectively accounting for clock drift, etc.). A particular implementation of the invention may provide all, some, or none of the benefits described herein. Although operations for the various techniques are described herein in a particular, sequential order for the sake of presentation, it should be understood that this manner of description encompasses rearrangements in the order of operations, unless a particular ordering is required. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, flowcharts may not show the various ways in which particular techniques can be used in conjunction with other techniques.
Techniques described herein may be used with one or more of the systems described herein and/or with one or more other systems. For example, the various procedures described herein may be implemented with hardware or software, or a combination of both. For example, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement at least a portion of one or more of the techniques described herein. Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems. Techniques may be implemented using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Additionally, the techniques described herein may be implemented by software programs executable by a computer system. As an example, implementations can include distributed processing, component/object distributed processing, and parallel processing. Moreover, virtual computer system processing can be constructed to implement one or more of the techniques or functionality, as described herein.
The computing environment (100) is not intended to suggest any limitation as to scope of use or functionality of the invention, as the present invention may be implemented in diverse general-purpose or special-purpose computing environments.
With reference to
Although the various blocks of
A computing environment (100) may have additional features. In
The storage (140) may be removable or non-removable, and may include computer-readable storage media such as magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (100). The storage (140) stores instructions for the software (180).
The input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball; a voice input device; a scanning device; a network adapter; a CD/DVD reader; or another device that provides input to the computing environment (100). The output device(s) (160) may be a display, printer, speaker, CD/DVD-writer, network adapter, or another device that provides output from the computing environment (100).
The communication connection(s) (170) enable communication over a communication medium to another computing entity. Thus, the computing environment (100) may operate in a networked environment using logical connections to one or more remote computing devices, such as a personal computer, a server, a router, a network PC, a peer device or another common network node. The communication medium conveys information such as data or computer-executable instructions or requests in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The tools and techniques can be described in the general context of computer-readable media, which may be storage media or communication media. Computer-readable storage media are any available storage media that can be accessed within a computing environment, but the term computer-readable storage media does not refer to propagated signals per se. By way of example, and not limitation, with the computing environment (100), computer-readable storage media include memory (120), storage (140), and combinations of the above.
The tools and techniques can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment. In a distributed computing environment, program modules may be located in both local and remote computer storage media.
For the sake of presentation, the detailed description uses terms like “determine,” “choose,” “generate”, “calculate,” and “operate” to describe computer operations in a computing environment. These and other similar terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being, unless performance of an act by a human being (such as a “user”) is explicitly noted. The actual computer operations corresponding to these terms vary depending on the implementation.
A. General Environment
In one implementation, the receiving node (240) can generate delay congestion signals (250) and packet loss congestion signals (252) when specified congestion conditions are met, as will be described more below. The receiving node (240) can use the delay congestion signals (250) and packet loss congestion signals (252) in calculating bandwidth estimates (260). The calculated bandwidth estimates (260) may be updates to current bandwidth estimates (260). For example, in one implementation, an initial bandwidth estimate (260) may be generated using a technique such as the known packet pair or packet train techniques. That bandwidth estimate (260) can be updated with a newly-calculated bandwidth estimate (260) that is calculated using the delay congestion signals (250) and the packet loss congestion signals (252). The bandwidth estimate (260) can be repeatedly updated in this way as new delay congestion signals (250) and packet loss congestion signals (252) are generated and analyzed. The sending node (210) can use the bandwidth estimate (260) to control a sending rate of the data packets (220) in the data stream (222).
The congestion signals (250 and 252) and the bandwidth estimates (260) could be generated by different devices. For example, the receiving node (240) could send timestamp information to the sending node (210), and the sending node (210) could use that timestamp information to calculate the congestion signals (250 and 252). The sending node (210) could also use the congestion signals (250 and 252) to calculate the bandwidth estimates (260). Alternatively, the receiving node (240) can send the congestion signals (250 and 252) to the sending node (210) to inform the sending node (210) of the congestion, and the sending node (210) could use the delay congestion signals (250) and packet loss congestion signals (252) in calculating bandwidth estimates (260). Accordingly, the congestion signals (250 and 252) may be generated and used within the same node and/or different nodes. Other configurations may also be used. For example, one or more third-party nodes or environments could calculate and/or send out the congestion signals (250 and 252) and/or the bandwidth estimates (260).
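As a concrete illustration of this division of labor, the sketch below shows one possible feedback record that a receiving node might send toward the sending node, carrying either raw timestamp information or the computed signals and estimate. The field names and the record layout are illustrative assumptions, not a format defined by this description.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReceiverFeedback:
    """Feedback from the receiving node (240) to the sending node (210)."""
    # Option 1: raw timestamp information, letting the sender compute the signals.
    send_times: List[float] = field(default_factory=list)   # sender timestamps echoed back
    recv_times: List[float] = field(default_factory=list)   # corresponding receiver timestamps
    # Option 2: congestion signals computed at the receiver.
    delay_congestion: Optional[bool] = None   # delay congestion signal (250)
    loss_congestion: Optional[bool] = None    # packet loss congestion signal (252)
    # Optionally, a bandwidth estimate (260) computed at the receiver.
    bandwidth_estimate_bps: Optional[float] = None
```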
B. Adaptive Bandwidth Estimation Implementation
An implementation of adaptive bandwidth estimation will now be described with reference to
1. Delay Congestion State Module
Referring now to
An example of calculating ROWD will now be described. ROWD here measures how much higher the one way delay is than the one way delay of a reference packet. For example, the reference packet may be a packet with the lowest delay within a window, such as a sliding window, or the first second of a data stream. As another example, the reference packet may be a packet with the lowest delay thus far in a data stream, with the reference packet being adjusted if a new lowest-delay packet is analyzed. The calculation of ROWD can include a calculation of clock drift, which can be an estimate of clock drift.
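One way the reference packet could be maintained is sketched below, assuming a sliding window of recent packets from which the minimum-delay packet is selected; the class name, window size, and stored fields are illustrative assumptions.

```python
from collections import deque

class ReferencePacketWindow:
    """Track the packet with the lowest observed delay over a sliding window."""

    def __init__(self, window_size=500):
        # Each entry holds (delay, send_time, recv_time) for one packet.
        self.window = deque(maxlen=window_size)

    def add(self, delay, send_time, recv_time):
        self.window.append((delay, send_time, recv_time))

    def reference(self):
        # The minimum-delay packet in the window serves as packet 0.
        return min(self.window, key=lambda entry: entry[0]) if self.window else None
```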
a) Choosing a Reference Packet to be Used in Calculating ROWD
As noted above, a reference packet can be chosen to allow the calculation of ROWD to include an estimated correction for clock offset and drift. In the calculation of clock drift, the difference RR_i between the receive times of the current packet (packet i) and a reference packet (packet 0) can be defined as follows:
RR_i = recvTime_i − recvTime_0 (using the receiving node clock),
where recvTime_i is the receive time of packet i using the receiving node clock and recvTime_0 is the receive time of packet 0 using the receiving node clock.
The difference RS_i between the send times of the current packet (packet i) and a reference packet (packet 0) can be defined as follows:
RS_i = sendTime_i − sendTime_0 (using the sending node clock),
where sendTime_i is the sending time of packet i using the sending node clock and sendTime_0 is the sending time of packet 0 using the sending node clock.
In addition, the difference SR_i between the receive time of packet i and the send time of packet 0, both using the sending node's clock, can be defined as follows:
SR_i = recvSendTime_i − sendTime_0 (using the sending node clock),
where recvSendTime_i is the receive time of packet i according to the sending node clock, and sendTime_0 is as defined above.
The equation for SR_i can be rewritten as follows:
SR_i = (Δ+1)·recvTime_i − θ − sendTime_0,
where (Δ+1) is the clock drift of the sending and receiving node clocks (the factor that converts a time measured with the receiving node clock into the corresponding time measured with the sending node clock), and θ is the clock offset between the sending and receiving node clocks.
The queuing delay, δ_q, using the sender clock, can be given by the following:
δ = δ_q + δ_prop = recvSendTime_i − sendTime_i = SR_i − RS_i,
where δ is the total delay and δ_prop is the propagation delay. From this equation, the following equation for δ_q can be derived:
δ_q = SR_i − RS_i − δ_prop = (Δ+1)·recvTime_i − sendTime_i − (θ + δ_prop).
For uncongested packets where δ_q ≈ 0, linear regression (of sendTime_i against recvTime_i) can be used to compute approximate values of Δ and θ′ = θ + δ_prop. That is, clock drift (Δ) can be computed using uncongested packets, but this may not allow a computation of the clock offset (θ) by itself; it does allow computation of the clock offset plus the propagation delay (θ′).
Given initial estimates of Δ and θ′, the initial estimates can be updated using uncongested packets (those where δ_q ≈ 0). For example, uncongested packets can be defined as those with the minimum one-way delay (or ROWD, as discussed below) over a given window (using current estimates of clock drift and clock offset), or those where round trip time (RTT, which is the time it takes for the sender to send the packet to the receiver plus the time it takes for the receiver to send an acknowledgement back to the sender) is close to the minimum (provided RTT is available). Accordingly, moving average estimates of Δ and θ′ for an nth packet can be given as follows:
Δ_n = α·Δ_(n−1) + (1 − α)·Δ_new,
θ′_n = α·θ′_(n−1) + (1 − α)·θ′_new,
where α is a constant that provides a weight to the moving average values for Δ_n and θ′_n, and Δ_new and θ′_new are the values computed from the nth uncongested packet.
Additionally, the following inequalities can hold true: δ_q ≥ 0 and δ_q ≤ RTT (if RTT is available). This can give the following bounds on θ′ for each packet:
(Δ+1)·recvTime_i − sendTime_i − RTT ≤ θ′ ≤ (Δ+1)·recvTime_i − sendTime_i.
Thus, the clock offset plus propagation delay (θ′) can be at least the minimum clock drift adjusted one way delay minus the round trip time, and at most the minimum clock drift adjusted one way delay.
However, the estimate of θ′ may grow if “uncongested” packets start having larger one way delays. Thus, a packet with the minimum one way delay over some sliding window may be used as an approximation of a reference packet with minimum one way delay. An example of using such a reference packet in the calculation of ROWD will be discussed below.
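The regression described above can be sketched as follows, using the relationship sendTime_i ≈ (Δ+1)·recvTime_i − θ′ that holds for uncongested packets under the equations above; the function name and the use of a plain least-squares fit over minimum-delay samples are illustrative assumptions.

```python
def estimate_drift_and_offset(recv_times, send_times):
    """Least-squares fit of sendTime ~ (drift + 1) * recvTime - theta_prime.

    recv_times, send_times: timestamps (seconds) of packets believed to be
    uncongested, such as the minimum-delay packet from each window.
    Returns (drift, theta_prime): the estimate of the clock drift and of the
    clock offset plus propagation delay.
    """
    n = len(recv_times)
    if n < 2:
        raise ValueError("need at least two uncongested samples")
    mean_r = sum(recv_times) / n
    mean_s = sum(send_times) / n
    var = sum((r - mean_r) ** 2 for r in recv_times)
    if var == 0.0:
        raise ValueError("samples must have distinct receive times")
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(recv_times, send_times))
    slope = cov / var                      # estimate of (drift + 1)
    intercept = mean_s - slope * mean_r    # estimate of -theta_prime
    return slope - 1.0, -intercept
```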
b) Calculation of Relative One Way Delay Using Reference Packet Times
The calculation of ROWD using information about reference packets will now be discussed, beginning with the following equation:
ROWD_i = (1 + Δ)·RR_i − RS_i.
In this equation, packet i is the current packet, packet 0 is the reference packet, and the other variables are as follows:
Δ = estimated drift;
RR_i = recvTime_i − recvTime_0 (using the receiving node clock);
RS_i = sendTime_i − sendTime_0 (using the sending node clock).
As noted above, it can be useful to use a reference packet with the smallest ROWD within a window, where the window spans at least some amount of time and includes at least some number of packets. This can facilitate finding packets with the least amount of actual delay, because such packets can have the closest actual one way delay relative to each other and to the reference packet.
A new drift estimate for the ith packet can be calculated as a moving average using an existing drift estimate as follows:
Δ′_i = α·Δ′_(i−1) + (1 − α)·Δ_i,
where:
Δ′_(i−1) = previous drift estimate;
Δ′_i = current drift estimate;
Δ_i = drift computed from the current packet (e.g., as discussed above);
α = a constant for the drift estimate moving average.
The constant α and the other constants herein can be tuned by observing the results of using different constant values with actual network conditions. Different values may affect how quickly the techniques converge on fair and effective bandwidth estimates.
The clock drift compensated ROWD of packet i with reference to packet 0 may be calculated using the following formula:
ROWD_i = (1 + Δ)·RR_i − RS_i,
where:
ROWD_i = relative one way delay of packet i;
RR_i = recvTime_i − recvTime_0 (using the receiving node clock);
RS_i = sendTime_i − sendTime_0 (using the sending node clock);
Δ = drift (calculated using the drift estimate discussed above).
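A compact sketch of the drift compensated ROWD computation and the moving average drift update described above follows; the class name, the default smoothing constant, and the way the per-packet drift sample is supplied by the caller are illustrative assumptions.

```python
class RowdTracker:
    """Relative one way delay (ROWD) against a reference packet, drift compensated."""

    def __init__(self, ref_send_time, ref_recv_time, alpha=0.95, drift=0.0):
        self.ref_send = ref_send_time    # sendTime_0 (sending node clock)
        self.ref_recv = ref_recv_time    # recvTime_0 (receiving node clock)
        self.alpha = alpha               # weight for the drift moving average
        self.drift = drift               # current drift estimate

    def update_drift(self, drift_sample):
        # Moving average: the new estimate leans mostly on the previous one.
        self.drift = self.alpha * self.drift + (1 - self.alpha) * drift_sample

    def rowd(self, send_time, recv_time):
        # RR_i and RS_i: elapsed receive and send time since the reference packet.
        rr = recv_time - self.ref_recv
        rs = send_time - self.ref_send
        # Drift compensated relative one way delay: (1 + drift) * RR_i - RS_i.
        return (1.0 + self.drift) * rr - rs

# Example: this packet took about 5 ms longer than the reference packet.
tracker = RowdTracker(ref_send_time=0.000, ref_recv_time=0.040)
print(tracker.rowd(send_time=0.020, recv_time=0.065))
```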
2. Loss Rate Congestion State Module
Referring now to
C. Contention State Module
Referring now to
In the state diagram of
nC=ROWD<lowROWDThresh&&LossRate<lowLossRateThresh,
R=currentTotalSendRate,
R<<Max=R is very small, meaning it is below some fixed minimum rate,
pC=consecutiveHighROWD>ε∥consecutiveHighLossRate>θ,
where lowROWDThresh is a minimum threshold for ROWD, lowLossRateThresh is a minimum threshold for loss rate, currentTotalSendRate is the current total send rate from the sending node, Max is a fixed minimum send rate, and consecutiveHighROWD>ε means that more than ε consecutive occurrences of ROWD have been above a set ROWD threshold as illustrated in
The contention state module (500) can start in an unknown state (510). If R<<Max (the send rate is very small) and there is a high ROWD or loss rate, this can indicate that the network is in a contention state (520). Because there can be noise, the contention state module (500) can check whether there have been ε high ROWDs in a row or θ high loss rates in a row. These constants can be set to values that are sufficient to justify signaling a state of contention. If the network is in contention, then the network can remain congested, with no low ROWDs. Accordingly, if there is a low ROWD or low loss rate, the contention state module (500) can enter an uncontention state (530). The contention state module (500) may output a contention signal only when it enters the contention state (520). If the contention state module (500) is called and the state stays in the contention state, the output signal may be a false value rather than a true value, since a true value would indicate entering the contention state (520).
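The state machine just described can be sketched as follows; transitions not spelled out in the text (for example, whether the unknown state can move directly to the uncontention state), the threshold defaults, and the class name are illustrative assumptions.

```python
UNKNOWN, CONTENTION, UNCONTENTION = "unknown", "contention", "uncontention"

class ContentionStateModule:
    def __init__(self, epsilon=5, theta=3, min_rate=64_000,
                 low_rowd=0.010, low_loss=0.01):
        self.state = UNKNOWN
        self.epsilon = epsilon      # consecutive high ROWDs needed for contention
        self.theta = theta          # consecutive high loss rates needed for contention
        self.min_rate = min_rate    # "R << Max": send rates below this are very small
        self.low_rowd = low_rowd    # low ROWD threshold, seconds (lowROWDThresh)
        self.low_loss = low_loss    # low loss rate threshold (lowLossRateThresh)

    def update(self, send_rate, rowd, loss_rate,
               consecutive_high_rowd, consecutive_high_loss):
        """Returns True only on the transition into the contention state."""
        possible_contention = (consecutive_high_rowd > self.epsilon or
                               consecutive_high_loss > self.theta)          # pC
        no_congestion = rowd < self.low_rowd and loss_rate < self.low_loss  # nC
        if self.state != CONTENTION:
            # Persistent congestion while sending very little suggests other
            # streams are taking more than a fair share of the bandwidth.
            if send_rate < self.min_rate and possible_contention:
                self.state = CONTENTION
                return True
        elif no_congestion:
            # A low ROWD together with a low loss rate ends the contention state.
            self.state = UNCONTENTION
        return False
```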
D. Bandwidth Estimation Update Module
Referring now to
The bandwidth estimate update module (600) can repeatedly evaluate characteristics of the data stream and can repeatedly increment or decrement the bandwidth estimate. The variables in the state diagram of
C=(ROWDCongestionSignal∥lossCongestionSignal);
TCPC=lossCongestionSignal;
NC=ROWD<lowROWDThresh&&LossRate<lowLossRateThresh;
R=currentTotalSendRate;
BW=current bandwidth estimate;
NTCPC=LossRate<lowLossRateThresh;
ζ=ceilingThresholdPercent, so that ζ·BW is the ceiling threshold percent of the current bandwidth estimate BW.
As is illustrated in
In the contention mode (610), the bandwidth estimate BW can be decremented in a contention bandwidth decrement operation (650) if there is a loss rate congestion signal and the sending rate is less than a ceiling threshold percent of the current estimated bandwidth (i.e., TCPC&&R<ζ·BW). Also in the contention mode (610), the bandwidth estimate BW can be incremented in a contention bandwidth increment operation (660) if the loss rate is less than a loss rate threshold and the sending rate is greater than or equal to the ceiling threshold percent of the current estimated bandwidth (i.e., NTCPC&&R>=ζ·BW).
The bandwidth estimate update module (600) can use a technique such as an additive increase, multiplicative decrease (AIMD) technique to adjust the estimated bandwidth in the bandwidth update operations (630, 640, 650, and 660). For example, the bandwidth decrement operations (630 and 650) can each decrement the estimated bandwidth by multiplying the maximum total send rate by a factor β to calculate the new estimated bandwidth. The bandwidth increment operations (640 and 660) can each increment the estimated bandwidth by adding a value λ to the current bandwidth to calculate the new estimated bandwidth. Accordingly, the increment operations (640 and 660, indicated by the + sign) and the decrement operations (630 and 650, indicated by the − sign) can be represented as follows:
−: newBandwidth=β·maxTotalSendRate;
+: newBandwidth=λ+currentBandwidth.
The new bandwidth can be sent back to the sending node to allow the sending node to adjust the actual send rate to be within the new bandwidth limit. This could result in the actual send rate being decreased if the bandwidth estimate is decremented or in the send rate being increased if the bandwidth estimate is incremented. The bandwidth estimate update module (600) can continue to successively update its bandwidth estimates as the data stream continues to be sent from the sending node to the receiving node.
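The update rules of this section can be pulled together in a sketch such as the following, assuming the contention and non-contention modes are selected by a contention state module like the one sketched earlier, and assuming the non-contention branch mirrors the contention branch using the C and NC conditions defined above. The function name and the default values of β, λ, and ζ are illustrative assumptions.

```python
def update_bandwidth_estimate(bw, send_rate, max_total_send_rate, in_contention,
                              rowd_congestion, loss_congestion,
                              low_rowd_and_loss, low_loss,
                              beta=0.85, lam=20_000, zeta=0.9):
    """One pass of the bandwidth estimate update; rates are in bits per second."""
    if not in_contention:
        # Non-contention mode: react to either a delay or a loss congestion signal
        # (C); increment only when uncongested (NC) and sending near the estimate.
        if (rowd_congestion or loss_congestion) and send_rate < zeta * bw:
            return beta * max_total_send_rate        # multiplicative decrease
        if low_rowd_and_loss and send_rate >= zeta * bw:
            return bw + lam                          # additive increase
    else:
        # Contention mode: ignore delay and react only to loss (TCPC / NTCPC),
        # so the stream is not starved by loss-based competitors.
        if loss_congestion and send_rate < zeta * bw:
            return beta * max_total_send_rate
        if low_loss and send_rate >= zeta * bw:
            return bw + lam
    return bw  # otherwise keep the current estimate
```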
Several adaptive bandwidth estimation techniques will now be discussed. Each of these techniques can be performed in a computing environment. For example, each technique may be performed in a computer system that includes at least one processor and memory including instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to perform the technique (i.e., the memory stores instructions such as object code, and when the processor(s) execute those instructions, the processor(s) perform the technique). Similarly, one or more computer-readable storage media may have computer-executable instructions embodied thereon that, when executed by at least one processor, cause the at least one processor to perform the technique.
Referring to
If the relative one way delay does exceed the delay threshold, then the technique of
The technique of
The delay threshold and/or the loss threshold may be calculated using one or more characteristics of the computer network. For example, the threshold(s) may be initially calculated and/or adapted from existing threshold(s) using one or more characteristics such as characteristics of the sending and/or receiving node (e.g., application characteristics, hardware characteristics, etc.), or other characteristics of the network. The delay threshold may be established based on delay for the data stream while the data stream is in an uncongested state. Also, the loss rate threshold may be established based on loss rate for the data stream while the data stream is in an uncongested state. The technique of
Referring now to
The contention state can include a sending rate characteristic of the data stream, such as a sending rate below a minimum sending rate. The contention state can also include a congestion characteristic of the data stream. The congestion characteristic can be selected from a group consisting of a delay above a delay threshold, a loss rate above a loss rate threshold, and combinations thereof. The delay above the delay threshold can include a single relative one way delay above a relative one way delay value threshold, or another measure, such as a number of consecutive relative one way delay values above a relative one way delay value threshold.
The second bandwidth estimation technique can include determining whether delay of data packets in the data stream exceeds a delay threshold (e.g., a single value or number of consecutive values above a threshold value). If the delay exceeds the delay threshold, then acts can be performed. The acts can include generating a delay congestion signal indicating that the delay exceeds the delay threshold, and using the delay congestion signal in calculating the bandwidth estimate. The second bandwidth estimation technique may include maintaining the bandwidth estimate at a current bandwidth estimate value if a current sending rate for the data stream is below the current bandwidth estimate value and the data stream is in an uncongested state.
The first bandwidth estimation technique can be a technique that does not use a delay congestion signal in estimating bandwidth.
The first bandwidth estimation technique and the second bandwidth estimation technique can both include determining whether loss rate of data packets in the data stream exceeds a loss rate threshold, and if the loss rate exceeds the loss rate threshold, then performing acts. The acts can include generating a loss rate congestion signal indicating that the loss rate exceeds the loss rate threshold, and using the loss rate congestion signal in calculating the bandwidth estimate.
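The loss rate determination that both techniques share can be illustrated with a simple tracker over sequence-numbered packets; the class name, the fixed loss rate threshold, and the way the expected packet count is derived are illustrative assumptions.

```python
class LossRateTracker:
    """Estimate the packet loss rate from received sequence numbers."""

    def __init__(self, first_seq, loss_threshold=0.05):
        self.first_seq = first_seq
        self.loss_threshold = loss_threshold
        self.highest_seq = None
        self.received = 0

    def on_packet(self, seq):
        self.received += 1
        if self.highest_seq is None or seq > self.highest_seq:
            self.highest_seq = seq

    def loss_rate(self):
        if self.highest_seq is None:
            return 0.0
        expected = self.highest_seq - self.first_seq + 1
        return max(0.0, 1.0 - self.received / expected)

    def congestion_signal(self):
        # True corresponds to generating a loss rate congestion signal.
        return self.loss_rate() > self.loss_threshold
```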
Referring now to
If the data stream is in the contention state, then the technique of
If the data stream is not in the contention state, then the bandwidth estimate for the data stream can be calculated (930) using a second bandwidth estimation technique that is different from the first bandwidth estimation technique. The second bandwidth estimation technique can consider (932) the packet loss rate congestion signal and can consider (934) the congestion signal from relative one way delay of packets.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.