The disclosure relates generally to communication networks and, more specifically but not exclusively, to controlling congestion in communication networks.
Transmission Control Protocol (TCP) is a common transport layer protocol used for controlling transmission of packets via a communication network. TCP is a connection-oriented protocol that supports transmission of packets between a TCP sender and a TCP receiver via an associated TCP connection established between the TCP sender and the TCP receiver. TCP supports use of a congestion window which controls the rate at which the TCP sender sends data packets to the TCP receiver. While typical use of the TCP congestion window may provide adequate congestion control in many cases, there may be situations in which typical use of the TCP congestion window does not provide adequate congestion control or may result in undesirable effects.
Various deficiencies in the prior art may be addressed by embodiments for controlling congestion in a communication network.
In at least some embodiments, an apparatus includes a processor and a memory communicatively connected to the processor, wherein the processor is configured to control a size of a congestion window of an information transmission connection based on a threshold, wherein the threshold is based on an ideal bandwidth-delay product (IBDP) value, wherein the IBDP value is based on a product of an information transmission rate measure and a time measure.
In at least some embodiments, a computer-readable storage medium stores instructions which, when executed by a computer, cause the computer to perform a method that includes controlling a size of a congestion window of an information transmission connection based on a threshold, wherein the threshold is based on an ideal bandwidth-delay product (IBDP) value, wherein the IBDP value is based on a product of an information transmission rate measure and a time measure.
In at least some embodiments, a method includes controlling, using a processor and a memory communicatively connected to the processor, a size of a congestion window of an information transmission connection based on a threshold, wherein the threshold is based on an ideal bandwidth-delay product (IBDP) value, wherein the IBDP value is based on a product of an information transmission rate measure and a time measure.
In at least some embodiments, an apparatus includes a processor and a memory communicatively connected to the processor, wherein the processor is configured to control a size of a congestion window of an information transmission connection based on a cap threshold and based on a reset threshold. The processor is configured to prevent the size of the congestion window from exceeding the cap threshold. The processor is configured to reduce the size of the congestion window, prior to transmitting a new information block from a sender of the information transmission connection toward a receiver of the information transmission connection, based on a determination that the size of the congestion window exceeds the reset threshold and based on a determination that the sender of the information transmission connection has received confirmation that one or more information blocks already transmitted by the sender of the information transmission connection toward the receiver of the information transmission connection have been received by the receiver of the information transmission connection.
In at least some embodiments, a computer-readable storage medium stores instructions which, when executed by a computer, cause the computer to perform a method that includes controlling a size of a congestion window of an information transmission connection based on a cap threshold and based on a reset threshold. The method includes preventing the size of the congestion window from exceeding the cap threshold. The method includes reducing the size of the congestion window, prior to transmitting a new information block from a sender of the information transmission connection toward a receiver of the information transmission connection, based on a determination that the size of the congestion window exceeds the reset threshold and based on a determination that the sender of the information transmission connection has received confirmation that one or more information blocks already transmitted by the sender of the information transmission connection toward the receiver of the information transmission connection have been received by the receiver of the information transmission connection.
In at least some embodiments, a method includes controlling, using a processor and a memory communicatively connected to the processor, a size of a congestion window of an information transmission connection based on a cap threshold and based on a reset threshold. The method includes preventing the size of the congestion window from exceeding the cap threshold. The method includes reducing the size of the congestion window, prior to transmitting a new information block from a sender of the information transmission connection toward a receiver of the information transmission connection, based on a determination that the size of the congestion window exceeds the reset threshold and based on a determination that the sender of the information transmission connection has received confirmation that one or more information blocks already transmitted by the sender of the information transmission connection toward the receiver of the information transmission connection have been received by the receiver of the information transmission connection.
The teachings herein can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements common to the figures.
The present disclosure provides a capability for controlling a size of a congestion window of an information transmission connection. The information transmission connection may be a network connection (e.g., a Transmission Control Protocol (TCP) connection or other suitable type of network connection) or other suitable type of information transmission connection. The information transmission connection may be used to transmit various types of information, such as content (e.g., audio, video, multimedia, or the like, as well as various combinations thereof) or any other suitable types of information. The size of the congestion window of the information transmission connection may be controlled based on one or more of a target encoding rate of information to be transmitted via the information transmission connection (e.g., the encoding rate of the highest quality level of the information to be transported via the information transmission connection), round-trip time (RTT) information associated with the information transmission connection (e.g., an RTT, a minimum RTT, or the like), or buffer space that is available to packets of the information transmission connection along links of which a path of the information transmission connection is composed. The size of the congestion window of the information transmission connection may be controlled in a manner tending to maintain the highest quality of information to be transmitted via the information transmission connection.
The present disclosure provides embodiments of methods and functions for reaching and maintaining the highest quality of adaptive bit-rate data streamed over a Transmission Control Protocol (TCP) compliant network connection (which also may be referred to as a TCP connection or, more generally, as a network connection) based on adjustments of a congestion window (cwnd) of the TCP connection that are based on one or more of (i) the encoding rate of the highest quality level of the streamed data, (ii) the round-trip time (RTT) of the TCP connection carrying the streamed data, and (iii) the buffer space available to packets of the TCP connection in front of the links that make up its network path. The methods and functions of the present disclosure apply to the TCP sender (i.e., the data source side of the TCP connection) and coexist with its ordinary mode of operation, as defined in published standards or specified in proprietary implementations. A description of the ordinary mode of operation of a TCP sender follows.
Along the network path of a TCP connection, typically composed of multiple network links, there is always at least one network link where the data transmission rate (or simply the data rate) experienced by packets of the TCP connection is the lowest within the entire set of links of the network path. Such a network link is called the bottleneck link of the TCP connection. The packet buffer memory that may temporarily store the packets of the TCP connection before they are transmitted over the bottleneck link is referred to as the bottleneck buffer. Congestion occurs at a bottleneck link when packets arrive at the bottleneck buffer faster than they can depart. When congestion is persistent, packets accumulate in the bottleneck buffer and packet losses may occur. To minimize the occurrence of packet losses, which delay the delivery of data to the TCP receiver and therefore reduce the effective data rate of the TCP connection, the TCP sender reacts to packet losses or to increases in packet delivery delay by adjusting the size of its congestion window.
The TCP congestion window controls the rate at which the TCP sender dispatches data packets to the TCP receiver. It defines the maximum allowed flight size. The flight size is the difference between the highest sequence number of a packet transmitted by the TCP sender and the highest ACK number received by the TCP sender. The ACK number is carried by acknowledgment packets that the TCP receiver transmits to the TCP sender on the reverse network path of the TCP connection after receiving data packets from the TCP sender on the forward network path of the TCP connection. The ACK number carried by an acknowledgment packet is typically the next sequence number that the TCP receiver expects to receive with a data packet on the forward path of the TCP connection. When the flight size matches the congestion window size, the TCP sender stops transmitting data packets until it receives the next acknowledgment packet with an ACK number higher than the current highest ACK number. Receipt of such an acknowledgment reduces the flight size and allows transmission to resume; the TCP sender stops transmitting new packets again as soon as the flight size once more matches the congestion window size.
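By way of illustration only, the window-gating behavior just described may be modeled with the following Python sketch; the class and variable names (e.g., WindowGatedSender, highest_seq_sent) are hypothetical conveniences and do not correspond to any particular TCP implementation.

```python
# Minimal sketch (not the disclosed methods): how a TCP-like sender gates
# transmission on the congestion window. All names are illustrative.

class WindowGatedSender:
    def __init__(self, cwnd_bytes):
        self.cwnd = cwnd_bytes          # congestion window size, in bytes
        self.highest_seq_sent = 0       # highest sequence number transmitted
        self.highest_ack = 0            # highest ACK number received

    def flight_size(self):
        # Bytes transmitted but not yet acknowledged.
        return self.highest_seq_sent - self.highest_ack

    def can_send(self, packet_len):
        # A new packet may depart only while the flight size stays
        # within the congestion window.
        return self.flight_size() + packet_len <= self.cwnd

    def on_data(self, packet_len):
        if self.can_send(packet_len):
            self.highest_seq_sent += packet_len
            return True                 # packet dispatched
        return False                    # sender stalls until a higher ACK arrives

    def on_ack(self, ack_number):
        # A higher ACK number shrinks the flight size and may unblock sending.
        self.highest_ack = max(self.highest_ack, ack_number)


sender = WindowGatedSender(cwnd_bytes=4 * 1460)
while sender.on_data(1460):
    pass                                # stops after a window's worth of packets
sender.on_ack(1460)                     # one packet acknowledged
print(sender.on_data(1460))             # True: room opened in the window
```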
The TCP sender adjusts the size of the TCP congestion window according to the sequence of events that it infers as having occurred along the network path of the TCP connection based on the sequence of ACK numbers that it receives on the reverse path of the TCP connection and also based on the time at which it receives those ACK numbers. The TCP sender typically increases the size of the congestion window, at a pace that changes depending on the specific TCP sender implementation in use, until it recognizes that a packet was lost somewhere along the network path of the TCP connection, or that data packets or acknowledgment packets have started accumulating in front of a network link. The TCP sender may reduce the size of its congestion window when it detects any one of the following conditions: (1) arrival of multiple acknowledgment packets carrying the same ACK number; (2) expiration of a retransmission timeout; or (3) increase of the TCP connection round-trip time (RTT). The RTT measures the time between the transmission of a data packet and the receipt of the corresponding acknowledgment packet. The growth of the congestion window size also stops when such size matches the size of the receiver window (rwnd). The TCP receiver advertises the value of rwnd to the TCP sender using the same acknowledgment packets that carry the ACK numbers. The receiver window size stored by the TCP receiver in the acknowledgment packet for notification to the TCP sender is called the advertised receiver window (arwnd).
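By way of example, the sketch below outlines the kinds of window reductions listed above in the style of classic Reno-like behavior; the exact rules and constants vary across TCP sender implementations and are shown here only as assumptions.

```python
# Minimal sketch (not the disclosed methods) of ordinary congestion-window
# adjustments; rules and constants are Reno-style assumptions.

MSS = 1460  # bytes per segment

def on_duplicate_acks(cwnd, ssthresh, dupacks):
    # Several acknowledgments carrying the same ACK number are commonly taken
    # as a sign of loss: halve the window (fast retransmit / fast recovery).
    if dupacks >= 3:
        ssthresh = max(cwnd // 2, 2 * MSS)
        cwnd = ssthresh
    return cwnd, ssthresh

def on_retransmission_timeout(cwnd, ssthresh):
    # A timeout is a stronger congestion signal: collapse the window.
    ssthresh = max(cwnd // 2, 2 * MSS)
    return MSS, ssthresh

def on_window_growth(cwnd, increment, rwnd):
    # Growth is always bounded by the receiver window advertised by the receiver.
    return min(cwnd + increment, rwnd)

print(on_duplicate_acks(cwnd=20 * MSS, ssthresh=64 * MSS, dupacks=3))  # (14600, 14600)
```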
As an example, consider an adaptive bit-rate (ABR) video source (e.g., located at a video application server) offering video content encoded at multiple encoding rates (although it will be appreciated that the ABR source may offer content other than video). Each encoding rate corresponds to a video quality level. A higher encoding rate implies a better video quality, which is obtained through a larger number of bytes of encoded video content per time unit. The video asset is packaged into segments of fixed duration (e.g., 2 seconds, 4 seconds, or 10 seconds) that the video application client will request from the video application server as ordinary web objects using Hypertext Transfer Protocol (HTTP) messages. The video segments are commonly referred to as video chunks. When the video application client requests a video asset, the video source at the video application server responds with a manifest file that lists the encoding rates available for the video asset and where the chunks encoded at different video quality levels can be found. As the transmission of the video progresses, the video application client requests subsequent chunks having video quality levels that are consistent with the network path data rate measured for previously received chunks and other metrics that the video application client maintains. One of those additional metrics is the amount of video content already buffered by the client that is awaiting reproduction by a video player used by the client to play the received video.
Typically, the ABR video application client requests a new video chunk only after having received the last packet of a previous chunk. For the TCP sender at the video application server, this implies that there is always a period of inactivity between the transmission of the last packet of a chunk and the transmission of the first packet of a new chunk. When the transmission of the new chunk starts, the TCP sender is allowed to transmit a number of back-to-back packets until the current size of the congestion window is filled. This sequence of back-to-back packets may depart from the TCP sender at a much higher rate than the data rate available to the TCP connection at the bottleneck link. As a consequence, a majority of the packets in this initial burst may accumulate in the bottleneck buffer. If the size of the congestion window is larger than the size of the buffer, one or more packets from the initial burst may be dropped. In standards-compliant TCP sender instances, the loss of a packet during the initial burst requires the retransmission of the lost packet and induces a downward correction of the congestion window size. Both the packet retransmission and the window size drop contribute to a reduction of the TCP connection data rate and, therefore, of the data rate (or throughput) sample that the video application client measures for the chunk after receiving its last packet. The lower throughput sample may then translate into the selection of a lower video quality level by the video application client.
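By way of a hypothetical back-of-the-envelope check (the numbers below are illustrative assumptions, not values from the disclosure), the overflow risk during the initial burst can be made concrete as follows.

```python
# Illustrative arithmetic only; the figures below are assumptions.

mss = 1460                      # bytes per packet (typical TCP segment)
cwnd_packets = 120              # congestion window at the start of the chunk
bottleneck_buffer_packets = 80  # packets the bottleneck buffer can hold
queued_at_bottleneck = 0        # packets already queued (idle period just ended)

# The burst departs back-to-back, far faster than the bottleneck drains, so
# (to a first approximation) everything beyond the buffer capacity is dropped.
overflow = max(0, cwnd_packets + queued_at_bottleneck - bottleneck_buffer_packets)
print(f"packets at risk in the initial burst: {overflow}")   # 40
print(f"bytes at risk: {overflow * mss}")
```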
In at least some embodiments, methods are provided for controlling the size of a congestion window of a TCP sender in a manner for satisfying the throughput requirement discussed above. The methods for controlling the size of the congestion window are configured to minimize the probability of packet losses occurring during the initial burst of a chunk transmission, so that the size of the congestion window during the entire chunk transmission is relatively higher and, therefore, conducive to a relatively higher throughput sample for the chunk being transmitted. A first embodiment of the present disclosure, called TCP window reset (TWR), provides a method of operating a TCP sender which satisfies the throughput requirement discussed above. The method of the first embodiment of the present disclosure drops the size of the congestion window to a carefully determined size immediately before the TCP sender starts transmitting the packets of a new chunk. It will be appreciated that application of the TWR method is not restricted to TCP senders associated with adaptive bit-rate video sources, but may be extended to any TCP sender that alternates the transmission of data packets with periods of inactivity.
When multiple ABR video streams share a common bottleneck link in their respective network paths, the data rate share obtained by each stream at the bottleneck link depends on the range of variation of its congestion window size. If not constrained, the size of the congestion window for one stream may grow beyond the level that is strictly needed for achievement of the desired video quality level. At the same time, a congestion window size larger than strictly necessary for video quality satisfaction may compromise the video quality satisfaction of one or more of the other video streams that share the same bottleneck link.
In at least some embodiments, methods are provided for controlling the size of the congestion window of a TCP sender in a manner for satisfying the maximum size requirement discussed above. The methods for controlling the size of the congestion window of a TCP sender are configured to stop the growth of the congestion window size beyond the level that is strictly necessary to reach and maintain the data rate that supports the desired quality level. A second embodiment of the present disclosure, called TCP window cap (TWC), provides a method of operating a TCP sender which satisfies the maximum size requirement discussed above. The TWC method may impose a maximum size of the congestion window that may be smaller than the receiver window size advertised by the TCP receiver. The target data rate of the TCP connection may drive the choice of the maximum size of the congestion window imposed by the TWC method. It will be appreciated that application of the TWC method is not restricted to TCP senders associated with adaptive bit-rate video sources, but may be extended to any TCP sender that is associated with a target data rate.
Referring to
The TWR and TWC methods of the present disclosure control the size of the congestion window of TCP sender 106 using congestion window size values determined based on an ideal bandwidth-delay product (IBDP).
In at least some embodiments, such as a first embodiment of the TWR method of the first embodiment of the present disclosure (e.g., depicted and described with respect to
In at least some embodiments, such as a second embodiment of the TWR method of a first embodiment of the present disclosure (e.g., depicted and described with respect to
When the propagation delays of the forward and reverse data paths of TCP connection 114 between TCP sender 106 and TCP receiver 112 are fixed, the most accurate approximation of the minimum RTT is given by the minimum of all the RTT samples collected up to the time when the minimum RTT is used. In this case the value of the ideal bandwidth-delay product can be updated every time the value of the minimum RTT drops. Instead, if the data path between the TCP sender 106 and the TCP receiver 112 is subject to changes, for example because the TCP receiver 112 resides in a mobile device, the minimum RTT may at times need to be increased. To allow for possible increases in minimum RTT, TCP sender 106 maintains two values of minimum RTT, called the working minimum RTT (wminRTT) and the running minimum RTT (rminRTT), where wminRTT is the value of minimum RTT 108 that is used to calculate various parameters as will be discussed infra, while rminRTT is the value of minimum RTT that is being updated but is not yet used. At time intervals of duration T (e.g., T=60 sec), called the IBDP update period, TCP sender 106 sets the working minimum RTT equal to the running minimum RTT (i.e., wminRTT=rminRTT), uses the working minimum RTT to update the IBDP, and resets the running minimum RTT to an arbitrarily large value (e.g., rminRTT=10 sec). During the IBDP update period, TCP sender 106 keeps updating rminRTT every time it collects a new RTT sample x, where rminRTT is updated as rminRTT=min(rminRTT, x). The calculation of the IBDP may require that the encoding rate Rhigh of the highest video quality level expected for the stream be passed to TCP sender 106 as soon as possible after TCP connection 114 is established.
Referring now to
In step 202, TCP sender 106 is ready to perform the calculation of minRTT and, thus, method 200 starts. In step 204, several variables and parameters, which are all discussed above, are initialized: the variable rwnd, which is the receiver window value maintained by TCP sender 106 based on the advertised receiver window arwnd chosen by TCP receiver 112; the variable wminRTT, which is the working minimum RTT; the variable rminRTT, which is the running minimum RTT; the variable IBDP, which is the ideal bandwidth-delay product; and the parameter T, which is the IBDP update period. After having initialized the above variables and parameter, method 200 proceeds to step 206, at which point TCP sender 106 waits for an RTT sample to arrive.
In step 208, upon the arrival of a new RTT sample (e.g., computed after receipt of an acknowledgment packet), a determination is made as to whether the IBDP timer has expired. Expiration of the IBDP timer signals when the IBDP and other variables controlled by method 200 are to be updated, as discussed with respect to step 210 below. If the IBDP timer has not expired, method 200 proceeds to step 212, at which point a determination is made as to whether the RTT sample that was just received is less than the latest calculated value of rminRTT. If the RTT sample just received is less than the latest calculated value of rminRTT, then method 200 proceeds to step 214, at which point rminRTT is set to the value of the just received RTT sample. Still in step 212, if the just received RTT sample is not less than the value of rminRTT, then the same value of rminRTT is maintained and method 200 proceeds to step 216, at which point a determination is made as to whether the RTT sample that was just received is less than the latest calculated value of wminRTT. If the RTT sample just received is less than the latest calculated value of wminRTT, then method 200 proceeds to step 218 (at which point wminRTT is set to the value of the just received RTT sample) and then to step 220 (at which point the following tasks are performed: update the IBDP value using the new value of wminRTT and the target rate from source 102; update the receiver window rwnd as the minimum between the advertised receiver window and twice the ideal bandwidth-delay product (i.e., rwnd=min(arwnd, 2·IBDP)); reset the IBDP update timer; and reset rminRTT to a relatively large value (e.g., rminRTT=10 s or any other suitable value)). Still in step 216, if the just received RTT sample is not less than the value of wminRTT, then the same value of wminRTT is maintained and method 200 returns to step 206 to wait for the next RTT sample.
Returning to step 208, if the IBDP timer has indeed expired, then method 200 proceeds to step 210 (at which point the following tasks are performed: set wminRTT to the value of rminRTT, update the IBDP using the new value of wminRTT and the target rate from source 102, update the receiver window rwnd as the minimum between the advertised receiver window and twice the ideal bandwidth-delay product (i.e., rwnd=min(arwnd, 2·IBDP)), reset the IBDP update timer, and reset rminRTT to a relatively large value (e.g., rminRTT=10 s or any other suitable value)), and then proceeds to step 212 to determine whether the just received RTT sample is less than the current value of rminRTT. If the RTT sample is less than rminRTT, method 200 proceeds to step 214, at which point rminRTT is reset to the value of the just received RTT sample. If the RTT sample is not less than rminRTT, the value of rminRTT is kept unchanged and method 200 returns to step 206 to wait for the next RTT sample.
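By way of illustration, the per-sample logic of method 200 may be summarized in the following Python sketch. It is a simplified reading of steps 206 through 220; the period T and the reset value for rminRTT are the example values given above, and the IBDP formula is left as a caller-supplied function because the disclosure defines it differently in the first and second embodiments.

```python
# Simplified sketch of the per-RTT-sample logic of method 200 (steps 206-220).
# The IBDP formula is supplied by the caller; the disclosure gives, e.g.,
# IBDP = wminRTT*Rhigh*tau/(tau - wminRTT) or IBDP = wminRTT*alpha*Rhigh.

import time

class MinRttTracker:
    def __init__(self, compute_ibdp, arwnd, T=60.0, rmin_reset=10.0):
        self.compute_ibdp = compute_ibdp   # callable: wminRTT (s) -> IBDP (bytes)
        self.arwnd = arwnd                 # advertised receiver window (bytes)
        self.T = T                         # IBDP update period (seconds)
        self.rmin_reset = rmin_reset       # "arbitrarily large" reset value (seconds)
        self.wmin_rtt = float("inf")       # working minimum RTT
        self.rmin_rtt = rmin_reset         # running minimum RTT
        self.ibdp = 0
        self.rwnd = arwnd
        self.timer_start = time.monotonic()

    def _refresh(self):
        # Recompute IBDP from wminRTT, cap rwnd at min(arwnd, 2*IBDP),
        # restart the IBDP update timer, and reset the running minimum.
        self.ibdp = self.compute_ibdp(self.wmin_rtt)
        self.rwnd = min(self.arwnd, 2 * self.ibdp)
        self.timer_start = time.monotonic()
        self.rmin_rtt = self.rmin_reset

    def on_rtt_sample(self, rtt):
        if time.monotonic() - self.timer_start >= self.T:
            # IBDP timer expired (step 210): promote the running minimum.
            self.wmin_rtt = self.rmin_rtt
            self._refresh()
        # Steps 212-214: keep the running minimum up to date.
        self.rmin_rtt = min(self.rmin_rtt, rtt)
        # Steps 216-220: a sample below the working minimum takes effect at once.
        if rtt < self.wmin_rtt:
            self.wmin_rtt = rtt
            self._refresh()


tracker = MinRttTracker(lambda wmin: int(wmin * 625_000), arwnd=1_000_000)
tracker.on_rtt_sample(0.048)   # first sample sets both minima and refreshes the IBDP
print(tracker.ibdp, tracker.rwnd)
```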
It will be appreciated that, although omitted from
Referring now to
In step 302, TCP sender 106 is ready to perform the first embodiment of the TWR method of the first embodiment of the present disclosure and, thus, method 300 starts. At step 304, TCP sender 106 waits for new data to transmit to become available from application server 104. The new data may include any type of application data requested by application client 110 from application server 104. For adaptive bit-rate video streaming, for example, the new data includes a new video chunk. After the new data to transmit becomes available for transmission at step 304, method 300 proceeds to step 306, at which point a determination is made as to whether the value of the counter variable holdChunkCounter satisfies a threshold (illustratively, whether the value of the counter variable holdChunkCounter is equal to zero, although it will be appreciated that any other suitable threshold may be used). The counter variable holdChunkCounter provides the number of future consecutive chunks during which the same estimate B of the bottleneck buffer size will be considered valid. When the counter reaches zero (0), the estimate B of the bottleneck buffer size is no longer considered valid and a new valid value must be obtained by TCP sender 106 before it can again use the buffer size estimate in the first embodiment of the TWR method of the first embodiment of the present disclosure. If the value stored in holdChunkCounter is zero (0), method 300 proceeds to step 310. If the value stored in holdChunkCounter is not zero (0), method 300 proceeds to step 308. At step 308, before transmission of the new data chunk begins, the size of the congestion window (cwnd) is reset to the minimum of its current value, the ideal bandwidth-delay product (IBDP), and the estimated size of the bottleneck buffer (B) (namely, cwnd=min(cwnd, IBDP, B)), the value of the down counter holdChunkCounter is decremented by one unit, and method 300 then proceeds to step 312. At step 310, the size of the congestion window is reset to the minimum of its current value and the ideal bandwidth-delay product (cwnd=min(cwnd, IBDP)), and method 300 then proceeds to step 312. At step 312, the highest acknowledgment number received so far (found in the variable highestAck) is copied into the initBurstInitAck variable, the acknowledgment number that TCP sender 106 expects for the last packet in the initial burst is stored into the initBurstHighestAck variable (initBurstHighestAck=highestAck+cwnd), TCP sender 106 begins transmitting packets that carry the new data chunk (e.g., following the ordinary mode of operation of TCP sender 106 described above), and method 300 then proceeds to step 314.
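By way of illustration, the pre-chunk window reset of steps 306 through 312 may be sketched as follows; the SenderState container and its field names are hypothetical conveniences, not part of the disclosure, and the surrounding sender machinery is assumed rather than shown.

```python
# Sketch of the congestion-window reset performed before a new chunk is
# transmitted (steps 306-312 of method 300). Field names follow the text.

from dataclasses import dataclass

@dataclass
class SenderState:
    cwnd: int                    # congestion window size (bytes)
    ibdp: int                    # ideal bandwidth-delay product (bytes)
    b: int                       # estimated bottleneck buffer size (bytes)
    hold_chunk_counter: int      # chunks for which the estimate B stays valid
    highest_ack: int             # highest ACK number received so far
    init_burst_init_ack: int = 0
    init_burst_highest_ack: int = 0

def start_new_chunk(state: SenderState) -> None:
    if state.hold_chunk_counter == 0:
        # Step 310: buffer estimate B is not (or no longer) trusted.
        state.cwnd = min(state.cwnd, state.ibdp)
    else:
        # Step 308: buffer estimate B is still valid for this chunk.
        state.cwnd = min(state.cwnd, state.ibdp, state.b)
        state.hold_chunk_counter -= 1

    # Step 312: remember where the initial burst starts and where it should end.
    state.init_burst_init_ack = state.highest_ack
    state.init_burst_highest_ack = state.highest_ack + state.cwnd
    # ... transmission of the chunk then proceeds under ordinary TCP rules.


st = SenderState(cwnd=120_000, ibdp=30_000, b=80_000,
                 hold_chunk_counter=3, highest_ack=500_000)
start_new_chunk(st)
print(st.cwnd, st.init_burst_highest_ack)   # 30000 530000
```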
At step 314, a determination is made as to whether there is a packet loss during transmission of the data chunk (e.g., by expiration of the retransmission timeout, by receipt of duplicate acknowledgments, or the like). If a packet loss is not detected at step 314 (during transmission of the data chunk), TCP sender 106 concludes that the estimated size of the bottleneck buffer B is not oversized, and method 300 proceeds to step 316 directly from step 314. If a packet loss is detected at step 314 (during transmission of the data chunk), method 300 proceeds to step 318 (depicted in
The method 300 of
Referring now to the TCP window cap method of the second embodiment of the present disclosure, TCP sender 106 imposes an upper bound on the size of the congestion window. The upper bound may be twice the value of the ideal bandwidth-delay product IBDP as defined for the first embodiment of the TWR method of the first embodiment of the present disclosure: 2·IBDP=2·minRTT·Rhigh·τ/(τ−minRTT). TCP sender 106 obtains the value of minRTT according to method 200 of
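By way of a hypothetical numerical example (the input values below are assumptions chosen only to give a sense of scale), the cap may be computed as follows.

```python
# Illustrative computation of the TWC upper bound for the first embodiment.
# All numeric inputs are assumptions chosen for the example.

r_high = 5_000_000 / 8        # highest encoding rate: 5 Mb/s expressed in bytes/s
min_rtt = 0.040               # minimum round-trip time: 40 ms
tau = 2.0                     # chunk duration: 2 s

ibdp = min_rtt * r_high * tau / (tau - min_rtt)   # ideal bandwidth-delay product
cap = 2 * ibdp                                    # TWC cap on the congestion window
print(f"IBDP ~ {ibdp:.0f} bytes, cap ~ {cap:.0f} bytes "
      f"(~ {cap / 1460:.1f} packets of 1460 bytes)")
```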
With a shared tail-drop buffer at the bottleneck link, the TWC method is most effective at eliminating unfairness and video quality instability when:
(a) the bottleneck rate C is at least as large as the sum of the encoding rates Rhigh,i of the highest video quality levels for all the streams i that share the bottleneck link, each amplified by the amount needed by the respective client to measure the same rate, i.e.,
Σi[Rhigh,i·τi/(τi−minRTTi)]≦C, and
(b) the size B of the shared buffer is at least as large as the sum of the ideal bandwidth delay products IBDPi computed for each stream i that shares the bottleneck link, i.e., Σi[Rhigh,i·minRTTi·τi/(τi−minRTTi)]≦B.
Indeed, if the above conditions on bottleneck data rate and bottleneck buffer size are both satisfied, and the TWC method is applied to the TCP sender in conjunction with the TWR method, the bottleneck buffer is guaranteed to never overflow and cause packet losses, because each stream i never places in the buffer more than IBDPi data units.
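By way of illustration, conditions (a) and (b) can be checked jointly for a set of streams as in the following sketch; the stream parameters and bottleneck figures are illustrative assumptions, not values from the disclosure.

```python
# Feasibility check for conditions (a) and (b) above, applied to hypothetical
# streams sharing one bottleneck. All parameters are illustrative assumptions.

def twc_twr_feasible(streams, bottleneck_rate, bottleneck_buffer):
    """streams: list of (r_high, min_rtt, tau); rates in bytes/s, times in s,
    bottleneck_rate in bytes/s, bottleneck_buffer in bytes."""
    amplified_rates = [r * tau / (tau - rtt) for (r, rtt, tau) in streams]
    ibdps = [r * rtt * tau / (tau - rtt) for (r, rtt, tau) in streams]
    rate_ok = sum(amplified_rates) <= bottleneck_rate        # condition (a)
    buffer_ok = sum(ibdps) <= bottleneck_buffer              # condition (b)
    return rate_ok, buffer_ok

streams = [(625_000, 0.040, 2.0), (1_250_000, 0.060, 2.0)]   # two example streams
print(twc_twr_feasible(streams, bottleneck_rate=2_500_000,
                       bottleneck_buffer=200_000))           # (True, True)
```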
A TCP sender 106 that implements the TWC method of the second embodiment of the present disclosure computes the ideal bandwidth-delay product IBDP the same way as described in method 200 of
Referring now to
In step 402, TCP sender 106 is ready to perform the second embodiment of the TWR method of the first embodiment of the present disclosure and, thus, method 400 starts. At step 404, TCP sender 106 waits for new data to transmit to become available from application server 104. The new data may include any type of application data requested by application client 110 from application server 104. For adaptive bit-rate video streaming, for example, the new data to transmit includes a new video chunk. After the new data to transmit becomes available for transmission at step 404, method 400 proceeds to step 406, at which point a determination is made as to whether the value of the counter variable holdChunkCounter satisfies a threshold (illustratively, whether the value of the counter variable holdChunkCounter is greater than one, although it will be appreciated that any other suitable threshold may be used). The counter variable holdChunkCounter provides the number of future consecutive chunks during which the same estimate B of the bottleneck buffer size will be considered valid. When the counter reaches zero (0), the estimate B of the bottleneck buffer size is no longer considered valid and a new valid value must be obtained by TCP sender 106 before it can again use the buffer size estimate in the second embodiment of the TWR method of the first embodiment of the present disclosure. When the counter reaches one (1), the second embodiment of the TWR method of the first embodiment of the present disclosure suspends the use of the estimate B of the bottleneck buffer size in its control of the congestion window size before the start of a video chunk transmission. If a determination is made at step 406 that the value stored in holdChunkCounter is not greater than one, method 400 proceeds to step 460 (depicted in
At step 414, a determination is made as to whether there is a packet loss during transmission of the data chunk (e.g., by expiration of the retransmission timeout, by receipt of duplicate acknowledgments, or the like). If a packet loss is not detected during transmission of the data chunk, TCP sender 106 concludes that the estimated size of the bottleneck buffer B is not oversized and method 400 proceeds to step 416 directly from step 414. If a packet loss is detected during transmission of the data chunk, method 400 proceeds to step 418 (depicted in
At step 420, a new sample of the bottleneck buffer size is obtained (as runningBuffer=highestAck−initBurstInitAck), and method 400 then proceeds to step 422. At step 422, the difference between runningBuffer and the previous buffer size estimate B is computed and the absolute value of the difference is compared with a relatively small threshold delta (e.g., delta may represent the data payload carried by two packets, the data payload carried by four packets, or the like). If the absolute value of the difference computed at step 422 is not smaller than delta (which is indicative that the buffer space available in front of the bottleneck link is not stable and cannot be trusted for resetting the size of the congestion window prior to future chunk transmissions), method 400 proceeds to step 428. If the absolute value of the difference computed at step 422 is smaller than delta, method 400 proceeds to step 424. At step 424, TCP sender 106 determines whether the value of runningBuffer is larger than an activation threshold minBuffer (e.g., minBuffer may represent the data payload carried by ten packets). If the value in runningBuffer is larger than minBuffer, the last collected sample of the bottleneck buffer size is considered to be valid and method 400 proceeds to step 426. If the value in runningBuffer is not larger than minBuffer, the last collected sample of the bottleneck buffer size is considered to be invalid and method 400 proceeds to step 428. At step 426, after having established that the current estimate B of the bottleneck buffer size is stable and can be trusted for resetting the size of the congestion window at step 408, holdChunkCounter is set to an initialization value stored in maxHoldChunkCounter (e.g., maxHoldChunkCounter=six chunks or any other suitable number of chunks), and method 400 then proceeds to step 430. At step 428, holdChunkCounter is reset to zero as a way to avoid using the buffer size estimate B when the size of the congestion window is reset again before the transmission of the next chunk (illustratively, by ensuring that method 400 proceeds from step 406 to step 460, rather than 450), and method 400 then proceeds to step 430. At step 430, the estimate B of the bottleneck buffer size is set equal to the last buffer size sample stored in runningBuffer, and method 400 then proceeds to step 416 (depicted in
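By way of illustration, the buffer-sample validation of steps 420 through 430 may be sketched as follows, reusing the hypothetical SenderState fields from the earlier sketch; the threshold values are the examples given in the text, and everything else about the sender is assumed.

```python
# Sketch of the bottleneck-buffer sample validation in steps 420-430 of
# method 400. Default thresholds follow the examples given in the text.

def update_buffer_estimate(state, delta=2 * 1460, min_buffer=10 * 1460,
                           max_hold_chunk_counter=6):
    # Step 420: new buffer-size sample from the ACK progress of the initial burst.
    running_buffer = state.highest_ack - state.init_burst_init_ack

    # Steps 422-424: the sample is trusted only if it is stable (close to the
    # previous estimate) and large enough to matter.
    stable = abs(running_buffer - state.b) < delta
    large_enough = running_buffer > min_buffer

    if stable and large_enough:
        # Step 426: keep using the estimate for the next several chunks.
        state.hold_chunk_counter = max_hold_chunk_counter
    else:
        # Step 428: stop using the estimate until a trustworthy sample returns.
        state.hold_chunk_counter = 0

    # Step 430: the latest sample always becomes the new estimate B.
    state.b = running_buffer
```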
The method 400 of
The second embodiment of the TWR method of the first embodiment of the present disclosure uses the estimated bottleneck buffer size B and the target rate α·Rhigh as independent criteria for resetting the slow-start threshold and the congestion window size before starting a new chunk transmission. Either criterion can be suspended by proper setting of certain configuration parameters of the second embodiment of the TWR method of the first embodiment of the present disclosure. For example, the use of the estimated buffer size B may be suspended when the value of the parameter maxHoldChunkCounter is set to zero. Similarly, for example, the use of the target rate may be suspended when the correction factor α, and consequently the target rate α·Rhigh, is assigned an arbitrarily large value (e.g., α=1,000).
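By way of illustration, such a suspension could be expressed through configuration values like the following; the configuration structure itself is an assumption, while the parameter names and example values follow the text.

```python
# Illustrative configuration showing how either criterion of the second TWR
# embodiment can be suspended (parameter names follow the text; the
# configuration mechanism itself is an assumption).

twr_config = {
    "maxHoldChunkCounter": 0,   # 0 suspends use of the buffer-size estimate B
    "alpha": 1_000,             # very large alpha suspends the target-rate criterion
    "Rhigh": 625_000,           # bytes/s; effective target rate = alpha * Rhigh
}
```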
Referring now to the TCP window cap method of the second embodiment of the present disclosure, TCP sender 106 imposes an upper bound on the size of the congestion window. The upper bound may be twice the value of the ideal bandwidth-delay product IBDP as defined for the second embodiment of the TWR method of the first embodiment of the present disclosure: 2·IBDP=2·minRTT·α·Rhigh. TCP sender 106 obtains the value of minRTT according to method 200 of
With a shared tail-drop buffer at the bottleneck link, the TWC method is most effective at eliminating unfairness and video quality instability when:
(a) the bottleneck rate C is at least as large as the sum of the target rates α·Rhigh,i of the highest video quality levels for all the streams i that share the bottleneck link, i.e., Σi(α·Rhigh,i)≦C, and
(b) the size B of the shared buffer is at least as large as the sum of the ideal bandwidth delay products IBDPi computed for each stream i that shares the bottleneck link, i.e., Σi(α·Rhigh,i·minRTTi)≦B.
Indeed, if the above conditions on bottleneck data rate and bottleneck buffer size are both satisfied, and the TWC method is applied to the TCP sender in conjunction with the second embodiment of the TWR method, the bottleneck buffer is guaranteed to never overflow and cause packet losses, because each stream i never places in the buffer more than IBDPi data units.
A TCP sender 106 that implements the TWC method of the second embodiment of the present disclosure computes the ideal bandwidth-delay product IBDP the same way as described in method 200 of
rwnd=min(arwnd, 2·IBDP). The new upper bound on the congestion window size cwnd becomes immediately effective because, by ordinary operation of TCP sender 106, rwnd is used in every upward update of the congestion window size: cwnd=min(cwnd, rwnd).
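By way of a hypothetical numerical sketch (the inputs below, including the value of α, are assumptions), the cap propagates through the receiver-window variable as follows.

```python
# Sketch of the TWC cap for the second embodiment: the cap is applied through
# rwnd, which ordinary sender operation already uses to bound cwnd.
# Numeric inputs are illustrative assumptions.

alpha = 1.2                 # correction factor for the client's rate measurement
r_high = 625_000            # highest encoding rate, bytes/s
min_rtt = 0.040             # working minimum RTT, seconds
arwnd = 1_000_000           # receiver window advertised by the TCP receiver, bytes

ibdp = alpha * r_high * min_rtt            # second-embodiment IBDP
rwnd = min(arwnd, 2 * ibdp)                # cap carried by the receiver-window variable

def update_cwnd(cwnd, increment):
    # Ordinary upward update of the congestion window, already bounded by rwnd.
    return min(cwnd + increment, rwnd)

print(rwnd, update_cwnd(cwnd=50_000, increment=2_920))   # 60000.0 52920
```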
The computer 500 includes a processor 502 (e.g., a central processing unit (CPU) and/or other suitable processor(s)) and a memory 504 (e.g., random access memory (RAM), read only memory (ROM), and the like).
The computer 500 also may include a cooperating module/process 505. The cooperating process 505 can be loaded into memory 504 and executed by the processor 502 to implement functions as discussed herein and, thus, cooperating process 505 (including associated data structures) can be stored on a computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette, solid state memories, and the like.
The computer 500 also may include one or more input/output devices 506 (e.g., a user input device (such as a keyboard, a keypad, a mouse, and the like), a user output device (such as a display, a speaker, and the like), an input port, an output port, a receiver, a transmitter, one or more storage devices (e.g., a tape drive, a floppy drive, a hard disk drive, a compact disk drive, solid state memories, and the like), or the like, as well as various combinations thereof).
It will be appreciated that computer 500 depicted in
It will be appreciated that the functions depicted and described herein may be implemented in software (e.g., via implementation of software on one or more processors, for executing on a general purpose computer (e.g., via execution by one or more processors) so as to implement a special purpose computer, and the like) and/or may be implemented in hardware (e.g., using a general purpose computer, one or more application specific integrated circuits (ASIC), and/or any other hardware equivalents).
It will be appreciated that at least some of the steps discussed herein as software methods may be implemented within hardware, for example, as circuitry that cooperates with the processor to perform various method steps. Portions of the functions/elements described herein may be implemented as a computer program product wherein computer instructions, when processed by a computer, adapt the operation of the computer such that the methods and/or techniques described herein are invoked or otherwise provided.
Instructions for invoking the inventive methods may be stored in fixed or removable media, transmitted via a data stream in a broadcast or other signal bearing medium, and/or stored within a memory within a computing device operating according to the instructions.
It will be appreciated that the term “or” as used herein refers to a non-exclusive “or,” unless otherwise indicated (e.g., use of “or else” or “or in the alternative”).
It will be appreciated that, although various embodiments which incorporate the teachings presented herein have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/940,945, filed on Feb. 18, 2014, entitled “Control Of Transmission Control Protocol Congestion Window For A Video Source,” which is hereby incorporated herein by reference.