This invention relates generally to the field of data communication and more specifically to a method and system for encapsulating variable-size packets.
Encapsulating packets in a communication system may involve the use of multiple queues for buffering packets waiting to be encapsulated. Packets at different queues, however, may experience different waiting times prior to encapsulation, a phenomenon known as packet delay variation. Packet delay variation may introduce unwanted jitter into the communication system. Moreover, encapsulation according to known techniques may result in sub-optimal bandwidth usage of a communications channel. Furthermore, packets associated with Internet Protocol (IP) traffic may comprise datagrams of variable size, which may affect encapsulation for optimal bandwidth usage. Consequently, encapsulating packets while controlling jitter and enhancing bandwidth utilization has posed challenges.
In accordance with the present invention, disadvantages and problems associated with previous techniques for encapsulation of variable-size packets in data communication may be reduced or eliminated.
According to one embodiment, encapsulating packets includes receiving packets at a queue of an encapsulator. The following operations are repeated until certain criteria are satisfied. A number of packets are accumulated at the queue. A current encapsulation efficiency associated with the accumulated packets is determined. A next encapsulation efficiency associated with the accumulated packets and a predicted size and arrival time of the next packet is determined. If the current encapsulation efficiency satisfies an encapsulation efficiency threshold and if the current encapsulation efficiency is greater than the next encapsulation efficiency, the accumulated packets are encapsulated.
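The decision criterion above can be summarized in a few lines. The sketch below is a minimal Python illustration, not the claimed implementation: efficiency() and predict_next_size() are placeholder callables standing in for the efficiency measure and the predictor defined later in the description, packets are treated as byte strings, and the maximum-section-size and jitter checks of the full method are omitted for brevity.

```python
def accumulate_and_encapsulate(queue, eff_threshold, efficiency, predict_next_size):
    """Accumulate packets at one per-flow queue until encapsulating now is
    at least as efficient as waiting for the predicted next packet."""
    accumulated = []
    while True:
        accumulated.append(queue.get())              # receive/accumulate a packet
        payload_bytes = sum(len(p) for p in accumulated)
        current_eff = efficiency(payload_bytes)                     # efficiency now
        next_eff = efficiency(payload_bytes + predict_next_size())  # efficiency if we wait
        # Encapsulate when the threshold is met and waiting would not help.
        if current_eff >= eff_threshold and current_eff > next_eff:
            return accumulated
```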
Certain embodiments of the invention may provide one or more technical advantages. A technical advantage of one embodiment may be that the size of an encapsulation section is adjusted in response to a current encapsulation efficiency and a predicted jitter in an effort to control jitter while maintaining efficiency. If the accumulated packets satisfy an encapsulation efficiency threshold, the packets may be encapsulated to maintain efficiency while controlling delay jitter. Another technical advantage of an embodiment may be that a delay associated with receiving predicted packets may be determined, and if the delay satisfies a jitter requirement, the accumulated packets may be encapsulated to control jitter. Yet another technical advantage of an embodiment may be that encapsulation sections of packets may be formed into cells in a manner that controls jitter. The encapsulation sections may be formed into cells so that wasted, or additional, capacity in the cells is avoided.
Certain embodiments of the invention may include none, some, or all of the above technical advantages. One or more other technical advantages may be readily apparent to one skilled in the art from the figures, descriptions, and claims included herein.
For a more complete understanding of the present invention and its features and advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
System 10 receives packets from real-time flows 15 and non-real-time flows 20, encapsulates the received packets to form encapsulation sections, forms the encapsulation sections into cells, and transmits the cells to a receiver 70. Packets may be associated with datagrams of different sizes, that is, variable-size packets. For example, a packet may comprise an IP packet or datagram, which may be of any suitable size. Cells may generally be associated with fixed-size packets of data. For example, a cell may comprise a Moving Picture Experts Group-2 (MPEG-2) packet. Any other suitable packet of fixed or variable size may be formed into cells without deviating from the concept of this invention.
Real-time flows 15 transmit real-time traffic, and non-real-time flows 20 transmit non-real-time traffic. According to one embodiment, real-time flows 15 may comprise any delay-sensitive traffic, for example, voice data such as voice over IP (VOIP) or video on demand (VOD) data. Non-real-time flows 20 may include any delay-tolerant traffic, for example, IP data. Flows that transmit other types of delay-sensitive traffic such as voice traffic or other real-time traffic may be used in place of or in addition to real-time flows 15.
System 10 may receive any suitable type of traffic, for example, Moving Picture Experts Group-2 (MPEG-2) or MPEG-4 video traffic, voice over Internet Protocol (VOIP), or Internet Protocol (IP) packet traffic. The traffic may be classified according to jitter tolerance. According to one embodiment, traffic that is jitter tolerant comprises non-real-time traffic, and traffic that is not jitter tolerant comprises real-time traffic. Jitter tolerant traffic, however, may comprise any traffic that is jitter tolerant according to any suitable definition of “jitter tolerant,” and jitter intolerant traffic may comprise any traffic that is not jitter tolerant. For example, jitter intolerant traffic may include voice traffic.
System 10 includes real-time per-flow queues 40, non-real-time per-flow queues 30, and a processor 80. Real-time per-flow queues 40 buffer real-time traffic from real-time flows 15, and non-real-time per-flow queues 30 buffer non-real-time traffic from non-real-time flows 20. Each real-time per-flow queue 40a may receive from real-time flows 15 a real-time flow 15a corresponding to the real-time per-flow queue 40a. Similarly, each non-real-time per-flow queue 30a receives a non-real-time flow 20a associated with the non-real-time per-flow queue 30a. As used in this document, “each” refers to each member of a set or each member of a subset of the set. Any number nv of real-time flows 15 may be used and any suitable number nd of non-real-time flows 20 may be used. Similarly, any suitable number of real-time per-flow queues 40nv and non-real-time per-flow queues 30nd may be used. According to one embodiment, queues that queue other types of traffic such as voice traffic or other real-time traffic may be used in place of or in addition to real-time per-flow queues 40 or non-real-time per-flow queues 30.
Processor 80 manages the encapsulation process that starts when an incoming packet is received at a real-time per-flow queue 40. Processor 80 determines if the packets accumulated at the real-time per-flow queue 40 form an encapsulation section of a maximum section size. If the packets do not form an encapsulation section of a maximum section size, processor 80 determines if a current encapsulation efficiency of the accumulated packets satisfies an encapsulation efficiency threshold. Processor 80 may predict a next packet size to determine if the current encapsulation efficiency would improve by waiting for the next packet. Processor 80 may also predict a next packet arrival time to determine whether a jitter requirement would be violated by waiting for the next packet. The processor 80 may adjust the size of sections to be encapsulated at the corresponding real-time per-flow queues 40, and may also adjust the number of cells formed from the encapsulated sections. In addition, processor 80 may adjust the number of packets received from non-real-time per-flow queues 30 that are encapsulated to form sections and cells.
Encapsulation sections from queues 40 and 30 are formed into cells. Cells of real-time encapsulation sections are copied into common real-time buffer 50, and cells of non-real-time encapsulation sections are copied into common non-real time buffer 55. The cells of an encapsulation section may be copied sequentially into real-time buffer 50 or non-real time buffer 55 such that the cells are not interleaved by cells of another encapsulation section. Real-time buffer 50 and non-real time buffer 55 may process the cells according to a first-in-first-out procedure. According to one embodiment, buffers that buffer other types of traffic such as voice traffic or other real-time traffic may be used in place of or in addition to real-time buffer 50 or non-real time buffer 55.
System 10 may also include a scheduler 60 that outputs real-time cells from real-time buffer 50 and non-real-time cells from non-real time buffer 55. Cells received at the real-time buffer 50 and non-real time buffer 55 may be handled by the scheduler 60 in order of priority. For example, real-time cells associated with the real-time buffer 50 may be assigned higher priority than non-real-time cells associated with the non-real time buffer 55 such that non-real-time cells from non-real time buffer 55 are transmitted only if real-time buffer 50 is empty. Accordingly, non-real time buffer 55 may be sufficiently large to store delayed non-real-time encapsulation sections. The scheduler 60 may further transmit the cells from the real-time buffer 50 and non-real time buffer 55 to a receiver 70. Any other suitable priority-scheduling schemes may be used to arbitrate between the real-time buffers and the non-real-time buffers.
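A strict-priority arbiter of this kind can be expressed compactly. The sketch below is an illustrative Python model under the assumption of simple FIFO queues for buffers 50 and 55; it is not taken from the specification, and other priority-scheduling schemes could be substituted.

```python
from collections import deque

class StrictPriorityScheduler:
    """Emit real-time cells first; send non-real-time cells only while
    the real-time buffer is empty."""

    def __init__(self):
        self.rt_buffer = deque()    # models real-time buffer 50 (FIFO)
        self.nrt_buffer = deque()   # models non-real-time buffer 55 (FIFO)

    def next_cell(self):
        if self.rt_buffer:                  # real-time cells have priority
            return self.rt_buffer.popleft()
        if self.nrt_buffer:                 # sent only when buffer 50 is empty
            return self.nrt_buffer.popleft()
        return None                         # nothing to transmit
```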
To summarize, system 10 encapsulates packets to form encapsulation sections. The number of packets to be encapsulated at each real-time per-flow queue 40 is adjusted in response to an encapsulation efficiency threshold and a jitter requirement, with the goal of reducing jitter while maintaining efficiency. Encapsulation sections are formed into cells. One embodiment of the formation of cells from encapsulation sections is to be described with reference to
According to one embodiment, encapsulation section 200 may be formed into cells 250. Each cell 250 includes a cell header 220, a payload 240, and a cell footer 230. Cell header 220 may include, for example, MPEG-2 packet header data. Cell footer 230 may include, for example, MPEG-2 packet footer data. Payload 240 may carry the payload of an MPEG-2 packet, which has, according to one embodiment, 184 bytes.
According to one embodiment, an encapsulation section 200 may be formed into one or more sequential cells 250. In the illustrated example, a cell 250a may be loaded with a first portion of section 200, a next cell 250b may be loaded with a continuing portion of section 200, and a cell 250c may be loaded with a last portion of section 200. In general, cells 250 may be formed sequentially until the encapsulation section 200 has been fragmented into one or more cells 250. According to one embodiment, a cell 250 loaded with a last portion of an encapsulation section 200 may include additional capacity 280. In the illustrated example, a last portion 245 of encapsulation section 200, comprising a section data 216c and section footers 214, is loaded into the payload of cell 250c to form a loaded portion of the payload 240c. The last portion 245 does not fill the capacity of the payload, so cell 250c has additional capacity 280. However, cells 250 may be formed in any other suitable manner. For example, another embodiment of forming cells 250 is described with reference to
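A minimal sketch of this fragmentation follows, assuming 184-byte cell payloads and a padding byte of 0xFF for the unused capacity of the last cell; the pad value and the omission of cell headers and footers are simplifying assumptions, not details from the specification.

```python
CELL_PAYLOAD = 184  # payload bytes per cell, per the embodiment above

def section_to_cells(section: bytes, pad: bytes = b"\xff") -> list[bytes]:
    """Split one encapsulation section into sequential cell payloads; the
    last cell is padded, illustrating the 'additional capacity' case."""
    cells = []
    for offset in range(0, len(section), CELL_PAYLOAD):
        chunk = section[offset:offset + CELL_PAYLOAD]
        if len(chunk) < CELL_PAYLOAD:                     # last, partial payload
            chunk += pad * (CELL_PAYLOAD - len(chunk))    # additional capacity
        cells.append(chunk)
    return cells
```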
In the illustrated method, packets are received at the per-flow queues 40 and 30 from real-time flows 15 and non-real-time flows 20. As an example, a packet of real-time flow 15a is received by real-time per-flow queue 40a. In describing the embodiment of
The method begins at step 400, where a per-flow queue receives a packet. The method then proceeds to step 402, where the size of an encapsulation section including the received packet is compared to a maximum number of bytes Qmax of an encapsulation section. The size of an encapsulation section describes the combined sizes of the one or more packets stored at the per-flow queue. According to one embodiment, the maximum number of bytes Qmax may be 4080 bytes. If the size of the encapsulation section exceeds the maximum number of bytes Qmax associated with the per-flow queue, the method proceeds to step 416 to encapsulate the packet.
If the size of the encapsulation section does not satisfy the maximum number of bytes Qmax associated with the per-flow queue, the method proceeds to step 404 to determine if a current encapsulation efficiency η(χ) of the one or more packets accumulated at the per-flow queue satisfies a first threshold. In general, step 404 determines if a current encapsulation efficiency η(χ) satisfies a real-time encapsulation efficiency threshold ηrt or a non-real time encapsulation efficiency threshold ηnrt. According to one embodiment, the current encapsulation efficiency may be defined as, for example, the ratio of a number χ of payload bytes associated with an encapsulation section comprising one or more packets accumulated in the per-flow queue to the total number of bytes of the encapsulation section. According to one embodiment, the payload bytes refer to the IP-traffic bytes that generate an encapsulation section. Accordingly, the current encapsulation efficiency may be described by Equation (1):
where ⌈·⌉ is the ceiling function. Referring to Equation (1), the larger the value of the number χ of payload bytes, the higher the current encapsulation efficiency η(χ). However, the trend may not be monotonic due to padding of unused bytes in a cell. For example, the optimal efficiency ηopt, for one embodiment, is achieved when χ=Sopt=4024 bytes, as described by Equation (2):
\eta_{opt} = \max_{\chi}\,\eta(\chi) = \eta(4024) = 89.66\% \qquad (2)
According to one embodiment, real-time encapsulation efficiency threshold ηrt and non-real time encapsulation efficiency threshold ηnrt are predetermined encapsulation efficiency thresholds associated with a corresponding real-time or non-real-time per-flow queue, respectively. An encapsulation efficiency threshold may describe a minimum acceptable encapsulation efficiency. For example, the value of threshold ηrt may be set approximately at 80%. To summarize step 404, a current encapsulation efficiency η(χ) for packets received at the per-flow queue is obtained from Equation (1). The current encapsulation efficiency η(χ) is compared with an encapsulation efficiency threshold to determine if the encapsulation efficiency threshold has been satisfied. If the current encapsulation efficiency does not satisfy the encapsulation efficiency threshold at step 404, the method proceeds to step 420 to wait for the next packet.
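Because Equation (1) itself is not reproduced above, the sketch below uses assumed framing sizes: a 204-byte cell (4-byte header, 184-byte payload, 16-byte footer) and 16 bytes of section overhead. These values are an assumption chosen because they reproduce the optimum η(4024) ≈ 89.66% quoted in Equation (2); the actual sizes depend on the encapsulation format in use.

```python
import math

CELL_TOTAL = 204       # assumed: 4-byte header + 184-byte payload + 16-byte footer
CELL_PAYLOAD = 184     # payload bytes per cell (from the text)
SECTION_OVERHEAD = 16  # assumed encapsulation-section header/footer bytes

def efficiency(chi: int) -> float:
    """eta(chi): payload bytes divided by the total bytes of the cells
    needed to carry the section (the ceiling accounts for partial cells)."""
    cells = math.ceil((chi + SECTION_OVERHEAD) / CELL_PAYLOAD)
    return chi / (cells * CELL_TOTAL)

def meets_threshold(chi: int, eta_threshold: float = 0.80) -> bool:
    """Step 404: compare the current efficiency against the threshold."""
    return efficiency(chi) >= eta_threshold

# Under these assumptions, efficiency(4024) ≈ 0.8966, matching Equation (2).
```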
If the current encapsulation efficiency satisfies the encapsulation efficiency threshold at step 404, the method proceeds to steps 406 and 408 to predict a next encapsulation efficiency and a next packet arrival time. According to one embodiment, a next encapsulation efficiency may be associated with one or more packets including a predicted next packet accumulated at a per-flow queue. In the illustrated example, predicting the next encapsulation efficiency comprises predicting a next packet size. The next packet arrival time is calculated to predict delay.
According to one embodiment, the predictions at steps 406 and 408 are computed, for example, using a recursion procedure. According to another embodiment, the predicted next packet size and the predicted next packet arrival time may be obtained by an iterative procedure or any other procedure suitable for generating the predicted values of a next packet size and a next packet arrival time.
For example, steps 406 and 408 may be performed in the following manner. A predicted next packet size Si(j), used to calculate the next encapsulation efficiency at step 406, may be associated with the size in bytes of the jth packet of the ith flow. Similarly, a predicted next packet arrival time Ti(j) is associated with the arrival time of the jth packet of the ith flow. The exponentially weighted moving averages (EWMAs) of Ti(j) and Si(j) may be defined as μinter(i,j) and μsize(i,j), given by Equations (3) and (4), respectively:
\mu_{inter}(i, j+1) = ff_{inter}\,\mu_{inter}(i, j) + (1 - ff_{inter})\,T_i(j+1) \qquad (3)
\mu_{size}(i, j+1) = ff_{size}\,\mu_{size}(i, j) + (1 - ff_{size})\,S_i(j+1) \qquad (4)
where ffinter and ffsize are forgetting factors that determine the degree of adaptiveness in the estimation of the moving averages.
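As a small illustration of Equations (3) and (4), the update below maintains one moving average per flow; the forgetting-factor value 0.9 and the sample values are arbitrary examples, not values taken from the specification.

```python
def ewma_update(mu: float, sample: float, ff: float) -> float:
    """One EWMA step: a forgetting factor closer to 1 adapts more slowly."""
    return ff * mu + (1.0 - ff) * sample

# Track the mean inter-arrival time (seconds) and mean size (bytes) of a flow.
mu_inter, mu_size = 0.010, 1200.0
mu_inter = ewma_update(mu_inter, sample=0.012, ff=0.9)  # new inter-arrival time
mu_size = ewma_update(mu_size, sample=900.0, ff=0.9)    # new packet size
```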
According to one embodiment, calculating the next encapsulation efficiency at step 406 and predicting the next packet arrival time at step 408 may include modeling a sequence according to an autoregressive model. In the illustrated example, an arrival time sequence may be modeled by defining τi(j)≡Ti(j)−μinter(i,j), and a packet size sequence may be modeled by defining si(j)≡Si(j)−μsize(i,j), where τi(j) and si(j) are normalized, zero-mean packet inter-arrival times and sizes, respectively. Thus, according to one embodiment, for each i, the sequences (τi(j): j=1,2, . . . ) and (si(j): j=1,2, . . . ) may be obtained according to an autoregressive model of order one (AR(1)) with time-varying coefficients, as described by Equations (5) and (6):
\tau_i(k+1) = \alpha_i(k)\,\tau_i(k) + \sigma_i\,\varepsilon(k+1), \quad k = 1, 2, \ldots \qquad (5)
s_i(k+1) = \beta_i(k)\,s_i(k) + \xi_i\,\varepsilon(k+1), \quad k = 1, 2, \ldots \qquad (6)
where αi(k) and βi(k) are time-varying coefficients, {ε(j): j=1,2, . . .} is a sequence of independent, identically distributed, zero-mean random variables representing the noise processes, and σi and ξi are variance-related coefficients of the noise processes. The modeling of a sequence may be performed using any suitable autoregressive technique.
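A one-line sketch of the AR(1) relationship in Equations (5) and (6), applied to the EWMA-normalized deviations; the function names are illustrative only.

```python
def normalize(sample: float, mu: float) -> float:
    """tau_i(j) = T_i(j) - mu_inter(i, j), or s_i(j) = S_i(j) - mu_size(i, j)."""
    return sample - mu

def ar1_step(prev_deviation: float, coeff: float, noise_scale: float, eps: float) -> float:
    """Equations (5)/(6): next deviation = coeff * previous deviation + scaled noise."""
    return coeff * prev_deviation + noise_scale * eps
```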
As discussed previously, the predicted size of the next packet Si(j) and the predicted next packet arrival time Ti(j) are determined at steps 406 and 408, respectively. For the ith per-flow queue, where i=1, 2, . . . , nv, and packets j, where j=1, 2, . . . , m, with m the index of the most recent packet to arrive at the encapsulator, the predicted values of Ti(j) and Si(j) for j>m at the rth step of a recursion procedure are described by Equations (7) and (8), respectively:
\hat{T}_i(m; r) = (\hat{\alpha}_i(m))^{r}\,(T_i(m) - \mu_{inter}(i, m)) + \mu_{inter}(i, m) \qquad (7)
\hat{S}_i(m; r) = (\hat{\beta}_i(m))^{r}\,(S_i(m) - \mu_{size}(i, m)) + \mu_{size}(i, m) \qquad (8)
where the adjusted coefficients α̂i(m) and β̂i(m) are obtained by minimizing the mean-square error as described by Equations (9) and (10):
As previously described, the illustrated method may use a recursion procedure. According to one embodiment, the adjusted coefficient α̂i(m) may be generated recursively each time a next packet arrival time needs to be predicted. To predict a next packet arrival time called for at step 408, a value for α̂i(m) may be determined using recursive variables Yi(k) and Zi(k) as defined by Equations (11) and (12):
where the r-step prediction may be obtained using the following procedure once the mth measurement for the last arrival time is available:
The prediction of the next encapsulation efficiency at step 406 may be determined using a similar recursion procedure. Once the prediction of next encapsulation efficiency is determined, the method proceeds to step 410 to determine if the predicted next encapsulation efficiency exceeds the current encapsulation efficiency previously determined at step 404 by Equation (1). If the next encapsulation efficiency is lower than the current encapsulation efficiency, the method proceeds to step 416 to encapsulate the packets accumulated at the per-flow queue.
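The sketch below illustrates this prediction step. Because Equations (9) through (12) are not reproduced above, the coefficient estimate shown is an ordinary least-squares AR(1) fit built from running sums (a standard stand-in, not necessarily the patented formula); the r-step predictor itself follows Equations (7) and (8).

```python
def ar1_coefficient(deviations: list[float]) -> float:
    """Least-squares AR(1) coefficient from zero-mean deviations. The two
    sums (here y and z) can be maintained recursively as packets arrive."""
    y = sum(a * b for a, b in zip(deviations[1:], deviations[:-1]))
    z = sum(a * a for a in deviations[:-1])
    return y / z if z else 0.0

def r_step_prediction(last_sample: float, mu: float, coeff: float, r: int) -> float:
    """Equations (7)/(8): the prediction decays from the last observed
    value toward the EWMA mean by a factor coeff per step."""
    return (coeff ** r) * (last_sample - mu) + mu
```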
If the next encapsulation efficiency is higher than the current encapsulation efficiency, the method proceeds to step 412 to determine the expected delay associated with receiving the next packet. In the illustrated method, the expected delay may be defined as the absolute value of the difference between the delay of the first packet at the encapsulator and the delay of the next packet. According to one embodiment, when a first packet arrives at the encapsulator, the time of arrival of the first packet is noted until the value is updated with the time of arrival of a subsequent or next packet. The time of arrival of the first bit of the kth packet in the ith flow queue may be represented by tstart(i)(k), where k=1,2, . . . , and i=1,2, . . . , nv. The times of arrival and departure for the first bit of a first packet may be represented by t(i)start,1 and t(i)finish,1. Similarly, the times of arrival and departure for the first bit of a second packet may be represented by t(i)start,2 and t(i)finish,2. The expected delay di may be defined by Equation (13):
In the worst case, the computation of di may be performed after each packet arrives at the per-flow queue.
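A small sketch of this delay check, based on the prose definition of Equation (13): the delay experienced by the first packet is compared with the predicted delay of the next packet, and the result is tested against the jitter requirement of step 414. The predicted finish time of the next packet would come from the prediction described below; the 30 ms bound is the example value given later.

```python
def expected_delay(t_start_1: float, t_finish_1: float,
                   t_start_2: float, t_finish_2_hat: float) -> float:
    """Equation (13) as described in the text: |delay of the first packet
    minus the (predicted) delay of the next packet|, in seconds."""
    return abs((t_finish_1 - t_start_1) - (t_finish_2_hat - t_start_2))

def within_jitter_bounds(delay: float, jitter_requirement: float = 0.030) -> bool:
    """Step 414: the delay is within bounds when it is below the requirement."""
    return delay < jitter_requirement
```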
To initialize the process of determining delay di at step 412, t(i)start,1=t(i)finish,1=0 and t(i)start,2=t(i)start,1. Subsequently, when an encapsulation section is formed, the value of t(i)start,1 is updated and used to update t(i)finish,1 as described by Equation (14):
where Stot(i) is the current number of bytes in the per-flow queue, Scurrent(i) is the size of the most recent packet to arrive from the ith per-flow queue, Ri is the peak rate in bits per second of the ith flow, Rencap is the rate of the encapsulator in bits per second, and QRT is the number of encapsulated cells in the real-time per-flow queue 40 at the time of computing t(i)finish,1. According to one embodiment, time t(i)finish,1 may be generated, for example, using the expression t(i)start,1+(8Scurrent(i)/Ri), if it is computed at or after the last bit of the first packet arrives. After the encapsulation section including the first packet is formed, the value of time t(i)start,2 is updated.
Determining the delay di of Equation (13) at step 412 may require the prediction of time t(i)finish,2. The predicted value of time t(i)finish,2 may be referred to as t̂(i)finish,2 and described by Equation (15):
where:
and QRTpred(t) is the predicted number of cells at the real-time buffer 50 after t seconds from the current time tcurrent, with QRTpred(0)=QRT. The predicted packet arrival time T̂next(i) and the predicted packet size Ŝnext(i) of Equation (16) may be obtained from Equations (7) and (8) as previously described.
To generate predicted time t̂finish,2(i) in accordance with Equation (15), the predicted number of cells at the real-time buffer 50, QRTpred(W(i)), may be obtained from Equation (17):
where Ŝtot(j)(W(i)) is described as the number of additional bytes that could accumulate at queue j as defined by Equations (18) through (22):
where the recursion variable r* is the largest integer that satisfies Equation (19):
and where mlast(j) is the index of the most recent packet to arrive at the jth queue.
According to one embodiment, the recursion variable r* may be, for example, the largest integer that satisfies Equation (20):
for any positive real-valued number t. Thus, using Equation (7) to obtain predicted time T̂j(mlast(j);k) and manipulating the result into Equation (20), a recursion function r(t) may be obtained as described by Equation (21):
Finally, once the recursion function r(t) is generated from Equation (21), for example, by using an iterative process, recursion variable r* may be obtained by substituting time t in Equation (21) with W(i)+tcurrent−tstart(j)(mlast(j)) so that a solution for Ŝtot(j)(W(i)) may be obtained from Equation (22):
where β̂j=β̂j(mlast(j)) and Sj=Sj(mlast(j)). Thus, using Equation (22) to obtain Ŝtot(j)(W(i)) for each j=1,2, . . . , nv, a value for the predicted number of cells at the real-time buffer 50, QRTpred(W(i)), may be obtained from Equation (17), and therefore a delay di in accordance with Equation (13) may be obtained.
The method proceeds to step 414 to determine if the delay associated with waiting for a next packet is within jitter bounds. The jitter bounds may comprise a predetermined maximum jitter requirement that functions as a delay threshold for evaluating a delay. According to one embodiment, the maximum jitter requirement is, for example, in the range of 25 to 35 milliseconds such as approximately 30 milliseconds. The delay obtained in step 412 is compared with the jitter requirement. A delay is within jitter bounds when the delay is lower than the jitter requirement. If the delay is within jitter bounds, that is, the delay is lower than the jitter requirement, the method proceeds to step 420 to wait and accumulate the next packet expected at the queue. If the delay is not within jitter bounds, that is, the delay is equal to or greater than the jitter requirement, the method proceeds to encapsulation of the accumulated packets in the per-flow queue under step 416 and to subsequently form cells from the encapsulation section at step 418. Cells may be formed in any suitable manner. For example, cells may be formed according to the procedure described with reference to
To summarize, if the packets accumulated satisfy the encapsulation efficiency threshold, the predicted jitter is evaluated to determine if the accumulated packets are encapsulated. If the predicted jitter is low, more packets will be accumulated. If the predicted jitter is too high, then the accumulated packets are encapsulated. The method may include more, fewer, or different steps. The steps of the method may be performed in any suitable order. For example, the prediction of the arrival time at step 408 may be performed after the determination at step 410.
Encapsulation section 500 may be formed into cells 550. Each cell 550 includes a cell header 220, a payload 540, and a cell footer 230. Cell header 220 may include, for example, MPEG-2 packet header data. Cell footer 230 may include, for example, MPEG-2 packet footer data. Payload 540 may carry the payload of an MPEG-2 packet, which has, according to one embodiment, 184 bytes.
According to one embodiment, one or more encapsulation sections 500 may be formed into one or more sequential cells 550. In the illustrated example, a cell 550a may be loaded with a first portion of encapsulation section 500a, a next cell 550b may be loaded with a next portion of encapsulation section 500a, while another cell 550c may be loaded with a last portion of section 500a. In general, cells may be formed sequentially until an encapsulation section 500 has been formed into one or more cells. In the illustrated example, the last portion 580 of encapsulation section 500a may be loaded into a payload 540c of a cell 550c, and a first portion 585 of a next encapsulation section 500b is loaded into the payload 540c of cell 550c, so that a condition of having additional capacity 280 as described by
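A minimal sketch of this padding-free packing, assuming 184-byte cell payloads: sections are laid end to end and any partial trailing payload is carried over to be completed by the next section rather than padded. The carry mechanism is an illustrative simplification, not a detail from the specification.

```python
CELL_PAYLOAD = 184  # payload bytes per cell, per the embodiment above

def sections_to_cells(sections: list[bytes], carry: bytes = b"") -> tuple[list[bytes], bytes]:
    """Pack encapsulation sections back to back into full cell payloads.
    Returns the completed payloads plus the leftover bytes ('carry') that
    will start the next cell when the next section arrives."""
    stream = carry + b"".join(sections)
    full = len(stream) // CELL_PAYLOAD * CELL_PAYLOAD
    cells = [stream[i:i + CELL_PAYLOAD] for i in range(0, full, CELL_PAYLOAD)]
    return cells, stream[full:]
```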
Certain embodiments of the invention may provide one or more technical advantages. A technical advantage of one embodiment may be that the size of an encapsulation section is adjusted in response to a current encapsulation efficiency and a predicted jitter in an effort to control jitter while maintaining efficiency. If the accumulated packets satisfy an encapsulation efficiency threshold and the predicted next encapsulation efficiency would not improve on the current efficiency, the packets may be encapsulated to maintain efficiency. Another technical advantage of an embodiment may be that a delay associated with receiving predicted packets may be determined, and if the delay satisfies a jitter requirement, the accumulated packets may be encapsulated to control jitter. Yet another technical advantage of an embodiment may be that encapsulation sections of packets may be formed into cells in a manner that controls jitter. The encapsulation sections may be formed into cells so that additional capacity in the cells is avoided.
Although an embodiment of the invention and its advantages are described in detail, a person skilled in the art could make various alterations, additions, and omissions without departing from the spirit and scope of the present invention as defined by the appended claims.