Method and device for determining a jitter buffer level

Information

  • Patent Grant
  • Patent Number
    9,042,261
  • Date Filed
    Friday, February 7, 2014
  • Date Issued
    Tuesday, May 26, 2015
Abstract
A buffer level for a jitter data buffer is determined. A frame payload size difference is determined for a plurality of video frames encoded into data packets sequentially received from a network. The difference is a difference in a payload size of a current frame and a previous frame. A frame network transit delay is determined as a difference in a transport time between the current frame and the previous frame and an expected transport time between the current frame and the previous frame. A slope and a variance of a linear relationship between the frame payload size difference and the frame network transit delay are determined for the plurality of video frames. Finally, a buffer level of a jitter data buffer is determined using a maximum frame payload size, an average frame payload size, the slope and the variance.
Description
TECHNICAL FIELD

The present disclosure generally relates to electrical telecommunication and more particularly to data packet delivery in networks using the Internet protocol (IP).


BACKGROUND

In a video-over-IP system, each image, or frame, may be encoded into one or several data packets that are sent with minimal delay (“back-to-back”) to the IP network. The frames are usually produced at a constant frame rate, so the packet clusters are sent at the same constant rate. On the receiver side the packets arrive with a variable delay. This delay is mainly due to the delays inflicted by the IP network and is often referred to as jitter. The severity of the jitter can vary significantly depending on network type and current network conditions. For example, the variance of the packet delay can change by several orders of magnitude from one network type to another, or even from one time to another on the same network path.


In order to reproduce a video stream that is true to the original that was transmitted from the source(s), the decoder (or receiver) must be provided with data packet clusters at the same constant rate with which the data packet clusters were sent. A device often referred to as a jitter buffer may be introduced in the receiver. The jitter buffer may be capable of de-jittering the incoming stream of packets and providing a constant flow of data to the decoder. This is done by holding the packets in a buffer, thus introducing delay, so that even the packets that were subject to larger delays have arrived before their respective time-of-use.


There is an inevitable trade-off in jitter-buffers between buffer delay on the one hand and distortions due to late arrivals on the other hand. A lower buffer level, and thus a shorter delay, generally results in a larger portion of packets arriving late or even being discarded, as the packets may be considered as being too late, while a higher buffer level, and thus a longer delay, is generally detrimental in itself for two-way communication between, e.g., humans.


BRIEF SUMMARY

It is with respect to the above considerations and others that the present invention has been made. In particular, the inventors have realized that it would be desirable to achieve a method for determining a buffer level of a jitter data buffer comprised in a receiver adapted to sequentially receive data packets from a communications network, wherein frames are encoded into the data packets, which method is capable of determining the appropriate buffer level in various network conditions. Furthermore, the inventors have realized that it would be desirable to determine the buffer level of a jitter data buffer on the basis of a first part of frame arrival delay related to payload size variation between frames and a second part related to the amount of cross-traffic in the communications network.


To better address one or more of these concerns, a method and a receiver having the features defined in the independent claims are provided. Further advantageous embodiments of the present invention are defined in the dependent claims.


According to a first aspect of the present invention, there is provided a method for determining a buffer level of a jitter data buffer comprised in a receiver adapted to sequentially receive data packets from a communications network, wherein frames are encoded into the data packets, each frame comprising timestamp information T and payload size information L. The buffer level is determined on the basis of a first part of frame arrival delay related to payload size variation between frames and a second part related to the amount of cross-traffic in the communications network. The method may comprise, for each frame, determining a frame payload size difference ΔL by comparing L of the current frame with L of a previous frame, determining a frame inter-arrival time Δt by comparing the measured arrival time of the current frame with the measured arrival time of a previous frame, determining a temporal frame spacing ΔT by comparing T of the current frame with T of a previous frame, and/or determining a frame network transit delay d on the basis of the difference between Δt and ΔT. The method may comprise, for each frame, estimating a set of parameters of a linear relationship between ΔL and d on the basis of ΔL and d for the current frame and ΔL and d determined for at least one previous frame. A first parameter and a second parameter comprised in the set may be adapted to be indicative of the first and second part, respectively. The method may comprise, for each frame, estimating a maximum frame payload size and an average frame payload size on the basis of L of the current frame and L of at least one previous frame. For each frame, the buffer level may be determined on the basis of the maximum frame payload size, the average frame payload size and the parameters of the linear relationship.


Such a configuration enables the determination of an appropriate buffer level of the jitter buffer in various network conditions, by determining the buffer level on the basis of statistical measures of current network conditions. In this manner, both frame arrival delay related to payload size variation between frames (‘self-inflicted’ frame arrival delay) and a frame arrival delay related to the amount of cross-traffic in the communications network may be taken into account in the determination of the buffer level. This is generally not the case in known devices and methods.


By the separation of the packet delay contributions into self-inflicted and cross-traffic delays, an improved adaptability to varying network conditions may be obtained. For instance, consider a typical situation where a majority of the frames are roughly equal in size (i.e. having a roughly equal payload size), while a few frames are relatively large (e.g. in comparison with the majority of frames). Conventionally, only the frame inter-arrival time is considered in the procedure of setting the buffer level. In such a case, only the few packets that result in the largest inter-arrival time would provide any useful information for the procedure of setting the buffer level. In contrast, according to embodiments of the present invention, all of the frames encoded in the arriving packets in general contribute information that may be utilized for estimating the set of parameters of a linear relationship between ΔL and d. In this manner, an improved accuracy and an increased adaptability with regards to varying network conditions may be achieved.


According to a second aspect of the present invention, there is provided a receiver adapted to sequentially receive data packets from a communications network. Frames are encoded into the data packets, each frame comprising timestamp information T and payload size information L. The receiver may comprise a jitter data buffer and a processing unit adapted to determine a buffer level of the jitter data buffer on the basis of a first part of frame arrival delay related to payload size variation between frames and a second part related to the amount of cross-traffic in the communications network. The receiver may comprise a time measuring unit adapted to measure the arrival time of each frame. The receiver, or the processing unit, may be adapted to, for each frame, determine a frame payload size difference ΔL by comparing L of the current frame with L of a previous frame, determine a frame inter-arrival time Δt by comparing the measured arrival time of the current frame with the measured arrival time of a previous frame, determine a temporal frame spacing ΔT by comparing T of the current frame with T of a previous frame, and/or determine a frame network transit delay d on the basis of the difference between Δt and ΔT. The receiver, or the processing unit, may be adapted to, for each frame, estimate a set of parameters of a linear relationship between ΔL and d on the basis of ΔL and d for the current frame and ΔL and d determined for at least one previous frame. A first parameter and a second parameter comprised in the set are indicative of the first and second part, respectively. The receiver, or the processing unit, may be adapted to, for each frame, estimate a maximum frame payload size and an average frame payload size on the basis of L of the current frame and L of at least one previous frame. 
For each frame, the buffer level may be determined on the basis of the maximum frame payload size, the average frame payload size and the parameters of the linear relationship.


By such a receiver, or decoder, there may be provided a receiver (or decoder) adapted to sequentially receive data packets from a communications network, which receiver (or decoder) may achieve the same or similar advantages achieved by the method according to the first aspect of the present invention.


According to a third aspect of the present invention, there is provided a computer program product adapted to, when executed in a processor unit, perform a method according to the first aspect of the present invention or any embodiment thereof.


According to a fourth aspect of the present invention, there is provided a computer-readable storage medium on which there is stored a computer program product adapted to, when executed in a processor unit, perform a method according to the first aspect of the present invention or any embodiment thereof.


Such a processing unit, or microprocessor, may for example be comprised in a receiver, or decoder, according to the second aspect of the present invention. Alternatively, or optionally, such processing unit or microprocessor may be arranged externally in relation to the receiver or decoder, with the processing unit or microprocessor being electrically connected to the receiver or decoder.


The first and second parameters may for example comprise a slope and a variance.


The estimation of the parameters of a linear relationship between ΔL and d may for example be performed by means of an adaptive filter algorithm, such as adaptive linear regression, recursive least-squares estimation, a Kalman filter, etc. Such adaptive filter algorithms may be used in different combinations. By utilizing one or more adaptive filter algorithms, the accuracy of the estimation of the linear relationship between ΔL and d may be refined by the choice of the adaptive filter algorithm(s) and/or the model parameters of the respective adaptive filter algorithm. For example, the adaptive filter algorithm(s) may be selected and/or the parameters thereof may be modified on the basis of user, capacity and/or application requirements.


There may be determined whether the absolute value of a difference between d for the current frame and at least one of the first part of frame arrival delay, related to payload size variation between frames, for respective previous frames exceeds a predetermined threshold value.


In other words, a so called “sanity” check, or extreme outlier identification, can be made on the measurements (e.g. frame payload sizes, arrival time differences) prior to performing an estimation of a set of parameters of a linear relationship between ΔL and d. In this manner, the effect of undesired, spurious events in the data packet delivery in the communication network may be avoided or mitigated.


An indication of a discontinuous change in a parameter of the communications network may be sensed. The parameter may be indicative of traffic conditions of the communications network.


In other words, so called change detection may be performed to assess whether a discontinuous, or abrupt, change has occurred in traffic conditions of the communications network. Such change detection may be performed in various manners, for example in accordance with user, capacity and/or application requirements. For example, change detection may be performed by means of a CUSUM test.
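As an illustration, a one-sided CUSUM test on the frame network transit delay could be sketched as follows in Python; the drift and threshold values, and the use of a fixed reference mean, are hypothetical tuning choices and not parameters fixed by this disclosure:

```python
# Illustrative one-sided CUSUM test for detecting an abrupt upward shift in
# the frame delay statistics. All numerical values are hypothetical.

def cusum_step(g: float, x: float, mean: float, drift: float) -> float:
    """Accumulate evidence of an upward change in x relative to mean."""
    return max(0.0, g + (x - mean) - drift)

def detect_change(samples, mean, drift=1.0, threshold=8.0):
    """Return the index at which the CUSUM statistic crosses the threshold,
    or None if no change is detected."""
    g = 0.0
    for i, x in enumerate(samples):
        g = cusum_step(g, x, mean, drift)
        if g > threshold:
            return i
    return None

# Delays hover around 5 ms, then jump to about 15 ms at index 20.
delays = [5.0] * 20 + [15.0] * 10
print(detect_change(delays, mean=5.0))
```

With these (hypothetical) settings the alarm fires on the first sample after the step, since each post-change sample adds 9 to the statistic while pre-change samples keep it at zero.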


In case an indication of a discontinuous change in the parameter is sensed, at least one model parameter used in the estimation of the parameters of a linear relationship between ΔL and d may be reset. In this manner, re-convergence of the process of estimating a set of parameters of a linear relationship between ΔL and d for the new network conditions may be facilitated.


The at least one model parameter may for example comprise a Kalman covariance matrix, a noise estimate, etc., depending on implementation details.


There may be determined, on the basis of T of the current frame and T of the previous frame, whether the previous frame was transmitted earlier with regards to transmission order compared to the current frame. In other words, it may be checked whether the current frame has suffered from re-ordering. If so, the processing of the current frame may be stopped and the current frame discarded. Subsequently, the receiver may await the arrival of the next frame for processing thereof.
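A minimal sketch of such a re-ordering check, assuming monotonically increasing timestamps (real RTP timestamps are 32-bit values that wrap around, which this sketch ignores):

```python
def is_reordered(T: int, Tp: int) -> bool:
    """Return True if the current frame (timestamp T) was produced before
    the last processed frame (timestamp Tp), i.e. it arrived out of order.
    Assumes timestamps increase monotonically (no wrap-around handling)."""
    return T < Tp

# A frame stamped 900 arriving after a frame stamped 1000 is out of order.
print(is_reordered(900, 1000))   # reordered
print(is_reordered(1100, 1000))  # in order
```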


According to another aspect of the teachings herein, a method includes determining a frame payload size difference for a plurality of video frames encoded into data packets sequentially received from a communications network, wherein a frame payload size difference is a difference in a payload size of a current frame and a payload size of a previous frame, determining a frame network transit delay for the plurality of video frames, wherein the frame network transit delay is a difference in a transport time between the current frame and the previous frame and an expected transport time between the current frame and the previous frame, determining a slope and a variance of a linear relationship between the frame payload size difference and the frame network transit delay for the plurality of video frames, and determining a buffer level of a jitter data buffer using a maximum frame payload size, an average frame payload size, the slope and the variance.


According to another aspect of the teachings herein, an apparatus includes memory and a processing unit. The processing unit is configured to execute instructions stored in the memory to determine a frame payload size difference for a plurality of video frames encoded into data packets sequentially received from a communications network, wherein a frame payload size difference is a difference in a payload size of a current frame and a payload size of a previous frame, determine a frame network transit delay for the plurality of video frames, wherein the frame network transit delay is a difference in a transport time between the current frame and the previous frame and an expected transport time between the current frame and the previous frame, determine a slope and a variance of a linear relationship between the frame payload size difference and the frame network transit delay for the plurality of video frames, and determine a buffer level of a jitter data buffer using a maximum frame payload size, an average frame payload size, the slope and the variance, the jitter data buffer located in the memory.


Further objects and advantages of the present invention are described in the following by means of exemplifying embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS

Exemplifying embodiments of the present invention will be described in the following with reference to the accompanying drawings, in which:



FIG. 1 is a schematic exemplifying graph of the frame network transit delay d versus the frame payload size difference ΔL;



FIG. 2A is a schematic view of a receiver according to an exemplifying embodiment of the present invention;



FIG. 2B is a schematic view of a frame in accordance with an exemplifying embodiment of the present invention;



FIG. 3 is a schematic flow diagram of a method according to an exemplifying embodiment of the present invention; and



FIG. 4 shows schematic views of different exemplifying types of computer readable digital storage mediums according to embodiments of the present invention.


In the accompanying drawings, the same reference numerals denote the same or similar elements throughout the views.





DETAILED DESCRIPTION

The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplifying embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and fully convey the scope of the invention to those skilled in the art. Furthermore, like numbers refer to like or similar elements or components throughout. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.


According to the present invention, the transmission delay of a packet or cluster of packets may be considered to be made up of two independent parts: self-inflicted delay and cross-traffic delay. Because a large packet (i.e. having a large payload) generally takes a longer time to transmit over a network link with limited capacity compared to a smaller packet, the payload size variations from one image (or frame) to the next generally give rise to a payload-size dependent delay variation, which is referred to in the context of some embodiments of the present invention as self-inflicted delay. Any delays that arise because of other network traffic (cross-traffic) utilizing the links and queues of the network are referred to in the context of some embodiments of the present invention as the cross-traffic delay. To the receiver, the cross-traffic delay may appear as quasi-random and independent of the (video) traffic.


The present invention is based on separating the packet delay contributions into self-inflicted and cross-traffic delays, estimating appropriate parameters describing the current conditions for the self-inflicted delay and the cross-traffic delay, and determining an appropriate jitter buffer level based on the estimates.


In a video-over-IP system, the encoder may operate on clusters of packets that make up each image. In the context of some embodiments of the present invention, such a group of packets is referred to as a ‘frame’. When the arrival of a frame is completed at the receiver, i.e. when all the packets included in the frame have arrived at the receiver, the arrival time may be determined. Comparing the arrival time with the arrival time of a previously received frame provides a frame inter-arrival time Δt.


In general, each frame comprises some kind of timing information, or time-stamp information, that indicates the production time of the frame. Such timing information may for example comprise the timestamp field in a real-time transport protocol (RTP) header. Such timing information may provide a nominal temporal frame spacing between the present frame and a previously received frame, denoted ΔT. The nominal temporal frame spacing ΔT may for example comprise the time between capturing the two frames from a camera. By determining a difference between the actual inter-arrival time Δt and the nominal temporal frame spacing ΔT a deviation measure, or a frame network transit delay, d for the currently received frame may be obtained. For example, d=Δt−ΔT.


In general, each frame comprises payload size information L. The difference ΔL between the payload size between the currently received frame and a previously received frame may be determined.


The self-inflicted delay and the cross-traffic delay described in the foregoing may be described as a part of d that can be attributed to ΔL and a part that generally cannot be attributed to ΔL. Namely, in case of a relatively large value of ΔL, the current frame is larger than the previous frame, and the current frame may generally be expected to be later than the previous frame. This would result in an increased Δt, resulting in a larger value of d. On the other hand, the cross-traffic related delay generally affects Δt in ways that are difficult to correlate with ΔL.


On receiving the complete frame with frame number k, the frame delay d(k)=Δt(k)−ΔT(k)=t(k)−t(k−1)−(T(k)−T(k−1)) may be calculated. The payload size difference is denoted ΔL (k)=L(k)−L(k−1). The differential frame delay, or frame network transit delay, may be assumed to follow the model:

d(k)=A+BΔL(k)+w(k),

where w(k) represents the cross-traffic related delay. It may be assumed that w(k), k=0, 1, 2, . . . is a sequence of independent realizations of a zero-mean stochastic variable with variance σ2. Hence, the relation between d(k) and ΔL may be represented by a straight line with offset A and slope B, with a scattering around the line determined by σ2.
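As an illustration of this model, synthetic (ΔL, d) pairs generated according to d(k)=A+BΔL(k)+w(k) can be used to recover the line parameters, here with a plain ordinary-least-squares fit; the numerical values of A, B and the noise level are hypothetical examples:

```python
import random

# Synthetic data following d(k) = A + B*dL(k) + w(k). The true offset A (ms),
# slope B (ms/byte) and noise standard deviation are hypothetical values.
random.seed(1)
A_true, B_true, sigma = 5.0, 0.02, 2.0

dL = [random.uniform(-500, 500) for _ in range(2000)]               # bytes
d = [A_true + B_true * x + random.gauss(0.0, sigma) for x in dL]    # ms

# Ordinary least-squares estimates of slope B and offset A.
n = len(dL)
mx = sum(dL) / n
my = sum(d) / n
B_est = sum((x - mx) * (y - my) for x, y in zip(dL, d)) / sum((x - mx) ** 2 for x in dL)
A_est = my - B_est * mx

print(round(A_est, 2), round(B_est, 4))
```

With enough samples the fitted line recovers A and B closely, while the residual scatter around the line reflects the cross-traffic variance σ2.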


In view of the foregoing description and with reference to FIG. 1, in FIG. 1 there is shown a schematic exemplifying straight line and scatter plot of the frame network transit delay d versus the frame payload size difference ΔL. The quantities d and ΔL may for example be given in units of ms and bytes, respectively.


Given estimates of the parameters A, B and σ2, respectively, and knowledge of the largest expected ΔL, the worst case frame inter-arrival time may be estimated. On the basis of this worst case frame inter-arrival time a size of the jitter buffer may be set that mitigates the jitter.


In view of the above, an embodiment of the present invention may comprise collecting measurements of d(k) and ΔL(k) for a number of frames k, estimating parameters A, B, and σ2, storing or estimating the largest expected frame size difference ΔL, and determining a jitter buffer level based on the estimates.


Referring now to FIG. 2A, there is shown a schematic view of a receiver 200 according to an exemplifying embodiment of the present invention. The receiver 200 is communicating in a wireless manner with a communications network 202 via an air interface 201. The receiver comprises a data jitter buffer 203. The receiver may comprise a time-measuring unit 204 and a processing unit 205.


The communication between the receiver 200 and the communications network 202 may be performed in a non-wired (e.g. by means of wireless radiowave communications) or a wired (e.g. by means of electrical conductors or optical fibers or the like) fashion.


Referring now to FIG. 2B, the receiver 200 may be adapted to sequentially receive data packets from the communications network 202, wherein frames 206 are encoded into the data packets, each frame 206 comprising timestamp information T 206a and payload size information L 206b. Each frame 206 may alternatively or optionally comprise additional information. According to an embodiment of the present invention, the buffer level of the data jitter buffer 203 is determined on the basis of a first part of frame arrival delay related to payload size variation between frames and a second part related to the amount of cross-traffic in the communications network 202.


Optionally, or alternatively, the receiver 200 may comprise a sensing unit 207, which may be adapted to sense an indication of a discontinuous change in a parameter of the communications network 202. This parameter may for example be indicative of traffic conditions in the communications network 202.


Referring now to FIG. 3, there is shown a schematic flow diagram of a method 300 according to an exemplifying embodiment of the present invention. With reference to FIGS. 2A and 2B, a receiver 200 may be adapted to sequentially receive data packets from a communications network 202. The data packets may be such that frames 206 are encoded in the data packets.


Referring now again to FIG. 3, at step 301 a frame payload size difference ΔL may be determined for each frame by comparing L of the current frame with L of a previous frame.


At step 302, a frame inter-arrival time Δt may be determined by comparing the measured arrival time of the current frame with the measured arrival time of a previous frame.


At step 303, a temporal frame spacing ΔT may be determined by comparing T of the current frame with T of a previous frame.


At step 304, a frame network transit delay d may be determined on the basis of the difference between Δt and ΔT.


The method 300 may further comprise, for each frame, in step 305 estimating a set of parameters of a linear relationship between ΔL and d on the basis of ΔL and d for the current frame and ΔL and d determined for at least one previous frame. With reference to FIGS. 2A and 2B, a first parameter and a second parameter comprised in the set may be adapted to be indicative of a first part of frame arrival delay related to payload size variation between frames 206 and a second part related to the amount of cross-traffic in the communications network 202.


The method 300 may further comprise, for each frame, in step 306 estimating a maximum frame payload size and an average frame payload size on the basis of L of the current frame and L of at least one previous frame.


The method 300 may further comprise, for each frame, a step 307 of determining the buffer level on the basis of the maximum frame payload size, the average frame payload size and the parameters of the linear relationship.


Optionally, the method 300 may comprise a step 308 comprising determining, on the basis of T of the current frame and T of the previous frame, whether the previous frame was transmitted earlier with regards to transmission order compared to the current frame.


Optionally, the method 300 may comprise a step 309 comprising determining whether the absolute value of a difference between d for the current frame and at least one of the first part of frame arrival delay, related to payload size variation between frames, for respective previous frames exceeds a predetermined threshold value.


Optionally, the method 300 may comprise a step 310 comprising sensing an indication of a discontinuous change in a parameter of the communications network. The parameter may for example be indicative of traffic conditions in the communications network. Optionally, if an indication of a discontinuous change in such a parameter has been sensed, at least one model parameter used in the estimation of the parameters of a linear relationship between ΔL and d may be reset (step 311).


In the following, a method according to an exemplifying embodiment of the present invention is described in some detail.


The method of the exemplifying embodiment can be described using three phases—inter-arrival delay calculation, parameter estimation and buffer level calculation—each phase being carried out at least once each time arrival of a frame is completed, i.e. when all the packets included in the frame have arrived at the receiver. Additionally, there may be a reset phase, carried out in the beginning of the method and also when otherwise found necessary. The phases are described in the following.


The following notation is used in the description in the following:

  • F: The last received frame that is currently being processed.
  • Fp: A previously processed frame.
  • t: Arrival time for frame F (the time at which the arrival of the frame is completed at the receiver).
  • tp: Arrival time for frame Fp.
  • T: Timestamp for frame F.
  • Tp: Timestamp for frame Fp.
  • L: Payload size (in the communications network) for frame F.
  • Lp: Payload size (in the communications network) for frame Fp.
  • Lavg(k): The k-th update of the estimated average payload size.
  • φ: Filter factor for estimate Lavg; 0 ≤ φ ≤ 1.
  • Lmax(k): The k-th update of the estimated maximum payload size.
  • ψ: Filter factor for estimate Lmax; 0 ≤ ψ ≤ 1.
  • C1: Design parameter; C1>0.
  • C2: Design parameter; C2>0.
  • α: Filter factor for the noise estimates m and σ2; 0 ≤ α ≤ 1.
  • σ2(k): The k-th update of the estimated noise variance.
  • m(k): The k-th update of the estimated noise mean.
  • D(k): The k-th update of the desired buffer level.
  • A(k): The k-th update of the line offset.
  • B(k): The k-th update of the line slope.
  • M(k|k−1) and M(k|k): Predictor and measurement updates, respectively, of the Kalman filter covariance matrix.
  • K(k): Kalman filter gain vector in the k-th iteration.
  • Q: Kalman filter process noise covariance matrix.
  • I: 2-by-2 identity matrix.


Reset Phase I


At time index k=0, all parameters and variables may be reset to suitable initial values. What constitutes a suitable initialization may vary depending on current communications network conditions and application.


Inter-Arrival Delay Calculation


In case the current frame has suffered from re-ordering (in other words, if the current frame is earlier in (transmission-order) sequence than the last processed frame), the processing for the current frame may be stopped and the current frame may be discarded.


Otherwise, a nominal inter-arrival time is calculated using the timestamps of the current frame and a previously received frame:

ΔT=T−Tp.

The actual inter-arrival time is calculated:

Δt=t−tp.

The frame network transit delay is calculated:

d=Δt−ΔT.


All of these times and time periods may be adjusted in order to use the same time scale, e.g., milliseconds or sample periods.
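A minimal sketch of this inter-arrival delay calculation, assuming all timestamps have already been converted to a common unit such as milliseconds:

```python
def frame_network_transit_delay(t, tp, T, Tp):
    """Compute d = Δt − ΔT for the current frame, where t/tp are measured
    arrival times and T/Tp are production timestamps, all on one time scale
    (e.g. milliseconds)."""
    dt = t - tp   # actual inter-arrival time Δt
    dT = T - Tp   # nominal temporal frame spacing ΔT
    return dt - dT

# Hypothetical numbers: frames produced 33 ms apart, but the second frame
# arrives 41 ms after the first, so it was delayed 8 ms more than expected.
print(frame_network_transit_delay(t=141, tp=100, T=1033, Tp=1000))  # → 8
```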


Parameter Estimation


Another phase in the exemplifying method is to estimate the model parameters.


A frame size difference is calculated:

ΔL=L−Lp.

The average estimate for frame size is calculated:

Lavg(k)=φLavg(k−1)+(1−φ)L.

The estimate of the maximum frame size is calculated:

Lmax(k)=max{ψLmax(k−1); L}.
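The two payload-size estimates can be sketched as exponential filters of the form given above; the filter factors φ and ψ, and the zero initialization, are hypothetical example choices (a real implementation would likely seed the estimates from the first received frame):

```python
def update_avg(L_avg_prev: float, L: float, phi: float = 0.97) -> float:
    """Exponentially filtered average payload size:
    Lavg(k) = φ·Lavg(k−1) + (1−φ)·L."""
    return phi * L_avg_prev + (1.0 - phi) * L

def update_max(L_max_prev: float, L: float, psi: float = 0.9999) -> float:
    """Slowly decaying maximum payload size:
    Lmax(k) = max{ψ·Lmax(k−1), L}."""
    return max(psi * L_max_prev, L)

# Hypothetical payload sizes in bytes; estimates seeded at zero for brevity.
L_avg, L_max = 0.0, 0.0
for L in [1000, 1200, 5000, 1100]:
    L_avg = update_avg(L_avg, L)
    L_max = update_max(L_max, L)
print(L_avg, L_max)
```

The decay factor ψ lets Lmax slowly forget an unusually large frame, while φ controls how quickly Lavg tracks the typical frame size.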

An extreme outlier identification may then be performed. Let

δ=d−(A(k−1)+B(k−1)ΔL).


In case |δ| ≥ C1√(σ2(k−1)), the current frame may be considered to be an extreme outlier.


Next, on one hand, if the current frame is not an extreme outlier:


Update average noise: m(k)=αm(k−1)+(1−α)δ.


Update noise variance: σ2(k)=ασ2(k−1)+(1−α)(δ−m(k))².


Update the estimates of A and B using, e.g., a Kalman filter iteration:


Covariance matrix predictor update:

M(k|k−1)=M(k−1|k−1)+Q.

Compute Kalman gain vector:







K(k) = M(k|k−1)[ΔL 1]ᵀ/(f + [ΔL 1]M(k|k−1)[ΔL 1]ᵀ),
where the function f may be:

f = (300·e^(−ΔL/Lmax(k)) + 1)·σ2(k).

Update estimates of A and B:

[B(k) A(k)]ᵀ = [B(k−1) A(k−1)]ᵀ + K(k)(d − (A(k−1) + B(k−1)ΔL)).

Calculate the covariance matrix measurement update:

M(k|k)=(I−K(k)[ΔL 1])M(k|k−1).
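The parameter-estimation phase can be sketched as a single Kalman iteration over the state [B, A]; the initial covariance, the process noise Q, and the use of a constant measurement-noise value f (in the disclosure f depends on ΔL, Lmax(k) and σ2(k)) are simplified example choices:

```python
# One Kalman iteration for the state x = [B, A], following the predictor,
# gain, estimate and measurement updates of the exemplifying method.
# M0, Q and the constant f below are simplified hypothetical choices.

def kalman_step(x, M, dL, d, f, Q=((0.0, 0.0), (0.0, 0.0))):
    """x = (B, A); M is the 2x2 covariance; h = [dL, 1] is the regressor."""
    # Predictor update: M(k|k-1) = M(k-1|k-1) + Q.
    M = [[M[0][0] + Q[0][0], M[0][1] + Q[0][1]],
         [M[1][0] + Q[1][0], M[1][1] + Q[1][1]]]
    h = (dL, 1.0)
    # Mh = M(k|k-1) h^T.
    Mh = (M[0][0] * h[0] + M[0][1] * h[1],
          M[1][0] * h[0] + M[1][1] * h[1])
    # Gain: K(k) = Mh / (f + h M(k|k-1) h^T).
    denom = f + h[0] * Mh[0] + h[1] * Mh[1]
    K = (Mh[0] / denom, Mh[1] / denom)
    # Innovation: d - (A(k-1) + B(k-1)*dL).
    innov = d - (x[1] + x[0] * dL)
    x = (x[0] + K[0] * innov, x[1] + K[1] * innov)
    # Measurement update: M(k|k) = (I - K h) M(k|k-1).
    M = [[(1 - K[0] * h[0]) * M[0][0] - K[0] * h[1] * M[1][0],
          (1 - K[0] * h[0]) * M[0][1] - K[0] * h[1] * M[1][1]],
         [-K[1] * h[0] * M[0][0] + (1 - K[1] * h[1]) * M[1][0],
          -K[1] * h[0] * M[0][1] + (1 - K[1] * h[1]) * M[1][1]]]
    return x, M

# Noise-free synthetic frames with true B = 0.02 ms/byte and A = 5 ms.
x = (0.0, 0.0)
M = [[1.0, 0.0], [0.0, 100.0]]
for k in range(300):
    dL = ((k % 11) - 5) * 100.0        # deterministic size differences
    d = 5.0 + 0.02 * dL
    x, M = kalman_step(x, M, dL, d, f=300.0)
print(round(x[0], 3), round(x[1], 2))
```

With noise-free data the estimates converge close to the true slope and offset; with cross-traffic noise present the covariance M settles at a level determined by f and Q.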


On the other hand, if the current frame is an extreme outlier: Let sδ be the sign function of δ, i.e., sδ = −1 if δ < 0, and sδ = +1 if δ ≥ 0.

Calculate average noise: m(k)=αm(k−1)+(1−α)sδC1√(σ2(k−1)).

Calculate noise variance: σ2(k)=ασ2(k−1)+(1−α)(sδC1√(σ2(k−1))−m(k))2.
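In this branch the residual is replaced by a clipped value of magnitude C1√(σ2(k−1)) carrying the sign of δ, which prevents a single extreme outlier from blowing up the noise statistics. A sketch (α and C1 values are illustrative):

```python
import math

def update_noise_outlier(delta, m, var, alpha=0.95, C1=3.0):
    """Noise update for the outlier branch: use a clipped residual of
    magnitude C1*sqrt(var) with the sign of delta."""
    s = -1.0 if delta < 0 else 1.0        # sign of the residual
    clipped = s * C1 * math.sqrt(var)     # clipped residual
    m = alpha * m + (1 - alpha) * clipped                  # average noise m(k)
    var = alpha * var + (1 - alpha) * (clipped - m) ** 2   # variance sigma^2(k)
    return m, var

m, var = update_noise_outlier(delta=-50.0, m=0.0, var=4.0, alpha=0.9)
```

Note that, as in the equations above, the variance update uses the freshly updated m(k), not m(k−1).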

Reset Phase II


Next, change detection may be performed in order to assess whether an abrupt change has occurred in the network. Change detection can be performed in many manners, for example by utilizing a CUSUM test. In case an abrupt change is detected, a suitable set of variables may be reset. For instance, the Kalman covariance matrix M(k|k) can be reset to its initial (large) value in order to facilitate a rapid re-convergence to the new communication network conditions. Alternatively or optionally, the noise estimates σ2(k) and/or m(k) can be reset.
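As one possible realization of the change-detection step, a one-sided CUSUM test accumulates model residuals and raises an alarm when the sum exceeds a threshold. The drift and threshold values below are illustrative:

```python
class Cusum:
    """Minimal one-sided CUSUM change detector."""

    def __init__(self, drift=0.5, threshold=10.0):
        self.drift, self.threshold = drift, threshold
        self.g = 0.0  # accumulated evidence of a positive change

    def update(self, residual):
        """Return True when the accumulated residuals signal an abrupt change."""
        self.g = max(0.0, self.g + residual - self.drift)
        if self.g > self.threshold:
            self.g = 0.0  # reset after an alarm
            return True
        return False

det = Cusum()
alarms = [det.update(r) for r in [0.1] * 10 + [3.0] * 10]
print(alarms.index(True))  # 14
```

An alarm would then trigger the resets described above, e.g. restoring M(k|k) to its initial large value.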


Buffer Level Calculation


The last phase in the exemplifying method comprises calculation of the desired buffer level:

D(k)=B(k)(Lmax−Lavg)+C2√(σ2(k)).
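The formula can be read as the extra queuing delay of a maximum-size frame, B(k)(Lmax−Lavg), plus a safety margin of C2 noise standard deviations. A sketch, with C2 as an illustrative constant:

```python
import math

def buffer_level(B, L_max, L_avg, var, C2=2.33):
    """D(k) = B(k)*(Lmax - Lavg) + C2*sqrt(sigma^2(k))."""
    return B * (L_max - L_avg) + C2 * math.sqrt(var)

# With slope 2 microseconds/byte, 500 bytes of size headroom and a noise
# standard deviation of 3 ms, the target level comes out at about 8 ms.
level = buffer_level(B=0.002, L_max=1500.0, L_avg=1000.0, var=9.0)
```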


Referring now to FIG. 4, there are shown schematic views of computer readable digital storage mediums 400 according to exemplifying embodiments of the present invention, comprising a DVD 400a and a floppy disk 400b. On each of the DVD 400a and the floppy disk 400b there may be stored a computer program comprising computer code adapted to perform, when executed in a processor unit, a method according to the present invention or embodiments thereof, as has been described in the foregoing.


Although only two different types of computer-readable digital storage mediums have been described above with reference to FIG. 4, the present invention encompasses embodiments employing any other suitable type of computer-readable digital storage medium, such as, but not limited to, a non-volatile memory, a hard disk drive, a CD, a flash memory, magnetic tape, a USB stick, a Zip drive, etc.


The receiver may comprise one or more microprocessors (not shown) or some other device with computing capabilities, e.g. an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a complex programmable logic device (CPLD), etc., in order to perform operations such as estimating a set of parameters of a linear relationship between ΔL and d. Such a microprocessor may alternatively or optionally be comprised in, integrated with, or be, the processing unit of the receiver.


When performing steps of different embodiments of the method of the present invention, the microprocessor typically executes appropriate software that is downloaded to the receiver and stored in a suitable storage area, such as, e.g., a Random Access Memory (RAM), a flash memory or a hard disk, or software that has been stored in a non-volatile memory, e.g., a Read Only Memory (ROM). Such a microprocessor or processing unit may alternatively or optionally be located externally relatively to the receiver (and electrically connected to the receiver).


A computer program product comprising computer code adapted to perform, when executed in a processor unit, a method according to the present invention or any embodiment thereof may be stored on a computer (e.g. a server) adapted to be in communication with a receiver according to an exemplifying embodiment of the present invention. In this manner, when loaded into and executed in a processor unit of the computer, the computer program may perform the method. Such a configuration eliminates the need to store the computer program locally at the receiver. The communication between the computer and the receiver may be implemented in a wired fashion (e.g. by means of Ethernet) or in a non-wired fashion (e.g. by means of wireless infra-red (IR) communications or other wireless optical communications, or by means of wireless radio wave communications).


While the invention has been illustrated and described in detail in the appended drawings and the foregoing description, such illustration and description are to be considered illustrative or exemplifying and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

Claims
  • 1. A method, comprising: determining, using a processing unit, a frame payload size difference for a plurality of video frames encoded into data packets sequentially received from a communications network, wherein a frame payload size difference is a difference in a payload size of a current frame and a payload size of a previous frame; determining, using the processing unit, a frame network transit delay for the plurality of video frames, wherein the frame network transit delay is a difference in a transport time between the current frame and the previous frame and an expected transport time between the current frame and the previous frame; determining, using the processing unit, a slope and a variance of a linear relationship between the frame payload size difference and the frame network transit delay for at least some of the plurality of video frames; and determining a buffer level of a jitter data buffer using a maximum frame payload size, an average frame payload size, the slope and the variance, the jitter data buffer located in memory associated with the processing unit.
  • 2. The method of claim 1, further comprising: calculating the expected transport time between the current frame and the previous frame by comparing timestamp information of the previous frame with timestamp information of the current frame.
  • 3. The method of claim 2, further comprising: calculating the transport time between the current frame and the previous frame by comparing a measured arrival time of the current frame with a measured arrival time of the previous frame.
  • 4. The method of claim 1, further comprising: calculating the transport time between the current frame and the previous frame by comparing a measured arrival time of the current frame with a measured arrival time of the previous frame.
  • 5. The method of claim 1 wherein determining the slope and the variance comprises: performing an adaptive filter algorithm updated upon arrival of each frame.
  • 6. The method of claim 1, further comprising: before determining the slope and the variance, identifying whether a frame of the plurality of frames has an absolute value of a difference between its frame network transit delay and a portion of frame network transit delay attributable to payload size variation between previous frames that exceeds a threshold value.
  • 7. The method of claim 6, further comprising: updating the linear relationship differently based on whether or not the absolute value exceeds the threshold.
  • 8. The method of claim 1, further comprising: sensing a discontinuous change in a parameter of the communications network, the parameter being indicative of traffic conditions in the communications network; and adjusting at least one parameter used to determine the slope and the variance of the linear relationship based on the discontinuous change.
  • 9. The method of claim 8 wherein adjusting at least one parameter comprises resetting the at least one parameter.
  • 10. The method of claim 1, further comprising: determining whether the current frame was transmitted earlier than the previous frame with regards to a transmission order of the plurality of frames.
  • 11. The method of claim 10, further comprising: omitting the current frame from determining the slope and the variance of the linear relationship when the current frame was transmitted earlier with regards to the transmission order.
  • 12. An apparatus, comprising: memory; and a processing unit configured to execute instructions stored in the memory to: determine a frame payload size difference for a plurality of video frames encoded into data packets sequentially received from a communications network, wherein a frame payload size difference is a difference in a payload size of a current frame and a payload size of a previous frame; determine a frame network transit delay for the plurality of video frames, wherein the frame network transit delay is a difference in a transport time between the current frame and the previous frame and an expected transport time between the current frame and the previous frame; determine a slope and a variance of a linear relationship between the frame payload size difference and the frame network transit delay for the plurality of video frames; and determine a buffer level of a jitter data buffer using a maximum frame payload size, an average frame payload size, the slope and the variance, the jitter data buffer located in the memory.
  • 13. The apparatus of claim 12 wherein the processing unit is configured to: calculate the expected transport time between the current frame and the previous frame by comparing timestamp information of the previous frame with timestamp information of the current frame.
  • 14. The apparatus of claim 12 wherein the processing unit is configured to: calculate the transport time between the current frame and the previous frame by comparing a measured arrival time of the current frame with a measured arrival time of the previous frame.
  • 15. The apparatus of claim 12 wherein the processing unit is configured to determine the slope and the variance by: performing an adaptive filter algorithm updated upon arrival of each frame.
  • 16. The apparatus of claim 12 wherein the processing unit is configured to, before determining the slope and the variance, identify whether a frame of the plurality of frames has an absolute value of a difference between its frame network transit delay and a portion of frame network transit delay attributable to payload size variation between previous frames that exceeds a threshold value.
  • 17. The apparatus of claim 16 wherein the processing unit is configured to: update the linear relationship differently based on whether or not the absolute value exceeds the threshold.
  • 18. The apparatus of claim 12 wherein the processing unit is configured to: sense a discontinuous change in a parameter of the communications network, the parameter being indicative of traffic conditions in the communications network; and reset at least one parameter used to determine the slope and the variance of the linear relationship based on the discontinuous change.
  • 19. The apparatus of claim 12 wherein the processing unit is configured to: determine whether the current frame was transmitted earlier than the previous frame with regards to a transmission order of the plurality of frames.
  • 20. The apparatus of claim 19 wherein the processing unit is configured to: omit the current frame from determining the slope and the variance of the linear relationship when the current frame was transmitted earlier with regards to the transmission order.
CROSS-REFERENCES TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 13/497,625, filed Mar. 22, 2012, which was a National Stage of International Application No. PCT/EP2010/063818, filed Sep. 20, 2010, and claimed priority to European Patent Application No. 09171120.0, filed Sep. 23, 2009, and U.S. Provisional Patent Application No. 61/245,003, filed Sep. 23, 2009, all of which are hereby incorporated by reference in their entireties.

US Referenced Citations (174)
Number Name Date Kind
5432900 Rhodes et al. Jul 1995 A
5473326 Harrington et al. Dec 1995 A
5589945 Abecassis Dec 1996 A
5606371 Klein Gunnewiek et al. Feb 1997 A
5659539 Porter et al. Aug 1997 A
5696869 Abecassis Dec 1997 A
5793647 Hageniers et al. Aug 1998 A
5828370 Moeller et al. Oct 1998 A
5903264 Moeller et al. May 1999 A
5910827 Kwan et al. Jun 1999 A
5913038 Griffiths Jun 1999 A
5930493 Ottesen et al. Jul 1999 A
5943065 Yassaie et al. Aug 1999 A
5963203 Goldberg et al. Oct 1999 A
6011824 Oikawa et al. Jan 2000 A
6014706 Cannon et al. Jan 2000 A
6047255 Williamson Apr 2000 A
6052159 Ishii et al. Apr 2000 A
6061821 Schlosser May 2000 A
6112234 Leiper Aug 2000 A
6119154 Weaver et al. Sep 2000 A
6134239 Heinanen et al. Oct 2000 A
6134352 Radha et al. Oct 2000 A
6185363 Dimitrova et al. Feb 2001 B1
6253249 Belzile Jun 2001 B1
6266337 Marco Jul 2001 B1
6404817 Saha et al. Jun 2002 B1
6452950 Ohlsson et al. Sep 2002 B1
6453283 Gigi Sep 2002 B1
6510219 Wellard et al. Jan 2003 B1
6512795 Zhang et al. Jan 2003 B1
6587985 Fukushima et al. Jul 2003 B1
6590902 Suzuki et al. Jul 2003 B1
6597812 Fallon et al. Jul 2003 B1
6636561 Hudson Oct 2003 B1
6665317 Scott Dec 2003 B1
6683889 Shaffer et al. Jan 2004 B1
6684354 Fukushima et al. Jan 2004 B2
6700893 Radha et al. Mar 2004 B1
6707852 Wang Mar 2004 B1
6721327 Ekudden et al. Apr 2004 B1
6732313 Fukushima et al. May 2004 B2
6747999 Grosberg et al. Jun 2004 B1
6778553 Chou Aug 2004 B1
6792047 Bixby et al. Sep 2004 B1
6859460 Chen Feb 2005 B1
6885986 Gigi Apr 2005 B1
6918077 Fukushima et al. Jul 2005 B2
6934258 Smith et al. Aug 2005 B1
6996059 Tonogai Feb 2006 B1
7003039 Zakhor et al. Feb 2006 B2
7068710 Lobo et al. Jun 2006 B2
7092441 Hui et al. Aug 2006 B1
7096481 Forecast et al. Aug 2006 B1
7124333 Fukushima et al. Oct 2006 B2
7180896 Okumura Feb 2007 B1
7180901 Chang et al. Feb 2007 B2
7263644 Park et al. Aug 2007 B2
7271747 Baraniuk et al. Sep 2007 B2
7295137 Liu et al. Nov 2007 B2
7356750 Fukushima et al. Apr 2008 B2
7359324 Ouellette et al. Apr 2008 B1
7372834 Kim et al. May 2008 B2
7376880 Ichiki et al. May 2008 B2
7379068 Radke May 2008 B2
7406501 Szeto et al. Jul 2008 B2
7447235 Luby et al. Nov 2008 B2
7447969 Park et al. Nov 2008 B2
7484157 Park et al. Jan 2009 B2
7502818 Kohno et al. Mar 2009 B2
7504969 Patterson et al. Mar 2009 B2
7636298 Miura et al. Dec 2009 B2
7653867 Stankovic et al. Jan 2010 B2
7680076 Michel et al. Mar 2010 B2
7719579 Fishman et al. May 2010 B2
7733893 Lundin Jun 2010 B2
7756127 Nagai et al. Jul 2010 B2
7823039 Park et al. Oct 2010 B2
7886071 Tomita Feb 2011 B2
RE42272 Zakhor et al. Apr 2011 E
7974243 Nagata et al. Jul 2011 B2
8050446 Kountchev et al. Nov 2011 B2
8098957 Hwang et al. Jan 2012 B2
8102399 Berman et al. Jan 2012 B2
8139642 Vilei et al. Mar 2012 B2
8326061 Massimino Dec 2012 B2
8352737 Solis et al. Jan 2013 B2
8462654 Gieger et al. Jun 2013 B1
8477050 Massimino Jul 2013 B1
8526360 Breau et al. Sep 2013 B1
8542265 Dodd et al. Sep 2013 B1
20020034245 Sethuraman et al. Mar 2002 A1
20020099840 Miller et al. Jul 2002 A1
20020140851 Laksono Oct 2002 A1
20020157058 Ariel et al. Oct 2002 A1
20020159525 Jeong Oct 2002 A1
20020167911 Hickey Nov 2002 A1
20030018647 Bialkowski Jan 2003 A1
20030058943 Zakhor et al. Mar 2003 A1
20030098992 Park et al. May 2003 A1
20030103681 Guleryuz Jun 2003 A1
20030193486 Estrop Oct 2003 A1
20030210338 Matsuoka et al. Nov 2003 A1
20030226094 Fukushima et al. Dec 2003 A1
20040017490 Lin Jan 2004 A1
20040071170 Fukuda Apr 2004 A1
20040146113 Valente Jul 2004 A1
20040165585 Imura et al. Aug 2004 A1
20050024384 Evans et al. Feb 2005 A1
20050063402 Rosengard et al. Mar 2005 A1
20050063586 Munsil et al. Mar 2005 A1
20050104979 Fukuoka et al. May 2005 A1
20050111557 Kong et al. May 2005 A1
20050154965 Ichiki et al. Jul 2005 A1
20050180415 Cheung et al. Aug 2005 A1
20050220188 Wang Oct 2005 A1
20050220444 Ohkita et al. Oct 2005 A1
20050232290 Mathew et al. Oct 2005 A1
20050259690 Garudadri et al. Nov 2005 A1
20050281204 Karol et al. Dec 2005 A1
20060150055 Quinard et al. Jul 2006 A1
20060164437 Kuno Jul 2006 A1
20060200733 Stankovic et al. Sep 2006 A1
20060256232 Noguchi Nov 2006 A1
20060268124 Fishman et al. Nov 2006 A1
20070168824 Fukushima et al. Jul 2007 A1
20070189164 Smith et al. Aug 2007 A1
20070230585 Kim et al. Oct 2007 A1
20070233707 Osmond et al. Oct 2007 A1
20070255758 Zheng et al. Nov 2007 A1
20070269115 Wang et al. Nov 2007 A1
20080005201 Ting et al. Jan 2008 A1
20080008239 Song Jan 2008 A1
20080046249 Thyssen et al. Feb 2008 A1
20080052630 Rosenbaum et al. Feb 2008 A1
20080055428 Safai Mar 2008 A1
20080065633 Luo et al. Mar 2008 A1
20080101403 Michel et al. May 2008 A1
20080123754 Ratakonda et al. May 2008 A1
20080124041 Nielsen et al. May 2008 A1
20080130756 Sekiguchi et al. Jun 2008 A1
20080170793 Yamada et al. Jul 2008 A1
20080209300 Fukushima et al. Aug 2008 A1
20080211931 Fujisawa et al. Sep 2008 A1
20080225735 Qiu et al. Sep 2008 A1
20080228735 Kenedy et al. Sep 2008 A1
20080273591 Brooks et al. Nov 2008 A1
20080291209 Sureka et al. Nov 2008 A1
20090007159 Rangarajan et al. Jan 2009 A1
20090052543 Wu et al. Feb 2009 A1
20090073168 Jiao et al. Mar 2009 A1
20090103606 Lu et al. Apr 2009 A1
20090110055 Suneya Apr 2009 A1
20090164655 Pettersson et al. Jun 2009 A1
20090172116 Zimmet et al. Jul 2009 A1
20090213940 Steinbach et al. Aug 2009 A1
20090219994 Tu et al. Sep 2009 A1
20090249158 Noh et al. Oct 2009 A1
20090254657 Melnyk et al. Oct 2009 A1
20090271814 Bosscha Oct 2009 A1
20090284650 Yu et al. Nov 2009 A1
20100111489 Presler May 2010 A1
20100150441 Evans et al. Jun 2010 A1
20110078532 Vonog et al. Mar 2011 A1
20110122036 Leung et al. May 2011 A1
20110265136 Liwerant et al. Oct 2011 A1
20120002080 Sasaki Jan 2012 A1
20120252679 Holcomb Oct 2012 A1
20120262603 Chen et al. Oct 2012 A1
20120314102 Wang Dec 2012 A1
20120315008 Dixon et al. Dec 2012 A1
20130039410 Tan et al. Feb 2013 A1
20130046862 Yang Feb 2013 A1
20130182130 Tran Jul 2013 A1
Foreign Referenced Citations (8)
Number Date Country
1947680 Jul 2008 EP
WO9611457 Apr 1996 WO
WO9949664 Sep 1999 WO
WO0233979 Apr 2002 WO
WO02062072 Aug 2002 WO
WO02067590 Aug 2002 WO
WO02078327 Oct 2002 WO
WO03043342 May 2003 WO
Non-Patent Literature Citations (54)
Entry
Bankoski et al. “Technical Overview of VP8, An Open Source Video Codec for the Web”. Dated Jul. 11, 2011.
Bankoski et al. “VP8 Data Format and Decoding Guide” Independent Submission. RFC 6389, Dated Nov. 2011.
Bankoski et al. “VP8 Data Format and Decoding Guide; draft-bankoski-vp8-bitstream-02” Network Working Group. Internet-Draft, May 18, 2011, 288 pp.
Implementors' Guide; Series H: Audiovisual and Multimedia Systems; Coding of moving video: Implementors Guide for H.264: Advanced video coding for generic audiovisual services. H.264. International Telecommunication Union. Version 12. Dated Jul. 30, 2010.
ISR and Written Opinion (date of mailing: Oct. 15, 2012); PCT/US2012/040177, filed May 31, 2012.
Mozilla, “Introduction to Video Coding Part 1: Transform Coding”, Video Compression Overview, Mar. 2012, 171 pp.
Multi-core processor, Wikipedia, the free encyclopedia. http://wikipedia.org/wiki/Multi-core_processor; dated Apr. 30, 2012.
Rosenberg, J. D. RTCWEB I-D with thoughts on the framework. Feb. 8, 2011. Retrieved from http://www.ietf.org/mail-archive/web/dispatch/current/msg03383.html on Aug. 1, 2011.
Rosenberg, J.D., et al. An Architectural Framework for Browser based Real-Time Communications (RTC) draft-rosenberg-rtcweb-framework-00. Feb. 8, 2011. Retrieved from http://www.ietf.org/id/draft-rosenberg-rtcweb-framework-00.txt on Aug. 1, 2011.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 12. Dated Mar. 2010.
VP8 Data Format and Decoding Guide. WebM Project. Google On2. Dated: Dec. 1, 2010.
Wikipedia, the free encyclopedia, “Low-density parity-check code”, http://en.wikipedia.org/wiki/Low-density_parity-check_code, Jul. 30, 2012 (5 pp).
Al-Omari, Huthaifa, et al; “Avoiding Delay Jitter in Cyber-Physical Systems Using One Way Delay Variations Model”, Computational Science and Engineering, 2009 International Conference, IEEE (Aug. 29, 2009) pp. 295-302.
Bagni, D.—A constant quality single pass vbr control for dvd recorders, IEEE, 2003, pp. 653-662.
Balachandran, et al., Sequence of Hashes Compression in Data De-duplication, Data Compression Conference, Mar. 2008, p. 505, issue 25-27, United States.
Begen, Ali C., et al; “An Adaptive Media-Aware Retransmission Timeout Estimation Method for Low-Delay Packet Video”, IEEE Transactions on Multimedia, vol. 9, No. 2 (Feb. 1, 2007) pp. 332-347.
Begen, Ali C., et al; “Proxy-assisted interactive-video services over networks with large delays”, Signal Processing: Image Communication, vol. 20, No. 8 (Sep. 1, 2005) pp. 755-772.
Cui, et al., Opportunistic Source Coding for Data Gathering in Wireless Sensor Networks, IEEE Int'l Conf. Mobile Ad Hoc & Sensor Systems, Oct. 2007, http://caltechcstr.library.caltech.edu/569/01 HoCuiCodingWirelessSensorNetworks.pdf, United States.
David Slepian and Jack K. Wolf, Noiseless Coding of Correlated Information Sources, IEEE Transactions on Information Theory; Jul. 1973; pp. 471-480; vol. 19, United States.
Digital Video Processing, Prentice Hall Signal Processing Series, Chapter 6: Block Based Methods, Jun. 2014.
Feng, Wu-chi; Rexford, Jennifer; “A Comparison of Bandwidth Smoothing Techniques for the Transmission of Prerecorded Compressed Video”, Paper, 1992, 22 pages.
Friedman, et al., “RTP: Control Protocol Extended Reports (RTPC XR),” Network Working Group RFC 3611 (The Internet Society 2003) (52 pp).
Fukunaga, S. (ed.) et al., MPEG-4 Video Verification Model VM16, International Organisation for Standardisation ISO/IEC JTC1/SC29/WG11 N3312 Coding of Moving Pictures and Audio, Mar. 2000.
Ghanbari Mohammad, “Postprocessing of Late Calls for Packet Video”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, No. 6, Dec. 1996, 10 pages.
Keesman, G.—Bit-rate control for MPEG encoders, Signal Processing Image communication 6 (1995) 545-560.
Khronos Group Inc. OpenMAX Integration Layer Application Programming Interface Specification. Dec. 16, 2005, 326 pages, Version 1.0.
Laoutaris, Nikolaos, et al; “Intrastream Synchronization for Continuous Media Streams: A Survey of Playout Schedulers”, IEEE Network, IEEE Service Center, vol. 16, No. 3 (May 1, 2002) pp. 30-40.
Li, A., “RTP Payload Format for Generic Forward Error Correction”, Network Working Group, Standards Track, Dec. 2007, (45 pp).
Liang, Yi J., et al; “Adaptive Playout Scheduling Using Time-Scale Modification in Packet Voice Communications”, 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3 (May 7, 2001), pp. 1445-1448.
Liu, Haining, et al; “On the Adaptive Delay and Synchronization Control of Video Conferencing over the Internet”, International Conference on Networking (ICN) (2004) 8 pp.
Liu, Hang, et al; “Delay and Synchronization Control Middleware to Support Real-Time Multimedia Services over Wireless PCS Networks”, IEEE Journal on Selected Areas in Communications, IEEE Service Center, vol. 17, No. 9 (Sep. 1, 1999) pp. 1660-1672.
Nethercote, Nicholas, et al,; “How to Shadow Every Byte of Memory Used by a Program”, Proceedings of the 3rd International Conference on Virtual Execution Environments, Jun. 13-15, 2007 San Diego CA, pp. 65-74.
Roca, Vincent, et al., Design and Evaluation of a Low Density Generator Matrix (LDGM) Large Block FEC Codec, INRIA Rhone-Alpes, Planete project, France, Date Unknown, (12 pp), Jun. 2014.
Sekiguchi S. et al.: “CE5: Results of Core Experiment on 4:4:4 Coding”, JVT Meeting: Mar. 31-Apr. 7, 2006 Geneva, CH; (Joint Videoteam of ISO/IEC JTC1/SC29/WG11 and ITU-T Sg. 16), No. JVT-S014, Apr. 1, 2006 pp. 1-19.
Sunil Kumar, Liyang Xu, Mrinal K. Mandal, and Sethuraman Panchanathan, Error Resiliency Schemes in H.264/AVC Standard, Elsevier J. of Visual Communication & Image Representation (Special issue on Emerging H264/AVC Video Coding Standard), vol. 17 (2), Apr. 2006.
Trista Pei-Chun Chen and Tsuhan Chen, Second-Generation Error Concealment for Video Transport Over Error Prone Channels, electrical computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A, Jun. 2014.
Tsai, et al., The Efficiency and Delay of Distributed Source Coding in Random Access Sensor Networks, 8th IEEE Wireless Communications and Networking Conference, Mar. 2007, pp. 786-791, United States.
Vasudev Bhaskaran et al., “Chapter 6: The MPEG Video Standards”, Image and Video Compression Standards—Algorithms & Architectures, Second Edition, 1997, pp. 149-230 Kluwer Academic Publishers.
Wang, et al., Distributed Data Aggregation using Clustered Slepian-Wolf Coding in Wireless Sensor Networks, IEEE International Conference on Communications, Jun. 2007, pp. 3616-3622, United States.
Wang, Yao “Error Control and Concealment for Video Communication: A Review”, Proceedings of the IEEE, vol. 86, No. 5, May 1998, 24 pages.
Woo-Shik Kim et al: “Enhancements to RGB coding in H.264/MPEG-4 AVC. FRExt”, Internet Citation, Apr. 16, 2005, XP002439981, Retrieved from the internet: URL:ftp3.itu.ch/av-arch/video-site/0504_Bus/VCEG-Z16.doc, retrieved on Jun. 28, 2007 p. 5.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”. H.264. Version 1. International Telecommunication Union. Dated May 2003.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”. H.264. Version 3. International Telecommunication Union. Dated Mar. 2005.
“Overview; VP7 Data Format and Decoder”. Version 1.5. On2 Technologies, Inc. Dated Mar. 28, 2005.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”. H.264. Amendment 1: Support of additional colour spaces and removal of the High 4:4:4 Profile. International Telecommunication Union. Dated Jun. 2006.
“VP6 Bitstream & Decoder Specification”. Version 1.02. On2 Technologies, Inc. Dated Aug. 17, 2006.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Amendment 2: New profiles for professional applications. International Telecommunication Union. Dated Apr. 2007.
“VP6 Bitstream & Decoder Specification”. Version 1.03. On2 Technologies, Inc. Dated Oct. 29, 2007.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Advanced video coding for generic audiovisual services. Version 8. International Telecommunication Union. Dated Nov. 1, 2007.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 11. Dated Mar. 2009.
Page, E. S., “Continuous Inspection Schemes”; Biometrika 41; Statistical Laboratory, University of Cambridge, (1954); pp. 100-115.
Extended European Search Report EP09171120, dated Aug. 2, 2010.
Schulzrinne, H, et al., “RTP: A Transport Protocol for Real-Time Applications”, IETF, 2003, RFC 3550.
Gustafsson, F., “Adaptive Filtering and Change Detection”, John Wiley & Sons, Ltd, 2000.
Related Publications (1)
Number Date Country
20140153431 A1 Jun 2014 US
Provisional Applications (1)
Number Date Country
61245003 Sep 2009 US
Continuations (1)
Number Date Country
Parent 13497625 US
Child 14174916 US