The invention relates in general to transmitting data over lossy networks, and in particular, to improving data transmission performance over lossy wireless network connections.
Typical problems associated with transmitting compressed video over wireless networks include Quality of Service (QoS), latency and maintaining basic image integrity. For example, if a single packet of transmitted video data is lost, the fact that the video data is typically compressed in the temporal domain can cause a single artifact in one frame to propagate and cascade through multiple successive frames. Such lossy wireless networks may operate using wireless protocols, such as those described by IEEE 802.11x and 802.15.3a, and video compression algorithms such as those described by the AVC video standard. Lossy networks may use transmission methods other than wireless such as, for example, HomePlug AV Powerline Communications.
During video transmission over a lossy network, video data (sometimes referred to as “video packets”) may be lost. For example, wireless lossy transmission mediums can be unreliable in that the transmitted video packets may not always be received (accurately or at all) by the wireless receiver. To counter this, the 802.11x Media Access Control (MAC) requires that in most cases a packet (or group of packets for 802.11e extensions to the standard) that is received will be acknowledged to the transmitter by sending back an “ACK” signal. Hence a missing ACK signal normally indicates that a video packet (or packets) has been lost.
In addition, video packet data can also be lost at the receiver. For example, as Advanced Video Coding (AVC) encoded data tends to be bursty, an unexpectedly large burst can overflow buffers on the receiver-side at several locations between the 802.11x module and the AVC decoder itself. Most packets lost in this way can be detected by Real-Time Transport Protocol (RTP) feedback.
To alleviate some of the inherent drawbacks associated with lossy network communication, various data recovery and error correction features have been built into the data coding standard used. For example, H.264/AVC is a more recently developed coding standard which includes a Video Coding Layer (VCL) to efficiently represent the video content, and a Network Abstraction Layer (NAL) to format the VCL representation of the video and provide header information in a manner appropriate for conveyance by particular transport layers or storage media. Despite these efforts, there is a need to improve transmission reliability and error concealment over lossy network connections, such as 802.11x networks.
Thus, there is still an unsatisfied need for an improved system and method for transmitting video data over networks in a manner which decreases the probability of a failed transmission, improves the probability of successful decoding and/or increases the quality of error concealment on the receiver-side.
Systems and methods for transmitting data over lossy networks, such as wireless networks, are disclosed and claimed herein. In one embodiment, a method comprises transmitting a plurality of encoded data packets over a wireless network, receiving acknowledgment signals for each of said plurality of encoded data packets when successfully transmitted, and determining if a number of transmission failures exceeds a predetermined threshold, and if so, signaling that multiple reference frames should be used for encoding predictions.
Other embodiments are disclosed and claimed herein.
The invention relates to a system in which data (e.g., video data) is being transmitted at least partially wirelessly over a lossy network originating from a server to one or more client-side systems. In one embodiment, the server includes an encoder module and a transmitter, while the client includes a decoder module and a receiver.
According to one aspect of the invention, data packets sent over the wireless network by the server which are not successfully received may be retransmitted a particular number of times. In one embodiment, the number of retransmission attempts is based on how important the data in the lost packet is considered. That is, in one embodiment, the number of retransmission attempts may be made adaptive, and packets considered more important (e.g., those containing IDR frames) are retransmitted a greater number of times as compared to less important packets (e.g., those containing P frames). In one embodiment, the aforementioned data packets may be encoded by an H.264/AVC encoder and/or sent over a 802.11x wireless network connection to be decoded by one or more H.264/AVC decoders.
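The importance-weighted retry policy described above can be sketched as follows; the specific frame-type names and per-type retry counts here are illustrative assumptions, not values prescribed by the disclosure.

```python
# Hypothetical retry budgets per frame type: more important data gets
# more retransmission attempts. The exact counts are illustrative only.
RETRY_BUDGET = {
    "IDR": 8,  # key frames: all successive frames depend on them
    "I": 6,
    "P": 3,
    "B": 1,    # least important: no other frames depend on B frames
}

def retransmission_attempts(frame_type: str) -> int:
    """Return the retry budget for a packet carrying the given frame type."""
    return RETRY_BUDGET.get(frame_type, 1)
```

In such a scheme the budget could be consulted by the MAC layer, the RTP layer, or both, wherever retransmission is implemented.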
Another aspect of the invention is to improve the probability of successful decoding at the client side. In one embodiment, this is done by having a decoder signal to a corresponding encoder to use multiple reference frames for subsequent prediction operations. In the case of an H.264/AVC system, this allows frames, such as P frames, that refer to data in prior frames (or in the case of B frames also future frames) to refer to macroblocks in multiple reference frames in order to determine the macroblocks in a current frame.
In another embodiment, the probability of successful decoding at the client side may be improved by identifying exactly which slices of the original data stream have not been received at the decoder. In one embodiment, this may be done by noting when an acknowledgment signal is not received for a given packet and all the packet retransmission attempts have been used. Using this information, an encoder may then stop referring to these lost slices/macroblocks in future coding operations. This may then limit the propagation of errors in future decoded frames at the client side. While in one embodiment, this may be done concurrently with the aforementioned operations, in another embodiment it may be subsequently performed.
Still another aspect of the invention is to estimate the distortion of the data caused by lost data packets/slices. Using this information, past pixels that are considered to have been reconstructed adequately by the error concealment at the client may continue to be referred to, while references to data that was not adequately reconstructed may be avoided. In one embodiment, the client itself may determine the level of distortion, given that the client decoder knows exactly what error concealment was used; this information may then be communicated back to the server. However, in another embodiment, an estimate of the error of the received data may also be determined on the server-side by estimating the error concealment applied at the client for the given lost data packets; the distortion may then be estimated by comparing the reconstructed data to the original data, which is also available at the server.
Still another aspect of the invention is to increase the quality of error-concealment at the client by making use of the Flexible Macroblock Ordering (FMO) functionality of an H.264/AVC encoder/decoder system. That is, the reconstruction of IDR frames (on which all successive frames in the picture depend) may be made more robust by first decomposing each IDR frame into n fields such that all macroblocks of the IDR frame may be included in the n fields without duplication. Thereafter, each field may be segmented into m slices. In one embodiment, all m slices for the first field may be transmitted first, followed by the transmission of all m slices for the second field, and so on until all m slices for all n fields have been transmitted. This may be desirable since errors in lossy network environments (e.g., 802.11x) are often bursty, where a short burst of errors might eliminate all m slices, for example, for a given field. That being the case, if the m slices for different fields (e.g., 1,3,4, etc.) have been received correctly, then the error concealment functionality of the decoder (e.g., H.264/AVC decoder) may interpolate neighboring pixels to estimate the missing pixels since the missing macroblocks are spatially surrounded by available macroblocks.
While it should be appreciated that all or some of the aforementioned aspects of the invention may be implemented using an H.264/AVC encoder/decoder system and/or a 802.11x wireless network connection, it should equally be appreciated that they may also be implemented using other similar codecs and/or lossy communication channels.
When implemented in software, the elements of the invention are essentially the code segments to perform the necessary tasks. The program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link.
H.264/AVC Overview
The H.264/AVC standard supports video coding that contains either progressive, interlaced frames or both mixed together in the same sequence. Generally, a frame of video contains two interleaved fields—a top and a bottom field. The two fields of an interlaced frame, which are separated in time by a field period, may be coded separately as two field pictures or together as a frame picture. A progressive frame, on the other hand, is coded as a single frame picture. However, it is still considered to consist of two fields at the same instant in time.
The VCL, which will be described in more detail below, represents the content of the video data. In contrast, the NAL formats the data and provides header information in a manner appropriate for conveyance by the transport layers or storage media. All data is contained in NAL units, each of which contains an integer number of bytes. An NAL unit specifies a generic format for use in both packet-oriented and bitstream systems.
The VCL of the H.264/AVC standard is similar in spirit to other standards such as MPEG-2. In short, it consists of a hybrid of temporal and spatial prediction, in conjunction with transform coding. Each picture of a video, which can either be a frame or a field, is partitioned into fixed-size macroblocks that cover a rectangular picture area of 16×16 samples of the luma component and 8×8 samples of each of the two chroma components. All luma and chroma samples of a macroblock are either spatially or temporally predicted, and the resulting prediction residual is transmitted using transform coding.
The macroblocks are organized in slices, which represent portions of a given image that can be decoded independently, and the transmission order of macroblocks in the bitstream depends on a Macroblock Allocation Map. The H.264/AVC standard supports five different slice-coding types. The simplest one is referred to as an I slice, or Intra slice. In I slices, all macroblocks are coded without referring to other pictures within the video sequence. On the other hand, prior-coded images can be used to form a prediction signal for macroblocks of the predictive-coded P and B slices (where P stands for predictive and B stands for bi-predictive). The two additional slice types are SP (switching P) and SI (switching I), which are specified for efficient switching between bitstreams coded at various bit-rates.
The H.264/AVC standard supports a feature called Flexible Macroblock Ordering (FMO) in which a pattern that assigns the macroblocks in a picture to one or several slice groups is specified. Each slice group may then be transmitted separately.
System Architecture Overview
Once encoded, the encoded data 125 is provided to the transmitter for transmission to a client 145, as shown in
Continuing to refer to
It should be appreciated that server 135 and client 145 may have numerous configurations other than as depicted in
Although not depicted, it should equally be appreciated that server 135 and/or client 145 may include other components, such as a central processing unit (CPU), which may include an arithmetic logic unit (ALU) for performing computations, a collection of registers for temporary storage of data and instructions, and a control unit for controlling operation of the computer system. In one embodiment, the CPU may be any one of the x86, Pentium™ class microprocessors as marketed by Intel™ Corporation, microprocessors as marketed by AMD™, or the 6×86MX microprocessor as marketed by Cyrix™ Corp. In addition, any of a variety of other processors, including those from Sun Microsystems, MIPS, IBM, Motorola, NEC, Cyrix, AMD, Nexgen and others may be used. Moreover, any such CPU need not be limited to microprocessors, but may take on other forms such as microcontrollers, digital signal processors, reduced instruction set computers (RISC), application specific integrated circuits, and the like.
Other components that the server 135 and/or client 145 may include are a random access memory, a non-volatile memory (e.g., hard disk, floppy disk, CD-ROM, DVD-ROM, tape, high density floppy, high capacity removable media, low capacity removable media, solid state memory device, etc., and combinations thereof). The server 135 and/or client 145 may also include a network interface (e.g., a network interface card, a modem interface, integrated services digital network, etc.), and a user input device (e.g., a keyboard, mouse, joystick and the like for enabling a user to interact with and provide commands).
It should further be appreciated that the server 135 and/or client 145 may include system firmware, such as a system BIOS, and an operating system (e.g., DOS, Windows, Unix, Linux, Xenix, etc.) for controlling the server 135 and/or client's operation and the allocation of resources.
Decreasing Probability of a Failed Transmission
As mentioned above, one aspect of the invention is to be able to decrease the probability of a failed transmission. To that end,
If it is determined at block 220 that the packet in question was in fact received, then process 200 simply moves to block 230, where the next data packet is processed. If, on the other hand, it is determined that the packet was not properly received, then process 200 may continue to block 240, where a determination is made as to how many retransmissions should be attempted for the given packet. In one embodiment, the number of retransmission attempts is based on the importance of the data contained within the given packet. In another embodiment, the number of retransmissions is made adaptive at all levels at which retransmission is implemented. For example, the number of retransmissions may be made adaptive at the 802.11x MAC layer, as well as at the RTP layer if a form of reliable or semi-reliable RTP retransmission has been implemented.
Once the packet's number of retransmissions has been determined, process 200 will continue to block 250 where it is determined if the packet should be re-sent or not. If the number of retransmission attempts equals zero, then the packet will not be re-sent and process 200 ends. If, on the other hand, the number of retransmission attempts is greater than zero, then process 200 will continue to block 260 where the packet is re-sent, and then the number of remaining retransmission attempts is reduced by 1 (block 270). Once re-sent, a determination must then be made at block 280 as to whether the re-sent packet was received this time. If so, then process 200 simply moves to block 230 where the next data packet is processed. If not, then process 200 moves back to block 250 where it is determined if the packet should be re-sent or not (i.e., determine if the number of retransmission attempts equals zero or not).
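The retry loop of blocks 250 through 280 can be sketched as below; `transmit` is a stand-in for whatever layer actually sends the packet and reports whether an ACK came back, and is an assumption of this sketch rather than an interface defined by the disclosure.

```python
def send_with_retries(packet, transmit, importance_retries):
    """Sketch of process 200: send a packet, retrying while attempts remain.

    `transmit` sends the packet and returns True if an ACK was received.
    Returns True on success, False once the retry budget is exhausted.
    """
    if transmit(packet):            # initial transmission (blocks 210-220)
        return True
    attempts = importance_retries   # block 240: budget based on importance
    while attempts > 0:             # block 250: re-send or give up?
        attempts -= 1               # block 270: one fewer attempt remains
        if transmit(packet):        # blocks 260/280: re-send, check for ACK
            return True
    return False                    # budget exhausted; packet is lost
```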
Increasing Probability of Successful Decoding
As mentioned above, another aspect of the invention is to increase the probability of successful decoding at the client side of a lossy transmission. To that end,
Once the number of ACKs (or the rate of ACKs received) falls below the predetermined threshold, the client or decoder-side may signal back to the encoder that it should use multiple reference frames for prediction purposes. For example, in the case of an H.264/AVC encoder, P frames that refer to data in prior frames (or, in the case of B frames, also future frames) may refer to macroblocks in multiple reference frames in order to determine the macroblocks in the current frame. Hence, if one of the reference frames is missing due to a lossy transmission error, the current frame can still be successfully reconstructed, assuming the other reference frames are not also lost.
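One minimal form of the threshold test above might be a check on the observed ACK rate; the 0.95 threshold here is an illustrative assumption, as the disclosure leaves the predetermined threshold unspecified.

```python
def should_request_multi_ref(acks_received, packets_sent,
                             ack_rate_threshold=0.95):
    """Return True when the observed ACK rate falls below the threshold,
    in which case the decoder side would signal the encoder to switch to
    multiple reference frames for prediction. Threshold is illustrative."""
    if packets_sent == 0:
        return False  # no traffic yet, nothing to decide
    return (acks_received / packets_sent) < ack_rate_threshold
```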
Thus, by using multiple reference frames (block 315), the probability that a lost frame will cause a cascading effect that leads to corruption of future predicted frames can be effectively decreased. As illustrated by the dashed progress lines leading from block 315, the operations of blocks 310-315 may be performed either concurrently or sequentially with the operations described below with reference to blocks 320-330 and 335-345.
Regardless of whether sequentially or concurrently performed, at block 320 the server side (e.g., server 135) may identify exactly which slices of the original video stream have not been received at the decoder. In one embodiment, this information is available since the server knows which data packets were lost, because an ACK signal would not have been received for the given packet at the 802.11x MAC layer, and knows which RTP packets were lost at the RTP application layer. Once the missing slices have been identified, process 300 may continue to block 325, where the encoder is notified of exactly which slices (and hence which macroblocks) have not been received by the decoder. Using this information, the encoder (which in one embodiment is an H.264/AVC encoder) will stop referring to these lost slices/macroblocks in future P and B frames (block 330). In another embodiment, the encoder may also generate additional IDR frames if necessary.
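The bookkeeping of blocks 320-330 might be sketched as a small tracker like the one below; the `(frame_number, slice_index)` keying is an assumed representation for illustration, not a structure mandated by the disclosure.

```python
class LostSliceTracker:
    """Sketch of blocks 320-330: record slices whose packets were never
    ACKed, so the encoder can exclude them from future prediction."""

    def __init__(self):
        self.lost = set()  # (frame_number, slice_index) pairs

    def packet_lost(self, frame_number, slice_index):
        """Block 320/325: note a slice the decoder never received."""
        self.lost.add((frame_number, slice_index))

    def usable_for_reference(self, frame_number, slice_index):
        """Block 330: the encoder only refers to slices known to have
        arrived at the decoder."""
        return (frame_number, slice_index) not in self.lost
```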
Either concurrently with or subsequent to the aforementioned operations of process 300, the level of distortion caused by the missing packets may be estimated by the server at block 335. This may be significant since not all lost packets degrade the video equally, especially since many decoders (e.g., H.264/AVC decoders) employ error concealment. In one embodiment, the client itself may estimate the distortion of the client's video caused by the lost packets. This may be preferable since the decoder will know exactly how it has implemented its possibly proprietary error concealment. In this case, the level of distortion may be communicated back to the server (block 340).
Alternatively, this back channel of communication between the client and server may be unreliable. In that case, an estimate of the error of the final video may also be determined at the server by estimating the error concealment at the client for the given lost slices/packets. The distortion may be estimated by comparing the reconstructed video to the original video which is also available at the server. Regardless of whether the distortion has been estimated on the client side or on the server side, this information may then be used by the encoder to only refer to past pixels that are considered to have been reconstructed adequately by the error concealment at the client (block 345).
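The server-side comparison of block 345 could use any distortion measure; a plain mean squared error over the affected pixels, with an assumed adequacy threshold, is one simple illustrative choice.

```python
def estimate_distortion(original, reconstructed):
    """Mean squared error between the original pixels (available at the
    server) and the server's simulation of the client's error-concealed
    reconstruction. MSE is an illustrative distortion measure."""
    n = len(original)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n

def adequately_reconstructed(original, reconstructed, mse_threshold=25.0):
    """Pixels below the (assumed) threshold may still serve as references;
    pixels above it are avoided in future prediction (block 345)."""
    return estimate_distortion(original, reconstructed) <= mse_threshold
```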
In addition to real-time video content, the process 300 of
Increasing Quality of Error Concealment
As mentioned above, another aspect of the invention is to increase the quality of error concealment at the client side of a lossy transmission. To that end,
Once the fields have been segmented into m slices, process 400 may continue to block 430, where all m slices for the first field may be transmitted first, followed by the transmission of all m slices for the second field, and so on until all m slices for all n fields have been transmitted. This form of transmission may be particularly useful over a lossy network in which errors are often bursty (e.g., 802.11x), where a short burst of errors might eliminate all m slices for a given field. However, if the m slices for the other fields (e.g., 1, 3, 4, etc.) have been received correctly, then the error concealment functionality of the decoder (e.g., an H.264/AVC decoder) will be able to interpolate neighboring pixels to estimate the missing pixels, given that the missing macroblocks are spatially adjacent to or surrounded by available macroblocks. This approach avoids the need to transmit duplicate data to compensate for lost packets or slices, and hence requires less bandwidth. In another embodiment, instead of a single macroblock separated by n macroblocks, the single macroblock may itself be expanded to include a group of macroblocks around the original macroblock.
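The decomposition and transmission order of process 400 can be sketched as follows. A row-interleaved assignment of macroblock rows to fields is one illustrative choice of FMO map; the disclosure requires only that the n fields cover all macroblocks of the IDR frame without duplication.

```python
def assign_macroblock_fields(mb_rows, n_fields):
    """Block 410/420 sketch: decompose an IDR frame's macroblock rows into
    n interleaved fields (row r goes to field r % n), so any one field's
    missing rows are spatially adjacent to rows carried in other fields."""
    fields = [[] for _ in range(n_fields)]
    for r in range(mb_rows):
        fields[r % n_fields].append(r)
    return fields

def transmission_order(fields, m_slices):
    """Block 430 sketch: transmit all m slices of field 0, then all m
    slices of field 1, and so on, so a short error burst tends to wipe
    out only slices belonging to a single field."""
    return [(f, s) for f in range(len(fields)) for s in range(m_slices)]
```

Under this ordering, a burst that destroys m consecutive slices removes one field whose macroblocks remain surrounded by received neighbors, which is what allows spatial interpolation at the decoder.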
In another embodiment, the encoding process 400 of
While the aforementioned process 400 was described in terms of IDR frames, it should further be appreciated that it may be used with any type of data frame containing spatial intra-frame data so long as the available bandwidth and computational resources permit.
Referring now to
While the invention has been described in connection with various embodiments, it will be understood that the invention is capable of further modifications. This application is intended to cover any variations, uses or adaptations of the invention following, in general, the principles of the invention, and including such departures from the present disclosure as come within known and customary practice within the art to which the invention pertains.
This application is a divisional of U.S. patent application Ser. No. 11/197,818, filed on Aug. 5, 2005, which is hereby fully incorporated by reference.