This disclosure relates to video encoding and decoding, and particularly to video coding using unequal error protection.
Digital video streams represent video using a sequence of still images, called “frames.” Modern video coding techniques can efficiently compress a video stream for many applications, such as video conferencing and streaming through bandwidth-limited communication channels. However, packet loss and data errors may occur during bitstream transmission, resulting in errors when decoding the bitstream.
This disclosure includes aspects of systems, methods and apparatuses for encoding and decoding video signals that include error protection so as to achieve high quality video communications and other types of applications under packet loss or error conditions. One aspect of a disclosed implementation is a method for encoding a video stream including a plurality of video frames, each frame including a plurality of blocks. The method includes selecting a first block substantially central to a frame and then selecting subsequent blocks in a substantially spiral manner such that each subsequent block adjoins a previously selected block. Selection continues until the selected blocks form a contiguous group of blocks occupying at least a central section of the frame. The selected blocks are encoded in an order in which the selected blocks were selected. The method applies a first level of error protection to encoded blocks within the central section, and applies a second level of error protection, different than the first level of error protection, to encoded blocks outside the central section of the frame.
Another aspect of a disclosed implementation is an apparatus for encoding a video stream. The apparatus includes memory and at least one processor programmed to execute instructions contained in the memory. The instructions cause the processor to select a first block substantially central to a frame and then select subsequent blocks in a substantially spiral manner such that each subsequent block adjoins a previously selected block, continue the selecting of subsequent blocks until the selected blocks form a contiguous group of blocks occupying at least a central section of the frame, encode the selected blocks in an order in which the selected blocks were selected, apply a first level of error protection to encoded blocks within the central section, and if any encoded blocks are located outside the central section, apply a second level of error protection, different than the first level of error protection, to the encoded blocks outside the central section.
Another aspect of a disclosed implementation is a method for decoding a video stream. This method includes receiving a bitstream of encoded blocks associated with a frame, and decoding the encoded blocks in an order corresponding to a first block substantially central to the frame, a second block adjacent to the first block, and subsequent blocks adjacent to a previously selected block in a spiral manner. Responsive to data errors affecting decoding of a block of the encoded blocks, the method performs data recovery operations. The recovery operations include attempting recovery of data related to the block using a first level of error protection when the block resides within a central section of the frame, the central section of the frame including at least a contiguous set of blocks about a center of the frame, and if the block resides outside the central section, attempting recovery of data related to the block using a second level of error protection, the second level of protection different than the first level, or skipping recovery of the data related to the block.
Another aspect of a disclosed implementation is an apparatus for decoding a video. The apparatus includes digital data storage and at least one processor programmed to execute instructions contained in the digital data storage. The instructions cause the processor to receive a bitstream of encoded blocks associated with a frame and decode the encoded blocks wherein the encoded blocks are encoded in an order corresponding to a first block substantially central to the frame, a second block adjacent to the first block, and subsequent blocks adjacent to a previously selected block in a spiral manner. Responsive to data errors affecting decoding of a block of the encoded blocks, the processor performs data recovery by attempting recovery of data related to the block using a first level of error protection when the block resides within a central section of the frame, the central section of the frame including at least a contiguous set of blocks about a center of the frame. If instead the block resides outside the central section, the processor performs one of attempting recovery of data related to the block using a second level of error protection, the second level of protection different than the first level of error protection, or skipping recovery of the data related to the block.
These and other aspects are described in additional detail below.
This disclosure refers to the accompanying drawings, where like reference numerals refer to like parts throughout the several views and wherein:
Digital video is used for various purposes, with some examples including remote business meetings via video conferencing, high definition video entertainment, video advertisements, and sharing of user-generated videos. As technology evolves, users have higher expectations for video quality; they expect high resolution video even when it is transmitted over communications channels of limited bandwidth.
To permit transmission of digital video streams while limiting bandwidth consumption, video encoding and decoding schemes incorporate various compression schemes. These compression schemes generally break the image up into blocks and use one or more techniques to limit the information included in a resulting digital video bitstream for transmission. The bitstream, once received, can be decoded to re-create the blocks and the source images from the limited information.
Noisy communications channels between an encoder and a decoder can corrupt transmitted data, degrading video performance and perceived video quality. One way to address this is to trade bit rate for additional error correction information. Even with somewhat lower spatial resolution or dynamic range, the resulting image can yield a perceived quality under packet-loss conditions that is greater than if the tradeoff were not made. In dynamic error correction, a maximum bit rate for a given communications channel may be determined experimentally or known a priori. The decoder, or another component with knowledge of the communications channel, can determine the error rate of the communications channel and advise the encoder how much bandwidth should be allotted to error correction to assure a desired level of video quality. The encoder can adjust the resolution of the encoding process according to the equation:
B_T − B_E = B_IR; where
B_T is the total bandwidth available for use in the communications channel;
B_E is the bandwidth to be used for error correction; and
B_IR is the bandwidth to be used to transmit an image I at a resolution R.
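By way of non-limiting illustration, the following sketch splits a channel budget according to this equation; the rule mapping a measured loss rate to B_E is an invented placeholder, not part of this disclosure:

```python
def split_bandwidth(total_bps: float, loss_rate: float) -> tuple[float, float]:
    """Split the channel's total bandwidth B_T into an error-correction
    share B_E and an image share B_IR, per B_T - B_E = B_IR.

    The mapping from loss rate to B_E is a placeholder heuristic:
    reserve up to half the channel as losses approach 50%.
    """
    b_e = total_bps * min(loss_rate, 0.5)  # hypothetical allocation rule
    b_ir = total_bps - b_e                 # bandwidth left for the image
    return b_e, b_ir

# Example: a 2 Mbit/s channel with 10% measured packet loss.
b_e, b_ir = split_bandwidth(2_000_000, 0.10)
print(f"B_E = {b_e:.0f} bps, B_IR = {b_ir:.0f} bps")
```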
In one dynamic error correction scheme, a number of blocks for an image are transmitted and the communications channel error rate is measured on a periodic or other basis. Based on these measurements, a change in B_E is made, and hence a change to B_IR. While such a scheme typically avoids dropped frames or other synchronization problems, the image degradation caused by the reduced bit rate can interfere with perceived image quality, e.g., when higher error correction bandwidth requirements reduce the bandwidth available for a portion of the image the viewer is interested in.
In some real-time video communication systems, a constant bitrate is used to fit the channel bandwidth. However, a constant bitrate can be difficult to achieve in practice, and it generally requires the quantization levels to change constantly during the block encoding process for every image in the video sequence. One challenge is that, due to the limited bit budget, conventional techniques increase quantization levels for the latter-encoded blocks in an image in order to achieve the desired bitrate. This can reduce the picture quality of those latter-encoded blocks, in some cases resulting in uneven image quality when encoding in raster scan order. This uneven quality can be annoying to viewers.
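To make the starvation effect concrete, a toy model is sketched below; all costs and budgets are invented numbers, and real rate control is considerably more gradual:

```python
def raster_quantizers(bit_budget, num_blocks, fine_cost=10, q_max=8):
    """Toy model of a constant-bitrate pass in raster order: each block
    greedily takes the finest quantization the leftover budget permits,
    so latter blocks in the scan are starved and coded coarsely."""
    remaining, levels = bit_budget, []
    for _ in range(num_blocks):
        q = 1
        # Coarsen until this block's (invented) cost fits the budget.
        while fine_cost / q > remaining and q < q_max:
            q += 1
        levels.append(q)
        remaining -= fine_cost / q
    return levels

# 64 blocks, budget covering ~40 finely coded blocks: the tail is coarse.
print(raster_quantizers(bit_budget=400, num_blocks=64))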
Another challenge with noisy communications channels between an encoder and decoder is that portions of the transmitted bitstream can be corrupted or entirely lost, also potentially resulting in degraded video performance and a decrease in perceived video quality.
One or more aspects of this disclosure address these and other challenges by limiting the potentially reduced quality of interesting areas of an image such as a central section of the image during the encoding process so as to maintain perceived image quality. Herein, an interesting area of an image is defined as a portion of an image that may contain image data of actual or predicted interest to a viewer such as the face of a person talking in a video frame.
In one example, a network 28 connects a transmitting station 12 and a receiving station 30 for encoding and decoding of the video stream. For example, the video stream is encoded in transmitting station 12 and the encoded video stream is decoded in receiving station 30. Network 28 may be any network appropriate to the application at hand, such as the public Internet, a corporate or other intranet, a local or wide area network, a virtual private network, Token Ring, a cellular telephone data network, or any other wired or wireless configuration of hardware, software, and communication protocols suitable to transfer the video stream from transmitting station 12 to receiving station 30 in the illustrated example.
Receiving station 30 includes a CPU 32 and memory 34, which can be implemented using components similar to those discussed above in conjunction with transmitting station 12. A display 36 is configured to display a video stream. Display 36 can be connected to receiving station 30 and can be implemented in various ways, including as a liquid crystal display (LCD), a cathode-ray tube (CRT), an organic or non-organic light emitting diode (LED) display, a plasma display, or any other mechanism to display a machine-readable video signal to a user. Display 36 can be configured to display a rendering 38 of the video stream decoded by a decoder in receiving station 30.
Other implementations of encoder and decoder system 10 are possible. In the implementations described, for example, an encoder is implemented in transmitting station 12 and a decoder is implemented in receiving station 30 as instructions in memory or a component separate from memory. However, an encoder or decoder can be coupled to a respective station 12, 30 rather than be located within it. Further, one implementation can omit network 28 and/or display 36. In another implementation, a video stream can be encoded and then stored for transmission at a later time to receiving station 30 or any other device having memory. In another implementation, additional components can be added to encoder and decoder system 10. For example, a display or a video camera can be attached to transmitting station 12 to capture the video stream to be encoded.
In an exemplary implementation, the real-time transport protocol (RTP) is used for transmission of the bitstream. In another implementation, a transport protocol other than RTP may be used, such as an HTTP-based video streaming protocol.
When video stream 50 is presented for encoding, each frame (such as frame 56 from
Next, still referring to
Quantization stage 76 converts the transform coefficients into discrete quantum values, which are referred to as quantized transform coefficients. The quantized transform coefficients are then entropy encoded by entropy encoding stage 78. The entropy-encoded coefficients, together with other information used to decode the block, such as the type of prediction used, motion vectors and quantizer value, are then output to a compressed bitstream 88. Compressed bitstream 88 can be formatted using various techniques, such as variable length coding (VLC) and arithmetic coding.
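As a schematic of the quantization step only, the following sketch applies a uniform quantizer to transform coefficients; production codecs use more elaborate quantization matrices and rounding rules:

```python
import numpy as np

def quantize(coeffs: np.ndarray, step: int) -> np.ndarray:
    """Uniform quantization: map transform coefficients to discrete
    quantum values by dividing by the quantizer step and rounding."""
    return np.round(coeffs / step).astype(np.int32)

def dequantize(q_coeffs: np.ndarray, step: int) -> np.ndarray:
    """Approximate inverse: scale the quantized values back up."""
    return q_coeffs * step

coeffs = np.array([[-83.2, 14.6, -3.1], [7.9, -1.2, 0.4]])
q = quantize(coeffs, step=8)       # coarser step -> fewer distinct values
print(q, dequantize(q, step=8))    # reconstruction differs by rounding error
```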
The reconstruction path in
Other variations of encoder 70 can be used to encode compressed bitstream 88. For example, a non-transform based encoder 70 can quantize the residual signal directly without transform stage 74. In another implementation, an encoder 70 may have quantization stage 76 and dequantization stage 80 combined into a single stage.
When compressed bitstream 88 is presented for decoding, the data elements within compressed bitstream 88 can be decoded by entropy decoding stage 102 to produce a set of quantized transform coefficients. Dequantization stage 104 dequantizes the quantized transform coefficients, and inverse transform stage 106 inverse transforms the dequantized transform coefficients to produce a derivative residual that can be identical to that created by inverse transform stage 82 in encoder 70. Using header information decoded from compressed bitstream 88, decoder 100 can use the intra/inter prediction stage 108 to create the same prediction block as was created in encoder 70. At reconstruction stage 110, the prediction block can be added to the derivative residual to create a reconstructed block. Loop filtering stage 112 can be applied to the reconstructed block to reduce blocking artifacts. Deblocking filtering stage 114 can be applied to the reconstructed block to reduce blocking distortion, and the result is output as output video stream 116.
Other variations of decoder 100 can be used to decode compressed bitstream 88. For example, decoder 100 can produce output video stream 116 without deblocking filtering stage 114.
Blocks are conventionally formed and processed in raster scan order, meaning that a first block is formed by, for example, sixty-four pixels in an 8×8 window starting at the upper left corner of the image. The next block is formed by sixty-four pixels in an 8×8 window located by stepping over eight pixels in the horizontal direction, and so forth. Once a row is completed, the next row of blocks is formed by stepping down eight pixels in the vertical direction starting from the left edge of the image and then proceeding in the horizontal direction as described above. Each row of blocks is formed in a like manner. In general, raster scan block schemes do not provide a mechanism to dynamically change error correction/bit rate tradeoffs to provide increased protection for particular regions in a frame.
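For reference, a raster enumeration of block positions can be sketched as follows; block (r, c) covers the pixels starting at offset (8r, 8c) for 8×8 blocks:

```python
def raster_block_order(rows: int, cols: int):
    """Enumerate block positions left-to-right, top-to-bottom,
    as in a conventional raster scan."""
    return [(r, c) for r in range(rows) for c in range(cols)]

print(raster_block_order(2, 3))  # [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)]
```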
This disclosure provides improved image encoding and decoding with dynamic error correction by altering the order in which image blocks are processed and transmitted. For many video streams, the most important part of the image frame is the central portion, with the outer edges typically containing less interesting image data. An example of this is a video conferencing system, where the images are principally pictures of faces occupying the central portion of the image. In implementations of this disclosure, the residual quantization and coding process starts from the blocks at the center of the image rather than the top-left corner, and subsequent blocks are scanned in a spiral manner. This way, any image quality degradation due to the bit budget is applied to the outer part of the image so as to create a visually more pleasing image. This approach optionally allows more of the bit budget to be distributed to the center section of the image without concern for potential image quality problems later in the scan. Moreover, designating blocks in this way can allow different levels of error protection to be applied to different parts of the frame.
Referring again to
To provide one example, a counter-clockwise spiral scan is described with reference to
The blocks so selected can be encoded as described with respect to the operation of encoder 70. After processing central section 134 in step 122 of
In one example, less emphasis may be placed on the order in which blocks are selected for encoding from the edges of frame 56, in comparison to the order in which blocks are selected from central section 134, when it is assumed that less important image information resides at the edges. According to one implementation of a counter-clockwise spiral scan for a central section, the left strip is scanned and coded first. If the left strip contains multiple macroblock columns, the scanning can start from the inner column, connecting to the last scanned macroblock in the N×N central section of the frame. After the left columns are scanned and coded, the columns of macroblocks in the right strip are scanned and coded. The right strip scan can start from its left-most column, from top to bottom, and then proceed from the next column to the right, from bottom to top. This process is repeated until all the macroblocks are scanned and coded. In one example, the right strip may be scanned first where the spiral scan is a clockwise scan. In a different example, the right strip is scanned and coded first when the scan order is counter-clockwise, or the left strip is scanned and coded first when the scan order is clockwise. The blocks of the outer strips, whether located on the left and right or the top and bottom, can be scanned and coded in different orders after processing the central section.
In the description above, the edge strips are scanned from top to bottom, from columns closer to the central section to columns closer to the outer edges of the image frame, where it is assumed M>N, but other scan patterns are possible. For example, instead of proceeding from top to bottom for the first column and then from bottom to top for the next column and so on, the scan could occur from bottom to top for the first column and then from top to bottom for the next column and so on. The scan could also occur along each column from top to bottom, or from bottom to top, for all blocks of a strip. Scanning could likewise proceed using left-to-right and/or right-to-left scanning of rows, including patterns that start with the rows closest to the central section. Scanning the columns of the top and bottom outer sections in top-to-bottom and/or bottom-to-top patterns is also possible.
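A minimal sketch of one possible counter-clockwise spiral enumeration follows. For simplicity it continues the spiral over the entire frame, skipping out-of-bounds steps, rather than switching to the strip patterns described above; the choice of center block and direction order are likewise assumptions:

```python
def spiral_block_order(rows: int, cols: int):
    """Enumerate block coordinates of a rows x cols grid in a
    counter-clockwise spiral, starting from the central block.
    Out-of-bounds steps are skipped, so the walk also covers the
    outer strips of a non-square frame."""
    r, c = rows // 2, cols // 2          # a substantially central block
    # Direction cycle: right, up, left, down (counter-clockwise on screen).
    dirs = [(0, 1), (-1, 0), (0, -1), (1, 0)]
    order, run, d = [(r, c)], 1, 0
    while len(order) < rows * cols:
        for _ in range(2):               # each run length is used twice
            dr, dc = dirs[d % 4]
            for _ in range(run):
                r, c = r + dr, c + dc
                if 0 <= r < rows and 0 <= c < cols:
                    order.append((r, c))
            d += 1
        run += 1
    return order

# A 4x4 frame of blocks: the first entries cluster around the center.
print(spiral_block_order(4, 4)[:8])
```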
Regardless of the scan order for the outer sections, the blocks so selected can be processed in step 126 by being encoded as described with respect to the operation of encoder 70.
In order to distinguish different scan patterns, such as a regular scan from left to right and then top to bottom, from the spiral scan described herein, code words can be added in a header to indicate the scan method. That is, implementations can indicate to the encoding process whether or not a particular video stream or frame should be encoded using a spiral scan or another scan order by keywords or a code embedded in the header for the stream or frame. This code can also be transmitted or stored along with the video stream or frame to indicate to the decoder that a spiral scan or other scan order was used in encoding and/or is to be used in decoding the video stream or frame.
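One minimal way to signal the scan method is sketched below, with an invented one-byte code word; an actual bitstream syntax would define its own fields:

```python
SCAN_RASTER, SCAN_SPIRAL = 0x00, 0x01   # hypothetical code words

def write_scan_mode(header: bytearray, mode: int) -> None:
    """Append a scan-method code word to a frame header."""
    header.append(mode)

def read_scan_mode(header: bytes, offset: int) -> int:
    """Recover the scan method so the decoder can mirror the encoder."""
    return header[offset]

header = bytearray()
write_scan_mode(header, SCAN_SPIRAL)
assert read_scan_mode(bytes(header), 0) == SCAN_SPIRAL
```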
Selecting image blocks to be encoded and transmitted in a spiral scan order beginning at or near the center of the image frame can help ensure that the image blocks containing the most important image information are transmitted first. This spiral scan order can be used, though not necessarily exclusively, for quantization control of the residual signals in quantization stage 76. For example, transform coefficients obtained from transform stage 74 are converted into discrete quantum values, which may be referred to as quantized transform coefficients, by application of quantization levels in quantization stage 76. The bits allowed for each of these conversions can be controlled by the position of the block in the stream so as to provide more resolution to blocks earlier in the stream and less to those later in the stream. For motion vector and other information coding in intra/inter prediction stage 72, one may still choose to use conventional motion vector prediction with a regular macroblock scan according to known techniques. That is, for example, a conventional partition technique can be used to code motion vectors and mode. A code may also be added to distinguish the different scan approaches for the motion vectors and for the quantization. This code can also be transmitted or stored along with the video stream or frame to indicate to the decoder which of the different scan approaches for the motion vectors, quantization, and the like are to be used in decoding the video stream or frame.
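A sketch of one possible position-based quantizer policy appears below; the linear ramp from fine to coarse steps is an invented illustration of allotting more resolution to earlier (more central) blocks in the stream:

```python
def step_for_position(index: int, total: int, q_min: int = 4, q_max: int = 32) -> int:
    """Map a block's position in the spiral-ordered stream to a quantizer
    step: early (central) blocks get a fine step, later (outer) blocks a
    coarse one. The linear ramp is an invented policy for illustration."""
    frac = index / max(total - 1, 1)
    return round(q_min + frac * (q_max - q_min))

steps = [step_for_position(i, 16) for i in range(16)]
print(steps)  # fine steps first, coarse steps toward the frame edges
```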
The spiral macroblock scan order described herein can also be applied to a slice or a segment of an image. In addition, the scan pattern can be extended to different regions of an image to apply different error protection levels in accordance with the teachings herein.
A compressed bitstream including the encoded residuals for the blocks transmitted in the scan order can include, as described above by example, codes indicating the scan order and the partitioning for generating motion vectors and other data used to decode the blocks. When presented for decoding, the data elements within the bitstream can be decoded to produce a set of quantized transform coefficients. The quantized transform coefficients are dequantized then inverse transformed to produce a derivative residual that can be identical to that created by inverse transform stage 82 in encoder 70. Header information decoded from the bitstream allows the decoder to create or select the same prediction block as was used in encoder 70 to generate the residual. The prediction block is then added to the residual to create a reconstructed block. These steps may be performed for each block of a frame to be decoded. Filters can be applied to reduce blocking artifacts and/or blocking distortion before rendering a decoded frame.
The central section may include any desired region, such as a rectangle, of the current frame. A representative frame 56 is discussed and illustrated in
In step 142, CPU 14 encodes the identified blocks in an order in which they were selected. Hence, encoding occurs for the central section first and then proceeds to the remainder of the frame. This operation may be carried out according to the encoding techniques explained in detail above. Encoding may further include the application of motion search and/or motion compensation processes. As a further enhancement, motion search and/or motion compensation may be performed selectively. As an example, for encoded blocks within the central section, step 142 may be performed using motion search and/or motion compensation based on data from blocks only within the central section. For blocks outside the central section, step 142 in this example is performed using motion search and/or motion compensation based on data from blocks inside and outside the central section.
The motion compensation process can in some cases have greater error resiliency for the inner part of the image while having greater coding efficiency for the outer part of the image. This can be achieved by allowing the image data in the outer part of the image to use the inner part of the image for motion search and motion compensation, while not allowing the image data in the inner part of the image to use the outer part of the image for motion search and motion compensation. This can help to prevent errors in the outer region from spreading into the inner region.
In one example, the motion vectors of the macroblocks within the central region are not permitted to cross the boundary of that region. In this case, a correctly received bitstream containing only this region is decodable without receipt of the other regions. In this example, the outer, and likely less important, regions are able to use all regions for prediction. Therefore, outside the central section, there is no restriction on the motion search region and motion vector range in this implementation. In some cases, this may be an advantage compared to a slice-based approach, where all slices are treated equally for motion search, motion vectors and other information. Due to the spiral scan, the center section of macroblocks is scanned and coded before the outer section(s) of macroblocks.
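One way to realize this restriction is sketched below using simplified pixel-unit geometry; clamping is only one option, as an encoder could instead discard out-of-region candidates during motion search:

```python
def clamp_mv_to_region(mv, block, region):
    """Clamp a motion vector so a central-section block only references
    pixels inside the central section (x0, y0, x1, y1). Shapes and units
    are simplified: blocks are (x, y, w, h) in pixels, mv is (dx, dy)."""
    x, y, w, h = block
    x0, y0, x1, y1 = region
    dx = min(max(mv[0], x0 - x), x1 - (x + w))
    dy = min(max(mv[1], y0 - y), y1 - (y + h))
    return dx, dy

# A block near the left edge of the central section cannot point outside it.
print(clamp_mv_to_region((-24, 3), block=(64, 64, 16, 16), region=(48, 48, 208, 208)))
# -> (-16, 3): the horizontal component is clamped to the region boundary.
```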
In various implementations, different error protection schemes can be utilized depending upon the available computing resources and other details of the specific application at hand. In one example, the error protection scheme is coded into the programming of transmitting station 12. In a different example, the user of transmitting station 12 selects from various error protection schemes programmed into transmitting station 12. In still another example, the processing of
Two examples of error protection are illustrated by optional steps 144 and 154 in
Step 144, as illustrated, includes steps 146, 148 and 150. These steps can be performed for each block as it is encoded in step 142, after all blocks have been encoded, or in any other arrangement suitable in view of the teachings of this disclosure and the application at hand. Step 146 considers a given block, a group of blocks, or all blocks, and asks whether the block or blocks reside within the designated central section of the current frame.
For data related to blocks residing within the central section, the encoder applies a first level of error protection in step 148. Otherwise, a second level of error protection is applied in step 150.
There are various examples of the first and second levels of error protection. In one example, the first level of error protection is forward error correction (FEC), and the second level of error protection is no error protection. In another example, the first level of error protection is forward error correction applied to a first ratio or percentage of encoded blocks, and the second level of error protection is forward error correction applied to a second ratio or percentage of encoded blocks, where the first ratio is greater than the second ratio. For example, forward error correction may be applied to fifty percent of the encoded blocks within the central section and to twenty-five percent of the encoded blocks outside the central section, if any. Further examples of the first and second levels of error protection may be apparent to ordinarily skilled artisans having the benefit of this disclosure.
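By way of illustration, one simple policy for realizing two FEC ratios by stream position is sketched below, mirroring the 50%/25% example; the every-k-th-block rule is an assumption:

```python
def protection_flag(block_index: int, central_count: int,
                    ratio_central: float = 0.50, ratio_outer: float = 0.25) -> bool:
    """Decide whether a block's data gets FEC, using the spiral stream
    position: the first `central_count` blocks belong to the central
    section. Ratios mirror the fifty/twenty-five percent example above."""
    ratio = ratio_central if block_index < central_count else ratio_outer
    # Protect every k-th block so roughly `ratio` of blocks carry FEC.
    k = max(round(1 / ratio), 1)
    return block_index % k == 0

flags = [protection_flag(i, central_count=8) for i in range(16)]
print(flags)  # denser protection over the central blocks
```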
In step 152, CPU 14 transmits the encoded and optionally error protected bitstream. The protection of optional step 154 uses input from a source external to the encoder in order to implement different levels of error protection. Step 154, as illustrated, includes steps 156, 158, 160 and 162. These steps may be performed for each lost block, either individually or in a group along with other lost blocks. For ease of illustration, the operations starting at step 154 are described in the context of the loss of a single data packet, where a data packet is a portion of the information needed to recreate one or more blocks. Step 154 starts with a query as to whether any data was lost in the transmission of step 152. For example, lost data in the form of a data packet may be indicated by a signal from receiving station 30 that includes an identification code of the packet (called a “packet ID” hereinafter). Lost data can encompass data that is lost in its entirety or corrupted in part or in whole, for example. If there is no lost data in response to the query of step 154, processing of the current frame is completed. Encoder 70 does not need to query decoder 100 as to whether data has been lost; for example, decoder 100 can transmit a signal upon occurrence of lost or corrupted data that encoder 70 can check as needed.
In the event of lost data, processing advances to step 158. For a given packet of data, the query in step 158 asks whether the data is associated with a block residing within an interesting region of the frame (e.g., the central section of the frame). If so, error correction is applied using a first level of error protection in step 160. Otherwise, if the block resides outside the central section, error correction is applied using a second level of error protection in step 162. In one example, the first level of error protection includes retransmitting the lost data packet from transmitting station 12 responsive to a request for retransmission from receiving station 30. In this example, applying the second level of error protection can, in some cases, involve avoiding any retransmission of lost data. Thus, according to this example, lost data is only retransmitted if it corresponds to the area of interest in the frame.
The processing of step 154, when performed, can involve waiting for an input signal before taking action and can be repeated until all frames of a video stream are transmitted, with a delay to await any possible signals indicating lost data from the last frame. In this description, a query is performed in step 158 to determine whether the lost data is related to blocks in the central section. This query can, however, be omitted in certain implementations. For example, instead of receiving station 30 determining the packet ID identifying the lost packet and comparing it with transmitted data to determine whether it relates to the central section, the signal indicating lost data could include both the packet ID and a bit indicating whether the packet's data belongs to the central section. Alternatively, and as discussed in additional detail below, the signal for data loss could be sent to transmitting station 12 only when the first level of protection should be applied. In this way, steps 158 and 162 would be omitted.
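A minimal sketch of such a loss report, combining the packet ID with a central-section bit, follows; the five-byte wire format is invented for illustration:

```python
import struct

def pack_loss_report(packet_id: int, in_central: bool) -> bytes:
    """Encode a loss report as a 4-byte packet ID plus a 1-byte flag
    marking whether the lost packet covered the central section.
    The wire format is invented for illustration."""
    return struct.pack("!IB", packet_id, int(in_central))

def unpack_loss_report(data: bytes) -> tuple[int, bool]:
    packet_id, flag = struct.unpack("!IB", data)
    return packet_id, bool(flag)

report = pack_loss_report(packet_id=1042, in_central=True)
print(unpack_loss_report(report))  # (1042, True)
```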
Other combinations of steps are possible. According to one example, a block in the central section has a first level of protection applied to the related data in the form of forward error correction (FEC) at the encoder 70 in step 148 while the second level of protection is no protection in step 150 (e.g., FEC is not applied). Then, if data related to other than the central section is lost, the second level of protection is used in step 162. The first level of protection used in step 160 can be no protection in this example. The opposite could also occur. Moreover, additional steps could be added so that both steps 144 and 154 could occur depending on the severity of the loss (e.g., the number of lost packets, the content of a lost packet, etc.).
In one example, decoding may further include selectively conducting motion search and/or motion compensation. Further, one or both of these may be limited to blocks within the central section.
Receiving and decoding of the bitstream occurs until an entire frame is received and decoded. During step 174, decoder 100, for example, queries as to whether the received video stream is missing any data from the transmitted video stream. Data may be deemed lost because it is missing or incomplete, because it contains errors preventing proper decoding, or because other flaws are evident. Lost or corrupt data can be identified by comparing received packet IDs, for example, to determine if there are missing packets, or by checking the size or content of a received packet. If there is no lost data, the processing of the current frame ends.
On the other hand, if data has been lost, error correction is applied so as to minimize the effects of the lost data. For ease of illustration, and without any intended limitation, steps 176-180 are discussed in the context of the loss of a single data packet, where a data packet is a portion of the information needed to recreate one or more blocks. For a given lost data packet, decoder 100 queries whether the lost data was associated with a block residing in the area of interest (e.g., the central section) based on, for example, a known frame size, the size (and hence the number) of blocks forming the frame, and the position of the block within the bitstream as compared to other blocks representing the frame. Alternatively or in addition, this information is encoded into the bitstream in headers associated with the appropriate packets. For example, where the three partitions are formed as described below, the lack of an FEC packet associated with packet-3 would indicate that the lost data is not associated with the central section. The nature and extent of one exemplary central section was described above. If the lost data is related to a block residing in the central section, then error correction based on a first level of error protection is applied at step 178. Otherwise, error correction based on a second level of error protection is applied at step 180.
The first and second levels of error protection used in steps 178 and 180 can vary. In one example, decoder 100 applies error correction by re-creating a lost centrally-located block using forward error correction (FEC) packets as applied to some or all blocks in the video stream transmitted by transmitting station 12, while omitting error correction entirely when the block is outside the central section. In a different example, the error correction in step 178 re-creates a block using forward error correction (FEC) packets as generated by a given ratio of blocks in the video stream, while the error correction in step 180 re-creates a block using forward error correction (FEC) packets as generated by a lesser ratio of blocks in the video stream.
In another example, the error correction in step 178 includes transmitting a signal to transmitting station 12 to retransmit the lost data packet, where the packet ID of the data packet is sent. According to a modification of this example, the transmitted signal can also include a bit or code indicating whether the lost data packet is associated with a block located in the central section. The error correction in step 180 can omit a request for retransmission in an example. In one example, retransmission is only performed in an implementation where transmitting station 12 does not apply any forward error correction. In a different example, retransmission is performed even where transmitting station 12 applies forward error correction, depending on the data packet lost.
Processing in
The processes described above have various advantages, some of which are described as follows. In one example, by limiting the application of forward error correction to certain parts of the bitstream, these techniques avoid the overhead and possible delay of applying forward error correction universally. Although traditional raster scan coding does not offer a practical way to define a region of importance in the frame for protection, this is overcome in an example using spiral scanning, wherein the importance of macroblocks is indicated by the order of scanning.
Although the processing of
To supplement the foregoing explanation, a detailed example is given without any intended limitation. Among many useful contexts, this example may be useful in a video conference system or video streaming system.
In this example, a bitstream of an image is divided into three different partitions. A first partition contains the picture header, motion vectors, quantization information, and other high level information. The coding of this partition may be performed using any conventional method, such as a video compression format where the motion vectors are coded using prediction and variable length codes, for example. The motion vector prediction may be performed using the spiral scan, a conventional macroblock raster scan, or another scan pattern. A second partition contains the residual coding of macroblocks from the center section of the frame using the spiral scan, e.g., as produced by step 142. A third partition contains the residual coding of macroblocks from the outer part of the image using the spiral scan, also as produced by step 142.
In this example, an encoder, such as encoder 70, applies a first level of error protection to data associated with a central section of a frame in the form of forward error correction (e.g., at step 148) and applies a second level of error protection to the remainder of the frame by omitting forward error correction (e.g., at step 150).
In this example, the bitstream, such as encoded bitstream 88, is transmitted to receiving station 30 (e.g., in step 152) with partial forward error correction. In the current example, transmitting station 12 protects the first and second partitions with forward error correction, e.g., in step 148, but does not protect the third partition with any forward error correction, e.g., as per step 150.
In the present example, a fifty percent forward error correction is applied to data of the first and second partitions. For example, the original bitstreams of these partitions can be combined together, then divided into two different packets of equal length, referred to as packet-1 and packet-2. An FEC packet may be generated by an XOR operation of packet-1 and packet-2. Ordinarily skilled artisans, having the benefit of this disclosure, will be capable of applying different ratios to the FEC operations. A third packet, packet-3, contains the third partition.
The encoder thus has a total of four packets, namely packet-1, packet-2, the FEC packet and packet-3. When receiving station 30 receives any two of packet-1, packet-2 and the FEC packet, the center section of the image can be reconstructed with error protection.
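A minimal sketch of the XOR-based generation and recovery follows; the packet contents are placeholders:

```python
def xor_fec(p1: bytes, p2: bytes) -> bytes:
    """Generate an FEC packet as the byte-wise XOR of two equal-length
    packets; XOR of any two of {p1, p2, fec} yields the third."""
    assert len(p1) == len(p2)
    return bytes(a ^ b for a, b in zip(p1, p2))

packet_1 = b"central-section-bits-A"
packet_2 = b"central-section-bits-B"
fec = xor_fec(packet_1, packet_2)

# Scenario: packet-2 is lost, but packet-1 and the FEC packet arrive.
recovered = xor_fec(packet_1, fec)
assert recovered == packet_2
```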
To further illustrate this, certain scenarios are set forth. In a first scenario, receiving station 30 correctly receives packet-1 and the FEC packet, while packet-2 and packet-3 are lost. This is equivalent to a fifty percent packet loss. From packet-1 and the FEC packet, receiving station 30 can recover packet-2 according to known techniques, e.g., in step 178 of
In a second scenario, receiving station 30 correctly receives packet-2, the FEC packet and packet-3, but loses packet-1. In this case, application of the first level of error protection, e.g., in step 178, results in recovery of the data in packet-1 using packet-2 and the FEC packet. Since packet-3 was not lost, it is decoded in a conventional way by decoder 100. As a result, the entire frame is reconstructed to produce the same image as if no data was lost.
As mentioned above, decoder 100 can signal for retransmission of a lost packet, or not, based on whether the missing packet refers to blocks in a particular section, e.g., the central section. Retransmission can occur either in place of or in addition to the protection described in these examples. In one example, if packet-1 and packet-2 were lost, retransmission could occur by signaling encoder 70 to transmit one of the packets, while the other packet is recovered using the retransmitted packet and the FEC packet. Packet-3 could be conventionally decoded.
The implementations of encoding and decoding described above illustrate some exemplary encoding and decoding techniques. However, encoding and decoding, as those terms are used in the claims, could mean compression, decompression, transformation, or any other processing or change of data.
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an implementation” or “one implementation” throughout is not intended to mean the same implementation unless described as such.
The implementations of transmitting station 12 and/or receiving station 30 and the algorithms, methods, instructions, and such stored thereon and/or executed thereby can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, ASICs, programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit. In the claims, the term “processor” encompasses any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, portions of transmitting station 12 and receiving station 30 do not necessarily have to be implemented in the same manner.
Further, in one implementation, for example, transmitting station 12 or receiving station 30 can be implemented using a general purpose computer/processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein. In addition or alternatively, for example, a special purpose computer/processor can be utilized which can contain specialized hardware for carrying out any of the methods, algorithms, or instructions described herein.
Transmitting station 12 and receiving station 30 can, for example, be implemented on computers in a screencasting system. Alternatively, transmitting station 12 can be implemented on a server and receiving station 30 can be implemented on a device separate from the server, such as a cell phone or other hand-held communications device. In this instance, transmitting station 12 can encode content using an encoder 70 into an encoded video signal and transmit the encoded video signal to the communications device. In turn, the communications device can then decode the encoded video signal using decoder 100. Alternatively, the communications device can decode content stored locally on the communications device, such as content that was not transmitted by transmitting station 12. Other suitable transmitting station 12 and receiving station 30 implementation schemes are available. For example, receiving station 30 can be a generally stationary personal computer rather than a portable communications device and/or a device including encoder 70 may also include decoder 100.
Further, all or a portion of implementations of the present invention can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available.
The above-described implementations have been described in order to allow easy understanding of the present invention and do not limit the present invention. On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.