The present invention relates in general to video encoding and decoding.
An increasing number of applications today make use of digital video for various purposes including, for example, remote business meetings via video conferencing, high definition video entertainment, video advertisements, and sharing of user-generated videos. As technology is evolving, people have higher expectations for video quality and expect high resolution video even when transmitted over communications channels having limited bandwidth.
To permit transmission of higher quality video while limiting bandwidth consumption, a number of video compression schemes have been developed, including proprietary formats such as VPx (promulgated by On2 Technologies, Inc. of Clifton Park, N.Y.) and H.264, a standard promulgated by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), including present and future versions thereof. H.264 is also known as MPEG-4 Part 10 or MPEG-4 AVC (formally, ISO/IEC 14496-10).
These compression schemes may use prediction techniques to minimize the amount of data required to transmit video information. Prediction techniques can allow multiple past transmitted frames and future frames to be transmitted out of order and used as potential reference frame predictors for macroblocks in a frame. For example, video compression schemes such as the MPEG and H.264 standards allow for transmission of frames out of order and use of those frames to produce better predictors through forward or bidirectional prediction. Further, for example, the H.264 video compression standard allows multiple past reference frames to be used as predictors.
Disclosed herein are systems, methods, and apparatus for video coding using a constructed reference frame.
An aspect of the disclosed embodiments is a method of encoding a video stream. Encoding a video stream may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded video and outputting an output bitstream. Generating the encoded video includes receiving an input video stream, generating a constructed reference frame, generating an encoded constructed reference frame by encoding the constructed reference frame, including the encoded constructed reference frame in an output bitstream such that the constructed reference frame is a non-showable frame, generating an encoded frame by encoding a current frame from the input video stream using the constructed reference frame as a reference frame, and including the encoded frame in the output bitstream.
Another aspect of the disclosed embodiments is a method of decoding an encoded video stream. Decoding an encoded video stream may include generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a decoded video for presentation to a user and outputting the decoded video. Generating the decoded video includes receiving an encoded video stream, generating a decoded constructed reference frame by decoding an encoded constructed reference frame from the encoded video stream, such that the decoded constructed reference frame is a non-showable frame, generating a decoded current frame by decoding an encoded current frame from the encoded video stream using the decoded constructed reference frame as a reference frame, and including the decoded current frame in the decoded video such that the decoded constructed reference frame is omitted from the decoded video.
Another aspect of the disclosed embodiments is a non-transitory computer-readable storage medium comprising executable instructions that, when executed by a processor, facilitate performance of operations including generating an encoded video and outputting an output bitstream. Generating the encoded video includes receiving an input video stream, generating a constructed reference frame, generating an encoded constructed reference frame by encoding the constructed reference frame, including the encoded constructed reference frame in an output bitstream such that the constructed reference frame is a non-showable frame, generating an encoded frame by encoding a current frame from the input video stream using the constructed reference frame as a reference frame, and including the encoded frame in the output bitstream.
These and other embodiments of the invention, including methods of extracting a constructed reference frame from a series of digital video frames, are described in additional detail hereinafter.
The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views, and wherein:
Referring to
Referring to
When input video stream 16 is presented for encoding, each frame 17 within input video stream 16 is processed in units of macroblocks. At intra/inter prediction stage 18, each macroblock is encoded using either intra prediction or inter prediction mode. In either case, a prediction macroblock can be formed based on a reconstructed frame. In the case of intra-prediction, a prediction macroblock is formed from samples in the current frame that have been previously encoded and reconstructed. In the case of inter-prediction, a prediction macroblock is formed from one or more constructed reference frames as described in additional detail herein.
Next, still referring to
The reconstruction path in
Referring to
When compressed bitstream 26 is presented for decoding, the data elements can be entropy decoded by entropy decoding stage 25 (using, for example, Context Adaptive Binary Arithmetic Coding) to produce a set of quantized coefficients. Dequantization stage 27 dequantizes the coefficients, and inverse transform stage 29 inverse transforms the coefficients to produce a derivative residual that is identical to that created by the reconstruction stage in the encoder 14. Using header information decoded from the compressed bitstream 26, at intra/inter prediction stage 23, decoder 21 creates the same prediction macroblock as was created in encoder 14. At the reconstruction stage 31, the prediction macroblock is added to the derivative residual to create a reconstructed macroblock. The loop filter 34 can be applied to the reconstructed macroblock to further reduce blocking artifacts. A deblocking filter 33 is applied to the reconstructed macroblock to reduce blocking distortion, and the result is output as output video stream 35.
Referring again to encoder 14, video encoding methods compress video signals by using lossless or lossy compression algorithms to compress each frame or blocks of each frame of a series of frames. As implied by the description above, intra-frame coding refers to encoding a frame using data from that frame, while inter-frame coding refers to predictive encoding schemes, such as those that encode a frame based on other so-called "reference" frames. For example, video signals often exhibit temporal redundancy in which frames near each other in the temporal sequence of frames have at least portions that match or at least partially match each other. Encoders can take advantage of this temporal redundancy to reduce the size of encoded data by encoding a frame in terms of the difference between the current frame and one or more reference frames.
Video encoders may use motion compensation based algorithms that match blocks of the frame being encoded to portions of one or more other frames. The block of the encoded frame may be shifted in the frame relative to the matching portion of the reference frame. This shift is characterized by a motion vector. Any differences between the block and partially matching portion of the reference frame may be characterized in terms of a residual. The encoder 14 may thus encode a frame as data that comprises one or more of the motion vectors and residuals for a particular partitioning of the frame. A particular partition of blocks for encoding the frame may be selected by approximately minimizing a cost function that, for example, balances encoding size with distortion to the content of the frame resulting from encoding.
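By way of illustration only, the following minimal Python sketch shows a full-search block-matching step of the kind described above. The block size, search range, and sum-of-absolute-differences cost are assumptions for the example, not details fixed by this disclosure.

```python
import numpy as np

def find_motion_vector(cur_block, ref_frame, bx, by, search_range=16):
    """Full-search block matching: return the motion vector (dx, dy) and the
    residual that best match cur_block against ref_frame. (bx, by) is the
    block's top-left corner, in pixels, in the current frame."""
    n = cur_block.shape[0]  # square block, e.g. 16x16
    best_cost, best_mv, best_residual = None, (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            residual = cur_block.astype(np.int32) - ref_frame[y:y + n, x:x + n].astype(np.int32)
            cost = int(np.abs(residual).sum())  # sum of absolute differences (SAD)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv, best_residual = cost, (dx, dy), residual
    # The encoder transmits the motion vector and the (transformed, quantized)
    # residual rather than the raw block.
    return best_mv, best_residual
```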
As described briefly above, many video coding algorithms first partition each picture into macroblocks. Then, each macroblock is coded using some form of predictive coding method such as motion compensation. Some video coding standards use different types of predicted macroblocks in their coding. In one scenario, a macroblock may be one of three types: 1) Intra (I) macroblock that uses no information from other pictures in its coding; 2) Unidirectionally Predicted (P) macroblock that uses information from one preceding picture; and 3) Bidirectionally Predicted (B) macroblock that uses information from one preceding picture and one future picture.
To facilitate higher quality compressed video, it is helpful to have the best matching reference frame in order to have the smallest difference to encode, which generally results in a more compact encoding. Currently, reference frames are based on past frames, future frames, or an intra-frame so that the encoder can find the best matching block to use in the predictive process as shown in, for example, U.S. Application Publication No. 2005/0286629. However, reference frames currently used are based on real frames that are shown to the end-user.
This approach has several drawbacks, the main one being that such reference frames do not necessarily provide the best reference data, which can lower the quality of the video compression.
In contrast, the reference frame created and used by the encoder 14 described herein is a constructed reference frame, which is a frame of image data that is encoded into the bitstream and serves to improve the encoding of subsequently transmitted frames. Unlike a conventional reference frame, a constructed reference frame is not shown to the user. Due to the flexibility of the techniques described herein, a constructed reference frame may not even have the same dimensions as the video stream's raw image frames or the frames displayed to the user. Instead, the constructed reference frame serves as a predictor, giving subsequent frames a better predictive choice than a prior transmitted frame might offer. The creation of a constructed reference frame is not defined by the bitstream. Instead, creating the best possible constructed reference frame is a task left to the encoder. In this way, the computational expense of constructing a reference frame is done by the encoder rather than the decoder.
An embodiment of the present invention uses one or more constructed reference frame buffers as predictors for pieces of the current frame data. This includes the usage of these frame buffers for motion-compensated and non-motion-compensated prediction. It also covers the usage of a combination of a constructed reference frame with a real reference frame for prediction, as in typical bidirectional prediction modes.
Generally, the constructed reference frame can be built by a number of methods and used in a variety of ways for encoding. Methods for building the constructed reference frame are first generally described below before specific examples are described.
According to a first method of creating the constructed reference frame, a copy of an existing frame is encoded into the bitstream some time before that frame would normally appear in a sequence of image frames. A relevant parameter to the encoding herein is the quality of the encoding of the reference frame, or "boost." The more reliable the constructed reference frame is, the more valuable precise encoding of that frame can be. Conversely, a reference frame of limited predictive value need not be encoded to a very high level of precision. In this first method, the copy of this frame is usually, but not necessarily, encoded at a somewhat higher than average quality.
Other frames are encoded according to conventional techniques using this constructed reference frame. When the target frame used to encode the constructed reference frame is encountered in the bitstream, it would be encoded with reference to the copy of the existing frame, that is, the constructed reference frame. Such encoding would occur, for example, with a lower quality level or lower boost than that used to encode the constructed reference frame.
Another method of creating a constructed reference frame generally includes selecting the target frame as above and using temporal filtering to remove video noise from several source frames centered on that target frame. Such a constructed reference frame is shown in
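A minimal sketch of such temporal filtering is given below. For brevity it assumes a static scene, so that a plain per-pixel average over the window suffices; a practical encoder would motion-compensate each source frame toward the target frame before filtering. The window radius is an assumption for the example.

```python
import numpy as np

def temporally_filtered_reference(frames, target_idx, radius=2):
    """Denoise by averaging a window of source frames centered on the
    target frame; averaging suppresses temporally uncorrelated noise."""
    lo = max(0, target_idx - radius)
    hi = min(len(frames), target_idx + radius + 1)
    window = np.stack([frames[i].astype(np.float64) for i in range(lo, hi)])
    return np.round(window.mean(axis=0)).astype(np.uint8)
```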
A further possible embodiment is shown with reference to
A third method of creating a constructed reference frame is to create only a high quality background frame for encoding using background extraction and/or motion segmentation. Various techniques for background extraction and motion segmentation are known in the art. Generally, any block that has a high motion vector (i.e., is moving fast) is considered foreground and is not copied into the constructed reference frame. Any block that has a (0,0) motion vector or other low motion vector (i.e., is moving slowly) is considered background and is copied into the constructed reference frame.
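The following sketch illustrates the copy rule just described. The block size, the motion threshold, and the layout of motion_vectors are assumptions for the example.

```python
import numpy as np

def build_background_reference(prev_ref, cur_frame, motion_vectors,
                               block=16, motion_thresh=4):
    """Copy slow-moving ("background") blocks of cur_frame into the
    constructed reference frame; fast-moving ("foreground") blocks keep
    the previous reference content. motion_vectors[r][c] holds the
    (dx, dy) found for the block at block-row r, block-column c."""
    ref = prev_ref.copy()
    rows = cur_frame.shape[0] // block
    cols = cur_frame.shape[1] // block
    for r in range(rows):
        for c in range(cols):
            dx, dy = motion_vectors[r][c]
            # (0,0) or other low motion vector => background => copy it in
            if dx * dx + dy * dy <= motion_thresh * motion_thresh:
                y, x = r * block, c * block
                ref[y:y + block, x:x + block] = cur_frame[y:y + block, x:x + block]
    return ref
```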
Of course, although this method describes creating only a high quality background frame, there is no limit in theory to the number of constructed frames encoded. Accordingly, it is also possible to segment the foreground and background into separate constructed reference frames.
Although not previously mentioned, in order to facilitate usage of the reference frames in the described manner, encoding of an alpha channel for use in constructed reference frames may be desirable.
Another method of creating the constructed reference frame is to use image super resolution to construct a frame of a different size than the target frame. There is no requirement that the reference frame exactly matches the size and dimensions of the actual video being encoded. For example, in a zoom out, pan or rotate, a larger area is slowly revealed over several frames. A constructed reference frame that is larger than the original frame provides higher quality prediction for the border areas.
One method of creating such a constructed reference frame is shown by example in
After step 58, the frame is incremented at step 60, and the new frame becomes current frame A in step 54. Steps 56, 58 and 60 are repeated until the number of current frame A is greater than N+X number of frames as indicated by step 56. Then, processing advances to step 62, where a bounding region is created that covers the entire set of frames when aligned on top of each other by use of the global motion vector. In next step 64, a new image is created that is larger in dimensions than the source frames. Preferably, the new image is large enough to cover the entire region as it is moved about.
After finding the global motion vectors and creating a new image that completely bounds the set of video frames in step 64, the remaining steps are performed for each pixel in the new image. Namely, in step 66 a pixel in the new image is selected. In step 68, the frame A is again set to the start frame N so that the following steps are performed for each frame A from start frame N to frame N+X. First, in step 70, the encoder 14 checks whether the number of frame A is greater than N+X number of frames. If not, the encoder 14 queries in step 71 whether the selected pixel is in current frame A. If the selected pixel is in current frame A in step 71, processing advances to step 72, where the encoder 14 adds the pixel to a candidate set. Processing then advances to step 73, where the frame is incremented. If the selected pixel is not in current frame A in step 71, processing advances directly to step 73 to increment the frame. Then, the frame as incremented is set as current frame A in step 68, and the selected pixel is searched for in the new frame in step 71. This process is completed for each frame of the set of frames to form the candidate set. Once all of the frames have been checked for the selected pixel (as indicated by a yes response to the query in step 70), processing advances to step 74, where a number of steps are performed for the candidate set.
Namely, in step 74, the newest pixel is selected from the candidate set, and each remaining pixel of the candidate set is compared to that newest pixel. Specifically, in step 75, a pixel in the candidate set is selected. In step 76, the encoder 14 determines whether the intensity of that pixel is greater than a predetermined threshold away from the intensity of the newest pixel. This predetermined threshold is determined by experimentation and depends, in part, on the intensity range of the pixels in the frames. If the intensity of the selected pixel is greater than the predetermined threshold away from the intensity of the newest pixel, that pixel is removed from the candidate set in step 77. If all the pixels in the candidate set are checked in step 78 (and either left in the candidate set by a no response to the query in step 76 or removed from the candidate set in step 77 due to a yes response to the query in step 76), processing advances to step 79. Otherwise, a new pixel from the candidate set is selected in step 75 for comparison with the newest pixel in step 76.
In step 79, the average intensity of the pixels remaining in the candidate set is calculated. This average intensity could be a weighted average based on, as one example, the position of the pixel in the frame. Then, in step 80, the average intensity is stored as the current pixel intensity value in the constructed reference frame created from the new image. That is, the average intensity value is stored associated with the pixel position of the pixel selected from the new image that was used to develop the candidate set. In step 82, the encoder 14 queries whether or not all of the pixels in the new image have been reviewed. If they have, processing ends. If not, the next pixel in the new image is selected in step 66. Processing in steps 70 to 80 then repeats for the next pixel so that a candidate set for that pixel is selected and an average intensity value is assigned.
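The loop of steps 62 through 80 might be sketched as follows, assuming the per-frame global motion offsets from steps 54 through 60 are already available. The function name, the unweighted average, and the intensity threshold value are assumptions for the example.

```python
import numpy as np

def build_enlarged_reference(frames, offsets, intensity_threshold=32):
    """Assemble a constructed reference frame larger than the source
    frames. frames are the source frames N..N+X in temporal order;
    offsets[i] is the (y, x) position of frame i within the bounding
    canvas, accumulated from the per-frame global motion vectors."""
    height = max(oy + f.shape[0] for (oy, ox), f in zip(offsets, frames))
    width = max(ox + f.shape[1] for (oy, ox), f in zip(offsets, frames))
    canvas = np.zeros((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            # Gather the candidate set: this pixel's intensity in every
            # frame that contains it (steps 66-72).
            candidates = []
            for (oy, ox), f in zip(offsets, frames):
                fy, fx = y - oy, x - ox
                if 0 <= fy < f.shape[0] and 0 <= fx < f.shape[1]:
                    candidates.append(float(f[fy, fx]))
            if not candidates:
                continue
            # Discard candidates too far from the newest pixel (steps 74-78),
            # then store the average of the survivors (steps 79-80).
            newest = candidates[-1]
            kept = [v for v in candidates if abs(v - newest) <= intensity_threshold]
            canvas[y, x] = int(round(sum(kept) / len(kept)))
    return canvas
```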
A fifth method of creating a constructed reference frame involves using a scoring methodology to score each block or macroblock within a frame and then computing an overall score for the entire frame. This score can be used to pick which existing frame is used to construct the reference frame (i.e., what offset value, measured in time, is provided between the current frame and the frame that is used to build the constructed reference frame). Several scoring criteria can be used. For example, scoring criteria can include the ratio of error in intra-prediction vs. inter-prediction. In this case, the higher the ratio, the greater the time offset that can be used and the higher the boost that can be applied. Another criterion is the motion vector: the less motion, the greater the time offset and boost can be. Another criterion is zoom in vs. zoom out, and still another is the rate of decay in prediction quality.
Next discussed are further details with respect to the selection of constructed reference frames and the update interval and bit-rate boost that should be applied.
In one particularly preferred embodiment of the invention, the use, frequency and quality of constructed reference frames are determined by use of a two-pass encoding mechanism. Certain other embodiments might be implemented in one-pass encoders and might use different metrics.
In the first pass, information is gathered about the characteristics of the video clip, that is, the series of source frames or images. Each macroblock is encoded in one of two ways: a simple DC-predicted intra mode, or an inter mode that uses a motion vector and refers to the previous frame reconstruction buffer.
The reconstructed error score is noted for both encoding methods, and a record is kept of the cumulative score for the frame for the intra mode and for the best mode of either the intra or motion compensated inter mode. Usually the best mode is the inter coding mode. Accordingly, in the following description the cumulative best score will be referred to as the frame's inter error score although the inter coding mode is not necessarily the best mode for each frame.
A record is also kept of the percentage of macroblocks where the best mode for encoding is inter rather than intra, the percentage of the inter coded macroblocks where a zero (null) motion vector is selected, and summary information regarding the motion vectors used.
The percentage of the inter coded macroblocks where a zero (null) motion vector is selected indicates how much of the image is static.
The summary information regarding the motion vectors used comprises the number of macroblocks for which a non-zero vector is used, together with a sum value and a sum of absolute values for each of the motion vector components (x, y). From these, an average motion vector for the frame (if there are some positive values and some negative values, they may cancel out) and an average motion vector magnitude for the frame can be calculated.
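A sketch of this per-frame summary is shown below. The list-of-tuples input format and the use of the L1 norm for the average magnitude are assumptions for the example.

```python
def motion_vector_summary(vectors):
    """First-pass motion summary for one frame. vectors is a list of
    (x, y) motion vectors, one per inter-coded macroblock. Returns the
    non-zero-vector count, the average motion vector (signed components
    may cancel out), and the average motion vector magnitude."""
    nonzero = [(x, y) for (x, y) in vectors if x != 0 or y != 0]
    n = len(nonzero)
    if n == 0:
        return 0, (0.0, 0.0), 0.0
    sum_x = sum(x for x, _ in nonzero)
    sum_y = sum(y for _, y in nonzero)
    abs_x = sum(abs(x) for x, _ in nonzero)
    abs_y = sum(abs(y) for _, y in nonzero)
    avg_vector = (sum_x / n, sum_y / n)
    avg_magnitude = (abs_x + abs_y) / n  # L1 magnitude; an assumption
    return n, avg_vector, avg_magnitude
```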
One of the uses for the information gathered in the first pass is to decide whether or how frequently to encode constructed reference frames and also how many bits to spend on them, which comprises the second pass of the encoding mechanism. The constructed frames are encoded at a somewhat higher than average quality (that is, a higher level of boost) in certain embodiments.
The benefit gained by encoding a “boosted” constructed reference frame is dependent in large part on the quality of the prediction from one frame to another within a short sequence of frames. As described above briefly with respect to one embodiment of constructing such a reference frame, a measure used to establish this can be the intra/inter ratio. The intra/inter ratio is the ratio of the summed intra error score for the frame (as measured in the first pass) divided by the cumulative inter (or best) error score for the frame. A large intra/inter ratio (IIRatio) indicates that the use of inter coding gives a very large benefit, which in turn suggests that the frame is well predicted by the preceding frame.
For a sequence of frames to which a constructed reference frame may be relevant, up to a defined maximum interval (Max_interval), a boost score is calculated as described below and as shown with reference to
In step 90, the variable ThisFrameDecayRate is set equal to the variable NextFrame%InterCoded. The variable ThisFrameDecayRate represents the decay rate of frame A. The variable NextFrame%InterCoded is the record, described above, of the percentage of macroblocks in the next frame for which the best encoding mode is inter rather than intra coding. Where the NextFrame%InterCoded number is low, this indicates that a lot of blocks in the next frame were poorly predicted by the current frame (and hence ended up being intra coded).
After step 90, processing by the encoder 14 advances to step 92, where a variable DistanceFactor is set. DistanceFactor as calculated in step 92 generally indicates the desirability of boost for the frame and the relative amount of boost that should be performed. Essentially, it is a multiplier to be used to work out BoostScore as described in additional detail hereinafter. The larger the amount of motion, the smaller the value of DistanceFactor because high motion makes it desirable to minimize or eliminate boost. Similarly, if low motion is indicated in the frame, it is reflected by a higher value of DistanceFactor because a higher level of boost is desirable. In step 92, DistanceFactor is set equal to the variable ThisFrameAverageMotionVectorLength divided by, in this case, 300.0. This divisor is based, in part, on the number of pixel units in which the variable ThisFrameAverageMotionVectorLength is specified. In this case, that variable is specified in 1/8 pixel units. The variable ThisFrameAverageMotionVectorLength is the average motion vector for the current frame that is calculated from the summary information regarding the motion vectors described above. The divisor 300 here represents an average motion vector of about 300/8 pixels and was determined by experimentation. This is a high level of movement that indicates that it is undesirable to apply boost to the frame. The divisor, as mentioned, is based in part on the number of pixel units in which the variable ThisFrameAverageMotionVectorLength is specified. It can also be based on the size of the frame. For example, HD would likely require a higher divisor so that proper boost is applied.
In next step 94, the variable DistanceFactor is compared to the number 1.0. If DistanceFactor is less than or equal to 1.0, DistanceFactor is set to 1.0-DistanceFactor in step 96. Otherwise, DistanceFactor is set to zero in step 98. Regardless of the setting of DistanceFactor, processing advances to step 100, where the encoder 14 compares DistanceFactor to the variable ThisFrameDecayRate. If DistanceFactor is less than ThisFrameDecayRate in step 100, processing advances to step 102, where the variable ThisFrameDecayRate takes on the value DistanceFactor. Then, processing advances to step 104. If DistanceFactor is not less than ThisFrameDecayRate in step 100, processing advances directly to step 104.
In step 104, the variable DecayFactor is set equal to the previous value for DecayFactor multiplied by the variable ThisFrameDecayRate. DecayFactor is a value that starts at 1.0 and diminishes with each frame according to the percentage of the blocks in the next frame that were inter coded in the first pass (as indicated by variable ThisFrameDecayRate). As mentioned previously, where the NextFrame%InterCoded number is low, this indicates that a lot of blocks in the next frame were poorly predicted by the current frame (and hence ended up being intra coded). Therefore, once a macroblock has been intra coded once in a sequence, it is assumed that for that macroblock, the predictive link between frames at opposite ends of the sequence has been broken. DecayFactor provides a relatively crude metric as to how well this predictive link is maintained.
In one embodiment, DecayFactor may also be reduced if the level of motion in the current frame (as measured in the first pass) was high. As mentioned above, ThisFrameAverageMotionVectorLength is specified in 1/8 pixel units in this example. As with a high level of intra coding in a frame, the assumption is that very fast motion (large motion vectors) will reduce the quality of the predictive link between the two ends of the sequence.
After step 104, processing advances to step 106. In step 106, the variable BoostScore is updated to the sum of the previous BoostScore and the result of the multiplication of IIRatio, a MultiplierValue, DecayFactor and a ZoomFactor. The IIRatio and DecayFactor have been discussed previously. MultiplierValue provides a coarse mechanism that can be used by the encoder 14 to adjust boost levels for a particular video clip or application type. ZoomFactor is a value based on the number of motion vectors in the current frame that point outwards versus the number that point inwards. When zooming out, more boost is desirable. When zooming in, less boost is desirable. One way of determining the value of ZoomFactor is to set a counter that increments for each outwardly directed vector and decrements for each inwardly directed vector. When divided by the number of vectors, a value between −1 and +1 results. This result is then shifted to the range 0 to +2, giving the value of ZoomFactor. The value of ZoomFactor is larger (that is, greater than 1.0 in this example) when there is a zoom out and smaller when there is a zoom in. BoostScore represents the desired boost for the constructed reference frame used for encoding the frames from N to N+Max_interval.
The encoder 14 advances to the next frame in step 108, and the processing loop will either continue until the maximum interval has been reached or, according to one embodiment, until a set of breakout conditions has been met. The use of breakout conditions allows the encoder 14 to select shorter intervals between constructed reference frame updates where appropriate.
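A condensed sketch of this loop (steps 90 through 108), with the breakout conditions described below omitted, might look as follows. The frame_stats field names are assumptions for the example, and percentages are taken as 0.0 to 1.0 fractions.

```python
def compute_boost_score(frame_stats, multiplier_value=1.0, max_interval=30):
    """Per-frame boost accumulation (steps 90-108). Each frame_stats entry
    holds first-pass measurements for one frame in temporal order."""
    boost_score, decay_factor = 0.0, 1.0
    for stats in frame_stats[:max_interval]:
        this_frame_decay_rate = stats["next_frame_pct_inter_coded"]
        # DistanceFactor: average motion vector length (1/8-pel units) / 300.0,
        # then inverted so that high motion yields little or no boost
        distance_factor = stats["avg_motion_vector_length"] / 300.0
        distance_factor = 1.0 - distance_factor if distance_factor <= 1.0 else 0.0
        if distance_factor < this_frame_decay_rate:
            this_frame_decay_rate = distance_factor
        decay_factor *= this_frame_decay_rate  # crude predictive-link metric
        # ZoomFactor lies in [0, 2]: above 1.0 on zoom out, below 1.0 on zoom in
        boost_score += (stats["ii_ratio"] * multiplier_value
                        * decay_factor * stats["zoom_factor"])
    return boost_score
```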
One embodiment using breakout conditions is described with reference to
In
If the number of frames is greater than the value of Min_interval, the remaining breakout conditions are checked. Only one of the conditions needs to be met in order to indicate that breakout conditions are met and breakout should occur, that is, that processing in
First, in step 112, the value of variable MvRatioAccumulator is checked. MvRatioAccumulator is a value that is determined using information gathered in the first pass about the characteristics of the motion. MvRatioAccumulator accumulates the result of dividing the average absolute motion vector by the average motion vector for each frame and is essentially a measure of the randomness of the movement in the frame. A large value indicates that the positive and negative vectors in the frame have cancelled each other out, as may be the case in a zoom, for example, where vectors on opposite sides of the image may be pointing in opposite directions. A value approaching 1.0 indicates that all the vectors are pointing broadly in the same direction (as occurs in, for example, a pan). In such a case, a new constructed reference frame is not needed.
If the variable MvRatioAccumulator is greater than 60 in step 112, then the breakout conditions are met in step 114. The value of 60 indicates, in this case, the desirability of having a constructed reference frame produced more often. The value of 60 is by example only, and other values can be used based on characteristics of the source frames such as discussed previously (e.g., size of frames and motion vector length).
If the variable MvRatioAccumulator is not greater than 60 in step 112, then analysis of the remaining breakout conditions advances to step 116, where the value of variable AbsMvInOutAccumulator is checked. AbsMvInOutAccumulator is also a value that is determined using information gathered in the first pass about the characteristics of the motion. More specifically, AbsMvInOutAccumulator indicates the balance of vectors pointing away from the center of the image compared to those pointing towards the center of the image and can be calculated in a similar manner to that described with respect to ZoomFactor. This helps distinguish zoom-in conditions from zoom-out conditions. In step 116, if the value of variable AbsMvInOutAccumulator is greater than 2, the breakout conditions are met in step 114 such that a new constructed reference frame appears desirable. Otherwise, processing advances to check the final breakout condition in step 118. The value 2 is a threshold determined by experimentation and would vary based on characteristics such as the size of the frames and the motion vector length.
In step 118, BoostScore is compared to the previous BoostScore (PreviousBoostScore). If BoostScore is less than PreviousBoostScore+2.0, a situation has occurred where the rate of increase in the boost score from one frame to the next has decreased below a threshold amount. Accordingly, when BoostScore is less than PreviousBoostScore+2.0 in step 118, the breakout conditions are met in step 114 such that a new constructed reference frame appears desirable. Otherwise, all of the breakout conditions have been checked, and processing for breakout conditions ends. Processing in
The value 2.0 in step 118 is a threshold used as an indicator that the quality of prediction between the two ends of the sequence has dropped below a minimum acceptable level such that the spacing between the previous constructed reference frame and the subsequent constructed reference frame as determined in
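Collecting the three conditions, a sketch of the breakout test of steps 110 through 118 might read as follows. The thresholds 60, 2 and 2.0 are the experimental values discussed above and would vary with frame size and motion vector units.

```python
def breakout_conditions_met(frame_count, min_interval,
                            mv_ratio_accumulator,
                            abs_mv_in_out_accumulator,
                            boost_score, previous_boost_score):
    """Breakout test of steps 110-118; meeting any one condition after
    Min_interval frames indicates a new constructed reference frame."""
    if frame_count <= min_interval:
        return False  # never break out before the minimum interval
    if mv_ratio_accumulator > 60:  # movement is random / self-cancelling
        return True
    if abs_mv_in_out_accumulator > 2:  # zoom condition detected
        return True
    if boost_score < previous_boost_score + 2.0:
        return True  # rate of increase in boost score has stalled
    return False
```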
The algorithm described with respect to
However, regardless of the results determined in
In one embodiment, the following criteria are used to determine if an updated constructed reference frame is desirable. If the BoostScore for the sequence is above a threshold amount (indicating a good correlation of the constructed reference frame with the sequence of frames), the average value of DecayFactor for the frames in the sequence was above a threshold value (indicating good prediction over the sequence), and no rapid zoom was detected (especially conditions of zooming in where image data is leaving the image), then the update is desirable. These criteria are preferably checked after each loop where a new constructed reference frame is indicated in response to the analysis in
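A sketch of these three criteria follows; the two threshold values are left as parameters because the text does not fix them.

```python
def update_desirable(boost_score, decay_factors, rapid_zoom_detected,
                     boost_threshold, decay_threshold):
    """The three criteria for emitting an updated constructed reference
    frame: sufficient BoostScore, sufficient average DecayFactor over the
    sequence, and no rapid zoom (especially zoom in)."""
    good_boost = boost_score > boost_threshold
    good_decay = (sum(decay_factors) / len(decay_factors)) > decay_threshold
    return good_boost and good_decay and not rapid_zoom_detected
```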
Note that the algorithms defined above for determining the appropriateness, interval and boost for constructed reference frames, or similar algorithms, could also be used for defining the optimal number of B frames between successive P frames, and the distribution of bits between P and B frames, in encoders/decoders that support bidirectional prediction.
The constructed reference frame need not be displayed to the end user (that is, need not be included in the final decoded video output) and need not correspond to an actual image. As such, the size and configuration of the constructed reference frame are arbitrary and can be determined programmatically by the encoder 14 to optimize the quality of the encoding.
One benefit is that the decoder need not re-perform the computations used to create the constructed reference frame. Thus, a computationally expensive process can be used by the encoder 14 to derive the constructed reference frame, but this process need not be performed by the decoder 21, permitting faster, lighter and more efficient decoding.
The above-described embodiments have been described in order to allow easy understanding of the present invention and do not limit the present invention. On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.
This application is a continuation of U.S. patent application Ser. No. 15/186,800, filed Jun. 20, 2016, now U.S. Pat. No. 10,165,306, which is a continuation of U.S. patent application Ser. No. 13/658,396, filed Oct. 23, 2012, now U.S. Pat. No. 9,374,596, which claims priority to U.S. patent application Ser. No. 12/329,041, filed Dec. 5, 2008, which claims priority to U.S. provisional patent application No. 61/096,189, filed Sep. 11, 2008, each of which is incorporated herein in its entirety by reference.
Number | Name | Date | Kind |
---|---|---|---|
4816906 | Kummerfeldt et al. | Mar 1989 | A |
5386234 | Veltman | Jan 1995 | A |
5398068 | Liu et al. | Mar 1995 | A |
5442458 | Rabbani et al. | Aug 1995 | A |
5483287 | Siracusa | Jan 1996 | A |
5485279 | Yonemitsu et al. | Jan 1996 | A |
5568200 | Pearlstein et al. | Oct 1996 | A |
5576767 | Lee et al. | Nov 1996 | A |
5586285 | Hasbun et al. | Dec 1996 | A |
5717394 | Schwartz et al. | Feb 1998 | A |
5748789 | Lee et al. | May 1998 | A |
5777812 | Kim | Jul 1998 | A |
5828848 | MacCormack et al. | Oct 1998 | A |
5978030 | Jung | Nov 1999 | A |
6072537 | Gurner | Jun 2000 | A |
6075875 | Gu | Jun 2000 | A |
6084912 | Reitmeier et al. | Jul 2000 | A |
6115076 | Linzer | Sep 2000 | A |
6178205 | Cheung et al. | Jan 2001 | B1 |
6181822 | Miller et al. | Jan 2001 | B1 |
6222174 | Tullis et al. | Apr 2001 | B1 |
6236682 | Ota et al. | May 2001 | B1 |
6292837 | Miller et al. | Sep 2001 | B1 |
6327304 | Miller et al. | Dec 2001 | B1 |
6330281 | Mann | Dec 2001 | B1 |
6335985 | Sambonsugi | Jan 2002 | B1 |
6370267 | Miller et al. | Apr 2002 | B1 |
6658618 | Gu et al. | Dec 2003 | B1 |
6718308 | Nolting | Apr 2004 | B1 |
6774924 | Kato et al. | Aug 2004 | B2 |
6774929 | Kopp | Aug 2004 | B1 |
6895051 | Nieweglowski et al. | May 2005 | B2 |
6909749 | Yang et al. | Jun 2005 | B2 |
6956573 | Bergen | Oct 2005 | B1 |
7010034 | Bruls | Mar 2006 | B2 |
7027654 | Ameres et al. | Apr 2006 | B1 |
7050503 | Prakash et al. | May 2006 | B2 |
7085319 | Prakash et al. | Aug 2006 | B2 |
7177360 | Koto et al. | Feb 2007 | B2 |
7221710 | Lee | May 2007 | B2 |
7253831 | Gu | Aug 2007 | B2 |
7346106 | Jiang | Mar 2008 | B1 |
7406053 | Cheung et al. | Jul 2008 | B2 |
7430261 | Forest et al. | Sep 2008 | B2 |
7499492 | Ameres et al. | Mar 2009 | B1 |
7515637 | Payson | Apr 2009 | B2 |
7529199 | Wijnands et al. | May 2009 | B1 |
7532808 | Lainema | May 2009 | B2 |
7606310 | Ameres et al. | Oct 2009 | B1 |
7671894 | Yea et al. | Mar 2010 | B2 |
7681104 | Sim-Tang | Mar 2010 | B1 |
7728840 | Hung | Jun 2010 | B2 |
7734821 | Wang et al. | Jun 2010 | B2 |
7773677 | Lee | Aug 2010 | B2 |
7974233 | Banerjee | Jul 2011 | B2 |
8005137 | Han et al. | Aug 2011 | B2 |
8111752 | Kumar et al. | Feb 2012 | B2 |
8284846 | Lamy-Bergot et al. | Oct 2012 | B2 |
8310521 | Zhang et al. | Nov 2012 | B2 |
8638854 | Bankoski et al. | Jan 2014 | B1 |
8718140 | Cai | May 2014 | B1 |
8965140 | Xu | Feb 2015 | B1 |
9014266 | Gu et al. | Apr 2015 | B1 |
20020067768 | Hurst | Jun 2002 | A1 |
20020071485 | Caglar | Jun 2002 | A1 |
20020118295 | Karczewicz et al. | Aug 2002 | A1 |
20020159634 | Lipton | Oct 2002 | A1 |
20030081233 | Obrador | May 2003 | A1 |
20030165331 | Van Der Schaar | Sep 2003 | A1 |
20030198382 | Chen | Oct 2003 | A1 |
20030202594 | Lainema | Oct 2003 | A1 |
20030202598 | Turaga | Oct 2003 | A1 |
20030206193 | Sato | Nov 2003 | A1 |
20030215014 | Koto et al. | Nov 2003 | A1 |
20040036782 | Knapp | Feb 2004 | A1 |
20040037357 | Bagni et al. | Feb 2004 | A1 |
20040042549 | Huang et al. | Mar 2004 | A1 |
20040075749 | Kondo | Apr 2004 | A1 |
20040080669 | Nagai et al. | Apr 2004 | A1 |
20040184533 | Wang | Sep 2004 | A1 |
20040202252 | Lee | Oct 2004 | A1 |
20040228410 | Ameres et al. | Nov 2004 | A1 |
20050008240 | Banerji et al. | Jan 2005 | A1 |
20050031030 | Kadono et al. | Feb 2005 | A1 |
20050123056 | Wang et al. | Jun 2005 | A1 |
20050147167 | Dumitras et al. | Jul 2005 | A1 |
20050185045 | Kamariotis | Aug 2005 | A1 |
20050207490 | Wang et al. | Sep 2005 | A1 |
20050226321 | Chen | Oct 2005 | A1 |
20050259736 | Payson | Nov 2005 | A1 |
20050286629 | Dumitras et al. | Dec 2005 | A1 |
20060050149 | Lang et al. | Mar 2006 | A1 |
20060050695 | Wang | Mar 2006 | A1 |
20060083300 | Han et al. | Apr 2006 | A1 |
20060115166 | Sung | Jun 2006 | A1 |
20060126734 | Wiegand et al. | Jun 2006 | A1 |
20060126952 | Suzuki | Jun 2006 | A1 |
20060146830 | Lin | Jul 2006 | A1 |
20060159174 | Chono | Jul 2006 | A1 |
20060198443 | Liang et al. | Sep 2006 | A1 |
20060216003 | LeComte | Sep 2006 | A1 |
20060285598 | Tulkki | Dec 2006 | A1 |
20070009034 | Tulkki | Jan 2007 | A1 |
20070019730 | Lee et al. | Jan 2007 | A1 |
20070073779 | Walker et al. | Mar 2007 | A1 |
20070076982 | Petrescu | Apr 2007 | A1 |
20070092010 | Huang et al. | Apr 2007 | A1 |
20070109409 | Yea et al. | May 2007 | A1 |
20070130755 | Duquette et al. | Jun 2007 | A1 |
20070177665 | Zhou et al. | Aug 2007 | A1 |
20070180459 | Smithpeters | Aug 2007 | A1 |
20070199011 | Zhang | Aug 2007 | A1 |
20070206673 | Cipolli et al. | Sep 2007 | A1 |
20070211798 | Boyce et al. | Sep 2007 | A1 |
20070230563 | Tian et al. | Oct 2007 | A1 |
20070253479 | Mukherjee | Nov 2007 | A1 |
20080089595 | Park | Apr 2008 | A1 |
20080112486 | Takahashi et al. | May 2008 | A1 |
20080115185 | Qiu et al. | May 2008 | A1 |
20080123747 | Lee | May 2008 | A1 |
20080130755 | Loukas et al. | Jun 2008 | A1 |
20080130988 | Moriya et al. | Jun 2008 | A1 |
20080181314 | Tsuda | Jul 2008 | A1 |
20080205523 | Monro | Aug 2008 | A1 |
20080219351 | Kim et al. | Sep 2008 | A1 |
20080247463 | Buttimer et al. | Oct 2008 | A1 |
20080273599 | Park et al. | Nov 2008 | A1 |
20080317138 | Jia | Dec 2008 | A1 |
20090028247 | Suh et al. | Jan 2009 | A1 |
20090052894 | Murata | Feb 2009 | A1 |
20090103610 | Puri | Apr 2009 | A1 |
20090122859 | Yasuda | May 2009 | A1 |
20090147856 | Song et al. | Jun 2009 | A1 |
20090148058 | Dane et al. | Jun 2009 | A1 |
20090154563 | Hong et al. | Jun 2009 | A1 |
20090175330 | Chen et al. | Jul 2009 | A1 |
20090180532 | Zhang | Jul 2009 | A1 |
20090180533 | Bushell | Jul 2009 | A1 |
20090238269 | Pandit et al. | Sep 2009 | A1 |
20090238277 | Meehan | Sep 2009 | A1 |
20090323801 | Imajou | Dec 2009 | A1 |
20090323809 | Raveendran | Dec 2009 | A1 |
20090327918 | Aaron | Dec 2009 | A1 |
20100008424 | Pace | Jan 2010 | A1 |
20100020875 | Macq | Jan 2010 | A1 |
20100061444 | Wilkins et al. | Mar 2010 | A1 |
20100061461 | Bankoski et al. | Mar 2010 | A1 |
20100061645 | Wilkins et al. | Mar 2010 | A1 |
20100086027 | Panchal et al. | Apr 2010 | A1 |
20100104016 | Aoki | Apr 2010 | A1 |
20100149422 | Samuelsson | Jun 2010 | A1 |
20100171812 | Kim | Jul 2010 | A1 |
20100195721 | Wu et al. | Aug 2010 | A1 |
20100239015 | Wang et al. | Sep 2010 | A1 |
20100303150 | Hsiung | Dec 2010 | A1 |
20110007797 | Palmer | Jan 2011 | A1 |
20110069751 | Budagavi | Mar 2011 | A1 |
20110090960 | Leontaris et al. | Apr 2011 | A1 |
20110164684 | Sato et al. | Jul 2011 | A1 |
20120063513 | Grange et al. | Mar 2012 | A1 |
20120092452 | Tourapis et al. | Apr 2012 | A1 |
20120189058 | Chen et al. | Jul 2012 | A1 |
20120257677 | Bankoski et al. | Oct 2012 | A1 |
20120328005 | Yu et al. | Dec 2012 | A1 |
20130022099 | Liu et al. | Jan 2013 | A1 |
20130044817 | Bankoski et al. | Feb 2013 | A1 |
20130114695 | Joshi et al. | May 2013 | A1 |
20130242046 | Zhang et al. | Sep 2013 | A1 |
20130279589 | Gu et al. | Oct 2013 | A1 |
20140105286 | Caglar | Apr 2014 | A1 |
20140169449 | Samuelsson et al. | Jun 2014 | A1 |
Number | Date | Country |
---|---|---|
1980334 | Jun 2007 | CN |
1496706 | Jan 2005 | EP |
2403618 | Jan 2005 | GB |
2007325304 | Dec 2007 | JP |
20080064355 | Jul 2008 | KR |
200412157 | Jul 2004 | TW |
03084235 | Oct 2003 | WO |
WO-2005104552 | Nov 2005 | WO |
2008008331 | Jan 2008 | WO |
2011005624 | Jan 2011 | WO |
2012102973 | Aug 2012 | WO |
Entry |
---|
Bettio, Fabio, Enrico Gobbetti, Fabio Marton, and Giovanni Pintore. “High-quality networked terrain rendering from compressed bitstreams.” In Proceedings of the twelfth international conference on 3D web technology, pp. 37-44. 2007. (Year: 2007). |
Yi, Haoran, Deepu Rajan, and Liang-Tien Chia. “A motion-based scene tree for browsing and retrieval of compressed videos.” Information Systems 31, No. 7 (2006): 638-658. (Year: 2006). |
Kondi, Lisimachos Paul, and Aggelos K. Katsaggelos. “On the encoding of the anchor frame in video coding.” IEEE transactions on consumer electronics 43, No. 3 (1997): 279-285. (Year: 1997). |
Chiew, Tuan-Kiang, James TH Chung-How, David R. Bull, and C. Nishan Canagarajah. “Rapid block-based global motion estimation and its applications.” In 2002 Digest of Technical Papers. International Conference on Consumer Electronics (IEEE Cat. No. 02CH37300), pp. 228-229. IEEE, 2002. (Year: 2002). |
Pejhan, Sassan, Ti-Hao Chiang, and Ya-Qin Zhang. “Dynamic frame rate control for video streams.” In Proceedings of the seventh ACM international conference on Multimedia (Part 1), pp. 141-144. 1999. (Year: 1999). |
Worrall, Stewart T., Abdul Hamid Sadka, Peter Sweeney, and Ahmet M. Kondoz. “Motion adaptive error resilient encoding for mpeg-4.” In 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No. 01CH37221), vol. 3, pp. 1389-1392. IEEE, 2001. (Year: 2001). |
Chu, Hao-hua, Lintian Qiao, Klara Nahrstedt, Hua Wang, and Ritesh Jain. “A secure multicast protocol with copyright protection.” ACM SIGCOMM Computer Communication Review 32, No. 2 (2002): 42-60. (Year: 2002). |
Lee, Sung-Hee, Yoon-Cheol Shin, Seungjoon Yang, Heon-Hee Moon, and Rae-Hong Park. “Adaptive motion-compensated interpolation for frame rate up-conversion.” IEEE Transactions on Consumer Electronics 48, No. 3 (2002): 444-450. (Year: 2002). |
Fang S et al.: "The Construction of Combined List for HEVC", 6. JCT-VC Meeting; 97. MPEG Meeting; Jul. 14, 2011-Jul. 22, 2011; Torino; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/, No. JCTVC-F573, Jul. 16, 2011. |
Hendry et al., “AHG21: Explicit Reference Pictures Signaling with Output Latency Count Scheme”, 7 JCT-VC Meeting; 98 MPEG Meeting Nov. 21, 2011-Nov. 30, 2011; Geneva. |
High efficiency video coding (HEVC) text specification draft 6, JCTVC-H1003, JCT-VC 7th meeting, Geneva, Switzerland, Nov. 21-30, 2011. |
Sjoberg R. et al., "Absolute signaling of reference pictures", 6. JCT-VC Meeting; 97. MPEG Meeting; Jul. 14, 2011-Jul. 22, 2011; Torino (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16). |
Wang et al.,On reference picture list construction for uni-predicted partitions, JCT-VC Meeting, JCTVCE348, MPEG Meeting, Geneva, Mar. 11, 2011. |
Shen, L., et al. Fast mode decision for multiview video coding, 2009, IEEE, entire document. |
Yang, H. et al., Optimizing Motion Compensated Prediction for Error Resilient Video Coding, IEEE Transactions on Image Processing, vol. 19, No. 1, Jan. 2010, entire document. |
Bo Hong: “Introduction to H.264”, Internet citation, XP002952898, pp. 5, 14-15, Nov. 22, 2002. |
Borman S. et al., “Super-Resolution From Image Sequences—A Review”, Proceedings of Midwest Symposium on Circuits and Systems, pp. 374-378, Aug. 9, 1998. |
Chen, Michael C., et al.; “Design and Optimization of a Differentially Coded Variable Block Size Motion Compensation System”, IEEE 1996, 4 pp. |
Chen, Xing C., et al.; “Quadtree Based Adaptive Lossy Coding of Motion Vectors”, IEEE 1996, 4 pp. |
Ebrahimi, Touradj, et al.; “Joint motion estimation and segmentation for very low bitrate video coding”, SPIE vol. 2501, 1995, 12 pp. |
Feng Wu et al, “Efficient Background Video Coding with Static Sprite Generation and Arbitrary-Shape Spatial Prediction Techniques”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, No. 5, pp. 394-405, May 1, 2003. |
Girod B. et al., “3-D Image Models and Compression: Synthetic Hybrid or Natural Fit?”, International Conference on Image Processing, vol. 2, pp. 525-529, Oct. 24, 1999. |
Guillotel, Philippe, et al.; “Comparison of motion vector coding techniques”, SPIE vol. 2308, 1994, 11 pp. |
Hiroshi Watanabe et al., “Sprite Coding in Object-Based Video Coding Standard: MPEG-4”, Proceedings of Multiconference on Systemics, Cybernetics and Informatics, vol. 13, pp. 420-425, Jul. 1, 2001. |
International Search Report and Written opinion, from related matter, International Application No. PCT/US2009/056448 dated Aug. 3, 2010. |
International Search Report for related matter PCT/US2013/037058 dated Dec. 16, 2013. |
Irani M et al, “Video Compression Using Mosaic Representations”, Signal Processing Image Communication, vol. 7 No. 4., pp. 529-552, Nov. 1, 1995. |
Karczewicz, Marta, et al.; “Video Coding Using Motion Compensation With Polynomial Motion Vector Fields”, IEEE COMSOC EURASIP, First International Workshop on Wireless Image/Video Communications—Sep. 1996, 6 pp. |
Kim, Jong Won, et al.; “On the Hierarchical Variable Block Size Motion Estimation Technique for Motion Sequence Coding”, SPIE Visual Communication and Image Processing 1993, Cambridge, MA, Nov. 8, 1993, 29 pp. |
Liu, Bede, et al.; “A simple method to segment motion field for video coding”, SPIE vol. 1818, Visual Communications and Image Processing 1992, 10 pp. |
Liu, Bede, et al.; “New Fast Algorithms for the Estimation of Block Motion Vectors”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 2, Apr. 1993, 10 pp. |
Liu, P., et al., "A fast and novel intra and inter modes decision prediction algorithm for H.264/AVC based on the characteristics of macro-block", 2009 Fifth Intl. Conf. on Intelligent Information Hiding and Multimedia Signal Processing, pp. 286-289; http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5337483. |
Luttrell, Max, et al.; “Simulation Results for Modified Error Resilient Syntax With Data Partitioning and RVLC”, ITU—Telecommunications Standardization Sector, Study Group 16, Video Coding Experts Group (Question 15), Sixth Meeting: Seoul, SouthKorea, Nov. 2, 1998, 34 pp. |
Martin, Graham R., et al.; “Reduced Entropy Motion Compensation Using Variable Sized Blocks”, SPIE vol. 3024, 1997, 10 pp. |
“Introduction to Video Coding Part 1: Transform Coding”, Mozilla, Mar. 2012, 171 pp. |
Nicolas, H., et al.; “Region-based motion estimation using deterministic relaxation schemes for image sequence coding”, IEEE 1992, 4 pp. |
Nokia, Inc., Nokia Research Center, “MVC Decoder Description”, Telecommunication Standardization Sector, Study Period 1997-2000, Geneva, Feb. 7, 2000, 99 pp. |
ON2 Technologies Inc., White Paper TrueMotion VP7 Video Codec, Jan. 10, 2005, 13 pages, Document Version: 1.0, Clifton Park, New York. |
ON2 Technologies, Inc., White Paper On2's TrueMotion VP7 Video Codec, Jul. 11, 2008, pp. 7 pages, Document Version: 1.0, Clifton Park, New York. |
Orchard, Michael T.; “Exploiting Scene Structure in Video Coding”, IEEE 1991, 5 pp. |
Orchard, Michael T.; “Predictive Motion-Field Segmentation for Image Sequence Coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 1, Feb. 1993, 17 pp. |
“Overview VP7 Data Format and Decoder”, Version 1.5, On2 Technologies, Inc., Mar. 28, 2005, 65 pp. |
Park, Jun Sung, et al., “Selective Intra Prediction Mode Decision for H.264/AVC Encoders”, World Academy of Science, Engineering and Technology 13, (2006). |
Schiller, H., et al.; “Efficient Coding of Side Information In A Low Bitrate Hybrid Image Coder”, Signal Processing 19 (1990) Elsevier Science Publishers B.V. 61-73, 13 pp. |
Schuster, Guido M., et al.; “A Video Compression Scheme With Optimal Bit Allocation Among Segmentation, Motion, and Residual Error”, IEEE Transactions on Image Processing, vol. 6, No. 11, Nov. 1997, 16 pp. |
Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Video coding for low bit rate communication, International Telecommunication Union, ITU-T Recommendation H.263, Feb. 1998, 167 pp. |
Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Amendment 2: New profiles for professional applications, International Telecommunication Union, Apr. 2007, 75 pp. |
Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services, Amendment 1: Support of additional colour spaces and removal of the High 4:4:4 Profile, International Telecommunication Union, Jun. 2006, 16 pp. |
Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services, Version 3, International Telecommunication Union, Mar. 2005, 343 pp. |
Steliaros, Michael K., et al.; “Locally-accurate motion estimation for object-based video coding”, SPIE vol. 3309, 1997, 11 pp. |
Stiller, Christoph; “Motion-Estimation for Coding of Moving Video at 8 kbit/s with Gibbs Modeled Vectorfield Smoothing”, SPIE vol. 1360 Visual Communications and Image Processing 1990, 9 pp. |
Strobach, Peter; “Tree-Structured Scene Adaptive Coder”, IEEE Transactions on Communications, vol. 38, No. 4, Apr. 1990, 10 pp. |
VP6 Bitstream and Decoder Specification, Version 1.03, (On2 Technologies, Inc.), Dated Oct. 29, 2007. |
Wiegand, Thomas, et al.; Long-Term Memory Motion-Compensated Prediction, date unknown. |
Wiegand, Thomas, et al.; “Rate-Distortion Optimized Mode Selection for Very Low Bit Rate Video Coding and the Emerging H.263 Standard”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, No. 2, Apr. 1996, 9 pp. |
Wright, R. Glenn, et al.; “Multimedia—Electronic Technical Manual for ATE”, IEEE 1996, 3 pp. |
Zhang, Kui, et al.; “Variable Block Size Video Coding With Motion Prediction and Motion Segmentation”, SPIE vol. 2419, 1995, 9 pp. |
Zhi Liu, Zhaoyang Zhang, Liquan Shen, Mosaic Generation in H.264 Compressed Domain, IEEE 2006. |
Athanasios, et al.,“Weighted prediction methods for improved motion compensation,” Image Processing (ICIP), 2009 16th IEEE International Conference, Nov. 7, 2009, pp. 1029-1032. |
Bankoski et al. "Technical Overview of VP8, an Open Source Video Codec for the Web". Dated Jul. 11, 2011. |
Bankoski et al., “VP8 Data Format and Decoding Guide”, Independent Submission RFC 6389, Nov. 2011, 305 pp. |
Bankoski et al., “VP8 Data Format and Decoding Guide draft-bankoski-vp8-bitstream-02”, Network Working Group, Internet-Draft, May 18, 2011, 288 pp. |
Carreira, Joao et al. “Constrained Parametric Min-Cuts for Automatic Object Segmentation”, 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, Jun. 13-18, 2010. |
Chong Soon Lim et al. Reference Lists for B Pictures Under Low Delay Constraints, 95. MPEG Meeting; Jan. 24, 2011; Jan. 21, 2011, 2011. |
EP127356814 Search Report dated Oct. 30, 2014. |
Number | Date | Country | |
---|---|---|---|
20190124363 A1 | Apr 2019 | US |
Number | Date | Country | |
---|---|---|---|
61096189 | Sep 2008 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15186800 | Jun 2016 | US |
Child | 16221853 | US | |
Parent | 13658396 | Oct 2012 | US |
Child | 15186800 | US | |
Parent | 12329041 | Dec 2008 | US |
Child | 13658396 | US |