Modern video transmission and display systems, particularly those that present high-definition content, require significant data compression to produce a visually acceptable motion picture, because transmission media simply cannot carry an uncompressed sequence of video frames at a rate fast enough to appear as continuous motion to the human eye. At the same time, and again to produce a visually acceptable picture, the compression technique used should not unduly sacrifice image quality by discarding too much frame data.
To achieve these dual, and conflicting, goals, video compression and encoding standards such as MPEG and H.264 take advantage of temporal redundancy in the sequence of video frames. In other words, in the vast majority of video sequences of interest to a viewer, adjacent frames typically show the same objects or features, which may move slightly from one frame to another due to the movement of an object in the scene being shot (producing local motion in a frame), the movement of the camera shooting the scene (producing global motion), or both.
Video compression standards employ motion estimation to define regions in an image, which may correspond to objects, and to associate with those regions a motion vector that describes the inter-frame movement of the content in each region. This avoids redundant encoding and transmission of objects or patterns that appear in more than one sequential frame, despite appearing at slightly different locations in those frames. Motion vectors may be represented by a simple translational model or by more complex models that approximate the motion of a real video camera, such as rotation, translation, and zoom. Accordingly, motion estimation is the process of calculating and encoding motion vectors as a substitute for duplicating the encoding of similar information in sequential frames.
Though motion vectors may relate to the whole image, more often they relate to small regions of the image, such as rectangular blocks, arbitrary shapes, boundaries of objects, or even individual pixels. There are various methods for finding motion vectors. One popular method is block matching, in which the current image is subdivided into rectangular blocks of pixels, such as 4×4 pixels, 4×8 pixels, 8×8 pixels, 16×16 pixels, etc., and a motion vector (or displacement vector) is estimated for each block by searching a pre-defined search region of a reference frame for the closest-matching block.
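Purely for illustration, a minimal block-matching sketch follows, using the sum of absolute differences (SAD) as the matching criterion. The frame representation, block size, search range, and function names are assumptions made for this example, not details drawn from the description above.

```python
# Block-matching sketch: exhaustive search for the displacement that
# minimizes the sum of absolute differences (SAD) between the current
# block and candidate blocks in a reference frame.

def sad(cur, ref, cy, cx, ry, rx, n):
    """SAD between the n x n block of `cur` at (cy, cx) and the n x n
    block of `ref` at (ry, rx)."""
    return sum(abs(cur[cy + i][cx + j] - ref[ry + i][rx + j])
               for i in range(n) for j in range(n))

def estimate_motion_vector(cur, ref, cy, cx, n=8, search=7):
    """Full search within +/- `search` pixels; returns the displacement
    (dy, dx) of the closest-matching block in the reference frame."""
    h, w = len(ref), len(ref[0])
    best_cost, best_mv = float('inf'), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = cy + dy, cx + dx
            if 0 <= ry <= h - n and 0 <= rx <= w - n:
                cost = sad(cur, ref, cy, cx, ry, rx, n)
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

A production encoder would normally replace the exhaustive scan with a faster search pattern, but the exhaustive form shows the matching criterion most directly.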
As implied by this discussion, the use of motion vectors improves coding efficiency for any particular block of an image by permitting a block to be encoded only in terms of a motion vector pointing to a corresponding block in another frame, and a “residual” or differential between the target and reference blocks. The goal is therefore to determine a motion vector for a block in a way that minimizes the differential that needs to be encoded. Accordingly, numerous variations of block matching exist, differing in the definition of the size and placement of blocks, the method of searching, the criterion for matching blocks in the current and reference frame, and several other aspects.
With conventional motion compensation, an encoder performs motion estimation and signals the motion vectors as part of the bitstream. The bits spent on sending motion vectors can account for a significant portion of the overall bit budget, especially for low bit rate applications. Recently, motion vector competition (MVC) techniques have been proposed to reduce the amount of motion information in the compressed bitstream. MVC improves the coding of motion vector data by differentially encoding the motion vectors themselves in terms of a motion vector predictor and a motion vector differential. The motion vector predictor is usually selected by the encoder, so as to optimize rate-distortion performance, from a number of candidates consisting of previously encoded motion vectors for adjacent blocks in the same frame, a subset of motion vectors in a preceding frame, or both. In other words, just as the use of a motion vector and a differential improves coding efficiency of block data by eliminating redundancies between information in sequential frames, the coding of motion vectors can exploit redundancies in situations where motion vectors between sequential frames do not change drastically, by identifying an optimal predictor, from a limited set of previously-encoded candidates, so as to minimize the bit length of the differential. The predictor set usually contains both spatial motion vector neighbors and temporally co-located motion vectors, and possibly spatiotemporal vectors.
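As a sketch of the predictor-selection idea, assuming a toy cost model (the bit lengths of the differential's components) in place of a true rate-distortion measure, and with all names invented for the example:

```python
# Motion vector competition sketch: pick, from a small candidate list,
# the predictor that minimizes an approximate bit cost of the resulting
# motion vector differential.

def diff_cost(mv, pred):
    """Toy rate estimate: bits needed for each differential component."""
    dy, dx = mv[0] - pred[0], mv[1] - pred[1]
    return abs(dy).bit_length() + abs(dx).bit_length()

def select_predictor(mv, candidates):
    """Return (index of the chosen predictor, differential to encode)."""
    best = min(range(len(candidates)),
               key=lambda i: diff_cost(mv, candidates[i]))
    p = candidates[best]
    return best, (mv[0] - p[0], mv[1] - p[1])

# Candidates might be drawn from the left/above neighbors and the
# co-located block of a preceding frame:
idx, mvd = select_predictor((5, -3), [(4, -2), (0, 0), (6, -3)])
# idx == 2, mvd == (-1, 0): only the predictor index and the small
# differential need to be signaled.
```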
Even using motion vector competition techniques when encoding video, however, the necessary bit rate to preserve a desired quality is often too high for the transmission medium used to transmit the video to a decoder. What is needed, therefore, is an improved encoding system for video transmission.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
Referring to
As can be seen in
It should be understood that the foregoing illustration was simplified in that different block sizes may be used, each block may represent a single pixel, and many more motion vectors could be included in the candidate set. For example, all motion vectors previously calculated in the current frame could be included in the candidate set, as well as any motion vectors calculated for preceding frames. Moreover, the candidate set may include a desired number of arbitrary motion vectors useful to capture large and sudden motions in the scene.
Referring to
Note that none of the symbols is a prefix of another symbol, so that the decoder 12 can correctly parse the received bitstream (in this example, by reading each symbol until a zero is received) and decode it with reference to a corresponding table 16. Moreover, the encoder and decoder will preferably collect statistics as the bitstream is encoded and decoded and rearrange the assignment of symbols to the motion vector candidates, in the respective tables 14 and 16, so that at any given time the motion vector occurring most frequently receives the shortest symbol, and so on. This process is generally referred to as entropy coding, and will usually result in significant, lossless compression of the bitstream. The encoder 10 and the decoder 12 use the same methodology to construct and update their respective tables 14 and 16, initialized identically at the beginning of the bitstream, so that for every symbol the table 14 used to encode that symbol is identical to the table 16 used to decode it.
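The following sketch illustrates such a pair of synchronized adaptive tables, using prefix-free unary-style symbols (k ones terminated by a zero) and frequency-based reordering. The class and its interface are assumptions for the example, not the structure of tables 14 and 16 themselves.

```python
# Adaptive VLC table sketch: candidate k is coded as k ones followed by a
# terminating zero (prefix-free), and the table re-sorts by observed
# frequency so the most frequent candidate always gets the shortest symbol.
# Encoder and decoder each hold one of these and apply identical updates.

class AdaptiveVLCTable:
    def __init__(self, candidates):
        self.order = list(candidates)            # current symbol assignment
        self.freq = {c: 0 for c in candidates}   # running statistics

    def encode(self, candidate):
        k = self.order.index(candidate)
        bits = '1' * k + '0'
        self._update(candidate)
        return bits

    def decode(self, bits):
        k = 0
        while bits[k] == '1':                    # parse: stop at the zero
            k += 1
        candidate = self.order[k]
        self._update(candidate)
        return candidate, bits[k + 1:]           # remainder of the stream

    def _update(self, candidate):
        self.freq[candidate] += 1
        # identical, stable sort keeps both endpoints' tables in lockstep
        self.order.sort(key=lambda c: -self.freq[c])
```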
Even with entropy coding, the system shown in
First, the set of candidate motion vector predictors may be trimmed to eliminate duplicate vectors. For example, in
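Although the figure-based example is not reproduced here, the trimming just described, together with the truncated coding of the final symbol mentioned later in this description (omitting the terminating bit once the set size is known), might be sketched as follows; the details are assumptions consistent with the text rather than a reproduction of any particular standard:

```python
# Sketch of two overhead reductions: duplicate candidates are trimmed,
# and because the trimmed set size is known, the last index can omit its
# terminating zero (a truncated unary code).

def trim_duplicates(candidates):
    seen, trimmed = set(), []
    for mv in candidates:
        if mv not in seen:
            seen.add(mv)
            trimmed.append(mv)
    return trimmed

def truncated_unary(index, set_size):
    if index == set_size - 1:
        return '1' * index                # final symbol: terminator dropped
    return '1' * index + '0'

cands = trim_duplicates([(1, 0), (1, 0), (0, 0), (1, 0)])  # [(1,0), (0,0)]
code = truncated_unary(1, len(cands))                      # '1' rather than '10'
```

Note that the decoder can interpret the shortened code only if it already knows the trimmed set size, which is exactly the parsing dependency discussed next.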
These two additional techniques may significantly reduce the overhead of signaling the selected motion vector predictor. However, a consequence of these techniques is that the entropy decoding of the motion vector predictor will depend on the motion predictor set. That is, a bitstream cannot be correctly parsed before the complete set of motion predictors is available and correctly constructed. Such a constraint has a severe impact on the decoder's error resilience, resulting in two types of disadvantages. The first is temporal dependency: if a picture is corrupted or lost, decoding of subsequent pictures could fail in the parsing stage. The second is spatial dependency: if a certain area of a picture is corrupted, decoding of subsequent areas in the same picture could fail in the parsing stage.
This may be a significant disadvantage. If motion vector data from either a prior frame or the current frame is lost, but is needed to reconstruct the full candidate set of motion vectors, then the decoder will be unable even to parse the bitstream until an independently-coded frame is reached. This is a more severe consequence than the mere inability to decode correctly parsed data due to the loss of information used to code motion vectors, differential motion vectors, and residuals, because in that latter circumstance any subsequently received parsed data that does not rely on the missing data can still be decoded. Once the decoder cannot parse the bitstream, however, it has no way of decoding any subsequent symbols.
Though counterintuitive, the tradeoff between error resilience and overhead reduction is not intractable. The present inventors further realized that, just as coding efficiency gains are realized by signaling a selected one from a candidate set of motion vectors, coding efficiency gains could theoretically be achieved by signaling a selected one of a group of ordered candidate sets. This approach could work not only in tandem with techniques such as motion vector trimming and truncated unary codes, but actually as a substitute for those techniques, i.e. preserving spatial and temporal independence when parsing the bitstream by neither trimming duplicate candidate motion vectors nor truncating the highest-bit-length symbol.
Specifically, referring to
Implicit in the foregoing discussion is the assumption that there is some non-random distribution among the plurality of all possible candidate sets of motion vectors. If, for example, the respective individual candidate sets simply comprised all permutations of the symbols included in each, randomly distributed with respect to each other, there would be no reason to expect a net gain in coding efficiency, because the number of candidate sets needed to guarantee that a sufficient number of candidate motion vectors appear high enough in the table to benefit from a reduced code length would be too large. Essentially, whatever efficiency is gained in coding the selected candidate motion vector would be lost in the overhead of coding the symbol associated with the particular candidate set. This makes sense; just as the entropy coding of motion vectors works due to the predictable spatial and temporal relationship between the motion vectors, making some candidate motion vectors more likely than others, the disclosed nested entropy encoding structure would be expected to further compress the bitstream only if some of the possible permutations of symbols in the candidate set are more likely than others, such that the higher-code-length candidate sets are used less often than the lower-code-length candidate sets.
Upon investigation, the present inventors discovered that, not only does the disclosed nested entropy encoding structure in fact improve coding efficiency, but the syntax elements of neighboring pixels or blocks of pixels are correlated with the probabilities of the ordering of candidate motion vectors in a set. Referring to
With the syntax symbol from the syntax model 24, the encoder 10 may then use an applicable motion vector symbol for the selected motion vector for the current block from a VLC table 28a, 28b, 28c, 28d, etc., and encode the motion vector symbol in a bitstream to the decoder 12. The encoder 10 also updates the order of the motion vector symbols in the VLC table used, based on the selected symbol. In one embodiment, any change in the frequency distribution of symbols in a table results in the symbols being reordered. In an alternate embodiment, the encoder 10 (and the decoder 12) keeps track of the most frequently-occurring symbol in the un-reordered set and ensures that that symbol is at the top of the table, i.e. that it has the smallest code length. Note that, in this example, because the syntax symbol is determined solely by the syntax of previously-encoded data, the encoder need not encode the syntax symbol along with the motion vector symbol, so long as the decoder 12 uses the same syntax model to determine the particular VLC table 30a, 30b, 30c, or 30d from which to extract the received motion vector symbol. In other words, when the encoder 10 uses the syntax of the previously-encoded data to differentiate the VLC tables, updating the order of symbols in those tables in the process, a very high degree of coding efficiency can be achieved.
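A minimal sketch of this context-dependent table selection follows. The context derivation (here, simply whether each of the left and above neighbors used a temporal predictor), the table contents, and the move-to-front update standing in for frequency-based reordering are all assumptions for the example:

```python
# Context-dependent VLC table selection: neighbor syntax picks one of four
# tables (standing in for tables 28a-28d / 30a-30d), and the chosen table
# adapts after each use.

CANDIDATES = ['mv_left', 'mv_above', 'mv_colocated']
tables = [list(CANDIDATES) for _ in range(4)]   # one order per context

def syntax_context(left_temporal, above_temporal):
    """Derive the 'syntax symbol' (a table index 0-3) from neighbor syntax."""
    return (1 if left_temporal else 0) + (2 if above_temporal else 0)

def encode_selection(selected, left_temporal, above_temporal):
    # The context index is never written to the bitstream; the decoder
    # recomputes it from the same previously decoded neighbor syntax.
    ctx = syntax_context(left_temporal, above_temporal)
    k = tables[ctx].index(selected)
    tables[ctx].insert(0, tables[ctx].pop(k))   # simple adaptive reorder
    return '1' * k + '0'                        # unary symbol, as above
```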
When the decoder 12 receives a coded bitstream from the encoder 10, the decoder parses the bitstream to determine the relevant VLC table for a received symbol, using a syntax model 26 if available, to decode the received symbols to identify the selected motion vector from the candidate set. The decoder also updates the respective VLC tables in the same manner as does the encoder 10.
The motion vector predictor set may contain candidate motion vectors spatially predictive of a selected motion vector (i.e. candidates in the same frame as the current block), candidate motion vectors temporally predictive of a selected motion vector (i.e. candidates at the co-located block in the frame preceding the current block), and candidate motion vectors spatiotemporally predictive of a selected motion vector (i.e. candidates in the frame preceding the current block spatially offset from the co-located block). As noted previously, the disclosed nested entropy encoding structure permits a decoder to parse a bitstream without trimming candidate motion vectors or truncating code symbols, thereby preserving spatial and temporal independence in the parsing process, and preserving error resilience while at the same time achieving significant coding efficiencies. Alternatively, the nested entropy encoding structure can be used in tandem with the techniques of trimming candidate motion vectors or truncating code symbols, while at least partially preserving error resilience.
For example, referring to
As another example, a predefined rule may prevent the candidate motion vector set trimming module 42 from trimming motion vector predictors derived from regions that are located in different slices, so as to preserve spatial independence. As an additional embodiment, a predefined rule may prevent the candidate motion vector set trimming module 42 from trimming motion vector predictors derived from regions that are located in different entropy slices, where an entropy slice is a unit of the bit-stream that may be parsed without reference to other data in the current frame.
These two rules are stated for purposes of illustration only, as additional rules may be created as desired.
At step 56, according to predefined rules of the rule set, selected candidate motion vectors may be selectively removed from the subset of duplicates. It is this step that enables spatial and/or temporal independence to be preserved. Optionally, candidate motion vectors can also be added to the subset of duplicate motion vectors, for reasons explained in more detail below. Stated on a conceptual level, the purpose of steps 54 and 56 is simply to apply a rule set to identify those motion vectors that will be trimmed from the full candidate set. Once this subset has been identified, the candidate motion vectors in it are trimmed at step 58, and the encoder then encodes the selected motion vector, from those remaining, based on the size of the trimmed set at step 60.
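Stated as code, steps 54 through 58 might look like the following sketch, in which each candidate carries an assumed origin record and a rule must approve the trimming of every duplicate; all names and data structures are illustrative:

```python
# Rule-constrained trimming: mark duplicates (step 54), let predefined
# rules veto individual removals (step 56), then trim (step 58).

def trim_candidates(candidates, rules):
    """candidates: list of (mv, origin) pairs, where `origin` records
    where the predictor was derived from (an assumed structure)."""
    seen, to_trim = set(), []
    for i, (mv, origin) in enumerate(candidates):
        is_duplicate = mv in seen
        seen.add(mv)
        if is_duplicate and all(rule(origin) for rule in rules):
            to_trim.append(i)
    return [c for i, c in enumerate(candidates) if i not in to_trim]

def make_same_slice_rule(current_slice):
    """Rule like the slice rule described earlier: never trim a predictor
    derived from a different slice, preserving spatial independence."""
    return lambda origin: origin['slice'] == current_slice

cands = [((1, 0), {'slice': 0}),
         ((1, 0), {'slice': 0}),    # duplicate within the same slice
         ((1, 0), {'slice': 1})]    # duplicate from another slice
trimmed = trim_candidates(cands, [make_same_slice_rule(0)])
# Only the same-slice duplicate is removed; the other-slice one stays.
```

The index of the selected predictor would then be coded relative to the size of the trimmed set (step 60), for example with the truncated code sketched earlier.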
To illustrate the functionality of the generalized technique shown in
If, on the other hand, the temporal_mvp_flag signals that a temporal predictor is not selected by the encoder, the candidate set can not only be trimmed of duplicates, but in some embodiments can also be trimmed of temporal predictors, resulting in a drastically diminished candidate set that needs to be encoded. It should also be recognized that, if an applicable rule set permits both temporal and spatial dependencies, the temporal_mvp_flag can be used, regardless of its value, to trim duplicates of the temporal or spatial subset signaled by the flag and to trim the entire subset not signaled by the flag.
As it happens, the inventors have determined that there is a reasonable correlation between the value of the disclosed temporal_mvp_flag and the value of a constrained_intra_pred_flag associated with a frame and often used in an encoded video bit stream. Specifically, the inventors have determined that there is a strong correlation between these two flags when the value of the constrained_intra_pred_flag is 1, and a substantially weaker correlation when the value of the constrained_intra_pred_flag is 0. Accordingly, to save overhead in signaling a selected motion vector, the encoder may optionally be configured not to encode the disclosed temporal_mvp_flag when the constrained_intra_pred_flag is set to 1 for the frame of a current pixel, such that the decoder will simply insert or assume an equal value for the temporal_mvp_flag in that instance, and to otherwise encode the temporal_mvp_flag. Alternatively, the disclosed temporal_mvp_flag may simply be assigned a value equal to the constrained_intra_pred_flag, though preferably in this latter circumstance a value of 0 should be associated in the defined rule set with simply trimming duplicate vectors in the candidate set.
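A sketch of that signaling shortcut, with an assumed bitstream-reader interface:

```python
# When constrained_intra_pred_flag is 1, temporal_mvp_flag is not coded
# and the decoder infers the strongly correlated value; otherwise the
# flag is read from the bitstream.

def read_temporal_mvp_flag(reader, constrained_intra_pred_flag):
    if constrained_intra_pred_flag == 1:
        return 1                      # inferred, saving one flag per block
    return reader.read_bit()          # explicitly coded
```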
The disclosed nested entropy encoding structure can additionally be applied to this temporal_mvp_flag syntax. In one embodiment, the top and left neighboring flags are used to determine the predictor set template used in the entropy coding of temporal_mvp_flag. This may be beneficial if, as is the usual case, the encoder and decoder exclusively assign entropy symbols to coded values, and also where the temporal_mvp_flag may take on many values. In another embodiment, the predictor set template for the coding of the selected motion vector for the candidate set is made to depend on the temporal_mvp_flag of the current block.
Also, another embodiment of the invention signals whether the motion vector predictor is equal to motion vectors derived from the current frame or to motion vectors derived from a previously reconstructed/transmitted frame, as was previously described with respect to the temporal_mvp_flag. In this particular embodiment, however, the flag is sent indexed by the number of unique motion vector predictors derived from the current frame. For example, a predictor set template in this embodiment could distinguish all possible combinations of a first code value that reflects the combination of flags in the two blocks to the left of and above the current block, e.g. 00, 01, 10, 11 (entropy coded as 0, 10, 110, and 1110), as indexed by a second code value reflective of the number of unique motion vectors in the candidate set. Alternatively, a context template in this embodiment could identify all possible combinations of a first code value that reflects whether the flags in the two blocks to the left of and above the current block are identical, e.g. 00 and 11 entropy coded as 0, and 01 and 10 entropy coded as 10, and a second code value reflective of the number of unique motion vectors in the candidate set.
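The two templates might be expressed as follows, where each function returns a context identifier under the stated assumptions (binary neighbor flags and a count of unique current-frame predictors); the names are illustrative:

```python
# Two context templates for coding the flag, both indexed by the number
# of unique motion vector predictors derived from the current frame.

def full_template(left_flag, above_flag, num_unique):
    # Distinguishes all four neighbor combinations: 00, 01, 10, 11.
    return (num_unique, 2 * left_flag + above_flag)

def reduced_template(left_flag, above_flag, num_unique):
    # Distinguishes only whether the two neighbor flags agree.
    return (num_unique, 0 if left_flag == above_flag else 1)
```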
An encoding scheme may include a candidate set of motion vectors that includes a large number of temporally co-located motion vectors from each of a plurality of frames, such as the one illustrated in
In some embodiments, the operation used to group smaller-sized blocks of pixels into larger blocks may be signaled in a bit-stream from an encoder to a decoder. For example, the operation may be signaled in a sequence parameter set, or alternatively, the operation may be signaled in the picture parameter set, slice header, or for any defined group of pixels. Furthermore, the operation can be determined from a level or profile identifier that is signaled in the bit-stream.
In some embodiments, the number of smallest-sized blocks that are grouped into larger blocks may be signaled in a bit-stream from an encoder to a decoder. For example, the number may be signaled in the sequence parameter set, or alternatively in the picture parameter set, slice header, or for any defined group of pixels. The number may also be determined from a level or profile identifier that is signaled in the bit-stream. In some embodiments, the number may be expressed as a number of rows of smallest-sized blocks and a number of columns of smallest-sized blocks.
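As one plausible grouping operation, under the assumption that the representative vector for each group is the top-left vector and that the group dimensions are the signaled row and column counts, the stored motion vector field could be subsampled as follows; all names are illustrative:

```python
# Group the motion vectors of the smallest-sized blocks into larger
# blocks, storing one representative vector (the top-left one) per group.

def compress_mv_field(mv_field, group_rows, group_cols):
    """mv_field: 2-D list of motion vectors at smallest-block granularity.
    Returns one stored vector per group of group_rows x group_cols blocks."""
    return [[mv_field[r][c]
             for c in range(0, len(mv_field[0]), group_cols)]
            for r in range(0, len(mv_field), group_rows)]
```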
It should be understood that the preceding embodiments of an encoder and/or a decoder may be used in any one of a number of hardware, firmware, or software implementations. For example, an encoder may be used in a set-top recorder, a server, desktop computer, etc., while a decoder may be implemented in a display device, a set-top cable box, a set-top recorder, a server, desktop computer, etc. These examples are illustrative and not limiting. If implemented in firmware and/or software, the various components of the disclosed encoder and decoder may access any available processing device and storage to perform the described techniques.
The terms and expressions that have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
This application is a continuation of co-pending U.S. application Ser. No. 16/999,612, filed Aug. 21, 2020, which is a continuation of U.S. application Ser. No. 16/522,232, filed Jul. 25, 2019, now issued as U.S. Pat. No. 10,757,413, which is a continuation of U.S. application Ser. No. 16/130,875, filed Sep. 13, 2018, now issued as U.S. Pat. No. 10,397,578, which is a continuation of U.S. application Ser. No. 15/906,582, filed Feb. 27, 2018, now issued as U.S. Pat. No. 10,104,376, which is a continuation of U.S. application Ser. No. 15/623,627, filed Jun. 15, 2017, now issued as U.S. Pat. No. 10,057,581, which is a continuation of U.S. application Ser. No. 15/189,307, filed Jun. 22, 2016, now issued as U.S. Pat. No. 9,794,570, which is a continuation of U.S. application Ser. No. 14/824,305, filed Aug. 12, 2015, now issued as U.S. Pat. No. 9,414,092, which is a continuation of U.S. application Ser. No. 12/896,795, filed Oct. 1, 2010. The contents of the foregoing applications are hereby incorporated by reference in their entirety.