Embodiments of the present invention comprise methods and systems for detecting pictorial regions in digital images.
The content of a digital image can have considerable impact on the compression of the digital image, both in terms of compression efficiency and compression artifacts. Pictorial regions in an image are not efficiently compressed using compression algorithms designed for the compression of text. Similarly, text images are not efficiently compressed using compression algorithms that are designed and optimized for pictorial content. Not only is compression efficiency affected when a compression algorithm designed for one type of image content is used on a different type of image content, but the decoded image may exhibit visible compression artifacts.
Further, image enhancement algorithms designed to sharpen text, if applied to pictorial image content, may produce visually annoying artifacts in some areas of the pictorial content. In particular, pictorial regions containing strong edges may be affected. While smoothing operations may enhance a natural image, the smoothing of text regions is seldom desirable.
The detection of regions of a particular content type in a digital image can improve compression efficiency, reduce compression artifacts, and improve image quality when used in conjunction with a compression algorithm or image enhancement algorithm designed for the particular type of content.
The semantic labeling of image regions based on content is also useful in document management systems and image databases.
Reliable and efficient detection of regions of pictorial content and other image regions in digital images is desirable.
Embodiments of the present invention comprise methods and systems for identifying pictorial regions in a digital image using a masked entropy feature and region growing.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention but is merely representative of the presently preferred embodiments of the invention.
Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may describe only one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while remaining within the scope of the present invention.
An exemplary region-detection system 20 is shown in
The effectiveness and reliability of a region-detection system may depend on the feature or features used for the classification.
For the purposes of this specification, associated claims, and included drawings, the term histogram will be used to refer to frequency-of-occurrence information in any form or format, for example, that represented as an array, a plot, a linked list and any other data structure associating a frequency-of-occurrence count of a value, or group of values, with the value, or group of values. The value, or group of values, may be related to an image characteristic, for example, color (luminance or chrominance), edge intensity, edge direction, texture, and any other image characteristic.
Embodiments of the present invention comprise methods and systems for region detection in a digital image. Some embodiments of the present invention comprise methods and systems for region detection in a digital image wherein the separation between feature values corresponding to image regions may be accomplished by masking, prior to feature extraction, pixels in the image for which a masking condition is met. In some embodiments, the masked pixel values may not be used when extracting the feature value from the image.
In some exemplary embodiments of the present invention shown in
In the exemplary embodiments of the present invention shown in
When a pixel is accumulated in the histogram 74, a counter for counting the number of non-mask pixels in the block of the masked image may be incremented 75. When all pixels in a block have been examined 78, 79, the histogram may be normalized 69. The histogram may be normalized 69 by dividing each bin count by the number of non-mask pixels in the block of the masked image. In alternate embodiments, the histogram may not be normalized and the counter may not be present.
Alternately, the masked image may be represented in two components: a first component that is a binary image, also considered a mask, in which masked pixels may be represented by one of the bit values and unmasked pixels by the other bit value, and a second component that is the digital image. The logical combination of the mask and the digital image forms the masked image. The histogram formation may be accomplished using the two components of the masked image in combination.
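As an illustration of the histogram formation described above, a sketch in C follows. The 8-bit grayscale input, the convention that a non-zero mask value marks a masked pixel, and the nine-bin count are assumptions of the sketch, chosen to match the nine-bin example given later in this description.

```c
#include <stdint.h>

#define N_BINS 9  /* illustrative bin count; see the nine-bin example below */

/* Form a normalized histogram over one block of the masked image.
 * img and mask are row-major block buffers of width*height entries;
 * mask[i] != 0 marks a masked pixel, which is not accumulated.
 * Returns the number of non-mask pixels examined. */
int block_histogram(const uint8_t *img, const uint8_t *mask,
                    int width, int height, double hist[N_BINS])
{
    int counts[N_BINS] = {0};
    int non_mask = 0;

    for (int i = 0; i < width * height; i++) {
        if (mask[i])
            continue;                     /* masked pixel values are not used */
        counts[img[i] * N_BINS / 256]++;  /* each bin covers a range of values */
        non_mask++;
    }

    /* Normalize each bin count by the number of non-mask pixels */
    for (int b = 0; b < N_BINS; b++)
        hist[b] = non_mask ? (double)counts[b] / non_mask : 0.0;

    return non_mask;
}
```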
An entropy measure 55 may be calculated 56 for the histogram 53 of a block of the masked image. The entropy measure 55 may be considered an image feature of the input image. The entropy measure 55 may be considered any measure of the form:

E = -Σ_{i=1}^{N} h(i)·f(h(i)),

where N is the number of histogram bins, h(i) is the accumulation or count of bin i, and f(•) may be a function with mathematical characteristics similar to a logarithmic function. The entropy measure 55 may be weighted by the proportion of pixels that would have been counted in a bin, but were masked. The weighted entropy measure is of the form:

E = -Σ_{i=1}^{N} w(i)·h(i)·f(h(i)),

where w(i) is the weighting function. In some embodiments of the present invention, the function f(h(i)) may be log2(h(i)).
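A floating-point sketch of this weighted entropy measure follows; the caller-supplied weight array and the skipping of empty bins are assumptions of the sketch.

```c
#include <math.h>

/* Weighted entropy of a normalized histogram:
 * E = -sum_{i=1}^{N} w(i)*h(i)*log2(h(i)).
 * Empty bins are skipped here (an h(i) = 0 term contributes nothing);
 * other conventions, such as the log2(0) = 1 definition used later in
 * this description, could be substituted. */
double weighted_entropy(const double *h, const double *w, int n_bins)
{
    double e = 0.0;
    for (int i = 0; i < n_bins; i++)
        if (h[i] > 0.0)
            e -= w[i] * h[i] * log2(h[i]);
    return e;
}
```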
In the embodiments of the present invention shown in
In some embodiments of the present invention shown in
In some embodiments of the present invention, the masked data may not be quantized, but the number of histogram bins may be less than the number of possible masked data values. In these embodiments, a bin in the histogram may represent a range of masked data values.
In some embodiments of the present invention shown in
In alternate embodiments of the present invention shown in
In some embodiments of the present invention, a moving window of pixel values centered, in turn, on each pixel of the image, may be used to calculate the entropy measure for the block containing the centered pixel. The entropy may be calculated from the corresponding block in the masked image. The entropy value may be used to classify the pixel at the location on which the moving window is centered.
In other embodiments of the present invention, the entropy value may be calculated for a block of the image, and all pixels in the block may be classified with the same classification based on the entropy value.
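A sketch of the block-based classification follows, reusing the block_histogram and weighted_entropy routines sketched above; the binary pictorial/non-pictorial labeling and the threshold parameter are illustrative assumptions.

```c
#include <stdint.h>

#define N_BINS 9

/* From the sketches above */
int block_histogram(const uint8_t *img, const uint8_t *mask,
                    int width, int height, double hist[N_BINS]);
double weighted_entropy(const double *h, const double *w, int n_bins);

/* Label every pixel of a block with the same classification, based on
 * the entropy of the block's masked histogram. A moving-window variant
 * would instead center the block on each pixel in turn and label only
 * that center pixel. */
void classify_block(const uint8_t *img, const uint8_t *mask,
                    int width, int height, uint8_t *labels,
                    double entropy_threshold)
{
    double hist[N_BINS], w[N_BINS];
    for (int i = 0; i < N_BINS; i++)
        w[i] = 1.0;                       /* uniform weights: unweighted case */

    block_histogram(img, mask, width, height, hist);
    double e = weighted_entropy(hist, w, N_BINS);

    /* Higher masked entropy suggests pictorial content */
    uint8_t label = (e > entropy_threshold) ? 1 : 0;
    for (int i = 0; i < width * height; i++)
        labels[i] = label;
}
```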
In some embodiments of the present invention shown in
In some embodiments of the present invention, the masking condition may be based on the edge strength at a pixel.
In some embodiments of the present invention, a level of confidence in the degree to which the masking condition is satisfied may be calculated. The level of confidence may be used when accumulating a pixel into the histogram. Exemplary embodiments in which a level of confidence is used are shown in
In exemplary embodiments of the present invention shown in
In the exemplary embodiments of the present invention shown in
When a pixel is accumulated in the histogram 175, a counter for counting the number of non-mask pixels in the block of the masked image may be incremented 178. When all pixels in a block have been examined 180, 179, the histogram may be normalized 130. The histogram may be normalized 130 by dividing each bin count by the number of non-mask pixels in the block of the masked image. In alternate embodiments, the histogram may not be normalized and the counter may not be present.
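One plausible way to fold the confidence level into the accumulation is sketched below, assuming the confidence is supplied as a per-pixel value in [0, 1]; the complement-weighting rule shown is an illustrative choice, not necessarily the exact rule of these embodiments.

```c
#include <stdint.h>

#define N_BINS 9

/* Accumulate a block histogram using a per-pixel masking confidence
 * conf[i] in [0, 1]. A pixel contributes weight (1 - conf[i]) to its
 * bin, so a pixel that strongly satisfies the masking condition
 * contributes little or nothing. Returns the fractional non-mask
 * count used for normalization. */
double block_histogram_conf(const uint8_t *img, const float *conf,
                            int width, int height, double hist[N_BINS])
{
    double total = 0.0;

    for (int b = 0; b < N_BINS; b++)
        hist[b] = 0.0;

    for (int i = 0; i < width * height; i++) {
        double weight = 1.0 - conf[i];         /* confidence-weighted count */
        hist[img[i] * N_BINS / 256] += weight; /* each bin covers a value range */
        total += weight;
    }

    if (total > 0.0)                           /* normalize by fractional count */
        for (int b = 0; b < N_BINS; b++)
            hist[b] /= total;

    return total;
}
```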
An entropy measure 155 may be calculated 156 for the histogram of a neighborhood of the masked image as described in the previous embodiments. In the embodiments of the present invention shown in
In some embodiments of the present invention, the masking condition may comprise a single image condition. In some embodiments, the masking condition may comprise multiple image conditions combined to form a masking condition.
In some embodiments of the present invention, the entropy feature may be used to separate the image into two regions. In some embodiments of the present invention, the entropy feature may be used to separate the image into more than two regions.
In some embodiments of the present invention, the full dynamic range of the data may not be used. The histogram may be generated considering only pixels with values between a lower and an upper limit of dynamic range.
In some embodiments of the present invention, the statistical entropy measure may be as follows:

E = -Σ_{i=1}^{N} h(i)·log2(h(i)),

where N is the number of bins, h(i) is the normalized histogram count for bin i, and log2(0) = 1 may be defined for empty bins.
The maximum entropy may be obtained for a uniform histogram distribution, h(i) = 1/N for every bin. Thus, the maximum entropy is Emax = log2(N).
The entropy calculation may be transformed into fixed-point arithmetic to return an unsigned 8-bit (uint8) value, where zero corresponds to no entropy and 255 corresponds to maximum entropy. The fixed-point calculation may use two tables: one table to replace the logarithm calculation, denoted log_table below, and a second table to implement the division in the histogram normalization step, denoted rev_table. Integer entropy calculation may be implemented as follows for an exemplary histogram with nine bins:
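(The following C sketch illustrates one such implementation; the bit widths, shift values, and table semantics shown are assumptions of the sketch rather than a definitive listing.)

```c
#include <stdint.h>

#define N_BINS      9
#define log_shift   8                        /* precision of the log operation */
#define rev_shift   8                        /* precision of the division operation */
#define accum_shift (log_shift + rev_shift)  /* precision of the accumulation */

/* log_table[n] approximates -log2(n / 2^rev_shift) scaled by 2^log_shift,
 * for n in 0..2^rev_shift; rev_table[t] approximates 2^rev_shift / t and
 * replaces the division in the histogram normalization step. Both tables
 * are assumed to be precomputed (e.g., as ROMs in hardware). */
extern const uint16_t log_table[257];
extern const uint16_t rev_table[];

uint8_t integer_entropy(const uint16_t hist[N_BINS], uint16_t total)
{
    uint32_t accum = 0;

    for (int i = 0; i < N_BINS; i++) {
        /* n: normalized histogram bin value, scaled by 2^rev_shift;
         * an integer divide circuit could compute n instead (see below) */
        uint32_t n = (uint32_t)hist[i] * rev_table[total];
        if (n > 256)
            n = 256;                      /* guard against rounding overshoot */
        accum += n * log_table[n];        /* n = 0 contributes nothing */
    }

    /* accum holds the entropy scaled by 2^accum_shift; map the maximum
     * entropy log2(9) onto 255 with the multiplier 255/Emax ~ 81 */
    uint32_t e = (accum * 81) >> accum_shift;
    return (uint8_t)(e > 255 ? 255 : e);
}
```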
where log_shift, rev_shift, and accum_shift may be related to the precision of the log, division, and accumulation operations, respectively.
An alternate hardware implementation may use an integer divide circuit to calculate n, the normalized histogram bin value.
In the example, the number of bins is nine (N=9), which makes the normalization multiplier 255/Emax ≈ 81.
The fixed-point precision of each calculation step may be adjusted depending upon the application and properties of the data being analyzed. Likewise the number of bins may also be adjusted.
In some embodiments of the present invention, pictorial regions may be detected in an image using a staged refinement process that may first analyze the image and its derived image features to determine likely pictorial regions. Verification and refinement stages may follow the initial determination of the likely pictorial regions. In some embodiments of the present invention, masked entropy may be used to initially separate pictorial image regions from non-pictorial image regions. Due to the uniform nature of page background and local background regions in a digital image, such regions will have low entropy measures. Pictorial regions may have larger entropy measures due to their varying luminance and chrominance information compared to the more uniform background regions. Text regions, however, may also have large entropy measures due to the edge structure of text. It may therefore be desirable to mask text pixels when determining entropy measures for identifying pictorial regions in images. Alternatively, masking all strong edge structures, which may include buildings, signs, and other man-made structures in pictorial regions in addition to text, may reduce the identification of text regions as pictorial regions while not significantly reducing the identification of pictorial regions. While pictorial regions typically have greater entropy measures, more uniform pictorial regions, such as sky regions, may have low entropy measures, and such regions may be missed in the detection of pictorial regions based on entropy or masked entropy.
Some embodiments of the present invention shown in
In some embodiments of the present invention, the initial pictorial map 183 may be generated as shown in
The region growing 192 from the pictorial-region seeds 193 may be controlled by bounding conditions. Pictorial regions may be grown from the high-confidence pictorial-region seeds into the less reliable pictorial-feature response areas. In some embodiments, the pictorial region may be grown until a pixel with a low-confidence level is encountered. In this way, pictorial regions may be grown to include pixels based on their connectivity to those pixels with a strong pictorial-feature response.
In some embodiments, additional information may be used in the region growing process. In some embodiments the additional information may be related to background region identification. A labeled background map indicating background regions may be used in the region growing. In some embodiments, the labeled background map may include, in addition to indices indicating membership in a background region and indexing a background color palette, two reserved labels. One of the reserved labels may represent candidate pictorial pixels as identified by the background color analysis and detection, and the other reserved label may represent pixels with unreliable background color analysis and labeling. In some embodiments, the map label “1” may indicate that a pixel belongs to a candidate pictorial region. The map labels “2” through “254” may indicate background regions, and the map label “255” may represent an unknown or unreliable region.
In some embodiments, the region growing may proceed into regions of low confidence if those regions were labeled as pictorial candidates by the background color analysis and labeling. The pictorial regions may not grow into regions labeled as background. When the growing process encounters a pixel labeled as unknown or unreliable, the growing process may use a more conservative bounding condition or tighter connectivity constraints to grow into the unknown or unreliable pixel. In some embodiments, a more conservative bounding condition may correspond to a higher confidence-level threshold. In some embodiments, if a candidate pixel is labeled as a pictorial candidate by the background color analysis, only one neighboring pixel may be required to belong to a pictorial region for the pictorial region to grow to the candidate pixel. If the candidate pixel is labeled as unknown or unreliable by the background color analysis, at least two neighboring pixels may be required to belong to a pictorial region for the pictorial region to grow to the candidate pixel. The neighboring pixels may be the causal neighbors for a particular scan direction, the four or eight nearest neighbors, or any other defined neighborhood of pixels. In some embodiments of the present invention, the connectivity constraint may be adaptive.
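This growing rule may be sketched as follows; the helper decomposition is an illustrative assumption, while the label constants follow the map-label convention given above.

```c
#include <stdbool.h>
#include <stdint.h>

#define LBL_PICT_CANDIDATE 1    /* candidate pictorial pixel            */
#define LBL_UNRELIABLE     255  /* unknown or unreliable background label */
/* labels 2..254 index background regions and block growth entirely */

/* Decide whether a pictorial region may grow into a candidate pixel,
 * given its background-map label and how many of its neighbors already
 * belong to a pictorial region: one neighbor suffices for a pictorial
 * candidate, while an unknown/unreliable pixel requires the tighter
 * two-neighbor connectivity constraint. */
bool may_grow_into(uint8_t bg_label, int pictorial_neighbors)
{
    if (bg_label == LBL_PICT_CANDIDATE)
        return pictorial_neighbors >= 1;
    if (bg_label == LBL_UNRELIABLE)
        return pictorial_neighbors >= 2;   /* more conservative condition */
    return false;                          /* background regions 2..254   */
}
```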
In some embodiments of the present invention, refinement may be performed after initial region growing as described above.
In some embodiments of the present invention, verification of the refined pictorial map may follow. Pictorial map verification may be based on the size of a pictorial region. Small regions identified as pictorial regions may be removed and relabeled. In some embodiments, regions identified as pictorial regions may be eliminated from the pictorial-region classification by the verification process based on the shape of the region, the area of the region within a bounding shape, the distribution of the region within a bounding shape, or a document layout criterion. In alternate embodiments, verification may be performed without refinement. In alternate embodiments, hole-filling refinement may be followed by small-region verification, which may subsequently be followed by concave-region-filling refinement.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalence of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
This application is a continuation-in-part of U.S. patent application Ser. No. 11/367,244, entitled “Methods and Systems for Detecting Regions in Digital Images,” filed on Mar. 2, 2006.
Prior Publication Data

Number | Date | Country
---|---|---
20070206857 A1 | Sep 2007 | US

Related U.S. Application Data

Relation | Application No. | Filing Date | Country
---|---|---|---
Parent | 11/367,244 | Mar 2006 | US
Child | 11/424,296 | | US