The present invention provides a method and apparatus for providing improved foreground/background separation in a digital image.
A focus map may be built using a depth from defocus (DFD) algorithm, for example, as disclosed in “Rational Filters for Passive Depth from Defocus” by Masahiro Watanabe and Shree K. Nayar (1995), hereby incorporated by reference. The basic idea is that a depth map of a given scene can be theoretically computed from two images of the same scene. Ideally, for calculating a DFD map, a telecentric lens is used, and only focus varies between the two image acquisitions. This is generally not true of existing digital cameras.
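By way of illustration, the following sketch compares the local high-frequency content of two registered grayscale images that differ only in focus to produce a crude binary focus map. It is a simplified stand-in for, not an implementation of, a rational-filter DFD algorithm; the function name and parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def crude_focus_map(near_focused, far_focused, sigma=1.0, win=9):
    """Mark pixels whose local high-frequency energy is greater in the
    near-focused image as candidate foreground. A toy focus comparison,
    not the rational-filter DFD method of Watanabe & Nayar."""
    def hf_energy(img):
        img = img.astype(float)
        residual = img - gaussian_filter(img, sigma)    # high-pass residual
        return uniform_filter(residual ** 2, size=win)  # local energy
    return hf_energy(near_focused) > hf_energy(far_focused)
```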
Another technique for separating foreground from background is disclosed in US published patent application no. 2006/0285754, which is assigned to the same assignee as the present application and is hereby incorporated by reference. Here, the difference in exposure levels between flash and non-flash images of a scene is used to provide a foreground/background map. The main advantage of depth from defocus over a flash/non-flash based technique is that depth from defocus is independent of the scene illumination, and so can be advantageous for outdoor or well-illuminated scenes.
A further technique for separating foreground from background is disclosed in U.S. patent applications No. 60/773,714 and Ser. No. 11/573,713, which are hereby incorporated by reference. Here, a difference in high frequency coefficients between corresponding regions of images of a scene taken at different focal lengths is used to provide a foreground/background map. Again, the foreground/background map in this case is independent of the scene illumination, and so this technique can be useful for outdoor or well-illuminated scenes.
In any case, the foreground/background map produced by each of the above techniques, or indeed any other technique, may not always be correct. It is thus desired to provide improved methods of foreground/background separation in a digital image.
A method is provided for providing foreground/background separation in a digital image of a scene. A first map is provided including one or more regions within a main digital image. Each region has one or more pixels with a common characteristic. A subject profile is provided corresponding to a region of interest of the main digital image. One or more of the regions are compared with the subject profile to determine whether any of them intersects the profile region. One or more of the regions are designated as a foreground region based on the comparison.
The providing of the first map may include provisionally defining each region of the image as foreground or background. The one or more regions include at least one region provisionally defined as foreground.
The designating may include comparing a foreground region with the subject profile. Responsive to the foreground region not substantially intersecting the subject profile, a designation of said foreground region is changed to a background region.
The providing of the first map may be based on a comparison of two or more images nominally of the same scene. One or more of the images that are compared may include a lower resolution version of the main image. One or more of the images that are compared may include the main digital image. Two or more images that are compared may be aligned and/or may be matched in resolution. One or more of the images that are compared may be captured just before or after the main digital image is captured.
The providing of said first map may include providing two or more images each of different focus and nominally of the scene. The method may include calculating from the images a focus map indicating pixels of the main digital image as either foreground or background. The focus map may be blurred. The method may include thresholding the blurred map to provide an intermediate focus map indicating regions as either foreground or background. Regions within said intermediate focus map may be filled to provide the first map.
The providing of the first map may include providing two or more images each of different focus and nominally of the scene. High frequency coefficients of corresponding regions in the images may be compared to determine whether the regions are foreground or background to provide the first map.
The providing of the first map may include providing two or more images at different exposure levels nominally of the scene. Luminance levels of corresponding regions in the images may be compared to determine whether the regions are foreground or background to provide the first map.
Any of the methods described herein may be operable in a digital image acquisition device that is arranged to select the subject profile according to content of the main digital image and/or the device may be arranged to operate in a portrait mode wherein the subject profile includes an outline of a person. The outline may include one of a number of user selectable outlines and/or may be automatically selected from multiple outlines based on the content of the main digital image.
Any of the methods described herein may be operable in a general purpose computer arranged to receive the first map in association with the main digital image, and/or may be arranged to receive one or more additional images nominally of the scene in association with the main digital image and/or may be arranged to calculate the first map from a combination of one or more additional images and the main digital image.
The providing of a subject profile may include determining at least one region of the image including a face. An orientation of the face may be determined. The subject profile may be defined as including the face and a respective region below the face in the main image in accordance with the orientation.
The providing of the first map may also include analyzing at least one region of the main digital image in a color space to determine a color distribution within the region. The color distribution may have multiple distinct color peaks. The regions may be segmented based on proximity of a pixel's color to the color peaks.
The comparing may include, for each region intersecting said subject profile, calculating a reference reflectance characteristic, and for each region not intersecting the subject profile, calculating a reflectance characteristic. The non-intersecting region reflectance characteristic may be compared with the reference reflectance characteristic for a region corresponding in color to the non-intersecting region. A non-intersecting region may be designated as foreground when the non-intersecting region reflectance characteristic is determined to be within a threshold of the reference reflectance characteristic.
A second image may be provided nominally of the scene. Reflectance characteristics may be calculated as a function of a difference in luminance levels between corresponding regions in the main image and the second image.
The main image may be one of a stream of images. The determining of at least one region of the main image including a face may include detecting a face in at least one image of the stream acquired prior to the main image. The face may be tracked through the stream of images to determine the face region in the main image.
A further method is provided for foreground/background separation in a digital image of a scene. A first map is provided including one or more regions provisionally defined as one of foreground or background within a main digital image. One or more of the regions may be analyzed to determine a distribution of luminance within pixels of the region. Responsive to the luminance distribution for a region having more than one distinct luminance peak, the region is divided into more than one sub-region based on proximity of pixel luminances to the luminance peaks. The method further includes changing in the map a designation of one or more sub-regions based on the division.
The method may include providing a subject profile corresponding to a region of interest of the main digital image. At least one provisionally defined region may be compared with the subject profile to determine whether the region intersects with the profile region. The method may further include changing in the map a designation of one or more regions or sub-regions based on the comparison.
The providing of the first map may include analyzing one or more regions of the digital image in a color space to determine a color distribution within the regions. The color distribution may have multiple distinct color peaks. The regions may be segmented based on proximity of pixel color to the color peaks. The analyzed regions may be provisionally defined as foreground within the first map. The digital image may be provided in LAB space, and the color space may include [a,b] values for pixels and the luminance may include L values for pixels.
A further method is provided for improved foreground/background separation in a digital image of a scene. A main digital image may be acquired. At least one region of said main image is determined to include a face, and an orientation of the face is determined. A foreground region is defined in the image including the face, and a respective region below the face is also defined in accordance with the orientation.
An apparatus is provided for providing improved foreground/background separation in a digital image of a scene. The apparatus includes a processor and one or more processor-readable media for programming the processor to control the apparatus to perform any of the methods described herein above or below.
This patent application file contains at least one drawing executed as a black and white photograph. Copies of this patent or patent application publication with black and white photograph drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
(a) shows an in-focus image of a subject;
(b) shows a DFD map for the image; and
(c) shows the DFD map of (b) after smoothing, thresholding and region filling;
(a) shows a first color segmented version of the foreground regions of the in-focus image;
(b) shows a profile for a subject;
(c) shows the result of combining the profile of (b) with the color segmented regions of (a); and
(d) shows the image information for the identified foreground regions of the in-focus image;
(a) shows another in-focus image of a subject;
(b) shows a DFD map of the image;
(c) shows a first color segmented version of the foreground regions of the image; and
(d) shows the result of combining a profile with the color segmented regions of (c);
(a) shows another in-focus image of a subject;
(b) shows a first color segmented version of the foreground regions of the image; and
(c) shows a further improved color segmented version of the foreground regions of the image when processed according to an embodiment of the present invention;
(a)-(c) show luminance histograms for regions identified in the color segmented image.
The present invention is employed where there is a need for foreground/background segmentation of a digital image. There are many reasons for needing to do so, but in particular, this is useful where one of the foreground or the background of an image needs to be post-processed separately from the other of the foreground or background. For example, for red-eye detection and correction, it can be computationally more efficient to only search and/or correct red-eye defects in foreground regions rather than across a complete image. Alternatively, it may be desirable to apply blur only to background regions of an image. Thus, the more effectively foreground can be separated from background, the better the results of image post-processing.
In the preferred embodiment, improved foreground/background segmentation is implemented within digital camera image processing software, hardware or firmware. The segmentation can be performed at image acquisition time; in a background process, which runs during camera idle time; or in response to user interaction with image post-processing software. It will nonetheless be seen that the invention could equally be implemented off-line within image processing software running on a general-purpose computer.
In any case, in the preferred embodiment, a user operating a camera selects, for example, a portrait mode and optionally a particular type of portrait mode, for example, close-up, mid-shot, full length or group. In portrait mode, the camera then acquires a main image, or indeed one of a sequence of preview or post-view images generally of the main image scene. Generally speaking, these preview and post-view images are of a lower resolution than the main image. As outlined above, at some time after image acquisition, image processing software calculates an initial foreground/background map, either for the main image or for one of the preview/post-view images.
The preferred embodiment will be described in terms of the initial map being a DFD map, although it will be appreciated that the invention is applicable to any form of initial foreground/background map as outlined above. In the embodiment, the segmentation process provides, from the initial map, a final foreground/background map in which the foreground region(s) ideally contain the image subject and which can be used in further image processing as required.
In the example illustrated in the accompanying figures, an in-focus image of a scene includes a subject (person) 10, and an initial DFD map 20 is computed for the image.
Referring now to the segmentation process of the preferred embodiment, the initial DFD map 20 is first smoothed, step 22, to provide a continuously valued image.
A threshold is then applied, step 24, to the smoothed continuously valued image from step 22. This provides a binary map having, in general, larger and smoother contiguous regions than the initial DFD map 20.
Regions of the binary map obtained at step 24 are then filled, step 26, to remove small regions within larger regions. For the initial image, this provides candidate foreground regions 14, 16 of the first foreground/background map.
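A minimal sketch of steps 22-26, assuming the DFD map arrives as a noisy binary array; the smoothing and threshold parameters are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_fill_holes

def first_map_from_dfd(dfd_map, sigma=5.0, thresh=0.5):
    """Steps 22-26: smooth the noisy binary DFD map into a continuously
    valued image, threshold it back to a binary map with larger and
    smoother contiguous regions, then fill holes to remove small regions
    enclosed within larger ones."""
    smoothed = gaussian_filter(dfd_map.astype(float), sigma)  # step 22
    binary = smoothed > thresh                                # step 24
    return binary_fill_holes(binary)                          # step 26
```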
The pixels classified as background in this first map are then excluded from further processing.
The remainder of the image is segmented by color, using any suitable technique, step 30. In the preferred embodiment, a "mean shift" algorithm (based on D. Comaniciu & P. Meer, "Mean Shift: A Robust Approach toward Feature Space Analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, pp. 603-619, 2002) is employed. In general, this technique involves identifying discrete peaks in color space and segmenting the image into regions labelled according to their proximity to these peaks.
While this technique can be performed in RGB space, to reduce computational complexity the preferred embodiment operates on the [a,b] parameters of an LAB space version of the pixels of the foreground regions 14, 16. This means that for an image captured in RGB space, only pixels for candidate foreground regions need to be transformed into LAB space. In any case, it should be noted that this [a,b] based segmentation is luminance (L in LAB space) independent. This segmentation produces a color segmented map of the candidate foreground regions.
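The peak-proximity labelling described above might be sketched as follows. This is a simplified stand-in for full mean-shift segmentation and assumes the [a,b] color peaks have already been located (for example by mean-shift mode seeking):

```python
import numpy as np

def label_by_color_peaks(ab, peaks):
    """Assign each candidate-foreground pixel to its nearest [a,b] color
    peak. `ab` is an (N, 2) array of [a,b] values and `peaks` a (K, 2)
    array of peak coordinates already found in color space. Note the
    labelling uses only [a,b], so it is luminance (L) independent."""
    # Squared distances from every pixel to every peak: shape (N, K).
    d2 = ((ab[:, None, :] - peaks[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)  # label = index of the closest peak
```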
In a first improvement of foreground/background segmentation according to the present invention, a portrait template corresponding to the acquired image is provided. The template comprises a profile 32 of a subject, for example, an outline corresponding to the selected type of portrait mode.
In any case, the color segments provided in step 30 are combined with the profile 32 to retain only color regions that overlap to a significant extent with the profile 32. Thus, in the illustrated example, color regions substantially overlapping the profile 32 are retained as foreground, while non-overlapping regions are re-designated as background, step 34.
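A minimal sketch of the combination at step 34; the overlap threshold is an assumption, as no particular value for a "significant extent" is specified above:

```python
import numpy as np

def retain_overlapping_regions(labels, profile_mask, min_overlap=0.5):
    """Step 34: keep only color regions that overlap the subject profile
    to a significant extent; other regions are re-designated background.
    `labels` is an integer region map (0 = background) and `profile_mask`
    a boolean mask of the profile 32."""
    foreground = np.zeros(labels.shape, dtype=bool)
    for region in np.unique(labels):
        if region == 0:
            continue
        member = labels == region
        overlap = (member & profile_mask).sum() / member.sum()
        if overlap >= min_overlap:   # "significant" overlap: retained
            foreground |= member
    return foreground
```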
It will be seen that sub-regions 30(g)(1) and 30(g)(2), because they may have similar [a,b] characteristics, have been included in region 30(a), which in turn has been classified as a foreground region, whereas sub-region 30(g)(2) should more suitably be classed as background.
It is also acknowledged that parts of the foreground can be wrongly removed from the foreground map for various reasons. For instance, a foreground region that does not sufficiently overlap the profile 32 may be discarded at step 34.
Another example of the segmentation of steps 22-34 is illustrated with reference to a second in-focus image of a subject, its DFD map and its color segmented foreground regions.
In this case, because color segmentation did not separate the subject's hair from the balcony's edges, region 40(c), the balcony edges have been wrongly included in the final map as foreground regions.
In a still further example, a first color segmented version of an image fails to separate background objects lying close to the subject, for example, a TV behind the subject's T-shirt, and fails to separate the subject's hair from the subject's face.
In a second improvement of foreground/background segmentation according to the present invention, foreground regions are analyzed according to luminance, step 36. This step can be performed in addition to, independently of, or before or after step 34. In the preferred embodiment, this analysis is again performed on an LAB space version of the pixels of the foreground regions 14, 16 and so can beneficially use only the L values for pixels, as is described in more detail below.
In step 36, the intensity of the pixels in regions of the image of interest is analyzed to determine whether the luminance distribution of a region is unimodal or bimodal. This, in turn, allows difficult images to have their foreground/background regions better separated by applying unimodal or bimodal thresholding to different luminance sub-regions within regions of the image.
In the case of the third example image, luminance histograms of the color segmented regions reveal that regions containing both T-shirt and TV, or both hair and face, exhibit a bimodal luminance distribution, whereas correctly segmented regions exhibit a unimodal distribution.
It should also be noted that a multi-modal histogram could be found for a region, indicating that the region should be split into more than two sub-regions. However, instances of such a distribution are likely to be very rare.
Given that regions which exhibit such a bimodal distribution in luminance should ideally be segmented further, it is useful to conveniently classify a given histogram as either unimodal or bimodal. Referring to the histograms of the example regions, a histogram having two distinct luminance peaks is classified as bimodal, and the corresponding region is divided into two sub-regions based on the proximity of pixel luminance to each peak.
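A sketch of one such classification and split; the particular peak test used (two sufficiently separated local maxima of a smoothed histogram) is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def split_if_bimodal(L_values, bins=64, smooth=2.0, min_sep=8):
    """Classify a region's luminance histogram as unimodal or bimodal;
    if bimodal, split the region's pixels between the two peaks. Returns
    None for a unimodal region (left unchanged), else a boolean array
    assigning each pixel to the first of the two peaks."""
    hist, edges = np.histogram(L_values, bins=bins)
    hist = gaussian_filter1d(hist.astype(float), smooth)
    # Local maxima of the smoothed histogram, strongest first.
    peaks = [i for i in range(1, bins - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    peaks = sorted(peaks, key=lambda i: hist[i], reverse=True)[:2]
    if len(peaks) < 2 or abs(peaks[0] - peaks[1]) < min_sep:
        return None  # unimodal: no further segmentation
    centers = (edges[:-1] + edges[1:]) / 2
    p0, p1 = centers[peaks[0]], centers[peaks[1]]
    # Divide pixels according to proximity to the two luminance peaks.
    return np.abs(L_values - p0) <= np.abs(L_values - p1)
```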
The final segmentation shows the correct separation of the T-shirt/TV and hair/face pairs. Regions which are considered unimodal are not changed.
Using the present invention, more of an in-focus subject can be correctly separated from the background, even in difficult images, i.e., images with background located very close to the subject. Even when portions of background cannot be separated from the foreground, or vice versa, the artifacts are likely to be smaller, and the final map can be more useful for further post-processing of the image.
There are a number of practical issues which need to be considered when implementing the invention:
When the initial map is derived from a DFD map, the scaling factor between the in-focus and out-of-focus images will need to be known. This needs to be accessible from the camera configuration at image acquisition, as it cannot be computed from the image content alone. It is derivable from the focal length for the acquired image, and so should be made available by the camera producer with the acquired image.
It will also be seen that, where the initial map is derived from a DFD map, some shifting between images may have taken place, depending upon the time between acquiring the two images. The subject may move significantly with respect to the background, or the whole scene may be shifted owing to camera displacement. As such, appropriate alignment between the images should be performed prior to producing the DFD map.
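As an illustration, a global translation between the two images might be estimated by phase correlation, one common registration approach among several that would serve here:

```python
import numpy as np

def translation_between(img1, img2):
    """Estimate the global translation between two same-sized grayscale
    images by phase correlation: the peak of the normalized cross-power
    spectrum gives the (dy, dx) shift."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    # Map wrap-around offsets into signed shifts.
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dy, dx
```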
As indicated earlier, the invention can be implemented using either full resolution images or sub-sampled versions of such images, such as pre-view or post-view images. The latter may in fact be necessary where a camera producer decides that acquiring two full-resolution images to provide a full-resolution DFD map is not feasible. Nonetheless, using a pair comprising a full-resolution image and a preview/post-view image, or even a pair of preview/post-view images, for foreground/background mapping may be sufficient, and is also preferable from a computational efficiency point of view.
It will also be seen that it may not be appropriate to mix flash and non-flash images of a scene for calculating the DFD map. As such, where the main image is acquired with a flash, non-flash preview and post-view images may be best used to provide the foreground/background map in spite of the difference in resolution vis-à-vis the main image.
In a still further aspect of the present invention there is provided a further improved segmentation method for foreground-background separation in digital images.
In the embodiments described above, the subject profile 32 comprises one of a number of pre-defined outlines, selected according to the chosen portrait mode or the content of the main image.
However, it has been found that an alternative profile can be provided by detecting the position and orientation of one or more faces in an image, and adding to the or each face region, a respective area, preferably including a column below the face region as indicated by the orientation of the face. As before, a profile including each face region and associated column can be assumed to comprise foreground pixels only.
While this profile can be used instead of the profile(s) 32 of the embodiments described above, its use in a further improved segmentation method is described below.
Referring now to the steps of this embodiment, an image is acquired, step 700. As before, the image can either be a pre- or post-view image or a down-sampled version of a main acquired image.
A face is either detected in the acquired image or, if the face is detected in a previous image of a stream of images including the acquired image, the face region is tracked to determine a face position and its orientation within the acquired image 710. The detection of the position and orientation as well as tracking of a detected face is preferably carried out as disclosed in U.S. patent application No. 60/746,363 filed Aug. 11, 2006.
The orientation of a detected/tracked face is used to determine an area below the detected face in the direction of the face, and the combined face region and associated area provide a profile template assumed to contain foreground pixels only, step 720.
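A minimal sketch of step 720, assuming a detected face bounding box and a coarse orientation label; the coordinate conventions are assumptions:

```python
import numpy as np

def face_column_profile(shape, face_box, orientation="up"):
    """Step 720: combine a detected face region with the area 'below' it,
    as indicated by the face orientation, into a profile mask assumed to
    contain foreground pixels only. `face_box` is (top, left, bottom,
    right) in image coordinates."""
    top, left, bottom, right = face_box
    mask = np.zeros(shape, dtype=bool)
    mask[top:bottom, left:right] = True       # the face region itself
    if orientation == "up":                   # upright face: body below
        mask[bottom:, left:right] = True
    elif orientation == "left":               # rotated face: 'below' is leftwards
        mask[top:bottom, :left] = True
    elif orientation == "right":
        mask[top:bottom, right:] = True
    return mask
```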
Referring now to an example, the combined face region and the area below it define a profile region 82 within the acquired image.
It can be seen that a number of separate objects lie within or intersect the region 82. In the example, these might comprise regions bounding the subject's shirt (A), the subject's neck and face right side (B), the subject's face left side (C) and the subject's hair (D).
Preferably, these objects are segmented, step 730, by means of a color object detection technique such as Color Wizard, edge detection, or other region separation techniques applied to color or grey level images, including but not limited to those described for the embodiments above. Each object lying within or intersecting the region 82 is then identified as a foreground object, step 740.
Preferably, each foreground object that intersects the region 82 is further subjected to luminance analysis, step 750, to determine whether the luminance distribution of the object is unimodal or bimodal, as described above. Applying unimodal or bimodal thresholding to different luminance sub-objects within objects intersecting the region 82 can lead to better separation of the foreground/background objects. Thus, objects previously identified as foreground may now comprise a sub-object identified as a foreground object and a sub-object identified as a background object.
Again, this analysis is preferably performed on an LAB space version of the foreground object pixels and so can beneficially use only the L values for pixels.
Any object (or sub-object) identified in steps 740 and optionally 750 that does not lie within or intersect region 82 is designated as a background object, step 760. In this manner, the image is separated into foreground and background objects.
In this embodiment of the present invention, foreground/background segmentation is carried out only on a restricted portion of the image including the region 82. In a still further aspect of the present invention there is provided a further improved segmentation method for foreground-background separation of complete digital images.
Referring now to the steps of this further embodiment, first and second images nominally of the same scene are acquired, steps 900, 905. As before, these images can either be pre- or post-view images or include a down-sampled version of a main acquired image. For this embodiment, one of the images is taken with a flash and the other without a flash, to provide a difference in exposure levels between the images.
The images are aligned (not shown), so that object segments identified in the image in steps 900 to 950 (corresponding with steps 700 to 750 of the previous embodiment) coincide with corresponding segments of the second image.
Where foreground objects identified in steps 940/950 are segmented by color, each object comprises a number of pixels having one of a number of particular color characteristics, e.g., similar [a,b] values in LAB space.
This embodiment is based on the observation that foreground objects, being closer to the camera, are generally more strongly illuminated by the flash than background objects, and so exhibit a greater relative change in luminance between the flash and non-flash images.
In this embodiment, the acquired image on which face detection/tracking was performed is thus compared with the second image of the scene to determine, step 960, an average reflective characteristic k for each object in the aligned images according to the following equation:
k = (L_Flash − L_Non-Flash) / L_Non-Flash
where L_Flash is the average luminance of an object in the flash image and L_Non-Flash is the average luminance of the corresponding object in the non-flash image. If the value of k>0, the object is reflective, and if k<0, the object is not reflective; the latter situation may occur due to interference or noise.
For each unlabeled object, i.e., an object which does not intersect or lie within the region 82, having the same color combination as an identified foreground object, its average reflective characteristic k is compared with a threshold value k_th derived from that of the associated foreground object, step 970. So, for example, an unlabeled object having the same color combination as the shirt object (A) is compared against a threshold derived from the reflective characteristic of object (A).
Thus, in the present embodiment, if the unlabeled object has an average reflective characteristic k greater than approximately 70% of that of the associated foreground object, it is identified as a foreground object, step 980. Otherwise, it is identified as a background object, step 990.
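Steps 960 to 990 might be sketched as follows, using the reflective characteristic k as reconstructed above; the fallback of averaging over foreground objects of matching color is a simplification of the "most similar color combinations" estimate described in the next paragraph:

```python
import numpy as np

def classify_by_reflectance(L_flash, L_nonflash, labels, foreground_ids,
                            color_peak, ratio=0.7):
    """Steps 960-990. `labels` is an object map over the aligned images,
    `foreground_ids` the objects intersecting the region 82, and
    `color_peak` maps each object label to its color peak so unlabeled
    objects are matched to foreground objects of the same color."""
    def k_of(obj):
        region = labels == obj
        lf, ln = L_flash[region].mean(), L_nonflash[region].mean()
        return (lf - ln) / max(ln, 1e-6)       # k > 0: reflective object

    k_ref = {obj: k_of(obj) for obj in foreground_ids}      # step 960
    foreground = set(foreground_ids)
    for obj in np.unique(labels):
        if obj == 0 or obj in foreground:
            continue
        matches = [k for o, k in k_ref.items()
                   if color_peak[o] == color_peak[obj]]     # step 970
        k_th = ratio * (np.mean(matches) if matches
                        else np.mean(list(k_ref.values())))
        if k_of(obj) >= k_th:
            foreground.add(obj)    # step 980: designated foreground
        # otherwise the object remains background, step 990
    return foreground
```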
In the case where an unlabeled object comprises pixels having a color combination that does not correspond to the color combination of any of the identified foreground objects, the threshold value k_th for objects of that color may be estimated as a function of the reflective characteristic(s) of identified foreground objects, e.g. objects A . . . D, having the most similar color combinations.
In the embodiments described above, objects are designated as foreground or background as whole regions. However, objects which only partially intersect the region 82 may instead be processed as follows.
Thus, a sub-region of each intersecting object lying wholly within the region 82 is confirmed as a foreground object. A reflective characteristic is then calculated for each pixel of the object lying outside the region 82. Growing out from the region 82, object pixels neighboring the sub-region are compared, pixel by pixel, with the reflective characteristic k of the object sub-region within the region 82. Again, where a pixel value is above a threshold proportion of the reflective characteristic k, say 70%, it is confirmed as being a foreground pixel. The sub-region is therefore grown either until all pixels of the object are confirmed as foreground or until all pixels neighboring the growing sub-region are classed as background. Smoothing and hole filling within the grown region may then follow before the foreground/background map is finalized.
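A sketch of this pixel-by-pixel growing, assuming boolean masks for the confirmed sub-region and for the whole object, a per-pixel reflectance map, and the sub-region's reference characteristic:

```python
import numpy as np

def grow_foreground(confirmed, obj_mask, k_pixel, k_region, ratio=0.7):
    """Grow a confirmed foreground sub-region outwards through its object:
    neighboring object pixels are absorbed while their per-pixel
    reflectance exceeds `ratio` (say 70%) of the sub-region's reflective
    characteristic `k_region`. All masks are boolean arrays."""
    grown = confirmed.copy()
    while True:
        # 4-connected neighbors of the current region...
        edge = np.zeros_like(grown)
        edge[1:, :] |= grown[:-1, :]
        edge[:-1, :] |= grown[1:, :]
        edge[:, 1:] |= grown[:, :-1]
        edge[:, :-1] |= grown[:, 1:]
        # ...restricted to not-yet-classified pixels of the same object.
        frontier = edge & obj_mask & ~grown
        accept = frontier & (k_pixel >= ratio * k_region)
        if not accept.any():   # frontier exhausted or all below threshold
            return grown
        grown |= accept
```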
This application claims the benefit of priority under 35 USC §119 to U.S. provisional patent application No. 60/746,363, filed May 3, 2006.
Number | Name | Date | Kind |
---|---|---|---|
4683496 | Tom | Jul 1987 | A |
5046118 | Ajewole et al. | Sep 1991 | A |
5063448 | Jaffray et al. | Nov 1991 | A |
5086314 | Aoki et al. | Feb 1992 | A |
5109425 | Lawton | Apr 1992 | A |
5130935 | Takiguchi | Jul 1992 | A |
5164993 | Capozzi et al. | Nov 1992 | A |
5231674 | Cleaveland et al. | Jul 1993 | A |
5329379 | Rodriguez et al. | Jul 1994 | A |
5500685 | Kokaram | Mar 1996 | A |
5504846 | Fisher | Apr 1996 | A |
5534924 | Florant | Jul 1996 | A |
5594816 | Kaplan et al. | Jan 1997 | A |
5621868 | Mizutani et al. | Apr 1997 | A |
5724456 | Boyack et al. | Mar 1998 | A |
5812787 | Astle | Sep 1998 | A |
5844627 | May et al. | Dec 1998 | A |
5878152 | Sussman | Mar 1999 | A |
5880737 | Griffin et al. | Mar 1999 | A |
5949914 | Yuen | Sep 1999 | A |
5990904 | Griffin | Nov 1999 | A |
6005959 | Mohan et al. | Dec 1999 | A |
6008820 | Chauvin et al. | Dec 1999 | A |
6018590 | Gaborski | Jan 2000 | A |
6061476 | Nichani | May 2000 | A |
6069635 | Suzuoki et al. | May 2000 | A |
6069982 | Reuman | May 2000 | A |
6122408 | Fang et al. | Sep 2000 | A |
6125213 | Morimoto | Sep 2000 | A |
6198505 | Turner et al. | Mar 2001 | B1 |
6240217 | Ercan et al. | May 2001 | B1 |
6243070 | Hill et al. | Jun 2001 | B1 |
6282317 | Luo et al. | Aug 2001 | B1 |
6292194 | Powell, III | Sep 2001 | B1 |
6295367 | Crabtree et al. | Sep 2001 | B1 |
6326964 | Snyder et al. | Dec 2001 | B1 |
6407777 | DeLuca | Jun 2002 | B1 |
6483521 | Takahashi et al. | Nov 2002 | B1 |
6526161 | Yan | Feb 2003 | B1 |
6535632 | Park et al. | Mar 2003 | B1 |
6538656 | Cheung et al. | Mar 2003 | B1 |
6577762 | Seeger et al. | Jun 2003 | B1 |
6577821 | Malloy Desormeaux | Jun 2003 | B2 |
6593925 | Hakura et al. | Jul 2003 | B1 |
6631206 | Cheng et al. | Oct 2003 | B1 |
6670963 | Osberger | Dec 2003 | B2 |
6678413 | Liang et al. | Jan 2004 | B1 |
6683992 | Takahashi et al. | Jan 2004 | B2 |
6744471 | Kakinuma et al. | Jun 2004 | B1 |
6756993 | Popescu et al. | Jun 2004 | B2 |
6781598 | Yamamoto et al. | Aug 2004 | B1 |
6803954 | Hong et al. | Oct 2004 | B1 |
6804408 | Gallagher et al. | Oct 2004 | B1 |
6807301 | Tanaka | Oct 2004 | B1 |
6836273 | Kadono | Dec 2004 | B1 |
6842196 | Swift et al. | Jan 2005 | B1 |
6850236 | Deering | Feb 2005 | B2 |
6930718 | Parulski et al. | Aug 2005 | B2 |
6952225 | Hyodo et al. | Oct 2005 | B1 |
6956573 | Bergen et al. | Oct 2005 | B1 |
6987535 | Matsugu et al. | Jan 2006 | B1 |
6989859 | Parulski | Jan 2006 | B2 |
6990252 | Shekter | Jan 2006 | B2 |
7013025 | Hiramatsu | Mar 2006 | B2 |
7027643 | Comaniciu et al. | Apr 2006 | B2 |
7035477 | Cheatle | Apr 2006 | B2 |
7042505 | DeLuca | May 2006 | B1 |
7054478 | Harman | May 2006 | B2 |
7064810 | Anderson et al. | Jun 2006 | B2 |
7081892 | Alkouh | Jul 2006 | B2 |
7102638 | Raskar et al. | Sep 2006 | B2 |
7103227 | Raskar et al. | Sep 2006 | B2 |
7103357 | Kirani et al. | Sep 2006 | B2 |
7130453 | Kondo et al. | Oct 2006 | B2 |
7149974 | Girgensohn et al. | Dec 2006 | B2 |
7206449 | Raskar et al. | Apr 2007 | B2 |
7218792 | Raskar et al. | May 2007 | B2 |
7295720 | Raskar | Nov 2007 | B2 |
7317843 | Sun et al. | Jan 2008 | B2 |
7359562 | Raskar et al. | Apr 2008 | B2 |
7394489 | Yagi | Jul 2008 | B2 |
7469071 | Drimbarean et al. | Dec 2008 | B2 |
7574069 | Setlur et al. | Aug 2009 | B2 |
7593603 | Wilensky | Sep 2009 | B1 |
7613332 | Enomoto et al. | Nov 2009 | B2 |
7630006 | DeLuca et al. | Dec 2009 | B2 |
7657060 | Cohen et al. | Feb 2010 | B2 |
7702149 | Ohkubo et al. | Apr 2010 | B2 |
7747071 | Yen et al. | Jun 2010 | B2 |
8045801 | Kanatsu | Oct 2011 | B2 |
20010000710 | Queiroz et al. | May 2001 | A1 |
20010012063 | Maeda | Aug 2001 | A1 |
20010017627 | Marsden et al. | Aug 2001 | A1 |
20010053292 | Nakamura | Dec 2001 | A1 |
20020028014 | Ono | Mar 2002 | A1 |
20020031258 | Namikata | Mar 2002 | A1 |
20020076100 | Luo | Jun 2002 | A1 |
20020080261 | Kitamura et al. | Jun 2002 | A1 |
20020080998 | Matsukawa et al. | Jun 2002 | A1 |
20020089514 | Kitahara et al. | Jul 2002 | A1 |
20020093670 | Luo et al. | Jul 2002 | A1 |
20020180748 | Popescu et al. | Dec 2002 | A1 |
20020191860 | Cheatle | Dec 2002 | A1 |
20030038798 | Besl et al. | Feb 2003 | A1 |
20030039402 | Robins et al. | Feb 2003 | A1 |
20030052991 | Stavely et al. | Mar 2003 | A1 |
20030063795 | Trajkovic et al. | Apr 2003 | A1 |
20030086134 | Enomoto | May 2003 | A1 |
20030086164 | Abe | May 2003 | A1 |
20030091225 | Chen | May 2003 | A1 |
20030103159 | Nonaka | Jun 2003 | A1 |
20030123713 | Geng | Jul 2003 | A1 |
20030161506 | Velazquez et al. | Aug 2003 | A1 |
20030169944 | Dowski et al. | Sep 2003 | A1 |
20030184671 | Robins et al. | Oct 2003 | A1 |
20040047513 | Kondo et al. | Mar 2004 | A1 |
20040080623 | Cleveland et al. | Apr 2004 | A1 |
20040109614 | Enomoto et al. | Jun 2004 | A1 |
20040145659 | Someya et al. | Jul 2004 | A1 |
20040189822 | Shimada | Sep 2004 | A1 |
20040201753 | Kondo et al. | Oct 2004 | A1 |
20040208385 | Jiang | Oct 2004 | A1 |
20040223063 | DeLuca et al. | Nov 2004 | A1 |
20050017968 | Wurmlin et al. | Jan 2005 | A1 |
20050031224 | Prilutsky et al. | Feb 2005 | A1 |
20050041121 | Steinberg et al. | Feb 2005 | A1 |
20050058322 | Farmer et al. | Mar 2005 | A1 |
20050140801 | Prilutsky et al. | Jun 2005 | A1 |
20050213849 | Kreang-Arekul et al. | Sep 2005 | A1 |
20050243176 | Wu et al. | Nov 2005 | A1 |
20050271289 | Rastogi | Dec 2005 | A1 |
20060008171 | Petschnigg et al. | Jan 2006 | A1 |
20060039690 | Steinberg et al. | Feb 2006 | A1 |
20060098889 | Luo et al. | May 2006 | A1 |
20060104508 | Daly et al. | May 2006 | A1 |
20060153471 | Lim et al. | Jul 2006 | A1 |
20060171587 | Kanatsu | Aug 2006 | A1 |
20060181549 | Alkouh | Aug 2006 | A1 |
20060193509 | Criminisi et al. | Aug 2006 | A1 |
20060280361 | Umeda | Dec 2006 | A1 |
20060280375 | Dalton et al. | Dec 2006 | A1 |
20060285754 | Steinberg et al. | Dec 2006 | A1 |
20070098260 | Yen et al. | May 2007 | A1 |
20070237355 | Song et al. | Oct 2007 | A1 |
20080112599 | Capata et al. | May 2008 | A1 |
20080219518 | Steinberg et al. | Sep 2008 | A1 |
20110222730 | Steinberg et al. | Sep 2011 | A1 |
Number | Date | Country |
---|---|---|
1367538 | Dec 2003 | EP |
1800259 | Jan 2008 | EP |
2227002 | Sep 2008 | EP |
1918872 | Jul 2009 | EP |
2165523 | Apr 2011 | EP |
2281879 | Nov 1990 | JP |
4127675 | Apr 1992 | JP |
6014193 | Jan 1994 | JP |
8223569 | Aug 1996 | JP |
10285611 | Oct 1998 | JP |
2000-102040 | Apr 2000 | JP |
2000-299789 | Oct 2000 | JP |
2001-101426 | Apr 2001 | JP |
2001-223903 | Aug 2001 | JP |
2001-229390 | Aug 2001 | JP |
2002-112095 | Apr 2002 | JP |
2002-373337 | Dec 2002 | JP |
2003-058894 | Feb 2003 | JP |
2003-281526 | Oct 2003 | JP |
2004-064454 | Feb 2004 | JP |
2004-166221 | Jun 2004 | JP |
2004-185183 | Jul 2004 | JP |
2004-236235 | Aug 2004 | JP |
2005-004799 | Jan 2005 | JP |
2005-229198 | Aug 2005 | JP |
2006-024206 | Jan 2006 | JP |
2006-080632 | Mar 2006 | JP |
2006-140594 | Jun 2006 | JP |
WO-9426057 | Nov 1994 | WO |
WO 02052839 | Jul 2002 | WO |
WO-02089046 | Nov 2002 | WO |
WO03019473 | Mar 2003 | WO |
WO-2004017493 | Feb 2004 | WO |
WO-2004036378 | Apr 2004 | WO |
WO-2004059574 | Jul 2004 | WO |
WO2005015896 | Feb 2005 | WO |
WO2005076217 | Aug 2005 | WO |
WO2005076217 | Oct 2005 | WO |
WO 2005099423 | Oct 2005 | WO |
WO-2005101309 | Oct 2005 | WO |
WO2005076217 | Apr 2006 | WO |
2006045441 | May 2006 | WO |
2006050782 | May 2006 | WO |
WO-2007025578 | Mar 2007 | WO |
WO-2007073781 | Jul 2007 | WO |
2007093199 | Aug 2007 | WO |
WO-2007095477 | Aug 2007 | WO |
2007093199 | Mar 2008 | WO |
WO2008109708 | Sep 2008 | WO |
WO2010017953 | Feb 2010 | WO |
Entry |
---|
Swain et al., “Defocus-based image segmentation,” May 1995, IEEE, vol. 4, pp. 2403-2406. |
Ashikhmin, Michael, “A tone mapping algorithm for high contrast images,” ACM International Conference Proceeding Series; vol. 28, Proceedings of the 13th Eurographics workshop on Rendering, pp. 145-156, Year of Publication: 2002, ISBN:1-58113-534-3. http://portal.acm.org/citation.cfm?id=581916&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269. |
Barreiro, R.B., et al., “Effect of component separation on the temperature distribution of the cosmic microwave background,” Monthly Notices of the Royal Astronomical Society, 2006, vol. 368, No. 1 (May 1), p. 226-246. Current Contents Search®. Dialog® File No. 440 Accession No. 23119677. |
Benedek, C., et al., “Markovian framework for foreground-background-shadow separation of real world video scenes,” 7th Asian Conference on Computer Vision, Proceedings v 3851 LNCS 2006., 2006. Ei Compendex®. Dialog® File No. 278 Accession No. 11071345. |
Eriksen, H.K., et al., “Cosmic microwave background component separation by parameter estimation,” Astrophysical Journal, vol. 641, No. 2, pt. 1, p. 665-82 (2006). INSPEC. Dialog® File No. 2 Accession No. 9947674. |
Haneda, E., “Color Imaging XII: Processing, Hardcopy, and Applications,” in Proceedings of Society of Optical Engineers, vol. 6493, (Jan. 29, 2007). http://scitation.aip.org/vsearch/servlet/VerityServlet?KEY=FREESR&smode=strresultssort=chron&maxdisp=25&threshold=0&possible1=separation&possible1zone=article&bool1and&possible4=foreground&possible4zone=article&bool4=and&possible2=background&possible2zone=article&fromyear=1893&toyear=2007&OUTLOG=NO&viewabs=PSISDG&key=DISPLAY&docID=1&page=1&chapter=0. |
Homayoun Kamkar-Parsi, A., “A multi-criteria model for robust foreground extraction,” Proceedings of the third ACM international workshop on Video surveillance & sensor networks, pp. 67-70, Year of Publication: 2005, ISBN:1-59593-242-9. ACM Press. http://portal.acm.org/citation.cfm?id=1099410&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269. |
Jin, J., “Medical Imaging,” Proceedings of SPIE—vol. 2710, 1996, Image Processing, Murray H. Loew, Kenneth M. Hanson, Editors, Apr. 1996, pp. 864-868. http://scitation.aip.org/vsearch/servlet/VerityServlet?KEY=FREESR&smode=strresults&sort=chron&maxdisp=25&threshold=0&possible1=separation&possible1zone=article&bool1=and&possible4=foreground&possible4zone=article&bool4=and&possible2=background&possible2zone=article&fromyear=1893&toyear=2007&OUTLOG=NO&viewabs=PSISDG&key=DISPLAY&docID=14&page=1&chapter=0. |
Jingyi, Yu, et al., “Real-time reflection mapping with parallax,” Symposium on Interactive 3D Graphics, Proceedings of the 2005 symposium on Interactive 3D graphics and games, pp. 133-138, 2005, ISBN: 1-59593-013-2. ACM Press. http//portal.acm.org/citation.cfm?id=1053449&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269. |
Leray, et al., “Spatially distributed two-photon excitation fluorescence in scattering media: Experiments and time-resolved Monte Carlo simulations,” Optics Communications vol. 272, Issue 1, Apr. 1, 2007, pp. 269-278. http://www.sciencedirect.com/science?—ob=ArticleURL&—udi=B6TVF-4MFCKHC-7&—user=10&—coverDate=04%2F01%2F2007&—alid=550910686&—rdoc=1&—fmt=summary&—orig=search&—cdi=5533&—sort=d&—docanchor=&view=c&—ct=19&—acct=C000050221&—version=1&—urlVersion=0&—userid=10&md5=ee8131d7f6479ae973e3291a94d997d1. |
Liyuan Li, et al., “Foreground object detection from videos containing complex background,” Proceedings of the eleventh ACM international conference on Multimedia, pp. 2-10, 2003 ISBN: 1-58113-722-2. ACM Press. http://portal.acm.org/citation.cfm?id=957017&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269. |
Neri, A., et al., “Automatic moving object and background separation,” Signal Processing v 66 n 2 Apr. 1998. p. 219-232. Ei Compendex®. Dialog® File No. 278 Accession No. 8063256. |
Saito, T., et al., “Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process,” Proceedings of Society for Optical Engineering Computational Imaging IV—Electronic Imaging v 6065 2006. Ei Compendex®. Dialog® File No. 278 Accession No. 10968692. |
Simard, Patrice Y., et al., “A foreground/background separation algorithm for image compression,” Data Compression Conference Proceedings 2004. Ei Compendex®. Dialog® File No. 278 Accession No. 9897343. |
Television Asia, “Virtual sets and chromakey update: superimposing a foreground captured by one camera onto a background from another dates back to film days, but has come a long way since,” Television Asia, vol. 13, No. 9, p. 26, Nov. 2006. Business & Industry®. Dialog® File No. 9 Accession No. 4123327. |
Tzovaras, D., et al., “Three-dimensional camera motion estimation and foreground/background separation for stereoscopic image sequences,” Optical Engineering, vol. 36, No. 2, p. 574-9. Feb. 1997. INSPEC. Dialog® File No. 2 Accession No. 6556637. |
Utpal, G., et al., “On foreground-background separation in low quality document images,” International Journal on Document Analysis and Recognition, vol. 8, No. 1, p. 47-63. INSPEC. Dialog® File No. 2 Accession No. 9927003. |
“Rational Filters for Passive Depth from Defocus” by Masahiro Watanabe and Shree K. Nayar (1995). |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, PCT/US07/61956, dated Mar. 14, 2008, 9 pages. |
PCT International Search Report, PCT/US2007/068190, dated Sep. 29, 2008, 4 pages. |
Adelson, E.H., “Layered Representations for Image Coding, http://web.mit.edu/persci/people/adelson/pub.sub.—pdfs/layers91.pdf.”, Massachusetts Institute of Technology, 1991, 20 pages. |
Aizawa, K. et al., “Producing object-based special effects by fusing multiple differently focused images, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, IEEE transactions on circuits and systems for video technology, 2000, pp. 323-330, vol. 10—Issue 2. |
Beier, Thaddeus, "Feature-Based Image Metamorphosis," In Siggraph '92, Silicon Graphics Computer Systems, 2011 Shoreline Blvd, Mountain View CA 94043, http://www.hammerhead.com/thad/thad.html. |
Boutell, M. et al., “Photo classification by integrating image content and camera metadata”, Pattern Recognition, Proceedings of the 17th International Conference, 2004, pp. 901-904, vol. 4. |
Chen, Shenchang et al., “View interpolation for image synthesis, ISBN:0-89791-601-8, http://portal.acm.org/citation.cfm?id=166153&coli=GUIDE&dl=GUIDE&CFID=680-9268&CFTOKEN=82843223.”, International Conference on Computer Graphics and Interactive Techniques, Proceedings of the 20th annual conference on Computer graphics and interactive techniques, 1993, pp. 279-288, ACM Press. |
Eisemann, E. et al., “Flash Photography Enhancement via Intrinsic Relighting, ACM Transactions on URL: http://graphics.stanford.edu/{georgp/vision.htm”, 2002, pp. 1-12. |
European Patent Office, Communication pursuant to Article 94(3) EPC for EP Application No. 06776529.7, dated Jan. 30, 2008, 3 pages. |
European Patent Office, extended European Search Report for EP application No. 07024773.9, dated Jun. 3, 2008, 5 pages. |
European Patent Office, extended European Search Report for EP application No. 07756848.3, dated May 27, 2009, 4 pages. |
Favaro, Paolo, “Depth from focus/defocus, http://homepages.inf.ed.ac.uk/rbf/Cvonline/LOCAL—COPIES/FAVARO1/dfdtutorial.html.”, 2002. |
Final Office Action mailed Feb. 4, 2009, for U.S. Appl. No. 11/319,766, filed Dec. 27, 2005. |
Final Office Action mailed Sep. 18, 2009, for U.S. Appl. No. 11/319,766, filed Dec. 27, 2005. |
Hashi Yuzuru et al., “A New Method to Make Special Video Effects. Trace and Emphasis of Main Portion of Images, Japan Broadcasting Corp., Sci. and Techical Res. Lab., JPN, Eizo Joho Media Gakkai Gijutsu Hokoku, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, 2003, pp. 23-26, vol. 27. |
Heckbert, Paul S., “Survey of Texture Mapping, http://citeseer.ist.psu.edu/135643.html”, Proceedings of Graphics Interface '86. IEEE Computer Graphics and Applications, 1986, pp. 56-67 & 207-212. |
Jin, Hailin et al., “A Variational Approach to Shape from Defocus, {ECCV} (2), http://citeseerist.psu.edu/554899.html”, 2002, pp. 18-30. |
Kelby, Scott, “Photoshop Elements 3: Down & Dirty Tricks, ISBN: 0-321-27835-6, One Hour Photo: Portrait and studio effects”, 2004, Chapter 1, Peachpit Press. |
Kelby, Scott, “The Photoshop Elements 4 Book for Digital Photographers, XP002406720, ISBN: 0-321-38483-0, Section: Tagging Images of People (Face Tagging)”, 2005, New Riders. |
Khan, E.A., “Image-based material editing, http://portal.acm.org/citation.cfm?id=1141937&coll=GUIDE&dl=GUIDE&CFID=68-09268&CFTOKEN=82843223”, International Conference on Computer Graphics and Interactive Techniques, 2006, pp. 654 663, ACM Press. |
Komatsu, Kunitoshi et al., “Design of Lossless Block Transforms and Filter Banks for Image Coding, http://citeseerist.psu.edu/komatsu99design.html”. |
Leubner, Christian, “Multilevel Image Segmentation in Computer-Vision Systems, http://citeseerist.psu.edu/565983.html”. |
Li, Han et al., “A new model of motion blurred images and estimation of its parameter”, Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP '86, 1986, pp. 2447-2450, vol. 11. |
Li, S. et al., “Multifocus image fusion using artificial neural networks, DOI=http://dx.doi.org/10.1016/S0167-8655(02)00029-6”, Pattern Recogn. Lett, 2002, pp. 985-997, vol. 23. |
McGuire, M. et al., “Defocus video matting, DOI=http://doi.acm.org/10.1145/1073204.1073231”, ACM Trans. Graph., 2005, pp. 567-576, vol. 24—Issue 3. |
Non-Final Office Action mailed Aug. 6, 2008, for U.S. Appl. No. 11/319,766, filed Dec. 27, 2005. |
Non-Final Office Action mailed Jul. 13, 2009, for U.S. Appl. No. 11/421,027, filed May 30, 2006. |
Non-Final Office Action mailed Mar. 10, 2009, for U.S. Appl. No. 11/217,788, filed Aug. 30, 2005. |
Non-Final Office Action mailed Nov. 25, 2008, for U.S. Appl. No. 11/217,788, filed Aug. 30, 2005. |
Office Action in co-pending European Application No. 06 776 529.7-2202, entitled “Communication Pursuant to Article 94(3) EPC”, dated Sep. 30, 2008, 3 pages. |
Owens, James, “Method for depth of field (DOE) adjustment using a combination of object segmentation and pixel binning”, Research Disclosure, 2004, vol. 478, No. 97, Mason Publications. |
Pavlidis Tsompanopoulos Papamarkos, “A Multi-Segment Residual Image Compression Technique”http://citeseerist.psu.edu/554555.html. |
PCT International Preliminary Report on Patentability, for PCT Application No. PCT/EP2006/007573, dated Jul. 1, 2008, 9 pages. |
PCT International Preliminary Report on Patentability, for PCT Application No. PCT/EP2006/008229, dated Aug. 19, 2008, 15 pages. |
PCT International Preliminary Report on Patentability, for PCT Application No. PCT/US2007/061956, dated Oct. 27, 2008, 3 pages. |
PCT International Preliminary Report on Patentability, for PCT Application No. PCT/US2007/068190, dated Nov. 4, 2008, 8 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration (PCT/EP2006/007573), dated Nov. 27, 2006. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration for PCT application No. PCT/EP2006/008229, dated Jan. 14, 2008, 18 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2006/005109, Oct. 4, 2006, 14 pages. |
Petschnigg, G. et al., “Digital Photography with Flash and No Flash Image Pairs”, The Institution of Electrical Engineers, 2004, pp. 664-672. |
Potmesil, Michael et al., “A lens and aperture camera model for synthetic image generation, ISBN:0-89791-045-1, http://portal.acm.org/citation.cfm?id=806818&coli=GUIDE&dl=GUIDE&CFID=680-9268&CFTOKEN=82843222.”, International Conference on Computer Graphics and Interactive Techniques, Proceedings of the 8th annual conference on Computer graphics and interactive techniques, 1981, pp. 297-305, ACM Press. |
Rajagopalan, A.N. et al., “Optimal recovery of depth from defocused images using an mrf model, http://citeseer.ist.psu.edu/rajagopalan98optimal.html”, In Proc. International Conference on Computer Vision, 1998, pp. 1047-1052. |
Reinhard, E. et al., “Depth-of-field-based alpha-matte extraction, http://doi.acm.org/10.1145/1080402.1080419”, In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, 2005, pp. 95-102, vol. 95. |
Sa, A. et al., “Range-Enhanced Active Foreground Extraction, XP010851333”, Image Processing, IEEE International Conference, 2005, pp. 81-84. |
Schechner, Y.Y. et al., "Separation of transparent layers using focus, http://citeseer.ist.psu.edu/article/schechner98separation.html", Proc. ICCV, 1998, pp. 1061-1066. |
Serrano, N. et al., “A computationally efficient approach to indoor/outdoor scene classification, XP010613491, ISBN: 978-0-7695-1695-0.”, Pattern Recognition, 2002 Proceedings. 16th Intl Conference, IEEE Comput. Soc, 2002, pp. 146-149, vol. 4. |
Subbarao, M. et al., “Depth from Defocus: A Spatial Domain Approach, Technical Report No. 9212.03, http://citeseerist.psu.edu/subbarao94depth.html”, Computer Vision Laboratory, SUNY. |
Subbarao, Murali et al., “Noise Sensitivity Analysis of Depth-from-Defocus by a Spatial-Domain Approach, http://citeseer.ist.psu.edu/subbarao97noise.html”. |
Sun, J. et al., “Flash Matting”, ACM Transactions on Graphics, 2006, pp. 772-778, vol. 25—Issue 3. |
Swain C. and Chen T., "Defocus-based image segmentation," In Proceedings ICASSP-95, vol. 4, pp. 2403-2406, Detroit, MI, May 1995, IEEE, http://citeseer.ist.psu.edu/swain95defocusbased.html. |
Szummer, M. et al., “Indoor-outdoor image classification”, Content-Based Access of Image and Video Database, Proceedings., IEEE International Workshop, IEEE Comput. Soc, 1998, pp. 42-51. |
U.S. Appl. No. 10/772,767, filed Feb. 4, 2004, by inventors Michael J. DeLuca, et al. |
Ziou, D. et al., “Depth from Defocus Estimation in Spatial Domain, http://citeseer.ist.psu.edu/ziou99depth.html”, CVIU, 2001, pp. 143-165, vol. 81—Issue 2. |
Corinne Vachier, Luc Vincent, Valuation of Image Extrema Using Alternating Filters by Reconstruction, Proceedings of the SPIE—The International Society for Optical Engineering, 1995, vol. 2568, pp. 94-103. |
EPO Communication pursuant to Article 94(3) EPC, for European patent application No. 05707215.9, report dated Sep. 14, 2010, 11 Pages. |
EPO Communication under Rule 71(3) EPC, for European patent application No. 09706058.6, report dated Oct. 4, 2010, 6 Pages. |
EPO Extended European Search Report, for European application No. 10164430.0, dated Sep. 6, 2010, including the extended European search report, pursuant to Rule 62 EPC, the European search report (R. 61 EPC) or the partial European search report/declaration of no search (R. 63 EPC) and the European search pinion, 8 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/US2008/055964, dated Jul. 30, 2008, 8 pages. |
PCT Written Opinion of the International Searching Authority, for PCT Application No. PCT/US2008/055964, dated Jul. 24, 2008, 5 pages. |
PCT International Preliminary Report on Patentability for PCT Application No. PCT/US2008/055964, dated Sep. 8, 2009, 6 pages. |
Cuiping Zhang and Fernand S. Cohen, Component-Based Active Appearance Models for Face Modelling, D. Zhang and A.K. Jain (Eds.): ICB 2006, LNCS 3832, pp. 206-212, 2005, Springer-Verlag Berlin Heidelberg 2005. |
Fundus Photograph Reading Center—Modified 3-Standard Field Color Fundus Photography and Fluorescein Angiography Procedure, Retrieved from the Internet on Oct. 19, 2011, URL: http://eyephoto.ophth.wisc.edu/Photography/Protocols/mod3-ver1.4.html, 3 Pages. |
Anatomy of the Eye, Retrieved from the Internet on Oct. 19, 2011, URL: http://www.stlukeseye.com/anatomy, 3 pages. |
Fovea centralis, Retrieved from the Internet on Oct. 19, 2011, URL: http://en.wikipedia.org/wiki/Fovea, 4 pages. |
Non-Final Office Action mailed Apr. 28, 2011, for U.S. Appl. No. 11/936,085, filed Nov. 7, 2007. |
Non-Final Office Action mailed Apr. 28, 2011, for U.S. Appl. No. 11/937,377, filed Nov. 8, 2007. |
Non-Final Office Action mailed Mar. 31, 2011, for U.S. Appl. No. 12/551,312, filed Aug. 31, 2009. |
Non-Final Office Action mailed May 2, 2011, for U.S. Appl. No. 12/824,214, filed Jun. 27, 2010. |
Notice of Allowance mailed Feb. 4, 2011, for U.S. Appl. No. 12/611,387, filed Nov. 3, 2009. |
Notice of Allowance mailed Mar. 3, 2011, for U.S. Appl. No. 12/543,405, filed Aug. 18, 2009. |
Final Office Action mailed Feb. 16, 2011, for U.S. Appl. No. 12/543,405, filed Aug. 18, 2009. |
Final Office Action mailed Jan. 5, 2011, for U.S. Appl. No. 12/611,387, filed Nov. 3, 2009. |
Notice of Allowance mailed May 12, 2011, for U.S. Appl. No. 12/043,025, filed Mar. 5, 2008. |
Final Office Action mailed Feb. 2, 2011, for U.S. Appl. No. 12/613,457, filed Nov. 5, 2009. |
Notice of Allowance mailed Mar. 17, 2011, for U.S. Appl. No. 12/042,335, filed Mar. 5, 2008. |
Patent Abstracts of Japan, for Publication No. JP2002-247596, published Aug. 30, 2002, (Appl. No. 2001-044807), Program for Specifying Red Eye Area in Image, Image Processor and Recording Medium. 1 Page. |
Patent Abstracts of Japan, publication No. 2002-373337, publication date: Dec. 26, 2002, Method and Device for Changing Pixel Image Into Segment. |
Patent Abstracts of Japan, publication No. 2001-229390, publication date: Aug. 24, 2001, Device and Method for Processing Image, Recording Medium and Program. |
Patent Abstracts of Japan, publication No. 2003-058894, Publication Date: Feb. 28, 2003, Method and Device for Segmenting Pixeled Image. |
Patent Abstracts of Japan, publication No. 2004-236235, Publication Date: Aug. 19, 2004, Imaging Device. |
Patent Abstracts of Japan, publication No. 2005-004799, Publication Date: Jan. 6, 2005, Object Extraction Apparatus. |
Patent Abstracts of Japan, publication No. 2005-229198, Publication Date: Aug. 25, 2005, Image Processing Apparatus and Method, and Program. |
EPO Communication regarding the transmission of the European search report, including European search opinion, and Supplementary European search report, for European application No. 07797335.2, report dated Mar. 30, 2012, 3 Pages. |
Eisemann E., Durand F.: “Flash Photography Enhancement via Intrinsic Relighting” ACM Transactions on Graphics, vol. 23, No. 3, Aug. 12, 2004, pp. 673-678, XP002398968, DOI:http://dx.doi.org/10.1145/1015706.1015778. |
Braun M., Petschnigg G.: “Information Fusion of Flash and Non-Flash Images”[Online] Dec. 31, 2002, pp. 1-12, XP002398967 Retrieved from the Internet: URL:http://graphics.stanford.edu/˜georgp/vision.htm> [retrieved on Sep. 14, 2006]. |
Sun J., Li Y., Kang S. B., Shum H.-Y.: “Flash Matting”, ACM Transactions on Graphics, vol. 25, No. 3, Jul. 31, 2006, pp. 772-778, XP002398969, DOI: http://dx.doi.org/10.1145/1141911.1141954. |
Sa A, et al.: “Range-Enhanced Active Foreground Extraction” Image Processing, 2005. ICIP 2005. IEEE International Conference on Genova, Italy Sep. 11-14, 2005, Piscataway, NJ, USA,IEEE, Sep. 11, 2005, pp. 81-84, XP010851333 ISBN: 0-7803-9134-9, DOI: http://dx.doi.org/10.1109/ICIP.2005.1529996. |
Database Inspec [Online] The Institution of Electrical Engineers, Stevenage, GB; Aug. 2004, Petschnigg G et al: “Digital photography with flash and no-flash image pairs” XP002398974 Database accession No. 8265342 & ACM SIGGRAPH Aug. 8-12, 2004 Los Angeles, CA, USA, vol. 23, No. 3, Aug. 12, 2005, pp. 664-672, ACM Transactions on Graphics ACM USA ISSN: 0730-0301. |
Number | Date | Country | |
---|---|---|---|
20070269108 A1 | Nov 2007 | US |
Number | Date | Country | |
---|---|---|---|
60746363 | May 2006 | US |