Automatic detection and correction of non-red eye flash defects

Information

  • Patent Grant
  • Patent Number
    8,184,900
  • Date Filed
    Monday, August 20, 2007
  • Date Issued
    Tuesday, May 22, 2012
Abstract
A technique for detecting large and small non-red eye flash defects in an image is disclosed. The method comprises selecting pixels of the image which have a luminance above a threshold value and labeling neighboring selected pixels as luminous regions. A number of geometrical filters are applied to the luminous regions to remove false candidate luminous regions.
Description
BACKGROUND

1. Field of the Invention


The present invention relates to a system and method for automatically detecting and correcting non-red eye flash defects in an image, and in particular, white-eye flash defects.


2. Description of the Related Art


Published PCT patent application No. WO 03/071484 A1 to Pixology discloses a variety of techniques for red-eye detection and correction in digital images. In particular, Pixology discloses detecting the “glint” of a red-eye defect and then analyzing the surrounding region to determine the full extent of the eye defect.


U.S. Pat. No. 6,873,743 to Steinberg discloses a similar technique where initial image segmentation is based on both a red chrominance component and a luminance component.


White-eye defects (white eyes) do not present the red hue of the more common red eye defects. White eye occurs more rarely but under the same conditions as red eye, i.e. pictures taken with a flash in poor illumination conditions. In some cases, white eyes appear slightly golden by acquiring a yellowish hue.


There are two main types of white-eye: small and large. Small white eyes, as illustrated at reference 10 in FIG. 1, appear on far-distant subjects. They resemble luminous dots, and information in their neighborhood about other facial features is poor and therefore unreliable. Large white eyes 20, as illustrated in FIG. 2, are very well defined, and one can rely on information around them. In general, a white eye is large if it occupies a region including more than 150 pixels (for a 1600×1200 pixel image).


It is desired to have a technique for detecting and/or correcting white eye defects.


SUMMARY OF THE INVENTION

A method is provided for detecting non-red eye flash defects in an image. One or more luminous regions are defined in said image. Each region has at least one pixel having luminance above a luminance threshold value and a redness below a red threshold value. At least one filter is applied to a region corresponding to each luminous region. The roundness of a region corresponding to each luminous region is calculated. In accordance with the filtering and the roundness, it is determined whether the region corresponds to a non-red eye flash defect.


The defining may include selecting pixels of the image which have a luminance above a luminance threshold value and a redness below a red threshold value, and grouping neighboring selected pixels into the one or more luminous regions.


The method may further include correcting the non-red eye flash defect. One or more pixels of a detected defect region may be darkened. It may be determined not to darken pixels within a detected defect region having an intensity value greater than a threshold. The correcting, for each pixel of a detected defect region, may include setting its intensity value to an intensity value substantially equal to an average of the intensity values of pixels on a boundary of the defect region. An averaging filter may be applied to the region after the correcting.


The at least one filter may include any of: a size filter for determining if said region is greater than a size expected for said non-red flash defect; a filter for adding pixels to a luminous region which are located within the luminous region and which have luminance below the luminance threshold value or a redness above the red threshold value; a skin filter for determining if the region is located within a region of an image characteristic of skin; or a face filter for determining if the region is located within a region of an image characteristic of a face; or any combination thereof.


The roundness calculation may be performed by a filter to determine if the region is a non-red eye flash defect.


For each luminous region, a corresponding aggregated region may be determined by determining a seed pixel for the aggregated region within a luminous region, and iteratively adding non-valley neighbouring pixels to the aggregated region until no non-valley neighboring pixels adjacent to the aggregated region remain. The region corresponding to each luminous region may be the aggregated region corresponding to the luminous region. Contrast may be calculated for an aggregated region by computing a ratio of the average intensity values of pixels on a boundary of said aggregated region to the intensity value of the seed pixel. It may be determined whether each aggregated region has a yellowness above a yellow threshold value. An average saturation may be calculated for each aggregated region and it may be determined whether the saturation exceeds a threshold value.


The at least one filter may be applied to the luminous region. An intensity gradient may be calculated for each luminous region. A Hough transform may be performed on each intensity gradient. A most representative circle on each transformed region may be determined, and each circle verified.


A digital image processing device is also provided that is operable to detect non-red eye flash defects in an image, and which includes a controller that is arranged to define one or more luminous regions in the image. Each region has at least one pixel with a luminance above a luminance threshold value and a redness below a red threshold value. At least one filter is applied to a region corresponding to each luminous region. The roundness of a region corresponding to each luminous region is calculated. In accordance with the filtering and the roundness, it is determined whether the region corresponds to a non-red eye flash defect.


The device may be a digital camera or camera phone, a general purpose, portable or hand-held computer, a printer or a digital scanner, or any combination thereof.


A further method is provided for correcting a white eye defect in a digital image. The method includes acquiring a digital image, and determining a luminance of pixels within the digital image. Those pixels having a luminance above a certain threshold are selected as candidate regions for correction of a white eye defect. The selected pixels are filtered, and white eye defect is corrected for non-filtered pixels among the candidate regions.


The filtering may include geometrical filtering of pixels based on a size or shape or both of a selected pixel region. A selected pixel region may be above a threshold size, for example. Skin tone or human face filtering of pixels may be based on neighboring pixels to a selected pixel region not having a skin tone or other human face characteristic.


Roundness may be calculated for a selected pixel region, which may be corrected if the roundness does not exceed a certain threshold value of roundness.


The correcting may include calculating a contrast of a selected pixel region, which may be corrected if it does not exceed a certain threshold value of contrast.


The filtering may include checking whether an average saturation of a selected pixel region exceeds a certain threshold saturation, and correcting the selected pixel region only if the threshold is exceeded.


A bright pixel may be selected as a seed pixel. A candidate region may be determined by aggregating outwardly from the seed pixel to combine those pixels that are not valley points with the seed pixel as an aggregated region until a minimum number of non-valley neighbors are left or a threshold size is reached, or a combination thereof. The minimum number may be zero. Intensities of points in the aggregated region may be set to an average intensity of valley points delimiting the region. The aggregated region may be smoothed.


The filtering may include determining and analyzing edges of candidate regions. An intensity gradient may be computed for one or more candidate regions. The one or more candidate regions having intensity gradient computed may be limited to include only candidate regions having a minimum size. A Hough transformation may be performed on the intensity gradient image corresponding to each candidate region. Candidate circles produced by the Hough transformation may be determined, and the candidate region may be filtered and not corrected when the seed pixel is not included in the candidate circle or the average gradient along the circle is below a threshold, or both.


A candidate region that is merely a glint may be filtered.


The method may also include detecting and correcting a red eye defect within the digital image.


One or more digital storage devices are also provided having executable program code embodied therein for programming one or more processors to perform a method as described herein.





BRIEF DESCRIPTION OF THE DRAWINGS
Statement Regarding Color Drawings

This patent application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.


Embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:



FIGS. 1(a) and 1(b) (hereinafter “FIG. 1”) illustrate an image with small white-eye defects;



FIGS. 2(a) and 2(b) (hereinafter “FIG. 2”) illustrate an image with a large white-eye defect;



FIG. 3 depicts a flow diagram of the automatic detection and correction of small white-eye defects;



FIG. 4 depicts a flow diagram of the automatic detection and correction of large white-eye defects;



FIG. 5(a) illustrates a grey-level version of an image to be corrected;



FIG. 5(b) illustrates an edge-image of the image of FIG. 5(a), produced using a Sobel gradient; and



FIG. 5(c) illustrates a most representative circle of the image of FIG. 5(b) as produced using the Hough Transform.





DETAILED DESCRIPTION OF THE EMBODIMENTS

A method is provided for automatic detection and correction of small white eyes. A flowchart illustrating one embodiment is shown in FIG. 3. In this embodiment, an eye defect is said to be white or golden if it is bright (for example, in Lab color space, the local average luminance l is higher than 100) and not too saturated (for example, in Lab color space, the absolute values of the a and b parameters do not exceed 15).


Initially, the luminance of each pixel of an acquired image 250 to be corrected is determined, and a selection is made, 300, of all the pixels whose luminance is larger than a threshold value. In the preferred embodiment, the acquired image is in RGB space, the intensity is calculated as I = max(R, G), and the intensity threshold value is 220. Also, to avoid highly saturated colors (such as pure red or pure green), the saturation computed as abs(R−G) is compared to a threshold of 35, and the pixel is discarded if the saturation is higher. As such, only high-luminance pixels are retained, which provide seeds for a future region growing procedure.
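
By way of illustration only, a minimal sketch of this selection step might look as follows, assuming an 8-bit RGB image held in a NumPy array; the thresholds 220 and 35 are the values quoted above.

    import numpy as np

    def select_seed_pixels(rgb, intensity_thresh=220, saturation_thresh=35):
        """Return a boolean mask of high-luminance, low-saturation seed pixels."""
        r = rgb[..., 0].astype(np.int16)
        g = rgb[..., 1].astype(np.int16)
        intensity = np.maximum(r, g)        # I = max(R, G)
        saturation = np.abs(r - g)          # abs(R - G)
        return (intensity > intensity_thresh) & (saturation <= saturation_thresh)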


In alternative implementations, the luminance can be taken as the Y value for an image in YCbCr space. However, it will be appreciated that luminance can also be taken as the L value for an image in CIE-Lab space, or indeed any other suitable measure can be employed.


The selected pixels are then labeled 310. This involves identifying selected pixels neighboring other selected pixels and labeling them as luminous regions of connected selected pixels.
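
A sketch of this labeling step, grouping the selected pixels into connected luminous regions with SciPy's connected-component labelling; the 8-connected neighborhood is an assumption, as the text does not specify the connectivity.

    import numpy as np
    from scipy import ndimage

    def label_luminous_regions(mask):
        """Group neighboring selected pixels into labeled luminous regions."""
        structure = np.ones((3, 3), dtype=bool)     # 8-connected neighborhood (assumed)
        labels, num_regions = ndimage.label(mask, structure=structure)
        return labels, num_regions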


These luminous regions are then subjected to a plurality of geometrical filters 320 in order to remove luminous regions which are not suitable candidates for white eyes.


In the preferred embodiment, the regions first pass through a size filter 321 for removing regions whose size is greater than an upper limit. The upper limit is dependent on the size of the image, and in an embodiment, the upper limit is 100 pixels for a 2 megapixel image.


Filtered regions then pass through a shape filter 322, which removes all suitably sized luminous regions that are not deemed round enough. The roundness of a luminous region is assessed by comparing the ratio of the two variances along its two principal axes with a given threshold. Regions comprising less than approximately 5-10 pixels are exempt from the shape filter, as shape is irrelevant for such small regions.
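
One possible reading of this shape test is sketched below: the covariance of the region's pixel coordinates is computed and the ratio of its two eigenvalues (the variances along the principal axes) is compared with a threshold. The threshold value of 4 and the 10-pixel exemption are illustrative assumptions, not values taken from the text.

    import numpy as np

    def passes_shape_filter(region_mask, max_variance_ratio=4.0):
        """Reject regions whose principal-axis variance ratio marks them as elongated."""
        ys, xs = np.nonzero(region_mask)
        if len(ys) < 10:                     # very small regions are exempt from the shape filter
            return True
        cov = np.cov(np.vstack((xs, ys)))    # 2x2 covariance of the pixel coordinates
        eigvals = np.linalg.eigvalsh(cov)    # variances along the two principal axes (ascending)
        return eigvals[1] / max(eigvals[0], 1e-9) <= max_variance_ratio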


Filling factor 323 is a process that removes luminous regions bounding empty regions when certain criteria are met. In the preferred embodiment, the ratio of the area of the luminous region to the area of the bounded empty region is determined, and if this ratio is below a certain threshold, for example 0.5 in one embodiment, the luminous region is removed.


The remaining luminous regions are finally passed through a skin filter 324 and a face filter 325 to prevent white spots from being mis-detected as white eyes when they neighbor something that is not characteristic of human face or skin color.


Skin around a white-eye tends to be under-illuminated and to turn slightly reddish. A wide palette of skin prototypes is maintained for comparison with the pixels of the luminous regions. For each luminous region, the ratio of pixels characteristic of human skin to pixels not characteristic of human skin within a bounding box is computed and compared to a threshold value. In the preferred embodiment, the threshold is quite restrictive, at 85-90%.


Similarly, a wide palette of possible face colors is maintained for comparison with the pixels of the luminous regions. For each luminous region, the ratio of pixels characteristic of the human face to pixels not characteristic of the human face within a bounding box is computed and compared to a threshold value. In the preferred embodiment, the threshold is quite restrictive, at 85-90%. If the imposed percentage is met or exceeded, the region proceeds to the step of region growing, 330.


Region growing 330 begins by selecting the brightest pixel of each successfully filtered luminous region as a seed. Each neighbor of the seed is examined to determine whether or not it is a valley point. A valley point is a pixel that has at least two neighboring pixels with higher intensity values, located on both sides of the given pixel along one of its four main directions (horizontal, vertical and the two diagonals). As illustrated below in Table 1, the central pixel with intensity 99 is a valley point because it has two neighbors in a given direction that both have greater intensity values. Table 2 illustrates a central pixel, 99, which is not a valley point because there is no such saddle configuration along any of the four main directions.











TABLE 1

100   98  103
 70   99  104
104  105   98


TABLE 2

100   98  103
 70   99  104
 98  105   98









Starting from the seed, an aggregation process examines the seed pixel's neighbors and adds these to the aggregated region provided that they are not valley points. This examination and aggregation process continues until there are no non-valley neighbors left unchecked or until a maximum threshold size is reached. If a maximum threshold size is reached, the region is deemed not to be a white eye and no further testing is carried out on this region.
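
A compact sketch of the valley-point test and the growing loop described above, assuming a single-channel intensity image. The default maximum region size of 100 pixels follows the upper limit quoted earlier for the size filter; the actual growing limit is otherwise left to the caller.

    import numpy as np

    # Offsets of opposite neighbor pairs along the four main directions.
    _DIRECTIONS = [((0, -1), (0, 1)),     # horizontal
                   ((-1, 0), (1, 0)),     # vertical
                   ((-1, -1), (1, 1)),    # main diagonal
                   ((-1, 1), (1, -1))]    # anti-diagonal

    def is_valley_point(intensity, y, x):
        """True if, along some direction, both opposite neighbors are brighter."""
        h, w = intensity.shape
        for (dy1, dx1), (dy2, dx2) in _DIRECTIONS:
            y1, x1, y2, x2 = y + dy1, x + dx1, y + dy2, x + dx2
            if 0 <= y1 < h and 0 <= x1 < w and 0 <= y2 < h and 0 <= x2 < w:
                if intensity[y1, x1] > intensity[y, x] and intensity[y2, x2] > intensity[y, x]:
                    return True
        return False

    def grow_region(intensity, seed, max_size=100):
        """Aggregate non-valley neighbors outward from the seed pixel."""
        region, frontier = {seed}, [seed]
        while frontier and len(region) <= max_size:
            y, x = frontier.pop()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = (y + dy, x + dx)
                    if p in region or not (0 <= p[0] < intensity.shape[0]
                                           and 0 <= p[1] < intensity.shape[1]):
                        continue
                    if not is_valley_point(intensity, *p):
                        region.add(p)
                        frontier.append(p)
        return region    # the caller rejects the region if len(region) > max_size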


The outcome of this stage is a number of aggregated regions, which have been grown from the brightest points of each previously defined and filtered luminous region and aggregated according to the valley point algorithm. It will be seen, however, that in alternative implementations aggregation could take place before filtering, and so the filters 320 could be applied to aggregated regions rather than luminous regions.


A number of computations are then carried out on these aggregated regions 340.


The roundness of the aggregated region is calculated 341 as R = perimeter²/(4·π·area), where R ≥ 1. R = 1 for a perfect circle; thus the larger the R value, the more elongated the shape. White-eyes should be round and so must be characterized by a value of R that does not exceed a certain threshold value. In the preferred embodiment, the threshold value for R is a function of the eye's size: an eye is expected to be rounder as its size increases (the smaller the eye, the poorer the approximation of its shape by a circle, and the less accurate the circle representation in the discrete plane). Three thresholds are used in the preferred embodiment for a 2 megapixel image (these scale linearly for larger or smaller image sizes):


R=1.1 for large eye (i.e., size between 65 and 100 pixels—for a 2 megapixel image);


R=1.3 for medium-sized eyes (size between 25 and 65 pixels); and


R=1.42 for small eyes (size less than 25 pixels).
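
A sketch of this roundness test using the three thresholds listed above; how the perimeter is estimated (for example, by counting boundary pixels) is left open here, since the text does not fix an estimator.

    import numpy as np

    def roundness_ok(area, perimeter):
        """R = perimeter^2 / (4*pi*area); thresholds follow the 2 megapixel values above."""
        r = perimeter ** 2 / (4.0 * np.pi * area)
        if area >= 65:            # large eye (65-100 pixels)
            return r <= 1.1
        if area >= 25:            # medium-sized eye (25-65 pixels)
            return r <= 1.3
        return r <= 1.42          # small eye (fewer than 25 pixels)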


The contrast of the aggregated regions is then computed 342 as the ratio of the average intensity of the valley points delimiting the aggregated region to the maximum intensity value inside the region, i.e. the intensity of the brightest seed point from step 330. As small white eyes occur normally in low illumination conditions, the contrast should be high.


Most small white-eyes have a yellowish hue, meaning that they have at least some pixels characterized by high values of the b component in Lab space. Therefore the maximum value of b, b_max, is a good discriminator between actual white-eyes and, for instance, eye glints or other point-like luminous reflections.


In one embodiment, the pixels being processed are in RGB color space. In order to obtain a value for the b component, the aggregated regions are transformed from RGB color space to Lab color space.


The maximum value of the b component, b_max, in Lab colour space is then calculated and compared with a threshold, b_threshold, at 343. If b_max ≥ b_threshold, the average saturation in the region is then computed at 344. Otherwise, the aggregated region is deemed not to be a white-eye.


The average saturation in the aggregated region is computed at 344 as S = √(a² + b²). White-eyes are more colored than other regions, and as such the region's average saturation must exceed a threshold in order for a candidate region to be declared a white-eye at 350. Aggregated regions passing the tests outlined above are labeled white-eyes and undergo a correction procedure 399 according to the preferred embodiment of the present invention.
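
A sketch of the contrast, yellowness and saturation checks on an aggregated region; the RGB-to-Lab conversion is assumed to use scikit-image's rgb2lab, and the three threshold values are placeholders rather than values taken from the text.

    import numpy as np
    from skimage.color import rgb2lab    # assumed available for the RGB -> Lab conversion

    def passes_color_checks(rgb_pixels, boundary_mean_intensity, seed_intensity,
                            contrast_thresh=0.5, b_thresh=15.0, sat_thresh=10.0):
        """Contrast, b_max (yellowness) and average-saturation tests on an aggregated region."""
        # Contrast ratio: average boundary (valley point) intensity over the seed intensity.
        # A small ratio means a high-contrast region (dark surround, bright center).
        contrast = boundary_mean_intensity / max(seed_intensity, 1e-9)
        lab = rgb2lab(rgb_pixels)                    # rgb_pixels: (N, 3) uint8 or float RGB
        a, b = lab[..., 1], lab[..., 2]
        b_max = b.max()
        avg_saturation = np.mean(np.sqrt(a ** 2 + b ** 2))    # S = sqrt(a^2 + b^2)
        return contrast <= contrast_thresh and b_max >= b_thresh and avg_saturation >= sat_thresh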


The correction procedure comprises setting the intensity I, in Lab space, of the aggregated region's points to the average intensity of the valley points delimiting the region, as used in the contrast calculation at 342. In the preferred embodiment, the whole aggregated region is then smoothed by applying a 3×3 averaging filter.
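
A sketch of this correction, assuming a lightness (L) channel and Boolean masks for the aggregated region and its delimiting valley points; the 3×3 averaging filter is implemented here with SciPy's uniform filter.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def correct_small_white_eye(l_channel, region_mask, boundary_mask):
        """Set the region's lightness to the mean of its delimiting valley points, then smooth."""
        corrected = l_channel.astype(float).copy()
        corrected[region_mask] = corrected[boundary_mask].mean()
        smoothed = uniform_filter(corrected, size=3)       # 3x3 averaging filter
        corrected[region_mask] = smoothed[region_mask]     # apply the smoothing inside the region
        return corrected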


According to a further embodiment, there is provided a method for automatic detection and correction of large white eyes, as depicted in the flowchart of FIG. 4. The main characteristic of large white eyes is that, being very well defined, they are round in shape and well separated from the iris.


Referring to FIG. 4, it can be seen that the first five stages of the large white-eye automatic detection process, thresholding 400, labeling 410, size filter 430, shape filter 440 and filling factor 450, are identical to those of the small white-eye automatic detection process described above. However, it will be seen that the threshold applied in the size filter 430 will be larger than for the step 321, and that different parameters may also be required for the other stages.


Nonetheless, once the luminous regions have passed through the geometrical filters 420, the next steps determine and analyze the edges of the suspected large white-eyes.


First, an intensity gradient of each luminous region is computed 460. The gradient is calculated from a grey-scale version of each luminous region, as depicted in FIG. 5(a). A gradient is any function that has a high response at points where image variations are great; conversely, its response is low in uniform areas. In the preferred embodiment, the gradient is computed by linear filtering with two kernels, one for the horizontal gradient, Gx, and one for the vertical gradient, Gy. The modulus of the gradient is then computed as G = √(Gx² + Gy²) and is further thresholded to obtain edge points and produce a binary edge-image as depicted in FIG. 5(b). In the preferred embodiment, step 460 is carried out using a simple Sobel gradient. However, it will be appreciated that any gradient operator, such as Prewitt or Canny, may be used.
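
A sketch of this edge-map computation using SciPy's Sobel kernels; the edge threshold of 100 is an illustrative value, not one taken from the text.

    import numpy as np
    from scipy.ndimage import sobel

    def edge_map(gray, edge_thresh=100.0):
        """Binary edge image from the modulus of the Sobel gradient."""
        gx = sobel(gray.astype(float), axis=1)     # horizontal gradient Gx
        gy = sobel(gray.astype(float), axis=0)     # vertical gradient Gy
        modulus = np.sqrt(gx ** 2 + gy ** 2)       # G = sqrt(Gx^2 + Gy^2)
        return modulus > edge_thresh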


Once the edges of the suspected large white-eye regions have been determined, a Hough Transform is performed on each gradient image, 470. A Hough Transform detects shapes that can be parameterized, for example lines, circles or ellipses, and is applied to binary images, usually computed as edge maps from intensity images. The Hough Transform is based on an alternative space to that of the image, called the accumulator space. Each point (x,y) in the original image contributes to all points in the accumulator space corresponding to the possible circles that may be formed to contain the (x,y) point. Thus, all points lying on an existing circle in the original edge-image contribute to the point in the accumulator space corresponding to that particular circle.
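
A minimal sketch of a circular Hough accumulator over the binary edge image; the fixed list of candidate radii and the 64 angular samples are assumptions, and an optimized implementation (for example, skimage.transform.hough_circle) would normally be used instead.

    import numpy as np

    def hough_circles(edges, radii):
        """Accumulate votes for circle centers (cy, cx) at each candidate radius."""
        h, w = edges.shape
        acc = np.zeros((len(radii), h, w), dtype=np.int32)
        ys, xs = np.nonzero(edges)
        thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        for i, r in enumerate(radii):
            cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int)
            cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc[i], (cy[ok], cx[ok]), 1)
        return acc    # the strongest peak gives the most representative circle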


Next, the most representative circle produced by the Hough Transform must be detected for each region, 480. This step comprises inspecting the points in the Hough accumulator space that have a significant value, this value depending on the number of points in the original edge image that contribute to each point in the accumulator space. If no representative circle is found, there is deemed to be no large white eye present in that region of the image.


However, if a high value point is found, then the corresponding circle in the original image is checked and a verification of the circle 490 is carried out.


This involves checking for example whether the most representative circle encircles the original seed point for the luminous region and/or whether the average gradient along the circle exceeds a threshold.


If a circle of a luminous region is verified, the region is corrected, 499, by darkening the pixels in the interior of the circle. In the preferred embodiment, the intensity of the pixels is set to 50 and an averaging filter is applied.


Preferably, however, the correction also takes into account the possibility of the luminous region including a glint, which should not be darkened. In RGB space, glint candidates are selected as high luminance pixels (min(R, G) ≥ 220 and max(R, G) = 255). If a very round (both in aspect ratio and elongation), luminous, and desaturated region is found within the interior of a luminous region, its pixels are removed from the luminous region pixels to be corrected.
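
A sketch of the glint-candidate test on the region's pixels, assuming 8-bit RGB; the roundness and desaturation checks mentioned above are omitted for brevity.

    import numpy as np

    def glint_mask(rgb_pixels):
        """Glint candidates: min(R, G) >= 220 and max(R, G) == 255."""
        r = rgb_pixels[..., 0].astype(np.int16)
        g = rgb_pixels[..., 1].astype(np.int16)
        return (np.minimum(r, g) >= 220) & (np.maximum(r, g) == 255)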


In the case where further eye-color information is available, for example in the case where person-recognition procedures are available with a database of previously captured images, the additional color information stored with that person's information in the database can be advantageously incorporated into the correction of both large and small white-eye.


In methods that may be performed according to preferred embodiments herein and that may have been described above and/or claimed below, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations.


In addition, all references cited above herein, in addition to the background and summary of the invention sections, are hereby incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments and components. The following references are also incorporated by reference:


U.S. patent application Ser. Nos. 11/462,035, 11/282,955, and


U.S. published patent applications nos. 2002/0136450, 2005/0047655, 2004/0184670, 2004/0240747, 2005/0047656, 2005/0041121, 2005/0140801, 2005/0031224; and


U.S. Pat. No. 6,407,777.


The present invention is not limited to the embodiments described herein, which may be amended or modified without departing from the scope of the present invention.

Claims
  • 1. A method for detecting flash eye defects in an image, comprising using a processor in performing the following: defining one or more luminous regions in said image, each region having at least one pixel having luminance above a luminance threshold value; grouping neighboring selected pixels into said one or more luminous regions; for each of said one or more luminous regions, applying a face filter for determining a probability that said region is located within a region of an image characteristic of a face; for each region determined as having at least a threshold probability of being within a face, determining a further probability whether said region has a size, shape or color, or combinations thereof, characteristic of a pupil region of an eye region of a face; for each region having at least a further threshold probability of having a size, shape or color, or combinations thereof, characteristic of a pupil region of an eye region of a face, determining whether said region corresponds to a flash eye defect including determining that said region does correspond to a candidate flash eye defect region upon determining that a flash was used when the image was acquired, or that said region does not correspond to a flash eye defect region upon determining that a flash was not used when the image was acquired; and storing, displaying, transmitting, or printing said image or a corrected or further processed version of said image, or combinations thereof.
  • 2. A method according to claim 1, wherein said defining comprises selecting pixels of the image which have a luminance above a luminance threshold value and a redness above or below a redness threshold value.
  • 3. The method according to claim 1, further comprising correcting said flash eye defect.
  • 4. The method according to claim 1, further comprising applying a filter for adding pixels to a luminous region which are located within said luminous region and which have luminance below said luminance threshold value or a redness above or below a redness threshold value.
  • 5. The method of claim 1, wherein said shape comprises roundness, and the method further comprises calculating said roundness to determine if said region corresponds to an eye feature.
  • 6. The method according to claim 1, further comprising, for each luminous region, determining a corresponding aggregated region by: (i) determining a seed pixel for said aggregated region within a luminous region; and (ii) iteratively adding non-valley neighbouring pixels to said aggregated region until no non-valley neighboring pixels adjacent to said aggregated region remain.
  • 7. The method according to claim 1, further comprising calculating an intensity gradient of each luminous region.
  • 8. A digital image processing device operable to detect flash eye defects in an image, and comprising a controller arranged to: define one or more luminous regions in said image, each region having at least one pixel having luminance above a luminance threshold value; group neighboring selected pixels into said one or more luminous regions; for each of said one or more luminous regions, apply a face filter for determining a probability that said region is located within a region of an image characteristic of a face; for each region determined as having at least a threshold probability of being within a face, determine a further probability whether said region has a size, shape or color, or combinations thereof, characteristic of a pupil region of an eye region of a face; for each region having at least a further threshold probability of having a size, shape or color, or combinations thereof, characteristic of a pupil region of an eye region of a face, determine whether said region corresponds to a flash eye defect including determining that said region does correspond to a candidate flash eye defect region upon determining that a flash was used when the image was acquired, or that said region does not correspond to a flash eye defect region upon determining that a flash was not used when the image was acquired; and store, display, transmit, or print said image or a corrected or further processed version of said image, or combinations thereof.
  • 9. A device according to claim 8, comprising: a digital camera or camera phone, a general purpose, portable or hand-held computer, a printer or a digital scanner, or combinations thereof.
  • 10. One or more digital storage devices having executable program code embodied thereon for programming one or more processors to perform a method of correcting a flash eye defect in a digital image, the method comprising: acquiring a digital image; determining a luminance of pixels within the digital image; selecting those pixels having a luminance above a certain threshold as candidate regions for correction of a flash eye defect; defining one or more luminous regions in said image, including determining that each region has each of the following: at least a threshold probability of having a size, shape or color, or combinations thereof, characteristic of a pupil region of an eye region of a face; and at least one pixel having luminance above a luminance threshold value; and determining that said region does correspond to a candidate flash eye defect region, upon determining that a flash was used when the image was acquired, or that said region does not correspond to a flash eye defect region upon determining that a flash was not used when the image was acquired; filtering the selected pixels; and correcting the white eye defect for non-filtered pixels among the selected pixels.
  • 11. The one or more storage devices of claim 10, wherein the filtering comprises geometrical filtering of pixels based on a size or shape or both of a selected pixel region.
  • 12. The one or more storage devices of claim 10, wherein the filtering comprises geometrical filtering of pixels based on a size of a selected pixel region being above a threshold size.
  • 13. The one or more storage devices of claim 10, wherein the correcting comprises calculating a roundness of a selected pixel region, and correcting the roundness if it does not exceed a certain threshold value of roundness.
  • 14. The one or more storage devices of claim 10, wherein the filtering comprises checking whether an average saturation of a selected pixel region exceeds a certain threshold saturation, and correcting the selected pixel region only if the threshold is exceeded.
  • 15. The one or more storage devices of claim 10, the method further comprising: (i) selecting a bright pixel as a seed pixel; and (ii) aggregating outwardly from the seed pixel to combine those pixels that are not valley points with the seed pixel as an aggregated region until a minimum number of non-valley neighbors are left or a threshold size is reached, or a combination thereof.
  • 16. The one or more storage devices of claim 15, the method further comprising smoothing the aggregated region.
  • 17. The one or more storage devices of claim 15, the method further comprising computing an intensity gradient for one or more candidate regions.
  • 18. The one or more storage devices of claim 10, the method further comprising filtering a candidate region that comprises merely a glint.
  • 19. The one or more storage devices of claim 10, the method further comprising detecting and correcting a red eye defect within the digital image.
  • 20. The method of claim 1, further comprising applying a glint filter that filters any candidate region that comprises merely a glint.
  • 21. The device of claim 8, wherein the controller is further arranged to apply a glint filter that filters any candidate region that comprises merely a glint.
  • 22. The one or more storage devices of claim 10, wherein the defining further includes determining that each luminous region has a measure of redness below a redness threshold value determined as a ratio of a red component value of the at least one pixel to one or both of a blue component value and a green component value of the at least one pixel.
  • 23. The method of claim 1, further comprising determining a further probability that each luminous region has a measure of redness below a redness threshold value determined as a ratio of a red component value of the at least one pixel to one or both of a blue component value and a green component value of the at least one pixel.
  • 24. The device of claim 8, wherein the controller is further arranged to determine a further probability that each luminous region has a measure of redness below a redness threshold value determined as a ratio of a red component value of the at least one pixel to one or both of a blue component value and a green component value of the at least one pixel.
PRIORITY

This application is a Continuation of U.S. patent application Ser. No. 11/674,633, filed Feb. 13, 2007, now U.S. Pat. No. 7,336,821, which claims the benefit of priority under 35 USC §119 to U.S. provisional patent application No. 60/773,714, filed Feb. 14, 2006.

US Referenced Citations (322)
Number Name Date Kind
4285588 Mir Aug 1981 A
4577219 Klie et al. Mar 1986 A
4646134 Komatsu et al. Feb 1987 A
4777620 Shimoni et al. Oct 1988 A
4881067 Watanabe et al. Nov 1989 A
4978989 Nakano et al. Dec 1990 A
5016107 Sasson et al. May 1991 A
5070355 Inoue et al. Dec 1991 A
5130789 Dobbs et al. Jul 1992 A
5164831 Kuchta et al. Nov 1992 A
5164833 Aoki Nov 1992 A
5202720 Fujino et al. Apr 1993 A
5231674 Cleveland et al. Jul 1993 A
5249053 Jain Sep 1993 A
5274457 Kobayashi et al. Dec 1993 A
5301026 Lee Apr 1994 A
5303049 Ejima et al. Apr 1994 A
5335072 Tanaka et al. Aug 1994 A
5384601 Yamashita et al. Jan 1995 A
5400113 Sosa et al. Mar 1995 A
5432863 Benati et al. Jul 1995 A
5432866 Sakamoto Jul 1995 A
5452048 Edgar Sep 1995 A
5455606 Keeling et al. Oct 1995 A
5537516 Sherman et al. Jul 1996 A
5568187 Okino Oct 1996 A
5568194 Abe Oct 1996 A
5649238 Wakabayashi et al. Jul 1997 A
5671013 Nakao Sep 1997 A
5678073 Stephenson, III et al. Oct 1997 A
5694926 DeVries et al. Dec 1997 A
5708866 Leonard Jan 1998 A
5719639 Imamura Feb 1998 A
5719951 Shackleton et al. Feb 1998 A
5724456 Boyack et al. Mar 1998 A
5734425 Takizawa et al. Mar 1998 A
5748764 Benati et al. May 1998 A
5748784 Sugiyama May 1998 A
5751836 Wildes et al. May 1998 A
5761550 Kancigor Jun 1998 A
5781650 Lobo et al. Jul 1998 A
5805720 Suenaga et al. Sep 1998 A
5805727 Nakano Sep 1998 A
5805745 Graf Sep 1998 A
5815749 Tsukahara et al. Sep 1998 A
5818975 Goodwin et al. Oct 1998 A
5847714 Naqvi et al. Dec 1998 A
5850470 Kung et al. Dec 1998 A
5862217 Steinberg et al. Jan 1999 A
5862218 Steinberg Jan 1999 A
5892837 Luo et al. Apr 1999 A
5949904 Delp Sep 1999 A
5974189 Nicponski Oct 1999 A
5990973 Sakamoto Nov 1999 A
5991456 Rahman et al. Nov 1999 A
5991549 Tsuchida Nov 1999 A
5991594 Froeber et al. Nov 1999 A
5999160 Kitamura et al. Dec 1999 A
6006039 Steinberg et al. Dec 1999 A
6009209 Acker et al. Dec 1999 A
6011547 Shiota et al. Jan 2000 A
6016354 Lin et al. Jan 2000 A
6028611 Anderson et al. Feb 2000 A
6035072 Read Mar 2000 A
6035074 Fujimoto et al. Mar 2000 A
6036072 Lee Mar 2000 A
6101271 Yamashita et al. Aug 2000 A
6104839 Cok et al. Aug 2000 A
6118485 Hinoue et al. Sep 2000 A
6134339 Luo Oct 2000 A
6151403 Luo Nov 2000 A
6172706 Tatsumi Jan 2001 B1
6192149 Eschbach et al. Feb 2001 B1
6195127 Sugimoto Feb 2001 B1
6201571 Ota Mar 2001 B1
6204858 Gupta Mar 2001 B1
6233364 Krainiouk et al. May 2001 B1
6249315 Holm Jun 2001 B1
6252976 Schildkraut et al. Jun 2001 B1
6266054 Lawton et al. Jul 2001 B1
6268939 Klassen et al. Jul 2001 B1
6275614 Krishnamurthy et al. Aug 2001 B1
6278491 Wang et al. Aug 2001 B1
6285410 Marni Sep 2001 B1
6292574 Schildkraut et al. Sep 2001 B1
6295378 Kitakado et al. Sep 2001 B1
6298166 Ratnakar et al. Oct 2001 B1
6300935 Sobel et al. Oct 2001 B1
6381345 Swain Apr 2002 B1
6393148 Bhaskar May 2002 B1
6396963 Shaffer et al. May 2002 B2
6407777 DeLuca Jun 2002 B1
6421468 Ratnakar et al. Jul 2002 B1
6426775 Kurokawa Jul 2002 B1
6429924 Milch Aug 2002 B1
6433818 Steinberg et al. Aug 2002 B1
6438264 Gallagher et al. Aug 2002 B1
6441854 Fellegara et al. Aug 2002 B2
6459436 Kumada et al. Oct 2002 B1
6473199 Gilman et al. Oct 2002 B1
6496655 Malloy Desormeaux Dec 2002 B1
6501911 Malloy Desormeaux Dec 2002 B1
6505003 Malloy Desormeaux Jan 2003 B1
6510520 Steinberg Jan 2003 B1
6516154 Parulski et al. Feb 2003 B1
6614471 Ott Sep 2003 B1
6614995 Tseng Sep 2003 B2
6621867 Sazzad et al. Sep 2003 B1
6628833 Horie Sep 2003 B1
6700614 Hata Mar 2004 B1
6707950 Burns et al. Mar 2004 B1
6714665 Hanna et al. Mar 2004 B1
6718051 Eschbach Apr 2004 B1
6724941 Aoyama Apr 2004 B1
6728401 Hardeberg Apr 2004 B1
6765686 Maruoka Jul 2004 B2
6786655 Cook et al. Sep 2004 B2
6792161 Imaizumi et al. Sep 2004 B1
6798913 Toriyama Sep 2004 B2
6859565 Baron Feb 2005 B2
6873743 Steinberg Mar 2005 B2
6885766 Held et al. Apr 2005 B2
6895112 Chen et al. May 2005 B2
6900882 Iida May 2005 B2
6912298 Wilensky Jun 2005 B1
6937997 Parulski Aug 2005 B1
6967680 Kagle et al. Nov 2005 B1
6980691 Nesterov et al. Dec 2005 B2
6984039 Agostinelli Jan 2006 B2
7024051 Miller et al. Apr 2006 B2
7027662 Baron Apr 2006 B2
7030927 Sasaki Apr 2006 B2
7035461 Luo et al. Apr 2006 B2
7035462 White et al. Apr 2006 B2
7042501 Matama May 2006 B1
7042505 DeLuca May 2006 B1
7062086 Chen et al. Jun 2006 B2
7116820 Luo et al. Oct 2006 B2
7133070 Wheeler et al. Nov 2006 B2
7155058 Gaubatz et al. Dec 2006 B2
7171044 Chen et al. Jan 2007 B2
7216289 Kagle et al. May 2007 B2
7224850 Zhang et al. May 2007 B2
7269292 Steinberg Sep 2007 B2
7289664 Enomoto Oct 2007 B2
7295233 Steinberg et al. Nov 2007 B2
7310443 Kris et al. Dec 2007 B1
7315631 Corcoran et al. Jan 2008 B1
7336821 Ciuc et al. Feb 2008 B2
7352394 DeLuca et al. Apr 2008 B1
7362368 Steinberg et al. Apr 2008 B2
7369712 Steinberg et al. May 2008 B2
7403643 Ianculescu et al. Jul 2008 B2
7436998 Steinberg et al. Oct 2008 B2
7454040 Luo et al. Nov 2008 B2
7515740 Corcoran et al. Apr 2009 B2
20010015760 Fellegara et al. Aug 2001 A1
20010031142 Whiteside Oct 2001 A1
20010052937 Susuki Dec 2001 A1
20020019859 Watanabe Feb 2002 A1
20020041329 Steinberg Apr 2002 A1
20020051571 Jackway et al. May 2002 A1
20020054224 Wasula et al. May 2002 A1
20020085088 Eubanks Jul 2002 A1
20020090133 Kim et al. Jul 2002 A1
20020093577 Kitawaki et al. Jul 2002 A1
20020093633 Milch Jul 2002 A1
20020105662 Patton et al. Aug 2002 A1
20020114513 Hirao Aug 2002 A1
20020126893 Held et al. Sep 2002 A1
20020131770 Meier et al. Sep 2002 A1
20020136450 Chen et al. Sep 2002 A1
20020141661 Steinberg Oct 2002 A1
20020150292 O'Callaghan Oct 2002 A1
20020150306 Baron Oct 2002 A1
20020159630 Buzuloiu et al. Oct 2002 A1
20020172419 Lin et al. Nov 2002 A1
20020176623 Steinberg Nov 2002 A1
20030007687 Nesterov et al. Jan 2003 A1
20030021478 Yoshida Jan 2003 A1
20030025808 Parulski et al. Feb 2003 A1
20030025811 Keelan et al. Feb 2003 A1
20030044063 Meckes et al. Mar 2003 A1
20030044070 Fuersich et al. Mar 2003 A1
20030044176 Saitoh Mar 2003 A1
20030044177 Oberhardt et al. Mar 2003 A1
20030044178 Oberhardt et al. Mar 2003 A1
20030052991 Stavely et al. Mar 2003 A1
20030058343 Katayama Mar 2003 A1
20030058349 Takemoto Mar 2003 A1
20030095197 Wheeler et al. May 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030113035 Cahill et al. Jun 2003 A1
20030118216 Goldberg Jun 2003 A1
20030137597 Sakamoto et al. Jul 2003 A1
20030142285 Enomoto Jul 2003 A1
20030161506 Velazquez et al. Aug 2003 A1
20030190072 Adkins et al. Oct 2003 A1
20030194143 Iida Oct 2003 A1
20030202715 Kinjo Oct 2003 A1
20040017481 Takasumi et al. Jan 2004 A1
20040027593 Wilkins Feb 2004 A1
20040032512 Silverbrook Feb 2004 A1
20040032526 Silverbrook Feb 2004 A1
20040033071 Kubo Feb 2004 A1
20040037460 Luo et al. Feb 2004 A1
20040041924 White et al. Mar 2004 A1
20040046878 Jarman Mar 2004 A1
20040047491 Rydeck Mar 2004 A1
20040056975 Hata Mar 2004 A1
20040057623 Schuhrke et al. Mar 2004 A1
20040057705 Kohno Mar 2004 A1
20040057715 Tsuchida et al. Mar 2004 A1
20040090461 Adams May 2004 A1
20040093432 Luo et al. May 2004 A1
20040114796 Kaku Jun 2004 A1
20040114797 Meckes Jun 2004 A1
20040114829 LeFeuvre et al. Jun 2004 A1
20040114904 Sun et al. Jun 2004 A1
20040119851 Kaku Jun 2004 A1
20040120598 Feng Jun 2004 A1
20040125387 Nagao et al. Jul 2004 A1
20040126086 Nakamura et al. Jul 2004 A1
20040141657 Jarman Jul 2004 A1
20040150743 Schinner Aug 2004 A1
20040160517 Iida Aug 2004 A1
20040165215 Raguet et al. Aug 2004 A1
20040184044 Kolb et al. Sep 2004 A1
20040184670 Jarman et al. Sep 2004 A1
20040196292 Okamura Oct 2004 A1
20040196503 Kurtenbach et al. Oct 2004 A1
20040213476 Luo et al. Oct 2004 A1
20040223063 DeLuca et al. Nov 2004 A1
20040227978 Enomoto Nov 2004 A1
20040228542 Zhang et al. Nov 2004 A1
20040233299 Ioffe et al. Nov 2004 A1
20040233301 Nakata et al. Nov 2004 A1
20040234156 Watanabe et al. Nov 2004 A1
20040239779 Washisu Dec 2004 A1
20040240747 Jarman et al. Dec 2004 A1
20040258308 Sadovsky et al. Dec 2004 A1
20050001024 Kusaka et al. Jan 2005 A1
20050013602 Ogawa Jan 2005 A1
20050013603 Ichimasa Jan 2005 A1
20050024498 Iida et al. Feb 2005 A1
20050031224 Prilutsky et al. Feb 2005 A1
20050041121 Steinberg et al. Feb 2005 A1
20050046730 Li Mar 2005 A1
20050047655 Luo et al. Mar 2005 A1
20050047656 Luo et al. Mar 2005 A1
20050053279 Chen et al. Mar 2005 A1
20050058340 Chen et al. Mar 2005 A1
20050058342 Chen et al. Mar 2005 A1
20050062856 Matsushita Mar 2005 A1
20050063083 Dart et al. Mar 2005 A1
20050068452 Steinberg et al. Mar 2005 A1
20050074164 Yonaha Apr 2005 A1
20050074179 Wilensky Apr 2005 A1
20050078191 Battles Apr 2005 A1
20050117132 Agostinelli Jun 2005 A1
20050129331 Kakiuchi et al. Jun 2005 A1
20050134719 Beck Jun 2005 A1
20050140801 Prilutsky et al. Jun 2005 A1
20050147278 Rui et al. Jul 2005 A1
20050151943 Iida Jul 2005 A1
20050163498 Battles et al. Jul 2005 A1
20050168965 Yoshida Aug 2005 A1
20050196067 Gallagher et al. Sep 2005 A1
20050200736 Ito Sep 2005 A1
20050207649 Enomoto et al. Sep 2005 A1
20050212955 Craig et al. Sep 2005 A1
20050219385 Terakawa Oct 2005 A1
20050219608 Wada Oct 2005 A1
20050220346 Akahori Oct 2005 A1
20050220347 Enomoto et al. Oct 2005 A1
20050226499 Terakawa Oct 2005 A1
20050232490 Itagaki et al. Oct 2005 A1
20050238230 Yoshida Oct 2005 A1
20050243348 Yonaha Nov 2005 A1
20050275734 Ikeda Dec 2005 A1
20050276481 Enomoto Dec 2005 A1
20050280717 Sugimoto Dec 2005 A1
20050286766 Ferman Dec 2005 A1
20060008171 Petschnigg et al. Jan 2006 A1
20060017825 Thakur Jan 2006 A1
20060038916 Knoedgen et al. Feb 2006 A1
20060039690 Steinberg et al. Feb 2006 A1
20060045352 Gallagher Mar 2006 A1
20060050300 Mitani et al. Mar 2006 A1
20060066628 Brodie et al. Mar 2006 A1
20060082847 Sugimoto Apr 2006 A1
20060093212 Steinberg et al. May 2006 A1
20060093213 Steinberg et al. May 2006 A1
20060093238 Steinberg et al. May 2006 A1
20060098867 Gallagher May 2006 A1
20060098875 Sugimoto May 2006 A1
20060119832 Iida Jun 2006 A1
20060120599 Steinberg et al. Jun 2006 A1
20060140455 Costache et al. Jun 2006 A1
20060150089 Jensen et al. Jul 2006 A1
20060203108 Steinberg et al. Sep 2006 A1
20060204052 Yokouchi Sep 2006 A1
20060204110 Steinberg et al. Sep 2006 A1
20060221408 Fukuda Oct 2006 A1
20060285754 Steinberg et al. Dec 2006 A1
20070110305 Corcoran et al. May 2007 A1
20070116379 Corcoran et al. May 2007 A1
20070116380 Ciuc et al. May 2007 A1
20070133863 Sakai et al. Jun 2007 A1
20070154189 Harradine et al. Jul 2007 A1
20070201724 Steinberg et al. Aug 2007 A1
20070263104 DeLuca et al. Nov 2007 A1
20070263928 Akahori Nov 2007 A1
20080002060 DeLuca et al. Jan 2008 A1
20080013798 Ionita et al. Jan 2008 A1
20080043121 Prilutsky et al. Feb 2008 A1
20080112599 Nanu et al. May 2008 A1
20080144965 Steinberg et al. Jun 2008 A1
20080186389 DeLuca et al. Aug 2008 A1
20080211937 Steinberg et al. Sep 2008 A1
20080232711 Prilutsky et al. Sep 2008 A1
20080240555 Nanu et al. Oct 2008 A1
Foreign Referenced Citations (55)
Number Date Country
5224271 Sep 1993 EP
0979487 Jan 1998 EP
884694 Dec 1998 EP
911759 Apr 1999 EP
911759 Apr 1999 EP
1199672 Apr 2002 EP
1229486 Aug 2002 EP
1288858 Mar 2003 EP
1288859 Mar 2003 EP
1288860 Mar 2003 EP
1293933 Mar 2003 EP
1296510 Mar 2003 EP
1429290 Jun 2004 EP
1478169 Nov 2004 EP
1528509 May 2005 EP
1429290 Jul 2008 EP
841609 Jul 1960 GB
3205989 Sep 1991 JP
4192681 Jul 1992 JP
7-281285 Oct 1995 JP
09-214839 Aug 1997 JP
20134486 May 2000 JP
22247596 Aug 2002 JP
22271808 Sep 2002 JP
2003-030647 Jan 2003 JP
9802844 Jan 1998 WO
WO-9917251 Apr 1999 WO
9933684 Jul 1999 WO
WO-0171421 Sep 2001 WO
WO 0192614 Dec 2001 WO
WO-0245063 Jun 2002 WO
WO-03026278 Mar 2003 WO
WO-03071484 Aug 2003 WO
2004034696 Apr 2004 WO
WO-2005015896 Feb 2005 WO
WO-20050141558 May 2005 WO
WO-2005076217 Aug 2005 WO
WO-2005076217 Aug 2005 WO
WO 2005087994 Sep 2005 WO
WO-2005109853 Nov 2005 WO
WO-2006011635 Feb 2006 WO
WO-2006018056 Feb 2006 WO
WO-2006045441 May 2006 WO
WO-2007057063 May 2007 WO
WO-2007057064 May 2007 WO
2007095553 Aug 2007 WO
WO-2007093199 Aug 2007 WO
WO-2007093199 Aug 2007 WO
WO-2007142621 Dec 2007 WO
WO-2008023280 Feb 2008 WO
2007095553 Aug 2008 WO
WO-2008109644 Sep 2008 WO
WO-2008109644 Sep 2008 WO
WO 2010017953 Feb 2010 WO
WO 2010025908 Mar 2010 WO
Related Publications (1)
Number Date Country
20080049970 A1 Feb 2008 US
Provisional Applications (1)
Number Date Country
60773714 Feb 2006 US
Continuations (1)
Number Date Country
Parent 11674633 Feb 2007 US
Child 11841855 US