Flicker band automated detection system and method

Information

  • Patent Grant
  • Patent Number
    8,737,832
  • Date Filed
    Friday, February 9, 2007
  • Date Issued
    Tuesday, May 27, 2014
Abstract
A flicker band automated detection system and method are presented. In one embodiment an incidental motion mitigation exposure setting method includes receiving image input information; performing a motion mitigating flicker band automatic detection process; and implementing exposure settings based upon results of the motion mitigating flicker band automatic detection process. The auto flicker band detection process includes performing a motion mitigating process on an illumination intensity indication. Content impacts on the motion mitigated illumination intensity indication are minimized. The motion mitigated illumination intensity indication is binarized. A correlation of the motion mitigated illumination intensity and a reference illumination intensity frequency is established.
Description
FIELD OF THE INVENTION

The present invention relates to the field of digital cameras. More particularly the present invention relates to a flicker band automated detection system and method.


BACKGROUND OF THE INVENTION

Electronic systems and circuits have made a significant contribution towards the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems have facilitated increased productivity and reduced costs in analyzing and communicating data, ideas and trends in most areas of business, science, education and entertainment. For example, digital cameras have had a significant impact on the field of photography and usually offer a number of features that enhance image quality. Nevertheless, there are several phenomena, such as flicker or running bands in an image frame, that can adversely impact results. Setting a rolling shutter time to a multiple of the corresponding illumination variation period (e.g., 1/100 second or 1/120 second) often mitigates adverse impacts associated with the flicker bands. However, determining the illumination variation frequency is usually very difficult and often susceptible to a number of inaccuracies.


A rolling shutter approach is often utilized by a variety of devices (e.g., CMOS imagers) to control the optical integration time. Rolling shutter approaches typically enable an equal optical integration time to be achieved for pixels in an image frame; however, this optical integration does not typically happen for all pixels simultaneously. The actual interval used for integration is usually dependent on the vertical position of the pixel in an image frame. Rolling shutter approaches typically utilize at least two pointers, a reset pointer and a read pointer, to define shutter width. Shutter width is the number of lines (“distance”) between the two pointers. These pointers continuously move through the pixel array image frame from top to bottom, jumping from line to line at timed intervals. The reset pointer typically starts the integration for pixels in a line, and subsequently, the read pointer reaches the same line and initiates signal readout. Shutter width multiplied by the line time gives the duration of the optical integration time. If the illumination intensity of the light source does not remain constant over time, rolling shutter methods of integration time control can lead to flicker or “running bands” in the image frame.
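The timing relationship above can be sketched in a few lines (the function name and the example values below are illustrative assumptions, not taken from the patent):

```python
def integration_time(shutter_width_lines, line_time_s):
    """Rolling shutter optical integration time: the shutter width
    (number of lines between the reset and read pointers) multiplied
    by the per-line interval."""
    return shutter_width_lines * line_time_s

# 500 lines between pointers at 20 microseconds per line gives a
# 10 ms integration time -- a whole 100 Hz flicker period, which
# would suppress banding under 50 Hz mains lighting.
t = integration_time(500, 20e-6)
```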


Flicker band issues are typically caused by scene illumination sources with varying light intensity. For example, illumination intensity usually fluctuates in light sources driven by alternating current (AC) supply power. The light intensity of AC powered light sources (e.g., fluorescent lamps) is usually enveloped by a 100 Hz or 120 Hz wave if the power source is operating at 50 Hz or 60 Hz, respectively. If the video rate is 15 frames per second, there could be six (50 Hz power) or eight (60 Hz power) dark flicker bands overlying the preview image.
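The band counts in this example follow from simple arithmetic; a hypothetical helper illustrating it:

```python
def flicker_band_count(mains_hz, frame_rate_fps):
    """Whole flicker bands overlaying a frame: the light intensity
    envelope runs at twice the mains frequency (100 Hz or 120 Hz),
    and one band appears per envelope period within a frame time."""
    return (2 * mains_hz) // frame_rate_fps

# 15 fps preview: six bands under 50 Hz power, eight under 60 Hz power.
```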


Determining the illumination variation frequency is traditionally resource intensive and difficult. A straightforward frequency transform (e.g., FFT) of the image frame consumes significant amounts of computation resources, and the results can be relatively easily corrupted by the image content. Some traditional attempts at detecting mismatches between the frequency of an illumination source and the duration of optical integration time for an imager with a rolling shutter involve detecting a sinusoidal fluctuation in the difference of two consecutive image frames. However, these approaches are often limited by averages from a single line or row of an image. In addition, a slight vertical shift of the camera, such as from the shaking hand of a photographer, can greatly deteriorate the results. Relatively low confidence levels associated with these traditional approaches are exacerbated by the utilization of zero crossing point estimation of flicker wave periods.


SUMMARY

A flicker band automated detection system and method are presented. In one embodiment an incidental motion mitigation exposure setting method includes receiving image input information; performing a motion mitigating flicker band automatic detection process; and implementing exposure settings based upon results of the motion mitigating flicker band automatic detection process. The motion mitigating flicker band automatic detection process includes performing a motion mitigating process on an illumination intensity indication. Content impacts on the motion mitigated illumination intensity indication are minimized. The motion mitigated illumination intensity indication is binarized. A correlation of the motion mitigated illumination intensity and a reference illumination intensity frequency is established.





DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention by way of example and not by way of limitation. The drawings referred to in this specification should be understood as not being drawn to scale except if specifically noted.



FIG. 1 is a flow chart of an exemplary incidental motion mitigation exposure setting method in accordance with one embodiment of the present invention.



FIG. 2 is a block diagram of an exemplary automated flicker band compensation imaging system in accordance with one embodiment of the present invention.



FIG. 3 is a flow chart of an exemplary auto flicker band detection process 300 in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION

Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present invention.


Some portions of the detailed descriptions, which follow, are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means generally used by those skilled in data processing arts to effectively convey the substance of their work to others skilled in the art. A procedure, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, optical, or quantum signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying” or the like, refer to the action and processes of a computer system, or similar processing device (e.g., an electrical, optical, or quantum, computing device), that manipulates and transforms data represented as physical (e.g., electronic) quantities. The terms refer to actions and processes of the processing devices that manipulate or transform physical quantities within a computer system's component (e.g., registers, memories, other such information storage, transmission or display devices, etc.) into other data similarly represented as physical quantities within other components.



FIG. 1 is a flow chart of incidental motion mitigation exposure setting method 100 in accordance with one embodiment of the present invention.


In step 110, test illumination input information is received. The test illumination information can be initial illumination information sensed by a digital imager. The information can be received from a variety of digital imagers. In one embodiment the image input information is received from a CMOS imager. The auto exposure (AE) is set to a predetermined reference illumination frequency for capturing the initial image test input information. In one exemplary implementation, initial sensor timing for the received initial test input information is set so that the frame time is not a multiple of the inverse of twice the reference illumination frequency. For example, if the auto exposure (AE) reference illumination frequency is initially set at 50 Hz, the sensor timing is set up so that the frame time is not a multiple of 1/100 second. This ensures that the location of flicker bands will move from frame to frame if flicker bands exist.
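The timing constraint in this step can be sketched as a simple check (a hypothetical helper; the patent does not prescribe an API):

```python
def bands_will_move(frame_time_s, reference_hz, tol=1e-9):
    """True if the frame time is NOT a whole multiple of the flicker
    half-period (1 / (2 * reference_hz)), so that any flicker bands
    drift from frame to frame instead of standing still."""
    half_period = 1.0 / (2.0 * reference_hz)
    ratio = frame_time_s / half_period
    return abs(ratio - round(ratio)) > tol

# A 1/30 s frame under a 50 Hz reference: (1/30) / (1/100) = 10/3,
# not an integer, so the bands move and can be detected.
```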


In step 120, a motion mitigating flicker band automatic detection process is performed. The motion mitigating flicker band automatic detection process automatically identifies an illumination frequency. In one embodiment, the motion mitigating flicker band automatic detection process provides a confidence level by utilizing correlation scores against a known reference frequency. Down-sampled averages are utilized in the determination of the correlation score to minimize adverse impacts of incidental motion or jitters in hand movements while capturing the image. Additional explanation and implementations of motion mitigating flicker band automatic detection processes are presented below.


In step 130, exposure settings are implemented. The exposure settings are implemented based upon said identified illumination frequency. In one embodiment of the present invention, a digital camera exposure reset from an initial exposure setting to an exposure based upon the identified illumination frequency is executed. In one exemplary implementation, auto exposure controls are utilized to switch between exposure tables (e.g., a 50 Hz exposure table or a 60 Hz exposure table).
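Switching between exposure tables can be sketched as a lookup; the table contents below are illustrative assumptions (shutter times chosen as whole multiples of the flicker half-period):

```python
# Hypothetical exposure tables: candidate shutter times that are
# whole multiples of the flicker half-period for each mains frequency.
EXPOSURE_TABLES = {
    50: [n / 100 for n in range(1, 5)],  # multiples of 1/100 s
    60: [n / 120 for n in range(1, 5)],  # multiples of 1/120 s
}

def select_exposure_table(detected_mains_hz):
    """Auto exposure control: pick the exposure table matching the
    illumination frequency identified by flicker band detection."""
    return EXPOSURE_TABLES[detected_mains_hz]
```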



FIG. 2 is a block diagram of automated flicker band compensation imaging system 200 in accordance with one embodiment of the present invention. Automated flicker band compensation imaging system 200 includes lens 210, shutter 220, light sensor 230, processor 240 and memory 250. Lens 210 is coupled to shutter 220 which is coupled to light sensor 230 which in turn is coupled to processor 240. Processor 240 is coupled to shutter 220 and memory 250 which includes flicker band automatic detection instructions 255.


The components of automated flicker band compensation imaging system 200 cooperatively operate to automatically detect and minimize flicker band effects while capturing digital images. Lens 210 focuses light reflected from an object onto light sensor 230. Light sensor 230 senses light from lens 210 and converts the light into electrical digital image information. In one embodiment of the present invention, light sensor 230 is a complementary metal oxide semiconductor (CMOS) light sensor. Shutter 220 controls the length of time light is permitted to pass from lens 210 to light sensor 230. Light sensor 230 forwards the digital image information to processor 240 for processing. Processor 240 performs digital image information processing including automated flicker band detection and mitigation. Memory 250 stores digital image information and instructions for directing processor 240, including flicker band detection instructions 255 for performing automated flicker band detection and mitigation. In one embodiment, a subtraction buffer included in the processor 240 stores content neutralized illumination intensity values utilized by the processor. In an alternate embodiment, the subtraction buffer is included in memory 250.



FIG. 3 is a flow chart of auto flicker band detection process 300 in accordance with one embodiment of the present invention.


In step 310, a motion mitigating process is performed on an illumination intensity indication. In one embodiment, the motion mitigation process includes performing image down-sampling and averaging illumination intensity values from a plurality of pixel rows of the resulting sub-sampled image. In one exemplary implementation, the number of downsampled pixel rows is maintained above a minimum that is greater than the video frame rate. In one embodiment reliable down-sampled pixels are selected for the averaging, wherein the reliable down-sampled pixels include pixels associated with a smooth well lit surface.
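The down-sampling and row averaging in this step can be sketched as follows (stride-based down-sampling and the function name are assumptions for illustration):

```python
import numpy as np

def column_vector(frame, downscale=8):
    """Down-sample a grayscale frame by simple striding, then average
    intensities across each remaining pixel row, yielding one
    luminance value per down-sampled row (a 'column buffer')."""
    small = frame[::downscale, ::downscale]
    return small.mean(axis=1)
```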


In step 320, content impacts are minimized on the motion mitigated illumination intensity indication. In one embodiment, minimizing content impacts on the illumination intensity indication includes subtracting illumination intensity values of a first frame from illumination intensity values of a second frame. In one exemplary implementation, the current frame column buffer is subtracted from the previous frame column buffer. The subtraction results are stored in a new buffer referred to as the subtraction buffer. In one exemplary implementation, a periodic waveform is included in the entries of the subtraction buffer if there is a flicker band problem.
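A minimal sketch of the frame subtraction (the helper name is hypothetical):

```python
import numpy as np

def subtraction_buffer(prev_column, curr_column):
    """Subtract the current frame's column buffer from the previous
    frame's. Static scene content cancels; because the sensor timing
    makes any flicker bands drift between frames, a periodic waveform
    survives in the result when banding is present."""
    return np.asarray(prev_column, float) - np.asarray(curr_column, float)
```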


In step 330, the motion mitigated illumination intensity indication is binarized. In one embodiment, binarizing the illumination intensity indication includes comparing each entry value in the subtraction buffer with a moving average value. For example, if the j-th row in the subtraction buffer has a value larger than the mean, the value is changed to +1; otherwise it is changed to −1.
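The binarization can be sketched as below (the moving-average window size is an assumption; the patent does not specify one):

```python
import numpy as np

def binarize(sub_buffer, window=5):
    """Compare each subtraction-buffer entry against a moving average
    of its neighborhood: entries above the mean become +1, the rest -1."""
    sub = np.asarray(sub_buffer, float)
    moving_avg = np.convolve(sub, np.ones(window) / window, mode="same")
    return np.where(sub > moving_avg, 1, -1)
```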


In step 340, a correlation of the motion mitigated illumination intensity to a reference illumination intensity frequency is determined. In one embodiment a correlation score is utilized to establish the correlation of the motion mitigated illumination intensity to a reference illumination intensity frequency. For example, the correlation score is assigned based upon the following manipulation:






\[
\text{score} \;=\; \max_{0 \,\le\, k \,\le\, T/2} \; \sum_{i=1}^{X} S[i] \cdot R[i+k]
\]
Where S[i] is the brightness indication sequence, R[i+k] is the reference sequence, X is the number of samples, k is the offset, and the parameter T is the distance between two flicker bands in the sub-sampled image. The unit for the T parameter is the number of rows. Additional confidence is provided by the correlation of the motion mitigated illumination intensity to a reference illumination intensity frequency. The additional confidence level provided by the present invention enables a confident decision (e.g., a decision that has a very low false positive error probability) on the current flickering frequency when the reference frequency is known to be relatively stable. For example, the reference frequency is selected to correspond with the power source frequency, since the power frequency does not change very often. The present invention's correlation score also provides enhanced noise immunity, since the correlation can remain high even when one or two erroneous entries appear in the column vector.
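A sketch of this correlation score, assuming the maximum is taken over offsets k from 0 to T/2 (the exact search range and function name are assumptions):

```python
def correlation_score(s, r, T):
    """Best correlation between the binarized brightness sequence s
    and a +/-1 reference sequence r over row offsets 0 <= k <= T/2,
    where T is the band spacing in down-sampled rows.  r must have
    at least len(s) + T//2 entries."""
    X = len(s)
    return max(
        sum(s[i] * r[i + k] for i in range(X))
        for k in range(T // 2 + 1)
    )

# Scoring against references built for the 100 Hz and 120 Hz envelopes
# and picking the larger score identifies the mains frequency.
```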


The downsampling reduces the influence of incidental movement or camera jitter on determining the illumination intensity frequency. The larger the downscaling ratio, the more translationally invariant the frame subtraction performed for minimizing content impacts in step 320 becomes. In one exemplary implementation, the number of downsampled pixel rows is maintained above the minimum. For example, if the video frame rate is 15 frames per second and the luminance frequency is 60 hertz, there will be eight horizontal bands in the image frame. In order to capture the variations of those bands, the number of downsampled rows is maintained above twice the number of bands (sixteen in this example).


It is appreciated that the generation of the column vector from the down sampled image can be implemented in a variety of different ways. In one exemplary implementation, the average of the downsampled pixels is taken from multiple entire rows. If the surface is smooth, the frame subtraction has a better chance of successfully removing the scene content despite camera jitter. Therefore, the downsampled pixels are chosen from a location corresponding to a relatively smooth and well lit surface to generate the column vector. In another exemplary implementation, simple tracking of the down sampled pixels is performed before the row averaging, so that the influence of horizontal translation can be reduced.


Thus, the present invention facilitates automated selection of exposure settings with a high degree of confidence and minimal impacts due to jitter.


The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.

Claims
  • 1. An incidental motion mitigation exposure setting method comprising: receiving image input information; performing a motion mitigating flicker band automatic detection process, wherein said motion mitigating flicker band automatic detection process includes: performing a motion mitigating process on an illumination intensity indication, wherein said motion mitigating process includes image down sampling and averaging pixel intensity readings over a plurality of pixel rows; minimizing content impacts on said illumination intensity indication results of said motion mitigating process; binarizing said motion mitigated illumination intensity indication; and finding a correlation of said motion mitigated illumination intensity and a reference illumination intensity frequency; and implementing exposure settings based upon results of said motion mitigating flicker band automatic detection process.
  • 2. The incidental motion mitigation exposure setting method of claim 1 wherein said motion mitigating flicker band automatic detection process automatically identifies an illumination frequency.
  • 3. The incidental motion mitigation exposure setting method of claim 2 wherein said exposure settings are implemented based upon said identified illumination frequency.
  • 4. The incidental motion mitigation exposure setting method of claim 1 wherein said image input information is received from a CMOS imaging apparatus.
  • 5. The incidental motion mitigation exposure setting method of claim 1 further comprising setting up a sensor timing frame that is not a multiple of an illumination intensity reference frequency.
  • 6. The incidental motion mitigation exposure setting method of claim 1 wherein a correlation score is utilized to establish said correlation and said correlation score is assigned based upon the following manipulation:
  • 7. A flicker band automatic detection imaging system comprising: a lens for focusing light reflected from an object; a light sensor for sensing light from said lens and converting said light into electrical digital image information; a shutter for controlling a length of time said light is permitted to pass from said lens to said light sensor; a processor for processing said digital image information forwarded from said light sensor and directing motion mitigated flicker band automatic detection and mitigation, wherein said motion mitigating flicker band automatic detection and mitigation includes: performing a motion mitigating process on an illumination intensity indication, wherein said motion mitigating process includes image down sampling and averaging pixel intensity readings over a plurality of pixel rows; minimizing content impacts on said illumination intensity indication results of said motion mitigating process; binarizing said motion mitigated illumination intensity indication; and finding a correlation of said motion mitigated illumination intensity and a reference illumination intensity frequency; and a memory for storing digital image information and instructions for directing said processor, including motion mitigated flicker band automatic detection instructions for performing flicker band automatic detection and mitigation.
  • 8. A flicker band automatic detection imaging system of claim 7 wherein said motion mitigating auto flicker band detection process automatically identifies an illumination intensity frequency and establishes a corresponding exposure setting.
  • 9. A flicker band automatic detection imaging system of claim 7 wherein said processor controls said length of time said shutter permits light to pass from said lens to said light sensor based upon said established exposure setting.
  • 10. A flicker band automatic detection imaging system of claim 7 wherein content neutralized illumination intensity values are utilized by said processor.
  • 11. A flicker band automatic detection imaging system of claim 7 wherein said directing motion mitigated flicker band automatic detection identifies an illumination intensity frequency based upon a known reference illumination intensity frequency.
  • 12. A motion mitigating flicker band automatic detection process comprising: performing a motion mitigating process on an illumination intensity indication; minimizing content impacts on said motion mitigated illumination intensity indication; binarizing said motion mitigated illumination intensity indication; and finding a correlation of said motion mitigated illumination intensity and a reference illumination intensity frequency.
  • 13. A motion mitigating flicker band automatic detection process of claim 12 wherein said motion mitigation process includes: performing image sub-sampling; and averaging illumination intensity values from a plurality of pixel rows of said resulting sub-sampled image.
  • 14. A motion mitigating flicker band automatic detection process of claim 12 wherein a number of downsampled pixel rows is maintained above a minimum that is greater than a video frame rate.
  • 15. A motion mitigating flicker band automatic detection process of claim 12 wherein minimizing content impacts on said illumination intensity indication includes subtracting illumination intensity values of a first frame from illumination intensity values of a second frame.
  • 16. A motion mitigating flicker band automatic detection process of claim 12 wherein said correlation score is assigned based upon the following manipulation:
  • 17. A motion mitigating flicker band automatic detection process of claim 13 further comprising selecting reliable down-sampled pixels for said averaging.
  • 18. A motion mitigating flicker band automatic detection process of claim 13 wherein said reliable down-sampled pixels include pixels associated with smooth well lit surface.
  • 19. A motion mitigating flicker band automatic detection process of claim 12 wherein binarizing said illumination intensity indication includes comparing each entry value in a subtraction buffer with a moving average value and if a row in the subtraction buffer has a value larger than the mean, the values change to one, otherwise the values change to negative one.
  • 20. An incidental motion mitigation exposure setting method comprising: receiving image input information; performing a motion mitigating flicker band automatic detection process, wherein said motion mitigating flicker band automatic detection process includes: performing a motion mitigating process on an illumination intensity indication; minimizing content impacts on said illumination intensity indication results of said motion mitigating process; binarizing said motion mitigated illumination intensity indication; and finding a correlation of said motion mitigated illumination intensity and a reference illumination intensity frequency; and implementing exposure settings based upon results of said motion mitigating flicker band automatic detection process.
  • 21. An incidental motion mitigation exposure setting method of claim 20 wherein said motion mitigation process includes: performing image sub-sampling; and averaging illumination intensity values from a plurality of pixel rows of said resulting sub-sampled image.
  • 22. An incidental motion mitigation exposure setting method of claim 20 wherein a number of downsampled pixel rows is maintained above a minimum that is greater than a video frame rate.
  • 23. An incidental motion mitigation exposure setting method of claim 20 wherein minimizing content impacts on said illumination intensity indication includes subtracting illumination intensity values of a first frame from illumination intensity values of a second frame.
  • 24. An incidental motion mitigation exposure setting method of claim 20 wherein a correlation score is utilized to establish said correlation and said correlation score is assigned based upon the following manipulation:
  • 25. An incidental motion mitigation exposure setting method of claim 20 further comprising selecting reliable down-sampled pixels for said averaging.
  • 26. An incidental motion mitigation exposure setting method of claim 20 wherein said reliable down-sampled pixels include pixels associated with a smooth well lit surface.
  • 27. An incidental motion mitigation exposure setting method of claim 20 wherein subtraction values are equal to subtracting illumination intensity values of a first frame from illumination intensity values of a second frame and binarizing said illumination intensity indication includes comparing said subtraction values with a moving average value and if a row of said subtraction values is larger than the mean, the subtraction values change to one, otherwise the subtraction values change to negative one.
RELATED APPLICATIONS

This application claims the benefit of and priority to the filing date of provisional application 60/772,437 entitled “A Flicker Band Automated Detection System and Method,” filed Feb. 10, 2006, which is incorporated herein by this reference.

US Referenced Citations (181)
Number Name Date Kind
3904818 Kovac Sep 1975 A
4253120 Levine Feb 1981 A
4646251 Hayes et al. Feb 1987 A
4685071 Lee Aug 1987 A
4739495 Levine Apr 1988 A
4771470 Geiser et al. Sep 1988 A
4920428 Lin et al. Apr 1990 A
4987496 Greivenkamp, Jr. Jan 1991 A
5175430 Enke et al. Dec 1992 A
5261029 Abi-Ezzi et al. Nov 1993 A
5305994 Matsui et al. Apr 1994 A
5387983 Sugiura et al. Feb 1995 A
5475430 Hamada et al. Dec 1995 A
5513016 Inoue Apr 1996 A
5608824 Shimizu et al. Mar 1997 A
5652621 Adams, Jr. et al. Jul 1997 A
5793433 Kim et al. Aug 1998 A
5878174 Stewart et al. Mar 1999 A
5903273 Mochizuki et al. May 1999 A
5905530 Yokota et al. May 1999 A
5995109 Goel et al. Nov 1999 A
6016474 Kim et al. Jan 2000 A
6078331 Pulli et al. Jun 2000 A
6111988 Horowitz et al. Aug 2000 A
6118547 Tanioka Sep 2000 A
6141740 Mahalingaiah et al. Oct 2000 A
6151457 Kawamoto Nov 2000 A
6175430 Ito Jan 2001 B1
6252611 Kondo Jun 2001 B1
6256038 Krishnamurthy Jul 2001 B1
6281931 Tsao et al. Aug 2001 B1
6289103 Sako et al. Sep 2001 B1
6314493 Luick Nov 2001 B1
6319682 Hochman Nov 2001 B1
6323934 Enomoto Nov 2001 B1
6392216 Peng-Tan May 2002 B1
6396397 Bos et al. May 2002 B1
6438664 McGrath et al. Aug 2002 B1
6486971 Kawamoto Nov 2002 B1
6504952 Takemura et al. Jan 2003 B1
6584202 Montag et al. Jun 2003 B1
6594388 Gindele et al. Jul 2003 B1
6683643 Takayama et al. Jan 2004 B1
6707452 Veach Mar 2004 B1
6724423 Sudo Apr 2004 B1
6724932 Ito Apr 2004 B1
6737625 Baharav et al. May 2004 B2
6760080 Moddel et al. Jul 2004 B1
6785814 Usami et al. Aug 2004 B1
6806452 Bos et al. Oct 2004 B2
6839062 Aronson et al. Jan 2005 B2
6856441 Zhang et al. Feb 2005 B2
6891543 Wyatt May 2005 B2
6900836 Hamilton, Jr. May 2005 B2
6950099 Stollnitz et al. Sep 2005 B2
7009639 Une et al. Mar 2006 B1
7015909 Morgan, III et al. Mar 2006 B1
7023479 Hiramatsu et al. Apr 2006 B2
7088388 MacLean et al. Aug 2006 B2
7092018 Watanabe Aug 2006 B1
7106368 Daiku et al. Sep 2006 B2
7133041 Kaufman et al. Nov 2006 B2
7133072 Harada Nov 2006 B2
7146041 Takahashi Dec 2006 B2
7221779 Kawakami et al. May 2007 B2
7227586 Finlayson et al. Jun 2007 B2
7245319 Enomoto Jul 2007 B1
7305148 Spampinato et al. Dec 2007 B2
7343040 Chanas et al. Mar 2008 B2
7486844 Chang et al. Feb 2009 B2
7502505 Malvar et al. Mar 2009 B2
7580070 Yanof et al. Aug 2009 B2
7626612 John et al. Dec 2009 B2
7627193 Alon et al. Dec 2009 B2
7671910 Lee Mar 2010 B2
7728880 Hung et al. Jun 2010 B2
7750956 Wloka Jul 2010 B2
7817187 Silsby et al. Oct 2010 B2
7859568 Shimano et al. Dec 2010 B2
7860382 Grip Dec 2010 B2
7912279 Hsu et al. Mar 2011 B2
8049789 Innocent Nov 2011 B2
8238695 Davey et al. Aug 2012 B1
8456547 Wloka Jun 2013 B2
8456548 Wloka Jun 2013 B2
8456549 Wloka Jun 2013 B2
8471852 Bunnell Jun 2013 B1
8570634 Luebke et al. Oct 2013 B2
8571346 Lin Oct 2013 B2
8588542 Takemoto et al. Nov 2013 B1
20010001234 Addy et al. May 2001 A1
20010012113 Yoshizawa et al. Aug 2001 A1
20010012127 Fukuda et al. Aug 2001 A1
20010015821 Namizuka et al. Aug 2001 A1
20010019429 Oteki et al. Sep 2001 A1
20010021278 Fukuda et al. Sep 2001 A1
20010033410 Helsel et al. Oct 2001 A1
20010050778 Fukuda et al. Dec 2001 A1
20010054126 Fukuda et al. Dec 2001 A1
20020012131 Oteki et al. Jan 2002 A1
20020015111 Harada Feb 2002 A1
20020018244 Namizuka et al. Feb 2002 A1
20020027670 Takahashi et al. Mar 2002 A1
20020033887 Hieda et al. Mar 2002 A1
20020041383 Lewis, Jr. et al. Apr 2002 A1
20020044778 Suzuki Apr 2002 A1
20020054374 Inoue et al. May 2002 A1
20020063802 Gullichsen et al. May 2002 A1
20020105579 Levine et al. Aug 2002 A1
20020126210 Shinohara et al. Sep 2002 A1
20020146136 Carter, Jr. Oct 2002 A1
20020149683 Post Oct 2002 A1
20020158971 Daiku et al. Oct 2002 A1
20020167202 Pfalzgraf Nov 2002 A1
20020167602 Nguyen Nov 2002 A1
20020191694 Ohyama et al. Dec 2002 A1
20020196470 Kawamoto et al. Dec 2002 A1
20030035100 Dimsdale et al. Feb 2003 A1
20030067461 Fletcher et al. Apr 2003 A1
20030122825 Kawamoto Jul 2003 A1
20030142222 Hordley Jul 2003 A1
20030146975 Joung et al. Aug 2003 A1
20030169353 Keshet et al. Sep 2003 A1
20030169918 Sogawa Sep 2003 A1
20030197701 Teodosiadis et al. Oct 2003 A1
20030218672 Zhang et al. Nov 2003 A1
20030222995 Kaplinsky et al. Dec 2003 A1
20030223007 Takane Dec 2003 A1
20040001061 Stollnitz et al. Jan 2004 A1
20040001234 Curry et al. Jan 2004 A1
20040032516 Kakarala Feb 2004 A1
20040066970 Matsugu Apr 2004 A1
20040100588 Hartson et al. May 2004 A1
20040101313 Akiyama May 2004 A1
20040109069 Kaplinsky et al. Jun 2004 A1
20040189875 Zhai et al. Sep 2004 A1
20040218071 Chauville et al. Nov 2004 A1
20040247196 Chanas et al. Dec 2004 A1
20050007378 Grove Jan 2005 A1
20050007477 Ahiska Jan 2005 A1
20050030395 Hattori Feb 2005 A1
20050046704 Kinoshita Mar 2005 A1
20050099418 Cabral et al. May 2005 A1
20050111110 Matama May 2005 A1
20050175257 Kuroki Aug 2005 A1
20050185058 Sablak Aug 2005 A1
20050238225 Jo et al. Oct 2005 A1
20050243181 Castello et al. Nov 2005 A1
20050248671 Schweng Nov 2005 A1
20050261849 Kochi et al. Nov 2005 A1
20050286097 Hung et al. Dec 2005 A1
20060050158 Irie Mar 2006 A1
20060061658 Faulkner et al. Mar 2006 A1
20060087509 Ebert et al. Apr 2006 A1
20060119710 Ben-Ezra et al. Jun 2006 A1
20060133697 Uvarov et al. Jun 2006 A1
20060176375 Hwang et al. Aug 2006 A1
20060197664 Zhang et al. Sep 2006 A1
20060274171 Wang Dec 2006 A1
20060290794 Bergman et al. Dec 2006 A1
20060293089 Herberger et al. Dec 2006 A1
20070091188 Chen et al. Apr 2007 A1
20070147706 Sasaki et al. Jun 2007 A1
20070171288 Inoue et al. Jul 2007 A1
20070236770 Doherty et al. Oct 2007 A1
20070247532 Sasaki Oct 2007 A1
20070285530 Kim et al. Dec 2007 A1
20080030587 Helbing Feb 2008 A1
20080043024 Schiwietz et al. Feb 2008 A1
20080062164 Bassi et al. Mar 2008 A1
20080101690 Hsu et al. May 2008 A1
20080143844 Innocent Jun 2008 A1
20080231726 John Sep 2008 A1
20090002517 Yokomitsu et al. Jan 2009 A1
20090010539 Guarnera et al. Jan 2009 A1
20090037774 Rideout et al. Feb 2009 A1
20090116750 Lee et al. May 2009 A1
20090128575 Liao et al. May 2009 A1
20090160957 Deng et al. Jun 2009 A1
20090257677 Cabral et al. Oct 2009 A1
20100266201 Cabral et al. Oct 2010 A1
Foreign Referenced Citations (42)
Number Date Country
1275870 Dec 2000 CN
0392565 Oct 1990 EP
1449169 May 2003 EP
1378790 Jul 2004 EP
1447977 Aug 2004 EP
1550980 Jul 2005 EP
2045026 Oct 1980 GB
2363018 May 2001 GB
61187467 Aug 1986 JP
62151978 Jul 1987 JP
07015631 Jan 1995 JP
8036640 Feb 1996 JP
08-079622 Mar 1996 JP
2000516752 Dec 2000 JP
2001052194 Feb 2001 JP
2002-207242 Jul 2002 JP
2003-085542 Mar 2003 JP
2004-221838 Aug 2004 JP
2004221838 Aug 2004 JP
2005094048 Apr 2005 JP
2005-182785 Jul 2005 JP
2005520442 Jul 2005 JP
2006025005 Jan 2006 JP
2006086822 Mar 2006 JP
2006-094494 Apr 2006 JP
2006121612 May 2006 JP
2006134157 May 2006 JP
2007019959 Jan 2007 JP
2007-148500 Jun 2007 JP
2009021962 Jul 2007 JP
2007-233833 Sep 2007 JP
2007282158 Oct 2007 JP
2008085388 Apr 2008 JP
2008113416 May 2008 JP
2008277926 Nov 2008 JP
1020040043156 May 2004 KR
1020060068497 Jun 2006 KR
1020070004202 Jan 2007 KR
03043308 May 2003 WO
2004063989 Jul 2004 WO
2007056459 May 2007 WO
2007093864 Aug 2007 WO
Non-Patent Literature Citations (39)
Entry
http://Slashdot.org/articles/07/09/06/1431217.html.
http://englishrussia.com/?p=1377.
“A Pipelined Architecture for Real-Time Correction of Barrel Distortion in Wide-Angle Camera Images”, Hau T. Ngo, Student Member, IEEE, and Vijayan K. Asari, Senior Member, IEEE, IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, No. 3, Mar. 2005, pp. 436-444.
“Calibration and removal of lateral chromatic aberration in images” Mallon, et al. Science Direct Copyright 2006; 11 pages.
“Method of Color Interpolation in a Single Sensor Color Camera Using Green Channel Separation”, Weerasinghe, et al., Visual Information Processing Lab, Motorola Australian Research Center, pp. IV-3233-IV-3236, 2002.
D. Doo, M. Sabin; “Behaviour of recursive division surfaces near extraordinary points”; Sep. 1978; Computer Aided Design; vol. 10, pp. 356-360.
D. W. H. Doo; “A subdivision algorithm for smoothing down irregular shaped polyhedrons”; 1978; Interactive Techniques in Computer Aided Design; pp. 157-165.
Davis, J., Marschner, S., Garr, M., Levoy, M., Filling holes in complex surfaces using volumetric diffusion, Dec. 2001, Stanford University, pp. 1-9.
E. Catmull, J. Clark; “Recursively generated B-spline surfaces on arbitrary topological meshes”; Nov. 1978; Computer Aided Design; vol. 10; pp. 350-355.
J. Bolz, P. Schroder; “Rapid Evaluation of Catmull-Clark Subdivision Surfaces”; Web 3D '02, 2002.
J. Stam; “Exact Evaluation of Catmull-Clark Subdivision Surfaces at Arbitrary Parameter Values”; Jul. 1998; Computer Graphics; vol. 32; pp. 395-404.
Krus, M., Bourdot, P., Osorio, A., Guisnel, F., Thibault, G., Adaptive tessellation of connected primitives for interactive walkthroughs in complex industrial virtual environments, Jun. 1999, Proceedings of the Eurographics workshop, pp. 1-10.
Kumar, S., Manocha, D., Interactive display of large scale trimmed NURBS models, 1994, University of North Carolina at Chapel Hill, Technical Report, pp. 1-36.
Kuno et al. “New Interpolation Method Using Discriminated Color Correlation for Digital Still Cameras” IEEE Transac. On Consumer Electronics, vol. 45, No. 1, Feb. 1999, pp. 259-267.
Loop, C., DeRose, T., Generalized B-spline surfaces of arbitrary topology, Aug. 1990, SIGGRAPH 90, pp. 347-356.
M. Halstead, M. Kass, T. DeRose; “Efficient, Fair Interpolation Using Catmull-Clark Surfaces”; Sep. 1993; Computer Graphics and Interactive Techniques, Proc.; pp. 35-44.
T. DeRose, M. Kass, T. Truong; “Subdivision Surfaces in Character Animation”; Jul. 1998; Computer Graphics and Interactive Techniques, Proc.; pp. 85-94.
Takeuchi, S., Kanai, T., Suzuki, H., Shimada, K., Kimura, F., Subdivision surface fitting with QEM-based mesh simplification and reconstruction of approximated B-spline surfaces, 2000, Eighth Pacific Conference on computer graphics and applications, pp. 202-212.
Keith R. Slavin; Application as Filed entitled “Efficient Method for Reducing Noise and Blur in a Composite Still Image From a Rolling Shutter Camera”; U.S. Appl. No. 12/069,669, filed Feb. 11, 2008.
Donald D. Spencer, “Illustrated Computer Graphics Dictionary”, 1993, Camelot Publishing Company, p. 272.
Duca et al., “A Relational Debugging Engine for the Graphics Pipeline”, International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH, Jul. 2005, pp. 453-463.
gDEBugger, graphicRemedy, http://www.gremedy.com, Aug. 8, 2006, pp. 1-18.
http://en.wikipedia.org/wiki/Bayer_filter; “Bayer Filter”; Wikipedia, the free encyclopedia; pp. 1-4.
http://en.wikipedia.org/wiki/Color_filter_array; “Color Filter Array”; Wikipedia, the free encyclopedia; pp. 1-5.
http://en.wikipedia.org/wiki/Color_space; “Color Space”; Wikipedia, the free encyclopedia; pp. 1-4.
http://en.wikipedia.org/wiki/Color_translation; “Color Management”; Wikipedia, the free encyclopedia; pp. 1-4.
http://en.wikipedia.org/wiki/Demosaicing; “Demosaicing”; Wikipedia, the free encyclopedia; pp. 1-5.
http://en.wikipedia.org/wiki/Half_tone; “Halftone”; Wikipedia, the free encyclopedia; pp. 1-5.
http://en.wikipedia.org/wiki/L*a*b*; “Lab Color Space”; Wikipedia, the free encyclopedia; pp. 1-4.
Ko et al., “Fast Digital Image Stabilizer Based on Gray-Coded Bit-Plane Matching”, IEEE Transactions on Consumer Electronics, vol. 45, No. 3, pp. 598-603, Aug. 1999.
Ko, et al., “Digital Image Stabilizing Algorithms Based on Bit-Plane Matching”, IEEE Transactions on Consumer Electronics, vol. 44, No. 3, pp. 617-622, Aug. 1998.
Morimoto et al., “Fast Electronic Digital Image Stabilization for Off-Road Navigation”, Computer Vision Laboratory, Center for Automation Research, University of Maryland, Real-Time Imaging, vol. 2, pp. 285-296, 1996.
Paik et al., “An Adaptive Motion Decision System for Digital Image Stabilizer Based on Edge Pattern Matching”, IEEE Transactions on Consumer Electronics, vol. 38, No. 3, pp. 607-616, Aug. 1992.
Parhami, Computer Arithmetic, Oxford University Press, Jun. 2000, pp. 413-418.
S. Erturk, “Digital Image Stabilization with Sub-Image Phase Correlation Based Global Motion Estimation”, IEEE Transactions on Consumer Electronics, vol. 49, No. 4, pp. 1320-1325, Nov. 2003.
S. Erturk, “Real-Time Digital Image Stabilization Using Kalman Filters”, http://www.ideallibrary.com, Real-Time Imaging 8, pp. 317-328, 2002.
Uomori et al., “Automatic Image Stabilizing System by Full-Digital Signal Processing”, IEEE Transactions on Consumer Electronics, vol. 36, No. 3, pp. 510-519, Aug. 1990.
Uomori et al., “Electronic Image Stabilization System for Video Cameras and VCRs”, J. Soc. Motion Pict. Telev. Eng., vol. 101, pp. 66-75, 1992.
Weerasinghe et al.; “Method of Color Interpolation in a Single Sensor Color Camera Using Green Channel Separation”; Visual Information Processing Lab, Motorola Australian Research Center; pp. IV-3233-IV-3236.
Provisional Applications (1)
Number Date Country
60772437 Feb 2006 US