Embodiments of the present invention are generally related to digital image signal processing.
As technology has advanced, cameras have advanced accordingly but still face certain persistent issues. In particular, as light passes through a camera lens, it is bent by refraction. This bending results in inconsistent brightness across the sensor, such that areas in the middle of the image are much brighter than areas at the edges. This distortion is known as lens shading or vignetting. Variations or imperfections in the lens further increase the inconsistency of the light leaving the lens. In addition, some light may be blocked or absorbed as a result of interacting with the lens housing. Thus, light coming through a lens system and forming an image on a film plane (digital sensor or film) will be unevenly attenuated across the image plane and across the color spectrum due to imperfections in the lens and the image-forming medium (film or digital array of sensors). The overall result is that if a “flat” field of light enters the lens, the film or digital sensor nevertheless receives an “unflat” field of light with varying brightness.
Conventionally, a high order polynomial may be used to represent this distortion and applied across the image plane in an attempt to overcome the impact of lens shading and lens imperfections, thereby correcting the image. However, high order polynomials are computationally expensive and complicated to execute on hardware of fixed precision. For example, a 10th order polynomial may have on the order of 100 individual terms, and it must be evaluated at every pixel, so the amount of computation required grows rapidly with image size. Further, high order polynomials are numerically unstable, as small variations can result in large changes in the polynomial's value. Also, as one moves around a surface defined by a 9th or 10th order polynomial, the coefficients provide little intuition as to the magnitude of the change in the surface value in any direction. These characteristics, being computationally intensive and non-intuitive, make polynomial representation an unattractive solution to the lens shading problem.
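To make the instability claim concrete, consider the following illustrative sketch (ours, not part of the described embodiments): it measures the conditioning of a 10th order polynomial fit across a normalized image axis, where the sample positions and counts are assumptions.

```python
import numpy as np

# Hypothetical flat-field samples taken across a normalized image axis.
x = np.linspace(0.0, 1.0, 50)

# Design matrix for a 10th order polynomial fit in the monomial basis.
V = np.vander(x, 11)

# The condition number is roughly 1e8: small relative errors in the
# samples can be amplified by many orders of magnitude in the fitted
# coefficients, which is untenable on fixed precision hardware.
print(f"condition number: {np.linalg.cond(V):.2e}")
```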
Accordingly, what is needed is a system for correcting image data for lens shading effects in an efficient manner. Embodiments provide for determining calibration data operable to be used for correcting image data (e.g., to overcome lens shading, misshapen lenses, and other effects). Embodiments of the present invention further provide for using the calibration data in a process of correcting image data for lens shading effects. In one embodiment, the correction of image data is performed via utilization of a spline surface (e.g., a Bezier surface). The use of spline surfaces facilitates efficient hardware implementation and provides an intuitive and computationally stable representation. The image correction may be performed on a per channel and per illumination type basis.
In one embodiment, the present invention is a computer implemented method for image signal processing. The method includes receiving image data, which may be received from an optical image sensor associated with a camera lens (e.g., a CMOS sensor or CCD sensor) and which includes data for one or more color channels. A Bezier patch array specific to the lens and sensor is then accessed. The Bezier patch array includes control points of a surface which is a reciprocal function of the detected lens shading. The Bezier patch array is applied to the image data to produce corrected image data, which is thereby corrected for a variety of effects including lens shading and lens imperfections.
In another embodiment, the present invention is implemented as an image signal processing system. The system includes an optical sensor (e.g., a CMOS sensor or CCD sensor) operable to capture light information and a processing unit operable to receive image signal data in an ordered format from the optical sensor. The processing unit is further operable to process the image signal data to correct an image based on a plurality of values reflecting a reciprocal surface. The reciprocal surface may have been determined by calibration embodiments described herein. The system further includes a channel selector for selecting a channel image signal (e.g., red, green, or blue) for the processing unit to receive and a memory operable to store a plurality of correction data sets (e.g., specific reciprocal surfaces) comprising correction information for each of a plurality of color channels. The memory may further store correction information (e.g., specific reciprocal surfaces) for each of a plurality of illumination types (e.g., fluorescent, tungsten or incandescent, daylight, or the like). The reciprocal surfaces may be multiple patches of a Bezier surface.
In another embodiment, the present invention is implemented as a method for calibrating an image signal processor. The method includes receiving light from a uniform field and sampling the light with a digital optical sensor associated with a lens. The method further includes determining a plurality of reciprocal values, one for each location (e.g., pixel) corresponding to the uniform field. Based on the reciprocal values, a plurality of control points is determined. The control points define a reciprocal surface based on the plurality of reciprocal values, and the reciprocal surface is operable to be used for correcting an image (e.g., correcting lens shading effects). For example, the calibration facilitates overcoming lens shading and lens imperfection effects. In one embodiment, the reciprocal surface is represented as multiple patches of a Bezier surface.
Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the present invention.
Notation and Nomenclature:
Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of an integrated circuit (e.g., computing system 100 of FIG. 1), or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within registers and memories into other data similarly represented as physical quantities within the memories or registers or other such information storage, transmission, or display devices.
Exemplary Operating Environment:
CPU 110 and the ISP 104 can also be integrated into a single integrated circuit die and CPU 110 and ISP 104 may share various resources, such as instruction logic, buffers, functional units and so on, or separate resources may be provided for image processing and general-purpose operations. System 100 can be implemented as, for example, a digital camera, cell phone camera, portable device (e.g., audio device, entertainment device, handheld device), webcam, video device (e.g., camcorder) and the like.
Sensor 102 receives light via a lens (not shown) and converts the light received into a signal (e.g., digital or analog). Sensor 102 may be any of a variety of optical sensors including, but not limited to, complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensors. Sensor 102 is coupled to communications bus 114 and may provide image data received over communications bus 114.
Image signal processor (ISP) 104 is coupled to communications bus 114 and processes the signal generated by sensor 102. More specifically, image signal processor 104 processes data from sensor 102 for storage in memory 106. For example, image signal processor 104 may compress an image and determine a file format for the image to be stored within memory 106.
Input module 108 allows entry of commands into system 100 which may then, among other things, control the sampling of data by sensor 102 and subsequent processing by ISP 104. Input module 108 may include, but is not limited to, navigation pads, keyboards (e.g., QWERTY), up/down buttons, touch screen controls (e.g., via display 112), and the like.
Central processing unit (CPU) 110 receives commands via input module 108 and may control a variety of operations including, but not limited to, sampling and configuration of sensor 102, processing by ISP 104, and management (e.g., addition, transfer, and removal) of images and/or video from memory 106.
Embodiments of the present invention provide for correction of image data. Embodiments further provide for calibration of data operable to be used for correcting image data (e.g., to overcome lens shading, misshapen lenses, and other effects). In one embodiment, the correction of image data is performed via utilization of a spline surface (e.g., a Bezier surface). The use of spline surfaces facilitates efficient hardware implementation. The image correction may be performed on a per channel basis, a per illumination type basis, or a combination thereof.
Embodiments of the present invention compensate for bending of light as light of different colors passes through a lens and falls on a color filter array (e.g., color filters 308-312 of color filter array 300). For example, light ray 320 is bent due to being refracted as it passes through lens 301 and green 1 filter 308. Embodiments of the present invention are further operable to compensate for optical crosstalk. Optical crosstalk can occur when a light ray is bent, as the light ray is refracted, such that it passes through more than one color filter prior to reaching a sensor. For example, light ray 324 is bent due to being refracted as it passes through lens 301, green 2 filter 312, and then red filter 310 before reaching red sensor 304. It is noted that as light ray 324 passes through red filter 310 and green 2 filter 312, light ray 324 is filtered in a manner not intended by the design of color filter array 300.
Embodiments of the present invention are further operable to compensate for electrical crosstalk. Electrical crosstalk can occur when light rays are bent and reach the material between sensors. For example, light ray 322 is bent due to being refracted upon passing through lens 301 and red filter 310 and then reaches substrate 314. Upon reaching substrate 314, photons of light ray 322 may impact the performance of sensors (e.g., green 1 sensor 302 and red sensor 304). Such impacts may include increasing electrical leakage among components of sensors 302 and 304 (e.g., well leakage).
It is appreciated that embodiments of the present invention may correct image data for a variety of sensor configurations including, but not limited to, panchromatic cells and vertical color filters. It is further appreciated that different types of lighting, e.g., of different color temperatures, may result in different bending of light as the light goes through filters 308-312. Therefore, embodiments determine and use different sets of control points per illuminant. For example, embodiments may utilize a different set of control points for each illuminant (e.g., fluorescent, tungsten, and daylight) for each color channel.
With reference to FIG. 4, a flowchart of an exemplary computer-implemented process 400 for calibrating an image signal processor is shown.
In block 402, light of a given color temperature from a uniform brightness field is applied to a lens. In one embodiment, a specific light source of a given color temperature (e.g., fluorescent, tungsten, or daylight) is selected to illuminate the uniform field.
In block 404, the light is sampled with a digital optical sensor for a given sensor channel. The digital optical sensor may be of any design, e.g., a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD). It is appreciated that the light received may have been affected (e.g., bent) after passing through the lens (e.g., lens 200) and/or a color filter array (e.g., color filter array 300). The distribution of brightness across the sensor is therefore not uniform, e.g., because of lens shading effects.
In block 406, a plurality of reciprocal values is determined, one for each sensor location, such that the product of a sensor's value and its reciprocal value yields a uniform field, e.g., unity. In other words, the plurality of reciprocal values is determined based on the image values of the uniform field such that when the reciprocal values are applied (e.g., multiplied) at each position (e.g., pixel), the result is a flat field. Each pass operates only on the color sensors of a particular color, e.g., a single channel.
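A minimal sketch of this step, assuming floating point arithmetic and taking the channel's brightest flat-field sample as the uniform target level (that normalization choice is our assumption):

```python
import numpy as np

def reciprocal_gains(flat_field, eps=1e-6):
    """Per-location reciprocal values for one color channel.

    flat_field: 2-D array of raw samples for a single channel, captured
    while imaging the uniformly lit field. Multiplying each sample by its
    returned reciprocal value flattens the capture to a uniform level.
    """
    flat = flat_field.astype(np.float64)
    target = flat.max()  # assumed reference level for the flat result
    return target / np.maximum(flat, eps)
```

Applying `reciprocal_gains(flat) * flat` then reproduces an approximately constant field, which is the criterion stated above.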
In block 408, a plurality of control points is determined for the current channel. In one embodiment, the control points define a spline surface based on the plurality of reciprocal values. The surface, or “reciprocal surface,” is created such that when it is multiplied by the values from the sensor, the values of the sensor are flattened out so that the original flat field is obtained. The reciprocal surface is operable to be used for correcting an image. For example, the reciprocal surface may provide compensation for the bending of light as it enters a lens (e.g., lens 200), a misshapen lens (e.g., lens 222), or a color filter array (e.g., color filter array 300). The plurality of control points may be determined on a per channel basis (e.g., a color channel basis). The plurality of control points may further be determined on an illumination type basis, based on the color temperature of the light source (e.g., fluorescent, daylight, or incandescent). It is appreciated that the control points represent a compressed form of a higher order surface.
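One plausible way to determine control points from the reciprocal samples is an ordinary least-squares fit against the Bernstein basis. The sketch below is one-dimensional for brevity (a tensor-product surface fit follows the same pattern in two parameters), and the function name is ours:

```python
import numpy as np
from math import comb

def fit_bezier_controls(t, values, degree=3):
    """Least-squares Bezier control values for 1-D reciprocal samples.

    t: sample parameters in [0, 1]; values: reciprocal values at those
    parameters. Returns degree + 1 control values minimizing squared error.
    """
    t = np.asarray(t, dtype=np.float64)[:, None]
    i = np.arange(degree + 1)[None, :]
    binom = np.array([comb(degree, k) for k in range(degree + 1)])
    basis = binom * t**i * (1.0 - t)**(degree - i)  # Bernstein design matrix
    controls, *_ = np.linalg.lstsq(basis, values, rcond=None)
    return controls
```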
In one embodiment, the reciprocal surface may be a Bezier surface made up of many patches. In one embodiment, the control points are control points of a bi-cubic Bezier surface. A Bezier surface may be evaluated by a series of linear interpolations. It is appreciated that the use of a Bezier surface has many desirable properties. For example, if the control points are scaled or affine transformations are applied, the effect is the same as applying the transformation to the surface, and changes to the surface occur in an intuitive, computationally stable manner. In contrast, applying such transformations to high order polynomials results in unpredictable, non-intuitive changes. Further, Bezier surfaces are separable, meaning that a two dimensional calculation can be solved as two one dimensional calculations, thereby allowing reuse of hardware.
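The “series of linear interpolations” is de Casteljau's algorithm, and the separability shows up directly in code: rows first, then one column, reusing the same one-dimensional routine. A minimal sketch, with the 4x4 control-grid layout as an assumption:

```python
def lerp(a, b, t):
    """Linear interpolation between two values."""
    return a + (b - a) * t

def de_casteljau(p0, p1, p2, p3, t):
    """Cubic Bezier value at t via a series of linear interpolations."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def bicubic_patch(ctrl, u, v):
    """Evaluate a 4x4 control grid at (u, v) in [0, 1] x [0, 1].

    Separability: four 1-D evaluations along the rows (in u), then one
    more along the resulting column (in v) -- the same datapath twice.
    """
    rows = [de_casteljau(*ctrl[i], u) for i in range(4)]
    return de_casteljau(*rows, v)
```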
Further, Bezier surfaces, as well as other spline surfaces, exhibit the convex hull property, meaning that the surface is bounded by its control points. Embodiments of the present invention are thus well suited for hardware implementations utilizing fixed precision circuits. It is appreciated that the use of splines (e.g., Bezier surfaces) overcomes the problems associated with high order polynomials (e.g., numerical instability and computational expense).
In one embodiment, the Bezier surface may comprise a plurality of Bezier patches. More particularly, the Bezier surface may be divided up into a plurality of patches. For example, a Bezier surface of nine patches may be determined for each of the red, green 1, green 2, and blue channels for each illumination type. The number of patches may vary and may be a configurable option. Each patch may be defined by control points. For example, for a cubic Bezier surface there may be 16 control points per patch.
In one embodiment, adjacent patches share control points on their internal boundaries. That is, a portion of the control points lie on the boundaries between Bezier patches. Locating control points on the boundaries ensures that the patches join seamlessly. It is appreciated that sharing control points on the boundaries also reduces the overall number of control points. A set of Bezier surfaces and patches may be determined for each color channel and each illumination type. For example, sharing control points on patch boundaries may result in 100 distinct points for a set of nine patches for a color channel, where each patch nominally has 16 control points. It is appreciated that embodiments of the present invention may determine and utilize Bezier patches of any degree.
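The arithmetic behind the 100-point figure, together with one way a shared grid could be indexed per patch (the indexing scheme is illustrative):

```python
# Nine bicubic patches in a 3x3 layout sharing boundary control points:
patches_per_side = 3
points_per_side = 3 * patches_per_side + 1    # 10: internal boundaries shared
shared_total = points_per_side ** 2           # 100 distinct control points
unshared_total = patches_per_side ** 2 * 16   # 144 if nothing were shared

def patch_controls(grid, i, j):
    """16 control points of patch (i, j) from a shared 10x10 grid.

    Adjacent patches overlap in one boundary row/column of the grid,
    which is what makes them join seamlessly.
    """
    return [row[3 * j : 3 * j + 4] for row in grid[3 * i : 3 * i + 4]]
```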
It is appreciated that the boundaries of the patches can be variably spaced across the sensor surface. For example, boundaries may be moved around according to the surface so that areas where the surface is substantially uneven may have more control points so as to better reflect the shape of the surface. As another example, the boundaries may be selected to correspond to certain areas of a lens being particularly uneven.
At block 409 of FIG. 4, it is determined whether additional color channels remain to be processed for the current light source. If so, block 410 is performed; otherwise, the process continues to block 412.
At block 410, another color channel is selected. Block 406 may then be performed and a plurality of reciprocal values for the selected channel are computed, etc.
At block 412, a different color temperature source may be selected and the process continues back to block 404. If three color temperatures are used (e.g., daylight, tungsten, and fluorescent) and four color channels are used (e.g., red, green 1, green 2, and blue), then process 400 will define twelve different Bezier surfaces, each with 100 control points arranged as nine patches of sixteen control points per patch (in one example).
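One plausible organization of the resulting calibration data follows; `fit_surface_for` is a hypothetical stand-in for blocks 404-408:

```python
ILLUMINANTS = ("daylight", "tungsten", "fluorescent")
CHANNELS = ("red", "green1", "green2", "blue")

def fit_surface_for(illuminant, channel):
    """Hypothetical stand-in for blocks 404-408; returns a control grid."""
    return [[1.0] * 10 for _ in range(10)]  # placeholder 10x10 grid

# One shared control grid (nine bicubic patches) per combination:
calibration = {
    (illuminant, channel): fit_surface_for(illuminant, channel)
    for illuminant in ILLUMINANTS
    for channel in CHANNELS
}
assert len(calibration) == 12  # twelve Bezier surfaces, as described above
```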
Memory 508 may include channel correction information 510. Channel correction information 510 may include correction information for each of a plurality of color channels for each of a plurality of light sources. It is appreciated that channel and lighting correction information may be based on a variety of splines including, but not limited to, Bezier splines, Hermite splines, cubic Hermite splines, Kochanek-Bartels splines, polyharmonic splines, perfect splines, smoothing splines, and thin plate splines.
Optical sensor 502 is an array that is operable to capture light information. Optical sensor 502 may be a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor. Optical sensor 502 sends image information captured as image signal data to processing unit 504. Channel selector 506 selects a color channel of the image signal for processing unit 504 to receive. For example, channel selector 506 may select a red color channel of image data to be received by processing unit 504.
In one embodiment, processing unit 504 may be an image signal processor (e.g., ISP 104). In another embodiment, processing unit 504 may be a programmable device. Processing unit 504 is operable to receive image signal data in an ordered format (e.g., scan line order) and is further operable to process the image signal data to correct an image based on a plurality of values reflecting a reciprocal surface. Processing unit 504 may correct image data based on a reciprocal surface which, when applied, corrects for the various distortion effects on light as that light travels to optical sensor 502 (e.g., bending of light passing through a lens). As described herein, the reciprocal surface may be a Bezier surface. The Bezier surface may include a plurality of Bezier patches having one or more control points on a boundary of the plurality of Bezier patches. In one embodiment, processing unit 504 corrects the image data on a per color channel basis and a per light source basis.
In one embodiment, processing unit 504 takes advantage of the fact that image data is received in an ordered format. More specifically, processing unit 504 may take advantage of the ordered format by determining the distance from the previous point and how much the reciprocal surface has changed over that distance, thereby avoiding reevaluation of the reciprocal surface at each location (e.g., pixel). Embodiments of the present invention thus take advantage of incoming data coherency.
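The description does not name the exact incremental scheme, but a standard way to realize this on scan-line-ordered data is cubic forward differencing: after a short per-span setup, each step along the scan line costs three additions rather than a full surface reevaluation. A sketch for one cubic Bezier span (n >= 2 assumed):

```python
def scanline_values(p0, p1, p2, p3, n):
    """Cubic Bezier span evaluated at n equally spaced points, adds only.

    p0..p3 are the span's control values. The control values are first
    converted to power-basis coefficients a*t^3 + b*t^2 + c*t + d, then
    stepped with forward differences.
    """
    a = -p0 + 3*p1 - 3*p2 + p3
    b = 3*p0 - 6*p1 + 3*p2
    c = -3*p0 + 3*p1
    d = p0

    h = 1.0 / (n - 1)                 # step between adjacent pixels
    f = d                             # value at t = 0
    d1 = a*h**3 + b*h**2 + c*h        # first difference at t = 0
    d2 = 6*a*h**3 + 2*b*h**2          # second difference at t = 0
    d3 = 6*a*h**3                     # third difference (constant)

    out = []
    for _ in range(n):
        out.append(f)
        f, d1, d2 = f + d1, d1 + d2, d2 + d3
    return out
```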
In block 602, image data is received from a sensor array. In one embodiment, the image data is received from an optical image sensor and the image data comprises data for one or more color channels (e.g., red, green, blue). As described herein, the optical sensor may be a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor. It is appreciated that embodiments of the present invention are able to correct image data independent of the sensor type used.
In block 604, the color temperature of the light source is detected. As described herein, the detected illuminant may be fluorescent, tungsten, or daylight.
In block 606, data of a given color channel is selected. As described herein, the color channel may be a red, green 1, green 2, or blue color channel.
In block 608, a Bezier patch array for the given color and the detected color temperature is accessed. In one embodiment, the Bezier patch array comprises control points of a surface which is usable for compensating for lens shading and other effects. For example, the control points may correspond to the reciprocal surface for correcting image data received via a misshapen lens (e.g., lens 222). In one embodiment, the Bezier patch array comprises one or more bi-cubic Bezier patches. As described herein, the Bezier patch array may comprise 100 control points. Further, a portion of the control points may be located on boundaries of a Bezier patch. More specifically, the Bezier patch array may include a Bezier surface for each color channel and a Bezier surface for each illumination type (e.g., tungsten, fluorescent, or daylight).
In block 610, the Bezier patch array is applied to the image data of the given color channel to produce corrected image data. The Bezier patch array is utilized to flatten out an image field that was bent by the lens (e.g., lens 200) or color filter array (e.g., color filter array 300). As described herein, the Bezier patch array may be used to correct image data on a per color channel and per illumination basis. For example, image data for the red channel of a pixel with X and Y coordinates may be corrected with the Bezier patch array: the reciprocal value of the Bezier patch is multiplied by the red channel value to obtain a flat field value for the corresponding point in the red channel. Image data for other channels may then be processed in a substantially similar manner with the Bezier surface for the corresponding channel.
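A minimal per-channel sketch of this block, assuming normalized pixel coordinates and a 10-bit sensor range; `gain_at` stands in for evaluation of the channel's Bezier patch array (e.g., `bicubic_patch` above):

```python
import numpy as np

def correct_channel(raw, gain_at, full_scale=1023.0):
    """Flatten one color channel: sample times reciprocal value, clamped.

    raw: 2-D array of samples for the selected channel (height x width).
    gain_at(u, v): reciprocal-surface value at normalized coordinates.
    full_scale: assumed 10-bit range; adjust to the actual sensor.
    """
    h, w = raw.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            u, v = x / (w - 1), y / (h - 1)
            out[y, x] = raw[y, x] * gain_at(u, v)
    return np.clip(out, 0.0, full_scale)
```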
In block 612, a check is performed to determine whether there are more color channels to be processed. If there are more color channels to be processed, block 614 is performed. In block 614, the next color channel is selected for processing.
If there are no more color channels to be processed, block 616 is performed. At block 616, the corrected image data is provided (e.g., to an ISP pipeline for additional processing).
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
This application is a Divisional application of, and claims priority to, Ser. No. 12/752,878, entitled “SYSTEM AND METHOD FOR LENS SHADING IMAGE CORRECTION (as amended),” with filing date Apr. 1, 2010, which claims the benefit of and priority to the provisional patent application Ser. No. 61/170,014, entitled “SYSTEM AND METHOD FOR IMAGE CORRECTION,” with filing date Apr. 16, 2009, both of which are hereby incorporated by reference in their entirety.