Embodiments of the present invention are generally related to digital image signal processing.
As technology has advanced, cameras have advanced accordingly but still face certain persistent issues. In particular, as light passes through a camera lens, the light is bent as it refracts. This bending of light results in inconsistent brightness across the sensor, such that areas in the middle are much brighter than areas on the edges. Variations or imperfections in the lens further increase the inconsistency of light coming out of the lens. In addition, light may be blocked, or may fail to pass through, as a result of interacting with the lens housing. This distortion is known as lens shading or vignetting. Thus, light coming through a lens system and forming an image on a film plane (digital sensor or film) will be unevenly attenuated across the image plane and color spectrum due to imperfections in the lens and due to the angle at which the light strikes the image-forming medium (film or a digital array of sensors), in particular the color filter array, which filters the light and guides it into the image-forming device. The overall result is that if a “flat” field of light enters the lens, the film or digital sensor nevertheless receives an “unflat” field of light with varying brightness and color.
Conventionally, a high order polynomial may be used to represent this distortion and can be applied across the image plane in an attempt to overcome the impact of lens shading and lens imperfections, thereby correcting the image. However, high order polynomials are computationally expensive and are complicated to execute on hardware of fixed precision. For example, a 10th power polynomial may have on the order of 100 individual terms, and a high order polynomial may require evaluation at each pixel, so the number of computations required grows rapidly with image size. Further, higher order polynomials are numerically unstable, as small variations in the coefficients can result in large changes in the polynomial. Also, for a surface defined by a polynomial of 9th or 10th order, the polynomial coefficients provide little intuition as to the magnitude of the changes in the surface value in any direction. All of these characteristics make a polynomial representation an unattractive solution to the lens shading problem: it is computationally intensive and not intuitive.
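For illustration, the growth in the number of polynomial terms can be seen by counting coefficients. A tensor-product polynomial with per-axis degree n (terms up to x^n·y^n) has (n+1)^2 coefficients, so a polynomial with powers up to the 9th in each direction already requires 100 terms, each of which may need to be evaluated at every pixel. The following sketch (the helper name is hypothetical) illustrates the count:

```python
# Illustrative term count for a tensor-product polynomial surface
# p(x, y) = sum over i <= n, j <= n of a_ij * x^i * y^j.
def term_count(n):
    """Number of coefficients for per-axis degree n."""
    return (n + 1) ** 2

# Per-axis degree 9 (terms up to x^9 * y^9) already requires
# 100 coefficients, each evaluated at every pixel of the image.
print(term_count(9))  # 100
```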
Accordingly, what is needed is a system and method for correcting image data in an efficient manner. Embodiments provide for separably processing portions (e.g., patches) of an image based on spline surfaces (e.g., a Bezier surface) to correct for distortions, e.g., from vignetting. The use of spline surfaces facilitates efficient hardware implementations (e.g., via linear interpolators) and provides an intuitive and computationally stable selection. Embodiments further correct pixels as the pixels are received from the sensor for a variety of effects including lens shading effects (vignetting), optical crosstalk, and electrical crosstalk. Moreover, the image correction may be performed on a per channel and illumination type basis.
In one embodiment, the present invention includes a computer implemented method for image signal processing. In one embodiment, the method includes accessing, within an electronic system, a plurality of control points for a Bezier surface or an array of Bezier patches and calculating a plurality of intermediate control points corresponding to a row of pixels of the patch. The method further includes receiving a pixel of an image and correcting the pixel based on the plurality of intermediate control points. The pixel is located in the row of pixels of the patch. Because of the separable formulation of the spline patch approach, pixels can then be received and corrected in scan line order, as a stream, on a row-of-a-patch basis, with a corrected image being output when each patch of the surface has been corrected. It is understood that in another embodiment “columns of a patch” can be interchanged with “a row of a patch.”
In another embodiment, the present invention is implemented as an image signal processing system. The system includes a pixel receiving module operable to receive a plurality of pixels from an optical sensor (e.g., CMOS sensor or CCD sensor) and a control points access module operable to access control points of a Bezier surface (or any other spline surface). The system further includes an intermediate control points module operable to determine a plurality of intermediate control points for a plurality of pixels corresponding to a patch of the Bezier surface and a pixel correction module operable to correct pixels based on the plurality of intermediate control points (e.g., on a row by row basis of a patch).
In yet another embodiment, the present invention is implemented as a method for image signal processing. The method includes accessing, within an electronic system, a plurality of control points for a patch of a spline surface and calculating a plurality of intermediate control points corresponding to a row of pixels for each color channel of an image. The method further includes receiving a plurality of pixels and adjusting the plurality of pixels based on the plurality of intermediate control points and respective horizontal locations of the plurality of pixels. The plurality of pixels may include pixels located on rows of pixels corresponding to the horizontal positions of the patch of the spline surface and the plurality of pixels may comprise a plurality of color channels of an image.
Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
a shows a block diagram of an exemplary lens operable to be used with one embodiment of the present invention.
b shows a block diagram of another exemplary lens operable to be used with one embodiment of the present invention.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments of the present invention.
Notation and Nomenclature:
Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of an integrated circuit (e.g., system 100 of
Exemplary Operating Environment:
CPU 110 and the ISP 104 can also be integrated into a single integrated circuit die and CPU 110 and ISP 104 may share various resources, such as instruction logic, buffers, functional units and so on, or separate resources may be provided for image processing and general-purpose operations. System 100 can be implemented as, for example, a digital camera, cell phone camera, portable device (e.g., audio device, entertainment device, handheld device), webcam, video device (e.g., camcorder) and the like.
Sensor 102 receives light via a lens (not shown) and converts the light received into a signal (e.g., digital or analog). Sensor 102 may be any of a variety of optical sensors including, but not limited to, complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensors. Sensor 102 is coupled to communications bus 114 and may provide image data received over communications bus 114.
Image signal processor (ISP) 104 is coupled to communications bus 114 and processes the signal generated by sensor 102. More specifically, image signal processor 104 processes data from sensor 102 for storing in memory 106. For example, image signal processor 104 may compress and determine a file format for an image to be stored in within memory 106.
Input module 108 allows entry of commands into system 100 which may then, among other things, control the sampling of data by sensor 102 and subsequent processing by ISP 104. Input module 108 may include, but is not limited to, navigation pads, keyboards (e.g., QWERTY), up/down buttons, touch screen controls (e.g., via display 112) and the like.
Central processing unit (CPU) 110 receives commands via input module 108 and may control a variety of operations including, but not limited to, sampling and configuration of sensor 102, processing by ISP 104, and management (e.g., addition, transfer, and removal) of images and/or video from memory 106.
Embodiments provide for separably processing portions (e.g., patches) of an image based on spline surfaces (e.g., a Bezier surface). The use of spline surfaces facilitates efficient hardware implementations (e.g., via linear interpolators) and provides an intuitive and computationally stable selection. Embodiments further correct pixels as the pixels are received for a variety of effects including lens shading effects, optical crosstalk, and electrical crosstalk. The image correction may be performed on a per channel and illumination type basis. Embodiments of the present invention are operable to correct images based on any of a variety of spline surfaces.
a shows a block diagram of an exemplary lens operable to be used with one embodiment of the present invention. Lens 200 is a lens operable to be used in an image or video capture device (e.g., camera, digital camera, webcam, camcorder, portable device, cell phone, and the like). Lens 200 may be made of a variety of materials including, but not limited to, glass, plastic, or a combination thereof. Light ray 202 enters lens 200 substantially near the center of lens 200. Light ray 202 is bent as it is refracted while passing through lens 200. Light ray 204 enters lens 200 substantially near an edge of lens 200. As illustrated, light ray 204 is bent by a substantially greater amount than light ray 202, thereby resulting, at sensor 205, in a difference in brightness between light received substantially near the center of lens 200 and light received near the edge of lens 200. This distortion is known as lens shading or vignetting.
b shows a block diagram of an exemplary lens operable to be used with one embodiment of the present invention. Line 220 depicts a well-shaped lens (e.g., lens 200). Lens 222 depicts a misshapen lens, which may be more representative of a lens in a variety of devices. It is appreciated that the effects of misshapen lens 222 may further impact the bending of light as it passes through lens 222. Embodiments of the present invention compensate for and overcome the effects of light being bent by the lens and irregularities in the shape of lenses (e.g., lens 222). It is appreciated that lenses may have a variety of defects including, but not limited to, lopsidedness and waviness. It is further appreciated that variations in manufacturing processes of a lens can alter the location of the brightest spot. Of particular note, portable devices (e.g., cell phones) and low cost devices may have lenses that are plastic and not well constructed.
Embodiments of the present invention compensate for bending of light as light of different colors passes through a lens and falls on a color filter array (e.g., color filters 308-312 and sensor array 300). For example, light ray 320 is bent due to light ray 320 being refracted as it passes through lens 301 and green 1 filter 308. Embodiments of the present invention are further operable to compensate for optical crosstalk. Optical crosstalk can occur when a light ray is bent as it is refracted while passing through more than one color filter prior to reaching a sensor. For example, light ray 324 is bent due to being refracted as it passes through lens 301 and green 2 filter 312, and then passes through red filter 310 before reaching red sensor 304. It is noted that as light ray 324 passes through both green 2 filter 312 and red filter 310, light ray 324 is filtered in a manner not intended by the design of color filter array 300.
Embodiments of the present invention are further operable to compensate for electrical crosstalk. Electrical crosstalk can occur when light rays are bent and reach the material between sensors. For example, light ray 322 is bent due to being refracted upon passing through lens 301 and red filter 310 and then reaches substrate 314. Upon reaching substrate 314, photons of light ray 322 may impact the performance of sensors (e.g., green 1 sensor 302 and red sensor 304). Such impacts may include increasing electrical leakage among components of sensors 302 and 304 (e.g., well leakage).
It is appreciated that embodiments of the present invention may correct image data for a variety of sensor configurations including, but not limited to, panchromatic cells and vertical color filters. It is further appreciated that different types of lighting, e.g., of different color temperature, may result in different bending of light as light goes through filters 308-312. Therefore, embodiments use different sets of control points per illuminant. For example, embodiments may utilize a different set of control points for each illuminant (e.g., fluorescent, tungsten, and daylight) for each color channel.
It is further understood that the present invention is operable to use any type of spline and is not limited to just the Bezier formulation but encompasses any formulation with similar characteristics including, but not limited to, B-Splines, wavelet splines, and thin-plate splines.
Embodiments of the present invention are operable to handle any number of patches. The use of the patches allows embodiments of the present invention to scale to any number of pixels. In one embodiment, Bezier surface 400 includes nine Bezier patches. The control points may be equally spaced. As illustrated, the boundaries of the patches share control points, thereby reducing the number of control points necessary and ensuring positional continuity between patches. In one exemplary embodiment, each patch has 16 control points, with boundary control points shared between adjacent patches, for a total of 100 control points per channel (e.g., color channel).
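For illustration, the figure of 100 control points per channel follows from the sharing of boundary control points: a 3 by 3 array of patches, each with a 4 by 4 grid of control points, collapses to a shared 10 by 10 grid. The following sketch (the function name is hypothetical) computes the count:

```python
def total_control_points(patches_per_side, pts_per_patch_side=4):
    """Count shared control points for a square array of spline patches.

    Adjacent patches share a boundary row/column of control points, so
    the full grid has (patches * (pts - 1) + 1) points per side.
    """
    per_side = patches_per_side * (pts_per_patch_side - 1) + 1
    return per_side ** 2

# Nine patches (3 x 3), 16 control points each, with shared boundaries:
print(total_control_points(3))  # 100  (instead of 9 * 16 = 144 unshared)
```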
Embodiments of the present invention calculate intermediate control points for each row of pixels of a patch. In one embodiment, at each pixel of a row of pixels, the intermediate control points are used to determine a gain which is multiplied by the intensity value of the pixel. The intermediate control points are used to correct the row of pixels for a variety of effects including lens shading effects, optical crosstalk, and electrical crosstalk. In another embodiment, “columns of a patch” can be interchanged with “a row of a patch.”
In one exemplary embodiment, each of intermediate control points 606 is computed based on the control points vertically aligned with that intermediate control point. The intermediate control points are calculated based on linear interpolations. For example, intermediate control point iC1 is computed based on control points C1, C2, C3, and C4. Similarly, intermediate control point iC2 is computed based on control points C5, C6, C7, and C8. Intermediate control point iC3 is computed based on control points C9, C10, C11, and C12 and intermediate control point iC4 is computed based on control points C13, C14, C15, and C16.
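As an illustrative sketch of this computation, each intermediate control point may be obtained by evaluating a cubic Bezier curve, built from one vertical column of patch control points, at the current v via repeated linear interpolation (the de Casteljau scheme). The function names and the row-major layout of the 4 by 4 control point grid are assumptions for illustration:

```python
def lerp(a, b, t):
    """Linear interpolation, the basic hardware building block."""
    return a + (b - a) * t

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve by de Casteljau's algorithm:
    three levels of linear interpolation, matching an implementation
    built from hardware lerp units."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def intermediate_points(ctrl, v):
    """ctrl is a 4x4 patch control point grid (row-major, assumed).
    Each vertical column (e.g., C1..C4) yields one intermediate
    control point by evaluating its Bezier curve at the row's v."""
    return [cubic_bezier(ctrl[0][j], ctrl[1][j], ctrl[2][j], ctrl[3][j], v)
            for j in range(4)]
```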
Embodiments are thus able to process the image in a patch by patch basis and further process each patch on scan line basis. In one embodiment, patches are processed horizontally and then vertically. For example, an exemplary order of processing may include 602a, 602b, 602c, 602d, 602e, 602f, 602g, 602h, and 602i.
Memory 708 includes channel correction information 710. Channel correction information 710 includes sets of control points for each color channel and each illuminant or color temperature. In one embodiment, channel correction information 710 includes respective sets of control points (e.g., for a Bezier surface) for blue, green 1, green 2, and red for each color temperature. It is appreciated that embodiments of the present invention are operable to process pixels received from any sensor configuration (e.g., interleaved color channels, single color channels, stacked color channels, etc.).
Pixel receiving module 720 is operable to receive a plurality of pixels from an optical sensor (e.g., sensor 702). Pixel receiving module 720 is operable to receive pixels from sensors of any type (e.g., Bayer, etc.).
Illuminant detector module 728 is operable to detect an illuminant and select a plurality of control points, based on the illuminant detected, for each color channel via control points access module 722. Control points access module 722 is operable to access control points of a spline surface (e.g., Bezier surface). Control points access module 722 accesses channel correction information 710 based on the illuminant detected by illuminant detector module 728.
Intermediate control points module 724 is operable to determine a plurality of intermediate control points for a plurality of pixels corresponding to a patch of the Bezier surface. Embodiments of the present invention compute the intermediate control points by separating the horizontal and vertical computations of a patch. The intermediate control points define the portion of the Bezier surface corresponding to a row of pixels for the patch of the spline surface. The intermediate control points are calculated based on the control points on a scan line of a patch basis. The intermediate control points can be reused once calculated for each pixel of a row of pixels in a patch. Thus, embodiments of the present invention advantageously use the fact that the spline surface values can be separably computed.
In one embodiment, intermediate control points for each color channel are determined (e.g., via linear interpolation). Each set of intermediate control points is calculated based on the control points of the patch of the Bezier surface. As each pixel of a different color channel is received, the corresponding intermediate control points are used to correct each pixel. In one embodiment, four intermediate control points are computed for each scan line of a patch (e.g., row of pixels of a patch). Counters for the location of pixels (e.g., counters tracking u and/or v) may be reset when intermediate control points are calculated for a new patch.
Pixel correction module 726 is operable to correct pixels based on the plurality of intermediate control points. As each pixel is received correction based on the intermediate control points can be performed. A counter can be used to track the pixel location within the row of patch of the color channel. As the next pixel is received, the corresponding value of the curve defined by the intermediate control points is used to correct the pixel. In one embodiment, the correction of pixels comprises a series of linear interpolations based on the de Casteljau algorithm (e.g., for a Bezier surface). The linear interpolations may be performed sequentially by a single linear interpolator or in parallel by a plurality of linear interpolators.
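As an illustrative sketch, the per-pixel correction may be expressed as evaluating the one dimensional Bezier curve defined by the four intermediate control points at the pixel's u position, and multiplying the pixel value by the resulting gain. The function names are hypothetical:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def cubic_bezier(p0, p1, p2, p3, t):
    """de Casteljau evaluation: a cascade of linear interpolations,
    which may run sequentially on one interpolator or in parallel."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def corrected_pixel(pixel, i_ctrl, u):
    """Gain is the value, at the pixel's u position, of the 1-D Bezier
    curve defined by the four intermediate control points i_ctrl."""
    gain = cubic_bezier(i_ctrl[0], i_ctrl[1], i_ctrl[2], i_ctrl[3], u)
    return pixel * gain
```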
Prefetching module 732 is operable to prefetch a plurality of control points of a second or next patch of the spline surface. Tracking module 730 is operable to track the location of a pixel received within a patch of the spline surface and operable to signal prefetching module 732 when a received pixel is within a predetermined range of the end of a row of pixels of the patch.
In one embodiment, tracking module 730 is operable to signal intermediate control points module 724. Tracking module 730 may thus signal prefetching module 732 to prefetch the next set of control points (e.g., for the next adjacent patch or the first patch in the next row of patches) and signal intermediate control points module 724 to compute the intermediate control points for the first row of the next patch.
Tracking module 730 is operable to detect when processing is approaching the end of a row and end of a column (e.g., the height of a patch). In one embodiment, tracking module 730 comprises a horizontal counter and a vertical counter for tracking the pixel location both horizontally and vertically within a patch.
In one embodiment, tracking module 730 comprises a multiplexer in addition to the counters such that when the end of a row or patch is reached, the multiplexer changes the patch that intermediate control points module 724 accesses. This thereby allows intermediate control points module 724 to compute the intermediate control points for the next patch and the corresponding scan line of the next patch. In another embodiment, a pointer is changed to access the control points for the next patch.
Tracking module 730 is further operable to track the patch of pixels being corrected in relation to the other patches. In one embodiment, s and t are used to index the patches of a Bezier surface (e.g.,
Pixel color module 734 is operable to select a plurality of intermediate control points based on the color of a pixel received. Pixel color module 734 can further select the plurality of intermediate control points corresponding to the color channel of a pixel (e.g., for pixel correction module 726). For example, if a red pixel is received, pixel color module 734 will select the intermediate control points computed for the red channel of a Bezier surface. As the next pixel is received, pixel color module 734 selects the plurality of intermediate control points corresponding to the color channel of the next pixel. For example, pixel color module 734 may select a plurality of intermediate control points corresponding to a green color channel for a green pixel.
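The per-channel selection may be sketched as follows, assuming a hypothetical Bayer layout in which even rows alternate green 1 and red and odd rows alternate blue and green 2 (the layout, names, and gain values below are illustrative only):

```python
def channel_of(x, y):
    """Hypothetical Bayer layout: even rows alternate green1/red,
    odd rows alternate blue/green2."""
    if y % 2 == 0:
        return "green1" if x % 2 == 0 else "red"
    return "blue" if x % 2 == 0 else "green2"

# One set of intermediate control points per channel (placeholder values).
i_ctrl = {"green1": [1.0, 1.1, 1.2, 1.3],
          "red":    [1.2, 1.3, 1.4, 1.5],
          "blue":   [1.1, 1.2, 1.3, 1.4],
          "green2": [1.0, 1.1, 1.2, 1.3]}

# As each pixel arrives, select the control points for its channel.
points_for_pixel = i_ctrl[channel_of(1, 0)]  # red channel's points
```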
With reference to
At block 802, a plurality of control points for a first patch of a spline (e.g., Bezier) surface are accessed within an electronic system. At block 804, a plurality of intermediate control points corresponding to a row of pixels of the first patch are calculated (e.g., via linear interpolation), as described herein.
At block 806, a first pixel of an image is received. At block 808, the first pixel is corrected based on the plurality of intermediate control points. The correction of a pixel based on the plurality of intermediate control points is operable to compensate for lens shading effects, optical crosstalk, electrical crosstalk, and other sensor output irregularities, as described herein. As described herein, the correcting can be performed via linear interpolation. The correction may be computed with fixed precision or floating point precision.
At block 810, a second pixel of the image is received. In one embodiment, the second pixel is horizontally adjacent to the first pixel (e.g., the next pixel of a row of pixels of a patch).
At block 812, a counter is incremented corresponding to a pixel location within the first patch of the spline surface (e.g., Bezier). In one embodiment, the counter corresponds to u or a horizontal axis of a patch and is incremented by Δu representing the spacing between pixels of a single color channel. At block 814, the second pixel is corrected based on the intermediate control points.
At block 816, whether the end of a row of pixels of a patch has been reached is determined. If the end of a row of pixels of a patch has been reached, block 826 is performed. If the end of a row of pixels of a patch has not been reached, block 818 is performed.
At block 818, whether the processing of pixels is nearing the end of a row is determined. If the current pixel that was corrected is within a predetermined range of an end of a row, block 820 is performed. If the current pixel is not within the predetermined range, block 806 is performed and another pixel of the image is received. At block 820, a plurality of control points for a second patch of the spline surface (e.g., Bezier) are prefetched. If the current pixel is near the end of a patch, the next adjacent patch can be prefetched. If the current pixel is near the end of a row of the image, the next patch of the next row of patches can be prefetched (e.g., the first patch of the second row of patches).
At block 826, whether the end of a row of the image has been reached is determined. If the end of a row of the image has been reached, block 822 is performed. If the end of a row of the image has not been reached, block 828 is performed.
At block 822, whether the row of pixels being processed is the last row of the image is determined. If the last row of the image has been processed, block 824 is performed and the corrected image data is output. If there are more rows of pixels and patches of the image to process, block 830 is performed.
At block 828, the current patch is set to the next adjacent patch. In one embodiment, when a row of a patch has been corrected the next adjacent patch is selected and corrected on the corresponding row. For example, the current patch may be set from patch 602b to patch 602c. Block 834 is then performed.
At block 830, the current patch is set to the patch of the next row of patches. In one embodiment, when the last row of a patch and end of a row of the image has been reached the next patch of the next row of patches is selected and correction starts from the first row of the patch. For example, the current patch may be set from patch 602c to 602d.
In one embodiment, block 832 is then performed and the vertical counter (e.g., v) is incremented corresponding to the next row of the image to be processed. Block 834 is then performed and the horizontal counter (e.g., u) is then reset (e.g., to zero).
In one exemplary embodiment, portions of process 800 are performed by the pseudo code of Table 1.
In one exemplary embodiment, after the intermediate control points are calculated (e.g., via linear interpolation), the u coordinate (e.g., horizontal coordinate of a patch) is used to compute the gain value using interpolation. Table 2 shows exemplary equations for gain and corrected pixel values.
Gain=F(u, Δu, iC1−iCn), where u is the horizontal position in the row of the patch, Δu is the spacing between pixels (e.g., 1/(the number of pixels in a row of a patch)), and iC1−iCn are the intermediate control points corresponding to the row of pixels of the patch.
Corrected Pixel value=Gain*Pixel value, where, in one embodiment, the pixel value comprises an intensity value.
Table 2—Exemplary Gain and Correction Equations
In one exemplary embodiment, on each new pixel, u is updated by Δu (e.g., ui+1=ui+Δu) and the correction is applied. This may be repeated until a new patch is entered or a row of the image is completed and the location being processed drops down to the next row. When a new patch is entered, the control points of the patch are accessed and new intermediate control points are computed, u is set to zero, and the new Δu and new Δv are accessed for the new patch.
In one exemplary embodiment, when the next row is entered, the v value is incremented (e.g., vj+1=vj+Δv) and u is set to zero. New intermediate control points are calculated based on the new v value. If a new patch was entered by moving down a row, the new Δu and Δv are obtained. Within each patch, Δu and Δv are fixed, depending on the number of pixels in each direction covered by the patch.
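The scan-line stepping described above may be sketched as follows, where gain_fn(s, u) is a hypothetical stand-in for evaluating the Bezier curve defined by the intermediate control points of patch s:

```python
def correct_row(pixels, patch_width, gain_fn):
    """Sketch of scan-line correction: u advances by du within a patch
    and resets to zero when the stream enters the next patch.
    gain_fn(s, u) stands in for the Bezier gain evaluation."""
    du = 1.0 / patch_width
    out = []
    for i, p in enumerate(pixels):
        s, xp = divmod(i, patch_width)  # patch index, intra-patch x
        u = xp * du                     # u resets on each patch change
        out.append(p * gain_fn(s, u))
    return out

# Example: a flat gain of 2.0 across two patches of width 2.
print(correct_row([1.0, 2.0, 3.0, 4.0], 2, lambda s, u: 2.0))
```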
In one exemplary embodiment, the hardware receives image pixels from the sensor or previous processing stages in a stream, one pixel at a time. Each pixel is given a unique address, x and y. The x and y addresses are reset to zero at the start of each frame. The x address is incremented by one as each new pixel in the stream arrives. When the end of each scan line is reached (e.g., end of a row of an image or sensor) the y address is incremented by one and the x address is reset to zero. The end of the scan line is specified by the width of the sensor. Similarly, the y address is reset to zero after the last scan line. The last scan line is specified by the height of the sensor.
In one exemplary embodiment, each patch may be defined by its dimensions: patch_width and patch_height and its internal spacing Δu and Δv, where Δu=1/patch_width and Δv=1/patch_height. In one exemplary embodiment, the hardware may use the x, y pixel address to determine the patch address, s and t. The patch address is set to s=0 and t=0 when the pixel address is x=0 and y=0. The hardware maintains intra-pixel addresses xp and yp. Address xp is set to zero whenever the patch address changes (e.g., whenever the pixel stream enters a new patch) otherwise xp is incremented whenever a new pixel from the pixel stream arrives. The yp address is incremented whenever the end of the scan line is reached, similar to the y address. The yp address is reset to zero whenever the patch address t changes.
In one exemplary embodiment, when the xp address reaches patch_width, the s patch address is incremented. When the x pixel address reaches the end of the scan line (e.g., the condition of x being equal to the image width), s is reset to zero. When the yp address reaches patch_height, the t patch address is incremented. When the y pixel address reaches the end of the sensor image (e.g., the condition of y being equal to the image height), t is reset to zero.
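The address bookkeeping described above may be sketched as follows. Hardware would maintain incrementing and resetting counters; the sketch below (the function name is hypothetical) expresses the same mapping with divmod for clarity:

```python
def stream_addresses(width, height, patch_w, patch_h):
    """Yield (x, y, s, t, xp, yp) for each pixel of a frame in stream
    order: x/y are frame pixel addresses, s/t index the patch, and
    xp/yp are the intra-patch addresses described above."""
    for y in range(height):
        t, yp = divmod(y, patch_h)   # patch row and intra-patch row
        for x in range(width):
            s, xp = divmod(x, patch_w)  # patch column, intra-patch col
            yield x, y, s, t, xp, yp
```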
In one exemplary embodiment, each Bezier patch, indexed by s and t, is a two dimensional function ps,t(u,v). The independent variables u and v may take on values between zero and one. Mathematically, u and v reside in the domain [0,1]. Variables u and v are incremented by Δu and Δv respectively at the same time as xp and yp.
In one exemplary embodiment, the hardware uses each value of s, t, u, and v to evaluate the patch ps,t(u,v) as a separable function. The evaluation is separable in the sense that it is separated into a vertical computation followed by a similar computation in the horizontal direction. The vertical computation defines four horizontal intermediate control points, iC1, iC2, iC3, and iC4, for each new v. The four intermediate control points define a one dimensional horizontal Bezier curve, B(u). Each intermediate control point is based on an evaluation of a single Bezier curve that is a function of the vertical variable, v. Since there are four intermediate control points, four Bezier curves are evaluated (e.g., B0(v), B1(v), B2(v), and B3(v), where B0(v) is based on control points C1, C2, C3, and C4, B1(v) is based on control points C5, C6, C7, and C8, B2(v) is based on control points C9, C10, C11, and C12, and B3(v) is based on control points C13, C14, C15, and C16). Each of the vertical Bezier curves has four control points. The control points are taken from the four vertical columns of four control points each for the patch indexed by s and t.
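The separability property can be checked with a short numerical sketch: evaluating the tensor-product Bezier surface directly (a double sum over Bernstein basis functions) agrees with the two-pass evaluation that first computes the four intermediate control points from the columns and then evaluates the horizontal curve through them. The control point values below are illustrative only:

```python
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def surface_direct(C, u, v):
    # Direct tensor-product evaluation: sum_ij C[i][j] B_i(v) B_j(u).
    return sum(C[i][j] * bernstein(i, 3, v) * bernstein(j, 3, u)
               for i in range(4) for j in range(4))

def surface_separable(C, u, v):
    # Vertical pass: one intermediate point per column (iC1..iC4),
    # then a horizontal cubic curve through the intermediate points.
    iC = [sum(C[i][j] * bernstein(i, 3, v) for i in range(4))
          for j in range(4)]
    return sum(iC[j] * bernstein(j, 3, u) for j in range(4))

# Illustrative 4x4 gain control point grid for one patch.
C = [[0.9, 1.0, 1.0, 0.9],
     [1.0, 1.1, 1.1, 1.0],
     [1.0, 1.1, 1.1, 1.0],
     [0.9, 1.0, 1.0, 0.9]]
assert abs(surface_direct(C, 0.25, 0.75) - surface_separable(C, 0.25, 0.75)) < 1e-12
```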
At block 902, a plurality of control points for a patch of a spline surface is accessed within an electronic system.
At block 904, a plurality of intermediate control points are calculated corresponding to a row of pixels for each color channel of the image. As described herein, the intermediate control points define a Bezier curve corresponding to the row of pixels.
At block 906, a plurality of pixels is received. In one embodiment, the plurality of pixels comprises a row of pixels corresponding to a patch of the spline surface and the plurality of pixels further comprises a plurality of color channels of an image.
At block 908, the plurality of pixels is adjusted based on the plurality of intermediate control points. The adjusting compensates for a variety of irregularities including lens shading. As described herein, the adjusting of the plurality of pixels may be performed (e.g., via linear interpolation) sequentially, in parallel, or a combination thereof.
At block 910, it is determined whether the adjusting of the image is done (e.g., whether there are patches of the image left to process). If the last patch has been corrected, block 914 is performed and the corrected image is output. If there are more patches of the image to be corrected, block 912 is performed.
At block 912, the next patch is retrieved. Block 902 may then be performed as the retrieved patch is processed. In one embodiment, the next patch may be prefetched so that the patch is ready for processing.
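The overall flow of blocks 902-914 may be sketched as a correction loop in which each pixel is scaled by the patch surface value at its (u, v) position. This is a simplified single-channel model under assumed names (correct_image, patch_grids); the original flow processes multiple color channels and may prefetch patches:

```python
def bezier3(c, t):
    """Cubic Bezier curve with control points c[0..3], Bernstein basis."""
    s = 1.0 - t
    return s**3*c[0] + 3*s**2*t*c[1] + 3*s*t**2*c[2] + t**3*c[3]

def correct_image(image, patch_grids, patch_w, patch_h):
    """image: 2D list of pixel values. patch_grids[t][s]: the 4x4 control
    grid of correction gains for patch (s, t) (block 902). Each pixel is
    multiplied by the patch surface value at its (u, v) position."""
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(height):
        t, yp = divmod(y, patch_h)
        v = yp / patch_h
        for x in range(width):
            s_idx, xp = divmod(x, patch_w)
            u = xp / patch_w
            C = patch_grids[t][s_idx]
            # vertical pass: four intermediate control points (block 904)
            iC = [bezier3([C[r][col] for r in range(4)], v)
                  for col in range(4)]
            # horizontal pass: adjust the pixel by the gain B(u) (block 908)
            out[y][x] = image[y][x] * bezier3(iC, u)
    return out
```

With a constant control grid of 2.0 for every patch, a flat field of 1.0 is uniformly scaled to 2.0, since a Bezier surface with equal control points is constant.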
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
This application claims the benefit of and priority to the copending provisional patent application Ser. No. 61/170,014, entitled “SYSTEM AND METHOD FOR IMAGE CORRECTION,” with filing date Apr. 16, 2009, and hereby incorporated by reference in its entirety. This application is related to copending non-provisional patent application Ser. No. 12/752,878, entitled “SYSTEM AND METHOD FOR IMAGE CORRECTION,” with filing date Apr. 1, 2010, and hereby incorporated by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
20100266201 A1 | Oct 2010 | US |