The embodiments described herein relate generally to the field of digital image processing, and more specifically to methods and apparatuses providing for dewarping and/or perspective correction of a captured digital image.
Microelectronic imaging devices are used in digital cameras, wireless devices with picture-taking capabilities, and many other applications. Cellular telephones, personal digital assistants (PDAs), computers, cameras mounted on automobiles, and stand-alone cameras, for example, incorporate microelectronic imaging devices for capturing and sending pictures. The use of microelectronic imaging devices has grown steadily as the devices become smaller and produce better images with higher pixel counts.
Microelectronic imaging devices include image sensors that use charge-coupled device (CCD) systems, complementary metal-oxide-semiconductor (CMOS) systems, or other imager technology. CCD image sensors have been widely used in digital cameras and other applications. CMOS image sensors are also popular because they have low production costs, high yields, and small sizes.
A camera system uses at least one lens to focus one or more images of a scene to be captured by an imaging device. The imaging device includes a pixel array that comprises a plurality of photosensitive pixels arranged in a predetermined number of columns and rows. Each pixel in the array typically has an individual assigned address. A lens focuses light on the pixels of the pixel array, which then generate signals representing incident light. These signals are then processed to produce an output image.
It is sometimes desirable to alter an image captured by an imaging device. For example, a captured image may be distorted due to the lens used to capture the image. Straight lines in a scene may appear to be curved in the captured image due to the design of the lens used to focus the scene onto the pixel array. This distortion is commonly called “warping,” and may be particularly noticeable where certain types of wide angle lenses are used.
Warping in a captured image 110 can be corrected through non-linear image processing (known as “dewarping”).
It may also be desirable to alter the apparent viewpoint of the camera. For example, image processing (known as "perspective correction") can be used to make it appear as though the camera captured the image from a position farther away from the viewer and at a more downward angle, reducing the perspective effect that causes parallel lines to appear to converge in the distance. For example, the side edges of the rectangular object 118 appear to converge away from the camera in the dewarped image 112 of
Dewarping and perspective correction may require significant processing of the input image 110, and that processing may require a large amount of hardware to implement. Accordingly, there is a need and desire for a spatially and temporally efficient method for providing dewarping and/or perspective correction of an image.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments that may be practiced. It should be understood that like reference numbers represent like elements throughout the drawings. These example embodiments are described in sufficient detail to enable those skilled in the art to practice them. It is to be understood that other embodiments may be utilized, and that structural and electrical changes may be made, only some of which are discussed in detail below.
Embodiments described herein perform the dewarping and/or perspective correction shown in
In image processor 200, an output pixel value fo(xo, yo) for an output pixel address (xo, yo) in the output image is determined from a stored input pixel value fi(xint, yint) from the input image. For each output pixel address (xo, yo) in the output image, address mapping unit 222 calculates horizontal and vertical input indexes xi, yi indicating a corresponding input pixel address (xint, yint) associated with an input pixel value fi(xint, yint) stored in buffer 224 to be placed at the output pixel address (xo, yo). When calculating the horizontal and vertical input indexes xi, yi, address mapping unit 222 uses received parameters, discussed below, to determine two adjustments: the amount of radial gain (i.e., horizontal and vertical scaling) applied to the output pixel address (xo, yo) to account for warping in the input image, and the amount of horizontal gain (i.e., horizontal scaling) and vertical offset (i.e., vertical shifting) applied to the output pixel address (xo, yo) to account for the difference between the desired perspective in the output image and the original perspective in the captured image.
In image processor 200, address mapping unit 222 calculates and outputs horizontal and vertical indexes xi, yi as integer values xint, yint. The indexes xint, yint are transmitted to the buffer 224, which outputs the corresponding input pixel value fi(xint, yint) as output pixel value fo(xo, yo) to a storage device, e.g., a random access memory 1094 (
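The pull-style mapping described above — for each output address, fetch the input pixel value at the mapped integer address — can be sketched in software as follows. This Python sketch is illustrative rather than the described hardware: `map_address` is a hypothetical stand-in for the address mapping computation, the input and output images are assumed to share the same dimensions, and boundary clamping is an added assumption for addresses that map outside the input grid.

```python
def remap_nearest(f_i, map_address, width, height):
    """Build an output image by pulling, for each output address
    (xo, yo), the input pixel value at the mapped integer address
    (xint, yint), as the address mapping unit / buffer pair does."""
    f_o = [[0] * width for _ in range(height)]
    for yo in range(height):
        for xo in range(width):
            xint, yint = map_address(xo, yo)  # integer input indexes
            # clamp to the input grid so edge addresses stay valid
            xint = max(0, min(width - 1, xint))
            yint = max(0, min(height - 1, yint))
            f_o[yo][xo] = f_i[yint][xint]
    return f_o

# identity mapping leaves the image unchanged
img = [[1, 2], [3, 4]]
out = remap_nearest(img, lambda xo, yo: (xo, yo), 2, 2)
```

Any dewarping or perspective-correcting map can be substituted for the identity lambda; the loop structure stays the same.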
It should be understood that pixel values of an image are organized as a grid of (M×N) coordinates. Each input pixel value fi(x, y) and output pixel value fo(x, y) is assigned a corresponding pixel address of Cartesian coordinates (xM, yN) measured from the top left corner of the grid. It should be understood that, for addressing purposes, the horizontal and vertical indexes x, y of the input pixel address are integer values. Thus, in image processor 200, the horizontal and vertical input indexes xi, yi calculated by the address mapping unit 222 (as described below) are integer values xint, yint. As described in later embodiments (e.g., image processor 300 of
It should also be understood that the input and output images may have differing numbers of pixels (i.e., different values of M and N), and thus different center addresses. The potential difference between center pixel addresses of the input image 110 and the output image 114 can be accounted for by performing image processing on the offset of the respective horizontal and vertical indexes (i.e., calculating how far the corresponding horizontal and vertical input indexes xi, yi are from the center of the captured image 110), as described further below.
The corresponding input pixel address (xint, yint) represents a portion of the captured image which, after dewarping and perspective correction, properly belongs at the output pixel address (xo, yo). For example, referring back to
Image processor 300 further includes an interpolation filter 326. The interpolation filter 326 and buffer 324 act as a storage circuit for the plurality of lines of pixel values fi(x, y), and output pixel values according to the address mapping unit 322. As with the address mapping unit 222 of image processor 200 (
In the illustrated embodiment of image processor 300, the horizontal and vertical indexes xi, yi are calculated by address mapping unit 322 as scaled floating point numbers with integer components xint, yint and fractional components xfraction, yfraction. The integer components xint, yint are transmitted from address mapping unit 322 to buffer 324, which outputs the corresponding input pixel value fi(xint, yint). In addition to outputting the corresponding input pixel value fi(xint, yint), buffer 324 also outputs to interpolation filter 326 pixel values corresponding to a plurality of neighboring input pixel addresses.
The fractional components xfraction, yfraction are output from address mapping unit 322 to interpolation filter 326. Interpolation filter 326 calculates an output pixel value fo(xo, yo) corresponding to the output pixel address (xo, yo) as a function of the determined input pixel value fi(xint, yint) and the neighboring pixel values, according to the fractional components xfraction, yfraction. For instance, interpolation filter 326 may interpolate the pixel values output by buffer 324, giving weight to each pixel value according to the fractional components xfraction, yfraction. The interpolation filter 326 outputs the calculated output pixel value fo(xo, yo) to a storage device, e.g., random access memory 1094 (
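One common weighting the interpolation filter may apply is bilinear interpolation of the four neighboring input pixel values, with the weights taken from the fractional components. The following Python sketch is illustrative only and assumes bilinear weighting, which is one choice among the interpolation schemes the embodiment permits.

```python
import math

def bilinear(f_i, x, y):
    """Interpolate an input pixel value at fractional address (x, y):
    the integer parts select fi(xint, yint), and the fractional parts
    xfraction, yfraction weight the four neighboring pixel values."""
    xint, yint = int(math.floor(x)), int(math.floor(y))
    xf, yf = x - xint, y - yint  # fractional components
    # clamp the far neighbors so edge addresses stay on the grid
    x1 = min(xint + 1, len(f_i[0]) - 1)
    y1 = min(yint + 1, len(f_i) - 1)
    return ((1 - xf) * (1 - yf) * f_i[yint][xint]
            + xf * (1 - yf) * f_i[yint][x1]
            + (1 - xf) * yf * f_i[y1][xint]
            + xf * yf * f_i[y1][x1])

# halfway between four pixels, the result is their average
value = bilinear([[0, 2], [4, 6]], 0.5, 0.5)  # → 3.0
```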
In image processor 300, the integer components xint, yint and fractional components xfraction, yfraction of the horizontal and vertical indexes xi, yi calculated by address mapping unit 322 may be separated at the output of address mapping unit 322, according to any known methods (e.g., integer and component filters). Alternatively, the entire scaled floating point values of horizontal and vertical indexes xi, yi may be output by address mapping unit 322 to both buffer 324 and interpolation filter 326, and the inputs of buffer 324 and interpolation filter 326 may each filter the received horizontal and vertical indexes xi, yi to receive the appropriate components.
In another embodiment of image processor 300, address mapping unit 322 may instead calculate the horizontal and vertical indexes xi, yi as integer values (xint, yint), and calculate the output pixel value fo(xo, yo) according to pre-determined processes. In this alternative embodiment, address mapping unit 322 communicates the horizontal and vertical indexes xi, yi to buffer 324. Buffer 324 outputs the corresponding input pixel value fi(xint, yint) and the input pixel values corresponding to a plurality of neighboring input pixel addresses to interpolation filter 326. Because the horizontal and vertical indexes xi, yi are calculated as integer values xint, yint, rather than receiving and using fractional components xfraction, yfraction, interpolation filter 326 calculates an output pixel value fo(xo, yo) according to pre-determined processes, such as averaging of the pixel values output by buffer 324, and outputs fo(xo, yo) to a storage device, e.g., random access memory 1094 (
Address mapping unit 400 calculates corresponding input address indexes xi, yi as described below. In the described embodiment, the input address indexes xi, yi are calculated as scaled floating point values having integer components xint, yint and fractional components xfraction, yfraction, such as for use in image processor 300 (
Address mapping unit 400 receives the horizontal and vertical output indexes xo, yo from the output address generator 320 (
The sets of polynomial function coefficients Px, Py, Pr may be input from an external program or a user, or may be pre-programmed in address mapping unit 400 as based on known dewarping and perspective correction values for a particular imaging system, or for a desired result. For example, it may be desirable to input horizontal and vertical index polynomial coefficients Px, Py, calculated through known processes, based upon the desired perspective. It may also be desirable to store radial gain polynomial coefficients Pr in the address mapping unit 400 if address mapping unit 400 is configured for receiving an input image from a camera system (e.g., system 1000 of
Address mapping unit 400 also receives an output horizontal perspective center index xcpo which represents the horizontal center of the output image, and an optical horizontal perspective center index xcpi which represents the location of the horizontal center for the desired perspective of the input image. These values also may be input from an external program or a user, or may be pre-programmed in address mapping unit 400. Address mapping unit 400 also receives horizontal and vertical optical center indexes xcenter, ycenter, respectively, which represent the center pixel address of the input image.
It should be understood that embodiments of address mapping units described herein, such as address mapping units 400 (
In the illustrated embodiment shown in
The perspective-corrected vertical index yp may be calculated by address mapping unit 400 as described below. The vertical output index yo is input into a vertical index polynomial function generator 430. The vertical index polynomial function generator 430 also receives the set of vertical index polynomial coefficients Py. The result of the vertical index polynomial function generator 430, perspective-corrected vertical index yp, accounts for vertical shifting to be implemented during perspective correction.
The perspective-corrected horizontal index xp may be calculated by address mapping unit 400 as described below. The vertical output index yo is input into a horizontal index polynomial function generator 432. The horizontal index polynomial function generator 432 also receives the set of horizontal index polynomial coefficients Px. The horizontal index polynomial function generator 432 calculates and outputs a perspective-corrected horizontal gain gx. The perspective-corrected horizontal gain gx represents the amount of gain (i.e., horizontal scaling) to be applied to the horizontal offset xoffset according to the vertical output index yo, in order to implement the desired perspective correction.
The horizontal output index xo is input into a first subtraction circuit 431 along with the output horizontal perspective center index xcpo, the result of which is the horizontal offset xoffset. The horizontal offset xoffset is entered into a first product circuit 433 along with the perspective-corrected horizontal gain gx. The output product xoffset×gx is entered into a first summing circuit 435 along with the input horizontal perspective center index xcpi, thus adjusting the corrected horizontal offset xoffset×gx to be relative to pixel addresses in the input image, and centered at the desired new perspective center. The output of the first summing circuit 435, xcpi+(xoffset×gx), gives the value of the perspective-corrected horizontal index xp.
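The datapath of circuits 431, 433, and 435 amounts to the arithmetic in the following Python fragment, which is an illustrative sketch of that arithmetic rather than the circuit itself.

```python
def perspective_corrected_x(xo, g_x, x_cpo, x_cpi):
    """Follow the datapath of circuits 431, 433, and 435: subtract the
    output perspective center, scale the offset by the
    perspective-corrected horizontal gain, then re-center the result
    on the input perspective center."""
    x_offset = xo - x_cpo            # first subtraction circuit 431
    return x_cpi + x_offset * g_x    # product circuit 433, summing circuit 435

# with unit gain, the index is simply shifted between the two centers
assert perspective_corrected_x(100, 1.0, 50, 60) == 110
```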
In the illustrated embodiment, the vertical output index yo remains constant for a line (x0→M, yN) of output pixel values. Therefore, the perspective-corrected vertical index yp and the perspective-corrected horizontal gain gx also remain constant for a line (x0→M, yN) of output pixel values.
In the illustrated embodiment, the address mapping unit 400 also accounts for dewarping of the pixel image when calculating horizontal and vertical input indexes corresponding to the horizontal and vertical output indexes xo, yo of the desired output pixel address (xo, yo). Horizontal and vertical input indexes xi, yi can be determined from the perspective-corrected horizontal and vertical indexes xp, yp as described below.
The horizontal and vertical optical center indexes xcenter, ycenter, which represent the center pixel address (xcenter, ycenter) of the input pixel image fi(x, y), respectively, are entered into second and third subtraction circuits 437, 438 along with the perspective-corrected horizontal and vertical indexes xp, yp, respectively. The results of the second and third subtraction circuits 437, 438 are the horizontal and vertical optical offsets xopt, yopt. The horizontal and vertical optical offsets xopt, yopt are each squared in second and third product circuits 439, 440, respectively, and both squared optical offsets xopt2, yopt2 are entered into a second summing circuit 441. The result of second summing circuit 441 is squared radius r2, which represents the radial distance of the perspective-corrected pixel address (xp, yp) from the center pixel address (xcenter, ycenter) of the input image fi(x, y).
The squared radius r2 is input into a radial gain polynomial function generator 434. Radial gain polynomial function generator 434 also receives the set of radial gain polynomial coefficients Pr. Radial gain polynomial coefficients Pr may be programmed based on the degree of warping of the input image fi(x, y). The radial gain polynomial function generator 434 outputs the radial gain gr.
The horizontal and vertical optical offsets xopt, yopt are each multiplied by the radial gain gr at fourth and fifth product circuits 442, 443. At third and fourth summing circuits 444, 445, the horizontal and vertical optical center indexes xcenter, ycenter are added back to the respective dewarped optical offsets xopt×gr, yopt×gr, thus centering them relative to the center pixel address (xcenter, ycenter) of the input image fi(x, y). The resulting sums (xopt×gr)+xcenter, (yopt×gr)+ycenter represent the respective horizontal and vertical input indexes xi, yi indicating the address of the input pixel value fi(xint, yint) that will be used to determine the output pixel value fo(xo, yo) at the desired output pixel address (xo, yo).
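The dewarping datapath of circuits 437 through 445 can be summarized with the following illustrative Python sketch; `radial_gain` is a hypothetical stand-in for the radial gain polynomial function generator 434.

```python
def dewarp_indexes(xp, yp, x_center, y_center, radial_gain):
    """Follow circuits 437-445: form optical offsets from the image
    center, evaluate the radial gain from the squared radius, scale
    the offsets, and re-center to obtain the input indexes (xi, yi)."""
    x_opt = xp - x_center                  # subtraction circuits 437, 438
    y_opt = yp - y_center
    r2 = x_opt * x_opt + y_opt * y_opt     # product circuits 439, 440; summing 441
    g_r = radial_gain(r2)                  # polynomial function generator 434
    return x_opt * g_r + x_center, y_opt * g_r + y_center

# with unit radial gain (no warping), the indexes are unchanged
assert dewarp_indexes(12.0, 8.0, 10.0, 10.0, lambda r2: 1.0) == (12.0, 8.0)
```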
As with address mapping unit 400 (
Address mapping unit 700 includes a polynomial function generator 750. In the illustrated embodiment, the polynomial function generator 750 calculates all polynomial functions for the perspective-corrected horizontal gain gx, the perspective-corrected vertical index yp, and the radial gain gr. Address mapping unit 700 also includes a parameter multiplexer (PMUX) 756, a variable multiplexer (VMUX) 758, and a coordinate multiplexer (RMUX) 760. The multiplexers 756, 758, 760 are controlled by a control unit 754. Polynomial function generator 750 receives input from the parameter multiplexer 756 and variable multiplexer 758.
Address mapping unit 700 receives an output horizontal perspective center index xcpo, an optical horizontal perspective center index xcpi, and horizontal and vertical optical center indexes xcenter, ycenter, respectively. As in address mapping units 400 (
Address mapping unit 700 also includes a horizontal gain register 766 for storing the perspective-corrected horizontal gain gx, a vertical optical offset register 762 for storing the vertical optical offset yopt, a squared vertical optical offset register 764 for storing the squared value of the vertical optical offset yopt2, and an odd polynomial register 768 for storing odd terms yp odd, gx odd of the perspective-corrected vertical index yP and the perspective-corrected horizontal gain gx, both of which are output by the polynomial function generator 750. Outputs of the registers 766, 762, 764, 768 are also controlled by the control unit 754, as described below.
Parameter multiplexer 756 receives five inputs, each of which is one of five sets of polynomial coefficients: the odd Py odd and even Py even terms for the vertical index polynomial coefficients Py; the odd Px odd and even Px even terms for the horizontal index polynomial coefficients Px; and the radial gain polynomial coefficients Pr. The polynomial coefficients Py odd, Py even, Px odd, Px even, Pr may be input from an external program or a user, or may be pre-programmed in address mapping unit 700. The odd Py odd, Px odd and even Py even, Px even terms for the horizontal and vertical index polynomial coefficients Px, Py may be received separately by address mapping unit 700, or separated at the input to the parameter multiplexer 756.
Variable multiplexer 758 receives five inputs, the first four of which are the squared value of the vertical output index (yo)2, as output by a first product circuit 759. The final input to variable multiplexer 758 is a squared radius r2 determined for each output pixel address (xo, yo).
Coordinate multiplexer 760 receives two inputs, the first being the vertical optical offset yopt stored in the vertical optical offset register 762. The other input to coordinate multiplexer 760 is the horizontal optical offset xopt. Thus, only the final inputs to both variable multiplexer 758 (i.e., squared radius r2) and coordinate multiplexer 760 (i.e., horizontal optical offset xopt) change within a given row of output pixel values (x0→M, yN).
Address mapping unit 700 also includes several switches 751, 753, 755, 757 further described herein. The first switch 751 separates terms of polynomials yP and gx from the radial gain polynomial gr when these polynomials are output by the polynomial function generator 750. The second switch 753 separates even terms yp even, gx even from odd yp odd, gx odd terms of the perspective-corrected vertical index yP and the perspective-corrected horizontal gain gx. The third switch 755 separates the perspective-corrected vertical index yP from the perspective-corrected horizontal gain gx. The fourth switch 757 separates the squared value of the vertical optical offset yopt2 from the squared value of the horizontal optical offset xopt2. The switches 751, 753, 755, 757 switch between one of two positions, creating alternate connections depending upon the current cycle of the address mapping unit 700, as further described below. The switches 751, 753, 755, 757 may be controlled by control unit 754, or alternatively may be self-controlled according to alternating input values.
The address mapping unit 700 calculates corresponding input address indexes xi, yi as described below. In the described embodiment, the input address indexes xi, yi are calculated as scaled floating point values, such as for use with interpolation filter 326 of image processor 300 (
The operation of address mapping unit 700 is described herein. For each row of output pixel addresses, address mapping unit 700 cycles through five different states. The first two states determine the odd and even terms of the perspective-corrected vertical index yp, from which the vertical optical offset yopt is derived. The second two states determine the odd and even terms of the perspective-corrected horizontal gain gx. In the final state, the horizontal and vertical input indexes xi, yi corresponding to each output pixel address (xo, yo) in the row of output pixel addresses (x0→M, yN) are determined and output by address mapping unit 700.
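The five-state schedule can be summarized in software as follows. This Python sketch is illustrative only: it collapses the multiplexers, switches, and registers into ordinary variables, assumes each coefficient set is given as a list in ascending powers, and omits the reciprocal function block 752 for simplicity.

```python
def map_row(yo, xs, Py_odd, Py_even, Px_odd, Px_even, Pr,
            x_cpo, x_cpi, x_center, y_center):
    """Per-row schedule of the shared polynomial generator: states one
    and two produce the vertical optical offset, states three and four
    the horizontal gain, and the final state maps every output x in
    the row to an input address (xi, yi)."""
    def poly(coeffs, a):
        # coefficients in ascending powers: P0 + P1*a + P2*a^2 + ...
        return sum(c * a ** k for k, c in enumerate(coeffs))

    yo2 = yo * yo
    # states 1-2: yp = even(yo^2) + yo * odd(yo^2), then offset from center
    yp = poly(Py_even, yo2) + yo * poly(Py_odd, yo2)
    y_opt = yp - y_center
    # states 3-4: perspective-corrected horizontal gain, reused all row
    g_x = poly(Px_even, yo2) + yo * poly(Px_odd, yo2)
    # final state: per-pixel mapping for the whole row
    out = []
    for xo in xs:
        x_opt = (x_cpi - x_center) + (xo - x_cpo) * g_x
        r2 = x_opt * x_opt + y_opt * y_opt
        g_r = poly(Pr, r2)
        out.append((x_opt * g_r + x_center, y_opt * g_r + y_center))
    return out
```

Note that everything above the per-pixel loop runs once per row, which is the efficiency the five-state schedule provides.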
Beginning with each new row of output pixel addresses, address mapping unit 700 is set to the first state. The control module 754 is programmed to set parameter multiplexer 756 and variable multiplexer 758 to output their respective first inputs. First switch 751 is set to direct the output of the polynomial function generator 750 to the second switch 753, which in turn is set to output to odd polynomial register 768. Odd polynomial register 768 is set to receive a value for storing.
Address mapping unit 700 receives output pixel indexes xo, yo of the desired output pixel address (xo, yo) from output address generator 320 (
Address mapping unit 700 is next set to the second state. Control module 754 switches the second switch 753 to output to a first summing circuit 763, and the third switch to direct the output of the first summing circuit 763 to a first subtraction circuit 765. The odd polynomial register 768 is set to output the odd terms yp odd of the perspective-corrected vertical index yp to a second product circuit 761, where it is multiplied by the vertical output index yo.
Control module 754 also switches the parameter multiplexer 756 and the variable multiplexer 758 to output their respective second inputs (i.e., Py even and y02) to the polynomial function generator. The polynomial function generator 750 computes the even terms yp even of the perspective-corrected vertical index yp from the even coefficients Py even of the vertical index polynomial coefficients Py and the squared vertical output index y02. The even terms yp even of the perspective-corrected vertical index yp are passed through first and second switches 751, 753 and input to a first summing circuit 763 with the odd terms yp odd output by the second product circuit 761, thus producing the perspective-corrected vertical index yp. The perspective-corrected vertical index yp passes through the third switch 755, the vertical optical center index ycenter is subtracted at first subtraction circuit 765, and the resulting vertical optical offset yopt is stored in the vertical optical offset register 762.
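The recombination in states one and two relies on the identity that any polynomial splits into even and odd coefficient sets evaluated at the squared argument, p(y) = p_even(y2) + y·p_odd(y2), which is what lets one generator reuse the shared input y02. The following brief Python check of the identity uses arbitrary example coefficients.

```python
def poly(coeffs, a):
    """Evaluate P0 + P1*a + ... + PN*a^N (ascending coefficients)
    via Horner's scheme."""
    result = 0
    for c in reversed(coeffs):
        result = result * a + c
    return result

# p(y) = p_even(y^2) + y * p_odd(y^2)
P = [3, 1, 4, 1, 5]        # P0..P4, arbitrary example values
P_even = P[0::2]           # coefficients of even powers
P_odd = P[1::2]            # coefficients of odd powers
y = 2
direct = poly(P, y)
split = poly(P_even, y * y) + y * poly(P_odd, y * y)
assert direct == split
```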
Address mapping unit 700 is next set to the third state. First switch 751 remains set to direct the output of the polynomial function generator 750 towards second switch 753, while second switch 753 is reset to output to odd polynomial register 768. Odd polynomial register 768 is set to receive a value for storing.
Control module 754 switches the parameter multiplexer 756 and the variable multiplexer 758 to output their respective third inputs to the polynomial function generator 750. Variable multiplexer 758 outputs the squared vertical output index y02, while parameter multiplexer 756 outputs the odd terms Px odd of the horizontal index polynomial coefficients Px. The polynomial function generator 750 produces the odd terms gx odd of the perspective-corrected horizontal gain gx; the odd terms gx odd are temporarily stored in the odd polynomial register 768.
Address mapping unit 700 is next set to the fourth state. Control module 754 sets second switch 753 to output to first summing circuit 763, and sets third switch 755 to direct the output of the first summing circuit 763 to the horizontal gain register 766. The odd polynomial register 768 is set to output the odd terms gx odd of the perspective-corrected horizontal gain gx to second product circuit 761, where they are multiplied by the vertical output index yo.
Control module 754 also switches the parameter multiplexer 756 and the variable multiplexer 758 to output their respective fourth inputs to the polynomial function generator 750. The polynomial function generator 750 computes the even terms gx even of the perspective-corrected horizontal gain gx from the squared vertical output index y02 and the even terms Px even of the horizontal index polynomial coefficients Px. The even terms gx even of the perspective-corrected horizontal gain gx are passed through first and second switches 751, 753 and input to a first summing circuit 763 with the odd terms gx odd output by the second product circuit 761, thus producing the perspective-corrected horizontal gain gx. The perspective-corrected horizontal gain gx passes through third switch 755 and is stored in the horizontal gain register 766. This value of the perspective-corrected horizontal gain gx is maintained in the horizontal gain register 766 and used to calculate corresponding input pixel addresses (xi, yi) for the rest of the row of desired output addresses (x0→M, yN).
During either the third or fourth states of the polynomial function generator 750, control module 754 sets coordinate multiplexer 760 to output its first input, and sets fourth switch 757 to output to the squared vertical optical offset register 764. The vertical optical offset yopt is output by the vertical optical offset register 762, squared by a fourth product circuit 770, passed through the fourth switch 757, and stored in the squared vertical optical offset register 764. This squared vertical optical offset yopt2 is used to calculate corresponding input pixel addresses (xi, yi) for the rest of the row of desired output addresses (x0→M, yN).
Address mapping unit 700 now enters the final state. With the squared vertical optical offset yopt2 and perspective-corrected horizontal gain gx calculated through the first four states, corresponding horizontal and vertical input indexes can be determined for each desired output address in the row (x0→M, yN), as described below.
Control module 754 sets parameter multiplexer 756, variable multiplexer 758, and coordinate multiplexer 760 to output their respective final inputs. Parameter multiplexer 756 outputs the radial gain coefficients Pr to the polynomial function generator 750. First switch 751 is set to output to a reciprocal function block 752, and fourth switch 757 is set to output to a third summing circuit 771. Squared vertical optical offset register 764 is set to output the squared vertical optical offset yopt2 to the third summing circuit 771, and horizontal gain register 766 is set to output the horizontal gain gx to a third product circuit 768.
Each horizontal output index xo is input into a second subtraction circuit 767, where the output horizontal perspective center index xcpo is subtracted. In a third product circuit 768, the resulting horizontal offset xoffset is multiplied by the perspective-corrected horizontal gain gx that is output by the horizontal gain register 766. This product xoffset×gx is summed by second summing circuit 769 with the difference of the input horizontal perspective center index xcpi and the horizontal optical center index xcenter, and the result of the summation (xcpi−xcenter)+(xoffset×gx) is the horizontal optical offset xopt.
Radial multiplexer 760 receives and outputs the horizontal optical offset xopt, which is then squared by fourth product circuit 770 to generate a squared horizontal optical offset xopt2. The squared horizontal optical offset xopt2 is directed by fourth switch 757 into the third summing circuit 771 along with the squared vertical optical offset yopt2 output by the squared vertical optical offset register 764. The sum output by third summing circuit 771 is the squared radius r2. Variable multiplexer 758 receives and outputs the squared radius r2 to polynomial function generator 750. Parameter multiplexer 756 outputs the set of radial gain polynomial coefficients Pr to the polynomial function generator 750. Polynomial function generator 750 generates the radial gain gr, which then passes through first switch 751 and reciprocal function block 752.
Both the stored vertical optical offset yopt and the current horizontal optical offset xopt are multiplied by the radial gain gr at fifth and sixth product circuits 772, 773, respectively. These values are then offset by the horizontal and vertical optical center indexes xcenter, ycenter at fourth and fifth summing circuits 774, 775, thus centering the dewarped optical offsets xopt×gr, yopt×gr relative to the center pixel address (xcenter, ycenter) of the input image fi(x, y). The resulting sums (xopt×gr)+xcenter, (yopt×gr)+ycenter represent the respective horizontal and vertical input indexes xi, yi that will be used to determine the output pixel value fo(xo, yo) at the desired output pixel address (xo, yo).
Equation 1: g(a) = PNaN + PN−1aN−1 + . . . + P2a2 + P1a + P0.
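Each polynomial function generator evaluates Equation 1 for its coefficient set and input. In software, a Horner-scheme evaluation needs only one multiply-add per coefficient; the following Python sketch is illustrative and assumes the coefficients are supplied in ascending order P0 through PN.

```python
def g(a, P):
    """Evaluate Equation 1, g(a) = PN*a^N + ... + P1*a + P0, with the
    coefficient list P given as [P0, P1, ..., PN], using Horner's
    scheme: one multiply-add per coefficient."""
    acc = 0
    for coeff in reversed(P):
        acc = acc * a + coeff
    return acc

assert g(2, [1, 0, 3]) == 13   # 3*2^2 + 0*2 + 1
```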
The above-described embodiments provide spatially efficient methods and apparatuses for implementing dewarping and/or perspective correction when processing an input image. Although certain advantages and embodiments have been described above, those skilled in the art will recognize that substitutions, additions, deletions, modifications and/or other changes may be made. For example, the product, summing, and subtraction circuits in embodiments described herein may be implemented by a single arithmetic circuit or program, multiple circuits or programs, or through other known circuits or devices. The processing described may be implemented on a stand-alone image processor, as part of an imaging device, or as part of a system-on-chip device that contains image acquisition and processing circuitry. The polynomial function generators in the embodiments described herein may be of any appropriate order for image processing. Accordingly, embodiments of the image processor and address mapping units are not limited to those described above.
The image processing described in embodiments herein may be implemented using either hardware or software or via a combination of hardware and software. For example, in an integrated system-on-chip semiconductor CMOS imaging device 900, as illustrated in
To capture an image, the imager control circuit 983 triggers the pixel array 980, via the row and column decoders 982, 985 and row and column drivers 981, 984 to capture frames of an image. For each frame captured, each pixel cell generally outputs both a pixel reset signal vrst and a pixel image signal vsig, which are read by a sample and hold circuit 986 according to a sampling scheme, for example, a correlated double sampling (“CDS”) scheme. The pixel reset signal vrst represents a reset state of a pixel cell. The pixel image signal vsig represents the amount of charge generated by the photosensor in the pixel cell in response to applied light during an integration period. The pixel reset and image signals vrst, vsig are sampled, held and amplified by the sample and hold circuit 986.
The sample and hold circuit 986 outputs amplified pixel reset and image signals Vrst, Vsig. The difference between Vrst and Vsig represents the actual pixel cell output with common-mode noise eliminated. The differential signal (e.g., Vrst−Vsig) is produced by differential amplifier 987 for each readout pixel cell. The differential signals are digitized by an analog-to-digital (A/D) converter 988.
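The correlated double sampling step above can be sketched numerically: an offset common to both the reset sample and the image sample cancels in the difference. This is only an illustrative model (in the embodiment the subtraction is performed in analog by differential amplifier 987 before digitization), and all sample values below are hypothetical.

```python
# Numerical sketch of correlated double sampling (CDS).  An offset "n"
# common to both the reset sample Vrst and the image sample Vsig cancels
# in the difference Vrst - Vsig.  All values here are hypothetical.

def cds(vrst, vsig):
    """Return the noise-cancelled pixel output (Vrst - Vsig)."""
    return vrst - vsig

n = 0.07                         # common-mode offset present in both samples
vrst, vsig = 1.80 + n, 1.25 + n
print(cds(vrst, vsig))           # ~0.55, independent of n
```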
The analog-to-digital converter 988 supplies the digitized pixel signals fi(x,y) to image processor 300, which receives and stores the pixel signals from the ADC 988 and performs dewarping and perspective correction, as described above. In the illustrated embodiment, image processor 300 includes output address generator 320, address mapping unit 322, buffer 324, and interpolation filter 326. Image processor 300 outputs pixel signals f(xo, yo) for storage in a memory, such as the random access memory 1094 (
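The flow through image processor 300 — output address generation, address mapping, and interpolation — can be sketched as an inverse-mapping loop: each output address is mapped to a (generally fractional) input address, and the interpolation filter computes the output value from neighboring buffered input pixels. The sketch below is an illustrative Python model using bilinear interpolation, not the circuit itself; the function names, the clamped edge handling, and the identity mapping in the example are hypothetical.

```python
import math

def dewarp(src, width, height, map_addr):
    """For each output address (xo, yo), obtain a fractional input
    address from map_addr and bilinearly interpolate from src, a
    row-major list of width*height pixel values."""
    def pix(x, y):
        # clamp to image bounds (stands in for a line buffer's edge handling)
        x = min(max(x, 0), width - 1)
        y = min(max(y, 0), height - 1)
        return src[y * width + x]

    out = []
    for yo in range(height):                       # output address generator
        for xo in range(width):
            xi, yi = map_addr(xo, yo)              # address mapping unit
            x0, y0 = math.floor(xi), math.floor(yi)
            fx, fy = xi - x0, yi - y0
            # interpolation filter: weighted 2x2 neighborhood
            top = (1 - fx) * pix(x0, y0) + fx * pix(x0 + 1, y0)
            bot = (1 - fx) * pix(x0, y0 + 1) + fx * pix(x0 + 1, y0 + 1)
            out.append((1 - fy) * top + fy * bot)
    return out

# With an identity mapping, the output equals the input:
img = [float(v) for v in range(12)]                # 4x3 test image
assert dewarp(img, 4, 3, lambda x, y: (x, y)) == img
```

Substituting a warping map (such as a radial polynomial) for the identity mapping yields the dewarped output described above.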
Embodiments of the methods and apparatuses described herein may be used in any system that employs a still, moving image, or video imaging device, including, but not limited to, computer systems, camera systems, scanners, machine vision systems, vehicle navigation systems, video phones, surveillance systems, auto focus systems, star tracker systems, motion detection systems, image stabilization systems, and other imaging systems. Example digital camera systems in which the invention may be used include video digital cameras, still cameras with video options, cell-phone cameras, handheld personal digital assistant (PDA) cameras, and other types of cameras.
It should be noted that although the embodiments have been described with specific reference to CMOS imaging devices, they have broader applicability and may be used in any imaging apparatus which generates pixel output values, including charge-coupled devices (CCDs) and other imaging devices.
Number | Date | Country | Kind |
---|---|---|---|
0813124.5 | Jul 2008 | GB | national |
Number | Name | Date | Kind |
---|---|---|---|
4841292 | Zeno | Jun 1989 | A |
5870076 | Lee et al. | Feb 1999 | A |
5903319 | Busko et al. | May 1999 | A |
6069668 | Woodham et al. | May 2000 | A |
6236404 | Iimura et al. | May 2001 | B1 |
6346967 | Gullichsen | Feb 2002 | B1 |
6747702 | Harrigan | Jun 2004 | B1 |
6885392 | Mancuso et al. | Apr 2005 | B1 |
7002603 | Tapson | Feb 2006 | B2 |
7067808 | Kochi et al. | Jun 2006 | B2 |
7126616 | Jasa et al. | Oct 2006 | B2 |
7224392 | Cahill et al. | May 2007 | B2 |
7268803 | Murata et al. | Sep 2007 | B1 |
7333642 | Green | Feb 2008 | B2 |
7532760 | Kaplinsky et al. | May 2009 | B2 |
20020180727 | Guckenberger et al. | Dec 2002 | A1 |
20020191838 | Setterholm | Dec 2002 | A1 |
20030043303 | Karuta et al. | Mar 2003 | A1 |
20040156558 | Kim | Aug 2004 | A1 |
20050058360 | Berkey et al. | Mar 2005 | A1 |
20050083248 | Biocca et al. | Apr 2005 | A1 |
20050174437 | Iga | Aug 2005 | A1 |
20050180655 | Ohta et al. | Aug 2005 | A1 |
20060050074 | Bassi | Mar 2006 | A1 |
20060274972 | Peterson | Dec 2006 | A1 |
20070196004 | Green | Aug 2007 | A9 |
20070198586 | Hardy et al. | Aug 2007 | A1 |
20070206877 | Wu et al. | Sep 2007 | A1 |
20070227026 | Krachtus | Oct 2007 | A1 |
20070268530 | Gagliano et al. | Nov 2007 | A1 |
20070273692 | Woo et al. | Nov 2007 | A1 |
20070280554 | Chernichenko et al. | Dec 2007 | A1 |
20070285420 | Brown | Dec 2007 | A1 |
20080074415 | Woo et al. | Mar 2008 | A1 |
Number | Date | Country |
---|---|---|
2 498 484 | Aug 2006 | CA |
1996389 | Jul 2007 | CN |
1 276 074 | Jan 2003 | EP |
2000-331151 | Nov 2000 | JP |
2004-72553 | Mar 2004 | JP |
2005-18195 | Jan 2005 | JP |
2005-234776 | Sep 2005 | JP |
2006-141005 | Jun 2006 | JP |
2007-28273 | Feb 2007 | JP |
2007-249967 | Sep 2007 | JP |
Entry |
---|
Xu et al., “Method for calibrating cameras with large lens distortion,” Optical Engineering 45(4), Apr. 2006, pp. 1-8. |
Li et al., “Robust distortion correction of endoscope,” Proc. SPIE vol. 6918, Feb. 2008, pp. 1-8. |
McCall, J. et al., “Video Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation”, IEEE Transactions on Intelligent Transportation Systems, Dec. 2004, Revised Jul. 2005. |
Dasu, A. et al., “A Survey of Media Processing Approaches”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, Issue 8; pp. 633-645, Aug. 2002. |
Li, X., “Video Processing Via Implicit and Mixture Motion Models”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, Issue 8, pp. 953-963, Aug. 2007. |
Kang, S. B. et al., “High Dynamic Range Video”, Interactive Visual Media Group, Microsoft Research, Redmond, WA. |
Krutz, A. et al., “Improved Image Registration Using the Up-Sampled Domain”, Communications Systems Group, TU Berlin, Berlin, Germany, School of IT and EE, Australian Defence Force Academy, Canberra, Australia. |
Kunter, M. et al., “Optimal Multiple Sprite Generation Based on Physical Camera Parameter Estimation”, Commun. Systems Group, Technische Universität Berlin, Berlin, Germany; Dept. of Electrical & Computer Engineering, University of Alberta, Edmonton, Canada. |
Number | Date | Country | |
---|---|---|---|
20100014770 A1 | Jan 2010 | US |