The present invention relates generally to an image sensor and more specifically to an image sensor having pixel cross-talk compensation to reduce the effects of optically interfering elements integrated into the image sensor.
Portable digital cameras can be used to record digital images or digital video of a physical scene. A digital camera or image-recording device typically includes an image sensor formed from an array of sensor cells or sensor regions, otherwise referred to as pixels. Each pixel is configured to collect light from the physical scene and produce an electronic signal in response to the amount of light incident on the pixel. The signals from the array of pixels may be scanned and stored as pixel data. The pixel data can be used to create a digital image that represents a visual depiction of the physical scene.
Many digital cameras also include optical components that are configured to focus the light on the surface of the image sensor. The optical components may be mechanically adjustable in order to focus light from objects that may be a variable distance from the digital camera. In some cases, the image sensor may also include auto-focus elements that are integrated into the array of sensor cells or pixels and used to determine if the light is sufficiently focused on the image sensor. In some cases, auto-focus elements that are integrated into the sensor provide feedback to an auto-focus mechanism that adjusts one or more optical components of the digital camera.
One potential drawback to using auto-focus elements that are integrated within the array of pixels is that the elements may cause cross-talk between adjacent pixels. In particular, the auto-focus elements may interfere with the light received by the image sensor, causing an inaccurate light reading for some pixels that are adjacent or near the auto-focus elements. That is, the auto-focus elements may cause more or less light to be absorbed by some of the pixels in the array. Due to these variations in the amount of light received by neighboring pixels, the use of auto-focus elements may, in some cases, result in a digital image that is less accurate or that contains artifacts caused by the optical and electrical interference.
The system and techniques described herein can be used to reduce cross-talk between adjacent pixels in an image sensor array. More specifically, the system and techniques can be used to reduce the effects of interference caused by the use of auto-focus elements and, thereby, improve the accuracy and quality of a resulting digital image.
One example embodiment is directed to a computer-implemented method for producing a digital image. The method may compensate for cross-talk between adjacent pixels and reduce the effect of auto-focus elements integrated into a pixel array of an image sensor. A set of coefficients is obtained. The set of coefficients may represent a relative measurement between two or more pixels in the pixel array and may be obtained from a calibration operation. A predictive function is constructed based on the set of coefficients. A compensated pixel value for at least one pixel of the image sensor is calculated using the predictive function. A digital image is created and stored based in part on the compensated value. In one example, the predictive function is a polynomial function. In particular, the predictive function may be a fourth-order polynomial function. The predictive function may be constructed, for example, by performing a non-linear regression on at least part of the set of coefficients.
In one embodiment, the at least one pixel is located adjacent to an auto-focus pixel that is at least partially shielded from light by a shield element. In one example, the predictive function is an approximation of the change in the at least one pixel due in part to optical and electrical interference caused by the shield element.
In one embodiment, the set of coefficients represents a relative measurement between a first, affected pixel in the pixel array and a second pixel in the pixel array. In one example, the second pixel is an auto-focus pixel that is at least partially shielded from light by a shield element. In another example, the second pixel is an unaffected pixel that is not adjacent to an auto-focus pixel that is at least partially shielded from light by a shield element.
In one embodiment, the set of coefficients is divided into a matrix of grid elements. Each grid element may represent a specified number of pixels that are adjacent to each other in the pixel array. Two or more representative coefficients may be obtained that correspond to two or more grid elements of the matrix of grid elements. The predictive function may be constructed using the two or more representative coefficients. In one example, each grid element of the matrix of grid elements represents a 16 by 16 group of pixels that are adjacent to each other.
Another example embodiment is directed to a computer-implemented method for producing a digital image using an image sensor comprised of a pixel array based on a set of coefficients. In particular, a set of coefficients representing a relative measurement between two or more pixels in the pixel array is obtained. A compensated pixel value is computed for at least one pixel of the image sensor using at least one coefficient of the set of coefficients. A digital image is created based in part on the compensated value.
Another example embodiment is directed to a computer-implemented method for producing a digital image using an image sensor comprised of a pixel array based on a predictive function. In particular, pixel data is acquired for a pixel array of the image sensor. A predictive function is obtained based on a calibration operation. A compensated pixel value is computed for at least one pixel of the image sensor using the predictive function. A digital image is created based in part on the compensated value.
In one embodiment, the pixel array is divided into a matrix of grid elements. Each grid element represents a specified number of pixels that are adjacent to each other in the pixel array. A representative pixel value is obtained for at least one grid element using the predictive function. A compensated pixel value is computed for at least one pixel of the image sensor based on the representative pixel value. In one example, each grid element of the matrix of grid elements represents a 16 by 16 group of pixels that are adjacent to each other.
One example embodiment includes a computer-implemented method for calibrating an image sensor to measure the effect of elements integrated into a pixel array of an image sensor. In particular, the image sensor is illuminated using a light source. Pixel data is acquired for a pixel array of the image sensor. A set of coefficients is calculated, where at least one coefficient of the set is based on a comparison between a first, affected pixel located adjacent to an auto-focus pixel and a second pixel. A predictive function is constructed based on the set of coefficients. In one example, the second pixel is an auto-focus pixel that is at least partially shielded from light by a shield element. In another example, the second pixel is an unaffected pixel that is not adjacent to an auto-focus pixel that is at least partially shielded from light by a shield element. In yet another example, the comparison uses the average of multiple unaffected pixels that are near the affected pixel but not adjacent to the auto-focus pixel.
Another example embodiment is directed to an electronic device having a digital camera. The digital camera includes an image sensor having an array of pixels. The digital camera also includes a memory having computer-readable instructions stored thereon. The digital camera also includes a processor configured to produce a digital image by executing the computer-readable instructions. The instructions are for: obtaining a set of coefficients representing a relative measurement between two or more pixels in the pixel array; computing a compensated pixel value for at least one pixel of the image sensor using at least one coefficient of the set of coefficients; and creating the digital image based in part on the compensated value.
The embodiments described herein are directed to a digital camera used to create a digital image or digital video. In the examples provided below, the digital camera includes an image sensor formed from an array of sensor cells or sensor regions otherwise referred to as pixels. The image sensor may include a charge-coupled device (CCD) image sensor, a complementary metal-oxide-semiconductor (CMOS) image sensor, or other type of arrayed sensor device. Each pixel of the image sensor may be configured to produce an electrical signal in response to light incident to a corresponding portion of the surface of the image sensor. The electrical signals can be converted into a digital data format and used to create a digital image or video sequence.
In one example, the digital camera includes one or more elements integrated with the pixel array. The elements may include auto-focus shield elements that can be used to provide feedback to an auto-focus system configured to focus the light onto the image sensor. As previously mentioned, the presence of auto-focus elements may interfere with the light incident to at least some of the pixels in the array, which may reduce the accuracy of the pixels and potentially reduce the quality of a digital image produced using the sensor.
The system and techniques described herein can be used to reduce or eliminate auto-focus cross-talk artifacts. In particular, the amount of auto-focus pixel cross-talk may be quantified as an increase or decrease of light due to optically interfering elements in the pixel array. The amount of auto-focus pixel cross-talk may be estimated, and a compensation can be applied to the affected pixel based on the estimation. A different degree of compensation may be applied to different pixels in the array depending on the spatial relationship of the pixel with respect to the optically interfering element. By applying cross-talk compensation across the pixel array of the image sensor, the techniques described herein can be used to create a digital image with reduced or minimal effects due to auto-focus pixel cross-talk.
In the example depicted in
As shown in
The I/O member 108 can be implemented with any type of input or output member. By way of example only, the I/O member 108 can be a switch, a button, a capacitive sensor, or other input mechanism. The I/O member 108 allows a user to interact with the electronic device 100. For example, the I/O member 108 may be a button or switch to alter the volume, return to a home screen, and the like. The electronic device can include one or more input members or output members, and each member can have a single I/O function or multiple I/O functions.
The display 110 can be operably or communicatively connected to the electronic device 100. The display 110 may be used to display digital images, digital video sequences, or other visual media. The display 110 can be implemented with any type of suitable display, such as a retina display or an active matrix color liquid crystal display. The display 110 can provide a visual output for the electronic device 100 or function to receive user inputs to the electronic device. For example, the display 110 can be a multi-touch capacitive sensing touchscreen that can detect one or more user inputs.
The electronic device 100 may also include a number of internal components.
The one or more processors 200 are configured to execute computer-readable instructions and can control some or all of the operations of the electronic device 100. The processor(s) 200 can communicate, either directly or indirectly, with many of the components of the electronic device 100. For example, one or more system buses 210 or other communication mechanisms can provide communication between the processor(s) 200, the cameras 102, 104, the display 110, the I/O member 108, or the sensors 208. The processor(s) 200 can be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, the one or more processors 200 can be a microprocessor, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), or combinations of multiple such devices. As described herein, the term “processor” is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element or elements.
The memory 202 can store electronic data that can be used by the electronic device 100. For example, the memory 202 can store electronic data or content such as digital image files, digital video files, audio files, document files, timing signals, and other digital data. The memory 202 can be configured as any type of memory. By way of example only, the memory 202 can be implemented as random access memory, read-only memory, flash memory, removable memory, or other types of storage elements, in any combination.
The input/output interface 204 can receive data from a user or one or more other electronic devices. Additionally, the input/output interface 204 can facilitate transmission of data to a user or to other electronic devices. For example, in embodiments where the electronic device 100 is a smart telephone, the input/output interface 204 can receive data from a network or send and receive electronic signals via a wireless or wired connection. Examples of wireless and wired connections include, but are not limited to, cellular, WiFi, Bluetooth, and Ethernet. In one or more embodiments, the input/output interface 204 supports multiple network or communication mechanisms. For example, the input/output interface 204 can pair with another device over a Bluetooth network to transfer signals to the other device while simultaneously receiving signals from a WiFi or other wired or wireless connection.
The power source 206 can be implemented with any device capable of providing energy to the electronic device 100. For example, the power source 206 can be a battery or a connection cable that connects the electronic device 100 to another power source such as a wall outlet.
The sensors 208 can be implemented with any type of sensors. Examples of sensors include, but are not limited to, audio sensors (e.g., microphones), light sensors (e.g., ambient light sensors), gyroscopes, and accelerometers. The sensors 208 can be used to provide data to the processor 200, which may be used to enhance or vary functions of the electronic device.
As described with reference to
The cameras 102, 104 include an imaging stage 300 that is in optical communication with an image sensor 302. The imaging stage 300 is operably connected to the enclosure 106 and positioned in front of the image sensor 302. The imaging stage 300 can include optical elements such as a lens, a filter, an iris, and a shutter. The imaging stage 300 directs, focuses or transmits light 304 within its field of view onto the image sensor 302. The image sensor 302 captures one or more images of a subject scene by converting the incident light into electrical signals.
The image sensor 302 is supported by a support structure 306. The support structure 306 can be a semiconductor-based material including, but not limited to, silicon, silicon-on-insulator (SOI) technology, silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers formed on a semiconductor substrate, well regions or buried layers formed in a semiconductor substrate, and other semiconductor structures.
Various elements of imaging stage 300 or image sensor 302 can be controlled by timing signals or other signals supplied from a processor or memory, such as processor 200 in
Referring now to
The imaging area 404 may be in communication with a column select 408 through one or more column select lines 410 and a row select 412 through one or more row select lines 414. The row select 412 selectively activates a particular pixel 406 or group of pixels, such as all of the pixels 406 in a certain row. The column select 408 selectively receives the data output from the selected pixels 406 or groups of pixels (e.g., all of the pixels within a particular column).
The row select 412 and/or the column select 408 may be in communication with an image processor 402. The image processor 402 may trigger a pixel scan to obtain the pixel values of the pixel array over a relatively short scan time. The pixel scan may be performed, for example, in response to a user input or a request by an operation being performed on the device. By performing a pixel scan, the image processor 402 can process data from the pixels 406 and provide that data to the processor (e.g.,
An image sensor can be constructed on a single semiconductor-based wafer or on multiple semiconductor-based wafers to form an array of photodetectors. In general, photodetectors detect light with little or no wavelength specificity, making it difficult to identify or separate colors. When color separation is desired, a color filter array can be disposed over the imaging area to filter the wavelengths of light sensed by the photodetectors in the imaging area. A color filter array is a mosaic of color filters with each color filter typically disposed over a respective pixel. Each color filter restricts the wavelengths of light detected by the photodetector, which permits color information in a captured image to be separated and identified.
In one embodiment, each filter element restricts light wavelengths. In another embodiment, some of the filter elements filter light wavelengths while other filter elements are panchromatic. A panchromatic color filter can have a wider spectral sensitivity than the spectral sensitivities of the other color filters in the CFA. For example, a panchromatic filter can have a high sensitivity across the entire visible spectrum. A panchromatic filter can be implemented, for example, as a neutral density filter or a color filter. Panchromatic filters can be suitable in low level lighting conditions, where the low level lighting conditions can be the result of low scene lighting, short exposure time, small aperture, or other situations where light is restricted from reaching the image sensor.
Color filter arrays can be configured in a number of different mosaics. The color filter array 500 can be implemented as a red (R), green (G), and blue (B) color filter array or a cyan (C), magenta (M), and yellow (Y) color filter array. The Bayer pattern is a well-known color filter array pattern. The Bayer color filter array filters light in the red (R), green (G), and blue (B) wavelength ranges. The Bayer color filter pattern includes two green color filters (Gr and Gb), one red color filter, and one blue color filter. The group of four color filters is tiled or repeated over the pixels in an imaging area to form the color filter array. For purposes of the following discussion, an image pixel may refer to the sensor region associated with a single color of the color filter array. However, in alternative implementations, a pixel may refer to a sensor region that includes multiple colors of the color filter array.
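For illustration, a minimal sketch of how pixel coordinates map to color filters under a common RGGB Bayer tiling (the tiling choice and helper name are assumptions, not taken from the embodiments described herein):

```python
# A minimal sketch of a Bayer mosaic lookup, assuming an RGGB tiling
# (R and Gr filters in even rows, Gb and B in odd rows); the helper
# name and tiling choice are illustrative assumptions.
def bayer_channel(row: int, col: int) -> str:
    tile = [["R", "Gr"],
            ["Gb", "B"]]
    return tile[row % 2][col % 2]

# The 2x2 group of color filters repeats over the whole imaging area:
assert bayer_channel(0, 0) == "R"
assert bayer_channel(5, 2) == "Gb"   # odd row, even column
```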
As previously mentioned, the image sensor may also include one or more elements to facilitate auto-focus functionality. In one example, auto-focus shield elements are used to provide feedback for an auto-focus mechanism configured to focus light onto the surface of the image sensor.
As shown in
Additionally, each asymmetrical photodetector pair in an imaging area can be identical or some of the asymmetrical photodetector pairs can differ from other asymmetrical photodetector pairs in an imaging area. By way of example only, the shield element in some of the asymmetrical photodetector pairs can be disposed over half of the photodetectors while the shield element in other asymmetrical photodetector pairs can cover a third of the photodetectors.
Filter elements 612, 614 are disposed over the photodetectors 602, 608, respectively. The filter elements 612, 614 can filter light wavelengths that represent the same or different colors. A microlens 616, 618 is disposed over each filter element 612, 614. The microlenses 616, 618 are configured to focus incident light 620 onto the respective photodetectors 602, 608. The light 620 is angled from the left in the illustrated embodiment. The shield element 610 blocks some or all of the light 620 received by the pixel 600, thereby preventing the photodetector 602 from detecting all of the light that would be incident on the photodetector 602 if the shield element were not present. Similarly, the shield element 610 blocks some or all of the light 620 received by the pixel 606, thereby preventing the photodetector 608 from detecting all of the light that would be incident on the photodetector 608 if the shield element were not present. Due to the direction and angle of the light 620 and the shield element 610, the photodetector 608 can detect more light 620 than the photodetector 602. Thus, the photodetector 608 can accumulate more charge than the photodetector 602, making the signal response of the pixel 606 higher than the signal response of the pixel 600.
When the light 620 is angled from the right (not shown), the signal response of the pixel 600 can be higher than the signal response of the pixel 606. And when the light is perpendicular to the surface of the substrate 604, partial light will be blocked on both pixels 600, 606, and the signal responses of both pixels 600, 606 can be substantially the same. Object phase information can be obtained by analyzing the signal responses of the pixels in the asymmetrical pair. The object phase information can be used to provide information about the field depth.
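As a rough sketch of how the pair's signal responses might be compared (the normalization and function name are illustrative assumptions, not a formula from the described embodiments):

```python
# A minimal sketch of comparing the signal responses of the asymmetrical
# photodetector pair (pixels 600 and 606 above); the normalization is an
# illustrative assumption.
def pair_imbalance(response_600: float, response_606: float) -> float:
    """Near zero when light is perpendicular to the substrate (roughly in
    focus); the sign indicates the direction of the incident light angle."""
    total = response_600 + response_606
    return 0.0 if total == 0 else (response_606 - response_600) / total

# Light angled from the left: pixel 606 responds more strongly than 600.
print(pair_imbalance(0.3, 0.7))   # positive imbalance
```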
As shown in
As shown in
As described in more detail below with respect to processes 800, 820, and 840 of
As shown in
In operation 802, a set of coefficients is obtained. In the present example, the coefficients represent the amount of cross-talk between adjacent pixels in the image sensor array. In particular, the coefficients represent the relative increase or decrease in light received by a pixel due to the presence of an auto-focus element in the pixel array. In some cases, an auto-focus shield element may block some of the light that would normally be incident to a pixel that is adjacent to an auto-focus pixel. In other cases, an auto-focus shield element increases the light received by a pixel that is adjacent to an auto-focus pixel. For example, the auto-focus shield element may reflect light back onto an optical element, which in turn reflects light back onto the adjacent pixel. The coefficients obtained in operation 802 quantify the optical effects of elements, such as auto-focus shield elements, on neighboring pixels in the array.
With respect to operation 802, the set of coefficients may be obtained, for example, using a calibration operation. For example, the image sensor may be subjected to a known or predictable lighting condition and the pixel data may be collected and stored. The coefficients may be obtained by, for example, comparing the values of pixels that are adjacent to or neighboring the auto-focus shield element (affected pixels) to another, reference pixel. In one case, the reference pixel is the auto-focus pixel (the pixel that is shielded by the auto-focus shield element). In another case, the reference pixel may be another pixel that is not adjacent to the auto-focus pixel (e.g., an unaffected pixel). The relative difference between the affected pixel values and the reference pixel values may be used to determine the coefficients of operation 802. An example calibration operation is described below with respect to process 840 of
In operation 802, the set of coefficients that are obtained may correspond to a set of pixels that are located at a particular location with respect to an auto-focus shield element. With reference to
In one example, multiple sets of coefficients may be obtained in operation 802. Each set of coefficients may correspond to a set of pixels that are similarly oriented with respect to the auto-focus shield elements in the pixel array. For example, a second set of coefficients may be obtained for a second set of pixels that are located above an auto-focus shield element. With reference again to
In operation 804, a predictive function is constructed. In this example, a predictive function is constructed based on the set of coefficients obtained in operation 802. As described in more detail below with respect to operation 806, the use of a predictive function facilitates the compensation of a large number of pixels in an image sensor without consuming excessive amounts of computer memory resources. In some cases, the use of a predictive function eliminates the need to permanently store the set of coefficients obtained in operation 802, above.
With respect to operation 804, the predictive function may be constructed as an nth order polynomial function. For example, a 4th order polynomial may be constructed based on the coefficients obtained in operation 802. In general, the set of coefficient values corresponds to a spatial distribution in accordance with the location of the corresponding pixels in the array. A polynomial function can be fit to the coefficient data with respect to the spatial distribution along one or more axes or directions. In one example, a 4th order polynomial equation can be constructed by fitting the polynomial equation to the plot of coefficient values along an X- or Y-axis of the image sensor. The polynomial equation may be created using a traditional polynomial regression technique for fitting a polynomial to a set of data. Other techniques, including linear regression, non-parametric regression, spline fitting, or other techniques could also be used to construct the predictive function of operation 804. In this example, the predictive function is an nth order polynomial. However, the predictive function may also be, for example, a linear function, spline, or other parametric expression.
In the present example, the 4th order polynomial may be expressed as 5 polynomial coefficients (a–e) of a 4th order polynomial equation. The polynomial coefficients may be obtained from the coefficients obtained in operation 802 by using a polynomial regression technique. The coefficient values may depend, in part, on the light color temperature and the relative position of the affected pixels. In some cases, the values of the polynomial coefficients may range from ±10⁻¹⁵ to ±10².
In one example, the predictive function (4th order polynomial) may be expressed as:
Y = ax⁴ + bx³ + cx² + dx + e,  (Equation 1)
where x is the location of the pixel along the horizontal x-axis, Y is the approximated value of the coefficient at the location x, and a, b, c, d, e are polynomial coefficients that may be obtained using a regression or other fitting technique. While this example is directed to fitting a predictive function along the horizontal x-axis, other implementations may construct the predictive function along a different direction. Additionally, a multi-dimensional predictive function could also be constructed using the coefficients obtained in operation 802, above.
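As an illustration of this fitting step, the following minimal sketch fits Equation 1 to synthetic coefficient data using numpy's polynomial regression; the sample positions and the stand-in coefficient curve are assumptions for illustration only:

```python
import numpy as np

# A minimal sketch of operation 804 on synthetic data: fit Equation 1 to
# calibration coefficients sampled along the x-axis. The sample spacing
# and the stand-in coefficient values are illustrative assumptions.
x_positions = np.arange(0, 4096, 16, dtype=float)      # e.g. one sample per grid column
measured = 0.01 * np.cos(np.pi * x_positions / 4096)   # stand-in for measured coefficients

a, b, c, d, e = np.polyfit(x_positions, measured, deg=4)  # Equation 1's a..e
predict = np.poly1d([a, b, c, d, e])

# Only the five polynomial coefficients need to be stored; the full set of
# calibration coefficients can be discarded after fitting.
print(predict(1234.0))   # approximated cross-talk coefficient at x = 1234
```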
With respect to operation 804, the predictive function may only need to be fit along one direction to sufficiently predict the cross-talk for a set of pixels. For example, the coefficient data may be substantially consistent along a first axis and vary according to the predictive function along a second axis that is transverse to the first axis. Alternatively, a two-dimensional predictive function can be used to predict the cross-talk for a set of pixels.
In some cases, the coefficients obtained in operation 802 are too numerous to efficiently construct a predictive function from all of the data. Thus, in one example, a sub-set of the set of coefficients obtained in operation 802 is used to construct the predictive function. In particular, the set of coefficients obtained in operation 802 may be represented by a matrix of grid elements. In this case, each grid element of the matrix may represent multiple actual pixels of the image sensor. In one example, a grid element represents a 16×16 square region of image pixels. In this case, the amount of data that is used to construct the predictive function of operation 804 is reduced by a factor of 16 in each dimension. In some cases, a representative coefficient value is used for each grid element of the matrix. The representative coefficient may be determined based on the coefficients of the pixels located within that grid element. For example, a representative coefficient may be computed by averaging two or more coefficients associated with pixels within the grid element. In another example, the representative coefficient is set as one of the coefficients associated with one of the pixels within the grid element.
Other sizes of grid elements may also be used depending on the size of the image sensor pixel array and depending on the density of the auto-focus shield elements (or other type of elements) that are integrated into the image sensor. Additionally, the grid elements may be non-square, non-rectangular, or other types of shapes. In some cases, the grid elements may not all be the same size and shape.
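A minimal sketch of this grid reduction, assuming a per-pixel coefficient map and averaging as the representative-coefficient choice (array names are illustrative):

```python
import numpy as np

# A minimal sketch: average each 16x16 block of a per-pixel coefficient
# map into one representative coefficient per grid element. Dimensions
# that are not multiples of the tile size are cropped for brevity.
def grid_representatives(coeff_map, tile=16):
    h, w = coeff_map.shape
    cropped = coeff_map[: h - h % tile, : w - w % tile]
    blocks = cropped.reshape(h // tile, tile, w // tile, tile)
    return blocks.mean(axis=(1, 3))

grid = grid_representatives(np.random.rand(480, 640))
print(grid.shape)   # (30, 40): one representative value per 16x16 element
```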
In operation 806, a compensated pixel value is calculated using the predictive function. In some implementations, operation 806 is performed after a set of pixel data is acquired by the image sensor. For example, operation 806 may be performed after the image sensor has acquired pixel data as part of a camera image capture operation. Similarly, operation 806 may be performed after pixel data has been acquired as part of a video sequence capture operation. Alternatively, operation 806 may be performed on pixel data that has been stored in computer memory.
In operation 806, a compensated pixel value may be calculated by using the predictive function to estimate the effect on the pixel due to any neighboring auto-focus shield elements (or other types of elements within the pixel array). In some cases, the compensated pixel value is an approximation of the pixel value had there been no auto-focus shield element present to interfere with the light received by the (affected) pixel.
The compensated pixel value may be calculated by adding or subtracting the amount of additional signal that is estimated to have been caused by a neighboring auto-focus shield element. As discussed above, the predictive function may provide a memory-efficient approach for estimating the effect of neighboring elements that interfere with the light received by a pixel. In one example, the compensated pixel value may be calculated, for example, using the following equation:
R′ = R + C*G(AF)  (Equation 2)
where R′ is the compensated pixel value, R is the pixel value that was acquired, C is a compensation coefficient, and G(AF) is the output from the neighboring auto-focus pixel. In this example, the compensation coefficient may be computed using the predictive function obtained in operation 804, above. For example, the compensation coefficient may be computed using the 4th order polynomial equation described above with respect to equation 1.
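Putting the pieces together, Equation 2 might be applied as in the following sketch, where `predict` is the fitted polynomial from the earlier sketch and the input values and x-position are illustrative assumptions:

```python
# A minimal sketch of Equation 2, using the predictive function fitted in
# the sketch above (`predict`); the raw pixel value, auto-focus pixel
# output, and x-position are illustrative assumptions.
def compensate(raw_value: float, af_output: float, x: float) -> float:
    c = float(predict(x))             # compensation coefficient C at this position
    return raw_value + c * af_output  # R' = R + C * G(AF)

r_prime = compensate(raw_value=512.0, af_output=260.0, x=1234.0)
```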
Although Equation 2 depicts one technique for computing the compensated pixel value, other techniques may also be used. For example, the compensated pixel value may be computed based on the predictive function and the output of another type of reference pixel. For example, the compensated pixel value may be calculated using the following equation:
R′ = R + C*G(NA),  (Equation 3)
where R′ is the compensated pixel value, R is the pixel value that was acquired, C is the compensation coefficient, and G(NA) is the output from a non-affected pixel that is not adjacent to the neighboring auto-focus pixel. Similar to the example above, the compensation coefficient may be computed using the predictive function obtained in operation 804, above. However, in this case, the predictive function is based on a comparison between the neighboring (affected) pixels and other non-affected pixels. In yet another alternative example, more than one predictive function can be used that correlates the output of the affected pixel to more than one reference pixel.
In some cases, the affected pixels could be grouped according to a coarser grid (e.g., 16×16 pixel groups) to reduce the computations required for the polynomial fitting operation. In some cases, it may be advantageous to compute a compensated pixel value for groups of pixels that are adjacent to each other. As described above with respect to operation 804, a matrix of grid elements may be used, where each grid element includes a group of adjacent pixels in the pixel array. Similar to the previous example, a grid element may represent a 16×16 square region of image pixels. For each of the grid elements, a representative pixel value may be obtained using the predictive function, which represents the compensation estimate for pixels located within that grid element. A compensated pixel value can then be computed for at least one pixel of the image sensor based on the representative pixel value.
In operation 808, a digital image is created based on the compensated pixel value. In one example, a digital image is created using the compensated pixel value along with other pixel values obtained using the image sensor. That is, the compensated pixel value is used to create a portion of a digital image, such as an image pixel. In some cases, the compensated pixel corresponds directly with an image pixel, although it is not necessary that it correspond one-to-one. In one example, the compensated pixel corresponds to a red/blue/green pixel region on the image sensor. The compensated pixel value may be combined with values from other red/blue/green pixel regions to compute a single image pixel in the digital image.
In one example implementation, many compensated pixel values are computed and the computed pixel values are used to create the digital image. In general, the number of compensated pixels is approximately proportional to the number of auto-focus shield elements integrated into the image sensor. For example, there may be 4 compensated pixels for every auto-focus pixel in the image sensor. By creating a digital image using compensated pixel values, the digital image may more accurately portray the lighting conditions and the color of the scene that has been photographed. In particular, a digital image produced using process 800 may exhibit reduced or minimal effects due to auto-focus shield elements.
With regard to operation 808, the digital image may be created in accordance with any one of a number of known digital image formats. For example, the digital image may be formed as a Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), Tagged Image File Format (TIFF), Portable Network Graphics (PNG), or RAW image format. The digital image may be stored in persistent computer memory, including, for example, a solid-state drive (SSD), magnetic storage media, or other computer memory device.
In some implementations, process 800 may be performed for each digital image that is created using a digital camera. In some implementations, process 800 is performed for multiple digital images that are used to create a video sequence or series of images. Some of the operations of process 800 may be implemented in hardware and some operations may be implemented in a combination of hardware and computer-readable instructions executed on a computer processor.
In operation 822, pixel data is acquired using the image sensor. In one example, the sensor values for the pixels or sensor cells of an image sensor are acquired using a pixel scanning technique. An example configuration for performing a pixel scan is described above with respect to
In operation 824, a predictive function is obtained. As discussed above with respect to process 800, above, the predictive function may approximate the effect of the presence of auto-focus shield elements on neighboring pixels in the image sensor. As described previously, use of a predictive function may facilitate image compensation without requiring voluminous computer storage. In particular, one or more predictive functions can be used to approximate a large number of coefficient values obtained in a calibration operation.
The predictive function may be obtained, for example, by a previously performed calibration operation or other function generation operation. For example, the predictive function may be obtained from computer memory having been previously created in accordance with operation 804, discussed above with respect to process 800. As discussed previously, the predictive function may be an nth order polynomial function, a linear function, spline, or other parametric expression.
In operation 826, a compensated pixel value is calculated using a predictive function. As described above with respect to operation 806 (of process 800), a compensated pixel value can be calculated using a predictive function that approximates the effect of elements (e.g., auto-focus shield elements) integrated into the image sensor. The compensated pixel value may be calculated using, for example, the techniques described above with respect to operation 806. Specifically, the compensated pixel value may be computed using Equation 2 or Equation 3, described above.
In operation 828, a digital image is created based on the compensated pixel. In particular, the compensated pixel may be used to create a digital image that has been compensated to reduce the effects of elements integrated in the image sensor. In this example, the digital image is compensated to reduce the effects of auto-focus shield elements integrated into an image sensor. An example of operation 828 is provided above with respect to operation 808 of process 800.
In operation 842, the image sensor is illuminated. To generate an accurate calibration of the response of the sensor pixels, it may be advantageous for the illumination to be substantially repeatable. In this example, the image sensor is illuminated with a light source having a known color and brightness. The light source is also provided as a surface light source to minimize localized regions of brightness or color variation.
In operation 844, the pixel values are acquired. In this example, a scan of the pixel array is performed to obtain the sensor measurements for each of the active pixels in the array. An example scan is discussed above with respect to
In operation 846, a coefficient is calculated based on the pixel values. In particular, a set of coefficients that represent the amount of cross-talk between pixels is calculated. The calculation of the coefficients for operation 846 is substantially similar to operation 802 discussed above with respect to process 800.
With regard to operation 846, the coefficients may be obtained by, for example, comparing the values of pixels that are adjacent to or neighboring the auto-focus shield element (affected pixels) to a reference pixel. In one case, as discussed above, the reference pixel is the auto-focus pixel (the pixel that is shielded by the auto-focus shield element). In another case, the reference pixel may be another pixel that is not adjacent to the auto-focus pixel (e.g., an unaffected pixel). The relative difference between the affected pixel values and the reference pixel values may be used to determine the coefficients of operation 846.
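A minimal sketch of this comparison, assuming a flat-field calibration frame and known positions for the affected and reference pixels (all names and the relative-difference formula are illustrative assumptions):

```python
import numpy as np

# A minimal sketch of operation 846 under assumed inputs: `flat` is a
# flat-field calibration frame, and each affected pixel is paired with a
# reference pixel (the auto-focus pixel or a nearby unaffected pixel).
def crosstalk_coefficients(flat, affected, reference):
    vals = np.array([flat[r, c] for r, c in affected], dtype=float)
    refs = np.array([flat[r, c] for r, c in reference], dtype=float)
    # Positive coefficient: the shield element added light to the pixel;
    # negative coefficient: the shield element blocked light from it.
    return (vals - refs) / refs
```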
In operation 848, a predictive function is constructed based on the set of coefficients. As previously discussed, the predictive function may be a polynomial curve, linear fit, or other type of parametric representation of the set of coefficients. A description of the construction of a predictive function is provided above with respect to operation 804 of process 800.
Processes 800, 820, and 840 are typically implemented as one or more sets of computer-readable instructions stored on a non-transitory computer readable storage medium. The operations of processes 800, 820, and 840 may be performed by executing the one or more sets of computer-readable instructions on a computer processor. An example of computer memory and computer processor components that can be used to perform processes 800, 820, and 840 are described above with respect to
Even though specific embodiments have been described herein, it should be noted that the application is not limited to these embodiments. In particular, any features described with respect to one embodiment may also be used in other embodiments, where compatible. Likewise, the features of the different embodiments may be exchanged, where compatible.
This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 61/952,700, filed Mar. 13, 2014, entitled “Image Sensor with Auto-Focus and Pixel Cross-Talk Compensation,” and U.S. Provisional Patent Application No. 61/975,700, filed Apr. 4, 2014, entitled “Image Sensor with Auto-Focus and Pixel Cross-Talk Compensation,” which are incorporated by reference as if fully disclosed herein.