The present invention relates generally to image sensors and in particular, but not exclusively, to white/black pixel defect correction in an image sensor.
Recent manufacturing improvements in semiconductor processing have markedly reduced the number of defects that occur in any given semiconductor device, but limitations inherent in every manufacturing process make it impossible to completely eliminate defects. Therefore, no matter how good the manufacturing process, defects continue to exist in finished semiconductor devices. If a defect is severe, the resulting device must often be thrown away, resulting in decreased yield and increased cost. But if the defect is minor, it can often be compensated for by circuitry or logic running on the semiconductor device itself or by back-end processing of signals from the semiconductor device.
In image sensors, a common type of manufacturing defect is known as a white/black pixel defect. Image sensors typically include an array of individual pixels that gather charge as a result of light incident on the pixels. White/black pixel defects occur when a particular pixel outputs a signal that is substantially different from the signals output by other nearby pixels. Thus, if a particular pixel outputs a signal corresponding to the color black (i.e., a very low intensity signal) but some or all of the surrounding pixels output signals that correspond to the color white (i.e., very high intensity signals), the likely cause is some defect in the pixel outputting the low-intensity signal.
Fortunately, unless there is a large cluster of contiguous defective pixels, white/black pixel defects can be compensated for. Existing methods of compensating for white/black pixel defects, however, are slow and inefficient and consume substantial computational resources, thereby slowing image capture by the image sensor and reducing its performance.
Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
Embodiments of an apparatus, system and process for white/black pixel correction in an image sensor are described herein. In the following description, numerous specific details are described to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail but are nonetheless encompassed within the scope of the invention.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The illustrated pixel array 104 is regularly shaped, but in other embodiments the array can have a regular or irregular arrangement different from that shown and can include more or fewer pixels, rows and columns than shown. Moreover, in different embodiments pixel array 104 can be a color image sensor including red, green and blue pixels designed to capture images in the visible portion of the spectrum, or can be a black-and-white image sensor and/or an image sensor designed to capture images in the non-visible portion of the spectrum, such as infra-red or ultraviolet.
After an image is captured using pixel array 104, one or more of the pixels in the array may exhibit a potential white/black pixel defect. Whether a given pixel exhibits a potential white/black pixel defect is determined by comparing the intensity of the signal from that pixel with the intensity of the signals from at least one of its surrounding pixels. Thus, within pixel array 104, pixel D has a potential white/black pixel defect if its intensity is significantly different from the intensity of one or more of surrounding pixels 1-8. Pixel D is said to have a potential white/black pixel defect because, under some circumstances, the difference in intensity between pixel D and surrounding pixels 1-8 may not actually be a defect, but rather may be a true attribute of the image captured by pixel array 104. For example, if pixel array 104 is used to capture an image of an object that has abrupt and/or high-frequency changes between light and dark areas, it is possible that the discrepancy between pixel D and its surrounding pixels is an accurately captured characteristic of the scene and not the result of a defect.
Defective pixel detection circuit 109 is coupled to pixel array 104 and includes circuitry and associated logic to receive output from each of the individual pixels within pixel array 104. Defective pixel detection circuit 109 analyzes the analog input from pixel array 104 to detect potential white/black pixel defects. Defective pixel detection circuit 109 determines the existence of a potential white/black pixel defect by comparing the intensity of the signal from a given pixel with the intensity of the signals from at least one of its surrounding pixels. Thus, within pixel array 104, pixel D has a potential white/black pixel defect if its intensity is significantly different from the intensity of one or more of surrounding pixels 1-8.
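By way of illustration only, the comparison performed by defective pixel detection circuit 109 can be expressed in software as the following minimal sketch. The circuit itself is implemented in hardware and logic; the function name is_potential_defect, the two-dimensional list representation of the pixel values, and the threshold parameter are hypothetical choices for illustration and are not taken from the description above.

def is_potential_defect(pixels, row, col, threshold=64):
    # Flag pixel (row, col) as a potential white/black defect if its
    # intensity differs from at least one of its surrounding pixels by
    # more than a threshold (hypothetical value, not from the description).
    center = pixels[row][col]
    rows, cols = len(pixels), len(pixels[0])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip pixel D itself
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols:
                if abs(center - pixels[r][c]) > threshold:
                    return True
    return False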
Dynamic pixel correction circuit 110 is coupled to defective pixel detection circuit 109 and uses circuitry and logic found therein to attempt to correct the potential white/black pixel defects identified by defective pixel detection circuit 109. The correction applied by dynamic pixel correction circuit 110 can be done differently in different embodiments. In one embodiment, the value of pixel D is corrected by replacing it with the value of one of its adjacent pixels 1-8. Other embodiments can have more complex correction schemes. For example, in one embodiment the value of pixel D might be interpolated from the values of some or all of surrounding pixels 1-8 using a linear interpolation or some higher-order interpolation. In another example, the value of pixel D can be replaced with an average or weighted average of surrounding pixels 1-8. In still other embodiments, pixel D can be corrected based on pixels other than or in addition to adjacent pixels 1-8. In some embodiments, dynamic pixel correction circuit 110 has no way of knowing whether a given pixel D is truly defective. Thus, in one embodiment dynamic pixel correction circuit 110 applies a correction to every potentially defective pixel D, whether truly defective or not.
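As a non-authoritative illustration of one of the correction schemes mentioned above (replacing pixel D with the average of its surrounding pixels 1-8), the following sketch assumes the same two-dimensional list representation used in the detection sketch; the function name correct_pixel is hypothetical, and other embodiments would instead use a single neighbor, a weighted average, or a higher-order interpolation.

def correct_pixel(pixels, row, col):
    # Replace pixel D's value with the average of its surrounding pixels.
    rows, cols = len(pixels), len(pixels[0])
    neighbors = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # exclude pixel D itself from the average
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols:
                neighbors.append(pixels[r][c])
    return sum(neighbors) // len(neighbors)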
Although shown in the drawing as an element separate from pixel array 104, in some embodiments dynamic pixel correction circuit 110 can be integrated with pixel array 104 on the same substrate or can comprise circuitry and logic within the pixel array. In other embodiments, however, dynamic pixel correction circuit 110 can be an element external to pixel array 104 as shown in the drawing. In still other embodiments, dynamic pixel correction circuit 110 can be an element not only external to pixel array 104, but also external to image sensor 102.
Signal conditioner 112 is coupled to image sensor 102 to receive and condition analog signals from pixel array 104 and dynamic pixel correction circuit 110. In different embodiments, signal conditioner 112 can include various components for conditioning analog signals. Examples of components that can be found in signal conditioner 112 include filters, amplifiers, offset circuits, automatic gain control, etc.
Analog-to-digital converter (ADC) 114 is coupled to signal conditioner 112 to receive conditioned analog signals corresponding to each pixel in pixel array 104 from signal conditioner 112 and convert these analog signals into digital values.
Digital signal processor (DSP) 116 is coupled to analog-to-digital converter 114 to receive digitized pixel data from ADC 114 and process the digital data to produce a final digital image. DSP 116 includes a processor 117 that can store and retrieve data in a memory 118, within which can be stored a data structure 120 that includes information about pixels within pixel array 104 that are known to be defective. In the illustrated embodiment memory 118 is integrated within DSP 116, but in other embodiments memory 118 can be a separate element coupled to DSP 116. Processor 117 can perform various functions, including processing pixel data, cross-checking pixels against the pixel identifiers stored in data structure 120, and so forth.
Data structure 120 can be any kind of data structure capable of holding the required pixel data; the exact kind of data structure used will depend on the operational requirements set for apparatus 100. In one embodiment data structure 120 can be a look-up table, but in other embodiments data structure 120 can be something more complex such as a database. The defective pixels listed in data structure 120 are identified by the locations of the defective pixels within pixel array 104. In the illustrated embodiment, defective pixels are identified in data structure 120 by a pixel identifier that includes a pair of numbers I and J that denote the defective pixel's row and column within pixel array 104. In other embodiments, however, other ways can be used in data structure 120 to identify defective pixels. For example, in an embodiment where pixel array 104 has individually addressable pixels, data structure 120 can contain the addresses of the defective pixels instead of their row/column coordinates (I,J) within the pixel array. The entries in data structure 120 can be generated during an initial calibration of apparatus 100, as described below.
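A minimal software analogue of data structure 120 as a look-up table, assuming pixel identifiers are (I, J) row/column pairs as in the illustrated embodiment, is sketched below. The set-based representation and the helper names are hypothetical and serve only to illustrate storing and cross-checking pixel identifiers; an actual embodiment stores this information in memory 118.

# Hypothetical stand-in for data structure 120: a set of (I, J) pixel
# identifiers of pixels known to be defective.
defective_pixels = set()

def add_defective_pixel(i, j):
    # Record the row/column identifier of a known-defective pixel.
    defective_pixels.add((i, j))

def is_known_defective(i, j):
    # Cross-check a pixel identifier against the stored identifiers.
    return (i, j) in defective_pixels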
At block 210 the digital values of individual pixels are analyzed to spot defective pixels. In one embodiment, to spot defective pixels the value of each pixel is compared to the values of its adjacent pixels. Because the target whose image was captured is either uniformly black or uniformly white, the digital values for all pixels in pixel array 104 should be the same. If there is a large discrepancy between a pixel's digital value and the digital values of its adjacent pixels, then the pixel in question is almost certainly defective. Thus if a pixel's value is substantially higher than one or more of its adjacent pixels (for a calibration using a uniformly black target) or substantially lower than one or more of its adjacent pixels (for a calibration using a uniformly white target), that pixel is deemed defective. In other embodiments other methods for determining whether a pixel is defective can be used.
If as a result of the pixel analysis of block 210 a defective pixel is found at block 212, then at block 214 the location of the defective pixel is added to the data structure 120 within DSP 116. In one embodiment the location of the defective pixel is noted by placing its pixel identifier (in one embodiment, the row and column coordinates of the pixel) into a look-up table of defective pixels, but in other embodiments it can be done differently as described above. After the location of a defective pixel is added to data structure 120 at block 214, at block 216 the process checks whether there are more pixels to be analyzed. If there are, the process returns to block 210 and analyzes the next pixel; if there are not (i.e., if all pixels in pixel array 104 have been analyzed), the process proceeds to block 218, where the calibration checks whether there are more calibration targets to be used for the calibration. As noted above, if the initial calibration was carried out with a black target it can be repeated with a white target, or vice versa, to identify more defective pixels.
If at block 218 another target is to be used for calibration, the process returns to block 204, where the new target is set up, and proceeds through blocks 206-216 for the new target. If at block 218 there are no additional calibration targets, the process moves to block 220 where the dynamic pixel correction is turned back on so that it can correct any potentially defective pixels during operation. The process then proceeds to block 222, where the calibration stops.
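The calibration flow of blocks 210-218 can be sketched in software form as follows, reusing the hypothetical is_potential_defect comparison from the earlier sketch. The loop over captures stands in for repeating the procedure with a black target and then a white target; the threshold value and the function name calibrate are assumed for illustration and are not specified by the description above.

def calibrate(captures, threshold=64):
    # For each capture of a uniform black or white target, compare each
    # pixel with its adjacent pixels and record the locations of outliers
    # (blocks 210-216); looping over multiple captures corresponds to the
    # additional-target check at block 218.
    table = set()
    for pixels in captures:
        rows, cols = len(pixels), len(pixels[0])
        for i in range(rows):
            for j in range(cols):
                if is_potential_defect(pixels, i, j, threshold):
                    table.add((i, j))
    return table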
At block 310 the analog pixel data received from image sensor 102 is digitized. After the pixel data is digitized, at block 312 each pixel's pixel identifier is cross-checked against the pixel identifiers in data structure 120 to see whether it is identified as a defective pixel. If as a result of cross-checking a pixel at block 312 a defective pixel is found at block 314, then at block 316 the defective pixel is corrected by DSP 116 as described above. At block 318 the process checks whether there are any pixels left that have not been cross-checked against the defective pixels listed in data structure 120 and corrected if necessary. If at block 318 there are pixels left that have not been cross-checked, the process returns to block 312 and cross-checks any remaining pixels. If at block 318 there are no pixels left to cross-check, the process proceeds to block 320, where processing by DSP 116 is finished.
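A minimal sketch of the run-time flow of blocks 310-318 is given below, assuming the set-based look-up table and the hypothetical correct_pixel helper from the earlier sketches. In the embodiment described above this processing is performed by DSP 116, so the software form and the function name process_frame are illustrative assumptions only.

def process_frame(pixels, defective_pixels):
    # Cross-check each digitized pixel against the table of known-defective
    # pixel identifiers (block 312) and correct any matches (block 316).
    rows, cols = len(pixels), len(pixels[0])
    for i in range(rows):
        for j in range(cols):
            if (i, j) in defective_pixels:
                pixels[i][j] = correct_pixel(pixels, i, j)
    return pixels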
The above description of illustrated embodiments of the invention, including what is described in the abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications can be made to the invention in light of the above detailed description.
The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.