Embodiments of the invention relate to digital color image sensors, and more particularly, to enhancing the sensitivity and dynamic range of image sensors that utilize arrays of sub-pixels to generate the data for color pixels in a display, and optionally increase the resolution of color sub-pixel arrays.
Digital image capture devices are becoming ubiquitous in today's society. High-definition video cameras for the motion picture industry, image scanners, professional still photography cameras, consumer-level “point-and-shoot” cameras and hand-held personal devices such as mobile telephones are just a few examples of modern devices that commonly utilize digital color image sensors to capture images. Regardless of the image capture device, in most instances the most desirable images are produced when the sensors in those devices can capture fine details in both the bright and dark areas of a scene or image to be captured. In other words, the quality of the captured image is often a function of the amount of detail at various light levels that can be captured. For example, a sensor capable of generating an image with fine detail in both the bright and dark areas of the scene is generally considered superior to a sensor that captures fine detail in either bright or dark areas, but not both simultaneously. Sensors with an increased ability to capture both bright and dark areas in a single image are considered to have better dynamic range.
Thus, higher dynamic range becomes an important concern for digital imaging performance. For sensors with a linear response, dynamic range can be defined as the ratio of the output's saturation level to the noise floor at dark. This definition is not suitable for sensors without a linear response. For any image sensor, with or without a linear response, dynamic range can be measured as the ratio of the maximum detectable light level to the minimum detectable light level. Prior dynamic range extension methods fall into two general categories: improvement of the sensor structure and revision of the capture procedure; some methods combine the two.
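The ratio measurement above can be illustrated with a short sketch; the decibel convention and the example saturation and noise-floor counts are illustrative assumptions, not values taken from the text.

```python
import math

def dynamic_range_db(max_detectable, min_detectable):
    """Dynamic range as the ratio of the maximum detectable light level
    to the minimum detectable light level, expressed in decibels."""
    return 20.0 * math.log10(max_detectable / min_detectable)

# For a linear sensor, the maximum is the output saturation level and the
# minimum is the dark noise floor, e.g. a hypothetical 12-bit sensor
# saturating at 4095 counts with a 2-count noise floor:
dr = dynamic_range_db(4095, 2)   # about 66 dB
```

The same ratio form applies to nonlinear sensors, since only the two detectable-light endpoints are needed.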
Structural approaches can be implemented at the pixel level or at the sensor array level. For example, U.S. Pat. No. 7,259,412 introduces an HDR transistor in a pixel cell. A revised sensor array with additional high voltage supply and voltage level shifter circuits is proposed in U.S. Pat. No. 6,861,635. The typical method in the second category is to use different exposures over multiple frames (e.g. long and short exposures in two different frames to capture both dark and bright areas of the image) and then combine the results from the two frames. The details are described in U.S. Pat. No. 7,133,069 and U.S. Pat. No. 7,190,402. In U.S. Pat. No. 7,202,463 and U.S. Pat. No. 6,018,365, different approaches combining the two categories are introduced. U.S. Pat. No. 7,518,646 discloses a solid state imager capable of converting analog pixel values to digital form on an arrayed per-column basis. U.S. Pat. No. 5,949,483 discloses an imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit including a focal plane array of pixel cells. U.S. Pat. No. 6,084,229 discloses a CMOS imager including a photosensitive device having a sense node coupled to a FET located adjacent to a photosensitive region, with another FET, which forms a differential input pair of an operational amplifier, located outside of the array of pixels.
In addition to increased dynamic range, increased pixel resolution is also an important concern for digital imaging performance. Conventional color digital imagers typically have a horizontal/vertical orientation, with each color pixel formed from one red (R) pixel, two green (G) pixels, and one blue (B) pixel in a 2×2 array (a Bayer pattern). The R and B pixels can be sub-sampled and interpolated to increase the effective resolution of the imager. Bayer pattern image processing is described in U.S. patent application Ser. No. 12/126,347, filed on May 23, 2008, the contents of which are incorporated by reference herein in their entirety for all purposes.
Although Bayer pattern interpolation results in increased imager resolution, the Bayer pattern subsampling used today generally does not produce sufficiently high quality color images.
Embodiments of the invention improve the dynamic range of captured images by using sub-pixel arrays to capture light at different exposures and generate color pixel outputs for an image in a single frame. The sub-pixel arrays utilize supersampling and are generally directed towards high-end, high resolution sensors and cameras. Each sub-pixel array can include multiple sub-pixels. The sub-pixels that make up a sub-pixel array can include red (R) sub-pixels, green (G) sub-pixels, blue (B) sub-pixels, and in some embodiments, clear (C) sub-pixels. Because clear (a.k.a. monochrome or panchromatic) sub-pixels capture more light than color sub-pixels, the use of clear sub-pixels can enable the sub-pixel arrays to capture a wider range of photon-generated charge in a single frame during a single exposure period. Those sub-pixel arrays having clear sub-pixels effectively have a higher exposure level and can capture low-light scenes (for dark areas) better than those sub-pixel arrays without clear sub-pixels. Each sub-pixel array can produce a color pixel output that is a combination of the outputs of the sub-pixels in the sub-pixel array. The sub-pixel array can be oriented diagonally to improve visual resolution and color purity by minimizing color crosstalk. Each sub-pixel in a sub-pixel array can have the same exposure time, or in some embodiments, individual sub-pixels within a sub-pixel array can have different exposure times to improve the overall dynamic range even further.
One exemplary 3×3 sub-pixel array forming a color pixel in a diagonal strip pattern includes multiple R, G and B sub-pixels, each color arranged in a channel. One pixel can include the three sub-pixels of the same color. Diagonal color strip filters are described in U.S. Pat. No. 7,045,758. Another exemplary diagonal 3×3 sub-pixel array includes one or more clear sub-pixels. Clear pixels have been interspaced with color pixels as taught in U.S. Published Patent Application No. 20070024934. To enhance the sensitivity (dynamic range) of the sub-pixel array, one or more of the color sub-pixels can be replaced with clear sub-pixels. Sub-pixel arrays with more than three clear sub-pixels can also be used, although the color performance of the sub-pixel array can be diminished as a higher percentage of clear sub-pixels is used in the array. With more clear sub-pixels, the dynamic range of the sub-pixel array can go up because more light can be detected, but less color information can be obtained. With fewer clear sub-pixels, the dynamic range will be smaller, but more color information can be obtained. A clear sub-pixel can be up to six times more sensitive than a colored sub-pixel (i.e. a clear sub-pixel can produce up to six times greater photon-generated charge than a colored sub-pixel, given the same amount of light). Thus, a clear sub-pixel captures dark images well, but will become overexposed (saturated) in a shorter exposure time than color sub-pixels given the same illumination.
Each sub-pixel array can produce a color pixel output that is a combination of the outputs of the sub-pixels in the sub-pixel array. In some embodiments of the invention, all sub-pixels can have the same exposure time, and all sub-pixel outputs can be normalized to the same range (e.g. between [0,1]). The final color pixel output can be the combination of all sub-pixels (each sub-pixel type having different gains or response curves). However, if a higher dynamic range is desired, the exposure time of individual sub-pixels can be varied (e.g. the clear sub-pixel in a sub-pixel array can be exposed for a longer time, while the color sub-pixels can be exposed for a shorter time). In this manner, even darker areas can be captured, while the regular color sub-pixels exposed for a shorter time can capture even brighter areas. Alternately, a portion of the clear sub-pixels may have a short exposure and a portion can have a long exposure to capture the very dark and very bright portions of the image. Alternately, the color pixels can have the same or similar distribution of short and long exposures on the sub-pixels to extend the dynamic range within a captured image. The types of pixels used can be Charge Coupled Devices (CCDs), Charge Injection Devices (CIDs), CMOS Active Pixel Sensors (APSs), CMOS Active Column Sensors (ACSs), or passive photo-diode pixels, with either rolling shutter or global shutter implementations.
Embodiments of the invention also increase the resolution of imagers by sampling an image using diagonally oriented color sub-pixel arrays, and creating additional pixels from the sampled image data to form a complete image in an orthogonal display. Although diagonal embodiments are presented herein, other pixel layouts on an orthogonal grid can be utilized as well.
A first method maps the diagonal color imager pixels to every other orthogonal display pixel. The missing display pixels can be computed by interpolating data from adjacent color imager pixels. For example, a missing display pixel can be computed by averaging color information from neighboring display pixels to the left and right and/or top and bottom, or from all four neighboring pixels. This averaging can be done either by weighting the surrounding pixels equally, or by applying weights to the surrounding pixels based on intensity information. By performing this interpolation, the resolution in the horizontal direction can be effectively increased by a factor of the square root of two, and the interpolated pixels double the number of displayed pixels.
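The neighbor-averaging step described above can be sketched as follows; the RGB-tuple representation and the equal default weighting are illustrative assumptions.

```python
def interpolate_missing(neighbors, weights=None):
    """Estimate a missing display pixel from neighboring display pixels.

    neighbors: list of (r, g, b) tuples from the left/right and/or
    top/bottom neighbors. weights: optional per-neighbor weights
    (defaults to weighting the surrounding pixels equally)."""
    if weights is None:
        weights = [1.0] * len(neighbors)
    total = sum(weights)
    return tuple(
        sum(w * px[c] for w, px in zip(weights, neighbors)) / total
        for c in range(3)
    )

# Equal weighting of all four neighboring pixels:
left, right = (100, 0, 0), (0, 100, 0)
top, bottom = (0, 0, 100), (100, 100, 100)
missing = interpolate_missing([left, right, top, bottom])  # (50.0, 50.0, 50.0)
```

Intensity-based weighting is obtained by passing a `weights` list derived from the neighbors' brightness instead of the equal default.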
A second method utilizes the captured color imager sub-pixel data instead of interpolation. Missing color pixels for orthogonal displays can simply be obtained from the sub-pixel arrays formed between the row color pixels in the imager. To accomplish this, one method is to store all sub-pixel information in memory when each row of color pixels is read out. This way, missing pixels can be re-created by the processor using the stored data. Another method stores and reads out both the color pixels and the missing pixels computed as described above. In some embodiments, binning may also be employed.
FIGS. 2a, 2b and 2c illustrate exemplary diagonal 3×3 sub-pixel arrays containing one, two and three clear sub-pixels, respectively, according to embodiments of the invention.
a illustrates an exemplary digital image sensor portion having four repeating sub-pixel array designs designated 1, 2, 3 and 4, each sub-pixel array design having a clear pixel in a different location according to embodiments of the invention.
b illustrates the exemplary sensor portion of
a illustrates an exemplary color imager pixel array in an exemplary color imager.
b illustrates an exemplary orthogonal color display pixel array in an exemplary display device.
a illustrates an exemplary color imager for which a first method for compensating for this compression can be applied according to embodiments of the invention.
b illustrates an exemplary orthogonal display pixel array for which interpolation can be applied in a display chip according to embodiments of the invention.
a illustrates a portion of an exemplary diagonal color imager and an exemplary second method for compensating for the horizontal compression of display pixels according to embodiments of the invention.
b illustrates a portion of an exemplary orthogonal display pixel array according to embodiments of the invention.
In the following description of preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific embodiments in which the invention can be practiced. It is to be understood that other embodiments can be used and structural changes can be made without departing from the scope of the embodiments of this invention.
Embodiments of the invention can improve the dynamic range of captured images by using sub-pixel arrays to capture light at different exposures and generate color pixel outputs for an image in a single frame. The sub-pixel array described herein utilizes supersampling and is directed towards high-end, high resolution sensors and cameras. Each sub-pixel array can include multiple sub-pixels. The sub-pixels that make up a sub-pixel array can include red (R) sub-pixels, green (G) sub-pixels, blue (B) sub-pixels, and in some embodiments, clear sub-pixels. Each color sub-pixel can be covered with a micro-lens to increase the fill factor. A clear sub-pixel is a sub-pixel with no color filter covering. Because clear sub-pixels capture more light than color sub-pixels, the use of clear sub-pixels can enable the sub-pixel arrays to capture different exposures in a single frame with the same exposure period for all pixels in the array. Those sub-pixel arrays having clear sub-pixels effectively have a higher exposure level and can capture low-light scenes (for dark areas) better than those sub-pixel arrays without clear sub-pixels. Each sub-pixel array can produce a color pixel output that is a combination of the outputs of the sub-pixels in the sub-pixel array. The sub-pixel array can be oriented diagonally to improve visual resolution and color purity by minimizing color crosstalk. Each sub-pixel in a sub-pixel array can have the same exposure time, or in some embodiments, individual sub-pixels within a sub-pixel array can have different exposure times to improve the overall dynamic range even further. With embodiments of the invention, the dynamic range can be improved without significant structural changes and processing costs.
Embodiments of the invention also increase the resolution of imagers by sampling an image using diagonally oriented color sub-pixel arrays, and creating additional pixels from the sampled image data to form a complete image in an orthogonal display. A first method maps the diagonal color imager pixels to every other orthogonal display pixel. The missing display pixels can be computed by interpolating data from adjacent color imager pixels. For example, a missing display pixel can be computed by averaging color information from neighboring display pixels to the left and right and/or top and bottom, or from all four neighboring pixels. A second method utilizes the captured color imager sub-pixel data instead of interpolation. Missing color pixels for orthogonal displays can simply be obtained from the sub-pixel arrays formed between the row color pixels in the imager. The second method increases the resolution of the resulting color image up to that of the color sub-pixel array without mathematical interpolation. Of course, interpolation can then be utilized to further enhance resolution if the application requires it. Sub-pixel image arrays with variable resolution facilitate the use of anamorphic lenses by maximizing the resolution of the imager. Anamorphic lenses squeeze the image aspect ratio to fit a given format film or solid state imager for image capture, usually along the horizontal axis. The sub-pixel imager of the present invention can be read out to un-squeeze the captured image and restore it to the original aspect ratio of the scene.
Although the sub-pixel arrays according to embodiments of the invention may be described and illustrated herein primarily in terms of high-end, high resolution imagers and cameras, it should be understood that any type of image capture device for which an enhanced dynamic range and resolution is desired can utilize the sensor embodiments and missing display pixel generation methodologies described herein. Furthermore, although the sub-pixel arrays may be described and illustrated herein in terms of 3×3 arrays of sub-pixels forming strip pixels with sub-pixels having circular sensitive regions, other array sizes and shapes of pixels and sub-pixels can be utilized as well. In addition, although the color sub-pixels in the sub-pixel arrays may be described as containing R, G and B sub-pixels, in other embodiments colors other than R, G, and B can be used, such as the complementary colors cyan, magenta, and yellow, and even different color shades (e.g. two different shades of blue) can be used. It should also be understood that these colors may be described generally as first, second and third colors, with the understanding that these descriptions do not imply a particular order.
Improving dynamic range.
FIGS. 2a, 2b and 2c illustrate exemplary diagonal 3×3 sub-pixel arrays 200, 202 and 204, containing one, two and three clear sub-pixels, respectively, according to embodiments of the invention. To enhance the sensitivity (dynamic range) of the sub-pixel array, one or more of the color sub-pixels can be replaced with clear sub-pixels as shown in
Sub-pixel arrays with more than three clear sub-pixels can also be used, although the color performance of the sub-pixel array can be diminished as a higher percentage of clear sub-pixels is used in the array. With more clear sub-pixels, the dynamic range of the sub-pixel array can go up because more light can be detected, but less color information can be obtained. With fewer clear sub-pixels, the dynamic range will be smaller for a given exposure, but more color information can be obtained. Clear sub-pixels can be more sensitive and can capture more light than color sub-pixels given the same exposure time because they do not have a colorant coating (i.e. no color filter), so they can be useful in dark environments. In other words, for a given amount of light, clear sub-pixels produce a greater response, so they can capture dark scenes better than color sub-pixels. For typical R, G and B sub-pixels, the color filters block most of the light in the other two channels (colors), and only about half of the light in the same color channel can be passed. Thus, a clear sub-pixel can be about six times more sensitive than a colored sub-pixel (i.e. a clear sub-pixel can produce up to six times greater voltage than a colored sub-pixel, given the same amount of light). Thus, a clear sub-pixel captures dark images well, but will become overexposed (saturated) in a shorter exposure time than color sub-pixels given the same illumination.
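The roughly six-times figure follows directly from the filter model just stated, as this sketch of the arithmetic shows; the idealized transmission values (equal light in all three channels, total blocking of the other two channels) are simplifying assumptions for illustration.

```python
# Idealized filter model from the text, assuming equal light in the
# R, G and B channels.
channels = 3
clear_transmission = 1.0   # no color filter: passes all channels fully
color_pass_own = 0.5       # a color filter passes ~half of its own channel
color_pass_other = 0.0     # ...and blocks the other two channels

clear_signal = channels * clear_transmission                       # 3.0
color_signal = color_pass_own + (channels - 1) * color_pass_other  # 0.5

sensitivity_ratio = clear_signal / color_signal                    # 6.0
```

The same ratio implies the clear sub-pixel saturates in roughly one sixth of the exposure time under the same illumination.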
a illustrates an exemplary sensor portion 300 having four repeating sub-pixel array designs designated 1, 2, 3 and 4, each sub-pixel array design having a clear sub-pixel in a different location according to embodiments of the invention.
b illustrates the exemplary sensor portion 300 of
As mentioned above, each sub-pixel array can produce a color pixel output that is a combination of the outputs of the sub-pixels in the sub-pixel array. In some embodiments of the invention, all sub-pixels can have the same exposure time, and all sub-pixel outputs can be normalized to the same range (e.g. between [0,1]). The final color pixel output can be the combination of all sub-pixels (each sub-pixel type having different response curves).
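A minimal sketch of this normalize-and-combine step, assuming per-type full-scale (saturation) values to stand in for the different response curves; the raw counts and full-scale values are illustrative.

```python
def normalize(raw, full_scale):
    """Normalize a sub-pixel's raw output to the [0, 1] range using
    that sub-pixel type's full-scale (saturation) value."""
    return min(raw / full_scale, 1.0)

def combine_pixel(subpixels):
    """Combine a sub-pixel array's outputs into one color pixel output.

    subpixels: list of (type, raw, full_scale); each sub-pixel type may
    have a different full-scale value (different response curve).
    Returns the per-type average of the normalized outputs."""
    totals, counts = {}, {}
    for kind, raw, full_scale in subpixels:
        totals[kind] = totals.get(kind, 0.0) + normalize(raw, full_scale)
        counts[kind] = counts.get(kind, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}

# One R, G, B and clear (C) sub-pixel, all with the same exposure time;
# the clear sub-pixel has a larger full-scale range:
pixel = combine_pixel([("R", 512, 1024), ("G", 1024, 1024),
                       ("B", 256, 1024), ("C", 3072, 6144)])
```

The averaging per type is one plausible combination rule; the text leaves the exact combination open.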
However, in other embodiments, if a higher dynamic range is desired, the exposure time of individual sub-pixels can be varied (e.g. the clear sub-pixel in a sub-pixel array can be exposed for a longer time, while the color sub-pixels can be exposed for a shorter time). In this manner, even darker areas can be captured, while the regular color sub-pixels exposed for a shorter time can capture even brighter areas.
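One way to sketch this mixed-exposure combination is to scale each unsaturated sub-pixel reading by its exposure time so long and short exposures become comparable; the saturation handling, units, and example values are illustrative assumptions.

```python
def radiance_estimate(raw, full_scale, exposure_time):
    """Estimate relative scene radiance from one sub-pixel, dividing out
    exposure time. Returns None when the sub-pixel is saturated, since a
    saturated reading carries no usable information."""
    if raw >= full_scale:
        return None
    return (raw / full_scale) / exposure_time

def fuse(readings):
    """Average radiance estimates over unsaturated sub-pixels only.
    readings: list of (raw, full_scale, exposure_time) tuples."""
    estimates = [radiance_estimate(*r) for r in readings]
    valid = [e for e in estimates if e is not None]
    return sum(valid) / len(valid) if valid else None

# In a bright area the long-exposure clear sub-pixel saturates, but the
# short-exposure color sub-pixel still captures usable data:
fused = fuse([(1024, 1024, 0.032),    # clear, 32 ms: saturated, ignored
              (512, 1024, 0.008)])    # color, 8 ms: 0.5 / 0.008 = 62.5
```

In a dark area the roles reverse: the short-exposure reading falls into the noise floor while the long-exposure clear sub-pixel supplies the estimate.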
Improving pixel resolution.
b illustrates an exemplary orthogonal color display pixel array 604 in an exemplary display device 606. Color images can be displayed using the orthogonal color display pixel array 604. Although the 17 color pixels used for image capture are diagonally oriented as shown in
a illustrates an exemplary color imager array for which a first method for compensating for this compression can be applied according to embodiments of the invention.
b illustrates an exemplary orthogonal display pixel array for which interpolation can be applied in a display chip according to embodiments of the invention. In the example of
Depending on the amount of overexposure or underexposure of the surrounding display pixels, the pixels can be weighted anywhere from 0% to 100%. The weightings can also be based on a desired effect, such as a sharp or soft effect. The use of weighting can be especially effective when one display pixel is saturated and an adjacent pixel is not, suggesting a sharp transition between bright and dark areas of the scene. If the interpolated display pixel simply utilizes the saturated pixel in the interpolation process without weighting, the lack of color information in the saturated pixel may cause the interpolated pixel to appear somewhat saturated (without sufficient color information), and the transition can lose its sharpness. However, if a soft image or other result is desired, the weightings or methodology can be modified accordingly.
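A sketch of such saturation-aware weighting, assuming 8-bit RGB values and a simple headroom-based weight; the exact weighting function is a design choice, not something the text specifies.

```python
def weight_for(pixel, full_scale, soft=False):
    """Weight a neighbor from 0% to 100% based on its headroom below
    saturation; saturated neighbors carry unreliable color information."""
    headroom = 1.0 - max(pixel) / full_scale
    if soft:
        return 0.5 + 0.5 * headroom   # soft effect: never fully exclude
    return headroom                    # sharp effect: saturated -> 0

def weighted_interpolate(neighbors, full_scale=255, soft=False):
    """Interpolate a missing display pixel from weighted neighbors."""
    weights = [weight_for(p, full_scale, soft) for p in neighbors]
    total = sum(weights)
    if total == 0.0:
        # All neighbors saturated: fall back to equal weighting.
        weights, total = [1.0] * len(neighbors), float(len(neighbors))
    return tuple(
        sum(w * p[c] for w, p in zip(weights, neighbors)) / total
        for c in range(3)
    )

# A fully saturated neighbor is excluded, so the interpolated pixel keeps
# the unsaturated neighbor's color and the sharp transition is preserved:
px = weighted_interpolate([(255, 255, 255), (40, 80, 120)])
```

Passing `soft=True` retains a partial contribution from bright neighbors for a softer result.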
In essence, instead of discarding captured imager pixels, embodiments of the invention utilize diagonal striped filters arranged into evenly matched RGB imager sub-pixel arrays and create missing display pixels to fit the display media at hand. Interpolation can produce satisfactory images because the human eye is “pre-wired” for horizontal and vertical orientation, and the human brain works to connect dots to see horizontal and vertical lines. The end result is the generation of high color purity displayed images.
By performing interpolation as described above, the resolution in the horizontal direction can be effectively doubled. For example, a 5760×2180 imager pixel array comprised of about 37.7 million imager sub-pixels, which can form about 12.6 million imager pixels (red, blue and green) or about 4.2 million color imager pixels, can utilize the interpolation techniques described above to effectively increase the total to about 8.4 million color display pixels or about 25.1 million display pixels (roughly the amount needed for a “4 k” camera). (The term “4 k” means 4 k samples across the displayed picture for each of R, G and B (12 k pixels wide and at least 1080 pixels high), and represents an industry-wide goal that is now achievable using embodiments of the invention.)
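The arithmetic behind these figures can be checked directly; the only assumption is the three-sub-pixels-per-imager-pixel structure described earlier.

```python
# Arithmetic behind the "4 k" example above: a 5760 x 2180 array of
# imager pixel sites, three sub-pixels per site, three imager pixels
# (R, G, B) per color pixel.
width, height = 5760, 2180

imager_pixels = width * height          # ~12.6 million R, G or B pixels
sub_pixels = imager_pixels * 3          # ~37.7 million imager sub-pixels
color_pixels = imager_pixels // 3       # ~4.2 million color imager pixels

# Interpolation effectively doubles the horizontal resolution:
color_display_pixels = color_pixels * 2     # ~8.4 million
display_pixels = color_display_pixels * 3   # ~25.1 million
```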
Before the pixels in the color imager can be interpolated as described above, the pixels must be read out. Each sub-pixel in a color imager can be read out individually, or two or more sub-pixels can be combined before they are read out, in a process known as “binning.” In the example of
With continued reference to
In some embodiments, this post-charge transfer voltage level can be received by device 808 configured as an amplifier, which generates an output representative of the amount of charge transfer. The output of amplifier 808 can then be captured by capture circuit 810. The capture circuit 810 can include an analog-to-digital converter (ADC) that digitizes the output of the amplifier 808. A value representative of the amount of charge transfer can then be determined and stored in a latch, accumulator or other memory element for subsequent readout. Note that in some embodiments, in a subsequent digital binning operation the capture circuit 810 can allow a value representative of the amount of charge transfer from one or more other sub-pixels to be added to the latch or accumulator, thereby enabling more complex digital binning sequences as will be discussed in greater detail below.
In some embodiments, the accumulator can be a counter whose count is representative of the total amount of charge transfer for all of the sub-pixels being binned. When a new sub-pixel or group of sub-pixels is coupled to the sense node 806, the counter can begin incrementing its count from its last state. As long as the output of DAC 818 is greater than the value on sense node 806, comparator 808 does not change state, and the counter continues to count. When the output of the DAC 818 lowers to the point where it falls below the value on sense node 806 (which is connected to the other input of the comparator), the comparator changes state and stops the DAC and the counter. It should be understood that the DAC 818 can be operated with a ramp in either direction, but in a preferred embodiment the ramp can start out high (2.5V) and then be lowered. As most pixels are near the reset level (or black), this allows for fast background digitization. The value of the counter at the time the DAC is stopped is the value representative of the total charge transfer of the one or more sub-pixels. Although several techniques for storing a value representative of transferred sub-pixel charge have been described, as in U.S. Pat. No. 7,518,646 (incorporated herein by reference in its entirety for all purposes) and those mentioned above for purposes of illustration, other techniques can also be employed according to embodiments of the invention.
In other embodiments, a digital input value to a digital-to-analog converter (DAC) 818 counts up and produces an analog ramp that can be fed into one of the inputs of device 808 configured as a comparator. When the analog ramp exceeds the value on sense node 806, the comparator changes state and freezes the digital input value of the DAC 818 at a value representative of the charge coupled onto sense node 806. Capture circuit 810 can then store the digital input value in a latch, accumulator or other memory element for subsequent readout. In this manner, sub-pixels 802-1 through 802-3 can be digitally binned. After sub-pixels 802-1 through 802-3 have been binned, Tx1-Tx3 can disconnect sub-pixels 802-1 through 802-3, and reset signal 812 can reset sense node 806 to the reset bias 814.
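This ramp-and-compare conversion can be sketched behaviorally as follows, using the rising-ramp variant of this paragraph; the step count is an illustrative assumption, and the 2.5 V reference echoes the preferred-embodiment value mentioned earlier.

```python
def single_slope_adc(sense_voltage, vref=2.5, steps=256):
    """Behavioral sketch of the ramp ADC: a counting digital input
    drives a DAC producing a rising analog ramp; when the ramp exceeds
    the sense-node voltage, the comparator changes state and the
    counter value is frozen as the digital representation of the
    charge coupled onto the sense node."""
    for count in range(steps):
        dac_out = vref * count / (steps - 1)   # rising ramp, 0 V to vref
        if dac_out > sense_voltage:            # comparator trips
            return count                       # input value frozen
    return steps - 1                           # full scale reached

code = single_slope_adc(1.25)   # half of the 2.5 V reference
```

The down-ramp embodiment described earlier behaves symmetrically, starting the ramp high and stopping when it falls below the sense-node value.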
As mentioned above, the select FETs 804 are controlled by six different transfer lines, Tx1-Tx6. When one row of pixel data is being binned in preparation for readout, Tx1-Tx3 can connect sub-pixels 802-1 through 802-3 to sense node 806, while Tx4-Tx6 keep sub-pixels 802-4 through 802-6 disconnected from sense node 806. When the next row of pixel data is ready to be binned in preparation for readout, Tx4-Tx6 can connect sub-pixels 802-4 through 802-6 to sense node 806, while Tx1-Tx3 can keep sub-pixels 802-1 through 802-3 disconnected from sense node 806, and a digital representation of the charge coupled onto the sense node can be captured as described above. In this manner, sub-pixels 802-4 through 802-6 can be binned. The binned pixel data can be stored in capture circuit 810 as described above for subsequent readout. After the charge on sub-pixels 802-4 through 802-6 has been sensed by amplifier 808, Tx4-Tx6 can disconnect sub-pixels 802-4 through 802-6, and reset signal 812 can reset sense node 806 to the reset bias 814.
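Abstracting away the transistor-level detail, the alternating row-binning sequence amounts to summing one group of sub-pixel charges at a time onto a shared node that is reset between groups; the charge values and group size here are illustrative assumptions.

```python
def bin_rows(subpixel_charges, group_size=3):
    """Behavioral sketch of row binning: the transfer lines connect one
    group of sub-pixels at a time to the shared sense node, where their
    charges sum; the captured total is stored for readout and the sense
    node is reset before the next group is connected."""
    binned = []
    sense_node = 0.0
    for start in range(0, len(subpixel_charges), group_size):
        # This group's transfer lines close; all others stay open.
        for charge in subpixel_charges[start:start + group_size]:
            sense_node += charge      # charge transfer onto sense node
        binned.append(sense_node)     # captured for subsequent readout
        sense_node = 0.0              # reset to the reset bias
    return binned

# Six sub-pixels (802-1 .. 802-6) binned as two groups of three:
rows = bin_rows([10, 20, 30, 5, 15, 25])   # [60, 45]
```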
Although the preceding example described the binning of three sub-pixels prior to the readout of each row, it should be understood that any plurality of sub-pixels can be binned. In addition, although the preceding example described six sub-pixels connected to sense node 806 through select FETs 804, it should be understood that any number of sub-pixels can be connected to the common sense node 806 through select FETs, although only a subset of those sub-pixels may be connected at any one time. Furthermore, it should be understood that the select FETs 804 can be turned on and off in any sequence or in any parallel combination along with FET 816 to effect multiple binning configurations. The FETs in
From the description above, it should be understood how an entire column of same-color sub-pixels can be binned and stored for readout using the same binning circuit, one row at a time. As described, the architecture of
a illustrates an exemplary diagonal color imager 900 and an exemplary second method for compensating for the horizontal compression of display pixels according to embodiments of the invention. In the example of
b illustrates a portion of an exemplary orthogonal display pixel array 902 according to embodiments of the invention. Rather than mapping the captured color imager pixels of
To utilize previously captured sub-pixel data, in one embodiment all sub-pixel information can be stored in off-chip memory when each row of sub-pixels is read out. To read out every sub-pixel, no binning occurs. Instead, when a particular row is to be captured, every sub-pixel 1002-1 through 1002-4 is independently coupled at different times to sense node 1006 utilizing FETs 1004 controlled by transfer lines Tx1-Tx4, and a representation of the charge transfer of each sub-pixel is coupled into capture circuits 1010-1 through 1010-4 using FETs 1016 controlled by transfer lines Tx5-Tx8 for subsequent readout. Although the example of
With every imager sub-pixel stored and read out in this manner, the missing color display pixels can be created by an off-chip processor or other circuit using the stored imager sub-pixel data. However, this method requires that a substantial amount of imager sub-pixel data be captured, read out, and stored in off-chip memory for subsequent processing in a short period of time, so speed and memory constraints may be present. If, for example, the product is a low-cost security camera and monitor, it may not be desirable to have any off-chip memory at all for storing imager sub-pixel data—instead, the data is sent directly to the monitor for display. In such products, off-chip creation of missing color display pixels may not be practical.
In other embodiments described below, additional capture circuits can be used in each column to store imager sub-pixel or pixel data to reduce the need for external off-chip memory and/or external processing. Although two alternative embodiments are presented below for purposes of illustration, it should be understood that other similar methods for utilizing previously captured imager sub-pixel data to create missing color display pixels can also be employed.
When row 3 is captured, sub-pixel H-R1 is captured in both capture circuits 1210-1A and 1210-1C, sub-pixel H-R2 is captured in both capture circuits 1210-2A and 1210-2C, sub-pixel H-R3 is captured in both capture circuits 1210-3A and 1210-3C, and sub-pixel H-R4 is captured in both capture circuits 1210-4A and 1210-4C. Next, the sub-pixel data for row 3 (H-R1, H-R2, H-R3 and H-R4), needed for color display pixel (H) (see
When row 4 is captured, sub-pixel data K-R1 is captured in both capture circuits 1210-1A and 1210-1D, sub-pixel data K-R2 is captured in both capture circuits 1210-2A and 1210-2D, sub-pixel data K-R3 is captured in both capture circuits 1210-3A and 1210-3D, and sub-pixel data K-R4 is captured in both capture circuits 1210-4A and 1210-4D. Next, the sub-pixel data for row 4 (K-R1, K-R2, K-R3 and K-R4), needed for color display pixel (K), can be read out of capture circuits 1210-1A, 1210-2A, 1210-3A and 1210-4A. In addition, the sub-pixel data for the previous row 3 (E-R3, E-R4, H-R1 and H-R2), needed for missing color display pixel (L), can be read out of capture circuits 1210-3B, 1210-4B, 1210-1C and 1210-2C, respectively.
When row 5 is captured, sub-pixel data Z-R1 is captured in both capture circuits 1210-1A and 1210-1B, sub-pixel data Z-R2 is captured in both capture circuits 1210-2A and 1210-2B, sub-pixel data Z-R3 is captured in both capture circuits 1210-3A and 1210-3B, and sub-pixel data Z-R4 is captured in both capture circuits 1210-4A and 1210-4B. Next, the sub-pixel data for row 5 (Z-R1, Z-R2, Z-R3 and Z-R4), needed for color display pixel (Z), can be read out of capture circuits 1210-1A, 1210-2A, 1210-3A and 1210-4A. In addition, the sub-pixel data for the previous row 4 (H-R3, H-R4, K-R1 and K-R2), needed for missing color display pixel (P), can be read out of capture circuits 1210-3C, 1210-4C, 1210-1D and 1210-2D, respectively.
The capture and readout procedure described above with regard to
When row 3 is captured, sub-pixels H-R1, H-R2, H-R3 and H-R4 are binned and captured in capture circuit 1010-1, sub-pixels H-R1 and H-R2 are binned and added to capture circuit 1010-3, and sub-pixels H-R3 and H-R4 are binned and captured in capture circuit 1010-4. Next, the sub-pixel data for row 3 (H-R1, H-R2, H-R3 and H-R4), needed for color display pixel (H), can be read out of capture circuit 1010-1. In addition, the sub-pixel data for the previous row 2, needed for missing color display pixel (N), can be read out of capture circuit 1010-2.
When row 4 is captured, sub-pixels K-R1, K-R2, K-R3 and K-R4 are binned and captured in capture circuit 1010-1, sub-pixels K-R1 and K-R2 are binned and added to capture circuit 1010-4, and sub-pixels K-R3 and K-R4 are binned and captured in capture circuit 1010-2. Next, the sub-pixel data for row 4 (K-R1, K-R2, K-R3 and K-R4), needed for color display pixel (K), can be read out of capture circuit 1010-1. In addition, the sub-pixel data for the previous row 3 (E-R3, E-R4, H-R1 and H-R2), needed for missing color display pixel (L), can be read out of capture circuit 1010-3.
When row 5 is captured, sub-pixels Z-R1, Z-R2, Z-R3 and Z-R4 are binned and captured in capture circuit 1010-1, sub-pixels Z-R1 and Z-R2 are binned and added to capture circuit 1010-2, and sub-pixels Z-R3 and Z-R4 are binned and captured in capture circuit 1010-3. Next, the sub-pixel data for row 5 (Z-R1, Z-R2, Z-R3 and Z-R4), needed for color display pixel (Z), can be read out of capture circuit 1010-1. In addition, the sub-pixel data for the previous row 4 (H-R3, H-R4, K-R1 and K-R2), needed for missing color display pixel (P), can be read out of capture circuit 1010-4.
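Setting aside the capture-circuit bookkeeping, the net effect of this staggered binning schedule can be sketched compactly: each captured row yields its own color display pixel (all four sub-pixels binned), and each pair of adjacent rows yields a missing display pixel from the trailing half of the earlier row plus the leading half of the later row. The numeric sub-pixel values are illustrative.

```python
def row_and_missing_pixels(rows):
    """Behavioral sketch of the staggered binning scheme for rows of
    four same-color sub-pixels. Returns the binned row pixels and the
    missing pixels formed between consecutive rows."""
    row_pixels = [sum(r) for r in rows]          # e.g. pixels E, H, K
    missing_pixels = [sum(a[2:]) + sum(b[:2])    # e.g. pixels L, P
                      for a, b in zip(rows, rows[1:])]
    return row_pixels, missing_pixels

# Three rows of four sub-pixels each (values chosen for illustration):
E, H, K = [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]
row_px, missing_px = row_and_missing_pixels([E, H, K])
# row_px:     [10, 26, 42]
# missing_px: [3+4+5+6, 7+8+9+10] = [18, 34]
```

The rotating capture circuits in the text implement exactly this overlap without storing whole rows off-chip.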
The capture and readout procedure described above with regard to
The methods described above (interpolation or the use of previously captured sub-pixels) to create missing color display pixels double the display resolution in the horizontal direction. In yet another embodiment, the resolution can be increased in both the horizontal and vertical directions to approach or even match the resolution of the sub-pixel arrays. In other words, a digital color imager having about 37.7 million sub-pixels can utilize previously captured sub-pixels to generate as many as about 37.7 million color display pixels.
Although the examples provided above utilize 4×4 color imager sub-pixel arrays for purposes of illustration and explanation, it should be understood that other sub-pixel array sizes (e.g., 3×3) could also be used. In such embodiments, a “zigzag” pattern of previously captured color imager sub-pixels may be needed to create the missing color display pixels. In addition, sub-pixels configured for grayscale image capture and display can be employed instead of color.
It should be understood that the creation of missing color display pixels described above can be implemented at least in part by the imager chip architecture of
Although embodiments of this invention have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of embodiments of this invention as defined by the appended claims.
This is a continuation-in-part (CIP) of U.S. application Ser. No. 12/125,466, filed on May 22, 2008, the contents of which are incorporated by reference herein in their entirety for all purposes.
Number | Date | Country
--- | --- | ---
Parent 12125466 | May 2008 | US
Child 12712146 | | US