PIXEL ASPECT RATIO CORRECTION USING PANCHROMATIC PIXELS

Information

  • Patent Application
  • Publication Number
    20090046182
  • Date Filed
    August 14, 2007
  • Date Published
    February 19, 2009
Abstract
A method for forming an enhanced digital full-color image having a first pixel aspect ratio includes capturing an image using an image sensor having panchromatic pixels and color pixels corresponding to at least two color photoresponses wherein color and panchromatic pixels each have a second pixel aspect ratio different from the first pixel aspect ratio, providing from the captured image a digital high-resolution panchromatic image and changing the aspect ratio of the panchromatic pixel values from the second pixel aspect ratio to the first pixel aspect ratio to produce a digital aspect corrected high-resolution panchromatic image, providing from the captured image a digital low-resolution color difference color filter array image, providing a digital aspect corrected high-resolution color difference image from the low-resolution color difference color filter array image, and using the aspect corrected high-resolution panchromatic image and an aspect corrected high-resolution color difference image to produce the enhanced digital full-color image.
Description
FIELD OF THE INVENTION

The present invention relates to forming a color image having a desired pixel aspect ratio from a panchromatic image and a color image having a different pixel aspect ratio.


BACKGROUND OF THE INVENTION

Video cameras and digital still cameras generally employ a single image sensor with a color filter array to record a scene. This approach begins with a sparsely populated single-channel image in which the color information is encoded by the color filter array pattern. Subsequent interpolation of the neighboring pixel values permits the reconstruction of a complete three-channel, full-color image. A generally understood assumption is that this full-color image is composed of pixel values sampled on a square pixel lattice, i.e., the image pixels are square. This is important because the vast majority of image display and printing devices use square pixels for subsequent image rendering. However, requiring square pixels in the full-color image does not require the single image sensor to use square pixels. Sensors using rectangular (non-square) pixels are well known in the art. The general practice of producing a square pixel image from a rectangular pixel capture is to produce a full-color image with rectangular pixels and then, as a final step, transform the full-color image into one with square pixels.

This approach is exemplified by U.S. Pat. No. 5,778,106 (Juenger et al.). See FIG. 2. A digital camera 200 equipped with a single sensor of rectangular pixels produces an RGB CFA image 202. A CFA interpolation block 204 produces a full-color image 206 from the RGB CFA image 202. A pixel aspect ratio correction block 208 produces a pixel aspect ratio corrected full-color image 210 from the full-color image 206. In this example, it can be seen that an extra operation (block 208) is required in the image processing chain in order to produce an image with square pixels (block 210) from an initial image with non-square pixels (block 202). A better solution would be to incorporate the pixel aspect ratio correction block 208 directly into the CFA interpolation block 204.

A related example of this approach is taught in U.S. Pat. No. 7,092,020 (Yoshikawa). See FIG. 3. A digital camera 212 (equipped with a single sensor of square pixels) produces an RGB CFA image 214. A CFA interpolation and resizing block 216 produces a resized full-color image 218 from the RGB CFA image 214 by directly computing a digitally zoomed (enlarged) full-color image without dividing the operation into two separate steps (interpolation then resizing) or producing a corresponding intermediate image.


Under low-light imaging situations, it is advantageous to have one or more of the pixels in the color filter array unfiltered, i.e. white or panchromatic in spectral sensitivity. These panchromatic pixels have the highest light sensitivity capability of the capture system. Employing panchromatic pixels represents a tradeoff in the capture system between light sensitivity and color spatial resolution. To this end, many four-color color filter array systems have been described. U.S. Pat. No. 6,529,239 (Dyck et al.) teaches a green-cyan-yellow-white pattern that is arranged as a 2×2 block that is tessellated over the surface of the sensor. U.S. Pat. No. 6,757,012 (Hubina et al.) discloses both a red-green-blue-white pattern and a yellow-cyan-magenta-white pattern. In both cases, the colors are arranged in a 2×2 block that is tessellated over the surface of the imager. The difficulty with such systems is that only one-quarter of the pixels in the color filter array have highest light sensitivity, thus limiting the overall low-light performance of the capture device.


To address the need for more pixels with the highest light sensitivity in the color filter array, U.S. Patent Application Publication No. 2003/0210332 (Frame) describes a pixel array with most of the pixels being unfiltered. Relatively few pixels are devoted to capturing color information from the scene, producing a system with low color spatial resolution capability. Additionally, Frame teaches using simple linear interpolation techniques that are not responsive to or protective of high frequency color spatial details in the image.


SUMMARY OF THE INVENTION

It is an object of the present invention to produce a digital color image having the desired pixel aspect ratio from a digital image having panchromatic and color pixels with a different pixel aspect ratio.


This object is achieved by a method of forming an enhanced digital full-color image having a first pixel aspect ratio, comprising:


(a) capturing an image using an image sensor having panchromatic pixels and color pixels corresponding to at least two color photoresponses wherein color and panchromatic pixels each have a second pixel aspect ratio different from the first pixel aspect ratio;


(b) providing from the captured image a digital high-resolution panchromatic image and changing the aspect ratio of the panchromatic pixel values from the second pixel aspect ratio to the first pixel aspect ratio to produce a digital aspect corrected high-resolution panchromatic image;


(c) providing from the captured image a digital low-resolution color difference color filter array image;


(d) providing a digital aspect corrected high-resolution color difference image from the low-resolution color difference color filter array image; and


(e) using the aspect corrected high-resolution panchromatic image and an aspect corrected high-resolution color difference image to produce the enhanced digital full-color image.


It is a feature of the present invention that images can be captured under low-light conditions with a sensor having panchromatic and color pixels with a first pixel aspect ratio, and that processing produces the desired pixel aspect ratio in a digital color image produced from the panchromatic and color pixels.


The present invention makes use of a color filter array with an appropriate composition of panchromatic and color pixels in order to permit the above method to provide both improved low-light sensitivity and improved color spatial resolution fidelity. The above method preserves and enhances panchromatic and color spatial details and produces a full-color, full-resolution image.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a perspective of a computer system including a digital camera for implementing the present invention;



FIG. 2 is a block diagram of a prior art pixel aspect ratio correction image processing chain;



FIG. 3 is a block diagram of a prior art of a combined CFA interpolation and resizing image processing chain;



FIG. 4 is a block diagram of a preferred embodiment of the present invention;



FIG. 5A is a block diagram showing block 302 in FIG. 4 in more detail;



FIG. 5B is a block diagram showing block 302 in FIG. 4 in more detail of an alternate embodiment of the present invention;



FIG. 6A is a block diagram showing block 316 in FIG. 4 in more detail;



FIG. 6B is a block diagram showing block 316 in FIG. 4 in more detail of an alternate embodiment of the present invention;



FIG. 6C is a block diagram showing block 316 in FIG. 4 in more detail of an alternate embodiment of the present invention;



FIG. 6D is a block diagram showing block 316 in FIG. 4 in more detail of an alternate embodiment of the present invention;



FIG. 6E is a block diagram showing block 316 in FIG. 4 in more detail of an alternate embodiment of the present invention;



FIG. 6F is a block diagram showing block 316 in FIG. 4 in more detail of an alternate embodiment of the present invention;



FIGS. 7A and 7B are regions of pixels used in block 316 in FIG. 6A;



FIGS. 8A and 8B are regions of pixels used in block 316 in FIG. 6C;



FIGS. 9A and 9B are regions of pixels used in block 316 in FIG. 6D;



FIGS. 10A and 10B are regions of pixels used in block 316 in FIG. 6E; and



FIGS. 11A and 11B are regions of pixels used in block 316 in FIG. 6F.





DETAILED DESCRIPTION OF THE INVENTION

In the following description, a preferred embodiment of the present invention will be described in terms that would ordinarily be implemented as a software program. Those skilled in the art will readily recognize that the equivalent of such software can also be constructed in hardware. Because image manipulation algorithms and systems are well known, the present description will be directed in particular to algorithms and systems forming part of, or cooperating more directly with, the system and method in accordance with the present invention. Other aspects of such algorithms and systems, and hardware or software for producing and otherwise processing the image signals involved therewith, not specifically shown or described herein, can be selected from such systems, algorithms, components and elements known in the art. Given the system as described according to the invention in the following materials, software not specifically shown, suggested or described herein that is useful for implementation of the invention is conventional and within the ordinary skill in such arts.


Still further, as used herein, the computer program can be stored in a computer readable storage medium, which can include, for example: magnetic storage media such as a magnetic disk (such as a hard drive or a floppy disk) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine readable bar code; solid state electronic storage devices such as random access memory (RAM) or read only memory (ROM); or any other physical device or medium employed to store a computer program.


Before describing the present invention, it facilitates understanding to note that the present invention is preferably utilized on any well-known computer system, such as a personal computer. Consequently, the computer system will not be discussed in detail herein. It is also instructive to note that the images are either directly input into the computer system (for example by a digital camera) or digitized before input into the computer system (for example by scanning an original, such as a silver halide film).


Referring to FIG. 1, there is illustrated a computer system 110 for implementing the present invention. Although the computer system 110 is shown for the purpose of illustrating a preferred embodiment, the present invention is not limited to the computer system 110 shown, but can be used on any electronic processing system such as found in home computers, kiosks, retail or wholesale photofinishing, or any other system for the processing of digital images. The computer system 110 includes a microprocessor-based unit 112 for receiving and processing software programs and for performing other processing functions. A display 114 is electrically connected to the microprocessor-based unit 112 for displaying user-related information associated with the software, e.g., by a graphical user interface. A keyboard 116 is also connected to the microprocessor based unit 112 for permitting a user to input information to the software. As an alternative to using the keyboard 116 for input, a mouse 118 can be used for moving a selector 120 on the display 114 and for selecting an item on which the selector 120 overlays, as is well known in the art.


A compact disk-read only memory (CD-ROM) 124, which typically includes software programs, is inserted into the microprocessor-based unit 112 for providing a way of inputting the software programs and other information to the microprocessor-based unit 112. In addition, a floppy disk 126 can also include a software program, and is inserted into the microprocessor-based unit 112 for inputting the software program. The compact disk-read only memory (CD-ROM) 124 or the floppy disk 126 can alternatively be inserted into an externally located disk drive unit 122 which is connected to the microprocessor-based unit 112. Still further, the microprocessor-based unit 112 can be programmed, as is well known in the art, for storing the software program internally. The microprocessor-based unit 112 can also have a network connection 127, such as a telephone line, to an external network, such as a local area network or the Internet. A printer 128 can also be connected to the microprocessor-based unit 112 for printing a hardcopy of the output from the computer system 110.


Images can also be displayed on the display 114 via a personal computer card (PC card) 130, formerly known as a PCMCIA card (based on the specifications of the Personal Computer Memory Card International Association), which contains digitized images electronically embodied in the PC card 130. The PC card 130 is ultimately inserted into the microprocessor-based unit 112 for permitting visual display of the image on the display 114. Alternatively, the PC card 130 can be inserted into an externally located PC card reader 132 connected to the microprocessor-based unit 112. Images can also be input via the compact disk-read only memory (CD-ROM) 124, the floppy disk 126, or the network connection 127. Any images stored in the PC card 130, the floppy disk 126 or the compact disk-read only memory (CD-ROM) 124, or input through the network connection 127, can have been obtained from a variety of sources, such as a digital camera (not shown) or a scanner (not shown). Images can also be input directly from a digital camera 134 via a camera docking port 136 connected to the microprocessor-based unit 112, directly from the digital camera 134 via a cable connection 138 to the microprocessor-based unit 112, or via a wireless connection 140 to the microprocessor-based unit 112.


In accordance with the invention, the algorithm can be stored in any of the storage devices heretofore mentioned and applied to images in order to interpolate sparsely populated images.



FIG. 4 is a high level diagram of a preferred embodiment. The digital camera 134 (FIG. 1) is responsible for creating an original digital red-green-blue-panchromatic (RGBP) color filter array (CFA) image 300, also referred to as the digital RGBP CFA image or the RGBP CFA image. It is noted at this point that other color channel combinations, such as cyan-magenta-yellow-panchromatic, can be used in place of red-green-blue-panchromatic in the following description. The key item is the inclusion of a panchromatic channel. This image is considered to be a sparsely sampled image because each pixel in the image contains only one pixel value of red, green, blue, or panchromatic data. A panchromatic interpolation block 302 produces a high-resolution panchromatic image 304 and a low-resolution panchromatic image 306 from the RGBP CFA image 300. At this point in the image processing chain, each color pixel location has an associated panchromatic value and either a red, green, or a blue value. The low-resolution color decimation block 310 produces a low-resolution RGB CFA image 312 from the RGBP CFA image 300. The color differences generation block 308 produces a low-resolution color differences CFA image 314 from the low-resolution RGB CFA image 312 and the low-resolution panchromatic image 306. The color differences CFA interpolation and resizing block 316 produces a corrected high-resolution color differences image 318 from the low-resolution color differences CFA image 314 and the low-resolution panchromatic image 306. The pixel aspect ratio correction block 320 produces a corrected high-resolution panchromatic image 322 from the high-resolution panchromatic image 304. Finally, the color differences and panchromatic image summation block 324 produces an enhanced full-color image 326 from the corrected high-resolution color differences image 318 and the corrected high-resolution panchromatic image 322.
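
The chain of FIG. 4 can be summarized in the following sketch (Python). The helper functions stand in for blocks 302, 310, 316, and 320 and are assumptions supplied by the caller; the formulation of the color differences as color-minus-panchromatic, with the panchromatic values added back at the end, is inferred from the summation performed in block 324.

import numpy as np

def enhance_rgbp_cfa(rgbp_cfa, interpolate_pan, decimate_color,
                     interp_resize_diffs, aspect_correct):
    """Sketch of the FIG. 4 processing chain; the concrete operations
    (blocks 302, 310, 316, 320) are supplied by the caller."""
    pan_hi, pan_lo = interpolate_pan(rgbp_cfa)            # block 302 -> images 304, 306
    rgb_lo_cfa = decimate_color(rgbp_cfa)                 # block 310 -> image 312
    diffs_lo_cfa = rgb_lo_cfa - pan_lo                    # block 308 -> image 314
    diffs_hi = interp_resize_diffs(diffs_lo_cfa, pan_lo)  # block 316 -> image 318
    pan_hi_corr = aspect_correct(pan_hi)                  # block 320 -> image 322
    return diffs_hi + pan_hi_corr[..., np.newaxis]        # block 324 -> image 326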



FIG. 5A is a more detailed view of block 302 (FIG. 4) of the preferred embodiment. The high-resolution panchromatic interpolation block 328 produces a high-resolution panchromatic image 330 from the RGBP CFA image 300 (FIG. 4). A copy of the high-resolution panchromatic image 330 becomes the high-resolution panchromatic image 304 (FIG. 4). The low-resolution panchromatic decimation block 332 produces the low-resolution panchromatic image 306 (FIG. 4) from the high-resolution panchromatic image 330.


In FIG. 5A, the high-resolution panchromatic interpolation block 328 and the low-resolution panchromatic decimation block 332 can be performed in any way known to those skilled in the art. Suitable methods are taught in above-cited, commonly-assigned U.S. Patent Application Publication No. 2007/0024934 and U.S. patent application Ser. No. 11/564,451.
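
As a concrete illustration only (the preferred methods are those of the cited applications), the low-resolution panchromatic decimation of block 332 could be as simple as block averaging, assuming the low-resolution grid corresponds to 2x2 tiles of the high-resolution panchromatic image:

import numpy as np

def decimate_panchromatic(pan_hi: np.ndarray, factor: int = 2) -> np.ndarray:
    """Block-average decimation of a high-resolution panchromatic image.
    The factor of 2 and the plain averaging are illustrative assumptions."""
    h = pan_hi.shape[0] - pan_hi.shape[0] % factor
    w = pan_hi.shape[1] - pan_hi.shape[1] % factor
    tiles = pan_hi[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return tiles.mean(axis=(1, 3))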



FIG. 5B is a more detailed view of block 302 (FIG. 4) of an alternate embodiment. The high-resolution panchromatic interpolation block 328 produces the high-resolution panchromatic image 304 (FIG. 4) from the RGBP CFA image 300 (FIG. 4). The low-resolution panchromatic interpolation block 334 produces the low-resolution panchromatic image 306 (FIG. 4) from the RGBP CFA image 300 (FIG. 4). The high-resolution panchromatic interpolation block 328 has already been discussed under FIG. 5A. The low-resolution panchromatic interpolation block 334 differs from the high-resolution panchromatic interpolation block 328 only in that the captured panchromatic pixel values are automatically discarded after the interpolation computations in order to produce a low-resolution panchromatic image of interpolated panchromatic pixel values.



FIG. 6A is a more detailed view of block 316 (FIG. 4) of the preferred embodiment. A color differences CFA interpolation block 336 produces a low-resolution color differences image 338 from the low-resolution color differences CFA image 314 (FIG. 4). A high-resolution resizing block 340 produces a high-resolution color differences image 342 from the low-resolution color differences image 338. A pixel aspect ratio correction block 344 produces the corrected high-resolution color differences image 318 (FIG. 4) from the high-resolution color differences image 342.


In FIG. 6A, the color differences CFA interpolation block 336 may be performed in any way known to those skilled in the art. Suitable methods are taught in above-cited, commonly-assigned U.S. Patent Application Publication No. 2007/0024934 and U.S. patent application Ser. No. 11/564,451. The high-resolution resizing block 340 is a standard digital image resizing (interpolation or resampling) operation with an appropriate method described also in commonly-assigned U.S. Patent Application Publication No. 2007/0024934. The pixel aspect ratio correction block 344 is also a standard digital image resizing operation with the notable feature that the horizontal scale factor differs from the vertical scale factor. As an example, FIG. 7B (Q1-QC) represents the pixel aspect ratio corrected version of FIG. 7A (P1-PC). Using bilinear interpolation, the pixel aspect ratio computation would be as follows:





Q1=P1
Q2=(2P2+P3)/3
Q3=(P3+2P4)/3
Q4=(P1+3P5)/4
Q5=(2P2+P3+6P6+3P7)/12
Q6=(P3+2P4+3P7+6P8)/12
Q7=(P5+P9)/2
Q8=(2P6+P7+2PA+PB)/6
Q9=(P7+2P8+PB+2PC)/6
QA=(3P9+PD)/4
QB=(6PA+3PB+2PE+PF)/12
QC=(3PB+6PC+PF+2PG)/12


It will be apparent to one skilled in the art that other methods of interpolation, such as cubic convolution interpolation, can be used in place of bilinear interpolation.
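
A minimal sketch of the pixel aspect ratio correction of block 344, assuming the geometry implied by the equations above (output rows spaced 3/4 of a source row apart and output columns spaced 4/3 of a source column apart) and ordinary bilinear interpolation; the scale factors and output shape would in practice come from the sensor's pixel geometry:

import numpy as np

def bilinear_sample(img: np.ndarray, y: float, x: float) -> float:
    """Bilinear interpolation of img at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot

def aspect_correct(img: np.ndarray, sy: float, sx: float, out_shape) -> np.ndarray:
    """Resample img with different vertical (sy) and horizontal (sx) sample spacings."""
    out = np.empty(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            out[i, j] = bilinear_sample(img, i * sy, j * sx)
    return out

# The 4x4 grid P1-PG of FIG. 7A, with arbitrary stand-in values, mapped to the
# 4x3 grid Q1-QC of FIG. 7B; Q5 reproduces (2P2+P3+6P6+3P7)/12 from the list above.
P = np.arange(1.0, 17.0).reshape(4, 4)
Q = aspect_correct(P, sy=3/4, sx=4/3, out_shape=(4, 3))
assert np.isclose(Q[1, 1], (2*P[0, 1] + P[0, 2] + 6*P[1, 1] + 3*P[1, 2]) / 12)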



FIG. 6B is a more detailed view of block 316 (FIG. 4) of an alternate embodiment. A color differences CFA interpolation block 336 produces a low-resolution color differences image 338 from the low-resolution color differences CFA image 314 (FIG. 4). A pixel aspect ratio correction block 346 produces a corrected color differences image 348 from the low-resolution color differences image 338. A high-resolution resizing block 350 produces the corrected high-resolution color differences image 318 (FIG. 4) from the corrected color differences image 348.


In FIG. 6B, the color differences CFA interpolation block 336 is as previously described under FIG. 6A. The pixel aspect ratio correction block 346 is the same as the pixel aspect ratio correction block 344 of FIG. 6A except that block 346 operates on low-resolution data and block 344 operates on high-resolution data. The high-resolution resizing block 350 is the same as the high-resolution resizing block 340 except that block 350 operates on pixel aspect ratio corrected data and block 340 does not.



FIG. 6C is a more detailed view of block 316 (FIG. 4) of an alternate embodiment. A color differences CFA interpolation block 336 produces a low-resolution color differences image 338 from the low-resolution color differences CFA image 314 (FIG. 4). A high-resolution resizing and pixel aspect ratio correction block 352 produces the corrected high-resolution color differences image 318 (FIG. 4) from the low-resolution color differences image 338.


In FIG. 6C, the color differences CFA interpolation block 336 is as previously described under FIG. 6A. The high-resolution resizing and pixel aspect ratio correction block 352 performs high-resolution resizing and pixel aspect ratio correction as a single operation. Block 352 is accomplished by a standard resizing operation with different scale factors for the horizontal and vertical directions. As an example, FIG. 8B (Q1-Qm) represents the high-resolution resized and pixel aspect ratio corrected version of FIG. 8A (P1-PC). Using bilinear interpolation, the computation would be, in part, as follows:





Q1=P1
Q2=(P1+2P2)/3
Q3=(2P2+P3)/3
Q7=(5P1+3P5)/8
Q8=(5P1+10P2+3P5+6P6)/24
Q9=(10P2+5P3+6P6+3P7)/24
QD=(P1+3P5)/4
QE=(P1+2P2+3P5+6P6)/12
QF=(2P2+P3+6P6+3P7)/12
QJ=(7P5+P9)/8
QK=(7P5+14P6+P9+2PA)/24
QL=(14P6+7P7+2PA+PB)/24


It will be apparent to one skilled in the art how to extend these computations to produce the other values of Q in FIG. 8B. It will also be apparent to one skilled in the art that other methods of interpolation, such as cubic convolution interpolation, can be used in place of bilinear interpolation.
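
The single-pass nature of block 352 can be seen by folding a nominal 2x resizing step together with the 3/4 vertical and 4/3 horizontal aspect factors implied by the FIG. 7 example, giving output samples every 3/8 source row and 2/3 source column; this geometry is inferred from the listed weights rather than stated in the text. A small check with exact fractions reproduces the Q8 weights above:

from fractions import Fraction as F

def bilinear_weights(dy, dx):
    """Weights on the four pixels surrounding a sample point with fractional offsets (dy, dx)."""
    return ((1 - dy) * (1 - dx), (1 - dy) * dx, dy * (1 - dx), dy * dx)

# Q8 of FIG. 8B sits at (y, x) = (3/8, 2/3) in source-pixel units, surrounded by
# P1, P2 (above) and P5, P6 (below).
w_P1, w_P2, w_P5, w_P6 = bilinear_weights(F(3, 8), F(2, 3))
print(w_P1, w_P2, w_P5, w_P6)   # 5/24 5/12 1/8 1/4, i.e. Q8=(5P1+10P2+3P5+6P6)/24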



FIG. 6D is a more detailed view of block 316 (FIG. 4) of an alternate embodiment. A color differences CFA interpolation and pixel aspect ratio correction block 354 produces a corrected low-resolution color differences image 356 from the low-resolution color differences CFA image 314 (FIG. 4). A high-resolution resizing block 358 produces the corrected high-resolution color differences image 318 (FIG. 4) from the corrected low-resolution color differences image 356.


In FIG. 6D, the high-resolution resizing block 358 is the same as the high-resolution resizing block 340 (FIG. 6A) except that block 358 operates on pixel aspect ratio corrected data. The color differences CFA interpolation and pixel aspect ratio correction block 354 is a combined interpolation operation. As an example, FIG. 9B (Q1-QC) represents the CFA interpolated and pixel aspect ratio corrected version of FIG. 9A (R1-GC). Note that in FIG. 9A, each pixel value is a color difference value and not an original color value. Since pixels Q1 and R1 are coincident, no pixel aspect ratio correction is required for Q1. Therefore, only CFA interpolation is performed. Standard bilinear interpolation is employed:


Q1R=R1
Q1G=(GE+GJ+G2+G5)/4
Q1B=(BD+BF+BL+B6)/4


In the case of Q2, both CFA interpolation and pixel aspect ratio correction are performed. Intermediate steps are shown to illustrate the determination of the final computation.






Q2R=(2R2+R3)/3→(2(R1+R3)/2+R3)/3→(R1+2R3)/3
Q2G=(2G2+G3)/3→(2G2+(GG+G2+G7+G4)/4)/3→(9G2+GG+G7+G4)/12
Q2B=(2B2+B3)/3→(2(BF+B6)/2+(BF+BH+B6+B8)/4)/3→(5BF+5B6+BH+B8)/12


Therefore, the computations performed by block 354 to determine the Q2 pixel values are:






Q2R=(R1+2R3)/3
Q2G=(9G2+GG+G7+G4)/12
Q2B=(5BF+5B6+BH+B8)/12


The remaining computations in the example are given below.






Q3R=(2R3+RK)/3
Q3G=(9G4+GG+G2+G7)/12
Q3B=(5BH+5B8+BF+B6)/12
Q4R=(5R1+3R9)/8
Q4G=(13G5+G2+GE+GJ)/16
Q4B=(7BL+7B6+BD+BF)/16
Q5R=(10R3+6RB+5R1+3R9)/24
Q5G=(GG+15G2+6G5+G4+19G7+6GA)/48
Q5B=(35B6+7B8+5BF+BH)/48
Q6R=(10R3+6RB+5RK+3RO)/24
Q6G=(GG+G2+15G4+19G7+6GM+6GC)/48
Q6B=(BF+5BH+7B6+35B8)/48
Q7R=(R1+3R9)/4
Q7G=(GA+5G5+GN+GQ)/8
Q7B=(3B6+3BL+BP+BR)/8
Q8R=(6RB+R1+2R3+3R9)/12
Q8G=(11GA+GC+2G2+2G5+7G7+GS)/24
Q8B=(15B6+3B8+5BR+BT)/24
Q9R=(6RB+RK+3RO+2R3)/12
Q9G=(GA+11GC+2G4+7G7+2GM+GS)/24
Q9B=(3B6+15B8+BR+5BT)/24
QAR=(7R9+RW)/8
QAG=(3GA+3G5+3GN+7GQ)/16
QAB=(3B6+3BL+5BP+5BR)/16
QBR=(14RB+7R9+RW+2RY)/24
QBG=(29GA+3GC+3G7+2GQ+9GS+2GX)/48
QBB=(15B6+3B8+25BR+5BT)/48
QCR=(Ra+14RB+7RO+2RY)/24
QCG=(3GA+29GC+3G7+9GS+2GU+2GZ)/48
QCB=(3B6+15B8+5BR+25BT)/48


It will be apparent to one skilled in the art that other methods of interpolation, such as cubic convolution interpolation, can be used in place of bilinear interpolation.
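
The folding of CFA interpolation into the pixel aspect ratio correction in block 354 can be checked with exact fractions. The sketch below reproduces the Q2R result derived above, combining the aspect-correction weights (2/3 on R2, 1/3 on R3) with the CFA interpolation R2=(R1+R3)/2:

from fractions import Fraction as F

# Aspect-correction weights at Q2 before combining: Q2R = (2*R2 + R3)/3.
w_R2, w_R3 = F(2, 3), F(1, 3)
# R2 is not a captured red value; substitute its CFA interpolation (R1 + R3)/2.
w_R1 = w_R2 * F(1, 2)
w_R3 = w_R3 + w_R2 * F(1, 2)
print(w_R1, w_R3)   # 1/3 2/3, i.e. Q2R = (R1 + 2*R3)/3 as listed above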



FIG. 6E is a more detailed view of block 316 (FIG. 4) of an alternate embodiment. A color differences CFA interpolation and high-resolution resizing block 360 produces a high-resolution color differences image 362 from the low-resolution color differences CFA image 314 (FIG. 4). A pixel aspect ratio correction block 364 produces the corrected high-resolution color differences image 318 (FIG. 4) from the high-resolution color differences image 362.


In FIG. 6E, the pixel aspect ratio correction block 364 is the same as the pixel aspect ratio correction block 344 (FIG. 6A). The color differences CFA interpolation and high-resolution resizing block 360 is a combined interpolation operation. As an example, FIG. 10B (Q1-QG) represents the CFA interpolated and high-resolution resized version of FIG. 10A (R1-B4). Note that in FIG. 10A, each pixel value is a color difference value and not an original color value. Since pixels Q1 and R1 are coincident, no high-resolution resizing is required for Q1. Therefore, only CFA interpolation is performed. Standard bilinear interpolation is employed:





Q1R=R1
Q1G=(G6+GA+G2+G3)/4
Q1B=(B5+B7+BD+B4)/4


In the case of Q2, both CFA interpolation and high-resolution resizing are performed. Intermediate steps are shown to illustrate the determination of the final computation.






Q2R=(R1+R2)/2→(R1+(R1+RB)/2)/2→(3R1+RB)/4
Q2G=(G1+G2)/2→((GA+G2+G6+G3)/4+G2)/2→(5G2+GA+G6+G3)/8
Q2B=(B1+B2)/2→((B5+B7+BD+B4)/4+(B7+B4)/2)/2→(3B4+B5+3B7+BD)/8


Therefore, the computations performed by block 360 to determine the Q2 pixel values are:






Q2R=(3R1+RB)/4
Q2G=(5G2+GA+G6+G3)/8
Q2B=(3B4+B5+3B7+BD)/8


The remaining computations in the example are given below.






Q3R=(R1+RB)/2
Q3G=G2
Q3B=(B7+B4)/2
Q4R=(R1+3RB)/4
Q4G=(3G2+GC)/4
Q4B=(3B4+B9+3B7+BF)/8
Q5R=(3R1+RH)/4
Q5G=(GA+G2+5G3+G6)/8
Q5B=(3B4+B5+B7+3BD)/8
Q6R=(3RB+3RH+RJ+9R1)/16
Q6G=(GA+6G2+6G3+G6+GE+G1)/16
Q6B=(9B4+B5+3B7+3BD)/16
Q7R=(3RB+RH+RJ+3R1)/8
Q7G=(5G2+G3+GE+G1)/8
Q7B=(3B4+B7)/4
Q8R=(9RB+RH+3RJ+3R1)/16
Q8G=(GC+6G2+G3+G8+6GE+G1)/16
Q8B=(9B4+3B7+B9+3BF)/16
Q9R=(R1+RH)/2
Q9G=G3
Q9B=(BD+B4)/2
QAR=(RB+3RH+RJ+3R1)/8
QAG=(G2+5G3+GE+G1)/8
QAB=(3B4+BD)/4
QBR=(R1+RB+RH+RJ)/4
QBG=(G2+G3+GE+G1)/4
QBB=B4
QCR=(3RB+RH+3RJ+R1)/8
QCG=(G2+G3+5GE+G1)/8
QCB=(3B4+BF)/4
QDR=(3RH+R1)/4
QDG=(GG+5G3+GM+G1)/8
QDB=(3B4+3BD+BL+BN)/8
QER=(RB+9RH+3RJ+3R1)/16
QEG=(GG+G2+6G3+GM+GE+6G1)/16
QEB=(9B4+3BD+BL+3BN)/16
QFR=(RB+3RH+3RJ+R1)/8
QFG=(G2+G3+GE+5G1)/8
QFB=(3B4+BN)/4
QGR=(3RB+3RH+9RJ+R1)/16
QGG=(G2+G3+GK+GO+6GE+6G1)/16
QGB=(9B4+3BF+3BN+BP)/16


It will be apparent to one skilled in the art that other methods of interpolation, such as cubic convolution interpolation, can be used in place of bilinear interpolation.
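
The same kind of exact-fraction bookkeeping verifies the combined CFA interpolation and resizing of block 360. Here the blue weights at Q2 derived above, Q2B = (B1 + B2)/2 with B1 and B2 themselves CFA-interpolated from captured blues, collapse to the listed (3B4+B5+3B7+BD)/8:

from fractions import Fraction as F

# CFA-interpolated blue estimates at positions 1 and 2 (from the derivation above).
B1 = {"B5": F(1, 4), "B7": F(1, 4), "BD": F(1, 4), "B4": F(1, 4)}
B2 = {"B7": F(1, 2), "B4": F(1, 2)}
# Resizing step: Q2B = (B1 + B2)/2, expanded onto the captured blue pixels.
Q2B = {k: (B1.get(k, F(0)) + B2.get(k, F(0))) / 2 for k in sorted(set(B1) | set(B2))}
for k, w in Q2B.items():
    print(k, w)   # B4 3/8, B5 1/8, B7 3/8, BD 1/8 -> (3B4 + B5 + 3B7 + BD)/8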



FIG. 6F is a more detailed view of block 316 (FIG. 4) of an alternate embodiment. A color differences CFA interpolation, high-resolution resizing, and pixel aspect ratio correction block 366 produces the corrected high-resolution color differences image 318 (FIG. 4) from the low-resolution color differences CFA image 314 (FIG. 4). Block 366 is a combined interpolation operation. As an example, FIG. 11B (Q1-QO) represents the CFA interpolated, high-resolution resized, and pixel aspect ratio corrected version of FIG. 11A (R1-G6). Note that in FIG. 11A, each pixel value is a color difference value and not an original color value. Since pixels Q1 and R1 are coincident, no high-resolution resizing or pixel aspect ratio correction is required for Q1. Therefore, only CFA interpolation is performed. Standard bilinear interpolation is employed:





Q1R=R1
Q1G=(G8+GD+G2+G4)/4
Q1B=(B7+B9+BG+B5)/4


In the case of Q2, CFA interpolation, high-resolution resizing, and pixel aspect ratio correction are performed. Intermediate steps are shown to illustrate the determination of the final computation.





Q2R=(R1+3R2)/4→(R1+3(R1+R3)/2)/4→(5R1+3R3)/8
Q2G=(G1+3G2)/4→((G8+GD+G4+G2)/4+3G2)/4→(GD+13G2+G4+G8)/16
Q2B=(B1+3B2)/4→((B7+B9+BG+B5)/4+(B9+B5)/2)/4→(7B5+B7+7B9+BG)/16


Therefore, the computations performed by block 366 to determine the Q2 pixel values are:






Q2R=(5R1+3R3)/8
Q2G=(GD+13G2+G4+G8)/16
Q2B=(7B5+B7+7B9+BG)/16


The remaining computations in the example are given below.






Q3R=(R1+3R3)/4
Q3G=(GA+5G2+G6+GE)/8
Q3B=(BB+3B5+3B9+BH)/8
Q4R=(RF+7R3)/8
Q4G=(3GA+3G2+3G6+7GE)/16
Q4B=(5BB+3B5+3B9+5BH)/16
Q5R=(RK+5R1)/6
Q5G=(GD+G2+3G4+G8)/6
Q5B=(2B5+B7+B9+2BG)/6
Q6R=(5RK+3RM+25R1+15R3)/48
Q6G=(2GD+29G2+9G4+3G6+2G8+3GL)/48
Q6B=(14B5+B7+7B9+2BG)/24
Q7R=(RK+3RM+5R1+15R3)/24
Q7G=(2GA+11G2+G4+7G6+GL+2GE)/24
Q7B=(BB+6B5+3B9+2BH)/12
Q8R=(5RF+7RM+RO+35R3)/48
Q8G=(6GA+6G2+19G6+GN+15GE+G1)/48
Q8B=(7BB+9B5+3B9+17BH)/48
Q9R=(RK+2R1)/3
Q9G=(GD+G2+9G4+G8)/12
Q9B=(5B5+B7+B9+5BG)/12
QAR=(5RK+3RM+10R1+6R3)/24
QAG=(GD+19G2+15G4+6G6+G8+6GL)/48
QAB=(35B5+B7+7B9+5BG)/48
QBR=(RK+3RM+2R1+6R3)/12
QBG=(GA+7G2+2G4+11G6+2GL+GE)/24
QBB=(BB+15B5+3B9+5BH)/24
QCR=(2RF+7RM+RO+14R3)/24
QCG=(3GA+3G2+29G6+2GN+9GE+2G1)/48
QCB=(5BB+15B5+3B9+25BH)/48
QDR=(R1+RK)/2
QDG=G4
QDB=(BG+B5)/2
QER=(5RK+3RM+5R1+3R3)/16
QEG=(3G2+7G4+3G6+3G1)/16
QEB=(7B5+BG)/8
QFR=(RK+3RM+R1+3R3)/8
QFG=(G2+G4+5G6+G1)/8
QFB=(3B5+BH)/4
QGR=(RF+7RM+RO+7R3)/16
QGG=(13G6+GN+GE+G1)/16
QGB=(3B5+5BH)/8
QHR=(2RK+R1)/3
QHG=(9G4+GJ+G1+GQ)/12
QHB=(5B5+5BG+BP+BR)/12
QIR=(10RK+6RM+5R1+3R3)/24
QIG=(6G2+15G4+6G6+GJ+19GL+GQ)/48
QIB=(35B5+5BG+BP+7BR)/48
QJR=(2RK+6RM+R1+3R3)/12
QJG=(2G2+2G4+11G6+7GL+GN+GS)/24
QJB=(15B5+5BH+3BR+BT)/24
QKR=(RF+14RM+2RO+7R3)/24
QKG=(29G6+3GL+9GN+3GS+2GE+2G1)/48
QKB=(15B5+25BH+3BR+5BT)/48
QLR=(5RK+R1)/6
QLG=(3G4+GJ+GL+GQ)/6
QLB=(2B5+2BG+BP+BR)/6
QMR=(25RK+15RM+5R1+3R3)/48
QMG=(3G2+9G4+3G6+2GJ+29GL+2GQ)/48
QMB=(14B5+2BG+BP+7BR)/24
QNR=(5RK+15RM+R1+3R3)/24
QNG=(G3+G4+7G6+11G1+2GN+2GS)/24
QNB=(6B5+2BH+3BR+BT)/12
QOR=(RF+35RM+5RO+7R3)/48
QOG=(19G6+6GL+15GN+6GS+GE+G1)/48
QOB=(6B5+10BH+3BR+5BT)/24


It will be apparent to one skilled in the art that other methods of interpolation, such as cubic convolution interpolation, can be used in place of bilinear interpolation.
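
Block 366 folds all three operations into one set of weights per output pixel. The same fraction arithmetic reproduces the Q2G result derived above, Q2G = (G1 + 3*G2)/4 with G1 = (G8+GD+G4+G2)/4 interpolated from captured greens:

from fractions import Fraction as F

# CFA-interpolated green at position 1 and the captured green at position 2.
G1 = {"G8": F(1, 4), "GD": F(1, 4), "G4": F(1, 4), "G2": F(1, 4)}
G2 = {"G2": F(1)}
# Combined resizing and aspect correction step: Q2G = (G1 + 3*G2)/4.
Q2G = {k: (G1.get(k, F(0)) + 3 * G2.get(k, F(0))) / 4 for k in sorted(set(G1) | set(G2))}
for k, w in Q2G.items():
    print(k, w)   # G2 13/16, G4 1/16, G8 1/16, GD 1/16 -> (GD + 13G2 + G4 + G8)/16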


The pixel aspect ratio correction algorithms disclosed in the preferred embodiments of the present invention can be employed in a variety of user contexts and environments. Exemplary contexts and environments include, without limitation, wholesale digital photofinishing (which involves exemplary process steps or stages such as film in, digital processing, prints out), retail digital photofinishing (film in, digital processing, prints out), home printing (home scanned film or digital images, digital processing, prints out), desktop software (software that applies algorithms to digital prints to make them better—or even just to change them), digital fulfillment (digital images in—from media or over the web, digital processing, with images out—in digital form on media, digital form over the web, or printed on hard-copy prints), kiosks (digital or scanned input, digital processing, digital or scanned output), mobile devices (e.g., PDA or cell phone that can be used as a processing unit, a display unit, or a unit to give processing instructions), and as a service offered via the World Wide Web.


In each case, the pixel aspect ratio correction algorithms can stand alone or can be a component of a larger system solution. Furthermore, the interfaces with the algorithm, e.g., the scanning or input, the digital processing, the display to a user (if needed), the input of user requests or processing instructions (if needed), the output, can each be on the same or different devices and physical locations, and communication between the devices and locations can be via public or private network connections, or media based communication. Where consistent with the foregoing disclosure of the present invention, the algorithms themselves can be fully automatic, can have user input (be fully or partially manual), can have user or operator review to accept/reject the result, or can be assisted by metadata (metadata that can be user supplied, supplied by a measuring device (e.g. in a camera), or determined by an algorithm). Moreover, the algorithms can interface with a variety of workflow user interface schemes.


The pixel aspect ratio correction algorithms disclosed herein in accordance with the invention can have interior components that utilize various data detection and reduction techniques (e.g., face detection, eye detection, skin detection, flash detection).


The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.


Parts List




  • 110 Computer System


  • 112 Microprocessor-based Unit


  • 114 Display


  • 116 Keyboard


  • 118 Mouse


  • 120 Selector on Display


  • 122 Disk Drive Unit


  • 124 Compact Disk-Read Only Memory (CD-ROM)


  • 126 Floppy Disk


  • 127 Network Connection


  • 128 Printer


  • 130 Personal Computer Card (PC card)


  • 132 PC Card Reader


  • 134 Digital Camera


  • 136 Camera Docking Port


  • 138 Cable Connection


  • 140 Wireless Connection


  • 200 Digital Camera


  • 202 RGB CFA Image


  • 204 CFA Interpolation


  • 206 Full-Color Image


  • 208 Pixel Aspect Ratio Correction


  • 210 Corrected Full-Color Image


  • 212 Digital Camera


  • 214 RGB CFA Image


  • 216 CFA Interpolation and Resizing


  • 218 Resized Full-Color Image


  • 300 RGBP CFA Image


  • 302 Panchromatic Interpolation


  • 304 High-Resolution Panchromatic Image


  • 306 Low-Resolution Panchromatic Image



Parts List Cont'd




  • 308 Color Differences Generation


  • 310 Low-Resolution Color Decimation


  • 312 Low-Resolution RGB CFA Image


  • 314 Low-Resolution Color Differences CFA Image


  • 316 Color Differences CFA Interpolation and Resizing


  • 318 Corrected High-Resolution Color Differences Image


  • 320 Pixel Aspect Ratio Correction


  • 322 Corrected High-Resolution Panchromatic Image


  • 324 Color Differences and Panchromatic Image Summation


  • 326 Enhanced Full-Color Image


  • 328 High-Resolution Panchromatic Interpolation


  • 330 High-Resolution Panchromatic Image


  • 332 Low-Resolution Panchromatic Decimation


  • 334 Low-Resolution Panchromatic Interpolation


  • 336 Color Differences CFA Interpolation


  • 338 Low-Resolution Color Differences Image


  • 340 High-Resolution Resizing


  • 342 High-Resolution Color Differences Image


  • 344 Pixel Aspect Ratio Correction


  • 346 Pixel Aspect Ratio Correction


  • 348 Corrected Color Differences Image


  • 350 High-Resolution Resizing


  • 352 High-Resolution Resizing and Pixel Aspect Ratio Correction


  • 354 Color Differences CFA Interpolation and Pixel Aspect Ratio Correction


  • 356 Corrected Low-Resolution Color Differences Image


  • 358 High-Resolution Resizing


  • 360 Color Differences CFA Interpolation and High-Resolution Resizing



Parts List Cont'd




  • 362 High-Resolution Color Differences Image


  • 364 Pixel Aspect Ratio Correction


  • 366 Color Differences CFA Interpolation, High-Resolution Resizing, and Pixel Aspect Ratio Correction


Claims
  • 1. A method of forming an enhanced digital full-color image having a first pixel aspect ratio, comprising: (a) capturing an image using an image sensor having panchromatic pixels and color pixels corresponding to at least two color photoresponses wherein color and panchromatic pixels each have a second pixel aspect ratio different from the first pixel aspect ratio; (b) providing from the captured image a digital high-resolution panchromatic image and changing the aspect ratio of the panchromatic pixel values from the second pixel aspect ratio to the first pixel aspect ratio to produce a digital aspect corrected high-resolution panchromatic image; (c) providing from the captured image a digital low-resolution color differences color filter array image; (d) providing a digital aspect corrected high-resolution color differences image from the low-resolution color differences color filter array image; and (e) using the aspect corrected high-resolution panchromatic image and an aspect corrected high-resolution color differences image to produce the enhanced digital full-color image.
  • 2. The method of claim 1 wherein step (a) includes color pixels having the photoresponses red, green, and blue.
  • 3. The method of claim 1 wherein step (a) includes color pixels having the photoresponses cyan, magenta, and yellow.
  • 4. The method of claim 1, wherein step (c) includes producing a digital low-resolution panchromatic image from the high-resolution panchromatic image and using the low-resolution panchromatic image and the captured color pixels to produce the digital low-resolution color differences color filter array image.
  • 5. The method of claim 1, wherein step (d) includes color filter array interpolating the color differences pixel values.
  • 6. The method of claim 1, wherein step (d) includes changing the pixel aspect ratio of the color differences pixel values from the second pixel aspect ratio to the first pixel aspect ratio.
  • 7. The method of claim 1, wherein step (d) includes resizing the color differences pixel values from low-resolution to high-resolution.
  • 8. The method of claim 1 wherein the first pixel aspect ratio defines a square and the second pixel aspect ratio defines a non-square rectangle.
CROSS REFERENCE TO RELATED APPLICATIONS

Reference is made to commonly assigned U.S. patent application Ser. No. 11/341,206, filed Jan. 27, 2006 (U.S. Patent Application Publication 2007/0024934) by James E. Adams, Jr. et al., entitled "Interpolation of Panchromatic and Color Pixels", and U.S. patent application Ser. No. 11/564,451, filed Nov. 29, 2006 by James E. Adams, Jr. et al., entitled "Providing a Desired Resolution Color Image", the disclosures of which are incorporated herein.