The present invention relates generally to integrated circuit devices, and in particular, to methods of and devices for generating a digital image.
Receptors in the human eye are only capable of detecting light having wavelengths between approximately 400 nanometers (nm) and 700 nm. These receptors are of three different types, including receptors for red (R) light, receptors for green (G) light and receptors for blue (B) light. The representation of an image based upon the intensity of red, blue and green color components is commonly referred to as RGB. If a single wavelength of light is observed, the relative responses of these three types of receptors allow us to discern what is commonly referred to as the color of the light. This phenomenon is extremely useful in color video processing because it enables generating a range of colors by adding together various proportions of light from just three wavelengths.
An image to be displayed is broken down into an array of picture elements or pixels to be displayed. Generally, each pixel displays a proportion of red, green and blue light depending on the signals to be displayed. Many image detecting devices include a sensor which will detect only one color component for each pixel. However, when rendering a color image, the two missing color components at each pixel have to be interpolated based upon color components of other pixels. If this process is not performed appropriately, the produced image quality will be degraded by various aberrations, such as highly visible zipper effects and false color artifacts. A zipper effect refers to abrupt and unnatural changes in intensity between neighboring pixels. False color artifacts correspond to dots or streaks of colors which do not exist in the original image.
A method of generating a digital image is described. The method comprises detecting light from a scene to form an image; detecting an aberration in the image; and implementing a color filter array interpolator based upon the detected aberration in the image.
Another method of generating a digital image comprises establishing a plurality of implementations of a color filter array interpolator for generating a digital image; detecting light from a scene to form an image by way of a lens; detecting a mismatch between a resolution of the lens and a sensor array; and selecting an implementation of the plurality of implementations of a color filter array interpolator based upon the detection of the mismatch.
A device for generating a digital image is also described. The device comprises a lens for detecting light from a scene to form an image; a sensor array for generating pixels associated with the image; and a processing circuit detecting a mismatch between a resolution of the lens and the sensor array, and implementing a color filter array interpolator based upon the detection of the mismatch.
Turning first to
As will be described in more detail in reference to the remaining figures, the processing device 106 improves the quality of images generated by a device from light detected from a scene. The processing device 106 may be implemented in any type of digital imaging device. Further, the processing device may be implemented in a single integrated circuit device, or a plurality of integrated circuit devices of the digital imaging device. One type of integrated circuit device which may be used to implement the circuits and methods of generating a digital image may be a device having programmable resources, such as the device described in more detail in reference to
Many digital imaging devices, such as digital still cameras, acquire images using an image sensor overlaid with color filters as shown in
As will be described in more detail below, the intensity values of the two missing color components at each pixel must be interpolated from known intensity values in neighboring pixels to render a complete multi-color image from the resulting pixel array. This process, commonly referred to as demosaicking, is one of the critical tasks in digital image processing. If demosaicking is not performed appropriately, the produced image quality will be degraded by highly visible zipper effects and false color artifacts. While false color artifacts may appear anywhere on an image, zipper effects may appear in either a vertical orientation or a horizontal orientation. Both aberrations are due to aliasing (i.e. the fact that sample positions of sub-sampled color channels are offset spatially, so that high contrast edges lying between the sampling positions may affect the color components used in the interpolation process differently).
While a 5×5 matrix is preferably used to calculate weights and interpolate values for missing color components in a given pixel, it should be understood that methods may be adapted to be performed on a larger matrix than a 5×5 matrix, and the overall image comprises many more pixels than the 5×5 array of pixels. While references are made to single images which are generated, it should be understood that the light from an image may be sampled at a given frequency, where the sampled data may be processed to generate video data.
As the human eye is at least twice as sensitive to green light as to blue or red light, the green color channel is sampled with twice the frequency of the blue and red color channels. In the Bayer CFA pattern shown in
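By way of illustration only, the following C fragment (not part of the specification; the common RGGB phase is assumed) shows how a Bayer CFA assigns a single color component to each pixel position, with green sampled at twice the frequency of red or blue:

```c
/* Illustrative sketch, assuming the common RGGB Bayer phase: returns the
 * single color component sampled at pixel (row, col). Green occupies half
 * of all sites, i.e., twice the sampling frequency of red or blue. */
typedef enum { RED, GREEN, BLUE } color_t;

color_t bayer_color(int row, int col)
{
    if ((row & 1) == 0)
        return ((col & 1) == 0) ? RED : GREEN;   /* even rows: R G R G ... */
    else
        return ((col & 1) == 0) ? GREEN : BLUE;  /* odd rows:  G B G B ... */
}
```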
The purpose of interpolation is to find missing color components for a given pixel. Numerous techniques have been developed for CFA interpolation. These techniques offer a wide range of tradeoffs between the complexity and the quality of results. The complexity is often measured as instructions per pixel or some function of silicon real estate, such as gate counts. The quality of results is predominantly measured as a Signal-to-Noise Ratio (SNR) or a Peak Signal-to-Noise Ratio (PSNR). However, many conventional methods either lead to significant aberrations in the digital image or require significant processing capacity to perform the interpolation. The significant processing capacity required may limit the devices in which the methods may be implemented or increase the cost of implementing the methods. De-focused or imperfect lenses in an optical system, low-light exposure, and quick motion can lead to blurred, noisy images or video streams. In a digital camera, cell-phone, security camera or studio recording environment, the captured images and video streams may be enhanced to improve visual quality. This enhancement process involves noise reduction and edge enhancement.
Various circuits and methods set forth below enable dynamically selecting CFA interpolation architectures on a picture-by-picture basis according to variant scene conditions, with correlation to implementation conditions defined by the optics and sensor interfaces in a camera system. It should be noted that a picture could comprise different amounts or arrangements of pixel data, and could include one or more frames, fields, or screens, for example.
Chromatic aberrations pose a unique challenge for CFA interpolation techniques. Advanced CFA interpolation techniques attempt to reduce color artifacts, and to infer resolution-enhanced contours from sub-sampled color channels by combining information carried on different color channels. As chromatic aberrations separate the intensity transitions of different color channels, false contours are identified. Color channel saturation occurs when one or more color channels for a particular pixel reach the maximum value that can be represented digitally. Saturation is typically due to overexposure, and leads to color shifts towards yellow, cyan or magenta, and ultimately to white. Whereas saturation is typically gradual, a faithful rendering of high contrast shadows in saturated areas poses another challenge for CFA interpolation techniques. Thin darker lines may cause one of the color channels to return to a linear, unsaturated region of the sensor, while other channels remain saturated. This represents a color shift, which advanced CFA interpolation techniques (which suppress sharp chrominance transitions) may interpret as falsely detected specks on the image. The various methods and circuits set forth below provide a number of solutions, including means for detecting different photographic situations and enabling an adaptive CFA interpolation technique.
Turning now to
A color correction matrix (CCM) circuit 310 detects changes in color, and corrects what is commonly known as color casting. When the light illuminating a scene comes from an incandescent or fluorescent source, or from late-afternoon light causing red hues, the RGB spectrum may be offset. The CCM circuit 310 provides color balancing to render the external light as white. A gamma correction circuit 312 receives the output of the CCM circuit, and compensates for the non-linearity of a display. Because the perceived intensity of an image is non-linear, the gamma correction circuit 312 provides an offset to compensate for the non-linearity, which typically relates to intensity. The output of the gamma correction circuit 312 is coupled to a color space converter (CSC) circuit 314, which converts the RGB signal to a YCrCb signal having a luminance value and two chrominance values, which is the actual data associated with a YUV signal as described above. The YCrCb signal is coupled to a CFA enhancement circuit 316 comprising a noise reduction block 318 and an edge enhancement block 320. An output of the CFA enhancement circuit 316 is provided as an image to an output device 324 by way of an output interface 322.
Various software components 326 also enable the operation of the circuit blocks of
One implementation of interpolating missing colors will be described in reference to
The spatial differences SDn are then calculated as set forth in Equation (2):
Weights for each of the 4 groups are then calculated as shown in Equation (3):
Calculating weights based upon the sum of absolute differences rather than a variance, for example, reduces the arithmetic requirements. That is, because a variance requires squaring operations (and a standard deviation a square root calculation), the calculation of weights based upon a variance requires significantly more arithmetic operations. Any reduction in arithmetic operations will not only reduce the processing time, but also the hardware necessary to calculate the weights.
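Because Equations (2) and (3) are not reproduced above, the grouping of samples and the mapping from spatial differences to weights in the following C sketch are assumptions made purely for illustration; the point is that a sum of absolute differences needs only additions, subtractions and comparisons, whereas a variance-based weight also needs multiplications:

```c
#include <stdlib.h>

/* Illustrative sketch: the spatial difference of one chrominance group is
 * accumulated as a sum of absolute differences (SAD) of its member
 * samples -- adds, subtracts and compares only, no multiplies. */
unsigned spatial_difference(const int *samples, int n)
{
    unsigned sd = 0;
    for (int i = 1; i < n; i++)
        sd += (unsigned)abs(samples[i] - samples[i - 1]);
    return sd;
}

/* Assumed mapping for this sketch: a larger spatial difference (a likely
 * edge across the group) yields a smaller weight; the +1 keeps every
 * weight non-zero so the later normalization is well defined. */
unsigned weight_from_sd(unsigned sd, unsigned max_sd)
{
    return (max_sd - sd) + 1;
}
```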
As can be seen in
Normalized weights are then defined as shown in Equation (4):
where k is a chrominance group. However, the weight normalization step in Equation (4) requires the use of a wide divider, which requires a significant amount of hardware to implement. Instead of using a divider and 4 multipliers to calculate the normalized weights as would be required by Equation (4), a weight distribution network may be used. Various processing techniques for processing video and other data use the weighted sum operation as set forth in Equation (5):
where "wi" are the weights applied to the values "xi" and the sum of the weights is equal to one. However, if the sum of the weights is not equal to one, normalized weights wi/(w1+w2+ . . . +wN) should be used instead of wi in Equation (5).
However, the calculation of N normalized weights requires N division operations. In hardware applications, such a calculation of normalized weights may be prohibitively costly. Assuming that the number of weights is N=2^n, where n is a positive integer, the method set forth below enables normalizing weights by iterative sub-division, without division, according to one implementation of the present invention. Before providing a more general framework for the weight normalization technique, the case where only two weights have to be normalized (i.e. N=2, n=1) will be considered first. "a" and "b" denote the weights to be normalized, "x" and "y" denote the corresponding normalized values, and "q" is the expected sum of "x" and "y" such that:
where “q” shall be set to “1” for the normalization of two weights.
The normalized values "x" and "y" are then calculated in an iterative fashion, with each step refining the results by another binary digit. In particular, "i," "x" and "y" are initialized to a value of "0" at a block 602. The value of "q" is set equal to "q/2" and "i" is incremented to "i+1" at a block 604. It is then determined whether a<b at a block 606. If so, the values are modified such that "y=y+q," "b=b−a," and "a=2*a." If not, the values are modified such that "x=x+q," "a=a−b," and "b=b*2." It is then determined whether i<Bq, where "Bq" denotes the number of bits used to represent "q," at a block 612. If so, the process returns to block 604. Otherwise, the process is finished. In a digital system where "q" is represented as a binary integer, the process converges in "Bq" cycles.
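A minimal C model of the two-weight case follows; it is illustrative only, mirroring blocks 602-612 in software for a technique intended for hardware, and with integer arithmetic the computed sum x+y may fall short of q by one least significant bit:

```c
#include <stdio.h>

/* Software model of the division-free normalization of two weights a and
 * b. q is the expected sum x + y (a power of two); the loop produces one
 * binary digit of the result per iteration. */
void normalize_two(unsigned a, unsigned b, unsigned q,
                   unsigned *x, unsigned *y)
{
    *x = 0;
    *y = 0;
    while (q > 1) {
        q >>= 1;         /* q = q/2: the next binary digit to assign */
        if (a < b) {     /* larger weight b claims this digit for y */
            *y += q;
            b -= a;
            a <<= 1;     /* a = 2*a, so that a + b stays constant */
        } else {         /* otherwise x claims the digit */
            *x += q;
            a -= b;
            b <<= 1;
        }
    }
}

int main(void)
{
    unsigned x, y;
    normalize_two(1, 3, 256, &x, &y);   /* split q=256 in the ratio 1:3 */
    printf("x=%u y=%u\n", x, y);        /* prints x=64 y=191 (truncated) */
    return 0;
}
```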
The method of normalizing weights may be extended to a number of weights equal to some other power of 2. In particular, the block diagram of
Because division by two in hardware can be implemented at no cost, the method for weight normalization set forth above can be mapped easily to either parallel or serial hardware implementations using only comparators, multiplexers and adders/subtractors. Additional details for implementing weight normalization according to the present invention may be found in U.S. Pat. No. 8,484,267, the entire patent of which is incorporated herein by reference.
Finally, after the normalized weights are calculated as set forth in Equation 4, the green color component G13 is calculated for block 13:
The other missing green color components for red blocks are similarly calculated based upon the value for red color components, where red intensity values are substituted for the blue intensity values in Equation (7). The other values of missing green color components are calculated for the 5×5 matrix by establishing chrominance groups as described in reference to
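The equation by which G13 is formed from the normalized weights is not reproduced above; purely as an illustration, one plausible form consistent with the surrounding text is a weighted average of the four neighboring green samples:

```c
/* Illustrative sketch only: forms G13 as a weighted average of the four
 * neighboring green samples, one per chrominance group, with weights
 * assumed to be already normalized to sum to 1.0 (in hardware, the
 * division-free procedure above would supply them). */
double interpolate_green(const double g[4], const double w_norm[4])
{
    double g13 = 0.0;
    for (int k = 0; k < 4; k++)
        g13 += w_norm[k] * g[k];
    return g13;
}
```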
Once all of the green color components are determined for the original 5×5 matrix, missing red and blue color components of the original 5×5 matrix may then be interpolated, using a technique called the smooth hue transition technique. The smooth hue transition heuristics take advantage of hue or chrominance values (also commonly called chroma values) typically having lower spatial frequencies than those of luminance values, as well as the human eye being less sensitive to changes in hue than in intensity. Using sample positions introduced in
B12=G12/2*(B11/G11+B13/G13)
B16=G16/2*(B11/G11+B21/G21)
B17=G17/4*(B11/G11+B13/G13+B21/G21+B23/G23). (8)
That is, for each pixel (for which a blue color component is to be interpolated) with two adjacent pixels in a row having a blue color component, such as pixel 12, the B12 color component is interpolated based upon the blue color components in the two adjacent pixels in the row. For each pixel with two adjacent pixels having a blue color component in a column, such as pixel 16, the blue color component is interpolated based upon the blue color components in the two adjacent pixels in the column. For a pixel that does not have any adjacent pixels having known blue color components in the row or column containing the pixel, such as pixel 17, the blue color component is calculated based upon the four blue color components which are diagonal neighbors of the pixel. Similarly, red pixel interpolation is performed according to Equations (9):
R8=G8/2*(R7/G7+R9/G9)
R12=G12/2*(R7/G7+R17/G17)
R13=G13/4*(R7/G7+R9/G9+R17/G17+R19/G19). (9)
The advantage of the smooth hue transition method is an improved suppression of color artifacts. However, the division operations required in Equations (8) and (9) may introduce outlier specks, and pose a problem in very large scale integration (VLSI) implementations. Also, a digital signal processor (DSP) implementation is hindered by frequent branching due to the handling of the division by 0 exception.
Accordingly, a smooth hue transition with logarithmic domain technique may be used in interpolating red and blue color components. In the logarithmic domain, subtraction is used in place of division, which alleviates the problems stemming from division operations. The advantages of the smooth hue transition with logarithmic domain technique include an improved suppression of color artifacts, a reduced number of arithmetic operations, a calculation requiring only additions and subtractions, and the use of only 2 line buffers.
Blue pixel interpolation using a smooth hue transition with logarithmic domain technique is performed according to Equations (10):
B12=G12+0.5*(B11−G11+B13−G13)
B16=G16+0.5*(B11−G11+B21−G21)
B17=G17+0.25*(B11−G11+B13−G13+B21−G21+B23−G23) (10)
Similarly, red pixel interpolation using a smooth hue transition with logarithmic domain technique is performed according to Equations (11):
R8=G8+0.5*(R7−G7+R9−G9)
R12=G12+0.5*(R7−G7+R17−G17)
R13=G13+0.25*(R7−G7+R9−G9+R17−G17+R19−G19) (11)
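Because Equations (10) and (11) require only additions, subtractions and halving or quartering, they map directly to simple arithmetic. The following C sketch applies them using the 5×5 numbering of the text (known blue samples at positions 11, 13, 21 and 23; known red samples at 7, 9, 17 and 19); plain integer division stands in for the shifts a hardware implementation would use:

```c
/* Sketch of Equations (10)-(11): smooth hue transition in the logarithmic
 * domain. G, R and B each hold one color plane of the 5x5 matrix, indexed
 * 1..25 as in the text; known samples are loaded on entry and missing
 * values are written in place. Division is used instead of shifts so the
 * C is well defined for negative color differences. */
void smooth_hue_log(const int G[26], int R[26], int B[26])
{
    /* Equation (10): blue at a row, a column, and a diagonal site */
    B[12] = G[12] + (B[11] - G[11] + B[13] - G[13]) / 2;
    B[16] = G[16] + (B[11] - G[11] + B[21] - G[21]) / 2;
    B[17] = G[17] + (B[11] - G[11] + B[13] - G[13]
                   + B[21] - G[21] + B[23] - G[23]) / 4;

    /* Equation (11): red at a row, a column, and a diagonal site */
    R[8]  = G[8]  + (R[7] - G[7] + R[9] - G[9]) / 2;
    R[12] = G[12] + (R[7] - G[7] + R[17] - G[17]) / 2;
    R[13] = G[13] + (R[7] - G[7] + R[9] - G[9]
                   + R[17] - G[17] + R[19] - G[19]) / 4;
}
```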
It should be noted that the various equations set forth above would equally apply to a cyan, magenta, yellow representation. Further, the equations would also apply to a four color representation, where the various missing color components would be generated based upon either two vertically adjacent pixels, two horizontally adjacent pixels, or four diagonally adjacent pixels as set forth in the equations above.
After generating an intermediate digital image in which each pixel includes a value for each color component based upon the calculated weights as shown in
A plurality of registers is coupled to receive the outputs of the various filters. In particular, a register 820 is coupled to receive the output of the low-pass filter 808, while a register 822 delays the original red samples so that the filtered and the non-filtered red samples are in phase. Similarly, a register 824 is coupled to receive the output of the low-pass filter 810, and a register 826 delays the original blue samples. Finally, a register 828 is coupled to receive the output of the low-pass filter 812, while a register 830 delays the original green samples. A multiplexer network having multiplexers 814-818 is coupled to select, for each pixel, either the outputs of the delay registers or the outputs of the corresponding low-pass filters. A control block 832 evaluates the data and determines whether to select the filtered data or the unfiltered data. Registers in block 832 should be deep enough to store at least one color component for at least 3 pixel values of a row to enable identifying horizontal zipper effects as set forth below.
The multiplexing network is controlled by the control block 832. Each of the low-pass filters 808-812, the registers 820-830, and the control block 832 is enabled by an enable signal. The control block 832 is also coupled to receive an active video signal indicating that the data coupled to the horizontal post-processing block is valid data. The resulting digital image may comprise pixels having both filtered and unfiltered pixel data.
The control block 832 evaluates the intermediate digital image to determine whether there is any horizontal zipper effect which could be eliminated, and controls multiplexers 814-818 to either pass on the original input color components generated at the outputs of filters 802-806 or filtered color components at the outputs of filters 808-812. The control block 832 may be a simplified Nyquist frequency detector block, for example, where the Nyquist frequency refers to the spatial sampling frequency, fs, of the green channel, or luminance channel corresponding to the RGB or CMY inputs. The filters 808-812 may comprise low-pass filters which are designed to suppress the Nyquist frequency but have minimal attenuation below fs/2.
The determination of a zipper effect is a problem associated with luminance. That is, because the zipper effect relates to abrupt changes in intensity between neighboring pixels, the zipper effect is more easily detected by luminance values. Therefore, in order to identify a zipper effect, the original RGB values are converted to luminance (Y) values according to the converter Equation (12):
Y=0.299R+0.587G+0.114B. (12)
However, in order to reduce the complexity of the hardware required to make the conversion to luminance values, the luminance values are generated instead according to the converter Equation (13):
Y=0.25R+0.625G+0.125B, (13)
where multipliers required by Equation (12) may be replaced by bit-shift operations, making the RGB-to-Y converter easier to implement in hardware.
RGB-to-Y conversion may be followed by kerning, or quantizing Y down to a programmable number of bits, Y′. Kerning is a truncation process where a programmable number of least significant bits (LSBs) are dropped. By dropping some of the less significant bits, local noise is suppressed to prevent the outputs from frequently switching between the filtered and the original outputs. According to one implementation of the invention, N−4 bits are used to represent the Y′ values, where N is the number of bits in the binary representation of the original sensor data.
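A C sketch of the conversion and kerning steps follows; it realizes the coefficients of Equation (13) with shift-and-add operations (0.625G = G/2 + G/8) and then drops four least significant bits, matching the N−4 bit representation of Y′ described above:

```c
/* Equation (13) by shifts and adds: Y = 0.25R + 0.625G + 0.125B, followed
 * by kerning, i.e., truncating the four least significant bits to obtain
 * the quantized luminance Y' used by the zipper detector. */
unsigned rgb_to_y_kerned(unsigned r, unsigned g, unsigned b)
{
    unsigned y = (r >> 2) + (g >> 1) + (g >> 3) + (b >> 3);
    return y >> 4;   /* Y': local noise suppressed, fewer output toggles */
}
```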
The presence of Nyquist frequency, which would indicate a zipper effect condition, is detected by applying Equation (14):
sgn(Y′(k−2)−Y′(k−1)) XOR sgn(Y′(k−1)−Y′(k)), (14)
where the “sgn” function is a mathematical function that extracts the sign of a real number. The result of Equation (14) indicates whether the intensity has three alternating high and low values. Equation (14) may be implemented by the following pseudo-code:
If ((Y′(t−2)<=Y′(t−1)) and (Y′(t−1)<=Y′(t))) or ((Y′(t−2)>=Y′(t−1)) and (Y′(t−1)>=Y′(t))) then select the original color components, else select the filtered color components.
The filtered values output by the low-pass filters 808-812 may be calculated according to Equation (15):
Rf(t−1)=0.25*Ri(t−2)+0.5*Ri(t−1)+0.25*Ri(t)
Gf(t−1)=0.25*Gi(t−2)+0.5*Gi(t−1)+0.25*Gi(t)
Bf(t−1)=0.25*Bi(t−2)+0.5*Bi(t−1)+0.25*Bi(t). (15)
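The selection logic described above may be modeled as in the following C sketch (shown for one channel; red, green and blue are treated identically): Equation (14) flags positions where three consecutive kerned luminance values alternate, and the (0.25, 0.5, 0.25) filter of Equation (15) is selected only at those positions:

```c
#include <stdbool.h>

/* Equation (14): true when Y' values alternate high-low-high or
 * low-high-low, the Nyquist-frequency pattern of a zipper artifact. */
static bool zipper_detected(unsigned y0, unsigned y1, unsigned y2)
{
    bool monotonic = (y0 <= y1 && y1 <= y2) || (y0 >= y1 && y1 >= y2);
    return !monotonic;
}

/* Equation (15): (0.25, 0.5, 0.25) low-pass filter, shifts and adds only. */
static unsigned lowpass(unsigned x0, unsigned x1, unsigned x2)
{
    return (x0 + 2u * x1 + x2) >> 2;
}

/* Output for position t-1 of one color channel: the filtered sample on a
 * detected zipper, the original (delayed) sample otherwise. */
unsigned select_sample(unsigned y0, unsigned y1, unsigned y2,
                       unsigned x0, unsigned x1, unsigned x2)
{
    return zipper_detected(y0, y1, y2) ? lowpass(x0, x1, x2) : x1;
}
```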
Because the horizontal post-processing stage only performs horizontal processing, no line buffers are necessary. That is, because the data associated with the image is processed based upon rows of the matrix, the data is already required to be stored in memory buffers, and no additional memory buffers are needed. Because the low-pass filters may be implemented using coefficients (0.25, 0.5, 0.25) as set forth above in Equation (15), the circuit of
While the use of chrominance groups set forth above in reference to
CFA interpolation technique parameters may be set during assembly or maintenance to ensure the CFA sensor and optics (i.e. a lens) are matched to the specific task. Instead of using all chrominance groups as set forth in
Turning now to
R13=G13+¼[(R7−G7)+(R9−G9)+(R17−G17)+(R19−G19)], and
B13=G13+¼[(B7−G7)+(B9−G9)+(B17−G17)+(B19−G19)] (16)
where G7, G9, G17, G19, and G13 are G pixels interpolated during Stage 1. On sharp edges with little Green but high Red and/or Blue contrast, the above interpolation technique may produce pronounced zipper and chroma artifacts. An improvement to Stage 2 for calculating red and blue values is to use only two of the four neighboring pixels in diagonally oblique positions, preferably the ones with G values most similar to the G value of the center pixel. That is, when determining R13 for example, the differences between each of the G values G7, G9, G17 and G19 and the G value G13 would be compared. The two G values which are closest to G13 are selected, and the average of the differences between the selected G values and their corresponding R values is added to G13 of Equation (16). Accordingly, R13 will be determined according to the equation R13=G13+½[(R7−G7)+(R19−G19)], where G7 and G19 of the group G7, G9, G17 and G19 are closest to G13. This technique produces superior results on images with sharp chroma transitions.
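A C sketch of this Stage 2 refinement follows (illustrative only; the neighbor ordering in the arrays is an assumption): it selects the two diagonal neighbors whose G values are closest to the center G value and averages only their color differences:

```c
#include <stdlib.h>

/* Stage 2 refinement sketch: gn[] and rn[] hold the G and R values of the
 * four diagonal neighbors (sites 7, 9, 17 and 19 in the text), gc is the
 * interpolated G value of the center pixel. The two neighbors with G
 * values closest to gc are chosen, and the average of their color
 * differences is added to gc. */
int red_at_center(int gc, const int gn[4], const int rn[4])
{
    int a = 0, b = 1;   /* indices of the two closest G values so far */
    for (int k = 1; k < 4; k++) {
        if (abs(gn[k] - gc) < abs(gn[a] - gc)) { b = a; a = k; }
        else if (k != a && abs(gn[k] - gc) < abs(gn[b] - gc)) { b = k; }
    }
    return gc + ((rn[a] - gn[a]) + (rn[b] - gn[b])) / 2;
}
```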
A similar method is implemented to improve the clarity and resolution of the interpolation of Red and Blue pixels at native Green sites in Stage 3. Instead of uniform averaging between the four closest neighbors, edge-dependent, or edge-adaptive, interpolation is employed. For most CFA interpolation techniques, the red and blue interpolation at G sites as shown in
R14=G14+¼[(R13−G13)+(R9−G9)+(R15−G15)+(R19−G19)], and
B14=G14+¼[(B13−G13)+(B9−G9)+(B15−G15)+(B19−G19)],
where G13, G9, G15, G19 are Green pixels interpolated by Stage 1, and R13, R9, R15, R19 and B13, B9, B15, B19 are samples interpolated by Stage 2.
Turning now to
if (dH<dV), then R14=G14+½[(R13−G13)+(R15−G15)],
else R14=G14+½[(R9−G9)+(R19−G19)].
Unlike the modification in Stage 2, which considers any two of the 4 pixels used for calculating a missing pixel value, the modification in Stage 3 only considers pairs of pixels in either the horizontal direction (e.g. pixel blocks 13 and 15) or the vertical direction (pixel blocks 9 and 19), and takes the average of one of those two pairs. However, in the case of video processing (i.e., not static pictures but ones correlated in the temporal domain), at regions where dH and dV are close, flickering may appear. This problem is addressed by using a weighted sum instead of a binary decision. For edge-adaptive interpolation, edge orientation is used as a weight when combining constituents according to Equation (19):
R14=G14+½*[(dH+ε)*((R9−G9)+(R19−G19))+(dV+ε)*((R13−G13)+(R15−G15))]/(dH+dV+2ε), (19)
where ε is 1 LSB. The various techniques implemented according to
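The weighted blend of Equation (19) may be sketched in C as follows for the R14 example; the definitions of dH and dV accompany a figure not reproduced here, so they are taken as given gradient estimates:

```c
/* Equation (19) sketch: edge-adaptive weighting for R14 at a native green
 * site. The pair lying across the weaker gradient dominates, and eps
 * (1 LSB) keeps the denominator non-zero. Wide intermediates avoid
 * overflow in the products. */
int red_at_green_site(int g14, int dH, int dV,
                      int r13, int g13, int r15, int g15,
                      int r9,  int g9,  int r19, int g19)
{
    const int eps = 1;
    long vert = (long)(dH + eps) * ((r9  - g9)  + (r19 - g19));
    long horz = (long)(dV + eps) * ((r13 - g13) + (r15 - g15));
    return g14 + (int)((vert + horz) / (2L * (dH + dV + 2 * eps)));
}
```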
Turning now to
Turning now to
The device of
In some FPGAs, each programmable tile includes a programmable interconnect element (INT) 1311 having standardized connections to and from a corresponding interconnect element in each adjacent tile. Therefore, the programmable interconnect elements taken together implement the programmable interconnect structure for the illustrated FPGA. The programmable interconnect element 1311 also includes the connections to and from the programmable logic element within the same tile, as shown by the examples included at the top of
For example, a CLB 1302 may include a configurable logic element (CLE) 1312 that may be programmed to implement user logic plus a single programmable interconnect element 1311. A BRAM 1303 may include a BRAM logic element (BRL) 1313 in addition to one or more programmable interconnect elements. The BRAM includes dedicated memory separate from the distributed RAM of a configuration logic block. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured implementation, a BRAM tile has the same height as five CLBs, but other numbers may also be used. A DSP tile 1306 may include a DSP logic element (DSPL) 1314 in addition to an appropriate number of programmable interconnect elements. An IOB 1304 may include, for example, two instances of an input/output logic element (IOL) 1315 in addition to one instance of the programmable interconnect element 1311. The location of connections of the device is controlled by configuration data bits of a configuration bitstream provided to the device for that purpose. The programmable interconnects, in response to bits of a configuration bitstream, enable connections comprising interconnect lines to be used to couple the various signals to the circuits implemented in programmable logic, or other circuits such as BRAMs or the processor.
In the pictured implementation, a columnar area near the center of the die is used for configuration, clock, and other control logic. The config/clock distribution regions 1309 extending from this column are used to distribute the clocks and configuration signals across the breadth of the FPGA. Some FPGAs utilizing the architecture illustrated in
Note that
Turning now to
In the pictured implementation, each memory element 1402A-1402D may be programmed to function as a synchronous or asynchronous flip-flop or latch. The selection between synchronous and asynchronous functionality is made for all four memory elements in a slice by programming Sync/Asynch selection circuit 1403. When a memory element is programmed so that the S/R (set/reset) input signal provides a set function, the REV input terminal provides the reset function. When the memory element is programmed so that the S/R input signal provides a reset function, the REV input terminal provides the set function. Memory elements 1402A-1402D are clocked by a clock signal CK, which may be provided by a global clock network or by the interconnect structure, for example. Such programmable memory elements are well known in the art of FPGA design. Each memory element 1402A-1402D provides a registered output signal AQ-DQ to the interconnect structure. Because each LUT 1401A-1401D provides two output signals, O5 and O6, the LUT may be configured to function as two 5-input LUTs with five shared input signals (IN1-IN5), or as one 6-input LUT having input signals IN1-IN6.
In the implementation of
Turning now to
Turning now to
The various elements of the methods of
It can therefore be appreciated that new methods of and devices for generating a digital image have been described. It will be appreciated by those skilled in the art that numerous alternatives and equivalents will be seen to exist which incorporate the disclosed invention. As a result, the invention is not to be limited by the foregoing implementations, but only by the following claims.