The invention relates generally to color digital images. More particularly, the invention relates to a fast and robust system and method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique that minimizes a multi-term cost function.
Digital cameras have an inherent limit on their spatial resolution, governed by the optical lens and the CCD array. To improve image quality, super-resolution can be used to fuse multiple low-resolution images of the same scene into a relatively high-resolution image.
In recent years, various super-resolution techniques have been developed for estimating a high-resolution image from a set of low-resolution images. It was demonstrated early on that the aliasing effects in the low-resolution images can be removed and a high-resolution fused image recovered, as long as there is sub-pixel motion among the low-resolution input images. Even though the relatively clean frequency-domain description of super-resolution provided near-desired results for very simple imaging scenarios, it became evident that super-resolution in general is computationally complex and numerically ill-behaved, necessitating the development of more sophisticated super-resolution methods.
It is important to note that almost all super-resolution methods have been designed to increase the resolution of a single-channel (monochromatic) image, and to date there is very little work addressing the problem of color super-resolution. In addressing color super-resolution, one method uses a set of previously demosaiced color low-resolution frames and fuses them together to enhance their spatial resolution. The typical solution involves applying monochromatic super-resolution algorithms to each of the color channels independently, while using the color information to improve the accuracy of motion estimation. Another approach is to transform the problem to a different color space, where chrominance layers are separated from luminance, and super-resolution is applied only to the luminance channel. Both of these methods are suboptimal because they do not fully exploit the correlation across color bands; ignoring the relation between different color channels results in color artifacts in the super-resolved images. Moreover, even proper treatment of the relation between the color layers is not sufficient for removing color artifacts if the measured images are mosaiced.
For demosaicing, a color image is typically represented by combining three separate monochrome images. Ideally, each pixel reflects three data measurements, one for each of the color bands. In practice, to reduce production costs, many digital cameras have only one color measurement (red, green, or blue) per pixel. The detector array is a grid of CCDs, each made sensitive to one color by placing a color-filter array (CFA) in front of the CCD. The Bayer pattern is a very common example of such a color filter. The values of the missing color bands at every pixel are often synthesized using some form of interpolation from neighboring pixel values to estimate the underdetermined color values. This process is known as demosaicing.
Numerous demosaicing methods have been proposed over the years to solve this under-determination problem. Linear interpolation of known pixel values, applied to each color band independently, is one method of estimating the unknown pixel values. This approach does not consider important information about the correlation between the color bands and results in substantial color artifacts. Because the Bayer pattern has twice as many green pixels as red or blue pixels, the red and blue channels are down-sampled twice as much as the green channel. Therefore, independent interpolation of the green band results in a more reliable reconstruction than that of the red or blue bands. From this observation, together with the assumption that the red/green and blue/green ratios are similar for neighboring pixels, the smooth hue transition method evolved.
There is negligible correlation between the values of neighboring pixels located on different sides of an edge in an image. Although the smooth hue transition method is logical for smooth regions of the reconstructed image, it is not useful in the high-frequency (edge) areas. Consequently, gradient-based methods were developed that do not interpolate across the edges of an image; this non-iterative approach uses the second derivative of the red and blue channels to estimate the edge direction in the green channel, and the green channel is then used to compute the missing values in the red and blue channels.
A modified gradient-based method was subsequently developed, in which the second derivative of the green channel and the first derivative of the red (or blue) channel are used to estimate the edge direction in the green channel. This approach was later combined with the smooth hue method to provide an iterative method in which the smooth hue interpolation is done with respect to the local gradient computed in eight directions about the pixel of interest. A second stage using anisotropic inverse diffusion further enhanced the quality of the reconstructed image. This two-step approach of interpolation followed by an enhancement step has been widely adopted, where spatial and spectral correlations among neighboring pixels are exploited to define the interpolation step, while adaptive median filtering is used as the enhancement step. Other iterative implementations of median filtering have been used as the enhancement step, taking advantage of the homogeneity assumption for neighboring pixels.
Iterative maximum a posteriori (MAP) methods are another important category of demosaicing methods. A MAP algorithm with a smooth chrominance prior has been developed, where the original image is transformed to the YIQ representation. The chrominance interpolation is performed using isotropic smoothing, and the luminance interpolation is done using edge directions computed in a steerable wavelet pyramid structure.
Almost all of the demosaicing methods are based on one or more of the following assumptions.
To date, the most sophisticated demosaicing methods have failed to produce satisfactory results when severe aliasing is present in the color-filtered image. Such severe aliasing occurs with inexpensive commercial still or video digital cameras having a small number of CCD pixels, where the color artifacts worsen as the number of CCD pixels decreases.
The poor quality of single-frame demosaiced images creates a need for improved multi-frame methods, in which the information from several low-quality images is fused to produce a high-quality demosaiced image.
Accordingly, there is a need to develop more effective and efficient methods of image reconstruction to overcome the current shortcomings in the art.
The invention is a fast and robust hybrid method of super-resolution and demosaicing, based on maximum a posteriori estimation by minimizing a multi-term cost function. The invention is a method of creating a super-resolved color image from multiple lower-resolution color images. Specifically, combining a data fidelity penalty term, a spatial luminance penalty term, a spatial chrominance penalty term, and an inter-color dependencies penalty term creates an overall cost function. The data fidelity penalty term is an L1 norm penalty term that enforces similarity between the raw data and a high-resolution image estimate; the spatial luminance penalty term encourages sharp edges in the luminance component of the high-resolution image; the spatial chrominance penalty term encourages smoothness in the chrominance component of the high-resolution image; and the inter-color dependencies penalty term encourages homogeneity of edge location and orientation across the different color bands. A steepest descent optimization is applied to the overall cost function for minimization, using the steps of applying a derivative to a first color band while holding a second and a third color band constant, applying a derivative to the second color band while holding the first and the third color band constant, and applying a derivative to the third color band while holding the first and the second color band constant.
In one embodiment of the invention, the method of super-resolution and demosaicing, based on maximum a posteriori estimation by minimizing a multi-term cost function, is a computer-implemented method.
In another embodiment of the invention, the method of super-resolution and demosaicing, based on maximum a posteriori estimation by minimizing a multi-term cost function, is implemented in a digital camera.
The data fidelity penalty term is applied to a space-invariant point spread function and to translational, affine, projective, and dense motion models. The data fidelity penalty term is enabled by fusing the lower-resolution images to estimate a blurred higher-resolution image, and estimating a deblurred image from the blurred higher-resolution image, where the blurred higher-resolution image is a weighted mean of all measurements of a given pixel after zero filling and motion compensation. Further, the data fidelity penalty term uses motion estimation errors with the L1 norm in a likelihood fidelity term, where the L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation.
The spatial luminance penalty term uses bilateral-TV regularization of a luminance image formed as a weighted sum of the color vectors, with a horizontal pixel-shift term, a vertical pixel-shift term, and a scalar weight between 0 and 1, where the color vectors include red, green, and blue vectors. This regularization term spatially regularizes the luminance component, resulting in sharp edges and forcing interpolation along the edges rather than across them.
The spatial chrominance penalty term uses regularization based on an L2 norm to smooth the chrominance component.
The inter-color dependencies penalty term is a vector outer product norm of all pairs of neighboring pixels, where this regularization term is used to force similar edge location and orientation in different color channels.
Direct image operators including blur, high-pass filtering, masking, down-sampling, and shift are implemented in place of explicit matrices for processing speed and memory efficiency.
The lower-resolution color images include color filtered images, compressed color images, compressed color filtered images, and an image sequence with color artifacts.
The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided by the Patent and Trademark Office upon request and payment of the necessary fee.
The objectives and advantages of the present invention will be understood by reading the following detailed description in conjunction with the drawing, in which:
a shows a high-resolution image captured by a 3-CCD camera.
b shows the image of
c shows the image of
d shows the image of
e shows the image of
f shows the image of
a-b show a computer-implemented system and method according to the current invention.
a shows a color image with full RGB values.
b shows a Bayer filtered low-resolution image of
c shows a Bayer filtered low-resolution image of
d shows a Bayer filtered low-resolution image of
a shows a low-resolution image reconstructed with luminance regularization.
b shows a low-resolution image reconstructed with inter-color regularization.
c shows a low-resolution image reconstructed with chrominance regularization.
d shows a low-resolution image reconstructed from low-resolution demosaicing by applying combined smooth hue and gradient-based reconstruction and super-resolution methods.
a shows raw (Bayer filtered) images reconstructed from super-resolution.
b shows raw (Bayer filtered) images reconstructed from inter-color and luminance regularization.
c shows raw (Bayer filtered) images reconstructed from chrominance and inter-color regularization.
d shows raw (Bayer filtered) images reconstructed from chrominance and luminance regularization.
a shows a low-resolution image captured from a commercial webcam.
b shows the image of
c shows the image of
d shows the image of
e shows a zoomed image of
f shows a zoomed image of
g shows a zoomed image of
h shows a zoomed image of
a shows a low-resolution image.
b shows the low-resolution image of
c shows the low-resolution image of
d shows the low-resolution image of
a shows a zoomed image of
b shows a zoomed image of
c shows a zoomed image of
d shows a zoomed image of
a shows a low-resolution image demosaiced using a gradient-based method.
b shows a low-resolution image demosaiced using a modified gradient-based reconstruction method applied to each color band, combining gradient-based and smooth hue methods.
c shows the super-resolution method of the current invention applied to 31 low-resolution images and demosaiced using the method of
d shows the super-resolution method of the current invention applied to 31 low-resolution images and demosaiced using the method of
e shows the super-resolution method of the current invention applied to undemosaiced raw low-resolution images.
f shows the multi-frame and super-resolution of color images method of the current invention applied to undemosaiced raw low-resolution images.
a shows a zoomed image of
b shows a zoomed image of
c shows a zoomed image of
d shows a zoomed image of
e shows a zoomed image of
f shows a zoomed image of
a-d show multi-frame color super-resolution implemented on real data sequences according to the present invention.
a-b show multi-frame color super-resolution implemented on real data sequences according to the present invention.
a-e show multi-frame color super-resolution implemented on real data sequences according to the present invention.
Although the following detailed description contains many specifics for the purposes of illustration, anyone of ordinary skill in the art will readily appreciate that many variations and alterations to the following exemplary details are within the scope of the invention. Accordingly, the following preferred embodiment of the invention is set forth without any loss of generality to, and without imposing limitations upon, the claimed invention.
which can be expressed as
Y=TX+V
The vectors Xi and Yi(k) represent the ith band (R, G, or B) of the high-resolution color frame and of the kth low-resolution frame, respectively, after lexicographic ordering. Matrix F(k) is the geometric motion operator between the high-resolution and low-resolution frames, and H(k) is a blur matrix modeling the camera's point spread function. The matrix Di(k) represents the down-sampling operator, which includes both the color-filtering and the CCD down-sampling operations. The geometric motion, blur, and down-sampling operators are combined into the operator Ti(k), referred to here as the system matrix. The vector Vi(k) is the system noise, and N is the number of available low-resolution frames.
The high-resolution color image (X) is of size [12r²M²×1], where r is the resolution enhancement factor. The size of the vectors VG(k) and YG(k) is [2M²×1], and the vectors VR(k), YR(k), VB(k) and YB(k) are of size [M²×1]. The geometric motion and blur matrices are of size [4r²M²×4r²M²]. The down-sampling and system matrices are of size [2M²×4r²M²] for the green band and [M²×4r²M²] for the red and blue bands.
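For illustration, a minimal sketch of this forward model follows, assuming purely translational motion, a Gaussian PSF, and an RGGB Bayer pattern; the function names, parameters, and the particular CFA layout are illustrative assumptions, not part of the invention.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def bayer_mask(shape):
    """Boolean sampling masks for an assumed RGGB Bayer pattern."""
    mask = np.zeros(shape + (3,), dtype=bool)
    mask[0::2, 0::2, 0] = True   # red sites
    mask[0::2, 1::2, 1] = True   # green sites (twice as many as red or blue)
    mask[1::2, 0::2, 1] = True
    mask[1::2, 1::2, 2] = True   # blue sites
    return mask

def forward_model(X, dx, dy, sigma, r, noise_std=0.0, rng=None):
    """Simulate one observation Y(k) = D(k) H(k) F(k) X + V(k).

    X is the high-resolution RGB image; (dx, dy) is the translational motion F(k);
    sigma is the Gaussian blur H(k); decimation by r and the Bayer mask together
    form D(k); noise_std scales the system noise V(k).
    """
    warped = np.stack([shift(X[..., c], (dy, dx), order=1) for c in range(3)], axis=-1)
    blurred = np.stack([gaussian_filter(warped[..., c], sigma) for c in range(3)], axis=-1)
    decimated = blurred[::r, ::r, :]                       # CCD down-sampling
    mosaiced = decimated * bayer_mask(decimated.shape[:2]) # color filtering
    if noise_std > 0:
        rng = rng or np.random.default_rng(0)
        mosaiced = mosaiced + noise_std * rng.standard_normal(mosaiced.shape)
    return mosaiced
```

Implemented this way, the operators Ti(k) never need to be formed as explicit matrices, which is the point made later about implementing blur, masking, down-sampling, and shift directly.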
Considered separately, super-resolution and demosaicing models are special cases, and in the super-resolution case, the effect of color-filtering is generally ignored to simplify the model, giving
Y(k) = D(k)H(k)F(k)X + V(k),   k = 1, …, N
In this model, the low-resolution images Y(k) and the high-resolution image X are assumed to be monochromatic. Alternatively in the demosaicing case, only single frame reconstruction of color images is considered, resulting in a simplified model
Yi = DiXi + Vi,   i = R, G, B
As such, the approach to multi-frame reconstruction of color images has been a two-step process:
One aspect of the current invention uses a maximum a posteriori (MAP) estimation to directly solve step (1).
The forward model depicted in
Single-frame and multi-frame demosaicing problems are fundamentally different, making it impossible to simply apply traditional single-frame demosaicing methods to the multi-frame situation. For example, with respect to translational motion, a set of color-filtered low-resolution images is provided.
The availability of one and only one color band value for each pixel is not a correct assumption in the multi-frame case. In underdetermined cases, there are not enough measurements to fill the high-resolution grid; the symbol “?” depicted in
Following the forward model from
the issue at hand is an inverse problem, where the source of the information (the high-resolution image) is estimated from the observed data (the low-resolution images). An inherent difficulty with inverse problems is the challenge of inverting the forward model without amplifying the effect of noise in the measured data. In many real scenarios, the problem is worsened by the fact that the system matrix T is singular or ill-conditioned. Thus, for the problem of super-resolution, some form of regularization must be included in the cost function to stabilize the problem or constrain the space of solutions.
From a statistical perspective, regularization is incorporated as a priori knowledge about the solution. Thus, using the MAP estimator, a novel class of regularization functions emerges, enabling the capture of the specifics of a desired application. These aspects of the current invention are accomplished by applying the steps of hybrid-Lagrangian penalty terms as in
X̂ = arg minX [ρ(Y, TX) + λΓ(X)]
where ρ, the data fidelity term, measures the “distance” between the model and measurements, and Γ is the regularization cost function, which imposes a penalty on the unknown X to direct it to a better formed solution. The regularization parameter λ is a scalar for properly weighing the first term (data fidelity cost) against the second term (regularization cost).
A Tikhonov regularization of the form Γ(X) = ∥ΛX∥₂² has been used to penalize energy in the higher frequencies of the solution, yielding a smooth but blurry image. To achieve reconstructed images with sharper edges, a robust regularizer called bilateral-TV (B-TV) was introduced, having the form
where Slx and Smy are the operators corresponding to shifting the images represented by X by l pixels in the horizontal direction and m pixels in the vertical direction, respectively. This cost function in effect computes derivatives across multiple scales of resolution (as determined by parameter P). The scalar weight 0<α<1 is applied to give a spatially decaying effect to the summation of the regularization term. The parameter “P” defines the size of the corresponding bilateral filter kernel.
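A minimal numerical sketch of this bilateral-TV cost is given below, assuming a single-channel image stored as a NumPy array; circular shifting via np.roll stands in for the operators Slx and Smy, and the loop range and weight α are the only parameters.

```python
import numpy as np

def bilateral_tv(X, P=2, alpha=0.7):
    """Sum of alpha^(|l|+|m|) * ||X - S_x^l S_y^m X||_1 over shifts l, m in [-P, P]."""
    cost = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue  # zero shift contributes nothing
            shifted = np.roll(np.roll(X, l, axis=1), m, axis=0)  # S_x^l S_y^m X
            cost += alpha ** (abs(l) + abs(m)) * np.abs(X - shifted).sum()
    return cost
```

Boundary handling and the exact range of the double sum vary between formulations; the circular shifts above are a simplification for illustration only.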
Multi-frame demosaicing is fundamentally different from single-frame demosaicing. The current invention uses a computationally efficient maximum a posteriori (MAP) estimation method to fuse and demosaic a set of low-resolution frames, which may have been color-filtered by any color-filter array, resulting in a color image with higher spatial resolution and reduced color artifacts. The lower-resolution color images include color filtered images, compressed color images, compressed color filtered images, and image sequences with color artifacts. The MAP-based cost function consists of a data fidelity penalty term, a spatial luminance penalty term, a spatial chrominance penalty term, and an inter-color dependencies penalty term. The current invention is a computer-implemented method of creating a super-resolved color image from multiple lower-resolution color images. Specifically, combining the data fidelity penalty term, the spatial luminance penalty term, the spatial chrominance penalty term, and the inter-color dependencies penalty term creates an overall cost function.
In the current invention, the data fidelity penalty term measures the similarity between the resulting high-resolution image and the original low-resolution images. Statistical analysis of noise properties for many real image sequences used in multi-frame image fusion reveals that a heavy-tailed Laplacian-type distribution, rather than a zero-mean Gaussian distribution, is the appropriate model for motion estimation errors. The data fidelity penalty term in the current invention therefore models motion estimation errors with the L1 norm, which is robust to data outliers:
ρ(Y, TX) = ∥Y − TX∥₁
where the L1 norm is the maximum likelihood estimate of data in the presence of Laplacian noise. The L1 norm minimization of the error term results in robust reconstruction of the high-resolution image in the presence of uncertainties such as motion error. With respect to general motion and blur models the data fidelity penalty term is defined as
where the vectors Xi and Yi(k) are the ith band (red, green, or blue) of the high-resolution color frame and of the kth low-resolution frame, respectively. The matrix Di(k) represents the down-sampling operator, which includes both the color-filtering and CCD down-sampling operations. F(k) is the geometric motion operator between the high-resolution and low-resolution frames, and H(k) is a blur matrix modeling the camera's point spread function. The data fidelity penalty term is an L1 norm penalty term that enforces similarity between the raw data and the high-resolution image estimate; it measures the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation.
The data fidelity penalty term is applied to space-invariant or space-variant point spread functions (PSFs) and to general motion models. The data fidelity penalty term is enabled by fusing the lower-resolution images to estimate a blurred higher-resolution image, and estimating a deblurred image from the blurred higher-resolution image, where the blurred higher-resolution image is a weighted mean of all measurements of a given pixel after zero filling and motion compensation. Further, the data fidelity penalty term uses motion estimation errors with the L1 norm in a likelihood fidelity term. However, when considering only the simpler cases of a common space-invariant PSF and translational, affine, projective, and dense motion models, a two-step method is invoked to represent the data fidelity penalty term for faster implementation, where the data fidelity term is defined as:
where ẐR, ẐG and ẐB are the three color channels of the color shift-and-add image Ẑ. The matrix Φi (i = R, G, B) is a diagonal matrix with diagonal values equal to the square root of the number of measurements that contributed to each element of Ẑi (in the square case it is the identity matrix). The undefined pixels of ẐB have no effect on the estimation of the high-resolution frame. The Φi, i ∈ {R, G, B}, matrices for the multi-frame demosaicing problem are sparser than the corresponding matrices in the color super-resolution case. The vectors X̂R, X̂G and X̂B are the three color components of the reconstructed high-resolution image X̂.
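A sketch of this shift-and-add fusion is shown below, assuming integer translational shifts on the high-resolution grid (hr_shape equal to (rH, rW) with 0 ≤ dy, dx < r); the per-pixel counts returned correspond to the squared diagonal entries of the Φi matrices, and zero-count pixels are the undefined pixels mentioned above. The helper names and inputs are illustrative.

```python
import numpy as np

def shift_and_add(frames, offsets, masks, r, hr_shape):
    """Fuse zero-filled, motion-compensated low-resolution frames into Z_hat.

    frames  : list of mosaiced low-resolution frames, each of shape (H, W, 3)
    offsets : per-frame integer (dy, dx) shifts on the high-resolution grid
    masks   : per-frame Boolean CFA sampling masks matching the frames
    r       : resolution enhancement factor (hr_shape must equal (r*H, r*W))
    """
    acc = np.zeros(hr_shape + (3,))
    count = np.zeros(hr_shape + (3,))
    for Y, (dy, dx), M in zip(frames, offsets, masks):
        values = np.zeros(hr_shape + (3,))
        hits = np.zeros(hr_shape + (3,), dtype=bool)
        values[dy::r, dx::r, :] = Y      # zero filling + motion compensation
        hits[dy::r, dx::r, :] = M
        acc += np.where(hits, values, 0.0)
        count += hits
    Z_hat = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
    return Z_hat, count                  # count == 0 marks the undefined pixels
```

This computes a plain per-pixel mean; a robust variant would use a median or weighted mean of the contributing measurements, as discussed for the two-step data fidelity term.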
Because the human eye is more sensitive to the details in the luminance component of an image than to the details in the chrominance components, it is important that the edges in the luminance component of the reconstructed high-resolution image look sharp. The spatial luminance penalty term encourages sharp edges in the luminance component of the high-resolution image. To achieve reconstructed images with sharper edges, a bilateral-TV regularizer is used as the basis for the spatial luminance cost function, where the bilateral-TV regularization is applied to the luminance component of the image X, and the luminance image is represented as the weighted sum XL = 0.299 XR + 0.587 XG + 0.114 XB. Here the spatial luminance penalty term is defined by
where Slx and Smy are the operators corresponding to shifting the image represented by X by l pixels in the horizontal direction and m pixels in the vertical direction, respectively. This cost function effectively computes derivatives across multiple scales of resolution (as determined by parameter P). The scalar weight 0<α<1 is applied to give a spatially decaying effect to the summation of the regularization term. The parameter “P” defines the size of the corresponding bilateral filter kernel.
The spatial luminance penalty term uses bilateral-TV regularization that is a luminance image having a weighted sum of color vectors, a horizontal pixel-shift term, a vertical pixel-shift term, and a scalar weight between 0 and 1, where the color vectors include red, green and blue vectors.
The spatial chrominance penalty term encourages smoothness in the chrominance component of the high-resolution image. Spatial regularization is also required for the chrominance layers. Because the human visual system is less sensitive to the resolution of the chrominance bands, a simpler regularization based on the L2 norm is used to provide the spatial chrominance penalty term
J2(X) = ∥ΛXC1∥₂² + ∥ΛXC2∥₂²
where the images XC1 and XC2 are the I and Q layers in the YIQ color space representation. The spatial chrominance penalty term uses regularization based on an L2 norm.
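The decomposition and the chrominance penalty can be sketched as follows, assuming the standard RGB-to-YIQ matrix and a discrete Laplacian as the high-pass operator Λ; the text does not fix a particular Λ, so that choice, like the function name, is an assumption.

```python
import numpy as np
from scipy.ndimage import laplace

# Standard RGB -> YIQ transform; its first row gives the luminance weights.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def chrominance_penalty(X_rgb):
    """J2(X) = ||Lambda X_C1||_2^2 + ||Lambda X_C2||_2^2 with Lambda ~ Laplacian."""
    yiq = X_rgb @ RGB2YIQ.T                  # per-pixel color-space change
    XC1, XC2 = yiq[..., 1], yiq[..., 2]      # I and Q chrominance layers
    return np.sum(laplace(XC1) ** 2) + np.sum(laplace(XC2) ** 2)
```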
Although different bands may have larger or smaller gradient magnitudes at a particular edge, it is reasonable to assume the same edge orientation and location for all color channels. That is, if an edge appears in the red band at a particular location and orientation, then an edge with the same location and orientation should appear in the other color bands. Therefore, a cost function that penalizes the difference in edge location and/or orientation across the color bands incorporates the correlation between different color bands as prior knowledge.
In the current invention, to remove color artifacts, an inter-color dependencies penalty term is used to encourage homogeneity of edge location and orientation in different color bands; this term penalizes the mismatch between locations or orientations of edges across the color bands. The penalty term is based on the vector outer product norm of all pairs of neighboring pixels and is a differentiable cost function
where ⊙ is the element-by-element multiplication operator. The inter-color dependencies penalty term is a vector outer product norm of all pairs of neighboring pixels.
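One way such a penalty can be realized is sketched below: for every neighbor shift and every pair of color bands, the "cross" terms Xi ⊙ S(Xj) − Xj ⊙ S(Xi) vanish when the per-pixel color vectors are parallel, so their squared norm measures edge mismatch across bands. The band pairing, shift range, and decaying weights here are illustrative assumptions, not the exact form used by the invention.

```python
import numpy as np
from itertools import combinations

def intercolor_penalty(X_rgb, P=1, alpha=0.7):
    """Penalize mismatched edge location/orientation across the color bands."""
    bands = [X_rgb[..., c] for c in range(3)]
    cost = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            weight = alpha ** (abs(l) + abs(m))
            for Xi, Xj in combinations(bands, 2):
                Si = np.roll(np.roll(Xi, l, axis=1), m, axis=0)
                Sj = np.roll(np.roll(Xj, l, axis=1), m, axis=0)
                cost += weight * np.sum((Xi * Sj - Xj * Si) ** 2)  # element-wise products
    return cost
```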
The overall cost function is the summation of the cost functions:
A steepest descent optimization is applied to the overall cost function for minimization, using the steps of applying a derivative to the first color band while holding the second and third color bands constant, applying a derivative to the second color band while holding the first and third color bands constant, and applying a derivative to the third color band while holding the first and second color bands constant.
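The alternating structure of that minimization can be sketched as follows; grad_per_band is a placeholder for the analytic gradient of the summed cost with respect to one color band, and the fixed step size and iteration count are illustrative choices rather than values prescribed by the invention.

```python
import numpy as np

def steepest_descent(X0, grad_per_band, step=0.1, iters=50):
    """Cyclic per-band steepest descent on the overall cost function.

    X0            : initial high-resolution RGB estimate (e.g. the shift-and-add image)
    grad_per_band : callable (X, band) -> gradient of the overall cost with respect
                    to that band, with the other two bands held constant
    """
    X = X0.copy()
    for _ in range(iters):
        for band in range(3):            # red, then green, then blue
            X[..., band] -= step * grad_per_band(X, band)
    return X
```

In practice the step size would be chosen by line search or kept small enough for stability.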
Direct image operators including blur, high-pass filtering, masking, down-sampling, and shift are implemented in place of explicit matrices for processing speed and memory efficiency.
a-b depict a computer-implemented system and method 100 according to the current invention for robust multi-frame demosaicing and color super-resolution, where shown in
In one embodiment of the invention, the method of super-resolution and demosaicing, based on maximum a posteriori estimation by minimizing a multi-term cost function, is a computer-implemented method.
In another embodiment of the invention, the method of super-resolution and demosaicing, based on maximum a posteriori estimation by minimizing a multi-term cost function, is implemented in a digital camera.
Sample experiments on synthetic and real data sets are provided to demonstrate aspects of the current invention. In a first experiment, a sequence of low-resolution frames was created from an original high-resolution color image with full RGB values. The low-resolution frames were synthesized by shifting the high-resolution image by one pixel in the vertical direction. The point spread function (PSF) was simulated for each color band of the shifted image by convolving it with a symmetric Gaussian low-pass filter of size 5×5 having a standard deviation equal to one. The resulting image was sub-sampled by a factor of 4 in each direction. This process was repeated with different motion vectors (shifts) in the vertical and horizontal directions to produce 10 low-resolution images from the original scene. The horizontal shift between the low-resolution images varied from 0 to 0.75 pixels on the low-resolution grid (0 to 3 pixels on the high-resolution grid). The vertical shift between the low-resolution images varied from 0 to 0.5 pixels on the low-resolution grid (0 to 2 pixels on the high-resolution grid). To simulate errors in motion estimation, a bias equal to half a pixel shift on the low-resolution grid was intentionally added to the known motion vector of one of the low-resolution frames. Gaussian noise was added to the resulting low-resolution frames to achieve a signal-to-noise ratio (SNR) of 30 dB. Each low-resolution color image was further sub-sampled by the Bayer filter.
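One numeric detail of this synthesis, the 30 dB noise level, can be made concrete with the small helper below; it assumes SNR is defined as mean signal power over noise power expressed in decibels, and the helper name is illustrative.

```python
import numpy as np

def add_noise_at_snr(image, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(image.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return image + rng.normal(0.0, np.sqrt(noise_power), image.shape)
```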
a shows the color image of the sample experiment with full RGB values.
a-6c show the effect of the individual implementation of each regularization term of the current invention (luminance, chrominance, and inter-color dependencies), where
Quantitative measurements confirm the efficacy of the current invention. Peak signal-to-noise ratio (PSNR) and spatial extension of CIELAB (S-CIELAB) measurements were taken to compare the performance of each of the previous demosaicing and super-resolution methods with the methods of the current invention. Table 1 shows these values, where the method according to the current invention has the lowest S-CIELAB error and the highest PSNR values, in addition to the best visual quality.
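PSNR here follows the standard definition sketched below (the peak value of 255 assumes 8-bit images); S-CIELAB additionally requires spatial filtering in an opponent color space followed by a CIELAB color difference, which is not reproduced here.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between a reference and a reconstruction."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```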
In another sample experiment, 30 compressed images were captured from a commercial webcam, one of which is shown in
The shift-and-add result is shown in
In another sample experiment, 40 compressed images of a test pattern from a commercial surveillance camera were used, one of which is shown in
The shift-and-add result (resolution enhancement factor of 4) is shown in
In the following three sample experiments (girl sequence, bookcase sequence, and window sequence), 31 uncompressed raw CFA images (30 frames for the window sequence of
To increase the spatial resolution by a factor of three, the multi-frame color super-resolution method of the current invention was applied on the demosaiced images of these sequences.
In a final sample experiment, the multi-frame demosaicing method of the current invention was applied on the raw CFA data to increase the spatial resolution by the same factor of three.
These sample experiments show that single-frame demosaicing methods, such as combined smooth hue and gradient-based reconstruction methods (which in effect implement anti-aliasing filters), remove color artifacts at the expense of making the images blurrier. The color super-resolution methods of the current invention can retrieve some high-frequency information and further remove the color artifacts. Further, applying the multi-frame demosaicing and super-resolution method of the current invention directly to raw CFA data produces the sharpest results and removes color artifacts. Additionally, these sample experiments show the importance of the inter-color dependencies term, which further removes color artifacts. The parameters used for the experiments on the “girl”, “bookcase” and “window” sequences are β=0.002, α=0.9, λ′=0.1, λ″=250, λ′″=25. The (unknown) camera PSF was assumed to be a tapered 5×5 disk PSF.
The present invention has now been described in accordance with several exemplary embodiments, which are intended to be illustrative in all aspects, rather than restrictive. Thus, the present invention is capable of many variations in detailed implementation, which may be derived from the description contained herein by a person of ordinary skill in the art. For example, instead of using the L1 norm to penalize the data fidelity term, other robust penalty functions such as the Lorentzian or the truncated quadratic may be used. Similarly, the median shift-and-add operator in the two-step process can be replaced by weighted or truncated mean operators.
All such variations are considered to be within the scope and spirit of the present invention as defined by the following claims and their legal equivalents.
This application is cross-referenced to and claims the benefit from U.S. patent application Ser. No. 11/301,811 filed Dec. 12, 2005, now U.S. Pat. No. 7,412,107 which claims benefit of U.S. Provisional Application 60/636,891 filed Dec. 17, 2004, and which are hereby incorporated by reference.
The present invention was supported in part by grant number CCR-9984246 from the National Science Foundation. The U.S. Government has certain rights in the invention.
Number | Name | Date | Kind
---|---|---|---
6452637 | Rudin et al. | Sep 2002 | B1 |
6816197 | Keshet et al. | Nov 2004 | B2 |
7379612 | Milanfar et al. | May 2008 | B2 |
7412107 | Milanfar et al. | Aug 2008 | B2 |
20020114532 | Ratner et al. | Aug 2002 | A1 |
20030193567 | Hubel | Oct 2003 | A1 |
20040008269 | Zomet et al. | Jan 2004 | A1 |
20060038891 | Okutomi et al. | Feb 2006 | A1 |
Number | Date | Country
---|---|---
20060279585 A1 | Dec 2006 | US

Number | Date | Country
---|---|---
60636891 | Dec 2004 | US

Number | Date | Country
---|---|---
Parent 11301811 | Dec 2005 | US
Child 11506246 | | US