The present invention is generally related to signal processing and, more particularly, to video processing.
Visual communications and image processing are technologies that are employed in a wide range of applications. Such applications include the delivery and presentation of video images over various networks such as cable, space (e.g., via satellite) or computer networks, medical imaging, aerial/satellite photography, remote sensing, surveillance systems, forensic science, digital cameras, and high-quality scanning applications, among others.
During the delivery and/or processing of video images, a loss of image quality typically occurs between the original image and the captured image that will be used for display and/or recording. For example, in the process of recording an image, there is a natural loss of resolution caused by the non-zero physical dimensions of the individual sensor elements, the non-zero aperture time, optical blurring, noise, and motion. Further decline in resolution may occur in transporting the image using fewer bits in order to preserve bandwidth.
Techniques have been developed to address some of these problems. For example, multi-frame resolution enhancement (herein referred to as superresolution) techniques have been developed to estimate a high resolution image by combining the nonredundant information that is available in a sequence of low resolution images. However, current techniques are employed under assumptions that are not necessarily valid for many of the applications, and thus a heretofore unaddressed need exists in the industry for solutions to address one or more of these deficiencies and inadequacies.
The preferred embodiments of the invention include, among other things, a system for improving video quality and resolution. The system includes a processor configured with logic to estimate an image from at least one observed image to produce an estimated image, the observed image having a lower spatial resolution than the estimated image. The processor is further configured with logic to simulate steps of an imaging process and compression process and to update the estimated image with statistical and/or deterministic information if further optimization is needed, or otherwise to designate the estimated image as a solution.
The preferred embodiments of the invention can also be described as, among other things, a method for improving video quality and resolution. The method can generally be described as including the following steps: estimating an image from at least one observed image to produce an estimated image, the observed image having a lower spatial resolution than the estimated image; simulating steps of an imaging process and compression process and updating the estimated image with statistical and/or deterministic information if further optimization is needed, otherwise designating the estimated image as a solution.
Other systems, methods, features, and advantages of the present invention will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
Many aspects of the invention can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
The preferred embodiments include methods for improving the quality and resolution of low resolution images, as well as systems and devices that enable this methodology. The preferred embodiments will first be described in the context of superresolution. Superresolution refers to multi-frame reconstruction of low resolution images to improve the spatial resolution and quality of images. In the preferred embodiments, the imaging and compression processes are simulated (e.g., modeled) to determine how an image is affected by the processes. Then, an estimate of a high resolution image is produced by adding sub-sample points to one or more low resolution images (herein referred to as the observed image or images) that have already been processed through the imaging and compression processes. In other words, the one or more observed images, or rather the data representing the respective images (herein referred to as observed image data), are interpolated to create an estimate (in the form of data estimates, such as discrete cosine transform coefficients) of a high resolution image. This estimated data is simulated as being processed through the imaging and compression simulation steps.
One goal of the preferred embodiments is to minimize an error function. To this end, an error function is used that includes an accounting of two factors: (i) the compatibility of the observed image data with the outcome of the estimated data, the outcome resulting from passing the estimated data through the imaging and compression models, and (ii) the likelihood of the estimated data. If the error function returns a small value, then the estimated data was correct and the estimated data is used as the desired high resolution image (after the coefficients are inverse transformed) and processed for presentation. Otherwise, the estimated data is updated, and this updated estimate is simulated as being passed through the models again.
The update takes into consideration factors that include the comparison of the estimated data to the observed image data, and the prior statistics (or the likelihood) of the high resolution image. The error function is a function of the estimated data: when the estimated data is input to the error function, it returns a value (e.g., a real number). The updating occurs in such a manner that the error function returns a smaller value when the updated estimated data is used. This process is repeated until the error function is small enough. As is described below in further detail, the error function is mathematically formulated and another formula is derived for updating the estimates. The determination of whether the error function is small enough can take many forms, including setting a fixed threshold value, setting the number of iterations (i.e., the updating iterations) to a pre-determined number, or evaluating the amount of change in the update (e.g., if the change is less than a predetermined value, stop the iterations), among other forms.
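By way of illustration only, the estimate-simulate-compare-update loop just described can be sketched in Python. The helper names (simulate, back_project), the nearest-neighbor initial interpolation, and the simple squared-error functional are placeholders standing in for the models and update formulas developed later in this description, not part of the disclosed embodiments:

```python
import numpy as np

def enhance(observed, simulate, back_project, upsample=2,
            max_iters=50, tol=1e-4):
    """Iteratively refine a high resolution estimate of `observed`.

    `simulate` models the imaging + compression chain (high res -> low res),
    and `back_project` maps the residual error back onto the estimate; both
    are hypothetical callables standing in for the models in the text.
    """
    # Initial estimate: interpolate the observed low resolution image.
    estimate = np.kron(observed, np.ones((upsample, upsample)))
    prev_error = np.inf
    for _ in range(max_iters):
        simulated = simulate(estimate)       # pass estimate through the models
        residual = observed - simulated      # compatibility with observed data
        error = float(np.sum(residual ** 2)) # simple error functional
        if error < tol or abs(prev_error - error) < tol:
            break                            # estimate accepted as the solution
        estimate = back_project(estimate, residual)  # update the estimate
        prev_error = error
    return estimate
```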
The preferred embodiments also include the aforementioned methodology as applied in superprecision. Superprecision refers to multi-frame gray scale resolution enhancement. Superprecision can be thought of as a generalized version of superresolution, in that superprecision primarily addresses the decrease in bit depth when a high resolution image is bit reduced.
The preferred embodiments of the invention can, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Furthermore, all “examples” given herein are intended to be non-limiting.
In the discussion that follows, superresolution is generally described for an example imaging process, followed by some example applications that can incorporate the preferred embodiments of the invention. Then, the preferred embodiments are presented in stages, with no particular order implied, starting with a discussion of the simulation process at a higher level, followed by a low level explanation (via math formulations). Then, the updating process is described in both the discrete cosine transform domain (i.e., frequency domain, or compression domain) and the spatial domain. A description of superprecision follows the updating discussion. Finally, a description of how an example decoder can implement the preferred embodiments is discussed.
Although superresolution is described in association with the example imaging and compression processes below, the principles apply to other imaging systems and standards as well.
The preferred embodiments of the invention include superresolution and superprecision technology to improve bit depth resolution, as well as spatial resolution, in spatial and frequency domains. Some example implementations that can incorporate, and thus benefit from, the preferred embodiments of the invention are included below. An initial emphasis is placed on compressed video applications due to the popularity of such applications, with the understanding that the preferred embodiments of the invention can equally benefit systems in the uncompressed domain. Examples for compressed video applications include High Definition Television (HDTV), cable or satellite set-top boxes (STBs), Digital Video Disk (DVD) players, and digital cameras, among others. For example, in HDTV applications, the screen resolution is superior to the signal input (which typically arrives over an analog medium). One goal is to provide a display resolution that matches the signal quality. The preferred embodiments of the invention can be employed in the HDTV, for example as a digital signal processing (DSP) chip that receives the lower quality video and outputs images with improved resolution.
As for STBs, bandwidth is expensive and, thus, transmission signals are typically sent from headends using compression and other data reduction mechanisms to conserve the bandwidth. Providing improved resolution at the STB enables the low resolution video received at the STB to be improved while conserving bandwidth. DVDs can similarly benefit from the preferred embodiments of the invention, as the current rates of 9.8 megabits per second (Mbps) can incorporate the preferred embodiments again via a DSP chip, as one example, with rates of 15 Mbps readily achievable. Regarding digital cameras, by taking multiple camera shots and incorporating the preferred embodiments, a $150 digital camera can produce pictures having the quality produced using a $1000 camera.
One problem that arises and that is addressed by the preferred embodiments of the invention is the loss of spatial resolution, due primarily to the processing characteristics of imaging systems, such as losses due to low pass filtering and sampling. Even without compression, the observed images lack spatial resolution (e.g., details of the image). When the observed image data undergoes compression, the spatial resolution degrades further. For example, during compression, discrete cosine transform (DCT) coefficients (e.g., the observed image data) are quantized, which is a lossy operation. Consequently, the inverse DCT will not produce the pre-quantized DCT coefficients.
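The lossiness of quantization is easy to demonstrate. The following sketch (using SciPy's DCT routines; the step size of 16 is an arbitrary illustrative choice, not a value from any standard) quantizes the DCT coefficients of an 8×8 block and shows that the inverse DCT does not recover the original block:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # an 8x8 pixel block

coeffs = dctn(block, norm="ortho")           # forward 8x8 DCT
step = 16.0                                  # illustrative quantization step size
quantized = np.round(coeffs / step) * step   # quantize: divide, round, rescale

recovered = idctn(quantized, norm="ortho")   # inverse DCT of quantized coefficients
print(np.max(np.abs(recovered - block)))     # nonzero: quantization is lossy
```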
The following discussion generally describes the simulation of imaging and compression processes. One purpose of the simulation process is to get an accurate idea of the source of resolution losses for an acquired image in order to reproduce such effects in the simulation of passing an estimated image through the processes. For example, although the actual image object (for example, a child riding his bike on the street) has infinite resolution, an acquisition device such as a camera provides for finite resolution (e.g., due to a finite number of pixels), and thus the resulting picture taken using the camera will manifest a loss of resolution. The preferred embodiments of the invention include this simulation of the imaging and compression processes and the simulation of the effects of the elements of these processes on an image that is passed through for processing.
The next element is the sensor and optical blur element 214. Typically in cameras, the lens is a source of optical blur. Further, the sensors include charge-coupled devices (CCDs) which, due to their finite area, also cause optical blur. The non-zero aperture time element 216 relates to the fact that cameras typically have a non-zero open and close time, and thus motion during the aperture interval is also a source of blurring (the magnitude of which depends on, among other things, the quality of the camera and the amount of motion being detected). The sampling element 218 portrays the sampling of intensity values that occurs at the CCD. The size of the CCD and the sensor density (e.g., pixel spacing) will influence the amount of diminished resolution. Note that noise is also added to the imaging block 210 to reflect the influence of noise on the resultant image resolution. The noise influence is statistically modeled in the preferred embodiments of the invention. An image is produced from the imaging process for further processing by the compression model 230, as is described below.
The mathematical representation of the imaging model is now presented. According to the imaging model, a spatially and temporally continuous high-resolution input signal ƒ(x, t) is affected by sensor and optical blur. Sensor blur is caused by integrating over the finite nonzero sensor cell area, while optical blur is caused by the lens system. The blurred video signal is also integrated over time to capture nonzero time-aperture effects. After sampling on a low-resolution grid, discrete low-resolution frames gd(l, k) are obtained. Incorporating motion modeling into this video imaging model, the relationship between these signals can be modeled as follows:
gd(l, k)=∫ƒ(x, tr)hc(x, tr;l, k)dx+v(l, k), (1)
where hc(x, tr; l, k) represents the linear shift-varying (LSV) blur between the continuous high resolution image ƒ(x, tr) at a reference time tr and the kth discrete low resolution image. v(l, k) represents the additive noise process. Extending zero-order-hold discretization to higher-order interpolants results in the following formulation for the video imaging process:
gd(l, k) = Σn ƒ(n, tr) h(n, tr; l, k) + v(l, k),  (2)

where h(n, tr; l, k) is the discrete linear shift-varying (LSV) blur mapping between the discrete high resolution image ƒ(n, tr) at a reference time tr and the kth discrete low resolution image. The high resolution and low resolution sampling lattice indices (i.e., pixel coordinates) are n ≡ (n1, n2) and l ≡ (l1, l2), respectively. The summation is over both n1 and n2 but is represented as a single summation here for simplicity. Note that Equation 2 provides a linear set of equations that relates the high resolution image at time tr to the low resolution frames gd(l, k) for different values of k. The blur function h(n, tr; l, k) is computed separately for each value of k.
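A dense-matrix reading of Equation 2 can be sketched as follows; representing the blur mapping h as a 4-D array is an illustrative simplification (practical implementations would exploit the sparsity of h), and the function name is hypothetical:

```python
import numpy as np

def observe(f_hr, h, noise_sigma=1.0, rng=None):
    """Apply the discrete imaging model of Equation 2.

    f_hr : high resolution frame f(n, t_r), shape (N1, N2)
    h    : blur mapping h(n, t_r; l, k) for one frame k, stored as a
           4-D array indexed [l1, l2, n1, n2] (a dense illustrative form)
    """
    rng = rng or np.random.default_rng()
    L1, L2 = h.shape[:2]
    g = np.einsum("abnm,nm->ab", h, f_hr)             # sum over n1, n2 per l
    g += noise_sigma * rng.standard_normal((L1, L2))  # additive noise v(l, k)
    return g
```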
Returning to the high level description of the modeling process, the next modeling block is the compression model 230, which simulates the compression of the low resolution frames and begins with a motion compensation element 232.
Following the motion compensation element 232 is an 8×8 DCT element 234. The DCT element 234 portrays a transform that is used to transform the 8×8 pixel blocks into the DCT domain and thus provide a format where redundancy detection is facilitated. Thus, coefficients are produced for the frequency domain, or DCT domain, and these coefficients are quantized at the quantization element 236 and then stored in an MPEG file. The MPEG file is preferably sent to the decoder model 250, where the quantized DCTs from the compression model 230 are inverse transformed at the inverse DCT element 252 and then motion compensated at the motion compensation element 254.
Describing the effects of compression simulation mathematically, the low resolution frame gd(l, k) is motion compensated (i.e., the prediction frame is computed and subtracted from the original to get the residual image), and the residual is transformed using a series of 8×8 block-DCTs to produce the DCT coefficients d(m, k). The index m ≡ (m1, m2) denotes the spatial coordinate in the DCT domain. Defining ĝ(l, k) as the prediction frame, and

g(l, k) ≡ Σn ƒ(n, tr) h(n, tr; l, k)

as the noise-free low resolution frame, then:
d(m, k)=DCT{g(l, k)−ĝ(l, k)+v(l, k)} (3)
where DCT {·} represents the 8×8 block DCT operator. The prediction frame ĝ(l, k) is obtained using neighboring frames except for the case of intra-coded frames, where the prediction frame is zero. Denoting G, Ĝ, and V as the block-DCTs of g, ĝ, and v respectively, Equation 3 can be rewritten as:
d(m, k)=G(m, k)−Ĝ(m, k)+V(m, k) (4)
Using the definition of g, G can be written as a function of the high resolution image ƒ(n, tr), as shown by the below Equation 5:

G(m, k) = Σn ƒ(n, tr) hDCT(n, tr; m, k),  (5)
where hDCT(n, tr; m, k) is the 8×8 block DCT of h(n, tr; l, k). To be explicit, hDCT(n, tr; m, k) can be written as the below Equation 6:

hDCT(n, tr; m, k) = Σl1 Σl2 C(m1 mod 8, m2 mod 8; l1, l2) h(n, tr; (8└m1/8┘ + l1, 8└m2/8┘ + l2), k),  (6)

where └ ┘ is the floor operator, mod stands for the modulo operator, and C is the 8×8 DCT basis function, given by

C(m1, m2; l1, l2) = (cm1 cm2/4) cos((2l1 + 1)m1π/16) cos((2l2 + 1)m2π/16),  (7)

with cm1 and cm2 being the usual DCT normalization factors:

c0 = 1/√2, and cm = 1 for m ≠ 0.  (8)
The DCT coefficients d(m, k) are then quantized to produce the quantized DCT coefficients d̃(m, k). The quantization operation is a nonlinear process that will be denoted by the operator Q{·}:
d̃(m, k) = Q{d(m, k)} = Q{G(m, k) − Ĝ(m, k) + V(m, k)}.  (9)
Equation 9 represents the relation between the high resolution image ƒ(n, tr) and the quantized DCT coefficients d̃(m, k), and will be used below to establish a framework for resolution enhancement in accordance with the preferred embodiments of the invention. In MPEG compression, quantization is realized by dividing each DCT coefficient by a quantization step size followed by rounding to the nearest integer. The quantization step size is determined by the location of the DCT coefficient, the bit rate, and the macroblock mode. The quantized DCT coefficients d̃(m, k) and the corresponding step sizes are available at the decoder, i.e., they are either embedded in the compressed bit stream or specified as a part of the coding standard. The quantization takes place in the transform domain. This quantization information can be exploited by using it in the DCT domain (i.e., without reverting back to the spatial domain), or in other embodiments, this information can be exploited by using it in the spatial domain. Both of these implementations will be described below.
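The divide-and-round quantization just described can be sketched as follows; the quantization table and scale factor here are placeholders for illustration rather than the tables of any particular standard:

```python
import numpy as np

# Illustrative MPEG-style quantization: each coefficient gets its own step
# size (a hypothetical flat table scaled by a quality factor), and
# quantization is division by the step followed by rounding.
quant_matrix = np.full((8, 8), 16.0)   # placeholder table; real tables vary
quant_matrix[0, 0] = 8.0               # DC term often gets a finer step

def quantize(dct_block, qscale=2.0):
    step = quant_matrix * qscale       # step depends on coefficient location
    return np.round(dct_block / step).astype(int), step

def dequantize(levels, step):
    return levels * step               # what the decoder reconstructs
```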
Now that a general understanding of the imaging and compression process is realized and the relationships between the observed and estimated images are related mathematically, a higher level recap of what occurs up to the point of updating the estimates can be described in the context of the described models. In the preferred embodiments, observed image data of a low resolution image (i.e., the observed image, again with the understanding that more than one observed image can be used consistent with superresolution) is interpolated to produce an estimate of a high resolution image. A simulation of the effects of this estimate being processed through the described model blocks is then performed, and the simulated output data is compared to the observed image data.
Now that the modeling is discussed, the comparison and updating process of the preferred embodiments is discussed. The DCT domain embodiments of the preferred embodiments are described first, followed by the spatial domain embodiments.
It is helpful to define a couple of terms initially to understand later what is included in the meaning of an error function. A maximum a posteriori (MAP) solution is represented as follows:

ƒ̂(n, tr) = arg maxƒ(n, tr) pƒ(n, tr)|d̃(m, k1), . . . , d̃(m, kp)(ƒ(n, tr)|d̃(m, k1), . . . , d̃(m, kp))  (10)

An objective function to be maximized is represented as follows:

ObjectiveFunction = pd̃(m, k1), . . . , d̃(m, kp)|ƒ(n, tr)(d̃(m, k1), . . . , d̃(m, kp)|ƒ(n, tr)) · pƒ(n, tr)(ƒ(n, tr))  (11)

In these equations, ƒ̂ is the estimated image, pd̃(m, k1), . . . , d̃(m, kp)|ƒ(n, tr)(·) is the conditional PDF of the quantized DCT coefficients given the high resolution image, and pƒ(n, tr)(·) is the prior PDF of the high resolution image.
In the preferred embodiments, not only the source statistics, but also various regularizing constraints can be incorporated into the solution. As will be shown, the preferred embodiments of the invention can combine the quantization process and the additive noise in an estimation framework. Using a MAP formulation, the quantized DCT coefficients d̃(m, k), the original high resolution frame ƒ(n, tr), and the block DCT of the additive noise, V(m, k), are all assumed to be random processes. Denoting pƒ(n, tr)|d̃(m, k1), . . . , d̃(m, kp)(·) as the posterior PDF of the high resolution frame given the quantized DCT coefficients of p observed frames, the MAP estimate is the image that maximizes this posterior which, by Bayes' rule, is proportional to the objective function of Equation 11.
Note that underline notation (in n and m) is used to emphasize that this PDF is the joint PDF, not the PDF at a specific location. The need for this distinction will later become evident.
The conditional PDF pd̃(m, k1), . . . , d̃(m, kp)|ƒ(n, tr)(·) and the prior PDF pƒ(n, tr)(·) will be modeled in order to find the MAP estimate ƒ̂(n, tr). In the following description, the conditional PDF is derived, which can be used with essentially any image model. The prior PDF handles the statistical information about the high resolution image to ensure that the reconstructed image is not undesirably noisy. On the other hand, the conditional PDF handles the statistical information about the noise and quantization.
Given the frame ƒ(n, tr), the only random variable on the right-hand side of Equation 9 is the DCT of the additive noise, V(m, k). The statistics of the additive noise process v(m, k) are preferably used to find the PDF of V(m, k). A reasonable assumption for v(m, k) is that of a zero-mean independent, identically distributed (IID) Gaussian process. Under this assumption the PDF of the additive noise at each location (m, k) is:

pv(m, k)(x) = (1/√(2πσ²)) exp(−x²/(2σ²)),  (12)
where σ² denotes the noise variance. Note that m (rather than the underlined m̲) is used in Equation 12 to clarify that this is the PDF at a single location, not the joint PDF. The PDF of V(m, k) is also a zero-mean IID Gaussian random variable with the same variance σ². Therefore,

pV(m, k)(x) = (1/√(2πσ²)) exp(−x²/(2σ²)).  (13)
From Equation 4, it follows that the PDF of d(m, k) is also an IID Gaussian process, but in this case one with a mean G(m, k) − Ĝ(m, k). That is,

pd(m, k)(x) = (1/√(2πσ²)) exp(−(x − (G(m, k) − Ĝ(m, k)))²/(2σ²)).  (14)
Quantization theory can be used to compute the PDF of the quantized DCT coefficients d̃(m, k). Under some conditions, the PDF of the quantized signal can be computed using a two-step process: first, evaluate the convolution of the input signal PDF and a rectangular pulse (with an area of one and a width equal to the quantization step size), and then multiply the result of this convolution with an impulse train. Thus,

pd̃(m, k)(x) = [pd(m, k)(x) ∗ Π(m, k)(x)] · Σi Δ(m, k) δ(x − iΔ(m, k)),  (15)
where Δ(m, k) is the quantization step size, and Π(m, k)(x) is the rectangular pulse:

Π(m, k)(x) = 1/Δ(m, k) for |x| ≤ Δ(m, k)/2, and 0 otherwise.  (16)
Equation 15 is valid under the conditions that the quantizer has a uniform step size, and there is no saturation. The former condition is valid for MPEG as well as many other compression standards. For each DCT coefficient, there is a uniform step size that is determined by the location of the DCT coefficient, and the compression rate of the encoder. The latter condition is also satisfied for all DCT coefficients.
Substituting Equations 14 and 16 into Equation 15 provides the following:

pd̃(m, k)(x) = Σi [(1/√(2πσ²)) ∫[iΔ−Δ/2, iΔ+Δ/2] exp(−(t − (G − Ĝ))²/(2σ²)) dt] δ(x − iΔ),  (17)
where the dependence on (m, k) is dropped from Δ, G, and Ĝ to simplify the notation. Herein the dependence on (m, k) will not be indicated for simplicity, but with the understanding that Δ, G, and Ĝ are functions of (m, k).
Equation 17 implies that the conditional PDF is an impulse train with magnitudes determined by the areas (of the Gaussian function) within the Δ/2 neighborhood of the impulse locations, as illustrated in the drawings.
Defining

A(i) ≡ (1/√(2πσ²)) ∫[iΔ−Δ/2, iΔ+Δ/2] exp(−(t − (G − Ĝ))²/(2σ²)) dt,  (18)

and using the sifting property, Equation 17 can be rewritten as:

pd̃(m, k)|ƒ(n, tr)(d̃(m, k)) = (1/√(2πσ²)) ∫[d̃−Δ/2, d̃+Δ/2] exp(−(t − (G − Ĝ))²/(2σ²)) dt.  (19)
Equation 19 provides the conditional PDF of a single DCT coefficient at location (m, k). Since the errors corresponding to different DCT coefficients are independent, the joint PDF will be the product of the individual PDFs. As a result, the resolution enhancement problem can be written as:

ƒ̂(n, tr) = arg maxƒ(n, tr) [Πm, k pd̃(m, k)|ƒ(n, tr)(d̃(m, k))] pƒ(n, tr)(ƒ(n, tr)).  (20)
Equation 20 provides an optimal way of combining the additive noise and the quantization in accordance with the preferred embodiments of the invention.
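The impulse magnitudes in Equations 17 through 19 are Gaussian bin probabilities, which can be evaluated with the error function. A small sketch (the function name and the numeric values are arbitrary illustrations) also previews the two limiting cases examined next:

```python
import math

def quantized_coeff_likelihood(d_tilde, mean, sigma, delta):
    """Probability mass that d(m, k) ~ N(mean, sigma^2) falls in the
    quantization bin centered at the observed quantized value d_tilde
    (bin width delta), i.e. the impulse magnitude in Equations 17/19."""
    lo = (d_tilde - delta / 2 - mean) / (sigma * math.sqrt(2.0))
    hi = (d_tilde + delta / 2 - mean) / (sigma * math.sqrt(2.0))
    return 0.5 * (math.erf(hi) - math.erf(lo))

# Small-step case (delta << sigma): bin mass ~ Gaussian density * delta.
print(quantized_coeff_likelihood(0.0, 0.3, sigma=4.0, delta=0.5))
# Large-step case (delta >> sigma): mass ~ 1 whenever |d_tilde - mean| < delta/2.
print(quantized_coeff_likelihood(0.0, 0.3, sigma=0.1, delta=8.0))
```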
Next, two cases will be examined with respect to the relative magnitude of the quantization step size Δ and the standard deviation σ of the additive noise. When the quantization step size Δ is much smaller than the standard deviation σ of the additive noise, Equation 19 becomes:

pd̃(m, k)|ƒ(n, tr)(d̃(m, k)) ≈ (Δ/√(2πσ²)) exp(−(d̃ − (G − Ĝ))²/(2σ²)).  (21)
Equation 21 corresponds to a sampled continuous Gaussian distribution with mean G − Ĝ and variance σ², as illustrated in the drawings.
Based on this conditional PDF and a prior image model, a cost function to be minimized can be defined, and optimization techniques can be applied to minimize the cost function.
The second case is when the quantization step size Δ is much larger than the standard deviation σ of the additive noise. In this case, Equation 19 becomes:

pd̃(m, k)|ƒ(n, tr)(d̃(m, k)) ≈ 1 if |d̃ − (G − Ĝ)| ≤ Δ/2, and 0 otherwise.  (22)

In other words, the conditional PDF carries essentially no information beyond the bounds imposed by quantization. The initial high resolution image estimate is updated in such a way that G − Ĝ lies in the region [d̃ − Δ/2, d̃ + Δ/2]. To be more specific, the estimation errors from either the lower bounds

(d̃ − Δ/2) − (G − Ĝ)  (23)

or upper bounds

(G − Ĝ) − (d̃ + Δ/2)  (24)

are back-projected onto the high resolution image estimate ƒ̂.
In a typical video encoder, situations between these two extreme cases can occur at the same time. Quantization step sizes change not only from frame to frame, but also from coefficient to coefficient. The preferred embodiments of the invention can handle these two cases, and any case in between, successfully.
In the following sections, one implementation scheme is presented for the DCT domain embodiments. It is based on an Iterated Conditional Modes (ICM) algorithm, as ICM is known in the art. In ICM, a single optimization problem is turned into many simpler optimization problems that are solved iteratively. Although ICM does not necessarily converge to the globally optimal solution, when the initial estimate is close, it yields satisfactory results.
The optimization problem given in Equation 20 can be implemented using ICM by assuming that all the pixels except one are fixed, and estimating the optimal intensity value for that single pixel. Repeating the procedure for all pixels iteratively results in a convergence to a solution. In other words, the following optimization problem is solved for each pixel:

ƒ̂(n, tr) = arg maxƒ(n, tr) [Π(m, k)∈Sn, tr pd̃(m, k)|ƒ(n, tr)(d̃(m, k))] pƒ(n, tr)(ƒ(n, tr)),  (25)

where Sn, tr is the set of DCT coefficients (m, k) that depend on the pixel at location (n, tr).

Taking the natural logarithm of Equation 25, the objective function En, tr to be maximized for the pixel at location (n, tr) is given by:

En, tr = Σ(m, k)∈Sn, tr ln pd̃(m, k)|ƒ(n, tr)(d̃(m, k)) + ln pƒ(n, tr)(ƒ(n, tr)).  (26)
For the prior image PDF, Markov Random Fields, as the mechanisms are known in the art, that model the local characteristics of the image are readily applicable to this framework. A Maximum Likelihood (ML) solution will be provided, with the understanding that other prior image models can be employed. For this case the objective function to be maximized is given by:

En, tr = Σ(m, k)∈Sn, tr ln pd̃(m, k)|ƒ(n, tr)(d̃(m, k)).  (27)
In one ICM technique, at each iteration, a new candidate solution is generated; if this candidate solution increases the objective function En, tr, it is accepted, otherwise it is rejected. However, the method for generating the candidate solution is not specified. In order to decide on an update scheme, several different cases will be examined.
As described above, when the quantization step size is much smaller than the standard deviation of the additive noise, the conditional PDF pd̃(m, k)|ƒ(n, tr)(·) is approximately Gaussian, and the update is a gradient-ascent step that back-projects the DCT-domain residual onto the pixel:

ƒ(n, tr) ← ƒ(n, tr) + τ Σ(m, k)∈Sn, tr [d̃(m, k) − (G(m, k) − Ĝ(m, k))] hDCT(n, tr; m, k),  (28)

where τ is the update step size that can be fixed or variable.
At the other extreme, when the quantization step size is much larger than the standard deviation of the additive noise, the update is done by back-projecting the error defined by the DCT bounds. As a result, the difference between the observed data and the simulated data is input back into the estimated image, such that the next time the simulated imaging and compression processes are applied, the simulated data is consistent with the observed data. This back-projection can map from a single DCT coefficient to a set of high resolution image pixels, and be repeated for all DCT coefficients; or it can map from a set of DCT coefficients to a single high resolution image pixel, and be repeated for all high resolution image pixels. These sets of high resolution image pixels are determined by the mapping hDCT(n, tr; m, k). Hence, one way to update the high resolution image is:

ƒ(n, tr) ← ƒ(n, tr) + Σ(m, k)∈Sn, tr e(m, k) hDCT(n, tr; m, k),  (29)

where e(m, k) is the amount, if any, by which G(m, k) − Ĝ(m, k) falls outside the bounds [d̃(m, k) − Δ/2, d̃(m, k) + Δ/2].
Examining Equations 28 and 29, one initial choice of an update procedure will be of the form:

ƒ(n, tr) ← ƒ(n, tr) + Σ(m, k)∈Sn, tr τ(n, tr; m, k) [d̃(m, k) − (G(m, k) − Ĝ(m, k))] hDCT(n, tr; m, k),  (30)

with the variable step size τ(n, tr; m, k) being a function of Δ/σ. In the simulations of one embodiment, a step size determined by the ratio Δ/σ was used for τ(n, tr; m, k).
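A per-pixel reading of Equation 30 can be sketched as follows; the specific step-size rule below is an assumption for illustration only, since the exact τ used in the simulations is not reproduced here:

```python
import numpy as np

def update_pixel(f_val, residuals, h_dct_weights, delta, sigma):
    """One Equation 30-style update for a single high resolution pixel.

    residuals     : d_tilde - (G - G_hat) for the DCT coefficients in S(n, t_r)
    h_dct_weights : the corresponding h_DCT(n, t_r; m, k) weights
    The step-size rule below (growing with delta relative to sigma) is an
    assumed placeholder, not the rule used in the patent's simulations.
    """
    tau = np.clip(delta / (delta + sigma), 0.1, 1.0)  # assumed step-size rule
    return f_val + np.sum(tau * residuals * h_dct_weights)
```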
Combining these results suggests one way of implementing the proposed framework as:
1. Choose a reference frame and interpolate it to produce an initial high resolution image estimate.
2. Compute the motion between the reference frame and the observed low resolution frames.
3. Using the motion estimates, compute the mapping hDCT(n, tr; m, k).
4. Simulate the imaging and compression processes to obtain the simulated DCT data G(m, k) − Ĝ(m, k).
5. The high resolution image pixels are updated using the following procedure:
(a) Update the intensity of each pixel as in Equation 30,
(b) Compute the change in the objective function,
(c) If the change in the objective function is positive, accept the update, otherwise reject it.
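The accept/reject iteration of steps 5(a) through 5(c) can be sketched generically as follows; the candidate-generation rule (a fixed-size random perturbation) is one arbitrary choice among many, since, as noted above, ICM does not prescribe how candidates are generated:

```python
import numpy as np

def icm_sweep(estimate, objective, candidate_step=1.0, rng=None):
    """One ICM sweep: perturb each pixel, keep the change only if the
    objective E increases. `objective` is a placeholder callable standing
    in for Equation 26 or 27."""
    rng = rng or np.random.default_rng()
    current = objective(estimate)
    for idx in np.ndindex(estimate.shape):
        old = estimate[idx]
        estimate[idx] = old + candidate_step * rng.choice([-1.0, 1.0])
        new = objective(estimate)
        if new > current:
            current = new           # accept: objective increased
        else:
            estimate[idx] = old     # reject: restore old intensity
    return estimate
```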
The spatial domain embodiments are now described. Recall from Equation 9 the relation between the high resolution image and the quantized DCT coefficients:

d̃(m, k) = Q{d(m, k)} = Q{G(m, k) − Ĝ(m, k) + V(m, k)}.  (31)

Using Equation 31, the inverse quantization process at the decoder can be formulated as:

y(l, k) = IDCT{d̃(m, k)} + ĝ(l, k),  (32)

where IDCT(·) is the 8×8 block inverse DCT operator and y(l, k) is the decoded frame. The MAP estimator can be written as:

ƒ̂(n, tr) = arg maxƒ(n, tr) py(l, k1), . . . , y(l, kp)|ƒ(n, tr)(y(l, k1), . . . , y(l, kp)|ƒ(n, tr)) pƒ(n, tr)(ƒ(n, tr)).  (33)
Equation 33 implies that the desired solution maximizes the likelihood of both the high resolution image and the observed data given the estimated image. The conditional PDF py(l, k1), . . . , y(l, kp)|ƒ(n, tr)(·) captures, in the spatial domain, the statistical information about the noise and quantization, while the prior PDF again captures the statistics of the high resolution image.
For the conditional PDF, a Gaussian distribution can be assumed. Letting H(l, k; n, tr) be the operator applying the imaging and compression blocks described above to the high resolution image, the resulting cost function to be minimized is the weighted sum:

En, tr = Σk Σl [y(l, k) − Σn H(l, k; n, tr) ƒ(n, tr)]² + α Σn′∈N [ƒ(n, tr) − ƒ(n′, tr)]²,  (34)

where N represents the four neighbors of the pixel at (n, tr) and α determines the relative contribution of the conditional and prior PDFs. An iterated conditional modes (ICM) implementation of the algorithms of the spatial domain embodiments updates each pixel intensity iteratively by minimizing the weighted sum of exponents in Equation 34. One implementation of the spatial domain embodiments follows the same iterative procedure as the DCT domain implementation, except that the error back-projected onto the estimate is the error in the decoded pixel intensities rather than the error in the DCT coefficients.
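The cost of Equation 34 can be sketched as follows; H_apply is a hypothetical placeholder for the operator H(l, k; n, tr), and the smoothness term uses the four-neighbor pixel differences described above:

```python
import numpy as np

def spatial_cost(f, observations, H_apply, alpha):
    """Weighted sum of data and smoothness terms, in the form of Equation 34.

    H_apply(f, k) is a placeholder that applies the imaging/compression
    operator H for frame k; `observations` are the decoded frames y(l, k).
    """
    data = sum(np.sum((y - H_apply(f, k)) ** 2)
               for k, y in enumerate(observations))
    # Smoothness prior over the four-neighborhood of each pixel, written
    # as squared horizontal and vertical first differences.
    smooth = np.sum(np.diff(f, axis=0) ** 2) + np.sum(np.diff(f, axis=1) ** 2)
    return data + alpha * smooth
```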
The superprecision embodiments will now be discussed. This section presents a model that establishes the connection between a high-resolution source image and multiple low bit-depth image observations. This connection is used to enhance the gray scale resolution of the images, with continued reference to the imaging and compression models described above.
According to this model, the mapping from a high-resolution image to a low spatial-resolution image is preferably expressed as a weighted sum of the high-resolution image pixels, where the weights are the values of a space-invariant point-spread function (PSF) at the corresponding pixel locations. The center of the PSF depends upon the motion between the high-resolution image and the low-resolution images. This is depicted in the drawings and expressed as:

g(m1, m2, k) = Σn1 Σn2 h(m1, m2, k; n1, n2) ƒ(n1, n2).  (35)
Equation 35 provides the relation between a single high-resolution image and the set of low spatial-resolution images.
The bit-rate reduction block 720 reduces the bit depth of the low spatial-resolution images. Denoting the bit depth of the high resolution image as N1 and the bit depth of the observations as N2, the bit-rate reduction can be modeled as:

gN2(m1, m2, k) = round{g(m1, m2, k)/2^(N1−N2)},  (36)

where gN2(m1, m2, k) is the low bit-depth observation and round{·} denotes rounding to the nearest integer.
Letting δ(m1, m2, k) denote the error introduced by rounding, Equation 36 can be written as:

gN2(m1, m2, k) = g(m1, m2, k)/2^(N1−N2) + δ(m1, m2, k),  (37)
where
|δ(m1, m2, k)| < 0.5.  (38)
(Since the round {·} operator rounds to the nearest integer, the maximum error during the operation is 0.5.) If the images are transform coded (for example using JPEG or MPEG), a similar relation can be formulated in the transform domain. A common transform is the discrete cosine transform (DCT), which is here applied on 8×8 blocks. Taking an 8×8 block DCT of Equation 37 results in the following:
GN2(l1, l2, k) = G(l1, l2, k)/2^(N1−N2) + Δ(l1, l2, k),  (39)

where GN2(l1, l2, k), G(l1, l2, k), and Δ(l1, l2, k) are the 8×8 block DCTs of gN2(m1, m2, k), g(m1, m2, k), and δ(m1, m2, k), respectively. Using Equation 35, G(l1, l2, k) can in turn be written in terms of the high resolution image as follows:

G(l1, l2, k) = Σn1 Σn2 hDCT(l1, l2, k; n1, n2) ƒ(n1, n2),  (40)
where hDCT(l1, l2, k; n1, n2) is the block-DCT of h(m1, m2, k; n1, n2). Mathematically, the relation can be written as:

hDCT(l1, l2, k; n1, n2) = Σm1 Σm2 K((l1)8, (l2)8; (m1)8, (m2)8) h(m1, m2, k; n1, n2),  (41)

where the summation is over the 8×8 block of pixels m1 = 8L(l1), . . . , 8L(l1)+7 and m2 = 8L(l2), . . . , 8L(l2)+7, (·)8 is the modulo 8 operator, L(x) ≡ └x/8┘, and K(l1, l2; m1, m2) is the DCT kernel given by:

K(l1, l2; m1, m2) = (kl1 kl2/4) cos((2m1 + 1)l1π/16) cos((2m2 + 1)l2π/16),  (42)

where kl is the normalization factor:

k0 = 1/√2, and kl = 1 for l ≠ 0.  (43)
To summarize, we have two equations that relate the high resolution image to the low bit-depth observations and their DCT coefficients:

gN2(m1, m2, k) = round{Σn1 Σn2 h(m1, m2, k; n1, n2) ƒ(n1, n2)/2^(N1−N2)},  (44)

GN2(l1, l2, k) = Σn1 Σn2 hDCT(l1, l2, k; n1, n2) ƒ(n1, n2)/2^(N1−N2) + Δ(l1, l2, k).  (45)

These two equations can be used in POCS-based superprecision estimation algorithms that are formulated in the spatial and transform domains, according to a superprecision embodiment of the preferred embodiments of the invention. Equation 45 can be extended to include the compression process where the DCT coefficients are quantized according to quantization tables.
From Equation 44, it is seen that the value of 2^(N1−N2) gN2(m1, m2, k) lies within 2^(N1−N2)/2 of the weighted sum Σn1 Σn2 h(m1, m2, k; n1, n2) ƒ(n1, n2); that is, each low bit-depth observation bounds the high resolution image through the rounding operation.
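This bound is easy to verify numerically. The sketch below applies only the rounding relation (the blur sum of Equation 44 is omitted for brevity), with assumed illustrative bit depths N1 = 8 and N2 = 4:

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2 = 8, 4                          # high / low bit depths (assumed)
scale = 2.0 ** (N1 - N2)

g = rng.uniform(0, 255, size=(4, 4))   # ideal low resolution intensities
g_low = np.round(g / scale)            # observed low bit-depth image (Eq. 44)

# Rescaling the observation brackets the ideal value within half a step:
assert np.all(np.abs(g - scale * g_low) <= scale / 2)
```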
The image capture process described above therefore defines, for each observed pixel, a convex constraint set on the high resolution image.
Defining the residual as:

r(x)(m1, m2, k) = 2^(N1−N2) gN2(m1, m2, k) − Σn1 Σn2 h(m1, m2, k; n1, n2) x(n1, n2),  (46)

a convex constraint set can be written for an arbitrary image x(n1, n2) as:

C(m1, m2, k) = {x(n1, n2) : |r(x)(m1, m2, k)| ≤ δ0},  (47)

where δ0 = 2^(N1−N2)/2 is the bound implied by the rounding error. The projection operation onto these convex sets is given by:

x′(n1, n2) = x(n1, n2) + h(m1, m2, k; n1, n2) [r(x) − δ0]/Σn1 Σn2 h²(m1, m2, k; n1, n2), if r(x) > δ0;
x′(n1, n2) = x(n1, n2), if |r(x)| ≤ δ0;
x′(n1, n2) = x(n1, n2) + h(m1, m2, k; n1, n2) [r(x) + δ0]/Σn1 Σn2 h²(m1, m2, k; n1, n2), if r(x) < −δ0.  (48)
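One reading of the projection in Equation 48, for a single observed pixel, can be sketched as follows; the dense PSF array and the residual and bound definitions follow the derivation above, and the function name is illustrative:

```python
import numpy as np

def project(x, g_obs, h, scale):
    """Project image x onto the constraint set of one observation (Eq. 48 form).

    x     : current high resolution estimate
    g_obs : scalar low bit-depth observation for one low resolution pixel
    h     : PSF weight array for that observation, same shape as x
    This is a sketch of the bounded-residual POCS projection derived above.
    """
    delta0 = scale / 2.0                  # bound from the rounding error
    r = scale * g_obs - np.sum(h * x)     # residual for this observation
    hh = np.sum(h * h)
    if r > delta0:
        x = x + (r - delta0) * h / hh     # back-project excess error
    elif r < -delta0:
        x = x + (r + delta0) * h / hh
    return x                              # inside the set: unchanged
```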
Equation 45 provides the relation between the DCT coefficients and the high resolution image. When the DCT coefficients are also quantized during compression, the decoder has access only to the quantized coefficients:

G̃N2(l1, l2, k) = GN2(l1, l2, k) + Q(l1, l2, k),  (49)

where Q(l1, l2, k) is bounded by half of the quantization step size at location (l1, l2). Using Equation 45, the following is obtained:

G̃N2(l1, l2, k) = Σn1 Σn2 hDCT(l1, l2, k; n1, n2) ƒ(n1, n2)/2^(N1−N2) + Δ(l1, l2, k) + Q(l1, l2, k).  (50)
The sum of Δ(l1, l2, k) and Q(l1, l2, k) is bounded by B(l1, l2, k), which is equal to the sum of the bounds of Δ(l1, l2, k) and Q(l1, l2, k). That is,

|Δ(l1, l2, k) + Q(l1, l2, k)| < B(l1, l2, k),  (51)

where

B(l1, l2, k) = BΔ(l1, l2, k) + q(l1, l2, k)/2,  (52)

with BΔ(l1, l2, k) being the bound on Δ(l1, l2, k) implied by |δ(m1, m2, k)| < 0.5, and q(l1, l2, k) being the quantization step size at location (l1, l2).
Equation 50, along with the bound B(l1, l2, k) allows for the derivation of a POCS reconstruction algorithm in the compressed domain that is analogous to the spatial-domain algorithm derived in the previous section.
To create the projection operator, define the compressed-domain residual as:

r(x)DCT(l1, l2, k) = 2^(N1−N2) G̃N2(l1, l2, k) − Σn1 Σn2 hDCT(l1, l2, k; n1, n2) x(n1, n2).  (53)

The convex constraint set for an arbitrary image x(n1, n2) is then:

C(l1, l2, k) = {x(n1, n2) : |r(x)DCT(l1, l2, k)| ≤ 2^(N1−N2) B(l1, l2, k)}.

The projection operation onto these convex sets is analogous to Equation 48, with hDCT(l1, l2, k; n1, n2) in place of h(m1, m2, k; n1, n2) and 2^(N1−N2) B(l1, l2, k) in place of δ0.  (54)
Implementations of both the spatial- and transform-domain algorithms are described below. In the spatial-domain implementation the error back-projected is the error in the pixel intensities. In the transform-domain implementation it is the error in the DCT coefficients that is back-projected. One algorithm of the superprecision embodiments includes the following:
1. Choose a reference frame and bilinearly interpolate it to get an initial fine-grid image.
2. Extend the bit depth of this image to the required bit depth by multiplying each pixel intensity by 2^(N1−N2).
3. Compute the motion between the reference frame and one of the low bit-depth images, gN2(m1, m2, k).
4. Using the motion estimates, compute the mapping h(m1, m2, k; n1, n2) for each pixel in the current image gN2(m1, m2, k).
5. For each pixel (DCT coefficient) in the current image, compute the residual of Equation 46 (Equation 53) and apply the projection operation of Equation 48 (Equation 54) to update the high resolution image estimate.
6. Stop, if a stopping criterion is reached; otherwise, choose another low bit-depth image, and go to step 3.
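Pulling these steps together, the overall superprecision iteration can be sketched as follows; estimate_motion, build_psf, and project are hypothetical placeholders for steps 3 through 5, and the 2× nearest-neighbor interpolation in step 1 is one illustrative choice (the text uses bilinear interpolation):

```python
import numpy as np

def superprecision(frames, estimate_motion, build_psf, project,
                   n_low, n_high, n_passes=5):
    """Sketch of the superprecision POCS iteration described above.

    `estimate_motion`, `build_psf`, and `project` stand in for steps 3-5;
    their exact implementations depend on the motion model and on whether
    the spatial-domain or DCT-domain projection is used.
    """
    scale = 2.0 ** (n_high - n_low)
    # Step 1: interpolate a reference frame (nearest-neighbor for brevity).
    estimate = np.kron(frames[0].astype(float), np.ones((2, 2)))
    estimate *= scale                                        # step 2
    for _ in range(n_passes):
        for frame in frames:                                 # step 6 loop
            motion = estimate_motion(estimate, frame)         # step 3
            psf = build_psf(motion, estimate.shape, frame.shape)  # step 4
            estimate = project(estimate, frame, psf, scale)   # step 5
    return estimate
```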
It should be noted that, by construction, the superprecision embodiments have the potential to achieve both spatial and gray scale resolution enhancement at the same time. If the high-resolution image estimate has a finer grid than the observation, both spatial and gray scale resolution enhancement are achieved. If they have the same sampling grid, only gray scale resolution enhancement is achieved.
In some embodiments, blocking artifacts can be reduced. For example, at low bit rates in conventional systems, DCT coefficients are coarsely quantized. This coarse quantization with independent neighboring blocks gives rise to blocking artifacts (e.g., visible block boundaries). The preferred embodiments of the invention, when implemented, inherently perform block artifact reduction.
In addition, the preferred embodiments of the invention provide bit-rate scalability. That is, when implemented, the preferred embodiments have application from very low bit rate video to very high bit rate video, and thus the preferred embodiments are scalable with respect to bit rate, frame rate, image size, and video content, among other factors.
In one implementation, the software in memory 930 can include video quality enhancer 910, which provides executable instructions for implementing the resolution and quality enhancement methodology described above. The software in memory 930 may also include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions and operating system functions such as controlling the execution of other computer programs, providing scheduling, input-output control, file and data management, memory management, and communication control and related services.
When the decoder 900 is in operation, the microprocessor 928 is configured to execute software stored within the memory 930, to communicate data to and from the memory 930, and to generally control operations of the decoder 900 pursuant to the software.
When the video quality enhancer 910 is implemented in software, it should be noted that the video quality enhancer 910 can be stored on any computer readable medium for use by or in connection with any computer related system or method. In the context of this document, a computer readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by or in connection with a computer related system or method. The video quality enhancer 910 can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
In an alternative embodiment, where the video quality enhancer 910 is implemented in hardware, the video quality enhancer 910 can be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
It should be emphasized that the above-described embodiments of the present invention, particularly, any “preferred” embodiments, are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the invention. Many variations and modifications may be made to the above-described embodiment(s) of the invention without departing substantially from the spirit and principles of the invention. All such modifications and variations are intended to be included herein within the scope of this disclosure and the present invention and protected by the following claims.
This application claims priority to copending U.S. Provisional Application entitled, “RESOLUTION ENHANCEMENT AND ARTIFACT REDUCTION FOR MPEG VIDEO,” Ser. No. 60/286,455, filed Apr. 26, 2001, which is hereby incorporated in its entirety by reference.