The present patent application claims priority to and incorporates by reference the corresponding provisional patent application Ser. No. 61/026,453, titled, “Flicker Reduction in Video Sequences Using Temporal Processing,” filed on Feb. 5, 2008.
This application is related to the co-pending application entitled “Image/Video Quality Enhancement and Super-Resolution Using Sparse Transformations,” filed on Jun. 17, 2008, U.S. patent application Ser. No. 12/140,829, assigned to the corporate assignee of the present invention.
The present invention relates generally to processing of video sequences; more particularly, the present invention is related to reducing noise and/or flicker in video sequences.
Mosquito noise and temporal flicker can be introduced during acquisition due to camera limitations. Modules in the video processing pipeline, such as compression, downsampling, and upsampling, lead to blocking artifacts, aliasing, ringing, and temporal flicker. Image and video signal processing is widely used in a number of applications today, and some of these techniques have been used to reduce noise and temporal flicker.
A method and apparatus are disclosed herein for reducing flicker and/or noise in video sequences. In one embodiment, the method comprises receiving an input video and performing operations to reduce one or both of noise and flicker in the input video using spatial and temporal processing.
The present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention, which, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.
A method and apparatus for noise and/or flicker reduction in compressed/uncompressed video sequences are described. For purposes herein, a video sequence is made up of multiple images, referred to herein as frames, placed in order.
In one embodiment, the techniques disclosed herein include, but are not limited to: selecting a sub-frame at certain pixels from the current frame of the input video and finding another sub-frame from the past frame of the output video that satisfies a criterion; selecting a pixel-adaptive warped spatial transform and transforming the sub-frames into a spatial transform domain; deriving a detail-preserving adaptive threshold and thresholding the transform coefficients of the sub-frames from the current frame and the past frame using hard thresholding (set to zero if the magnitude of a transform coefficient is less than the threshold) or other thresholding techniques such as soft thresholding; further transforming the spatial-transform coefficients using a temporal transform and thresholding a selected subset of the temporal-transform coefficients; inverse transforming the temporal-transform coefficients first temporally and then spatially to get the processed sub-frames belonging to both the current frame and the past frame; and combining the processed sub-frames belonging to the current frame to obtain the current frame of the output video. These operations can be repeated for all the frames of the input video. A sketch of this pipeline follows.
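As a rough, minimal sketch of this pipeline, the following Python fragment processes a single 4×4 sub-frame pair, assuming an orthonormal 2-D DCT as the spatial transform and a 2-point Haar transform as the temporal transform; all names, thresholds, and transform choices here are illustrative assumptions, not fixed by the description.

```python
import numpy as np

M = 4

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def hard_threshold(v, t):
    """Zero out coefficients whose magnitude falls below the threshold."""
    return np.where(np.abs(v) < t, 0.0, v)

# Spatial transform H: separable 2-D DCT acting on flattened M^2-vectors.
C = dct_matrix(M)
H = np.kron(C, C)                       # M^2 x M^2, orthonormal

# Temporal transform G: 2-point orthonormal Haar.
G = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
z_cur = rng.normal(size=M * M)                   # sub-frame from current frame x
z_past = z_cur + 0.1 * rng.normal(size=M * M)    # matched sub-frame from past output

# Spatial transform, then detail-preserving thresholding of both sub-frames.
a_cur = hard_threshold(H @ z_cur, t=0.05)
a_past = hard_threshold(H @ z_past, t=0.05)

# Temporal transform across the two coefficient vectors; threshold only the
# second ("difference") column, a selected subset of the coefficients.
B = np.column_stack([a_cur, a_past]) @ G
B[:, 1] = hard_threshold(B[:, 1], t=0.05)

# Inverse temporal, then inverse spatial, transform (orthonormal: inverse = transpose).
D = B @ G.T
z_hat = H.T @ D[:, 0]    # processed sub-frame for the current frame
print(z_hat.shape)       # (16,)
```

The same flow repeats for a sub-frame pivoted at each selected pixel, and the processed sub-frames are then combined as described later.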
In the following description, numerous details are set forth to provide a more thorough explanation of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
Overview
Referring to FIG. 1, the process begins with processing logic receiving an input video (processing block 111).
In response to receiving the input video, processing logic performs operations to reduce one or both of noise and flicker in the input video using spatial and temporal processing (processing block 112). In one embodiment, these operations include applying a spatial transform and a temporal transform with adaptive thresholding of coefficients. In one embodiment, applying the spatial transform and the temporal transform comprises applying at least one warped transform to a sub-frame to create transform coefficients.
In the process described below, x denotes the current frame from the input video that is being processed by the techniques described herein, and ȳ denotes the past frame of the output video.
After frame x has been obtained (processing block 201), the sub-frame selection process of processing block 202 of FIG. 2 selects a sub-frame type for each pixel of interest from a library of sub-frame types.
In one embodiment, M is equal to 4 and the library of sub-frame types corresponds to a set of masks illustrated in FIG. 4, each mask defining the M² pixels that make up a sub-frame.
In one embodiment, the choice of the sub-frame type for a pixel is made by always choosing the sub-frame type corresponding to the regular mask. In another embodiment, the choice of the sub-frame type for a pixel is made, for each selected pixel, (1) by evaluating, for each sub-frame type, a 2-D DCT over the sub-frame formed, and (2) by choosing, for a given threshold T, the sub-frame type that minimizes the number of non-zero transform coefficients with magnitude greater than T. In yet another embodiment, the choice of the sub-frame type for a pixel is made by choosing, for each selected pixel, the sub-frame type that minimizes the warped row variance of pixel values averaged over all warped rows. In still another embodiment, the choice of the sub-frame type for a pixel is made by having, for a block of K×L pixels, each pixel vote for a sub-frame type (based on the sub-frame type that minimizes the warped row variance of pixel values averaged over all warped rows) and choosing the sub-frame type with the most votes for all the pixels in the K×L block, where K and L can be any integers greater than 0. In one embodiment, K and L are both set to 4. In still another embodiment, the choice of the sub-frame type for a pixel is made by forming, for each pixel, a block of K×L pixels and choosing a sub-frame type by using the preceding voting scheme on this block. In each case, the chosen sub-frame type is used for the current pixel. Thus, by using one of these measured statistics for each mask, the selection of a sub-frame type is performed; a sketch of the coefficient-count criterion follows.
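The following sketch illustrates the coefficient-count criterion from the second embodiment above: each candidate mask yields a sub-frame, and the mask whose 2-D DCT has the fewest coefficients above T wins. The sheared masks below are illustrative stand-ins for the library of sub-frame types, not the patent's actual masks.

```python
import numpy as np

M, T = 4, 8.0

def dct2(block):
    """Orthonormal 2-D DCT of a square block."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

def subframe(frame, r, c, row_shift):
    """Gather an MxM sub-frame pivoted at (r, c); each successive row is
    shifted horizontally by row_shift, mimicking a warped (sheared) mask."""
    return np.stack([frame[r + dr, c + dr * row_shift:c + dr * row_shift + M]
                     for dr in range(M)])

rng = np.random.default_rng(1)
frame = rng.normal(size=(32, 32)).cumsum(axis=1)   # smooth-ish test frame

best = None
for shift in (-1, 0, 1):                      # candidate sub-frame types (masks)
    z = subframe(frame, r=10, c=10, row_shift=shift)
    count = int(np.sum(np.abs(dct2(z)) > T))  # coefficients with magnitude > T
    if best is None or count < best[0]:
        best = (count, shift)
print("chosen sub-frame type (row shift):", best[1])
```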
Note that masks other than those shown in FIG. 4 can also be used.
Referring to FIG. 5, processing logic begins by having each pixel evaluate the selection statistic for each sub-frame type and mark its preferred sub-frame type.
Next, processing logic determines whether the choice is block-based (processing block 504). If processing logic determines the choice is block-based, processing logic counts the number of pixels that marked each sub-frame type in each block (processing block 506) and, for all pixels in a block, processing logic chooses the sub-frame type marked by most pixels in that block (processing block 507). In other words, if the choice is block-based, the sub-frame type marked by most pixels in a block is chosen for all pixels in that block. If processing logic determines the choice is not block-based, processing logic chooses, for each pixel, the sub-frame type marked by that pixel (processing block 505). In other words, each pixel chooses the sub-frame type marked by itself.
The choice of the sub-frame types for each pixel can be signaled within the vector OP.
The sub-frame type si is used to form a vector pi of pixel locations. An M²×1 vector zi, called a sub-frame, is formed with the pixel values of frame x at the locations corresponding to elements of pi; pixel i is called the pivot for sub-frame zi. A corresponding sub-frame z̄i is formed with pixel values of the past output frame ȳ at the locations corresponding to elements of pi, offset by mi.
The choice of mi can be made in a number of different ways. In one embodiment, mi is chosen by the p-norm search illustrated in FIG. 6 and described below.
In another embodiment, the sub-frame
Referring to FIG. 6, the process begins with processing logic selecting a set of candidate values {m1, m2, . . . , mK} for mi (processing block 601). Next, for each candidate mk, processing logic forms the corresponding sub-frame z̄ik from the past output frame ȳ (processing block 602) and computes the p-norm

‖zi−z̄ik‖p

(processing block 603). After computing the p-norms, processing logic selects mk such that it gives the least p-norm, sets mi equal to mk, and forms the sub-frame z̄i accordingly (processing block 604). A sketch of this search follows.
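A minimal sketch of this search, assuming (hypothetically) that the candidates for mi are a small window of 2-D displacements into the past output frame; the description leaves the candidate set open.

```python
import numpy as np

M, p = 4, 2

def get_subframe(frame, r, c):
    return frame[r:r + M, c:c + M].ravel()

rng = np.random.default_rng(2)
y_past = rng.normal(size=(32, 32))
# Current frame: the past frame shifted by (1, 2) plus a little noise.
x_cur = np.roll(y_past, shift=(1, 2), axis=(0, 1)) + 0.05 * rng.normal(size=(32, 32))

r, c = 12, 12
z_i = get_subframe(x_cur, r, c)

best = None
for dr in range(-3, 4):
    for dc in range(-3, 4):                          # candidate values of m
        z_bar = get_subframe(y_past, r + dr, c + dc)
        dist = np.linalg.norm(z_i - z_bar, ord=p)    # ||z_i - z-bar_ik||_p
        if best is None or dist < best[0]:
            best = (dist, (dr, dc))
print("selected m_i:", best[1])   # approximately (-1, -2), undoing the shift
```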
Spatial Transform Selection and Application
As part of processing block 204 of FIG. 2, processing logic selects a pixel-adaptive warped spatial transform Hi and applies it to the sub-frames zi and z̄i, producing the transform coefficients ei=Hi×zi and ēi=Hi×z̄i.
It should be noted that a separable transform becomes non-separable after it is warped. The choice of the transform can be fixed a priori or can be adaptive to the different sub-frames pivoted at different pixels. In the adaptive case, the chosen transform is the one that has the least number of coefficients in ei with absolute value greater than a master threshold T̄.
A flow diagram of one embodiment of a spatial transform selection process for a sub-frame is illustrated in FIG. 7.
Referring to FIG. 7, processing logic first determines whether the spatial transform is pixel-adaptive (processing block 701).
If processing logic determines the transform is pixel-adaptive, then, for each transform Hj in the library of transforms {H1, H2, . . . } (processing block 704), processing logic computes the transform coefficients ej using the formula:
ej=Hj×zi
(processing block 703).
The transform coefficients ej correspond to the transform Hj.
Next, for each j, processing logic counts the number of coefficients in ej with an absolute value greater than the master threshold T̄. Processing logic then chooses the transform Hk that gives the least count, sets Hi equal to Hk, and sets ei equal to ek.
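A sketch of this pixel-adaptive selection, assuming a two-transform library (2-D DCT and 2-D Hadamard, both orthonormal) purely for illustration:

```python
import numpy as np

M, T_master = 4, 0.5

def dct_matrix(n):
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def hadamard(n):
    """Orthonormal Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

C = dct_matrix(M)
library = [np.kron(C, C), np.kron(hadamard(M), hadamard(M))]  # H_1, H_2

rng = np.random.default_rng(3)
z_i = rng.normal(size=M * M)

counts = []
for H_j in library:
    e_j = H_j @ z_i                                   # e_j = H_j x z_i
    counts.append(int(np.sum(np.abs(e_j) > T_master)))
k = int(np.argmin(counts))                            # least significant-coefficient count
H_i, e_i = library[k], library[k] @ z_i
print("chosen transform index:", k, "counts:", counts)
```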
The choice of the spatial transform can be signaled within the vector OP.
Thresholding
As part of processing block 204 of FIG. 2, processing logic applies an adaptive threshold to the transform coefficients. In one embodiment, the hard thresholding operation is defined as

HT(x) = x if |x| ≥ T, and HT(x) = 0 otherwise,

where T is the threshold used. Similarly, the soft thresholding operation with T as the threshold is defined as

ST(x) = sign(x)·max(|x|−T, 0).
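In vectorized form, the two thresholding operations can be sketched as:

```python
import numpy as np

def hard_threshold(e, T):
    """HT: keep coefficients with |e| >= T, zero the rest."""
    return np.where(np.abs(e) < T, 0.0, e)

def soft_threshold(e, T):
    """ST: shrink every coefficient toward zero by T."""
    return np.sign(e) * np.maximum(np.abs(e) - T, 0.0)

e = np.array([-3.0, -0.4, 0.2, 1.5])
print(hard_threshold(e, 1.0))   # [-3.   0.   0.   1.5]
print(soft_threshold(e, 1.0))   # [-2.  -0.   0.   0.5]
```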
In alternative embodiments, the threshold T̂i1 is computed in one of a number of ways, each expressed as a function ƒ( ) of quantities such as the master threshold T̄. These parameters can be part of the side-information, or default values may be used. This can be viewed as a setting for the algorithm. In one embodiment, a default value can be obtained by tuning on a training set and choosing the value that achieves a local optimum in reconstructed image/video quality.
The value of T̂i1 can be signaled within the vector OP. In another embodiment, the choice of the option used for calculating T̂i1 can be signaled within the vector OP.
An adaptive threshold T̂i2 is applied on selected elements of ēi to get āi. In one embodiment, all the elements of ēi are selected. In another embodiment, all elements except the first element (usually the DC element) are selected. In still another embodiment, none of the elements are selected. The transform coefficients ēi are also thresholded using the master threshold T̄.
As with T̂i1, the threshold T̂i2 is computed in one of a number of ways, each expressed as a function ƒ( ) of quantities such as the master threshold T̄; these parameters can be part of the side-information, or default values may be used. This can be viewed as a setting for the algorithm. In one embodiment, a default value can be obtained by tuning on a training set and choosing the value that achieves a local optimum in reconstructed image/video quality.
In one embodiment, the value of T̂i2 is signaled within the vector OP. In another embodiment, the choice of the option used for calculating T̂i2 is signaled within the vector OP.
Temporal Transform Selection and Application
Processing logic in processing block 205 uses the results of the thresholding, namely vectors ai and āi, to form an M²×2 matrix ãi=[ai h(āi)]. For purposes herein, the function h( ) may be an identity function, a simple linear scaling of all the elements of āi to match brightness changes, or a more general function to capture more complex scene characteristics such as fades. Processing logic transforms ãi into bi using a pixel-adaptive temporal transform Gi; bi=ãi×Gi. The transform Gi can be chosen from a library of transforms. The transform is called pixel-adaptive because sub-frames pivoted at different pixels can use different transforms. In the adaptive case, the chosen transform is the one that has the least number of coefficients in bi with absolute value greater than a master threshold T̄.
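A sketch of this step, with h( ) taken to be a simple brightness-matching linear scaling and Gi a 2-point orthonormal Haar transform; both choices are illustrative assumptions consistent with the description above:

```python
import numpy as np

rng = np.random.default_rng(4)
a_cur = rng.normal(size=16)      # a_i: current sub-frame, after spatial thresholding
a_past = 1.1 * a_cur             # a-bar_i: past sub-frame, here 10% brighter

def h(v, scale=1.0 / 1.1):
    """Simple linear scaling to match brightness changes."""
    return scale * v

G = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # 2-point Haar

A = np.column_stack([a_cur, h(a_past)])   # a-tilde_i = [a_i h(a-bar_i)], M^2 x 2
b = A @ G                                 # b_i = a-tilde_i x G_i
# After brightness matching the columns agree, so the "difference" (second)
# column of b_i is near zero and is cheap to threshold away.
print(float(np.max(np.abs(b[:, 1]))))     # ~0
```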
Referring to FIG. 8, processing logic first determines whether the temporal transform is pixel-adaptive (processing block 801).
The choice of the temporal transform can be signaled within the vector OP.
If processing logic determines the transform is pixel-adaptive, then, for each transform Gj in the library of transforms {G1, G2, . . . } (processing block 804), processing logic computes the transform coefficients bj using the formula:
bj=ãi×Gj
(processing block 803).
The transform coefficients bj correspond to the transform Gj.
Next, for each j, processing logic counts the number of coefficients in bj with an absolute value greater than the master threshold T̄, chooses the transform Gk that gives the least count, and sets Gi equal to Gk and bi equal to bk.
Thresholding after Temporal Transform
After generating the transform coefficients bi, the transform coefficients bi are thresholded using a master threshold to produce ci.
In one embodiment, hard thresholding is used, as illustrated in FIG. 9.
The hard thresholding begins using a master threshold (processing block 901). In this manner, processing logic sets to zero all coefficients of bi with absolute values less than the master threshold and copies the remaining coefficients into ci.
In one embodiment, some elements of bi, selected a priori, are not thresholded and are copied directly into their respective positions in ci. In a specific embodiment, the elements in the first column of bi are not thresholded. The choice of the set of elements that are not thresholded can be signaled within the vector OP.
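A sketch of this thresholding, following the specific embodiment in which the first column of bi passes through unthresholded:

```python
import numpy as np

def threshold_temporal(b, T):
    """b: M^2 x 2 temporal-transform coefficients; returns c."""
    c = np.where(np.abs(b) < T, 0.0, b)   # hard thresholding
    c[:, 0] = b[:, 0]                     # first column is copied untouched
    return c

rng = np.random.default_rng(5)
b = rng.normal(scale=0.3, size=(16, 2))
c = threshold_temporal(b, T=0.5)
print(int(np.sum(c[:, 1] == 0)), "coefficients zeroed in the second column")
```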
In one embodiment, the elements cij∈ci are optionally enhanced by using the equation cij=cij*αj0+αj1, where the parameters αj0, αj1 are tuned on a training set to achieve a local optimum in reconstructed image/video quality. Note that such an operation occurs after processing block 206 in FIG. 2.
Inverse Transformation
After thresholding, processing logic inverse transforms (with a temporal transform) the coefficients using Gi−1 to obtain d̃i=[di d̄i]=ci×Gi−1 (processing block 207). Processing logic also applies an inverse spatial transform Hi−1 on di to obtain the processed sub-frame ẑi (processing block 208).
In one embodiment, the current frame is processed without using the past frame output by a previous iteration. In this embodiment, the vectors derived from the past frame are not formed, and only the spatial processing described above is applied to the current frame.
In another embodiment, a set of past frames {ȳ1, ȳ2, . . . } of the output video is used instead of a single past frame ȳ.
Combining Sub-Frames
After applying the inverse transform to the thresholded coefficients, all of the processed sub-frames are combined in a weighted fashion to form frame y. In one embodiment, a weight wi is computed for each processed sub-frame ẑi. In alternative embodiments, weights based on ei and ai are computed in one of several ways; two of the alternatives are parameterized by a constant emin and two by a constant nmin.
The processed sub-frames ẑi, 1≤i≤N (corresponding to all pixels), are combined together to form y in a weighted manner. One embodiment of this process is described for yj, the value of the jth pixel; a code sketch follows the flow description below.
Referring to FIG. 10, the process begins with processing logic initializing yj and nj to zero and setting the indices i and j equal to 1 (processing blocks 1001 and 1002).
After initialization, processing logic determines whether pixel j∈pi (processing block 1003). If it is, the process transitions to processing block 1004. If not, the process transitions to processing block 1005.
At processing block 1004, in one embodiment, processing logic updates yj and nj using ẑik, the value of pixel j in ẑi, and using the weight wi computed as described above. Here, k is equal to the index of pixel j in pi. In one embodiment, processing logic updates yj and nj based on the following equations:
yj=yj+wi×ẑik

nj=nj+wi
After processing logic updates yj and nj, the process transitions to processing block 1005.
At processing block 1005, processing logic checks whether the index i=N, the total number of pixels in the frame. If so, the process transitions to processing block 1007. If not, the process transitions to processing block 1006. At processing block 1006, the index is incremented by one and the process transitions to processing block 1003.
At processing block 1007, processing logic updates yj according to the equation yj=yj/nj. After updating yj, processing logic sets the index i equal to 1 (processing block 1008) and checks whether the index j is equal to N (processing block 1009). If it is, the process ends. If not, the process transitions to processing block 1010, where the index j is incremented by one. After incrementing the index j by one, the process transitions to processing block 1003.
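A compact sketch of the combination of FIG. 10, using uniform placeholder weights in place of the emin/nmin-based weights described above; with nothing changed in the sub-frames, the weighted average reduces to the identity, which makes the bookkeeping easy to check:

```python
import numpy as np

H_img, W_img, M = 16, 16, 4
rng = np.random.default_rng(6)
frame = rng.normal(size=(H_img, W_img))

y = np.zeros_like(frame)   # accumulates w_i * z-hat values per pixel
n = np.zeros_like(frame)   # accumulates the weights per pixel

for r in range(H_img - M + 1):
    for c in range(W_img - M + 1):          # sub-frames pivoted across the frame
        z_hat = frame[r:r + M, c:c + M]     # stand-in for a processed sub-frame
        w_i = 1.0                           # placeholder weight
        y[r:r + M, c:c + M] += w_i * z_hat  # y_j = y_j + w_i * z-hat_ik
        n[r:r + M, c:c + M] += w_i          # n_j = n_j + w_i

y = y / n                                   # y_j = y_j / n_j (processing block 1007)
print(np.allclose(y, frame))                # True: identity when sub-frames are unchanged
```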
The frame y is the output corresponding to the current input frame x. If there are more frames to process, processing logic updates the current input frame x, copies y into the past output frame ȳ, and repeats the process.
In one embodiment, the frame y undergoes further image/video processing in the pixel domain or a transform domain. In one embodiment, unsharp masking is performed on frame y to enhance high-frequency detail. In another embodiment, multiple blocks of size P×P pixels are formed from frame y, where P is an integer, and each P×P block f undergoes a block transform, such as a 2-D DCT or a 2-D Hadamard transform, to produce another P×P block h. The elements of the P×P block h, h(i,j), 0≤i,j≤P−1, are processed to form an enhanced P×P block ĥ such that ĥ(i,j)=h(i,j)*α(i,j). In alternative embodiments, the enhancement factor α(i,j) can be computed in one of the following ways:
a. α(i,j)=α0*(i+j)^β+α1

b. α(i,j)=α0*i^β*j^δ+α1
where the parameters (α0, α1, β and δ) are tuned on a training set to achieve a local optimum in reconstructed image/video quality. In one embodiment, the parameters can be signaled within the vector OP. Note that the above operations occur after processing block 210 of FIG. 2. A sketch of option (a) follows.
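A sketch of option (a), using an orthonormal 2-D DCT as the block transform and untuned, illustrative parameter values:

```python
import numpy as np

P, alpha0, alpha1, beta = 8, 0.05, 1.0, 1.0

def dct_matrix(n):
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

C = dct_matrix(P)
i = np.arange(P)[:, None]
j = np.arange(P)[None, :]
alpha = alpha0 * (i + j) ** beta + alpha1   # option (a): boosts higher frequencies

rng = np.random.default_rng(7)
f = rng.normal(size=(P, P))                 # one PxP block of frame y
h = C @ f @ C.T                             # 2-D DCT
h_hat = h * alpha                           # h-hat(i,j) = h(i,j) * alpha(i,j)
f_hat = C.T @ h_hat @ C                     # enhanced block, back in the pixel domain
print(f_hat.shape)                          # (8, 8)
```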
An Alternative Image Processing Embodiment
In an alternative embodiment, the process described in FIG. 2 is modified as illustrated in FIG. 12.
Referring to FIG. 12, the process begins with processing logic forming a frame ỹ from the current input frame x and the past output frame ȳ as

ỹ(j)=wz*x(j)−wy*ȳ(j+m),

where wz, wy are real numbers and m is an integer (processing block 1201). For purposes herein, the notation (j) denotes the value of pixel j (numbered in the raster scan order) in the frame of interest. For example, ȳ(j+m) denotes the value of pixel j+m in frame ȳ.
In alternative embodiments, the choice of m can be made in a number of different ways.
In one embodiment, the choice of m can be signaled within the vector OP.
In another embodiment, the frame ỹ is formed using a processed version of the past output frame ȳ.
Processing logic forms an M²×1 vector zi, called a sub-frame, with pixel values of frame x at locations corresponding to elements of pi. Pixel i is called the pivot for sub-frame zi (processing block 1202). An M²×1 vector denoted by z̃i is formed with pixel values of frame ỹ at locations corresponding to elements of pi.
Processing logic selects a spatial transform Hi and applies the spatial transform to sub-frames zi and z̃i to get the transform coefficients ei and ēi, respectively (processing block 1203).
Processing logic computes the adaptive threshold T̂i1 from the master threshold T̄ and applies it on selected elements of ei to get ai (processing block 1204).
After applying the adaptive threshold T̂i1 on selected elements of ei, processing logic forms a vector di using ai, ei, ēi, and the master threshold T̄.
In one embodiment, the choice of the option used for calculating dij is signaled within the vector OP.
Thereafter, processing logic applies the inverse spatial transform to the vector di to produce the sub-frame ẑi (processing block 1205), and the remainder of the processing blocks 1206, 1207, 1208, and 1209 operate as their respective counterparts 209, 210, 211, and 212 in FIG. 2.
For the embodiments described above, the optional parameter vector OP, or parts of it, can be signaled by any module including, but not limited to, a codec, a camera, a super-resolution processor, etc. One simple way to construct the parameter vector OP is as follows: each choice is signaled using two elements in the vector. For the nth choice, OP(2*n−1)=1 if the choice is signaled and 0 otherwise, and OP(2*n)=the value representing the choice. OP(2*n) needs to be set and is used only when OP(2*n−1)=1.
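A sketch of reading this flag/value layout (1-based indexing as in the text; the helper name is hypothetical):

```python
def read_choice(OP, n, default):
    """Read the nth choice from the parameter vector OP (1-based as in the text)."""
    flag = OP[2 * n - 2]                             # OP(2*n-1): is this choice signaled?
    return OP[2 * n - 1] if flag == 1 else default   # OP(2*n), else the default

OP = [1, 3,    # choice 1: signaled, value 3
      0, 0]    # choice 2: not signaled; fall back to the default
print(read_choice(OP, 1, default=7))   # 3
print(read_choice(OP, 2, default=7))   # 7
```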
The techniques described herein can be used to process a video sequence in any color representation including, but not limited to, RGB, YUV, YCbCr, YCoCg and CMYK. The techniques can be applied on any subset of the color channels (including the empty set or the set of all channels) in the color representation. In one embodiment, only the ‘Y’ channel in the YUV color representation is processed using the techniques described herein, while the U and V channels are filtered using a 2-D low-pass filter (e.g., the LL-band filter of the Le Gall 5/3 wavelet), as sketched below.
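A sketch of this chroma path, using the low-pass analysis kernel of the Le Gall 5/3 wavelet applied separably along rows and columns to approximate the LL-band filtering; the edge padding is an illustrative choice:

```python
import numpy as np

lp = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 8.0   # Le Gall 5/3 low-pass kernel

def ll_filter(chroma):
    """Separable 2-D low-pass: filter rows, then columns, keeping the size."""
    pad = len(lp) // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), lp, mode='valid'),
        axis=1, arr=chroma)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), lp, mode='valid'),
        axis=0, arr=tmp)

rng = np.random.default_rng(8)
U = rng.normal(size=(16, 16))      # one chroma channel
print(ll_filter(U).shape)          # (16, 16)
```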
The techniques described herein can be used to process only a pre-selected set of frames in a video sequence. In one embodiment, alternate frames are processed. In another embodiment, all frames belonging to one or more partitions of a video sequence are processed. The set of frames selected for processing can be signaled within OP.
In addition to the application of the techniques described herein to compressed/uncompressed video sequences, the techniques can also be applied to compressed video sequences that have undergone post-processing such as a non-linear denoising filter. Furthermore, the techniques can be applied to video sequences that are obtained by super-resolving a low-resolution compressed/uncompressed video sequence. The techniques can also be applied to video sequences that are either already processed or will be processed by a frame-rate conversion module.
An Example of a Computer System
Computer system 1400 comprises a communication mechanism or bus 1411 for communicating information, and a processor 1412 coupled with bus 1411 for processing information. System 1400 further comprises a random access memory (RAM), or other dynamic storage device 1404 (referred to as main memory), coupled to bus 1411 for storing information and instructions to be executed by processor 1412. Main memory 1404 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 1412.
Computer system 1400 also comprises a read only memory (ROM) and/or other static storage device 1406 coupled to bus 1411 for storing static information and instructions for processor 1412, and a data storage device 1407, such as a magnetic disk or optical disk and its corresponding disk drive. Data storage device 1407 is coupled to bus 1411 for storing information and instructions.
Computer system 1400 may further be coupled to a display device 1421, such as a cathode ray tube (CRT) or liquid crystal display (LCD), coupled to bus 1411 for displaying information to a computer user. An alphanumeric input device 1422, including alphanumeric and other keys, may also be coupled to bus 1411 for communicating information and command selections to processor 1412. An additional user input device is cursor control 1423, such as a mouse, trackball, trackpad, stylus, or cursor direction keys, coupled to bus 1411 for communicating direction information and command selections to processor 1412, and for controlling cursor movement on display 1421.
Another device that may be coupled to bus 1411 is hard copy device 1424, which may be used for marking information on a medium such as paper, film, or similar types of media. Another device that may be coupled to bus 1411 is a wired/wireless communication capability 1420 for communication with a phone or handheld palm device.
Note that any or all of the components of system 1400 and associated hardware may be used in the present invention. However, it can be appreciated that other configurations of the computer system may include some or all of the devices.
Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims which in themselves recite only those features regarded as essential to the invention.
Number | Name | Date | Kind |
---|---|---|---|
4442454 | Powell | Apr 1984 | A |
4447886 | Meeker | May 1984 | A |
5666209 | Abe | Sep 1997 | A |
5844611 | Hamano et al. | Dec 1998 | A |
5859788 | Hou | Jan 1999 | A |
6141054 | Lee et al. | Oct 2000 | A |
6438275 | Martins et al. | Aug 2002 | B1 |
7284026 | Nakayama | Oct 2007 | B2 |
7554611 | Zhou et al. | Jun 2009 | B2 |
20020028025 | Hong | Mar 2002 | A1 |
20050030393 | Tull | Feb 2005 | A1 |
20060050783 | Le Dinh et al. | Mar 2006 | A1 |
20070074251 | Oguz et al. | Mar 2007 | A1 |
20070160304 | Berkner et al. | Jul 2007 | A1 |
20070299897 | Reznik | Dec 2007 | A1 |
20080246768 | Murray et al. | Oct 2008 | A1 |
20090046995 | Kanumuri et al. | Feb 2009 | A1 |
20090060368 | Drezner et al. | Mar 2009 | A1 |
20090195697 | Kanumuri et al. | Aug 2009 | A1 |
Number | Date | Country |
---|---|---|
1665298 | Sep 2005 | CN |
1997104 | Jul 2007 | CN |
1531424 | May 2005 | EP |
06054172 | Feb 1994 | JP |
08-294001 | Nov 1996 | JP |
10-271323 | Oct 1998 | JP |
WO 2006127546 | Nov 2006 | WO |
WO 2007089803 | Aug 2007 | WO |
WO 2009100034 | Aug 2009 | WO |
Entry |
---|
Cho, et al., “Warped Discrete Cosine Transform and Its Application in Image Compression”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, No. 8, Dec. 2000. |
Gupta N. et al.: “Wavelet domain-based video noise reduction using temporal discrete cosine transform and hierarchically adapted thresholding”, IET Image Processing, vol. 1, No. 1, Mar. 6, 2007, pp. 2-12, XP006028283. |
Foi, Alessandro, et al., “Shape-Adaptive DCT for Image Denoising and Image Reconstruction”, Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning-Proceedings of SPIE—IS&T Electronic Imaging, Jan. 16, 2006, pp. 1-12, vol. 6064, Bellingham, WA, USA. |
Motwani, Mukesh C., et al., “Survey of Image Denoising Techniques”, Proceedings of the Global Signal Processing Expo and Conference, Sep. 27, 2004, pp. 1-8. |
PCT International Search Report for PCT Patent Application No. PCT/US09/32888, dated Jun. 25, 2009, 4 pages. |
PCT Written Opinion of the International Searching Authority for PCT Patent Application No. PCT/US09/32888, dated Jun. 25, 2009, 7 pages. |
Office Action mailed Oct. 27, 2011 for U.S. Appl. No. 12/140,829, filed Jun. 17, 2008, 30 pages. |
Final Office Action mailed May 2, 2012 for U.S. Appl. No. 12/140,829, filed Jun. 17, 2008, 30 pages. |
Office Action mailed May 30, 2012 for U.S. Appl. No. 12/239,195, filed Sep. 26, 2008, 9 pages. |
Final Office Action mailed Nov. 5, 2012 for U.S. Appl. No. 12/239,195, filed Sep. 26, 2008, 9 pages. |
Japanese Office Action for related Japanese Patent Application No. 2011-514565, Sep. 18, 2012, 4 pages. |
International Preliminary Report on Patentability for PCT Patent Application No. PCT/US2008/073203, Dec. 29, 2010, 8 pages. |
Boon, Choong S., et al., “Sparse super-resolution reconstructions of video from mobile devices in digital TV broadcast applications”, Proceedings of SPIE, Aug. 31, 2006, XP-002525249, pp. 63120M-1-63120M-12, vol. 6312. |
Guleryuz, Onur G., “Predicting Wavelet Coefficients Over Edge Using Estimates Based on Nonlinear Approximants”, Data Compression Conference 2004, Snowbird, Utah, Mar. 23, 2004, pp. 162-171. |
Seunghyeon, Rhee, et al., “Discrete cosine transform based regularized high-resolution image reconstruction algorithm”, Optical Engineering, SPIE, vol. 38, No. 9, Aug. 1999, XP-002525250, pp. 1348-1356. |
Guleryuz, Onur G., “Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising—Part I: Theory”, IEEE Trans. on Image Processing, Mar. 2006, vol. 15, No. 3, XP-002525251, pp. 539-554. |
Guleryuz, Onur G., “Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising—Part II: Adaptive Algorithms”, IEEE Trans. on Image Processing, Mar. 2006, vol. 15, No. 3, XP-002525252, pp. 555-571. |
Jiji, C.V., et al., “Single frame image super-resolution: should we process locally or globally?”, Multidimensional Systems and Signal Processing, Mar. 6, 2007, vol. 18, No. 2-3, pp. 123-152. |
Park, Min Kyu, et al., “Super-Resolution Image Reconstruction: A Technical Overview”, IEEE Signal Processing Magazine, May 1, 2003, vol. 20, No. 3, pp. 21-36. |
Hunt, B.R., et al., “Super-Resolution of Images: Algorithms, Principles, Performance”, International Journal of Imaging Systems and Technology, Dec. 21, 1995, vol. 6, No. 4, XP-001108818, pp. 297-304. |
International Search Report for PCT Patent Application No. PCT/US2008/073203, dated Jul. 17, 2009, 4 pages. |
Written Opinion of the International Searching Authority for PCT Patent Application No. PCT/US2008/073203, dated Jul. 17, 2009, 8 pages. |
ITU-T Recommendation H.264 & ISO/IEC 14496-10 (MPEG-4) AVC, “Advanced Video Coding for Generic Audiovisual Services”, version 3, 2005, 282 pages. |
Vatis, Y., et al., “Locally Adaptive Non-Separable Interpolation Filter for H.264/AVC”, IEEE ICIP, Oct. 2006, 4 pages. |
International Preliminary Report on Patentability for PCT Patent Application No. PCT/US2009/032888, Aug. 19, 2010, 7 pages. |
Korean Office Action for related Korean Patent Application No. 2010-7017838, Jun. 29, 2012, 5 pages. |
Chinese Office Action for related Chinese Patent Application No. 200980103952.3, Jul. 9, 2012, 6 pages. |
Notification of Transmittal of International Search Report and the Written Opinion for PCT Patent Application No. PCT/US2009/032890, Sep. 10, 2012, 5 pages. |
Written Opinion of the International Searching Authority for PCT Patent Application No. PCT/US2009/032890, Sep. 10, 2012, 10 pages. |
Rusanovsky, et al., “Video Denoising Algorithm in Sliding 3D DCT Domain”, Advanced Concepts for Intelligent Vision Systems Lecture Notes in Computer Science, Springer. Berlin, DE. XP019019728. ISBN: 978-3-540-29032-2; Sections 2-3; Jan. 1, 2005, pp. 618-625. |
Katkovnik, et al., “Mix-Distribution Modeling for Overcomplete Denoising”, 9th IFAC Workshop on Adaptation and Learning in Control and Signal Processing, Jan. 1, 2007, vol. 9, pp. 1-6. |
Yaroslavsky, “Local Adaptive Image Restoration and Enhancement with the Use of DFT and DCT in a Running Window”, Proceedings of SPIE, Jan. 1, 1996, vol. 2825, pp. 2-13. |
Yaroslavsky, et al., “Transform Domain Image Restoration Methods: Review, Comparison, and Interpretation”, Proceedings of SPIE, Jan. 1, 2001, vol. 4304, pp. 155-169. |
Mozafari, et al., “An Efficient Recursive Algorithm and an Explicit Formula for Calculating Update Vectors of Running Walsh-Hadamard Transform”, IEEE 9th International Symposium on Signal Processing and its Applications, Feb. 12, 2007, pp. 1-4. |
Kober, “Fast Algorithms for the Computation of Sliding Discrete Sinusoidal Transforms”, IEEE Transactions on Signal Processing, Jun. 1, 2004, vol. 52, No. 6, 8 pages. |
International Preliminary Report on Patentability for PCT Patent Application No. PCT/US2009/032890, Sep. 25, 2012, 9 pages. |
Notification Concerning Transmittal of International Preliminary Report on Patentability for PCT Patent Application No. PCT/US2009/032890, Oct. 4, 2012, 10 pages. |
Chinese Office Action for related Chinese Patent Application No. 200980103952.3, Jan. 4, 2012, 13 pgs. English Translation. |
Korean Office Action for related Korean Patent Application No. 2010-7017838, Nov. 16, 2011, 6 pgs. English Translation. |
Hong, et al., “Image Compression Technology and Techniques”, Apr. 1988, 31 pgs., Sensory Intelligence Group, Robot Systems Division, National Bureau of Standards, Gaithersburg, MD 20899. |
Kanumuri, et al., “Fast super-resolution reconstructions of mobile video using warped transforms and adaptive thresholding”, DoCoMo Communications Laboratories USA, Inc. Palo Alto, CA 94304, 2007. |
Seo, Hae Jong, et al., “Video Denoising Using Higher Order Optimal Space-Time Adaptation,” IEEE: ICASSP 2008, pp. 1249-1252, 2008. |
Guleryuz, Onur G., “Weighted Averaging for Denoising with Overcomplete Dictionaries,” pp. 1-24, 2007. |
Naranjo, Valery, et al., “Flicker Reduction in Old Films,” Proceedings of the International Conference on Image Processing, 2000, pp. 657-659. |
Becker, A., et al., “Flicker Reduction in Intraframe Codecs,” Proceedings of the Data Compression Conference, 2004, pp. 252-261. |
Dabov, Kostadin, et al., “Video Denoising by Sparse 3D Transform-Domain Collaborative Filtering,” Proceedings of the 15th European Signal Processing conference, 2007. |
Abbas, Houssam, et al., “Suppression of Mosquito Noise by Recursive Epsilon-Filters,” IEEE: ICASSP 2007, pp. 773-776. |
Kuszpet, Yair, et al., “Post-Processing for Flicker Reduction in H.264/AVC,” 4 pages, 2007. |
Protter, Matan, et al., “Sparse and Redundant Representations and Motion-Estimation-Free Algorithm for Video Denoising,” 12 pages, 2007. |
Number | Date | Country | |
---|---|---|---|
20090195697 A1 | Aug 2009 | US |
Number | Date | Country | |
---|---|---|---|
61026453 | Feb 2008 | US |