The present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for example-based data pruning for improving video compression efficiency.
Data pruning is a video preprocessing technique that achieves better video coding efficiency by removing a portion of the input video data before the video data is encoded. The removed video data is recovered at the decoder side by inferring it from the decoded data. There have been some prior efforts relating to the use of data pruning to increase compression efficiency. For example, in a first approach (described in A. Dumitras and B. G. Haskell, “A Texture Replacement Method at the Encoder for Bit Rate Reduction of Compressed Video,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 2, February 2003, pp. 163-175) and a second approach (described in A. Dumitras and B. G. Haskell, “An encoder-decoder texture replacement method with application to content-based movie coding,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, Issue 6, June 2004, pp. 825-840), a texture replacement based method is used to remove texture regions at the encoder side and re-synthesize them at the decoder side. Compression efficiency is gained because only synthesis parameters, which amount to less data than the regular transform coefficients, are sent to the decoder.
In a third approach (described in C. Zhu, X. Sun, F. Wu, and H. Li, “Video Coding with Spatio-Temporal Texture Synthesis,” IEEE International Conference on Multimedia and Expo (ICME), 2007) and a fourth approach (described in C. Zhu, X. Sun, F. Wu, and H. Li, “Video coding with spatio-temporal texture synthesis and edge-based inpainting,” IEEE International Conference on Multimedia and Expo (ICME), 2008), spatio-temporal texture synthesis and edge-based inpainting are used to remove some of the regions at the encoder side, and the removed content is recovered at the decoder side with the help of metadata, such as region masks. However, the third and fourth approaches require modifying the encoder and decoder so that they can selectively perform encoding/decoding for some of the regions using the region masks. They are therefore not exactly out-of-loop approaches, because the encoder and decoder must be modified to perform them. In a fifth approach (described in Dung T. Vo, Joel Sole, Peng Yin, Cristina Gomila and Truong Q. Nguyen, “Data Pruning-Based Compression using High Order Edge-Directed Interpolation,” IEEE Conference on Acoustics, Speech and Signal Processing, Taiwan, R.O.C., 2009), a line removal based method is proposed to rescale a video to a smaller size by selectively removing some of the horizontal or vertical lines in the video within a least-squares minimization framework. The fifth approach is an out-of-loop approach and does not require modification of the encoder/decoder. However, completely removing certain horizontal and vertical lines may result in a loss of information or details for some videos.
Furthermore, some preliminary research on data pruning for video compression has been conducted. For example, in a sixth approach—described in Sitaram Bhagavathy, Dong-Qing Zhang and Mithun Jacob, “A Data Pruning Approach for Video Compression Using Motion-Guided Down-sampling and Super-resolution,” submitted to ICIP 2010 on Feb. 8, 2010, filed as a co-pending commonly-owned U.S. Provisional Patent Application (Ser. No. 61/297,320) on Jan. 22, 2010 (Technicolor docket number PU100004)—a data pruning scheme using sampling-based super-resolution is presented. The full-resolution frame is sampled into several smaller-sized frames, thereby reducing the spatial size of the original video. At the decoder side, the high-resolution frame is re-synthesized from the downsampled frames with the help of metadata received from the encoder side. In a seventh approach—described in Dong-Qing Zhang, Sitaram Bhagavathy, and Joan Llach, “Data pruning for video compression using example-based super-resolution,” filed as a co-pending commonly-owned U.S. Provisional Patent Application (Ser. No. 61/336,516) on Jan. 22, 2010 (Technicolor docket number PU100014)—an example-based super-resolution method for data pruning is presented. A representative patch library is trained from the original video. Afterwards, the video is downsized to a smaller size. The downsized video and the patch library are sent to the decoder side. The recovery process at the decoder side super-resolves the downsized video by example-based super-resolution using the patch library. However, as there is substantial redundancy between the patch library and the downsized frames, it has been discovered that a substantial compression gain may not easily be obtained with the seventh approach.
This application discloses methods and apparatus for example-based data pruning to improve video compression efficiency.
According to an aspect of the present principles, there is provided an apparatus for encoding a picture in a video sequence. The apparatus includes a patch library creator for creating a first patch library from an original version of the picture and a second patch library from a reconstructed version of the picture. Each of the first patch library and the second patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during a recovery of a pruned version of the picture. The apparatus also includes a pruner for generating the pruned version of the picture from the first patch library, and a metadata generator for generating metadata from the second patch library. The metadata is for recovering the pruned version of the picture. The apparatus further includes an encoder for encoding the pruned version of the picture and the metadata.
According to another aspect of the present principles, there is provided a method for encoding a picture in a video sequence. The method includes creating a first patch library from an original version of the picture and a second patch library from a reconstructed version of the picture. Each of the first patch library and the second patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during a recovery of a pruned version of the picture. The method also includes generating the pruned version of the picture from the first patch library, and generating metadata from the second patch library. The metadata is for recovering the pruned version of the picture. The method further includes encoding the pruned version of the picture and the metadata.
According to still another aspect of the present principles, there is provided an apparatus for recovering a pruned version of a picture in a video sequence. The apparatus includes a divider for dividing the pruned version of the picture into a plurality of non-overlapping blocks, and a metadata decoder for decoding metadata for use in recovering the pruned version of the picture. The apparatus also includes a patch library creator for creating a patch library from a reconstructed version of the picture. The patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during a recovery of the pruned version of the picture. The apparatus further includes a search and replacement device for performing a searching process using the metadata to find a corresponding patch for a respective one of the one or more pruned blocks from among the plurality of non-overlapping blocks and replace the respective one of the one or more pruned blocks with the corresponding patch.
According to a further aspect of the present principles, there is provided a method for recovering a pruned version of a picture in a video sequence. The method includes dividing the pruned version of the picture into a plurality of non-overlapping blocks, and decoding metadata for use in recovering the pruned version of the picture. The method also includes creating a patch library from a reconstructed version of the picture. The patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during a recovery of the pruned version of the picture. The method further includes performing a searching process using the metadata to find a corresponding patch for a respective one of the one or more pruned blocks from among the plurality of non-overlapping blocks and replace the respective one of the one or more pruned blocks with the corresponding patch.
According to a still further aspect of the present principles, there is provided an apparatus for encoding a picture in a video sequence. The apparatus includes means for creating a first patch library from an original version of the picture and a second patch library from a reconstructed version of the picture. Each of the first patch library and the second patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during a recovery of a pruned version of the picture. The apparatus also includes means for generating the pruned version of the picture from the first patch library, and means for generating metadata from the second patch library, the metadata for recovering the pruned version of the picture. The apparatus further includes means for encoding the pruned version of the picture and the metadata.
According to an additional aspect of the present principles, there is provided an apparatus for recovering a pruned version of a picture in a video sequence. The apparatus includes means for dividing the pruned version of the picture into a plurality of non-overlapping blocks, and means for decoding metadata for use in recovering the pruned version of the picture. The apparatus also includes means for creating a patch library from a reconstructed version of the picture. The patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during a recovery of the pruned version of the picture. The apparatus further includes means for performing a searching process using the metadata to find a corresponding patch for a respective one of the one or more pruned blocks from among the plurality of non-overlapping blocks and replace the respective one of the one or more pruned blocks with the corresponding patch.
These and other aspects, features and advantages of the present principles will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
The present principles may be better understood in accordance with the following exemplary figures, in which:
The present principles are directed to methods and apparatus for example-based data pruning for improving video compression efficiency.
The present description illustrates the present principles. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the present principles and are included within its spirit and scope.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the present principles and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
Reference in the specification to “one embodiment” or “an embodiment” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as is readily apparent to one of ordinary skill in this and related arts, for as many items listed.
Also, as used herein, the words “picture” and “image” are used interchangeably and refer to a still image or a picture from a video sequence. As is known, a picture may be a frame or a field.
Turning to
Turning to
A first output of an encoder controller 205 is connected in signal communication with a second input of the frame ordering buffer 210, a second input of the inverse transformer and inverse quantizer 250, an input of a picture-type decision module 215, a first input of a macroblock-type (MB-type) decision module 220, a second input of an intra prediction module 260, a second input of a deblocking filter 265, a first input of a motion compensator 270, a first input of a motion estimator 275, and a second input of a reference picture buffer 280.
A second output of the encoder controller 205 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 230, a second input of the transformer and quantizer 225, a second input of the entropy coder 245, a second input of the output buffer 235, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 240.
An output of the SEI inserter 230 is connected in signal communication with a second non-inverting input of the combiner 290.
A first output of the picture-type decision module 215 is connected in signal communication with a third input of the frame ordering buffer 210. A second output of the picture-type decision module 215 is connected in signal communication with a second input of a macroblock-type decision module 220.
An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 240 is connected in signal communication with a third non-inverting input of the combiner 290.
An output of the inverse transformer and inverse quantizer 250 is connected in signal communication with a first non-inverting input of a combiner 219. An output of the combiner 219 is connected in signal communication with a first input of the intra prediction module 260 and a first input of the deblocking filter 265. An output of the deblocking filter 265 is connected in signal communication with a first input of a reference picture buffer 280. An output of the reference picture buffer 280 is connected in signal communication with a second input of the motion estimator 275 and a third input of the motion compensator 270. A first output of the motion estimator 275 is connected in signal communication with a second input of the motion compensator 270. A second output of the motion estimator 275 is connected in signal communication with a third input of the entropy coder 245.
An output of the motion compensator 270 is connected in signal communication with a first input of a switch 297. An output of the intra prediction module 260 is connected in signal communication with a second input of the switch 297. An output of the macroblock-type decision module 220 is connected in signal communication with a third input of the switch 297. The third input of the switch 297 determines whether or not the “data” input of the switch (as compared to the control input, i.e., the third input) is to be provided by the motion compensator 270 or the intra prediction module 260. The output of the switch 297 is connected in signal communication with a second non-inverting input of the combiner 219 and an inverting input of the combiner 285.
A first input of the frame ordering buffer 210 and an input of the encoder controller 205 are available as inputs of the encoder 200, for receiving an input picture. Moreover, a second input of the Supplemental Enhancement Information (SEI) inserter 230 is available as an input of the encoder 200, for receiving metadata. An output of the output buffer 235 is available as an output of the encoder 200, for outputting a bitstream.
Turning to
A second output of the entropy decoder 345 is connected in signal communication with a third input of the motion compensator 370, a first input of the deblocking filter 365, and a third input of the intra predictor 360. A third output of the entropy decoder 345 is connected in signal communication with an input of a decoder controller 305. A first output of the decoder controller 305 is connected in signal communication with a second input of the entropy decoder 345. A second output of the decoder controller 305 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 350. A third output of the decoder controller 305 is connected in signal communication with a third input of the deblocking filter 365. A fourth output of the decoder controller 305 is connected in signal communication with a second input of the intra prediction module 360, a first input of the motion compensator 370, and a second input of the reference picture buffer 380.
An output of the motion compensator 370 is connected in signal communication with a first input of a switch 397. An output of the intra prediction module 360 is connected in signal communication with a second input of the switch 397. An output of the switch 397 is connected in signal communication with a first non-inverting input of the combiner 325.
An input of the input buffer 310 is available as an input of the decoder 300, for receiving an input bitstream. A first output of the deblocking filter 365 is available as an output of the decoder 300, for outputting an output picture.
As noted above, the present principles are directed to methods and apparatus for example-based data pruning for improving video compression efficiency. Advantageously, the present principles provide an improvement over the aforementioned seventh approach. That is, the present application discloses a concept of training the patch library at the decoder side using previously sent frames or existing frames, rather than sending the patch library through a communication channel as per the seventh approach. Also, the data pruning is realized by replacing some blocks in the input frames with flat regions to create “mixed resolution” frames.
In an embodiment, the present principles advantageously provide for the use of a patch example library trained from a pool of training images/frames to prune a video and recover the pruned video. The patch example library can be considered as an extension of the concept of a reference frame. Therefore, the patch example library idea can be also used in conventional video encoding schemes. In an embodiment, the present principles use error-bounded clustering (e.g., modified K-means clustering) for efficient patch searching in the library.
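For concreteness, the following is a minimal sketch of one way such error-bounded clustering could be implemented; it is an illustration under stated assumptions, not the exact algorithm of the present principles. A K-means-style loop is modified so that any cluster whose worst member exceeds a chosen error bound is split by promoting that member to a new center. The splitting heuristic, function names, and toy data are all assumptions made for illustration.

```python
import numpy as np

def error_bounded_clustering(patches, error_bound, n_iters=20):
    """K-means-style clustering that keeps splitting clusters until every
    patch lies within `error_bound` (Euclidean distance) of its center."""
    centers = patches[:1].copy()
    labels = np.zeros(len(patches), dtype=int)
    for _ in range(n_iters):
        # Assignment step: nearest center for each patch.
        dists = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute centers (keep the old one if a cluster empties).
        centers = np.stack([
            patches[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])
        # Error-bound step: split clusters whose worst member violates the bound.
        splits = []
        for k in range(len(centers)):
            members = patches[labels == k]
            if len(members):
                err = np.linalg.norm(members - centers[k], axis=1)
                if err.max() > error_bound:
                    splits.append(members[err.argmax()])  # promote outlier to a center
        if not splits:
            break
        centers = np.vstack([centers, np.stack(splits)])
    return centers, labels

# Toy usage: 500 random 4x4 grayscale patches, flattened to 16-dim vectors.
patches = np.random.rand(500, 16).astype(np.float32)
centers, labels = error_bounded_clustering(patches, error_bound=0.9)
```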
Moreover, in an embodiment, the present principles advantageously provide a mixed-resolution data-pruning scheme, where blocks are replaced by flat blocks to reduce the high-frequency signal to improve compression efficiency. To increase the efficiency of the metadata (best-match patch position in library) encoding, the present principles use patch signature matching, a matching rank list, and rank number encoding.
Additionally, in an embodiment, the present principles advantageously provide a strategy of encoding pruned block IDs using a flat block identification scheme based on color variation.
Thus, in accordance with the present principles, a novel method, referred to herein as example-based data pruning, is provided for pruning an input video so that the video can be more efficiently encoded by video encoders. In an embodiment, the method involves creating a library of patches as examples, and using the patch library to recover a video frame in which some blocks in the frame are replaced with low-resolution blocks or flat blocks. The framework includes the methods to create the patch library, prune the video, recover the video, as well as encode the metadata needed for recovery.
Referring to
At the encoder side, the patch library created from the original frame is used to prune the blocks, whereas the patch library created from the reconstructed frame is used to encode metadata. The reason for using the patch library created from the reconstructed frame is to ensure that the patch libraries used for encoding and decoding the metadata are identical at the encoder side and the decoder side.
For the patch library created using the original frames, a clustering algorithm is performed to group the patches so that the patch search process during pruning can be carried out efficiently. Pruning is a process that modifies the source video using the patch library so that fewer bits are sent to the decoder side. Pruning is realized by dividing a video frame into blocks and replacing some of the blocks with low-resolution or flat blocks. The pruned frame is then taken as the input for a video encoder. An exemplary video encoder to which the present principles may be applied is shown in
Referring back to
Turning to
Turning to
The patch library is a pool of high-resolution patches that can be used to recover pruned image blocks. Turning to
To speed up computation, the horizontal and vertical dimensions of the training frames are reduced to one quarter of the original size. Also, the clustering process is performed on the patches in the downsized frames. In one exemplary embodiment, the size of the high-resolution patches is 16×16 pixels, and the size of the downsized patches is 4×4 pixels. Therefore, the downsize factor is 4. Of course, other sizes can be used, while maintaining the spirit of the present principles.
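To make the sizes concrete, here is a hedged sketch of building the two co-indexed patch sets: full-resolution 16×16 patches alongside 4×4 downsized counterparts used for clustering. The overlap step, the box-filter downsizing applied per patch (rather than downsizing the whole frame first, which the text describes and is essentially equivalent), and the random stand-in frame are assumptions.

```python
import numpy as np

def build_patch_library(frame, patch=16, factor=4, step=8):
    """Collect 16x16 high-resolution patches plus a 4x4 downsized copy of
    each (downsize factor 4) for use in clustering and search."""
    hi, lo, coords = [], [], []
    H, W = frame.shape
    for y in range(0, H - patch + 1, step):
        for x in range(0, W - patch + 1, step):
            p = frame[y:y + patch, x:x + patch]
            # Box-filter downsizing: average each factor x factor cell.
            d = p.reshape(patch // factor, factor, patch // factor, factor).mean(axis=(1, 3))
            hi.append(p.ravel())
            lo.append(d.ravel())
            coords.append((y, x))  # coordinates link corresponding patches across libraries
    return np.array(hi), np.array(lo), coords

frame = np.random.rand(64, 64).astype(np.float32)  # stand-in for a training frame
hi_patches, lo_patches, coords = build_patch_library(frame)
```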
For the patch library for metadata encoding, the clustering process and clean-up process are not performed; therefore, it includes all possible patches from the reconstructed frame. However, for every patch in the patch library created from the original frames, it is possible to find its corresponding patch in the patch library created from the reconstructed frame using the coordinates of the patches. This ensures that metadata encoding can be performed correctly. At the decoder side, the same patch library, without clustering, is created using the same decoded video frames for metadata decoding and pruned block recovery.
For the patch libraries created using decoded frames at both encoder and decoder sides, another process is conducted to create the signatures of the patches. The signature of a patch is a feature vector that includes the average color of the patch and the surrounding pixels of the patch. The patch signatures are used for the metadata encoding process to more efficiently encode the metadata, and used in the recovery process at the decoder side to find the best-match patch and more reliably recover the pruned content. Turning to
The metadata encoding process is described herein below. In the pruned frame, sometimes the neighboring blocks of a pruned block used for recovery or metadata encoding are also pruned. In that case, the set of surrounding pixels used as the signature for the search in the patch library includes only the pixels from the non-pruned blocks. If all the neighboring blocks are pruned, then only the average color 701 is used as the signature. This may result in poor patch matches, since too little information is used for patch matching; that is why the neighboring non-pruned pixels 702 are important.
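A sketch of how such a signature vector might be assembled is shown below; the one-pixel border width, the grayscale frame, and all names are illustrative assumptions rather than the exact signature definition of the present principles.

```python
import numpy as np

def border_coords(y, x, size, H, W):
    """One-pixel ring just outside the block, clipped to the frame."""
    for by in range(max(y - 1, 0), min(y + size + 1, H)):
        for bx in range(max(x - 1, 0), min(x + size + 1, W)):
            if by < y or by >= y + size or bx < x or bx >= x + size:
                yield by, bx

def block_signature(frame, y, x, size, pruned_mask):
    """Signature = [average color of the block] followed by the surrounding
    pixels that belong to non-pruned regions (per-pixel boolean mask)."""
    block = frame[y:y + size, x:x + size]
    sig = [block.mean()]                       # average-color component
    for by, bx in border_coords(y, x, size, *frame.shape):
        if not pruned_mask[by, bx]:            # keep only non-pruned pixels
            sig.append(frame[by, bx])
    return np.array(sig, dtype=np.float32)     # average color alone if all neighbors pruned

frame = np.random.rand(64, 64).astype(np.float32)
pruned_mask = np.zeros_like(frame, dtype=bool)
sig = block_signature(frame, 16, 16, 16, pruned_mask)
```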
Similar to standard video encoding algorithms, the input video frames are divided into Groups of Pictures (GOPs). The pruning process is conducted on the first frame of a GOP, and the pruning result is then propagated to the rest of the frames in the GOP.
Turning to
Turning to
Thus, the input frame is first divided into non-overlapping blocks per step 910. The size of the block is the same as the size of the macroblock used in standard compression algorithms—a size of 16×16 pixels is employed in the exemplary implementation disclosed herein. A search process then follows to find the best-match patch in the patch library per step 920. This search process is illustrated in
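A search against a clustered library is commonly a two-stage nearest-neighbor lookup, and a sketch under that assumption follows; the cluster layout and toy data are illustrative, not the exact search of the present principles.

```python
import numpy as np

def find_best_patch(query, centers, cluster_members, library):
    """Two-stage search: nearest cluster center first, then an exhaustive
    scan inside that cluster only."""
    k = int(np.linalg.norm(centers - query, axis=1).argmin())
    idxs = cluster_members[k]                       # library indices in cluster k
    d = np.linalg.norm(library[idxs] - query, axis=1)
    return int(idxs[d.argmin()]), float(d.min())

library = np.random.rand(100, 16).astype(np.float32)
centers = np.random.rand(5, 16).astype(np.float32)
cluster_members = [np.arange(i, 100, 5) for i in range(5)]  # toy assignment
best_idx, distortion = find_best_patch(library[7], centers, cluster_members, library)
# The block is marked for pruning only if `distortion` is below a threshold.
```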
After the blocks are identified for pruning, a process is conducted to prune the blocks. There could be different pruning strategies for the blocks that need to be pruned—for example, replacing the high-resolution blocks with low-resolution blocks. However, it has been discovered that it may be difficult for this approach to achieve a significant compression efficiency gain. Therefore, in a preferred embodiment disclosed herein, a high-resolution block is simply replaced with a flat block, in which all pixels have the same color value (i.e., the average of the color values of the pixels in the original block). The block replacement process creates a video frame in which some parts have high resolution and other parts have low resolution; therefore, such a frame is called a “mixed-resolution” frame (for more details on the mixed-resolution pruning scheme, see the co-pending commonly-owned International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR ENCODING VIDEO SIGNALS FOR BLOCK-BASED MIXED-RESOLUTION DATA PRUNING FOR IMPROVING VIDEO COMPRESSION EFFICIENCY filed on Mar. ______, 2011 (Technicolor Docket No. PU100194)). Turning to
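The flat-block replacement itself reduces to filling each selected block with its average color, as in this small sketch (grayscale frame and names assumed for illustration):

```python
import numpy as np

def prune_to_mixed_resolution(frame, prunable_blocks, size=16):
    """Replace each selected block with a flat block holding the block's
    mean color, yielding a 'mixed-resolution' frame."""
    out = frame.copy()
    for (y, x) in prunable_blocks:
        out[y:y + size, x:x + size] = frame[y:y + size, x:x + size].mean()
    return out

frame = np.random.rand(64, 64).astype(np.float32)
pruned_frame = prune_to_mixed_resolution(frame, [(0, 0), (16, 32)])
```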
Metadata encoding includes two components (see
Turning to
Turning to
Turning to
During the pruning process, for each block, the system searches for the best-match patch in the patch library and outputs the index of the found patch if the distortion is less than a threshold. Each patch is associated with its signature (i.e., its average color plus the surrounding pixels in the decoded frames). During the recovery process at the decoder side, the color of the pruned block and its surrounding pixels are used as a signature to find the correct high-resolution patch in the library.
However, due to noise, the search process using the signature is not reliable, and metadata is needed to assist the recovery process to ensure reliability. Therefore, after the pruning process, the system will proceed to generate metadata for assisting recovery. For each pruned block, the search process described above already identifies the corresponding patches in the library. The metadata encoding component will simulate the recovery process by using the query vector (the average color of the pruned block plus the surrounding pixels) to match the signatures of the patches in the patch library (the library created using the decoded frame). The process is illustrated in
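In other words, the rank number records where the known best-match patch falls when the library is sorted by signature distance to the query. A minimal sketch of that computation follows, with assumed fixed-length signatures and toy data.

```python
import numpy as np

def rank_number(query_sig, library_sigs, best_patch_index):
    """Simulate the decoder's search: sort patches by signature distance
    and report the position of the known best-match patch. That rank
    (often 0) is what gets encoded as metadata."""
    d = np.linalg.norm(library_sigs - query_sig, axis=1)
    order = np.argsort(d)                 # the rank list the decoder will rebuild
    return int(np.where(order == best_patch_index)[0][0])

library_sigs = np.random.rand(100, 8).astype(np.float32)
rank = rank_number(library_sigs[42] + 0.01, library_sigs, 42)  # usually small
```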
For decoding (see
Turning to
Besides the rank number metadata, the locations of the pruned blocks need to be sent to the decoder side. This is done by block ID encoding (see
To further reduce redundancy, a differential coding scheme is employed to first compute the difference between each ID number and the previous ID number, and then encode the difference sequence. For example, assuming the ID sequence is 3, 4, 5, 8, 13, 14, the differentiated sequence becomes 3, 1, 1, 3, 5, 1. The differentiation process makes the numbers closer to 1, thereby resulting in a number distribution with lower entropy. The differentiated sequence can then be further encoded with entropy coding (e.g., Huffman coding in the current implementation). Thus, the format of the final metadata is as follows: flag, threshold, encoded block ID sequence, encoded rank number sequence,
where flag is a signaling flag to indicate whether or not the block ID sequence is a false positive ID sequence; the threshold is the variance threshold for flat block identification; the encoded block ID sequence is the encoded bit stream of the pruned block IDs or the false positive block IDs; and the encoded rank number sequence is the encoded bit stream of the rank numbers used for block recovery.
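The differencing step and the entropy-coding step can be sketched as follows; the toy Huffman construction is a generic textbook version, standing in for whatever entropy coder the implementation actually uses, and reproduces the document's own 3, 4, 5, 8, 13, 14 example.

```python
import heapq
from collections import Counter

def differential_encode(ids):
    """3, 4, 5, 8, 13, 14 -> 3, 1, 1, 3, 5, 1 (first value kept as-is)."""
    return [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]

def huffman_table(symbols):
    """Generic Huffman code table for the differenced sequence."""
    heap = [[w, [s, ""]] for s, w in Counter(symbols).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                              # degenerate one-symbol case
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {s: code for s, code in heap[0][1:]}

ids = [3, 4, 5, 8, 13, 14]
diffs = differential_encode(ids)                    # [3, 1, 1, 3, 5, 1]
table = huffman_table(diffs)
bitstream = "".join(table[d] for d in diffs)
```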
For the rest of the frames in a GOP, some of the blocks will also be replaced by flat blocks. The positions of the pruned blocks in the first frame can be propagated to the rest of the frames by motion tracking. Different strategies to propagate the positions of the pruned blocks have been tested. One approach is to track the pruned blocks across frames by block matching, and prune the corresponding blocks in the subsequent frames (i.e., replace the tracked blocks with flat blocks). However, this approach does not result in a good compression efficiency gain because, in general, the boundaries of the tracked blocks do not align with the coding macroblocks. As a result, the boundaries of the tracked blocks create a high-frequency signal within the macroblocks. Therefore, a simpler alternative approach is currently used: set all the block positions for the subsequent frames to the same positions as in the first frame. Namely, all the pruned blocks in the subsequent frames are co-located with the pruned blocks in the first frame. As a result, all of the pruned blocks in the subsequent frames are aligned with macroblock positions.
However, this approach may not work well if there is motion in the pruned blocks. Therefore, one solution is to calculate the motion intensity of the block (see
If the motion intensity is larger than a threshold, the block is not pruned. Another, more sophisticated solution, employed in an exemplary implementation disclosed herein, is to calculate the motion vectors of the pruned blocks in the original video by searching for the corresponding block in the previous frame (see
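A full-search block-matching sketch for obtaining such a motion vector follows; the SAD metric, search radius, and toy frames are assumptions for illustration.

```python
import numpy as np

def block_motion_vector(prev, cur, y, x, size=16, radius=8):
    """Find the offset (dy, dx) into the previous frame that best matches
    the current block, by exhaustive search with an SAD metric."""
    block = cur[y:y + size, x:x + size]
    H, W = prev.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + size > H or px + size > W:
                continue                       # candidate falls outside the frame
            sad = np.abs(prev[py:py + size, px:px + size] - block).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

prev = np.random.rand(64, 64).astype(np.float32)
cur = np.roll(prev, 2, axis=1)                 # simulate horizontal motion
mv = block_motion_vector(prev, cur, 16, 16)    # expected (0, -2)
```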
The recovery process takes place at the decoder side. Before the recovery process, the patch library should be created. For long videos, such as movies, this can be achieved by using previous frames already sent to the decoder side. The encoder side can send metadata (the frame IDs) indicating which frames should be used to create the patch library. The patch library at the decoder side should be exactly the same as that at the encoder side.
For the first frame in a GOP, the recovery process starts with decoding the metadata (see
Turning to
After the block ID sequence is available, for each pruned block, the average color and the surrounding pixels of this block will be taken as the signature vector to match with the signatures in the patch library. However, if the neighboring blocks of the block for recovery are also pruned, then the set of surrounding pixels used as the signature for search only includes the pixels from the non-pruned blocks. If all the neighboring blocks are pruned, then only the average color is used as the signature. The matching process is realized by calculating the Euclidean distances between the signature of the query block and those of the patches in the library. After all the distances are calculated, the list is sorted according to the distances, resulting in a rank list. The rank number corresponding to the pruned block then is used to retrieve the correct high-resolution block from the rank list.
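The decoder-side retrieval is then the mirror image of the encoder's metadata simulation: rebuild the rank list from signature distances and index it with the transmitted rank number. A minimal sketch with assumed toy data:

```python
import numpy as np

def recover_block(query_sig, library_sigs, library_patches, rank):
    """Rebuild the rank list from signature distances and return the
    high-resolution patch at the transmitted rank number."""
    d = np.linalg.norm(library_sigs - query_sig, axis=1)
    order = np.argsort(d)                 # same rank list as built at the encoder
    return library_patches[order[rank]]

library_sigs = np.random.rand(100, 8).astype(np.float32)
library_patches = np.random.rand(100, 16, 16).astype(np.float32)
patch = recover_block(library_sigs[10], library_sigs, library_patches, rank=0)
```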
Turning to
Turning to
It is to be appreciated that the block recovery using example patches can be replaced by traditional inpainting and texture synthesis based methods.
For the rest of the frames in a GOP, for each pruned block, if the motion vector is not available, the content of the block can be copied from the co-located block in the previous frame. If the motion vector is available, the motion vector can be used to find the corresponding block in the previous frame, and copy the corresponding block to fill the pruned block (see
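That copy step can be sketched in a few lines; the in-place update and all names are illustrative assumptions.

```python
import numpy as np

def fill_pruned_block(prev_frame, cur_frame, y, x, size=16, mv=None):
    """Fill a pruned block in a subsequent GOP frame: copy the co-located
    block from the previous frame, or the motion-compensated block when a
    motion vector (dy, dx) was sent as metadata."""
    dy, dx = mv if mv is not None else (0, 0)   # default: co-located copy
    cur_frame[y:y + size, x:x + size] = prev_frame[y + dy:y + dy + size,
                                                   x + dx:x + dx + size]

prev = np.random.rand(64, 64).astype(np.float32)
cur = prev.copy()
fill_pruned_block(prev, cur, 16, 16)              # no motion vector available
fill_pruned_block(prev, cur, 16, 32, mv=(0, -2))  # motion-compensated copy
```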
Block artifacts may be visible since the recovery process is block-based. A deblocking filter, such as the in-loop deblocking filter used in an AVC encoder, can be applied to reduce the block artifacts.
These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.
This application claims the benefit of U.S. Provisional Application Ser. No. 61/403,108 entitled EXAMPLE-BASED DATA PRUNING FOR IMPROVING VIDEO COMPRESSION EFFICIENCY filed on Sep. 10, 2010 (Technicolor Docket No. PU100193). This application is related to the following co-pending, commonly-owned patent applications:
(1) International (PCT) Patent Application Serial No. PCT/US11/000107 entitled A SAMPLING-BASED SUPER-RESOLUTION APPROACH FOR EFFICIENT VIDEO COMPRESSION filed on Jan. 20, 2011 (Technicolor Docket No. PU100004);
(2) International (PCT) Patent Application Serial No. PCT/US11/000117 entitled DATA PRUNING FOR VIDEO COMPRESSION USING EXAMPLE-BASED SUPER-RESOLUTION filed on Jan. 21, 2011 (Technicolor Docket No. PU100014);
(3) International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR ENCODING VIDEO SIGNALS USING MOTION COMPENSATED EXAMPLE-BASED SUPER-RESOLUTION FOR VIDEO COMPRESSION filed on Sep. ______, 2011 (Technicolor Docket No. PU100190);
(4) International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR DECODING VIDEO SIGNALS USING MOTION COMPENSATED EXAMPLE-BASED SUPER-RESOLUTION FOR VIDEO COMPRESSION filed on Sep. ______, 2011 (Technicolor Docket No. PU100266);
(5) International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR DECODING VIDEO SIGNALS USING EXAMPLE-BASED DATA PRUNING FOR IMPROVED VIDEO COMPRESSION EFFICIENCY filed on Sep. ______, 2011 (Technicolor Docket No. PU100267);
(6) International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR ENCODING VIDEO SIGNALS FOR BLOCK-BASED MIXED-RESOLUTION DATA PRUNING filed on Sep. ______, 2011 (Technicolor Docket No. PU100194);
(7) International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR DECODING VIDEO SIGNALS FOR BLOCK-BASED MIXED-RESOLUTION DATA PRUNING filed on Sep. ______, 2011 (Technicolor Docket No. PU100268);
(8) International (PCT) Patent Application Serial No. ______ entitled METHODS AND APPARATUS FOR EFFICIENT REFERENCE DATA ENCODING FOR VIDEO COMPRESSION BY IMAGE CONTENT BASED SEARCH AND RANKING filed on Sep. ______, 2011 (Technicolor Docket No. PU100195);
(9) International (PCT) Patent Application Serial No. ______ entitled METHOD AND APPARATUS FOR EFFICIENT REFERENCE DATA DECODING FOR VIDEO COMPRESSION BY IMAGE CONTENT BASED SEARCH AND RANKING filed on Sep. ______, 2011 (Technicolor Docket No. PU110106);
(10) International (PCT) Patent Application Serial No. ______ entitled METHOD AND APPARATUS FOR ENCODING VIDEO SIGNALS FOR EXAMPLE-BASED DATA PRUNING USING INTRA-FRAME PATCH SIMILARITY filed on Sep. ______, 2011 (Technicolor Docket No. PU100196);
(11) International (PCT) Patent Application Serial No. ______ entitled METHOD AND APPARATUS FOR DECODING VIDEO SIGNALS WITH EXAMPLE-BASED DATA PRUNING USING INTRA-FRAME PATCH SIMILARITY filed on Sep. ______, 2011 (Technicolor Docket No. PU100269); and
(12) International (PCT) Patent Application Serial No. ______ entitled PRUNING DECISION OPTIMIZATION IN EXAMPLE-BASED DATA PRUNING COMPRESSION filed on Sep. ______, 2011 (Technicolor Docket No. PU10197).
Filing Document | Filing Date | Country | Kind | 371c Date
PCT/US11/50917 | 9/9/2011 | WO | 00 | 3/7/2013
Number | Date | Country
61403108 | Sep 2010 | US