This invention relates generally to video compression, and more particularly to compression based on predicting frames of image data.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings hereto: Copyright © 2003, Sony Electronics Inc. All Rights Reserved.
High compression gain for video sequences can be achieved by removing the temporal redundancy across frames. To encode a current frame, the frame is first predicted based on a previously coded “reference” frame, and only the prediction error is encoded. Block-based motion estimation and compensation is a popular and widely used method for temporal prediction. As illustrated in
The traditional method of achieving accurate temporal prediction is sub-pixel motion search, which incurs a large amount of motion vector overhead. In addition, it cannot provide arbitrary sub-pixel resolution; only pre-determined sub-pixel resolutions, e.g., ½, ¼, or ⅛ pixel, can be obtained. In reality, however, an object in the picture may move at an arbitrary sub-pixel resolution, which cannot be estimated with the traditional pure motion compensation method. Achieving fine motion resolution with pure motion compensation costs more bits to represent each motion vector, which leads to poor compression performance.
Temporal classified filtering encodes image data by applying filters assigned to classes of pixels in a target frame to predict values for the pixels. The pixels are classified based on their associated motion vectors and the motion vectors are used to position the filters on the reference frame. Prediction error values are also calculated. The filters, motion vectors, and prediction errors represent the pixels in the encoded image data. The reference frame may be a past or future frame of the image data, and multiple reference frames of various combinations of past and future frames may be used in the prediction. The filters for multiple reference frame prediction are three-dimensional filters comprising a two-dimensional filter for each reference frame. The filters may be pre-determined or generated as the frames are encoded. The image data is recreated by applying the filters to the reference frames and correcting the resulting predictions with the prediction error values.
The present invention is described in conjunction with systems, clients, servers, methods, and machine-readable media of varying scope. In addition to the aspects of the present invention described in this summary, further aspects of the invention will become apparent by reference to the drawings and by reading the detailed description that follows.
In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings in which like references indicate similar elements, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, functional, and other changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
For example, referring back to
$$\hat{v}_{mc}(i,j,t) = v(i',j',t_r) \qquad (1)$$
where $v(i',j',t_r)$ is the value of the pixel at column $i'$, row $j'$ in the previous (reference) frame $F_{t_r}$.
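As a concrete illustration, the motion-compensated prediction of equation 1 can be sketched in a few lines of Python. This is not code from the patent; it assumes frames are 2-D numpy arrays indexed as [row, column], a per-pixel motion field, and clipping at frame borders, and the function name is illustrative.

```python
import numpy as np

def motion_compensated_prediction(ref_frame, motion_field):
    """Equation (1): v_hat_mc(i, j, t) = v(i', j', t_r) -- a minimal sketch.

    ref_frame    -- 2-D array holding the reference frame F_{t_r}
    motion_field -- array of shape (H, W, 2) giving [m_i, m_j] per pixel
    (names and layout are illustrative, not from the patent)
    """
    h, w = ref_frame.shape
    pred = np.empty_like(ref_frame)
    for j in range(h):          # j indexes rows, i indexes columns
        for i in range(w):
            m_i, m_j = motion_field[j, i]
            # Reference position (i', j') = (i + m_i, j + m_j), clipped to frame
            ip = int(np.clip(i + m_i, 0, w - 1))
            jp = int(np.clip(j + m_j, 0, h - 1))
            pred[j, i] = ref_frame[jp, ip]
    return pred
```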
All the pixels in the target frame are classified into $N_c$ classes or segments, where $N_c$ is a positive integer. A unique filter is associated with each class or segment c, and thus there are $N_c$ filters for each target frame. These filters are referred to as classified filters. The coefficients for the filters may be pre-defined or may be created by training or other techniques as described further below. Each filter can have an arbitrary two-dimensional shape, e.g., rectangle, circle, diamond, etc., defined by a set of pixel positions or filter taps. A diamond-shaped filter 305 is illustrated in
As illustrated in
$$v(i,j,t) = \hat{v}(i,j,t) + \epsilon(i,j,t) \qquad (2)$$
where
$$\hat{v}(i,j,t) = W_c(R) \qquad (3)$$
The position of the filter in the reference frame can be expressed in pixel coordinates. Alternatively, one tap in the filter may be selected as an “anchor tap,” in which case the filter position is defined as the coordinate of the pixel in the frame that the filter anchor tap aligns with. The motion vector $[m_i, m_j]$ 323 of the target pixel is used to locate the reference position $(i', j')$:
$$i' = i + m_i \quad \text{and} \quad j' = j + m_j. \qquad (4)$$
When an anchor tap is used, the filter $W_c$ is placed on the reference frame $F_{t_r}$ such that its anchor tap aligns with the reference position $(i', j')$.
Let all the values of the input tap pixels and the filter coefficients form vectors X and W, respectively. Each vector has n elements, where n is the number of filter taps, i.e., $X = [x_1, x_2, \ldots, x_n]$ and $W = [w_1, w_2, \ldots, w_n]$. The elements of the two vectors should follow the same order, i.e., element $x_i$ is the input tap that aligns with the filter tap $w_i$ in the spatial domain. Accordingly, the filtering operation of equation 3 can be expressed as the following vector product:
$$\hat{v} = X \cdot W = \sum_{i=1}^{n} x_i w_i \qquad (5)$$
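A minimal sketch of this filtering operation, assuming a diamond tap pattern given as (row, column) offsets around the anchor tap and a numpy array for the reference frame; the tap list, function name, and border clipping are illustrative choices, not specified by the patent:

```python
import numpy as np

# A diamond-shaped tap pattern as (row, col) offsets around the anchor tap --
# an illustrative stand-in for the patent's filter shape.
DIAMOND_TAPS = [(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)]

def apply_classified_filter(ref_frame, i_ref, j_ref, coeffs, taps=DIAMOND_TAPS):
    """Equation (5): v_hat = X . W, anchor tap placed at (i_ref, j_ref).

    coeffs must have one entry per tap, in the same order as taps.
    """
    h, w = ref_frame.shape
    # Gather the input tap vector X from the reference frame, clipping at borders
    x = np.array([
        ref_frame[np.clip(j_ref + dj, 0, h - 1), np.clip(i_ref + di, 0, w - 1)]
        for dj, di in taps
    ], dtype=float)
    return float(x @ np.asarray(coeffs, dtype=float))
```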
The predicted value is used to represent the pixel in the encoded frame. The prediction error is also produced using
$$\epsilon = v - \hat{v} \qquad (6)$$
and transmitted to the decoder to correct the prediction when decoding the frame.
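On the decoder side, the correction of equations 2 and 6 is a simple addition, as this frame-wide sketch illustrates (assuming numpy arrays; the function name is illustrative):

```python
import numpy as np

def decode_frame(predicted_frame, error_frame):
    """v(i, j, t) = v_hat(i, j, t) + epsilon(i, j, t), applied frame-wide."""
    return np.asarray(predicted_frame) + np.asarray(error_frame)
```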
For the sake of clarity,
One embodiment of a temporal classified filtering method 400 to be performed by an encoder, such as encoder 203 of
Turning first to
At block 401, the TCF method finds the motion vectors for all pixels in the target frame based on the reference frame. This is similar to standard video compression (e.g., MPEG). As described above, the target frame is divided into fixed-size blocks, and block matching is performed to find the motion vector for each block. All the pixels in the same block share the same motion vector. The motion vectors can have either pixel or sub-pixel resolution.
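One possible realization of block 401 is a full-search block matching with a sum-of-absolute-differences cost, sketched below. The block size, search range, and cost function are illustrative assumptions; the patent does not mandate a particular matching criterion.

```python
import numpy as np

def block_matching(target, reference, block=8, search=7):
    """Full-search block matching -- a sketch of block 401 using SAD cost.

    Returns an (H // block, W // block, 2) array of [m_i, m_j] per block.
    """
    h, w = target.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for bj in range(0, h - block + 1, block):
        for bi in range(0, w - block + 1, block):
            tgt = target[bj:bj + block, bi:bi + block].astype(float)
            best, best_mv = np.inf, (0, 0)
            for mj in range(-search, search + 1):
                for mi in range(-search, search + 1):
                    rj, ri = bj + mj, bi + mi
                    if 0 <= rj <= h - block and 0 <= ri <= w - block:
                        ref = reference[rj:rj + block,
                                        ri:ri + block].astype(float)
                        cost = np.abs(tgt - ref).sum()  # sum of abs differences
                        if cost < best:
                            best, best_mv = cost, (mi, mj)
            mvs[bj // block, bi // block] = best_mv
    return mvs
```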
At block 403, the pixels in the target frame are classified into several segments based on the motion vectors of the pixels as described above in conjunction with
A unique filter $W_c$ is assigned to each class c of pixels. The filter taps and shape of the filter can be pre-defined. The number of classes (or segments) $N_c$ in the frame can be either a pre-determined value or determined based on the characteristics of the frame. For example,
As discussed above, each filter may have a different shape (filter taps) and different coefficients. The coefficients may be pre-defined or, optionally, as represented by phantom block 405, generated when needed in a variety of ways. For example, the coefficients may be weights corresponding to the temporal distance between the input taps and the reference position $(i', j')$ (or filter position). Filter coefficients can also be generated by on-line self-training with data from the reference and target frames as described below in conjunction with
For the target pixel with class ID c, the TCF method 400 predicts the value of the pixel using the associated classified filter Wc (block 407) as described above in conjunction with
The prediction error is calculated at block 409. As discussed above, the prediction error and the motion vectors are sent to the decoder. The class IDs and filter coefficients may also have to be transmitted to the decoder if the class IDs cannot be derived from the motion vectors and if the filter coefficients have been generated at block 405.
If $N_{max} \geq N_{mv}$ (block 415), the number of bins does not exceed $N_{max}$, and so the method 410 proceeds to block 421.
On the other hand, if $N_{max} < N_{mv}$ (block 415), some of the bins have to be combined to reduce the number of bins to $N_{max}$. The bins are sorted in decreasing order of $n_b$ (the number of motion vectors in a bin) at block 417. Thus, the first bin has the maximum number of motion vectors. Each of the first $N_{max} - 1$ bins forms its own class, while the remaining bins, from $N_{max}$ through $N_{mv}$, are grouped together to form a single class (block 419), resulting in a total of $N_{max}$ classes. At block 421, each bin is assigned a class ID c, which may be, for example, an integer.
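The binning and grouping of blocks 411 through 421 might look like the following sketch, which assumes motion vectors are given as hashable (m_i, m_j) tuples; the function name and data layout are illustrative.

```python
from collections import Counter

def classify_motion_vectors(mv_list, n_max):
    """Sketch of blocks 411-421: one bin per distinct motion vector; the
    (n_max - 1) most populated bins keep their own class, the rest share one.

    mv_list -- iterable of (m_i, m_j) tuples, one per block in the frame
    Returns a mapping: motion vector -> integer class ID.
    """
    counts = Counter(mv_list)                       # bin -> motion vector count
    ordered = [mv for mv, _ in counts.most_common()]  # decreasing order of n_b
    if len(ordered) <= n_max:                       # N_max >= N_mv: keep all bins
        return {mv: c for c, mv in enumerate(ordered)}
    class_of = {mv: c for c, mv in enumerate(ordered[:n_max - 1])}
    for mv in ordered[n_max - 1:]:                  # remaining bins share a class
        class_of[mv] = n_max - 1
    return class_of
```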
Because all pixels in a class share the same filter, when the filter coefficients are transmitted to the decoder (e.g., when the coefficients are obtained by on-line training), the larger the class (i.e., the more pixels it contains), the more efficient the compression. Therefore, in order to increase compression gain, the classification method 410 may optionally eliminate classes that have very few pixels (i.e., very few motion vectors). A threshold $T_{mv}$ is selected, and a class containing fewer motion vectors than the threshold is merged into its closest neighbor class (block 423). The threshold $T_{mv}$ can be pre-determined, e.g., $T_{mv} = 10$. The closest neighbor class is determined by measuring a distance $d_{a,b}$ between pairs of classes. In one embodiment, the distance is the Euclidean distance between the two centroids of the classes
$$d_{a,b} = (M_{a,1} - M_{b,1})^2 + (M_{a,2} - M_{b,2})^2 \qquad (7)$$
where $[M_{a,1}, M_{a,2}]$ and $[M_{b,1}, M_{b,2}]$ are the centroids of classes a and b, respectively. The centroid of a class c ($[M_{c,1}, M_{c,2}]$, which is a vector of two elements) is the average value of the motion vectors in the class c, defined as
$$M_{c,1} = \frac{1}{n_c} \sum_{k=1}^{n_c} m_{k,i}, \qquad M_{c,2} = \frac{1}{n_c} \sum_{k=1}^{n_c} m_{k,j} \qquad (8)$$
where $m_{k,i}$ and $m_{k,j}$ are the two elements of the kth motion vector in the class c, and $n_c$ is the total number of motion vectors in class c. The closest neighbor class of a given class c is the class that has the smallest distance to c.
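A sketch of the merge step of block 423 under equations 7 and 8, assuming each class's motion vectors are collected in a dictionary; the single-class guard and the return format are illustrative choices.

```python
import numpy as np

def merge_small_classes(mv_by_class, t_mv=10):
    """Sketch of block 423: a class with fewer than t_mv motion vectors is
    merged into the class whose centroid (equation 8) is nearest (equation 7).

    mv_by_class -- dict: class ID -> list of (m_i, m_j) tuples
    Returns a mapping: small class -> neighbor class it merges into.
    """
    centroids = {c: np.mean(np.asarray(mvs, dtype=float), axis=0)
                 for c, mvs in mv_by_class.items()}
    merges = {}
    for c, mvs in mv_by_class.items():
        if len(mvs) < t_mv and len(mv_by_class) > 1:
            others = [b for b in mv_by_class if b != c]
            # d_{a,b} = (M_{a,1}-M_{b,1})^2 + (M_{a,2}-M_{b,2})^2  (equation 7)
            merges[c] = min(
                others,
                key=lambda b: float(np.sum((centroids[c] - centroids[b]) ** 2)),
            )
    return merges
```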
Classes that contain very few motion vectors can be optionally grouped into a special class (block 425), instead of being merged into other neighbor classes at block 423. A very “short” filter, i.e., a filter with few taps, is assigned to this special class, to minimize the overhead of filter coefficients for this class since the cost of filter coefficients is a consideration in maximizing the overall compression gain.
In one embodiment, the trained filter coefficients $W^*$ are obtained according to the criterion
$$W^* = \arg\min_W \| X \cdot W - Y \|^2 \qquad (9)$$
where $\arg\min_W$ minimizes the value of $\| X \cdot W - Y \|^2$ over the argument W; $W^*$ is the value of W when $\| X \cdot W - Y \|^2$ reaches the minimum. Here X, Y, and W are, for example, the following matrix and vectors: X is the input data matrix whose m rows are the input tap vectors of the m training pixels in the class, W is the coefficient vector, and Y is the vector of the corresponding target pixel values,
$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \cdots & x_{m,n} \end{bmatrix}, \qquad Y = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}, \qquad W = \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix}.$$
Thus, the classified filter coefficients $w_i$ of $W^*$ obtained according to equation 9 minimize the overall prediction error for all the pixels in the same class.
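Equation 9 is a standard linear least-squares problem, so a sketch of the training can lean on numpy's solver; the matrix layout follows the X, Y, W definitions above, while the function name is illustrative.

```python
import numpy as np

def train_filter(X, y):
    """Equation (9) sketch: W* = argmin_W ||X . W - Y||^2 via least squares.

    X -- (m, n) matrix, one row of input taps per training pixel in the class
    y -- length-m vector of the corresponding target pixel values
    """
    w_star, *_ = np.linalg.lstsq(np.asarray(X, dtype=float),
                                 np.asarray(y, dtype=float), rcond=None)
    return w_star
```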
The training process can be further refined to obtain filter coefficients that provide better prediction. Since “false” motion vectors may be produced in the block matching motion compensation stage, some pixels may be assigned motion vectors that are not accurate, i.e., that do not represent the actual movement of the object. In such a case, those pixels may corrupt the training process for the filter coefficients. To avoid this, multiple iterations can be used in the training process 431 as illustrated in
At block 441, a first iteration of the training method 440 uses all the pixels in the same segment c to obtain the filter coefficients for that class. The resulting filter coefficients are used to predict the target pixels in each class of the target frame (block 443), and the prediction error for each pixel is calculated (block 445). Pixels having an error larger than a pre-defined error threshold (block 447) are removed from the class c (block 449) so that they are excluded from training the filter coefficients of this class in the next iteration. The training method 440 returns to block 441, where it operates on the remaining pixels in segment c. The training method exits when the number of iterations exceeds a pre-determined value $T_{tr}$, e.g., $T_{tr} = 3$, or when the number of pixels with large prediction error falls below a pre-determined number (block 451). Because the training method 440 removes badly predicted pixels from the training data, the filter coefficients obtained from the final iteration tend to provide a more precise prediction of the remaining pixels in the segment.
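A compact sketch of training method 440, assuming the training data for one class is already gathered into matrix X and vector y as above; the exit-test parameters are illustrative defaults.

```python
import numpy as np

def iterative_training(X, y, err_threshold, t_tr=3, min_outliers=1):
    """Sketch of method 440: retrain after dropping badly predicted pixels."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    for _ in range(t_tr):                            # at most T_tr iterations
        w, *_ = np.linalg.lstsq(X, y, rcond=None)    # block 441: train on class
        errors = np.abs(X @ w - y)                   # blocks 443-445: predict
        keep = errors <= err_threshold               # block 447: threshold test
        if (~keep).sum() < min_outliers:             # block 451: exit test
            break
        X, y = X[keep], y[keep]                      # block 449: remove pixels
    return w
```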
The pixels that are removed from the segment c during the iterations can be either grouped into a special class with a new filter assigned (block 425 of
In practice, the method 400 may constitute one or more programs made up of machine-executable instructions. Describing the method with reference to the flow diagrams in
A particular implementation of the TCF that uses multiple reference frames is now described with reference to
Assuming $N_r$ reference frames, each block (or pixel) has $N_r$ motion vectors, one associated with each reference frame. The motion vector is constructed as $[m_i, m_j, m_t]$, where $m_t$ is a new element representing an index identifying the reference frame. Since there are $N_r$ motion vectors for each target pixel, the classification procedure differs slightly from the single-reference-frame case above, i.e., $N_r = 1$. For each block of pixels, one motion vector is selected from the $N_r$ available motion vectors. In one embodiment, the selection is based on which motion vector leads to the minimum average prediction error for the entire block of pixels. The selected motion vector is used as previously described to classify the block.
The filter assigned to the class can have a three-dimensional shape, with taps that span several frames. In other words, a 3D filter contains $N_r$ two-dimensional filters as previously described, one for each reference frame.
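A sketch of applying such a 3-D filter, treating it as N_r two-dimensional slices whose outputs are summed; the per-frame positions would come from the [m_i, m_j, m_t] motion vectors, and all names and the shared tap pattern are illustrative assumptions.

```python
import numpy as np

def apply_3d_filter(ref_frames, positions, coeffs_per_frame, taps):
    """Multi-reference sketch: a 3-D filter as N_r stacked 2-D filter slices.

    ref_frames       -- list of N_r reference frames (2-D numpy arrays)
    positions        -- list of N_r (i', j') reference positions, one per frame
    coeffs_per_frame -- list of N_r coefficient vectors (the 2-D slices)
    taps             -- shared (dj, di) tap offsets around each anchor position
    """
    pred = 0.0
    for frame, (i_ref, j_ref), coeffs in zip(ref_frames, positions,
                                             coeffs_per_frame):
        h, w = frame.shape
        # Gather this frame's input taps, clipping at frame borders
        x = np.array([frame[np.clip(j_ref + dj, 0, h - 1),
                            np.clip(i_ref + di, 0, w - 1)]
                      for dj, di in taps], dtype=float)
        pred += float(x @ np.asarray(coeffs, dtype=float))  # sum over slices
    return pred
```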
Another example of TCF with multiple reference frames is shown in
The following description of
The web server 9 is typically at least one computer system which operates as a server computer system and is configured to operate with the protocols of the World Wide Web and is coupled to the Internet. Optionally, the web server 9 can be part of an ISP which provides access to the Internet for client systems. The web server 9 is shown coupled to the server computer system 11 which itself is coupled to web content 10, which can be considered a form of a media database. It will be appreciated that while two computer systems 9 and 11 are shown in
Client computer systems 21, 25, 35, and 37 can each, with the appropriate web browsing software, view HTML pages provided by the web server 9. The ISP 5 provides Internet connectivity to the client computer system 21 through the modem interface 23 which can be considered part of the client computer system 21. The client computer system can be a personal computer system, a network computer, a Web TV system, a handheld device, or other such computer system. Similarly, the ISP 7 provides Internet connectivity for client systems 25, 35, and 37, although as shown in
Alternatively, as well-known, a server computer system 43 can be directly coupled to the LAN 33 through a network interface 45 to provide files 47 and other services to the clients 35, 37, without the need to connect to the Internet through the gateway system 31. Furthermore, any combination of client systems 21, 25, 35, 37 may be connected together through a peer-to-peer system using LAN 33, Internet 3 or a combination as a communications medium. Generally, a peer-to-peer system distributes data across a network of multiple machines for storage and retrieval without the use of a central server or servers. Thus, each peer may incorporate the functions of both the client and the server described above.
It will be appreciated that the computer system 51 is one example of many possible computer systems which have different architectures. For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an input/output (I/O) bus for the peripherals and one that directly connects the processor 55 and the memory 59 (often referred to as a memory bus). The buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.
Network computers are another type of computer system that can be used with the present invention. Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 59 for execution by the processor 55. A Web TV system, which is known in the art, is also considered to be a computer system according to the present invention, but it may lack some of the features shown in
It will also be appreciated that the computer system 51 is controlled by operating system software which includes a file management system, such as a disk operating system, which is part of the operating system software. One example of an operating system software with its associated file management system software is the family of operating systems known as Windows® from Microsoft Corporation of Redmond, Wash., and their associated file management systems. The file management system is typically stored in the non-volatile storage 65 and causes the processor 55 to execute the various acts required by the operating system to input and output data and to store data in memory, including storing files on the non-volatile storage 65.
Temporal classified filtering has been described that predicts pixels in image data so that the pixel values can be recreated after transmission. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is manifestly intended that this invention be limited only by the following claims and equivalents thereof.