Image information may be transmitted from one device to another via a communication network. For example, a sending device might transmit a digital video image to a remote television or Personal Computer (PC). Moreover, various encoding techniques may be used to reduce the bandwidth required to transmit the image information. For example, information about differences between a current picture and a previous picture might be transmitted. In this case, the receiving device may decode the information (e.g., by using the previous picture and the differences to generate the current picture) and provide the image to a viewer.
A variable length decoder 110 may receive the bit stream and generate packets, which are then converted into coefficient data by a run length decoder 120. A transformation unit 130 may then provide residue (or error information) associated with a picture element (“pel”) to a motion compensation unit 140. The transformation unit 130 might be associated with, for example, a discrete cosine transformation, an integer transformation, or any other transformation.
The motion compensation unit 140 may then generate the current frame using information about a previous frame along with information about differences between the previous frame and the current frame. That is, the motion compensation unit 140 may combine the residue information received from the transformation unit 130 with predicted information generated from interpolation to generate the final reconstructed pixel, including luminance and chrominance values associated with portions of a current picture (or "blocks" of the current image "frame"). For example, a motion vector may indicate how far a block has moved compared to its location in the previous frame. In this case, the motion compensation unit 140 may use the location of the block in the previous frame along with the motion vector to calculate where the block should appear in the current frame.
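As a minimal sketch (the names and the quarter-pel motion vector units are assumed purely for illustration), this calculation might look like:

```c
typedef struct { int x, y; } position;

/* The block's location in the current frame is its location in the
 * previous frame displaced by the motion vector; the vector is split
 * into integer-pel and sub-pel (fractional) parts. Quarter-pel motion
 * vector units are assumed here for illustration only. */
static position block_position(position prev, int mv_x, int mv_y,
                               int *frac_x, int *frac_y)
{
    position cur;
    cur.x = prev.x + (mv_x >> 2);  /* integer-pel displacement         */
    cur.y = prev.y + (mv_y >> 2);
    *frac_x = mv_x & 3;            /* sub-pel offsets that the          */
    *frac_y = mv_y & 3;            /* interpolation filters must fill   */
    return cur;
}
```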
Note that if the motion vector indicates that the block has moved an integer number of pixels (e.g., three pixels downwards), the block can simply be placed in the new location. It may be, however, that a block has moved a non-integer number of pixels (e.g., 0.75 pixels upwards). In this case, the motion compensation unit 140 may use a filter to interpolate the current position of the block (e.g., in between the integer pixel locations). The cross-hatched circles 210 in the accompanying figure illustrate such interpolated positions between the integer pel locations.
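As a minimal sketch (not tied to any particular standard; names are illustrative), a two-tap bilinear filter can estimate the value at a fractional vertical position from the two nearest integer-position pels:

```c
/* ref[] holds reference-frame pel values; stride is the row width.
 * frac is the vertical sub-pel offset in quarter-pel units (0..3),
 * so frac = 3 corresponds to 0.75 of the way toward the next row. */
static unsigned char interp_vertical(const unsigned char *ref, int stride,
                                     int x, int y, int frac)
{
    int a = ref[y * stride + x];        /* pel at the integer position */
    int b = ref[(y + 1) * stride + x];  /* pel one row below           */
    /* Weighted average with rounding; frac = 0 returns a unchanged.   */
    return (unsigned char)((a * (4 - frac) + b * frac + 2) >> 2);
}
```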
In addition to vertical interpolation, horizontal interpolation may be performed in a similar fashion when a block has moved a non-integer number of pixels to the left or right (and both types of interpolation may be needed when a block has moved diagonally).
Note that a number of different standards have been developed to encode and decode image information. For example, image information might be processed in accordance with the International Telecommunication Union (ITU) H.264 standard entitled "Advanced Video Coding (AVC) for Generic Audiovisual Services" (2003). As another approach, image information could be processed using the Society of Motion Picture and Television Engineers (SMPTE) Video Codec 1 (VC-1) standard or the MICROSOFT WINDOWS® Media Video Decoder (WMV9) standard. In other cases, image information might be processed using the Moving Picture Experts Group (MPEG) Release Two (MPEG-2) 13818-2 or Release Four (MPEG-4) 14496 (1999/2002) standards published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
Although all of these standards use some form of motion compensation, the particular methods used to encode and/or decode the motion compensation information are different. For example, the block size, the number of interpolation filter taps, the values associated with interpolation filter taps, and/or the interpolation context size may be different. As another example, one standard might require that horizontal interpolation be performed before vertical interpolation while another standard requires that horizontal interpolation be performed after vertical interpolation. As still another example, the ways in which intermediate values are combined and/or rounded may be different.
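For instance, the well-known H.264 half-pel luma filter uses six taps (1, -5, 20, 20, -5, 1) with a right shift of five, whereas MPEG-2 half-pel prediction is a simple two-tap rounded average. A configurable unit might capture such differences as banks of parameters; the structure and field names below are hypothetical:

```c
typedef struct {
    int taps;      /* number of interpolation filter taps       */
    int coeff[6];  /* interpolation filter tap values           */
    int shift;     /* scaling shift applied to the accumulator  */
} filter_config;

/* H.264 half-pel luma: 6-tap (1,-5,20,20,-5,1), normalized by >> 5. */
static const filter_config h264_half_pel  = { 6, {1, -5, 20, 20, -5, 1}, 5 };

/* MPEG-2 half-pel: two-tap average, normalized by >> 1.             */
static const filter_config mpeg2_half_pel = { 2, {1, 1, 0, 0, 0, 0},    1 };
```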
Different motion compensation units 140 could be designed to support different video compression standards. For example, a first circuit could be designed such that a horizontal interpolation filter provides signals to a vertical interpolation filter while a second circuit is designed the other way around. Such an approach, however, may be costly and impractical (e.g., it may be difficult to design a system that supports a significant number of video compression standards).
At 502, a video compression standard is selected, and at least one filter is configured in accordance with the selected standard at 504. According to some embodiments, one or more buffers and/or buffer controllers may also be configured. For example, a unit might be configured such that "1.5" will be rounded to "1.0" when one standard is selected or to "2.0" when another standard is selected. Note that these actions might be performed, for example, by a system designer and/or a digital media processor during an initialization process.
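As a sketch of how a single rounding parameter might produce either behavior (the names follow the RND-style parameters described below, but are illustrative), consider a value held with one fractional bit, so that binary 11 represents 1.5:

```c
/* Scale v down by shft bits; rnd selects the rounding behavior.
 * With shft = 1 and v = 3 (i.e., 1.5 in 1.1 fixed point):
 *   rnd = 1 adds 2^(shft - rnd) = 1 first, so the result is 2 ("2.0");
 *   rnd = 0 adds no offset and truncates, so the result is 1 ("1.0"). */
static int scale(int v, int shft, int rnd)
{
    int offset = rnd ? (1 << (shft - rnd)) : 0;
    return (v + offset) >> shft;
}
```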
At 506, pel information is interpolated via the configured filter to provide motion compensation. For example, the pel information might be vertically interpolated by a second filter after being horizontally interpolated by a first filter when one standard is selected (and horizontally interpolated by the second filter after being vertically interpolated by the first filter when a different standard is selected). The image information may then be combined with residue from an inverse discrete cosine transform unit to generate a final pixel that can be provided (e.g., to a viewer).
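As a minimal sketch (function name hypothetical), this final combination might look like the following, where the interpolated prediction is added to the residue and saturated to eight bits:

```c
/* Combine the motion-compensated prediction with the decoded residue
 * to form the final pixel; results are saturated to [0, 255]. */
static unsigned char final_pixel(int predicted, int residue)
{
    int v = predicted + residue;
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return (unsigned char)v;
}
```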
The input pixel information is provided from the pixel input buffer 610 to a first configurable filter 620. The filter 620 may, for example, be a multi-tap interpolation filter adapted to perform either horizontal or vertical interpolation. Moreover, the filter 620 may be configurable such that one or more configuration parameters can be used to provide a bypass operation (e.g., the filter 620 might not perform any function on the data). According to some embodiments, the filter 620 is a six-tap filter that can also be configured to operate in accordance with the following equation:
Qi = (C0*P0 + C1*P1 + C2*P2 + C3*P3 + C4*P4 + C5*P5 + 2^(FLT_SHFT1-RND1)) >> FLT_SHFT1
where each Pi is a raw pixel value, Ci is a filter tap coefficient (e.g., selected from a bank of coefficients during a configuration in accordance with a video compression standard), FLT_SHFT1 is a configuration parameter to shift information, RND1 is a configuration parameter associated with a rounding function, and Qi represents an un-scaled filter output. The filter 620 might also be configurable to operate in accordance with the following equation:
SQi = CLIP8((C0*P0 + C1*P1 + C2*P2 + C3*P3 + C4*P4 + C5*P5 + 2^(SHFT1-RND1)) >> SHFT1)
where SHFT1 is a configuration parameter to shift information, RND1 is a configuration parameter associated with a rounding function, CLIP8 indicates that values below zero will be set to zero and values above 255 will be set to 255, and SQi represents a scaled filter output.
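As a sketch (assuming integer arithmetic and the parameter definitions above; this is illustrative, not a description of any particular hardware implementation), the two equations might be rendered as:

```c
static int clip8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* Un-scaled output Qi: rounded and shifted but not clipped, so the
 * full-precision intermediate result can be stored for later use. */
static int filter1_unscaled(const int p[6], const int c[6],
                            int flt_shft1, int rnd1)
{
    int sum = 0;
    for (int i = 0; i < 6; i++)
        sum += c[i] * p[i];                     /* C0*P0 + ... + C5*P5 */
    return (sum + (1 << (flt_shft1 - rnd1))) >> flt_shft1;
}

/* Scaled output SQi: as above, but saturated to eight bits by CLIP8. */
static int filter1_scaled(const int p[6], const int c[6],
                          int shft1, int rnd1)
{
    int sum = 0;
    for (int i = 0; i < 6; i++)
        sum += c[i] * p[i];
    return clip8((sum + (1 << (shft1 - rnd1))) >> shft1);
}
```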
The raw, scaled, or un-scaled output from the first configurable filter 620 might then be stored in a first buffer 630. The buffer 630 might comprise, for example, an eight-bit wide Random Access Memory (RAM) unit that stores intermediate results for the motion compensation unit 600. According to some embodiments, a second buffer 650 may also be provided, and the operation of the buffers 630, 650 may be controlled by a buffer controller 640. The second buffer 650 might be, for example, a sixteen-bit wide RAM unit. According to some embodiments, one buffer stores raw or scaled filtered pixels while the other buffer stores full-precision intermediate results from the first configurable filter 620.
Information from the buffers 630, 650 may then be provided to a second configurable filter 660. According to some embodiments, the buffer controller 640 and/or the buffers 630, 650 are configurable such that transposed data may be provided to the second configurable filter 660 if desired. Note that information from either of the two buffers 630, 650 might be combined with an output of the second configurable filter 660.
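A sketch of how the buffer controller 640 might present transposed data, so that the same one-dimensional datapath in the second filter serves both directions (names are hypothetical, and a square buffer is assumed for simplicity):

```c
#include <stdint.h>

/* Read an intermediate value from a buffer either row-major or, when
 * transpose is set, column-major, so the second filter sees the data
 * rotated by ninety degrees. */
static int16_t read_intermediate(const int16_t *buf, int width,
                                 int row, int col, int transpose)
{
    return transpose ? buf[col * width + row]
                     : buf[row * width + col];
}
```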
The second configurable filter 660 may then interpolate the received data. For example, when the first filter 620 was configured to perform a horizontal interpolation, the second filter 660 might be configured to perform a vertical interpolation (or the other way around). According to some embodiments, the second configurable filter 660 may provide a bypass operation (in which case the data remains unchanged). According to some embodiments, the filter 660 is a six-tap filter that can be configured to operate in accordance with the following equation:
Yi = (C0*X0 + C1*X1 + C2*X2 + C3*X3 + C4*X4 + C5*X5)
where each Xi is a value from one of the buffers 630, 650 (and the buffer might be selectable based on the configuration parameters), Ci is a filter tap coefficient (e.g., selected from a bank of coefficients during a configuration in accordance with a video compression standard), and Yi represents an un-scaled filter output. The filter 660 might also be configurable to operate in accordance with the following equation:
SYi = CLIP8((C0*X0 + C1*X1 + C2*X2 + C3*X3 + C4*X4 + C5*X5 + 2^(SHFT2-RND2)) >> SHFT2)
where SHFT2 is a configuration parameter to shift information, RND2 is a configuration parameter associated with a rounding function, CLIP8 indicates that values below zero will be set to zero and values above 255 will be set to 255, and SYi represents a scaled filter output.
Note that any of the information stored in the two buffers 630, 650 might be combined with an output of the second configurable filter 660. Such an ability may, for example, facilitate converting a two-dimensional filtering operation into a three-dimensional one (e.g., as might be the case with respect to H.264 operations).
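For example, H.264 quarter-pel luma values are formed by averaging a half-pel interpolation result with a neighboring integer- or half-pel value, which is exactly a combination of a buffered value with the second filter's output (function name hypothetical):

```c
/* H.264-style quarter-pel position: the rounded average of a half-pel
 * interpolation result and a neighboring (buffered) pel value. */
static unsigned char quarter_pel(int half_pel_value, int buffered_pel)
{
    return (unsigned char)((half_pel_value + buffered_pel + 1) >> 1);
}
```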
The second configurable filter 660 provides output pixel information to a post-data processing unit 670, which may store the information in a pixel output buffer 680. According to some embodiments, the post-data processing unit 670 may be configured to combine the data from the second configurable filter 660 with information from the pixel output buffer 680 (e.g., to support H.264 interpolation). Note that the motion compensation unit 600 might be able to simultaneously perform operations associated with multiple blocks (e.g., the pipelined design might let the first filter 620 perform an interpolation for one block while the second filter 660 is performing an interpolation for another block).
Thus, the motion compensation unit 600 may be configurable to combine: (i) the output of the first configurable filter 620 with raw pixel information, (ii) the output of the second configurable filter 660 with raw pixel information, (iii) the output of the second configurable filter 660 with scaled pixels from the first configurable filter, (iv) the output of the second configurable filter 660 with un-scaled pixels from the first configurable filter, or (v) information from one of the buffers 630, 650 with the output of the second configurable filter 660. Although a few approaches have been described, the motion compensation unit 600 might be configured in any of a number of different ways. For example, information from one source might be address-offset before being combined with information from another source. When combining pels from the first buffer 630 and/or the second buffer 650 with an output of the second configurable filter 660, an address offset may allow the second row of pels from the first buffer 630 to be combined with the first row of output from the second configurable filter 660. Similarly, the second column of pels from the first buffer 630 might be combined with the first column of output from the second configurable filter 660 (e.g., in connection with an H.264 operation).
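A sketch of the address-offset combine described above, where row r of the filter output is averaged with row r + offset from the buffer (the names, the averaging operation, and the offset of one row are illustrative):

```c
/* Combine filter output with buffered pels at a configurable row
 * offset; offset = 1 pairs output row r with buffer row r + 1.
 * buf is assumed to hold at least rows + offset rows of pels. */
static void combine_with_offset(unsigned char *out, const unsigned char *buf,
                                const unsigned char *filt, int width,
                                int rows, int offset)
{
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < width; c++)
            out[r * width + c] = (unsigned char)
                ((buf[(r + offset) * width + c] + filt[r * width + c] + 1) >> 1);
}
```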
As a result, an efficient, generic motion compensation unit 600 may be provided to support various video compression standards. For example, the unit 600 could be configured to support different block sizes, numbers of filter taps, and/or filter coefficients. Similarly, either horizontal or vertical interpolations could be performed first depending on the standard. Note that such a unit 600 might be associated with a hardware accelerator, an Application Specific Integrated Circuit (ASIC) device, and/or an INTEL®-Architecture (IA) based device.
The system 700 includes a motion compensation unit 710 that operates in accordance with any of the embodiments described herein. For example, the motion compensation unit 710 might configure a first and second multi-tap filter in accordance with a first image processing standard and calculate motion compensation values via the configured filters in accordance with that standard. The motion compensation unit 710 might instead configure the filters in accordance with a second image processing standard and calculate motion compensation values via the configured filters in accordance with that standard. The system 700 may also include a digital output port to provide a signal associated with output image information to an external device (e.g., to an HDTV device).
The following illustrates various additional embodiments. These do not constitute a definition of all possible embodiments, and those skilled in the art will understand that many other embodiments are possible. Further, although the following embodiments are briefly described for clarity, those skilled in the art will understand how to make any changes, if necessary, to the above description to accommodate these and other embodiments and applications.
For example, although a particular design for a motion compensation unit has been described herein, other designs may be used according to other embodiments. Similarly, although embodiments have been described with respect to a decoder, note that some embodiments may also be associated with an encoder. Moreover, although particular video compression standards have been used as examples, the motion compensation unit might be configurable to support any other standard.
The several embodiments described herein are solely for the purpose of illustration. Persons skilled in the art will recognize from this description that other embodiments may be practiced with modifications and alterations limited only by the claims.