1. Field of the Invention
This disclosure relates to digital video processing and, more particularly, block-based coding of video data.
2. Description of Related Art
Video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, cellular or satellite radio telephones, video game consoles, handheld gaming devices, and the like. Digital video coding can provide significant improvements over conventional analog systems in creating, modifying, transmitting, storing, recording and playing full-motion multimedia sequences. Broadcast networks may use video coding to facilitate the broadcast of one or more channels of multimedia (audio-video) sequences to wireless subscriber devices. Video coding is also used to support video telephony (VT) applications, such as video conferencing by cellular radio telephones.
A number of different coding standards have been established for coding digital video sequences. The Moving Picture Experts Group (MPEG), for example, has developed a number of standards including MPEG-1, MPEG-2 and MPEG-4. Other standards include the International Telecommunication Union (ITU) H.263 standard and H.264 standard, QuickTime™ technology developed by Apple Computer of Cupertino, Calif., Video for Windows™ developed by Microsoft Corporation of Redmond, Wash., Indeo™ developed by Intel Corporation, RealVideo™ from RealNetworks, Inc. of Seattle, Wash., and Cinepak™ developed by SuperMac, Inc. Furthermore, new standards continue to emerge and evolve. The ITU H.264 standard is also set forth in MPEG-4, Part 10, Advanced Video Coding (AVC).
Most video coding techniques utilize block-based coding, which divides video frames into blocks of pixels and correlates the blocks with those of other frames in the video sequence. By encoding differences between a current block and a predictive block of another frame, data compression can be achieved. The term “macroblock” is often used to define discrete blocks of a video frame that are compared to a search space (which is typically a subset of a previous or subsequent frame of the video sequence). Macroblocks may also be further sub-divided into partitions or sub-partitions. The ITU H.264 standard supports 16 by 16 macroblocks, 16 by 8 partitions, 8 by 16 partitions, 8 by 8 partitions, 8 by 4 sub-partitions, 4 by 8 sub-partitions and 4 by 4 sub-partitions. Other standards may support differently sized blocks, macroblocks, partitions and/or sub-partitions.
For each block (macroblock, partition or sub-partition) in a video frame, an encoder compares similarly sized blocks of one or more immediately preceding video frames (and/or subsequent frames) to identify a similar block, referred to as the “prediction block” or “best match.” The process of comparing a current video block to video blocks of other frames is generally referred to as motion estimation. Once a “best match” is identified for a given block to be coded, the encoder can encode the differences between the current block and the best match. This process of encoding the differences between the current block and the best match includes a process referred to as motion compensation. Motion compensation comprises creating a difference block (referred to as the residual), which includes information indicative of the differences between the current block to be encoded and the best match. In particular, motion compensation usually refers to the act of fetching the best match using a motion vector, and then subtracting the best match from an input block to generate the residual. Additional coding steps, such as entropy coding, may be performed on the residual to further compress the bitstream.
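As a rough illustration of the motion compensation step just described, the following sketch computes a residual for one block; the 4 by 4 block size, row-major frame layout, and all function and parameter names are illustrative assumptions, not elements of this disclosure.

```c
#include <stdint.h>

/* Sketch: fetch the best match at the motion-vector offset (mv_x, mv_y)
 * inside the reference frame and subtract it from the current block to
 * form the residual. */
void motion_compensate(const uint8_t *cur, int cur_stride,
                       const uint8_t *ref, int ref_stride,
                       int mv_x, int mv_y,
                       int16_t residual[4][4])
{
    const uint8_t *best = ref + mv_y * ref_stride + mv_x; /* best match */
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            residual[r][c] = (int16_t)cur[r * cur_stride + c]
                           - (int16_t)best[r * ref_stride + c];
}
```

The residual, rather than the raw block, is then passed to the additional coding steps, such as entropy coding, noted above.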
This disclosure describes efficient transformation techniques that can be used in video coding. In particular, intermediate results of computations associated with transformation of a first block of video data are re-used when computing intermediate results of computations associated with the transformation of a second block of video data. The efficient transformation techniques may be used during a motion estimation process in which video blocks of a search space are transformed, but this disclosure is not necessarily limited in this respect. According to this disclosure, a search space may be broken into different 4 by 4 pixel blocks, and the different 4 by 4 pixel blocks may overlap one another.
One-dimensional transforms may be performed on rows of the 4 by 4 pixel blocks to generate intermediate results, and then one-dimensional transforms may be performed on a column of the intermediate results. Alternatively, the one-dimensional transforms may be performed first on the columns, and then on a row of the intermediate results. In any case, given an overlap between different 4 by 4 pixel blocks in the search space, at least some of the intermediate results can be re-used (e.g., shared with later transformations) without performing the same computations. An efficient architecture is also disclosed for implementation of the techniques described herein.
In one example, this disclosure provides a method comprising performing transformations on blocks of video data, wherein performing the transformations includes re-using one or more computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data.
In another example, this disclosure provides a device comprising a video coder that performs transformations on blocks of video data. In performing the transformations, the video coder re-uses one or more computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data.
In another example, this disclosure provides a device comprising means for performing transformations on blocks of video data, and means for re-using one or more computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data.
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in a digital signal processor (DSP) or other type of processor or device. The software that executes the techniques may be initially stored in a computer-readable medium, and loaded and executed in the processor or other device to allow for video coding using the techniques described herein.
Accordingly, this disclosure also contemplates a computer-readable medium comprising instructions that, when executed in a video coding device, cause the device to perform transformations on blocks of video data, wherein in performing the transformations the instructions cause the device to re-use one or more computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data.
In addition, this disclosure contemplates a circuit configured to perform transformations on blocks of video data, wherein in performing the transformations the circuit re-uses one or more computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data.
Additionally, as described in greater detail below, pipelining techniques may be used to accelerate the efficient transformation techniques, and transposition memories can be implemented to facilitate efficient pipelining. Additional details of various embodiments are set forth in the accompanying drawings and the description below. Other features, objects and advantages will become apparent from the description and drawings, and from the claims.
This disclosure describes efficient transformation techniques that may be useful in video coding. As described in greater detail below, intermediate results of computations associated with transformation of a first block of video data are re-used in the transformation of a second block of video data. The techniques may be particularly useful for integer transformations or forward discrete cosine transformations performed during a motion estimation process in which video blocks of a search space are transformed. However, the techniques may be used in other transformation contexts associated with video coding. Indeed, the techniques may be useful for any type of linear transformations, integer transformations, and possibly other transformation contexts.
According to this disclosure, a search space (of any size) may be broken into different video blocks, such as 4 by 4 pixel blocks. The different 4 by 4 pixel blocks defined within the search space may overlap one another. As an example, a 5 by 5 pixel search space may define four different 4 by 4 pixel blocks, although interpolation to fractional resolution could be used to define even more 4 by 4 pixel blocks within a 5 by 5 pixel search space. The search space may be used when transforming the 4 by 4 pixel blocks from the pixel domain to the spatial-frequency domain. During this transformation from one domain to another, typically two one-dimensional transformation passes are performed on the 4 by 4 pixel blocks. A first pass is performed on columns to generate horizontal spatial frequency components (referred to as intermediate results), and a second pass is performed on one or more rows to generate vertical spatial frequency components. A person of ordinary skill in the art will recognize that the first pass may be performed on rows and the second pass may be performed on a column just as readily.
One-dimensional transforms may be performed on columns of the 4 by 4 pixel blocks to generate intermediate results, and then one-dimensional transforms may be performed on a row of the intermediate results. Given the overlap between the different 4 by 4 pixel blocks in the search space, at least some of the intermediate results can be re-used without performing the same computations. In this manner, computations can be avoided to promote efficiency. Exemplary hardware architectures are also disclosed that can achieve efficient implementation of the techniques described herein. In this case, pipelining techniques may be used to accelerate the efficient transformation techniques of a set of blocks of video data, and transposition memories can be implemented to facilitate efficient pipelining.
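The two-pass structure can be sketched as follows. For concreteness, the kernel matrix T below uses the ITU H.264 4 by 4 integer transform coefficients, but any 4-point linear kernel fits the same pattern; the function and variable names are illustrative assumptions.

```c
/* ITU H.264 4x4 forward integer transform matrix, used here only as one
 * concrete example of a 4-point linear kernel. */
static const int T[4][4] = {
    { 1,  1,  1,  1 },
    { 2,  1, -1, -2 },
    { 1, -1, -1,  1 },
    { 1, -2,  2, -1 },
};

/* Separable 4x4 transform as two 1-D passes: pass 1 transforms each column
 * into intermediate results, pass 2 transforms each row of those results.
 * Overall this computes S = T * s * T^t. */
void transform_4x4(const int s[4][4], int S[4][4])
{
    int mid[4][4];
    for (int c = 0; c < 4; c++)              /* pass 1: columns */
        for (int r = 0; r < 4; r++) {
            int acc = 0;
            for (int k = 0; k < 4; k++)
                acc += T[r][k] * s[k][c];
            mid[r][c] = acc;
        }
    for (int r = 0; r < 4; r++)              /* pass 2: rows */
        for (int c = 0; c < 4; c++) {
            int acc = 0;
            for (int k = 0; k < 4; k++)
                acc += mid[r][k] * T[c][k];  /* right-multiply by T^t */
            S[r][c] = acc;
        }
}
```

Because pass 1 depends only on the pixel columns, two blocks that share columns also share the corresponding first-pass intermediate results; that shared work is exactly what the re-use described herein avoids recomputing.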
As shown in
As shown in
Upon creating the residual (Res), transformation and quantization are performed on the residual. Transform and quantization units 14 and 16 perform the transformation and quantization, respectively. Entropy coding may also be performed to generate an output bitstream. Entropy coding unit 18 performs the entropy coding, which may achieve further compression. Entropy coding may include assigning codes to sets of bits, and matching code lengths with probabilities. Various types of entropy coding are well known and common in video coding.
In the predictive loop of video coder 10, inverse quantization unit 22 and inverse transform unit 24 perform inverse quantization and inverse transformation on the residual to essentially reverse the transformation and quantization performed by units 14 and 16. The predictive block is added back to the reconstructed residual by adder unit 26. This essentially recreates the input MB in the predictive loop. The edges of the reconstructed MB may be filtered by deblocking unit 28 and stored in memory 30.
Quantization, in principle, involves reducing the dynamic range of the transformed signal. Reduction of dynamic range impacts the number of bits (rate) generated by entropy coding. This also introduces loss in the residual, which can cause the original MB and a reconstructed MB to be slightly different. These differences are normally referred to as quantization error or distortion. The strength of quantization is determined by a quantization parameter. Larger quantization parameters cause higher distortion, but can lower the coding rate.
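A minimal sketch of the principle, assuming a plain uniform quantizer; real codecs such as ITU H.264 add per-frequency scaling and standardized rounding offsets that are omitted here.

```c
/* Uniform quantization: a larger qstep (driven by a larger quantization
 * parameter) produces fewer, coarser levels -> lower rate, more distortion. */
int quantize(int coeff, int qstep)
{
    return (coeff >= 0) ? (coeff + qstep / 2) / qstep
                        : -((-coeff + qstep / 2) / qstep);
}

/* Reconstruction; the quantization error magnitude is at most qstep / 2. */
int dequantize(int level, int qstep)
{
    return level * qstep;
}
```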
The prediction loop may be an intra prediction loop or an inter prediction loop. MPEG-4 and ITU H.263 typically support only inter prediction. ITU H.264 supports both intra and inter prediction. In video coder 10 of
In intra prediction, unit 34 performs spatial estimation and spatial compensation. In this case, unit 34 compares the reconstructed MB to neighboring macroblocks within the same video frame to generate an intra predictor block. The intra predictor block is essentially the best match to the reconstructed MB, which will result in good compression in the residual. Intra prediction can help to reduce spatial redundancy.
In inter prediction, unit 36 performs motion estimation and motion compensation. Motion estimation compares the reconstructed MB to blocks of previous or future frames to generate an inter predictor. The inter predictor is the best match to the reconstructed MB, but unlike an intra predictor, the inter predictor comes from a different video frame or frames. Inter prediction can help reduce temporal redundancy. Typically, exploitation of temporal redundancy can have a larger impact in compression of a video sequence than exploitation of spatial redundancy. That is to say, inter coding of MBs usually achieves better compression than intra coding.
The techniques of this disclosure generally concern transformations, such as forward discrete cosine transformations. The techniques may be implemented during the motion estimation process, although this disclosure is not limited in this respect. For purposes of description, this disclosure will describe the techniques as being performed during motion estimation, but these techniques or similar techniques may also be used in other contexts where transformations are performed.
Motion estimation is a computationally intensive process that can be performed by a video coder. The high number of computations may be due to a large number of potential predictors that can be considered in motion estimation. Practically, motion estimation usually involves searching for the inter predictor in a search space that comprises a subset of one or more previous frames (or subsequent frames). Candidates from the search space may be examined on the basis of a cost function or metric, which is usually defined by performing difference computations, such as sum of absolute difference (SAD), sum of squared difference (SSD), sum of absolute transformed difference (SATD), or sum of squared transformed difference (SSTD). Once the metric is calculated for all the candidates in the search space, the candidate that minimizes the metric can be chosen as the inter predictor. Hence, the main factors affecting motion estimation may be the size of the search space, the search methodology, and the cost function. Cost functions essentially quantify the redundancy between the original block of the current frame and a candidate block of the search area. The redundancy may be quantified in terms of rate and distortion.
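For illustration, a SAD evaluation of one 4 by 4 candidate might look like the sketch below; the strides and names are assumptions, and SSD, SATD and SSTD would substitute a different per-pixel or transformed-difference measure.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences between the encode block e and one candidate
 * block s of the search space; the candidate minimizing this metric over
 * all search points becomes the inter predictor. */
int sad_4x4(const uint8_t *e, int e_stride,
            const uint8_t *s, int s_stride)
{
    int cost = 0;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            cost += abs((int)e[r * e_stride + c] - (int)s[r * s_stride + c]);
    return cost;
}
```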
To accomplish motion estimation, the residual energy is analyzed between the block to be encoded and a given search space block. Each residual candidate is obtained by subtracting the pixels of a respective search space block from the corresponding pixels of the block to be encoded. This is what the difference (Diff) module 42 accomplishes in
It should be noted that the cost metric calculated above corresponds to the block being coded and search space blocks that are of size 4 by 4 pixels. For block sizes larger than 4 by 4 pixels, multiple 4 by 4 pixel blocks may be used to span the larger block. In this case, the cost for the larger block may be calculated by accumulating the cost metrics for all 4 by 4 unit blocks that span the larger block. The description below focuses on a block size of 4 by 4 pixels, although the techniques could apply to other sized blocks.
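For example, the cost metric of a 16 by 16 macroblock may be obtained by summing the metrics of the sixteen 4 by 4 unit blocks that tile it.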
Forward transform engine (FTE) 44 is generally a fundamental module of any transform-based metric calculation. Due to the linear nature of the transformation, the stages of difference module 42 and FTE 44 can be switched. In accordance with this disclosure, switching the order of difference module 42 and FTE 44 can allow for computational savings during the transformations. The notation on the inputs and outputs of the depicted blocks is (columns) by (rows) by (number of bits used to represent each value).
The basic problem of motion estimation is to find the “best match” for the block to be encoded (the encode block, e) from a search space (s). Encode block (e) and search space (s) may be defined as:
As can be seen, e may be matched against four search points in s, which can be
Note that a (search) point, e.g., s(0,0), s(0,1), s(1,0) or s(1,1) is depicted as a block of equal horizontal and vertical dimensions. s(0,0) may be referred to as block 00, s(0,1) may be referred to as block 01, s(1,0) may be referred to as block 10, and s(1,1) may be referred to as block 11. A search point may also depict a block of unequal horizontal and vertical dimensions.
r(x, y) = e − s(x, y),  x, y ∈ {0, 1}  (7)
The residual block r(x, y) is then transformed into the spatial frequency domain via transformation matrix,
In equation 8, variable v denotes vertical columns, and in equation 9, variable h denotes horizontal rows. The transformation matrix is composed of integers; in some contexts the transform is known as an integer transform, and in others it is referred to as a discrete cosine transform. Discrete cosine transforms (DCTs) may be either integer transforms or “real number” transforms. A person having ordinary skill in the art will recognize that the transformation matrix may be formed by integers or real numbers. It can be noted that a 4-point one-dimensional (1-D) transform may be employed to generate spatial frequency domain components of a video block. The 4-point 1-D transform may first be applied to all columns to generate first-pass intermediate results, and then a 4-point 1-D transform may be applied in a second pass to all rows of the intermediate results. A “butterfly implementation” of a 1-D transform is shown in
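One common arrangement of such a butterfly for the 4-point integer transform is sketched below; a real-number DCT would replace the shift-based multiplies with cosine-weighted ones, and the names are illustrative.

```c
/* 4-point 1-D butterfly computing y = T x for the integer transform
 * T = {{1,1,1,1},{2,1,-1,-2},{1,-1,-1,1},{1,-2,2,-1}}: four additions/
 * subtractions in stage 1, then combines with two shifts in stage 2. */
void butterfly4(const int x[4], int y[4])
{
    int e0 = x[0] + x[3];    /* stage 1: sums and differences */
    int e3 = x[0] - x[3];
    int e1 = x[1] + x[2];
    int e2 = x[1] - x[2];
    y[0] = e0 + e1;          /* stage 2: combine */
    y[2] = e0 - e1;
    y[1] = (e3 << 1) + e2;   /* the factor of 2 becomes a left shift */
    y[3] = e3 - (e2 << 1);
}
```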
As shown in
If a modified framework is used, as shown in
Thus, the transformation can be changed from a transformation of r(x,y) to a transformation of s(x,y). This allows the column overlap of s(0,0) (equation 3) and s(0,1) (equation 4) to be exploited, and the column overlap of s(1,0) (equation 5) and s(1,1) (equation 6) to be exploited. First-pass intermediate results (i.e., results associated with the columns) of S′(0,0) may be re-used to avoid duplicate computations of first-pass intermediate results of S′(0,1). Similarly, first-pass intermediate results of S′(1,0) may be re-used to avoid duplicate computations of first-pass intermediate results of S′(1,1). This re-use (also called sharing) of intermediate results is explained in greater detail below.
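The reordering is justified by linearity alone. With C denoting the 1-D transformation matrix, a one-line check in the notation above (a sketch, writing the 2-D transform as pre- and post-multiplication by C):

```latex
R'(x,y) = C\,\bigl(e - s(x,y)\bigr)\,C^{T}
        = C\,e\,C^{T} - C\,s(x,y)\,C^{T}
        = E' - S'(x,y)
```

Hence the encode block need be transformed only once, and each transformed search block S′(x,y) can then be subtracted from E′.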
To illustrate the re-use (e.g., sharing) concept, consider, as an example, that the two-dimensional transform of s(0,0) may be calculated as:
It can be noted that from equations (3) & (4):
sv(n)(0,0) = sv(n−1)(0,1),  v ∈ {0, 1, 2, 3},  n ∈ {1, 2, 3}  (19)
Hence, it follows
S′v(n)(0,0) = S′v(n−1)(0,1),  v ∈ {0, 1, 2, 3},  n ∈ {1, 2, 3}  (20)
This means that for n ∈ {1, 2, 3}, the first-pass intermediate results associated with S′(0,0) can be re-used to calculate the first-pass intermediate results associated with S′(0,1). It may be noted that re-use may eliminate three out of eight 1-D column transforms otherwise needed for the calculation of S′(0,1). Therefore, savings of up to 37.5 percent of 1-D transforms may be achieved during the first pass, as sketched below. It should be pointed out that video blocks may also be processed by overlapping the rows of s(0,0) with the rows of s(1,0), in which case 1-D row transforms would be applied to the rows on a first pass, prior to a second pass of 1-D column transforms on the columns resulting from the first pass. In other words, the techniques of this disclosure can apply regardless of the order of the horizontal transforms and the vertical transforms.
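To make the saving concrete, the sketch below transforms a 4 by 5 strip with five column transforms instead of the eight that two independent 4 by 4 transforms would require; butterfly4 is the 1-D kernel sketched earlier, and the names are illustrative.

```c
void butterfly4(const int x[4], int y[4]);  /* 1-D kernel from above */

/* Transform the two horizontally overlapping blocks s(0,0) (columns 0-3)
 * and s(0,1) (columns 1-4) of a 4x5 strip, re-using the shared first-pass
 * column results for columns 1-3. */
void transform_strip(const int s[4][5], int S00[4][4], int S01[4][4])
{
    int mid[4][5];                        /* first-pass intermediate results */
    for (int c = 0; c < 5; c++) {         /* 5 column transforms, not 8 */
        int col[4] = { s[0][c], s[1][c], s[2][c], s[3][c] };
        int out[4];
        butterfly4(col, out);
        for (int r = 0; r < 4; r++)
            mid[r][c] = out[r];
    }
    for (int r = 0; r < 4; r++) {         /* second pass: rows */
        butterfly4(&mid[r][0], S00[r]);   /* block 00 reads columns 0..3 */
        butterfly4(&mid[r][1], S01[r]);   /* block 01 re-uses columns 1..3 */
    }
}
```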
In general, to search for a best match of a 4×4 block in an N×M search space, there are a total of (N−3)×(M−3) search points, where N represents the total pixel count in the horizontal direction and M represents the total pixel count in the vertical direction. To ensure that the best-match search can finish in time for real-time video coding, multiple 1-D engines can be designed and run at the same time. Assuming “j+1” units (0 ≤ j ≤ N−3) of horizontal engines are used in parallel to accelerate the searching, and if vertically transformed data is shared efficiently, only “k+1” units of vertical engines are needed, where k is the integer part of (j+3)/4. In designing a video coding architecture, power consumption and silicon area are important trade-off factors for the performance of transformation engines.
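For example, with j+1 = 8 horizontal engines (j = 7), k equals the integer part of (7+3)/4, i.e., 2, so only k+1 = 3 vertical engines are needed rather than one vertical engine per horizontal engine.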
FTE 60 may implement a 4 by 4 forward transform in both vertical and horizontal directions. Typically, it does not matter whether the vertical transformations or horizontal transformations are performed first. That is to say, the sequence of the transformations (vertical and horizontal) could be switched, such that in other implementations the horizontal transformations are performed prior to the vertical transformations. In
As shown in
As noted above, the technique described herein may be able to achieve a reduction of up to 37.5 percent in the number of 1-D transforms that are performed. However, the architecture shown in
1 − [(Number of Vertical Engines + Number of Horizontal Engines) / (2 × Number of Horizontal Engines)]
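For example, with four horizontal engines and two vertical engines (j = 3, k = 1), the savings is 1 − (2+4)/(2×4) = 25 percent. As the number of horizontal engines grows, the number of vertical engines approaches one quarter of it, and the savings approaches the 37.5 percent bound noted above.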
Again, multiple horizontal engines can be used in the FTE to improve the transformation throughput and the data sharing. In order to ensure that the FTE can facilitate efficient data sharing and low power consumption in a relatively small area, the vertical and horizontal engines of the FTE may be designed differently. In particular, to save clock power, the majority of data registers in the engines are clocked at one half or one quarter of the pixel clock rate. Table 1, Table 2 and Table 3 (below) illustrate a clocking scheme that can be used.
Table 1 shows exemplary data flow and timing of vertical engine 80 for search point s(0,0) and part of search point s(1,0), whose transform starts at clock cycle sixteen. In particular, Table 1 shows the content of these different registers, given input (I/P) over successive clock cycles. The output (O/P) may correspond to the content of register Rf (81F). Again, a “divided down” clock signal may be used, in which case vertical engine 80 consumes the equivalent clocking power of three registers per clock cycle.
In general, for search point s(i,j), the pixel data from RAM 62 is re-sequenced as follows: s(i, j+l), s(i+3, j+l), s(i+1, j+l), s(i+2, j+l); l ∈ {0, 1, 2, 3}. Register R0 (81A) is enabled at clock cycle 2n to latch pixel data s(i, j+l) and s(i+1, j+l). Register R1 (81B) latches the input pixel data every two clock cycles; its clock is enabled at clock cycle 2n+1 to latch pixel data s(i+3, j+l) and s(i+2, j+l). Of the registers P0 81C, P1 81D and P2 81E that are used to hold the intermediate transform data, the P0 (81C) clock is enabled at cycle 4n+1 to latch s(i, j+l)+s(i+3, j+l), P1 (81D) at cycle 4n+2 latches s(i, j+l)−s(i+3, j+l), and P2 (81E) at cycle 4n+3 latches s(i+1, j+l)+s(i+2, j+l) and at cycle 4n+4 latches s(i+1, j+l)−s(i+2, j+l), where n = 0, 1, 2 . . . and l ∈ {0, 1, 2, 3}. Register Rf (81F) is used to hold the final transformation result, and it is enabled every clock cycle. For search point s(i,j), the vertically transformed output sequence is S′(i, j+l), S′(i+2, j+l), S′(i+1, j+l), S′(i+3, j+l), l ∈ {0, 1, 2, 3}.
In Table 1, the square wave illustrated in the fourth column represents the system clock. All data in vertical engine 80 may be registered at the rising edge of this system clock.
For the horizontal engine to share the vertically transformed data efficiently, it has to take the 4×4 vertically transformed data in sequential order. There are a few different orders that make efficient data sharing work; the example shown here is just one of them. To minimize the power consumption, it is desirable for the horizontal engine to consume the vertically transformed data immediately. Transpose registers TM 65A, 65B are designed to temporarily store and reshuffle the vertically transformed data.
For efficient processing in horizontal engines (such as engines 66A-66D of
Each horizontal engine takes four strings of data: sequence0, sequence1, sequence2 and sequence3. The sequences are as follows:
Sequence 0: S′(i, j), S′(i+1, j), S′(i+2, j), S′(i+3, j)
Sequence 1: S′(i, j+1), S′(i+1, j+1), S′(i+2, j+1), S′(i+3, j+1)
Sequence 2: S′(i, j+2), S′(i+1, j+2), S′(i+2, j+2), S′(i+3, j+2)
Sequence 3: S′(i, j+3), S′(i+1, j+3), S′(i+2, j+3), S′(i+3, j+3)
S′(i,j) represents the vertically transformed pixel data. For horizontal engine HE(0), all four data sequences are input from vertical engine VE(0).
For horizontal engine HE(1) 100, its sequence0 and sequence1 input data come from registers 91B and 91C of HE(0) 90, its sequence2 data comes from the sharing data output 92H of HE(0) 90, and its sequence3 data comes directly from vertical engine VE(1) 80.
For horizontal engine HE(2) 100, its sequence0 data input comes from register 91C of HE(0) 90, its sequence1 data comes from the sharing data output 92H of HE(0), its sequence2 data comes from the sharing data output 92H of HE(1) 100, and its sequence3 data comes directly from vertical engine VE(1) 80.
For horizontal engine HE(j) 100, where j ≥ 3, its input data sequence0, sequence1 and sequence2 use the sharing data output multiplexer 102H of its neighboring horizontal engines HE(j−3), HE(j−2) and HE(j−1), respectively. The sequence3 data of HE(j) always comes directly from the output of vertical engine VE(k), where k is the integer part of (j+3)/4.
Table 2 and Table 3 below show the input, output and internal control timing of horizontal engines HE(0) 90 and HE(1) 100. Notice that R0 (91A), R1 (91B), R2 (91C), R3 (91D) and R0 (101A) are only enabled once every four clock cycles, and that the intermediate registers P0 (91E, 101B) and P1 (91F, 101C) latch data every two clock cycles.
Horizontal engine HE(0) 90 may be designed to transform the first 4 by 4 pixel column blocks (search points s(0,0), s(1,0), . . . , s(M−3,0) from a search area of size N by M). In contrast, horizontal engine HE(j) 100 (shown in
Horizontal engine HE(0) 90 latches broadcasted sequential pixel data from vertical engine VE(0).
The data latched in registers R0 (91A), R1 (91B), R2 (91C) and R3 (91D) of the horizontal engine 90 may be shared by the next three horizontal engines HE(1), HE(2), HE(3) (see 66B-66D of
Table 2 shows the timing and how the data is shared among the horizontal engines HE(0)-HE(3). In particular, Table 2 shows the timing information of horizontal engine HE(0) 66A working on search points s(0,0), s(1,0), s(2,0), and so forth. The pixel data starting at cycle 0, S′00, is the first pixel of the 4 by 4 block associated with search point s(0,0). The data starting at cycle 16, S′10, refers to the first pixel of the 4 by 4 block for search point s(1,0), and so forth. Table 2 can easily be extended to the ending search point s(M−3, 0) by applying a similar timing relationship to that presented in Table 2.
Table 3 displays the 16-pixel timing of the HE(1) engine for the 4×4 matching area of search point s(0,1), and part of the pixel timing of search point s(1,1), which starts from cycle seventeen. Data sequence0, sequence1 and sequence2 in Table 3 are copies of the R1, R2 and R3 register values from horizontal engine HE(0), which are shared among horizontal engines HE(1) to HE(3). The register values are listed in Table 3 to illustrate the timing relationship and data flow between HE(0) and HE(1). Notably, by using a “divided down” pixel clock frequency, more physical registers can be operated at the equivalent clocking power of fewer registers. For example, HE(1) may have four physical registers in a design with clocking power equivalent to 2.25 registers, and HE(0) may have seven registers with clocking power equivalent to three registers.
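To illustrate the arithmetic: a register enabled once every four clock cycles contributes the equivalent clocking power of one quarter of a register, one enabled every two cycles contributes one half, and one enabled every cycle contributes one. Assuming the final result register of each engine is enabled every cycle, HE(1) gives 0.25 + 0.5 + 0.5 + 1 = 2.25 register-equivalents, and HE(0) gives 4(0.25) + 0.5 + 0.5 + 1 = 3.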
Table 4 shows an exemplary timing relationship among VE(0), VE(1), HE(0), HE(1), HE(2) and HE(3). In particular, Table 4 illustrates the input timing relationship among the vertical and horizontal engines VE(k) and HE(j) (k = 0, 1; j = 0, 1, 2, 3), and how the vertically transformed data is shared among the horizontal engines HE(j). Table 4 only provides the timing information for search points s(i,j), where i = 0, 1, 2 . . . 4 and j = 0, 1, . . . 3. A similar timing pattern can easily be developed for all s(i,j) search points with i = 0, 1, 2 . . . M−3 and j = 0, 1, . . . N−3.
An FTE transformation architecture is determined not only by the engines, but also by the data flow between the vertical and horizontal engines. Since the architecture may comprise a pixel-rate pipelined design, any skewed data flow between the engines can stall the pipeline. In order to avoid this problem, transposition memories 65A and 65B may be used in between the engines to buffer and account for the timing of the data.
The techniques of this disclosure allow transformations to be performed with respect to a set of blocks of video data in a pipelined manner. In particular, the horizontal engines and vertical engines may operate simultaneously for at least part of the transformations such that data is pipelined through FTE 60.
In general, the maximum amount of timing skew between data generation by one of vertical engines 64A or 64B, and consumption by one of horizontal engines 66A-66D, defines the minimum amount of transposition memory (TM) required. In order to have the smallest TM between the conversions, it is generally advisable to:
One way to implement a TM is to use random access memory (RAM). The disadvantages of the RAM-based design are:
Given these factors, the use of RAM as the transpose memory between the vertical and horizontal engines may be considered when the physical implementation on silicon does not present difficulties.
Another TM design approach is a register-based design. The same pipelining techniques may be used regardless of whether transposition memory or transposition registers are used.
In order to share the input data, it is desirable for the horizontal engine input to be in a sequential format. In this case, a pipelining technique may be employed whereby the horizontal and vertical engines work in parallel with respect to a pipeline of video data. For efficient pipelined data processing, the horizontal engine may input the first data generated by the vertical engine as soon as it is available. So the horizontal input may be in the S′00, S′01, S′02, S′03 . . . S′33 sequence format. Comparing the vertical engine output sequences (Ver. O/P seq.) in
A number of techniques and examples have been described. The described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the techniques described herein may be embodied in a computer-readable medium comprising instructions that, upon execution in a device, cause the device to perform one or more of the techniques described above. For example, the instructions, upon execution, may cause a video coding device to perform transformations on blocks of video data, wherein performing the transformations includes re-using computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data.
The computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The instructions may be executed by one or more processors or other machines, such as one or more digital signal processors (DSPs), general purpose microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. In some embodiments, the functionality described herein may be provided within dedicated software modules or hardware units configured for encoding and decoding audio or multimedia information, or incorporated in a combined multimedia encoder-decoder (CODEC).
If implemented in a hardware circuit, this disclosure may be directed to a circuit configured to perform transformations on blocks of video data, wherein in performing the transformations the circuit re-uses one or more computations associated with a first transformation of a first block of video data in a second transformation of a second block of video data. For example, the circuit may comprise an integrated circuit or set of circuits that form a chipset. The circuit may comprise an ASIC, an FPGA, various logic circuits, integrated circuits, or combinations thereof.
These and other embodiments are within the scope of the following claims.