1. Technical Field
The present disclosure relates to the field of video compression, particularly video compression using High Efficiency Video Coding (HEVC) that employs block processing.
2. Related Art
Source pictures 120 supplied from, by way of a non-limiting example, a content provider can include a video sequence of frames including source pictures. The source pictures 120 can be uncompressed or compressed. If the source pictures 120 are uncompressed, the coding system 110 can have an encoding function. If the source pictures 120 are compressed, the coding system 110 can have a transcoding function. Coding units can be derived from the source pictures 120 utilizing the controller 111. The frame memory 113 can have a first area that can be used for storing the incoming frames from the source pictures 120 and a second area that can be used for reading out the frames and outputting them to the encoding unit 114. The controller 111 can output an area switching control signal 123 to the frame memory 113. The area switching control signal 123 can indicate whether the first area or the second area is to be utilized.
The controller 111 can output an encoding control signal 124 to the encoding unit 114. The encoding control signal 124 can cause the encoding unit 114 to start an encoding operation, such as preparing the Coding Units based on a source picture. In response to the encoding control signal 124 from the controller 111, the encoding unit 114 can begin to read out the prepared Coding Units to a high-efficiency encoding process, such as a prediction coding process or a transform coding process, which processes the prepared Coding Units and generates video compression data based on the source pictures associated with the Coding Units.
The encoding unit 114 can package the generated video compression data in a packetized elementary stream (PES) including video packets. The encoding unit 114 can map the video packets into an encoded video signal 122 using control information and a presentation time stamp (PTS), and the encoded video signal 122 can be transmitted to the transmitter buffer 115.
The encoded video signal 122, including the generated video compression data, can be stored in the transmitter buffer 115. The information amount counter 112 can be incremented to indicate the total amount of data in the transmitter buffer 115. As data is retrieved and removed from the buffer, the counter 112 can be decremented to reflect the amount of data in the transmitter buffer 115. The occupied area information signal 126 can be transmitted to the counter 112 to indicate whether data from the encoding unit 114 has been added to or removed from the transmitter buffer 115 so that the counter 112 can be incremented or decremented. The controller 111 can control the production of video packets by the encoding unit 114 on the basis of the occupied area information 126, which can be communicated in order to anticipate, avoid, prevent, and/or detect an overflow or underflow in the transmitter buffer 115.
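As an illustration of the occupancy accounting described above, the following is a minimal Python sketch of a transmitter-buffer counter that is incremented as compressed data is added and decremented as data is removed, so that a controller can throttle packet production before an overflow occurs. The class name, capacity, and watermark threshold are illustrative assumptions rather than values taken from this disclosure.

```python
class TransmitterBufferCounter:
    """Tracks buffer occupancy, in the spirit of information amount counter 112."""

    def __init__(self, capacity_bytes=2_000_000, high_watermark=0.9):
        self.capacity = capacity_bytes        # assumed buffer capacity
        self.high_watermark = high_watermark  # assumed throttling threshold
        self.occupied = 0                     # current occupancy in bytes

    def on_data_added(self, nbytes):          # signaled when the encoder writes data
        self.occupied += nbytes

    def on_data_removed(self, nbytes):        # signaled when data is transmitted
        self.occupied = max(0, self.occupied - nbytes)

    def near_overflow(self):
        """True when the controller should slow or pause packet production."""
        return self.occupied >= self.high_watermark * self.capacity

counter = TransmitterBufferCounter()
counter.on_data_added(1_900_000)
print(counter.near_overflow())   # True -> throttle production to avoid overflow
```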
The information amount counter 112 can be reset in response to a preset signal 128 generated and output by the controller 111. After the information amount counter 112 is reset, it can count data output by the encoding unit 114 and obtain the amount of video compression data and/or video packets which have been generated. The information amount counter 112 can supply the controller 111 with an information amount signal 129 representative of the obtained amount of information. The controller 111 can control the encoding unit 114 so that there is no overflow at the transmitter buffer 115.
In some embodiments, the decoding system 140 can comprise an input interface 170, a receiver buffer 150, a controller 153, a frame memory 152, a decoding unit 151, and an output interface 175. The receiver buffer 150 of the decoding system 140 can temporarily store the compressed bitstream 105, including the received video compression data and video packets based on the source pictures 120. The decoding system 140 can read the control information and presentation time stamp information associated with video packets in the received data and output a frame number signal 163, which can be applied to the controller 153. The controller 153 can supervise the counted number of frames at a predetermined interval. By way of a non-limiting example, the controller 153 can supervise the counted number of frames each time the decoding unit 151 completes a decoding operation.
In some embodiments, when the frame number signal 163 indicates the receiver buffer 150 is at a predetermined capacity, the controller 153 can output a decoding start signal 164 to the decoding unit 151. When the frame number signal 163 indicates the receiver buffer 150 is at less than the predetermined capacity, the controller 153 can wait until the counted number of frames becomes equal to the predetermined amount and can output the decoding start signal 164 when that occurs. By way of a non-limiting example, the controller 153 can output the decoding start signal 164 when the frame number signal 163 indicates the receiver buffer 150 is at the predetermined capacity. The encoded video packets and video compression data can be decoded in a monotonic order (i.e., increasing or decreasing) based on presentation time stamps associated with the encoded video packets.
In response to the decoding start signal 164, the decoding unit 151 can decode data amounting to one picture associated with a frame and compressed video data associated with the picture associated with video packets from the receiver buffer 150. The decoding unit 151 can write a decoded video signal 162 into the frame memory 152. The frame memory 152 can have a first area into which the decoded video signal is written, and a second area used for reading out decoded pictures 160 to the output interface 175.
In various embodiments, the coding system 110 can be incorporated or otherwise associated with a transcoder or an encoding apparatus at a headend and the decoding system 140 can be incorporated or otherwise associated with a downstream device, such as a mobile device, a set top box or a transcoder.
The coding system 110 and decoding system 140 can be utilized separately or together to encode and decode video data according to various coding formats, including High Efficiency Video Coding (HEVC). HEVC is a block-based hybrid spatial and temporal predictive coding scheme. In HEVC, input images, such as video frames, can be divided into square blocks called Largest Coding Units (LCUs) 200, as shown in
As video data density continues to increase, further improved ways to code the CUs are needed so that large input images and/or macroblocks can be rapidly, efficiently, and accurately encoded and decoded.
The present invention provides an improved system for HEVC. In embodiments for the system, a method of determining binary codewords for transform coefficients in an efficient manner is provided. Codewords for the transform coefficients within transform units (TUs) that are subdivisions of the CUs 202 are used in encoding input images and/or macroblocks.
In one embodiment, a method is provided that comprises providing a transform unit comprising one or more sub-blocks of the transform coefficients, each of the transform coefficients having a quantized value, determining a symbol for each of the transform coefficients that have a quantized value equal to or greater than a threshold value, by subtracting the threshold value from the absolute value of the transform coefficient, providing a parameter variable, initially setting the parameter variable to a value of zero, converting each symbol into a binary codeword based on the value of the parameter variable, and updating the parameter variable after each symbol has been converted by setting the parameter variable to a new value, the new value being based at least in part on the value of the parameter variable preceding the updating and the value of the most recently converted symbol, wherein the parameter variable has possible values of 0, 1, 2, 3, and 4.
In another embodiment, the invention includes a method of determining binary codewords for transform coefficients that uses a look-up table to update a parameter variable. The method comprises providing a transform unit comprising one or more sub-blocks of the transform coefficients, each of the transform coefficients having a quantized value, determining a symbol for each of the transform coefficients that have a quantized value equal to or greater than a threshold value, by subtracting the threshold value from the absolute value of the transform coefficient, providing a parameter variable, initially setting the parameter variable to a value of zero, converting each symbol into a binary codeword based on the value of the parameter variable, looking up a new value for the parameter variable from a table based on the value of the parameter variable preceding the updating and the value of the most recently converted symbol, and replacing the value of the parameter variable with the new value, wherein the parameter variable has possible values of 0, 1, 2, 3, and 4.
In another embodiment, the invention includes a method of determining binary codewords for transform coefficients that uses one or more mathematical conditions that can be evaluated using logic rather than requiring a look-up table. The method comprises providing a transform unit comprising one or more sub-blocks of transform coefficients, each of the transform coefficients having a quantized value, determining a symbol for each transform coefficient having a quantized value equal to or greater than a threshold value, by subtracting the threshold value from the absolute value of the transform coefficient, providing a parameter variable, initially setting the parameter variable to a value of zero, converting each symbol into a binary codeword based on the value of the parameter variable, determining whether the value of the parameter variable preceding the updating and the value of the most recently converted symbol together satisfy one or more conditions, and mathematically adding an integer of one to the value of the parameter variable for each of the one or more conditions that is satisfied, wherein the parameter variable has possible values of 0, 1, 2, 3, and 4.
Further details of the present invention are explained with the help of the attached drawings in which:
In HEVC, an input image, such as a video frame, is broken up into coding units (CUs) that are then identified in code. The CUs are then further broken into sub-units that are coded as will be described subsequently.
Initially for the coding, a quadtree data representation can be used to describe the partition of a large coding unit (LCU) 200. The quadtree representation can have nodes corresponding to the LCU 200 and CUs 202. At each node of the quadtree representation, a flag “1” can be assigned if the LCU 200 or CU 202 is split into four CUs 202. If the node is not split into CUs 202, a flag “0” can be assigned. By way of a non-limiting example, the quadtree representation shown in
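To illustrate the split-flag signaling described above, the following is a minimal sketch of emitting quadtree flags for an LCU, where a flag of 1 marks a node that splits into four CUs and 0 marks a leaf. The `should_split` predicate and the size parameters are hypothetical stand-ins for the encoder's actual mode decision and are not taken from this disclosure.

```python
def encode_quadtree_flags(size, min_cu_size, should_split, flags):
    """Append split flags for one node and, recursively, for its children."""
    if size <= min_cu_size:
        # Nodes at the minimum CU size cannot be split; no flag is signaled.
        return
    if should_split(size):
        flags.append(1)                      # node splits into four CUs
        for _ in range(4):
            encode_quadtree_flags(size // 2, min_cu_size, should_split, flags)
    else:
        flags.append(0)                      # node is a final CU (leaf)

flags = []
encode_quadtree_flags(64, 8, lambda size: size > 32, flags)
print(flags)  # [1, 0, 0, 0, 0]: a 64x64 LCU split once into four 32x32 CUs
```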
At each leaf of the quadtree, the final CUs 202 can be broken up into one or more blocks called prediction units (PUs) 204. PUs 204 can be square or rectangular. A CU 202 with dimensions of 2N×2N can have one of the four exemplary arrangements of PUs 204 shown in
A PU can be obtained through spatial or temporal prediction. Temporal prediction relates to inter mode pictures, while spatial prediction relates to intra mode pictures. The PUs 204 of each CU 202 can, thus, be coded in either intra mode or inter mode. Features of coding relating to intra mode and inter mode pictures are described in the paragraphs that follow.
Intra mode coding can use data from the current input image, without referring to other images, to code an I picture. In intra mode the PUs 204 can be spatially predictive coded. Each PU 204 of a CU 202 can have its own spatial prediction direction. Spatial prediction directions can be horizontal, vertical, 45-degree diagonal, 135-degree diagonal, DC, planar, or any other direction. The spatial prediction direction for the PU 204 can be coded as a syntax element. In some embodiments, brightness information (Luma) and color information (Chroma) for the PU 204 can be predicted separately. In some embodiments, the number of Luma intra prediction modes for 4×4, 8×8, 16×16, 32×32, and 64×64 blocks can be 18, 35, 35, 35, and 4 respectively. In alternate embodiments, the number of Luma intra prediction modes for blocks of any size can be 35. An additional mode can be used for the Chroma intra prediction mode. In some embodiments, the Chroma prediction mode can be called “IntraFromLuma.”
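As a rough illustration of a few of the spatial prediction directions listed above, the sketch below forms vertical, horizontal, and DC predictions for a square PU from the reconstructed samples above and to the left of the block. Angular and planar modes, reference-sample filtering, and separate Luma/Chroma handling are omitted; the function and variable names are illustrative only.

```python
import numpy as np

def intra_predict(mode, top, left):
    """top, left: 1-D arrays of neighboring reconstructed samples of length n."""
    n = len(top)
    if mode == 'vertical':
        return np.tile(top, (n, 1))                  # copy the top row downward
    if mode == 'horizontal':
        return np.tile(left.reshape(-1, 1), (1, n))  # copy the left column across
    if mode == 'dc':
        dc = int(round((top.sum() + left.sum()) / (2 * n)))
        return np.full((n, n), dc)                   # flat block at the mean level
    raise ValueError('mode not covered in this sketch')

top = np.array([100, 102, 104, 106])
left = np.array([98, 96, 94, 92])
print(intra_predict('dc', top, left))   # a 4x4 block filled with 99
```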
Inter mode coding can use data from the current input image and one or more reference images to code “P” pictures and/or “B” pictures. In some situations and/or embodiments, inter mode coding can result in higher compression than intra mode coding. In inter mode PUs 204 can be temporally predictive coded, such that each PU 204 of the CU 202 can have one or more motion vectors and one or more associated reference images. Temporal prediction can be performed through a motion estimation operation that searches for a best match prediction for the PU 204 over the associated reference images. The best match prediction can be described by the motion vectors and associated reference images. P pictures use data from the current input image and one or more previous reference images. B pictures use data from the current input image and both previous and subsequent reference images, and can have up to two motion vectors. The motion vectors and reference pictures can be coded in the HEVC bitstream. In some embodiments, the motion vectors can be coded as syntax elements “MV,” and the reference pictures can be coded as syntax elements “refIdx.” In some embodiments, inter mode coding can allow both spatial and temporal predictive coding.
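The following is a simplified sketch of the best-match search mentioned above: an exhaustive sum-of-absolute-differences (SAD) search over a reference picture for the position that best predicts the current PU. Practical encoders restrict the search window, use fast search patterns, and refine to sub-pixel positions; those refinements, and the conversion of the matched position into a coded motion vector and reference index, are omitted here.

```python
import numpy as np

def best_match(pu, reference):
    """Return the (row, col) position in `reference` with the lowest SAD against `pu`."""
    h, w = pu.shape
    best_pos, best_sad = (0, 0), float('inf')
    for dy in range(reference.shape[0] - h + 1):
        for dx in range(reference.shape[1] - w + 1):
            candidate = reference[dy:dy + h, dx:dx + w].astype(int)
            sad = int(np.abs(candidate - pu.astype(int)).sum())
            if sad < best_sad:
                best_pos, best_sad = (dy, dx), sad
    return best_pos

reference = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
pu = reference[5:9, 7:11]          # a 4x4 block copied straight from the reference
print(best_match(pu, reference))   # expected (5, 7), since the copy matches exactly
```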
As shown in
Referring back to
At 614 the quantized transform coefficients 212 can be dequantized into dequantized transform coefficients 216 E′. At 616 the dequantized transform coefficients 216 E′ can then be inverse transformed to reconstruct the residual PU 218, e′. At 618 the reconstructed residual PU 218, e′, can then be added to a corresponding prediction PU 206, x′, obtained through either spatial prediction at 602 or temporal prediction at 604, to obtain a reconstructed PU 220, x″. At 620 a deblocking filter can be used on reconstructed PUs 220, x″, to reduce blocking artifacts. At 620 a sample adaptive offset process is also provided that can be conditionally performed to compensate for the pixel value offset between reconstructed pixels and original pixels. Further, at 620, an adaptive loop filter can be conditionally used on the reconstructed PUs 220, x″, to reduce or minimize coding distortion between input and output images.
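The sketch below, under stated simplifications, traces the reconstruction path at 614 through 618: the quantized levels are dequantized with a single flat quantization step, inverse transformed with a floating-point DCT that stands in for the HEVC integer core transform, and added to the prediction. The in-loop filtering at 620 (deblocking, sample adaptive offset, adaptive loop filter) is omitted, and the flat step size is an assumption for illustration.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    basis[0, :] *= np.sqrt(0.5)
    return basis

def reconstruct_block(levels, prediction, qstep):
    """levels: quantized transform coefficients; prediction: the predicted PU x'."""
    c = dct_basis(levels.shape[0])
    dequantized = levels * qstep            # 614: dequantized coefficients E'
    residual = c.T @ dequantized @ c        # 616: inverse transform -> residual e'
    reconstructed = prediction + residual   # 618: x'' = x' + e'
    return np.clip(np.rint(reconstructed), 0, 255).astype(np.uint8)

prediction = np.full((4, 4), 128.0)
levels = np.zeros((4, 4))
levels[0, 0] = 3                            # a single nonzero quantized DC level
print(reconstruct_block(levels, prediction, qstep=10.0))   # a uniform block near 136
```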
If the reconstructed image is a reference image that will be used for future temporal prediction in inter mode coding, the reconstructed images can be stored in a reference buffer 622. Intra mode coded images can be a possible point where decoding can begin without needing additional reconstructed images.
HEVC can use entropy coding schemes during step 612 such as context-based adaptive binary arithmetic coding (CABAC). The coding process for CABAC is shown in
At block 904 in
The quantized transform coefficients 212 of the TUs 210 can be divided into groups. In some embodiments, the groups can be square blocks of quantized transform coefficients 212 called sub-blocks. The sub-blocks within a TU 210 can be subdivisions of any desired size, such as a 4×4 block of 16 quantized transform coefficients 212. By way of non-limiting examples: an 8×8 TU 210 having 64 quantized transform coefficients 212 can be divided into four 4×4 sub-blocks each having 16 quantized transform coefficients 212; a 16×16 TU 210 having 256 quantized transform coefficients 212 can be divided into 16 4×4 sub-blocks each having 16 quantized transform coefficients 212; and a 32×32 TU 210 having 1024 quantized transform coefficients 212 can be divided into 64 4×4 sub-blocks each having 16 quantized transform coefficients 212. In other embodiments, the groups can be subsets. Subsets can comprise 16 quantized transform coefficients 212 that are consecutive along a backwards zig-zag scan. In alternate embodiments, groups can comprise any number of quantized transform coefficients 212 from a TU 210 in any scan order and/or shape.
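The following is a small sketch of the sub-block division described above, in which a square TU of quantized transform coefficients is partitioned into 4×4 sub-blocks (for example, an 8×8 TU yields four sub-blocks and a 16×16 TU yields sixteen). A NumPy array stands in for the TU, and the sub-blocks are returned in simple row-by-row order rather than any particular coding scan.

```python
import numpy as np

def split_into_subblocks(tu, size=4):
    """Return the size x size sub-blocks of a square TU, in row-by-row order."""
    rows, cols = tu.shape
    return [tu[r:r + size, c:c + size]
            for r in range(0, rows, size)
            for c in range(0, cols, size)]

tu = np.arange(64).reshape(8, 8)             # an 8x8 TU of 64 quantized coefficients
subblocks = split_into_subblocks(tu)
print(len(subblocks), subblocks[0].shape)    # 4 (4, 4)
```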
Referring back to
The coefficient levels 222 obtained at block 1104 that are expected to occur with a higher frequency can be coded before coefficient levels 222 that are expected to occur with lower frequencies. By way of a non-limiting example, in some embodiments coefficient levels 222 of 0, 1, or 2 can be expected to occur most frequently. In some embodiments, the coefficient levels 222 can be coded in three parts, which identifies the most frequently occurring coefficient levels 222 and leaves more complex calculations for the coefficient levels 222 that can be expected to occur less frequently. First, the coefficient level 222 of a quantized transform coefficient 212 can be checked to determine whether it is greater than one. If the coefficient level 222 is greater than one, the coefficient level 222 can be checked to determine whether it is greater than two.
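As a hedged illustration of the three-part coding just described, the sketch below splits an absolute coefficient level into a greater-than-one flag, a greater-than-two flag, and a remaining symbol for levels of three or more (the symbol that is converted into a codeword later). The flag names echo the coeff_abs_level_greater1/greater2 style of syntax element; the CABAC context modeling used to code these flags is omitted.

```python
def split_level(level):
    """Split an absolute coefficient level into the three signaled parts."""
    greater1_flag = 1 if level > 1 else 0
    greater2_flag = (1 if level > 2 else 0) if greater1_flag else None
    remaining = level - 3 if level >= 3 else None   # symbol 226, coded separately
    return greater1_flag, greater2_flag, remaining

for level in (0, 1, 2, 3, 7):
    print(level, split_level(level))
# 0 -> (0, None, None)   2 -> (1, 0, None)   3 -> (1, 1, 0)   7 -> (1, 1, 4)
```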
At 1106 in
For the quantized transform coefficients 212 that occur less frequently and have coefficient levels 222 of three or more as determined in the blocks of
Referring to
Referring still to
In some situations and/or embodiments, converting the symbol 226 according to Truncated Rice code with a lower value for the parameter variable 230 can result in a binary codeword 228 having fewer bits than converting the same symbol 226 according to Truncated Rice code with a higher value for the parameter variable 230. By way of a non-limiting example, as shown by the table depicted in
In other situations and/or embodiments, converting the symbol 226 according to Truncated Rice code with a higher value for the parameter variable 230 can result in a binary codeword 228 having fewer bits than converting the same symbol 226 according to Truncated Rice code with a lower value for the parameter variable 230. By way of a non-limiting example, as shown in the table depicted in
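The following sketch shows a plain Golomb-Rice binarization of a symbol 226 for a given value k of the parameter variable 230: a unary prefix of (symbol >> k) ones terminated by a zero, followed by the k low-order bits of the symbol. The Truncated Rice code of the draft standard additionally truncates the prefix and escapes to an Exp-Golomb suffix beyond a maximum value; that refinement is omitted here, so the bit counts below only illustrate the trade-off discussed above.

```python
def rice_codeword(symbol, k):
    """Golomb-Rice codeword: unary prefix of (symbol >> k) ones, then k suffix bits."""
    prefix = '1' * (symbol >> k) + '0'
    suffix = format(symbol & ((1 << k) - 1), '0{}b'.format(k)) if k else ''
    return prefix + suffix

for k in (0, 1, 2, 3):
    print(k, rice_codeword(2, k), rice_codeword(10, k))
# For symbol 2, k = 0 gives '110' (3 bits) while k = 3 gives '0010' (4 bits);
# for symbol 10, k = 0 needs 11 bits while k = 3 gives '10010' (5 bits).
```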
Generally referring to
At 1410, after the parameter variable 230 has been updated at 1408, if any symbols 226 remain uncoded in the sub-block, subset, or other group, the coding system 110 can return to 1404 and move to the next symbol 226 in the group. The next symbol 226 can then be coded at 1406 using the updated value of the parameter variable 230 and the process can repeat for all remaining symbols 226 in the group. If no symbols 226 remain uncoded in the group at 1410, the coding system 110 can move to the next group at 1412, return to 1402 and reset the parameter variable 230 to zero, and repeat the process to code the symbols 226 in the next group. In some embodiments, the parameter variable cRiceParam 230 can be reset once per group with an initial “0” value. For a TU with more than one group of quantized transform coefficients 212, the cRiceParam parameter variable 230 for coeff_abs_level_minus3 symbols 226 can be reset to 0 for each group, which can favor smaller symbol value coding. In other embodiments, the cRiceParam parameter variable 230 can be reset to 0 for each TU and/or each subset, sub-block, or other group of transform coefficients 212. In still other embodiments, the step of resetting the parameter variable 230 to zero can be omitted.
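The following is a sketch of the flow at 1402 through 1412, assuming the simple Rice binarization sketched above: for each group the parameter variable is reset to zero, each symbol is converted with the current parameter value, and the parameter is then updated from the symbol just coded. The `update_param` callable is a placeholder for either the table look-up or the comparison-based update discussed below, and the example update rule shown is an arbitrary assumption, not the rule of this disclosure.

```python
def rice_codeword(symbol, k):            # same binarization as the earlier sketch
    return '1' * (symbol >> k) + '0' + (
        format(symbol & ((1 << k) - 1), '0{}b'.format(k)) if k else '')

def code_tu(groups, update_param):
    codewords = []
    for group in groups:                             # 1412: next sub-block/subset/group
        c_rice_param = 0                             # 1402: reset once per group
        for symbol in group:                         # 1404/1410: next uncoded symbol
            codewords.append(rice_codeword(symbol, c_rice_param))   # 1406: convert
            c_rice_param = update_param(c_rice_param, symbol)       # 1408: update
    return codewords

# Example with a placeholder update rule (an assumption, for illustration only):
print(code_tu([[0, 1, 4, 9], [2, 0]],
              update_param=lambda k, s: min(k + (s > (1 << k)), 4)))
# -> ['0', '10', '11110', '111101', '110', '00']
```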
Referring to
In some embodiments, the system can have one or more additional values for the “cRiceParam” parameter variable 230 beyond the values of 0, 1, 2, and 3. By way of a non-limiting example, when the quantization step-size is small, a large quantity of quantized transform coefficients 212 and symbols 226 can be generated, and many of these symbols 226 can have values that are higher than those shown in Table 2. To assist in the coding of these symbols 226, additional values for the cRiceParam parameter variable 230 such as 4, 5, 6, and/or higher values can be used to convert the symbols 226 into binary codewords 228. As stated above, the parameter variable 230 can be any integer between 0 and N. By way of a non-limiting example, in some embodiments N can be 4, such that the parameter variable 230 can be 0, 1, 2, 3, or 4.
In some embodiments, each condition 1502 can comprise two parts, a conditional symbol threshold and a conditional parameter threshold. In these embodiments, the condition 1502 can be met if the value of the symbol 226 is equal to or greater than the conditional symbol threshold and the parameter variable 230 is equal to or greater than the conditional parameter threshold. In alternate embodiments, each condition 1502 can have any number of parts or have any type of condition for either or both the symbol 226 and parameter variable 230. In some embodiments, the parameter variable 230 can be incremented by one for each condition 1502 that is met. By way of a non-limiting example, an integer of one can be mathematically added to the previous value of the parameter variable 230 for each condition that is satisfied.
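The sketch below illustrates, under assumed thresholds, the comparison-based update just described: each condition pairs a conditional symbol threshold with a conditional parameter threshold, and one is added to the parameter variable for every condition whose two parts are both satisfied, with the result capped at a maximum of 4. The specific threshold pairs are illustrative assumptions and are not the conditions of this disclosure.

```python
# Hypothetical (symbol threshold, parameter threshold) pairs for illustration only.
CONDITIONS = [(3, 0), (6, 1), (12, 2), (24, 3)]

def update_rice_param(c_rice_param, symbol, conditions=CONDITIONS, max_param=4):
    """Add one for each satisfied condition, keeping the parameter in 0..max_param."""
    increment = sum(1 for sym_thresh, param_thresh in conditions
                    if symbol >= sym_thresh and c_rice_param >= param_thresh)
    return min(c_rice_param + increment, max_param)

print(update_rice_param(0, 2))    # no condition met -> stays 0
print(update_rice_param(0, 7))    # only the (3, 0) condition met -> 1
print(update_rice_param(2, 30))   # three conditions met -> 5, capped to 4
```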
Because an updating table, such as Table 4 shown in
The table of
As discussed above, updating of the cRiceParam parameter variable 230 can be performed by looking up a new value for the parameter variable 230 from an updating table 1504 or by using a comparison equation 1506, based on the value of the most recently coded symbol 226 and the previous value of the parameter variable 230.
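For comparison with the logic-based update sketched above, the following shows the table-driven alternative: the new parameter value is read from a two-dimensional updating table indexed by the previous parameter value and the just-coded symbol (clipped to the table width). The table contents here are illustrative placeholders, not the entries of the updating table of this disclosure.

```python
# Illustrative updating table: rows correspond to the previous parameter value
# (0..4) and columns to the just-coded symbol value clipped to the range 0..5.
UPDATE_TABLE = [
    [0, 0, 0, 1, 1, 1],
    [1, 1, 1, 1, 2, 2],
    [2, 2, 2, 2, 2, 3],
    [3, 3, 3, 3, 3, 4],
    [4, 4, 4, 4, 4, 4],
]

def update_rice_param_table(c_rice_param, symbol):
    """Look up the new parameter value from the previous value and the symbol."""
    return UPDATE_TABLE[c_rice_param][min(symbol, 5)]

print(update_rice_param_table(0, 4))   # -> 1
print(update_rice_param_table(3, 9))   # -> 4
```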
The execution of the sequences of instructions required to practice the embodiments may be performed by a computer system 2400 as shown in
A computer system 2400 according to an embodiment will now be described with reference to
The computer system 2400 may include a communication interface 2414 coupled to the bus 2406. The communication interface 2414 provides two-way communication between computer systems 2400. The communication interface 2414 of a respective computer system 2400 transmits and receives electrical, electromagnetic or optical signals that include data streams representing various types of signal information, e.g., instructions, messages and data. A communication link 2415 links one computer system 2400 with another computer system 2400. For example, the communication link 2415 may be a LAN, an integrated services digital network (ISDN) card, a modem, or the Internet.
A computer system 2400 may transmit and receive messages, data, and instructions, including programs, i.e., application code, through its respective communication link 2415 and communication interface 2414. Received program code may be executed by the respective processor(s) 2407 as it is received, and/or stored in the storage device 2410, or other associated non-volatile media, for later execution.
In an embodiment, the computer system 2400 operates in conjunction with a data storage system 2431, e.g., a data storage system 2431 that contains a database 2432 that is readily accessible by the computer system 2400. The computer system 2400 communicates with the data storage system 2431 through a data interface 2433.
Computer system 2400 can include a bus 2406 or other communication mechanism for communicating the instructions, messages and data, collectively, information, and one or more processors 2407 coupled with the bus 2406 for processing information. Computer system 2400 also includes a main memory 2408, such as a random access memory (RAM) or other dynamic storage device, coupled to the bus 2406 for storing dynamic data and instructions to be executed by the processor(s) 2407. The computer system 2400 may further include a read only memory (ROM) 2409 or other static storage device coupled to the bus 2406 for storing static data and instructions for the processor(s) 2407. A storage device 2410, such as a magnetic disk or optical disk, may also be provided and coupled to the bus 2406 for storing data and instructions for the processor(s) 2407.
A computer system 2400 may be coupled via the bus 2406 to a display device 2411, such as an LCD screen. An input device 2412, e.g., alphanumeric and other keys, is coupled to the bus 2406 for communicating information and command selections to the processor(s) 2407.
According to one embodiment, an individual computer system 2400 performs specific operations by their respective processor(s) 2407 executing one or more sequences of one or more instructions contained in the main memory 2408. Such instructions may be read into the main memory 2408 from another computer-usable medium, such as the ROM 2409 or the storage device 2410. Execution of the sequences of instructions contained in the main memory 2408 causes the processor(s) 2407 to perform the processes described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and/or software.
Although the present invention has been described above with particularity, this was merely to teach one of ordinary skill in the art how to make and use the invention. Many additional modifications will fall within the scope of the invention, as that scope is defined by the following claims.
This Application claims priority under 35 U.S.C. §119(e) from earlier filed U.S. Provisional Application Ser. No. 61/589,307, filed Jan. 21, 2012, and earlier filed U.S. Provisional Application Ser. No. 61/590,805, filed Jan. 25, 2012, each of which is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
61/589,307 | Jan. 21, 2012 | US
61/590,805 | Jan. 25, 2012 | US