1. Technical Field
The present disclosure relates to the field of video compression, particularly video compression using High Efficiency Video Coding (HEVC) that employs block processing.
2. Related Art
Source pictures 120 supplied from, by way of a non-limiting example, a content provider can include a video sequence of frames including source pictures. The source pictures 120 can be uncompressed or compressed. If the source pictures 120 are uncompressed, the coding system 110 can have an encoding function. If the source pictures 120 are compressed, the coding system 110 can have a transcoding function. Coding units can be derived from the source pictures utilizing the controller 111. The frame memory 113 can have a first area that can be used for storing the incoming frames from the source pictures 120 and a second area that can be used for reading out the frames and outputting them to the encoding unit 114. The controller 111 can output an area switching control signal 123 to the frame memory 113. The area switching control signal 123 can indicate whether the first area or the second area is to be utilized.
The controller 111 can output an encoding control signal 124 to the encoding unit 114. The encoding control signal 124 can cause the encoding unit 114 to start an encoding operation, such as preparing the Coding Units based on a source picture. In response to the encoding control signal 124 from the controller 111, the encoding unit 114 can begin to read out the prepared Coding Units to a high-efficiency encoding process, such as a prediction coding process or a transform coding process, which processes the prepared Coding Units, generating video compression data based on the source pictures associated with the Coding Units.
The encoding unit 114 can package the generated video compression data in a packetized elementary stream (PES) including video packets. The encoding unit 114 can map the video packets into an encoded video signal 122 using control information and a presentation time stamp (PTS), and the encoded video signal 122 can be transmitted to the transmitter buffer 115.
The encoded video signal 122, including the generated video compression data, can be stored in the transmitter buffer 115. The information amount counter 112 can be incremented to indicate the total amount of data in the transmitter buffer 115. As data is retrieved and removed from the buffer, the counter 112 can be decremented to reflect the amount of data in the transmitter buffer 115. The occupied area information signal 126 can be transmitted to the counter 112 to indicate whether data from the encoding unit 114 has been added to or removed from the transmitter buffer 115 so the counter 112 can be incremented or decremented. The controller 111 can control the production of video packets produced by the encoding unit 114 on the basis of the occupied area information 126, which can be communicated in order to anticipate, avoid, prevent, and/or detect an overflow or underflow in the transmitter buffer 115.
The information amount counter 112 can be reset in response to a preset signal 128 generated and output by the controller 111. After the information amount counter 112 is reset, it can count data output by the encoding unit 114 and obtain the amount of video compression data and/or video packets which have been generated. The information amount counter 112 can supply the controller 111 with an information amount signal 129 representative of the obtained amount of information. The controller 111 can control the encoding unit 114 so that there is no overflow at the transmitter buffer 115.
In some embodiments, the decoding system 140 can comprise an input interface 170, a receiver buffer 150, a controller 153, a frame memory 152, a decoding unit 151 and an output interface 175. The receiver buffer 150 of the decoding system 140 can temporarily store the compressed bitstream 105, including the received video compression data and video packets based on the source pictures from the source pictures 120. The decoding system 140 can read the control information and presentation time stamp information associated with video packets in the received data and output a frame number signal 163 which can be applied to the controller 153. The controller 153 can supervise the counted number of frames at a predetermined interval. By way of a non-limiting example, the controller 153 can supervise the counted number of frames each time the decoding unit 151 completes a decoding operation.
In some embodiments, when the frame number signal 163 indicates the receiver buffer 150 is at a predetermined capacity, the controller 153 can output a decoding start signal 164 to the decoding unit 151. When the frame number signal 163 indicates the receiver buffer 150 is at less than a predetermined capacity, the controller 153 can wait for the occurrence of a situation in which the counted number of frames becomes equal to the predetermined amount. The controller 153 can output the decoding start signal 164 when the situation occurs. By way of a non-limiting example, the controller 153 can output the decoding start signal 164 when the frame number signal 163 indicates the receiver buffer 150 is at the predetermined capacity. The encoded video packets and video compression data can be decoded in a monotonic order (i.e., increasing or decreasing) based on presentation time stamps associated with the encoded video packets.
In response to the decoding start signal 164, the decoding unit 151 can decode data amounting to one picture associated with a frame, using the compressed video data associated with that picture from the video packets in the receiver buffer 150. The decoding unit 151 can write a decoded video signal 162 into the frame memory 152. The frame memory 152 can have a first area into which the decoded video signal is written, and a second area used for reading out decoded pictures 160 to the output interface 175.
In various embodiments, the coding system 110 can be incorporated or otherwise associated with a transcoder or an encoding apparatus at a headend and the decoding system 140 can be incorporated or otherwise associated with a downstream device, such as a mobile device, a set top box or a transcoder.
The coding system 110 and decoding system 140 can be utilized separately or together to encode and decode video data according to various coding formats, including High Efficiency Video Coding (HEVC). HEVC is a block based hybrid spatial and temporal predictive coding scheme. In HEVC, input images, such as video frames, can be divided into square blocks called Largest Coding Units (LCUs) 200, as shown in
As video data density continues to increase, further improved ways of coding the CUs are needed so that large input images and/or macroblocks can be encoded and decoded rapidly, efficiently, and accurately.
The present invention provides an improved system for HEVC. In embodiments of the system, a method of determining binary codewords for transform coefficients in an efficient manner is provided. Codewords for the transform coefficients within transform units (TUs) that are subdivisions of the CUs 202 are used in encoding input images and/or macroblocks.
In one embodiment, a method is provided that comprises providing a transform unit including one or more subsets of transform coefficients, each transform coefficient having a quantized value, determining a symbol for each transform coefficient having a quantized value equal to or greater than a threshold value by subtracting the threshold value from the quantized value of the transform coefficient, providing a parameter variable set to an initial value of zero, converting each symbol into a binary codeword based on the current value of the parameter variable and the value of the symbol, and updating the value of the parameter variable with a new current value after each symbol has been converted, the new current value being based at least in part on the last value of the parameter variable and the value of the last converted symbol in the current or previous subset.
In another embodiment, the invention includes a method of determining binary codewords for transform coefficients that uses a look-up table to update the parameter variable. The method comprises providing a transform unit comprising one or more subsets of transform coefficients, each transform coefficient having a quantized value, determining a symbol for each transform coefficient having a quantized value equal to or greater than a threshold value by subtracting the threshold value from the quantized value of the transform coefficient, providing a parameter variable set to an initial value of zero, converting each symbol into a binary codeword based on the current value of the parameter variable and the value of the symbol, looking up a new current value from a table based on the last value of the parameter variable and the value of the last converted symbol, and replacing the value of the parameter variable with the new current value.
In another embodiment, the invention includes a method of determining binary codewords for transform coefficients that uses one or more mathematical conditions that can be evaluated using logic rather than requiring a look-up table. The method comprises providing a transform unit comprising one or more subsets of transform coefficients, each transform coefficient having a quantized value, determining a symbol for each transform coefficient having a quantized value equal to or greater than a threshold value by subtracting the threshold value from the quantized value of the transform coefficient, providing a parameter variable set to an initial value of zero, converting each symbol into a binary codeword based on the current value of the parameter variable and the value of the symbol, determining whether the last value of the parameter variable and the value of the last converted symbol together satisfy one or more conditions, and adding one to the last value of the parameter variable for each of the one or more conditions that is satisfied.
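The three embodiments above share a common skeleton: derive a symbol for each qualifying level by subtracting the threshold, binarize the symbol with the current parameter variable, and update the parameter. The following is a minimal sketch of that skeleton, assuming a simple Golomb-Rice style binarizer and an illustrative adaptation rule as stand-ins for the Truncated Rice codes and the update tables/conditions described later; the threshold of 3 corresponds to the coeff_abs_level_minus3 symbols discussed below.

```python
# Minimal sketch of the shared flow: derive symbols from quantized levels, binarize
# each symbol with the current parameter variable, then update the parameter.
# The binarizer and update rule are simplified stand-ins, not the exact Truncated
# Rice codes or update tables of the figures.

THRESHOLD = 3  # symbols are coded for quantized levels >= 3 (coeff_abs_level_minus3)

def rice_binarize(symbol, param):
    """Golomb-Rice style codeword: unary quotient prefix plus 'param' remainder bits."""
    quotient, remainder = symbol >> param, symbol & ((1 << param) - 1)
    suffix = format(remainder, "b").zfill(param) if param > 0 else ""
    return "1" * quotient + "0" + suffix

def update_param(param, symbol):
    """Illustrative adaptation rule: grow the parameter (up to 3) after large symbols."""
    return min(param + 1, 3) if symbol > (3 << param) else param

def code_transform_unit(subsets, reset_per_subset=False):
    """Binarize every qualifying level in each subset of a transform unit."""
    param, codewords = 0, []
    for subset in subsets:
        if reset_per_subset:          # conventional behavior discussed later in the text
            param = 0
        for level in subset:
            if level >= THRESHOLD:
                symbol = level - THRESHOLD
                codewords.append(rice_binarize(symbol, param))
                param = update_param(param, symbol)
    return codewords

print(code_transform_unit([[0, 1, 3, 4], [7, 12, 2, 20]]))
```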
Further details of the present invention are explained with the help of the attached drawings in which:
In HEVC, an input image, such as a video frame, is broken up into CUs that are then identified in code. The CUs are then further broken into sub-units that are coded as will be described subsequently.
Initially, for the coding, a quadtree data representation can be used to describe the partition of an LCU 200. The quadtree representation can have nodes corresponding to the LCU 200 and CUs 202. At each node of the quadtree representation, a flag “1” can be assigned if the LCU 200 or CU 202 is split into four CUs 202. If the node is not split into CUs 202, a flag “0” can be assigned. By way of a non-limiting example, the quadtree representation shown in
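As an illustration of this flag assignment, the following sketch performs a depth-first walk over a hypothetical partition and emits a “1” for each node split into four CUs and a “0” for each leaf; the example tree is made up and is not the partition of the referenced figure.

```python
# Sketch: emit quadtree split flags ("1" = node split into four CUs, "0" = leaf CU).
# The nested-tuple tree below is a made-up example partition of an LCU.

def split_flags(node):
    """Depth-first traversal; a node is either a leaf CU (None) or a 4-tuple of children."""
    if node is None:
        return "0"                       # CU not split further
    assert len(node) == 4                # a split always produces four CUs
    return "1" + "".join(split_flags(child) for child in node)

# LCU split into four CUs; the second CU is split again into four smaller CUs.
example_lcu = (None, (None, None, None, None), None, None)
print(split_flags(example_lcu))          # -> "101000000"
```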
At each leaf of the quadtree, the final CUs 202 can be broken up into one or more blocks called prediction units (PUs) 204. PUs 204 can be square or rectangular. A CU 202 with dimensions of 2N×2N can have one of the four exemplary arrangements of PUs 204 shown in
A PU can be obtained through spatial or temporal prediction. Temporal prediction is related to inter mode pictures. Spatial prediction relates to intra mode pictures. The PUs 204 of each CU 202 can, thus, be coded in either intra mode or inter mode. Features of coding relating to intra mode and inter mode pictures are described in the paragraphs that follow.
Intra mode coding can use data from the current input image, without referring to other images, to code an I picture. In intra mode the PUs 204 can be spatially predictive coded. Each PU 204 of a CU 202 can have its own spatial prediction direction. Spatial prediction directions can be horizontal, vertical, 45-degree diagonal, 135-degree diagonal, DC, planar, or any other direction. The spatial prediction direction for the PU 204 can be coded as a syntax element. In some embodiments, brightness information (Luma) and color information (Chroma) for the PU 204 can be predicted separately. In some embodiments, the number of Luma intra prediction modes for 4×4, 8×8, 16×16, 32×32, and 64×64 blocks can be 18, 35, 35, 35, and 4 respectively. In alternate embodiments, the number of Luma intra prediction modes for blocks of any size can be 35. An additional mode can be used for the Chroma intra prediction mode. In some embodiments, the Chroma prediction mode can be called “IntraFromLuma.”
Inter mode coding can use data from the current input image and one or more reference images to code “P” pictures and/or “B” pictures. In some situations and/or embodiments, inter mode coding can result in higher compression than intra mode coding. In inter mode PUs 204 can be temporally predictive coded, such that each PU 204 of the CU 202 can have one or more motion vectors and one or more associated reference images. Temporal prediction can be performed through a motion estimation operation that searches for a best match prediction for the PU 204 over the associated reference images. The best match prediction can be described by the motion vectors and associated reference images. P pictures use data from the current input image and one or more previous reference images. B pictures use data from the current input image and both previous and subsequent reference images, and can have up to two motion vectors. The motion vectors and reference pictures can be coded in the HEVC bitstream. In some embodiments, the motion vectors can be coded as syntax elements “MV,” and the reference pictures can be coded as syntax elements “refIdx.” In some embodiments, inter mode coding can allow both spatial and temporal predictive coding.
As shown in
Referring back to
At 614 the quantized transform coefficients 212 can be dequantized into dequantized transform coefficients 216 E′. At 616 the dequantized transform coefficients 216 E′ can then be inverse transformed to reconstruct the residual PU 218, e′. At 618 the reconstructed residual PU 218, e′, can then be added to a corresponding prediction PU 206, x′, obtained through either spatial prediction at 602 or temporal prediction at 604, to obtain a reconstructed PU 220, x″. At 620 a deblocking filter can be used on reconstructed PUs 220, x″, to reduce blocking artifacts. At 620 a sample adaptive offset process can also be conditionally performed to compensate for the pixel value offset between reconstructed pixels and original pixels. Further, at 620, an adaptive loop filter can be conditionally used on the reconstructed PUs 220, x″, to reduce or minimize coding distortion between input and output images.
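The residual round trip described above can be sketched as follows, assuming a two-point Hadamard transform and a flat scalar quantizer as stand-ins for the actual HEVC transform and quantization, and omitting the in-loop filtering at 620.

```python
# Sketch of the reconstruction path: residual e -> transform E -> quantized levels
# -> dequantized E' -> inverse transform e' -> reconstruction x'' = x' + e'.
# A 2-point Hadamard transform and a flat scalar quantizer stand in for the real
# HEVC transform and quantization; in-loop filtering (620) is omitted.

QSTEP = 4  # hypothetical quantization step size

def hadamard2(a, b):
    return a + b, a - b                     # forward transform (unnormalized)

def inv_hadamard2(p, q):
    return (p + q) // 2, (p - q) // 2       # inverse transform

def reconstruct_pair(x, x_pred):
    e = [x[0] - x_pred[0], x[1] - x_pred[1]]        # residual PU, e
    E = hadamard2(*e)                               # transform coefficients, E
    levels = [round(c / QSTEP) for c in E]          # quantized transform coefficients
    E_deq = [lv * QSTEP for lv in levels]           # dequantized coefficients, E'
    e_rec = inv_hadamard2(*E_deq)                   # reconstructed residual, e'
    return [x_pred[0] + e_rec[0], x_pred[1] + e_rec[1]]   # reconstructed PU, x''

print(reconstruct_pair([100, 90], [96, 95]))        # close to, but not exactly, the input
```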
If the reconstructed image is a reference image that will be used for future temporal prediction in inter mode coding, the reconstructed images can be stored in a reference buffer 622. Intra mode coded images can be a possible point where decoding can begin without needing additional reconstructed images.
HEVC can use entropy coding schemes during step 612 such as context-based adaptive binary arithmetic coding (CABAC). The coding process for CABAC is shown in
At block 904 in
In some situations and/or embodiments, there can be one or more groups of 16 quantized transform coefficients 212 that do not contain a significant transform coefficient along the reverse scan order prior to the group containing the last significant transform coefficient 212b. In these situations and/or embodiments, the first subset can be the subset 1102 containing the last significant transform coefficient 212b, and any groups before the first subset 1102 are not considered part of a subset 1102. By way of a non-limiting example, in
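A sketch of this grouping is shown below: the full reverse scan is split into aligned groups of 16, and any leading all-zero groups that precede the group containing the last significant coefficient are discarded. The particular coefficient values are made up for illustration.

```python
# Sketch: form subsets of 16 quantized transform coefficients along the reverse
# scan order; groups before the one holding the last significant coefficient are
# not treated as subsets. Input is a list of coefficients in forward scan order.

def reverse_scan_subsets(coeffs_in_scan_order, group_size=16):
    reverse = coeffs_in_scan_order[::-1]                 # full reverse scan order
    groups = [reverse[i:i + group_size]
              for i in range(0, len(reverse), group_size)]
    while groups and all(c == 0 for c in groups[0]):     # groups before the first subset
        groups.pop(0)
    return groups

coeffs = ([7, 5, 3, 0, 2, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0] +   # scan positions 0-15
          [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] +   # positions 16-31 (last significant at 20)
          [0] * 16)                                             # positions 32-47, no significant coefficients
subsets = reverse_scan_subsets(coeffs)
print(len(subsets))   # 2: the trailing all-zero group is not treated as a subset
print(subsets[0])     # the first subset contains the last significant coefficient
```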
Referring back to
The coefficient levels 222 obtained at block 1204 that are expected to occur with a higher frequency can be coded before coefficient levels 222 that are expected to occur with lower frequencies. By way of a non-limiting example, in some embodiments coefficient levels 222 of 0, 1, or 2 can be expected to occur most frequently. Coding the coefficient levels 222 in three parts can identify the most frequently occurring coefficient levels 222 first, leaving the more complex calculations for the coefficient levels 222 that can be expected to occur less frequently. First, the coefficient level 222 of a quantized transform coefficient 212 can be checked to determine whether it is greater than one. If the coefficient level 222 is greater than one, the coefficient level 222 can be checked to determine whether it is greater than two. The remaining portion of any coefficient level 222 of three or more can then be coded separately, as described below.
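The three-part check can be sketched as follows; the flag names are descriptive stand-ins rather than exact syntax element names, and the remaining portion (level minus three) is what the Truncated Rice stage described below operates on.

```python
# Sketch: code coefficient levels in three parts. For each significant level,
# a greater-than-1 flag is produced; if set, a greater-than-2 flag follows; levels
# of three or more leave a remainder (level - 3) for the later Truncated Rice stage.
# Flag names are illustrative, not the exact syntax element names.

def split_level(level):
    gt1 = 1 if level > 1 else 0
    gt2 = (1 if level > 2 else 0) if gt1 else None      # only coded when gt1 == 1
    remainder = (level - 3) if level >= 3 else None     # coeff_abs_level_minus3 symbol
    return gt1, gt2, remainder

for level in [1, 2, 3, 7]:
    print(level, split_level(level))
# 1 -> (0, None, None)   2 -> (1, 0, None)   3 -> (1, 1, 0)   7 -> (1, 1, 4)
```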
At 1206 in
For the quantized transform coefficients 212 that occur less frequently and have coefficient levels 222 of three or more as determined in the blocks of
Referring to
Referring still to
In some situations and/or embodiments, converting the symbol 226 according to Truncated Rice code with a lower parameter variable 230 can result in a binary codeword 228 having fewer bits than converting the same symbol 226 according to Truncated Rice code with a higher parameter variable 230. By way of a non-limiting example, as shown by the table depicted in
In other situations and/or embodiments, converting the symbol 226 according to Truncated Rice code with a higher parameter variable 230 can result in a binary codeword 228 having fewer bits than converting the same symbol 226 according to Truncated Rice code with a lower parameter variable 230. By way of a non-limiting example, as shown in the table depicted in
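To make this length trade-off concrete, the following sketch computes Golomb-Rice style codeword lengths (unary quotient plus a parameter-length remainder) for a few symbols; this is a stand-in for the Truncated Rice codes of the referenced tables, whose truncation and escape handling are omitted.

```python
# Sketch: codeword lengths for different parameter values using a Golomb-Rice style
# code (unary quotient + 'param'-bit remainder). This stands in for the Truncated
# Rice codes of the referenced tables; truncation/escape handling is omitted.

def rice_length(symbol, param):
    return (symbol >> param) + 1 + param   # unary prefix bits + terminating 0 + suffix bits

for symbol in (0, 1, 4, 10):
    lengths = {param: rice_length(symbol, param) for param in range(4)}
    print(f"symbol {symbol}: bits per parameter {lengths}")
# Small symbols get shorter codewords with a low parameter; large symbols with a high one.
```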
After the parameter variable 230 has been updated at 1608, the coding system 110 can return to 1604 and move to the next symbol 226. The next symbol 226 can be in the current subset 1102 or in the next subset 1102. The next symbol 226 can then be coded at 1606 using the updated value of the parameter variable 230 and the process can repeat for all remaining symbols 226 in the TU 210. In some embodiments, when symbols 226 in a subsequent subset 1102 are coded, the parameter variable 230 can be updated based on the last value of the parameter variable 230 from the previous subset 1102, such that the parameter variable 230 is not reset to zero at the first symbol 226 of each subset 1102. In alternate embodiments, the parameter variable 230 can be set to zero at the first symbol 226 of each subset 1102.
Generally referring to
In one embodiment illustrated with the table of
Note that in conventional implementations, cRiceParam 230 is reset once per subset to an initial value of “0”. For a TU with more than one subset of 16 consecutive symbol coefficients 226, the cRiceParam calculation for coeff_abs_level_minus3 can be reset to 0 for each subset, which favors coding of smaller symbol values. Generally, inside each TU, starting from the last non-zero quantized transform coefficient, the absolute values of the non-zero quantized transform coefficients tend to increase. Therefore, resetting cRiceParam to 0 for each subset might not give optimal compression performance.
In
Tables 4 and 5 as illustrated in respective
By not resetting cRiceParam to 0 at each subset, the per-subset reset operations are saved, and once cRiceParam reaches 3, the symbols will always be binarized with the same set of Truncated Rice codes (cRiceParam equal to 3), which can reduce hardware complexity.
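The reset-versus-carry-over behavior can be traced with the short sketch below; the growth rule used here is an illustrative placeholder for the actual update table, included only so the comparison runs.

```python
# Sketch: trace cRiceParam across two subsets of symbols, with and without the
# per-subset reset. The growth rule below is an illustrative placeholder for the
# actual update table; the point is only the reset-vs-carry-over behavior.

def update(param, symbol):
    return min(param + 1, 3) if symbol > (3 << param) else param   # saturates at 3

def trace(subsets, reset_per_subset):
    param, history = 0, []
    for subset in subsets:
        if reset_per_subset:
            param = 0
        for symbol in subset:
            history.append(param)        # parameter value used to binarize this symbol
            param = update(param, symbol)
    return history

subsets = [[1, 4, 9, 20], [2, 6, 15, 30]]
print("reset each subset:", trace(subsets, True))    # [0, 0, 1, 2, 0, 0, 1, 2]
print("carried over:     ", trace(subsets, False))   # [0, 0, 1, 2, 3, 3, 3, 3]
```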
Note that Table 5 of
In some embodiments, updating the parameter variable 230 at 1608, referring back to
In some embodiments, each condition 1702 can comprise two parts, a conditional symbol threshold and a conditional parameter threshold. In these embodiments, the condition 1702 can be met if the value of the symbol 226 is equal to or greater than the conditional symbol threshold and the parameter variable 230 is equal to or greater than the conditional parameter threshold. In alternate embodiments, each condition 1702 can have any number of parts or have any type of condition for either or both the symbol 226 and parameter variable 230.
Since an updating table can require extra memory to store and fetch the data, and the memory accesses can require many processor cycles, it can be preferable to use combination logic to perform the comparison in place of an updating table, as the logic can use very few processor cycles. An example of the combination logic that determines the cRiceParam for updating in place of Table 3 is shown in
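One possible form of such combination logic is sketched below: each condition pairs a conditional symbol threshold with a conditional parameter threshold, and one is added to the parameter for every condition satisfied. The threshold pairs are hypothetical placeholders, not the entries of Table 3 or the referenced figure.

```python
# Sketch: condition-based (combination logic) update of the parameter variable.
# Each condition is (conditional symbol threshold, conditional parameter threshold);
# it is satisfied when symbol >= symbol_threshold and param >= param_threshold,
# and one is added to the parameter per satisfied condition. Threshold values are
# hypothetical placeholders, not the entries of the referenced table.

CONDITIONS = [(3, 0), (6, 1), (12, 2)]    # hypothetical (symbol, parameter) thresholds

def update_param(param, symbol):
    increments = sum(1 for sym_t, par_t in CONDITIONS
                     if symbol >= sym_t and param >= par_t)
    return min(param + increments, 3)     # parameter variable stays in 0..3

for param in range(4):
    print(param, [update_param(param, s) for s in (0, 3, 6, 12, 40)])
```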
In some embodiments, the possible outcomes of the conditions 1702 based on possible values of the parameter variable 230 and the last coded symbols 226 can be stored in memory as a low complexity update table 1704 as illustrated in the table of
In further embodiments, a low complexity level parameter updating table in CABAC can be provided that can operate more efficiently than previous tables and does not require the logic illustrated in
Further, in these low complexity level parameter updating tables, the following applies: (1) the parameter variable 230 can remain the same when the value of the last coded symbol 226 is between 0 and A−1; (2) the parameter variable 230 can be set to one or remain at the last value of the parameter variable 230, whichever is greater, when the symbol 226 is between A and B−1; (3) the parameter variable 230 can be set to two or remain at the last value of the parameter variable 230, whichever is greater, when the symbol 226 is between B and C−1; or (4) the parameter variable 230 can be set to three when the symbol 226 is greater than C−1. The low complexity update table 1704, labeled Table 6, for these conditions 1702 is depicted in
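The four cases above map directly to simple logic, sketched below with A, B, and C left as parameters; the example values used in the printout are hypothetical, since the particular values belong to Table 6 and the example tables referenced later.

```python
# Sketch of the low complexity update rule described above. A, B, and C are left
# as parameters; the particular values belong to Table 6 and the example tables
# referenced in the figures.

def low_complexity_update(param, symbol, A, B, C):
    if symbol <= A - 1:
        return param                 # (1) parameter variable remains the same
    if symbol <= B - 1:
        return max(1, param)         # (2) set to one, or keep the larger last value
    if symbol <= C - 1:
        return max(2, param)         # (3) set to two, or keep the larger last value
    return 3                         # (4) symbol greater than C-1

# Example with hypothetical thresholds A=3, B=6, C=12:
for param in range(4):
    print(param, [low_complexity_update(param, s, 3, 6, 12) for s in (0, 3, 6, 12)])
```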
A selection of non-limiting examples of update tables 1704 and their associated combination logic representations 1706, with particular values of A, B, and C, is depicted in
The execution of the sequences of instructions required to practice the embodiments may be performed by a computer system 3300 as shown in
A computer system 3300 according to an embodiment will now be described with reference to
The computer system 3300 may include a communication interface 3314 coupled to the bus 3306. The communication interface 3314 provides two-way communication between computer systems 3300. The communication interface 3314 of a respective computer system 3300 transmits and receives electrical, electromagnetic or optical signals that include data streams representing various types of signal information, e.g., instructions, messages and data. A communication link 3315 links one computer system 3300 with another computer system 3300. For example, the communication link 3315 may be a LAN, an integrated services digital network (ISDN) card, a modem, or the Internet.
A computer system 3300 may transmit and receive messages, data, and instructions, including programs, i.e., application code, through its respective communication link 3315 and communication interface 3314. Received program code may be executed by the respective processor(s) 3307 as it is received, and/or stored in the storage device 3310, or other associated non-volatile media, for later execution.
In an embodiment, the computer system 3300 operates in conjunction with a data storage system 3331, e.g., a data storage system 3331 that contains a database 3332 that is readily accessible by the computer system 3300. The computer system 3300 communicates with the data storage system 3331 through a data interface 3333.
Computer system 3300 can include a bus 3306 or other communication mechanism for communicating the instructions, messages and data, collectively, information, and one or more processors 3307 coupled with the bus 3306 for processing information. Computer system 3300 also includes a main memory 3308, such as a random access memory (RAM) or other dynamic storage device, coupled to the bus 3306 for storing dynamic data and instructions to be executed by the processor(s) 3307. The computer system 3300 may further include a read only memory (ROM) 3309 or other static storage device coupled to the bus 3306 for storing static data and instructions for the processor(s) 3307. A storage device 3310, such as a magnetic disk or optical disk, may also be provided and coupled to the bus 3306 for storing data and instructions for the processor(s) 3307.
A computer system 3300 may be coupled via the bus 3306 to a display device 3311, such as an LCD screen. An input device 3312, e.g., alphanumeric and other keys, is coupled to the bus 3306 for communicating information and command selections to the processor(s) 3307.
According to one embodiment, an individual computer system 3300 performs specific operations by their respective processor(s) 3307 executing one or more sequences of one or more instructions contained in the main memory 3308. Such instructions may be read into the main memory 3308 from another computer-usable medium, such as the ROM 3309 or the storage device 3310. Execution of the sequences of instructions contained in the main memory 3308 causes the processor(s) 3307 to perform the processes described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and/or software.
Although the present invention has been described above with particularity, this was merely to teach one of ordinary skill in the art how to make and use the invention. Many additional modifications will fall within the scope of the invention, as that scope is defined by the following claims.
This Application claims priority under 35 U.S.C. §119(e) from: earlier filed U.S. Provisional Application Ser. No. 61/556,826, filed Nov. 8, 2011; earlier filed U.S. Provisional Application Ser. No. 61/563,774, filed Nov. 26, 2011; and earlier filed U.S. Provisional Application Ser. No. 61/564,248, filed Nov. 28, 2011, all of which are incorporated herein by reference in their entirety.