The present invention relates to video coding systems. In particular, the present invention relates to a method and apparatus for reducing the Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) line buffers associated with video coding systems incorporating virtual boundaries for SAO and ALF.
Motion estimation is an effective Inter-frame coding technique to exploit temporal redundancy in video sequences. Motion-compensated Inter-frame coding has been widely used in various international video coding standards. The motion estimation adopted in various coding standards is often a block-based technique, where motion information such as coding mode and motion vector is determined for each macroblock or similar block configuration. In addition, Intra-coding is also adaptively applied, where the picture is processed without reference to any other picture. The Inter-predicted or Intra-predicted residues are usually further processed by transformation, quantization, and entropy coding to generate a compressed video bitstream. During the encoding process, coding artifacts are introduced, particularly by the quantization process. In order to alleviate the coding artifacts, additional processing has been applied to the reconstructed video to enhance picture quality in newer coding systems. The additional processing is often configured as an in-loop operation so that the encoder and decoder may derive the same reference pictures to achieve improved system performance.
As shown in
The coding process in HEVC is applied according to the Largest Coding Unit (LCU), also called Coding Tree Unit (CTU). The LCU is adaptively partitioned into coding units using a quadtree. In HEVC, the DF is applied to 8×8 block boundaries. For each 8×8 block, horizontal filtering across vertical block boundaries is first applied, and then vertical filtering across horizontal block boundaries is applied.
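To illustrate only the two-pass ordering, the following is a minimal sketch; the simple averaging applied at each edge is a placeholder rather than the HEVC deblocking filter, and the frame layout and function names are assumptions made for this example.

```cpp
#include <cstdint>
#include <vector>

// Placeholder edge smoothing: average the two samples straddling the edge.
// This is NOT the HEVC deblocking filter; it only marks where filtering occurs.
static void smoothPair(uint8_t& p, uint8_t& q) {
    uint8_t avg = static_cast<uint8_t>((p + q + 1) >> 1);
    p = avg;
    q = avg;
}

// Two-pass deblocking order on an 8x8 grid: horizontal filtering across
// vertical block boundaries first, then vertical filtering across
// horizontal block boundaries.
void deblockFrame(std::vector<uint8_t>& frame, int width, int height) {
    for (int y = 0; y < height; ++y)             // pass 1: vertical edges
        for (int x = 8; x < width; x += 8)
            smoothPair(frame[y * width + x - 1], frame[y * width + x]);
    for (int y = 8; y < height; y += 8)          // pass 2: horizontal edges
        for (int x = 0; x < width; ++x)
            smoothPair(frame[(y - 1) * width + x], frame[y * width + x]);
}
```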
Sample adaptive offset (SAO) types according to HEVC and AVS2 are shown in
The conditions for the SAO classification as shown in Table 1 can be implemented by comparing the center pixel with each of its two neighboring pixels individually. Each classification condition checks whether the center pixel is greater than, smaller than or equal to one of the neighboring pixels. Each comparison result can therefore be represented by 2-bit data.
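As an illustration of such a comparison-based classification, the following is a minimal sketch; the category numbering follows the common five-category edge-offset scheme and the function names are hypothetical, so it should not be read as a normative HEVC or AVS2 implementation.

```cpp
#include <cstdint>

// Sign comparison of two pixels: +1 (greater), 0 (equal), -1 (smaller).
// Each result fits in 2 bits when stored as sign + 1 (i.e., 0, 1 or 2).
static inline int sign3(int a, int b) {
    return (a > b) - (a < b);
}

// Edge-offset category for a center pixel c and its two neighbors n0 and n1,
// following the usual five-category scheme (0 means no offset is applied).
static inline int eoCategory(int c, int n0, int n1) {
    int s = sign3(c, n0) + sign3(c, n1);
    switch (s) {
        case -2: return 1;  // local minimum
        case -1: return 2;  // edge, center below one neighbor, equal to the other
        case  1: return 3;  // edge, center above one neighbor, equal to the other
        case  2: return 4;  // local maximum
        default: return 0;  // flat or monotonic: no offset
    }
}
```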
The SAO parameters, such as pixel offset values and SAO types, can be determined adaptively for each CTU. For HEVC, the SAO parameter boundary is the same as the CTU boundary. Within the parameter boundary, the SAO process for all pixels shares the same SAO type and offset values. Since SAO is applied to DF-processed pixels, the SAO process for a current CTU has to wait for the DF process of the current CTU to complete. However, the pixels around the CTU boundary cannot be processed by DF until the reconstructed video data on the other side of the CTU boundary are ready. Due to such data dependency, AVS2 adopted shifted SAO parameter boundaries.
Adaptive Loop Filtering (ALF) 132 is a video coding tool to enhance picture quality. ALF has been evaluated during the development stage of HEVC. However, ALF is not adopted in the current HEVC standard. Nevertheless, it is being incorporated into AVS2. In particular, a 17-tap symmetric ALF filter is being used for AVS2 as shown in
As mentioned above, the DF, SAO and ALF processes involve neighboring data. In HEVC and AVS2, the CTU has been used as a unit for the coding process. When the DF, SAO and ALF processes are applied to data across a CTU boundary, the data dependency has to be managed carefully to minimize the line buffer. Since the DF, SAO and ALF processes are applied to each CTU sequentially, the corresponding hardware implementation may be arranged in a pipeline fashion.
The processing statuses for the corresponding DF 720, SAO 730 and ALF 740 processes are indicated by respective reference numbers 725, 735 and 745. Diagram 725 illustrates the DF processing status at the end of the DF processing stage for CTU X. Luma pixels above line 722 and chroma pixels above line 724 are DF processed. Luma pixels below line 722 and chroma pixels below line 724 cannot be processed during the DF processing stage for CTU X, since the involved pixels on the other side of the block boundary (i.e., below CTU boundary 705) are not available yet. Diagram 735 illustrates the SAO processing status at the end of the SAO processing stage for CTU X. Luma pixels above line 732 and chroma pixels above line 734 are SAO processed, where line 732 and line 734 are aligned. Diagram 745 illustrates the ALF processing status at the end of the ALF processing stage for CTU X. Note again that the luma pixels below line 732 and the chroma pixels below line 734 cannot be SAO processed for CTU X yet, since they involve SAO parameters signaled in CTU Y, which is not yet processed by VLD. Luma pixels above line 742 (luma ALF virtual boundary) are ALF processed according to the AVS2 draft standard. Chroma pixels above line 744 (chroma ALF virtual boundary) would be ALF processed. Nevertheless, the ALF process for the chroma component cannot be performed for chroma lines A through D during the CTU X processing stage. For example, the ALF process for pixel 746 will use pixel 748. Since chroma pixel 748 is below the chroma SAO parameter boundary 734, chroma pixel 748 is not SAO processed yet during the CTU X processing stage. Therefore, even though it is above the chroma ALF virtual boundary, chroma pixel 746 cannot be ALF processed. Accordingly, 6 lines of chroma SAO-processed pixels above pixel 748 (i.e., above line D) have to be stored in a line buffer for the later ALF process on lines A through D during the CTU Y processing stage; the three lines above line A have already been ALF processed in the CTU X processing stage, but are also required by the ALF process on line A.
For a hardware-based implementation, these 6 lines of chroma samples spanning the picture width have to be stored in a line buffer, which is usually implemented using embedded memory, and such an implementation results in high chip cost. Therefore, it is desirable to develop a method and apparatus that can reduce the line buffer required by in-loop filtering processes, such as DF, SAO, ALF, any other in-loop filtering process, or a combination thereof. Furthermore, for different SAO parameter boundaries, the system will switch between different SAO parameters, which increases system complexity and power consumption. Therefore, it is desirable to develop in-loop filtering processes, such as DF, SAO, ALF, any other in-loop filtering process, or a combination thereof, with proper system parameter design to reduce the line buffer requirement, system complexity, system power consumption, or any combination thereof. In yet another aspect, it is desirable to develop a method and apparatus for performance- and cost-efficient loop filter processing, including DF, SAO and ALF, for any video coding system incorporating such loop filter processing.
A method and apparatus for loop filter processing of reconstructed video are disclosed. In order to reduce both the computational complexity of SAO parameter switching and the line buffer requirement, the present invention manipulates the SAO parameter boundary by shifting it in the horizontal and vertical directions according to a respective goal. According to the present invention, a deblocking filter (DF) process is first applied to reconstructed pixels, where the DF process modifies up to m pixels at each side of a horizontal edge corresponding to an image unit boundary between two image units. The sample adaptive offset (SAO) process is then applied to DF-processed pixels of the current image unit according to one or more SAO parameters. All or a part of the pixels within the SAO parameter boundary of the current image unit share the same SAO parameters. The vertical SAO parameter boundary of the current image unit is shifted left by xs lines from a vertical boundary of the current image unit, and the horizontal SAO parameter boundary of the current image unit is shifted up by ys lines from a horizontal boundary of the current image unit. The spatial-loop-filter process is then applied to SAO-processed pixels above a spatial-loop-filter restricted boundary of the current image unit according to one or more spatial-loop-filter parameters. The spatial-loop-filter restricted boundary of the current image unit is shifted up by yv lines from the bottom boundary of the current image unit. In order to reduce the line buffer requirement, m, xs and yv are positive integers and ys is greater than or equal to 0, where xs is always greater than m, ys is always smaller than yv, and yv is determined according to m.
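For illustration, a minimal sketch of these boundary relationships is given below; the struct, function name and the choice of reference boundaries (left, top and bottom of the image unit) are assumptions made only for this example, with rows numbered from top to bottom.

```cpp
// Minimal sketch of the shifted boundaries described above.
struct LoopFilterBoundaries {
    int saoParamLeftCol;   // vertical SAO parameter boundary, shifted left by xs
    int saoParamTopRow;    // horizontal SAO parameter boundary, shifted up by ys
    int slfRestrictedRow;  // spatial-loop-filter restricted boundary, shifted up by yv
};

// xLeftBoundary: column of a vertical image unit boundary.
// yTopBoundary / yBottomBoundary: rows of the top and bottom image unit boundaries.
// Constraints stated above: xs > m, 0 <= ys < yv, and yv determined from m
// (e.g., yv = m + 1 in the embodiments described later).
LoopFilterBoundaries deriveBoundaries(int xLeftBoundary, int yTopBoundary,
                                      int yBottomBoundary,
                                      int xs, int ys, int yv) {
    LoopFilterBoundaries b;
    b.saoParamLeftCol  = xLeftBoundary - xs;
    b.saoParamTopRow   = yTopBoundary - ys;
    b.slfRestrictedRow = yBottomBoundary - yv;
    return b;
}
```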
Each image unit may correspond to a coding tree unit (CTU). The spatial-loop-filter process may correspond to an adaptive loop filter (ALF) process.
If the reconstructed video data comprises a luma component and a chroma component, the DF process, the SAO process, and the spatial-loop-filter process are applied to the luma component and chroma component separately with individual m denoted as M and N, individual xs denoted as xS and xSC, individual ys denoted as yS and ySC, and individual yv denoted as yV and yVC respectively. In one embodiment, yS and ySC can be equal to 0. yV can be greater than M and yVC can be greater than N, such as yV=(M+1) and yVC=(N+1). In one example, M is equal to 3 and N is equal to 2.
In another embodiment, yS is equal to ySC, yV is equal to yVC, and yVC is greater than MAX(M,N). For example, yV and yVC are equal to MAX(M,N)+1, and yS and ySC can be an integer from 0 to MAX(M,N). In one example, M is equal to 3 and N is equal to 2.
In yet another embodiment, the sign data generated from comparing a current pixel in a current line, processed during a current processing stage for the current image unit, with a neighboring pixel in an adjacent line to be processed during a subsequent processing stage, are stored. Each sign data corresponds to “greater than”, “less than” or “equal to”, and can be stored in 2 bits.
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
For the convenience of discussing the data dependency between different loop processing stages, loop filter related boundary parameters are introduced in this disclosure. The processing status for the DF, SAO and ALF processes in
Diagram 835 illustrates the SAO processing status at the end of the SAO processing stage for CTU X. Luma pixels above line 832 (i.e., the luma SAO parameter boundary) and chroma pixels above line 834 (i.e., the chroma SAO parameter boundary) are SAO processed, where line 832 and line 834 are aligned due to the SAO parameter boundary shift proposed in the AVS2 standard. In order to avoid SAO parameter switching in the processing stage of each CTU, the SAO parameter boundary is shifted by (xS, yS) for the luma component and by (xSC, ySC) for the chroma component. In other words, for a CTU with top-left point (xC, yC), the top SAO parameter boundary is shifted to (yC-yS) for the luma component and to (yC-ySC) for the chroma component as indicated in
Diagram 845 illustrates the ALF processing status at the end of the ALF processing stage for CTU X. Luma pixels above line 842 (i.e., the luma ALF virtual boundary) are ALF processed. Chroma pixels above line 844 (i.e., the chroma ALF virtual boundary) would be ALF processed. Nevertheless, the ALF process for the chroma component cannot be performed for chroma line D during the CTU X processing stage. The ALF virtual boundary is (yC-yV) for the luma component and (yC-yVC) for the chroma component, where yV and yVC correspond to the boundary vertical shifts for the luma and chroma components respectively. For the AVS2 standard, the numbers of boundary pixels to be updated (i.e., M and N) are 3 and 2 for the luma and chroma components respectively. The SAO parameter boundary vertical offsets correspond to 4 for both the luma and chroma components. On the other hand, the vertical shifts for the ALF virtual boundaries (i.e., yV and yVC) are 4 and 3 for the luma and chroma components respectively.
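Using the values just quoted (yS = ySC = 4, yV = 4, yVC = 3), the following small example computes where these boundaries fall relative to yC; the value yC = 128 is hypothetical and chosen only for illustration.

```cpp
#include <cstdio>

int main() {
    // Values quoted above: SAO parameter boundary vertical offsets of 4 for both
    // components, and ALF virtual boundary shifts of 4 (luma) / 3 (chroma).
    const int yC  = 128;            // hypothetical row of the CTU top-left point
    const int yS  = 4, ySC = 4;     // SAO parameter boundary vertical shifts
    const int yV  = 4, yVC = 3;     // ALF virtual boundary vertical shifts

    printf("luma   SAO parameter boundary : row %d\n", yC - yS);    // 124
    printf("chroma SAO parameter boundary : row %d\n", yC - ySC);   // 124
    printf("luma   ALF virtual boundary   : row %d\n", yC - yV);    // 124
    printf("chroma ALF virtual boundary   : row %d\n", yC - yVC);   // 125
    return 0;
}
```

With these values the chroma ALF virtual boundary (yC-3) lies one line below the chroma SAO parameter boundary (yC-4), so ALF near the chroma virtual boundary still needs chroma samples that are not yet SAO processed, which is the chroma line D issue described above.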
In order to simultaneously reduce the line buffer size requirement and the computational complexity of SAO parameter switching during the processing stage of a CTU, a method is disclosed that manipulates the SAO parameter boundary by shifting it in the horizontal and vertical directions according to a respective goal, so that the SAO parameter boundary and the SAO processing boundary may differ. As mentioned above, in the conventional approach, the SAO parameter boundary and the SAO processing boundary are always the same. According to the present invention, the vertical SAO parameter boundary remains equal to the SAO processing boundary, but the horizontal SAO parameter boundary can be different from the SAO processing boundary. In particular, the SAO processing boundary is selected according to the locations of the DF-processed pixel data.
In the above discussion, an image is partitioned into CTUs and each CTU is partitioned into one or more coding units (CUs). The DF, SAO and ALF processes are applied to block boundaries to reduce artifacts at or near block boundaries. For a coding system in which the CTUs are processed in a horizontal scan order, the DF, SAO and ALF processes at CTU boundaries, which are also block boundaries, will require line buffers to store information across CTU row boundaries. However, the image may also be partitioned into other image units, such as macroblocks or tiles, for the coding process. The line buffer issue associated with CTU boundaries also exists at the boundaries of these other image units.
While the ALF filter is used as an example in the above illustration, the present invention is applicable to any spatial loop filter. For example, a two-dimensional FIR (finite impulse response) filter with a set of spatial loop filter parameters can be used to replace the ALF. In order to reduce the line buffer requirement associated with the spatial loop filter processing, a restricted spatial loop filter boundary can be used to restrict the spatial loop filter processing to use only SAO-processed data within the restricted spatial loop filter boundary. For example, the restricted spatial loop filter boundary can be located y lines above the CTU boundary. The spatial loop filter is then applied to the SAO-processed pixels above the restricted spatial loop filter boundary and uses only the SAO-processed pixels above the restricted spatial loop filter boundary as input.
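As an illustration of such a restricted spatial loop filter, the following is a minimal sketch assuming a fixed 3x3 smoothing kernel and simple clamping at the restricted boundary; the kernel, the clamping strategy and the function name are assumptions for this example and are not the AVS2 ALF.

```cpp
#include <algorithm>
#include <vector>

// Filters only rows above restrictedBoundaryRow and never reads a sample at or
// below that row (nor outside the picture), so no extra SAO line buffer is
// needed for the rows below the restricted boundary. Rows at or below the
// boundary are passed through unchanged.
std::vector<int> restrictedSpatialFilter(const std::vector<int>& sao,
                                         int width, int height,
                                         int restrictedBoundaryRow) {
    static const int k[3][3] = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };  // sum = 16
    std::vector<int> out(sao);
    const int lastRow = std::min(height, restrictedBoundaryRow);
    for (int y = 0; y < lastRow; ++y) {
        for (int x = 0; x < width; ++x) {
            int acc = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    // Clamp taps so the filter stays above the restricted boundary
                    // and inside the picture.
                    int yy = std::clamp(y + dy, 0, lastRow - 1);
                    int xx = std::clamp(x + dx, 0, width - 1);
                    acc += k[dy + 1][dx + 1] * sao[yy * width + xx];
                }
            }
            out[y * width + x] = (acc + 8) >> 4;  // rounded division by 16
        }
    }
    return out;
}
```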
The present invention can be applied to the luma component and the chroma component if the underlying video data corresponds to color video data. In the first embodiment, the vertical offsets yS and ySC of the horizontal SAO parameter boundary, and the vertical offsets yV and yVC of the ALF virtual boundary are determined according to:
0 ≤ yS < yV = M+1, and  (1)
0 ≤ ySC < yVC = N+1.  (2)
The major impact on the line buffer requirement is due to the storage needed for boundary loop filter processing from one CTU row to the next CTU row. Since a picture may be very wide, the corresponding line buffer size may be very large. Therefore, a goal of the present invention is to reduce the line buffer requirement for loop filter processing across the CTU boundary between two CTU rows. For the offsets of the vertical boundaries, the impact on the line buffer requirement is very small, if any. The SAO parameter boundary horizontal offsets xS and xSC for the luma and chroma components are kept the same as in the conventional case, i.e., xS=M+1 and xSC=N+1. In case a system processes the picture in a vertical scan order, the CTU columns are treated as if they were CTU rows.
The offset of the SAO parameter boundary in the horizontal direction is always the same as that of the SAO processing boundary. For example, the horizontal offsets xS and xSC of the SAO parameter boundary for the luma and chroma components are the same as the offsets of the SAO processing boundaries in the horizontal direction.
In the second embodiment, the SAO parameter boundary vertical offsets yS and ySC, and the ALF virtual boundary vertical offsets yV and yVC are determined according to:
0 ≤ yS = ySC < yV = yVC = MAX(M,N)+1.  (3)
In other words, the SAO parameter boundaries for the luma and chroma components are the same in order to favor a regular memory access behavior. The ALF virtual boundaries for the luma and chroma components are also the same. Furthermore, the ALF virtual boundaries are at least 1 line above the respective SAO parameter boundaries.
The SAO parameter boundary in the horizontal direction is always the same as the SAO processing boundary in the horizontal direction. For example, the SAO parameter boundary horizontal offsets xS and xSC for the luma and chroma components may be set to xS=xSC=MAX(M,N)+1.
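A small example follows, computing the offsets for both embodiments from the M = 3 and N = 2 values quoted above; choosing yS = ySC = 0 is only one of the allowed values in the stated ranges, and the program is illustrative rather than normative.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const int M = 3, N = 2;  // deblocked boundary pixels for luma / chroma

    // First embodiment: per-component offsets, 0 <= yS < yV = M+1 and
    // 0 <= ySC < yVC = N+1; horizontal offsets as in the conventional case.
    int yV1 = M + 1, yVC1 = N + 1;   // 4 and 3
    int yS1 = 0,     ySC1 = 0;       // one allowed choice
    int xS1 = M + 1, xSC1 = N + 1;

    // Second embodiment: unified offsets, 0 <= yS = ySC < yV = yVC = MAX(M,N)+1.
    int yV2 = std::max(M, N) + 1;    // 4 for both components
    int yS2 = 0;                     // one allowed choice
    int xS2 = std::max(M, N) + 1;    // xS = xSC = MAX(M,N)+1

    printf("Embodiment 1: yV=%d yVC=%d yS=%d ySC=%d xS=%d xSC=%d\n",
           yV1, yVC1, yS1, ySC1, xS1, xSC1);
    printf("Embodiment 2: yV=yVC=%d yS=ySC=%d xS=xSC=%d\n", yV2, yS2, xS2);
    return 0;
}
```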
During the SAO processing, the pixel data within the current CTU processing boundaries may be used for later SAO processing. For example, the line D in
The result of the comparison between a pixel in line C and a neighboring pixel in line D can be represented by 2-bit data for each pixel to indicate one of the three comparison results. The 2-bit sign data is much smaller than a whole stored pixel, which typically takes 8 bits or more. Accordingly, the cost of the line buffer can be substantially reduced.
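The following is a minimal sketch of such a sign line buffer, assuming 8-bit samples and hypothetical function names; four 2-bit codes are packed per byte, giving roughly a four-fold reduction compared with storing the line C samples themselves.

```cpp
#include <cstdint>
#include <vector>

// Pack the line C vs. line D comparison results: each result (-1, 0, +1)
// is stored as a 2-bit code, four pixels per byte.
std::vector<uint8_t> packSignLine(const uint8_t* lineC, const uint8_t* lineD,
                                  int width) {
    std::vector<uint8_t> buf((width + 3) / 4, 0);
    for (int x = 0; x < width; ++x) {
        int s = (lineC[x] > lineD[x]) - (lineC[x] < lineD[x]);  // -1, 0 or +1
        uint8_t code = static_cast<uint8_t>(s + 1);             // 0, 1 or 2
        buf[x >> 2] |= code << ((x & 3) * 2);
    }
    return buf;
}

// Recover the comparison result for pixel x: extract the code and map it back.
inline int unpackSign(const std::vector<uint8_t>& buf, int x) {
    return ((buf[x >> 2] >> ((x & 3) * 2)) & 0x3) - 1;
}
```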
The loop filter processing boundary design as disclosed above can be used to overcome the large line buffer requirement caused by data dependency in coding systems, such as the AVS2 system, utilizing loop filters including DF, SAO and ALF. The present invention is also applicable to any advanced video coding system incorporating DF, SAO and ALF.
Table 2 compares the line buffer requirements of the conventional AVS2 standard and the embodiments of the present invention. As mentioned before, all of the above implementations require 3 lines for each of the luma and chroma components to store data for the deblocking filter. For SAO processing, all systems need to store line D and line C data for each of the luma and chroma components. However, instead of storing the pixel data for line C, the comparison results between line C and line D can be stored to reduce the storage requirement. As mentioned before, only 2 bits are required to store each comparison result. According to the conventional AVS2 approach, 6 lines of SAO results would be stored for ALF processing of the chroma component. Systems incorporating any embodiment of the present invention can remove the need for these 6 buffer lines for ALF processing of the chroma component. The total numbers of lines required for DF, SAO and ALF are 16, 7.5 and 8.5 for the conventional AVS2, the first embodiment and the second embodiment respectively, where both embodiments achieve additional memory saving by storing the signs of the comparison results involved in the SAO processing. In other words, the first and second embodiments can reduce the line buffer requirement by 8.5 and 7.5 lines respectively.
The flowchart shown above is intended to illustrate examples of loop filter processing according to the present invention. A person skilled in the art may modify each step, rearrange the steps, split a step, or combine steps to practice the present invention without departing from the spirit of the present invention. In this disclosure, specific syntax and semantics have been used to illustrate examples to implement embodiments of the present invention. A skilled person may practice the present invention by substituting the syntax and semantics with equivalent syntax and semantics without departing from the spirit of the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without such specific details.
Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be one or more electronic circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
The present invention claims priority to U.S. Provisional Patent Application, Ser. No. 62/115,755, filed Feb. 13, 2015. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.