Method and apparatus for sub-picture based raster scanning coding order

Information

  • Patent Grant
  • Patent Number
    11,425,383
  • Date Filed
    Monday, February 24, 2020
  • Date Issued
    Tuesday, August 23, 2022
Abstract
A method and apparatus for sub-picture based raster scanning coding order. The method includes dividing an image into even sub-pictures, and encoding the sub-pictures in parallel on multiple cores in raster scanning order within each sub-picture, wherein from core to core, coding of the sub-pictures is independent around sub-picture boundaries, and wherein within a core, coding of a sub-picture is at least one of dependent or independent around sub-picture boundaries.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

Embodiments of the present invention generally relate to a method and apparatus for sub-picture based raster scanning coding order.


Description of the Related Art

The High Efficiency Video Coding (HEVC) standard has a design goal of being more efficient than the MPEG AVC/H.264 High profile. One of the application areas of this standard is ultra high definition (UHD) video coding, in which the picture or image size can be as large as 8K×4K (7680×4320). The large picture size poses a great challenge for chip designers devising cost-effective video solutions, because UHD requires an even bigger search range in motion estimation to deliver the intended coding efficiency of such a standard. On-chip memory for buffering the reference blocks used in motion estimation and compensation tends to be expensive, which is a major limiting factor for cost-effective UHD video solutions. Moreover, because UHD HEVC coding may well be beyond the capability of a single video core, multi-core based platforms may become popular in the future for HEVC UHD solutions.


Therefore, there is a need for an improved method and/or apparatus for sub-picture based raster scanning coding order.


SUMMARY OF THE INVENTION

Embodiments of the present invention relate to a method and apparatus for sub-picture based raster scanning coding order. The method includes dividing an image into even sub-pictures, and encoding the sub-pictures in parallel on multiple cores in raster scanning order within each sub-picture, wherein from core to core, coding of the sub-pictures is independent around sub-picture boundaries, and wherein within a core, coding of a sub-picture is at least one of dependent or independent around sub-picture boundaries.





BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.



FIG. 1 is an embodiment of a motion estimation with growing search window;



FIG. 2(a) non sub-picture partitioning, FIG. 2(b) a picture is partitioned into two sub-pictures, FIG. 2(c) a picture is partitioned into three sub-pictures, and FIG. 2(d) a picture is partitioned into four sub-pictures;



FIG. 3 is an embodiment of sub-picture partitioning for multi-core parallel processing purpose;



FIG. 4 is an embodiment of an alternative sub-picture partitioning for multi-core parallel processing purpose;



FIG. 5 is an embodiment of high quality coding with sub-pictures;



FIG. 6 is an embodiment of multi-core parallel processing with sub-pictures; and



FIG. 7 is an embodiment of multi-core parallel processing and high quality video with sub-picture.





DETAILED DESCRIPTION

In video coding, a growing search window is commonly used to minimize the memory bandwidth, i.e., the data traffic between on-chip and off-chip memory, required for loading reference data for motion estimation and motion compensation. FIG. 1 is an embodiment of motion estimation with a growing search window, and illustrates how the growing window works. Both the reference picture and the current picture can be divided into a set of non-overlapping macroblock (MB) rows. In this example, the picture has 8 MB rows; each MB row is made up of the same number of MBs, determined by the horizontal picture size and the macroblock size. In the growing window scheme, the horizontal reference block size is equal to the horizontal picture size, while the vertical size of the reference block depends on the on-chip memory available for motion estimation and compensation. As shown in FIG. 1, the reference block of the growing window spans 3 MB rows. For the growing window used for the MBs of the 2nd MB row of the current picture, rows 1 and 2 are re-used from the previous growing window, i.e., the growing window for MB row 1 of the current picture, and in one embodiment only row 3 of the reference data is loaded from off-chip memory. Likewise, for the growing window used for the MBs of the 3rd MB row of the current picture, only row 4 of reference data is loaded from off-chip memory, while rows 2 and 3 are re-used from the previous growing window, and so on. Therefore, with the growing window search strategy, only one MB row of reference data needs to be loaded as the coding of the current picture moves from one MB row to the next.
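The row-reuse pattern described above can be sketched in a few lines of Python (an illustrative helper, not part of the patent; the function name and the 3-MB-row centered window are assumptions for this example):

```python
def rows_to_load(num_mb_rows, window_rows=3):
    """For each current MB row, return the reference MB rows newly loaded
    from off-chip memory; all other rows of the growing window are re-used
    from the previous window."""
    half = window_rows // 2
    loaded = set()
    schedule = []
    for cur in range(num_mb_rows):
        # The window for MB row `cur` is centered on it, clipped at picture edges.
        window = range(max(0, cur - half), min(cur + half + 1, num_mb_rows))
        new = [r for r in window if r not in loaded]
        loaded.update(new)
        schedule.append(new)
    return schedule

# For an 8-MB-row picture, every window after the first loads at most one new row.
print(rows_to_load(8))  # -> [[0, 1], [2], [3], [4], [5], [6], [7], []]
```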


For a search range of srX*srY, the on-chip memory size required by the growing window can be computed using the equation below

MemSize=(2*srY+N)*picWidth  (1)

where N×N is the MB size and picWidth is the horizontal size of the picture.


For 8K×4K (7680×4320) video, if the search range is 256×256 and the MB size is 64×64, the on-chip memory size for the growing window will be 4,423,680 bytes (over 4.4 Mbytes). This is very expensive for chip design. Therefore, it is desirable for the standard to enable a large enough search range for UHD coding while still keeping the on-chip memory size requirements in check.
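As a sanity check, equation (1) can be evaluated directly (a minimal sketch; the function name and the one-byte-per-reference-pixel assumption are ours, not the patent's):

```python
def growing_window_mem(srY, N, picWidth):
    """Equation (1): MemSize = (2*srY + N) * picWidth, in bytes
    (assuming one byte per reference pixel)."""
    return (2 * srY + N) * picWidth

# 8Kx4K video, 256x256 search range, 64x64 MB size:
print(growing_window_mem(srY=256, N=64, picWidth=7680))  # -> 4423680
```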


In addition, multi-core solutions may become vital for UHD coding, because real-time encoding/decoding of UHD video, such as 8K×4K at 120 frames/sec, might be well beyond the capability of a single-core processor. Therefore, it is desirable that the HEVC standard design in features that facilitate multi-core parallel processing.


In order to reduce the on-chip memory requirements without impacting the coding efficiency, the traditional picture-based raster-scanning order coding, as shown in FIG. 2(a), is extended to sub-picture based raster scanning order coding, shown in FIGS. 2(b), (c) and (d). The division of a picture into sub-pictures can be signaled in the high-level syntax, such as in the sequence parameter set. While macroblocks inside a sub-picture follow the raster-scanning order, the sub-pictures of a picture also follow the raster scanning coding order, i.e., from left to right and from top to bottom. The coding of the sub-pictures does not need to be independent; rather, it depends on the slice partitioning within the picture. FIGS. 2(b)-2(d) depict various embodiments of sub-picture partitioning; however, other patterns of sub-picture partitioning are also possible.


The sub-pictures on the vertical picture boundary in FIGS. 2(b)-2(d) and the rest of the sub-pictures may have different horizontal sizes in order to keep exactly the same search range for all sub-pictures under the same growing window memory size. This is a consequence of the position of the sub-pictures on the vertical picture boundary: their search window only has to overlap the neighboring sub-picture by srX pixels, in the right or the left direction. For the sub-pictures inside the picture, however, the search window has to overlap the neighboring sub-pictures by a total of 2*srX pixels, i.e., srX pixels to the right and srX pixels to the left. Therefore, in this embodiment of sub-picture partitioning, the horizontal size of the sub-pictures on the vertical picture boundary will be larger than that of the rest of the sub-pictures by srX pixels.


If we treat FIG. 2(a) as the special case of the sub-picture partitioning, then for a search range of srX*srY the on-chip memory size required by the growing window in the sub-picture coding mode can be computed as

MemSize=(2*srY+N)*(picWidth+2*(K-1)*srX)/K  (2)

where K is the number of sub-pictures.
















TABLE 1

srX    srY    N     K    picWidth    memSize (bytes)
256    256    64    1    7680        4423680
256    256    64    2    7680        2359296
256    256    64    3    7680        1671168
256    256    64    4    7680        1327104
256    256    64    5    7680        1120666
256    256    64    6    7680         983040
Table 1 lists the growing window memory size for different numbers of sub-pictures. As shown in Table 1, even if the picture is divided into only two sub-pictures, the on-chip memory requirement for the growing window goes down by almost half, which is a significant cost saving for the chip design.
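Equation (2) and the Table 1 entries can be reproduced with a short script (an illustrative sketch; the function name and the rounding up of the fractional K=5 result to whole bytes are our assumptions):

```python
import math

def sub_picture_mem(srX, srY, N, K, picWidth):
    """Equation (2): MemSize = (2*srY + N) * (picWidth + 2*(K-1)*srX) / K,
    rounded up to whole bytes."""
    return math.ceil((2 * srY + N) * (picWidth + 2 * (K - 1) * srX) / K)

# Reproduce Table 1: 256x256 search range, 64x64 MB, 7680-pixel-wide picture.
for K in range(1, 7):
    print(K, sub_picture_mem(256, 256, 64, K, 7680))
```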


For multi-core parallel processing, a picture can be evenly divided into sub-pictures so that each core has a balanced load. For hardware implementation it is extremely critical that the picture can be divided evenly, in order to minimize the implementation cost. This is due to the fact that cores are simply replicated, and each core must be designed to handle real-time encoding/decoding of the sub-picture of the largest size. Therefore, minimizing the largest sub-picture size when dividing the picture into sub-pictures is the key to reducing the hardware implementation cost of a multi-core codec.


Normally, a picture cannot be divided into sub-pictures in a perfectly even fashion, i.e., with all sub-pictures having equal size. For example, for 1080p sequences (1920×1080 picture size), if the largest coding unit (LCU) size is 64×64, the picture size will be 30×17 in units of LCUs. (In HEVC, the traditional macroblock concept of 16×16 block size may be extended to the LCU, which is up to 64×64 block size.) If the picture is divided into 4×2 sub-pictures (that is, the number of sub-picture columns is 4 and the number of sub-picture rows is 2), the sub-pictures will have different sizes, because 30 is not a multiple of 4 and 17 is not a multiple of 2. Hence, the horizontal size 30 is decomposed into 7+7+8+8 and the vertical size 17 into 8+9. As a result, the largest sub-picture has 8×9 LCUs and the smallest sub-picture has 7×8 LCUs. Alternatively, the horizontal picture size 30 could be divided into 7+7+7+9, but this kind of partitioning is less desirable because it results in a largest sub-picture size of 9×9 LCUs. The implementation would become more expensive because each core would need to handle sub-pictures of size 9×9 LCUs instead of 8×9 LCUs in real time.


Thus, in one embodiment, dividing a picture into sub-pictures for multi-core parallel processing is done by limiting the size difference between the largest and smallest sub-pictures to at most one LCU in each of the horizontal and vertical directions.


For example, let the picture size be W*H, in units of LCUs, and let the picture be divided into n*m sub-pictures; then








W=(n-k)*x+k*(x+1)=n*x+k
H=(m-j)*y+j*(y+1)=m*y+j

where x*y is the smallest sub-picture size and (x+1)*(y+1) is the largest sub-picture size. In the horizontal direction, k columns of sub-pictures will have size (x+1) and (n-k) columns will have size x. Likewise, in the vertical direction, j rows of sub-pictures will have size (y+1) and (m-j) rows will have size y. x, y, k and j are all integers in units of LCUs; they are determined by








x=W/n
k=W%n
y=H/m
j=H%m

For example, for W*H=30×17 and n*m=4×2, we have








x=30/4=7
k=30%4=2
y=17/2=8
j=17%2=1
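The partitioning rule above can be sketched as follows (an illustrative helper; the function name and the choice to place the larger segments last, as in FIG. 3, are assumptions, since the patent leaves the ordering to the implementer):

```python
def split_even(size, parts):
    """Split `size` LCUs into `parts` segments of size x or x+1, where
    x = size // parts and k = size % parts segments get the extra LCU."""
    x, k = divmod(size, parts)
    return [x] * (parts - k) + [x + 1] * k

# 1080p in 64x64 LCUs: a 30x17 picture divided into 4x2 sub-pictures.
print(split_even(30, 4))  # -> [7, 7, 8, 8]
print(split_even(17, 2))  # -> [8, 9]
```

The largest sub-picture is then 8×9 LCUs and the smallest 7×8 LCUs, matching the decomposition worked out above.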

FIG. 3 is an embodiment of sub-picture partitioning for multi-core parallel processing. In FIG. 3, a 30×17 picture (1080p) is evenly divided into 4×2 sub-pictures, with a largest sub-picture size of 8×9 and a smallest sub-picture size of 7×8. W=30 is decomposed into 30=7+7+8+8, and H=17 is decomposed into 17=8+9, which is the optimal sub-picture partitioning discussed above.


In one embodiment, the size difference between the largest and smallest sub-pictures is limited to at most one LCU in each direction, and the way to compute the sub-picture sizes and the number of sub-pictures of each determined size is specified. Such an embodiment may not impose any constraints on the sub-picture partitioning order.



FIG. 4 is an embodiment of an alternative sub-picture partitioning for multi-core parallel processing. FIG. 4 shows a sub-picture partitioning order different from that of FIG. 3, wherein a 30×17 picture (1080p) is evenly divided into 4×2 sub-pictures in an alternative order, with a largest sub-picture size of 8×9 and a smallest sub-picture size of 7×8. That is, once the sizes and numbers of sub-pictures are determined based on the proposed method, it is up to the user to divide the picture into sub-pictures of the determined sizes and numbers in any possible order.


As mentioned above, the sub-picture based raster scanning order coding significantly reduces the on-chip memory requirements for motion estimation and compensation while maintaining the intended coding efficiency, and thus reduces the chip cost of UHD video solutions. It also provides a way of evenly dividing a picture into sub-pictures so as to minimize the implementation cost of multi-core HEVC codecs.



FIG. 5 is an embodiment of high quality coding with sub-pictures. The sub-pictures are evenly divided and encoded sequentially on a single core in raster-scanning order within sub-pictures. The sub-pictures may have coding dependencies around sub-picture boundaries. Such dependencies could include intra prediction mode prediction, motion vector prediction, entropy coding, the de-blocking filter, the adaptive loop-filter, etc. The sub-picture coding leads to a larger vertical search range under the same amount of on-chip memory, and thus higher video quality.



FIG. 6 is an embodiment of multi-core parallel processing with sub-pictures. The sub-pictures are evenly divided and encoded in parallel, in raster-scanning order within sub-pictures, on multiple video cores. To ensure parallelism, the coding of sub-pictures is independent around sub-picture boundaries.



FIG. 7 is an embodiment combining the multi-core parallel processing of FIG. 6 with the high quality coding with sub-pictures of FIG. 5. In FIG. 7, the sub-pictures are evenly divided and encoded in parallel on multiple cores in raster-scanning order within sub-pictures. From core to core, the coding of sub-pictures, for example, sub-pictures 0 and 1 versus sub-pictures 2 and 3, is independent around sub-picture boundaries. But within a core, the coding of sub-pictures, such as sub-pictures 0 and 1, may be dependent around sub-picture boundaries.


While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, which is determined by the claims that follow. It should also be noted that picture and/or image may be used interchangeably and refer to a single image/picture or to a series of images/pictures.

Claims
  • 1. A system, comprising: a memory; one or more processors coupled to the memory and configured to: determine a position for each non-overlapping region in a frame in a first sub-frame scan order, the first sub-frame scan order being based on a position of each non-overlapping region in the frame, the frame comprising: a first sub-frame having a first set of non-overlapping regions; a second sub-frame having a second set of non-overlapping regions; and determine a sub-frame position of a single non-overlapping region from the frame position of the single non-overlapping region by: determining a sub-frame from the first sub-frame or the second sub-frame that contains the single non-overlapping region; and determining the sub-frame position of the single non-overlapping region based on a second sub-frame scan order, the second sub-frame scan order based on the position of the single non-overlapping region in the determined sub-frame.
  • 2. The system of claim 1, wherein the first sub-frame scan order is a raster-scan order in the frame.
  • 3. The system of claim 2, wherein the second sub-frame scan order is a raster-scan order in the determined sub-frame.
  • 4. The system of claim 3, wherein the first sub-frame is rectangular and the second sub-frame is rectangular.
  • 5. A method, comprising: determining a position for each non-overlapping region in a frame in a first sub-frame scan order, the first sub-frame scan order being based on a position of each non-overlapping region in the frame, the frame comprising: a first sub-frame having a first set of non-overlapping regions; a second sub-frame having a second set of non-overlapping regions; and determining a sub-frame position of a single non-overlapping region from the frame position of the single non-overlapping region by: determining a sub-frame from the first sub-frame or the second sub-frame that contains the single non-overlapping region; and determining the sub-frame position of the single non-overlapping region based on a second sub-frame scan order, the second sub-frame scan order based on the position of the single non-overlapping region in the determined sub-frame.
  • 6. The method of claim 5, wherein the first sub-frame scan order is a raster-scan order in the frame.
  • 7. The method of claim 6, wherein the second sub-frame scan order is a raster-scan order in the determined sub-frame.
  • 8. The method of claim 7, wherein the first sub-frame is rectangular and the second sub-frame is rectangular.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of application Ser. No. 16/167,134, filed Oct. 22, 2018, which is a continuation of application Ser. No. 14/664,992, filed Mar. 23, 2015 (now U.S. Pat. No. 10,110,901), which is a continuation of application Ser. No. 13/179,174, filed Jul. 8, 2011 (now U.S. Pat. No. 8,988,531), which claims the benefit of U.S. Provisional Application No. 61/362,468, filed Jul. 8, 2010, and U.S. Provisional Application No. 61/485,200, filed May 12, 2011, the entireties of all of which are hereby incorporated by reference.

US Referenced Citations (31)
Number Name Date Kind
7719540 Piazza et al. May 2010 B2
7822119 Boon et al. Oct 2010 B2
7830959 Park et al. Nov 2010 B2
8988531 Zhou Mar 2015 B2
10110901 Zhou Oct 2018 B2
10574992 Zhou Feb 2020 B2
10623741 Zhou Apr 2020 B2
10939113 Zhou Mar 2021 B2
20030123557 De With et al. Jul 2003 A1
20030159139 Candelore et al. Aug 2003 A1
20040003178 Magoshi Jan 2004 A1
20040010614 Mukherjee et al. Jan 2004 A1
20040022543 Hosking et al. Feb 2004 A1
20050219253 Piazza et al. Oct 2005 A1
20080031329 Iwata Feb 2008 A1
20080094419 Leigh et al. Apr 2008 A1
20080123750 Bronstein et al. May 2008 A1
20090094658 Kobayashi Apr 2009 A1
20100091836 Jia Apr 2010 A1
20100091880 Jia Apr 2010 A1
20100097250 Demircin et al. Apr 2010 A1
20100098155 Demircin et al. Apr 2010 A1
20100118945 Wada et al. May 2010 A1
20100128797 Dey May 2010 A1
20100195922 Amano et al. Aug 2010 A1
20100246683 Webb et al. Sep 2010 A1
20100260263 Kotaka et al. Oct 2010 A1
20100296744 Boon et al. Nov 2010 A1
20100321428 Saito et al. Dec 2010 A1
20110182523 Kim et al. Jul 2011 A1
20190058885 Zhou Feb 2019 A1
Non-Patent Literature Citations (1)
Entry
“group, v.” OED Online. Oxford University Press, Sep. 2016. Web. Dec. 5, 2016.
Related Publications (1)
Number Date Country
20200195929 A1 Jun 2020 US
Provisional Applications (2)
Number Date Country
61485200 May 2011 US
61362468 Jul 2010 US
Continuations (3)
Number Date Country
Parent 16167134 Oct 2018 US
Child 16799115 US
Parent 14664992 Mar 2015 US
Child 16167134 US
Parent 13179174 Jul 2011 US
Child 14664992 US