Embodiments of the present disclosure relate generally to point cloud coding techniques, and more particularly, to coding and encapsulation of coding parameters in point cloud coding.
A point cloud is a collection of individual data points in a three-dimensional (3D) space, with each point having a set of coordinates on the X, Y, and Z axes. Thus, a point cloud may be used to represent the physical content of the three-dimensional space. Point clouds have been shown to be a promising way to represent 3D visual data for a wide range of immersive applications, from augmented reality to autonomous cars.
Point cloud coding standards have evolved primarily through the development of the well-known MPEG organization. MPEG, short for Moving Picture Experts Group, is one of the main standardization groups dealing with multimedia. In 2017, the MPEG 3D Graphics Coding group (3DG) published a call for proposals (CFP) document to start developing a point cloud coding standard. The final standard will consist of two classes of solutions. Video-based Point Cloud Compression (V-PCC or VPCC) is appropriate for point sets with a relatively uniform distribution of points. Geometry-based Point Cloud Compression (G-PCC or GPCC) is appropriate for sparser distributions. However, the coding efficiency of conventional point cloud coding techniques is generally expected to be further improved.
In a first aspect, a method for point cloud coding is proposed. The method comprises: determining, during a conversion between a current point cloud (PC) sample of a point cloud sequence and a bitstream of the point cloud sequence, one or multiple reference PC samples for the current PC sample, wherein at least one reference frame comprising the one or multiple reference PC samples and a current frame comprising the current PC sample are in a group of frames (GOF); and performing the conversion based on the one or multiple reference PC samples. According to the method in accordance with the first aspect of the present disclosure, a hierarchical GOF structure is proposed to perform the inter prediction for geometry coding and attribute coding. One or multiple reference PC samples in the same GOF as the current PC sample may be used for the current PC sample, which improves prediction accuracy and increases coding efficiency. Moreover, the one or more reference PC samples may have later or earlier time stamps than the current frame, which further improves coding performance.
In a second aspect, an apparatus for processing point cloud data is proposed. The apparatus comprises a processor and a non-transitory memory with instructions thereon. The instructions, upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure.
In a third aspect, a non-transitory computer-readable storage medium is proposed. The non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
In a fourth aspect, a non-transitory computer-readable recording medium is proposed. The non-transitory computer-readable recording medium stores a bitstream of a point cloud sequence which is generated by a method performed by a point cloud processing apparatus. The method comprises: determining one or multiple reference point cloud (PC) samples for a current PC sample of the point cloud sequence, wherein at least one reference frame comprising the one or multiple reference PC samples and a current frame comprising the current PC sample are in a GOF; and generating the bitstream based on the one or multiple reference PC samples.
In a fifth aspect, a method for storing a bitstream of a point cloud sequence is proposed. The method comprises: determining one or multiple reference point cloud (PC) samples for a current PC sample of the point cloud sequence, wherein at least one reference frame comprising the one or multiple reference PC samples and a current frame comprising the current PC sample are in a GOF; generating the bitstream based on the one or multiple reference PC samples; and storing the bitstream in a non-transitory computer-readable recording medium.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Through the following detailed description with reference to the accompanying drawings, the above and other objectives, features, and advantages of example embodiments of the present disclosure will become more apparent. In the example embodiments of the present disclosure, the same reference numerals usually refer to the same components.
Throughout the drawings, the same or similar reference numerals usually refer to the same or similar elements.
Principles of the present disclosure will now be described with reference to some embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and to help those skilled in the art understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “has”, “having”, “includes” and/or “including”, when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
Source device 100 and destination device 120 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as smartphones and mobile phones, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, vehicles (e.g., terrestrial or marine vehicles, spacecraft, aircraft, etc.), robots, LIDAR devices, satellites, extended reality devices, or the like. In some cases, source device 100 and destination device 120 may be equipped for wireless communication.
The source device 100 may include a data source 112, a memory 114, a GPCC encoder 116, and an input/output (I/O) interface 118. The destination device 120 may include an input/output (I/O) interface 128, a GPCC decoder 126, a memory 124, and a data consumer 122. In accordance with this disclosure, GPCC encoder 116 of source device 100 and GPCC decoder 126 of destination device 120 may be configured to apply the techniques of this disclosure related to point cloud coding. Thus, source device 100 represents an example of an encoding device, while destination device 120 represents an example of a decoding device. In other examples, source device 100 and destination device 120 may include other components or arrangements. For example, source device 100 may receive data (e.g., point cloud data) from an internal or external source. Likewise, destination device 120 may interface with an external data consumer, rather than include a data consumer in the same device.
In general, data source 112 represents a source of point cloud data (i.e., raw, unencoded point cloud data) and may provide a sequential series of “frames” of the point cloud data to GPCC encoder 116, which encodes point cloud data for the frames. In some examples, data source 112 generates the point cloud data. Data source 112 of source device 100 may include a point cloud capture device, such as any of a variety of cameras or sensors, e.g., one or more video cameras, an archive containing previously captured point cloud data, a 3D scanner or a light detection and ranging (LIDAR) device, and/or a data feed interface to receive point cloud data from a data content provider. Thus, in some examples, data source 112 may generate the point cloud data based on signals from a LIDAR apparatus. Alternatively or additionally, point cloud data may be computer-generated from scanner, camera, sensor or other data. For example, data source 112 may generate the point cloud data, or produce a combination of live point cloud data, archived point cloud data, and computer-generated point cloud data. In each case, GPCC encoder 116 encodes the captured, pre-captured, or computer-generated point cloud data. GPCC encoder 116 may rearrange frames of the point cloud data from the received order (sometimes referred to as “display order”) into a coding order for coding. GPCC encoder 116 may generate one or more bitstreams including encoded point cloud data. Source device 100 may then output the encoded point cloud data via I/O interface 118 for reception and/or retrieval by, e.g., I/O interface 128 of destination device 120. The encoded point cloud data may be transmitted directly to destination device 120 via the I/O interface 118 through the network 130A. The encoded point cloud data may also be stored onto a storage medium/server 130B for access by destination device 120.
Memory 114 of source device 100 and memory 124 of destination device 120 may represent general purpose memories. In some examples, memory 114 and memory 124 may store raw point cloud data, e.g., raw point cloud data from data source 112 and raw, decoded point cloud data from GPCC decoder 126. Additionally or alternatively, memory 114 and memory 124 may store software instructions executable by, e.g., GPCC encoder 116 and GPCC decoder 126, respectively. Although memory 114 and memory 124 are shown separately from GPCC encoder 116 and GPCC decoder 126 in this example, it should be understood that GPCC encoder 116 and GPCC decoder 126 may also include internal memories for functionally similar or equivalent purposes. Furthermore, memory 114 and memory 124 may store encoded point cloud data, e.g., output from GPCC encoder 116 and input to GPCC decoder 126. In some examples, portions of memory 114 and memory 124 may be allocated as one or more buffers, e.g., to store raw, decoded, and/or encoded point cloud data. For instance, memory 114 and memory 124 may store point cloud data.
I/O interface 118 and I/O interface 128 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards), wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components. In examples where I/O interface 118 and I/O interface 128 comprise wireless components, I/O interface 118 and I/O interface 128 may be configured to transfer data, such as encoded point cloud data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution), LTE Advanced, 5G, or the like. In some examples where I/O interface 118 comprises a wireless transmitter, I/O interface 118 and I/O interface 128 may be configured to transfer data, such as encoded point cloud data, according to other wireless standards, such as an IEEE 802.11 specification. In some examples, source device 100 and/or destination device 120 may include respective system-on-a-chip (SoC) devices. For example, source device 100 may include an SoC device to perform the functionality attributed to GPCC encoder 116 and/or I/O interface 118, and destination device 120 may include an SoC device to perform the functionality attributed to GPCC decoder 126 and/or I/O interface 128.
The techniques of this disclosure may be applied to encoding and decoding in support of any of a variety of applications, such as communication between autonomous vehicles, communication between scanners, cameras, sensors and processing devices such as local or remote servers, geographic mapping, or other applications.
I/O interface 128 of destination device 120 receives an encoded bitstream from source device 100. The encoded bitstream may include signaling information defined by GPCC encoder 116, which is also used by GPCC decoder 126, such as syntax elements having values that represent a point cloud. Data consumer 122 uses the decoded data. For example, data consumer 122 may use the decoded point cloud data to determine the locations of physical objects. In some examples, data consumer 122 may comprise a display to present imagery based on the point cloud data.
GPCC encoder 116 and GPCC decoder 126 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of GPCC encoder 116 and GPCC decoder 126 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including GPCC encoder 116 and/or GPCC decoder 126 may comprise one or more integrated circuits, microprocessors, and/or other types of devices.
GPCC encoder 116 and GPCC decoder 126 may operate according to a coding standard, such as a video point cloud compression (VPCC) standard or a geometry point cloud compression (GPCC) standard. This disclosure may generally refer to coding (e.g., encoding and decoding) of frames to include the process of encoding or decoding data. An encoded bitstream generally includes a series of values for syntax elements representative of coding decisions (e.g., coding modes).
A point cloud may contain a set of points in a 3D space, and may have attributes associated with each point. The attributes may be color information such as R, G, B or Y, Cb, Cr, reflectance information, or other attributes. Point clouds may be captured by a variety of cameras or sensors such as LIDAR sensors and 3D scanners, and may also be computer-generated. Point cloud data are used in a variety of applications including, but not limited to, construction (modeling), graphics (3D models for visualizing and animation), and the automotive industry (LIDAR sensors used to help in navigation).
In both GPCC encoder 200 and GPCC decoder 300, point cloud positions are coded first. Attribute coding depends on the decoded geometry. In
For Category 3 data, the compressed geometry is typically represented as an octree from the root all the way down to a leaf level of individual voxels. For Category 1 data, the compressed geometry is typically represented by a pruned octree (i.e., an octree from the root down to a leaf level of blocks larger than voxels) plus a model that approximates the surface within each leaf of the pruned octree. In this way, both Category 1 and 3 data share the octree coding mechanism, while Category 1 data may in addition approximate the voxels within each leaf with a surface model. The surface model used is a triangulation comprising 1-10 triangles per block, resulting in a triangle soup. The Category 1 geometry codec is therefore known as the Trisoup geometry codec, while the Category 3 geometry codec is known as the Octree geometry codec.
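As a non-normative illustration of the octree representation described above, the following Python sketch voxelizes a small point set and derives the 8-bit occupancy code of a node; the child-index bit layout and the helper names are assumptions made for illustration rather than the G-PCC convention.

```python
import numpy as np

def occupancy_code(points, origin, size):
    """Compute the 8-bit occupancy code of an octree node.

    Bit i is set when the i-th child octant of the cube located at
    `origin` with edge length `size` contains at least one point.
    """
    half = size / 2.0
    code = 0
    for i in range(8):
        # Child index bits map to (x, y, z) half-offsets; this layout
        # is an assumption for illustration, not the G-PCC convention.
        offset = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
        lo, hi = origin + offset, origin + offset + half
        inside = np.all((points >= lo) & (points < hi), axis=1)
        if inside.any():
            code |= 1 << i
    return code

points = np.array([[0.1, 0.2, 0.3], [0.9, 0.8, 0.7], [0.4, 0.1, 0.9]])
print(bin(occupancy_code(points, origin=np.zeros(3), size=1.0)))
```

Recursing into each occupied octant until leaf level yields the octree; stopping earlier and modeling the leaf surface with triangles corresponds to the Trisoup variant.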
In the example of
As shown in the example of
Coordinate transform unit 202 may apply a transform to the coordinates of the points to transform the coordinates from an initial domain to a transform domain. This disclosure may refer to the transformed coordinates as transform coordinates. Color transform unit 204 may apply a transform to convert color information of the attributes to a different domain. For example, color transform unit 204 may convert color information from an RGB color space to a YCbCr color space.
Furthermore, in the example of
Geometry reconstruction unit 216 may reconstruct transform coordinates of points in the point cloud based on the octree, data indicating the surfaces determined by surface approximation analysis unit 212, and/or other information. The number of transform coordinates reconstructed by geometry reconstruction unit 216 may be different from the original number of points of the point cloud because of voxelization and surface approximation. This disclosure may refer to the resulting points as reconstructed points. Attribute transfer unit 208 may transfer attributes of the original points of the point cloud to reconstructed points of the point cloud data.
Furthermore, RAHT unit 218 may apply RAHT coding to the attributes of the reconstructed points. Alternatively or additionally, LOD generation unit 220 and lifting unit 222 may apply LOD processing and lifting, respectively, to the attributes of the reconstructed points. RAHT unit 218 and lifting unit 222 may generate coefficients based on the attributes. Coefficient quantization unit 224 may quantize the coefficients generated by RAHT unit 218 or lifting unit 222. Arithmetic encoding unit 226 may apply arithmetic coding to syntax elements representing the quantized coefficients. GPCC encoder 200 may output these syntax elements in an attribute bitstream.
In the example of
GPCC decoder 300 may obtain a geometry bitstream and an attribute bitstream. Geometry arithmetic decoding unit 302 of decoder 300 may apply arithmetic decoding (e.g., CABAC or other type of arithmetic decoding) to syntax elements in the geometry bitstream. Similarly, attribute arithmetic decoding unit 304 may apply arithmetic decoding to syntax elements in the attribute bitstream.
Octree synthesis unit 306 may synthesize an octree based on syntax elements parsed from the geometry bitstream. In instances where surface approximation is used in the geometry bitstream, surface approximation synthesis unit 310 may determine a surface model based on syntax elements parsed from the geometry bitstream and based on the octree.
Furthermore, geometry reconstruction unit 312 may perform a reconstruction to determine coordinates of points in a point cloud. Coordinate inverse transform unit 320 may apply an inverse transform to the reconstructed coordinates to convert the reconstructed coordinates (positions) of the points in the point cloud from a transform domain back into an initial domain.
Additionally, in the example of
Depending on how the attribute values are encoded, RAHT unit 314 may perform RAHT coding to determine, based on the inverse quantized attribute values, color values for points of the point cloud. Alternatively, LOD generation unit 316 and inverse lifting unit 318 may determine color values for points of the point cloud using a level of detail-based technique.
Furthermore, in the example of
The various units of
Some exemplary embodiments of the present disclosure will be described in detail hereinafter. It should be understood that section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to GPCC or other specific point cloud codecs, the disclosed techniques are applicable to other point cloud coding technologies also. Furthermore, while some embodiments describe point cloud coding steps in detail, it will be understood that corresponding decoding steps that undo the coding will be implemented by a decoder.
This invention is related to point cloud coding technologies. Specifically, it is about coding and encapsulation of coding parameters in point cloud coding. The ideas may be applied individually or in various combinations, to any point cloud coding standard or non-standard point cloud codec, e.g., the Geometry-based Point Cloud Compression (G-PCC) standard under development.
Point cloud coding standards have evolved primarily through the development of the well-known MPEG organization. MPEG, short for Moving Picture Experts Group, is one of the main standardization groups dealing with multimedia. In 2017, the MPEG 3D Graphics Coding group (3DG) published a call for proposals (CFP) document to start developing a point cloud coding standard. The final standard will consist of two classes of solutions. Video-based Point Cloud Compression (V-PCC) is appropriate for point sets with a relatively uniform distribution of points. Geometry-based Point Cloud Compression (G-PCC) is appropriate for sparser distributions.
To explore future point cloud coding technologies in G-PCC, Core Experiment (CE) 13.5 and Exploration Experiment (EE) 13.2 were formed to develop inter prediction technologies in G-PCC. Since then, many new inter prediction methods have been adopted by MPEG and integrated into the reference software named inter Exploration Model (inter-EM).
In one point cloud frame, there are many data points describing the 3D objects or scenes. For each data point, there may be corresponding geometry information and attribute information. Geometry information is used to record the spatial location of the data point. Attribute information is used to record more details of the data point, such as texture, normal vector and reflectance. In inter-EM, there are some optional tools to support the inter prediction coding and decoding of geometry information and attribute information, respectively.
For attribute information, the codec uses the attribute information of the reference points to perform inter prediction for each point in the current frame. The reference points are selected from the data points in the current frame and the reference frame based on the geometric distances of the points. Each reference point corresponds to one weight value, which is based on its geometric distance from the current point. The predicted attribute value can be the weighted average of the attribute values of the reference points, or one of those attribute values. The decision on the predicted attribute value is based on rate-distortion optimization (RDO) methods.
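To make the weighted prediction concrete, here is a minimal Python sketch; the inverse-distance weighting and the construction of the candidate list are assumptions for illustration, not the exact inter-EM rules.

```python
import numpy as np

def predict_attribute(cur_pos, ref_positions, ref_attrs):
    """Weighted-average attribute prediction from selected reference points.

    Weights are assumed here to be inverse geometric distances; the
    exact weighting used by inter-EM may differ.
    """
    d = np.linalg.norm(ref_positions - cur_pos, axis=1)
    w = 1.0 / np.maximum(d, 1e-9)          # avoid division by zero
    weighted_avg = np.dot(w, ref_attrs) / w.sum()
    # Candidate list: the weighted average plus each reference attribute.
    # A real encoder would pick among these with rate-distortion costs.
    return [weighted_avg] + list(ref_attrs)

cands = predict_attribute(np.array([1.0, 2.0, 3.0]),
                          np.array([[1, 2, 2.5], [0.5, 2, 3], [1, 3, 3]]),
                          np.array([100.0, 104.0, 96.0]))
print(cands)  # weighted average first, then the individual candidates
```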
For geometry information, there are two main methods to perform the inter prediction coding: the octree-based method and the predictive-tree-based method.
In the first method, the geometry information is represented by octree structures and the occupancy code (OC) of each node. For each node in the octree of the current frame, the codec will decide whether to perform the eight-way octree division based on the number of points in the current node. The same division will be performed on the corresponding reference node in the reference frame. At the same time, the occupancy codes of the current node and the reference node will be calculated. The codec will use the occupancy code of the reference node to perform the prediction coding for the occupancy code of the current node.
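The occupancy-code prediction step can be illustrated with the short sketch below; pairing each current bit with the co-located reference bit as an entropy-coding context, and counting mismatches, are assumed simplifications of the actual scheme.

```python
def mismatched_bits(oc_cur, oc_ref):
    """Number of child-occupancy bits that differ between the current
    node and its co-located reference node (popcount of the XOR)."""
    return bin(oc_cur ^ oc_ref).count("1")

def predict_bits(oc_cur, oc_ref):
    """Pair each bit of the current occupancy code with the co-located
    reference bit; an entropy coder could condition on the latter."""
    return [((oc_cur >> i) & 1, (oc_ref >> i) & 1) for i in range(8)]

print(mismatched_bits(0b10110010, 0b10100011))  # -> 2
```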
In the second method, the points in the point cloud are sorted to form a predictive tree. As shown in
In the current inter-EM, an IPPP GOF structure is applied, which means that the reference frame of the current frame is the previous frame if the current frame applies inter prediction. At the same time, inter-EM uses quantization parameters (QPs) to control the bit rate, and all frames share the same QP values.
The existing designs for inter prediction for point cloud compression have the following problems: with the IPPP GOF structure, only the immediately previous frame can be used as the reference frame, which limits prediction accuracy; and all frames share the same QP values, so the coding accuracy cannot be adapted to the importance of each frame, which limits coding efficiency.
To solve the above problems and some other problems not mentioned, methods as summarized below are disclosed. The embodiments of the present disclosure should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these inventions can be applied individually or combined in any manner.
In the following discussions, the term “PC sample” refers to the unit that performs prediction coding in the point cloud sequence coding, such as a frame/picture/slice/tile/subpicture/node/point or another unit that contains one or more nodes or points.
In the example, the point cloud frames in the point cloud sequence are divided into multiple GOFs and the GOF size is set to 8.
As shown in
The first frame is the random access (RA) point, which means that there is only intra prediction coding but no inter prediction coding.
For the other 7 frames, both the intra prediction coding and inter prediction coding will be performed based on the reference relationship.
As shown in
For frames “1”˜“7”, each frame has two reference frames. One reference frame has an earlier time stamp than the current frame and the other reference frame has a later time stamp than the current frame. For each frame, the reference frames are shown in Table 1.
To make sure that the reference frames are encoded and decoded before the current frames, the encoding and decoding order for frames “0”˜“8” is {0, 8, 4, 2, 1, 3, 6, 5, 7}.
It should be noted that frame “8” is the first frame of the next GOF, but it should be processed before frames “1”˜“7”. If frame “8” was encoded or decoded in the current GOF, the processing of frame “8”, which is also frame “0” of the next GOF, should be skipped in the next GOF.
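The order {0, 8, 4, 2, 1, 3, 6, 5, 7} can be derived by recursively coding the midpoint of already-coded frame pairs, as in the following Python sketch; the recursion and the returned reference pairs are an assumed reconstruction consistent with the stated order, not a quoted algorithm.

```python
def gof_coding_order(size):
    """Coding order and (earlier, later) references for one GOF.

    Frame 0 is the I-frame; frame `size` is the first frame of the
    next GOF (itself an I-frame in this example), coded early so it
    can serve as a backward reference for the frames in between.
    """
    order = [0, size]
    refs = {0: None, size: None}          # I-frames: intra only
    def split(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        order.append(mid)
        refs[mid] = (lo, hi)              # earlier and later references
        split(lo, mid)
        split(mid, hi)
    split(0, size)
    return order, refs

order, refs = gof_coding_order(8)
print(order)   # [0, 8, 4, 2, 1, 3, 6, 5, 7]
print(refs[5]) # (4, 6)
```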
In the example, the point cloud frames in the point cloud sequence are divided into multiple GOFs and the GOF size is set to 8. As shown in
The hierarchical coding priorities are calculated based on the principle that the reference frame has a higher coding priority than the current frame. The coding priority results are shown in Table 2.
In the example, the QP value is used to control the coding accuracy. The lower the QP value, the higher the coding accuracy. Thus, a hierarchical QP value structure is applied to the frames so that the coding accuracy can be changed based on the coding priority.
For each frame, the QP value is calculated as the base QP value plus the QP_shift value of the frame.
The QP_shift values for each frame are shown in Table 3.
The parameter step is a non-negative number that is used to control the change scale of the hierarchical QP values. In tests, it can be 2, 3, 4, etc.
For example, when step is set to 3, the QP_shift values for each frame are shown in Table 4.
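Since the exact QP_shift assignment is not reproduced above, the following sketch assumes a common hierarchical construction in which QP_shift grows by step per hierarchy level, with the I-frames at level 0; the level-times-step rule is an assumption consistent with a hierarchical design, not the normative inter-EM formula.

```python
def qp_shift_table(size, step):
    """Assign a QP_shift to each frame of a GOF from its hierarchy level.

    Level 0 holds the I-frames (shift 0); each deeper level is
    referenced by fewer frames and tolerates a larger shift. The
    level-times-step rule is an assumed example, not the exact
    inter-EM formula.
    """
    level = {0: 0, size: 0}
    def split(lo, hi, depth):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        level[mid] = depth
        split(lo, mid, depth + 1)
        split(mid, hi, depth + 1)
    split(0, size, 1)
    return {f: level[f] * step for f in sorted(level)}

print(qp_shift_table(8, step=3))
# {0: 0, 1: 9, 2: 6, 3: 9, 4: 3, 5: 9, 6: 6, 7: 9, 8: 0}
```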
In this example, the geometry information is represented by an octree structure and the occupancy code of each node. As shown in
At the encoder, the same octree division is performed on the current frame and the reference frames. Thus, the octree structures are the same for the current frame and the reference frames. A FIFO queue is used to store the nodes that need to be processed.
A Boolean flag predicted_forward is used for each node to indicate the source of the reference occupancy code: a value of 1 indicates that the reference occupancy code is derived from the forward (previous) reference frame, and a value of 0 indicates that it is derived from the backward (following) reference frame.
A parameter mismatched_count_parent_node is used to indicate the number of mismatched bits between the occupancy code of the parent node and the reference occupancy code of the parent node. Firstly, the root node of the octree of the current frame is generated and pushed into the queue. The predicted_forward value of the root node is set to 1. The mismatched_count_parent_node value of the root node is set to 0.
Secondly, the following process is performed until the queue is empty:
At the decoder, the same process is performed on the current frame and the reference frames. Thus, the reference occupancy code can be derived for each node. The occupancy code can be decoded based on the reference occupancy code.
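The per-node loop that is elided above might look like the following non-normative sketch; the Node structure, the stand-in for entropy coding, and the rule that flips the prediction direction when the parent's direction mismatched too many bits are all assumptions made for illustration.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Node:
    oc: int                      # occupancy code of this node
    ref_oc_fwd: int              # co-located occupancy code, forward ref
    ref_oc_bwd: int              # co-located occupancy code, backward ref
    children: list = field(default_factory=list)

def encode_octree_inter(root, threshold=2):
    """Breadth-first pass over the octree using a FIFO queue.

    Each entry carries (node, predicted_forward, mismatched_count_parent_node).
    The flip-direction-on-threshold rule is an assumed illustration,
    not the normative inter-EM behaviour.
    """
    queue = deque([(root, True, 0)])
    coded = []
    while queue:
        node, fwd, mismatch_parent = queue.popleft()
        ref_oc = node.ref_oc_fwd if fwd else node.ref_oc_bwd
        mismatch = bin(node.oc ^ ref_oc).count("1")
        coded.append((node.oc, ref_oc))   # stand-in for entropy coding
        for child in node.children:
            # Keep the parent's direction unless it predicted poorly.
            queue.append((child,
                          fwd if mismatch <= threshold else not fwd,
                          mismatch))
    return coded

leaf = Node(oc=0b00000001, ref_oc_fwd=0b00000001, ref_oc_bwd=0b00000011)
root = Node(oc=0b10010001, ref_oc_fwd=0b10010011, ref_oc_bwd=0b00010001,
            children=[leaf])
print(encode_octree_inter(root))
```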
In this example, the attribute information is represented by the reflectance value of each point. As shown in
At the encoder, 3 reference points, {point 0, point 1, point 2}, will be selected from the current frame and the reference frames. The predicted attribute value will be calculated based on the attribute values of the reference points. Then the residual between the predicted attribute value and the current attribute value will be calculated and signaled to the decoder.
For each point, an array neighbors is used to record the selected reference points together with their weight values. The weight value of each reference point is the distance between the reference point and the current point.
Firstly, the points in the current frame and the reference frames are reordered in Morton code order.
Secondly, for each point, the encoder will search for the 3 reference points that are nearest to the current point. The search results and their weight values will be stored in neighbors.
Thirdly, the weight values of the reference points will be recomputed. A reference point from the current frame should have a higher weight value.
Fourthly, the predicted attribute value will be selected from a candidate list:
For each candidate value, a coding score will be calculated based on the compression bits and prediction residual. Then the encoder will select the candidate value with the highest coding score. The indication referring to the selected candidate will be signaled to the decoder.
Finally, the residual between the attribute value and the predicted attribute value will be calculated and signaled to the decoder.
At the decoder, the reference points will be searched for each point using the same method as in the encoding process. The candidate list will be calculated in the same way, and the indication will be decoded for each point to obtain the predicted attribute value. Based on that, the prediction residual will be decoded and the real attribute value will be reconstructed.
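A compact sketch of the encoder-side candidate selection follows; the candidate list (weighted average plus each reference attribute) matches the description above, while the scoring proxy and the assumed index-bit costs merely stand in for the real compression-bit measurement.

```python
import numpy as np

def select_candidate(cur_attr, ref_attrs, weights, lam=1.0):
    """Pick the predicted attribute value from a candidate list.

    Candidates: the weighted average of the reference attributes and
    each individual reference attribute. The score here is a simple
    negative (residual + lambda * index-bits) proxy; the real inter-EM
    scoring based on compression bits is more elaborate.
    """
    candidates = [np.dot(weights, ref_attrs) / np.sum(weights)]
    candidates += list(ref_attrs)
    scores = []
    for idx, cand in enumerate(candidates):
        residual_cost = abs(cur_attr - cand)
        index_bits = 1 if idx == 0 else 2   # assumed truncated-unary cost
        scores.append(-(residual_cost + lam * index_bits))
    best = int(np.argmax(scores))
    return best, candidates[best], cur_attr - candidates[best]

idx, pred, resid = select_candidate(102.0, np.array([100.0, 104.0, 96.0]),
                                    np.array([0.5, 0.3, 0.2]))
print(idx, pred, resid)  # chosen index, prediction, residual to signal
```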
A hierarchical GOF structure is proposed to perform the inter prediction for geometry coding and attribute coding.
In the hierarchical GOF structure, the first frame in each GOF is an I-frame. The other frames in the GOF are B-frames, which means that each such frame will use two reference frames, one from the forward direction and one from the backward direction.
As shown in
For frames “0”˜“8”, the reference frames are shown in Table 5.
The encoding and decoding order for frames “0”˜“8” is {0, 8, 4, 2, 1, 3, 6, 5, 7}.
For geometry coding, the same octree division is performed on the current frame and the two reference frames.
For each node in the octree, the occupancy codes of the current node and the reference nodes are calculated. As shown in
For each child node of the current node, the corresponding bit values in the occupancy codes of the reference nodes are denoted as bit_pre and bit_follow:
If bit_pre=1 and bit_follow=0, the prediction direction of the child node is set to using the previous reference (forward) node to perform inter prediction.
If bit_pre=0 and bit_follow=1, the prediction direction of the child node is set to using the following reference (backward) node to perform inter prediction.
If bit_pre=bit_follow, the numbers of mismatched bits between the occupancy code of the current node and the occupancy codes of the two reference nodes are calculated.
If the numbers of mismatched bits are different, the prediction direction of the child node is set to the direction with the smaller number of mismatched bits. Otherwise, the prediction direction of the child node is set to the prediction direction of the current node.
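These four rules translate almost directly into code; in the following sketch only the function signature and the string labels are invented for illustration.

```python
def popcount8(a, b):
    """Number of mismatched bits between two 8-bit occupancy codes."""
    return bin((a ^ b) & 0xFF).count("1")

def child_direction(bit_pre, bit_follow, oc_cur, oc_pre, oc_follow,
                    parent_direction):
    """Per-child prediction direction following the rules above.

    Returns "forward" (previous reference) or "backward" (following
    reference). Ties fall back to the current node's own direction,
    as described in the text.
    """
    if bit_pre == 1 and bit_follow == 0:
        return "forward"
    if bit_pre == 0 and bit_follow == 1:
        return "backward"
    # bit_pre == bit_follow: compare full-code mismatch counts.
    m_pre = popcount8(oc_cur, oc_pre)
    m_follow = popcount8(oc_cur, oc_follow)
    if m_pre < m_follow:
        return "forward"
    if m_follow < m_pre:
        return "backward"
    return parent_direction

print(child_direction(1, 1, 0b10110010, 0b10100010, 0b00110011, "forward"))
```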
When coding the attributes, three reference points, {point 0, point 1, point 2}, are selected from the current frame and the two reference frames. The predicted attribute value will be calculated based on the attribute values of the reference points, which is similar to inter-EM.
Besides, a hierarchical QP structure is applied to perform the attribute coding. There is a QPshift value for each frame based on the reference relationship. The QPshift value of a reference frame should be lower than that of the current frame.
For each frame, the real attribute QP value is set to the base QP value plus the QPshift value of the frame.
The quantization process is performed based on the real attribute QP value.
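A minimal sketch of the quantization driven by the real attribute QP is given below; the HEVC-style exponential QP-to-step mapping (step doubling every 6 QP) is an assumption, since the document does not reproduce the codec's actual mapping.

```python
def quantize_attribute(residual, base_qp, qp_shift):
    """Uniform scalar quantization driven by the per-frame attribute QP.

    real_qp = base_qp + QPshift follows the hierarchical design above;
    the exponential QP-to-step mapping (doubling every 6 QP) is an
    HEVC-style assumption, not the codec's normative mapping.
    """
    real_qp = base_qp + qp_shift
    qstep = 2.0 ** (real_qp / 6.0)
    level = round(residual / qstep)            # encoder side
    return real_qp, level, level * qstep       # decoder reconstruction

print(quantize_attribute(7.3, base_qp=10, qp_shift=3))
```

A reference frame, carrying a smaller QPshift, gets a finer quantization step and is therefore reconstructed more accurately than the frames that depend on it.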
More details of the embodiments of the present disclosure will be described below which are related to coding and encapsulation of coding parameters in point cloud coding.
As discussed above, in one point cloud frame, there are many data points to describe the 3D objects or scenes. For each data point, there may be corresponding geometry information and attribute information. The geometry information is used to record the spatial location of the data point. The attribute information is used to record more details of the data point, such as texture, normal vector and reflectance.
Either the attribute information or the geometry information can be used for performing inter prediction for each point in a frame. However, both the prediction accuracy and the coding efficiency of conventional inter prediction need to be further improved.
To solve the above problems and some other problems not mentioned, methods as summarized below are disclosed. The embodiments of the present disclosure should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these embodiments can be applied individually or combined in any manner.
As used herein, the term “PC sample” may refer to the unit that performs prediction coding in the point cloud sequence coding, such as a frame/picture/slice/tile/subpicture/node/point or another unit that contains one or more nodes or points.
The current PC sample may comprise a frame, one or more points, or other units in a frame or a picture. In some embodiments, the current PC sample may be comprised in a current frame of the point cloud sequence. The multiple reference PC samples may be from a plurality of reference frames or from only one reference frame of the point cloud sequence. In other words, the one or multiple reference PC samples may be comprised in at least one reference frame.
To perform point cloud compression, frames in the point cloud sequence may be divided into one or more groups of frames (GOFs). According to embodiments of the present disclosure, the at least one reference frame and the current frame are in the same GOF.
According to embodiments of the present disclosure, there is no limit on the time stamp(s) of the one or multiple reference PC samples relative to the current PC sample. That is, the time stamp of any of the one or multiple reference PC samples may be earlier or later than that of the current PC sample. For example, there may be a plurality of reference PC samples for the current PC sample, and these reference PC samples may have earlier or later time stamps than the current PC sample. Alternatively, in some embodiments, if the current PC sample only has one reference PC sample, the reference PC sample may have a time stamp later than that of the current PC sample. As a further alternative, if the current PC sample only has one reference PC sample, the time stamp of the reference PC sample may be earlier than that of the current PC sample, but the reference PC sample is not the PC sample immediately preceding the current PC sample in terms of time stamp.
At 904, the conversion is performed based on the one or multiple reference PC samples.
According to the method 900, a hierarchical GOF structure is proposed to perform the inter prediction for geometry coding and attribute coding. One or multiple reference PC samples in the same GOF as the current PC sample may be used for the current PC sample, which improves prediction accuracy and increases coding efficiency. Moreover, the one or more reference PC samples may have later or earlier time stamps than the current frame, which further improves coding performance.
Implementations of the present disclosure can be described in view of the following clauses, the features of which can be combined in any reasonable manner.
In some embodiments, frames of the point cloud sequence may be divided into one or more GOFs to perform point cloud compression. There may be various ways for dividing the frames into the one or more GOFs. For example, a plurality of consecutive frames of the point cloud sequence in time stamp order may be clustered as one GOF. The GOFs may have the same or different sizes. For instance, the number of the plurality of consecutive frames equals the size of the GOF.
Alternatively, in some embodiments, each frame of the point cloud sequence belongs to one GOF.
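A trivial sketch of the equal-size division described above follows; the handling of a final short GOF when the sequence length is not a multiple of the GOF size is an assumption.

```python
def divide_into_gofs(num_frames, gof_size):
    """Cluster consecutive frames (in time-stamp order) into GOFs.

    Equal-size GOFs are assumed; the last GOF may be shorter when the
    sequence length is not a multiple of the GOF size.
    """
    return [list(range(start, min(start + gof_size, num_frames)))
            for start in range(0, num_frames, gof_size)]

print(divide_into_gofs(20, 8))
# [[0, 1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14, 15], [16, 17, 18, 19]]
```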
In some embodiments, the first frame of a GOF in decoding order may be an I-frame. In some embodiments, there is only intra prediction for the I-frame.
In some embodiments, the first frame of a GOF in decoding order is not an I-frame. In some embodiments, the first frame of a GOF in decoding order is a P-frame.
In some embodiments, the first frame of a GOF in decoding order may be a P-frame or a B-frame with all of the at least one reference frame ahead of the current frame in the time stamp order.
In some embodiments, whether to code the first frame of a GOF in decoding order as an I-frame may depend on an intra period or a random access period.
In some embodiments, the size of a GOF may be equal to the intra period or the random access period.
In some embodiments, the size of a GOF may be smaller than the intra period or the random access period.
In some embodiments, an indication of the size of a GOF and/or coding structure within a GOF may be indicated in the bitstream. In this way, the indication can be signalled to the decoder of the bitstream.
In some embodiments, at 904, the conversion may be performed by performing inter prediction for the current PC sample by using the one or multiple reference PC samples.
In some embodiments, the multiple reference PC samples may be from different frames or slices. Alternatively, the multiple reference PC samples may be from the same frame or slice.
In some embodiments, a reference PC sample in the one or multiple reference PC samples may be from a frame or slice comprising the current PC sample.
In some embodiments, an indication about whether to use the multiple reference PC samples is indicated in the bitstream.
In some embodiments, the reference information of the current PC sample may be derived at a decoder of the bitstream. Alternatively or in addition, the reference information of the current PC sample may be indicated in the bitstream.
The reference information may comprise information related to the reference PC sample(s), for example, where the reference PC samples are from and/or which reference PC samples are to be used.
In some embodiments, the reference information may comprise a reference direction. Whether a reference direction is indicated in the bitstream may depend on reference picture list information. Alternatively or in addition, the reference information may comprise an indication of a reference frame where the one or multiple reference PC samples are from, an indication of the number of the one or multiple reference PC samples, a reference relationship indication referring to at least one of the one or multiple reference PC samples, at least one of the one or multiple reference PC samples, and/or the like.
The reference direction may comprise various information. For example, the reference direction may comprise a uni-prediction from a reference frame in a first reference list, a uni-prediction from a reference frame in a second reference list different from the first reference list, a bi-prediction from a first reference frame in the first reference list and a second reference frame in the second reference list, and/or the like.
In some embodiments, relative positions of reference frames in a reference list may be fixed for the current frame within one GOF.
In some embodiments, the previously coded N frames in display order may be utilized as the reference frames, N being an integer. For example, N equals 2. In some embodiments, N may be indicated in the bitstream.
In some embodiments, the N frames may be consecutively previously coded frames right before the current frame.
In some embodiments, the relative positions of the reference frames in reference lists may be adaptive for the current frame in a GOF. For example, the relative positions may be derived based on a size of the GOF (also referred to as “GOF size” in the present disclosure).
In some embodiments, the indication of the reference frame is indicated as a reference list index and a reference frame index in the reference list. The reference list index may indicate the first reference list or the second reference list. Alternatively, the indication of the reference frame is indicated by a reference direction and a reference frame index for the reference direction.
In some embodiments, if there is only one reference list, the reference list index is not indicated in the bitstream.
In some embodiments, if there is only one reference frame in the reference list, the reference frame index is not indicated in the bitstream.
In some embodiments, whether a reference relationship indication referring to at least one of the one or multiple reference PC samples is indicated in the bitstream depends on whether samples other than the immediately previous sample are used as the reference PC samples.
In some embodiments, the reference relationship indication is represented by an index indicating an associated sample to be used as one of the reference PC samples.
In some embodiments, the reference relationship indication may be coded with various coding approaches, such as fixed-length coding, unary coding, truncated unary coding, or the like. In addition or alternatively, the reference relationship indication may be coded in a predictive way.
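As an illustration of one of these binarizations, a truncated unary code with an assumed maximum value cmax could be produced as follows.

```python
def truncated_unary(value, cmax):
    """Truncated-unary binarization of a reference relationship index.

    `value` is coded as `value` one-bits followed by a terminating zero,
    except that the terminator is dropped when value == cmax.
    """
    bits = "1" * value
    return bits if value == cmax else bits + "0"

for v in range(4):
    print(v, truncated_unary(v, cmax=3))
# 0 -> "0", 1 -> "10", 2 -> "110", 3 -> "111"
```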
In some embodiments, the one or multiple reference PC samples may comprise a first reference PC sample in a reference frame with a later time stamp than the current frame comprising the current PC sample.
In some embodiments, there may be time stamp information for each frame in the point cloud sequence.
In some embodiments, a time stamp order of frames in the point cloud sequence may be the same as their displaying order. Alternatively, or in addition, the time stamp order of frames in the point cloud sequence may be the same as their rendering order.
In some embodiments, a time stamp for each PC sample may be equal to a time stamp of a frame comprising the PC sample.
In some embodiments, the one or multiple reference PC samples may comprise a second reference PC sample with an earlier time stamp than the current frame comprising the current PC sample.
In some embodiments, the one or multiple reference PC samples may comprise a third reference PC sample with the same time stamp as the current frame comprising the current PC sample.
In some embodiments, an indication indicating whether a sample with a later time stamp than the current frame is allowed to be used as a reference PC sample may be indicated in the bitstream. In this way, this indication may be signalled to the decoder.
In some embodiments, there is a low-delay mode for point cloud compression. With the low-delay mode, a time stamp order of frames in the point cloud sequence and a decoding order of frames in the point cloud sequence may be the same.
In some embodiments, an indication of whether to use low-delay mode may be indicated in the bitstream.
In some embodiments, the multiple reference frames may be used for the low-delay mode.
In some embodiments, a sample in a frame with an earlier time stamp or the same time stamp may be used as the reference PC sample in the low-delay mode. Alternatively, or in addition, in some embodiments, a sample with an earlier time stamp than the current PC sample may be used as a reference PC sample for the current PC sample in the low-delay mode.
In some embodiments, a sample with the same time stamp as the current PC sample may be used as a reference PC sample for the current PC sample in low delay mode.
According to some embodiments of the present disclosure, the current PC sample and the one or multiple reference PC samples may be coded in different orders and/or with different coding accuracies. In this way, in the case of limited transmission resources, the reference PC samples can be assigned a lower QP value to ensure that they can be transmitted more accurately. As such, there is no need to apply the same coding accuracy for all frames. Accordingly, the coding performance can be improved.
In some embodiments, PC samples in the point cloud sequence may have different coding priorities. In this case, the method 900 may further comprise: applying hierarchical coding accuracies to the PC samples based on the coding priorities of the PC samples. The coding priorities of the PC samples may be configured in different ways. For example, the coding priorities of the one or multiple reference PC samples may be higher than that of the current PC sample.
In some embodiments, coding accuracy of a first PC sample in the point cloud sequence with a higher coding priority may be higher than a second PC sample in the point cloud sequence with a lower coding priority.
In some embodiments, the coding accuracies of PC samples in the point cloud sequence may be controlled by a QP value or a quantization step in the point cloud sequence coding. The QP value or the quantization step for a reference PC sample in the one or more multiple reference PC samples may be smaller than that for the current PC sample.
In some embodiments, a delta value of the QP value or the quantization step for a reference PC sample is fixed, or a delta value of the QP value or the quantization step for the current PC sample is fixed.
In some embodiments, a delta value of the QP value or the quantization step for a reference PC sample may be derived at a decoder of the bitstream. Alternatively, or in addition, a delta value of the QP value or the quantization step for the current PC sample may be derived at a decoder of the bitstream.
In some embodiments, the delta value may be derived based on a variety of factors, for example, a size of a GOF, an intra period or a random access period, indicators of lossless coding mode, indicators of low delay coding mode, and/or the like.
In some embodiments, a delta value of the QP value or the quantization step for a reference PC sample is indicated in the bitstream. Alternatively, or in addition, a delta value of the QP value or the quantization step for the current PC sample may be indicated in the bitstream.
In some embodiments, an indication indicating whether hierarchical QP values and/or quantization steps are to be used may be indicated in the bitstream. As such, this indication can be signalled to the decoder of the bitstream.
In some embodiments, a QP value for each PC sample in the point cloud sequence may be derived at a decoder of the bitstream. Alternatively or in addition, a QP value for a frame, a block, a cube, a tile, or a slice of the point cloud sequence is derived at a decoder of the bitstream.
In some embodiments, a PC sample may be one of the following: a frame, a picture, a slice, a tile, a subpicture, a node, a point, or a unit containing one or more nodes or points.
In some embodiments, the conversion includes encoding the current PC sample into the bitstream.
In some embodiments, the conversion includes decoding the current PC sample from the bitstream.
It is to be understood that the above-mentioned examples are only for the purpose of illustration, without suggesting any limitations. Scope of the present disclosure may not be limited in this regard.
According to further embodiments of the present disclosure, a bitstream of a point cloud sequence may be stored in a non-transitory computer-readable recording medium. The bitstream of the point cloud sequence can be generated by a method performed by a point cloud processing apparatus. According to the method, one or multiple reference PC samples are determined for a current PC sample of the point cloud sequence. At least one reference frame comprising the one or multiple reference PC samples and a current frame comprising the current PC sample are in a GOF. A bitstream of the current frame may be generated based on the one or multiple reference PC samples.
According to still further embodiments of the present disclosure, a method for storing a bitstream of a point cloud sequence is proposed. In the method, one or multiple reference PC samples are determined for a current PC sample of the point cloud sequence. At least one reference frame comprising the one or multiple reference PC samples and a current frame comprising the current PC sample are in a GOF. A bitstream of the current frame may be generated based on the one or multiple reference PC samples. The bitstream may be stored in a non-transitory computer-readable recording medium.
It would be appreciated that the computing device 1000 shown in
As shown in
In some embodiments, the computing device 1000 may be implemented as any user terminal or server terminal having the computing capability. The server terminal may be a server, a large-scale computing device or the like that is provided by a service provider. The user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA), audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It would be contemplated that the computing device 1000 can support any type of interface to a user (such as “wearable” circuitry and the like).
The processing unit 1010 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1020. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 1000. The processing unit 1010 may also be referred to as a central processing unit (CPU), a microprocessor, a controller or a microcontroller.
The computing device 1000 typically includes various computer storage media. Such media can be any media accessible by the computing device 1000, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media. The memory 1020 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM)), a non-volatile memory (such as a Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), or a flash memory), or any combination thereof. The storage unit 1030 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk or other media, which can be used for storing information and/or data and can be accessed within the computing device 1000.
The computing device 1000 may further include additional detachable/non-detachable, volatile/non-volatile memory medium. Although not shown in
The communication unit 1040 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 1000 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 1000 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
The input device 1050 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like. The output device 1060 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like. By means of the communication unit 1040, the computing device 1000 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 1000, or any devices (such as a network card, a modem and the like) enabling the computing device 1000 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown).
In some embodiments, instead of being integrated in a single device, some or all components of the computing device 1000 may also be arranged in cloud computing architecture. In the cloud computing architecture, the components may be provided remotely and work together to implement the functionalities described in the present disclosure. In some embodiments, cloud computing provides computing, software, data access and storage service, which will not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services. In various embodiments, the cloud computing provides the services via a wide area network (such as Internet) using suitable protocols. For example, a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components. The software or components of the cloud computing architecture and corresponding data may be stored on a server at a remote position. The computing resources in the cloud computing environment may be merged or distributed at locations in a remote data center. Cloud computing infrastructures may provide the services through a shared data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
The computing device 1000 may be used to implement point cloud encoding/decoding in embodiments of the present disclosure. The memory 1020 may include one or more point cloud coding modules 1025 having one or more program instructions. These modules are accessible and executable by the processing unit 1010 to perform the functionalities of the various embodiments described herein.
In the example embodiments of performing point cloud data encoding, the input device 1050 may receive point cloud data as an input 1070 to be encoded. The point cloud data may be processed, for example, by the point cloud coding module 1025, to generate an encoded bitstream. The encoded bitstream may be provided via the output device 1060 as an output 1080.
In the example embodiments of performing point cloud data decoding, the input device 1050 may receive an encoded bitstream as the input 1070. The encoded bitstream may be processed, for example, by the point cloud coding module 1025, to generate decoded point cloud data. The decoded point cloud data may be provided via the output device 1060 as the output 1080.
While this disclosure has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of this present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting.
Number | Date | Country | Kind |
---|---|---|---|
PCT/CN2021/122507 | Oct 2021 | WO | international |
This application is a continuation of International Application No. PCT/CN2022/121855, filed on Sep. 27, 2022, which claims the benefit of International Application No. PCT/CN2021/122507 filed on Oct. 4, 2021. The entire contents of these applications are hereby incorporated by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2022/121855 | Sep 2022 | WO |
Child | 18627287 | US |