Predictive Encoding/Decoding Method and Apparatus for Azimuth Information of Point Cloud

  • Patent Application
  • Publication Number: 20240070922
  • Date Filed: May 18, 2022
  • Date Published: February 29, 2024
Abstract
An encoding method includes: obtaining original point cloud data; obtaining depth information of a point cloud based on the original point cloud data; establishing a relationship between the depth information and azimuth information of the point cloud; and predictively encoding the azimuth information of the point cloud by using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information.
Description
TECHNICAL FIELD

The present invention pertains to the field of a point cloud encoding/decoding technologies, and particularly relates to a predictive encoding/decoding method and apparatus for azimuth information of a point cloud.


BACKGROUND

With the improvement of a hardware processing capability and the rapid development of computer vision, three-dimensional point cloud data has been widely used in fields such as virtual reality, augmented reality, self-driving, and environment modeling. However, a large-scale point cloud usually has a large amount of data, which is not conducive to the transmission and storage of the point cloud data. Therefore, the large-scale point cloud needs to be efficiently encoded/decoded.


In a conventional encoding/decoding technology for the large-scale point cloud, Cartesian coordinates of the point cloud are usually predicted by using cylindrical coordinates (including azimuth information, depth information, and the like) of the point cloud. Therefore, each cylindrical coordinate component of the point cloud needs to be predictively encoded in the prior art. Specifically, for predictive encoding of the azimuth information of the point cloud, a point cloud encoding/decoding method based on a prediction tree is provided in prior art 1. First, a multiple of the difference between the azimuth of a current point and the azimuth obtained in a selected prediction mode, with respect to the angular velocity of rotation, is calculated; then the azimuth of the current point is predicted by using an integral multiple of the angular velocity of rotation and the azimuth obtained in the selected prediction mode, to obtain a prediction residual; and finally the integral multiple and the prediction residual of the azimuth are encoded, so that the azimuth information can be reconstructed in the same manner on a decoder. A point cloud encoding/decoding method based on a single-chain structure is provided in prior art 2. First, the azimuth of each point is quantized, and the quantized value of the azimuth of each point may be restored by using the single-chain structure. Then, a quantized residual of the azimuth is predictively encoded: a prediction mode list is created, and an optimal prediction mode is selected by an encoder, to complete predictive encoding of the azimuth information of the point cloud.


However, when the two methods are used to predictively encode the azimuth information of the point cloud, only encoded azimuth information is used for prediction, and the relationship between other information and the azimuth information is not considered. Consequently, the obtained prediction residuals of the azimuth information are large and not concentrated, the validity of an entropy encoding context model is destroyed, and therefore the encoding efficiency of the azimuth information of the point cloud is low.


SUMMARY

To resolve the foregoing problem in the prior art, the present invention provides a predictive encoding/decoding method and apparatus for azimuth information of a point cloud. The technical problems in the present invention are resolved by the following technical solutions:


A predictive encoding method for azimuth information of a point cloud includes:


obtaining original point cloud data;


obtaining depth information of a point cloud based on the original point cloud data;


establishing a relationship between the depth information and azimuth information of the point cloud; and


predictively encoding the azimuth information of the point cloud by using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information.


In an embodiment of the present invention, the establishing a relationship between the depth information and azimuth information of the point cloud includes:


establishing the relationship between the depth information and the azimuth information of the point cloud by using a mathematical derivation method; or


establishing the relationship between the depth information and the azimuth information of the point cloud by using a fitting method.


In an embodiment of the present invention, a relational expression between the depth information and the azimuth information of the point cloud is established by using the mathematical derivation method as follows:







φ − φ0 = −α − arctan(Ho/r).







φ represents azimuth information of a point, φ0 represents originally collected azimuth information of the point, r represents depth information of the point, α represents a horizontal correction angle of a laser to which the point belongs, and Ho represents a horizontal offset of the laser to which the point belongs.


In an embodiment of the present invention, a relational expression between the depth information and the azimuth information of the point cloud is established by using the mathematical derivation method as follows:







φ − φ0 = 90° − α − arccos(−Ho/r).






φ represents azimuth information of a point, φ0 represents originally collected azimuth information of the point, r represents depth information of the point, α represents a horizontal correction angle of a laser to which the point belongs, and Ho represents a horizontal offset of the laser to which the point belongs.


In an embodiment of the present invention, after the relational expression between the depth information and the azimuth information of the point cloud is obtained, the method further includes:


selecting several points from points collected by a same laser or encoded points and estimating unknown parameters α and Ho in the relational expression based on information about the selected points.


In an embodiment of the present invention, formulas for estimating the unknown parameters α and Ho by selecting two points are:







Ho = [(r1 − r2)·(1 + tanΔφ1·tanΔφ2) ± √((r1 − r2)²·(1 + tanΔφ1·tanΔφ2)² − 4·(tanΔφ1 − tanΔφ2)²·r1·r2)] / [2·(tanΔφ1 − tanΔφ2)]; and

tanα = (r1·tanΔφ1 − r2·tanΔφ2) / (Ho·(tanΔφ1 − tanΔφ2) − (r1 − r2)).






r1 and r2 respectively represent the depth information of the two selected points, and Δφ1 and Δφ2 respectively represent the azimuth residuals of the two selected points.


In an embodiment of the present invention, the predictively encoding the azimuth information of the point cloud by using the relationship between the depth information and the azimuth information of the point cloud includes:


predicting the azimuth of the point cloud based on the relationship between the depth information and the azimuth information of the point cloud, to obtain an initial predicted value of an azimuth residual;


selectively shifting the initial predicted value of the azimuth residual to obtain a final predicted value of the azimuth residual and a prediction residual of the azimuth residual; and


encoding the prediction residual of the azimuth residual and azimuth auxiliary information.


Another embodiment of the present invention provides a predictive encoding apparatus for azimuth information of a point cloud, including:


a data obtaining module, configured to obtain original point cloud data;


a data processing module, configured to obtain depth information of a point cloud based on the original point cloud data;


a first calculation module, configured to establish a relationship between the depth information and azimuth information of the point cloud; and


a predictive encoding module, configured to predictively encode the azimuth information of the point cloud by using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information.


Still another embodiment of the present invention further provides a prediction decoding method for azimuth information of a point cloud, including:


obtaining coded stream information and decoding the coded stream information to obtain a prediction residual of an azimuth residual of a point cloud and azimuth auxiliary information;


predicting an azimuth of the point cloud by using reconstructed depth information and a relationship between the depth information and azimuth information, to obtain a final predicted value of the azimuth residual;


reconstructing the azimuth residual of the point cloud based on the final predicted value of the azimuth residual and the prediction residual of the azimuth residual; and


reconstructing the azimuth information of the point cloud based on the reconstructed azimuth residual and the azimuth auxiliary information.


Yet another embodiment of the present invention further provides a prediction decoding apparatus for azimuth information of a point cloud, including:


a decoding module, configured to obtain coded stream information and decode the coded stream information to obtain a prediction residual of an azimuth residual of a point cloud and azimuth auxiliary information;


a prediction module, configured to predict an azimuth of the point cloud by using reconstructed depth information and a relationship between the depth information and azimuth information, to obtain a final predicted value of the azimuth residual;


a second calculation module, configured to reconstruct the azimuth residual of the point cloud based on the final predicted value of the azimuth residual and the prediction residual of the azimuth residual; and


a reconstruction module, configured to reconstruct the azimuth information of the point cloud based on the reconstructed azimuth residual and the azimuth auxiliary information.


Beneficial effects of the present invention are as follows:


In the present invention, a relationship between depth information and azimuth information of a point cloud is established, and cross-component prediction is performed on the azimuth information by using the depth information of the point cloud and the foregoing relationship. In this way, prediction precision of the azimuth information of the point cloud is further improved, and a prediction residual of the azimuth information can be reduced, so that distribution of prediction residuals of an azimuth is more centralized. In this way, a more effective context model is established in an entropy encoding process, and therefore, encoding efficiency of the point cloud is improved.


The present invention is further described in detail below with reference to the accompanying drawings and embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic flowchart of a predictive encoding method for azimuth information of a point cloud according to an embodiment of the present invention;



FIG. 2 is a schematic diagram of calibration of a laser radar according to an embodiment of the present invention;



FIG. 3 is a schematic diagram of calibration of a laser radar on an x-y plane according to an embodiment of the present invention;



FIG. 4 is a schematic diagram of a relation curve of depth information of a specific laser collection point and an azimuth residual according to an embodiment of the present invention;



FIG. 5 is a schematic diagram of segmented hopping of a relation curve of depth information of a specific laser collection point and an azimuth residual according to an embodiment of the present invention;



FIG. 6 is a schematic diagram of a structure of a predictive encoding apparatus for azimuth information of a point cloud according to an embodiment of the present invention;



FIG. 7 is a schematic flowchart of a predictive decoding method for azimuth information of a point cloud according to an embodiment of the present invention; and



FIG. 8 is a schematic diagram of a structure of a predictive decoding apparatus for azimuth information of a point cloud according to an embodiment of the present invention.





DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

The present invention is further described in detail below with reference to specific embodiments, but implementations of the present invention are not limited thereto.


Embodiment 1


FIG. 1 is a schematic flowchart of a predictive encoding method for azimuth information of a point cloud according to an embodiment of the present invention. The predictive encoding method for azimuth information of a point cloud specifically includes the following steps.


Step 1: Obtain original point cloud data.


Specifically, the original point cloud data generally includes a set of three-dimensional spatial points, where each spatial point records its geometric position information and additional attribute information such as color, reflectivity, and normal. The geometric position information of a point cloud is usually expressed based on a Cartesian coordinate system; in other words, the coordinates x, y, and z of a point are used for representation. The original point cloud data may be obtained by a laser radar through scanning, or may be obtained from common data sets provided by various platforms.


In this embodiment, the obtained geometric position information of the original point cloud data is represented based on the Cartesian coordinate system. It should be noted that a method of representing the geometric position information of the original point cloud data is not limited to Cartesian coordinates.


Step 2: Obtain depth information of a point cloud based on the original point cloud data.


Specifically, in this embodiment, the depth information of the point cloud may be calculated by using the following formula:






r = √(x² + y²).


r represents depth information of each point in the original point cloud data, and x and y are Cartesian coordinate components of each point.


In addition, it should be noted that the depth information of the point cloud may alternatively be calculated by using the formula r = √(x² + y² + z²) or in another manner, where (x, y, z) is the Cartesian coordinates of a point.
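The two depth conventions above can be sketched in Python as follows; this is an illustrative sketch only, and the function names are assumptions, not part of the claimed method:

```python
import math

def depth_xy(x: float, y: float) -> float:
    """Planar depth r = sqrt(x^2 + y^2) used in this embodiment."""
    return math.hypot(x, y)

def depth_xyz(x: float, y: float, z: float) -> float:
    """Alternative depth r = sqrt(x^2 + y^2 + z^2) mentioned in the text."""
    return math.sqrt(x * x + y * y + z * z)

r_plane = depth_xy(3.0, 4.0)        # planar depth of (3, 4) is 5
r_full = depth_xyz(3.0, 4.0, 12.0)  # full norm of (3, 4, 12) is 13
```

Either convention works for the cross-component prediction that follows, as long as the encoder and decoder agree on which one is used.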


Step 3: Establish a relationship between the depth information and azimuth information of the point cloud.


In this embodiment, a relationship between the depth information and the azimuth information of the point cloud may be established by using a mathematical derivation method.


For example, the relationship between the depth information and the azimuth information of the point cloud may be established from an algebraic perspective based on a collection principle of a laser radar.


Specifically, the laser radar is formed by combining and arranging a plurality of lasers (laser scanners) distributed along both sides of a central axis. Each laser has a fixed pitch angle, and may be regarded as a relatively independent collection system. The lasers rotate by 360 degrees around the central axis of the laser radar, perform sampling at fixed rotation angle intervals during rotation, and return originally collected information of a sampling point, that is, originally collected distance information r0 of the sampling point, an index number i(θ0) of the laser to which the sampling point belongs, and originally collected azimuth information φ0. This information is expressed in a local cylindrical coordinate system whose origin is the corresponding laser. However, to facilitate subsequent processing of the point cloud, the originally collected data of the point cloud needs to be converted to a Cartesian coordinate system in which the bottom of the laser radar is used as a unified origin, to form a point cloud of the laser radar in the unified Cartesian coordinate system, that is, the point cloud finally collected by the device. This conversion process is the calibration process of the laser radar. As shown in FIG. 2, FIG. 2 is a schematic diagram of calibration of a laser radar according to an embodiment of the present invention. The calibration formulas of the laser radar are as follows, and convert originally collected information (r0, i(θ0), φ0) of a point into Cartesian coordinates (x, y, z), where r0 is the originally collected distance information of the point, i(θ0) is the index number of the laser to which the point belongs, and φ0 is the originally collected azimuth information of the point.





β=φ0−α;






x=(r0+Dcorr)·cosθ0·sinβ−Ho·cosβ;






y=(r0+Dcorr)·cosθ0·cosβ+Ho·sinβ; and






z=(r0+Dcorr)·sinθ0+Vo.


Dcorr is a distance correction factor of the laser to which the point belongs, that is, the ith laser of the laser radar; Vo is a vertical offset of the laser; Ho is a horizontal offset of the laser; θ0 is a vertical elevation angle of the laser; and α is a horizontal correction angle of the laser. All the foregoing parameters are calibration parameters of the laser.
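The calibration formulas above can be sketched as a minimal Python function; the numeric values in the usage line are made-up assumptions, not parameters of any real laser radar:

```python
import math

def calibrate(r0, theta0, phi0, alpha, d_corr, h_o, v_o):
    """Convert originally collected (r0, theta0, phi0) of one point into
    Cartesian (x, y, z) using the laser's calibration parameters."""
    beta = phi0 - alpha                       # azimuth corrected by alpha
    rho = (r0 + d_corr) * math.cos(theta0)    # range projected to the x-y plane
    x = rho * math.sin(beta) - h_o * math.cos(beta)
    y = rho * math.cos(beta) + h_o * math.sin(beta)
    z = (r0 + d_corr) * math.sin(theta0) + v_o
    return x, y, z

# With all correction parameters zero the conversion degenerates to a plain
# cylindrical-to-Cartesian mapping (x = r0*sin(phi0), y = r0*cos(phi0)).
x, y, z = calibrate(10.0, 0.0, math.pi / 2, 0.0, 0.0, 0.0, 0.0)
```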


Then, the approximation (r0+Dcorr)·cosθ0 ≈ r is adopted after projection to the x-y plane, where r is the depth information of the current point (x, y, z). In this case,






x=r·sinβ−Ho·cosβ; and






y=r·cosβ+Ho·sinβ.


Then, conversion is performed by using an auxiliary angle formula to obtain:







x = r·sinβ − Ho·cosβ = √(r² + Ho²)·sin(β − arctan(Ho/r)); and

y = r·cosβ + Ho·sinβ = √(r² + Ho²)·cos(β − arctan(Ho/r)).







In this case, azimuth information φ of the point is calculated as follows by using x and y.






φ = arctan(x/y) = arctan((r·sinβ − Ho·cosβ)/(r·cosβ + Ho·sinβ)) = arctan(tan(β − arctan(Ho/r))) = β − arctan(Ho/r).









Finally, β = φ0 − α is substituted into the foregoing equation, so that a relational expression between the depth information r and the azimuth information φ of the point cloud can be obtained:










φ − φ0 = −α − arctan(Ho/r).







φ0 is the originally collected azimuth information of the point, Ho is the horizontal offset of the laser to which the point belongs, α is the horizontal correction angle of the laser to which the point belongs, and both Ho and α are calibration parameters of the laser.
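A quick numeric sketch of this relational expression (with made-up calibration values, assumed purely for illustration) shows how the azimuth residual flattens toward −α as the depth grows:

```python
import math

def azimuth_residual(r, alpha, h_o):
    """Predicted azimuth residual:  phi - phi0 = -alpha - arctan(h_o / r)."""
    return -alpha - math.atan(h_o / r)

# With Ho < 0 the arctan term is positive and shrinks as r grows, so the
# residual approaches -alpha for distant points (the curve flattens out).
near = azimuth_residual(2.0, 0.001, -0.04)    # close point, large residual
far = azimuth_residual(200.0, 0.001, -0.04)   # distant point, residual near -alpha
```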


In another embodiment of the present invention, the relationship between the depth information and the azimuth information of the point cloud may alternatively be established from a geometric perspective based on a collection principle of the laser radar.


Specifically, the laser radar performs calibration in a process of collecting the point cloud, and converts originally collected information represented in a laser local cylindrical coordinate system into a Cartesian coordinate system in which the bottom of the laser radar is used as a unified origin, to form a point cloud of the laser radar in the unified Cartesian coordinate system, that is, a point cloud finally collected by a device, as shown in FIG. 2. However, the azimuth information of the point cloud is represented on an x-y plane in the Cartesian coordinate system, and therefore, FIG. 3 is further derived from FIG. 2. FIG. 3 is a schematic diagram of calibration of the laser radar on the x-y plane according to an embodiment of the present invention.


It may be derived from FIG. 3 that







γ = arccos(|OworldOlaser| / r) = arccos(−Ho / r),






where r = √(x² + y²) is the depth information of the point, and Ho is the horizontal offset of the laser to which the point belongs. Because Ho herein is a negative value, |OworldOlaser| = −Ho. It may be further obtained that 90° − β = 180° − γ − φ. Then, β = φ0 − α and γ = arccos(−Ho/r) are substituted to obtain the relational expression

φ − φ0 = 90° − α − arccos(−Ho/r)

between the depth information r and the azimuth information φ of the point cloud.





φ0 is the originally collected azimuth information of the point, Ho is the horizontal offset of the laser to which the point belongs, α is the horizontal correction angle of the laser to which the point belongs, and both Ho and α are calibration parameters of the laser.


It should be noted that, because an approximation is adopted during the derivation from the algebraic perspective, the relational expressions obtained in the foregoing two manners differ in form, but their relation curves are almost identical. As shown in FIG. 4, FIG. 4 is a schematic diagram of a relation curve between depth information and an azimuth residual of a laser collection point according to an embodiment of the present invention. In the figure, the x coordinate is the depth information r of a point, and the y coordinate is the azimuth residual Δφ of the point, where Δφ = φ − φ0, φ is the azimuth information of the point, and φ0 is the originally collected azimuth information of the point.


In addition, the relationship between the depth information and the azimuth information of the point cloud may alternatively be established by using a fitting method.


After the relational expression between the depth information and the azimuth information of the point cloud is obtained, unknown parameters in the relational expression further need to be estimated, so that the azimuth information of the point cloud is subsequently predicted by using the relationship between the depth information and the azimuth information.


Specifically, several points may be selected from points collected by a same laser or encoded points, and unknown parameters α and Ho in the relational expression may be calculated based on information about the selected points.


A parameter estimation process is described in detail below by using an example in which two points are selected from the points collected by the same laser to estimate parameters.


First, two points are selected from the points collected by the same laser. Specific descriptions are as follows:


Depth information r and azimuth information φ of the points collected by the same laser are calculated based on the following formula:






r = √(x² + y²); and

φ = arctan(y/x).






Then, an approximate value φ0′ of the originally collected azimuth of the point is calculated based on the following formula, to obtain the azimuth residual Δφ:







j = round((180° + φ) / φspeed);

φ0′ = −180° + j·φspeed; and

Δφ = φ − φ0′.






j is an azimuth index of the point, and φspeed is the resolution of the sampling angle of the laser. In this case, the azimuth residual Δφ ∈ [−φspeed/2, φspeed/2] is calculated based on the foregoing formula. Therefore, the relation curve between the depth information of the point cloud and the azimuth residual hops in segments in this case. As shown in FIG. 5, FIG. 5 is a schematic diagram of segmented hopping of a relation curve between depth information of a laser collection point and an azimuth residual according to an embodiment of the present invention.
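The azimuth quantization step above can be sketched as follows; this is an illustrative Python version in which φspeed and the sample azimuth are assumed values:

```python
import math

def quantize_azimuth(phi_deg, phi_speed_deg):
    """Compute azimuth index j, approximate collected azimuth phi0', and
    azimuth residual dphi (all in degrees), per the formulas above."""
    j = round((180.0 + phi_deg) / phi_speed_deg)
    phi0_approx = -180.0 + j * phi_speed_deg
    dphi = phi_deg - phi0_approx
    return j, phi0_approx, dphi

j, phi0_approx, dphi = quantize_azimuth(10.07, 0.2)
# The residual always falls in [-phi_speed/2, phi_speed/2].
assert -0.1 - 1e-12 <= dphi <= 0.1 + 1e-12
```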





Then, the points collected by all lasers are sequentially sorted based on the depth information r and the azimuth residual Δφ. Because the relation curve between the depth information r and the azimuth residual Δφ in this embodiment hops in segments, the two selected points need to be located on a same segment of the segmented curve, and the segmented curve including the most points needs to be selected. Finally, the points on both ends of the segmented curve that meets the condition are used as the two finally selected points.


Then, unknown parameters α and Ho in the relational expression are estimated based on the two selected points. Specific descriptions are as follows:


The relational expression between the depth information r and the azimuth residual Δφ is obtained based on the relational expression between the depth information r and the azimuth information φ of the point cloud:







φ − φ0 = −α − arctan(Ho/r); and

Δφ = φ − φ0; therefore,

Δφ = −α − arctan(Ho/r).








The relational expression between the depth information r of the point cloud and the azimuth residual Δφ is further derived as follows:

Δφ = −α − arctan(Ho/r), so that tan(Δφ + α) = −Ho/r; and

tan(Δφ + α) = (tanΔφ + tanα) / (1 − tanΔφ·tanα).

The two expressions for tan(Δφ + α) are combined and rearranged, to obtain:






r·tanΔφ+r·tanα+Ho−Ho·tanΔφ·tanα=0.


The depth information and the azimuth residual of the two selected points are respectively denoted as (r1,Δφ1) and (r2,Δφ2), and (r1,Δφ1) and (r2,Δφ2) are substituted to obtain an equation set:






r1·tanΔφ1 + r1·tanα + Ho − Ho·tanΔφ1·tanα = 0; and

r2·tanΔφ2 + r2·tanα + Ho − Ho·tanΔφ2·tanα = 0.


The unknowns in the equation set are Ho and tanα, and finally, the system of two equations is solved to obtain:







Ho = [(r1 − r2)·(1 + tanΔφ1·tanΔφ2) ± √((r1 − r2)²·(1 + tanΔφ1·tanΔφ2)² − 4·(tanΔφ1 − tanΔφ2)²·r1·r2)] / [2·(tanΔφ1 − tanΔφ2)]; and

tanα = (r1·tanΔφ1 − r2·tanΔφ2) / (Ho·(tanΔφ1 − tanΔφ2) − (r1 − r2)).






It should be noted that Ho may have two possible solutions, and the value that has the minimum absolute value and that meets the condition needs to be selected from the two possible solutions as the finally estimated Ho. α has only one possible solution, and may be obtained by solving the inverse trigonometric function of tanα obtained through calculation of the foregoing formula.
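The closed-form solution above, together with the minimum-absolute-value selection of Ho, can be sketched as follows; the synthetic values for Ho and α in the usage lines are assumptions chosen only to exercise the formulas:

```python
import math

def estimate_params(r1, dphi1, r2, dphi2):
    """Estimate the laser's horizontal offset Ho and correction angle alpha
    from two points (depth, azimuth residual), per the closed form above."""
    t1, t2 = math.tan(dphi1), math.tan(dphi2)
    a = t1 - t2
    b = (r1 - r2) * (1.0 + t1 * t2)
    disc = math.sqrt(b * b - 4.0 * a * a * r1 * r2)
    # Two candidate roots of the quadratic in Ho; keep the one with the
    # minimum absolute value, as the text prescribes.
    h_o = min((b + disc) / (2.0 * a), (b - disc) / (2.0 * a), key=abs)
    tan_alpha = (r1 * t1 - r2 * t2) / (h_o * (t1 - t2) - (r1 - r2))
    return h_o, math.atan(tan_alpha)

# Synthetic check: generate residuals from known Ho and alpha, then recover them.
true_ho, true_alpha = -0.05, 0.001
resid = lambda r: -true_alpha - math.atan(true_ho / r)
h_o, alpha = estimate_params(5.0, resid(5.0), 50.0, resid(50.0))
```

Because the residual model Δφ = −α − arctan(Ho/r) satisfies the equation set exactly, the two generated points recover the true parameters up to floating-point error.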


Further, to predict the azimuth information of the point cloud by using the relationship on the decoder side, the estimated parameters Ho and α of each laser need to be encoded. Specific encoding may be completed by using an existing entropy encoding technology. If the parameters Ho and α are known, they do not need to be estimated and encoded.


After the parameters Ho and α in the relational expression are obtained, the relationship between the depth information and the azimuth information of the point cloud is entirely established.


Step 4: Predictively encode the azimuth information of the point cloud by using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information. This specifically includes the following steps:


41) Predict the azimuth of the point cloud based on the relationship between the depth information and the azimuth information of the point cloud, to obtain an initial predicted value of an azimuth residual.


Specifically, first, cylindrical coordinates (r,φ,i) of each point in the point cloud are calculated based on Cartesian coordinates (x, y, z) of the point. Calculation formulas of the cylindrical coordinates are as follows:







r = √(x² + y²);

φ = arctan(y/x); and

i = argmin(k = 1, …, laserNum) |z − Vok − r·tanθ0k|.






r represents depth information of the point, φ represents azimuth information of the point, i represents an ID of a laser to which the point belongs, Vok is a vertical offset of a kth laser, θ0k is a vertical elevation angle of the kth laser, and both Vok and θ0k are calibration parameters of the kth laser.
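The three formulas above can be sketched as follows; this is an illustrative Python version in which atan2 is used as a quadrant-safe variant of arctan(y/x), and the two-laser calibration table is made up:

```python
import math

def cylindrical(x, y, z, v_o, theta0):
    """Map Cartesian (x, y, z) to (r, phi, i): planar depth, azimuth, and the
    ID of the laser whose calibration best explains z (the argmin above).
    v_o and theta0 are per-laser calibration lists (assumed values)."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    i = min(range(len(v_o)),
            key=lambda k: abs(z - v_o[k] - r * math.tan(theta0[k])))
    return r, phi, i

# Two hypothetical lasers: one level at the origin, one offset and tilted up 1 degree.
v_o = [0.0, 0.1]
theta0 = [0.0, math.radians(1.0)]
r, phi, i = cylindrical(3.0, 4.0, 0.19, v_o, theta0)
```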


Then, an azimuth index j of the current point, an approximate value φ0′ of the originally collected azimuth, and an azimuth residual Δφ are calculated based on the resolution φspeed of the sampling angle of the laser radar. Specific calculation formulas are as follows:







j = round((180° + φ) / φspeed);

φ0′ = −180° + j·φspeed; and

Δφ = φ − φ0′.






Finally, an initial predicted value Δφ̂ of the azimuth residual Δφ is obtained based on the depth information r of the current point and the corresponding relational expression between the depth information and the azimuth information. A specific formula is as follows:







Δφ̂ = −αi − arctan(Hoi / r).







Hoi represents the horizontal offset of the ith laser to which the current point belongs, αi represents the horizontal correction angle of the ith laser to which the current point belongs, and both Hoi and αi are calibration parameters of the ith laser.


42) Selectively shift the initial predicted value of the azimuth residual to obtain a final predicted value of the azimuth residual and a prediction residual of the azimuth residual.


It should be noted that azimuth residuals calculated in different manners may have different characteristics, and whether to correspondingly shift the initial predicted value of the azimuth residual needs to be determined based on the characteristic of the azimuth residual.


The azimuth residual Δφ ∈ [−φspeed/2, φspeed/2] calculated in this embodiment does not continuously change as the depth information r increases, but is continuous in segments. Therefore, as shown in FIG. 5, the initial predicted value Δφ̂ obtained by using the relational expression needs to be correspondingly shifted, to obtain a final predicted value Δφ̃ of the azimuth residual and a prediction residual resΔφ of the azimuth residual, where Δφ̃ ∈ [−φspeed/2, φspeed/2].





Specifically, a shifting method may be as follows: First, the remainder obtained after the initial predicted value Δφ̂ of the azimuth residual is divided by the resolution φspeed of the sampling angle of the laser radar is used to initialize the final predicted value Δφ̃ of the azimuth residual, that is, Δφ̃ = Δφ̂ % φspeed. Then, a determination is performed: if Δφ̃ is greater than φspeed/2, Δφ̃ = Δφ̃ − φspeed; and if Δφ̃ is less than −φspeed/2, Δφ̃ = Δφ̃ + φspeed, so that the final predicted value Δφ̃ of the azimuth residual is obtained. Finally, the prediction residual resΔφ of the azimuth residual is calculated by using the formula resΔφ = Δφ − Δφ̃.





In addition, another shifting method may alternatively be selected. This is not specifically limited in this embodiment.
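The shifting method described above can be sketched as follows; this is illustrative Python in which fmod plays the role of the remainder operation and the sample parameter values are assumptions:

```python
import math

def predict_residual(r, alpha, h_o, phi_speed):
    """Initial prediction dphi_hat = -alpha - arctan(h_o / r), wrapped into
    [-phi_speed/2, phi_speed/2] to give the final predicted value dphi_tilde."""
    dphi_hat = -alpha - math.atan(h_o / r)
    dphi_tilde = math.fmod(dphi_hat, phi_speed)  # remainder, keeps the sign
    if dphi_tilde > phi_speed / 2:
        dphi_tilde -= phi_speed
    elif dphi_tilde < -phi_speed / 2:
        dphi_tilde += phi_speed
    return dphi_tilde

def prediction_residual(dphi, dphi_tilde):
    """res_dphi = dphi - dphi_tilde, the value that is entropy-encoded."""
    return dphi - dphi_tilde

phi_speed = 0.2
dphi_tilde = predict_residual(2.0, 0.0, -0.5, phi_speed)
assert -phi_speed / 2 <= dphi_tilde <= phi_speed / 2
```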


43) Encode the prediction residual of the azimuth residual and azimuth auxiliary information.


Specifically, the prediction residual resΔφ of the azimuth residual may be encoded by using an existing entropy encoding technology. In addition, the azimuth auxiliary information needs to be correspondingly encoded. In this embodiment, the azimuth auxiliary information is an azimuth index j of a point. Specifically, the azimuth index j of the point may be encoded in a differential encoding manner.
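One simple realization of differential encoding of the azimuth index j can be sketched as follows; the patent does not fix the exact scheme, so this minimal delta form is an assumption for illustration:

```python
def delta_encode(indices):
    """Differential encoding of azimuth indices j: each value is replaced by
    its difference from the previous one (first value kept as-is)."""
    return [indices[0]] + [b - a for a, b in zip(indices, indices[1:])]

def delta_decode(deltas):
    """Inverse operation: a running sum restores the original index sequence."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

# Neighboring points tend to have nearby indices, so the deltas are small
# and cheap to entropy-encode.
js = [950, 951, 951, 953, 960]
deltas = delta_encode(js)
assert delta_decode(deltas) == js
```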


In the present invention, the relationship between the depth information and the azimuth information of the point cloud is established, and cross-component prediction is performed on the azimuth information by using the depth information of the point cloud and the foregoing relationship. Compared with an existing method for performing prediction by using only encoded information of an azimuth, the method provided in the present invention can further improve prediction precision of the azimuth information of the point cloud, and reduce a prediction residual of the azimuth information, so that distribution of prediction residuals of an azimuth is more centralized. In this way, a more effective context model is established in an entropy encoding process, and therefore, encoding efficiency of the point cloud is improved.


Embodiment 2

Based on Embodiment 1, this embodiment provides a predictive encoding apparatus for azimuth information of a point cloud. Refer to FIG. 6. FIG. 6 is a schematic diagram of a structure of a predictive encoding apparatus for azimuth information of a point cloud according to an embodiment of the present invention. The apparatus includes:


a data obtaining module 11, configured to obtain original point cloud data;


a data processing module 12, configured to obtain depth information of a point cloud based on the original point cloud data;


a first calculation module 13, configured to establish a relationship between the depth information and azimuth information of the point cloud; and


a predictive encoding module 14, configured to predictively encode the azimuth information of the point cloud by using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information.


The apparatus provided in this embodiment can implement the encoding method provided in Embodiment 1, and a detailed process is not described herein again.


Embodiment 3

This embodiment provides a predictive decoding method for azimuth information of a point cloud. Refer to FIG. 7. FIG. 7 is a schematic flowchart of a predictive decoding method for azimuth information of a point cloud according to an embodiment of the present invention. The method specifically includes the following steps:


Step 1: Obtain coded stream information and decode the coded stream information to obtain a prediction residual of an azimuth residual of a point cloud and azimuth auxiliary information.


A decoder obtains compressed coded stream information, and decodes the coded stream information by using an existing entropy decoding technology to obtain the prediction residual of the azimuth residual of the point cloud and the azimuth auxiliary information.


Step 2: Predict an azimuth of the point cloud by using reconstructed depth information and a relationship between the depth information and azimuth information, to obtain a final predicted value of the azimuth residual.


Specifically, similar to the procedure on the encoder side, first, the azimuth residual of the point is predicted by using the depth information of the point cloud reconstructed by the decoder and the relationship between the depth information and the azimuth information, to obtain an initial predicted value of the azimuth residual, and then, the initial predicted value of the azimuth residual is selectively shifted to obtain a final predicted value of the azimuth residual.


Step 3: Reconstruct the azimuth residual of the point cloud based on the final predicted value of the azimuth residual and the prediction residual of the azimuth residual.


Specifically, the final predicted value of the azimuth residual and the prediction residual of the azimuth residual obtained through decoding are added, so that the azimuth residual of the point cloud can be reconstructed.


Step 4: Reconstruct the azimuth information of the point cloud based on the reconstructed azimuth residual and the azimuth auxiliary information.


Specifically, first, an approximate value of an originally collected azimuth of the point cloud is calculated by using a decoded azimuth index, and then the reconstructed azimuth residual and the approximate value of the originally collected azimuth are added, so that the azimuth information of the point cloud can be reconstructed.
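The reconstruction in Step 4 can be sketched as below. This assumes, for illustration only, that the approximate value of the originally collected azimuth is the decoded azimuth index multiplied by the sampling-angle resolution φspeed; the embodiment does not fix this mapping, so treat it as a hypothetical choice.

```python
def reconstruct_azimuth(azimuth_index, phi_speed, residual):
    """Reconstruct azimuth = approximate collected azimuth + reconstructed residual.

    Assumption: the approximate azimuth is index * phi_speed.
    """
    phi_approx = azimuth_index * phi_speed
    return phi_approx + residual
```

For example, index 3 at φspeed = 0.5 with a reconstructed residual of 0.02 yields an azimuth of 1.52.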


Embodiment 4

Based on Embodiment 3, this embodiment provides a predictive decoding apparatus for azimuth information of a point cloud. Refer to FIG. 8. FIG. 8 is a schematic diagram of a structure of a predictive decoding apparatus for azimuth information of a point cloud according to an embodiment of the present invention. The apparatus includes:


a decoding module 21, configured to obtain coded stream information and decode the coded stream information to obtain a prediction residual of an azimuth residual of a point cloud and azimuth auxiliary information;


a prediction module 22, configured to predict an azimuth of the point cloud by using reconstructed depth information and a relationship between the depth information and azimuth information, to obtain a final predicted value of the azimuth residual;


a second calculation module 23, configured to reconstruct the azimuth residual of the point cloud based on the final predicted value of the azimuth residual and the prediction residual of the azimuth residual; and


a reconstruction module 24, configured to reconstruct the azimuth information of the point cloud based on the reconstructed azimuth residual and the azimuth auxiliary information.


The apparatus provided in this embodiment can implement the decoding method provided in Embodiment 3, and a detailed process is not described herein again.


The foregoing content is further detailed descriptions of the present invention with reference to specific preferred implementations, and it cannot be assumed that specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the inventive concept, which shall be regarded as falling within the protection scope of the present invention.

Claims
  • 1-10. (canceled)
  • 11. A predictive encoding method for azimuth information of a point cloud, comprising: obtaining original point cloud data;obtaining depth information of a point cloud based on the original point cloud data;establishing a relationship between the depth information and azimuth information of the point cloud; andpredictively encoding the azimuth information of the point cloud using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information.
  • 12. The predictive encoding method according to claim 11, wherein the original point cloud data includes a group of 3D spatial points, each spatial point in the group of 3D spatial points records its geometric position information, and the geometric position information of each spatial point is expressed based on a Cartesian coordinate system.
  • 13. The predictive encoding method according to claim 11, wherein obtaining the depth information of a point cloud based on the original point cloud data comprises: calculating the depth information of the point cloud using the following formula: r=√{square root over (x2+y2)}; or r=√{square root over (x2+y2+z2)};wherein r represents depth information of a respective point in the original point cloud data, and x and y are Cartesian coordinate components of the respective point; or (x, y, z) is Cartesian coordinates of the respective point.
  • 14. The predictive encoding method according to claim 11, wherein establishing the relationship between the depth information and the azimuth information of the point cloud comprises: establishing the relationship between the depth information and the azimuth information of the point cloud by using a mathematical derivation method; orestablishing the relationship between the depth information and the azimuth information of the point cloud by using a fitting method.
  • 15. The predictive encoding method according to claim 14, wherein a relational expression between the depth information and the azimuth information of the point cloud is established using the mathematical derivation method as follows:
  • 16. The predictive encoding method according to claim 14, wherein a relational expression between the depth information and the azimuth information of the point cloud is established by using the mathematical derivation method as follows:
  • 17. The predictive encoding method according to claim 15, wherein after the relational expression between the depth information and the azimuth information of the point cloud is obtained, the method further comprises: selecting several points from points collected by a same laser or encoded points and estimating unknown parameters α and Ho in the relational expression based on information about the selected points.
  • 18. The predictive encoding method according to claim 17, wherein a formula for estimating the unknown parameters α and Ho by selecting two points is:
  • 19. The predictive encoding method according to claim 11, wherein predictively encoding the azimuth information of the point cloud using the relationship between the depth information and the azimuth information of the point cloud comprises: predicting the azimuth of the point cloud based on the relationship between the depth information and the azimuth information of the point cloud, to obtain an initial predicted value of an azimuth residual;selectively shifting the initial predicted value of the azimuth residual to obtain a final predicted value of the azimuth residual and a prediction residual of the azimuth residual; andencoding the prediction residual of the azimuth residual and azimuth auxiliary information.
  • 20. The predictive encoding method according to claim 19, wherein predicting the azimuth of the point cloud based on the relationship between the depth information and the azimuth information of the point cloud, to obtain the initial predicted value of the azimuth residual, comprises: obtaining the initial predicted value Δ{circumflex over (φ)} of the azimuth residual Δφ based on a formula as follows:
  • 21. The predictive encoding method according to claim 20, wherein selectively shifting the initial predicted value of the azimuth residual to obtain the final predicted value of the azimuth residual and the prediction residual of the azimuth residual comprises: obtaining a remainder after the initial predicted value Δ{circumflex over (φ)} of the azimuth residual is divided by the resolution φspeed of a sampling angle of a laser radar;using the remainder to initialize the final predicted value Δ{tilde over (ϕ)} of the azimuth residual Δ{tilde over (ϕ)}=Δ{circumflex over (ϕ)}%ϕspeed; determining whether Δ{tilde over (ϕ)} is greater than
  • 22. The predictive encoding method according to claim 19, wherein encoding the prediction residual of the azimuth residual and azimuth auxiliary information comprises: using an entropy encoding technology to encode the prediction residual of the azimuth residual; andencoding the azimuth auxiliary information in a differential encoding manner, wherein the azimuth auxiliary information is an azimuth index j of the point.
  • 23. A predictive encoding device for azimuth information of a point cloud, the device comprising: a processor; anda memory storing instruction that, when executed by the processor, cause the device to perform operations comprising: obtaining original point cloud data;obtaining depth information of a point cloud based on the original point cloud data;establishing a relationship between the depth information and azimuth information of the point cloud; andpredictively encoding the azimuth information of the point cloud using the relationship between the depth information and the azimuth information of the point cloud, to obtain coded stream information.
  • 24. The predictive encoding device according to claim 23, wherein establishing the relationship between the depth information and azimuth information of the point cloud comprises: establishing the relationship between the depth information and the azimuth information of the point cloud using a mathematical derivation method; orestablishing the relationship between the depth information and the azimuth information of the point cloud using a fitting method.
  • 25. The predictive encoding device according to claim 24, wherein a relational expression between the depth information and the azimuth information of the point cloud is established using the mathematical derivation method as follows:
  • 26. The predictive encoding device according to claim 25, wherein after the relational expression between the depth information and the azimuth information of the point cloud is obtained, the method further comprises: selecting several points from points collected by a same laser or encoded points and estimating unknown parameters α and Ho in the relational expression based on information about the selected points.
  • 27. The predictive encoding device according to claim 26, wherein a formula for estimating the unknown parameters α and Ho by selecting two points is:
  • 28. The predictive encoding device according to claim 23, wherein predictively encoding the azimuth information of the point cloud using the relationship between the depth information and the azimuth information of the point cloud comprises: predicting the azimuth of the point cloud based on the relationship between the depth information and the azimuth information of the point cloud, to obtain an initial predicted value of an azimuth residual;selectively shifting the initial predicted value of the azimuth residual to obtain a final predicted value of the azimuth residual and a prediction residual of the azimuth residual; andencoding the prediction residual of the azimuth residual and azimuth auxiliary information.
  • 29. A predictive decoding method for azimuth information of a point cloud, comprising: obtaining coded stream information, and decoding the coded stream information to obtain a prediction residual of an azimuth residual of a point cloud and azimuth auxiliary information;predicting an azimuth of the point cloud using reconstructed depth information and a relationship between the depth information and azimuth information, to obtain a final predicted value of the azimuth residual;reconstructing the azimuth residual of the point cloud based on the final predicted value of the azimuth residual and the prediction residual of the azimuth residual; andreconstructing the azimuth information of the point cloud based on the reconstructed azimuth residual and the azimuth auxiliary information.
Priority Claims (1)
Number Date Country Kind
202110580220.6 May 2021 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a national stage of International Application No. PCT/CN2022/093678, filed on May 18, 2022, which claims priority to Chinese Patent Application No. 202110580220.6, filed on May 26, 2021. The disclosures of both of the aforementioned applications are hereby incorporated by reference in their entireties.

PCT Information
Filing Document Filing Date Country Kind
PCT/CN2022/093678 5/18/2022 WO