This application pertains to the field of encoding and decoding technologies, and specifically, relates to an encoding method, apparatus, and device, and a decoding method, apparatus, and device.
Texture coordinates, also referred to as UV coordinates, are a type of information that describes a vertex texture of a three-dimensional mesh. Two-dimensional projection is first performed on a surface texture of the three-dimensional mesh to form a two-dimensional texture map. The UV coordinates indicate a location of the three-dimensional vertex texture in the two-dimensional texture map, and are in a one-to-one correspondence with geometry information. Therefore, the texture coordinates determine the texture map of the three-dimensional mesh, and are an important component of the three-dimensional mesh. The UV coordinates account for a large proportion of the data in the three-dimensional mesh. However, prediction in an existing parallelogram prediction method is not sufficiently accurate, which degrades texture coordinate encoding efficiency.
According to a first aspect, an encoding method is provided, including:
According to a second aspect, an encoding apparatus is provided, including:
According to a third aspect, a decoding method is provided, including:
According to a fourth aspect, a decoding apparatus is provided, including:
According to a fifth aspect, an encoding device is provided, including a processor and a memory. The memory stores a program or instructions capable of running on the processor. When the program or instructions are executed by the processor, the steps of the method according to the first aspect are implemented.
According to a sixth aspect, an encoding device is provided, including a processor and a communication interface. The processor is configured to reconstruct geometry information and a connectivity of a target three-dimensional mesh based on an encoding result for the geometry information and the connectivity of the target three-dimensional mesh, and encode texture coordinates of a vertex in the target three-dimensional mesh based on reconstructed geometry information and a reconstructed connectivity.
According to a seventh aspect, a decoding device is provided, including a processor and a memory. The memory stores a program or instructions capable of running on the processor. When the program or instructions are executed by the processor, the steps of the method according to the third aspect are implemented.
According to an eighth aspect, a decoding device is provided, including a processor and a communication interface. The processor is configured to decode an obtained bitstream of a target three-dimensional mesh to obtain geometry information and a connectivity of the target three-dimensional mesh, and perform decoding based on the geometry information and the connectivity to obtain texture coordinates of a vertex in the target three-dimensional mesh.
According to a ninth aspect, a communication system is provided, including an encoding device and a decoding device. The encoding device may be configured to perform the steps of the method according to the first aspect. The decoding device may be configured to perform the steps of the method according to the third aspect.
According to a tenth aspect, a readable storage medium is provided. The readable storage medium stores a program or instructions. When the program or instructions are executed by a processor, the steps of the method according to the first aspect are implemented, or the steps of the method according to the third aspect are implemented.
According to an eleventh aspect, a chip is provided. The chip includes a processor and a communication interface. The communication interface is coupled to the processor. The processor is configured to run a program or instructions, to implement the method according to the first aspect, or implement the method according to the third aspect.
According to a twelfth aspect, a computer program or program product is provided. The computer program or program product is stored in a storage medium. The computer program or program product is executed by at least one processor to implement the steps of the method according to the first aspect, or implement the steps of the method according to the third aspect.
The following clearly describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application shall fall within the protection scope of this application.
The terms “first”, “second”, and the like in this specification and claims of this application are used to distinguish between similar objects rather than to describe a specific order or sequence. It should be understood that terms used in this way are interchangeable in appropriate circumstances so that the embodiments of this application can be implemented in other orders than the order illustrated or described herein. In addition, “first” and “second” are usually used to distinguish objects of a same type, and do not restrict a quantity of objects. For example, there may be one or a plurality of first objects. In addition, “and/or” in the specification and claims represents at least one of connected objects, and the character “/” generally indicates that the associated objects have an “or” relationship.
It should be noted that technologies described in the embodiments of this application are not limited to a long term evolution (LTE)/LTE-advanced (LTE-A) system, and may be further used in other wireless communication systems, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), single-carrier frequency division multiple access (SC-FDMA), and other systems. The terms “system” and “network” in the embodiments of this application are often used interchangeably, and the technology described herein may be used in the above-mentioned systems and radio technologies as well as other systems and radio technologies. In the following descriptions, a new radio (NR) system is described for an illustration purpose, and NR terms are used in most of the following descriptions, but these technologies may also be applied to applications other than an NR system application, for example, a 6th generation (6G) communication system.
The following describes in detail an encoding method, apparatus, and device and a decoding method, apparatus, and device provided in the embodiments of this application with reference to the accompanying drawings and by using some embodiments and application scenarios thereof.
As shown in
Step 101: An encoder reconstructs geometry information and a connectivity of a target three-dimensional mesh based on an encoding result for the geometry information and the connectivity of the target three-dimensional mesh.
It should be noted that the target three-dimensional mesh in this application may be understood as a three-dimensional mesh corresponding to any video frame. The geometry information of the target three-dimensional mesh may be understood as coordinates of a vertex in the three-dimensional mesh, which are usually three-dimensional coordinates. The connectivity describes connection relationships between elements, such as vertices and faces, in the three-dimensional mesh, and may also be referred to as a connectivity relationship.
It should be noted that, in this embodiment of this application, texture coordinates of a vertex are encoded based on geometry information and a connectivity. To ensure that the encoded texture coordinates are consistent with the encoded geometry information and the encoded connectivity, the geometry information and the connectivity are reconstructed after encoding.
Step 102: The encoder encodes texture coordinates of a vertex in the target three-dimensional mesh based on reconstructed geometry information and a reconstructed connectivity.
It should be noted that, in this embodiment of this application, texture coordinates of a vertex are predicted based on reconstructed geometry information of the vertex, so that a prediction result is closer to a real result for the vertex. This can improve accuracy of predicting a vertex, and therefore can ensure accuracy of encoding texture coordinates.
Optionally, a specific implementation process of step 102 includes the following steps.
Step 1021: The encoder determines a target triangle based on the reconstructed connectivity, where the target triangle includes a first edge and a to-be-encoded vertex.
Optionally, in this embodiment of this application, an optional implementation of this step is as follows:
The encoder selects the first edge from an edge set, where the edge set is a set including edges of a triangle constructed based on the reconstructed connectivity; and the encoder determines the target triangle based on the first edge.
Optionally, before the encoder selects the first edge from the edge set, the method further includes:
The encoder selects an initial triangle based on the reconstructed connectivity; and the encoder encodes texture coordinates of three vertices of the initial triangle, and adds three edges of the initial triangle to the edge set.
Usually, the initial triangle is the 1st triangle in the connectivity. For the initial triangle, in this embodiment of this application, texture coordinates are directly encoded without predicting vertices. Further, a specific implementation of encoding, by the encoder, the texture coordinates of the three vertices of the initial triangle includes: The encoder directly encodes texture coordinates of the 1st vertex of the initial triangle; the encoder performs edge-based prediction by using the texture coordinates of the 1st vertex to obtain texture coordinates of the 2nd vertex of the initial triangle, and obtains texture coordinates of the 3rd vertex of the initial triangle through similar-triangles-based predictive coding; and the encoder encodes the texture coordinates of the 2nd vertex and the 3rd vertex of the initial triangle.
After the texture coordinates of all vertices of the initial triangle are encoded (it should be noted that these are the original texture coordinates directly obtained based on the target three-dimensional mesh), all edges of the initial triangle are added to the edge set to form an initial edge set, and subsequent vertices are then predicted based on the initial edge set.
Step 1022: The encoder predicts texture coordinates of the to-be-encoded vertex based on reconstructed geometry information corresponding to three vertices of the target triangle and real texture coordinates of a vertex on the first edge, to obtain predicted texture coordinates of the to-be-encoded vertex.
Step 1023: The encoder encodes the texture coordinates of the to-be-encoded vertex based on a residual between real texture coordinates of the to-be-encoded vertex and the predicted texture coordinates.
It should be noted that, after the predicted texture coordinates of the to-be-encoded vertex are obtained, the residual between the predicted texture coordinates and the real texture coordinates may be obtained, and then encoding is performed based on the residual. Optionally, the residual may be directly encoded; or the residual may be first processed, and then the processed residual is encoded. For example, the processing may be normalization. Encoding the to-be-encoded vertex based on the residual can reduce the number of bits required for texture coordinate encoding.
It should be noted that the residual in this embodiment of this application may be obtained by subtracting the predicted texture coordinates from the real texture coordinates, or by subtracting the real texture coordinates from the predicted texture coordinates. Either manner may be used, provided that the encoder and a decoder have a consistent understanding. The residual in this embodiment of this application may also be referred to as a residual value.
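For illustration only, the two sign conventions and their inverses can be written as the following minimal sketch; the function and parameter names are ours, not from the specification:

```python
def residual(real_uv, pred_uv, pred_minus_real=False):
    """Encoder side: residual under either agreed-upon sign convention."""
    return pred_uv - real_uv if pred_minus_real else real_uv - pred_uv

def reconstruct(pred_uv, res, pred_minus_real=False):
    """Decoder side: invert whichever convention the encoder used."""
    return pred_uv - res if pred_minus_real else pred_uv + res
```

Either convention is lossless as long as both sides agree; only the decoder's inverse operation changes.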
Optionally, in an embodiment of this application, an implementation of predicting the texture coordinates of the to-be-encoded vertex based on the reconstructed geometry information corresponding to the three vertices of the target triangle and the real texture coordinates of the vertex on the first edge, to obtain the predicted texture coordinates of the to-be-encoded vertex includes the following steps.
Step S11: The encoder obtains texture coordinates of a predicted projected point of the to-be-encoded vertex on the first edge based on reconstructed geometry information corresponding to each vertex of the target triangle and the real texture coordinates of the vertex on the first edge.
It should be noted that, as shown in
The encoder obtains the texture coordinates of the predicted projected point of the to-be-encoded vertex on the first edge based on a sum of $\overrightarrow{AX}_{uv}$ and $A_{uv}$, or obtains the texture coordinates of the predicted projected point of the to-be-encoded vertex on the first edge based on a residual between $A_{uv}$ and $\overrightarrow{XA}_{uv}$, where
For example, the texture coordinates of the predicted projected point of the to-be-encoded vertex on the first edge are obtained based on a formula 1: $X_{uv} = \overrightarrow{NX}_{uv} + N_{uv}$; or
Further, $\overrightarrow{NX}_{uv} = \left(\overrightarrow{NP}_G \cdot \overrightarrow{NC}_G\right)\,\overrightarrow{NP}_{UV} / |\overrightarrow{NP}_G|^2$, where the inner product $\overrightarrow{NP}_G \cdot \overrightarrow{NC}_G$ is a scalar, so that $\overrightarrow{NX}_{uv}$ lies along $\overrightarrow{NP}_{UV}$; $\overrightarrow{NP}_G$ is a vector from the reconstructed geometry information corresponding to the vertex N on the first edge to the reconstructed geometry information corresponding to the vertex P, $\overrightarrow{NC}_G$ is a vector from the vertex N on the first edge to the reconstructed geometry information corresponding to the to-be-encoded vertex C, and $\overrightarrow{NP}_{UV}$ is a vector from the vertex N on the first edge to the texture coordinates of the vertex P;
$\overrightarrow{XP}_{uv} = -\overrightarrow{PX}_{uv}$.
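For concreteness, the projection above can be sketched in NumPy as follows; the function name and argument layout are ours, and the dot-product reading of the formula is the assumption discussed above:

```python
import numpy as np

def project_onto_first_edge(n_g, p_g, c_g, n_uv, p_uv):
    """Sketch of formula 1: estimate the UV coordinates of the projected
    point X of the to-be-encoded vertex C on the first edge N-P.

    n_g, p_g, c_g : reconstructed 3-D geometry of vertices N, P, C
    n_uv, p_uv    : real (already coded) UV coordinates of N and P
    """
    np_g = p_g - n_g                 # NP in geometry space
    nc_g = c_g - n_g                 # NC in geometry space
    np_uv = p_uv - n_uv              # NP in UV space
    # Scalar projection ratio |NX| / |NP|; the inner product must be a
    # scalar so that NX_uv lies along NP_uv.
    t = np.dot(np_g, nc_g) / np.dot(np_g, np_g)
    nx_uv = t * np_uv                # NX_uv = (NP_G . NC_G) NP_UV / |NP_G|^2
    return n_uv + nx_uv              # formula 1: X_uv = NX_uv + N_uv
```

Swapping the roles of N and P in the arguments gives the variant stated in terms of the vertex P.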
Step S12: The encoder obtains a determining result indicating whether a first triangle including the first edge and a first vertex corresponding to the first edge is a degenerate triangle, where the first vertex is the opposite vertex of the first edge in the first triangle, and the first triangle and the target triangle share the first edge.
It should be noted that, in some cases, an area of the selected first triangle may be zero, and the encoder performs different processing depending on whether this is the case. Therefore, in this step, the area of the first triangle needs to be determined first. In a case that the area of the first triangle is zero, the first triangle is considered a degenerate triangle; otherwise, the first triangle is not a degenerate triangle.
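Under the zero-area definition above, the check can be sketched as follows; the tolerance parameter is our addition (the specification compares the area against exactly zero):

```python
import numpy as np

def is_degenerate(a_g, b_g, c_g, eps=0.0):
    """True if triangle (a_g, b_g, c_g) has (near-)zero area.
    Half the cross-product magnitude equals the triangle area, so
    comparing the cross product itself against eps is sufficient."""
    return np.linalg.norm(np.cross(b_g - a_g, c_g - a_g)) <= eps
```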
Step S13: The encoder obtains the predicted texture coordinates of the to-be-encoded vertex based on the determining result and the texture coordinates of the predicted projected point.
Optionally, an implementation of step S13 is as follows:
In a case that the determining result indicates that the first triangle is a degenerate triangle, the encoder obtains the predicted texture coordinates of the to-be-encoded vertex based on $X_{uv}$, $\overrightarrow{XC}_{uv}$, and a magnitude relationship between $|\overrightarrow{C_1C}_{uv}|$ and $|\overrightarrow{C_2C}_{uv}|$, where
To be specific, in a case that the determining result indicates that the first triangle is a degenerate triangle, an implementation of obtaining, by the encoder, the predicted texture coordinates of the to-be-encoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
where
It should be further noted that, in this case, the encoder obtains a target identifier, where the target identifier is used to identify the magnitude relationship between $|\overrightarrow{C_1C}_{uv}|$ and $|\overrightarrow{C_2C}_{uv}|$; and the encoder performs entropy encoding on the target identifier. Based on the target identifier, the decoder can accurately derive the predicted texture coordinates.
Optionally, another implementation of step S13 is as follows:
In a case that the determining result indicates that the first triangle is not a degenerate triangle, the encoder obtains the predicted texture coordinates of the to-be-encoded vertex based on $X_{uv}$, $\overrightarrow{XC}_{uv}$, and a magnitude relationship between $|\overrightarrow{C_1O}_{uv}|$ and $|\overrightarrow{C_2O}_{uv}|$, where
To be specific, in a case that the determining result indicates that the first triangle is not a degenerate triangle, an implementation of obtaining, by the encoder, the predicted texture coordinates of the to-be-encoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
where
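The concrete formulas for the two branches are not reproduced above, so the following NumPy sketch is only an assumption-labeled reconstruction: the two candidates C1/C2 are taken as mirror images about the first edge at UV distance |XC| from the projected point X; in the non-degenerate case the candidate farther from the opposite vertex O of the first triangle is chosen (the new triangle unfolds away from the already-coded side), and in the degenerate case the encoder picks the candidate closer to the real C and signals the choice with the target identifier:

```python
import numpy as np

def perp(v):
    """Rotate a 2-D vector by 90 degrees."""
    return np.array([-v[1], v[0]])

def candidate_predictions(n_g, p_g, c_g, n_uv, p_uv, x_uv):
    """Two mirror-image UV candidates C1/C2 for the to-be-encoded vertex,
    one on each side of the first edge N-P. The offset length is the 3-D
    distance |XC| scaled by the UV/geometry length ratio of the edge --
    an assumed reconstruction of the elided formulas."""
    np_g, nc_g = p_g - n_g, c_g - n_g
    np_uv = p_uv - n_uv
    t = np.dot(np_g, nc_g) / np.dot(np_g, np_g)
    xc_len_g = np.linalg.norm(nc_g - t * np_g)           # |XC| in geometry space
    scale = np.linalg.norm(np_uv) / np.linalg.norm(np_g)
    offset = perp(np_uv) / np.linalg.norm(np_uv) * (xc_len_g * scale)
    return x_uv + offset, x_uv - offset                  # C1_uv, C2_uv

def choose_candidate(c1_uv, c2_uv, degenerate, o_uv=None, real_c_uv=None):
    """Select C1 or C2. Non-degenerate first triangle: take the candidate
    farther from its opposite vertex O; nothing is signalled. Degenerate:
    take the candidate closer to the real C and return the target
    identifier to be entropy-encoded."""
    if not degenerate:
        take_c1 = np.linalg.norm(c1_uv - o_uv) >= np.linalg.norm(c2_uv - o_uv)
        return (c1_uv if take_c1 else c2_uv), None
    take_c1 = np.linalg.norm(c1_uv - real_c_uv) <= np.linalg.norm(c2_uv - real_c_uv)
    return (c1_uv if take_c1 else c2_uv), int(not take_c1)
```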
It should be noted that the predicted texture coordinates of the to-be-encoded vertex are obtained based on the foregoing process, and the to-be-encoded vertex may be encoded based on the predicted texture coordinates. Optionally, after encoding the to-be-encoded vertex based on the first edge in the edge set, the encoder adds a second edge of the target triangle to the edge set and deletes the first edge from the edge set, where the second edge is an edge of the target triangle that is not included in the edge set. In this way, the edge set is updated.
It should be noted that, in this embodiment of this application, a residual of the to-be-encoded vertex may be encoded as soon as it is obtained; or all residuals may be obtained first and then encoded together.
To sum up, a specific implementation process of encoding texture coordinates (hereinafter referred to as UV coordinates) in this embodiment of this application is as follows:
Step S1: Select an initial triangle from the reconstructed connectivity, directly encode UV coordinates of three vertices of the initial triangle without prediction, and then add edges of the initial triangle to an edge set.
Step S2: Select an edge t from the edge set according to an access criterion, and encode UV coordinates of the opposite vertex of a new triangle including the edge t: calculate predicted UV coordinates of the to-be-encoded vertex in the foregoing manner by using the three-dimension-to-two-dimension projection relationship of a triangle, obtain a target identifier in the special case of a degenerate triangle, and subtract the predicted UV coordinates from the original UV coordinates (namely, the real UV coordinates, that is, the UV coordinates directly obtained based on the target three-dimensional mesh) of the to-be-encoded vertex to obtain a residual.
Step S3: Add two edges of the new triangle to the edge set, remove the edge t at the top of the edge set, then obtain a next edge from the edge set, continue to encode predicted UV coordinates of an opposite vertex of a triangle adjacent to the edge, and obtain a residual; repeat step S3 until residuals of all vertices are obtained.
Step S4: Perform entropy encoding on residuals of UV coordinates and the target identifier, and output a UV coordinate bitstream.
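Putting steps S1 to S4 together, a high-level encoder loop might look like the sketch below. The helper routines (initial_triangle, adjacent_uncoded_triangle, opposite_vertex, predict) and the entropy-coder interface are placeholders of ours, and the FIFO access criterion is an assumption; the specification only requires that encoder and decoder use the same criterion:

```python
from collections import deque

def encode_uv(mesh, coder):
    """Assumed driver loop for steps S1-S4; helpers are hypothetical."""
    tri0 = initial_triangle(mesh)                    # step S1
    for v in tri0.vertices:
        coder.encode_direct(mesh.uv[v])              # coded without prediction
    edges = deque(tri0.edges)
    coded = set(tri0.vertices)

    while edges:                                     # steps S2 and S3
        t = edges.popleft()                          # edge at the top of the set
        tri = adjacent_uncoded_triangle(mesh, t, coded)
        if tri is None:                              # no uncoded opposite vertex
            continue
        c = opposite_vertex(tri, t)
        pred, identifier = predict(mesh, tri, t, c)  # projection + candidates
        coder.encode_residual(mesh.uv[c] - pred)     # real minus predicted
        if identifier is not None:                   # degenerate-triangle case
            coder.encode_flag(identifier)            # target identifier
        coded.add(c)
        edges.extend(e for e in tri.edges if e != t) # add the two new edges

    coder.flush()                                    # step S4: entropy encoding
```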
A UV coordinate encoding framework based on a three-dimensional mesh of a video in this embodiment of this application is shown in
After geometry information and a connectivity of a three-dimensional mesh are encoded, UV coordinates may be encoded by using the reconstructed geometry information and the reconstructed connectivity. First, a triangle is selected as an initial triangle, and its coordinate values are directly encoded. UV coordinate values of an uncoded vertex of a triangle adjacent to a selected initial edge are predicted based on encoded UV coordinates of the initial triangle and a texture projection relationship. A residual between real UV coordinates of a to-be-encoded vertex and the predicted coordinate values is encoded, and a new edge is selected from the newly encoded triangle to encode an uncoded vertex of an adjacent triangle. This process is iterated to complete encoding of UV coordinates of the entire three-dimensional mesh.
It should be noted that this embodiment of this application provides a new manner of predicting UV coordinates of a vertex. Texture coordinates of a vertex are predicted based on three-dimensional and two-dimensional information of the vertex, so that a prediction result is closer to a real result for the vertex. This can improve prediction accuracy, and therefore improve texture coordinate encoding efficiency.
The encoding method provided in the embodiments of this application may be performed by an encoding apparatus. In the embodiments of this application, the encoding apparatus is described by using an example in which the encoding apparatus performs the encoding method.
As shown in
Optionally, the encoding module 402 includes:
Optionally, the first obtaining unit is configured to:
Optionally, the obtaining texture coordinates of a predicted projected point of the to-be-encoded vertex on the first edge based on reconstructed geometry information corresponding to each vertex of the target triangle and the real texture coordinates of the vertex on the first edge includes:
A is a vertex N on the first edge of the target triangle, $\overrightarrow{AX}_{uv}$ is a vector from the vertex N on the first edge of the target triangle to texture coordinates of a predicted projected point X of the to-be-encoded vertex C on the first edge, $A_{uv}$ is real texture coordinates of the vertex N on the first edge of the target triangle, and $\overrightarrow{XA}_{uv}$ is a vector from the predicted projected point X of the to-be-encoded vertex C on the first edge to texture coordinates of the vertex N on the first edge of the target triangle; or A is a vertex P on the first edge of the target triangle, $\overrightarrow{AX}_{uv}$ is a vector from the vertex P on the first edge of the target triangle to texture coordinates of a predicted projected point X of the to-be-encoded vertex C on the first edge, $A_{uv}$ is real texture coordinates of the vertex P on the first edge of the target triangle, and $\overrightarrow{XA}_{uv}$ is a vector from the predicted projected point X of the to-be-encoded vertex C on the first edge to texture coordinates of the vertex P on the first edge of the target triangle.
Optionally, an implementation of obtaining the predicted texture coordinates of the to-be-encoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
Optionally, the apparatus further includes:
Optionally, an implementation of obtaining the predicted texture coordinates of the to-be-encoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
Optionally, the first determining unit is configured to:
Optionally, before the first determining unit selects the first edge from the edge set, the encoding module further includes:
Optionally, the first processing unit is configured to:
Optionally, after the encoding unit encodes the texture coordinates of the to-be-encoded vertex based on the residual between the real texture coordinates of the to-be-encoded vertex and the predicted texture coordinates, the encoding module 402 further includes:
The apparatus embodiment corresponds to the foregoing encoding method embodiment, and all implementation processes and implementations of the foregoing method embodiment are applicable to the apparatus embodiment, with the same technical effect achieved.
An embodiment of this application further provides an encoding device, including a processor and a communication interface. The processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is further configured to:
Optionally, the processor is configured to:
Optionally, the processor is further configured to:
Optionally, the processor is further configured to:
Optionally, the processor is configured to:
Optionally, the processor is further configured to:
Specifically, an embodiment of this application further provides an encoding device. As shown in
Specifically, the encoding device 500 in this embodiment of this application further includes instructions or a program stored in the memory 503 and capable of running on the processor 501, and the processor 501 invokes the instructions or program in the memory 503 to perform the method performed by the modules shown in
As shown in
Step 601: A decoder decodes an obtained bitstream of a target three-dimensional mesh to obtain geometry information and a connectivity of the target three-dimensional mesh.
Step 602: The decoder performs decoding based on the geometry information and the connectivity to obtain texture coordinates of a vertex in the target three-dimensional mesh.
Optionally, that the decoder performs decoding based on the geometry information and the connectivity to obtain texture coordinates of a vertex in the target three-dimensional mesh includes:
The decoder determines a target triangle based on the connectivity, where the target triangle includes a first edge and a to-be-decoded vertex;
It should be noted that a residual (namely, a residual value) of each vertex can be obtained by decoding a UV coordinate bitstream, and real UV coordinates can be obtained by using a residual obtaining manner corresponding to that used by an encoder. For example, if the encoder obtains the residual by subtracting predicted UV coordinates from the real UV coordinates, the decoder obtains the real UV coordinates of the vertex by adding the residual to the predicted UV coordinates of the vertex; or if the encoder obtains the residual by subtracting the real UV coordinates from the predicted UV coordinates, the decoder obtains the real UV coordinates of the vertex by subtracting the residual from the predicted UV coordinates of the vertex.
Optionally, the obtaining predicted texture coordinates of the to-be-decoded vertex includes:
The decoder obtains texture coordinates of a predicted projected point of the to-be-decoded vertex on the first edge based on geometry information corresponding to each vertex of the target triangle and the real texture coordinates of the vertex on the first edge;
Optionally, that the decoder obtains texture coordinates of a predicted projected point of the to-be-decoded vertex on the first edge based on geometry information corresponding to each vertex of the target triangle and the real texture coordinates of the vertex on the first edge includes:
The decoder obtains the texture coordinates of the predicted projected point of the to-be-decoded vertex on the first edge based on a sum of $\overrightarrow{AX}_{uv}$ and $A_{uv}$, or obtains the texture coordinates of the predicted projected point of the to-be-decoded vertex on the first edge based on a residual between $A_{uv}$ and $\overrightarrow{XA}_{uv}$, where
A is a vertex N on the first edge of the target triangle, $\overrightarrow{AX}_{uv}$ is a vector from the vertex N on the first edge of the target triangle to predicted texture coordinates of a predicted projected point X of the to-be-decoded vertex C on the first edge, $A_{uv}$ is real texture coordinates of the vertex N on the first edge of the target triangle, and $\overrightarrow{XA}_{uv}$ is a vector from the predicted projected point X of the to-be-decoded vertex C on the first edge to texture coordinates of the vertex N on the first edge of the target triangle; or A is a vertex P on the first edge of the target triangle, $\overrightarrow{AX}_{uv}$ is a vector from the vertex P on the first edge of the target triangle to predicted texture coordinates of a predicted projected point X of the to-be-decoded vertex C on the first edge, $A_{uv}$ is real texture coordinates of the vertex P on the first edge of the target triangle, and $\overrightarrow{XA}_{uv}$ is a vector from the predicted projected point X of the to-be-decoded vertex C on the first edge to texture coordinates of the vertex P on the first edge of the target triangle.
Optionally, a manner of obtaining, by the decoder, the texture coordinates of the predicted projected point of the to-be-decoded vertex on the first edge is the same as a manner of obtaining, by the encoder, texture coordinates of a predicted projected point of a corresponding to-be-encoded vertex on the first edge. Details are not described herein again.
Optionally, that the decoder obtains the predicted texture coordinates of the to-be-decoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
In a case that the determining result indicates that the first triangle is a degenerate triangle, the decoder performs decoding to obtain a target identifier, where the target identifier is used to identify a magnitude relationship between $|\overrightarrow{C_1C}_{uv}|$ and $|\overrightarrow{C_2C}_{uv}|$; and
Optionally, that the decoder obtains the predicted texture coordinates of the to-be-decoded vertex based on the target identifier and the texture coordinates of the predicted projected point includes:
The decoder obtains the predicted texture coordinates of the to-be-decoded vertex based on $X_{uv}$, $\overrightarrow{XC}_{uv}$, and the magnitude relationship between $|\overrightarrow{C_1C}_{uv}|$ and $|\overrightarrow{C_2C}_{uv}|$, where
Optionally, that the decoder obtains the predicted texture coordinates of the to-be-decoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
In a case that the determining result indicates that the first triangle is not a degenerate triangle, the decoder obtains the predicted texture coordinates of the to-be-decoded vertex based on $X_{uv}$, $\overrightarrow{XC}_{uv}$, and a magnitude relationship between $|\overrightarrow{C_1O}_{uv}|$ and $|\overrightarrow{C_2O}_{uv}|$, where
Optionally, a manner of obtaining, by the decoder, the predicted texture coordinates of the to-be-decoded vertex is the same as a manner of obtaining, by the encoder, predicted texture coordinates of a corresponding to-be-encoded vertex. Details are not described herein again.
Optionally, that the decoder determines a target triangle based on the connectivity includes:
The decoder selects the first edge from an edge set, where the edge set is a set including edges of a triangle constructed based on the connectivity; and the decoder determines the target triangle based on the first edge.
Optionally, before the decoder selects the first edge from the edge set, the method further includes:
The decoder selects an initial triangle based on the connectivity; and
Optionally, after the decoder performs decoding based on the predicted texture coordinates of the to-be-decoded vertex and the residual obtained by decoding the to-be-decoded vertex, to obtain the real texture coordinates of the to-be-decoded vertex, the method further includes:
The decoder adds a second edge of the target triangle to the edge set, and deletes the first edge from the edge set, where the second edge is an edge of the target triangle that is not included in the edge set.
It should be noted that this embodiment of this application is a reverse process of encoding, and a decoding block diagram is shown in
To sum up, a specific implementation process of decoding UV coordinates in this embodiment of this application is as follows:
Step SP1: Perform entropy decoding on a UV coordinate bitstream.
Step SP2: Decode UV coordinates of three vertices of an initial triangle, where no predicted value is calculated herein, because the encoder directly encoded the UV coordinates of the initial triangle instead of residuals; and add three edges of the initial triangle to an edge set.
Step SP3: Select an edge t from the edge set according to an access criterion, decode UV coordinates of an opposite vertex of a new triangle including t, and first calculate predicted values of UV coordinates of a to-be-decoded vertex by using a three-dimension-to-two-dimension mapping relationship of a triangle and using a calculation manner consistent with that used by the encoder, and then add the predicted values to a residual obtained through entropy decoding to obtain reconstructed UV coordinates.
Step SP4: Add two edges of the new triangle to the edge set, remove the edge t at the top of the set, obtain a next edge from the edge set, and continue to decode UV coordinates of an opposite vertex of a triangle adjacent to the edge; repeat step SP4 until UV coordinates of all vertices are decoded.
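Mirroring the encoder, steps SP1 to SP4 can be sketched as below; again the helper routines and bitstream interface are our placeholders, and the prediction routine must match the encoder's exactly:

```python
from collections import deque

def decode_uv(bitstream, mesh):
    """Assumed driver loop for steps SP1-SP4; helpers are hypothetical."""
    coder = entropy_decoder(bitstream)                # step SP1
    tri0 = initial_triangle(mesh)                     # step SP2
    uv = {v: coder.read_direct() for v in tri0.vertices}
    edges = deque(tri0.edges)

    while edges:                                      # steps SP3 and SP4
        t = edges.popleft()
        tri = adjacent_undecoded_triangle(mesh, t, uv)
        if tri is None:                               # opposite vertex already decoded
            continue
        c = opposite_vertex(tri, t)
        pred, degenerate = predict(mesh, tri, t, uv)  # same rule as the encoder
        if degenerate:                                # read the target identifier
            pred = pick_candidate(pred, coder.read_flag())
        uv[c] = pred + coder.read_residual()          # predicted + residual
        edges.extend(e for e in tri.edges if e != t)
    return uv
```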
It should be noted that this embodiment of this application is a peer-side method embodiment corresponding to the foregoing embodiment of the encoding method, the decoding process is a reverse process of encoding, and all implementations on the encoder side are applicable to the embodiment on the decoder side, with the same technical effect achieved. Details are not described herein again.
As shown in
Optionally, the decoding module 802 includes:
Optionally, the second obtaining unit is configured to:
Optionally, an implementation of obtaining the texture coordinates of the predicted projected point of the to-be-decoded vertex on the first edge based on the geometry information corresponding to each vertex of the target triangle and the real texture coordinates of the vertex on the first edge includes:
Optionally, an implementation of obtaining the predicted texture coordinates of the to-be-decoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
Optionally, an implementation of obtaining the predicted texture coordinates of the to-be-decoded vertex based on the target identifier and the texture coordinates of the predicted projected point includes:
Optionally, an implementation of obtaining the predicted texture coordinates of the to-be-decoded vertex based on the determining result and the texture coordinates of the predicted projected point includes:
Optionally, the second determining unit is configured to:
Optionally, before the second determining unit selects the first edge from the edge set, the decoding module further includes:
Optionally, after the decoding unit performs decoding based on the predicted texture coordinates of the to-be-decoded vertex and the residual obtained by decoding the to-be-decoded vertex, to obtain the real texture coordinates of the to-be-decoded vertex, the decoding module further includes:
It should be noted that the apparatus embodiment describes an apparatus corresponding to the foregoing method, and all implementations of the foregoing method embodiment are applicable to the apparatus embodiment, with the same technical effect achieved. Details are not described herein again.
Optionally, an embodiment of this application further provides a decoding device, including a processor, a memory, and a program or instructions stored in the memory and capable of running on the processor. When the program or instructions are executed by the processor, the processes of the foregoing decoding method embodiment are implemented, with the same technical effect achieved. To avoid repetition, details are not described herein again.
An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a program or instructions. When the program or instructions are executed by a processor, the processes of the foregoing decoding method embodiment are implemented, with the same technical effect achieved. To avoid repetition, details are not described herein again.
For example, the computer-readable storage medium is a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a decoding device, including a processor and a communication interface. The processor is configured to decode an obtained bitstream of a target three-dimensional mesh to obtain geometry information and a connectivity of the target three-dimensional mesh, and perform decoding based on the geometry information and the connectivity to obtain texture coordinates of a vertex in the target three-dimensional mesh.
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is configured to:
Optionally, the processor is further configured to:
Optionally, the processor is further configured to:
The decoding device embodiment corresponds to the foregoing decoding method embodiment, and all implementation processes and implementations of the foregoing method embodiment are applicable to the decoding device embodiment, with the same technical effect achieved.
Specifically, an embodiment of this application further provides a decoding device. Specifically, a structure of the decoding device is shown in
An embodiment of this application further provides a readable storage medium. The readable storage medium stores a program or instructions. When the program or instructions are executed by a processor, the processes of the foregoing decoding method embodiment are implemented, with the same technical effect achieved. To avoid repetition, details are not described herein again.
The processor is a processor in the decoding device in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, for example, a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Optionally, as shown in
An embodiment of this application further provides a chip. The chip includes a processor and a communication interface. The communication interface is coupled to the processor. The processor is configured to run a program or instructions, to implement the processes of the foregoing encoding or decoding method embodiment, with the same technical effect achieved. To avoid repetition, details are not described herein again.
It should be understood that the chip provided in this embodiment of this application may also be referred to as a system-level chip, a system on chip, a chip system, a system-on-a-chip, or the like.
An embodiment of this application further provides a computer program or program product. The computer program or program product is stored in a storage medium. The computer program or program product is executed by at least one processor to implement the processes of the foregoing encoding or decoding method embodiment, with the same technical effect achieved. To avoid repetition, details are not described herein again.
An embodiment of this application further provides a communication system, including at least an encoding device and a decoding device. The encoding device may be configured to perform the steps of the foregoing encoding method, and the decoding device may be configured to perform the steps of the foregoing decoding method, with the same technical effect achieved. To avoid repetition, details are not described herein again.
It should be noted that in this specification, the terms “include” and “comprise”, or any of their variants are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements not only includes those elements but also includes other elements that are not expressly listed, or further includes elements inherent to such process, method, article, or apparatus. In absence of more constraints, an element preceded by “includes a . . . ” does not preclude the existence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of this application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
According to the foregoing descriptions of the implementations, a person skilled in the art can clearly understand that the methods in the foregoing embodiments may be implemented by using software in combination with a necessary common hardware platform, or certainly may be implemented by using hardware. However, in most cases, the former is a preferred implementation. Based on such an understanding, the technical solutions of this application essentially or the part contributing to the conventional technology may be implemented in a form of a computer software product. The computer software product may be stored in a storage medium (for example, a ROM/RAM, a magnetic disk, or an optical disc), and includes several instructions for instructing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods in the embodiments of this application.
The foregoing describes the embodiments of this application with reference to the accompanying drawings. However, this application is not limited to the foregoing specific implementations. The foregoing specific implementations are merely examples, but are not limitative. Inspired by this application, a person of ordinary skill in the art may further make many modifications without departing from the purposes of this application and the protection scope of the claims, and all the modifications shall fall within the protection scope of this application.
Number | Date | Country | Kind |
---|---|---|---|
202210370096.5 | Apr 2022 | CN | national |
202210735276.9 | Jun 2022 | CN | national |
202210845200.1 | Jul 2022 | CN | national |
This application is a continuation application of PCT Application No. PCT/CN2023/086199 filed on Apr. 4, 2023, which claims priority to Chinese Patent Application No. 202210370096.5 filed in China on Apr. 8, 2022, Chinese Patent Application No. 202210735276.9 filed in China on Jun. 27, 2022, and Chinese Patent Application No. 202210845200.1 filed in China on Jul. 18, 2022, disclosures of which are incorporated herein by reference in their entireties.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2023/086199 | Apr 2023 | WO
Child | 18902081 | — | US