This application pertains to the field of encoding and decoding technologies, and particularly relates to an encoding method and apparatus, a decoding method and apparatus, and a device.
Texture coordinates, also referred to as UV coordinates, are information that describes the vertex texture of a three-dimensional mesh. For a three-dimensional mesh, the surface texture is projected onto two dimensions to form a two-dimensional texture map. UV coordinates indicate the positions of the three-dimensional vertex texture on the two-dimensional texture map and are in one-to-one correspondence with the geometry information. In this way, the texture coordinates determine the texture map of the three-dimensional mesh and are crucial to three-dimensional meshes. The data amount of UV coordinates accounts for a large proportion of the three-dimensional mesh. However, the existing parallelogram prediction method and similar triangle-based prediction algorithms have unsatisfactory compression effects, affecting the encoding efficiency of texture coordinates.
Embodiments of this application provide an encoding method and apparatus, a decoding method and apparatus, and a device.
According to a first aspect, an encoding method is provided. The method includes:
According to a second aspect, a decoding method is provided. The method includes:
According to a third aspect, an encoding apparatus is provided. The apparatus includes: a reconstructing module, configured to reconstruct geometry information and connectivity information for a target three-dimensional mesh based on an encoding result of the geometry information and connectivity information of the target three-dimensional mesh;
According to a fourth aspect, a decoding apparatus is provided. The apparatus includes: a decoding module, configured to: decode an obtained bitstream corresponding to a target three-dimensional mesh to obtain geometry information and connectivity information of the target three-dimensional mesh, and decode an obtained bitstream corresponding to each vertex to obtain a texture coordinate residual of each vertex;
According to a fifth aspect, a terminal is provided. The terminal includes a processor and a memory, and a program or instructions capable of running on the processor are stored on the memory. When the program or instructions are executed by the processor, the steps of the method according to the first aspect or the steps of the method according to the second aspect are implemented.
According to a sixth aspect, a readable storage medium is provided, where a program or instructions are stored in the readable storage medium, and when the program or the instructions are executed by a processor, the steps of the method according to the first aspect are implemented, or the steps of the method according to the second aspect are implemented.
According to a seventh aspect, a chip is provided. The chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the method according to the first aspect or the method according to the second aspect.
According to an eighth aspect, a computer program/program product is provided. The computer program/program product is stored in a storage medium, and the computer program/program product is executed by at least one processor to implement steps of the method according to the first aspect or steps of the method according to the second aspect.
According to a ninth aspect, a system is provided. The system includes an encoder side and a decoder side, where the encoder side performs the steps of the method according to the first aspect and the decoder side performs the steps of the method according to the second aspect.
The following clearly describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of this application shall fall within the protection scope of this application.
The terms “first”, “second”, and the like in this specification and claims of this application are used to distinguish between similar objects rather than to describe a specific order or sequence. It should be understood that terms used in this way are interchangeable in appropriate circumstances so that the embodiments of this application can be implemented in other orders than the order illustrated or described herein. In addition, “first” and “second” are usually used to distinguish objects of a same type, and do not restrict a quantity of objects. For example, there may be one or a plurality of first objects. In addition, “and/or” in the specification and claims represents at least one of connected objects, and the character “/” generally indicates that the associated objects have an “or” relationship.
The following describes in detail the encoding and decoding method and apparatus, and the device provided in the embodiments of this application by using some embodiments and application scenarios thereof, with reference to the accompanying drawings.
Referring to
S101. Reconstruct geometry information and connectivity information for a target three-dimensional mesh based on an encoding result of the geometry information and connectivity information of the target three-dimensional mesh.
It should be noted that the target three-dimensional mesh mentioned in this application may be understood as a three-dimensional mesh corresponding to any video frame; the geometry information of the target three-dimensional mesh may be understood as the coordinates of the vertices in the three-dimensional mesh, usually three-dimensional coordinates; and the connectivity, also referred to as connectivity information, describes the connections between vertices, faces, and other elements in the three-dimensional mesh.
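For ease of reading only, the geometry information, connectivity information, and texture coordinates described above can be pictured as a simple mesh container. The following is a minimal sketch, not part of the embodiments, and all field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TriangleMesh:
    """Minimal sketch of a triangle mesh with per-vertex attributes.

    vertices: geometry information, one (x, y, z) coordinate per vertex
    faces:    connectivity information, one (i, j, k) vertex-index triple per triangle
    uvs:      texture coordinates, one (u, v) pair per vertex
    """
    vertices: list = field(default_factory=list)
    faces: list = field(default_factory=list)
    uvs: list = field(default_factory=list)

# Example: a single triangle with its texture coordinates.
mesh = TriangleMesh(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
    uvs=[(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
)
```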
It should be noted that in this step, the texture coordinates of the vertices are encoded based on the geometry information and connectivity information. To ensure that the encoded texture coordinates are consistent with the encoded geometry information and connectivity information, the geometry information and connectivity information used in this embodiment of this application are reconstructed from the encoded geometry information and connectivity information.
S102. Determine, based on the reconstructed geometry information and connectivity information, N predicted texture coordinates of each vertex in the target three-dimensional mesh by means of predicting vertices from multiple encoded triangles.
It should be noted that the N predicted texture coordinates of each vertex are determined by means of predicting vertices from multiple encoded triangles, so as to improve the compression effect on UV coordinate data.
For specific implementations of determining the N predicted texture coordinates of each vertex by means of predicting vertices from multiple encoded triangles, refer to the subsequent embodiments.
S103. Encode a texture coordinate residual of each vertex.
In this step, the texture coordinate residual of any one vertex can be determined based on the N predicted texture coordinates of the vertex before being encoded. For specific implementations of encoding the texture coordinate residual of each vertex, refer to subsequent embodiments.
In an embodiment of this application, geometry information and connectivity information are reconstructed for a target three-dimensional mesh based on encoding results of the geometry information and connectivity information of the target three-dimensional mesh; N predicted texture coordinates of each vertex in the target three-dimensional mesh are determined based on the reconstructed geometry information and connectivity information by means of predicting vertices from multiple encoded triangles; and a texture coordinate residual of each vertex is encoded. In this solution, the N predicted texture coordinates of each vertex are obtained by means of predicting vertices from multiple encoded triangles, so as to improve the compression effect on UV coordinate data and improve the encoding efficiency of texture coordinates.
Optionally, the determining, based on the reconstructed geometry information and connectivity information, N predicted texture coordinates of each vertex in the target three-dimensional mesh by means of predicting vertices from multiple encoded triangles includes:
It should be noted that an initial edge set needs to be obtained before encoding. Specifically, the initial edge set is obtained as follows.
Before the selecting a first edge from an edge set, the method further includes:
It should be noted that in this embodiment of this application, vertices of the initial triangle are not predicted, but the texture coordinates are directly encoded. Optionally, for the initial triangle, in this embodiment of this application, texture coordinates of the first vertex of the initial triangle may be directly encoded, texture coordinates of the second vertex of the initial triangle are obtained by predicting edges based on the texture coordinates of the first vertex, and then texture coordinates of the third vertex of the initial triangle are obtained through similar triangle-based predictive encoding.
After the texture coordinates of all vertices of the initial triangle are encoded, all edges of the initial triangle are stored into an edge set to form an initial edge set, and then the rest of vertices are predicted based on the initial edge set.
For ease of understanding, an illustration is provided in
If the vertex C is a to-be-encoded vertex, the vertices corresponding to the first edge are the vertex N and the vertex P. In this case, the triangle corresponding to the first edge, namely, the second triangle, is determined as a target triangle. Further, a search is performed around the vertex C for triangles that include the vertex C and two other encoded vertices but do not contain the first edge. Such triangles are also determined as target triangles. To be specific, the first triangle and the second triangle are determined as the target triangles.
It should be understood that, for each target triangle, the vertices other than the to-be-encoded vertex are encoded vertices, and more than one target triangle may be provided. Optionally, the multiple target triangles may or may not be adjacent to each other.
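As a non-authoritative illustration of this search (all function and variable names below are assumptions, not terms from the embodiments), one way to collect the target triangles around a to-be-encoded vertex is:

```python
def find_target_triangles(vertex, faces, encoded):
    """Collect triangles incident to `vertex` whose other two vertices are already encoded.

    vertex:  index of the to-be-encoded vertex
    faces:   connectivity information as (i, j, k) vertex-index triples
    encoded: set of vertex indices whose texture coordinates have been encoded
    """
    targets = []
    for tri in faces:
        if vertex not in tri:
            continue
        others = [v for v in tri if v != vertex]
        # Both remaining vertices must already be encoded; this criterion also
        # picks up the triangle that contains the first edge.
        if len(others) == 2 and all(v in encoded for v in others):
            targets.append(tuple(tri))
    return targets
```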
Optionally, the obtaining predicted texture coordinates of the to-be-encoded vertex in the target triangle includes:
In this embodiment, for any one target triangle, the texture coordinates of the projection point of the to-be-encoded vertex on the first edge can be obtained based on the geometry coordinates of the vertices of the target triangle, that is, geometry coordinates of the three vertices of the target triangle. For specific implementations, refer to subsequent embodiments.
After the texture coordinates of the projection point are obtained, the predicted texture coordinates of the to-be-encoded vertex are obtained based on the texture coordinates of the projection point. For specific implementations, refer to subsequent embodiments.
The following describes how to obtain texture coordinates of the projection point of the to-be-encoded vertex on the first edge based on the geometry coordinates of each vertex of the target triangle.
Optionally, the obtaining texture coordinates of a projection point of the to-be-encoded vertex on the first edge based on geometry coordinates of vertices of the target triangle includes:
In this embodiment, the encoder side can obtain the texture coordinates of the projection point of the to-be-encoded vertex on the first edge by using a first formula. The first formula is $X_{uv} = \overrightarrow{NX}_{uv} + N_{uv}$, or $X_{uv} = N_{uv} - \overrightarrow{XN}_{uv}$, where $X_{uv}$ represents the texture coordinates of the projection point X of the to-be-encoded vertex on the first edge, $N_{uv}$ represents the texture coordinates of the vertex N on the first edge of the target triangle, $\overrightarrow{NX}_{uv}$ represents the vector from the vertex N to the projection point X in the texture coordinate plane, and $\overrightarrow{XN}_{uv}$ represents the vector from the projection point X to the vertex N in the texture coordinate plane.
For ease of understanding, an illustration is provided in
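The following sketch shows one way the projection point and its texture coordinates might be computed from the geometry coordinates of the target triangle's vertices. The projection step is an assumption that is consistent with the first formula but not stated by it, and all names are illustrative.

```python
import numpy as np

def project_onto_edge_uv(C, N, P, N_uv, P_uv):
    """Project the to-be-encoded vertex C onto edge NP in 3D geometry space,
    then place the projection point X in the texture (UV) plane along the
    encoded edge, so that X_uv = NX_uv + N_uv (the first formula).

    C, N, P:    3D geometry coordinates of the to-be-encoded vertex and of the
                two encoded vertices of the first edge
    N_uv, P_uv: encoded texture coordinates of N and P
    """
    C, N, P = map(np.asarray, (C, N, P))
    N_uv, P_uv = map(np.asarray, (N_uv, P_uv))
    NP = P - N
    # Fraction of the edge at which C projects onto NP in geometry space.
    t = float(np.dot(C - N, NP) / np.dot(NP, NP))
    NX_uv = t * (P_uv - N_uv)   # vector from N to X in the UV plane
    return N_uv + NX_uv         # X_uv = NX_uv + N_uv
```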
The following describes how to obtain predicted texture coordinates of the to-be-encoded vertex based on the texture coordinates of the projection point.
Optionally, the obtaining predicted texture coordinates of the to-be-encoded vertex based on the texture coordinates of the projection point includes:
In this embodiment, the predicted texture coordinates of the to-be-encoded vertex can be obtained using a second formula. The second formula is $Pred_{C\_NP} = X_{uv} + \overrightarrow{XC}_{uv}$ in a case that $|distance1| \geq |distance2|$, or $Pred_{C\_NP} = X_{uv} - \overrightarrow{XC}_{uv}$ in a case that $|distance1| < |distance2|$, where $Pred_{C\_NP}$ represents the predicted texture coordinates of the to-be-encoded vertex, $X_{uv}$ represents the texture coordinates of the projection point X of the to-be-encoded vertex on the first edge, $\overrightarrow{XC}_{uv}$ represents the vector from the projection point X of the to-be-encoded vertex on the first edge to the texture coordinates $C_{uv}$ of the to-be-encoded vertex, $distance1 = O_{uv} - (X_{uv} + \overrightarrow{XC}_{uv})$, $distance2 = O_{uv} - (X_{uv} - \overrightarrow{XC}_{uv})$, and $O_{uv}$ represents the texture coordinates of the first vertex corresponding to the first edge of the target triangle.
In this embodiment, referring to
It should be understood that the first triangle is a triangle formed by the vertex N, vertex P, and vertex O shown in
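A minimal sketch of the second formula's selection rule follows. It assumes the prediction is placed on the side of the first edge away from the first vertex O, which is one plausible reading of the distance comparison; the names are illustrative and all quantities are 2D UV vectors.

```python
import numpy as np

def predict_uv_from_opposite_vertex(X_uv, XC_uv, O_uv):
    """Second-formula sketch: choose X_uv + XC_uv or X_uv - XC_uv by comparing
    the distances of the two candidates to O_uv, the texture coordinates of the
    first vertex corresponding to the first edge of the target triangle."""
    X_uv, XC_uv, O_uv = map(np.asarray, (X_uv, XC_uv, O_uv))
    cand_plus, cand_minus = X_uv + XC_uv, X_uv - XC_uv
    distance1 = O_uv - cand_plus
    distance2 = O_uv - cand_minus
    # Assumption: the to-be-encoded vertex lies across the first edge from O,
    # so the candidate farther from O_uv is taken as the prediction.
    if np.linalg.norm(distance1) >= np.linalg.norm(distance2):
        return cand_plus
    return cand_minus
```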
Optionally, the obtaining, on the encoder side, predicted texture coordinates of the to-be-encoded vertex based on the texture coordinates of the projection point includes:
In this embodiment, a third formula can be used to obtain the predicted texture coordinates of the to-be-encoded vertex, and the target identifier of the to-be-encoded vertex is encoded.
The third formula is $Pred_{C\_NP} = X_{uv} + \overrightarrow{XC}_{uv}$ in a case that $|distance3| \leq |distance4|$, or $Pred_{C\_NP} = X_{uv} - \overrightarrow{XC}_{uv}$ in a case that $|distance3| > |distance4|$, where $Pred_{C\_NP}$ represents the predicted texture coordinates of the to-be-encoded vertex, $X_{uv}$ represents the texture coordinates of the projection point X of the to-be-encoded vertex on the first edge, $\overrightarrow{XC}_{uv}$ represents the vector from the projection point X of the to-be-encoded vertex on the first edge to the texture coordinates $C_{uv}$ of the to-be-encoded vertex, $distance3 = C_{uv} - (X_{uv} + \overrightarrow{XC}_{uv})$, $distance4 = C_{uv} - (X_{uv} - \overrightarrow{XC}_{uv})$, and $C_{uv}$ represents the real texture coordinates of the to-be-encoded vertex. The target identifier is used to indicate which one of $|distance3|$ and $|distance4|$ is greater.
In this embodiment, referring to
It should be understood that the target identifier is used to indicate which one of $|distance3|$ and $|distance4|$ is greater. For example, the target identifier set to 0 means $Pred_{C\_NP} = X_{uv} + \overrightarrow{XC}_{uv}$, and the target identifier set to 1 means $Pred_{C\_NP} = X_{uv} - \overrightarrow{XC}_{uv}$.
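An illustrative sketch of applying the third formula at the encoder and recording the target identifier is given below; the 0/1 convention follows the mapping above, the selection of the candidate closer to the real texture coordinates is an assumption, and the function name is hypothetical.

```python
import numpy as np

def predict_uv_with_identifier(X_uv, XC_uv, C_uv):
    """Third-formula sketch: pick the candidate closer to the real texture
    coordinates C_uv and return it together with the target identifier
    (0 -> X_uv + XC_uv, 1 -> X_uv - XC_uv) to be written into the bitstream."""
    X_uv, XC_uv, C_uv = map(np.asarray, (X_uv, XC_uv, C_uv))
    cand_plus, cand_minus = X_uv + XC_uv, X_uv - XC_uv
    distance3 = C_uv - cand_plus
    distance4 = C_uv - cand_minus
    if np.linalg.norm(distance3) <= np.linalg.norm(distance4):
        return cand_plus, 0   # identifier 0: the "+" candidate is the prediction
    return cand_minus, 1      # identifier 1: the "-" candidate is the prediction
```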
Optionally, in the first formula, second formula, and third formula, the vertex N can be replaced with the vertex P for calculations.
For example, the texture coordinates of the vertex N are replaced with the texture coordinates of the vertex P.
For example, $\overrightarrow{NX}_{uv}$ in the first formula is replaced with $\overrightarrow{PX}_{uv}$, where $\overrightarrow{PX}_{uv}$ represents the vector from the vertex P on the first edge of the target triangle to the texture coordinates of the projection point X of the to-be-encoded vertex on the first edge.
Optionally, before the selecting a first edge from an edge set, the method further includes:
Optionally, after the obtaining predicted texture coordinates of the to-be-encoded vertex in the target triangle, the method further includes:
Optionally, the encoding texture coordinate residual of each vertex includes:
In this embodiment, the N predicted texture coordinates can be weighted and summed, and a target value obtained therefrom is determined as the target texture coordinates of the vertex. In a case that the predicted texture coordinates have the same weight, an average value of the N predicted texture coordinates is determined as the target texture coordinates of the vertex. It should be understood that in other embodiments, the target value may not be obtained from the weighted sum of the N predicted texture coordinates, but may be obtained through other operations, which is not specifically limited herein.
For ease of understanding, an illustration is provided in
Optionally, an average value of $Pred_{C\_NP}$, $Pred_{C\_PON}$, and $Pred_{C\_PNO}$ is determined as the target texture coordinates of the to-be-encoded vertex. Then, a residual of the to-be-encoded vertex can be obtained based on the target texture coordinates and the real texture coordinates. The to-be-encoded vertex can be encoded by encoding the residual, so that fewer bits are used for encoding the texture coordinates.
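A short sketch of combining the N predicted texture coordinates into the target texture coordinates and forming the residual follows. Equal weights (a plain average) are assumed when no weights are supplied, and the names and example numbers are illustrative only.

```python
import numpy as np

def texture_residual(predictions, real_uv, weights=None):
    """Combine N predicted texture coordinates into target texture coordinates
    (weighted sum, or an average when weights are omitted) and return the
    residual that would be passed to the entropy coder."""
    predictions = np.asarray(predictions, dtype=float)        # shape (N, 2)
    if weights is None:
        target_uv = predictions.mean(axis=0)                  # equal weights
    else:
        w = np.asarray(weights, dtype=float)
        target_uv = (w[:, None] * predictions).sum(axis=0) / w.sum()
    residual = np.asarray(real_uv, dtype=float) - target_uv
    return target_uv, residual

# Example with three predictions (e.g. Pred_C_NP, Pred_C_PON, Pred_C_PNO).
target, res = texture_residual([(0.30, 0.40), (0.32, 0.41), (0.28, 0.39)],
                               real_uv=(0.31, 0.42))
```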
To sum up, the specific implementation of encoding texture coordinates (hereinafter referred to as UV coordinates) in this embodiment of this application is as follows.
Step S1. Select one initial triangle based on the connectivity information, directly encode the UV coordinates of the three vertices of the initial triangle, and store the three edges of the initial triangle into an edge set.
Step S2. Select an edge π from the edge set according to access rules, and encode the UV coordinates of the to-be-encoded vertex in a new triangle including π. According to the rules of mapping from a three-dimensional triangle to a two-dimensional triangle and the foregoing process of calculating the predicted UV coordinates, a search is performed around the to-be-encoded vertex for target triangles that share the to-be-encoded vertex with the to-be-encoded triangle and have two encoded vertices, so as to obtain a predicted value of the to-be-encoded vertex (namely, the predicted texture coordinates). Then, the residual is obtained by subtracting the predicted value from the original UV coordinate value (namely, the real texture coordinates). In a case that the first vertex O is an encoded vertex and the target triangle is not a degenerate triangle, the target identifier of the to-be-encoded vertex does not need to be encoded; or in a case that the first vertex O is an uncoded vertex or the target triangle is a degenerate triangle, the target identifier of the to-be-encoded vertex needs to be encoded.
Step S3. Add the two new edges of the new triangle into the edge set, remove the edge π at the top of the edge set, select a next edge, obtain a residual by predicting the UV coordinates of the opposite vertex of the triangle adjacent to this edge, and then repeat step S3 until the residuals of all vertices are obtained.
Step S4. Perform entropy coding on the UV coordinate residuals and output the UV coordinate bitstream.
In a case that the geometry information and connectivity information of a three-dimensional mesh have been encoded, the UV coordinates can be encoded based on the reconstructed geometry information and connectivity information. First, a triangle is selected as the initial triangle and its coordinate values are directly encoded. Second, a triangle adjacent to the initial triangle is selected as the triangle to be encoded, and a search is performed around the to-be-encoded vertex in the triangle to be encoded for target triangles that share the to-be-encoded vertex with the triangle to be encoded and have two encoded vertices. The UV coordinate value of the uncoded vertex in the triangle adjacent to the initial edge is predicted based on the relationship between the encoded UV coordinates of the target triangles and the texture projections, the difference between the real UV coordinates and the predicted coordinates of the to-be-encoded vertex is encoded, and a new edge is selected from the newly encoded triangle for encoding uncoded vertices of an adjacent triangle. This process is iterated until the UV coordinates of the entire three-dimensional mesh are encoded.
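The loop below is a simplified, non-authoritative sketch of steps S1 to S4, assuming a mesh container like the earlier TriangleMesh sketch, a prediction routine such as the ones sketched above, and a first-in-first-out access rule for the edge set; degenerate triangles and target-identifier signalling are omitted, and every name is an assumption.

```python
from collections import deque

def encode_uv_coordinates(mesh, entropy_encode, predict_vertex_uv):
    """Sketch of the edge-driven UV encoding traversal (steps S1 to S4).

    mesh:              object with .faces (index triples) and .uvs (per-vertex UV)
    entropy_encode:    callable that writes raw UVs or residuals to the bitstream
    predict_vertex_uv: callable(mesh, vertex, encoded) -> target (averaged) predicted UV
    """
    # Step S1: pick an initial triangle, encode its UVs directly, seed the edge set.
    initial = mesh.faces[0]
    encoded = set(initial)
    for v in initial:
        entropy_encode(("raw", mesh.uvs[v]))
    edge_set = deque([(initial[0], initial[1]),
                      (initial[1], initial[2]),
                      (initial[2], initial[0])])

    # Steps S2 and S3: grow the encoded region edge by edge.
    while edge_set and len(encoded) < len(mesh.uvs):
        edge = edge_set.popleft()                 # remove the edge at the top
        for tri in mesh.faces:
            if edge[0] not in tri or edge[1] not in tri:
                continue
            others = [v for v in tri if v != edge[0] and v != edge[1]]
            if len(others) != 1 or others[0] in encoded:
                continue
            new_v = others[0]
            pred = predict_vertex_uv(mesh, new_v, encoded)
            residual = (mesh.uvs[new_v][0] - pred[0], mesh.uvs[new_v][1] - pred[1])
            entropy_encode(("residual", residual))   # step S4: residuals go to entropy coding
            encoded.add(new_v)
            edge_set.append((edge[0], new_v))
            edge_set.append((new_v, edge[1]))
```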
Referring to
S501. Decode an obtained bitstream corresponding to a target three-dimensional mesh to obtain geometry information and connectivity information of the target three-dimensional mesh, and decode an obtained bitstream corresponding to each vertex to obtain a texture coordinate residual of each vertex.
S502. Determine, based on the geometry information and connectivity information, N predicted texture coordinates of each vertex in the target three-dimensional mesh by means of predicting vertices from multiple decoded triangles.
S503. Determine real texture coordinates of each vertex based on the N predicted texture coordinates of each vertex and the texture coordinate residual of each vertex.
Optionally, the determining, based on the geometry information and connectivity information, N predicted texture coordinates of each vertex in the target three-dimensional mesh by means of predicting vertices from multiple decoded triangles includes:
Optionally, the obtaining predicted texture coordinates of the to-be-decoded vertex in the target triangle includes:
Optionally, the obtaining texture coordinates of a projection point of the to-be-decoded vertex on the first edge based on geometry coordinates of vertices of the target triangle includes:
In this embodiment, the decoder side can obtain the texture coordinates of the projection point of the to-be-decoded vertex on the first edge using a first formula.
The first formula is $X_{uv} = \overrightarrow{NX}_{uv} + N_{uv}$, or $X_{uv} = N_{uv} - \overrightarrow{XN}_{uv}$, where $X_{uv}$ represents the texture coordinates of the projection point X of the to-be-decoded vertex on the first edge, $N_{uv}$ represents the texture coordinates of the vertex N on the first edge of the target triangle, $\overrightarrow{NX}_{uv}$ represents the vector from the vertex N to the projection point X in the texture coordinate plane, and $\overrightarrow{XN}_{uv}$ represents the vector from the projection point X to the vertex N in the texture coordinate plane.
Optionally, the obtaining predicted texture coordinates of the to-be-decoded vertex based on the texture coordinates of the projection point includes:
In this embodiment, the decoder side can obtain the predicted texture coordinates of the to-be-decoded vertex by using a second formula.
The second formula is $Pred_{C\_NP} = X_{uv} + \overrightarrow{XC}_{uv}$ in a case that $|distance1| \geq |distance2|$, or $Pred_{C\_NP} = X_{uv} - \overrightarrow{XC}_{uv}$ in a case that $|distance1| < |distance2|$, where $Pred_{C\_NP}$ represents the predicted texture coordinates of the to-be-decoded vertex, $X_{uv}$ represents the texture coordinates of the projection point X of the to-be-decoded vertex on the first edge, $\overrightarrow{XC}_{uv}$ represents the vector from the projection point X of the to-be-decoded vertex on the first edge to the texture coordinates $C_{uv}$ of the to-be-decoded vertex, $distance1 = O_{uv} - (X_{uv} + \overrightarrow{XC}_{uv})$, $distance2 = O_{uv} - (X_{uv} - \overrightarrow{XC}_{uv})$, and $O_{uv}$ represents the texture coordinates of the first vertex corresponding to the first edge of the target triangle.
Optionally, the obtaining predicted texture coordinates of the to-be-decoded vertex based on the texture coordinates of the projection point includes:
In this embodiment, the decoder side can determine the predicted texture coordinates of the to-be-decoded vertex based on the retrieved target identifier of the to-be-decoded vertex and a third formula.
The third formula is $Pred_{C\_NP} = X_{uv} + \overrightarrow{XC}_{uv}$ in a case that the target identifier is 0, or $Pred_{C\_NP} = X_{uv} - \overrightarrow{XC}_{uv}$ in a case that the target identifier is 1, where $Pred_{C\_NP}$ represents the predicted texture coordinates of the to-be-decoded vertex, $X_{uv}$ represents the texture coordinates of the projection point X of the to-be-decoded vertex on the first edge, $\overrightarrow{XC}_{uv}$ represents the vector from the projection point X of the to-be-decoded vertex on the first edge to the texture coordinates $C_{uv}$ of the to-be-decoded vertex, $distance3 = C_{uv} - (X_{uv} + \overrightarrow{XC}_{uv})$, $distance4 = C_{uv} - (X_{uv} - \overrightarrow{XC}_{uv})$, and $C_{uv}$ represents the texture coordinates of the to-be-decoded vertex. The target identifier is used to indicate which one of $|distance3|$ and $|distance4|$ is greater.
Optionally, before the selecting a first edge from an edge set, the method further includes:
Optionally, after the obtaining predicted texture coordinates of the to-be-decoded vertex in the target triangle, the method further includes:
Optionally, the determining real texture coordinates of each vertex based on the N predicted texture coordinates of each vertex and the texture coordinate residual of each vertex includes:
It should be noted that this embodiment of this application describes a reverse process of encoding. As shown in the decoding block diagram
To sum up, the specific implementation of decoding UV coordinates in this embodiment of this application is as follows.
Step SP1. Perform entropy decoding on the UV coordinate bitstream, which contains the UV coordinate residuals and target identifiers.
Step SP2. Decode the UV coordinates of the three vertices of the initial triangle, where no predicted values are calculated because the UV coordinates of the initial triangle, instead of residuals, were encoded directly, and store the three edges of the initial triangle into an edge set.
Step SP3. Select an edge π from the edge set according to the access rules, and decode the UV coordinates of the opposite vertex of a new triangle including the edge π. Based on the mapping from a three-dimensional triangle to a two-dimensional triangle and multiple decoded triangles, a predicted UV coordinate value of the to-be-decoded vertex is calculated using the same calculation method as the encoder side. Then, the predicted value and the residual obtained from entropy decoding are added to obtain the reconstructed UV coordinates. If the encoder side has generated a target identifier, the target identifier is used in calculating the predicted UV coordinate value of the to-be-decoded vertex.
Step SP4. Add the two new edges of the new triangle into the edge set, remove the edge π at the top of the edge set, select a next edge, decode the UV coordinates of the opposite vertex of the triangle adjacent to this edge, and then repeat step SP3 until the UV coordinates of all vertices are reconstructed.
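A matching sketch of the decoder-side reconstruction is shown below, under the same assumptions as the encoder sketch above (illustrative names, first-in-first-out edge access, target-identifier handling omitted).

```python
from collections import deque

def decode_uv_coordinates(mesh, residuals, initial_uvs, predict_vertex_uv):
    """Sketch of steps SP2 to SP4: rebuild each vertex's UV as prediction + residual.

    mesh:              decoded geometry/connectivity; mesh.uvs is filled in here
    residuals:         entropy-decoded UV residuals, keyed by vertex index
    initial_uvs:       directly decoded UVs of the three initial-triangle vertices
    predict_vertex_uv: the same prediction routine as used at the encoder side
    """
    initial = mesh.faces[0]
    for v, uv in zip(initial, initial_uvs):
        mesh.uvs[v] = uv                          # SP2: no prediction for these vertices
    decoded = set(initial)
    edge_set = deque([(initial[0], initial[1]),
                      (initial[1], initial[2]),
                      (initial[2], initial[0])])

    while edge_set and len(decoded) < len(mesh.uvs):   # SP3 and SP4
        edge = edge_set.popleft()
        for tri in mesh.faces:
            if edge[0] not in tri or edge[1] not in tri:
                continue
            others = [v for v in tri if v != edge[0] and v != edge[1]]
            if len(others) != 1 or others[0] in decoded:
                continue
            new_v = others[0]
            pred = predict_vertex_uv(mesh, new_v, decoded)
            res = residuals[new_v]
            mesh.uvs[new_v] = (pred[0] + res[0], pred[1] + res[1])
            decoded.add(new_v)
            edge_set.append((edge[0], new_v))
            edge_set.append((new_v, edge[1]))
```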
It should be noted that this embodiment of this application is a method embodiment reverse to the embodiment of the encoding method. Since decoding is a reverse process of encoding, all implementations at the encoder side are applicable to the embodiments of the decoder side, with the same technical effects achieved. Details are not described herein again.
The encoding method provided in this embodiment of this application may be executed by an encoding apparatus. In the embodiments of this application, the encoding apparatus according to an embodiment of this application is described assuming that the encoding method is executed by an encoding apparatus.
As shown in
Optionally, the determining module 702 is specifically configured to:
Optionally, the determining module 702 is further specifically configured to:
Optionally, the determining module 702 is further specifically configured to:
Optionally, the determining module 702 is further specifically configured to:
Optionally, the determining module 702 is further specifically configured to:
Optionally, the determining module 702 is further specifically configured to:
Optionally, the determining module 702 is further specifically configured to:
Optionally, the encoding module 703 is specifically configured to:
In an embodiment of this application, geometry information and connectivity information are reconstructed for a target three-dimensional mesh based on encoding results of the geometry information and connectivity information of the target three-dimensional mesh; N predicted texture coordinates of each vertex in the target three-dimensional mesh are determined based on the reconstructed geometry information and connectivity information by means of predicting vertices from multiple encoded triangles; and a texture coordinate residual of each vertex is encoded. In this solution, the N predicted texture coordinates of each vertex are obtained by means of predicting vertices from multiple encoded triangles, so as to improve the compression effect on UV coordinate data and improve the encoding efficiency of texture coordinates.
This apparatus embodiment corresponds to the encoding method embodiment illustrated in
The decoding method provided in this embodiment of this application may be executed by a decoding apparatus. In the embodiments of this application, the decoding apparatus provided in the embodiments of this application is described by using the decoding method being executed by the decoding apparatus as an example.
As shown in
Optionally, the first determining module 802 is specifically configured to:
Optionally, the first determining module 802 is further specifically configured to:
Optionally, the first determining module 802 is further specifically configured to:
Optionally, the first determining module 802 is further specifically configured to:
Optionally, the first determining module 802 is further specifically configured to:
Optionally, the first determining module 802 is further specifically configured to:
Optionally, the first determining module 802 is further specifically configured to: store a second edge of the target triangle into the edge set and remove the first edge from the edge set, where the second edge is an edge of the target triangle not contained in the edge set.
Optionally, the second determining module 803 is further specifically configured to: determine target values corresponding to the N predicted texture coordinates of any one vertex as target texture coordinates of the vertex; and perform an addition operation on the target texture coordinates of the vertex and the texture coordinate residual of the vertex to determine the real texture coordinates of the vertex.
The decoding apparatus provided in this embodiment of this application can implement the processes implemented by the method embodiment illustrated in
The encoding apparatus and decoding apparatus according to embodiments of this application may be an electronic device, for example, an electronic device with an operating system, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the terminal may include but is not limited to the foregoing types of terminal, and the other device may be a server, a network attached storage (NAS), or the like, which is not specifically limited in the embodiments of this application.
Optionally, as shown in
An embodiment of this application further provides a terminal, including a processor 901 and a communication interface. The processor 901 is configured to perform the following operations:
Alternatively, the processor 901 is configured to perform the following operations:
This terminal embodiment corresponds to the foregoing method embodiments used on the terminal side. All processes and implementations in the foregoing method embodiments are applicable to this terminal embodiment, with the same technical effects achieved. Specifically,
A terminal 1000 includes but is not limited to components such as a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Persons skilled in the art can understand that the terminal 1000 may further include a power source (for example, a battery) for supplying power to the components. The power source may be logically connected to the processor 1010 through a power management system. In this way, functions such as charge management, discharge management, and power consumption management are implemented by using the power management system. The structure of the terminal shown in
It should be understood that in the embodiments of this application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processing unit 10041 processes image data of a static picture or a video that is obtained by an image capture apparatus (for example, a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061. The display panel 10061 may be configured in a form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touchscreen. The touch panel 10071 may include two parts: a touch detection apparatus and a touch controller. The other input devices 10072 may include but are not limited to a physical keyboard, a functional button (such as a volume control button or a power on/off button), a trackball, a mouse, and a joystick. Details are not described herein again.
In the embodiments of this application, after receiving downlink data from the network-side device, the radio frequency unit 1001 may transmit it to the processor 1010 for processing. In addition, the radio frequency unit 1001 may transmit uplink data to the network-side device. Generally, the radio frequency unit 1001 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 1009 may be configured to store a software program or instructions, and various data. The memory 1009 may mainly include a first storage area where the program or instructions are stored and a second storage area where data is stored. The first storage area may store an operating system, an application program or instructions required by at least one function (for example, an audio playing function or an image playing function), and the like. Further, the memory 1009 may include a volatile memory or non-volatile memory, or the memory 1009 may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchronous link dynamic random access memory (Synch link DRAM, SLDRAM), and a direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1009 in this embodiment of this application includes but is not limited to these and any other suitable types of memories.
The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor and a modem processor. The application processor mainly processes operations involving an operating system, a user interface, an application program, and the like. The modem processor mainly processes wireless communication signals, for example, a baseband processor. It can be understood that the modem processor may alternatively be not integrated in the processor 1010.
The processor 1010 is configured to:
Alternatively, the processor 1010 is configured to perform the following operations:
An embodiment of this application further provides a readable storage medium, where the readable storage medium stores a program or instructions, and when the program or instructions are executed by a processor, the processes of the foregoing embodiments of the encoding method are implemented or the processes of the foregoing embodiments of the decoding method are implemented, with the same technical effects achieved. To avoid repetition, details are not described herein again.
The processor is a processor in the terminal in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, for example, a computer read-only memory ROM, a random access memory RAM, a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the processes of the foregoing embodiments of the encoding method or the processes of the foregoing embodiments of the decoding method, with the same technical effects achieved. To avoid repetition, details are not described herein again.
It should be understood that the chip mentioned in this embodiment of this application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
An embodiment of this application further provides a computer program/program product, where the computer program/program product is stored in a storage medium. The computer program/program product is executed by at least one processor to implement the processes of the foregoing embodiments of the encoding method or the processes of the foregoing embodiments of the decoding method, with the same technical effects achieved. To avoid repetition, details are not described herein again.
An embodiment of this application further provides a system. The system includes an encoder side and a decoder side, where the encoder side executes the processes in the embodiments of the encoding method, and the decoder side executes the processes in the embodiments of the decoding method, with the same technical effects achieved. To avoid repetition, details are not described herein again.
It should be noted that in this specification, the terms “include” and “comprise”, or any of their variants are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements not only includes those elements but also includes other elements that are not expressly listed, or further includes elements inherent to such process, method, article, or apparatus. In absence of more constraints, an element preceded by “includes a . . . ” does not preclude the existence of other identical elements in the process, method, article, or apparatus that includes the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of this application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
By means of the foregoing description of the implementations, persons skilled in the art may clearly understand that the method in the foregoing embodiments may be implemented by software with a necessary general hardware platform. Certainly, the method in the foregoing embodiments may also be implemented by hardware. However, in many cases, the former is a preferred implementation. Based on such an understanding, the technical solutions of this application essentially or the part contributing to the prior art may be implemented in a form of a computer software product. The software product is stored in a storage medium (for example, a ROM/RAM, a magnetic disk, or an optical disc), and includes several instructions for instructing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method described in the embodiments of this application.
The foregoing describes the embodiments of this application with reference to the accompanying drawings. However, this application is not limited to the foregoing specific embodiments. The foregoing specific embodiments are merely illustrative rather than restrictive. As instructed by this application, persons of ordinary skill in the art may develop many other manners without departing from principles of this application and the protection scope of the claims, and all such manners fall within the protection scope of this application.
Number | Date | Country | Kind |
---|---|---|---|
202210800663.6 | Jul 2022 | CN | national |
This application is a continuation of International Application No. PCT/CN2023/103922 filed on Jun. 29, 2023, which claims priority to Chinese Patent Application No. 202210800663.6 filed on Jul. 6, 2022, which are incorporated herein by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2023/103922 | Jun 2023 | WO |
Child | 19010127 | US |