This disclosure relates generally to coding and decoding of 3-dimensional (3D) meshes and specifically to merging of boundary vertices in a symmetric mesh.
This background description provided herein is for the purpose of generally presenting the context of this disclosure. Work of the presently named inventors, to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing of this application, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Various technologies are developed to capture, represent, and simulate real world objects, environments and the like in 3D space. 3D representations of the world can enable more immersive forms of interactive communications. Example 3D representations of objects and environments include but are not limited to point clouds and meshes. A series of 3D representations of objects and environments may form a video sequence. Redundancies and correlations within the sequence of 3D representations of objects and environments may be utilized for compressing and coding such a video sequence into a more compact digital form. As one example of such redundancies and correlations, reflection symmetry in a 3D mesh may be utilized to enhance compression efficiency.
This disclosure relates generally to coding and decoding of 3-dimensional (3D) mesh and specifically to merging of boundary vertexes in a symmetric mesh. The disclosure particularly provides methods, systems, and devices for determining whether true boundary vertices are present near a symmetry cut plane during symmetric coding of the 3D mesh, for determining an adaptive distance threshold for merging of boundary vertices, and for determining a list of true boundary vertices in order to reduce incorrect merging of these vertices, thereby reducing cracking artifact near the symmetry plane while preserving true boundaries in the reconstructed mesh.
In some example implementations, a method for encoding a 3D mesh is disclosed. The method may include determining a symmetry plane of the 3D mesh that cuts the 3D mesh into a first half mesh and a second half mesh; encoding the first half mesh to generate an encoded first half mesh; encoding the second half mesh using a reconstruction of the encoded first half mesh reflected by the symmetry plane as a predictor to generate an encoded second half mesh; identifying, among boundary vertex pairs between the first half mesh and the second half mesh, a first subset of boundary vertex pairs containing zero or more actual boundaries in the 3D mesh; generating, based on the encoded first half mesh, the encoded second half mesh, and the first subset of boundary vertex pairs, at least one syntax element associated with merging of one or more of the boundary vertex pairs; and including the encoded first half mesh, the encoded second half mesh, and the at least one syntax element in a bitstream of the 3D mesh.
In the example implementations above, the at least one syntax element comprises an indicator for indicating whether none of the boundary vertex pairs is associated with the actual boundaries in the 3D mesh or for indicating whether all of the boundary vertex pairs are to be merged irrespective of their distances from the symmetry plane.
In any one of the example implementations above, the indicator is generated prior to cutting the 3D mesh into the first half mesh and the second half mesh, and vertex pairs associated with the actual boundaries of the 3D mesh are determined by: determining mesh adjacency information according to vertex connections in forming surfaces of the 3D mesh; determining actual boundary edges that belong only to one face; and identifying vertices of the boundary vertex pairs belonging to the actual boundary edges as the first subset of boundary vertex pairs.
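By way of illustration only, the following Python sketch shows one possible way to find vertices on actual boundaries of a triangle mesh by counting how many faces share each edge; an edge belonging to exactly one face is an actual boundary edge. The function name and data layout are illustrative assumptions, not part of any standard.

```python
from collections import Counter

def find_true_boundary_vertices(faces):
    """Identify vertices lying on actual (true) boundaries of a triangle mesh.

    `faces` is an iterable of (v0, v1, v2) vertex-index triples. An edge that
    belongs to exactly one face is an actual boundary edge; every vertex of
    such an edge is an actual boundary vertex.
    """
    edge_count = Counter()
    for v0, v1, v2 in faces:
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            # Order-independent edge key so (a, b) and (b, a) count as one edge.
            edge_count[(min(a, b), max(a, b))] += 1

    true_boundary_vertices = set()
    for (a, b), count in edge_count.items():
        if count == 1:  # edge shared by only one face -> actual boundary
            true_boundary_vertices.update((a, b))
    return true_boundary_vertices
```

Boundary vertex pairs whose vertices appear in this set would then form the first subset of pairs that should be excluded from merging.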
In any one of the example implementations above, the at least one syntax element indicates a distance value for informing a 3D mesh decoder of a threshold distance to the symmetry plane with respect to merging the one or more of the boundary vertex pairs.
In any one of the example implementations above, the threshold distance is determined by progressively increasing it from an initial threshold distance.
In any one of the example implementations above, the threshold distance is determined by iteratively updating a current threshold distance by: incrementing a factor for scaling the initial threshold distance to obtain the current threshold distance; reconstructing the encoded first half mesh and the encoded second half mesh; merging boundary vertex pairs between the reconstructed first half mesh and the reconstructed second half mesh having distances to the symmetry plane at or smaller than the current threshold distance; determining distortion of the reconstruction of the encoded first half mesh and the encoded second half mesh with boundary vertex merging with respect to the 3D mesh; and using the current threshold distance as the threshold distance until a differential distortion relative to an initial distortion falls below a differential distortion threshold or until the factor reaches a maximum scaling factor.
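By way of illustration only, one possible reading of this iterative search is sketched below in Python. The helpers `reconstruct_half_meshes`, `merge_pairs_within`, and `distortion` are placeholders for the codec's own reconstruction, vertex-merging, and distortion-measurement routines; they are assumptions for the sketch, not a defined API.

```python
def select_merge_threshold(original_mesh, encoded_left, encoded_right,
                           plane, d_init, max_factor, diff_threshold):
    """Progressively scale an initial merge threshold d_init until the change
    in reconstruction distortion becomes negligible or a maximum scaling
    factor is reached, and return the resulting threshold."""
    left, right = reconstruct_half_meshes(encoded_left, encoded_right)
    # Distortion obtained with the initial (unscaled) threshold.
    initial = distortion(
        merge_pairs_within(left, right, plane, d_init), original_mesh)

    threshold = d_init
    factor = 1
    while factor < max_factor:
        factor += 1
        threshold = factor * d_init
        merged = merge_pairs_within(left, right, plane, threshold)
        differential = abs(distortion(merged, original_mesh) - initial)
        if differential < diff_threshold:
            # Enlarging the threshold no longer changes the reconstruction
            # appreciably; keep the current threshold.
            break
    return threshold
```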
In any one of the example implementations above, the at least one syntax element contains a list of the first subset of boundary vertex pairs.
In any one of the example implementations above, the list comprises a set of indices of the first subset of boundary vertex pairs among the boundary vertex pairs between the first half mesh and the second half mesh.
In any one of the example implementations above, the method may further include ordering the boundary vertex pairs between the first half mesh and the second half mesh according to increasing distances to the symmetry plane; and generating the set of indices of the first subset boundary vertex pairs according to the ordered boundary vertex pairs.
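By way of illustration only, the ordering of boundary vertex pairs and the derivation of signaled indices could look like the following Python sketch. The pair layout `(pair_id, left_xyz, right_xyz)` and the function names are illustrative assumptions.

```python
import numpy as np

def distance_to_plane(point, normal, offset):
    """Unsigned distance from a 3D point to the plane n.x + offset = 0,
    assuming `normal` has unit length."""
    return abs(float(np.dot(normal, point)) + offset)

def order_pairs_and_signal_indices(pairs, true_pair_ids, normal, offset):
    """Order boundary vertex pairs by increasing distance to the symmetry
    plane and return that ordering together with the positions, within the
    ordering, of the pairs lying on actual boundaries (to be signaled)."""
    ordered = sorted(pairs,
                     key=lambda p: distance_to_plane(p[1], normal, offset))
    indices = [i for i, p in enumerate(ordered) if p[0] in true_pair_ids]
    return ordered, indices
```

Because the decoder can re-derive the same ordering from the reconstructed halves, only the positions within the ordering need to be signaled.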
In any one of the example implementations above, the at least one syntax element comprises a list of a second subset of boundary vertex pairs to be merged by a decoder.
In any one of the example implementations above, the list of the second subset of boundary vertex pairs is determined by: determining a collection of boundary vertex pairs having distances to the symmetry plane that are at or smaller than the threshold distance; and determining, from the collection of boundary vertex pairs, the second subset of boundary vertex pairs that would produce larger distortion of the reconstructed first half mesh and the reconstructed second half mesh relative to the 3D mesh if not merged.
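By way of illustration only, one greedy way to pick such a second subset is sketched below; `merge_pairs` and `distortion` are again placeholders for the codec's own merging and distortion-measurement routines and are assumptions of the sketch.

```python
def select_pairs_to_merge(candidate_pairs, left, right, plane, original_mesh):
    """From candidate pairs (those within the threshold distance), keep only
    the pairs whose merging reduces distortion of the reconstructed halves
    relative to the original mesh, i.e., the pairs that would otherwise leave
    a visible crack."""
    selected = []
    for pair in candidate_pairs:
        d_without = distortion(merge_pairs(left, right, plane, selected),
                               original_mesh)
        d_with = distortion(merge_pairs(left, right, plane, selected + [pair]),
                            original_mesh)
        if d_with < d_without:   # skipping this pair would cost more distortion
            selected.append(pair)
    return selected
```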
In any one of the example implementations above, the method may further include determining a distance value as a threshold distance to the symmetry plane with respect to merging the one or more of the boundary vertex pairs based on a bit-depth or quantization parameter for encoding a residual between the 3D mesh and a base mesh; and merging the boundary vertex pairs between the first half mesh and the second half mesh having distances that are at or smaller than the threshold distance when reconstructing the 3D mesh.
In any one of the example implementations above, the threshold distance is determined as an increasing function of the bit-depth or the quantization parameter.
In any one of the example implementations above, the threshold distance is determined as a decreasing function of the bit-depth or the quantization parameter.
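By way of illustration only, a QP- or bit-depth-dependent threshold could be derived as in the minimal Python sketch below; both the increasing and the decreasing conventions described above are covered, and all constants are illustrative assumptions rather than normative values.

```python
def threshold_from_qp(qp, base=1.0, step=0.25, increasing=True):
    """Derive a merge-distance threshold from a quantization parameter (or,
    analogously, from a bit-depth). A coarser quantization typically leaves
    larger reconstruction errors near the symmetry plane, which suggests a
    threshold that grows with QP; the decreasing convention is also allowed.
    """
    if increasing:
        return base + step * qp
    return max(0.0, base - step * qp)
```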
In some other example implementations, a method for decoding a 3D mesh is disclosed. The method may include receiving a bitstream of the 3D mesh encoded as a first half mesh and a second half mesh divided by a symmetry plane; generating reconstructed first half mesh and reconstructed second half mesh from the bitstream for the first half mesh and the second half mesh, respectively; extracting at least one syntax element associated with merging of boundary vertex pairs between the first half mesh and the second half mesh; and merging one or more of the boundary vertex pairs between the first half mesh and the second half mesh according to the at least one syntax element.
In the example implementations above, the at least one syntax element comprises at least one of: an indicator for indicating whether none of the boundary vertex pairs is associated with the actual boundaries in the 3D mesh or for indicating whether all of the boundary vertex pairs are to be merged irrespective of their distances from the symmetry plane; or a list of a subset of boundary vertex pairs to be merged.
In any one of the example implementations above, the at least one syntax element indicates a distance value for informing a 3D mesh decoder of a threshold distance to the symmetry plane with respect to merging the one or more of the boundary vertex pairs; and merging the one or more of the boundary vertex pairs is performed only for the one or more of the boundary vertex pairs having distances to the symmetry plane that are at or smaller than the threshold distance.
In any one of the example implementations above, the at least one syntax element comprises a list of a subset of boundary vertex pairs associated with actual boundaries in the 3D mesh; and merging the one or more of the boundary vertex pairs comprises excluding the subset of boundary vertex pairs from being merged.
In any one of the example implementations above, the list of the subset of boundary vertex pairs comprises a set of indices of the subset of boundary vertex pairs among the boundary vertex pairs between the first half mesh and the second half mesh; and the method further comprises: ordering the boundary vertex pairs between the first half mesh and the second half mesh according to increasing distance to the symmetry plane; and identifying, from the ordered boundary vertex pairs and the set of indices, the subset of boundary vertex pairs to be excluded from being merged.
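By way of illustration only, a decoder-side merging that honors such an exclusion list might look like the Python sketch below. For brevity the sketch omits the distance-threshold check of the other implementations; vertex arrays are assumed to be float (N, 3) NumPy arrays, `normal` a unit plane normal, and all names are illustrative assumptions.

```python
import numpy as np

def merge_excluding_true_boundaries(left_vtx, right_vtx, pairs,
                                    excluded_positions, normal, offset):
    """Merge paired boundary vertices onto the symmetry plane while skipping
    the signaled true-boundary pairs.

    `pairs` holds (left_index, right_index) tuples; they are ordered by
    increasing distance to the plane, mirroring the encoder, so the signaled
    `excluded_positions` refer to the same ordering.
    """
    def dist(pair):
        li, _ = pair
        return abs(np.dot(normal, left_vtx[li]) + offset)

    ordered = sorted(pairs, key=dist)
    excluded = set(excluded_positions)
    for pos, (li, ri) in enumerate(ordered):
        if pos in excluded:
            continue                       # true boundary: keep the gap
        mid = 0.5 * (left_vtx[li] + right_vtx[ri])
        merged = mid - (np.dot(normal, mid) + offset) * normal
        left_vtx[li] = merged              # both vertices now coincide on
        right_vtx[ri] = merged             # the symmetry plane
    return left_vtx, right_vtx
```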
In some other example implementations, a method for decoding a 3D mesh is disclosed. The method may include receiving a bitstream of the 3D mesh encoded as a first half mesh and a second half mesh divided by a symmetry plane; generating reconstructed first half mesh and reconstructed second half mesh from the bitstream for the first half mesh and the second half mesh, respectively; determining a distance threshold based on a bit-depth or quantization parameter for encoding a residual between the 3D mesh and a base mesh; and merging one or more of boundary vertex pairs between the first half mesh and the second half mesh according to distances of the boundary vertex pairs to the symmetry plane in comparison to the distance threshold.
Aspects of the disclosure also provide an electronic device or apparatus functioning as an encoder or a decoder and including circuitry configured to carry out any of the method implementations above.
Aspects of the disclosure also provide a non-transitory computer-readable medium for storing computer instructions which, when executed by a computer for 3D mesh processing, cause the computer to perform any one of the method implementations above.
Further features, the nature, and various advantages of the disclosed subject matter will be more apparent from the following detailed description and the accompanying drawings in which:
Throughout this specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. The phrase “in one embodiment” or “in some embodiments” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” or “in other embodiments” as used herein does not necessarily refer to a different embodiment. Likewise, the phrase “in one implementation” or “in some implementations” as used herein does not necessarily refer to the same implementation and the phrase “in another implementation” or “in other implementations” as used herein does not necessarily refer to a different implementation. It is intended, for example, that claimed subject matter includes combinations of exemplary embodiments/implementations in whole or in part.
In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” or “at least one” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a”, “an”, or “the”, again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” or “determined by” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
Technological developments in 3D media processing, such as advances in 3D capture, 3D modeling, and 3D rendering, and the like have promoted the ubiquitous creation of 3D contents across several platforms and devices. Such 3D contents contain information that may be processed to generate various forms of media to provide, for example, immersive viewing/rendering and interactive experience. Applications of 3D contents are abundant, including but not limited to virtual reality, augmented reality, metaverse interactions, gaming, immersive video conferencing, robotics, computer-aided design (CAD), and the like. According to an aspect of this disclosure, in order to improve immersive experience, 3D models are becoming ever more sophisticated, and the creation and consumption of 3D models demand a significant amount of data resources, such as data storage, data transmission resources, and data processing resources.
In comparison to traditional 2-dimensional (2D) contents that are generally represented by datasets in the form of 2D pixel arrays (such as images), 3D contents with three-dimensional full-resolution pixelation may be prohibitively resource intensive and are nevertheless unnecessary in many if not most practical applications. In most 3D immersive applications, according to some aspects of the disclosure, less data intensive representations of 3D contents may be employed. For example, in most applications, only topographical information rather than volumetric information of objects in a 3D scene (either a real-world scene captured by sensors such as LIDAR devices or an animated 3D scene generated by software tools) may be necessary. As such, datasets in more efficient forms may be used to represent 3D objects and 3D scenes. For example, 3D meshes may be used as a type of 3D models to represent immersive 3D contents, such as 3D objects in 3D scenes.
A mesh (alternatively referred to as mesh model) of one or more objects may include a collection of vertices. The vertices may connect to one another to form edges. The edges may further connect to form faces. The faces may further form polygons. 3D surfaces of various objects may be decomposed into, for example, faces and polygons. Each of the vertices, edges, faces, polygons, or surfaces may be associated with various attributes such as color, surface normal, texture, and the like. A normal for a surface may be referred to as the surface normal; and/or the normal for a vertex may be referred to as the vertex normal. The information of how the vertices are connected into edges, faces or polygons may be referred to as connectivity information. The connectivity information is important for uniquely defining components of a mesh since the same set of vertices can form different faces, surfaces, and polygons. In general, a position of a vertex in the 3D space may be represented by its 3D coordinates. A face may be represented by a set of sequentially connected vertices, each associated with a set of 3D coordinates. Likewise, an edge may be represented by two vertices each associated with its 3D coordinates. The vertices, edges, and faces may be indexed in the 3D mesh datasets.
A mesh may be defined and described by a collection of one or more of these fundamental element types. However, not all types of elements above are necessary in order to fully describe a mesh. For example, a mesh may be fully described by using just vertices and their connectivity. For another example, a mesh may be fully described by just using a list of faces and common vertices of faces. As such, a mesh can be of various alternative types described by alternative dataset compositions and formats. Example mesh types include but are not limited to face-vertex meshes, winged-edge meshes, half-edge meshes, quad-edge meshes, corner-table meshes, vertex-vertex meshes, and the like. Correspondingly, a mesh dataset may be stored with information in compliance with alternative file formats with file extensions including but not limited to .raw, .blend, .fbx, .3ds, .dac, .dng, .3dm, .dsf, .dwg, .obj, .ply, .pmd, .stl, .amf, .wrl, .wrz, .x3d, .x3db, .x3dv, .x3dz, .x3dbz, .x3dvz, .c4d, .lwo, .smb, .msh, .mesh, .veg, .z3d, .vtk, .14d, and the like. Attributes for these elements, such as color, surface normal, texture, and the like may be included into a mesh dataset in various manners.
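By way of illustration only, a face-vertex mesh of the kind mentioned above can be captured by a very small data structure, as in the Python sketch below; the class and field names are illustrative and not tied to any particular mesh file format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FaceVertexMesh:
    """A minimal face-vertex mesh: 3D vertex positions plus connectivity
    given as triangular faces that index into the vertex list."""
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    faces: List[Tuple[int, int, int]] = field(default_factory=list)

# A single triangle: three vertices and one face connecting them.
mesh = FaceVertexMesh(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
)
```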
In some implementations, vertices of a mesh may be mapped into a pixelated 2D space, referred to as a UV space. As such, each vertex of the mesh may be mapped to a pixel in the UV space. In some implementations, one vertex may be mapped to more than one pixel in the UV space, for example, a vertex at a boundary may be mapped to two or three pixels in the UV space. Likewise, a face or surface in the mesh may be sampled into a plurality of 3D points that may or may not be among recorded vertices in the mesh, and these plurality of 3D points may be also mapped to pixels in the 2-dimensional UV space. Mapping the vertices and sampled 3D points of faces or surfaces in a mesh into the UV space and the subsequent data analytics and processing in the UV space may facilitate data storage, compression, and coding of the 3D dataset of a mesh or a sequence of meshes. A mapped UV space dataset may be referred to as a UV image, or 2D map, or a 2D image of the mesh.
While the example implementations above focus on a mesh that is static, according to an aspect of the disclosure, 3D meshes may be dynamic. A dynamic mesh, for example, may refer to a mesh where at least one of the components (geometry information, connectivity information, mapping information, vertex attributes and attribute maps) varies with time. As such, a dynamic mesh can be described by a sequence of meshes (also referred to as mesh frames), analogous to a timed sequence of 2D image frames that form a video.
In some example implementations, a dynamic mesh may have constant connectivity information, time varying geometry and time varying vertex attributes. In some other examples, a dynamic mesh can have time varying connectivity information. In some examples, digital 3D content creation tools may be used to generate dynamic meshes with time varying attribute maps and time varying connectivity information. In some other examples, volumetric acquisition/detection/sensing techniques are used to generate dynamic meshes. The volumetric acquisition techniques can generate a dynamic mesh with time varying connectivity information especially under real-time constraints.
A dynamic mesh may require a large amount of data since the dynamic mesh may include a significant amount of information changing over time. However, compression may be performed to take advantage of redundancies within a mesh frame (intra-compression) and between mesh frames (inter-compression). Various mesh compression processes may be implemented to allow efficient storage and transmission of media contents in the mesh representation, particularly for a mesh sequence.
Aspects of the disclosure provide example architectures and techniques for mesh compression. The techniques may be used for various mesh compression including but not limited to static mesh compression, dynamic mesh compression, compression of a dynamic mesh with constant connectivity information, compression of a dynamic mesh with time varying connectivity information, compression of a dynamic mesh with time varying attribute maps, and the like. The techniques may be used in lossy and lossless compression for various applications, such as real-time immersive communications, storage, free viewpoint video, augmented reality (AR), virtual reality (VR), and the like. The applications can include functionalities such as random access and scalable/progressive coding.
While this disclosure explicitly describes techniques and implementations applicable to 3D meshes, the principles underlying the various implementations described herein are applicable to other types of 3D data structures, including but not limited to Point Cloud (PC) data structures. For simplicity, references to 3D meshes below are intended to be general and include other types of 3D representations such as point clouds and other 3D volumetric datasets. Turning first to example architectural level implementations,
simplified block diagram of a communication system (100) according to an example embodiment of the present disclosure. The communication system (100) may include a plurality of terminal devices that can communicate with one another, via, for example, a communication network (150) (alternatively referred to as a network). For example, the communication system (100) may include a pair of terminal devices (110) and (120) interconnected via the network (150). In the example of
In the example of
The streaming system (200) may include a capture or storage subsystem (213). The capture or storage subsystem (213) may include a 3D mesh generator or storage medium (201), e.g., a 3D mesh or point cloud generation tool/software, a graphics generation component, or a point cloud sensor such as a light detection and ranging (LIDAR) system, 3D cameras, 3D scanners, a 3D mesh store, and the like, that generate or provide 3D meshes (202) or point clouds (202) that are uncompressed. In some example implementations, the 3D meshes (202) include vertices of a 3D mesh or 3D points of a point cloud (both referred to as 3D mesh). The 3D meshes (202) are depicted as a bold line to emphasize the corresponding high data volume when compared to the compressed 3D meshes (204) (a bitstream of compressed 3D meshes). The compressed 3D meshes (204) may be generated by an electronic device (220) that includes an encoder (203) coupled to the 3D meshes (202). The encoder (203) can include hardware, software, or a combination thereof to enable or implement aspects of the disclosed subject matter as described in more detail below. The compressed 3D meshes (204) (or bitstream of compressed 3D meshes (204)), depicted as a thin line to emphasize the lower data volume when compared to the stream of uncompressed 3D meshes (202), can be stored in a streaming server (205) for future use. One or more streaming client subsystems, such as client subsystems (206) and (208) in
It is noted that the electronic devices (220) and (230) can include other components (not shown). For example, the electronic device (220) can include a decoder (not shown) and the electronic device (230) can include an encoder (not shown) as well.
In some streaming systems, the compressed 3D meshes (204), (207), and (209) (e.g., bitstreams of compressed 3D meshes) can be compressed according to certain standards. In some examples, as described in further detail below, video coding standards are used to take advantage of redundancies and correlations in the compression of 3D meshes after the 3D mesh is first projected or mapped into 2D representations suitable for video compression. Non-limiting examples of those standards include High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC), and the like, as described in further detail below.
The compressed 3D mesh or sequence of 3D meshes may be generated by an encoder whereas a decoder may be configured to decompress the compressed or coded 3D meshes.
In further detail,
The mesh encoder (400) may receive 3D mesh frames as uncompressed inputs and generate a bitstream corresponding to compressed 3D mesh frames. In some example implementations, the mesh encoder (400) may receive the 3D mesh frames from any source, such as the mesh or point cloud source (201) of
In the example of
In various embodiments in the present disclosure, a module may refer to a software module, a hardware module, or a combination thereof. A software module may include a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal, such as those functions described in this disclosure. A hardware module may be implemented using processing circuitry and/or memory configured to perform the functions described in this disclosure. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. Moreover, each module can be part of an overall module that includes the functionalities of the module. The description here also may apply to the term module and other equivalent terms (e.g., unit).
According to an aspect of the disclosure, and as described above, the mesh encoder (400) converts 3D mesh frames into image-based representations (e.g., 2D maps) along with some non-map meta data (e.g., patch or chart info) that is used to assist converting the compressed 3D mesh back into a decompressed 3D mesh. In some examples, the mesh encoder (400) may convert 3D mesh frames into 2D geometry maps or images, texture maps or images and occupancy maps or images, and then use video coding techniques to encode the geometry images, texture images and occupancy maps into a bitstream along with the meta data and other compressed non-map data. Generally, and as described above, a 2D geometry image is a 2D image with 2D pixels filled with geometry values associated with 3D points projected (the term “projected” is used to mean “mapped”) to the 2D pixels, and a 2D pixel filled with a geometry value may be referred to as a geometry sample. A texture image is a 2D image with pixels filled with texture values associated with 3D points projected to the 2D pixels, and a 2D pixel filled with a texture value may be referred to as a texture sample. An occupancy map is a 2D image with 2D pixels filled with values that indicate occupation or non-occupation by 3D points.
The patch generation module (406) segments a 3D mesh into a set of charts or patches (e.g., a patch is defined as a contiguous subset of the surface described by the 3D mesh or point cloud), which may or may not be overlapping, such that each patch may be described by a depth field with respect to a plane in 2D space (e.g., flattening of the surface such that deeper 3D points on the surface are further away from the center of the corresponding 2D map). In some embodiments, the patch generation module (406) aims at decomposing the 3D mesh into a minimum number of patches with smooth boundaries, while also minimizing the reconstruction error.
The patch info module (404) can collect the patch information that indicates sizes and shapes of the patches. In some examples, the patch information can be packed into a data frame and then encoded by the auxiliary patch info compression module (438) to generate the compressed auxiliary patch information. The auxiliary patch compression may be implemented in various forms, including but not limited to various types of arithmetic coding.
The patch or chart packing module (408) may be configured to map the extracted patches onto a 2D grid of the UV space while minimizing the unused space. In some example implementations, the pixels of the 2D UV space may be granularized into blocks of pixels for mapping of the patches or charts. The block size may be predefined. For example, the block size may be M×M (e.g., 16×16). With such granularity, it may be guaranteed that every M×M block of the 2D UV grid is associated with a unique patch. In other words, each patch is mapped to the 2D UV space with a 2D granularity of M×M. Efficient patch packing can directly impact the compression efficiency either by minimizing the unused space or ensuring temporal consistency. Example implementations of packing of the patches or charts into the 2D UV space are given in further detail below.
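By way of illustration only, a very simple greedy packer that honors the M×M block granularity is sketched below in Python. It assumes every patch bounding box fits within the frame width and ignores rotation, priority ordering, and temporal consistency; all names are illustrative assumptions rather than the codec's actual packing algorithm.

```python
def pack_patches(patch_sizes, frame_width, block=16):
    """Greedy raster-order packing of rectangular patch bounding boxes onto a
    UV grid with M×M (here 16×16) block granularity, so that every occupied
    block belongs to exactly one patch. Returns block-aligned (u, v) origins."""
    blocks_w = frame_width // block
    occupied = []                       # per-row flags, one entry per block
    placements = []
    for w, h in patch_sizes:
        bw = -(-w // block)             # patch footprint in blocks (ceil)
        bh = -(-h // block)
        v = 0
        placed = False
        while not placed:
            while len(occupied) < v + bh:
                occupied.append([False] * blocks_w)
            for u in range(blocks_w - bw + 1):
                if not any(occupied[v + j][u + i]
                           for j in range(bh) for i in range(bw)):
                    for j in range(bh):
                        for i in range(bw):
                            occupied[v + j][u + i] = True
                    placements.append((u * block, v * block))
                    placed = True
                    break
            v += 1
    return placements
```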
The geometry image generation module (410) can generate 2D geometry images associated with geometry of the 3D mesh at given patch locations in the 2D grid. The texture image generation module (412) can generate 2D texture images associated with texture of the 3D mesh at given patch locations in the 2D grid. The geometry image generation module (410) and the texture image generation module (412) essentially exploit the 3D to 2D mapping computed during the packing process above to store the geometry and texture of the 3D mesh as 2D images, as described above. In some implementations, in order to better handle the case of multiple points being projected to the same sample (e.g., the patches overlap in the 3D space of the mesh), the 2D image may be layered. In other words, each patch may be projected onto, e.g., two images, referred to as layers, such that multiple 3D points mapping to the same 2D position can be kept in different layers.
In some example implementations, a geometry image may be represented by a monochromatic frame of width×height (W×H). As such, three geometry images of the 3 luma or chroma channels may be used to represent the 3D coordinates. In some example implementations, a geometry image may be represented by a 2D image having three channels (RGB, YUV, YCrCb, and the like) with a certain color depth (e.g., 8-bit, 12-bit, 16-bit, or the like). As such, one geometry image having the 3 color channels may be used to represent the 3D coordinates.
To generate the texture image, the texture generation procedure exploits the reconstructed/smoothed geometry in order to compute the colors to be associated with the sampled points from the original 3D mesh (see “sampling” of
The occupancy map module (414) may be configured to generate an occupancy map that describes padding information at each unit. For example, as described above, the occupancy image may include a binary map that indicates for each cell of the 2D grid whether the cell belongs to the empty space or to the 3D mesh. In some example implementations, the occupancy map may use binary information to describe for each pixel whether the pixel is padded or not. In some other example implementations, the occupancy map may use binary information to describe for each block of pixels (e.g., each M×M block) whether the block of pixels is padded or not.
The occupancy map generated by the occupancy map module (414) may be compressed using lossless coding or lossy coding. When lossless coding is used, the entropy compression module (434) may be used to compress the occupancy map. When lossy coding is used, the video compression module (432) may be used to compress the occupancy map.
It is noted that the patch packing module (408) may leave some empty spaces between 2D patches packed in an image frame. The image padding modules (416) and (418) may fill the empty spaces (referred to as padding) in order to generate an image frame that may be suited for 2D video and image codecs. The image padding is also referred to as background filling which can fill the unused space with redundant information. In some examples, a well-implemented background filling minimally increases the bit rate while avoiding introducing significant coding distortion around the patch boundaries.
The video compression modules (422), (423), and (432) can encode the 2D images, such as the padded geometry images, padded texture images, and occupancy maps based on a suitable video coding standard, such as HEVC, VVC and the like. In some example implementations, the video compression modules (422), (423), and (432) are individual components that operate separately. It is noted that the video compression modules (422), (423), and (432) can be implemented as a single component in some other example implementations.
In some example implementations, the smoothing module (436) may be configured to generate a smoothed image of the reconstructed geometry image. The smoothed image can be provided to the texture image generation (412). Then, the texture image generation (412) may adjust the generation of the texture image based on the reconstructed geometry images. For example, when a patch shape (e.g. geometry) is slightly distorted during encoding and decoding, the distortion may be taken into account when generating the texture images to correct for the distortion in the patch shape.
In some embodiments, the group dilation (420) is configured to pad pixels around the object boundaries with redundant low-frequency content in order to improve coding gain as well as visual quality of reconstructed 3D mesh.
The multiplexer (424) may be configured to multiplex the compressed geometry image, the compressed texture image, the compressed occupancy map, and the compressed auxiliary patch information into a compressed bitstream.
In the example of
The de-multiplexer (532) may receive and separate the compressed bitstream into compressed texture image, compressed geometry image, compressed occupancy map, and compressed auxiliary patch information.
The video decompression modules (534) and (536) can decode the compressed images according to a suitable standard (e.g., HEVC, VVC, etc.) and output decompressed images. For example, the video decompression module (534) may decode the compressed texture images and output decompressed texture images. The video decompression module (536) may further decode the compressed geometry images and output the decompressed geometry images.
The occupancy map decompression module (538) may be configured to decode the compressed occupancy maps according to a suitable standard (e.g., HEVC, VVC, etc.) and output decompressed occupancy maps.
The auxiliary patch-information decompression module (542) may be configured to decode the compressed auxiliary patch information according to a suitable decoding algorithm and output decompressed auxiliary patch information.
The geometry reconstruction module (544) may be configured to receive the decompressed geometry images, and generate reconstructed 3D mesh geometry based on the decompressed occupancy map and decompressed auxiliary patch information.
The smoothing module (546) may be configured to smooth incongruences at edges of patches. The smoothing procedure may be aimed at alleviating potential discontinuities that may arise at the patch boundaries due to compression artifacts. In some example implementations, a smoothing filter may be applied to the pixels located on the patch boundaries to alleviate the distortions that may be caused by the compression/decompression.
The texture reconstruction module (548) may be configured to determine texture information for points in the 3D meshes based on the decompressed texture images and the smoothed geometry.
The color smoothing module (552) may be configured to smooth incongruences of coloring. Non-neighboring patches in 3D space are often packed next to each other in 2D videos. In some examples, pixel values from non-neighboring patches might be mixed up by the block-based video codec. The goal of color smoothing may be to reduce the visible artifacts that appear at patch boundaries.
The video decoder (610) may include a parser (620) to reconstruct symbols (621) from compressed images, such as the coded video sequence. Categories of those symbols may include information used to manage operation of the video decoder (610). The parser (620) may parse/entropy-decode the coded video sequence being received. The coding of the coded video sequence can be in accordance with a video coding technology or standard, and can follow various principles, including variable length coding, Huffman coding, arithmetic coding with or without context sensitivity, and so forth. The parser (620) may extract from the coded video sequence, a set of subgroup parameters for at least one of the subgroups of pixels in the video decoder, based upon at least one parameter corresponding to the group. Subgroups can include Groups of Pictures (GOPs), pictures, tiles, slices, macroblocks, Coding Units (CUs), blocks, Transform Units (TUs), Prediction Units (PUs) and so forth. The parser (620) may also extract from the coded video sequence information such as transform coefficients, quantizer parameter values, motion vectors, and so forth.
The parser (620) may perform an entropy decoding/parsing operation on the image sequence received from a buffer memory, so as to create symbols (621).
Reconstruction of the symbols (621) can involve multiple different units depending on the type of the coded video picture or parts thereof (such as: inter and intra picture, inter and intra block), and other factors. Which units are involved, and how, may be controlled by the subgroup control information that was parsed from the coded video sequence by the parser (620). The flow of such subgroup control information between the parser (620) and the multiple units below is not depicted for clarity.
Beyond the functional blocks already mentioned, the video decoder (610) can be conceptually subdivided into a number of functional units as described below. In a practical implementation operating under commercial constraints, many of these units interact closely with each other and can, at least partly, be integrated into each other. The conceptual subdivision into the functional units below is made merely for the purpose of describing the disclosed subject matter.
The video decoder (610) may include a scaler/inverse transform unit (651). The scaler/inverse transform unit (651) may receive a quantized transform coefficient as well as control information, including which transform to use, block size, quantization factor, quantization scaling matrices, etc. as symbol(s) (621) from the parser (620). The scaler/inverse transform unit (651) may output blocks comprising sample values that can be input into aggregator (655).
In some cases, the output samples of the scaler/inverse transform (651) can pertain to an intra coded block; that is: a block that is not using predictive information from previously reconstructed pictures, but can use predictive information from previously reconstructed parts of the current picture. Such predictive information can be provided by an intra picture prediction unit (652). In some cases, the intra picture prediction unit (652) may generate a block of the same size and shape of the block under reconstruction, using surrounding already reconstructed information fetched from the current picture buffer (658). The current picture buffer (658) may buffer, for example, partly reconstructed current picture and/or fully reconstructed current picture. The aggregator (655), in some cases, may add, on a per sample basis, the prediction information that the intra prediction unit (652) has generated to the output sample information as provided by the scaler/inverse transform unit (651).
In other cases, the output samples of the scaler/inverse transform unit (651) can pertain to an inter coded, and potentially motion compensated block. In such a case, a motion compensation prediction unit (653) can access reference picture memory (657) to fetch samples used for prediction. After motion compensating the fetched samples in accordance with the symbols (621) pertaining to the block, these samples may be added by the aggregator (655) to the output of the scaler/inverse transform unit (651) (in this case called the residual samples or residual signal) so as to generate output sample information. The addresses within the reference picture memory (657) from where the motion compensation prediction unit (653) fetches prediction samples can be controlled by motion vectors, available to the motion compensation prediction unit (653) in the form of symbols (621) that can have, for example X, Y, and reference picture components. Motion compensation also may include interpolation of sample values as fetched from the reference picture memory (657) when sub-sample exact motion vectors are in use, motion vector prediction mechanisms, and so forth.
The output samples of the aggregator (655) may be subject to various loop filtering techniques in the loop filter unit (656). Video compression technologies may include in-loop filter technologies that are controlled by parameters included in the coded video sequence (also referred to as coded video bitstream) and made available to the loop filter unit (656) as symbols (621) from the parser (620), but may also be responsive to meta-information obtained during the decoding of previous (in decoding order) parts of the coded picture or coded video sequence, as well as responsive to previously reconstructed and loop-filtered sample values.
The output of the loop filter unit (656) may be a sample stream that can be output to a render device as well as stored in the reference picture memory (657) for use in future inter-picture prediction.
Certain coded pictures, once fully reconstructed, may be used as reference pictures for future prediction. For example, once a coded picture corresponding to a current picture is fully reconstructed and the coded picture has been identified as a reference picture (by, for example, the parser (620)), the current picture buffer (658) may become a part of the reference picture memory (657), and a fresh current picture buffer may be reallocated before commencing the reconstruction of the following coded picture.
The video decoder (610) may perform decoding operations according to a predetermined video compression technology in a standard, such as ITU-T Rec. H.265. The coded video sequence may conform to a syntax specified by the video compression technology or standard being used, in the sense that the coded video sequence adheres to both the syntax of the video compression technology or standard and the profiles as documented in the video compression technology or standard. Specifically, a profile may select certain tools as the only tools available for use under that profile from all the tools available in the video compression technology or standard. Also necessary for compliance can be that the complexity of the coded video sequence is within bounds as defined by the level of the video compression technology or standard. In some cases, levels restrict the maximum picture size, maximum frame rate, maximum reconstruction sample rate (measured in, for example megasamples per second), maximum reference picture size, and so on. Limits set by levels can, in some cases, be further restricted through Hypothetical Reference Decoder (HRD) specifications and metadata for HRD buffer management signaled in the coded video sequence.
The video encoder (703) may receive 2D images, such as padded geometry images, padded texture images and the like, and generate compressed images.
According to an example embodiment of this disclosure, the video encoder (703) may code and compress the pictures of the source video sequence (images) into a coded video sequence (compressed images) in real-time or under any other time constraints as required by the application. Enforcing appropriate coding speed is one function of a controller (750). In some embodiments, the controller (750) controls other functional units as described below and is functionally coupled to the other functional units. The coupling is not depicted for clarity. Parameters set by the controller (750) can include rate control related parameters (picture skip, quantizer, lambda value of rate-distortion optimization techniques, . . . ), picture size, group of pictures (GOP) layout, maximum motion vector search range, and so forth. The controller (750) may be configured to have other suitable functions that pertain to the video encoder (703) optimized for a certain system design.
In some example implementations, the video encoder (703) may be configured to operate in a coding loop. As an oversimplified description, in an example, the coding loop may include a source coder (730) (e.g., responsible for creating symbols, such as a symbol stream, based on an input picture to be coded, and a reference picture(s)), and a (local) decoder (733) embedded in the video encoder (703). The decoder (733) may reconstruct the symbols to create the sample data in a similar manner as a (remote) decoder also would create (as any compression between symbols and coded video bitstream is lossless in the video compression technologies considered in the disclosed subject matter). The reconstructed sample stream (sample data) may be input to the reference picture memory (734). As the decoding of a symbol stream leads to bit-exact results independent of decoder location (local or remote), the content in the reference picture memory (734) is also bit exact between the local encoder and remote encoder. In other words, the prediction part of an encoder “sees” as reference picture samples exactly the same sample values as a decoder would “see” when using prediction during decoding. This fundamental principle of reference picture synchronicity (and resulting drift, if synchronicity cannot be maintained, for example because of channel errors) is used in some related arts as well.
The operation of the “local” decoder (733) can be the same as that of a “remote” decoder, such as the video decoder (610), which has already been described in detail above in conjunction with
In various embodiments in the present disclosure, any decoder technology, except the parsing/entropy decoding, that is present in a decoder may also necessarily need to be present, in substantially identical functional form, in a corresponding encoder. For this reason, the disclosed subject matter focuses on decoder operation. The description of encoder technologies may be abbreviated as they are the inverse of the comprehensively described decoder technologies. Only in certain areas is a more detailed description required and provided below.
During operation, in some examples, the source coder (730) may perform motion compensated predictive coding, which codes an input picture predictively with reference to one or more previously coded pictures from the video sequence that were designated as “reference pictures”. In this manner, the coding engine (732) may code differences between pixel blocks of an input picture and pixel blocks of reference picture(s) that may be selected as prediction reference(s) to the input picture.
The local video decoder (733) may decode coded video data of pictures that may be designated as reference pictures, based on symbols created by the source coder (730). Operations of the coding engine (732) may advantageously be lossy processes. When the coded video data may be decoded at a video decoder (not shown in
The predictor (735) may perform prediction searches for the coding engine (732). That is, for a new picture to be coded, the predictor (735) may search the reference picture memory (734) for sample data (as candidate reference pixel blocks) or certain metadata such as reference picture motion vectors, block shapes, and so on, that may serve as an appropriate prediction reference for the new pictures. The predictor (735) may operate on a sample block-by-pixel block basis to find appropriate prediction references. In some cases, as determined by search results obtained by the predictor (735), an input picture may have prediction references drawn from multiple reference pictures stored in the reference picture memory (734).
The controller (750) may manage coding operations of the source coder (730), including, for example, setting of parameters and subgroup parameters used for encoding the video data.
Output of all aforementioned functional units may be subjected to entropy coding in the entropy coder (745). The entropy coder (745) may translate the symbols as generated by the various functional units into a coded video sequence, by losslessly compressing the symbols according to technologies such as Huffman coding, variable length coding, arithmetic coding, and so forth.
The controller (750) may manage operation of the video encoder (703). During coding, the controller (750) may assign to each coded picture a certain coded picture type, which may affect the coding techniques that may be applied to the respective picture. For example, pictures often may be assigned as one of the following picture types:
An Intra Picture (I picture) may be one that may be coded and decoded without using any other picture in the sequence as a source of prediction. Some video codecs allow for different types of intra pictures, including, for example Independent Decoder Refresh (“IDR”) Pictures. A person skilled in the art is aware of those variants of I pictures and their respective applications and features.
A predictive picture (P picture) may be one that may be coded and decoded using intra prediction or inter prediction using at most one motion vector and reference index to predict the sample values of each block.
A bi-directionally predictive picture (B Picture) may be one that may be coded and decoded using intra prediction or inter prediction using at most two motion vectors and reference indices to predict the sample values of each block. Similarly, multiple-predictive pictures can use more than two reference pictures and associated metadata for the reconstruction of a single block.
The video encoder (703) may perform coding operations according to a predetermined video coding technology or standard, such as ITU-T Rec. H.265. In its operation, the video encoder (703) may perform various compression operations, including predictive coding operations that exploit temporal and spatial redundancies in the input video sequence. The coded video data, therefore, may conform to a syntax specified by the video coding technology or standard being used.
A video may be in the form of a plurality of source pictures (images) in a temporal sequence. Intra-picture prediction (often abbreviated to intra prediction) makes use of spatial correlation in a given picture, and inter-picture prediction makes use of the (temporal or other) correlation between the pictures. In an example, a specific picture under encoding/decoding, which is referred to as a current picture, is partitioned into blocks. When a block in the current picture is similar to a reference block in a previously coded and still buffered reference picture in the video, the block in the current picture can be coded by a vector that is referred to as a motion vector. The motion vector points to the reference block in the reference picture, and can have a third dimension identifying the reference picture, in case multiple reference pictures are in use.
In some embodiments, a bi-prediction technique can be used in the inter-picture prediction. According to the bi-prediction technique, two reference pictures, such as a first reference picture and a second reference picture that are both prior in decoding order to the current picture in the video (but may be in the past and future, respectively, in display order) are used. A block in the current picture can be coded by a first motion vector that points to a first reference block in the first reference picture, and a second motion vector that points to a second reference block in the second reference picture. The block can be predicted by a combination of the first reference block and the second reference block.
In various embodiments, the mesh encoder (400) and the mesh decoder (500) above can be implemented with hardware, software, or combination thereof. For example, the mesh encoder (400) and the mesh decoder (500) can be implemented with processing circuitry such as one or more integrated circuits (ICs) that operate with or without software, such as an application specific integrated circuit (ASIC), field programmable gate array (FPGA), and the like. In another example, the mesh encoder (400) and the mesh decoder (500) can be implemented as software or firmware including instructions stored in a non-volatile (or non-transitory) computer-readable storage medium. The instructions, when executed by processing circuitry, such as one or more processors, cause the processing circuitry to perform functions of the mesh encoder (400) and/or the mesh decoder (500).
In some other example implementations, compression of dynamic meshes may be based on decimated base meshes (e.g., encoded by draco), displacement vectors, and motion fields (if applicable), as illustrated by 800 in
In many applications, reflection and other symmetries may be a regularly occurring characteristic of meshes, especially in computer generated meshes. Such symmetries suggest redundant information in the meshes and thus may be utilized to enhance compression of the meshes. The example implementations below are described for meshes having reflection symmetries. The underlying principles described below may also be applied to improve compression efficiency of meshes having other types of symmetries.
In some example implementations, vertices of a 3D mesh characterized by a reflection symmetry may be first divided into a left part (or a left mesh) and a right part (or a right mesh) of a symmetry plane. The term “left” and “right” are merely used to designate the two different sides of the symmetry plane. They may be alternatively referred to as “first” and “second” parts of the symmetry plane. The left part may be encoded by mesh coding while the right part may be encoded by a symmetry prediction and displacement coding. For example, the reconstructed left part, after reflection by the symmetry plane, may be used as a predictor for the right part. The displacement of the vertices between the right part and the predictor may be coded to represent the right part on the encoder side. The decoder, correspondingly, may first decode the left part of the symmetric mesh from the bitstream, then decode the displacement representing the right part, and then generate the right part from the reconstructed left part and the decoded displacement.
In further detail for mesh coding based on symmetry prediction, the encoder may first identify a reflection symmetry plane for the input 3D mesh. Such a symmetry plane may be used by the encoder to cut the input 3D mesh into the left part and the right part. Using the left mesh as the coding reference, vertices of the 3D mesh on or closest to the boundary of the left mesh towards the symmetry plane may be referred to as left boundary vertices. The left mesh may first be coded in the encoder using any mesh vertex coding scheme described above or elsewhere. The coded left mesh may be reconstructed at the encoder, as the decoder would do. The reconstructed left mesh, after reflection with respect to the symmetry plane, may then be used by the encoder as a reference or predictor for the right mesh. As described above, the displacement between the vertices of the right part of the input mesh and the predictor (the reflection of the reconstructed left mesh), which may include much less information than the right part itself due to reduction of redundancy, may then be encoded by the encoder and included into the bitstream of the 3D mesh.
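By way of illustration only, the encoder-side symmetry prediction described above could be sketched as follows in Python. The helpers `split_by_plane`, `encode_half`, and `reconstruct_half` are placeholders for the codec's own splitting, base-mesh encoding, and reconstruction routines, and the split is assumed to return halves with corresponding vertex ordering; these are assumptions of the sketch, not defined interfaces.

```python
import numpy as np

def reflect_across_plane(points, normal, offset):
    """Reflect an (N, 3) array of points across the plane n.x + offset = 0,
    where `normal` has unit length."""
    signed = points @ normal + offset
    return points - 2.0 * signed[:, None] * normal[None, :]

def encode_symmetric(mesh, normal, offset, split_by_plane,
                     encode_half, reconstruct_half):
    """Code the left half with a regular mesh codec, reflect its
    reconstruction across the symmetry plane to predict the right half, and
    keep only the vertex displacements of the right half from the predictor."""
    left, right = split_by_plane(mesh, normal, offset)
    left_bits = encode_half(left)
    left_recon = reconstruct_half(left_bits)
    predictor = reflect_across_plane(left_recon.vertices, normal, offset)
    displacement = right.vertices - predictor   # the residual to be coded
    return left_bits, displacement
```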
The left boundary vertices of the encoded left mesh thus correspond to a set of right boundary vertices of the encoded right mesh, forming boundary vertex pairs. The totality of the left boundary vertices and the right boundary vertices may be further referred to as boundary vertices.
The decoder, correspondingly, after receiving the bitstream of the 3D mesh, may first determine the symmetry plane of the encoded 3D mesh and reconstruct the left part of the 3D mesh. The decoder may further reconstruct the displacement of the right part of the 3D mesh. The reconstructed left part of the 3D mesh may then be reflected by the decoder relative to the symmetry plane to generate the predictor. The reconstructed right part of the 3D mesh may then be obtained from the reconstructed displacement and the predictor.
Because of coding loss/error, the boundary vertex pairs, particularly the ones that lie on the symmetry plane before the symmetry cut, may not always coincide on the symmetry plane after reconstruction. When such vertex pairs end up apart from the symmetry plane after reconstruction due to coding losses/errors, their connectivity to other vertices in forming edges and faces of the reconstructed 3D mesh may generate, at the symmetry plane, artificial cracks that are not present in the original input 3D mesh.
In some example implementations, in order to remove or reduce such artificial cracks (alternatively referred to as cracking artifact), the boundary vertex pairs that became apart from the symmetry plane may be merged after reconstruction at the decoder. Such a merging would involve bringing the coordinates of the two vertices of a boundary pair to coincide on the symmetry plane. For example, a threshold distance from the symmetry plane may be defined. Boundary vertex pairs having vertices that are away from the symmetry plane but within the threshold distance of it may be considered as having coding errors and may be merged to the symmetry plane. This is illustrated in
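Merely as a non-limiting illustration, such threshold-based merging of boundary vertex pairs at the decoder may be sketched in Python as follows; the helper names, and the choice of merging both vertices of a pair to the midpoint of their projections onto the plane, are assumptions of this sketch rather than requirements:

```python
import numpy as np

def merge_boundary_pairs(vertices, boundary_pairs, plane_point, plane_normal,
                         threshold):
    """Snap boundary vertex pairs that lie within `threshold` of the symmetry
    plane onto the plane, so the two halves share the same position there.

    `boundary_pairs` is a list of (left_index, right_index) tuples into
    `vertices`, an (N, 3) array modified in place.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    for left_idx, right_idx in boundary_pairs:
        d_left = abs((vertices[left_idx] - plane_point) @ n)
        d_right = abs((vertices[right_idx] - plane_point) @ n)
        if d_left <= threshold and d_right <= threshold:
            # Project both vertices onto the plane and merge them to a
            # common position (here, the midpoint of the two projections).
            proj_l = vertices[left_idx] - ((vertices[left_idx] - plane_point) @ n) * n
            proj_r = vertices[right_idx] - ((vertices[right_idx] - plane_point) @ n) * n
            merged = 0.5 * (proj_l + proj_r)
            vertices[left_idx] = merged
            vertices[right_idx] = merged
    return vertices
```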
There may be at least two problems associated with the example implementations above that use a fixed or predefined merging distance threshold. As the first problem, such implementations may not always be optimal for all coding configurations and contents. In some situations, the fixed distance threshold may be too large and may produce large distortion from the original 3D mesh. In other cases, a smaller fixed threshold distance may lead to fewer or an insufficient number of boundary vertices being merged, still leaving visible artificial cracks at the symmetry plane as illustrated in
It thus may be desirable to use adaptive merge threshold distances for different situations in order to handle this first problem. For example, a suitable threshold distance for effectively removing the cracking artifact may depend on the quantization parameter (QP) of the input mesh and/or the base mesh. The effective threshold distance may also differ from one mesh to another. Fixed threshold distances for meshes with different characteristics may be inflexible and not fully effective in reducing cracking artifacts for all types of meshes.
As a second problem, opposite to the problem with the fixed merging threshold distance above, some of the reconstructed boundary vertices that are apart from the symmetry plane yet within the threshold distance may nevertheless correspond to real boundaries in the 3D mesh (rather than artificial boundaries introduced by the symmetric cutting and reconstruction) and may thus be mistakenly or incorrectly merged. In such a situation, instead of creating cracking artifacts, actual boundaries may be artificially removed.
In the further disclosure below, example implementations are provided to solve the problems above. These implementations, alone or in combination, provide example manners to determine, signal, derive, and/or use an adaptive threshold distance from the symmetry plane for after-reconstruction merging of boundary vertices, and to determine and/or signal real boundary vertices that should not be merged. The various implementations described below may be used separately or may be combined in any order and may be applied to arbitrary 3D meshes. It is assumed that the meshes are fully or partially symmetric in geometry. Symmetry coding is assumed to use a half mesh (e.g., the left mesh) to predict the remaining mesh (e.g., the right mesh).
In some example implementations, the decoder may be configured to determine, based on signaling from the encoder, when all the boundary vertices of the left and right meshes can be merged irrespective of the merging threshold distance. To enable this, the encoder may analyze the mesh vertices at or near the symmetry plane and determine whether there are any real boundaries in the 3D mesh. If it is determined that the input mesh does not contain any real boundary vertices at or near the symmetry plane, then all the boundary vertices in the left and right meshes after reconstruction may be considered as resulting only from the symmetry cutting. In such cases, all the boundary vertices can be merged irrespective of their distance from the symmetry plane. A syntax element, e.g., a flag, or a predefined value of the syntax element, may be included in the bitstream of the mesh to provide such an indication. For example, a binary flag may be included, with “0” indicating that no real boundary vertices are detected and “1” indicating that there are real boundaries. For another example, a particular value of a syntax element related to the signaling of the threshold distance may be used to indicate whether there are real boundaries. Specifically, a signaled threshold distance of zero may be used to indicate that there are no real boundary vertices and that the cut boundary vertices should be merged irrespective of their distance to the symmetry plane after reconstruction. Such signaling may be provided at various coding levels, e.g., for a particular 3D mesh frame, a portion of the frame, or a group of frames.
In order to determine whether there are real boundary vertices at or near the symmetry plane, the encoder may be configured to first determine the symmetry plane and then perform analysis of the vertices before the symmetry cutting. For example, the mesh adjacency information may be determined by the encoder. The mesh adjacency information may record how vertices are connected to form edges and surfaces. Edges which belong to only one face may be identified as real boundary edges. Then, vertices belonging to the real boundary edges may be identified as real boundary vertices. If no real boundary edges (and thereby no real boundary vertices) are identified or detected in the input mesh, then all the boundary vertices in the left and right meshes may be deemed as due to cutting only, and all of them can be merged. In some example implementations, the search for real boundary vertices may be performed by the encoder only at or close to the symmetry plane.
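A minimal Python sketch of this adjacency-based detection, assuming the faces are triangles given as vertex-index triples, is shown below; an edge that appears in exactly one face is treated as a real boundary edge:

```python
from collections import Counter

def find_real_boundary_vertices(faces):
    """Return the set of vertex indices lying on real boundary edges.

    `faces` is an iterable of triangles given as (v0, v1, v2) index tuples.
    An edge shared by exactly one face is a real boundary edge.
    """
    edge_counts = Counter()
    for v0, v1, v2 in faces:
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            edge_counts[tuple(sorted((a, b)))] += 1  # undirected edge key

    boundary_vertices = set()
    for (a, b), count in edge_counts.items():
        if count == 1:                 # edge belongs to only one face
            boundary_vertices.update((a, b))
    return boundary_vertices
```

If the resulting set is empty (or empty near the symmetry plane), the encoder may signal, as described above, that all boundary vertex pairs can be merged irrespective of their distance to the symmetry plane.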
In some example implementations, the merging threshold distance may be adaptively determined by the encoder and signaled in the bitstream of the 3D mesh. As one example, the adaptive merging threshold distance to the symmetry plane may be determined progressively via an optimization process by the encoder. An example procedure is described below:
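One possible realization of such a progressive procedure is sketched in Python below; the helpers merge_and_reconstruct and measure_distortion are hypothetical placeholders for the encoder's reconstruction loop and for a distortion metric such as D1- or D2-PSNR, and a predefined tolerance delta and maximum scaling factor are assumed:

```python
def find_threshold_factor(base_threshold, max_factor, delta,
                          merge_and_reconstruct, measure_distortion):
    """Progressively scale the merging threshold until the change in the
    distortion metric is no longer within `delta`, or the maximum factor
    is reached.  Returns the selected scaling factor.

    `merge_and_reconstruct(threshold)` and `measure_distortion(mesh)` are
    placeholders for the encoder's reconstruction and metric evaluation.
    """
    best_factor = 1
    prev_distortion = measure_distortion(merge_and_reconstruct(base_threshold))
    factor = 2
    while factor <= max_factor:
        distortion = measure_distortion(
            merge_and_reconstruct(factor * base_threshold))
        # The absolute change is used: distortion may rise slightly even when
        # a crack is filled, since the metric can be insensitive to cracks.
        if abs(distortion - prev_distortion) > delta:
            break                       # deterioration exceeds tolerance: stop
        best_factor = factor
        prev_distortion = distortion
        factor += 1                     # increment of 1; other step sizes possible
    return best_factor
```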
The example implementation above thus progressively increases the distance threshold in multiples of the factor until a deterioration in the distortion metric no longer falls within the predefined delta or until the maximum threshold value has been reached. This progressive determination process allows more and more boundary vertices to be merged while the distortion metric remains tolerable, potentially reducing the presence of artificial cracks in the reconstructed mesh. The absolute value of the distortion change is considered to account for the case where the distortion increases slightly even after filling the crack. This may be due to an insensitivity of the distortion metric to artificial cracks. The final optimal threshold distance (or the scaling factor over the initial or base threshold distance) may be signaled in the bitstream to the decoder. The increment of the scaling factor used in the example progressive process above is “1”; however, other finer or coarser increments (e.g., 0.5, 1.2, 2, 2.5, 3, etc.) may be used.
The effect of the progressive determination of optimal symmetry threshold is illustrated in
In some other example implementations, a process to determine incorrectly merged vertices (real boundary vertices) subject to the optimal symmetry threshold distance above may be performed. As already described above, in case the input mesh itself has real boundary vertices, these vertices from the left mesh may nevertheless be merged with their corresponding vertices from the right mesh if their distances from the symmetry plane are within the optimal threshold. In some situations, the optimal threshold as progressively determined by the encoder may be high enough to lead to some original real boundary vertices from the input mesh itself being incorrectly merged. In some example implementations, these real boundary vertices that are within the adaptively determined optimal threshold distance from the symmetry plane may be identified by the encoder, and their indices may be signaled in the bitstream such that the decoder is able to determine these vertices and avoid merging them even though they are within the optimal threshold distance to the symmetry plane. As such, these indices along with the optimal threshold distance can help the decoder apply the necessary correction and merge, to the extent possible, only the boundary vertices that are due to the symmetry cut.
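A minimal sketch of this identification step, reusing the real boundary vertices obtained from the adjacency analysis above and assuming hypothetical names, is shown below:

```python
import numpy as np

def find_error_vertices(vertices, real_boundary_vertices, plane_point,
                        plane_normal, optimal_threshold):
    """Return indices of real boundary vertices that lie within the optimal
    merging threshold of the symmetry plane (and would otherwise be merged)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    error_vertices = []
    for idx in real_boundary_vertices:
        distance = abs((vertices[idx] - plane_point) @ n)
        if distance <= optimal_threshold:
            error_vertices.append(idx)   # candidate for signaling in the bitstream
    return error_vertices
```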
An example of handling incorrectly merged boundary vertices is shown in
In some further example implementations, in order to reduce the number of bits needed to signal the list of true boundary vertices, the boundary vertices above (including both cut boundary vertices and true boundary vertices) may be ordered from small to large distances to the symmetry plane, and the indices for the list of true boundary vertices that are within the optimal threshold distance to the symmetry plane may be expressed relative to the ordered boundary vertices. In such a manner, if the true boundary vertices are statistically close to the symmetry plane, this would result in signaling of smaller indices (lower signaling magnitudes and lower signaling cost), thereby facilitating a reduction of the number of bits for signaling the list of such true boundary vertices in the bitstream of the mesh. The decoder, correspondingly, after reconstruction of the mesh, may generate the same boundary vertices, reorder them according to their distance to the symmetry plane, look up the true boundary vertices using the indices received in the bitstream from the reordered vertices, and then map them back to the corresponding vertices before reordering.
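Merely as a non-limiting illustration, the distance-based ordering and the index remapping on the encoder and decoder sides may be sketched as follows (names are hypothetical):

```python
import numpy as np

def order_boundary_vertices(vertices, boundary_indices, plane_point, plane_normal):
    """Sort boundary vertex indices by increasing distance to the symmetry plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    distances = np.abs((vertices[boundary_indices] - plane_point) @ n)
    order = np.argsort(distances, kind="stable")
    return [boundary_indices[i] for i in order]

# Encoder side: express each true-boundary vertex as a position in the
# distance-ordered list, which tends to be a small number when such vertices
# lie close to the symmetry plane.
def to_ordered_positions(ordered_boundary, true_boundary_indices):
    position_of = {v: p for p, v in enumerate(ordered_boundary)}
    return sorted(position_of[v] for v in true_boundary_indices)

# Decoder side: recover the original vertex indices from the signaled positions.
def from_ordered_positions(ordered_boundary, signaled_positions):
    return [ordered_boundary[p] for p in signaled_positions]
```

Because both the encoder and the decoder can derive the same distance-based ordering from the reconstructed boundary vertices, only the positions within the ordered list need to be signaled.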
In the example of
In some further example implementations, not all but only a subset of the true boundary vertices within the optimal threshold distance to the symmetry plane may be signaled in the bitstream in order to reduce the signaling cost while still maintaining acceptable distortion or acceptable loss of true boundaries. For example, after the true boundary vertices from the input mesh whose distance from the symmetry plane is less than the optimal threshold distance are determined as described above, a distortion measure including but not limited to the D1-PSNR or D2-PSNR can be used to select the subset of these vertices that cause a large distortion as a result of incorrect merging, and only their indices are included in the bitstream. For true boundary vertices which cause a smaller distortion as a result of incorrect merging, their signaling may be skipped in the bitstream and their merging may be left uncorrected at the decoder. In some further implementations, the bitrate of signaling these indices can be jointly considered along with the distortion while deciding which of the true boundary vertices should be signaled by the encoder and corrected at the decoder. The bitrate consideration and the distortion consideration may be combined in a weighted manner.
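Merely as a non-limiting illustration of such a weighted rate-distortion selection, a Python sketch is given below; distortion_if_merged and bits_to_signal are hypothetical callables standing in for the encoder's distortion estimate (e.g., based on D1/D2-PSNR) and the index signaling cost:

```python
def select_vertices_to_signal(candidate_indices, distortion_if_merged,
                              bits_to_signal, weight, score_threshold):
    """Keep only candidates whose incorrect merging is costly enough to justify
    the signaling bits (a weighted distortion/bitrate trade-off).

    `distortion_if_merged(idx)` and `bits_to_signal(idx)` are placeholder
    callables for the per-vertex distortion estimate and signaling cost.
    """
    selected = []
    for idx in candidate_indices:
        score = distortion_if_merged(idx) - weight * bits_to_signal(idx)
        if score > score_threshold:
            selected.append(idx)         # worth correcting at the decoder
    return selected
```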
In some other example implementations, the merging threshold distance from the symmetry plane may be determined based on one or more other factors, such as the bit-depth of the vertices and/or the quantization parameter (QP) for encoding the vertices of the mesh. For coding of a dynamic mesh, these factors may be associated with the input mesh or the base mesh. The dependency of the merging threshold distance on these parameters may in turn depend on the type of mesh being coded. Such dependency may be predefined so that both the encoder and the decoder can derive the adaptive threshold distance for merging rather than signaling it in the bitstream. For example, the merging threshold distance may be inversely proportional (or of other decreasing relationship) to the QP. For another example, the merging threshold distance may be proportional (or of other increasing relationship) to the QP. For another example, the merging threshold distance may be proportional (or of other increasing relationship) to the bit-depth. For yet another example, the merging threshold distance may be inversely proportional (or of other decreasing relationship) to the bit-depth. The optimal threshold can be determined in other ways, and empirically determined thresholds for different base mesh QPs may be stored and used to merge the left and right meshes by both the encoder and the decoder. Since the input and base mesh QPs are a part of the metadata bitstream, the correct threshold can be determined at the decoder and signaling of the optimal threshold is not required in this case.
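Merely as a non-limiting illustration of such a predefined dependency, the following Python sketch derives a merging threshold from the base mesh QP and the vertex bit-depth; the table values and the bit-depth scaling are illustrative assumptions of this sketch, not empirically determined thresholds:

```python
def derive_merge_threshold(base_mesh_qp, bit_depth, qp_to_threshold=None):
    """Derive the merging threshold from coding parameters known to both the
    encoder and the decoder, so no explicit signaling is needed.

    The table values below are illustrative placeholders only.
    """
    if qp_to_threshold is None:
        # Hypothetical, monotonically increasing mapping from QP to threshold.
        qp_to_threshold = {8: 0.5, 16: 1.0, 24: 2.0, 32: 4.0}
    # Pick the entry for the closest tabulated QP.
    qp_key = min(qp_to_threshold, key=lambda q: abs(q - base_mesh_qp))
    threshold = qp_to_threshold[qp_key]
    # Scale with bit depth so the threshold is expressed in the same quantized
    # coordinate units as the vertices (an illustrative design choice).
    return threshold * (1 << (bit_depth - 10)) if bit_depth >= 10 else threshold

# Example: derive_merge_threshold(base_mesh_qp=24, bit_depth=12) returns 8.0
```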
For the example implementation above involving signaling boundary vertex merging information, an example syntax structure is shown below:
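One possible form of such a syntax structure, consistent with the element descriptions that follow (the structure name, the exact bit widths, and the conditional nesting are placeholders rather than a normative definition), is:

```
boundary_vertex_merge_info( ) {                          Descriptor
    threshold_factor                                     u(n)
    if ( threshold_factor != 0 ) {
        error_vertices_flag                              b(1)
        if ( error_vertices_flag ) {
            number_error_vertices                        u(n)
            for ( i = 0; i < number_error_vertices; i++ )
                error_vertex_indices[ i ]                u(n)
        }
    }
}
```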
In the example syntax structure above, u(n) represents an unsigned integer using n bits, and b(1) represents a 1-bit Boolean.
Specifically, in the example syntax structure above, the syntax element threshold_factor is the multiple to be applied to the initial threshold distance to obtain the optimal symmetry threshold, as described above.
The syntax element threshold_factor, when equal to “0”, may be used to signal the case where there are no real boundary vertices (all boundary vertices are due to the symmetry cutting) and all boundary vertex pairs should be merged, as described above.
The syntax element error_vertices_flag may represent a Boolean flag used to indicate whether a correction needs to be applied at the decoder to fix incorrectly merged vertices. This flag may not be present if threshold_factor=0.
The syntax element number_error_vertices may represent the number of vertices that are within the optimal threshold distance from the symmetry plane but are true boundaries (referred to as error vertices).
The syntax element error_vertex_indices may be a one-dimensional array that stores all the error vertex indices. As described in further detail above, such indices may represent the positions of the error vertices after the boundary vertices are ordered based on their distance to the symmetry plane.
The processes (1300), (1400), and (1500) can be suitably adapted. Step(s) in the processes (1300), (1400), and (1500) can be modified and/or omitted. Additional step(s) can be added. Any suitable order of implementation can be used.
The techniques disclosed in the present disclosure may be used separately or combined in any order. Further, each of the techniques (e.g., methods, embodiments), encoder, and decoder may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits). In some examples, the one or more processors execute a program that is stored in a non-transitory computer-readable medium.
The techniques described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. For example,
The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by one or more computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like.
The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like.
The components shown in
Computer system (1600) may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), olfactory input (not depicted). The human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), video (such as two-dimensional video, three-dimensional video including stereoscopic video).
Input human interface devices may include one or more of (only one of each depicted): keyboard (1601), mouse (1602), trackpad (1603), touch screen (1610), data-glove (not shown), joystick (1605), microphone (1606), scanner (1607), camera (1608).
Computer system (1600) may also include certain human interface output devices. Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste. Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen (1610), data-glove (not shown), or joystick (1605), but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers (1609), headphones (not depicted)), visual output devices (such as screens (1610), including CRT screens, LCD screens, plasma screens, and OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability, some of which may be capable of outputting two-dimensional visual output or more than three-dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted).
Computer system (1600) can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW (1620) with CD/DVD or the like media (1621), thumb-drive (1622), removable hard drive or solid state drive (1623), legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like.
Those skilled in the art should also understand that term “computer readable media” as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals.
Computer system (1600) can also include an interface (1654) to one or more communication networks (1655). Networks can, for example, be wireless, wireline, or optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks including GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks including cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial networks including CANBus, and so forth. Certain networks commonly require external network interface adapters that attach to certain general-purpose data ports or peripheral buses (1649) (such as, for example, USB ports of the computer system (1600)); others are commonly integrated into the core of the computer system (1600) by attachment to a system bus as described below (for example an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, computer system (1600) can communicate with other entities. Such communication can be uni-directional, receive only (for example, broadcast TV), uni-directional send-only (for example CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks. Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above.
Aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core (1640) of the computer system (1600).
The core (1640) can include one or more Central Processing Units (CPU) (1641), Graphics Processing Units (GPU) (1642), specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA) (1643), hardware accelerators for certain tasks (1644), graphics adapters (1650), and so forth. These devices, along with Read-only memory (ROM) (1645), Random-access memory (1646), internal mass storage such as internal non-user accessible hard drives, SSDs, and the like (1647), may be connected through a system bus (1648). In some computer systems, the system bus (1648) can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPUs, and the like. The peripheral devices can be attached either directly to the core's system bus (1648), or through a peripheral bus (1649). In an example, the screen (1610) can be connected to the graphics adapter (1650). Architectures for a peripheral bus include PCI, USB, and the like.
CPUs (1641), GPUs (1642), FPGAs (1643), and accelerators (1644) can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM (1645) or RAM (1646). Transitional data can also be stored in RAM (1646), whereas permanent data can be stored, for example, in the internal mass storage (1647). Fast storage and retrieval from any of the memory devices can be enabled through the use of cache memory, which can be closely associated with one or more CPU (1641), GPU (1642), mass storage (1647), ROM (1645), RAM (1646), and the like.
The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts.
As an example and not by way of limitation, the computer system having architecture (1600), and specifically the core (1640), can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGAs, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media. Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core (1640) that is of a non-transitory nature, such as core-internal mass storage (1647) or ROM (1645). The software implementing various embodiments of the present disclosure can be stored in such devices and executed by the core (1640). A computer-readable medium can include one or more memory devices or chips, according to particular needs. The software can cause the core (1640) and specifically the processors therein (including CPU, GPU, FPGA, and the like) to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM (1646) and modifying such data structures according to the processes defined by the software. In addition, or as an alternative, the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator (1644)), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein. Reference to software can encompass logic, and vice versa, where appropriate. Reference to computer-readable media can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware and software.
While this disclosure has described several exemplary embodiments, there are alterations, permutations, and various substitute equivalents, which fall within the scope of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise numerous systems and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope thereof.
This application is based on and claims the benefit of priority to U.S. Provisional Patent Application No. 63/621,840 filed on Jan. 17, 2024, and entitled “METHOD AND APPARATUS FOR ENCODING DISTANCE THRESHOLD AND ERROR VERTEX INDICES FOR MERGING SYMMETRIC MESHES,” which is herein incorporated by reference in its entirety.
| Number | Date | Country |
|---|---|---|
| 63/621,840 | Jan. 17, 2024 | US |