This disclosure relates generally to compression and decompression of three-dimensional (3D) volumetric content, and more particularly to volumetric content coding using videos and simplified meshes.
Three-dimensional (3D) volumetric content may be generated using images captured by multiple cameras positioned at different camera angles and/or locations relative to an object or scene to be captured. The 3D volumetric content includes attribute information for the object or scene, such as color information (e.g. RGB values), texture information, intensity attributes, reflectivity attributes, or various other attributes. In some circumstances, additional attributes may be assigned, such as a time-stamp when the 3D volumetric content was captured. The 3D volumetric content also includes geometry information for the object or scene, such as depth values for surfaces of the object or depth values for items in the scene. Such 3D volumetric content may make up “immersive media” content, which in some cases may comprise a set of views each having associated spatial information (e.g. depth) and associated attributes. In some circumstances, 3D volumetric content may be generated, for example in software, as opposed to being captured by one or more cameras/sensors. In either case, such 3D volumetric content may include large amounts of data and may be costly and time-consuming to store and transmit.
In some embodiments, attribute information, such as colors, textures, etc., for three-dimensional (3D) volumetric content is encoded using views of the 3D volumetric content that are packed into a 2D atlas. At least some redundant information that is shown in multiple ones of the views is removed, such that the redundant information is not repeated in multiple views included in the atlas. Geometry information for the 3D volumetric content is also generated for the views included in the atlas. For example, a depth map corresponding to the views (with the redundant information omitted) may be generated. However, instead of encoding the depth map using 2D video image frames, mesh-based representations corresponding to portions of the depth map (e.g. depth patch images) are generated and encoded using a mesh-based encoder. Also, the generated mesh-based representations may be simplified by removing edges or vertices from the meshes prior to mesh encoding the simplified mesh-based representations. In some embodiments, a distortion analysis is performed that compares simplified meshes to the corresponding portions of the depth map (e.g. depth patch images) to determine a degree to which the meshes may be simplified such that the simplified meshes vary from the geometries represented by the respective portions of the depth map (e.g. depth patch images) by less than one or more threshold amounts for one or more respective distortion criteria. In some embodiments, the encoding and compression of such 3D volumetric content, as described herein, may be performed at a server or other computing device of an entity that creates or provides the encoded/compressed 3D volumetric content. The encoded/compressed 3D volumetric content may be provided to a decoding device for use in reconstructing the 3D volumetric content at the decoding device. For example, the decoding device may render the 3D volumetric content on a display associated with the decoding device.
In some embodiments, an encoder for encoding 3D volumetric content is implemented via program instructions that, when executed on or across one or more processors (such as of an encoding device), cause the one or more processors to receive images of a three-dimensional (3D) object or scene, wherein the images are captured from a plurality of camera viewing angles or locations for viewing the 3D object or scene. For example, the received images may have been captured by a device comprising cameras for capturing immersive media content and may have been provided to an encoder of the device. The program instructions further cause the one or more processors to generate, based on the received images, an atlas comprising attribute patches for the 3D object or scene and generate, based on the received images, mesh-based representations for respective depth patches corresponding to the attribute patches of the atlas. In some embodiments, the atlas may include a main view of the object or scene and one or more additional views that do not include information already included in the main view or other ones of the views. For example, redundant information may be omitted from subsequent views. The different views (with redundant information omitted) may form patches, wherein each patch has a corresponding attribute patch image and a corresponding geometry patch image. A given attribute patch image comprises attribute values for a portion of the object or scene represented by a given patch that corresponds with a given one of the views included in the atlas. Also, a given geometry patch image comprises geometry information, such as depth values, for the given patch, wherein the depth values represent depth values for the same portion of the object or scene as is represented by the corresponding attribute patch image for the given patch.
For respective ones of the mesh-based representations, the program instructions, when executed on or across the one or more processors, further cause the one or more processors to remove one or more vertices or edges of the respective mesh-based representations to generate simplified versions of the respective mesh-based representations. The program instructions also cause the one or more processors to perform a distortion analysis for the respective simplified versions of the mesh-based representations. If an amount of distortion caused by removing the one or more vertices or edges is less than a threshold amount, a simplified version of the mesh-based representation is selected, and if the amount of distortion caused by removing the one or more vertices or edges is equal to or greater than the threshold amount, the mesh-based representation that has not had at least some of the one or more edges or vertices removed is selected.
In some embodiments, the removal of edges or vertices and the distortion analysis may be iteratively performed until the distortion threshold is reached. Also, in some embodiments, distortion analysis may further be used to select a manner in which the vertices or edges are removed. More generally speaking, removal of an edge or vertex may be an example of a decimation operation. In some embodiments, distortion analysis is used to determine how a decimation operation is to be performed and how many decimation operations are to be performed to simplify the mesh-based representations, wherein the decimation operations are selected taking into account distortion caused by performing the decimation operations.
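For illustration only, a minimal sketch of how a single distortion-guided decimation operation might be chosen and applied is shown below. The edge-collapse cost metric, function names, and data layout are assumptions made for this sketch and are not a normative part of this disclosure; an actual encoder could instead use the depth-map-based distortion analysis described herein.

```python
import numpy as np

def collapse_cost(vertices, v_remove, v_keep):
    # Illustrative cost: how far the removed vertex must move onto the kept vertex.
    return np.linalg.norm(vertices[v_remove] - vertices[v_keep])

def decimate_once(vertices, triangles, threshold):
    """Apply the single cheapest edge collapse if its cost stays under `threshold`."""
    edges = {tuple(sorted((t[i], t[(i + 1) % 3]))) for t in triangles for i in range(3)}
    v_remove, v_keep = min(edges, key=lambda e: collapse_cost(vertices, e[0], e[1]))
    if collapse_cost(vertices, v_remove, v_keep) >= threshold:
        return vertices, triangles, False          # keep the un-decimated mesh
    new_triangles = []
    for tri in triangles:
        tri = [v_keep if v == v_remove else v for v in tri]
        if len(set(tri)) == 3:                     # drop triangles made degenerate by the collapse
            new_triangles.append(tri)
    return vertices, new_triangles, True
```

The boolean returned with the mesh indicates whether a decimation operation was actually applied, which mirrors the selection between a simplified and an un-simplified version described above.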
Once simplified versions of the mesh-based representations are selected based on the distortion analysis, the program instructions cause the one or more processors to provide the selected mesh-based representations that have been simplified based on the distortion analysis and provide the atlas comprising the corresponding attribute patch images. For example, the simplified mesh-based representations may be provided as encoded meshes and the atlas comprising the attribute patch images may be provided as a video image frame that has been video encoded.
In some embodiments, 3D volumetric content may be encoded as described above for the object or scene at a plurality of moments in time. In such embodiments, simplified mesh-based representations may be generated and selected for a group of frames, wherein respective ones of the frames of the group of frames correspond to respective ones of the plurality of moments in time. In such embodiments, different versions of a same mesh-based representation at different ones of the moments in time may be simplified in a same manner for each of the versions at the different ones of the moments in time such that the simplified mesh-based representation has a same connectivity in each of the versions included in the group of frames for the respective moments in time. While different mesh-based representations corresponding to different portions of the object or scene for a same given moment in time may be simplified using different or differently applied decimation operations, same ones of the mesh-based representations having different versions in time across the group of frames are decimated in a same manner, resulting in a same connectivity for the versions of the simplified mesh-based representation across the frames of the group of frames.
In some embodiments, the generation of the atlas comprising the attribute patch images and the generation and selection of the corresponding simplified mesh-based representations may be performed by an entity that provides 3D volumetric content, such as at a server. The mesh encoded simplified mesh-based representations and the atlas comprising corresponding attribute patch images encoded in a video encoding format may be provided to a receiving entity that renders the 3D volumetric content using the provided encoded meshes and video images. For example, a virtual reality display device, augmented reality display device, etc. may render a reconstructed version of the 3D volumetric content using the provided encoded meshes and encoded video image frames.
This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
“Comprising.” This term is open-ended. As used in the appended claims, this term does not foreclose additional structure or steps. Consider a claim that recites: “An apparatus comprising one or more processor units . . . .” Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.).
“Configured To.” Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” language include hardware, for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f), for that unit/circuit/component. Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
“First,” “Second,” etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, a buffer circuit may be described herein as performing write operations for “first” and “second” values. The terms “first” and “second” do not necessarily imply that the first value must be written before the second value.
“Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While in this case, B is a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
As data acquisition and display technologies have become more advanced, the ability to capture three-dimensional (3D) volumetric content, such as immersive video content, etc. has increased. Also, the development of advanced display technologies, such as virtual reality or augmented reality systems, has increased potential uses for 3D volumetric content, such as immersive video, etc. However, 3D volumetric content files are often very large and may be costly and time-consuming to store and transmit. For example, communication of 3D volumetric content, such as volumetric point cloud or immersive video content, over private or public networks, such as the Internet, may require considerable amounts of time and/or network resources, such that some uses of 3D volumetric content, such as real-time uses or on-demand uses, may be limited. Also, storage requirements of 3D volumetric content files may consume a significant amount of storage capacity of devices storing such files, which may also limit potential applications for using 3D volumetric content.
In some embodiments, an encoder may be used to generate a compressed version of three-dimensional volumetric content to reduce costs and time associated with storing and transmitting large 3D volumetric content files. In some embodiments, a system may include an encoder that compresses attribute and/or spatial information of a volumetric point cloud or immersive video content file such that the file may be stored and transmitted more quickly than non-compressed volumetric content and in a manner such that the compressed volumetric content file may occupy less storage space than non-compressed volumetric content. In some embodiments, such compression may enable 3D volumetric content to be communicated over a network in real-time or in near real-time, or on-demand in response to demand from a consumer of the 3D volumetric content.
In some embodiments, a system may include a decoder that receives encoded 3D volumetric content comprising video encoded attribute information and simplified mesh-based representations of geometry information that have been mesh-encoded via a network from a remote server or other storage device that stores or generates the volumetric content files. For example, a 3-D display, a holographic display, or a head-mounted display may be manipulated in real-time or near real-time to show different portions of a virtual world represented by 3D volumetric content. In order to update the 3-D display, the holographic display, or the head-mounted display, a system associated with the decoder may request data from the remote server based on user manipulations (or anticipated user manipulations) of the displays, and the data may be transmitted from the remote server to the decoder in a form of encoded 3D volumetric content (e.g. video encoded attribute patch images and mesh-encoded simplified mesh-based representations). The displays may then be updated with updated data responsive to the user manipulations, such as updated views.
In some embodiments, sensors may capture attribute information for one or more points, such as color attributes, texture attributes, reflectivity attributes, velocity attributes, acceleration attributes, time attributes, modalities, and/or various other attributes. For example, in some embodiments, an immersive video capture system, such as one that may follow MPEG immersive video (MIV) standards, may use a plurality of cameras to capture images of a scene or object from a plurality of viewing angles and/or locations and may further use these captured images to determine spatial information for points or surfaces of the object or scene, wherein the spatial information and attribute information are encoded using video-encoded attribute image patches and mesh-encoded simplified mesh-based representations generated as described herein.
Generating 3D Volumetric Content
In some embodiments, 3D volumetric content that is to be encoded/compressed, as described herein, may be generated from a plurality of images of an object or scene representing multiple views of the object or scene, wherein additional metadata is known about the placement and orientation of the cameras that captured the multiple views.
In some embodiments, metadata is associated with each of the views as shown in
For example, a component of an encoder, such as an atlas constructor 510 (as shown in
For example, source camera parameters 502 may indicate locations and orientations for right side camera 110 and front right camera 106 that both capture images of a portion of a shoulder of person 102. Moreover, the atlas constructor 510 may determine that the cameras 106 and 110 are both capturing images comprising a same surface of the object (e.g. the portion of the person's shoulder). For example, pixel value patterns in the images may be matched to determine that images from both cameras 106 and 110 are capturing the same portion of the person 102's shoulder. Using the source camera parameters 502 and knowing points in the captured images that are located at a same location in 3D space, the atlas constructor 510 may triangulate a location in 3D space of the matching portions of the captured images (e.g. the portion of person 102's shoulder). Based on this triangulation from the known locations and orientations of cameras 106 and 110, the atlas constructor 510 may determine geometry/spatial information for the portion of the object, such as X, Y, and Z coordinates for points included in the matching portion of the person 102's shoulder as shown in
Furthermore, the spatial/geometry information may be represented in the form of a depth map (also referred to herein as a depth patch image). For example, as shown in
In some embodiments, depth maps may only be generated for views that are to be included in an atlas. For example, depth maps may not be generated for redundant views or redundant portions of views that are omitted from the atlas. Though, in some embodiments, image data and source camera parameters of all views may be used to generate the depth maps, the redundant views may not be included in the generated depth maps. For example, because cameras 106 and 110 capture redundant information for the person 102's shoulder, a single depth map may be generated for the two views as opposed to generating two redundant depth maps for the person's shoulder. However, the images captured from cameras 106 and 110 that redundantly view the person's shoulder from different locations/camera viewing angles may be used to determine the spatial information to be included in the single depth map representing the person's shoulder.
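As a hedged illustration of the triangulation idea, the sketch below lifts a matched pixel observed by two calibrated cameras to a 3D point by linear least squares. A simple pinhole projection-matrix camera model is assumed here and may differ from the camera model actually used by a given immersive video pipeline.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 camera projection matrices; uv1, uv2: the matched pixel in each view."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The 3D point is the direction that best satisfies A @ X = 0 (smallest singular vector).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # de-homogenize to X, Y, Z coordinates
```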
At block 302, a view optimizer (such as view optimizer 506 of the encoder shown in
The view optimizer may select one or more main views and tag the selected views as main views. In order to determine a ranking (e.g. an ordered list of the views), at block 304 the view optimizer re-projects the selected one or more main views into remaining ones of the views that were not selected as main views. For example, the front center view (FC) 120 and the back center view (BC) 122 may be selected as main views and may be re-projected into the remaining views, such as views 124-134. At block 306, the view optimizer determines redundant pixels, e.g. pixels in the remaining views that match pixels of the main views that have been re-projected into the remaining views. For example, portions of front right view 128 are redundant with portions of front center view 120, when pixels of front right view 128 are re-projected into front center view 120. In the example, these redundant pixels are already included in the main view (e.g. view 120 from the front center (FC)) and are omitted from the remaining view (e.g. view 128 from the front right (FR)).
The view optimizer (e.g. view optimizer 506) may iteratively repeat this process selecting a next remaining view as a “main view” for a subsequent iteration and repeat the process until no redundant pixels remain, or until a threshold number of iterations have been performed, or another threshold has been met, such as less than X redundant pixels, or less than Y total pixels, etc. For example, at block 308 the re-projection is performed using the selected remaining view as a “main view” to be re-projected into other ones of the remaining views that were not selected as “main views” for this iteration or a previous iteration. Also, at block 312 redundant pixels identified based on the re-projection performed at block 310 are discarded. At block 314, the process (e.g. blocks 308-312) is repeated until a threshold is met (e.g. all remaining views comprise only redundant pixels or have less than a threshold number of non-redundant pixels, etc.). The threshold may also be based on all of the remaining views having empty pixels (e.g. the pixels have already been discarded) or all of the remaining views having less than a threshold number of non-empty pixels.
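A minimal sketch of this iterative pruning loop is shown below. The geometric re-projection test is approximated here by comparing sets of quantized 3D samples per view, and the names and threshold value are illustrative assumptions only.

```python
def prune_views(views, min_new_pixels=1):
    """views: dict mapping view name -> set of quantized 3D samples visible in that view,
    ordered so that the selected main views (e.g. FC and BC) come first."""
    covered = set()
    pruned = {}
    for name, samples in views.items():
        non_redundant = samples - covered        # discard pixels already covered by earlier views
        if len(non_redundant) < min_new_pixels:  # below threshold: view contributes nothing new
            continue
        pruned[name] = non_redundant
        covered |= non_redundant                 # this view acts as a "main view" for later iterations
    return pruned
```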
The ordered list of views having non-redundant information may be provided from the view optimizer (e.g. view optimizer 506) to an atlas constructor of an encoder (e.g. atlas constructor 510 as shown in
The atlas constructor 510 may prune the empty pixels from the respective views (e.g. the pixels for which redundant pixel values were discarded by the view optimizer 506). This may be referred to as “pruning” the views as shown being performed in atlas constructor 510. The atlas constructor 510 may further aggregate the pruned views into patches (such as attribute patch images and geometry patch images) and pack the patch images into respective image frames.
Attribute patch images 404 and 406 for main views 120 and 122 are shown packed in the atlas 402. Also, patch images 408 and 410 comprising non-redundant pixels for views 124 and 126 are shown packed in atlas 402. Additionally, attribute patch images 412, 414, 416, and 418 comprising non-redundant pixels for remaining views 128, 130, 132, and 134 are shown packed in atlas 402.
Atlas 420/depth map 420 comprises corresponding depth patch images 422-436 that correspond to the attribute patch images 404-418 packed into attribute atlas 402.
As further described in regard to
Also, traditional video encoding codecs may smooth values and introduce artifacts in the geometry information (e.g. depth patch images packed in depth map/atlas 420). Thus, even if the rendering device were to have sufficient capacity to render a full quantity of vertices without server-side mesh simplification, distortion or artifacts may be introduced into the rendered geometry at the decoder. In contrast, generating the meshes and using the computational capacity of the encoding device (e.g. server) to strategically simplify the meshes using a distortion analysis may both reduce distortion in the reconstructed geometry and improve rendering speeds at the decoder.
As discussed above, source camera parameters 502 indicating location and orientation information for the source cameras, such as cameras 104-118 as illustrated in
Packed atlas 402 may be provided to encoder 516 which may video encode the attribute patch images and mesh-encode the depth patch images using a mesh generation and mesh simplification as described in
Additionally, atlas constructor 510 generates an atlas parameters list 512, such as bounding box sizes and locations of the patch images in the packed atlas. The atlas constructor 510 also generates a camera parameters list 508. For example, atlas constructor 510 may indicate in the atlas parameters list 512 that an attribute patch image (such as attribute patch image 404) has a bounding box size of M×N and has coordinates with a bottom corner located at the bottom left of the atlas. Additionally, an index value may be associated with the patch image, such as indicating that it is the 1st, 2nd, etc. patch image in the index. Additionally, the camera parameters list 508 may be organized by or include the index entries, such that the camera parameters list includes an entry for index position 1 indicating that the camera associated with that entry is located at position X with orientation Y, such as camera 112 (the front center FC camera that captured view 120 that was packed into patch image 404).
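For illustration only, the two lists might be modeled with data structures along the following lines. The field names and example values are assumptions made for this sketch and are not the MIV bitstream syntax.

```python
from dataclasses import dataclass

@dataclass
class AtlasPatchParams:
    patch_index: int      # e.g. the 1st, 2nd, ... patch image in the index
    width: int            # bounding box size M
    height: int           # bounding box size N
    atlas_x: int          # bottom-left corner of the bounding box in the packed atlas
    atlas_y: int

@dataclass
class CameraParams:
    patch_index: int      # ties this camera entry to the patch with the same index
    position: tuple       # camera location, e.g. (x, y, z)
    orientation: tuple    # camera orientation, e.g. (yaw, pitch, roll)

# e.g. a patch packed at the bottom left of the atlas, captured by the front center camera
atlas_parameters_list = [AtlasPatchParams(patch_index=1, width=512, height=256, atlas_x=0, atlas_y=0)]
camera_parameters_list = [CameraParams(patch_index=1, position=(0.0, 1.6, 2.0), orientation=(0.0, 0.0, 0.0))]
```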
Metadata composer 514 may entropy encode the camera parameter list 508 and entropy encode the atlas parameter list 512 as entropy encoded metadata. The entropy encoded metadata may be included in a compressed bit stream along with video encoded packed image frames comprising attribute patch images that have been encoded via encoder 516 and along with mesh-encoded simplified mesh-based representations that have been encoded via encoder 516.
The compressed bit stream may be provided to a decoder, such as the decoder shown in
In order to generate and simplify the mesh-based representations, encoder 516 may include a mesh depth encoder 700 that includes a mesh generation module 702 that generates mesh-based representations for depth patch images based on spatial information for the depth patch images, such as U,V and pixel value (pv) information for the depth patch image as shown in
Depth map 802 includes depth patch images 804, 806, 808, and 810. Note that for ease of illustration depth map 802 is simpler than the depth map/atlas 420 illustrated in
However, without simplification such an approach may generate a large number of vertices. Thus, returning to
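As a rough, non-normative illustration of such a dense, un-simplified mesh, the sketch below turns every pixel (u, v) of a depth patch image with depth value pv into a vertex and every 2x2 pixel block into two triangles. The orthographic lifting used here is an assumption made for simplicity; an actual pipeline would project using the signaled camera parameters.

```python
import numpy as np

def depth_patch_to_mesh(depth):
    """depth: 2D array of depth values for one depth patch image."""
    h, w = depth.shape
    vertices = np.array([[u, v, depth[v, u]] for v in range(h) for u in range(w)],
                        dtype=float)
    triangles = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u
            triangles.append([i, i + 1, i + w])          # upper-left triangle of the pixel quad
            triangles.append([i + 1, i + w + 1, i + w])  # lower-right triangle of the pixel quad
    return vertices, triangles
```

Even a modest 256x256 depth patch image produces roughly 65,000 vertices this way, which motivates the simplification described next.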
The simplified mesh-based representation resulting from mesh decimation operation module 704 may be evaluated by decimated mesh distortion evaluator 706. In some embodiments, the decimated mesh distortion evaluator may evaluate the decimated mesh based on one or more distortion criteria, such as spatial error, topology error, fairness, etc. Also, different types of distortion may be weighted differently. For example, fairness distortion may be weighted differently than spatial distortion or topology distortion. In some embodiments, spatial distortion may be determined as differences between spatial locations of points included in the respective depth patch images and points falling on surfaces of the simplified version of the mesh-based representation resulting from applying the selected decimation operation. For example, the X, Y, Z values determined from the depth patch images for the points of the depth patch image may be compared to closest points falling on a triangle of the mesh to determine spatial error. As an example, consider a decimation operation that removes a vertex. The X, Y, Z location is known for the point corresponding to the vertex prior to removal, such that after removal a location of the triangle surface at the given X,Y location can be compared to the Z value to determine a spatial error in the depth resulting from applying the vertex removal (e.g. the decimation operation).
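A minimal sketch of this spatial-error check is given below. It assumes, for illustration, that the comparison is done in the depth patch image's (u, v, depth) space using barycentric interpolation over the triangle of the simplified mesh that covers the pixel; the function names are not part of this disclosure.

```python
def surface_depth(tri_uvz, u, v):
    """tri_uvz: three (u, v, z) vertices of the covering triangle; returns interpolated depth at (u, v)."""
    (u0, v0, z0), (u1, v1, z1), (u2, v2, z2) = tri_uvz
    det = (v1 - v2) * (u0 - u2) + (u2 - u1) * (v0 - v2)
    a = ((v1 - v2) * (u - u2) + (u2 - u1) * (v - v2)) / det   # barycentric weight of vertex 0
    b = ((v2 - v0) * (u - u2) + (u0 - u2) * (v - v2)) / det   # barycentric weight of vertex 1
    c = 1.0 - a - b                                           # barycentric weight of vertex 2
    return a * z0 + b * z1 + c * z2

def spatial_error(tri_uvz, u, v, pv):
    # Difference between the depth-map pixel value and the simplified mesh surface at that pixel.
    return abs(surface_depth(tri_uvz, u, v) - pv)
```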
In some embodiments, topology error may be determined as deviations in topology between the respective mesh-based representation and the simplified version of the mesh-based representation. For example, the topology of the mesh-based representation prior to performing the decimation operation can be compared to the resulting mesh after applying the decimation operation and differences in topology can be determined.
In some embodiments, fairness may be determined as deviations in polygon shape and polygon normal vector orientation between the respective mesh-based representation and the simplified version of the mesh-based representation.
In order to evaluate these different types of distortion, decimated mesh distortion evaluator 706 includes spatial error evaluator 708, topology error evaluator 710, and fairness error evaluator 712. Additionally, in some embodiments decimated mesh distortion evaluator 706 includes error weighting and/or error threshold evaluator 714. For example, the error weighting and/or error threshold evaluator 714, may weigh the different types of errors differently to determine a composite error score that is compared to an error/distortion threshold or may evaluate each type of error against a separate error/distortion threshold for that type of error, or may both evaluate a composite error and individual types of error against respective error/distortion thresholds. Mesh depth encoder 700 also includes mesh/decimated mesh selection module 716 which may select a simplified mesh-based representation upon which one or more rounds of decimation operations and evaluations have been performed. For example a most simplified version of the mesh-based representation that does not violate any (or specified ones) of the distortion thresholds may be selected as the selected simplified version of the mesh-based representation that is to be mesh encoded and included in the bit stream.
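For example, the weighting and threshold check might be sketched as follows, with the weights and limits shown being placeholder values for illustration rather than values specified by this disclosure.

```python
def passes_distortion_check(spatial, topology, fairness,
                            weights=(1.0, 0.5, 0.25),
                            per_type_limits=(2.0, 1.0, 1.0),
                            composite_limit=2.5):
    """Return True if the decimated mesh stays within both per-type and composite limits."""
    w_spatial, w_topology, w_fairness = weights
    composite = w_spatial * spatial + w_topology * topology + w_fairness * fairness
    per_type_ok = all(error < limit for error, limit in
                      zip((spatial, topology, fairness), per_type_limits))
    return per_type_ok and composite < composite_limit
```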
In some embodiments, mesh decimation operation module 704, decimated mesh distortion evaluator 706, and mesh/decimated mesh selection module 716 may decimate and evaluate meshes included in a group of frames as a group, wherein a same set of decimation operations is performed for a given mesh-based representation repeated in each of the frames of the group of frames. Also, in some embodiments, the distortion evaluation may ensure that the distortion thresholds are not exceeded for any of the frames of the group of frames when the selected decimation operations are performed. Thus, the resulting simplified meshes may have a same connectivity across the group of frames. This may improve the mesh encoding because one set of connectivity information may be signaled for the group of frames as opposed to signaling different connectivity information for each frame of the group of frames.
At block 1102, an encoding computing device (e.g. encoder) receives images of a 3D object or scene captured from a plurality of camera angles and/or camera locations. For example, the encoder illustrated in
At block 1108, the encoder selects a first (or next) mesh-based representation to evaluate in order to determine if the mesh-based representation can be simplified without introducing distortion that exceeds one or more distortion thresholds for one or more distortion criteria, such as distortion thresholds for spatial distortion, topology distortion, fairness distortion, a composite distortion measurement, etc.
As part of performing the simplification/distortion evaluation, at block 1110 the encoder performs a first decimation operation to simplify the selected mesh-based representation being evaluated. For example, the encoder may remove one or more vertices or collapse one or more edges of the selected mesh-based representation to generate a simplified version of the mesh-based representation. At block 1112, the encoder performs a distortion analysis by comparing the simplified version of the mesh-based representation to depth values of the corresponding depth patch image that corresponds with the selected mesh-based representation being evaluated. Also, the encoder may compare the simplified mesh-based representation to a prior version of the mesh-based representation without the decimation operation applied to determine differences (e.g. distortion) introduced by applying the decimation operation.
For example, the encoder may determine spatial distortion by comparing depth values of a surface of the mesh at a given location on the mesh to a depth value for a corresponding pixel in the depth patch image. Said another way, a pixel in the depth patch image with coordinates (U,V) and pixel value pv representing a depth may be compared to a point on the surface located at location X,Y, wherein X,Y correspond to pixel U,V of the depth patch image projected into 3D space. Furthermore, the depth value Z at point X,Y may be compared to the corresponding depth value pv of the depth map pixel that is projected into 3D space. In this way, a difference in the depth values at point X,Y,Z may be determined by comparing the depth value at that point in the simplified mesh-based representation to the depth value that would have resulted if a vertex with coordinates X,Y,Z had been placed at that point location, wherein the vertex with coordinates X,Y,Z is generated by projecting depth map pixel (U,V, pv) into 3D space. Additionally, or alternatively, other distortion analysis may be performed and the determined levels of distortion compared to a corresponding distortion threshold for the other types of distortion analysis. For example, a topology distortion analysis or a fairness distortion analysis may alternatively or additionally be performed at block 1112.
At block 1114, the encoder determines if distortion introduced due to performing the decimation operation as determined via the distortion analysis performed at block 1112 exceeds one or more corresponding distortion thresholds for the respective type of distortion. If so, at block 1118 an earlier version of the mesh-based representation without the decimation operation that resulted in excessive distortion is selected. For example, a prior version of the mesh-based representation with fewer decimation operations applied, or that has not been decimated, is selected as opposed to the decimated version that resulted in excessive distortion.
If the distortion threshold is not exceeded, at block 1116 the encoder selects the simplified version of the mesh-based representation with the decimation operation applied. Note that as shown in further detail in
At block 1120, the encoder determines whether there is another mesh-based representation to evaluate; if so, the process reverts to block 1108 and is repeated for the next mesh-based representation. In some embodiments, evaluation of different ones of the mesh-based representations may be performed in parallel, such that the encoder does not need to complete the evaluation of a first mesh-based representation before beginning to evaluate a next mesh-based representation.
At block 1122 the selected mesh-based representations or selected simplified mesh-based representations are mesh encoded. At block 1124, the encoder provides the mesh encoded mesh-based representations in an output bit stream. Also, at block 1126 the encoder provides the attribute patch images, which may be provided as a video encoded atlas comprising the attribute patch images that is also included in the output bit stream.
At block 1202, a decoding computer device (e.g. decoder) receives encoded meshes corresponding to portions of a 3D object or scene (e.g. the decoder may receive the mesh-encoded simplified mesh-based representations provided by the encoder at block 1124). At block 1204, the decoder also receives camera view metadata and atlas metadata, such as a camera parameter list 508 and an atlas parameter list 512 (as shown in
At block 1208, the decoder generates sub-meshes each corresponding to a portion of the 3D object or scene, wherein the sub-meshes can be combined into a larger mesh representing the whole 3D object or scene. At block 1210, the decoder renders the 3D object or scene by projecting the attribute values of the attribute patch images onto the corresponding sub-meshes and further merges the sub-meshes to form the larger mesh representing the 3D object or scene.
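A minimal sketch of the sub-mesh merging step is shown below, assuming each sub-mesh is represented as a vertex list plus a local triangle index list; attribute projection and seam handling are omitted, and the representation is an assumption made for illustration.

```python
def merge_sub_meshes(sub_meshes):
    """sub_meshes: list of (vertices, triangles) pairs, where vertices are (x, y, z) tuples
    and triangles are triples of vertex indices local to each sub-mesh."""
    merged_vertices, merged_triangles = [], []
    for vertices, triangles in sub_meshes:
        offset = len(merged_vertices)              # re-base local indices into the merged mesh
        merged_vertices.extend(vertices)
        merged_triangles.extend([[i + offset for i in tri] for tri in triangles])
    return merged_vertices, merged_triangles
```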
In some embodiments, the encoding process as described in
For example, at block 1302 the encoder generates multiple mesh-based representations for multiple sets of depth patch images representing the 3D object or scene at multiple moments in time. At block 1304, the encoder groups the multiple mesh-based representations and associated attribute patch images into a group of frames. At block 1306, the encoder applies a consistent set of one or more decimation operations to the mesh-based representations of the group of frames corresponding to the depth patch images representing the 3D object or scene at the different moments in time.
In a similar manner, a decoder may receive a bit stream comprising encoded meshes and encoded attribute patch images for the group of frames and may reconstruct the 3D object or scene at the different moments in time. Because consistent decimation operations are applied, the encoded meshes received by the decoder may have consistent connectivity across the frames of the group of frames. Thus, the decoder may take advantage of this property of a group of frames (GoF) to accelerate reconstruction of the meshes. Also in some embodiments, less information may be signaled (than would be the case if GoFs were not used) because the connectivity information for the encoded meshes does not need to be repeated for each moment in time.
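As an illustration of the consistent-connectivity idea, one recorded sequence of decimation operations could be replayed on every time-version of a patch mesh in the group of frames. The apply_collapse helper passed in below is assumed (for example, one like the decimate_once sketch above) and is not defined by this disclosure.

```python
def simplify_group_of_frames(frame_meshes, collapse_sequence, apply_collapse):
    """frame_meshes: list of (vertices, triangles) pairs, one per moment in time, all sharing
    the same initial connectivity. Returns simplified meshes with identical connectivity."""
    simplified = []
    for vertices, triangles in frame_meshes:
        for v_remove, v_keep in collapse_sequence:   # identical operations applied per frame
            vertices, triangles = apply_collapse(vertices, triangles, v_remove, v_keep)
        simplified.append((vertices, triangles))
    return simplified
```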
In some embodiments, the mesh simplification and distortion analysis process as described in blocks 1110 through 1118 of
For example, following block 1108, the encoder may reach block 1402 and may identify points or vertices corresponding to a depth discontinuity in the depth patch image or the mesh-based representation being evaluated. At block 1404 the encoder may determine if the depth discontinuity is greater than a threshold level of discontinuity. Depending on whether or not the depth discontinuity is greater or less than the threshold level of depth discontinuity, the encoder may apply different sets of distortion thresholds and weightings in the distortion analysis, as shown in
At block 1410, the encoder applies a decimation operation to the mesh-based representation being evaluated for simplification. At block 1412 topology distortion is determined as a result of the applied decimation operation, at block 1414 spatial distortion is determined as a result of the applied decimation operation, and at block 1416 fairness distortion is determined as a result of the applied decimation operation. At block 1418 a first set of weighting factors is applied to weight the distortions determined at blocks 1412, 1414, and 1416.
At block 1420, the encoder determines if the weighted composite distortion is less than a 1st distortion threshold. If so, another decimation operation is applied at 1410 and updated distortions are determined and weighted. If not, at block 1422 the encoder determines whether the weighted composite distortion is greater than a 2nd distortion threshold. If so, the prior version of the mesh-based representation without the most recent decimation operation applied is selected; if not, the most recent version with the latest decimation operation applied is selected. In this way, the given mesh-based representation is decimated such that the 1st distortion threshold is exceeded, but not so much that the 2nd distortion threshold is exceeded. Thus, the mesh-based representation is simplified such that introduced distortion is within an acceptable range of distortion bound by the 1st and 2nd distortion thresholds.
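A hedged sketch of this two-threshold control flow follows; the decimate and composite_distortion callables are assumptions standing in for blocks 1410 through 1418.

```python
def decimate_into_band(mesh, decimate, composite_distortion, t1, t2, max_steps=1000):
    """Keep decimating while distortion stays below t1; if a step overshoots t2, roll back one step."""
    prior = mesh
    for _ in range(max_steps):
        candidate = decimate(prior)            # apply one more decimation operation (block 1410)
        d = composite_distortion(candidate)    # weighted composite of blocks 1412-1418
        if d < t1:                             # still under the 1st threshold: keep simplifying
            prior = candidate
            continue
        return prior if d > t2 else candidate  # over the 2nd threshold: select the prior version
    return prior
```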
A similar process is carried out in
Various embodiments of an encoder or decoder, as described herein may be executed in one or more computer systems 1500, which may interact with various other devices. Note that any component, action, or functionality described above with respect to
In various embodiments, computer system 1500 may be a uniprocessor system including one processor 1510, or a multiprocessor system including several processors 1510 (e.g., two, four, eight, or another suitable number). Processors 1510 may be any suitable processor capable of executing instructions. For example, in various embodiments one or more of processors 1510 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. Also, in some embodiments, one or more of processors 1510 may include additional types of processors, such as graphics processing units (GPUs), application specific integrated circuits (ASICs), etc. In multiprocessor systems, each of processors 1510 may commonly, but not necessarily, implement the same ISA. In some embodiments, computer system 1500 may be implemented as a system on a chip (SoC). For example, in some embodiments, processors 1510, memory 1520, I/O interface 1530 (e.g. a fabric), etc. may be implemented in a single SoC comprising multiple components integrated into a single chip. For example an SoC may include multiple CPU cores, a multi-core GPU, a multi-core neural engine, cache, one or more memories, etc. integrated into a single chip. In some embodiments, an SoC embodiment may implement a reduced instruction set computing (RISC) architecture, or any other suitable architecture.
System memory 1520 may be configured to store compression or decompression program instructions 1522 and/or sensor data accessible by processor 1510. In various embodiments, system memory 1520 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions 1522 may be configured to implement an image sensor control application incorporating any of the functionality described above. In some embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 1520 or computer system 1500. While computer system 1500 is described as implementing the functionality of functional blocks of previous Figures, any of the functionality described herein may be implemented via such a computer system.
In one embodiment, I/O interface 1530 may be configured to coordinate I/O traffic between processor 1510, system memory 1520, and any peripheral devices in the device, including network interface 1540 or other peripheral interfaces, such as input/output devices 1550. In some embodiments, I/O interface 1530 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1520) into a format suitable for use by another component (e.g., processor 1510). In some embodiments, I/O interface 1530 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard, the Universal Serial Bus (USB) standard, IEEE 1394 serial bus standard, etc. for example. In some embodiments, the function of I/O interface 1530 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 1530, such as an interface to system memory 1520, may be incorporated directly into processor 1510.
Network interface 1540 may be configured to allow data to be exchanged between computer system 1500 and other devices attached to a network 1585 (e.g., carrier or agent devices) or between nodes of computer system 1500. Network 1585 may in various embodiments include one or more networks including but not limited to Local Area Networks (LANs) (e.g., an Ethernet or corporate network), Wide Area Networks (WANs) (e.g., the Internet), wireless data networks, some other electronic data network, or some combination thereof. In various embodiments, network interface 1540 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
Input/output devices 1550 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or accessing data by one or more computer systems 1500. Multiple input/output devices 1550 may be present in computer system 1500 or may be distributed on various nodes of computer system 1500. In some embodiments, similar input/output devices may be separate from computer system 1500 and may interact with one or more nodes of computer system 1500 through a wired or wireless connection, such as over network interface 1540.
Those skilled in the art will appreciate that computer system 1500 is merely illustrative and is not intended to limit the scope of embodiments. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, Internet appliances, PDAs, wireless phones, tablets, wearable devices (e.g. head-mounted displays, virtual reality displays, augmented reality displays, etc.). Computer system 1500 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computer system 1500 may be transmitted to computer system 1500 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include a non-transitory, computer-readable storage medium or memory medium such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc. In some embodiments, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link.
The methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of the blocks of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. The various embodiments described herein are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the example configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of embodiments as defined in the claims that follow.
This application claims benefit of priority to U.S. Provisional Application Ser. No. 63/167,519, entitled “3D Volumetric Content Encoding Using 2D Videos and Simplified 3D Meshes,” filed Mar. 29, 2021, which is incorporated herein by reference in its entirety.
20200020133 | Najaf-Zadeh et al. | Jan 2020 | A1 |
20200021847 | Kim et al. | Jan 2020 | A1 |
20200027248 | Verschaeve | Jan 2020 | A1 |
20200043220 | Mishaev | Feb 2020 | A1 |
20200045344 | Boyce et al. | Feb 2020 | A1 |
20200104976 | Mammou et al. | Apr 2020 | A1 |
20200105024 | Mammou et al. | Apr 2020 | A1 |
20200107022 | Ahn et al. | Apr 2020 | A1 |
20200107048 | Yea | Apr 2020 | A1 |
20200111237 | Tourapis et al. | Apr 2020 | A1 |
20200137399 | Li et al. | Apr 2020 | A1 |
20200151913 | Budagavi | May 2020 | A1 |
20200153885 | Lee et al. | May 2020 | A1 |
20200195946 | Choi | Jun 2020 | A1 |
20200204808 | Graziosi | Jun 2020 | A1 |
20200217937 | Mammou et al. | Jul 2020 | A1 |
20200219285 | Faramarzi et al. | Jul 2020 | A1 |
20200219288 | Joshi | Jul 2020 | A1 |
20200219290 | Tourapis et al. | Jul 2020 | A1 |
20200228836 | Schwarz et al. | Jul 2020 | A1 |
20200244993 | Schwarz et al. | Jul 2020 | A1 |
20200260063 | Hannuksela | Aug 2020 | A1 |
20200273208 | Mammou et al. | Aug 2020 | A1 |
20200273258 | Lasserre et al. | Aug 2020 | A1 |
20200275129 | Deshpande | Aug 2020 | A1 |
20200279435 | Kuma | Sep 2020 | A1 |
20200286261 | Faramarzi et al. | Sep 2020 | A1 |
20200288171 | Hannuksela et al. | Sep 2020 | A1 |
20200294271 | Ilola | Sep 2020 | A1 |
20200302571 | Schwartz | Sep 2020 | A1 |
20200302578 | Graziosi | Sep 2020 | A1 |
20200302621 | Kong | Sep 2020 | A1 |
20200302651 | Flynn | Sep 2020 | A1 |
20200302655 | Oh | Sep 2020 | A1 |
20200359035 | Chevet | Nov 2020 | A1 |
20200359053 | Yano | Nov 2020 | A1 |
20200366941 | Sugio et al. | Nov 2020 | A1 |
20200374559 | Fleureau et al. | Nov 2020 | A1 |
20200380765 | Thudor et al. | Dec 2020 | A1 |
20200396489 | Flynn | Dec 2020 | A1 |
20200413096 | Zhang | Dec 2020 | A1 |
20210005006 | Oh | Jan 2021 | A1 |
20210006805 | Urban et al. | Jan 2021 | A1 |
20210006833 | Tourapis et al. | Jan 2021 | A1 |
20210012536 | Mammou et al. | Jan 2021 | A1 |
20210012538 | Wang | Jan 2021 | A1 |
20210014293 | Yip | Jan 2021 | A1 |
20210021869 | Wang | Jan 2021 | A1 |
20210027505 | Yano et al. | Jan 2021 | A1 |
20210029381 | Zhang et al. | Jan 2021 | A1 |
20210056732 | Han | Feb 2021 | A1 |
20210074029 | Fleureau | Mar 2021 | A1 |
20210084333 | Zhang | Mar 2021 | A1 |
20210090301 | Mammou et al. | Mar 2021 | A1 |
20210097723 | Kim et al. | Apr 2021 | A1 |
20210097725 | Mammou et al. | Apr 2021 | A1 |
20210097726 | Mammou et al. | Apr 2021 | A1 |
20210099701 | Tourapis et al. | Apr 2021 | A1 |
20210103780 | Mammou et al. | Apr 2021 | A1 |
20210104014 | Kolb, V | Apr 2021 | A1 |
20210104073 | Yea et al. | Apr 2021 | A1 |
20210104075 | Mammou et al. | Apr 2021 | A1 |
20210105022 | Flynn et al. | Apr 2021 | A1 |
20210105493 | Mammou et al. | Apr 2021 | A1 |
20210105504 | Hur et al. | Apr 2021 | A1 |
20210112281 | Wang | Apr 2021 | A1 |
20210118190 | Mammou et al. | Apr 2021 | A1 |
20210119640 | Mammou et al. | Apr 2021 | A1 |
20210142522 | Li | May 2021 | A1 |
20210150765 | Mammou et al. | May 2021 | A1 |
20210150766 | Mammou et al. | May 2021 | A1 |
20210150771 | Huang | May 2021 | A1 |
20210166432 | Wang | Jun 2021 | A1 |
20210166436 | Zhang | Jun 2021 | A1 |
20210168386 | Zhang | Jun 2021 | A1 |
20210183112 | Mammou et al. | Jun 2021 | A1 |
20210185331 | Mammou et al. | Jun 2021 | A1 |
20210195162 | Chupeau et al. | Jun 2021 | A1 |
20210201541 | Lasserre | Jul 2021 | A1 |
20210203989 | Wang | Jul 2021 | A1 |
20210211724 | Kim et al. | Jul 2021 | A1 |
20210217139 | Yano | Jul 2021 | A1 |
20210217203 | Kim et al. | Jul 2021 | A1 |
20210217206 | Flynn | Jul 2021 | A1 |
20210218969 | Lasserre | Jul 2021 | A1 |
20210218994 | Flynn | Jul 2021 | A1 |
20210233281 | Wang et al. | Jul 2021 | A1 |
20210248784 | Gao | Aug 2021 | A1 |
20210248785 | Zhang | Aug 2021 | A1 |
20210256735 | Tourapis et al. | Aug 2021 | A1 |
20210258610 | Iguchi | Aug 2021 | A1 |
20210264640 | Mammou et al. | Aug 2021 | A1 |
20210264641 | Iguchi | Aug 2021 | A1 |
20210266597 | Kim et al. | Aug 2021 | A1 |
20210281874 | Lasserre | Sep 2021 | A1 |
20210295569 | Sugio | Sep 2021 | A1 |
20210319593 | Flynn | Oct 2021 | A1 |
20210383576 | Olivier | Dec 2021 | A1 |
20210398352 | Tokumo | Dec 2021 | A1 |
20210400280 | Zaghetto | Dec 2021 | A1 |
20210407147 | Flynn | Dec 2021 | A1 |
20210407148 | Flynn | Dec 2021 | A1 |
20220020211 | Vytyaz | Jan 2022 | A1 |
20220030258 | Zhang | Jan 2022 | A1 |
20220084164 | Hur | Mar 2022 | A1 |
20220101555 | Zhang | Mar 2022 | A1 |
20220116659 | Pesonen | Apr 2022 | A1 |
20220164994 | Joshi | May 2022 | A1 |
20220239956 | Tourapis | Jul 2022 | A1 |
20220383448 | Valdez Balderas | Dec 2022 | A1 |
20220405533 | Mammou et al. | Dec 2022 | A1 |
20230169658 | Rhodes | Jun 2023 | A1 |
Number | Date | Country |
---|---|---|
309618 | Oct 2019 | CA |
101198945 | Jun 2008 | CN |
10230618 | Jan 2012 | CN |
102428698 | Apr 2012 | CN |
102630011 | Aug 2012 | CN |
103329524 | Sep 2013 | CN |
103366006 | Oct 2013 | CN |
103944580 | Jul 2014 | CN |
104156972 | Nov 2014 | CN |
104408689 | Mar 2015 | CN |
105261060 | Jan 2016 | CN |
105818167 | Aug 2016 | CN |
106651942 | May 2017 | CN |
106846425 | Jun 2017 | CN |
107155342 | Sep 2017 | CN |
108632607 | Oct 2018 | CN |
1745442 | Jan 2007 | EP |
2533213 | Dec 2012 | EP |
3429210 | Jan 2019 | EP |
3496388 | Jun 2019 | EP |
3614674 | Feb 2020 | EP |
3751857 | Dec 2020 | EP |
2013111948 | Jun 2013 | JP |
200004506 | Jan 2000 | WO |
2008129021 | Oct 2008 | WO |
2013022540 | Feb 2013 | WO |
2018050725 | Mar 2018 | WO |
2018094141 | May 2018 | WO |
2019011636 | Jan 2019 | WO |
2019013430 | Jan 2019 | WO |
2019076503 | Apr 2019 | WO |
2019078696 | Apr 2019 | WO |
2019093834 | May 2019 | WO |
2019129923 | Jul 2019 | WO |
2019135024 | Jul 2019 | WO |
2019143545 | Jul 2019 | WO |
2019194522 | Oct 2019 | WO |
2019199415 | Oct 2019 | WO |
20190197708 | Oct 2019 | WO |
2019069711 | Nov 2019 | WO |
2020012073 | Jan 2020 | WO |
2020066680 | Feb 2020 | WO |
Entry |
---|
U.S. Appl. No. 18/063,592, filed Dec. 8, 2022, Khaled Mammou, et al. |
Liu Chao, “Research on point cloud data processing and reconstruction,” Full-text Database, Feb. 7, 2023. |
U.S. Appl. No. 18/189,099, filed Mar. 23, 2023, Kjungsun Kim, et al. |
U.S. Appl. No. 17/157,833, filed Jan. 25, 2021, Khaled Mammou. |
U.S. Appl. No. 18/052,803, filed Nov. 4, 2022, Mammou, et al. |
Pragyana K. Mishra, “Image and Depth Coherent Surface Description”, Doctoral dissertation, Carnegie Mellon University, The Robotics Institute, Mar. 2005, pp. 1-152. |
Robert Cohen, "CE 3.2 point-based prediction for point cloud compression", dated Apr. 2018, pp. 1-6. |
Jang et al., Video-Based Point-Cloud-Compression Standard in MPEG: From Evidence Collection to Committee Draft [Standards in a Nutshell], IEEE Signal Processing Magazine, Apr. 2019. |
Ekekrantz, Johan, et al., “Adaptive Cost Function for Pointcloud Registration,” arXiv preprint arXiv: 1704.07910 (2017), pp. 1-10. |
Vicente Morell, et al., "Geometric 3D point cloud compression", Elsevier, 2014, pp. 1-18. |
U.S. Appl. No. 17/523,826, filed Nov. 10, 2021, Mammou, et al. |
Chou, et al., “Dynamic Polygon Clouds: Representation and Compression for VR/AR”, ARXIV ID: 1610.00402, Published Oct. 3, 2016, pp. 1-28. |
U.S. Appl. No. 17/804,477, filed May 27, 2022, Khaled Mammou, et al. |
Jingming Dong, “Optimal Visual Representation Engineering and Learning for Computer Vision”, Doctoral Dissertation, UCLA, 2017, pp. 1-151. |
Khaled Mammou et al., “Working Draft of Point Cloud Coding for Category 2 (Draft 1)”, dated Apr. 2018, pp. 1-38. |
Khaled Mammou et al., “Input Contribution”, dated Oct. 8, 2018, pp. 1-42. |
Benjamin Bross et al., "High Efficiency Video Coding (HEVC) Text Specification Draft 8", dated Jul. 23, 2012, pp. 1-86. |
JunTaek Park et al., “Non-Overlapping Patch Packing in TMC2 with HEVC-SCC”, dated Oct. 8, 2018, pp. 1-6. |
Ismael Daribo, et al., “Efficient Rate-Distortion Compression on Dynamic Point Cloud for Grid-Pattern-Based 3D Scanning Systems”, 3D Research 3.1, Springer, 2012, pp. 1-9. |
Robert A. Cohen, et al., "Point Cloud Attribute Compression Using 3-D Intra Prediction and Shape-Adaptive Transforms", dated Mar. 30, 2016, pp. 141-150. |
Sebastian Schwarz, et al., “Emerging MPEG Standards for Point Cloud Compression”, IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 9, No. 1, Mar. 2019, pp. 133-148. |
Li Li, et al., "Efficient Projected Frame Padding for Video-based Point Cloud Compression", IEEE Transactions on Multimedia, doi: 10.1109/TMM.2020.3016894, 2020, pp. 1-14. |
Lujia Wang, et al., “Point-cloud Compression Using Data Independent Method—A 3D Discrete Cosine Transform Approach”, in Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA), Jul. 2017, pp. 1-6. |
Yiting Shao, et al., "Attribute Compression of 3D Point Clouds Using Laplacian Sparsity Optimized Graph Transform", 2017 IEEE Visual Communications and Image Processing (VCIP), IEEE, 2017, pp. 1-4. |
Siheng Chen, et al., "Fast Resampling of 3D Point Clouds via Graphs", arXiv:1702.06397v1, Feb. 11, 2017, pp. 1-15. |
Nahid Sheikhi Pour, “Improvements for Projection-Based Point Cloud Compression”, MS Thesis, 2018, pp. 1-75. |
Robert Skupin, et al., “Multiview Point Cloud Filtering for Spatiotemporal Consistency”, VISAPP 2014—International Conference on Computer Vision Theory and Applications, 2014, pp. 531-538. |
Bin Lu, et al., "Massive Point Cloud Space Management Method Based on Octree-Like Encoding", Arabian Journal for Science and Engineering, https://doi.org/10.1007/s13369-019-03968-7, 2019, pp. 1-15. |
Wikipedia, "k-d tree", Aug. 1, 2019, Retrieved from URL: https://en.wikipedia.org/w/index.php?title=Kd_tree&oldid=908900837, pp. 1-9. |
David Flynn et al., "G-PCC: A hierarchical geometry slice structure", MPEG Meeting, Retrieved from http://phenix.int-evry.fr/mpeg/doc_end_user/documents/131_Online/wg11/m54677-v1-m54677_vl.zip, Jun. 28, 2020, pp. 1-9. |
"G-PCC Future Enhancements", MPEG Meeting, Oct. 7-11, 2019, (Motion Picture Expert Group of ISO/IEC JTC1/SC29/WG11), Retrieved from http://phenix.int-evry.fr/mpeg/doc_end_user/documents/128_Geneva/wg11/w18887.zipw18887/w18887 on Dec. 23, 2019, pp. 1-30. |
Miska M. Hannuksela, “On Slices and Tiles”, JVET Meeting, The Joint Video Exploration Team of ISO/IEC, Sep. 25, 2018, pp. 1-3. |
David Flynn, "International Organisation for Standardisation Organisation International De Normalisation ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio", dated Apr. 2020, pp. 1-9. |
R. Mekuria, et al., “Design, Implementation and Evaluation of a Point Cloud Codec for Tele-Immersive Video”, IEEE Transactions on Circuits and Systems for Video Technology 27.4, 2017, pp. 1-14. |
Jae-Kyun Ahn, et al., "Large-Scale 3D Point Cloud Compression Using Adaptive Radial Distance Prediction in Hybrid Coordinate Domains", IEEE Journal of Selected Topics in Signal Processing, vol. 9, No. 3, Apr. 2015, pp. 1-14. |
Tim Golla et al., “Real-time Point Cloud Compression”, IROS, 2015, pp. 1-6. |
Dong Liu, et al., “Three-Dimensional Point-Cloud Plus Patches: Towards Model-Based Image Coding in the Cloud”, 2015 IEEE International Conference on Multimedia Big Data, IEEE Computer Society, pp. 395-400. |
Tilo Ochotta et al., “Image-Based Surface Compression”, dated Sep. 1, 2008, pp. 1647-1663. |
W. Zhu, et al., "Lossless point cloud geometry compression via binary tree partition and intra prediction," 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), 2017, pp. 1-6, doi: 10.1109/MMSP.2017.8122226 (Year: 2017). |
U.S. Appl. No. 17/718,647, filed Apr. 12, 2022, Alexandros Tourapis, et al. |
Stefan Gumhold et al, “Predictive Point-Cloud Compression”, dated Jul. 31, 2005, pp. 1-7. |
Pierre-Marie Gandoin et al, “Progressive Lossless Compression of Arbitrary Simplicial Complexes”, dated Jul. 1, 2002, pp. 1-8. |
Ruwen Schnabel et al., “Octree-based Point-Cloud Compression”, Eurographics Symposium on Point-Based Graphics, 2006, pp. 1-11. |
Yuxue Fan et al., “Point Cloud Compression Based on Hierarchical Point Clustering”, Signal and Information Processing Association Annual Summit and Conference (APSIPA), IEEE, 2013, pp. 1-7. |
Kammerl, et al., "Real-time Compression of Point Cloud Streams", 2012 IEEE International Conference on Robotics and Automation, RiverCentre, Saint Paul, Minnesota, USA, May 14-18, 2012, pp. 778-785. |
Garcia, et al., “Context-Based Octree Coding for Point Cloud Video”, 2017 IEEE International Conference on Image Processing (ICIP), 2017, pp. 1412-1416. |
Merry et al., Compression of dense and regular point clouds, Proceedings of the 4th international conference on Computer graphics, virtual reality, visualisation and interaction in Africa (pp. 15-20). ACM. (Jan. 2006). |
Lustosa et al., Database system support of simulation data, Proceedings of the VLDB Endowment 9.13 (2016): pp. 1329-1340. |
Hao Liu, et al., “A Comprehensive Study and Comparison of Core Technologies for MPEG 3D Point Cloud Compression”, arXiv:1912.09674v1, Dec. 20, 2019, pp. 1-17. |
Styliani Psomadaki, "Using a Space Filling Curve for the Management of Dynamic Point Cloud Data in a Relational DBMS", Nov. 2016, pp. 1-158. |
Remi Cura et al., “Implicit Lod for Processing and Classification in Point Cloud Servers”, dated Mar. 4, 2016, pp. 1-18. |
Yan Huang et al., Octree-Based Progressive Geometry Coding of Point Clouds, dated Jan. 1, 2006, pp. 1-10. |
Khaled Mammou, et al., “G-PCC codec description v1”, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11, Oct. 2018, pp. 1-32. |
“V-PCC Codec Description”, 127. MPEG Meeting; Jul. 8, 2019-Jul. 12, 2019; Gothenburg; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG), dated Sep. 25, 2019. |
G-PCC Codec Description, 127. MPEG Meeting; Jul. 8, 2019-Jul. 12, 2019; Gothenburg; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG), dated Sep. 6, 2019. |
Jianqiang Liu et al, "Data-Adaptive Packing Method for Compression of Dynamic Point Cloud Sequences", dated Jul. 3, 2019, pp. 904-909. |
Jorn Jachalsky et al., “D4.2.1 Scene Analysis with Spatio-Temporal”, dated Apr. 30, 2013, pp. 1-60. |
Lasserre S et al, “Global Motion Compensation for Point Cloud Compression in TMC3”, dated Oct. 3, 2018, pp. 1-28. |
D. Graziosi et al, "An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC)", APSIPA Transactions on Signal and Information Processing, vol. 9, dated Apr. 30, 2020, pp. 1-17. |
"Continuous improvement of study text of ISO/IEC CD 23090-5 Video-Based Point Cloud Compression", dated May 8, 2019, pp. 1-140. |
Mehlem D. et al, “Smoothing considerations for V-PCC”, dated Oct. 2, 2019, pp. 1-8. |
Flynn D et al, “G-PCC Bypass coding of bypass bins”, dated Mar. 21, 2019, pp. 1-3. |
Sharman K et al, “CABAC Packet-Based Stream”, dated Nov. 18, 2011, pp. 1-6. |
Lasserre S et al, “On bypassed bit coding and chunks”, dated Apr. 6, 2020, pp. 1-3. |
David Flynn et al., "G-PCC low latency bypass bin coding", dated Oct. 3, 2019, pp. 1-4. |
Chuan Wang, et al., “Video Vectorization via Tetrahedral Remeshing”, IEEE Transactions on Image Processing, vol. 26, No. 4, Apr. 2017, pp. 1833-1844. |
Keming Cao, et al., "Visual Quality of Compressed Mesh and Point Cloud Sequences", IEEE Access, vol. 8, 2020, pp. 171203-171217. |
Number | Date | Country
---|---|---|
63167519 | Mar 2021 | US |