Developments in three-dimensional (3D) graphics technologies have led to the integration of 3D graphics in various applications. For example, 3D graphics are used in various entertainment applications, such as interactive 3D environments or 3D videos. Interactive 3D environments offer immersive six-degrees-of-freedom representation, which provides improved functionality for users. Additionally, 3D graphics are used in various engineering applications, such as 3D simulations and 3D analysis. Furthermore, 3D graphics are used in various manufacturing and architecture applications, such as 3D modeling. As developments in 3D graphics technologies have led to the integration of 3D graphics in various applications, so too have these developments led to increasing complexity associated with processing (e.g., coding, decoding, compressing, decompressing) 3D graphics. The Moving Picture Experts Group (MPEG) of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) has published standards with respect to coding/decoding and compression/decompression of 3D graphics. These standards include the Visual Volumetric Video-based Coding (V3C) standard for Video-Based Point Cloud Compression (V-PCC).
The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or exemplary embodiments.
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.
Various embodiments of the present disclosure provide an encoder comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the encoder to perform: segmenting a mesh representative of 3D content into segments; processing each segment to sort faces and vertex indices within each segment; generating a connectivity information frame for each processed segment; and encoding the connectivity information frames based on a video codec.
Various embodiments of the present disclosure provide a decoder comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the decoder to perform: extracting a video frame from a video, wherein the video frame includes connectivity information associated with 3D content; and reconstructing the 3D content based on the connectivity information, wherein the connectivity information comprises: segments representing the 3D content; and sorted faces and vertex indices within each segment.
Various embodiments of the present disclosure provide a method for decoding 3D content comprising: extracting a video frame from a video, wherein the video frame includes connectivity information associated with the 3D content; and reconstructing the 3D content based on the connectivity information, wherein the connectivity information comprises: segments representing the 3D content; and sorted faces and vertex indices within each segment.
As described above, 3D graphics technologies are integrated in various applications, such as entertainment applications, engineering applications, manufacturing applications, and architecture applications. In these various applications, 3D graphics may be used to generate 3D models of incredible detail and complexity. Given the detail and complexity of the 3D models, the data sets associated with the 3D models can be extremely large. Furthermore, these extremely large data sets may be transferred, for example, through the Internet. Transfer of large data sets, such as those associated with detailed and complex 3D models, can therefore become a bottleneck in various applications. As illustrated by this example, developments in 3D graphics technologies provide improved utility to various applications but also present technological challenges. Improvements to 3D graphics technologies, therefore, represent improvements to the various technological applications to which 3D graphics technologies are applied. Thus, there is a need for technological improvements to address these and other technological problems related to 3D graphics technologies.
Accordingly, the present disclosure provides solutions that address the technological challenges described above through improved approaches to compression/decompression and coding/decoding of 3D graphics. In various embodiments, connectivity information in 3D mesh content can be efficiently coded through face sorting and normalization. 3D content, such as 3D graphics, can be represented as a mesh (e.g., 3D mesh content). The mesh can include vertices, edges, and faces that describe the shape or topology of the 3D content. The mesh can be segmented into blocks (e.g., segments, tiles). For each block, the vertex information associated with each face can be arranged in order (e.g., descending order). With the vertex information associated with each face arranged in order, the faces themselves are arranged in order (e.g., ascending order). By sorting and normalizing the faces in each block, the 3D content represented in each block can be packed into two-dimensional (2D) frames. Sorting the vertex information can guarantee an increasing order of vertex indices, facilitating improved processing of the mesh. Further, through sorting and normalizing the faces in each block, differential coding methods can be applied to represent connectivity information in a compact form (e.g., 8-bit, 10-bit), and disjunct index prediction can be applied for different vertex indices.

In various embodiments, connectivity information in 3D mesh content can be efficiently packed into coding blocks. Components of the connectivity information in the 3D mesh content can be transformed from one-dimensional (1D) connectivity components (e.g., a list, a face list) to 2D connectivity images (e.g., a connectivity coding sample array). With the connectivity information in the 3D mesh content transformed to 2D connectivity images, video encoding processes can be applied to the 2D connectivity images (e.g., as video connectivity frames). In this way, 3D mesh content can be efficiently compressed and decompressed by leveraging video encoding solutions.

In various embodiments, a video connectivity frame can be terminated by signaling a restricted (e.g., reserved, predetermined) sequence of bits in the frame. When connectivity information for 3D mesh content is coded in video connectivity frames, the number of faces in a mesh may be less than the number of coding units (e.g., samples) in a video connectivity frame. By signaling termination of a video connectivity frame, compression of 3D mesh content can be improved. Thus, the present disclosure provides solutions that address technological challenges arising in 3D graphics technologies.
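As a toy illustration of the sorting-plus-differential idea described above (the face values here are invented; the precise ordering, prediction, and offset rules are detailed later in this disclosure), consider three faces whose vertex indices are arranged per face in descending order and across faces in ascending order. The channel-wise differences between consecutive faces are small and fit a compact (e.g., 8-bit) representation:

```python
faces = [(7, 4, 2), (9, 5, 2), (12, 9, 5)]  # sorted faces, invented values
# Channel-wise difference between each face and its predecessor.
deltas = [tuple(c - p for c, p in zip(cur, prev)) for prev, cur in zip(faces, faces[1:])]
print(deltas)  # [(2, 1, 0), (3, 4, 3)] -- small values suit compact samples
```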
Descriptions of the various embodiments provided herein may include one or more of the terms listed below. For illustrative purposes and not to limit the disclosure, exemplary descriptions of the terms are provided herein.
Mesh: a collection of vertices, edges, and faces that may define the shape/topology of a polyhedral object. The faces may include triangles (e.g., triangle mesh).
Dynamic mesh: a mesh with at least one of various possible components (e.g., connectivity, geometry, mapping, vertex attribute, and attribute map) varying in time.
Animated Mesh: a dynamic mesh with constant connectivity.
Connectivity: a set of vertex indices describing how to connect the mesh vertices to create a 3D surface (e.g., geometry and all the attributes may share the same unique connectivity information).
Geometry: a set of vertex 3D (e.g., x, y, z) coordinates describing positions associated with the mesh vertices. The coordinates (e.g., x, y, z) representing the positions may have finite precision and dynamic range.
Mapping: a description of how to map the mesh surface to 2D regions of the plane. Such mapping may be described by a set of UV parametric/texture (e.g., mapping) coordinates associated with the mesh vertices together with the connectivity information.
Vertex attribute: scalar or vector attribute values associated with the mesh vertices.
Attribute Map: attributes associated with the mesh surface and stored as 2D images/videos. The mapping between the videos (e.g., parametric space) and the surface may be defined by the mapping information.
Vertex: a position (e.g., in 3D space) along with other information such as color, normal vector, and texture coordinates.
Edge: a connection between two vertices.
Face: a closed set of edges; for example, a triangle face has three edges defined by three vertices. Orientation of the face may be determined using a “right-hand” coordinate system.
Surface: a collection of faces that separates the three-dimensional object from the environment.
Connectivity Coding Unit (CCU): a square unit of N×N connectivity coding samples that carries connectivity information.
Connectivity Coding Sample: a coding element of the connectivity information calculated as a difference of elements between a current face and a predictor face.
Block: a representation of the mesh segment as a collection of connectivity coding samples represented as three attribute channels. A block may consist of CCUs.
Before describing various embodiments of the present disclosure in detail, it may be helpful to describe an exemplary approach to encoding connectivity information for a mesh.
As described above, traversal of a triangle mesh encounters these five possible cases. Vertex symbol coding for connectivity information can be based on which case is encountered while traversing the triangle mesh. So, when traversal of a triangle mesh encounters a face corresponding with case “C” 102a, then connectivity information for that face can be coded as “C”. Similarly, when traversal of the triangle mesh encounters a face corresponding with case “L” 102b, case “E” 102c, case “R” 102d, or case “S” 102e, then connectivity information for that face can be coded as “L”, “E”, “R”, or “S” accordingly.
In the various approaches to coding 3D content illustrated in
The updated vertex indices are encoded in accordance with the traversal approach described above. In various approaches to coding 3D content, connectivity information is encoded losslessly in the traversal order of the updated vertex indices. As the updated vertex indices are of a different order than that of the input mesh information, the traversal order of the updated vertex indices is encoded along with the connectivity information. The traversal order of the updated vertex indices can be referred to as reordering information or a vertex map. The reordering information, or the vertex map, can be encoded in accordance with various encoding approaches, such as differential coding or entropy coding. The encoded reordering information, or encoded vertex map, can be added to an encoded bitstream with the encoded connectivity information derived from the updated vertex indices. The resulting encoded bitstream can be decoded, and the encoded connectivity information and the encoded vertex map can be extracted therefrom. The vertex map is applied to the connectivity information to align the connectivity information with the reconstructed vertices.
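For illustration, applying a decoded vertex map to connectivity information might look like the following minimal sketch (the function and variable names are hypothetical; the actual coding of the map uses differential or entropy coding as noted above):

```python
def apply_vertex_map(connectivity, vertex_map):
    # Remap each coded vertex index to its reconstructed vertex position.
    return [tuple(vertex_map[v] for v in face) for face in connectivity]

print(apply_vertex_map([(0, 1, 2), (1, 2, 3)], [3, 0, 2, 1]))
# [(3, 0, 2), (0, 2, 1)]
```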
In some approaches to coding 3D content, a vertex map is not separately encoded. In such approaches (e.g., color-per-vertex), connectivity information is represented in mesh coding in absolute values with associated vertex indices. The connectivity information is coded sequentially using, for example, entropy coding.
As illustrated in
A coded bitstream for a dynamic mesh is represented as a collection of components, which is composed of a mesh bitstream header and a data payload. The mesh bitstream header comprises the sequence parameter set, picture parameter set, adaptation parameters, tile information parameters, supplemental enhancement information, etc. The mesh bitstream payload comprises the coded atlas information component, coded attribute information component, coded geometry (position) information component, coded mapping information component, and coded connectivity information component.
As illustrated in
In general, a coded bitstream for a dynamic mesh (e.g., mesh frame sequence) is represented as a collection of components, which is composed of a mesh bitstream header and a data payload (e.g., mesh bitstream payload). The mesh bitstream header comprises a sequence parameter set, picture parameter set, adaptation parameters, tile information parameters, supplemental enhancement information, etc. The mesh bitstream payload can include a coded atlas information component, a coded attribute information component, a coded geometry (position) information component, a coded mapping information component, and a coded connectivity information component.
where v_idx_0, v_idx_1, v_idx_2, and v_idx_3 are vertices with indices 0, 1, 2, and 3; x, y, and z are vertex coordinates; a_1, a_2, and a_3 are attribute information; and f_idx_0 and f_idx_1 are faces. A mesh is represented by vertices in the form of an array. The index of a vertex (e.g., vertex index) is the index of the corresponding element within the array. The mesh segmentation process 254 may be non-normative. Following the mesh segmentation process 254 is mesh block packing 256. Here, a block can be a collection of vertices that belong to a particular segment in the mesh. Each block can be characterized by a block offset, relative to the mesh origin, a block width, and a block height. The 3D geometry coordinates of the vertices in the block can be represented in a local coordinate system, which may be a differential coordinate system with respect to the mesh origin. Following the mesh block packing 256, connectivity information 258 is provided to connectivity information coding 264. Position information 260 is provided to position information coding 266. Attribute information 262 is provided to attribute information coding 268. The connectivity information 258 can include an ordered list of face information with corresponding vertex index and texture index per block. For example, the connectivity information 258 can include:
where Block_1 and Block_2 are mesh blocks, f_idx_0, f_idx_1, and f_idx_n are faces, and v_idx_1, v_idx_2, and v_idx_3 are vertex indices. The position information 260 can include an ordered list of vertex position information with corresponding vertex index coordinates per block. For example, the position information 260 can include:
where Block_1 and Block_2 are mesh blocks, v_idx_0, v_idx_1, and v_idx_i are vertex indices, and x_1, y_1, and z_1 are vertex position information. The attribute information 262 can include an ordered list of vertex attribute information with corresponding vertex index attributes per block. For example, the attribute information 262 can include:
where Block_1 and Block_2 are mesh blocks, v_idx_0, v_idx_1, and v_idx_i are vertex indices, R, G, and B are red, green, and blue color components, and Y, U, and V are luminance and chrominance components. Following the providing of the connectivity information 258 to the connectivity information coding 264, the position information 260 to the position information coding 266, and the attribute information 262 to the attribute information coding 268, the coded information is multiplexed to generate a multiplexed mesh coded bitstream 270.
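The per-block payloads described above can be sketched as follows (a minimal, hypothetical illustration: the identifiers Block_1, f_idx_*, and v_idx_* follow the text, while the concrete values are invented):

```python
# Hypothetical per-block structures; values are for illustration only.
connectivity_info = {
    "Block_1": {"f_idx_0": (1, 2, 3), "f_idx_1": (2, 3, 4)},  # face -> vertex indices
    "Block_2": {"f_idx_0": (0, 1, 2)},
}
position_info = {
    "Block_1": {"v_idx_0": (0.0, 1.0, 2.0), "v_idx_1": (1.5, 0.5, 2.0)},  # (x, y, z)
}
attribute_info = {
    "Block_1": {"v_idx_0": (255, 128, 64)},  # e.g., (R, G, B)
}
```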
To process a mesh frame, the segmentation process is applied to the global mesh frame, and all the information is coded in the form of three-dimensional blocks, where each block has a local coordinate system. The information required to convert the local coordinate system of a block to the global coordinate system of the mesh frame is carried in a block auxiliary information component (atlas component) of the coded mesh bitstream.
Before delving further into the details of the various embodiments of the present disclosure, it may be helpful to describe an overview of an example method for efficiently coding connectivity information in mesh content, according to various embodiments of the present disclosure. The example method can include four stages. For purposes of illustration, the examples provided herein include vertices grouped in blocks with index j and connectivity coding units (CCUs) with index k.
In a first stage of the example method, mesh segmentation can create segments or blocks of mesh content that represent individual objects or individual regions of interest, volumetric tiles, semantic blocks, etc.
In a second stage of the example method, face sorting and vertex index normalization can provide a process of data manipulation within a mesh or a segment, where each face with index i is first processed such that its associated vertices are arranged in descending order, and the vertex indices in the current normalized face are represented as a difference between the current face indices and the preceding reconstructed face indices. The normalization can involve a process of expressing an absolute vertex index value as the sum of a previous face vertex index and a current (differential) face vertex index.
In a third stage of the example method, composition of a video frame for connectivity information coding can provide a process of transformation of a one-dimensional connectivity component of a mesh frame (e.g., face list) to a two-dimensional connectivity image (e.g., connectivity coding sample array). In this stage, for example, elements in a 1D face list (e.g., face indices) can be mapped to a 2D connectivity image (e.g., color planes).
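A minimal sketch of this 1D-to-2D transformation follows, assuming (as an illustration, not a normative layout) that each of the three vertex-index channels of a face maps to one color plane of the connectivity image, with the frame width left as a free parameter:

```python
def faces_to_image(faces, width):
    """Map a 1D face list to a three-plane 2D connectivity image in
    raster-scan order (a sketch; sample padding/termination is omitted)."""
    height = -(-len(faces) // width)  # ceiling division
    planes = [[[0] * width for _ in range(height)] for _ in range(3)]
    for i, face in enumerate(faces):
        row, col = divmod(i, width)
        for ch in range(3):
            planes[ch][row][col] = face[ch]
    return planes
```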
In a fourth stage of the example method, coding can provide a process where a packed connectivity information frame or sequence is coded by a video codec, which can be indicated in the SPS/PPS or by an external method such as SEI information.
where f[i] is a face i and v_idx[i, 0], v_idx[i, 1], and v_idx[i, 2] are vertex indices associated with the face i. At step 306, a determination is made with respect to whether the vertex indices are sorted. For example, step 306 can be determined by:
where v_idx[i, 0] and v_idx[i, 1] are vertex indices associated with face i. If the determination at step 306 is yes, then at step 308, a determination is made with respect to whether the subsequent vertex indices are sorted. For example, step 308 can be determined by:
where v_idx[i, 1] and v_idx[i, 2] are vertex indices associated with face i. If the determination at step 306 is no, then at step 310, a determination is made with respect to whether the next vertex index is sorted with respect to those evaluated at step 306. For example, step 310 can be determined by:
where v_idx[i, 0] and v_idx[i, 2] are vertex indices associated with face i. Based on the determinations made at steps 308 and 310, the face vertex indices can be reordered accordingly. If the determination at step 308 is no, then at step 312, the face vertex indices are reordered accordingly. For example, step 312 can be performed by:
where f[i] is a face i and v_idx[i, 0], v_idx[i, 1], and v_idx[i, 2] are vertex indices associated with the face i. If the determination at step 308 or at step 310 is yes, then at step 314, the face vertex indices are reordered accordingly. For example, step 314 can be performed by:
where f[i] is a face i and v_idx[i, 0], v_idx[i, 1], and v_idx[i, 2] are vertex indices associated with the face i. If the determination at step 310 is no, then at step 316, the face vertex indices are not reordered. For example, step 316 can be performed by maintaining:
where f[i] is a face i and v_idx[i, 0], v_idx[i, 1], and v_idx[i, 2] are vertex indices associated with the face i. At step 318, after all faces from the mesh frame connectivity information 302 have been sorted, frames can be split into blocks and connectivity coding units (CCUs). At step 320, coding of the processed connectivity information is performed.
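One plausible realization of the per-face ordering decisions in steps 306-316 is sketched below. The flowchart's exact comparison expressions are not reproduced in the text, so this version simply brings the largest vertex index to the front using only rotations, consistent with the rotation constraint described next:

```python
def normalize_face(face):
    """Bring the largest vertex index to the front using rotation only,
    so the face winding (and hence the normal) is preserved."""
    v0, v1, v2 = face
    if v0 >= v1 and v0 >= v2:
        return (v0, v1, v2)   # already starts with the largest index
    if v1 >= v0 and v1 >= v2:
        return (v1, v2, v0)   # rotation, not a swap
    return (v2, v0, v1)       # rotation, not a swap
```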
In various embodiments, face sorting and normalization can involve vertex rotation. As described above, in face sorting and normalization, vertices for a face can be arranged in a descending order:
where v_idx[i, 0], v_idx[i, 1], and v_idx[i, 2] are vertex indices associated with a face i. A vertex can be represented by a 2D array of vertex indices:
where v_idx[i, w] is a vertex index associated with face i and an index w within the face. Vertex rotation can achieve vertex index arrangement while preserving the normal of a face to be oriented in the same direction as the original face. As described above, the normal of a face can be determined by a right-hand rule, or right-hand coordinate system. For example, valid rotations can include:
where f[i](0, 1, 2), f[i](1, 2, 0), and f[i](2, 0, 1) are faces with vertex indices 0, 1, and 2. As examples of invalid rotations:
where f[i](0, 2, 1), f[i](1, 0, 2), and f[i](2, 1, 0) are orderings that reverse the face winding and thus flip the normal. The faces can be sorted in ascending order such that the first vertex index of the first face is guaranteed to be less than or equal to the first vertex index of the second face:
where v_idx[i, 0] is a vertex index associated with face i and v_idx[i-1, 0] is a vertex index associated with a face preceding face i. The faces are then sorted such that:
where v_idx[i, 1] is a vertex index associated with face i and v_idx[i-1, 1] is a vertex index associated with a face preceding face i. The faces can then be sorted such that:
where v_idx[i, 2] is a vertex index associated with face i and v_idx[i-1, 2] is a vertex index associated with a face preceding face i. In this way, the vertex indices of all faces can be sorted in descending order, and all faces can be sorted in ascending order without compromising the information stored within.
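The rotate-then-sort behavior described above can be illustrated with a short sketch (reusing the normalize_face helper from the earlier sketch; the tie-breaking follows the ascending-order conditions just given):

```python
def sort_faces(faces):
    # Canonicalize each face by rotation, then sort faces in ascending
    # (lexicographic) order: first by v_idx[i, 0], then v_idx[i, 1],
    # then v_idx[i, 2].
    return sorted(normalize_face(f) for f in faces)

print(sort_faces([(5, 9, 2), (4, 1, 7), (3, 8, 0)]))
# [(7, 4, 1), (8, 0, 3), (9, 2, 5)]
```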
An alternative composition process for the connectivity video frame may be used. Here, the connectivity information block is further subdivided into connectivity coding units (similar to coding tree units (CTUs) in a video coding process).
A face (e.g., face f[j, k, i]) can be encoded by calculating the connectivity coding sample (e.g., f_c[j, k, i]), represented by differential indices of connectivity coding samples (e.g., dv_idx[j, k, i, 0], dv_idx[j, k, i, 1], dv_idx[j, k, i, 2]). The previous face (e.g., f[j, k, i-1]) can be represented by three vertices (e.g., v_idx[j, k, i-1, 0], v_idx[j, k, i-1, 1], v_idx[j, k, i-1, 2]). Therefore, the connectivity coding sample can be represented as:
where f_c[j, k, i] is a connectivity coding sample of block index j, CCU index k, and face index i, and f[j, k, i] and f[j, k, i-1] are faces. In this way, each sample in a video connectivity frame of the CCU [j, k] 302 is a connectivity coding sample f_c[j, k, i]. The connectivity coding sample is a three-component array. Each element of the connectivity coding sample represents a differential value between one face vertex index v_idx[j, k, i] and another face vertex index v_idx[j, k, i-1]:
where dv_idx[j, k, i, 0], dv_idx[j, k, i, 1], dv_idx[j, k, i, 2] are differential index values for a connectivity coding sample f_c[j, k, i]. In general, dv_idx[j, k, i, w] represents the differential index value between two vertices. And, v_idx_s[j, k, i, w] can be a four-dimensional array representing vertex v_idx[i, w] of a connectivity component in CCU k and block j (e.g., CCU [j, k] 302) of the mesh frame. In the above example, v_idx_s[j, k, i-1, 0] can be a first vertex index and v_idx_s[j, k, i, 0] can be a second vertex index. C can depend on the video codec bit depth, and can be defined as C = (2^bitDepth − 1) >> 1, where bitDepth is the video codec bit depth.
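A minimal sketch of this differential computation follows. The definition of C matches the formula above; applying C as an additive offset so differentials land in the non-negative sample range is an assumption made for illustration:

```python
def connectivity_coding_sample(face, predictor_face, bit_depth):
    """f_c = (face - predictor) + C per vertex-index channel (sketch).

    The additive-offset use of C is an assumption for illustration."""
    C = ((1 << bit_depth) - 1) >> 1  # C = (2^bitDepth - 1) >> 1
    return tuple((v - p) + C for v, p in zip(face, predictor_face))

# e.g., face (9, 2, 5) predicted from (8, 0, 3) at bitDepth 8 -> C = 127
print(connectivity_coding_sample((9, 2, 5), (8, 0, 3), 8))  # (128, 129, 129)
```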
where v_idx[i, 0], v_idx[j, k, i, 0], v_idx[i, 1], v_idx[j, k, i, 1], v_idx[i, 2], v_idx[j, k, i, 2] are vertex indices. At step 362, CCUs are arranged within each block in a raster-scan order. For example, step 362 can be performed for each CCU k, by:
where ccu[j, k] and ccu[j, k-1] are CCUs, f_c[j, k, 0] and f_c[j, k-1, 0] are their first connectivity coding samples, dv_idx[j, k, 0, 0], dv_idx[j, k, 0, 1], and dv_idx[j, k, 0, 2] are differential vertex index values, v_idx_s[j, k, 0, 1], v_idx_s[j, k-1, 0, 1], v_idx_s[j, k, 0, 2], and v_idx_s[j, k-1, 0, 2] are segment vertex indices, and C is a constant value defined depending on the bit depth of the video codec used for connectivity video encoding. The value of the constant may be C = (2^bitDepth − 1) >> 1. At step 364, connectivity information can be arranged into CCUs. The CCUs can include 2D arrays of N×N connectivity coding samples in a raster-scan order, where:
where dv_idx[j, k, i, 0], dv_idx[j, k, i, 1], and dv_idx[j, k, i, 2] are differential connectivity vertex information. At step 366, a lossless video encoder can be used to compress the constructed frame. At step 368, a coded connectivity frame bitstream is produced.
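The raster-scan arrangement of connectivity coding samples into an N×N CCU at step 364 might look like the following sketch (unused positions are simply left empty here, whereas the disclosure terminates a frame by signaling a reserved bit sequence):

```python
def pack_ccu(samples, n):
    """Place up to n*n connectivity coding samples into an N x N CCU in
    raster-scan order (a sketch; unused positions remain None)."""
    ccu = [[None] * n for _ in range(n)]
    for i, s in enumerate(samples[:n * n]):
        ccu[i // n][i % n] = s
    return ccu
```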
In a first stage, the connectivity component is extracted from the coded dynamic mesh bitstream and is decoded as an image. A pixel of the decoded video frame corresponds to a connectivity sample.
In a second stage, block size and position information and CCU resolution information are extracted from the header. The decoded connectivity video frame is further processed to reconstruct mesh connectivity information.
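The reconstruction direction can be sketched as the inverse of the differential coding above (a minimal illustration with hypothetical names; the real process also handles block offsets and CCU boundaries extracted from the header):

```python
def reconstruct_faces(samples, first_face, bit_depth):
    """Invert the differential coding: each decoded sample stores
    (v - p) + C per channel, so v = (sample - C) + p. The C-offset
    convention mirrors the encoder sketch above and is an assumption."""
    C = ((1 << bit_depth) - 1) >> 1
    faces, prev = [], first_face
    for s in samples:
        prev = tuple((c - C) + p for c, p in zip(s, prev))
        faces.append(prev)
    return faces

print(reconstruct_faces([(128, 129, 129)], (8, 0, 3), 8))  # [(9, 2, 5)]
```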
As illustrated in
At block 406, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to segment a mesh representative of 3D content.
At block 408, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to process the mesh to sort faces and vertex indices within the mesh.
At block 410, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to generate connectivity information frames based on the processed mesh.
At block 412, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to encode the connectivity information frames.
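Taken together, blocks 406-412 can be sketched as a simple pipeline, reusing the helper sketches above (normalize_face, sort_faces, connectivity_coding_sample, pack_ccu). Treating the whole mesh as a single segment is a placeholder for the segmentation of block 406:

```python
def encode_connectivity(mesh_faces, bit_depth=8, ccu_size=16):
    segments = [mesh_faces]                                # block 406 (placeholder)
    frames = []
    for seg in segments:
        faces = sort_faces(seg)                            # block 408
        samples = [connectivity_coding_sample(f, p, bit_depth)
                   for p, f in zip(faces, faces[1:])]      # differential samples
        frames.append(pack_ccu(samples, ccu_size))         # block 410
    return frames  # block 412: frames would then go to a lossless video codec
```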
The computer system 500 can also include a main memory 506, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to the bus 502 for storing information and instructions to be executed by the hardware processor(s) 504. The main memory 506 may also be used for storing temporary variables or other intermediate information during execution of instructions by the hardware processor(s) 504. Such instructions, when stored in a storage media accessible to the hardware processor(s) 504, render the computer system 500 into a special-purpose machine that can be customized to perform the operations specified in the instructions.
The computer system 500 can further include a read only memory (ROM) 508 or other static storage device coupled to the bus 502 for storing static information and instructions for the hardware processor(s) 504. A storage device 510, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., can be provided and coupled to the bus 502 for storing information and instructions.
Computer system 500 can further include at least one network interface 512, such as a network interface controller module (NIC), network adapter, or the like, or a combination thereof, coupled to the bus 502 for connecting the computer system 500 to at least one network.
In general, the words “component,” “module,” “engine,” “system,” “database,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language such as, for example, Java, C, or C++. A software component or module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices, such as the computer system 500, may be provided on a computer-readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of an executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
The computer system 500 may implement the techniques or technology described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system 500 that causes or programs the computer system 500 to be a special-purpose machine. According to one or more embodiments, the techniques described herein are performed by the computer system 500 in response to the hardware processor(s) 504 executing one or more sequences of one or more instructions contained in the main memory 506. Such instructions may be read into the main memory 506 from another storage medium, such as the storage device 510. Execution of the sequences of instructions contained in the main memory 506 can cause the hardware processor(s) 504 to perform process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. The non-volatile media can include, for example, optical or magnetic disks, such as the storage device 510. The volatile media can include dynamic memory, such as the main memory 506. Common forms of the non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, any other memory chip or cartridge, and networked versions of the same.
Non-transitory media is distinct from but may be used in conjunction with transmission media. The transmission media can participate in transferring information between the non-transitory media. For example, the transmission media can include coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 502. The transmission media can also take a form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
The computer system 500 also includes a network interface 518 coupled to bus 502. Network interface 518 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, network interface 518 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, network interface 518 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the “Internet.” The local network and the Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through network interface 518, which carry the digital data to and from computer system 500, are example forms of transmission media.
The computer system 500 can send messages and receive data, including program code, through the network(s), network link and network interface 518. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the network interface 518.
The received code may be executed by processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution.
Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 500.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
The present application is a U.S. National Stage entry of International Application No. PCT/US2022/043086, filed Sep. 9, 2022, which claims priority to U.S. Provisional Patent Application No. 63/243,016, filed Sep. 10, 2021, the entire disclosures of which are incorporated herein by reference in their entireties.