The continuing consumer demand for video content at higher quality and faster speeds has encouraged continuing efforts to improve video technology. For example, the Moving Picture Experts Group (MPEG) has established standards for video coding so that there can be a common framework in which various video technologies can operate and be compatible with each other. In 2001, MPEG and the International Telecommunication Union (ITU) formed the Joint Video Team (JVT) to develop a video coding standard. The result of the JVT's work was the H.264/Advanced Video Coding (AVC) standard. The AVC standard was utilized in various video technology innovations at the time, such as Blu-ray video discs. Subsequent teams have developed additional video coding standards. For example, the Joint Collaborative Team on Video Coding (JCT-VC) developed the H.265/High Efficiency Video Coding (HEVC) standard. The Joint Video Exploration Team (JVET) developed the H.266/Versatile Video Coding (VVC) standard.
The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or exemplary embodiments.
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.
A first aspect provides a computer-implemented method for encoding or decoding a video comprising determining a bit depth associated with an input video, determining a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video, determining a weighting factor and an offset value of the weighted prediction based on the bit depth associated with the weighted prediction, and processing the input video based on the weighting factor and the offset value of the weighted prediction.
A second aspect provides a computing system for encoding or decoding a video comprising at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the computing system to perform determining a bit depth associated with an input video, determining a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video, determining a weighting factor and an offset value of the weighted prediction based on the bit depth associated with the weighted prediction, and processing the input video based on the weighting factor and the offset value of the weighted prediction.
A third aspect provides a non-transitory storage medium of a computing system storing instructions for encoding or decoding a video that, when executed by at least one processor of the computing system, cause the computing system to perform determining a bit depth associated with an input video, determining a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video, determining a weighting factor and an offset value of the weighted prediction based on the bit depth associated with the weighted prediction, and processing the input video based on the weighting factor and the offset value of the weighted prediction.
Other features and aspects of the disclosed technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the disclosure. The summary is not intended to limit the scope of any embodiments described herein.
As described above, continuing consumer demand for video content at higher quality and faster speeds has encouraged continuing efforts to improve video technology. One way in which video technology may be improved is through improvements in video coding (e.g., video compression). By improving video coding, video data can be delivered more efficiently, improving video quality and delivery speed. For example, the video coding standards established by MPEG generally include use of intra-picture coding and inter-picture coding. In intra-picture coding, spatial redundancy is used to correlate pixels within a picture to compress the picture. In inter-picture coding, temporal redundancy is used to correlate pixels between preceding and following pictures in a sequence. These approaches to video coding have various benefits and drawbacks. For example, intra-picture encoding generally provides less compression than inter-picture encoding. On the other hand, in inter-picture encoding, if a picture is lost during delivery, or delivered with errors, then subsequent pictures may not be able to be properly processed. Furthermore, neither intra-picture encoding nor inter-picture encoding is particularly effective at efficiently compressing video in situations involving, for example, fade effects. As fade effects can be, and are, used in a wide variety of video content, improvements to video coding with respect to fade effects would provide benefits in a wide variety of video coding applications. Thus, there is a need for technological improvements to address these and other technological problems related to video coding technologies.
Accordingly, the present application provides solutions that address the technological challenges described above. In various embodiments, weighted prediction with extended bit depth (e.g., 10-bit, 12-bit, 14-bit, 16-bit) can be implemented in a video coding process. In general, weighted prediction can involve correlating a current picture to a reference picture scaled by a weighting factor (e.g., scaling factor) and an offset value (e.g., additive offset). The weighting factor and the offset value can be applied to each color component of the reference picture at, for example, a block level, slice level, or frame level, to determine the weighted prediction for the current picture. Parameters associated with the weighted prediction, such as the weighting factor and the offset value, can be coded in a picture. In some cases, these weighted prediction parameters can be based on 8-bit additive offsets. In other cases, these weighted prediction parameters can be extended with respect to video bit depth and be based on, for example, 10-bit, 12-bit, 14-bit, or 16-bit additive offsets. The use of extended bit depth with respect to these weighted prediction parameters can be signaled by a flag. With extended bit depth with respect to weighted prediction parameters, greater precision can be achieved in video coding. These benefits are further realized in video involving fade effects, where weighted prediction is particularly effective in video coding. While various features of the solutions described herein may include proposed changes to the H.266/Versatile Video Coding (VVC) standard, the features of the solutions described herein are applicable to various coding schemes. The features of the solutions are discussed in further detail herein.
Before describing embodiments of the present disclosure in detail, it may be helpful to describe types of pictures (e.g., video frames) that are used in video coding standards, such as H.264/AVC, H.265/HEVC, and H.266/VVC.
As illustrated in
At block 406, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to determine a bit depth associated with an input video. Various video coding schemes, such as H.264/AVC and H.265/HEVC, support bit depths of 8 bits, 10 bits, and more for color. Other video coding schemes, such as H.266/VVC, support bit depths of up to 16 bits for color. A 16-bit bit depth indicates that, for video coding schemes such as H.266/VVC, each color component can be represented with up to 16 bits. Generally, this allows video coding schemes with higher bit depths, such as H.266/VVC, to support a wider range of colors than video coding schemes with lower bit depths, such as H.264/AVC and H.265/HEVC. In various embodiments, a bit depth is specified in an input video. For example, a recording device may specify the bit depth at which it records and encodes a video. In various embodiments, a bit depth of an input video can be determined based on variables associated with the input video. For example, a variable bitDepthY can represent the bit depth of luma for the input video and/or a variable bitDepthC can represent the bit depth of chroma for the input video. These variables can be set, for example, during encoding of the input video and can be read from the compressed video bitstream during decoding. For example, a video can be encoded with a bitDepthY variable representing the bit depth of luma at which the video was encoded. When the compressed video bitstream is decoded, the bit depth of the video can be determined based on the bitDepthY variable associated with the compressed video bitstream.
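As a non-normative illustration of how bit depth translates into representable sample values (the struct and function names below are hypothetical and not part of any coding standard), the largest value of a color component is determined directly by its bit depth:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical container for the bit depths carried with a coded video.
 * In a real bitstream, these values would be parsed from sequence-level
 * syntax during decoding, mirroring bitDepthY and bitDepthC above. */
typedef struct {
    int bitDepthY; /* bit depth of the luma component */
    int bitDepthC; /* bit depth of the chroma components */
} VideoBitDepth;

/* Largest sample value representable at a given bit depth. */
static uint32_t max_sample_value(int bitDepth) {
    return (1u << bitDepth) - 1u;
}

int main(void) {
    VideoBitDepth vbd = { 10, 10 }; /* e.g., a 10-bit input video */
    printf("8-bit max sample:  %u\n", max_sample_value(8));             /* 255 */
    printf("10-bit max sample: %u\n", max_sample_value(vbd.bitDepthY)); /* 1023 */
    printf("16-bit max sample: %u\n", max_sample_value(16));            /* 65535 */
    return 0;
}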
At block 408, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to determine a bit depth associated with a weighted prediction of the input video based on the bit depth associated with the input video. As described above, weighted prediction provides for improved compression in video encoding. In various embodiments, weighted prediction involves applying a weighting factor and an offset value to each color component of a reference picture. The weighted prediction can be formed for pixels of a block based on single prediction or bi-prediction. For example, for single prediction, a weighted prediction can be determined based on the formula:
PredictedP=clip(((SampleP*w_i+power(2,LWD−1))>>LWD)+offset_i)
where PredictedP is the weighted predictor, clip( ) is an operator that clips to a specified range of minimum and maximum pixel values, SampleP is the value of a corresponding reference pixel, w_i is a weighting factor, and offset_i is an offset value for a specified reference picture. power( ) is an operator that computes exponentiation, where the base and exponent are the first and second arguments in the parentheses. For each reference picture, w_i and offset_i may be different, and i can be 0 or 1 to indicate list 0 or list 1. The specified reference picture may be in list 0 or list 1. LWD is a log weight denominator rounding factor.
For bi-prediction, a weighted prediction can be determined based on the formula:
PredictedP_bi=clip(((SampleP_0*w_0+SampleP_1*w_1+power(2,LWD))>>(LWD+1))+((offset_0+offset_1+1)>>1))
where PredictedP_bi is the weighted predictor for bi-prediction. clip( ) is an operator that clips to a specified range of minimum and maximum pixel values. SampleP_0 and SampleP_1 are corresponding reference pixels from list 0 and list 1, respectively, for bi-prediction. w_0 is a weighting factor for list 0, and w_1 is a weighting factor for list 1. offset_0 is an offset value for list 0, and offset_1 is an offset value for list 1. LWD is a log weight denominator rounding factor.
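For concreteness, a minimal C sketch of these two formulas is shown below. The function names and the assumption that LWD is at least 1 are illustrative and not drawn from any standard text; a conforming codec would follow the exact operation order mandated by its standard.

#include <stdint.h>

/* clip( ): clamp a value to the [minVal, maxVal] pixel range. */
static int32_t clip_pixel(int32_t v, int32_t minVal, int32_t maxVal) {
    if (v < minVal) return minVal;
    if (v > maxVal) return maxVal;
    return v;
}

/* Single-prediction weighted sample, assuming LWD >= 1:
 * PredictedP = clip(((SampleP*w_i + 2^(LWD-1)) >> LWD) + offset_i) */
int32_t weighted_pred_uni(int32_t sampleP, int32_t w, int32_t offset,
                          int lwd, int32_t minVal, int32_t maxVal) {
    int32_t scaled = (sampleP * w + (1 << (lwd - 1))) >> lwd;
    return clip_pixel(scaled + offset, minVal, maxVal);
}

/* Bi-prediction weighted sample:
 * PredictedP_bi = clip(((SampleP_0*w_0 + SampleP_1*w_1 + 2^LWD) >> (LWD+1))
 *                      + ((offset_0 + offset_1 + 1) >> 1)) */
int32_t weighted_pred_bi(int32_t sampleP0, int32_t sampleP1,
                         int32_t w0, int32_t w1,
                         int32_t offset0, int32_t offset1,
                         int lwd, int32_t minVal, int32_t maxVal) {
    int32_t scaled = (sampleP0 * w0 + sampleP1 * w1 + (1 << lwd)) >> (lwd + 1);
    int32_t offset = (offset0 + offset1 + 1) >> 1;
    return clip_pixel(scaled + offset, minVal, maxVal);
}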
In various embodiments, weighted prediction in a compressed video bitstream can be determined based on specified variables or flags associated with the input video. For example, a flag can be set to indicate that a picture in the compressed video involves weighted prediction. A flag (e.g., sps_weighted_pred_flag, pps_weighted_pred_flag) can be set to 1 to specify that weighted prediction may be applied to P pictures (or P slices) in the compressed video. The flag can be set to 0 to specify that weighted prediction may not be applied to the P pictures (or P slices) in the compressed video. A flag (e.g., sps_weighted_bipred_flag, pps_weighted_bipred_flag) can be set to 1 to specify that weighted prediction may be applied to B pictures (or B slices) in the compressed video. The flag can be set to 0 to specify that weighted prediction may not be applied to the B pictures (or B slices) in the compressed video. In various embodiments, a weighting factor and an offset value associated with weighted prediction in a compressed video can be determined based on specified variables associated with the compressed video. For example, a variable (e.g., delta_luma_weight_l0, delta_luma_weight_l1, delta_chroma_weight_l0, delta_chroma_weight_l1) can indicate values (or deltas) for weighting factors to be applied to luma and/or chroma of one or more reference pictures. A variable (e.g., luma_offset_l0, luma_offset_l1, delta_chroma_offset_l0, delta_chroma_offset_l1) can indicate values (or deltas) for offset values to be applied to luma and/or chroma of one or more reference pictures. In general, the weighting factor and the offset value associated with weighted prediction are limited in their range of values based on their bit depth. For example, if a weighting factor has an 8-bit bit depth, then the weighting factor can have a range of 256 integer values (e.g., −128 to 127). In some cases, the range of values for the weighting factor and the offset value can be increased by left shifting, which increases the range at the cost of precision. Thus, extending the bit depth for the weighting factor and the offset value allows for increased ranges of values without loss in precision.
In various embodiments, a bit depth associated with a weighted prediction can be determined based on a bit depth of the input video. For example, an input video can have a bit depth of luma indicated by a variable (e.g., bitDepthY) and/or a bit depth of chroma indicated by a variable (e.g., bitDepthC). The bit depth of the weighted prediction can be the same as the bit depth of the input video. A variable indicating values for a weighting factor or an offset value associated with a weighted prediction can have a bit depth corresponding to a bit depth of luma and chroma of an input video. For example, an input video can be associated with a series of additive offset values for luma (e.g., luma_offset_l0[i]) that are applied to luma prediction values for a reference picture (e.g., RefPicList[0][i]). The additive offset values can have a bit depth corresponding to the bit depth of luma (e.g., bitDepthY) of the input video. The range of the additive offset values can be based on the bit depth. For example, an 8-bit bit depth can support a range of −128 to 127. A 10-bit bit depth can support a range of −512 to 511. A 12-bit bit depth can support a range of −2,048 to 2,047, and so forth. An associated flag (e.g., luma_weight_l0_flag[i]) can indicate whether weighted prediction is being utilized. For example, when the associated flag is set to 0, the associated additive offset value can be inferred to be 0. As another example, an input video can be associated with a series of additive offset values, or offset deltas (e.g., delta_chroma_offset_l0[i][j]), that are applied to chroma prediction values for a reference picture (e.g., RefPicList[0][i]). The offset deltas can have a bit depth corresponding to the bit depth of chroma channel Cb or chroma channel Cr of the input video. In an example embodiment, the following syntax and semantics may be implemented in a coding standard:
luma_offset_l0[i] is the additive offset applied to the luma prediction value for list 0 prediction using RefPicList[0][i] (reference picture list). The value of luma_offset_l0[i] is in the range of −(1<<(bitDepthY−1)) to (1<<(bitDepthY−1))−1, inclusive, where bitDepthY is the bit depth of luma. When an associated flag luma_weight_l0_flag[i] is equal to 0, luma_offset_l0[i] is inferred to be equal to 0.
delta_chroma_offset_l0[i][j] is the difference of the additive offset applied to the chroma prediction values for list 0 prediction using RefPicList[0][i] (reference picture list), with j equal to 0 for chroma channel Cb and j equal to 1 for chroma channel Cr.
In this example, the chroma offset value ChromaOffsetL0[i][j] can be derived as follows:
ChromaOffsetL0[i][j]=Clip3(−(1<<(bitDepthC−1)),(1<<(bitDepthC−1))−1,((1<<(bitDepthC−1))+delta_chroma_offset_l0[i][j]−(((1<<(bitDepthC−1))*ChromaWeightL0[i][j])>>ChromaLog2WeightDenom)))
where ChromaOffsetL0 is the chroma offset value, bitDepthC is the bit depth of the chroma, ChromaWeightL0 is an associated chroma weighting factor, and ChromaLog2WeightDenom is a logarithm denominator for the associated chroma weighting factor.
As illustrated in this example, the value of delta_chroma_offset_l0[i][j] is in the range of −4*(1<<(bitDepthC−1)) to 4*((1<<(bitDepthC−1))−1), inclusive. When chroma_weight_l0_flag[i] is equal to 0, ChromaOffsetL0[i][j] can be inferred to be equal to 0. In this example, because the bit depth of the weighting factors and offset values corresponds with the bit depth of the input video, the weighting factors and offset values are not left shifted. The following syntax and semantics may be implemented:
o0=luma_offset_l0[refIdxL0]
o1=luma_offset_l1[refIdxL1]
o0=ChromaOffsetL0[refIdxL0][cIdx−1]
o1=ChromaOffsetL1[refIdxL1][cIdx−1]
where luma_offset_l0[refIdxL0] is a luma offset value associated with a list 0 reference picture, luma_offset_l1[refIdxL1] is a luma offset value associated with a list 1 reference picture, ChromaOffsetL0[refIdxL0][cIdx−1] is a chroma offset value associated with a list 0 reference picture, and ChromaOffsetL1[refIdxL1][cIdx−1] is a chroma offset value associated with a list 1 reference picture. As described above, these offset values are not left shifted.
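As a non-normative sketch of the ChromaOffsetL0 derivation above (the parameter names are hypothetical, and the coded delta, weight, and log weight denominator are assumed to have been parsed already), the offset range is tied to the chroma bit depth of the input video and the coded value is not left shifted:

#include <stdint.h>

/* Clip3(lo, hi, x): clamp x to the inclusive range [lo, hi]. */
static int32_t Clip3(int32_t lo, int32_t hi, int32_t x) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Derive the chroma offset for one reference picture and one chroma channel,
 * with the offset range determined by the chroma bit depth of the input
 * video and no left shift of the coded delta. */
int32_t derive_chroma_offset_l0(int32_t delta_chroma_offset_l0,
                                int32_t chromaWeightL0,
                                int     chromaLog2WeightDenom,
                                int     bitDepthC) {
    int32_t half = 1 << (bitDepthC - 1); /* e.g., 512 for 10-bit chroma */
    return Clip3(-half, half - 1,
                 half + delta_chroma_offset_l0
                     - ((half * chromaWeightL0) >> chromaLog2WeightDenom));
}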
In various embodiments, a bit depth associated with a weighted prediction can be different from a bit depth of the input video. In some applications, a weighting factor and/or an offset value may have a comparatively lower bit depth than the bit depth of the input video. The weighting factor and/or the offset value may not require an extended range. In these applications, the weighting factor and/or the offset value can maintain a default or non-extended bit depth (e.g., 8-bit bit depth) while the input video maintains a higher bit depth (e.g., 10-bit bit depth, 12-bit bit depth, 14-bit bit depth, 16-bit bit depth). The weighting factor and/or the offset value are not left shifted so that there is no loss of precision, but there is also no gain in range. As the gain in range is not required in these applications, there is no need to extend the range by left shifting. In an example embodiment, the following syntax and semantics may be implemented in a coding standard:
o0=luma_offset_l0[refIdxL0]
o1=luma_offset_l1[refIdxL1]
o0=ChromaOffsetL0[refIdxL0][cIdx−1]
o1=ChromaOffsetL1[refIdxL1][cIdx−1]
where luma_offset_l0[refIdxL0] is a luma offset value associated with a list 0 reference picture, luma_offset_l1[refIdxL1] is a luma offset value associated with a list 1 reference picture, ChromaOffsetL0[refIdxL0][cIdx−1] is a chroma offset value associated with a list 0 reference picture, and ChromaOffsetL1[refIdxL1][cIdx−1] is a chroma offset value associated with a list 1 reference picture. As described above, these offset values are not left shifted.
In various embodiments, a flag can indicate whether a bit depth associated with a weighted prediction is the same as or different from a bit depth of the input video. The flag (e.g., extended_precision_flag) can indicate whether the bit depth of a weighting factor and/or an offset value associated with the weighted prediction is the same as or different from the bit depth of the input video and can be signaled at a sequence, picture, and/or slice level. For example, the flag can be equal to 1 to specify that the weighted prediction values use the same bit depth as the input video. The flag can be equal to 0 to specify that the weighted prediction values use a lower bit depth. The lower bit depth can be denoted by a variable (e.g., LowBitDepth). The variable can be set to a desired precision. In an example embodiment, the following syntax and semantics may be implemented in a coding standard:
OffsetShift_Y=extended_precision_flag?0:(bitDepthY−LowBitDepth)
OffsetShift_C=extended_precision_flag?0:(bitDepthC−LowBitDepth)
OffsetHalfRange_Y=1<<(extended_precision_flag?(bitDepthY−1):(LowBitDepth−1))
OffsetHalfRange_C=1<<(extended_precision_flag?(bitDepthC−1):(LowBitDepth−1))
where OffsetShift_Y is a left shift applied to luma offset values, corresponding to 0 where extended_precision_flag is set to 1 or corresponding to the bit depth of luma (bitDepthY) reduced by LowBitDepth; OffsetShift_C is a left shift applied to chroma offset values, corresponding to 0 where extended_precision_flag is set to 1 or corresponding to the bit depth of chroma (bitDepthC) reduced by LowBitDepth; OffsetHalfRange_Y is a half range for the luma offset values based on their bit depth; and OffsetHalfRange_C is a half range for the chroma offset values based on their bit depth.
In this example, the variable ChromaOffsetL0[i][j] can be derived as follows:
ChromaOffsetL0[i][j]=Clip3(−OffsetHalfRange_C,OffsetHalfRange_C−1,(OffsetHalfRange_C+delta_chroma_offset_l0[i][j]−((OffsetHalfRange_C*ChromaWeightL0[i][j])>>ChromaLog2WeightDenom)))
where ChromaOffsetL0 is the chroma offset value, ChromaWeightL0 is an associated chroma weighting factor, and ChromaLog2WeightDenom is a logarithm denominator for the associated chroma weighting factor.
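A sketch of this flag-dependent derivation is shown below (helper names are hypothetical, and only the chroma path is shown): the shift and half-range variables are computed from extended_precision_flag, and the half range then bounds the same Clip3-based derivation.

#include <stdint.h>

/* Clip3(lo, hi, x): clamp x to the inclusive range [lo, hi]. */
static int32_t Clip3(int32_t lo, int32_t hi, int32_t x) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Compute the chroma offset shift and half range from extended_precision_flag:
 * flag == 1 -> offsets use the chroma bit depth of the video (no shift);
 * flag == 0 -> offsets use the lower LowBitDepth precision. */
void offset_params_chroma(int extended_precision_flag,
                          int bitDepthC, int lowBitDepth,
                          int *offsetShiftC, int32_t *offsetHalfRangeC) {
    *offsetShiftC     = extended_precision_flag ? 0 : (bitDepthC - lowBitDepth);
    *offsetHalfRangeC = 1 << (extended_precision_flag ? (bitDepthC - 1)
                                                      : (lowBitDepth - 1));
}

/* ChromaOffsetL0 derivation using the half range computed above. */
int32_t derive_chroma_offset_l0(int32_t delta_chroma_offset_l0,
                                int32_t chromaWeightL0,
                                int     chromaLog2WeightDenom,
                                int32_t offsetHalfRangeC) {
    return Clip3(-offsetHalfRangeC, offsetHalfRangeC - 1,
                 offsetHalfRangeC + delta_chroma_offset_l0
                     - ((offsetHalfRangeC * chromaWeightL0) >> chromaLog2WeightDenom));
}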
While the above examples include example syntax and semantics for list 0 luma offset values and chroma offset values, the examples can be applied to list 1 values as well. Additionally, in various embodiments, a minimum pixel value and a maximum pixel value for a picture (e.g., video frame) can be specified. Final predicted samples from weighted prediction can be clipped to the minimum pixel value or the maximum pixel value for the picture.
At block 410, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to determine a weighting factor and an offset value of the weighted prediction based on the bit depth associated with the weighted prediction. As described above, a range of values for the weighting factor and the offset value can be based on a bit depth of the weighting factor and the offset value. In various embodiments, the weighting factor and the offset value can be based on the bit depth associated with the weighted prediction. The bit depth associated with the weighted prediction can be based on, for example, a bit depth of an input video, a comparative bit depth of the weighted prediction with the bit depth of the input video, or a desired bit depth. For example, in an implementation where a bit depth associated with the weighted prediction is the same as the bit depth of an input video, a weighting factor and an offset value of the weighted prediction can be determined based on a reading of their respective values, without left shifting. In an implementation where a desired bit depth is specified, such as through a LowBitDepth variable, then the weighting factor and the offset value of the weighted prediction can be determined based on a reading of their respective values left shifted in accordance with the desired bit depth. Many variations are possible.
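As a sketch of the reading described at block 410 (illustrative only; the function name is hypothetical, and whether and how far the coded value is shifted depends on the signaled flag and LowBitDepth), a coded luma offset can be mapped to the bit depth at which prediction is performed as follows:

#include <stdint.h>

/* Map a coded luma offset to the bit depth used for prediction.
 * When extended_precision_flag is 1, the coded offset already matches the
 * video bit depth and is read without shifting; when it is 0, the coded
 * offset is kept at LowBitDepth precision and left shifted up to the video
 * bit depth, trading precision for range. */
int32_t luma_offset_for_prediction(int32_t coded_luma_offset_l0,
                                   int extended_precision_flag,
                                   int bitDepthY, int lowBitDepth) {
    int offsetShiftY = extended_precision_flag ? 0 : (bitDepthY - lowBitDepth);
    return coded_luma_offset_l0 << offsetShiftY;
}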
At block 412, the hardware processor(s) 402 may execute the machine-readable/machine-executable instructions stored in the machine-readable storage media 404 to process the input video based on the weighting factor and the offset value of the weighted prediction. In various embodiments, the weighting factor and the offset value can be used as part of a video encoding process or as part of a video decoding process. For example, an encoding process involving weighted prediction can be applied to an input video to process the input video. During the encoding process, weighting factors and offset values can be determined for the weighted prediction. The weighting factors and the offset values can be set using a bit depth based on a bit depth used to encode the input video. When the compressed video bitstream is decoded, the bit depth of the weighting factors and the offset values can be determined based on the bit depth of the compressed video bitstream. As another example, during an encoding process applied to an input video, weighting factors and offset values can be set using a desired bit depth that is different from a bit depth used to encode the input video. An extended precision flag and a variable indicating the difference between the bit depth used to encode the input video and the desired bit depth can be set. When the compressed video bitstream is decoded, the bit depth of the weighting factors and the offset values can be determined based on the bit depth of the compressed video bitstream, the extended precision flag, and the variable indicating the difference between the video bit depth used to encode the input video and the desired bit depth. Many variations are possible.
The computer system 500 can also include a main memory 506, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to the bus 502 for storing information and instructions to be executed by the hardware processor(s) 504. The main memory 506 may also be used for storing temporary variables or other intermediate information during execution of instructions by the hardware processor(s) 504. Such instructions, when stored in a storage media accessible to the hardware processor(s) 504, render the computer system 500 into a special-purpose machine that can be customized to perform the operations specified in the instructions.
The computer system 500 can further include a read only memory (ROM) 508 or other static storage device coupled to the bus 502 for storing static information and instructions for the hardware processor(s) 504. A storage device 510, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., can be provided and coupled to the bus 502 for storing information and instructions.
Computer system 500 can further include at least one network interface 512, such as a network interface controller module (NIC), network adapter, or the like, or a combination thereof, coupled to the bus 502 for connecting the computer system 500 to at least one network.
In general, the terms “component,” “module,” “engine,” “system,” “database,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C, or C++. A software component or module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices, such as the computer system 500, may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of an executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
The computer system 500 may implement the techniques or technology described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which, in combination with the computer system 500, causes or programs the computer system 500 to be a special-purpose machine. According to one or more embodiments, the techniques described herein are performed by the computer system 500 in response to the hardware processor(s) 504 executing one or more sequences of one or more instructions contained in the main memory 506. Such instructions may be read into the main memory 506 from another storage medium, such as the storage device 510. Execution of the sequences of instructions contained in the main memory 506 can cause the hardware processor(s) 504 to perform process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. The non-volatile media can include, for example, optical or magnetic disks, such as the storage device 510. The volatile media can include dynamic memory, such as the main memory 506. Common forms of the non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, any other memory chip or cartridge, and networked versions of the same.
Non-transitory media is distinct from but may be used in conjunction with transmission media. The transmission media can participate in transferring information between the non-transitory media. For example, the transmission media can include coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 502. The transmission media can also take a form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
The computer system 500 also includes a network interface 518 coupled to bus 502. Network interface 518 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, network interface 518 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, network interface 518 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet.” The local network and the Internet both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through network interface 518, which carry the digital data to and from the computer system 500, are example forms of transmission media.
The computer system 500 can send messages and receive data, including program code, through the network(s), network link and network interface 518. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network, and the network interface 518.
The received code may be executed by processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution.
Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 500.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
This application is a continuation of International Application No. PCT/US2021/065224, filed Dec. 27, 2021, which claims priority to U.S. Provisional Application No. 63/131,710, filed Dec. 29, 2020, the entire disclosures of which are incorporated herein by reference.
Related application data: U.S. Provisional Application No. 63/131,710, filed December 2020 (US); parent application PCT/US2021/065224, filed December 2021 (US); child application Ser. No. 18/339,360 (US).