The disclosure generally relates to signaling multiplexing instructions using a preselection element that carries externally defined bitstream manipulation instructions.
The preselection element is defined in Dynamic Adaptive Streaming over HTTP (DASH) for providing media experiences by combining multiple streams. The current preselection design defines a set of specific methods for multiplexing the received streams before providing them to the decoder(s).
MPEG DASH provides a standard for streaming multimedia content over IP networks. The DASH standard provides a way to describe various content items and their relationships using the preselection element. However, the current preselection design defines a set of specific and explicit methods for multiplexing the received streams before providing them to the decoder(s). Since each codec specification may require a different way of manipulating and multiplexing the streams, the method(s) for each newly introduced codec must be added to the DASH standard, which limits the extensibility and usability of the DASH specification.
The DASH CDAM2 document is developing picture-in-picture signaling using preselection; however, it relies on explicit signaling for subpicture substitution.
While the DASH standard provides a way to describe various content items and their relationships, it does not provide an interoperable solution for annotating the Versatile Video Coding (VVC) subpictures to be used in picture-in-picture applications. Picture-in-picture has many applications, from viewing an alternative channel at the same time as the main channel to adding a sign-language video for hearing-impaired audiences, in which a small video in the corner of the main video shows a person conveying the audio information in sign language.
According to an aspect of the disclosure, a method performed by at least one processor of a decoder comprises: receiving a Dynamic Adaptive Streaming over HTTP (DASH) bitstream; determining that the DASH bitstream comprises a preselection element for multiplexing a plurality of media segments included in the DASH bitstream; parsing the plurality of media segments from the bitstream; multiplexing, by a DASH application, the plurality of media segments using the preselection element and one or more policies associated with the decoder to generate a multiplexed bitstream; and outputting the multiplexed bitstream.
According to an aspect of the disclosure, a decoder comprises: at least one memory configured to store program code; and at least one processor configured to read the program code and operate as instructed by the program code, the program code comprising: receiving code configured to cause the at least one processor to receive a Dynamic Adaptive Streaming over HTTP (DASH) bitstream; first determining code configured to cause the at least one processor to determine that the DASH bitstream comprises a preselection element for multiplexing a plurality of media segments included in the DASH bitstream; parsing code configured to cause the at least one processor to parse the plurality of media segments from the bitstream; multiplexing code configured to cause the at least one processor to multiplex, by a DASH application, the plurality of segments using the preselection element and one or more policies associated with the decoder to generate a multiplexed bitstream; and outputting code configured to cause the at least one processor to output the multiplexed bitstream.
According to an aspect of the disclosure, a method performed by at least one processor in a decoder comprises processing a Dynamic Adaptive Streaming over HTTP (DASH) bitstream; wherein the DASH bitstream comprises a preselection element for multiplexing a plurality of media segments included in the DASH bitstream, wherein the plurality of media segments are parsed from the bitstream, and wherein a DASH application multiplexes the plurality of segments using the preselection element and one or more policies associated with the decoder to generate a multiplexed bitstream.
Additional embodiments will be set forth in the description that follows and, in part, will be apparent from the description, and/or may be learned by practice of the presented embodiments of the disclosure.
Further features, the nature, and various advantages of the disclosed subject matter will be more apparent from the following detailed description and the accompanying drawings in which:
The following detailed description of example embodiments refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations. Further, one or more features or components of one embodiment may be incorporated into or combined with another embodiment (or one or more features of another embodiment). Additionally, in the flowcharts and descriptions of operations provided below, it is understood that one or more operations may be omitted, one or more operations may be added, one or more operations may be performed simultaneously (at least in part), and the order of one or more operations may be switched.
It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code—it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.
Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” “include,” “including,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Furthermore, expressions such as “at least one of [A] and [B]” or “at least one of [A] or [B]” are to be understood as including only A, only B, or both A and B.
Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present solution. Thus, the phrases “in one embodiment”, “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Furthermore, the described features, advantages, and characteristics of the present disclosure may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the present disclosure may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the present disclosure.
Embodiments of the present disclosure provide an extensible method of signaling stream manipulation and multiplexing in DASH preselection. The embodiments of the present disclosure further provide a method for signaling a picture-in-picture experience using preselection while allowing the codec to define the bitstream manipulation and multiplexing instructions.
The user device 110 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with platform 120. For example, the user device 110 may include a computing device (e.g., a desktop computer, a laptop computer, a tablet computer, a handheld computer, a smart speaker, a server, etc.), a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a wearable device (e.g., a pair of smart glasses or a smart watch), or a similar device. In some implementations, the user device 110 may receive information from and/or transmit information to the platform 120.
The platform 120 includes one or more devices as described elsewhere herein. In some implementations, the platform 120 may include a cloud server or a group of cloud servers. In some implementations, the platform 120 may be designed to be modular such that software components may be swapped in or out depending on a particular need. As such, the platform 120 may be easily and/or quickly reconfigured for different uses.
In some implementations, as shown, the platform 120 may be hosted in a cloud computing environment 122. Notably, while implementations described herein describe the platform 120 as being hosted in the cloud computing environment 122, in some implementations, the platform 120 may not be cloud-based (i.e., may be implemented outside of a cloud computing environment) or may be partially cloud-based.
The cloud computing environment 122 includes an environment that hosts the platform 120. The cloud computing environment 122 may provide computation, software, data access, storage, etc. services that do not require end-user (e.g., the user device 110) knowledge of a physical location and configuration of system(s) and/or device(s) that hosts the platform 120. As shown, the cloud computing environment 122 may include a group of computing resources 124 (referred to collectively as “computing resources 124” and individually as “computing resource 124”).
The computing resource 124 includes one or more personal computers, workstation computers, server devices, or other types of computation and/or communication devices. In some implementations, the computing resource 124 may host the platform 120. The cloud resources may include compute instances executing in the computing resource 124, storage devices provided in the computing resource 124, data transfer devices provided by the computing resource 124, etc. In some implementations, the computing resource 124 may communicate with other computing resources 124 via wired connections, wireless connections, or a combination of wired and wireless connections.
As further shown in the figure, the computing resource 124 may include a group of cloud resources, such as one or more applications 124-1, one or more virtual machines 124-2, virtualized storage 124-3, and one or more hypervisors 124-4.
The application 124-1 includes one or more software applications that may be provided to or accessed by the user device 110 and/or the platform 120. The application 124-1 may eliminate a need to install and execute the software applications on the user device 110. For example, the application 124-1 may include software associated with the platform 120 and/or any other software capable of being provided via the cloud computing environment 122. In some implementations, one application 124-1 may send/receive information to/from one or more other applications 124-1, via the virtual machine 124-2.
The virtual machine 124-2 includes a software implementation of a machine (e.g. a computer) that executes programs like a physical machine. The virtual machine 124-2 may be either a system virtual machine or a process virtual machine, depending upon use and degree of correspondence to any real machine by the virtual machine 124-2. A system virtual machine may provide a complete system platform that supports execution of a complete operating system (OS). A process virtual machine may execute a single program, and may support a single process. In some implementations, the virtual machine 124-2 may execute on behalf of a user (e.g., the user device 110), and may manage infrastructure of the cloud computing environment 122, such as data management, synchronization, or long-duration data transfers.
The virtualized storage 124-3 includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of the computing resource 124. In some implementations, within the context of a storage system, types of virtualizations may include block virtualization and file virtualization. Block virtualization may refer to abstraction (or separation) of logical storage from physical storage so that the storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may permit administrators of the storage system flexibility in how the administrators manage storage for end users. File virtualization may eliminate dependencies between data accessed at a file level and a location where files are physically stored. This may enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations.
The hypervisor 124-4 may provide hardware virtualization techniques that allow multiple operating systems (e.g., “guest operating systems”) to execute concurrently on a host computer, such as the computing resource 124. The hypervisor 124-4 may present a virtual operating platform to the guest operating systems, and may manage the execution of the guest operating systems. Multiple instances of a variety of operating systems may share virtualized hardware resources.
The network 130 includes one or more wired and/or wireless networks. For example, the network 130 may include a cellular network (e.g. a fifth generation (5G) network, a long-term evolution (LTE) network, a third generation (3G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g. the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, or the like, and/or a combination of these or other types of networks.
The number and arrangement of devices and networks shown in the figure are provided as an example.
The bus 210 includes a component that permits communication among the components of the device 200. The processor 220 is implemented in hardware, firmware, or a combination of hardware and software. The processor 220 is a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, the processor 220 includes one or more processors capable of being programmed to perform a function. The memory 230 includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g. a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by the processor 220.
The storage component 240 stores information and/or software related to the operation and use of the device 200. For example, the storage component 240 may include a hard disk (e.g. a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive.
The input component 250 includes a component that permits the device 200 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, the input component 250 may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator). The output component 260 includes a component that provides output information from the device 200 (e.g. a display, a speaker, and/or one or more light-emitting diodes (LEDs)).
The communication interface 270 includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables the device 200 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. The communication interface 270 may permit the device 200 to receive information from another device and/or provide information to another device. For example, the communication interface 270 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like.
The device 200 may perform one or more processes described herein. The device 200 may perform these processes in response to the processor 220 executing software instructions stored by a non-transitory computer-readable medium, such as the memory 230 and/or the storage component 240. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices.
Software instructions may be read into the memory 230 and/or the storage component 240 from another computer-readable medium or from another device via the communication interface 270. When executed, software instructions stored in the memory 230 and/or the storage component 240 may cause the processor 220 to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in the figure are provided as an example.
The manifest 303 includes MPD events or inband events, and an inband event and ‘moof’ parser 306 may parse MPD event segments or event segments and append the event segments to an event and metadata buffer 330. The inband event and ‘moof’ parser 306 may also fetch and append the media segments to a media buffer 340. The event and metadata buffer 330 may send event and metadata information to an event and metadata synchronizer and dispatcher 335. The event and metadata synchronizer and dispatcher 335 may dispatch specific events to the DASH player's control, selection, and heuristic logic 302, and application-related events and metadata tracks to the application 301.
According to some embodiments, an MSE may include a pipeline including a file format parser 350, the media buffer 340, and a media decoder 345. The MSE 320 is a logical buffer (or buffers) of media segments, where the media segments may be tracked and ordered based on the media segments' presentation time. Media segments may include, but are not limited to, ad media segments associated with ad MPDs and live media segments associated with live MPDs. Each media segment may be added or appended to the media buffer 340 based on the media segment's timestamp offset, and the timestamp offset may be used to order the media segments in the media buffer 340.
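To make the ordering concrete, the following Python sketch keeps appended segments sorted by their timestamp offset; the class and field names are illustrative assumptions, not part of the MSE or DASH specifications.

    import bisect
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class MediaSegment:
        # Ordering compares only the timestamp offset, so the buffer
        # reflects presentation order as described above.
        timestamp_offset: float
        data: bytes = field(compare=False, default=b"")

    class MediaBuffer:
        """A minimal stand-in for the media buffer 340."""
        def __init__(self):
            self._segments: list[MediaSegment] = []

        def append(self, segment: MediaSegment) -> None:
            # Insert in presentation-time order using the timestamp offset.
            bisect.insort(self._segments, segment)

        def next_segment(self) -> MediaSegment | None:
            return self._segments.pop(0) if self._segments else None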
Since embodiments of the present application may be directed to building a linear media source extension (MSE) buffer from two or more nonlinear media sources using MPD chaining, and the nonlinear media sources may be ad MPDs and live MPDs, the file format parser 350 may be used to process the different media and/or codecs used by the live media segments included in the live MPDs. In some embodiments, the file format parser may issue a change type based on a codec, profile, and/or level of the live media segments.
As long as media segments exist in the media buffer 340, the event and metadata buffer 330 maintains corresponding event segments and metadata. The sample DASH processing model 300 may include a timed metadata track parser 325 to keep track of the metadata associated with the inband and MPD events.
The semantics of the DASH preselection element are shown in Table 1.
Table 1 lists the Preselection element together with its Accessibility, Role, Rating, Viewpoint, and CommonAttributesElements descriptors and elements.
As shown in the table above, the @order attribute defines very specific multiplexing schemes; each time a new scheme is needed, a new value must be added to the specification, along with the conformance rules for multiplexing the streams under that value.
According to one or more embodiments, a new attribute for preselection, referred to as @interleaving, is provided as indicated in Table 2.
Table 2 — Preselection element with the new @interleaving attribute:

Preselection
@interleaving: provides the interleaving instructions to be used for interleaving samples or groups of samples of this representation with other representations in this preselection. The syntax, semantics, and conformance rules are defined by a decoder specification or related documents, including the conformance rules for Representations in Adaptation Sets within the Preselection. The information provided in this attribute supersedes the @order value.
Accessibility
Role
Rating
Viewpoint
CommonAttributesElements (attributes and elements from RepresentationBaseType)
The new attribute in Table 2 is @interleaving. As shown in Table 2, the @interleaving attribute is an opaque attribute, i.e., the content of this attribute is not defined by the DASH specification. The syntax and semantics of its content are defined by the decoder specifications, or related specifications, that are used in the adaptation set or content component element of the preselection. Therefore, the job of the DASH client is to provide the @interleaving information to the DASH application, and the DASH application uses the information to manipulate and multiplex the received segments/subsegments.
Since the syntax and semantics of the interleaving are defined by external specifications, this attribute is future-extensible: any new decoder specification can define one or more interleaving schemes with their syntax and semantics, and since the application streaming those streams has knowledge of the decoders, it can understand the interleaving instructions and conformance rules. Therefore, based on the embodiments of the present disclosure, the DASH specification is not tied to any specific codec.
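As a minimal sketch of this division of labor, the following Python snippet (using the standard ElementTree module) shows a DASH client extracting @interleaving and handing it, unparsed, to an application callback; the MPD snippet, attribute payload, and callback are illustrative assumptions.

    import xml.etree.ElementTree as ET

    MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
      <Period>
        <Preselection id="1" preselectionComponents="main pip"
                      interleaving="decoder-defined instructions (opaque to DASH)"/>
      </Period>
    </MPD>"""

    NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

    def forward_interleaving(mpd_xml, on_interleaving):
        """DASH-client side: extract @interleaving without parsing its content."""
        root = ET.fromstring(mpd_xml)
        for presel in root.iterfind(".//mpd:Preselection", NS):
            instructions = presel.get("interleaving")
            if instructions is not None:
                # Only the application, which knows the decoder in use,
                # interprets this opaque string.
                on_interleaving(presel.get("id"), instructions)

    forward_interleaving(MPD, lambda pid, s: print(f"Preselection {pid}: {s}"))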
Table 3 illustrates an alternative way of signaling the interleaving information: including it as an option in @order.
Table 3 — Preselection element with the extended @order attribute: Preselection; Accessibility; Role; Rating; Viewpoint; CommonAttributesElements (attributes and elements from RepresentationBaseType).
As shown in Table 3, the @order attribute has a new value. According to one or more embodiments, if the @order value starts with the substring “opaque”, the rest of the @order value provides the information about multiplexing and manipulation of the received (sub)segments. In one or more examples, the syntax and semantics are defined by the decoder specification(s) and are out of the scope of the DASH specification.
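A minimal sketch of this prefix convention, assuming a colon separator after “opaque” (the payload format itself is decoder-defined and out of DASH's scope):

    def split_order(order_value):
        """Return (mode, opaque_payload); the payload syntax is decoder-defined."""
        if order_value.startswith("opaque"):
            # Everything after the "opaque" prefix carries the multiplexing and
            # manipulation instructions; their syntax is out of DASH's scope.
            return "opaque", order_value[len("opaque"):].lstrip(": ")
        return order_value, None

    print(split_order("opaque:vvc-subpic:substitute subpicId=3 from=pip-video"))
    print(split_order("time-ordered"))  # a conventional @order value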
The embodiments of the present disclosure further provide an alternative method that allows flexible bitstream manipulation for picture-in-picture, independent of the DASH specification.
In the case of streaming, the main video and the PiP video may be delivered as two separate streams. If the streams are independent, the main video and the PiP video are decoded by separate decoders and then composed together for rendering. If the video codec in use supports merging the streams, the PiP video stream is combined with the main video stream, possibly replacing the stream that represents the covered area of the main video with the PiP video, and then the single stream is sent to the decoder for decoding and rendering.
The DASH CDAM2 provides the following solution for picture-in-picture signaling.
In one or more examples, a SupplementalProperty element with the @schemeIdUri attribute equal to urn:mpeg:dash:pinp:2022 is referred to as a picture-in-picture (PiP) descriptor. In one or more examples, one PiP descriptor may be present at the Preselection level. The presence of a PiP descriptor in a Preselection indicates that the purpose of the Preselection is to provide a PiP experience.
In one or more examples, PiP services offer the ability to include a video with a smaller spatial resolution within a video with a larger spatial resolution. In this case, the different bitstreams/Representations of the main video are included in the Main Adaptation Set of the Preselection, and the different bitstreams/Representations of a supplementary video, also referred to as the PiP video, are included in a Partial Adaptation Set of the Preselection. When a PiP descriptor is present in a Preselection, and the picInPicInfo@dataUnitsReplacable attribute is present and equal to ‘true’, the client may choose to replace the coded video data units representing the target PiP region in the main video with the corresponding coded video data units of the PiP video before sending the stream to the video decoder. This way, separate decoding of the main video and the PiP video can be avoided. For a particular picture in the main video, the corresponding video data units of the PiP video are all the coded video data units in the decoding-time-synchronized sample in the supplemental video Representation.
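A sketch of this client-side choice, assuming the picInPicInfo attributes arrive as a simple dictionary; the decode, replacement, and composition helpers are stand-ins for codec- and renderer-specific operations:

    def decode(units):
        # Stand-in for a real video decoder.
        return f"decoded({units})"

    def replace_target_region_units(main_units, pip_units):
        # Stand-in for replacing the coded data units of the target PiP
        # region in the main video with the synchronized PiP data units.
        return f"{main_units}+substituted({pip_units})"

    def compose(main_picture, pip_picture):
        # Stand-in for overlaying the decoded PiP picture on the main picture.
        return f"{main_picture} overlaid by {pip_picture}"

    def present_pip(pic_in_pic_info, main_units, pip_units):
        """Single-decoder merge when replacement is allowed, else dual decode."""
        if pic_in_pic_info.get("dataUnitsReplacable") == "true":
            return decode(replace_target_region_units(main_units, pip_units))
        return compose(decode(main_units), decode(pip_units))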
The @value attribute of the PiP descriptor shall not be present. The PiP descriptor may include a picInPicInfo element with its attributes as specified in Table 4.
In one or more examples, the XML syntax of the picInPicInfo element and its attributes may be specified in the DASH CDAM2 document.
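Purely as an illustration, an instance of such a descriptor, assuming only the dataUnitsReplacable attribute discussed above, might look like the following; any further attributes are omitted and the normative schema remains that of CDAM2:

    import xml.etree.ElementTree as ET

    # Illustrative instance only: apart from dataUnitsReplacable, which is
    # named above, further picInPicInfo attributes are omitted here.
    PIP_DESCRIPTOR = """<SupplementalProperty schemeIdUri="urn:mpeg:dash:pinp:2022">
      <picInPicInfo dataUnitsReplacable="true"/>
    </SupplementalProperty>"""

    info = ET.fromstring(PIP_DESCRIPTOR).find("picInPicInfo")
    print(info.get("dataUnitsReplacable"))  # -> "true"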
In one or more examples, an alternative solution for picture-in-picture support in DASH is to use the ‘pip’ value for Role, as well as to use ContentComponent along with Role and @tag to signal the subpicture IDs or any other IDs needed.
In one or more examples, the manipulation of the stream and the composition may be specified by a specification for the decoder and not by the DASH client. In one or more examples, the DASH client provides the content properties and metadata to the DASH Application, where it is the DASH Application's job to perform any bitstream manipulation, PiP positioning, and rendering.
As shown above, the preselection element may be used, but a new element inside the preselection, picInPicInfo, is needed to signal the bitstream manipulation of replacing one or more subpicture streams of the main video with the coded picture-in-picture video stream. Such an operation is very specific to a video decoder and cannot be generalized to other bitstream manipulation and multiplexing.
According to one or more embodiments, instead of providing the multiplexing information explicitly, the preselection's interleaving attribute, or order attribute, may be used to provide opaque information to the application regarding multiplexing. In one or more examples, the opaque information passes through the DASH client and decoder without interpretation; the DASH client does not need to parse or understand the interleaving attribute content. In one or more examples, the DASH Application is responsible for the bitstream manipulation and multiplexing, and the instructions, syntax, and semantics are defined by the decoder specification(s). Since the DASH Application is aware of the decoder it is using, the DASH Application is able to understand the opaque instructions provided by the DASH client.
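One way the DASH Application might act on the opaque value is a registry of decoder-specific handlers, as in the following sketch; the scheme tag and instruction format are hypothetical, since real ones would come from codec specifications:

    # Registry of decoder-specific interleaving handlers. The scheme tag and
    # instruction format are hypothetical; real ones come from codec specs.
    HANDLERS = {}

    def register(scheme):
        def wrap(fn):
            HANDLERS[scheme] = fn
            return fn
        return wrap

    @register("vvc-subpic")  # hypothetical tag for a VVC-defined scheme
    def vvc_subpicture_merge(instructions, segments):
        # A codec-specification-defined manipulation would go here; this
        # stand-in simply concatenates the segments.
        return b"".join(segments)

    def multiplex(interleaving_value, segments):
        """DASH-Application side: interpret the opaque instructions."""
        scheme, _, instructions = interleaving_value.partition(":")
        try:
            return HANDLERS[scheme](instructions, segments)
        except KeyError:
            raise ValueError(f"no handler for interleaving scheme {scheme!r}")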
According to one or more embodiments, a new value is added to the DASH Role scheme. The value “pip” may indicate that the preselection element is used for the picture-in-picture experience. Example semantics of the “pip” value are shown in Table 5.
Table 5 — the newly added Role value:
Role@value: pip
Applicable media type: video
Description: content suitable for a picture-in-picture experience.
In one or more examples, a normal audio/video program labels both the primary audio and the primary video as “main.” However, when the two media component types are not equally important, for example, (a) a video providing a pleasant visual experience to accompany a music track that is the primary content, or (b) ambient audio accompanying a video of a live scene, such as a sporting event, that is the primary content, the accompanying media can be assigned a “supplementary” role.
In one or more examples, alternate media content components may be expected to carry other descriptors to indicate in what way they differ from the main media content components (e.g., a Viewpoint descriptor or a Role descriptor), especially when multiple alternate media content components, including multiple supplementary media content components, are available.
In one or more examples, open (“burned-in”) captions or subtitles may be marked as media component type “video” only, but may carry a descriptor saying “caption” or “subtitle”.
In one or more examples, role descriptors with values such as “subtitle,” “caption,” “description,” “sign,” or “metadata” may be used to enable assignment of a “kind” value in HTML 5 applications for tracks exposed from a DASH MPD.
In one or more examples, the preselection element may be used to describe a collection of adaptation sets in an MPD suitable for a picture-in-picture (PiP) experience. A PiP experience offers the ability to include a video with a smaller spatial resolution within a video with a larger spatial resolution. In this case, the different bitstreams/Representations of the main video may be included in the Main Adaptation Set of the Preselection, and the different bitstreams/Representations of a supplementary video, also referred to as the PiP video, are included in a Partial Adaptation Set of the Preselection.
In one or more examples, the Preselection element indicating picture-in-picture presentation may include exactly one Role element in accordance with the role scheme and the value “pip”. The presence of this descriptor with this value in a Preselection may indicate that the purpose of the Preselection is for providing a PiP experience.
In one or more examples, the Preselection@interleaving attribute, if present in a DASH bitstream, provides the instructions for the DASH Application to process the representation segments/subsegments before providing them to the decoder(s). In one or more examples, the syntax and semantics used in this attribute may be defined by the specifications, and/or related documents, that define the decoders used in the Adaptation Sets. In one or more examples, Content Components are assigned to this Preselection by Preselection@preselectionComponents.
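Putting these pieces together, a PiP Preselection might be declared as in the following sketch; the identifiers and the @interleaving payload are illustrative assumptions, while the role scheme URI and the “pip” value follow the text above:

    import xml.etree.ElementTree as ET

    PIP_PRESELECTION = """<Preselection id="presel-pip"
                  preselectionComponents="main-video pip-video"
                  interleaving="vvc-subpic:substitute subpicId=3 from=pip-video">
      <Role schemeIdUri="urn:mpeg:dash:role:2011" value="pip"/>
    </Preselection>"""

    presel = ET.fromstring(PIP_PRESELECTION)
    assert presel.find("Role").get("value") == "pip"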
In one or more examples, in the case of VVC, the subpictures may be identified with subpicture IDs, and a decoder-defined syntax may be used to reference those IDs in the interleaving instructions.
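Purely as a hypothetical illustration (the real syntax would be defined by the VVC or related specifications, not by DASH), such an instruction might look like:

    # Hypothetical instruction only: it would tell the DASH Application to
    # substitute the main stream's subpicture with ID 3 by the synchronized
    # PiP stream before handing the merged bitstream to the decoder.
    interleaving = "vvc-subpic:substitute subpicId=3 from=pip-video"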
The process may start at operation S502 where a DASH bitstream is received. The DASH bitstream may include a plurality of media segments.
The process proceeds to operation S504 where it is determined that the DASH bitstream includes a preselection element for multiplexing media segments. The multiplexing information may be carried in Preselection@interleaving, or in Preselection@order having a predefined string prefix (e.g., “opaque”).
The process proceeds to operation S506 where the media segments from the bitstream are parsed. For example, the multiplexing information for multiplexing media segments may specify two or more media segments to be multiplexed or interleaved, where these specified media segments are parsed from the bitstream.
The process proceeds to operation S508 where the media segments are multiplexed using the preselection element and one or more policies of the decoder.
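A compact sketch of operations S502 through S508, assuming the segments are already grouped by representation; parsing uses the standard ElementTree module, and the multiplexing step defers to an application-side handler such as the multiplex() sketch above:

    import xml.etree.ElementTree as ET

    NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

    def process_dash_stream(mpd_xml, segments_by_rep):
        """Operations S502-S508; mpd_xml stands in for the received bitstream."""
        root = ET.fromstring(mpd_xml)
        presel = root.find(".//mpd:Preselection", NS)      # S504
        if presel is None:
            return None
        instructions = presel.get("interleaving") or presel.get("order", "")
        components = presel.get("preselectionComponents", "").split()
        parsed = [s for rep in components                   # S506
                  for s in segments_by_rep.get(rep, [])]
        # S508: the DASH Application multiplexes the parsed segments per the
        # opaque instructions and decoder policies (cf. multiplex() above).
        return instructions, parsed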
According to one or more embodiments, a method includes: using a preselection for a picture-in-picture experience signaling, wherein an attribute is used to indicate multiplexing information and instructions to an application, wherein the multiplexing information and instructions are invisible to a DASH client, wherein the DASH application is capable of decoding the multiplexing information and instructions and applying corresponding multiplexings and manipulations to received media segments, thereby providing extensibility for multiplexing schemes without explicitly being defined by a DASH standard specification.
According to one or more embodiments, a method includes: making a DASH preselection element extensible for bitstream manipulation and multiplexing, wherein a new attribute is added to the DASH preselection element that carries bitstream manipulation and multiplexing information and instructions, wherein the corresponding syntax and semantics of the bitstream manipulation and multiplexing information are defined by an external specification of a decoder standard, wherein the specification of the decoder standard defines a set of syntaxes and semantics independent of the DASH specification, and wherein the DASH specification includes instructions associated with the set of syntaxes and semantics to achieve extensibility in preselection bitstream manipulation and multiplexing.
The techniques described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media.
Embodiments of the present disclosure may be used separately or combined in any order. Further, each of the embodiments (and methods thereof) may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits). In one example, the one or more processors execute a program that is stored in a non-transitory computer-readable medium.
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations.
As used herein, the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software.
Even though combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
This application claims priority from U.S. Provisional Application No. 63/526,145, filed on Jul. 11, 2023 and U.S. Provisional Application No. 63/526,140, filed on Jul. 11, 2023, in the United States Patent and Trademark Office, the disclosures of each of which are incorporated herein by reference in their entirety.