Embodiments of the present disclosure are directed to systems and methods for parallel transcoding of media content, where the media content is split into parallel media sub-streams and each media sub-stream is transcoded using one transcoder and then the parallel media sub-streams are merged into a single transcoded stream.
Networks and cloud platforms are used to run various applications. The Network-Based Media Processing (NBMP) standard includes a specification for defining, instantiating, and running workflows on cloud platforms. The standard also defines splitter and merger function templates that use metadata for signaling the boundaries of the segments.
According to embodiments, cloud services running multiple transcoders are provided, which allows for increased speed of transcoding. For example, the number of parallel sub-streams can be increased to increase the speed of transcoding.
According to embodiments, a method performed by at least one processor that implements a network-based media processing (NBMP) workflow manager is provided. The method includes creating an NBMP workflow that includes: a splitter task that splits a compressed video stream into compressed sub-streams; transcoder tasks that respectively transcode the compressed sub-streams to be transcoded sub-streams; and a merger task that merges the transcoded sub-streams into a single transcoded stream. The method further includes: controlling at least one media processing entity to perform the NBMP workflow; and controlling the at least one media processing entity that performs the NBMP workflow to report to another entity at least one from among a splitter state of the splitter task, a transcoder state of at least one of the transcoder tasks, and a merger state of the merger task.
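The split/transcode/merge pipeline recited above can be sketched as follows. This is an illustrative Python sketch only, not the NBMP implementation: the function names (`split_stream`, `transcode`, `merge`, `run_workflow`, `report_state`) are hypothetical, and the "transcoding" step is a byte-level stand-in for a real decode/re-encode.

```python
from concurrent.futures import ThreadPoolExecutor

def split_stream(stream: bytes, n: int) -> list[bytes]:
    """Splitter task: divide the compressed stream into n sub-streams."""
    size = -(-len(stream) // n)  # ceiling division
    return [stream[i * size:(i + 1) * size] for i in range(n)]

def transcode(sub: bytes) -> bytes:
    """Transcoder task placeholder: a real task would decode and re-encode."""
    return sub.upper()  # stand-in for actual transcoding work

def merge(subs: list[bytes]) -> bytes:
    """Merger task: concatenate transcoded sub-streams in order."""
    return b"".join(subs)

def run_workflow(stream: bytes, n: int, report_state) -> bytes:
    """Run split -> parallel transcode -> merge, reporting task states."""
    report_state("splitter", "running")
    subs = split_stream(stream, n)
    report_state("splitter", "done")
    # one transcoder per sub-stream, executed in parallel
    with ThreadPoolExecutor(max_workers=n) as pool:
        out = list(pool.map(transcode, subs))  # map preserves input order
    report_state("transcoders", "done")
    merged = merge(out)
    report_state("merger", "done")
    return merged

states = []
result = run_workflow(b"abcdef", 3, lambda task, s: states.append((task, s)))
```

The `report_state` callback here stands in for the reporting to "another entity" described in the method; the claimed system reports via a reporting server rather than an in-process callback.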
According to one or more embodiments, the at least one media processing entity is controlled to report the splitter state of the splitter task.
According to one or more embodiments, the at least one media processing entity is controlled to report the transcoder state of the at least one of the transcoder tasks.
According to one or more embodiments, the at least one media processing entity is controlled to report the merger state of the merger task.
According to one or more embodiments, the at least one media processing entity is controlled to report the splitter state of the splitter task, the transcoder state of the at least one of the transcoder tasks, and the merger state of the merger task.
According to one or more embodiments, the controlling the at least one media processing entity to report includes controlling the at least one media processing entity to report the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task to a reporting server.
According to one or more embodiments, the controlling the at least one media processing entity to report includes controlling, based on information in a workflow description document (WDD) that is received by the NBMP workflow manager, the at least one media processing entity to report the at least one from among the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task.
According to one or more embodiments, the controlling the at least one media processing entity to report includes controlling the at least one media processing entity to report the at least one from among the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task while a corresponding one from among the splitter task, the transcoder tasks, and the merger task is performed.
According to one or more embodiments, the reporting server is configured to cause the at least one from among the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task to be visualized in a web dashboard.
According to embodiments, a system is provided. The system includes: at least one memory configured to store computer program code; and at least one processor configured to access the computer program code and operate as instructed by the computer program code. The computer program code includes creating code configured to cause a network-based media processing (NBMP) workflow manager, implemented by the at least one processor, to create an NBMP workflow that includes: a splitter task that splits a compressed video stream into compressed sub-streams; transcoder tasks that respectively transcode the compressed sub-streams to be transcoded sub-streams; and a merger task that merges the transcoded sub-streams into a single transcoded stream. The computer program code further includes first controlling code configured to cause the NBMP workflow manager to control at least one media processing entity to perform the NBMP workflow; and second controlling code configured to cause the NBMP workflow manager to control the at least one media processing entity that performs the NBMP workflow to report to another entity at least one from among a splitter state of the splitter task, a transcoder state of at least one of the transcoder tasks, and a merger state of the merger task.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control the at least one media processing entity to report the splitter state of the splitter task.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control the at least one media processing entity to report the transcoder state of the at least one of the transcoder tasks.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control the at least one media processing entity to report the merger state of the merger task.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control the at least one media processing entity to report the splitter state of the splitter task, the transcoder state of the at least one of the transcoder tasks, and the merger state of the merger task.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control the at least one media processing entity to report the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task to a reporting server.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control, based on information in a workflow description document (WDD) that is received by the NBMP workflow manager, the at least one media processing entity to report the at least one from among the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task.
According to one or more embodiments, the second controlling code is configured to cause the NBMP workflow manager to control the at least one media processing entity to report the at least one from among the splitter state of the splitter task, the transcoder state of at least one of the transcoder tasks, and the merger state of the merger task while a corresponding one from among the splitter task, the transcoder tasks, and the merger task is performed.
According to embodiments, a non-transitory computer-readable medium storing computer code is provided. The computer code is configured to, when executed by at least one processor, cause the at least one processor to implement a network-based media processing (NBMP) workflow manager that creates an NBMP workflow that includes: a splitter task that splits a compressed video stream into compressed sub-streams; transcoder tasks that respectively transcode the compressed sub-streams to be transcoded sub-streams; and a merger task that merges the transcoded sub-streams into a single transcoded stream. The computer code is further configured to cause the at least one processor to control at least one media processing entity to perform the NBMP workflow; and control the at least one media processing entity that performs the NBMP workflow to report to another entity at least one from among a splitter state of the splitter task, a transcoder state of at least one of the transcoder tasks, and a merger state of the merger task.
Further features, the nature, and various advantages of the disclosed subject matter will be more apparent from the following detailed description and the accompanying drawings in which:
The user device 110 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with platform 120. For example, the user device 110 may include a computing device (e.g., a desktop computer, a laptop computer, a tablet computer, a handheld computer, a smart speaker, a server, etc.), a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a wearable device (e.g., a pair of smart glasses or a smart watch), or a similar device. In some implementations, the user device 110 may receive information from and/or transmit information to the platform 120.
The platform 120 includes one or more devices as described elsewhere herein. In some implementations, the platform 120 may include a cloud server or a group of cloud servers. In some implementations, the platform 120 may be designed to be modular such that software components may be swapped in or out depending on a particular need. As such, the platform 120 may be easily and/or quickly reconfigured for different uses.
In some implementations, as shown, the platform 120 may be hosted in a cloud computing environment 122. Notably, while implementations described herein describe the platform 120 as being hosted in the cloud computing environment 122, in some implementations, the platform 120 may not be cloud-based (i.e., may be implemented outside of a cloud computing environment) or may be partially cloud-based.
The cloud computing environment 122 includes an environment that hosts the platform 120. The cloud computing environment 122 may provide computation, software, data access, storage, etc. services that do not require end-user (e.g., the user device 110) knowledge of a physical location and configuration of system(s) and/or device(s) that hosts the platform 120. As shown, the cloud computing environment 122 may include a group of computing resources 124 (referred to collectively as “computing resources 124” and individually as “computing resource 124”).
The computing resource 124 includes one or more personal computers, workstation computers, server devices, or other types of computation and/or communication devices. In some implementations, the computing resource 124 may host the platform 120. The cloud resources may include compute instances executing in the computing resource 124, storage devices provided in the computing resource 124, data transfer devices provided by the computing resource 124, etc. In some implementations, the computing resource 124 may communicate with other computing resources 124 via wired connections, wireless connections, or a combination of wired and wireless connections.
As further shown in
The application 124-1 includes one or more software applications that may be provided to or accessed by the user device 110 and/or the platform 120. The application 124-1 may eliminate a need to install and execute the software applications on the user device 110. For example, the application 124-1 may include software associated with the platform 120 and/or any other software capable of being provided via the cloud computing environment 122. In some implementations, one application 124-1 may send/receive information to/from one or more other applications 124-1, via the virtual machine 124-2.
The virtual machine 124-2 includes a software implementation of a machine (e.g., a computer) that executes programs like a physical machine. The virtual machine 124-2 may be either a system virtual machine or a process virtual machine, depending upon use and degree of correspondence to any real machine by the virtual machine 124-2. A system virtual machine may provide a complete system platform that supports execution of a complete operating system (“OS”). A process virtual machine may execute a single program, and may support a single process. In some implementations, the virtual machine 124-2 may execute on behalf of a user (e.g., the user device 110), and may manage infrastructure of the cloud computing environment 122, such as data management, synchronization, or long-duration data transfers.
The virtualized storage 124-3 includes one or more storage systems and/or one or more devices that use virtualization techniques within the storage systems or devices of the computing resource 124. In some implementations, within the context of a storage system, types of virtualizations may include block virtualization and file virtualization. Block virtualization may refer to abstraction (or separation) of logical storage from physical storage so that the storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may permit administrators of the storage system flexibility in how the administrators manage storage for end users. File virtualization may eliminate dependencies between data accessed at a file level and a location where files are physically stored. This may enable optimization of storage use, server consolidation, and/or performance of non-disruptive file migrations.
The hypervisor 124-4 may provide hardware virtualization techniques that allow multiple operating systems (e.g., “guest operating systems”) to execute concurrently on a host computer, such as the computing resource 124. The hypervisor 124-4 may present a virtual operating platform to the guest operating systems, and may manage the execution of the guest operating systems. Multiple instances of a variety of operating systems may share virtualized hardware resources.
The network 130 includes one or more wired and/or wireless networks. For example, the network 130 may include a cellular network (e.g., a fifth generation (5G) network, a long-term evolution (LTE) network, a third generation (3G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, or the like, and/or a combination of these or other types of networks.
The number and arrangement of devices and networks shown in
The bus 210 includes a component that permits communication among the components of the device 200. The processor 220 is implemented in hardware, firmware, or a combination of hardware and software. The processor 220 is a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a microprocessor, a microcontroller, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or another type of processing component. In some implementations, the processor 220 includes one or more processors capable of being programmed to perform a function. The memory 230 includes a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, and/or an optical memory) that stores information and/or instructions for use by the processor 220.
The storage component 240 stores information and/or software related to the operation and use of the device 200. For example, the storage component 240 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, and/or a solid state disk), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of non-transitory computer-readable medium, along with a corresponding drive.
The input component 250 includes a component that permits the device 200 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, and/or a microphone). Additionally, or alternatively, the input component 250 may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, and/or an actuator). The output component 260 includes a component that provides output information from the device 200 (e.g., a display, a speaker, and/or one or more light-emitting diodes (LEDs)).
The communication interface 270 includes a transceiver-like component (e.g., a transceiver and/or a separate receiver and transmitter) that enables the device 200 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. The communication interface 270 may permit the device 200 to receive information from another device and/or provide information to another device. For example, the communication interface 270 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, or the like.
The device 200 may perform one or more processes described herein. The device 200 may perform these processes in response to the processor 220 executing software instructions stored by a non-transitory computer-readable medium, such as the memory 230 and/or the storage component 240. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices.
Software instructions may be read into the memory 230 and/or the storage component 240 from another computer-readable medium or from another device via the communication interface 270. When executed, software instructions stored in the memory 230 and/or the storage component 240 may cause the processor 220 to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in
In an embodiment of the present disclosure, an NBMP system 300 is provided. With reference to
The NBMP source 310 may receive instructions from a third party entity 380, may communicate with the NBMP workflow manager 320 via an NBMP workflow API 392, and may communicate with the function repository 330 via a function discovery API 391. For example, the NBMP source 310 may send a workflow description document(s) (WDD) to the NBMP workflow manager 320, and may read the function description of functions stored in the function repository 330, the functions being media processing functions stored in memory of the function repository 330 such as, for example, functions of media decoding, feature point extraction, camera parameter extraction, projection method, seam information extraction, blending, post-processing, and encoding. The NBMP source 310 may comprise or be implemented by at least one processor and memory that stores code configured to cause the at least one processor to perform the functions of the NBMP source 310.
The NBMP source 310 may request the NBMP workflow manager 320 to create a workflow including tasks 352 to be performed by the one or more media processing entities 350 by sending the workflow description document, which may include several descriptors, each of which may have several parameters.
For example, the NBMP source 310 may select functions stored in the function repository 330 and send the workflow description document to the NBMP workflow manager 320 that includes a variety of descriptors for description details such as input and output data, required functions, and requirements for the workflow. The workflow description document may include a set of task descriptions and a connection map of inputs and outputs of tasks 352 to be performed by one or more of the media processing entities 350. When the NBMP workflow manager 320 receives such information from the NBMP source 310, the NBMP workflow manager 320 may create the workflow by instantiating the tasks based on function names and connecting the tasks in accordance with the connection map.
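The instantiate-and-connect step described above can be sketched as follows. This is a hypothetical simplification: the WDD structure and field names (`tasks`, `connection_map`, `function`, `id`) are illustrative, not taken from the NBMP specification.

```python
def create_workflow(wdd: dict, function_repository: dict) -> dict:
    """Instantiate tasks by function name and wire them per the connection map."""
    tasks = {}
    for desc in wdd["tasks"]:
        fn = function_repository[desc["function"]]  # instantiate by function name
        tasks[desc["id"]] = {"fn": fn, "inputs": []}
    for src, dst in wdd["connection_map"]:          # connect outputs to inputs
        tasks[dst]["inputs"].append(src)
    return tasks

# A toy repository and WDD; the lambdas stand in for stored media functions.
repo = {"split": lambda x: x, "xcode": lambda x: x, "merge": lambda x: x}
wdd = {
    "tasks": [{"id": "t1", "function": "split"},
              {"id": "t2", "function": "xcode"},
              {"id": "t3", "function": "merge"}],
    "connection_map": [("t1", "t2"), ("t2", "t3")],
}
wf = create_workflow(wdd, repo)
```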
Alternatively or additionally, the NBMP source 310 may request the NBMP workflow manager 320 to create a workflow by using a set of keywords. For example, the NBMP source 310 may send the NBMP workflow manager 320 the workflow description document that may include a set of keywords that the NBMP workflow manager 320 may use to find appropriate functions stored in the function repository 330. When the NBMP workflow manager 320 receives such information from the NBMP source 310, the NBMP workflow manager 320 may create the workflow by searching for appropriate functions using the keywords that may be specified in a Processing Descriptor of the workflow description document, and use the other descriptors in the workflow description document to provision tasks and connect them to create the workflow.
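The keyword-based function lookup described above can be sketched as a search over stored function descriptions. This is an illustrative assumption about how matching might work; the actual repository query mechanism is not specified here.

```python
def find_functions(repository: dict, keywords: list) -> list:
    """Return names of functions whose descriptions match any keyword."""
    matches = []
    for name, description in repository.items():
        if any(kw.lower() in description.lower() for kw in keywords):
            matches.append(name)
    return matches

# Toy function descriptions standing in for repository entries.
repo = {
    "VideoSplitter": "splits a compressed video stream into sub-streams",
    "VideoTranscoder": "transcodes a compressed video sub-stream",
    "StreamMerger": "merges transcoded sub-streams into one stream",
}
found = find_functions(repo, ["split"])
```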
The NBMP workflow manager 320 may communicate with the function repository 330 via a function discovery API 393, which may be a same or different API from the function discovery API 391, and may communicate with one or more of the media processing entities 350 via an NBMP task API 394. The NBMP workflow manager 320 may also communicate with one or more of the media processing entities 350 via a media processing entity (MPE) API 396. The NBMP workflow manager 320 may comprise or be implemented by at least one processor and memory that stores code configured to cause the at least processor to perform the functions of the NBMP workflow manager 320.
The NBMP workflow manager 320 may use the NBMP task API 394 to set up, configure, manage, and monitor one or more tasks 352 of a workflow that is performable by the one or more media processing entities 350. In an embodiment, the NBMP workflow manager 320 may use the NBMP task API 394 to update and destroy the tasks 352. In order to configure, manage, and monitor tasks 352 of the workflow, the NBMP workflow manager 320 may send messages, such as requests, to one or more of the media processing entities 350, wherein each message may have several descriptors, each of which may have several parameters. The tasks 352 may each include media processing functions 354 and configurations 353 for the media processing functions 354.
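A message of the kind described above — several descriptors, each with several parameters — might take a shape like the following. The descriptor and parameter names here are simplified illustrations, not verbatim NBMP descriptor definitions.

```python
# Hypothetical task configuration request carrying descriptors with parameters.
task_config_request = {
    "general": {"id": "transcoder-1", "name": "VideoTranscoder"},
    "processing": {"keywords": ["transcode"]},
    "configuration": {"parameters": [
        {"name": "codec", "value": "h264"},
        {"name": "bitrate-kbps", "value": 4500},
    ]},
}

def get_parameter(msg: dict, name: str):
    """Look up a configuration parameter of a task message by name."""
    for p in msg["configuration"]["parameters"]:
        if p["name"] == name:
            return p["value"]
    return None
```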
In an embodiment, after receiving a workflow description document from the NBMP source 310 that does not include a list of the tasks (e.g., includes a list of keywords instead of a list of tasks), the NBMP workflow manager 320 may select the tasks based on the descriptions of the tasks in the workflow description document to search the function repository 330, via the function discovery API 393, to find the appropriate functions to run as tasks 352 for a current workflow. For example, the NBMP workflow manager 320 may select the tasks based on keywords provided in the workflow description document. After the appropriate functions are identified by using the keywords or the set of task descriptions that is provided by the NBMP source 310, the NBMP workflow manager 320 may configure the selected tasks in the workflow by using the NBMP task API 394. For example, the NBMP workflow manager 320 may extract configuration data from information received from the NBMP source, and configure the tasks 352 based on the configuration data.
The one or more media processing entities 350 may be configured to receive media content from the media source 360, process the media content in accordance with the workflow created by the NBMP workflow manager 320, which includes the tasks 352, and output the processed media content to the media sink 370. The one or more media processing entities 350 may each comprise or be implemented by at least one processor and memory that stores code configured to cause the at least one processor to perform the functions of the media processing entities 350.
The media source 360 may include memory that stores media and may be integrated with or separate from the NBMP source 310. In an embodiment, the NBMP workflow manager 320 may notify the NBMP source 310 when a workflow is prepared and the media source 360 may transmit media content to the one or more of the media processing entities 350 based on the notification that the workflow is prepared.
The media sink 370 may comprise or be implemented by at least one processor and at least one display that is configured to display the media that is processed by the one or more media processing entities 350.
The third party entity 380 may comprise or be implemented by at least one processor and memory that stores code configured to cause the at least one processor to perform the functions of the third party entity 380.
As discussed above, messages from the NBMP source 310 (e.g., a workflow description document for requesting creation of a workflow) to the NBMP workflow manager 320, and messages (e.g., for causing the workflow to be performed) from the NBMP workflow manager 320 to the one or more media processing entities 350 may include several descriptors, each of which may have several parameters. In some cases, communication between any of the components of the NBMP system 300 using an API may include several descriptors, each of which may have several parameters.
According to embodiments, cloud services running multiple transcoders are provided, which allows for increased speed of transcoding. For example, the number of parallel sub-streams can be increased to increase the speed of transcoding.
According to embodiments, the architecture 400 shown in
With reference to
With reference to
With reference to
According to embodiments, with reference to
With reference to
According to embodiments, the splitter 420, the transcoders, and the merger 440 may report their operation to a reporting server. For example, the media processing entity (or entities) that implements the splitter 420, the transcoders, and/or the merger 440 may send information to another component (e.g. a reporting server), the information indicating the splitter state(s), transcoder state(s), and/or merger state(s) of the function(s) performed by the media processing entity.
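A state report of the kind described above might be assembled and sent as follows. This is an illustrative sketch: the payload fields (`task`, `role`, `state`) are assumptions, and the transport callback stands in for, e.g., an HTTP POST to the reporting server.

```python
import json

def make_state_report(task_id: str, role: str, state: str) -> str:
    """Serialize a task-state report for the reporting server."""
    return json.dumps({"task": task_id, "role": role, "state": state})

def send_report(report: str, transport) -> None:
    """Hand the serialized report to a transport (e.g., an HTTP POST)."""
    transport(report)

# Capture reports in a list instead of sending over the network.
received = []
send_report(make_state_report("t2", "transcoder", "running"), received.append)
```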
With reference to
According to embodiments, the NBMP client 710 may be implemented by the NBMP source 310 of
According to embodiments, the system 700 may perform a method that includes (1) creating a workflow, (2) getting available functions, (3) creating splitter, transcoder, and merger tasks, (4) running the workflow, (5) streaming media to the workflow, and (6) visualizing the workflow and task states.
For example, the NBMP client 710 may create and send a WDD 781, which describes a workflow 730, to the workflow manager 724, and the workflow manager 724 may create the workflow 730. According to embodiments, the workflow 730 may be created as previously described with reference to
According to embodiments, the workflow manager 724 may report (786) workflow states to the webUI backend 740 of the reporting server; the task manager 726 may report (787) task states to the webUI backend 740 of the reporting server; the splitter 420 may report (788) splitter states to the webUI backend 740 of the reporting server; the transcoders 430 may report (789) their respective transcoder states to the webUI backend 740 of the reporting server; and the merger 440 may report (790) merger states to the webUI backend 740 of the reporting server. The reporting of the states may include sending first information including indicators that indicate the respective states. According to embodiments, the WDD 781 may include second information indicating where one or more of the workflow manager 724, the task manager 726, the splitter 420, the transcoders 430, and the merger 440 should report their respective states, and the second information may further indicate what is to be reported. According to embodiments, the workflow manager 724 and/or task manager 726 may report their respective states based on the second information, and may control the one or more media processing entities to report the states of the splitter 420, the transcoders 430, and/or the merger 440, that are implemented by the one or more media processing entities, based on the second information.
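The "second information" described above — where to report and what to report — might be carried in the WDD in a shape like the following. Both the field names and the endpoint URL are hypothetical illustrations, not NBMP-defined descriptors.

```python
# Hypothetical reporting descriptor inside a WDD: "where" names the
# reporting endpoint, "what" lists which states should be reported.
wdd_reporting = {
    "where": "https://reports.example.com/webui-backend",
    "what": ["workflow", "task", "splitter", "transcoder", "merger"],
}

def should_report(role: str, reporting: dict) -> bool:
    """Decide whether a component of the given role reports its state."""
    return role in reporting["what"]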
According to embodiments, the reporting server may cause (792) visualization of data on the web dashboard 742 based on the first information received by the reporting server. For example, the data visualized may include the workflow states, the task states, a workflow graph, and a media sink video player. According to embodiments, the splitter states, the transcoders states, and the merger states may also be visualized. The reporting server may be configured to cause the data to be visualized by causing at least one display to display the web dashboard 742.
According to embodiments, systems and methods of parallel transcoding of a media stream using two or more transcoders may be provided that increase the effective speed of transcoding. The systems and methods may implement NBMP splitter and merger functions of an NBMP standard, so as to be configurable in the number of splits/merges, and may function using timing metadata. The systems and methods may manage instantiation, deployment, management, and monitoring of a workflow using the NBMP standard, wherein workflow tasks, as well as the NBMP workflow manager, report progress to a web-based dashboard in real time.
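Because the transcoders run in parallel, their outputs may complete out of presentation order; the timing metadata mentioned above lets the merger reassemble them correctly. A minimal sketch, assuming each segment is tagged with a start time (the `(start_time, payload)` representation is an illustrative assumption):

```python
def merge_by_timing(segments: list) -> list:
    """Reassemble transcoded segments into presentation order by start time."""
    return [payload for _, payload in sorted(segments)]

# Transcoders finished out of order; timing metadata restores the order.
out_of_order = [(2.0, "seg-C"), (0.0, "seg-A"), (1.0, "seg-B")]
merged = merge_by_timing(out_of_order)
```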
According to embodiments of the present disclosure, at least one processor with memory storing computer code may be provided. The computer code may be configured to, when executed by the at least one processor, perform any number of aspects of the present disclosure.
For example, with reference to
The obtaining code 810 may be configured to cause the NBMP workflow manager 320 to obtain information from a WDD, in accordance with embodiments of the present disclosure. For example, the NBMP workflow manager 320 may receive the WDD, and parameters therein may be signaled to the NBMP workflow manager 320 such that the NBMP workflow manager 320 obtains corresponding information.
The creating code 820 may be configured to cause the NBMP workflow manager 320 to create a media processing workflow that includes tasks 352, in accordance with embodiments of the present disclosure. For example, the tasks 352 may include the functions of the splitter 420, the transcoders 430, and the merger 440. According to embodiments, the media processing workflow may be created based on information obtained from the WDD.
The first controlling code 830 may be configured to cause the NBMP workflow manager 320 to control at least one media processing entity 350 to perform the media processing workflow, in accordance with embodiments of the present disclosure.
The second controlling code 840 may be configured to cause the NBMP workflow manager 320 to control the at least one media processing entity 350 that performs the media processing workflow to report at least one from among a splitter state, a transcoder state, and a merger state, in accordance with embodiments of the present disclosure. For example, a media processing entity 350 that implements the splitter 420 may be controlled to report a splitter state, a media processing entity 350 that implements the transcoder 430 may be controlled to report a transcoder state, and a media processing entity 350 that implements the merger 440 may be controlled to report a merger state. According to embodiments, the NBMP workflow manager 320 may perform the control based on the information obtained from the WDD. For example, the NBMP workflow manager 320 may control what information is to be reported, and where the information is to be reported, based on the information obtained from the WDD. According to embodiments, the NBMP workflow manager 320 may control the media processing entities 350 to report the states to the reporting server.
According to one or more embodiments, embodiments of the present disclosure may be implemented in environments different from NBMP.
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations.
As used herein, the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software.
Even though combinations of features are recited in the claims and/or described in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.), and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
This application claims priority from U.S. Provisional Application No. 63/253,053, filed on Oct. 6, 2021, the disclosure of which is incorporated herein by reference in its entirety.