The present disclosure is generally directed to Internet of Things (IoT) systems, and more specifically to intelligent solutions for Industrial Internet of Things (IIoT) applications that optimize IoT operations by harnessing the power of data. Developing such solutions can involve complex event processing for IoT systems.
Related art digital twin-based solutions are attractive, but they are labor intensive and time consuming to deploy effectively. Further, current digital twin-based solutions lack effective actionable business insights. There is a need for a composable, on-demand digital twin architecture that is modular and scalable to customer needs. Solutions that standardize and productize digital twins along with actionable business insights can enable new smart products, which can benefit immensely from a composable digital twin solution powered by machine learning. Modular architectures allow for composability by putting together the required modules to develop a solution. Scalable architectures allow for deployment on a single computer, a cluster of computers, or cloud environments. Further, heterogeneous architectures with different computational resources are needed.
In a related art implementation, there is a digital twin of a twinned physical system in which one or more sensor values allow the system to monitor the condition of a selected portion of the twinned physical system and assess the remaining useful life of the designated portion. Such related art implementations analyze the sensor values of the twinned physical system to execute optimization software and identify the optimal operational control and optimal operational practices of the twinned physical system. Such related art implementations enhance mission deployment, inspection and maintenance scheduling, and can be extended to other types of digital twins as well.
In another related art implementation, there is a hierarchical asset control system that relies on identification of an equipment list. Such a related art implementation determines the control path between assets and identifies the constraints of each asset to allow a smart agent to control the asset. The related art control system is based on intelligent asset-based templates that are populated after identifying the system bounds. The related art control system is equipped with a processor that identifies the hierarchical arrangement of asset control relationships for a hierarchical asset control application by connecting each of the instantiated intelligent agents based on parent/child information.
Example implementations described herein are directed to an adaptive digital twin and its architecture, which can be used to develop composable digital twins along with business policies to facilitate quick development of adaptive machine learning-based business solutions for complex event processing.
Example implementations described herein involve a composable modular architecture involving four modules: Analytics Solution Cores, Sensor Cores, Asset Cores, and Policy Cores. The inferencing and training pipelines are composed on demand for complex event processing. Example implementations can compose multiple pipelines into a knowledge base of pipelines and execute only those pipelines warranted by events.
Analytics solution cores (ASC) represent a basic building block with machine learning algorithms that can be used for several vertical applications. An ASC store will store available algorithms in accordance with the desired implementation. Sensor cores make use of one or several analytics solution cores to provide actionable insights. Sensor cores can ingest real sensor data or virtual sensor data which is calculated by simple or complex algorithms/software in accordance with the desired implementation. An asset core represents the physical asset of interest and connects to the relevant sensor cores depending on the sensors associated with the specific asset.
The output of the asset core module will be ingested by the policy core to provide actionable insights that may use machine learning algorithms such as reinforcement learning or optimization algorithms. Further, the policy core manages creating the new pipelines with asset cores, sensor cores, and ASCs for training or inferencing while allocating compute resources to the new pipeline.
In example implementations, each of the layers (policy core, asset core, sensor core, ASC) can be multilevel. For example, the analytics solution core module can be multilevel with several analytics solution cores arranged in series. In another example, the digital twin asset can be multilevel with a parent machine with several sub-components. The data flow can be happening directly into the module or coming from the parent module depending on the desired implementation.
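By way of illustration, the following is a minimal Python sketch of how the four modules could be composed, including the multilevel arrangements described above. All class names, fields, and the toy scoring logic are illustrative assumptions; the disclosure does not prescribe a concrete API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AnalyticsSolutionCore:
    """Basic building block wrapping a machine learning algorithm."""
    name: str
    algorithm: Callable[[List[float]], float]  # assumed signature: data -> score

    def run(self, data: List[float]) -> float:
        return self.algorithm(data)

@dataclass
class SensorCore:
    """Ingests real or virtual sensor data and delegates to one or more ASCs."""
    name: str
    ascs: List[AnalyticsSolutionCore]

    def insight(self, data: List[float]) -> List[float]:
        return [asc.run(data) for asc in self.ascs]

@dataclass
class AssetCore:
    """Represents a physical asset; sub-assets form a multilevel hierarchy."""
    name: str
    sensors: List[SensorCore] = field(default_factory=list)
    sub_assets: List["AssetCore"] = field(default_factory=list)

    def evaluate(self, data: List[float]) -> dict:
        results = {s.name: s.insight(data) for s in self.sensors}
        for child in self.sub_assets:  # parent machine with several sub-components
            results.update(child.evaluate(data))
        return results

@dataclass
class PolicyCore:
    """Turns aggregated asset-core output into an actionable insight."""
    name: str

    def act(self, kpis: dict) -> str:
        worst = min(v for vals in kpis.values() for v in vals)
        return "schedule maintenance" if worst < 0.5 else "continue monitoring"

# Example composition: one ASC feeding a sensor core on a sub-component.
asc = AnalyticsSolutionCore("health-score", lambda xs: sum(xs) / len(xs))
pump = AssetCore("pump-1", sensors=[SensorCore("vib-1", [asc])])
plant = AssetCore("plant", sub_assets=[pump])
print(PolicyCore("maintenance").act(plant.evaluate([0.4, 0.6])))
```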
Aspects of the present disclosure can involve a method, which can include, for receipt of a composed digital twin, processing the composed digital twin through a policy core process that determines a policy for the digital twin; executing an asset core process that determines an asset hierarchy of physical assets represented by the digital twin based on metadata of the physical assets retrieved from a metadata database and the determined policy; executing a sensor core process that determines a sensor hierarchy to be associated with the asset hierarchy based on metadata of sensors retrieved from the metadata database of the sensors and the asset core process; executing an analytics solution core that determines analytics solutions for the physical assets based on the metadata database and the sensor core process; constructing pipelines to facilitate the analytics solutions across a policy core layer, asset core layer, sensor core layer, and analytics solution core layer of the digital twin; and executing the pipelines with computational resources to determine key performance indicator (KPI) values to be provided to an application programming interface (API).
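A hedged, runnable Python sketch of this flow is shown below; the dictionary-based metadata database, its field names, and the trivial pipeline execution are assumptions made purely for illustration, not the claimed implementation.

```python
def process_composed_digital_twin(twin: dict, metadata_db: dict) -> dict:
    # Policy core process: determine a policy for the digital twin.
    policy = {"objective": twin.get("objective", "monitor")}
    # Asset core process: asset hierarchy from asset metadata and the policy.
    assets = [a for a in metadata_db["assets"] if a["twin"] == twin["id"]]
    # Sensor core process: sensor hierarchy tied to the asset hierarchy.
    names = {a["name"] for a in assets}
    sensors = [s for s in metadata_db["sensors"] if s["asset"] in names]
    # Analytics solution core: an analytics solution per sensor from metadata.
    pipelines = [(s["asset"], s["name"], s["solution"]) for s in sensors]
    # Execute the pipelines (trivially here) to produce KPI values for an API.
    return {f"{policy['objective']}:{a}/{s}": f"executed {sol}"
            for a, s, sol in pipelines}

# Example composed twin and metadata database.
twin = {"id": "T1", "objective": "monitor"}
metadata_db = {
    "assets": [{"name": "pump-1", "twin": "T1"}],
    "sensors": [{"name": "vib-1", "asset": "pump-1",
                 "solution": "anomaly-detection"}],
}
print(process_composed_digital_twin(twin, metadata_db))
# {'monitor:pump-1/vib-1': 'executed anomaly-detection'}
```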
Aspects of the present disclosure can involve a computer program, storing instructions which can include, for receipt of a composed digital twin, processing the composed digital twin through a policy core process that determines a policy for the digital twin; executing an asset core process that determines an asset hierarchy of physical assets represented by the digital twin based on metadata of the physical assets retrieved from a metadata database and the determined policy; executing a sensor core process that determines a sensor hierarchy to be associated with the asset hierarchy based on metadata of sensors retrieved from the metadata database of the sensors and the asset core process; executing an analytics solution core that determines analytics solutions for the physical assets based on the metadata database and the sensor core process; constructing pipelines to facilitate the analytics solutions across a policy core layer, asset core layer, sensor core layer, and analytics solution core layer of the digital twin; and executing the pipelines with computational resources to determine key performance indicator (KPI) values to be provided to an application programming interface (API). The computer program and the instructions can be stored on a non-transitory computer readable medium and executed by one or more processors.
Aspects of the present disclosure can involve a system, which can include, for receipt of a composed digital twin, means for processing the composed digital twin through a policy core process that determines a policy for the digital twin; means for executing an asset core process that determines an asset hierarchy of physical assets represented by the digital twin based on metadata of the physical assets retrieved from a metadata database and the determined policy; means for executing a sensor core process that determines a sensor hierarchy to be associated with the asset hierarchy based on metadata of sensors retrieved from the metadata database of the sensors and the asset core process; means for executing an analytics solution core that determines analytics solutions for the physical assets based on the metadata database and the sensor core process; means for constructing pipelines to facilitate the analytics solutions across a policy core layer, asset core layer, sensor core layer, and analytics solution core layer of the digital twin; and means for executing the pipelines with computational resources to determine key performance indicator (KPI) values to be provided to an application programming interface (API).
Aspects of the present disclosure can involve an apparatus, which can include a memory configured to store instructions and a processor configured to execute the stored instructions involving, for receipt of a composed digital twin, processing the composed digital twin through a policy core process that determines a policy for the digital twin; executing an asset core process that determines an asset hierarchy of physical assets represented by the digital twin based on metadata of the physical assets retrieved from a metadata database and the determined policy; executing a sensor core process that determines a sensor hierarchy to be associated with the asset hierarchy based on metadata of sensors retrieved from the metadata database of the sensors and the asset core process; executing an analytics solution core that determines analytics solutions for the physical assets based on the metadata database and the sensor core process; constructing pipelines to facilitate the analytics solutions across a policy core layer, asset core layer, sensor core layer, and analytics solution core layer of the digital twin; and executing the pipelines with computational resources to determine key performance indicator (KPI) values to be provided to an application programming interface (API).
Aspects of the present disclosure can include a system, which can involve a meta policy core actor configured to produce a policy for a digital twin; an asset core managing an asset core template configured to instantiate asset core actors in an asset hierarchy of physical assets represented by the digital twin based on metadata of the physical assets and the policy produced by the meta policy core actor; a sensor core managing a sensor core template configured to instantiate one or more sensor actors in a sensor hierarchy based on a metadata database and to ingest physical or virtual sensor data from a database; an analytics solution core managing an analytics solution core template that instantiates one or more analytics solution core actors and trains or inferences analytics solutions based on metadata and sensor data received through the sensor hierarchy; and a pipeline constructor configured to construct pipelines to facilitate the analytics solutions across a policy core layer, asset core layer, sensor core layer, and analytics solution core layer of the digital twin, and to construct additional pipelines or destruct certain pipelines during runtime execution of the pipelines.
The following detailed description provides details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.
Industrial systems have several components and a very complex hierarchy. Any damage, failure mode, or event on one component can affect other components and subsequently the entire system. The effect of any event requires complex event processing to intelligently manage IIoT systems. A digital twin-based IIoT management software system therefore needs to address such complex systems and events to effectively manage the entire system. Building such digital twin-based systems is very difficult due to the complexity of the software architecture and the types of models needed.
Different components of the system need different types of models. Such models could be purely data driven, purely physics based, or some hybrid thereof. These models can be used to obtain actionable insights through a deep and intelligent understanding of the data by analyzing event patterns, event filtering, event transformation, or event hierarchies to determine the causality of events.
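As one hedged illustration of such a hybrid model, the following Python sketch combines a physics-based estimate with a data-driven residual correction; the bearing-temperature formula, the history format, and the mean-error correction are assumptions standing in for a trained machine learning model.

```python
def physics_model(rpm: float) -> float:
    # Assumed first-principles estimate of bearing temperature from speed.
    return 25.0 + 0.004 * rpm

def data_driven_residual(history: list) -> float:
    # Stand-in for a trained ML correction: mean error against observations.
    errors = [t_obs - physics_model(r) for r, t_obs in history]
    return sum(errors) / len(errors) if errors else 0.0

def hybrid_model(rpm: float, history: list) -> float:
    # Hybrid prediction: physics-based term plus data-driven residual.
    return physics_model(rpm) + data_driven_residual(history)

history = [(1500, 31.5), (1600, 32.0)]   # (rpm, observed temperature)
print(round(hybrid_model(1550, history), 2))  # 31.75
```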
Putting together a complex digital twin system with such diverse models is very complex. Additional complexity comes from the diverse data needs of each of these component models, with some needing streaming data from component sensors, and the varied computational resources needed depending on the nature of the model. To enable such complex event processing, a modular, composable digital twin software architecture is needed, both to enable the processing and to reduce the time to develop and time to market for different industrial customers. An architecture is needed that can model both static physical assets and dynamic processes; related art digital twin architectures focus on one or the other, but not both. Further, architectures that encourage standardization and reduce time to market are needed to deliver business value with a high return on investment.
In a first example problem, there is an industrial facility with several assets and potential digital twin models for IIoT operation and complex event processing.
In a second example problem, there can be a manufacturing process problem.
In the example of
As will be described herein, a pipeline is defined as a series of policy cores, asset cores, sensor cores, and analytics solution cores put together to calculate a business outcome. The modular architecture of the present disclosure has the following characteristics, with a composition sketch following the list.
Composable: The architecture should facilitate composing the computational pipelines to meet the changing requirements of physical assets in a static or dynamic fashion. Static means composing the digital twin pipeline before starting the software, and dynamic means changing the pipelines as needed while the digital twin software is executing.
Reusability: The cores need to be reusable. Reuse of cores facilitates a faster time to market and also greatly reduces development cost.
Expandability: Each core is capable of expanding its capabilities by integrating and aligning with additional cores.
Combinability: The cores, if needed, should be amenable to combination in series or parallel to build the pipeline. Such combinability can involve parallelism at the modular level, scalability with the cloud, and enabling required governance per business needs or government regulations (e.g., GDPR, and so on).
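Below is a minimal Python sketch of this composability, assuming a hypothetical Pipeline type; static composition assembles the stage list before execution starts, while the extend call reshapes the pipeline during runtime. The stage functions are toy placeholders for cores.

```python
from typing import Callable, List

class Pipeline:
    """A series of core stages executed in order to yield a business outcome."""
    def __init__(self, stages: List[Callable[[float], float]]):
        self.stages = list(stages)

    def extend(self, stage: Callable[[float], float]) -> None:
        # Dynamic composition: add a stage while the software is executing.
        self.stages.append(stage)

    def run(self, value: float) -> float:
        for stage in self.stages:  # series combination of cores
            value = stage(value)
        return value

# Static composition: the pipeline is assembled before execution starts.
pipeline = Pipeline([
    lambda x: x * 0.9,        # sensor core stage: toy feature scaling
    lambda x: 1.0 - x,        # ASC stage: toy health-score model
])
print(pipeline.run(0.5))      # 0.55

pipeline.extend(lambda x: round(x, 2))  # dynamic: changed during execution
print(pipeline.run(0.5))
```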
The policy core 410 is an intelligent engine that upon processing the results, determines possible next recommendations and shares the obtained insights on a dashboard for the user to gain additional insights and knowledge of the system. The policy core 410 also instantiates one or more policy actors 412 from use of a compute resource composer 411 and a composable pipeline knowledge base 413. Further details of the policy core 410 are provided with respect to
The asset core 420 instantiates one or more asset actors 421 to represent the asset hierarchy of the physical assets of the underlying system. Further details of the asset core 420 are provided with respect to
The structure of the policy core 410 includes a meta policy actor 500 that interacts with other policy cores, a pipeline composer 501, and a metadata store including information about sensor cores, pipelines, ASCs, and assets. The meta policy actor 500 also interacts with asset actors, compute resources, a business action application programming interface (API) 502, a monitoring dashboard API 503, or an operational control API.
Policy core 410 is capable of instantiating and executing new pipelines based on observed events and outcomes. To start any new pipeline, the policy core 410 goes through a series of actions, which involves identifying the possible analytics solution cores and then identifying all the possible assets, data, and metadata relevant to the new analytical pipeline. Policy core 410 is also aware of all the available resources (hardware, software, and computational time) and calculates the optimal combination of resources and computational power given the time constraints for obtaining the desired insights. Depending on the desired implementation, the policy core 410 can be multilevel. For example, the output of the alert optimizer can be sent to the business policies algorithm to provide an actionable insight.
Each policy core 410 can build and execute an analytical pipeline, which can involve an asset core, a sensor core, and an ASC. A meta policy core template is the standardized code base that can be re-used to instantiate policy actors at runtime. It can have multiple engines, such as a heuristic engine or a deep learning-based reinforcement learning engine, to make decisions for business insights or additional pipeline generation in accordance with the desired implementation.
Depending on the desired implementation, the meta policy actor 500 and policy actors can be multilevel. A meta policy actor 500 can be connected to other meta policy actors or policy actors. A policy actor can be connected to one or several other policy actors or asset actors. Further, a meta policy actor 500 can be connected to a pipeline composer 501 and to a computational resource composer 411.
The intelligence algorithms in the policy core include, but are not limited to, heuristic-based or deep learning-based reinforcement learning algorithms for prescribing business action or triggering the building/execution of new pipelines, and/or optimization algorithms to optimize a process parameter, such as maximizing yield in a manufacturing process.
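A hedged sketch of such a heuristic engine follows; the severity field, the thresholds, and the action strings are illustrative assumptions, and a reinforcement learning engine could replace the rule table behind the same interface.

```python
def heuristic_policy(event: dict) -> str:
    """Map an observed event to a prescribed action (toy rule table)."""
    severity = event.get("severity", 0.0)
    if severity > 0.8:
        return "prescribe business action"       # e.g., a maintenance order
    if severity > 0.5:
        return "build and execute new pipeline"  # e.g., remaining useful life
    return "continue monitoring"

print(heuristic_policy({"asset": "pump-1", "severity": 0.62}))
```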
In the example of
At 9, the compute resource composer 411 sends relevant information to the computational environment 504 for the creation or confirmation of the desired environment. At 91, the computational environment 504 sends confirmation of the availability of the desired computational environment. At 21, the compute resource composer 411 sends the confirmation of the compute resources to the meta policy actor 500.
At 11, the meta policy actor 500 spins a new policy actor to build a new pipeline, further details of which are provided in
Subsequently at 706, the meta policy actor 500 continues to monitor the asset to gain further insights. Another possible outcome is to provide insights at 707 in conjunction with the list of affected assets and KPIs and the actions associated with them at 708. In another possible outcome, a determination 710 is made to create an optimized new pipeline at 711 with the help of the list of actions and the business heuristics at 709.
A sensor core template 902 is the standardized code base that can be re-used to instantiate sensor actors at runtime. Sensor core actors can be arranged in multiple layers. One sensor actor can be connected to one or more ASC actors, one or more other sensor actors, and/or to an asset actor depending on the desired implementation. The template can include various libraries in accordance with the desired implementation, such as, but not limited to, sensor-specific feature engineering, compatible ASC analytics metadata, an ASC pipeline generator, a sensor core to asset core API, a sensor core to ASC core API, and a data transfer API to/from the IoT data source.
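The template/actor split could look like the following Python sketch; the class names, the feature-engineering hook, and the compatible-ASC list are assumptions modeled on the libraries enumerated above.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SensorActor:
    """A sensor actor instantiated at runtime from the template."""
    sensor_id: str
    feature_engineering: Callable[[List[float]], List[float]]

    def ingest(self, raw: List[float]) -> List[float]:
        # Sensor-specific feature engineering before handing off to ASC actors.
        return self.feature_engineering(raw)

@dataclass
class SensorCoreTemplate:
    """Standardized code base re-used to instantiate sensor actors at runtime."""
    feature_engineering: Callable[[List[float]], List[float]]
    compatible_ascs: List[str] = field(default_factory=list)

    def instantiate(self, sensor_id: str) -> SensorActor:
        return SensorActor(sensor_id, self.feature_engineering)

template = SensorCoreTemplate(
    feature_engineering=lambda xs: [x - min(xs) for x in xs],  # toy baseline removal
    compatible_ascs=["anomaly-detector"],
)
actor = template.instantiate("vib-1")
print(actor.ingest([3.0, 4.5, 3.5]))  # [0.0, 1.5, 0.5]
```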
At 1105, the ASC actor computes and sends the results to the sensor core. At 1104, the sensor actor computes and sends the results back to the asset actors. At 1103, the asset actors aggregate and compute the result and send the events to the policy actor. At 1101, the policy actor sends the event information to the meta policy actor 500. At 1108, the meta policy actor 500 will send any action triggers to the business action API 502 based on the algorithms being executed. At 1110, the meta policy actor 500 sends event information to a monitoring dashboard 503 for user consumption.
The processing and data flow for monitoring is as follows. At 1215, the sensor core receives asset sensor data, indicating whether real sensor data or virtual sensor data is to be processed by the ASC actors. At 1213, the ASC actor receives operational data from the IoT store. At 1205, the ASC actor sends the detection or prediction to the sensor core.
At 1204, the sensor core sends event information to the asset actor. At 1203, the asset actor sends event information to the policy actor. At 1210, the meta policy actor 500 sends monitoring information to the monitoring dashboard 503. At 1208, the meta policy actor 500 sends business actions to the business action API 502 based on prebuilt algorithms.
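A minimal sketch of this bottom-up flow is given below, with plain function calls standing in for the actor messaging; the aggregation rule, the alert threshold, and the API strings are illustrative assumptions only.

```python
def asc_actor(data):              # computes a detection/prediction
    return sum(data) / len(data)

def sensor_actor(data):           # forwards the ASC result upward
    return asc_actor(data)

def asset_actor(sensor_results):  # aggregates sensor results into an event
    score = sum(sensor_results) / len(sensor_results)
    return {"event": "alert" if score > 0.7 else "normal", "score": score}

def meta_policy_actor(event):     # triggers business action and dashboard APIs
    actions = []
    if event["event"] == "alert":
        actions.append("business_action_api: create work order")
    actions.append(f"monitoring_dashboard: {event}")
    return actions

def policy_actor(event):          # passes event information to the meta level
    return meta_policy_actor(event)

readings = [sensor_actor([0.8, 0.9]), sensor_actor([0.7, 0.75])]
print(policy_actor(asset_actor(readings)))
```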
In an example of the event-driven complex event processing, the system monitors the assets using certain monitoring pipelines. Based on certain events, the meta policy actor 500 can spin up additional pipelines during runtime to calculate additional parameters, such as the remaining useful life of the same component or the health score of a related component, to calculate and derive an actionable insight.
The event 1 pipeline (in dashed line) is in response to an event 1 that the meta policy actor initiated in order to calculate additional parameters. In this case, the event 1 pipeline is for the same asset as the monitored asset. The event 2 pipeline (in bold line) is triggered on a different asset based on an event on the monitored asset. Additionally, multiple pipelines can be triggered in parallel in response to the result of a monitoring alert, or a combination of a monitoring alert and previous event pipelines, depending on the desired implementation.
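The following hedged sketch illustrates such event-driven spawning; the event names and the event-to-pipeline mapping are assumptions for illustration only.

```python
# Assumed mapping from observed events to pipelines the meta policy actor spawns.
EVENT_PIPELINES = {
    "event-1": ["remaining-useful-life (same asset)"],
    "event-2": ["health-score (related asset)"],
}

def on_monitoring_event(event: str, running: list) -> list:
    # The meta policy actor spins up additional pipelines during runtime.
    for pipeline in EVENT_PIPELINES.get(event, []):
        if pipeline not in running:
            running.append(pipeline)
    return running

running = ["monitoring"]
running = on_monitoring_event("event-1", running)
running = on_monitoring_event("event-2", running)
print(running)
```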
Distributed parallel environment builder 1701 builds and manages the cluster per instructions from policy core runtime 1702. Policy core runtime 1702 involves the central pieces of orchestration with the meta policy actor and policy actors. Digital twin composer 1703 includes the composer, metadata store, and templates for asset/sensor/ASC cores. ML flow model store 1704 is a model store with pre-developed models. User input API 1705 provides user input to the meta policy store. User APIs provide APIs for the visual dashboard 1706 and storage 1707. Operational control system API 1708 provides instructions to control the system for further action per the operational system algorithms. Business action API 1709 is an Application Performance Management (APM) alert system for maintenance, repairs, and so on. The model server 1710 can run the models based on IoT data received from IoT devices 1711.
Through the example implementations described herein, it is possible to facilitate complex event processing for IIoT systems, standardization of the computational framework for asset cores to deliver business value, as well as flexible reuse of ASCs, sensor cores, and asset cores for new assets and customers. Further, the example implementations described herein can facilitate the composition of new solutions from existing modules, scale the computation from a single computer to multiple computers and to cloud infrastructure with minimal or no changes, provide standardization of analytics for quick deployment, significantly reduce the time to deploy a solution, and enable non-experts to perform the deployment task.
The system of
Computer device 2005 can be communicatively coupled to input/user interface 2035 and output device/interface 2040. Either one or both of input/user interface 2035 and output device/interface 2040 can be a wired or wireless interface and can be detachable. Input/user interface 2035 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). Output device/interface 2040 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, input/user interface 2035 and output device/interface 2040 can be embedded with or physically coupled to the computer device 2005. In other example implementations, other computer devices may function as or provide the functions of input/user interface 2035 and output device/interface 2040 for a computer device 2005.
Examples of computer device 2005 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
Computer device 2005 can be communicatively coupled (e.g., via I/O interface 2025) to external storage 2045 and network 2050 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration. Computer device 2005 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
I/O interface 2025 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 2000. Network 2050 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
Computer device 2005 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
Computer device 2005 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
Processor(s) 2010 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 2060, application programming interface (API) unit 2065, input unit 2070, output unit 2075, and inter-unit communication mechanism 2095 for the different units to communicate with each other, with the OS, and with other applications (not shown). The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided. Processor(s) 2010 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
In some example implementations, when information or an execution instruction is received by API unit 2065, it may be communicated to one or more other units (e.g., logic unit 2060, input unit 2070, output unit 2075). In some instances, logic unit 2060 may be configured to control the information flow among the units and direct the services provided by API unit 2065, input unit 2070, output unit 2075, in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 2060 alone or in conjunction with API unit 2065. The input unit 2070 may be configured to obtain input for the calculations described in the example implementations, and the output unit 2075 may be configured to provide output based on the calculations described in example implementations.
Processor(s) 2010 can be configured to execute instructions or a method which can involve, for receipt of a composed digital twin, processing the composed digital twin through a policy core process that determines a policy for the digital twin; executing an asset core process that determines an asset hierarchy of physical assets represented by the digital twin based on metadata of the physical assets retrieved from a metadata database and the determined policy; executing a sensor core process that determines a sensor hierarchy to be associated with the asset hierarchy based on metadata of sensors retrieved from the metadata database of the sensors and the asset core process; executing an analytics solution core that determines analytics solutions for the physical assets based on the metadata database and the sensor core process; constructing pipelines to facilitate the analytics solutions across a policy core layer, asset core layer, sensor core layer, and analytics solution core layer of the digital twin; and executing the pipelines with computational resources to determine key performance indicator (KPI) values to be provided to an application programming interface (API) as illustrated in
Processor(s) 2010 can be configured to execute instructions or a method which can involve, for a detection of an event, triggering an automatic construction of additional pipelines based on the pipeline execution.
Processor(s) 2010 can be configured to execute instructions or a method wherein the executing the asset core process involves executing an asset core template based on the metadata of the physical assets and the determined policy to instantiate one or more asset core actors to form the asset hierarchy; connecting the one or more asset core actors to one or more policy core actors based on the determined policy; connecting the one or more asset core actors to one or more other asset core actors to build the asset hierarchy; and providing the KPI values to the one or more policy core actors.
Processor(s) 2010 can be configured to execute instructions or a method wherein the executing the sensor core process involves executing a sensor core template based on the metadata database to instantiate one or more sensor core actors as the sensor hierarchy; connecting the one or more sensor core actors to one or more asset core actors based on the asset hierarchy; connecting the one or more sensor core actors to one or more other sensor core actors to build the sensor dependency; feeding physical or virtual sensor data into the one or more sensor core actors from a database or from the one or more other sensor core actors; feeding metadata into the one or more sensor core actors from a metadata database or from the one or more asset core actors; and providing the KPI values to the one or more asset core actors.
Processor(s) 2010 can be configured to execute instructions or a method wherein the analytics solution core process involves executing an analytics solution core template on the metadata database to instantiate one or more analytics solution core actors; feeding physical or virtual sensor data from one or more sensor core actors; training or inferencing the analytics solutions based on metadata received through the sensor hierarchy; wherein the one or more analytics solution core actors write metadata to a database; wherein the KPI values are provided to the one or more sensor core actors.
Processor(s) 2010 can be configured to execute a method or instructions that further involve, for detection of one or more events associated with one or more assets from the asset hierarchy from monitoring the KPI values, generating additional pipelines during runtime execution of the pipelines for the one or more assets to calculate and derive an actionable insight for the one or more events. Depending on the desired implementation, the method or instructions can further facilitate functionality for dynamic event generation, interpretation, and/or resolution for complex event processing. The event interactions of each pipeline can contribute to and be aggregated into the final KPI. Depending on the size of the event, a sub-pipeline can be generated to study the sub-event. In addition, the predictive nature of the aggregate of the event information from the executed pipelines can predict and remediate certain events before they occur, in accordance with the desired implementation. Further, the optimization of event pipeline outcomes by the policy core layer could potentially serve as the basis for prescriptive action on the asset once the event has occurred.
Processor(s) 2010 can be configured to execute a method or instructions that construct pipelines to facilitate the analytics solutions by generating a pipeline configuration through interaction with an infrastructure compiler based on available compute resources for the digital twin, and executing a set of pipelines from the pipeline configuration based on constraints on the available compute resources.
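One way such a constraint could be applied is sketched below; the greedy priority-based selection, core counts, and pipeline names are assumptions for illustration and are not the disclosed infrastructure compiler.

```python
def schedule(pipelines: list, available_cores: int) -> list:
    """Select pipelines to run, constrained by available compute resources."""
    chosen, used = [], 0
    for p in sorted(pipelines, key=lambda p: p["priority"], reverse=True):
        if used + p["cores"] <= available_cores:
            chosen.append(p["name"])
            used += p["cores"]
    return chosen

pipelines = [
    {"name": "monitoring", "cores": 2, "priority": 3},
    {"name": "rul",        "cores": 4, "priority": 2},
    {"name": "what-if",    "cores": 4, "priority": 1},
]
print(schedule(pipelines, available_cores=6))  # ['monitoring', 'rul']
```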
Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.
Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the techniques of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the techniques of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.