The present invention relates to a system and method for an intelligent flow framework to enable and control an artificial intelligence model to define actions or tasks and, more particularly, to a system and method implemented by an intelligent flow framework module that is communicatively coupled to an artificial intelligence module and an interface to deploy intelligent flow agents that independently select, prioritize or generate actions using intelligent flow.
This section describes the technical field in detail and discusses problems encountered in it. Therefore, statements in this section are not to be construed as prior art.
In recent years, artificial intelligence has made impressive progress in natural language processing, with Large Language Models (LLMs) leading the way by transforming how machines interact with humans and revolutionizing various industries through applications such as text generation, machine translation, sentiment analysis, and question-answering systems. The emergence of LLMs has brought a paradigm shift in natural language processing (NLP) by improving the performance of various NLP tasks, such as chatbots, by enabling coherent, contextually relevant responses and fostering new possibilities for creative writing, breaking down language barriers, analyzing customer feedback, improving knowledge retrieval systems, and streamlining support services.
Large language models have made it possible to create systems that can partially or completely improve the workflow of human professional activities such as consulting, coaching, education, assistant help, and various types of services like psychological assistance, sales management, healthcare guidance, and physical education. Examples of LLM implementations for diverse tasks include ChatGPT, LLaMA, Chameleon, Dolly, etc. However, these implementations face inherent technical limitations that can impact their effectiveness and usability in many user scenarios. Such limitations include passive agents, short or no memory, no pre-defined or self-generated workflows, limited domain knowledge, and a lack of context, emotions, self-reflection, social awareness, common sense, reasoning, and creativity. Some of these models also struggle to handle ambiguity and multi-lingual conversations and are vulnerable to bias. These limitations can affect the ability of LLMs to perform certain tasks, especially those that involve longitudinal goals requiring intermediary prerequisites, such as mental health therapy tasks or missions.
The current limitations of single-input generative artificial intelligence (AI) prevent such systems from performing long-term missions with defined goals, prioritizing tasks and goals, breaking down goals into a chain of actions, launching parallel execution of tasks and goals, accumulating and turning information into knowledge and intuition, forgetting negative experiences or erroneous information, sharing information and skills, using actions and skills from third parties without modifying an intelligent agent (IA) circuit, and exploring open and closed sources for new actions and skills through training and targeted search. These abilities would allow the AI to perform missions (task graphs) more efficiently and effectively, achieve goals, and adapt to changing circumstances. Therefore, there is a void in the technology domain for a mission- or task-driven intelligent flow framework, processes, and agents with intelligent choice.
Therefore, there is a need for a system or method to improve the performance of the existing artificial intelligence system by providing a modular framework that can enable AI models to adapt to different missions by any user having little or no knowledge of the underlying AI model.
The object is solved by the independent claims, and embodiments and improvements are listed in the dependent claims. Hereinafter, what is referred to as an “aspect”, “design”, or “used implementation” relates to an “embodiment” of the invention; when used in connection with the expression “according to the invention”, which designates steps/features of the independent claims as claimed, it designates the broadest embodiment claimed with the independent claims.
An object of the present invention is to provide a system with the ability to adapt quickly to changing circumstances and make intelligent decisions to ensure the successful completion of missions/objectives.
Another object of the present invention is to provide a system with a modular architecture to allow for flexible customization and optimization to meet the unique needs of different applications.
Another object of the present invention is to provide a system to manage resources effectively and optimize the performance of the system for completing any mission, task, or objective.
Another object of the present invention is to provide a system to incorporate real-time data feeds and analytics to make informed intelligent decisions based on current conditions.
According to an aspect of the present invention, the system comprises an interface, an artificial intelligence module, and an intelligent flow framework module. The intelligent flow framework module is communicatively coupled to the interface and the artificial intelligence module. The intelligent flow framework module is configured to define at least one task based on an event and contextual data.
In an embodiment, according to the present invention, the event includes a prompt, message, signal, API call, or a combination thereof.
In an embodiment, according to the present invention, the intelligent flow framework module comprises an active knowledgebase, a contextual unit, and a user profiling database. The contextual unit includes an emotional module, an artificial conscience module, or any other sub-module required for generating the contextual data. The contextual data includes the current state of an actor, environment, actor history, workflow, or a combination thereof.
In an embodiment, according to the present invention, the intelligent flow framework module is configured to generate a task based on an event received from the interface and contextual data retrieved from at least one of the active knowledgebase, the contextual unit, or the user profiling database.
In an embodiment, according to the present invention, the intelligent flow framework module is configured to monitor the current state of the contextual data.
In an embodiment, according to the present invention, the intelligent flow framework module comprises a confidence module and a parameter module.
In an alternative embodiment, according to the present invention, the intelligent flow framework module is configured to define a mission based on the event, the contextual data, or a combination thereof. The intelligent flow framework module is configured to define the at least one task based on the mission, the event, or the contextual data. The at least one task comprises at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof.
In yet another embodiment, according to the present invention, the intelligent flow framework module is configured to define and assign the at least one task for an intelligent flow agent. The intelligent flow agent executes the at least one task assigned by the intelligent flow framework module.
In yet another embodiment, according to the present invention, the intelligent flow framework module is configured to observe the current state of the task assigned to the intelligent flow agent. The intelligent flow framework module is configured to interrupt the execution of the task assigned to the intelligent flow agent based on the event, contextual data, a new task defined by the intelligent flow framework module, or a combination thereof.
In another embodiment, according to the present invention, the intelligent flow framework module comprises network adapters to connect with external devices, sensors, communication devices, agents, machine interfaces, or web services.
In an alternative embodiment, according to the present invention, the intelligent flow framework module is configured to transfer the at least one task to a new intelligent flow agent, a network adapter, an external intelligent flow agent, or distribute the at least one task between multiple intelligent flow agents and network adapters depending upon the event, current state of contextual data, a new task defined by the intelligent flow framework module, or a combination thereof.
In another embodiment, according to the present invention, the intelligent flow agent relays the at least one task, the event, or the contextual data to an artificial intelligence module.
In yet another embodiment, according to the present invention, the artificial intelligence module includes a generative learning model. The generative learning model is any neural network based on a transformer architecture, pre-trained on large datasets of unlabeled text, and able to generate novel human-like text, speech, or visual content.
In an embodiment, according to the present invention, the artificial intelligence module is trained on application-specific workflows or datasets. The intelligent flow framework module comprises an intelligent flow designer to enable an actor to set at least one workflow, a rule engine, an action, or a combination thereof.
According to another aspect of the present invention, the present invention provides a method implemented by an intelligent flow framework module. The method comprises the steps of: a) receiving an event; b) embedding contextual data into the event; c) defining at least one task based on the event and the embedded contextual data; and d) assigning the at least one task to at least one intelligent flow agent; wherein assigning the at least one task includes relaying the task, the event, or the embedded contextual data to an artificial intelligence module.
In an embodiment, according to the present invention, embedding the contextual data includes adding the current state of at least one actor, the environment, actor history, the current workflow, or a combination thereof.
In an embodiment, according to the present invention, the at least one actor is a user, a human, a connector, or a non-human logical structure.
In an alternative embodiment, according to the present invention, the actor is a sensor capturing an environmental or physical metric, wherein the captured metric constitutes the event.
In another embodiment, according to the present invention, receiving an event includes generating the event based on at least one prompt, message, signal, API call or a combination thereof.
In another embodiment, according to the present invention, defining at least one task includes generating at least one action, chain of actions, graph of actions, a prompt, or a combination thereof.
In yet another embodiment, according to the present invention, relaying the task, the event, or the embedded contextual data to an artificial intelligence module comprises a step of receiving an output from the artificial intelligence module. The output comprises at least one action, a chain of actions, a graph of actions, or a combination thereof.
In yet another embodiment, according to the present invention, the method further comprises the steps of: a) receiving an event; b) embedding contextual data into the event; c) defining a mission based on the event and the embedded contextual data; d) determining available actions to complete the mission; e) generating at least one task based on the determined available actions; and f) selecting at least one task to perform and complete the defined mission based on a confidence level related to the determined available actions.
According to another aspect of the present invention, a system comprises a processor and a non-transitory storage element. The processor hosts an intelligent flow framework module. The intelligent flow framework module comprises an intelligent flow agent, an active knowledgebase, and a contextual unit. The non-transitory storage element is coupled to the processor to store encoded instructions. The encoded instructions, when implemented by the processor, configure the system to perform the steps of: a) receiving an event; b) embedding contextual data into the event; c) defining a mission based on the event and the embedded contextual data; and d) determining all available actions to complete the mission.
According to another aspect of the present invention, the present invention provides a method implemented by an intelligent flow framework module. The method comprises the steps of: a) receiving at least one threshold-grade contextual data of the actor; b) generating an event based on the at least one contextual data; and c) relaying the event and the contextual data to a generative learning model for determining at least one task; wherein relaying of the event and the contextual data is routed through an intelligent flow agent.
Various aspects and embodiments of the present invention are better understood by referring to the following detailed description. In order to better appreciate the advantages and objects of the embodiments of the present invention, reference should be made to the accompanying drawings that illustrate these embodiments.
Specific embodiments of the invention will now be described in detail with reference to the accompanying drawings.
The figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. It should also be noted that, in some alternative implementations, the functions noted/illustrated may occur out of order. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Since various possible embodiments might be proposed of the above invention and amendments might be made in the embodiments above set forth, it is to be understood that all matter herein described or shown in the accompanying drawings is to be interpreted as illustrative and not to be considered in a limiting sense. Thus, it will be understood by those skilled in the art that although the preferred and alternate embodiments have been shown and described in accordance with the Patent Statutes, the invention is not limited thereto or thereby.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described, which may be exhibited by some embodiments and not by others. Similarly, various requirements are described, which may be requirements for some embodiments but not all embodiments.
The conventional approach to workflow solutions involves using algorithms to define system behavior, where blocks or steps of the system are connected in a rigid execution sequence with explicit branching conditions. In contrast, the proposed method not only specifies the sequence of flow steps but also allows the model to make an independent choice of which step(s) to perform next. This method is also known as an intelligent workflow. The intelligent workflow is created and edited using a web or mobile interface or by training a specialized generative learning model. The following ‘definition of terms’ section provides exemplary definitions and/or examples of key terms involved in the intelligent flow framework, intelligent workflow, and intelligent agent.
Intelligent Flow Framework Module: A system architecture of networked modules or components for generating tasks, events, or missions based on available actions or events for a generative model or intelligent flow agent to choose at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof.
Intelligent Workflow: A complete set of available actions serving as a basis for defining a task, mission, or event to be relayed to the generative model to choose at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof.
Intelligent Flow Agent: Deployed on the intelligent flow framework module to generate an event or execute a task assigned by the intelligent flow framework module. As part of the intelligent flow framework module, the intelligent flow agent may generate the event and/or make the intelligent choice of at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof.
Intelligent Choice: Choosing at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof to complete a defined task or mission. These terms are interchangeably used in the description.
Actor: Actor is at least one of a user, human, connector, or a non-human logical structure connected by the connector.
Event: Event includes a prompt, message, signal, API call, or a combination thereof.
Connector/Network Adapter: Connector/Network adapter is any device, component, module, network element, or logic enabling the receiving of the event from the actor into the system or transmitting an event, task, mission, at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof to another component or module of the system.
Actions: Actions are functions performed by the actors. An action accepts arguments, performs instructions, produces an event, and/or returns a value or output.
EventQueue: EventQueue is a data structure used in computer programming to manage and process a number of events.
EventHandler: EventHandler processes the events stored in the EventQueue.
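For concreteness, a minimal sketch of how an EventQueue and EventHandler might cooperate is shown below; the class and method names are illustrative assumptions, not part of the claimed system.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    """Minimal event record; real events may carry prompts, messages,
    signals, or API calls."""
    event_type: str
    payload: dict = field(default_factory=dict)

class EventQueue:
    """FIFO structure buffering incoming events until they are handled."""
    def __init__(self):
        self._events = deque()

    def push(self, event: Event) -> None:
        self._events.append(event)

    def pop(self) -> Event | None:
        return self._events.popleft() if self._events else None

class EventHandler:
    """Drains the EventQueue, dispatching each event to a registered callback."""
    def __init__(self, queue: EventQueue):
        self._queue = queue
        self._handlers: dict[str, Callable[[Event], None]] = {}

    def register(self, event_type: str, handler: Callable[[Event], None]) -> None:
        self._handlers[event_type] = handler

    def run_once(self) -> None:
        while (event := self._queue.pop()) is not None:
            handler = self._handlers.get(event.event_type)
            if handler:
                handler(event)
```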
Generative Model: A generative model is a neural network based on transformer architecture that is pre-trained on large datasets of unlabeled text and capable of generating novel human-like text, speech, and visual content. Examples include, but are not limited to, large language models (LLMs), text-to-music, text-to-voice, generative pre-trained transformer 4 (GPT-4), bidirectional encoder representations from transformers (BERT), embeddings from language models (ELMo), and DALL-E.
Prompt: Prompt is an input to the system by the actor or generated based on the determined available actions to be relayed to a generative model to fulfill the mission related to the actor and the event.
Memory Management Module: The memory management module includes the active knowledgebase, long-term memory consolidation (LMC), short-term memory consolidation (SMC), short-term memory, long-term memory, contextual units, confidence modules, and parameter modules.
Artificial Consciousness Module: Interoperation of intelligent flow agents or intelligent flow sub-agents.
Emotion Module: The emotion module includes emotion detection and determination based on the contextual data, event, actor's history, or any other data point relevant to determining emotions involved in any event, transaction, or mission executed by the system of the present invention.
Intelligent Flow Designer: Intelligent flow designer is a user interface enabled in the system to define workflows for different missions, events, profiles, or playground environments.
Mission: A complex set of actions that uses intelligent flow/choice and provides an output or desired action/goal.
The interface 102 receives an event from an actor. Alternatively, the interface 102 generates an event. The event includes but is not limited to a prompt, captured metric, message, signal, API call, or a combination thereof. The actor is at least one of a user, a human, or a non-human logical structure. Alternatively, the actor is at least one sensor capturing an environmental or physical metric. The interface 102 includes user devices, mobile applications, input/output devices, sensor networks, or web services. In one scenario, the user devices are further connected with industry experts. The mobile applications include but are not limited to chatbot applications. In one example, the mobile application is the “Google Smart Home App”. The input devices include keyboards, mice, scanners, cameras, joysticks, or microphones. The output devices include loudspeakers, smartphones, display devices, or a signal sent to a connected device for execution. The display devices include a liquid crystal display (LCD), a light-emitting diode (LED) screen, an organic light-emitting diode (OLED) screen, or another display device. The sensor network includes a temperature sensor, a proximity sensor, a pressure sensor, an infrared sensor, a motion sensor, an accelerometer sensor, a gyroscope sensor, a smoke sensor, a chemical sensor, a gas sensor, an optical sensor, a light sensor, an air quality sensor, an audio sensor, a contact sensor, a carbon monoxide detection sensor, a camera, a biomedical sensor, a level sensor, an ultrasonic sensor, a biometric sensor, an electric current sensor, a flow sensor, a humidity sensor, a fire detection sensor, a pulse sensor, a blood pressure sensor, an electrocardiogram (ECG) sensor, a blood oxygen sensor, a skin electrical sensor, an electromyographic sensor, an electroencephalogram (EEG) sensor, a fatigue sensor, a voice detector, or a combination thereof to receive inputs and events at the interface 102 effectively. The web services are network connections of the system 100 of the present invention with an external server network to receive and send information to complete the present invention's functionality. Exemplary web services include connecting to a financial institution transaction system, a telephone line connected with external consultants, or any other services available through web portals.
The intelligent flow framework module 104 is communicatively coupled to the interface 102 and the artificial intelligence module 106. The intelligent flow framework module 104 receives the event from the interface 102 and processes the received event. Further, the intelligent flow framework module 104 generates a task based on the event and contextual data. Alternatively, the intelligent flow framework module 104 defines a mission based on the event, the contextual data, or a combination thereof. Further, the intelligent flow framework module 104 is configured to define the at least one task based on the mission, the event, or the contextual data. The contextual data is received through a contextual unit (not shown) of the intelligent flow framework module 104. The contextual data includes the current state of an actor, environment, actor history, workflow, or a combination thereof. The contextual data is retrieved from at least one of the active knowledgebase, the contextual unit, or a user profiling database. The at least one task comprises at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof. The intelligent flow framework module 104 comprises network adapters to connect with external devices, sensors, communication devices, agents, machine interfaces, or web services. The intelligent flow framework module 104 defines the at least one task for an intelligent flow agent (not shown). The intelligent flow agent executes the at least one task based on the workflow provided by the intelligent flow framework module 104 or selects a workflow that is suitable for completing the task. Alternatively, the intelligent flow agent relays the at least one task, the event, or the contextual data to the artificial intelligence module 106.
The artificial intelligence module 106 includes a generative learning model. The generative learning model is any neural network based on a transformer architecture, pre-trained on large datasets of unlabeled text, and able to generate novel human-like text, speech, or visual content. The artificial intelligence module 106 is trained on application-specific workflows or datasets. The artificial intelligence module 106 executes the at least one task or transfers the task to any other connected component or module of the system.
The interface 202 receives an event that includes a prompt, message, signal, API call, or a combination thereof. The event is generated by an actor. Alternatively, the interface 202 generates an event. The actor is at least one of a user, a human, or a non-human logical structure. The interface 202 includes but is not limited to user devices 202-1, mobile applications 202-2, input/output devices 202-3, sensor networks 202-4, or web services 202-5. In one scenario, the user devices 202-1 are further connected with industry experts. The mobile application 202-2 includes but is not limited to chatbot applications. In one example, the mobile application 202-2 is the “Google Smart Home App”.
The intelligent flow framework module 204 is communicatively coupled to the interface 202 and the artificial intelligence module 206. The intelligent flow framework module 204 receives the event from the interface 202 and processes the received event.
The intelligent flow framework module 204 comprises an active knowledgebase 204-1, a contextual unit 204-2, a user profiling database 204-3, a confidence module 204-4, a parameter module 204-5, an intelligent flow agent 204-6, a network adapter 204-7, an intelligent flow designer 204-8, and an interrupt module 204-9.
The active knowledgebase 204-1 includes pre-stored values related to the event, such as event summary, event facts, event parameters, event variables, and previously executed event commands. The active knowledgebase 204-1 further includes at least one timestamp, confidence level, source code, or identification of the actor reporting the information. The active knowledgebase 204-1 includes but is not limited to at least one of a task ID, a task code name, a task summary, task facts, and task identifiers, discussed in detail below.
The interface 302 receives an event from an actor. The event and actor are discussed in detail above.
The act log module 304 is a database of all events, including messages from actors, sensor readings, and other connector events. The act log module 304 includes a table with the following fields: an event ID, an actor ID, a recipient ID, a UTC timestamp, an event time zone, a source ID, an event type, an original content, a derived content, a unified content, a confidence level, and a consolidated date, as shown below:
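Since the table layout is easier to grasp as a record structure, the following sketch models one act-log row as a Python dataclass; the field names are taken from the list above, while the types are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, date

@dataclass
class ActLogEntry:
    """One row of the act log table; field names follow the list above,
    while the types are illustrative assumptions."""
    event_id: int
    actor_id: int
    recipient_id: int
    utc_timestamp: datetime        # event time in UTC
    event_time_zone: str           # e.g. "Europe/Berlin"
    source_id: int
    event_type: str                # message, sensor reading, connector event, ...
    original_content: str          # content as originally received
    derived_content: str           # content after processing (e.g. transcription)
    unified_content: str           # normalized form used downstream
    confidence_level: int          # 0-100, assigned by the confidence module
    consolidated_date: date | None = None   # set once the entry is consolidated
```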
The short-term memory 306 allows the system to keep track of the current context and meaning of a conversation and to integrate newly received information. The short-term memory 306 begins with the act log module 304 of the interaction between the actor and an intelligent flow agent (discussed in detail below).
The short-term memory 306, the long-term memory 310, and the contextual unit 204-2 (discussed in detail below) together form part of the memory management module.
The short-term memory consolidation 308 (the long-term memory construction algorithm) and the long-term memory 310 functions are implemented through the consolidation mechanism, i.e., extracting facts from and summarizing the short-term memory 306 and placing the data in a structured form in the active knowledgebase (312, 316). Further, each fact is assigned a timestamp of the consolidation time, a user ID or IA ID, and a confidence value obtained from the confidence module. The intelligent agent consolidation engine starts during the lowest server load period.
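A minimal sketch of this consolidation mechanism is given below; `extract_facts`, `confidence_module.score`, and `knowledgebase.add` are hypothetical hooks, and in practice the extraction and summarization would be backed by the generative learning model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Fact:
    text: str
    timestamp: datetime   # time of consolidation
    actor_id: int         # user ID or IA ID
    confidence: int       # 0-100, obtained from the confidence module

def consolidate(short_term_entries, extract_facts, confidence_module, actor_id, knowledgebase):
    """Summarize short-term memory into structured facts and place them
    in the active knowledgebase."""
    now = datetime.now(timezone.utc)
    for entry in short_term_entries:
        for fact_text in extract_facts(entry):
            knowledgebase.add(Fact(
                text=fact_text,
                timestamp=now,
                actor_id=actor_id,
                confidence=confidence_module.score(fact_text, entry),
            ))
```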
The long-term memory consolidation 314 implements the “forgetting” algorithm, a protection against overflow with obsolete and already irrelevant facts that is necessary to keep the intelligent agent (IA) constantly focused on more relevant and important facts.
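One plausible reading of the “forgetting” algorithm is an age- and confidence-based pruning pass, sketched below with illustrative thresholds and a hypothetical knowledgebase interface.

```python
from datetime import datetime, timedelta, timezone

def forget(knowledgebase, max_age_days=365, min_confidence=20):
    """Prune facts that are both old and low-confidence so the intelligent
    agent stays focused on relevant information. The thresholds and the
    `facts`/`remove` interface are illustrative assumptions."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    for fact in list(knowledgebase.facts()):
        if fact.timestamp < cutoff and fact.confidence < min_confidence:
            knowledgebase.remove(fact)
```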
The active knowledgebase (AKB) module (312, 316): The active knowledgebase (AKB) module (312, 316) allows the system to specify how an intelligent agent (IA) should answer certain questions. The AKB table can contain at least five fields, including a task ID, a task code name, a task summary, task facts, and task identifiers, as shown below:
The entries in the AKB table may carry additional information: [T:2019-08-02T08:31:25Z] is a timestamp of when the entry was made; it makes it possible, in case of conflicting information, to give priority to later events, tasks, or facts. [C:100] is the confidence in the given event, task, or fact; events, tasks, or facts added to the system by trusted sources are marked with higher confidence values, whereas those the intelligent agent (IA) receives from low-ranking users receive lower values. [S:232] is the source code or ID of the actor who reported the information.
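These bracketed annotations follow a regular pattern, so they can be parsed mechanically; the following sketch shows one way to do so, using the example values from the description.

```python
import re
from datetime import datetime

# Matches annotations of the form [T:...], [C:...], and [S:...] in an AKB entry.
_ANNOTATION = re.compile(r"\[([TCS]):([^\]]+)\]")

def parse_annotations(entry: str) -> dict:
    """Extract the timestamp (T), confidence (C), and source ID (S) fields."""
    result = {}
    for key, value in _ANNOTATION.findall(entry):
        if key == "T":
            result["timestamp"] = datetime.fromisoformat(value.replace("Z", "+00:00"))
        elif key == "C":
            result["confidence"] = int(value)
        elif key == "S":
            result["source_id"] = int(value)
    return result

# Using the example values from the description:
# parse_annotations("[T:2019-08-02T08:31:25Z] [C:100] [S:232] ...")
# -> {"timestamp": datetime(2019, 8, 2, 8, 31, 25, tzinfo=timezone.utc),
#     "confidence": 100, "source_id": 232}
```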
The algorithm for calling the active knowledgebase (AKB) module (312, 316) is illustrated in the accompanying drawings.
Calls to variables, commands, and workflows may be embedded in the topic summary and topic facts; this knowledge structure is therefore called the active knowledgebase module (312, 316). FIG. 3(E) illustrates an example of a workflow that is called if topic 2347 (light commands) is detected (see the AKB module (312, 316) table example).
The AKB module (312, 316) table example
The contextual unit 204-2 generates the contextual data. The contextual data is additional data required in addition to the event for generating a task or mission. The contextual data includes the current state of an actor, environment, actor history, workflow, or a combination thereof.
The structure and functionality of the contextual unit 204-2 are discussed in detail below.
The emotional module 402-1 stores a complete history of the emotional state of the actor and the corresponding responses. Further, the emotional module 402-1 receives the current state of the actor from the interface 202. The emotional module 402-1 collects data in real time to determine the current emotional state of the actor. The data for determining the emotional state can be derived, using artificial intelligence, from the communication between the actor and the system 200, the actor profile, environment detection, voice properties, camera input, or other sensor inputs such as blood pressure and temperature. The emotional module 402-1 includes a voice recognition module 402-11 to collect speaker-dependent and speaker-independent variables from the audio signals. The speaker-independent variables include language and words, whereas the speaker-dependent variables include pitch, tone, pronunciation, or other speaker-specific acoustic features.
The artificial conscience module 402-2 enables the intelligent flow agent 204-6 (explained below) to achieve self-awareness through continuous interaction with two or more independent intelligent flow agents, each exhibiting independent behavioral properties.
The composition of the contextual unit 204-2 is not limited to the emotional module 402-1 or the artificial conscience module 402-2. The contextual unit 204-2 may include additional modules (402-3 . . . 402-n) required for generating the contextual data. The additional modules (402-3 . . . 402-n) may include network adapters to receive data over the network, processors to compute data using multi-source sensor data, or memories that enable the contextual unit 204-2 to receive or transmit, process, and store the contextual data.
The user profiling database 204-3 stores a predefined list of actor profiles. Each actor's profile includes but is not limited to name, age, gender, weight, skin tone, height, fingerprints, facial recognition, voice patterns, iris recognition, hair follicles, or a combination thereof. Each actor's profile is linked and stored with a unique identifier. The actor may manually add a new user profile for a new actor by selecting the “add option” displayed on the interface 202. Alternatively, the system 200 may automatically generate a notification after identifying a new actor using a camera. For example, the smart home system identifies a new actor or person ringing the doorbell using a camera. The smart home system automatically transmits a notification for approval to the owner of the home. After receiving the approval, the smart home system asks the actor a list of questions to complete the user profile. The smart home system allows the actor to access the home after completing the user profile and sends an “access granted” message to the owner of the home. If the owner of the home rejects the approval notification, the smart home system denies access to the actor. In an alternative scenario, the actor is an autonomous vehicle, and the system 200 collects information from different sensors implemented in the autonomous vehicle through the sensor network. The profile of the actor is then created automatically or manually based on the parameters that are relevant to recognize the actor.
The intelligent flow framework module 204 generates a task based on the event received from the interface 202 and contextual data retrieved from at least one of the active knowledgebase 204-1, the contextual unit 204-2, or the user profiling database 204-3. Alternatively, the intelligent flow framework module 204 defines a mission based on the event, the contextual data, or a combination thereof. The intelligent flow framework module 204 defines the at least one task based on the mission, the event, or the contextual data. The at least one task comprises at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof. In one scenario, the mission of the intelligent flow framework module 204 is to act as a customer service agent by resolving the customer issue. Alternatively, in another case, the intelligent flow framework module 204 acts as a healthcare specialist or doctor's assistant.
The confidence module 204-4 assigns a confidence level to each input received from the actor or each task defined based on the mission assigned to the intelligent flow framework module 204. In one example, the confidence level ranges from 0 to 100. For example, the confidence module 204-4 ranks each selected workflow based on the mission, the event, and the accuracy or source of each contextual data point. In one scenario, the source of the contextual data is a biometric database, which provides highly reliable and accurate information. Alternatively, the confidence module 204-4 may use external sources to provide additional information for generating confidence levels. For example, the intelligent flow framework module 204 is on a mission to provide health advisories and has sufficient data on the history of a first actor, such as his medical record and disease history, whereas there is no information for a second actor. The confidence module 204-4 will assign higher confidence to the task defined for the first actor than to the task defined for the second actor. The above example is illustrative and shall not be considered a limiting way of assigning confidence levels. The objective of the confidence module 204-4 is to determine the confidence related to different tasks executed or assigned using the system 200.
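A toy scoring rule along these lines is sketched below; the weights, inputs, and function name are illustrative assumptions rather than a prescribed formula.

```python
def assign_confidence(actor_history, source_rank, base=50):
    """Toy confidence rule on a 0-100 scale: richer actor history and a
    higher-ranked source (e.g. a biometric database) raise the score.
    The weights and inputs are illustrative assumptions."""
    score = base
    score += min(len(actor_history), 10) * 3   # up to +30 for a well-known actor
    score += source_rank * 2                   # trusted sources rank higher
    return max(0, min(100, score))

# A first actor with a rich medical history scores higher than an unknown one:
# assign_confidence(actor_history=ten_records, source_rank=5)  -> 90
# assign_confidence(actor_history=[], source_rank=0)           -> 50
```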
The parameter module 204-5 stores a list of global parameters and actor-specific parameters. The global parameters include but are not limited to parameters related to the event, current date, and time of each input from interface 202, sensor reading received from the sensor networks 202-4, or a combination thereof. For example, the sensor reading includes but is not limited to the temperature of each room in the smart home system and the name of the frequently or last played playlist. The actor-specific parameters include but are not limited to a level of importance of an actor received from the confidence module 204-4.
The intelligent flow agent 204-6 executes the at least one task defined or assigned by the intelligent flow framework module 204. The intelligent flow agent 204-6 utilizes the table containing at least one of the task ID, the task code name, the task summary, task facts, and task identifiers from the active knowledgebase 204-1 to answer the questions defined in the at least one task. The intelligent flow agent 204-6 may follow different workflows that include at least one of an active journaling assistant, an active therapist, a coach, a consultant, a support assistant, a sales representative, a video surveillance or security guard, or an active companion. The intelligent flow agent 204-6 may be used in various industries, for example, therapy, sports and health coaching, education, healthcare, security and home surveillance, autonomous vehicles, robots, smart home systems, technical support and customer support, hospitality, sales and marketing, or supply chain and logistics.
Therapy: The intelligent flow agent 204-6 may provide support for mental health by acting as virtual therapists. The intelligent flow agent 204-6 may provide emotional support, cognitive behavioral therapy, and personalized recommendations based on individual needs.
Sports and health coaching: The intelligent flow agent 204-6 may be used in the sports and health industry to provide personalized coaching and training plans based on individual goals and needs.
Education: The intelligent flow agent 204-6 may be used in education to provide personalized learning experiences, help with homework, and provide feedback and guidance to students.
Healthcare: The intelligent flow agent 204-6 may be used in the healthcare industry to provide personalized health monitoring, medication reminders, and support for patients with chronic conditions.
Security and home surveillance: The intelligent flow agent 204-6 may be used in the security and home surveillance industry to monitor homes, alert homeowners of suspicious activity, and control smart home devices.
Autonomous vehicles: The intelligent flow agent 204-6 may be used in the automotive industry to control self-driving vehicles and provide real-time information to drivers.
Robots and robodogs: The intelligent flow agent 204-6 may be used in the manufacturing industry to control robots on assembly lines or in the form of robodogs to assist with tasks like search and rescue or assistance for those with disabilities.
Smart home systems: The intelligent flow agent 204-6 may be used in the home automation industry to control and optimize smart home devices like thermostats, lighting, and appliances.
Technical support and customer support: The intelligent flow agent 204-6 may be used in technical support and customer support to provide automated solutions to common problems and answer frequently asked questions.
Hospitality, sales, and marketing: The intelligent flow agent 204-6 may be used in the hospitality, sales, and marketing industries to provide personalized recommendations and customer support.
Supply chain and logistics: The intelligent flow agent 204-6 may be used in the supply chain and logistics industry to optimize operations, track inventory, and provide real-time updates on shipment status.
The intelligent flow agent 204-6 provides personalized solutions, real-time updates, and automated support to improve efficiency and effectiveness in various domains. The intelligent flow agent 204-6 is deployed on the intelligent flow framework module 204. Alternatively, the intelligent flow agent 204-6 may be deployed on the artificial intelligence model 206.
The intelligent flow agent 204-6 may include multiple intelligent agents, as shown in the accompanying drawings.
In one example, the intelligent flow agent 204-6 is an active journaling assistant (AJA). The table shown below is an active log diagram of intelligent flow agent 204-6.
The log diagram is based on the conversations between the AJA and Alexei on Apr. 30, 2023, between 5 and 5:30 PM. As per the log summary, Alexei requested a 15-minute delay due to work obligations but was able to participate in the journaling session. AJA discussed Alexei's day, personal stories, and emotions, and made note of his responses for later use. AJA agreed to continue the sessions daily at 5 PM and to discuss Alexei's goals for the next few months in the next session. The conversation between AJA and Alexei is as follows:
In one exemplary scenario, the AJA may have at least one of, but not be limited to, functions: 1. Assist in journaling by prompting the actor with questions and suggestions for reflection; 2. Help the actor set and track goals related to their journaling practice; 3. Provide personalized feedback and insights based on the actor's journal entries; 4. Evaluate, record, and offer resources and exercises to help the actor improve their mental and emotional well-being; 5. Protect the actor's privacy and maintain confidentiality of their journal entries; 6. Create a report/log/journal and send it back to the actor; 7. Schedule interviews; 8. Conduct interviews over the phone; and 9. Send physical and virtual gifts.
In another scenario, the intelligent flow agent 204-6 relays at least one task, the event, or the contextual data to an artificial intelligence module 206. For example, Alexei asked the system how he could take care of his health after a challenging situation he faced at work. The intelligent flow agent 204-6 relays the task to generative AI for collecting information related to similar situations faced by other individuals and the actions taken by them.
The network adapter 204-7 enables the intelligent flow framework module 204 to connect with external devices, sensors, communication devices, agents, machine interfaces, or web services. The network adapter 204-7 supports USB, Ethernet, wired, Wi-Fi, telecommunication, or a combination thereof. The network adapter 204-7 may be coupled with another communication interface. The communication interface may support any number of suitable wireless data communication protocols, techniques, or methodologies, including radio frequency (RF), infrared (IrDA), Bluetooth, Zigbee (and other variants of the IEEE 802.15 protocol), wireless fidelity (Wi-Fi) or IEEE 802.11 (any variation), IEEE 802.16 (WiMAX or any other variation), direct sequence spread spectrum (DSSS), frequency hopping spread spectrum (FHSS), global system for mobile communication (GSM), general packet radio service (GPRS), enhanced data rates for GSM evolution (EDGE), long term evolution (LTE), cellular protocols (2G, 2.5G, 2.75G, 3G, 4G, or 5G), near field communication (NFC), satellite data communication protocols, or any other protocols for wireless communication.
The network adapter 204-7 may include multiple network adapters, as shown in the accompanying drawings.
The intelligent flow designer 204-8 includes an intelligent flow editor to enable an actor to set at least one workflow, a rule engine, an action, a chain of actions, or a combination thereof. Thus, the intelligent flow designer 204-8 assists in creating an intelligent flow design. Alternatively, the artificial intelligence module 206 may also be used to create an intelligent flow design automatically based on the learning data of the system 200. The intelligent flow designer 204-8 enables the actor to create or generate at least one workflow, a rule engine, an action, a chain of actions, or a combination thereof manually or automatically based on the event, mission, contextual data, task, or a combination thereof.
The intelligent flow designer 204-8 creates or generates an intelligent flow based on the self-generated chain of steps, and the intelligent flow agent 204-6 has the ability not only to follow a pre-defined flow (even with the help of intelligent choice) but also to achieve the goal in cases where the pre-defined flow comes to a standstill. Prompt engineering is further implemented to auto-generate prompts based on the contextual data, mission, and events to form a sequence of steps to achieve the goal.
For example, the user Jane gives the following voice command to her smart home: “Ask Alex if he will have dinner.”
The pre-defined flow of a smart home looks like this:
However, in reality, Alex did not answer, and the program got stuck on step 3 of this algorithm. To obtain a self-generated workflow, a request to the LLM can be used with the following content:
Here are the step-by-step instructions on how the system can perform the task using the specified functional modules:
I executed steps 1, 2, and 3, but I didn't get any response. What else should I do?” The LLM's answer would be a self-generated chain of steps:
The indicated steps are then fed to the AKB input, and the commands that are closest in description are executed in the required sequence.
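Putting these pieces together, a hedged sketch of the stall-recovery loop might look as follows, assuming hypothetical `llm.complete` and `akb.closest_command` interfaces that return a completion string and the description-closest AKB command, respectively.

```python
def self_generate_workflow(llm, akb, executed_steps, goal):
    """Recover from a stalled pre-defined flow: ask the generative model for
    a new chain of steps, then execute the closest-matching AKB commands.
    `llm.complete` and `akb.closest_command` are hypothetical interfaces."""
    prompt = (
        f"Goal: {goal}\n"
        f"I executed steps {', '.join(map(str, executed_steps))}, "
        "but I didn't get any response. What else should I do?"
    )
    reply = llm.complete(prompt)
    steps = [line.strip() for line in reply.splitlines() if line.strip()]
    for step in steps:
        command = akb.closest_command(step)  # description-similarity lookup
        if command is not None:
            command.execute()                # run commands in the required sequence
```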
The interrupt module 204-9 enables the intelligent flow framework module 204 to interrupt the execution of the task assigned to the intelligent flow agent 204-6 based on the event, contextual data, a new task defined by the intelligent flow framework module 204, or a combination thereof. The interrupt is generated by observing the current state of the task assigned to the intelligent flow agent 204-6. Alternatively, the interrupt may be generated based on the priority of the event, mission, or task pending in the queue.
In one example, the smart home system is monitoring washing machine cycles or timers. The timer is about to expire, and the smart home system must generate an alarm based on the timer expiration. At the same time, the doorbell rings, and the camera captures a person with a hammer in his hand. In this scenario, the smart home system halts the generation of the timer alarm and generates a threat detection warning.
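One simple way to realize such preemption is a priority queue in which higher-priority events are dispatched first; the sketch below is an illustrative assumption, with lower numbers running first.

```python
import heapq

class InterruptingQueue:
    """Priority queue in which a high-priority event preempts lower-priority
    pending work. Priorities are illustrative: lower numbers run first."""
    def __init__(self):
        self._heap = []
        self._counter = 0   # tie-breaker preserving insertion order

    def submit(self, priority: int, task) -> None:
        heapq.heappush(self._heap, (priority, self._counter, task))
        self._counter += 1

    def run_next(self) -> None:
        if self._heap:
            _, _, task = heapq.heappop(self._heap)
            task()

# queue.submit(priority=5, task=generate_timer_alarm)
# queue.submit(priority=0, task=generate_threat_warning)  # dispatched first,
# deferring the washing-machine alarm until the threat is handled
```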
The artificial intelligence module 206 includes a generative learning model 206-1 and other learning models (206-2, ..., 206-n) required for the execution of the task. The artificial intelligence module 206 is trained on application-specific workflows or datasets. The generative learning model 206-1 is any neural network based on a transformer architecture, pre-trained on large datasets of unlabeled text, and able to generate novel human-like text, speech, or visual content. The generative learning model 206-1 includes a large language model 206-11. The large language model 206-11 is trained to generate intelligent workflows, intelligent choices, or a combination thereof. The large language model 206-11 provides the intelligent flow framework module 204 with the ability to adapt quickly to changing circumstances and make intelligent decisions to ensure the successful completion of missions/objectives. The artificial intelligence module 206 receives relayed tasks from the intelligent flow framework module 204 through the intelligent flow agent 204-6 or a network adapter 204-7. The artificial intelligence module 206 utilizes the generative learning model 206-1 to choose the best course of action based on its output. The artificial intelligence module 206 may include a memory to store a list of tasks and a corresponding set of actions.
In one example, John's smart home system is designed to provide an intelligent workflow for all aspects of the home. One day, John arrived home from work and noticed the smart home system had detected a water leak in the basement. The intelligent flow framework module of John's smart home system immediately observed the current state of the actors relative to the identified mission, which was to address the water leak. The intelligent flow framework module relayed the information to the artificial intelligence module. The system determined all available actions to fulfill the mission, including shutting off the water supply to the house and scheduling appointments for the following day. The system also identified solutions to set up monitoring of the water levels and prevent future leaks. The system shut off the water supply to the house and sent an alert to John's phone, notifying him of the situation. A plumber was also contacted, and a monitoring system was set up to track the water levels and prevent future leaks.
With the help of the intelligent process workflow of John's smart home system, the water leak was addressed quickly and efficiently. The system's ability to perceive the event, observe the current state, determine available actions, relay actions to a generative learning model, and choose the best course of action based on the model's output helped John prevent a potential disaster and keep his home safe and secure. Thus, the intelligent flow framework (IFF) module leverages the capabilities of the generative learning model or LLM to rapidly adapt to changing circumstances and make intelligent choices to achieve objectives successfully.
The intelligent process workflow further comprises intelligent choices. The intelligent choice determines the desired actions further based on priority. Moreover, in one example, the choice of at least one of the next actions, chain of actions, or graph of actions to complete the defined mission is based on priority and confidence, as determined by at least one of the user, actor, event, local and/or global environment, or active knowledgebase (a consolidation of short- and long-term memory).
Alternatively, the intelligent process workflow method further comprises the step of self-generating at least one of an action, chain of actions, or graph of actions. Additionally, the intelligent workflow method further comprises the step of adapting the intelligent workflow based on a third-party integration via the network adapter 204-7.
The intelligent flow framework module 204 or the artificial intelligence module 206 may be integrated into one module or may be independent units.
During operation, in one example, the intelligent flow framework module 204 receives the event from the interface 202 and the contextual data from the contextual unit 204-2. The intelligent flow framework module 204 embeds the contextual data in the event. The intelligent flow framework module 204 defines at least one task based on the event and the embedded contextual data. The intelligent flow framework module 204 assigns at least one task to at least one intelligent flow agent 204-6. The intelligent flow agent 204-6 executes the at least one task, including relaying the task, the event, or the embedded contextual data to the artificial intelligence module 206 to receive an output. The output comprises at least one action, a chain of actions, a graph of actions, or a combination thereof. The output indicates the execution of the task. The output is transmitted back to the interface 202, which displays the output to the actor.
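This first operating example can be summarized as a short pipeline; the sketch below mirrors the module names of the description, but every method it calls is a hypothetical interface introduced only for illustration.

```python
def handle_event(event, contextual_unit, framework, agent, ai_module, interface):
    """End-to-end sketch of the first operating example. The collaborator
    names mirror the modules of the description, but every method called
    here is a hypothetical interface, not a defined API."""
    context = contextual_unit.current_context(event)      # contextual data
    enriched = framework.embed_context(event, context)    # embed context into the event
    task = framework.define_task(enriched)                # at least one task
    agent.assign(task)                                    # assign to the intelligent flow agent
    output = ai_module.execute(task, enriched)            # action, chain, or graph of actions
    interface.display(output)                             # output returned to the actor
    return output
```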
In the second example, the intelligent flow framework module 204 receives the event from the interface 202 and the contextual data from the contextual unit 204-2. The intelligent flow framework module 204 embeds the contextual data into the event. The intelligent flow framework module 204 defines a mission based on the event and the embedded contextual data. The intelligent flow framework module 204 or the artificial intelligence module 206 determines available actions to complete the mission. The intelligent flow framework module 204 or the artificial intelligence module 206 generates at least one task based on the determined available actions. The intelligent flow framework module 204 or the artificial intelligence module 206 selects the at least one task to perform and complete the defined mission based on a confidence level related to the determined available actions.
In the third example, the intelligent flow framework module 204 receives at least one threshold-grade contextual data of the actor from the contextual unit 204-2. The contextual unit 204-2 compares the contextual data with a predefined threshold. Alternatively, the intelligent flow agent 204-6 can assist the intelligent flow framework module 204 in determining threshold-grade contextual data. The intelligent flow framework module 204 generates an event based on the at least one contextual data above the threshold. Further, the intelligent flow framework module 204 relays the event and the contextual data to the generative learning model 206-1 of the artificial intelligence module 206. The generative learning model 206-1 determines at least one task based on the at least one event and the contextual data stored in the memory. The intelligent flow framework module 204 relays the event and the contextual data to the generative learning model 206-1 through the intelligent flow agent 204-6.
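A minimal sketch of this threshold-gated event generation is shown below; the method names (`readings`, `generate_event`, `relay`, `schedule`) are assumptions introduced only for illustration.

```python
def poll_contextual_data(contextual_unit, thresholds, framework, agent, model):
    """Generate an event only for threshold-grade contextual data, then relay
    the event and context to the generative learning model via the agent.
    All method names are assumptions introduced for illustration."""
    for name, value in contextual_unit.readings().items():
        threshold = thresholds.get(name)
        if threshold is not None and value > threshold:   # threshold-grade reading
            event = framework.generate_event(name, value)
            # The relay is routed through the intelligent flow agent.
            task = agent.relay(event, contextual_unit.snapshot(), model)
            framework.schedule(task)
```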
The event 702 includes multiple events (702-1, 702-2, 702-3, 702-4, and 702-5) received using an interface. The event includes but is not limited to a prompt, message, signal, API call, or a combination thereof. The event is generated by an actor. The actor is at least one of a user, a human, or a non-human logical structure. The interface includes but is not limited to user devices, mobile applications, input/output devices, a sensor network, or web services.
The interface is communicatively coupled to the intelligent flow framework module, which is further communicatively coupled with the artificial intelligence module. The interface, the intelligent flow framework module, and the artificial intelligence module may be integrated as a single component to form a system 700. The system 700 receives an event, selects a workflow, and selects a corresponding action. The event is embedded with contextual data received from a contextual unit. The system 700 generates a mission or a task based on the event and the embedded contextual data. The system 700 contains a pre-stored workflow 704 either on the intelligent flow framework module or the artificial intelligence module.
The workflow 704 includes workflow-1 704-1, workflow-2 704-2, and workflow-3 704-3 for different profiles and personas to complete the mission or task assigned by the system 700. The workflow 704 defines a sequence of steps required for executing the event, the contextual data, the mission, or the task. Different workflows (704-1, 704-2, and 704-3) have different sequences of steps and different resource requirements. The system 700 of the present invention autonomously determines the resource requirement and selects a workflow accordingly. Based on the event, contextual data, or task, the system 700, the intelligent flow framework module, or the artificial intelligence module automatically selects a suitable workflow. Alternatively, the system 700, the intelligent flow framework module, or the artificial intelligence module may select more than one workflow based on the complexity of the event, the contextual data, the mission, or the task execution. The system 700, the intelligent flow framework module, or the artificial intelligence module may perform an intelligent choice of workflows based on the priority and confidence level of each workflow. The priority is either defined manually by the actor or set by the system 700 using an interrupt signal based on a changed environment that includes updated contextual data. The intelligent choice is also based on execution time, resource usage, and the history of success and failure.
The workflow 704 is connected to the action 706. The action 706 includes connections with the interfaces, including network adapters 706-1, user devices 706-2, an artificial intelligence module 706-3, web services 706-4, and a sensor network 706-5 to complete the task or mission. The workflow 704 may be connected to the network adapters 706-1 to execute the mission or task. Alternatively, the workflow 704 may be connected to the user devices 706-2, the artificial intelligence module 706-3, the web services 706-4, the sensor network 706-5, or a combination thereof to execute the mission or task. The system 700 selects the type of action based on the complexity of the mission or the task. Alternatively, the system 700 may perform the intelligent choice of the type of action based on priority and confidence in different actions or chains of actions. Similar to workflow selection, priority is either defined manually by the actor or set using an interrupt signal. The intelligent choice is also based on execution time, resource usage, and the history of success and failure.
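One illustrative way to combine priority, confidence, execution time, resource usage, and success history into a single intelligent choice is a weighted score, sketched below; the weights are assumptions, not a prescribed formula.

```python
def intelligent_choice(candidates, history):
    """Rank candidate workflows or actions by priority, confidence, expected
    execution time, resource usage, and past success rate. The weights are
    illustrative assumptions, not a prescribed formula."""
    def score(candidate):
        record = history.get(candidate.name, {"success": 0, "failure": 0})
        total = record["success"] + record["failure"]
        success_rate = record["success"] / total if total else 0.5  # neutral prior
        return (
            2.0 * candidate.priority
            + 1.5 * candidate.confidence / 100      # confidence on a 0-100 scale
            + 1.0 * success_rate
            - 0.5 * candidate.expected_time
            - 0.5 * candidate.resource_usage
        )
    return max(candidates, key=score)
```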
In one example, the action is activation of the network adapters 706-1. The network adapters 706-1 enable the connectivity of the at least one workflow to third parties for task execution using API calls or any other mechanism.
In the second example, the action is the user devices 706-2. The user devices 706-2 are operated by consultants or advisors to complete the at least one workflow by answering the actor's real-time queries for completing the task execution.
In the third example, the action is the artificial intelligence module 706-3 for automatically executing the task using the predefined set of actions corresponding to at least one workflow.
In the fourth example, the action is the web services 706-4. The web services 706-4 include, but are not limited to, a financial institution server that is initiated to complete a financial transaction. For example, the workflow may include an automatic payment to a plumber after the completion of the task.
In the fifth example, the action is the sensor network 706-5 for automatically executing the task based on the selected at least one workflow. Examples include regulating a room's temperature by comparing it with a threshold, or switching off the water supply when a leak is detected.
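As a non-limiting illustration, the five action types 706-1 through 706-5 could be organized as a dispatch table. The handler names below are placeholders, not a prescribed API, and a chain or graph of actions would simply invoke the dispatcher repeatedly in the order the workflow prescribes.

```python
# Placeholder handlers for actions 706-1 through 706-5 (illustrative only).
def call_third_party_api(task): ...    # 706-1: network adapters (API calls)
def route_to_consultant(task): ...     # 706-2: user devices operated by advisors
def run_ai_module(task): ...           # 706-3: artificial intelligence module
def invoke_web_service(task): ...      # 706-4: web services (e.g., auto payment)
def actuate_sensor_network(task): ...  # 706-5: sensor network (e.g., thermostat)

ACTION_HANDLERS = {
    "network_adapter": call_third_party_api,
    "user_device": route_to_consultant,
    "ai_module": run_ai_module,
    "web_service": invoke_web_service,
    "sensor_network": actuate_sensor_network,
}

def execute_action(action_type, task):
    """Execute a single action; chains or graphs of actions call this
    once per node, in workflow order."""
    return ACTION_HANDLERS[action_type](task)
```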
An event trigger signal 708 is generated from either the event 702 or the action 706. The system 700 also continuously monitors the status of the event 702 and observes the current status of the workflow to generate trigger signals 708, 710. The interrupt module 712 transfers the trigger signal 710, after embedding additional contextual data, to the event 702 for generating a new event based on the current scenario. The interrupt module 712 is connected to the contextual unit 714. Alternatively, the trigger signal 710 and the embedded contextual data form the interrupt signal 716. The interrupt signal 716 may halt the current execution of the workflow and initiate another event to select a new workflow. Alternatively, the interrupt signal 716 only triggers the system 700 to switch between two workflows. In an example, a workflow that switches off a heating element connected to a water tank once the water reaches a certain temperature can be interrupted if the actor starts drawing water from the tank; a new workflow is then initiated to determine how long the actor uses the water. If the water usage is minimal and cannot affect the temperature of the tank water, the original workflow continues. Otherwise, the original workflow is interrupted and put to rest until another threshold level is reached.
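The water-tank example can be expressed as a short sketch. All of the methods on `system` and `contextual_unit` below are assumed, hypothetical hooks; the point is only the flow: embed fresh contextual data into the trigger signal to form the interrupt signal 716, then halt, switch, resume, or rest.

```python
def on_trigger(trigger_signal, contextual_unit, system):
    """Form an interrupt signal (716) by embedding current contextual data,
    then decide whether to halt the running workflow or let it continue."""
    interrupt = {"trigger": trigger_signal,
                 "context": contextual_unit.current_context()}  # embedded data
    if interrupt["context"].get("water_in_use"):
        system.pause_current_workflow()          # halt the heater-shutoff workflow
        usage = system.run_workflow("measure_water_usage", interrupt)
        if usage["impact_on_temperature"] == "minimal":
            system.resume_workflow()             # continue the original workflow
        else:
            system.rest_until_threshold()        # put the workflow to rest
```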
The stage 1 comprises the following step: (a) receiving (802) an event from an actor.
The stage 2 comprises the following steps: (b) observing (804) the latest conversation between the actor and the corresponding agent replies; (c) determining (806) whether any other workflow fits the conversation better.
The stage 3 comprises the following steps: (d) if yes, retrieving (808) the workflow name, the workflow description, the current workflow stages description, and the current workflow stage instruction steps; and (e) storing (810) the workflow description, the current workflow stages description, and the current workflow stage instruction steps into the memory; or (f) if no, continuing (812) with the current workflow.
The stage 4 comprises the following step: (g) generating (814) a reply based on the workflow description, the current workflow stages description, the current workflow stage instruction steps, the contextual data, and the history of the conversation.
The stage 5 comprises the following steps: (h) waiting (816) for the next event from the actor; (i) ending (818) the conversation if the next event is not received within a predetermined time; or (j) returning (820) to the stage 2 if the next event is received from the actor.
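For illustration only, the five stages could be arranged as the following loop. The `agent` interface (observation, workflow lookup, memory, reply generation) is a hypothetical sketch introduced here for readability, not the claimed implementation.

```python
def conversation_loop(event, agent, timeout_s=300):
    """Stages 1-5: receive an event, observe, possibly switch workflow,
    reply, then wait for the next event or end the conversation."""
    while event is not None:                            # stage 1: event received (802)
        conversation = agent.latest_conversation()      # stage 2: observe (804)
        better = agent.find_better_workflow(conversation)   # (806)
        if better is not None:                          # stage 3: (808)-(810)
            agent.memory.store(better)
            agent.current_workflow = better             # else: continue (812)
        reply = agent.generate_reply(agent.current_workflow,   # stage 4: (814)
                                     agent.context, conversation)
        agent.send(reply)
        event = agent.wait_for_event(timeout=timeout_s)  # stage 5: (816), (820)
    agent.end_conversation()  # no event within the predetermined time (818)
```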
The stage 1 comprises the following step: (a) receiving (902) an event from an actor.
The stage 2 comprises the following steps: (b) opening (904) the pre-saved workflow description, the pre-saved current workflow stages description, and the pre-saved current workflow stage instruction steps; (c) generating (906) a fast-reply based on the workflow description, the current workflow stages description, the current workflow stage instruction steps, the contextual data, and the history of the conversation.
The stage 3 comprises the following steps: (d) observing (908) the latest conversation between the actor and the corresponding agent replies; (e) determining (910) whether any other workflow fits the conversation better.
The stage 4 comprises the following steps: (f) if yes, retrieving (912) the workflow name, the workflow description, the current workflow stages description, and the current workflow stage instruction steps; and (g) saving (914) the workflow description, the current workflow stages description, and the current workflow stage instruction steps into the memory for the next turn; or (h) if no, continuing (916) with the current workflow; (i) generating (918) a re-think reply based on the workflow description, the current workflow stages description, the current workflow stage instruction steps, the contextual data, and the history of the conversation.
The stage 5 comprises the following steps: (j) waiting (920) for the next event from the actor; (k) ending (922) the conversation if the next event is not received within a predetermined time; or (l) returning (924) to the stage 2 if the next event is received from the actor.
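Under the same hypothetical `agent` interface as above, the fast-reply variant differs only in that a reply is produced from the pre-saved workflow before the workflow choice is re-examined:

```python
def fast_then_rethink(event, agent, timeout_s=300):
    """Stages 1-5 of the fast-reply variant (902-924), sketched."""
    while event is not None:                             # stage 1 (902)
        wf = agent.memory.load_saved_workflow()          # stage 2 (904)
        agent.send(agent.generate_reply(wf, agent.context))   # fast-reply (906)
        conversation = agent.latest_conversation()       # stage 3 (908)
        better = agent.find_better_workflow(conversation)     # (910)
        if better is not None:                           # stage 4 (912)-(914)
            agent.memory.save_for_next_turn(better)
            wf = better                                  # else: continue (916)
        agent.send(agent.generate_reply(wf, agent.context))   # re-think (918)
        event = agent.wait_for_event(timeout=timeout_s)  # stage 5 (920), (924)
    agent.end_conversation()                             # (922)
```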
The stage 1 comprises the following step: (a) receiving (1002) an event from an actor.
The stage 2 further comprises an option 1 and an option 2.
The option 1 comprises the following steps: (b) retrieving (1004-1) the workflow description, the current workflow stages description, and the current workflow stage instruction steps; (c) generating (1006-1) a fast-reply based on the workflow description, the current workflow stages description, the current workflow stage instruction steps, the contextual data, and the history of the conversation.
The option 2 comprises the following steps: (b) observing (1004-2) the latest conversation between the actor and the corresponding agent replies; (c) determining (1006-2) whether any other workflow fits the conversation better; (d) if yes, retrieving (1008-2) the workflow name, the workflow description, the current workflow stages description, and the current workflow stage instruction steps; (e) saving (1010-2) the workflow description, the current workflow stages description, and the current workflow stage instruction steps into the memory for the next turn; and (f) stopping (1012-2) the generation of the fast-reply in step (c) of option 1; or (g) if no, continuing (1014-2) with the current workflow.
The stage 3 comprises the following step: (h) generating (1016-2) a re-think reply based on the workflow description, the current workflow stages description, the current workflow stage instruction steps, the contextual data, and the history of the conversation.
The stage 4 comprises the following steps: (i) waiting (1018) for the next event from the actor; (j) ending (1020) the conversation if the next event is not received within a predetermined time; or (k) returning (1022) to the stage 2 if the next event is received from the actor.
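Because option 2 can stop the fast-reply mid-generation, this variant maps naturally onto concurrent tasks. The following is a minimal `asyncio` sketch, assuming the hypothetical agent exposes asynchronous methods; the cancellation at step 1012-2 is the essential difference from the prior methods.

```python
import asyncio
import contextlib

async def parallel_turn(agent, event):
    """Run option 1 (fast-reply) and option 2 (workflow re-evaluation)
    concurrently; option 2 may cancel the fast-reply (1012-2)."""
    fast = asyncio.create_task(agent.generate_fast_reply(event))   # option 1
    conversation = await agent.observe_conversation()              # 1004-2
    better = await agent.find_better_workflow(conversation)        # 1006-2
    if better is not None:
        await agent.memory.save_for_next_turn(better)              # 1008-2, 1010-2
        fast.cancel()                                              # 1012-2
        with contextlib.suppress(asyncio.CancelledError):
            await fast
    else:
        await fast                                                 # 1014-2
    return await agent.generate_rethink_reply()                    # stage 3 (1016-2)
```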
Receiving (1102) an event includes generating the event based on at least one prompt, message, signal, API call, or a combination thereof.
Embedding (1104) the contextual data includes adding the current state of at least one actor, an environment, an actor history, a current workflow, or a combination thereof. The at least one actor is a user, a human, a connector, or a non-human logical structure.
Alternatively, the actor is a sensor capturing an environmental or physical metric, wherein the captured metric constitutes the event.
Defining (1106) at least one task includes generating at least one action, a chain of actions, a graph of actions, a prompt, or a combination thereof.
Relaying the task, the event, or the embedded contextual data to an artificial intelligence module comprises a step of receiving an output from the artificial intelligence module. The output comprises at least one action, a chain of actions, a graph of actions, or a combination thereof.
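By way of illustration, steps 1102-1106 and the relay to the artificial intelligence module could be sketched as follows; the `Event` fields and the `ai_module.submit` hook are assumptions made for readability only.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    source: str                          # prompt, message, signal, or API call (1102)
    payload: str
    context: dict = field(default_factory=dict)

def embed_context(event, contextual_unit):
    """Step 1104: add the current state of the actor, the environment,
    the actor history, and the current workflow to the event."""
    event.context.update(contextual_unit.current_state())
    return event

def define_and_relay(event, ai_module):
    """Step 1106 and the relay: define a task, send it to the AI module,
    and receive an action, a chain of actions, or a graph of actions."""
    task = {"event": event.payload, "context": event.context}
    return ai_module.submit(task)        # output: action(s), chain, or graph
```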
The confidence level is assigned by a confidence module to each input received from an actor, or to each mission or task assigned to the intelligent flow framework module. In one example, the value of the confidence level ranges from 0 to 100.
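A confidence module could be as simple as the following sketch, assuming a hypothetical `confidence_model.score` hook that returns a raw value between 0.0 and 1.0:

```python
def assign_confidence(item, confidence_model):
    """Map a raw model score onto the 0-100 confidence range of the example."""
    raw = confidence_model.score(item)        # assumed scoring hook, 0.0-1.0
    return max(0, min(100, round(raw * 100)))
```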
The processor 1402 may comprise a single-core or multi-core processor. The processor 1402 executes software instructions or algorithms to implement functional aspects of the present invention. The processor 1402 can be a cloud server that hosts an intelligent flow framework module comprising an intelligent flow agent, an active knowledgebase, and a contextual unit (as shown above).
The non-transitory storage element 1404 may include any of the volatile memory elements (for example, random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), synchronous dynamic random-access memory (SDRAM), etc.), non-volatile memory elements (for example, read-only memory (ROM), hard drive, etc.), magnetic, semiconductor, tape, optical, removable, non-removable, or other types of storage devices or tangible media, and combinations thereof. Typical forms of non-transitory media include, for example, a flash drive, a flexible disk, a hard disk, a solid state drive, magnetic tape or other magnetic data storage medium, a compact disk read-only memory (CD-ROM) or other optical data storage medium, any physical medium with patterns of holes, a non-transitory computer-readable medium, random-access memory (RAM), a programmable read-only memory (PROM), an electrically erasable programmable read-only memory (EEPROM), a FLASH-EPROM, other flash memory, non-volatile random-access memory (NVRAM), a cache, a register, other memory chip or cartridge, or networked versions of the same. The non-transitory storage element 1404 may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor 1402. The non-transitory storage element 1404 can include one or more software programs or algorithms, each of which includes an ordered listing of executable instructions for implementing logical functions.
The processor 1402 and the non-transitory storage element 1404 may communicate with each other through an internal connection path to transfer a control signal and/or a data signal. Alternatively, the processor 1402 and the non-transitory storage element 1404 may communicate with each other using network adapters (discussed in detail above).
The non-transitory storage element 1404 is configured to store encoded instructions 1406, and the processor 1402 is configured to implement the encoded instructions 1406 stored in the non-transitory storage element 1404 to perform the method steps of the present invention. The processor 1402 and the non-transitory storage element 1404 may be independent modules. Alternatively, during specific implementation, the processor 1402 and the non-transitory storage element 1404 may be integrated into one module. The processor 1402 is configured to execute the encoded instructions 1406 in the non-transitory storage element 1404 to implement the foregoing functions.
The system 1500 further comprises a connector 1504. The connector 1504 includes a contextual unit 1504-1 for generating contextual data. The system 1500 utilizes the integration ID and the connector ID to connect with the connector 1504 and the contextual unit 1504-1. The system 1500 generates an event 1506 based on the contextual data received from the contextual unit 1504-1 and a signal received from the actor 1 1502-1. The system 1500 allocates or generates an event ID for the generated event 1506. The system 1500 includes an intelligent flow framework module and an artificial intelligence module (discussed in detail above).
Similarly, the system 1500 generates an event 1518 based on a message received from the actor 2 1502-2 (an industry expert) and the connector 1516. The connector 1516 includes a camera 1516-1 and a network adapter 1516-2. The camera 1516-1 detects the current state of a human or person to generate the contextual data. The network adapter 1516-2 may receive input or contextual data from a third party (not shown). The system 1500 utilizes the integration ID and the connector ID to connect with the connector 1516, the camera 1516-1, and the network adapter 1516-2. The system 1500 allocates an event ID to the generated event 1518. The system 1500 may select a workflow-2 1520 from a plurality of workflows based on the event 1518. Alternatively, the system 1500 may generate a workflow using the intelligent flow framework module and the artificial intelligence module. After the workflow-2 1520 is selected, the system 1500 generates a message or command 1522. The message or command 1522 is generated by an actor 1512-2. The actor 1512-2 is initiated by an agent person 1512. In one example, the actor 1512-2 is a sensor network, and the agent person 1512 is an industry expert. The actor 1512-2 is authenticated using a previously stored external ID (agent1@newo.ai), integration ID (sendgrid), and connector ID (connector_1). Further, the message or command 1522 is connected to the connector 1524 using the connector ID. The connector 1524 includes a network adapter 1524-1. The network adapter 1524-1 relays the received message or command to a third party for executing the desired operation.
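To illustrate how the identifiers cooperate, the following sketch generates an event with an allocated event ID and authenticates an actor against its previously stored IDs. The field and method names are hypothetical; only the example ID values come from the description above.

```python
import uuid

def generate_event(system, connector, actor_signal):
    """Connect via the stored integration ID and connector ID, merge the
    connector's contextual data with the actor's signal, and allocate an event ID."""
    channel = system.connect(connector.integration_id, connector.connector_id)
    context = channel.contextual_unit.read()     # e.g., camera 1516-1 state
    return {"event_id": str(uuid.uuid4()),       # allocated event ID
            "signal": actor_signal,
            "context": context}

def authenticate_actor(actor, registry):
    """Check the actor's previously stored external ID, integration ID, and
    connector ID (e.g., agent1@newo.ai / sendgrid / connector_1)."""
    stored = registry.lookup(actor.external_id)
    return (stored is not None
            and stored.integration_id == actor.integration_id
            and stored.connector_id == actor.connector_id)
```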
The descriptions are merely example implementations of this application and are not intended to limit the protection scope of this application. A person with ordinary skill in the art may recognize substantially equivalent structures or substantially equivalent acts to achieve the same results in the same manner or in a dissimilar manner; the exemplary embodiment should not be interpreted as limiting the invention to one embodiment.
The discussion of a species (or a specific item) invokes the genus (the class of items) to which the species belongs as well as related species in this genus. Similarly, the recitation of a genus invokes the species known in the art. Furthermore, as technology develops, numerous additional alternatives to achieve an aspect of the invention may arise. Such advances are incorporated within their respective genus and should be recognized as being functionally equivalent or structurally equivalent to the aspect shown or described. A function or an act should be interpreted as incorporating all modes of performing the function or act unless otherwise explicitly stated.
The description is provided for clarification purposes and is not limiting. Words and phrases are to be accorded their ordinary, plain meaning, unless indicated otherwise.
Number | Date | Country
---|---|---
63466271 | May 2023 | US