This invention relates generally to the field of manufacturing and more specifically to a new and useful method for automatically generating steps and guidance of a digital procedure within a manufacturing facility.
The following description of embodiments of the invention is not intended to limit the invention to these embodiments but rather to enable a person skilled in the art to make and use this invention. Variations, configurations, implementations, example implementations, and examples described herein are optional and are not exclusive to the variations, configurations, implementations, example implementations, and examples they describe. The invention described herein can include any and all permutations of these variations, configurations, implementations, example implementations, and examples.
As shown in
The method S100 further includes: extracting a first set of visual features from a first frame in the live video feed in Block S122; detecting a risk event related to the first equipment unit based on the first set of visual features in Block S130; and, in response to detecting the risk event, accessing an agentic procedure authoring model associated with the first equipment unit proximal the workspace in Block S132.
The method S100 also includes: correlating the risk event with an action language signal corresponding to a first action prompt related to the first equipment unit in Block S134; correlating the risk event with a risk language signal corresponding to a first process risk associated with execution of the first action prompt with the first equipment unit in Block S136; and, based on the action language signal, the risk language signal, and the agentic procedure authoring model, generating a sequence of steps predicted to resolve the risk event in Block S140.
The method S100 further includes: initializing a second instructional block containing the sequence of steps predicted to resolve the risk event in Block S150; inserting the second instructional block in the current instance of the digital procedure in place of the first instructional block in Block S152; and presenting a first step, in the sequence of steps contained in the second instructional block, to an operator device associated with the operator in Block S154.
As shown in
The agentic platform operates as follows: it receives prompts from operators, software systems (e.g., Manufacturing Execution System software), and/or automation systems (e.g., automated manufacturing platforms). These prompts can take the form of questions, requests, reports of deviations during procedure execution, or changes to procedures before or during execution. The agentic platform confirms the roles and priorities of the different instances that need to contribute to the system response.
The agentic platform retrieves information from a database to determine the priorities and knowledge base associated with each instance acting as an agent for a particular role. It provides an AI forum where the instances and roles can engage in discussions, optimized for formulating a consensus and prioritizing responses based on the assigned priorities. The platform formulates clear and concise responses to requests, such as answering questions, providing regulatory guidance, suggesting deviation responses, or proposing changes to existing procedures or new procedure steps to address issues during execution.
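The flow described above can be sketched in simplified form. Everything here (the `Agent` structure, the role/priority table, `handle_request`) is a hypothetical illustration of the shape of the flow, not the platform's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str        # organizational role the instance represents
    priority: int    # rank assigned by the agentic platform
    knowledge: str   # identifier of the role-specific knowledge base

# Hypothetical role/priority table retrieved from the platform database.
ROLE_TABLE = {
    "EH&S Supervisor": (3, "ehs_kb"),
    "Head of Quality": (2, "quality_kb"),
    "Manufacturing Supervisor": (1, "mfg_kb"),
}

def handle_request(prompt: str) -> list:
    """Instantiate one agent per role and order their (placeholder) forum
    contributions by assigned priority, highest first."""
    agents = [Agent(r, p, kb) for r, (p, kb) in ROLE_TABLE.items()]
    contributions = [
        (a.priority, f"{a.role}: response to {prompt!r} drawn from {a.knowledge}")
        for a in agents
    ]
    return [text for _, text in sorted(contributions, reverse=True)]
```

In this sketch the AI forum itself is a placeholder; the point is only the sequence: role lookup, per-role instances backed by role-specific knowledge bases, and priority-ordered contributions.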
All activities within the agentic platform are logged in an audit trail in a human-readable format. This secure and tamper-proof database captures the dialogue, interactions, arguments, perspectives, and consensus reached among the different AI agents.
The dialogue and responses stored in the audit trail can be utilized to improve the model through scoring, alignment, and evaluation, thereby enhancing the reliability and effectiveness of the AI system. Scoring mechanisms evaluate the performance of AI instances in terms of their arguments, consensus-building, and decision-making abilities. Alignment techniques synchronize the viewpoints of the AI instances and ensure a coherent dialogue. Regular evaluation and feedback loops, including user validation and feedback, contribute to refining and enhancing the overall functionality and performance of the system. This alignment and evaluation process also aids in maintaining regulatory compliance and validating the response process.
Generally, Blocks of the method S100 can be executed by a computer system to: receive a request, such as from an operator or an automated system performing a current instance of a digital procedure within a manufacturing facility, to resolve a process risk for a particular consumable, raw material, equipment unit, or method within the facility; generate a new sequence of steps predicted to resolve the process risk for the particular consumable, raw material, equipment unit, or method within the facility based on language signals extracted from the request; modify the current instance of the digital procedure to insert the sequence of steps predicted to resolve the process risk; and serve a first step, in the sequence of steps, to an operator device (e.g., tablet, headset), such as in a text format presented at an interactive display at the operator device, to enable the operator to resolve the process risk during the current instance of the digital procedure.
In particular, the computer system can: receive a prompt from an operator proximal a workspace within the facility to initialize a first instructional block of a digital procedure; and, in response to initializing the first instructional block, access a live video feed from an optical sensor arranged proximal the workspace (e.g., at an autonomous cart, headset) and depicting the operator performing the first instructional block with a first equipment unit at the workspace. The computer system can then implement computer vision techniques to: extract visual features from the live video feed; detect objects (e.g., equipment units, operator) present at the workspace based on the visual features in the live video feed; and interpret actions performed by the operator based on object paths, velocities, and locations in the live video feed. Accordingly, the computer system can: interpret deviations from a first instruction specified in the first instructional block based on objects and/or operator actions detected in the live video feed; and/or interpret risk events (e.g., hazardous spills) proximal the workspace based on the objects and/or operator actions detected in the live video feed.
In one example, an operator is executing a procedure from the Manufacturing Execution System (MES) when an unexpected deviation occurs. The computer system detects the deviation, identifies it as a risk event, and scores the risk event. The computer system then categorizes the risk event and checks whether the procedure contains guidance on how to proceed. Each procedure is assigned a risk score based on its usage within the organization, the number of times it has been executed, the number of operators that have successfully executed the procedure, its Quality scoring, and any feedback or edits made to the procedure due to issues or inaccuracies. This procedure risk score can be paired with the risk score for the deviation event, and the computer system determines whether the combined risk score meets the specification to move forward with providing the instruction or whether a supervisor must sign off before further action is taken. Depending on the risk score of the deviation event and the risk score of the correction to the event, this supervisor may be a Manufacturing Supervisor, Quality Supervisor, Environmental Health & Safety (EH&S) supervisor, or the Head of Quality.
If guidance or steps to overcome the deviation event are not available within the procedure being executed, including steps instructing the operator how to proceed or recover from the deviation event, then the computer system may search the database of approved procedures for the correct recovery steps, either as an entire procedure or as a portion of an approved procedure containing the proper guidance to correct, fix, or move forward from the deviation event. If no guidance is available among the approved procedures for the manufacturing site, then the computer system may search the Connected Manufacturing Network of networked sites to determine whether an approved procedure or guidance exists, which may need to be auto-translated by AI into a language required by the site and/or operator. If the guidance does not exist in the Connected Manufacturing Network of approved procedures, then it may be taken from procedures that were authored but not yet approved.
If procedure steps to recover from the deviation event are not available, then they may be auto-generated by the AI system. A single AI system may generate a response to the deviation event and provide step-by-step instructions for the operator to execute, but such systems generally account for only one perspective on the process. That perspective may be the operator's view, which favors making task execution easiest with the least time or fewest steps; the Quality perspective, which may require thorough documentation of each step performed, multiple operators and supervisors present to sign off on the deviation event, and a record of all events leading up to the deviation to ease the Quality investigation into its root cause; or the Safety perspective, which ensures that the operator(s) conducting the deviation recovery steps perform tasks that minimize risk to the operators, with full information provided to the operators about potential safety issues. Any generated procedure needs to consider the perspectives and viewpoints of all of these roles, but significantly faster, so the operator can respond to a risk event in real time without waiting for a committee of supervisors to convene and formulate a plan to resolve the risk event.
This is where the agentic platform comes in: it can create multiple instances of different AI agents acting on behalf of specific roles in the organization to argue on behalf of, and protect the interests of, those roles. Each instance for an assigned agent role may pull from a dedicated database created or trained on data for that specific role. The agent roles can correspond to actual positions within an organization and mimic the style and requirements of the role, such as philosophy, style, interests, and requirements, or the agent role can be built from one or more people currently in the role. Alternatively, an agent role can represent a regulatory agency, to ensure laws, regulations, and guidance are properly followed, or a document or guidance itself, such as a PDF document or text from a regulatory agency website containing rules that must be followed during auto-generation and authoring of procedures by the AI system. The database for an agent role may be trained and curated to contain optimized knowledge for the role and approved responses reflecting how the agent responded previously, based on the scoring, alignment, and evaluation stored in the audit trail logs. Alternatively, if the token window is of sufficient size, all of the information for that role type may be copied into each instance when it is created, instructing the instance to assume that role and respond accordingly based on the information reviewed.
In this example, in which the operator requests assistance from the computer system to recover from the deviation event, the computer system processes and classifies the request; the agentic platform then determines which AI agent roles it needs to create as AI instances and assigns a priority rank to each role based on the requirements for the procedure creation. This may include assigning a higher priority to the Environmental Health & Safety supervisor AI agent role if the operator's health or life may be at elevated risk during performance of an action; a higher weighted priority to the Head of Quality AI agent role if a deviation has a direct impact on product release of material to the patient population; or increased priority to the Manufacturing Supervisor AI agent role if a key piece of equipment must be taken offline for a substantial period of time, negatively affecting production milestones during an investigation to overcome the deviation.
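The role-weighting logic described above can be illustrated with a hypothetical classification-to-priority mapping; the flag names and weight increments are assumptions of the sketch:

```python
def assign_priorities(classification):
    """Raise the weight of the agent role whose interests the classified
    request most directly implicates; all roles start at a base weight."""
    weights = {
        "EH&S Supervisor": 1,
        "Head of Quality": 1,
        "Manufacturing Supervisor": 1,
    }
    if classification.get("operator_safety_risk"):
        weights["EH&S Supervisor"] += 2          # elevated risk to operator health or life
    if classification.get("product_release_impact"):
        weights["Head of Quality"] += 2          # deviation affects release to patients
    if classification.get("equipment_downtime"):
        weights["Manufacturing Supervisor"] += 2  # key equipment offline for a long period
    return weights
```

The returned weights would then feed the forum, giving the most-affected role a louder voice without silencing the others.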
The discussion between the AI agents occurs in an AI forum where the conversation takes place in a human-readable format or can easily be converted into one. The individual AI agents argue for and push the agenda of their point of view to ensure that the recovery from the deviation, the authoring of a procedure, or the response to a question captures the primary points from each agent and reflects the regulatory, manufacturing, safety, and operator priorities in a manufacturing environment. If a consensus is not reached, an AI agent tasked with building consensus may join the conversation, over multiple rounds in which the consensus-building agent may gain increasing priority each round until a consensus is reached. The consensus statement, procedure, and/or output is then reviewed for conciseness, clarity, and relevance to the required output. In some models the consensus output is checked by a validation system to ensure the output follows the specifications and guidelines to provide a validated response. All information captured during this process, including the discussions, arguments for and against, comments, consensus building, statements, and revisions, is recorded in a secure audit-trail log for analysis, alignment, and optimization to improve the process and responses overall.
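The round-based consensus mechanism, with a consensus-building agent that gains priority each round, might look like the following sketch; reducing the forum dialogue to weighted position voting is an assumed simplification:

```python
def run_forum(agent_positions, max_rounds=5):
    """Round-based consensus: agents vote their positions with their assigned
    weights; if no position holds a majority, a consensus-building agent
    joins and gains one unit of weight each round until consensus is reached.
    Every round is appended to a human-readable audit trail."""
    audit_trail = []
    consensus_weight = 0
    for round_no in range(1, max_rounds + 1):
        tally = {}
        for role, (position, weight) in agent_positions.items():
            tally[position] = tally.get(position, 0) + weight
        leader = max(tally, key=tally.get)
        tally[leader] += consensus_weight  # consensus builder backs the leader
        audit_trail.append(f"round {round_no}: tally={tally}")
        if tally[leader] > sum(tally.values()) / 2:
            audit_trail.append(f"consensus: {leader}")
            return leader, audit_trail
        consensus_weight += 1
    return None, audit_trail
```

Returning the audit trail alongside the outcome mirrors the requirement that the dialogue, arguments, and consensus all be captured for later scoring, alignment, and evaluation.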
A scoring mechanism may be utilized to evaluate the performance of AI instances and agents in terms of their arguments in support of their field, their viewpoints, and the narratives they advance toward the consensus output. Alignment techniques, such as approval of the response by a human reviewer and/or a team of reviewers occupying the actual roles within a company, evaluate, score, and approve outputs to reinforce the viewpoints of the AI agents used. This evaluation may be utilized to validate the response and to improve the overall performance of the system. This alignment and evaluation process is an important tool for ensuring the best results and maintaining regulatory compliance for responses.
In an alternate example, in response to detecting the risk event in the live video feed, the computer system can: access an agentic procedure authoring model associated with the first equipment unit and characteristic of a first procedure convention for performing the set of instructional blocks by an operator at the first equipment unit; correlate a first risk event detected in the live video feed—for a current instance of the digital procedure performed by the operator—with a risk language signal corresponding to a process risk associated with execution of the first instructional block with the first equipment unit; and, based on the risk language signal and the agentic procedure authoring model, in which the relevant AI agents come to a consensus on the best approach to move forward while following regulatory guidance and requirements, generate a sequence of steps predicted to resolve the risk event detected in the current instance of the digital procedure performed by the operator. In this example, the computer system can then: initialize a second instructional block containing the sequence of steps predicted to resolve the risk event; and modify the current instance of the digital procedure to insert the second instructional block, created by the consensus of AI agents from the agentic procedure authoring model, in place of the first instructional block to enable the operator to resolve the risk event and proceed with execution of the digital procedure at the workspace. In particular, the computer system can: extract a first step, in the sequence of steps, in a first format (e.g., text format, video format) contained in the second instructional block; and present the first step to the operator device to enable the operator to resume execution of the digital procedure.
Therefore, the computer system can: detect a risk event for a current instance of a digital procedure performed by an operator proximal a workspace within the facility; implement an agentic procedure authoring model to generate a sequence of steps predicted to resolve the risk event taking into account the points of view from different AI agent roles to build consensus procedure steps; and modify the current instance of the digital procedure performed by the operator to include the sequence of steps to enable the operator to resolve the risk event and resume execution of the digital procedure at the facility.
Generally, an “agentic platform” as referred to herein is a computer software platform for responding to risk events, deviations, or questions related to a process. It consists of a primary AI system that creates multiple instances, trained on different databases, to form multiple separate AI agents that are assigned and priority-ranked by the agentic platform to represent roles and viewpoints within an organization, such as Manufacturing Director, Head of Quality, Environmental Health & Safety (EH&S), and Operators, or other perspectives, such as a regulatory agency, a guidance document, an equipment manual, or a procedure. Each agent argues on behalf of its role or viewpoint within an AI forum to ensure that viewpoint is considered in the outputs, including answers to manufacturing or regulatory questions, responses to process deviations, or the construction of instructional blocks of digital procedures performed within a facility and/or corpus of facilities.
Generally, an “agentic authoring model” as referred to herein is the agentic platform providing a consensus outcome from the multiple AI agents to the computer system, which then authors the instructional blocks that constitute the steps in a digital procedure performed within a facility and/or corpus of facilities.
Generally, an “agentic verification model” as referred to herein is the agentic platform providing a consensus outcome from the multiple AI agents to the computer system, which verifies that the instructional blocks and procedure steps meet the requirements and/or specifications for usage, execution, and verification (including electronic signatures) of the steps in a digital procedure performed within a facility and/or corpus of facilities.
Generally, “procedure authoring” as referred to herein is the modification and/or construction of instructional blocks of a digital procedure performed within a facility and/or corpus of facilities.
Generally, a “procedure convention” as referred to herein is a combination of instructions for digital procedures representative of (e.g., common to, typical of) digital procedures currently performed within a facility and/or corpus of facilities.
Generally, a “regulation convention” as referred to herein is a combination of regulations for digital procedures representative of (e.g., common to, typical of) digital procedures currently performed within a facility and/or corpus of facilities.
Generally, a “language signal” as referred to herein is a word or phrase that represents critical language concepts for performing steps of a digital procedure within a facility and/or corpus of facilities.
In one implementation, a computer system (e.g., remote computer system) can generate the digital procedure based on a document (e.g., electronic document, paper document) outlining steps for a procedure carried out in the facility and then serve the digital procedure to the autonomous cart. In this variation, the computer system can generally: access a document (e.g., electronic document, paper document) for a procedure in the facility; and identify a sequence of steps specified in the document.
In the foregoing variation, each step in the sequence of steps specified in the document can be labeled with: a particular location within the facility associated with an operator performing the step of the procedure; a target offset distance between the autonomous cart and the operator proximal the particular location of the facility; and a supply trigger defining materials—such as consumables, raw materials, lab equipment, production equipment, devices (e.g., AR glasses, VR headsets, tablets, network devices)—configured to support the operator performing the step at the particular location. Additionally, each step in the sequence of steps can be labeled with: a risk factor corresponding to a degree of risk associated with performance of the step—by the operator—at the particular location; and an event trigger corresponding to instructions executed by the autonomous cart in response to interpreting deviations from the step—performed by the operator—specified in the document and/or in response to an emergency event.
In this implementation, the remote computer system can then, for each step in the sequence of steps: extract an instruction containing the particular location, the target offset distance, the supply trigger, the risk factor, and the event trigger for the step specified in the document; initialize a block, in a set of blocks, for the step; and populate the block with the instruction for the step. Furthermore, the computer system can: compile the set of blocks into the digital procedure according to an order of the sequence of steps defined in the document; and serve the digital procedure to the autonomous cart for execution of the method S100, in the facility, to support an operator during performance of the sequence of steps specified in the document.
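The per-step labels and block compilation described in this implementation can be sketched with a simple data structure; the field names and block shape here are illustrative, not the system's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    location: str                                 # where in the facility the step is performed
    offset_m: float                               # target cart-to-operator offset distance
    supplies: list = field(default_factory=list)  # supply-trigger materials staged for the step
    risk_factor: int = 0                          # degree of risk for performing the step
    event_trigger: str = ""                       # cart instructions on deviation or emergency

def compile_procedure(steps):
    """Wrap each extracted step in a block, preserving document order."""
    return [{"block": i, "instruction": s} for i, s in enumerate(steps, start=1)]
```

Compiling in document order preserves the sequence defined in the source document, which is what the autonomous cart then executes step by step.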
In one implementation, the computer system can: access a geospatial location of the mobile device; identify a facility containing the geospatial location of the mobile device; automatically retrieve the instructional block library, such as from a remote computer system; and load the instructional block library at the mobile device for presentation to the operator. Furthermore, during performance of a modifiable digital procedure, the computer system can then automatically and/or selectively: replace instructional blocks in a current instance of the digital procedure with instructional blocks from the instructional block library; and/or add instructional blocks retrieved from the instructional block library to the current instance of the digital procedure.
For example, the mobile device can: retrieve a modifiable digital procedure containing a particular set of instructional blocks; retrieve the instructional block library containing sets of approved instructional blocks performed within the facility including the particular set of instructional blocks for the modifiable digital procedure; and present the modifiable digital procedure and these sets of instructional blocks to the operator, such as via a digital display at the mobile device associated with the operator during performance of the modifiable digital procedure. In this example, the computer system can: in response to initializing a first instructional block in the modifiable digital procedure, present a list of alternative instructional blocks for the first instructional block defined in the instructional block library to the operator; and receive confirmation of selection—by the operator at the mobile device—for an alternative instructional block from the list of alternative instructional blocks presented to the operator. The computer system can then: modify the current instance of the modifiable digital procedure to replace the first instructional block with the alternative instructional block selected by the operator; and record this modification in a procedure log for this modifiable digital procedure.
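The replace-and-log step in this example reduces to a small operation over the current procedure instance; the block shape (`id` keys) and log format are assumptions of the sketch:

```python
def replace_block(procedure, block_id, alternative, procedure_log):
    """Swap an instructional block for the operator-selected alternative
    from the library and record the modification in the procedure log."""
    for i, block in enumerate(procedure):
        if block["id"] == block_id:
            procedure_log.append({"replaced": block_id, "with": alternative["id"]})
            procedure[i] = alternative
            return procedure
    raise KeyError(f"block {block_id} not found in current procedure instance")
```

Logging before mutating keeps the procedure log complete even if later steps in the modification fail.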
In another example, the computer system can: retrieve a modifiable digital procedure containing a particular set of instructional blocks; receive a selection—by the operator at the mobile device—to remove one or more instructional blocks in the particular set of instructional blocks for the modifiable digital procedure; and present this modified digital procedure to the operator at the mobile device. Additionally, the computer system can: retrieve the instructional block library containing sets of approved instructional blocks performed within the facility; and present the instructional block library to the operator at the mobile device. The computer system can then: receive selection of one or more instructional blocks from the instructional block library by the operator at the mobile device; and load these instructional blocks from the instructional block library into the current instance of the digital procedure. Furthermore, the computer system can: modify an order of these instructional blocks in the modified digital procedure upon selection by the operator; generate a new digital procedure containing instructional blocks from the retrieved digital procedure and instructional blocks retrieved from the instructional block library; and transmit this new digital procedure to a supervisor for approval and/or review. Depending on the level of approval required and the nature of the change to the procedure, this supervisor review and approval can be immediate, completed as soon as the change is reviewed and signed off with an electronic signature in the computer system, or it may require multiple rounds of review and approval by different supervisors within a company.
To assist in the review process, the changes may be highlighted in the digital procedure along with their origin: whether the changes originated from a human author; whether the changes were additions inserted by the computer system from the step library and, if so, whether those changes were previously approved; whether the changes came from the computer system modifying the order of the steps or inserting new parameters into a template; or whether the changes or modifications came from a generative AI program.
Therefore, the computer system can: modify a current instance of the digital procedure with instructional blocks from the instructional block library; and generate a new digital procedure based on instructional blocks within the current instance of the digital procedure and approved instructional blocks retrieved from the instructional block library; thereby expediting the approval process of this new digital procedure by implementing these facility-approved instructional blocks from the instructional block library.
The computer system can preferentially attempt to select or modify existing instructional blocks from an instructional block library before generating a new digital procedure. This can include utilizing procedures in an instructional block library that have already been approved in the system by the Quality group at a company. If the approved digital procedure does not match the instructions the operator or end user requires, the computer system may utilize the existing content in the approved digital procedure as a template and modify the parameters to fit the required digital procedure instructions. This can include utilizing procedures from other facilities in the company's network that are linked to the system in a connected manufacturing network, where an approved digital procedure from another site can be utilized as a template and modified by the computer system to provide the proper instructions and parameters for the facility where the work will be performed. In other instances, approved sections or even individual tasks from approved instructional blocks may be utilized to increase the likelihood that the newly generated digital procedure is accepted by the reviewers and approved by the Quality group, by providing existing content from instructional blocks that has already been approved.
In the case of equipment procedures, the computer system can utilize instructional blocks from existing approved procedures for similar or related equipment as templates for adding new instructions and parameters for a new piece of equipment being introduced into a facility or process. This can include searching and scanning an equipment manual for the new equipment and inputting it into a templated procedure to update the parameters, specifications, and steps for performing actions with the new equipment following the digital guidance from the instructional blocks. In other instances, for new equipment that has no existing procedure or similar analog, a new digital guidance procedure may be generated by the computer system using the operational specifications and instructions from the equipment manual. This can also be the case for consumables and raw materials, which come with instructional guides or datasheets containing parameters and specifications for usage and testing, such as a consumable product like a sterilizing-grade filter, which would include flow rate information, pressure specifications, temperature limits, and integrity testing parameters such as bubble point and diffusion specifications for testing the filter.
In instances of transferring methods or adopting new methods for processing, the computer system can utilize other similar methods performed with approved techniques to execute the task. The computer system may utilize methods from approved digital procedures at the facility, approved digital procedures at other facilities within the connected manufacturing network, approved sections or instructional blocks, or generate instructions from methods searched within a database, such as a cloud-based data storage system, or from AI generation of digital content instructions and instructional blocks using models trained on existing datasets of procedures.
In one implementation, the computer system can: aggregate approved instructional blocks from each digital procedure performed at the facility; compile these instructional blocks from these digital procedures into an instructional block library; and store the instructional block library, such as at the remote computer system, for retrieval by devices within the facility. In particular, the computer system can: access an electronic document for a procedure in a facility; identify a sequence of steps specified in the electronic document; extract an instruction for each step in the sequence of steps; initialize an instructional block, in a set of instructional blocks, for this step; and populate the instructional block with the instruction. The computer system can then: repeat this process for multiple electronic documents corresponding to multiple procedures at the facility; and store these sets of instructional blocks in the instructional block library contained at a remote computer system.
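The aggregation loop described here, repeated over multiple electronic documents, can be sketched as follows; representing each document as a list of already-extracted step instructions is an assumed simplification:

```python
def build_block_library(documents):
    """Extract an instruction per step from each procedure document and
    compile the resulting instructional blocks into one library."""
    library = []
    for doc_id, steps in documents.items():
        for n, instruction in enumerate(steps, start=1):
            library.append({"source": doc_id, "step": n, "instruction": instruction})
    return library
```

Keeping the source document and step index on each block makes blocks traceable back to the approved procedure they came from, which supports the review and approval arguments made elsewhere in this section.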
In another implementation, the computer system can: retrieve a particular instructional block from the instructional block library; modify text, media, values, etc. in this particular instructional block; generate a new instructional block based on this modified instructional block; and store this new instructional block in the instructional block library.
In one example of this implementation, the mobile device can: retrieve the instructional block library from a remote computer system; present the instructional block library to the operator at the mobile device; and confirm selection of a particular instructional block—by the operator—in the instructional block library presented to the operator. The computer system can then: load a modifiable instance of this particular instructional block at the mobile device of the operator; modify instructions, such as text, audio media, and visual media populated in the particular instructional block; record these modifications in a block procedure log for the particular instructional block; and generate a new instructional block based on this modified instructional block. Therefore, the computer system can generate new instructional blocks based on previously approved instructional blocks for digital procedures performed within the facility and thereby expedite the review and approval process of these new instructional blocks.
In yet another implementation, the computer system can: at the mobile device of the operator, initialize a new instructional block; and generate a prompt for an operator to populate the new instructional block with an instruction. The computer system can then: serve this prompt at the mobile device of the operator; receive the instruction at the mobile device from the operator; and store this new populated instructional block at the instructional block library. For example, the computer system can: receive visual media for an instruction recorded by the operator via an optical sensor at the mobile device; receive a string of text from the operator representing the instruction via a computing interface at the mobile device; and/or receive audio media of the instruction recorded by the operator via a microphone at the mobile device. Additionally, the computer system can then populate the new instructional block with the text strings, audio media, and/or visual media received from the operator. Furthermore, the computer system can: confirm population of the new instructional block with the instruction from the operator; transmit this new instructional block to a supervisor device associated with a supervisor; and queue the new instructional block for approval and review by the supervisor.
A particular instructional block in the instructional block library can include data associated with the particular instructional block for the computer system to link the block to a step and/or series of steps contained within a modifiable digital procedure. For example, the particular instructional block can include labels or tags associated with this particular instructional block so an association can be made between the instructions that the particular instructional block provides, the procedures currently linked to the particular instructional block, and the types of procedures and steps that can be associated with the particular instructional block. Labeling of the instructional blocks allows instructional blocks to be associated with related procedures, common procedures, equipment-linked procedures, method-linked procedures, or other procedure types. Instructional blocks can also be uploaded from an external organization and receive labels and/or tags upon upload into the platform. For example, a contract manufacturing organization can link uploaded instructional blocks only to a specific client's procedures, and an equipment vendor's instructional blocks can be linked only to procedures involving that specific model of the vendor's equipment.
Additionally or alternatively, an instructional block in the block library may undergo scoring, where each block receives a score for the quality of its instruction and its applicability to the procedures it is currently linked to. In one example, an instructional block is scored by the clarity of its content, such as: the clarity of the text; the audio quality of an audio clip (e.g., no static or distracting background noises); and the video quality, screen sizing, angle, and clarity of the instructional material being shown. The quality score may also reflect the conciseness of the material in the instructional block, where the same level of instructional material is conveyed in less time rather than taking significantly longer to convey the same information. The quality score may further reflect the accuracy of the material: an instructional block containing inaccurate information or outdated methods would be ineligible for linking to existing or new procedures. The applicability score may include: the relevance of the labels and/or tags to the contents of the instructional block; the number and types of procedures the instructional block is already linked to; and operator voting, where operators can upvote or downvote an instructional block for the quality of its content and the strength of its applicability to the procedures it is currently linked to. These scores can be manually entered by users and/or procedurally generated through an automated analysis algorithm.
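One way the quality and applicability sub-scores described above could be combined is sketched below. The specific weights, the 0-to-1 sub-score scale, and the rule that zero accuracy makes a block ineligible are assumptions for illustration only.

```python
def score_block(clarity, conciseness, accuracy,
                tag_relevance, linked_count, upvotes, downvotes):
    """Combine quality and applicability sub-scores (each in [0, 1]) into one score.

    Weights are illustrative; a block with inaccurate content (accuracy == 0)
    is treated as ineligible for linking regardless of its other sub-scores.
    """
    if accuracy == 0:
        return 0.0
    # Quality: clarity, conciseness, and accuracy of the instructional material.
    quality = 0.4 * clarity + 0.3 * conciseness + 0.3 * accuracy
    # Applicability: tag relevance, breadth of existing links, and operator voting.
    vote_total = upvotes + downvotes
    vote_score = upvotes / vote_total if vote_total else 0.5
    applicability = (0.5 * tag_relevance
                     + 0.2 * min(linked_count / 10, 1.0)
                     + 0.3 * vote_score)
    return round(0.6 * quality + 0.4 * applicability, 3)
```

For example, a perfectly clear, concise, accurate block with fully relevant tags, ten linked procedures, and an 8-to-2 vote split would score 0.976 under these assumed weights, while any block flagged as inaccurate scores 0.0.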
Generally, the computer system can: retrieve manuals (e.g., equipment unit manuals, regulation manuals) from a manual library at a remote computer system; retrieve transcript documents from a transcript document library representative of previous communications between consultants (e.g., quality consultants, health consultants) and operators within the facility; and implement these manuals and transcript documents to train models representative of procedure conventions carried out for digital procedures performed within the facility.
In one implementation, the computer system can: retrieve the equipment unit manual for a particular equipment unit in the facility from an external computer system (e.g., manual document database); implement computer vision techniques to scan the equipment unit manual to detect words, phrases, and images in the equipment unit manual; identify an equipment unit identifier in the words, phrases, and images in the equipment unit manual analogous to a particular equipment unit in a corpus of equipment units deployed in the particular facility; and link the equipment unit manual to the particular equipment unit within the facility in an equipment unit manual library. Additionally or alternatively, the computer system can: scan a physical document (e.g., paper document) representing the equipment unit manual for the particular equipment unit; and store this digital document as the equipment unit manual in the manual database.
In particular, the equipment unit manual can represent: a detailed suite of instructions corresponding to instructions and/or methods of operation (e.g., calibration instructions, troubleshooting instructions, modifying parameter settings) for a particular equipment unit associated with a digital procedure; and/or a suite of regulations (e.g., safety instructions, government regulations) associated with preferred handling of the equipment unit during performance of digital procedures within the facility.
Thus, the computer system can: compile a suite of equipment unit manuals corresponding to a corpus of equipment units currently located within a particular facility; and generate an equipment unit library based on the suite of equipment unit manuals linked to the particular facility. The computer system can additionally compile, from the corpus of available documentation, content, and digital guidance instructions, analogous suites covering consumables, raw materials, and other items located in a facility. The same applies to methods, and to the techniques for performing those methods, in the execution of procedures, steps, and tasks within a facility.
In one implementation, to provide the core database of information to the AI instance for a specific AI agent in the agentic authoring model, the computer system can: retrieve a regulatory procedure manual (e.g., safety regulation manual, environmental regulation manual) from an external computer system (e.g., regulatory document database) associated with a particular facility assigned to perform approved digital procedures; implement computer vision techniques to scan the regulatory procedure manual to detect words, phrases, and images in the regulatory procedure manual; detect a regulation identifier in the words, phrases, and images in the regulatory procedure manual corresponding to a particular approved digital procedure in a suite of approved digital procedures currently performed in the facility; and link the words, phrases, and images in the regulatory procedure manual to the approved digital procedure in a regulatory manual library. Additionally or alternatively, the computer system can: scan a physical document (e.g., paper document) representing the regulatory procedure manual for the particular facility; and store this digital document as the regulatory procedure manual in a regulatory manual database.
In particular, the regulatory procedure manual can represent: a detailed suite of instructions (e.g., handling instructions) corresponding to instructions and/or methods of execution for a particular equipment unit associated with an approved digital procedure; and/or a detailed suite of regulations (e.g., environmental regulations, safety regulations, quality assurance regulations) corresponding to regulations and/or methods of execution of a particular digital procedure within a corresponding region (e.g., state specific regulations, country specific regulations). Thus, the computer system can: compile a suite of regulatory procedure manuals corresponding to a suite of approved digital procedures currently performed and/or scheduled for performance at a particular facility; and generate a regulatory manual library based on the suite of regulatory procedure manuals linked to the particular facility.
In one implementation, to provide the core database of information to the AI instance for a specific AI agent in the agentic authoring model, the computer system can: retrieve a transcript document (e.g., e-mails, audio, video, video meetings, recordings, lectures, presentations, text transcript documents) from an external computer system (e.g., transcript database) representative of communication between an operator authoring digital procedures for the facility and either employees of the company, in particular employees in certain roles (e.g., Director of Manufacturing, Head of Quality, Head of Environmental Health & Safety) at a specific site, or consultants (e.g., safety consultant, quality consultant, environmental consultant, regulation specialist consultant); implement computer vision techniques and/or audio recognition techniques to scan the transcript document to detect words, phrases, and images in the transcript document; detect a regulation identifier in the words, phrases, and images in the transcript document corresponding to a particular approved digital procedure in a suite of approved digital procedures currently performed in the facility; and link the words, phrases, and images in the transcript document to the approved digital procedure in a transcript database. Additionally or alternatively, the computer system can: scan a physical document (e.g., paper document) containing text communication with an employee and/or consultant (e.g., safety consultant, environmental consultant), such as with a regulatory official or an operator authoring the digital procedure; and store this digital document as the transcript document in a transcript document database.
In particular, the transcript document can represent: a suite of transcribed communications with an employee and/or a consultant (e.g., safety consultant, environmental consultant), such as supporting an operator within the facility authoring a new digital procedure and/or transferring a digital procedure from a first facility to a second facility; and a suite of media (e.g., diagrams, graphs, videos, images, audio) obtained from the employee and/or consultant, which supports the operator in authoring verified instructional blocks for the particular facility. This information may be curated for relevance before being entered into a database and/or scanned and the score weighted for relevance by the computer system. Information that provides poor or contradictory information can be given a low scoring or removed from the system altogether to improve the coherence of the model for the instances of the AI agents to follow.
Thus, the computer system can: compile a suite of transcript documents representing previous communications with employees and/or consultants (e.g., safety consultants, environmental consultants) with operators within the facility; and generate a transcript document library based on the suite of transcript documents associated with a particular facility and/or a corpus of facilities assigned to support the AI agents in the agentic authoring model for the particular digital procedure.
In one implementation, the computer system can: initialize an equipment unit tag, in a set of equipment unit tags, representing a corpus of equipment units within the particular facility; populate the equipment unit tag with an equipment unit type (e.g., make and model), location within the particular facility, and calibration status of the equipment unit; and assign the equipment unit tag to the particular equipment unit at the particular facility. In this implementation, the computer system can then: query the instructional block library for a set of instructional blocks containing the equipment unit tag associated with the particular equipment unit; and aggregate the set of instructional blocks into a procedure data container corresponding to the particular equipment unit.
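The tag-and-query step above can be sketched as follows. The dictionary-based tag and block records, and the specific tag fields, are illustrative assumptions standing in for the system's actual data structures.

```python
def query_blocks_by_tag(block_library, equipment_tag):
    """Collect instructional blocks carrying a given equipment unit tag into a
    procedure data container (here, a plain dict)."""
    wanted = equipment_tag["type"]
    matching = [b for b in block_library if wanted in b.get("tags", [])]
    return {"equipment_tag": equipment_tag, "blocks": matching}

# Hypothetical equipment unit tag holding the make/model, location within the
# facility, and calibration status described above.
tag = {"type": "centrifuge #ABCD", "location": "Room 3", "calibrated": True}
blocks = [
    {"instruction": "Load sample into centrifuge", "tags": ["centrifuge #ABCD"]},
    {"instruction": "Weigh raw material", "tags": ["scale #0042"]},
]
container = query_blocks_by_tag(blocks, tag)
```

Only blocks tagged with the queried equipment unit land in the resulting procedure data container; blocks tagged to other equipment units are excluded.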
Furthermore, the computer system can scan the set of instructional blocks associated with the particular equipment unit and identify: sequences of texts representing steps of a procedure performed by an operator at the particular equipment unit; and images and/or video associated with the particular equipment unit. The computer system can then compile sets of data, in the procedure data container, corresponding to the sequences of texts, images, audio, and/or videos extracted from the set of instructional blocks and related to the particular equipment unit.
In one example, the computer system can retrieve an equipment unit tag corresponding to a particular equipment unit within the facility analogous to a centrifuge machine located at a particular location in the facility. The computer system can then query the instructional block library for a set of instructional blocks related to and/or containing the equipment unit tag for the centrifuge machine. The instructional blocks, in the set of instructional blocks, can include: steps of a procedure related to and/or implementing the centrifuge machine within the facility; and a set of media, such as images and/or videos related to performing steps defined in the set of instructional blocks. In this example, the computer system can also: retrieve a centrifuge machine manual from the equipment unit manual library that corresponds to the centrifuge machine within the facility; and implement text recognition and/or computer vision techniques to the centrifuge machine manual to identify objects in the equipment unit manual. In particular, the computer system can: identify words and/or phrases associated with operation of the centrifuge machine; and identify reference images relevant to operation of the centrifuge machine within the facility in the equipment unit manual.
Therefore, the computer system can: aggregate data extracted from the set of instructional blocks—related to the particular equipment unit—in the procedure data container; aggregate data extracted from the equipment unit manual into the procedure data container; and subsequently train the procedure authoring model, as described below, to author (i.e., generate and/or modify) a new set of instructional blocks related to the particular equipment unit. Additionally or alternatively, the computer system can: access a procedure record library corresponding to previously performed digital procedures within the facility; scan the procedure record library to identify a set of procedure records associated with the particular equipment unit, as described above; and store the set of procedure records in the procedure data container.
Additionally or alternatively, the computer system can: implement computer vision techniques, such as those described in U.S. patent application Ser. No. 17/968,677, filed on 18 Oct. 2022, which is hereby incorporated entirely by this reference, to detect objects in a sequence of images (e.g., images, video) in a set of procedure records associated with the particular equipment unit; and store the objects in the procedure data container associated with the particular equipment unit. Furthermore, the computer system can also: implement audio recognition techniques, such as those described in U.S. patent application Ser. No. 17/968,677, to detect audio phrases in the set of data related to the set of language signals in the set of procedure records; and store the audio phrases in the procedure data container associated with the particular equipment unit.
Generally, the computer system can: link sets of data in the procedure data container to a set of language signals representing language concepts corresponding to a procedure convention for a particular equipment unit within the facility; and train a model to generate a new sequence of instructional blocks associated with the particular equipment unit based on the set of language signals and existing digital procedures (e.g., approved digital procedures) currently performed in the facility. The trained model can draw from: a larger database of all procedures existing in the system; a smaller subset of all Good Manufacturing Practice (GMP)-approved procedures; a further smaller subset of GMP-approved procedures authored by the company itself looking to generate procedures with its layout, style, and contents; and an even smaller subset of GMP-approved procedures from the company site itself, taking into account the facility layout, design, and equipment to generate procedures with the required information already included from other template procedures developed at the same site. The procedure author can utilize the user interface within the computer system to switch between the different databases of procedures to yield the best results for what they are looking for, in case some databases are too small, narrow, or limited in content to provide the desired results. Thus, the computer system can train a procedure authoring model to generate new sequences of instructional blocks that: are compatible (i.e., readily integrable) with the particular facility; and are predicted to achieve a target outcome (e.g., calibration, batch yield) when performed by an operator within the facility at the particular equipment unit.
Any content generated by the procedure authoring model can undergo multiple reviews in the review process, which provides an opportunity to correct any mistakes potentially introduced during AI generation of new instructional blocks by the various procedure authoring models.
Additionally, the procedure authoring models can translate procedures between different languages, taking into account differences in how words and sentences are structured. This allows the procedure authoring models to include multi-lingual content, or to generate content in a single language and then auto-translate the content into the language required for execution at the site and/or by the operator. Within the computer system, the procedure authoring model creates an instructional block with a language layer, where the content may be translated into each supported language within the platform.
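One minimal way to model the language layer described above is a per-block translation map with fallback to the authored language; the class name, method names, and sample translations here are illustrative assumptions, and real translation would be performed by the authoring model rather than hand-entered.

```python
class LanguageLayer:
    """Per-language content for one instructional block; falls back to the
    authored (default) language when a translation is not yet available."""

    def __init__(self, default_language, content):
        self.default_language = default_language
        self.translations = {default_language: content}

    def add_translation(self, language, content):
        # In the described system, this content could be auto-translated.
        self.translations[language] = content

    def get(self, language):
        # Serve the requested language, or the authored content as a fallback.
        return self.translations.get(
            language, self.translations[self.default_language])

step = LanguageLayer("en", "Calibrate the centrifuge to the target parameter")
step.add_translation("de", "Kalibrieren Sie die Zentrifuge auf den Zielparameter")
```

Requesting a supported language returns its translation; requesting an unsupported one (say, French before a French layer exists) returns the authored English content rather than failing.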
The computer system can implement language models—such as natural language processing models or natural language understanding models tuned to particular language concepts—to detect words or phrases that represent critical language concepts in the procedure data container associated with a particular equipment unit. Additionally or alternatively, the computer system can implement natural language processing techniques to detect syntax characteristics (grammar, punctuation, spelling, formatting, sequence) of words or phrases in the procedure data container for the particular equipment unit.
In one implementation, the computer system can: scan the set of instructional blocks and the equipment unit manual stored in the procedure data container; and implement an action signal model to detect words or phrases—in the set of instructional blocks and/or the equipment unit manual—related to actions and/or instructions associated with performance of digital procedures with the particular equipment unit. For example, the computer system can detect words or phrases in the set of instructional blocks and the equipment unit manual, such as: “mixing a first material and a second material”; or “calibrate the centrifuge to a target parameter”.
Accordingly, the computer system can generate an action signal that represents the types and/or frequency of such action-related words or phrases in the procedure data container associated with the particular equipment unit. For example, for each word or phrase detected in the procedure data container, the computer system can: normalize the word or phrase; and generate a first action signal containing the normalized language value. In this example, the computer system can: normalize “turn on the centrifuge”, “initiate the centrifuge”, “start the equipment unit” to “trigger centrifuge”; and store the normalized values in discrete action signals for the procedure data container.
In another example, the computer system generates a single action signal representing presence and/or absence of action requests detected in the procedure data container. The computer system can also derive and store a frequency of action requests detected in the set of instructional blocks and the equipment unit manual or represent a ratio of action requests to other words or phrases in the procedure data container.
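The normalization and ratio derivation described above can be sketched as follows. The fixed lookup table is a stand-in for the action signal model (which the text describes as a trained language model), and the sample phrases mirror the examples given above.

```python
# Illustrative normalization table; a deployed action signal model would be a
# trained language model rather than a fixed lookup.
NORMALIZATION = {
    "turn on the centrifuge": "trigger centrifuge",
    "initiate the centrifuge": "trigger centrifuge",
    "start the equipment unit": "trigger centrifuge",
}

def extract_action_signals(phrases):
    """Map detected phrases to discrete normalized action signals and report
    the ratio of action-related phrases to all detected phrases."""
    signals = [NORMALIZATION[p] for p in phrases if p in NORMALIZATION]
    ratio = len(signals) / len(phrases) if phrases else 0.0
    return signals, ratio

phrases = [
    "turn on the centrifuge",
    "start the equipment unit",
    "contents may be hot",  # risk-related, not an action phrase
]
signals, ratio = extract_action_signals(phrases)
```

The same pattern applies to the risk, equipment unit, and regulation signals described below, with a different normalization table per signal type (e.g., mapping "flammable materials" to "fire risk").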
Similarly, the computer system can: scan the set of instructional blocks and the equipment unit manual in the procedure data container; and implement a risk signal model to detect words and/or phrases in the procedure data container related to threats, instability, and uncertainty associated with performance of digital procedures within the particular facility. For example, the computer system can detect words or phrases in the set of instructional blocks and the equipment unit manual, such as: “combustible materials”; “warning: do not inhale”; and/or “contents may be hot”.
Accordingly, the computer system can generate a risk signal that represents the types and/or frequency of such risk-related words or phrases in the procedure data container associated with the particular equipment unit. For example, for each word or phrase detected in the procedure data container, the computer system can: normalize the word or phrase; and generate a first risk signal containing the normalized language value. In this example, the computer system can: normalize “flammable materials”, “incendiary hazard”, and “combustible elements” to “fire risk”; and store the normalized values in discrete risk signals for the procedure data container.
In another example, the computer system generates one risk signal representing presence and/or absence of risk-related words or phrases detected in the procedure data container. The computer system can also derive and store a frequency of risk-related words or phrases detected in the set of instructional blocks and the equipment unit manual or represent a ratio of risk-related words or phrases to other words or phrases in the procedure data container.
In some procedures, health and safety warnings are included at the front of the procedure in order to inform the operator of the types of training and safety equipment, such as personal protective equipment (PPE), required for execution of the procedure. The computer system and the auto-generated procedures may list the types of PPE required to execute the procedure at the start of procedure execution and may inform the operator about safety risks for each individual step, or group of steps within a section, that warrants a reminder of the risks or of what to look out for during performance.
In one implementation, the computer system can: scan the text content stored in the unverified draft instructional block; and implement an equipment unit language processing model to detect words or phrases—in the set of instructional blocks—related to a corpus of equipment units (e.g., centrifuges, bio-reactors) located within the facility. For example, the computer system can detect words or phrases in the set of instructional blocks, such as: “centrifuge model #ABCD”; “bio-reactor interface”; or “scale calibration”.
Accordingly, the computer system can generate an equipment unit signal that represents the equipment unit types of such equipment unit-related words or phrases in the unverified draft instructional block. For example, for each word or phrase detected in the unverified draft instructional block, the computer system can: normalize the word or phrase; and generate a first equipment unit signal containing the normalized language value. In this example, the computer system can: normalize “locate centrifuge model #AABB”, “bio-reactor parameters”, and “scale calibration”; and store the normalized values in discrete equipment unit signals for the unverified draft instructional block.
In another example, the computer system generates a single equipment unit signal representing presence and/or absence of equipment unit types specified in the unverified draft instructional block. The computer system can also derive and store a frequency of equipment unit signals detected in the set of instructional blocks or represent a ratio of equipment unit signals to other words or phrases in the unverified draft instructional block.
In one implementation, to provide the core database of information to the AI instance for a specific AI agent in the agentic authoring model, the computer system can: scan transcript documents in the transcript document library and regulation manuals in the regulation manual library aggregated into a data container; and implement a regulation signal model to detect words or phrases—in the transcript documents and/or the regulation manuals—related to regulations (e.g., health, safety regulations) and/or instructions associated with performance of digital procedures with a particular equipment unit within the facility. For example, the computer system can detect words or phrases in the transcript documents and the regulation manual, such as: “safety guidelines for hazardous materials”; “health guidelines for handling materials”; and/or “environmental restrictions for equipment units”.
Accordingly, the computer system can generate a regulation signal that represents the types and/or frequency of such regulation-related words or phrases in the data container associated with a particular equipment unit and/or a particular facility. For example, for each word or phrase detected in the data container, the computer system can: normalize the word or phrase; and generate a first regulation signal containing the normalized language value. In this example, the computer system can: normalize “hazardous material spill”, “incendiary condition”, and “hazardous gas exposure” to “environmental, health and safety regulation”; and store the normalized values in discrete regulation signals for the data container.
In this implementation the computer system can utilize the regulatory documentation (such as laws, regulations, guidance, supporting information, court cases, citations, industry group organization guidance, and discussions) as the database for the AI instances for creating an AI agent role to represent the regulatory document. This AI agent can argue on behalf of following the guidance for the regulatory document and ensure the requirements are included in the agentic authoring model for procedure authoring, for deviation detection and recovery, and for answering regulatory and manufacturing related questions about a process.
Generally, the computer system can: compile procedure signal containers—representing language concepts contained in the equipment unit manual and the set of instructional blocks—into a sender model that represents combinations of language concepts representative of a procedure convention for implementing the particular equipment unit during performance of steps of digital procedures within the facility. More specifically, the computer system can: scan the procedure data container—including the equipment unit manual and the set of instructional blocks—for a set of language signals (e.g., input signals, action signals, equipment unit signals, risk signals); detect combinations of language signals in the procedure data container; and train a procedure authoring model associated with the particular equipment unit to generate new sequences of instructional blocks based on: combinations of language signals in the equipment unit manual and the set of instructional blocks; and existing digital procedures currently performed within the facility. In particular, the procedure authoring model for the particular equipment unit is characterized by a procedure convention for implementing the new instructional blocks of the digital procedure within the facility by an operator performing the digital procedure with the particular equipment unit.
In one implementation, the computer system can: scan instructional blocks, an equipment unit manual, and procedure records contained in the procedure data container associated with the particular equipment unit; implement methods and techniques as described above to detect a set of language signals in the procedure data container; and define the procedure convention for the particular equipment unit based on a frequency of language signals, in the set of language signals, detected across the procedure data container. In one example, the computer system can: retrieve a first digital procedure including a first set of instructional blocks associated with the particular equipment unit in the facility; scan the first set of instructional blocks to detect a set of language signals; and calculate correlations between the set of language signals extracted from the first set of instructional blocks and the frequency of language signals defined in the procedure convention for the particular equipment unit. Thus, the computer system can, in response to the correlation falling below a threshold correlation, interpret the first digital procedure as non-conforming to the procedure convention defined for the particular equipment unit (i.e., the first digital procedure deviates from common or typical procedures performed using the particular equipment unit within the facility).
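The conformance check above can be sketched by comparing signal-frequency vectors. Cosine similarity and the 0.9 threshold are assumed stand-ins here; the source does not specify the correlation measure or threshold value.

```python
import math

def signal_frequencies(signals):
    """Count how often each normalized language signal appears."""
    freq = {}
    for s in signals:
        freq[s] = freq.get(s, 0) + 1
    return freq

def convention_similarity(procedure_signals, convention_freq):
    """Cosine similarity between a procedure's signal frequencies and the
    convention's signal frequencies; a low similarity flags the procedure
    as non-conforming to the convention."""
    proc_freq = signal_frequencies(procedure_signals)
    keys = set(proc_freq) | set(convention_freq)
    dot = sum(proc_freq.get(k, 0) * convention_freq.get(k, 0) for k in keys)
    norm_p = math.sqrt(sum(v * v for v in proc_freq.values()))
    norm_c = math.sqrt(sum(v * v for v in convention_freq.values()))
    return dot / (norm_p * norm_c) if norm_p and norm_c else 0.0

# Hypothetical convention for the equipment unit, learned from the container.
convention = {"trigger centrifuge": 4, "fire risk": 1}
similarity = convention_similarity(
    ["trigger centrifuge", "trigger centrifuge"], convention)
conforming = similarity >= 0.9  # threshold is an assumed tuning parameter
```

A procedure whose signal mix closely matches the convention scores near 1.0; a procedure dominated by signals absent from the convention scores near 0.0 and would be flagged as a deviation.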
In one implementation, the computer system implements artificial intelligence, machine learning, regression, and/or other techniques to train a neural network to generate a sequence of instructional blocks associated with a particular equipment unit within the facility to accomplish a target outcome (e.g., calibration, batch yield).
In this implementation, the computer system can access a procedure data container for a particular equipment unit, such as containing: a first set of data corresponding to words or phrases extracted from an equipment unit manual specifying instructions and/or regulations for operation of the particular equipment unit; a second set of data corresponding to words or phrases extracted from a set of instructional blocks, retrieved from the instructional block library, associated with the particular equipment unit; and a third set of data corresponding to words or phrases extracted from a set of procedure records, retrieved from the record library, representing previously performed instances of digital procedures that implemented the particular equipment unit. The computer system can then implement methods and techniques described above to: detect a set of language signals from these sets of data; initialize a first procedure container associated with the particular equipment unit; and store the set of language signals in the first procedure container.
The computer system can thus: repeat this process for a corpus of equipment units located within the facility; and train the procedure authoring model to identify similarities and differences between integration of equipment units across the corpus of equipment units within the facility. The computer system can also: repeat this process for a corpus of equipment units across a corpus of facilities; and train the procedure authoring model to identify similarities and differences between integration of equipment units across the corpus of facilities.
Additionally, or alternatively, the computer system can: access a set of unapproved (or “failed”) procedure records from the procedure record library associated with the particular equipment unit and representing unapproved instances of digital procedures involving the particular equipment unit performed within the facility; detect a set of language signals in this set of unapproved procedure records; initialize a second digital procedure container associated with the particular equipment unit; and store the set of language signals in the second digital procedure container.
Within the set of unapproved or "failed" procedure records, the reviewer may tag certain information as false or inaccurate. This data can be provided to the neural network model as statements not to include in a procedure, or in groups of procedures dealing with specific equipment, methods, or facilities. Each tagged statement serves as a negative prompt or negative tag: once a statement has been flagged as false, it is excluded from future generated sequences.
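The negative-tag behavior described above can be sketched as a simple exclusion filter over generated steps. The step strings and flagged statements below are hypothetical; a production system would more likely apply negative tags during generation rather than as a post-filter.

```python
def filter_flagged_statements(generated_steps, flagged_statements):
    """Drop any generated step that matches a statement a reviewer has
    previously flagged as false or inaccurate (a 'negative tag')."""
    flagged = {s.strip().lower() for s in flagged_statements}
    return [step for step in generated_steps
            if step.strip().lower() not in flagged]

# Hypothetical example: one step was flagged in a failed procedure record.
steps = ["Open valve V-1", "Heat to 250 C", "Record the pressure"]
negative_tags = ["heat to 250 c"]
safe_steps = filter_flagged_statements(steps, negative_tags)
# safe_steps == ["Open valve V-1", "Record the pressure"]
```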
The computer system can then train a neural network (e.g., a convolutional neural network) to generate sequences of instructional blocks associated with the particular equipment unit based on differences and similarities between 1) the set of instructional blocks related to the particular equipment unit and approved digital procedures performed within the facility and 2) the set of instructional blocks related to the particular equipment unit and baseline (or “unapproved”) digital procedures within the facility. For example, the computer system can configure the neural network to output blocks of text representing a step-by-step process for performing the generated sequence of instructional blocks by an operator with the particular equipment unit. The computer system can then store this neural network as the procedure authoring model for the particular equipment unit within the facility.
In another implementation, the computer system implements deep learning techniques (e.g., transformer networks) to train a neural network to generate new sequences of instructional blocks corresponding to a particular equipment unit available for implementation within the facility.
Generally, the computer system can: receive a prompt to generate a sequence of instructional blocks (e.g., for calibrating the equipment unit, outputting a target batch yield, or measuring an article), such as at an operator device associated with an operator proximal the particular equipment unit within the facility; scan the prompt to detect the set of language signals associated with the particular equipment unit; and generate a current sequence of instructional blocks based on the set of language signals detected in the prompt and the procedure authoring model associated with the particular equipment unit. In particular, the computer system can: initialize a prompt at an operator device associated with an operator located proximal the particular equipment unit within the facility; populate the prompt with words or phrases input by the operator at the operator device; scan the words or phrases in the prompt to detect the set of language signals; and input the set of language signals into the procedure authoring model to generate a current sequence of instructional blocks. The computer system can then: compile the sequence of instructional blocks into a visual media, such as a flowchart diagram and/or blocks of text; and output the visual media at a display integrated into the operator device associated with the operator.
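The prompt-scanning step can be sketched as follows. The per-equipment-unit vocabulary and the `authoring_model` function are hypothetical stand-ins: the real authoring model is a trained neural network, not a lookup table.

```python
import re

def detect_language_signals(prompt, signal_vocab):
    """Scan an operator prompt for known language signals (whole-word,
    case-insensitive matches against the equipment unit's vocabulary)."""
    found = []
    for signal in signal_vocab:
        if re.search(r"\b" + re.escape(signal) + r"\b", prompt, re.IGNORECASE):
            found.append(signal)
    return found

# Hypothetical per-equipment-unit signal vocabulary.
VOCAB = ["calibrate", "batch yield", "measure"]

def authoring_model(signals):
    # Stand-in for the trained procedure authoring model: maps each detected
    # signal to a canned instructional block, for illustration only.
    library = {"calibrate": "Step 1: Zero the sensor. Step 2: Run the reference standard."}
    return [library[s] for s in signals if s in library]

signals = detect_language_signals("Please calibrate this unit", VOCAB)
blocks = authoring_model(signals)
```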
In one implementation, the computer system can: receive modifications and/or edits to the prompt, such as input by the operator at the operator device; and input the modified prompt into the procedure authoring model to generate a second sequence of instructional blocks associated with the particular equipment unit.
In another implementation, the computer system can: input the prompt into the procedure authoring model to generate a modifiable sequence of instructional blocks associated with the particular equipment unit; and output the modifiable sequence of instructional blocks at the operator device. In this implementation, the operator device can: receive manual authoring of the generated sequence of instructional blocks from the operator; and transmit the modified sequence of instructional blocks, such as to the instructional block library and/or to a remote viewer for inspection and/or review.
In one implementation, the computer system can: access a set of wireless signals received from an operator device associated with an operator within the facility; localize the operator device at a first location within the facility based on the set of wireless signals and a facility map; and detect a particular equipment unit proximal the first location within the facility, such as based on visual features extracted from optical sensors proximal the first location and/or manual confirmation of presence of the particular equipment unit by the operator at the operator device. The computer system can then retrieve a particular procedure authoring model, from a set of procedure authoring models, associated with the particular equipment unit detected proximal the first location in the facility. Thus, the computer system can, in response to detecting the particular equipment unit proximal the first location: initialize a prompt for the operator to input a string of text representing the procedure authoring request; and serve the prompt to the operator device.
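The localization step can be illustrated with a deliberately simple signal-strength-weighted estimate. Real indoor localization (trilateration, fingerprinting) is more involved; the beacon positions, RSSI-to-weight conversion, and facility map below are all hypothetical.

```python
import math

def localize_operator(rssi_readings, beacon_positions):
    """Estimate the operator device's position as the signal-strength-weighted
    centroid of known beacon positions (rssi_readings maps beacon id -> dBm)."""
    # Convert RSSI (more negative = weaker) to positive weights.
    weights = {b: 10 ** (rssi / 20.0) for b, rssi in rssi_readings.items()}
    total = sum(weights.values())
    x = sum(weights[b] * beacon_positions[b][0] for b in weights) / total
    y = sum(weights[b] * beacon_positions[b][1] for b in weights) / total
    return (x, y)

def nearest_equipment(position, equipment_map):
    """Return the equipment unit closest to the estimated position."""
    return min(equipment_map,
               key=lambda unit: math.dist(position, equipment_map[unit]))

# Hypothetical facility map: beacon and equipment coordinates in meters.
beacons = {"b1": (0.0, 0.0), "b2": (10.0, 0.0)}
equipment = {"centrifuge": (1.0, 0.0), "bioreactor": (9.0, 0.0)}
pos = localize_operator({"b1": -40, "b2": -70}, beacons)
unit = nearest_equipment(pos, equipment)
```

The strong `b1` reading pulls the estimate toward the centrifuge, so that unit's procedure authoring model would be retrieved.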
The operator device can then: populate the prompt with a string of text received from the operator handling the operator device, such as "Does this equipment unit need to be calibrated?", "Please provide instructions to use this equipment unit", or "What are the inputs for this equipment unit?"; generate the procedure authoring request based on the string of text received at the operator device; and transmit the procedure authoring request to the computer system for input into the procedure authoring model associated with the particular equipment unit. The computer system can then: scan the procedure authoring request to detect a first action signal (e.g., calibrate, measure, mix), in the set of language signals; and input the first action signal into the procedure authoring model to generate a sequence of instructional blocks predicted to yield a desired outcome for the action-related request identified in the procedure authoring request.
In one example, the operator can transmit a procedure authoring request corresponding to “provide instructions for calibrating this centrifuge machine” to the computer system. The computer system can thus: detect “calibrate” as an action signal in the procedure authoring request; input the “calibrate” action signal into a procedure authoring model for the centrifuge machine; and generate a sequence of instructional blocks for calibrating the centrifuge machine according to a procedure convention defined in the procedure authoring model. In this example, the computer system can then: transmit the generated sequence of instructional blocks to the operator device; and transmit images, video, and/or sections of the equipment unit manual associated with the “calibrate” action signal to the operator device. As described above, the operator device can: transmit secondary procedure authoring requests to the computer system following generation of the sequence of instructional blocks; and/or modify a previous procedure authoring request sent to the computer system to generate a new sequence of instructional blocks until a desired outcome is achieved by the operator.
Therefore, the computer system can: receive, in real time, a procedure authoring request from an operator device proximal a particular equipment unit within the facility; and autonomously generate a sequence of instructional blocks to achieve an outcome of the procedure authoring request when performed by the operator within the facility.
In one implementation, the computer system can: access a facility schedule specifying time periods (e.g., a week, month) for performing steps of digital procedures related to a set of equipment units within the facility; and identify a particular equipment unit scheduled for maintenance in the facility schedule. The computer system can then: identify a second equipment unit, in a corpus of equipment units, within the facility similar to the particular equipment unit scheduled for maintenance; retrieve a procedure authoring model associated with this second equipment unit; and generate a sequence of instructional blocks, as described above, based on the procedure authoring model to replace the particular equipment unit scheduled for maintenance within the facility with the second equipment unit. Thus, the computer system can stage digital procedures scheduled for performance within the facility to cycle (or "replace") existing equipment units utilized for performing digital procedures within the facility.
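Identifying a "similar" second equipment unit can be sketched as an attribute-overlap comparison. The attribute sets below are hypothetical stand-ins for whatever equipment metadata the facility maintains.

```python
def most_similar_unit(target, candidates):
    """Pick the candidate equipment unit sharing the most attributes
    (e.g., type, capacity, interface) with the unit scheduled for maintenance."""
    def overlap(unit):
        return len(set(unit["attributes"]) & set(target["attributes"]))
    return max(candidates, key=overlap)

# Hypothetical equipment metadata.
scheduled = {"id": "mixer-1", "attributes": {"mixer", "500L", "jacketed"}}
fleet = [
    {"id": "mixer-2", "attributes": {"mixer", "500L", "jacketed"}},
    {"id": "mixer-3", "attributes": {"mixer", "200L"}},
]
replacement = most_similar_unit(scheduled, fleet)
# replacement["id"] == "mixer-2"
```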
In one implementation, the computer system can: identify an equipment unit type located across a corpus of facilities; and implement methods and techniques as described above to generate a procedure authoring model associated with the equipment unit type and characteristic of a procedure convention for implementing the equipment unit type across the corpus of facilities. Thus, the computer system can: receive a procedure authoring request associated with a particular equipment unit type across a corpus of facilities; and generate a sequence of instructional blocks for the particular equipment unit type for implementation across the corpus of facilities. The corpus of facilities may form a connected manufacturing network, in which the computer system links the facility to other facilities globally, internal or external to the organization, such as an external Contract Manufacturing Organization (CMO) or Contract Development and Manufacturing Organization (CDMO) that manufactures products for the original company or facility within the manufacturing network.
In one implementation, the computer system can: generate a prompt requesting a user to score an outcome of the sequence of instructional blocks following performance by the operator within the facility; and transmit this prompt to the operator device associated with the operator. In response to the operator selecting a score above a threshold score, the computer system can then append the sequence of instructional blocks to the set of instructional blocks within the procedure data container for training the procedure authoring model associated with the particular equipment unit. The computer system can thus repeat this process across sets of instructional blocks generated by the procedure authoring model to routinely train the procedure authoring model associated with the particular equipment unit.
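The score-gated feedback loop above can be sketched as follows, assuming a hypothetical 1-5 scoring scale and a threshold of 4; the container structure is an illustrative stand-in for the procedure data container.

```python
def record_operator_feedback(sequence, score, data_container, threshold=4):
    """Append an operator-scored sequence of instructional blocks to the
    procedure data container's training set only when the score clears the
    threshold; return whether it was retained for training."""
    if score >= threshold:
        data_container.setdefault("training_sequences", []).append(sequence)
        return True
    return False

container = {}
record_operator_feedback(["step A", "step B"], score=5, data_container=container)
record_operator_feedback(["step C"], score=2, data_container=container)
# Only the highly scored sequence is retained for retraining the model.
```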
Generally, the computer system can: as described above, correlate data in the procedure data container associated with the particular equipment unit with a set of risk language signals; link media (e.g., images, strings of text, video) in the procedure data container to the set of risk language signals; and autonomously generate media (e.g., audio, video, images, augmented reality) based on the set of risk language signals aggregated from the procedure data container to mitigate an operator's exposure to risk while performing procedures at the particular equipment unit. In particular, the computer system can: receive a guidance media request from an operator, such as by accessing a string of text from an operator device input by an operator performing steps of a procedure at the particular equipment unit; generate "ad-hoc" media to mitigate the operator's exposure to risk based on risk language signals detected in the guidance media request and the set of data in the procedure data container linked to these risk language signals; and transmit the media to the operator, such as by displaying text, images, or video at the operator device (e.g., headset, tablet, autonomous cart) and/or broadcasting audio at the operator device.
Additionally and/or alternatively, the computer system can: correlate sets of data extracted from a procedure data container associated with a particular equipment unit to a set of risk language signals; and, in response to a risk correlation exceeding a threshold correlation, generate guidance media (e.g., text, images, video, augmented reality, audio) based on the set of risk language signals in the procedure data container.
Thus, the computer system can then: generate a new digital procedure for the particular equipment unit based on the procedure authoring request accessed from the operator device; retrieve the guidance media corresponding to the set of risk language signals associated with the particular equipment unit; and transmit the new digital procedure and the guidance media to the operator device to support the operator during performance of the new digital procedure at the particular equipment unit.
To manage existing content and differentiate it from newly generated AI content, the computer system requires a method for adding meta-data to the instructional blocks as a layer that differentiates between: existing digital guidance content; raw captured digital guidance content, including recorded videos and captured images that will serve as digital guidance content; digital guidance content that was augmented, such as with augmented images, data, audio, AI-generated avatars, or other content types overlayed on a recorded or live-stream video feed; and digital guidance content that was completely AI-generated. This layer distinguishes content created by a human operator from content created utilizing generative AI on a computing system. The distinction is critical, and must be easy and ready to make, to ensure that content created utilizing generative AI is not used to hide, obscure, or falsify media content and/or supporting documentation that enters the batch approval process or regulatory review without a clear indication of where the content originated.
One method is for the computer system to create an instructional block for each set of content generated in or by the system. The instructional blocks for this content may contain a content audit trail layer holding the meta-data that signifies how the content was created, where it originated, and how it was changed, altered, or augmented. The instructional block audit trail layer can indicate whether the content was captured by the computer system from an actual event that occurred in reality, whether the original content was augmented in any way, whether the content was created utilizing an AI generation program within the computer system, whether the content was created by the computer system in a virtual reality environment or simulation (for training purposes), whether the content was created by an integrated program connected to the computer system, or whether the content was uploaded and created external to the computer system, with or without the meta-data and the ability to verify the validity of the information.
When an original piece of content is generated in the computer system, such as a captured image, audio file, recorded video feed, 3D volumetric capture, or other captured media type of an event, such as an operator performing a process step or task for later review by the Quality team at a company or by an external regulatory body, such as the United States Food and Drug Administration (FDA), the generated content creates an instructional block, which automatically creates a Non-Fungible Token (NFT), metadata tag, watermark, or hash of the content with date and time stamps to validate the authenticity of the content captured. This is utilized to verify and validate in the computer system that the content is the true, original, raw content as created by the computer system, and that it cannot be falsified or misrepresented in the computer system. The NFT, metadata, watermark, or hash for the captured content is available in the instructional block and can be accessed to compare the original captured media content with a version of the content that may have been altered, or with a different piece of content that may have been autogenerated by a generative AI program. The NFTs, metadata, watermarks, or hashes generated can be stored in a repository external to the computer system, such as a cloud-based storage system, where a company utilizing the computer system is not able to access the original NFT, metadata, watermark, or hash and raw content; in an on-premises (on-prem) or private cloud system, by contrast, the company may have the opportunity to change the files in the physical storage where they are kept. Storing the NFTs, metadata, watermarks, or hashes externally, in a cloud environment with multiple servers globally storing the information and read-only access, thus provides an additional layer of protection ensuring that the original content cannot be altered.
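The hashing variant of the tamper-evident seal described above can be sketched with a standard SHA-256 digest plus a timestamp and origin tag; the seal would then be stored read-only in the external repository. This is an illustrative sketch, not the specific NFT or watermarking scheme, and the field names are assumptions.

```python
import hashlib
from datetime import datetime, timezone

def seal_content(raw_bytes, origin):
    """Produce a tamper-evident seal for captured media: a SHA-256 digest of
    the raw content plus a UTC timestamp and an origin tag (e.g., 'captured',
    'augmented', 'ai_generated'), suitable for read-only external storage."""
    return {
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "origin": origin,
    }

def verify_content(raw_bytes, seal):
    """Re-hash the content and compare against the stored seal."""
    return hashlib.sha256(raw_bytes).hexdigest() == seal["sha256"]

# Hypothetical raw video bytes captured by the computer system.
video = b"\x00\x01raw-frames"
seal = seal_content(video, origin="captured")
ok = verify_content(video, seal)            # original content matches its seal
tampered = verify_content(video + b"x", seal)  # any alteration fails verification
```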
In a compliant and validated platform with a full audit trail, such as a 21 CFR Part 11 compliant audit trail, uploading a false AI-generated video, or a modified version of the original video, to misrepresent it as the original cannot be performed in the computer system. Even a legitimate use of the platform to alter a video would require an extensive audit trail of changes, required electronic signatures from senior-level managers at the company, and a stated justification for every change to ensure it is not being used in any attempt to falsify data. The computer system can prominently display the origin of the content and verify whether it has been altered in any way, providing a reviewer with: clarity as to whether this is the original content or it has been altered or augmented; access to the original raw content via the read-only NFTs, metadata, watermarks, or hashes stored in an external database; access to the instructional block layer providing the audit trail of the content, including the content origin; access to the layers of augmentations that may have been applied to the original content; or confirmation that the content was created by an AI generative program for the purposes of generating digital guidance content in support of a work process.
In one implementation, the computer system can: receive a guidance request from an operator to autonomously generate new media to support the operator in mitigating exposure to risk associated with performing steps of a procedure associated with the particular equipment unit; scan the guidance request for a first set of language signals; and identify a risk signal, in the first set of language signals, for the guidance request. The computer system can then: autonomously generate guidance media (e.g., audio, video, images) to mitigate exposure to a risk event (e.g., explosion, hazardous material exposure) associated with the risk signal in the guidance request; and transmit the guidance media to the operator, such as by displaying an image at a headset device, broadcasting audio at an autonomous cart proximal the particular equipment unit within the facility, and/or displaying text at a tablet device associated with the operator.
In one example, the computer system can: as described above, receive a guidance request from an operator device associated with the operator proximal the particular equipment unit, such as “How to resolve warning #100a923 indicated at bioreactor?”; scan the guidance request for a set of language signals; and interpret a risk signal in the set of language signals (e.g., warning #100a923) in the guidance request corresponding to a hazardous material warning at the bioreactor. Thus, the computer system can then: generate a prompt for a user to select a guidance media type (e.g., audio, video, images) for the risk signal indicated in the guidance request; and transmit this prompt to the operator device associated with the operator. The operator device can then: receive selection of the guidance media type, such as from the operator interacting with the operator device; and transmit this selection to the computer system.
In this example, the computer system can then: scan the equipment unit manual for the particular equipment unit for the risk signal; scan the set of instructional blocks associated with the particular equipment unit for the risk signal; extract strings of text, images, and charts from the equipment unit manual and the set of instructional blocks associated with the hazardous material risk signal for the bioreactor; aggregate the strings of text, images, and charts into a digital document (e.g., a PDF document); and transmit this document to the operator device associated with the operator.
Additionally and/or alternatively, the computer system can: implement a voiceover model (or “text-to-speech”) to convert strings of text in the digital document into an audio file; transmit this audio file to an operator device (e.g., tablet, smart glasses, autonomous cart) proximal the particular equipment unit; and broadcast this audio file, such as via a speaker coupled to the operator device toward the operator performing steps of the digital procedure proximal the particular equipment unit. Furthermore, the computer system can: query the record library for a particular record associated with the risk event of the guidance request; extract a video feed from the particular record depicting appropriate actions to mitigate exposure to the risk event for the particular equipment unit; and transmit the video feed to the operator device associated with the operator.
In an additional example, the ad-hoc generated guidance can be initiated by the computer system based on an event. The event can be manually entered into the computer system, automatically reported via an integrated system, or automatically detected by the computer system, and is assigned a risk score based on the severity of the event to the operator, end-user (patient), or product; the ad-hoc guidance is initiated when the risk score exceeds a set threshold.
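The risk-scoring gate can be sketched as follows, assuming hypothetical 1-5 severity scales for operator, patient, and product risk and a threshold of 3; the source does not specify the scale or aggregation rule.

```python
def risk_score(event):
    """Score an event's severity to the operator, end-user (patient), and
    product, each on a hypothetical 1-5 scale; the overall score is the
    maximum of the three."""
    return max(event["operator_risk"], event["patient_risk"], event["product_risk"])

def should_trigger_guidance(event, threshold=3):
    """Initiate ad-hoc guidance only when the risk score exceeds the threshold."""
    return risk_score(event) >= threshold

# Hypothetical dropped-filter event: minimal operator risk, high product risk.
dropped_filter = {"operator_risk": 1, "patient_risk": 2, "product_risk": 4}
trigger = should_trigger_guidance(dropped_filter)
```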
In this embodiment, the operator accidentally drops a sterilizing-grade filter capsule utilized for processing a sterile pharmaceutical drug product. The filter can no longer undergo filter integrity testing to prove the drug product has been made sterile (removal of bacteria from the product), as per regulatory requirements, using the standard filter integrity testing process. The event can be entered into the computer system manually, as the operator reports the deviation, or automatically, as the computer system visually detects, via object detection, that the filter capsule dropped onto the floor and images the filter capsule in a broken state in which the housing of the filter capsule is cracked and/or the vent valves on the side of the filter capsule have broken off. This type of event receives a minimal risk score for health risk to the operator (depending on the material that has been processed), but requires the proper PPE to be worn when handling the filter due to any potentially hazardous materials coming from the filter. The higher risk is to the batch of materials, which cannot be released without a passing filter integrity test result; this has the potential to require additional costs for processing the batch, in the case of being able to re-process the batch with an additional sterilizing-grade filter, or risks limiting patient supply of a critical drug product.
Once the event has occurred and the risk score to the operator, end-user, and product passes a specified threshold, the computer system automatically adds digital guidance onto the existing documentation for the operator to execute so they can recover from the occurrence of the event. This may include the computer system: automatically linking the existing procedure being executed to an already approved procedure for performing filter integrity testing on a dropped filter capsule; linking the existing procedure at the company, or within the broader connected manufacturing network (encompassing the corpus of facilities in the company's network, which may be internal or external to the organization), to a templated section in a different procedure which contains the instructions but requires the test specifications to be automatically updated for the filter capsule and process type; adding a new section to the existing procedure with a plurality of instructional blocks containing the instructions for testing the dropped capsule; or utilizing generative AI to create the series of steps for the operator to follow to recover from the dropped filter capsule based on the supplier's manual, web searches, and/or the trained procedure authoring model. After the computer system prepares the digital guidance instructions, the system may reach out to a supervisor for approval prior to sending the instructions to the operator to execute, such as by providing an emergency notification to review and sign off on the steps. In addition, the emergency notification may include a menu of options allowing the at least one supervisor to select the method for the operator to execute, to see where the digital guidance instructions added to the procedure are coming from, and to confirm that the information is correct prior to the operator performing the steps to recover from the event.
All of this is tracked in the computer system's audit trail to verify which supervisor signed-off using an electronic signature on which method, the justification for why a method was selected, the confirmation that the information was correct, and the inclusion of date and time stamps for each step.
In this example, to showcase the complexity of the multiple steps in the execution process, the test steps for the evaluation of the dropped filter capsule may include: the initial testing attempt; inserting the compromised filter capsule into an appropriately sized stainless steel filter housing with an adapter for the downstream connection of the filter capsule connector; flushing and pressurizing the filter with the testing fluid (if a product-wet integrity test is not utilized); connecting the filter integrity tester to the new filter assembly; running the filter integrity test with the correct test type and test parameters; recording the results of the filter integrity test; and reporting the results as part of the deviation report for review by the Quality group in an organization.
Therefore, the computer system can: receive (e.g., in real time) a guidance request from an operator device associated with a particular equipment unit within the facility; generate (e.g., ad-hoc) guidance media (e.g., video, images, audio) for the operator to mitigate exposure to a risk event associated with the guidance request received from the operator; and transmit this guidance media to an operator device associated with the operator.
In one implementation, during the initial time period, the computer system can: correlate sets of data in the procedure data container—associated with a particular equipment unit within the facility—to a set of risk signals, as described above; interpret a set of risk events based on subsets of risk signals, in the set of risk signals, associated with the particular equipment unit; and autonomously generate guidance media to support an operator interfacing with the particular equipment unit in mitigating exposure to these risk events.
In this implementation, the computer system can then: receive a procedure authoring request from the operator device associated with the operator; detect an action signal in the procedure authoring request; and detect a risk signal in the procedure authoring request associated with a particular risk event. Thus, during a deployment period, the computer system can then: generate a new procedure, as described above, based on the action signal and the procedure authoring model; retrieve the guidance media associated with the risk event previously generated by the computer system; and transmit the guidance media to the operator device associated with the operator. The operator device can thus load the new procedure and the guidance media for display at the operator device.
For example, the computer system can: interpret an incendiary risk event based on a first subset of risk signals containing "combustible", "caution: heat", and/or "flammable material"; interpret a contamination risk event based on a second subset of risk signals containing "hazardous", "do not mix", and/or "isolation"; and interpret a calibration risk event based on a third subset of risk signals containing "update calibration", "out of specification", and/or "calibration warning". The computer system can then, during the initial time period: generate an audible alert warning the operator of the incendiary risk event based on the first subset of risk signals; generate a warning image depicting the contamination risk event based on the second subset of risk signals; and generate a set of instructional blocks for calibrating the equipment unit based on the third subset of risk signals and the procedure authoring model. The computer system can then: aggregate the audible alert, the warning image, and the set of instructional blocks for calibrating the equipment unit into a guidance media container associated with the particular equipment unit; and, in response to detecting the set of risk signals in the procedure authoring request, transmit the guidance media container to the operator device associated with the operator.
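The mapping from risk-signal subsets to risk events in the example above can be sketched as keyword-set matching; the event names and signal phrases are taken from the example, while the matching logic itself is an assumption.

```python
# Risk events and their associated signal subsets, per the example above.
RISK_EVENTS = {
    "incendiary": {"combustible", "caution: heat", "flammable material"},
    "contamination": {"hazardous", "do not mix", "isolation"},
    "calibration": {"update calibration", "out of specification", "calibration warning"},
}

def interpret_risk_events(detected_signals):
    """Map detected risk signals to the risk events whose signal subsets
    contain at least one of them."""
    detected = {s.lower() for s in detected_signals}
    return sorted(event for event, signals in RISK_EVENTS.items()
                  if detected & signals)

events = interpret_risk_events(["flammable material", "out of specification"])
# events == ["calibration", "incendiary"]
```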
Therefore, during the deployment period, the computer system can: detect a risk signal in a procedure authoring request for a particular equipment unit received from an operator device; retrieve guidance media—previously generated by the computer system—associated with the risk signal and the particular equipment unit; and concurrently load a new digital procedure and the guidance media at an operator device associated with the operator interfacing with the particular equipment unit.
As described in U.S. patent application Ser. No. 17/968,677, the computer system can: access a live video feed from an optical sensor proximal the particular equipment unit depicting an operator performing steps of a digital procedure; extract visual features from the live video feed; and interpret a deviation between a first step in the digital procedure—currently being performed by the operator—and a target step in the digital procedure based on differences between the visual features extracted from the live video feed and target visual features defined in the target step of the digital procedure. In one implementation, in response to interpreting a deviation exceeding a threshold deviation, the computer system can then: correlate the deviation to a set of risk signals associated with the particular equipment unit within the facility; generate guidance media (e.g., video, images, audio), as described above, based on the set of risk signals; and transmit, in real time, the guidance media to the operator device to support the operator in mitigating a risk event associated with the set of risk signals.
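The deviation measure can be sketched as a distance between the live and target visual feature vectors. The feature values, the cosine-distance metric, and the threshold below are hypothetical; the referenced application is not quoted here and may define the comparison differently.

```python
import math

def deviation(live_features, target_features):
    """Deviation between visual features extracted from the live feed and the
    target features defined for the current step: 1 - cosine similarity."""
    dot = sum(a * b for a, b in zip(live_features, target_features))
    norm = math.sqrt(sum(a * a for a in live_features)) * \
           math.sqrt(sum(b * b for b in target_features))
    return 1.0 - dot / norm if norm else 1.0

THRESHOLD = 0.2  # hypothetical deviation threshold

# Hypothetical feature vectors: the live frame diverges from the target step.
live = [0.2, 0.9, 0.1]
target = [1.0, 0.0, 0.0]
flag = deviation(live, target) > THRESHOLD  # deviation exceeds the threshold
```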
For example, the computer system can: interpret a deviation during performance of the digital procedure by the operator corresponding to dropping a hazardous material on the floor during performance of the digital procedure; and link a set of risk signals (e.g., spill, hazardous material) to the deviation from the digital procedure. Thus, as described above, the computer system can autonomously generate guidance media based on the set of risk signals, such as an audible alert for a user to step away from the spill, a visual alert notifying a user to maintain a target distance from the spill, and/or augmented reality guidance defining a boundary about the spill. Therefore, the computer system can: autonomously detect deviations between a performed procedure within the facility and a target digital procedure assigned to the operator; correlate known risk signals associated with a particular risk event to the deviation; and autonomously generate guidance media to support an operator in mitigating the risk event during performance of the procedure within the facility.
In an alternate embodiment, the computer system can utilize existing content, such as video recordings of one or more operators performing the task and demonstrating a specific technique that the operator needs to perform in the auto-generated digital guidance. In this embodiment, the computer system forms a digital model (such as a wireframe or skeletal diagram of all body movements and positions) of the operator, or of a compilation of a corpus of operators, performing the task and then overlays the action with a digital avatar performing the action.
The conversion of multiple video recordings of operators performing a task into a digital avatar virtualization using generative AI begins with collecting the video recordings and preparing them for processing, including cleaning the data and removing any irrelevant footage. The video data then undergoes annotation for object recognition, to identify key features in the scene and the actions performed by the operator(s); this can include identifying specific motions, gestures, and tool usage. The annotated video data is then used to train the generative AI model, which learns to recognize patterns and identify key features in the video data. Once the AI model is trained, it can be used to generate a digital avatar that accurately reflects the movements and actions of the operator. The avatar can be customized to match the operator's physical characteristics and can be viewed from multiple angles. The generated avatar is tested and refined to ensure that it accurately reflects the operator's movements and actions; any errors or inaccuracies are identified and corrected in the AI model. The digital avatar performing the task can be reviewed by a reviewer and/or supervisor prior to inclusion as digital guidance content. The final digital avatar is deployed for use in training videos or virtual training environments, and the digital avatar training can be scored or rated, based on feedback, for accuracy and relevance to the digital guidance content for the procedure it is associated with.
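One ingredient of the skeletal-model step above, combining keypoints from a corpus of operator recordings into a single canonical pose, can be sketched as a per-joint average. The joint names, the (x, y) keypoint representation, and the averaging strategy are illustrative assumptions; a production system would operate on pose-estimation output over full motion sequences.

```python
def average_skeleton(recordings):
    # Each recording is a dict mapping joint name -> (x, y) keypoint.
    # Average each joint across all recordings to form one canonical
    # skeletal pose that the digital avatar can be rigged against.
    joints = recordings[0].keys()
    canonical = {}
    for joint in joints:
        xs = [r[joint][0] for r in recordings]
        ys = [r[joint][1] for r in recordings]
        canonical[joint] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return canonical
```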
The advantages of a digital avatar over a live video of an operator performing the same function are: to take only the best elements of the technique performed in order to standardize the work process; to anonymize the operator or operators whose techniques were captured during execution of the task; to remove the potential for the digital guidance content to be removed if an operator leaves the company and submits a right-to-be-forgotten notice to comply with GDPR requirements in Europe; and to ensure the computer system provides the highest quality representation of the digital guidance across all platforms, including tablets, Augmented Reality (AR) devices, Virtual Reality (VR) devices, Mixed Reality (MR or XR) devices, holographic devices, or other device types.
In one example, the computer system can: generate new instructional blocks of a digital procedure based on a particular action signal detected in a procedure authoring request and a procedure authoring model for a particular equipment unit; and generate guidance media associated with operation of the particular equipment unit based on a particular risk signal detected in the procedure authoring request and the procedure authoring model. In this example, the computer system interfaces with an autonomous cart and an operator device associated with an operator to deliver the new instructional blocks and the guidance media to the user. In particular, the computer system can: transmit the new instructional blocks to a tablet device associated with the operator within the facility performing steps of a digital procedure at the particular equipment unit; and transmit the guidance media to an autonomous cart assigned to the operator and arranged proximal the particular equipment unit during performance of the steps of the digital procedure. Thus, the operator can concurrently interface with the autonomous cart and the operator device during performance of the digital procedure to: review steps of the digital procedure currently being performed by the operator; and review risk events associated with performance of the digital procedure at the particular equipment unit.
In alternate embodiments, the computer system can generate new instructional blocks that summon an autonomous cart to receive a loadout of new supplies (loaded manually by a human operator and/or by an autonomous robotic system) and autonomously deliver them to an operator within the facility as part of the execution of new tasks assigned to the operator by the computer system.
In other alternate embodiments, the computer system can generate new instructional blocks, based on a safety risk to the operator, that summon an autonomous cart to receive a loadout of new safety supplies (loaded manually by a human operator and/or by an autonomous robotic system) and autonomously move into position in proximity to the operator until the operator has completed the tasks posing a high safety risk. Furthermore, in the event the computer system detects that a safety event or health emergency has occurred with one or more operators, the autonomous cart can deliver emergency supplies to an operator within the facility or to a person assigned by the computer system to assist the operator. In addition, the computer system can generate new instructional blocks to summon an autonomous cart to position itself between the operator and a hazard within the facility, preventing the operator from moving into a position where they could potentially be injured by a risk event, such as opening a door to equipment that has not been shut down for lockout/tagout, a chemical or biohazardous spill on the floor, a collapsed scaffolding, or another facility hazard. In higher-risk scenarios where an operator's life is at risk, the computer system can also generate instructional blocks to summon the autonomous cart to position itself as a barrier, providing more time for the operator to escape from a high-risk event, such as a fire, gas leak, toxic chemical release, explosion risk, or an active shooter in the facility.
Similarly, the computer system can implement steps and techniques described above to: link sets of data—extracted from transcript documents (e.g., consultant transcripts) and regulation manuals (e.g., health regulation manual)—in the data container to a set of language signals representing language concepts corresponding to a regulation convention for a particular equipment unit within the facility; and train a model to interpret compliance of a sequence of steps to the regulation convention for the particular equipment unit within the facility based on the set of language signals and existing digital procedures (e.g., verified digital procedures) currently performed within the facility.
In one implementation, the computer system can: access an unverified draft instructional block containing a sequence of steps, such as authored by an operator and/or generated by a procedure authoring model and/or the agentic authoring model; scan the unverified draft instructional block for a set of language signals; correlate an equipment unit language signal, in the set of language signals, with a particular equipment unit proximal a workspace within the facility; and, in response to correlating the equipment unit language signal with the particular equipment unit, retrieve a procedure verification model representative of a procedure convention for the particular equipment unit.
The computer system can then correlate a regulation language signal, in the set of language signals, with the regulation convention for the particular equipment unit. In particular, the computer system can open an instance with the agentic platform, where the platform creates multiple instances of AI agents taking on the roles of Manufacturing Director, Head of Quality, Head of Environmental Health and Safety, an Operator agent, an agent representing the regulation manual source materials, and an agent representing the equipment manual for the particular equipment unit. Based on the regulation language signal and the agentic procedure verification model, the AI agent instances confer, come to a consensus, and provide a consensus response. This consensus response is interpreted as a procedure verification score for the sequence of steps specified in the unverified draft instructional block, representative of a degree of compliance of the sequence of steps with regulation conventions (e.g., health, safety, environmental regulations) adhered to at the facility; by taking the various viewpoints into account, the agentic platform reaches a consensus response approximating the conclusion that would be reached by human employees acting in those roles.
Accordingly, in response to the procedure verification score exceeding a threshold verification score, the computer system can: transform the unverified draft instructional block containing the sequence of steps to a verified instructional block; and store the verified instructional block in the instructional block library. Alternatively, in response to the procedure verification score falling below the threshold verification score, the computer system can flag the unverified draft instructional block for manual review by a human supervisor overseeing performance of digital procedures within the facility.
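The consensus-and-threshold flow of the preceding two paragraphs can be sketched as follows. The weighted-average aggregation, the role names, and the threshold value are illustrative assumptions; the actual agentic platform may aggregate agent viewpoints quite differently.

```python
def consensus_verification_score(votes, weights):
    # Combine per-agent compliance scores (0.0-1.0) into one consensus
    # procedure verification score using per-role priority weights.
    total = sum(weights[role] for role in votes)
    return sum(votes[role] * weights[role] for role in votes) / total

def route_draft_block(score, threshold=0.8):
    # Promote the draft block to verified when the consensus score
    # meets the threshold; otherwise flag it for human review.
    return "verified" if score >= threshold else "manual_review"
```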
Therefore, the computer system and agentic platform can: retrieve an unverified draft instructional block containing a sequence of steps for execution at a particular equipment unit at a facility; and implement the agentic procedure authoring and/or verification model to automatically transform the unverified draft instructional block to a verified instructional block to enable operators to perform the sequence of steps within the facility.
The computer system and agentic platform can implement computer vision techniques, such as those described in U.S. patent application Ser. No. 17/968,677, filed on 18 Oct. 2022, which is hereby incorporated entirely by this reference to: access a live video feed depicting an operator performing an instance of a digital procedure within the facility; extract a set of visual features from the live video feed; and detect objects (e.g., equipment units) and/or interpret operator actions during performance of the digital procedure based on the set of visual features extracted from the live video feed. Accordingly, the computer system can then: detect deviations between objects (e.g., equipment units) handled by the operator during a current instance of the digital procedure and previous instances of the digital procedure performed at the facility; and/or detect deviations between operator actions (e.g., operator movement, duration of time) executed by the operator during the current instance of the digital procedure and previous instances of the digital procedure performed at the facility.
In one implementation, the computer system can: access a particular digital procedure scheduled for performance at the facility by an operator; receive a prompt from an operator to initialize a first instructional block in the particular digital procedure; and extract an object manifest from the first instructional block corresponding to approved objects (e.g., equipment units) within the facility for performing the first instructional block. The computer system can then: access a live video feed, such as from an optical sensor mounted at a headset device associated with the operator and/or mounted to an autonomous cart proximal the operator depicting the operator performing steps of the first instructional block; extract a set of visual features, such as from a particular frame in the live video feed depicting the operator at an assigned workspace within the facility; and detect a set of objects (e.g., equipment units) in the particular frame handled by the operator at the workspace during performance of the first instructional block.
Accordingly, the computer system can then: detect deviations between the set of objects handled by the operator depicted in the live video feed and the manifest of objects corresponding to approved objects within the facility for performing the first instructional block; in response to detecting the deviation, create an instance with the agentic platform in which multiple AI agents review the detected deviation and determine a consensus response; generate, from this consensus response, a prompt for the operator to review the set of objects at the assigned workspace prior to proceeding with performance of the first instructional block; and present the prompt to the operator, such as at an operator device associated with the operator. In one example, the computer system can: based on visual features extracted from a particular frame in the live video feed, identify a first 250-milliliter flask and a second 250-milliliter flask handled by the operator; and detect a deviation between a 500-milliliter flask specified in the object manifest for the first instructional block and the first 250-milliliter flask and second 250-milliliter flask detected in the live video feed. The response from the agentic platform is to initiate the agentic authoring model to modify the downstream procedure steps yet to be performed, separating the steps for a single 500-milliliter flask into steps for two 250-milliliter flasks while recalculating the additions for each of these steps, so that the operator does not need to manually perform the mathematical calculations, which would increase the risk of a mistake as process execution continues toward completion of the procedure.
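The manifest comparison and addition-splitting described in this example can be sketched as follows. The object labels, the multiset comparison, and the equal-split recalculation are illustrative assumptions for the two-flask case, not the claimed implementation.

```python
from collections import Counter

def detect_manifest_deviation(detected, manifest):
    # Compare object labels detected in the frame against the approved
    # object manifest; return (missing, extra) as lists of labels.
    det, man = Counter(detected), Counter(manifest)
    missing = list((man - det).elements())
    extra = list((det - man).elements())
    return missing, extra

def split_additions(additions_ml, n_vessels):
    # Recalculate per-step reagent additions when one vessel is
    # replaced by n smaller vessels of equal total capacity.
    return [round(a / n_vessels, 3) for a in additions_ml]
```

For the example above, two detected 250-milliliter flasks against a manifest listing one 500-milliliter flask yield one missing and two extra objects, and each downstream addition is halved per vessel.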
Subsequent procedures linked to the primary procedure modified by the agentic authoring model may additionally be modified to accommodate the new materials, such as procedures within the Quality Control microbiology lab needing to perform testing based on two 250-milliliter flasks instead of a single 500-milliliter flask, or the Quality Department needing to provide release specifications based on the new number of containers and split product volumes. The agentic system may additionally write up the deviation report and provide an audit log of all downstream steps that were affected and required modification to continue execution of the process unabated.
In another implementation, the computer system can: access a live video feed, such as from an optical sensor mounted at a headset device associated with the operator and/or mounted to an autonomous cart proximal the operator, depicting the operator performing steps of the first instructional block; extract a set of visual features across a set of frames in the live video feed depicting the operator at an assigned workspace within the facility; and track motions and/or paths of objects handled by the operator based on the set of visual features during performance of the first instructional block. The computer system can then: detect a deviation between an object path for a particular object tracked in the live video feed and a target object path for the particular object recorded for previous instances of the digital procedure at the facility; and correlate the deviation to a particular risk event (e.g., spill event, fire event) proximal the workspace based on the object path for the particular object. For example, the computer system can: interpret a deviation corresponding to a flask (e.g., measuring flask) falling proximal the workspace and spilling contents (e.g., hazardous liquids) across the floor of the facility; engage the agentic platform to create AI agents for EH&S, Quality, the Operator, and OSHA regulations; reach a conclusion on the risk the operator may face from the spill event based on the materials present in the flask, airflow changes in that area of the facility, and the personal protective equipment (PPE) the operator is wearing; provide a risk score to the operator; and provide operator guidance to walk away from the spill for a set period of time, treat the spill with a chemical disinfectant for a specified period of time, and then squeegee it into a floor drain for further deactivation and treatment.
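The object-path deviation check in this implementation can be sketched as a mean distance between a tracked path and the target path from previous instances. The 2-D sample points, the mean-distance metric, and the single spill threshold are illustrative assumptions; a real tracker would compare full trajectories with time alignment.

```python
import math

def path_deviation(observed_path, target_path):
    # Mean per-sample distance between the tracked object path and the
    # target path recorded from previous instances of the procedure.
    dists = [math.dist(p, q) for p, q in zip(observed_path, target_path)]
    return sum(dists) / len(dists)

def classify_risk_event(deviation, spill_threshold):
    # Correlate the measured path deviation with a risk event label.
    return "spill_event" if deviation > spill_threshold else "nominal"
```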
The computer system then correlates the deviation to a spill event associated with the current instance of the first instructional block performed by the operator and utilizes the agentic authoring model to provide the new steps the operator needs to take to recover from the deviation: obtaining a new flask and filling it with the same materials in order to return to the step at which the flask was dropped and spilled, thereby completing execution of the procedure step in the most optimal way possible. Additionally or alternatively, the computer system can leverage a suite of sensors (e.g., temperature sensors, pressure sensors, proximity sensors) arranged proximal the operator to interpret the deviation in the current instance of the digital procedure performed by the operator.
Therefore, the computer system can: maintain real-time contextual awareness of objects (e.g., equipment units) handled by the operator during performance of the digital procedure by the operator within the facility; and, in response to detecting a deviation between a current instance of the digital procedure performed by the operator and previous instances of the digital procedure at the facility, notify the operator of the deviation in order to enable the operator to recover from the deviation.
In alternate embodiments, the agentic platform may create an instance of AI agents to determine whether an observed change, based on sensing data such as from a camera, another sensor system, or a fusion of data from a suite of sensors, qualifies as a deviation and what level of response is required. The AI agents can come to a consensus that an observed change is a minor issue that does not rise to the threshold level for reporting as a process deviation, or that the observed change is in fact a deviation; come to a consensus on whether the deviation requires reporting to a supervisor; and engage the agentic authoring platform to determine the best consensus response to recover from the deviation.
Generally, the computer system can: receive a prompt from an operator to initialize a first instructional block for an approved digital procedure assigned to the operator within the facility; access a live video feed from an optical sensor (e.g., headset, autonomous cart) arranged proximal to the operator and depicting the operator performing the first instructional block of the approved digital procedure with a particular equipment unit within the facility; and, as described above, detect a deviation between a current instance of the first instructional block of the approved procedure performed by the operator and previous instances of the first instructional block performed by operators at the facility. The computer system can then: create an instance with the agentic platform; have the ensemble of AI agent experts come to a consensus conclusion on next steps; utilize the agentic authoring model to generate procedure steps corresponding to the particular equipment unit for the first instructional block; and, based on the deviation from the current instance of the first instructional block and the agentic procedure authoring model, generate an unverified draft instructional block containing a sequence of steps and guidance enabling the operator to resolve the deviation and proceed with performance of the approved digital procedure while taking into consideration the multiple viewpoints from the agentic platform.
In particular, the computer system can: based on the procedure authoring model, generate the unverified draft instructional block in response to detecting a deviation during a current instance of an approved digital procedure within the facility; implement the instructional block library and the agentic verification model to automatically transform the unverified draft instructional block into a verified instructional block for execution by the operator; extract guidance (e.g., video guidance, text guidance), as described above, generated by the agentic authoring model from the verified instructional block; and present the guidance to the operator, such as at an operator device associated with the operator performing the approved digital procedure. Thus, the computer system and agentic platform can: enable operators to—in real time—generate a sequence of steps to resolve deviations during an instance of a digital procedure performed at the facility; and automatically verify the generated sequence of steps in order to execute the sequence of steps in real time during performance of the digital procedure.
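The draft-verify-present flow in this paragraph can be sketched as a small pipeline. The function names and the callable-based structure are illustrative assumptions; the actual verification and guidance-extraction stages are the models described elsewhere in this section.

```python
def resolve_deviation_pipeline(draft_steps, verify, extract_guidance, present):
    # Verify a drafted step sequence, extract its guidance media, and
    # present the guidance to the operator device; return None when
    # verification fails so the draft can be routed to manual review.
    verified = verify(draft_steps)
    if verified is None:
        return None
    present(extract_guidance(verified))
    return verified
```

A stand-in verifier that approves the draft as-is illustrates the flow; in the described system, `verify` would apply the agentic verification model and `extract_guidance` would pull the generated video or text guidance from the verified block.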
In one implementation, the computer system can: receive a prompt from an operator to initialize a first instructional block, in a digital procedure, corresponding to a particular equipment unit proximal a workspace assigned to the operator within the facility; extract a first instruction—in a particular format (e.g., text format, video format)—from the first instructional block; and present the first instruction in the particular format at an operator device (e.g., tablet, headset) associated with the operator performing the digital procedure. The computer system can then: access live video feed from an optical sensor (e.g., camera) arranged proximal the workspace within the facility and depicting the operator performing the first instruction from the first instructional block; extract a first set of visual features from a first frame in the live video feed depicting the operator and the particular equipment unit associated with the first instructional block; and implement steps and techniques described above to detect a deviation from the first instruction described in the first instructional block based on the first set of visual features from the first frame of the live video feed. Accordingly, the computer system can then: generate a notification containing the deviation and alerting the operator to terminate execution of the first instruction of the first instructional block; and present the notification at the operator device.
In this implementation, in response to termination of the first instruction, the computer system can then: access the agentic authoring model corresponding to the particular equipment unit specified in the first instructional block; create an AI instance with an ensemble of AI agent experts to come to a consensus response and generate a prompt requesting the operator to input an agentic model authoring request (e.g., text input, voice commands) to resolve the deviation detected in the live video feed of the operator performing the first instructional block; and initialize the prompt at the operator device (e.g., headset, tablet). The operator device (e.g., headset, tablet) can then: receive input of the agentic model authoring request from the operator interfacing with the operator device; and transmit the agentic model authoring request to the computer system in order to implement the agentic authoring model to create the procedure steps.
Accordingly, the computer system can then: scan words, phrases, and images in the agentic model authoring request for a set of language signals; correlate a first language signal, in the set of language signals, with a first action prompt related to the particular equipment unit; and, based on the first action signal and the agentic authoring model, generate 1) a sequence of steps (e.g., text document) predicted to resolve the deviation of the first instructional block and 2) guidance media (e.g., images, video, augmented reality) to support the operator in performing the sequence of steps predicted to resolve the deviation. Furthermore, the computer system can: initialize a second instructional block containing the sequence of steps and the guidance media generated from the agentic authoring model; and modify the current instance of the digital procedure performed by the operator to include the second instructional block to resolve the deviation detected in the live video feed and enable the operator to complete the current instance of the digital procedure.
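The language-signal scan described here can be sketched as a keyword lookup over the authoring request. The signal vocabulary and its keyword-to-signal mapping are illustrative assumptions; the described system would use a trained language model rather than a fixed dictionary.

```python
# Hypothetical mapping from action keywords to action language signals.
ACTION_SIGNALS = {
    "calibrate": "calibration_action",
    "clean": "cleaning_action",
    "replace": "replacement_action",
}

def scan_language_signals(request_text):
    # Scan an authoring request for known action keywords and return
    # the corresponding language signals in order of appearance.
    signals = []
    for word in request_text.lower().split():
        token = word.strip(".,!?")
        if token in ACTION_SIGNALS:
            signals.append(ACTION_SIGNALS[token])
    return signals
```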
Therefore, the computer system can: present the sequence of steps predicted to resolve the deviation at the operator device, such as in the form of a text document at an interactive display coupled to the operator device; and present the guidance media (e.g., images, video) at the interactive display in order to enable the operator to perform the sequence of steps.
For example, the computer system can: access live video feed from an optical sensor (e.g., camera) arranged proximal the workspace within the facility and depicting the operator performing the first instruction from the first instructional block; and implement steps and techniques described above to interpret a deviation from the first instruction in the first instructional block corresponding to a first equipment unit proximal the workspace in a non-calibrated condition. In this example, the operator can then interface with the operator device (e.g., headset, tablet) to: input a procedure authoring request (e.g., text input) corresponding to calibration of the first equipment unit; and transmit the procedure authoring request to the computer system.
Accordingly, the computer system can then: scan the procedure authoring request for a set of language signals; correlate a first language signal, in the set of language signals, with a calibration action signal related to the first equipment unit; create an instance with the agentic platform, which decides which AI agent instances to create and the priority scoring for each AI agent role, and creates instances of multiple AI agents to act as an ensemble of experts, including AI agents for calibration, such as a metrology expert, and an AI agent for the equipment being calibrated, to come to a consensus on the next actions; and, based on the calibration action signal and the agentic authoring model, generate 1) a sequence of calibration steps predicted to transition the first equipment unit from a non-calibrated condition to a calibrated condition and 2) guidance media (e.g., images, videos) to support the operator in performing the sequence of calibration steps. Therefore, the computer system can: present the sequence of steps and the guidance media to the operator device in order to enable the operator to perform the sequence of steps to resolve the deviation; and, in response to confirming the deviation as resolved, initialize a next instructional block, in a sequence of instructional blocks, in the digital procedure to enable the operator to complete the digital procedure.
In one implementation, the computer system can: access a live video feed from an optical sensor (e.g., camera) arranged proximal the workspace within the facility and depicting the operator performing the first instruction from the first instructional block with a particular equipment unit; extract a first set of visual features from a first frame in the live video feed depicting the operator and the particular equipment unit associated with the first instructional block; and implement steps and techniques described above to detect a risk event (e.g., safety risk, environmental risk) associated with the first instructional block and the particular equipment unit proximal the workspace. In this implementation, the computer system can then: in response to detecting the risk event in the live video feed, create an instance with the agentic platform, which decides which AI agent instances to create and the priority scoring for each AI agent role, and creates instances of multiple AI agents to act as an ensemble of experts to come to a consensus on the next steps to respond to the risk event; initialize an agentic authoring model associated with the first equipment unit; correlate the risk event to a first risk language signal associated with execution of the first instruction with the particular equipment unit; and, based on the first risk language signal and the agentic authoring model, generate 1) a sequence of steps predicted to resolve the risk event proximal the workspace and 2) guidance media (e.g., images, videos) to support the operator in performing the sequence of steps to resolve the risk event and enable the operator to proceed with performance of the digital procedure.
For example, the computer system can: access a live video feed from an optical sensor (e.g., camera) arranged proximal the workspace within the facility and depicting the operator performing the first instruction from the first instructional block; and implement steps and techniques described above to interpret a spill event (e.g., hazardous material spill) of contents within the particular equipment unit proximal the workspace. Accordingly, the computer system can then: correlate the spill event to a risk language signal associated with a hazardous exposure risk proximal the workspace; based on the risk language signal, create an instance with the agentic platform, which decides which AI agent instances to create and the priority scoring for each AI agent role, and creates instances of multiple AI agents to act as an ensemble of experts to come to a consensus on the next steps to respond to the risk event; initialize an agentic authoring model; generate a sequence of steps predicted to resolve the spill event (e.g., clean-up instructions for the spill); initialize a second instructional block containing the sequence of steps predicted to resolve the spill event; and modify a current instance of the digital procedure to contain the second instructional block following the first instructional block in order to resolve the spill event and enable the operator to complete the digital procedure based on the input and consensus from multiple AI agents.
Furthermore, the computer system can: based on the risk language signal and the agentic authoring model, generate guidance media (e.g., images, video) to support the operator in performing the sequence of steps to resolve the spill event; and populate the second instructional block with the guidance media.
Therefore, the computer system can: detect a risk event in a live video feed depicting an operator performing a current instance of a digital procedure proximal an assigned workspace within the facility; implement the agentic platform and the agentic authoring model to generate a sequence of steps predicted to resolve the risk event detected in the live video feed; and modify a current instance of the digital procedure performed by the operator to include the sequence of steps predicted to resolve the risk event in order to enable the operator to proceed with execution of the digital procedure. Additionally, in response to confirming the risk event as resolved, the computer system can initialize a next instructional block in the digital procedure to enable the operator to continue performance of the digital procedure.
In one implementation, the computer system can: detect a deviation from a first instruction described in a first instructional block based on the first set of visual features from a live video feed; as described above, create an instance with the agentic platform, which decides which AI agent instances to create and the priority scoring for each AI agent role, and creates instances of multiple AI agents to act as an ensemble of experts to detect the deviation event and provide a consensus response to it; respond to a procedure authoring request using the agentic authoring model; and generate a sequence of steps configured to resolve the deviation and enable the operator to resume performance of the digital procedure. In this implementation, the sequence of steps generated by the agentic authoring model to resolve the deviation detected in a current instance of a digital procedure may correspond to an unverified sequence of steps (i.e., steps unapproved for the facility) that can fall out-of-specification from regulations (e.g., health, safety regulations) and procedure convention within a facility.
Thus, the computer system can: initialize an unverified draft instructional block containing the sequence of steps generated from the procedure authoring model and the procedure authoring request; implement steps and techniques, such as described in U.S. application Ser. No. 18/234,808, filed on 16 Aug. 2023, to transform the unverified draft instructional block into a verified instructional block for implementation at the facility; and, via the agentic authoring model, modify a current instance of the digital procedure to include the verified instructional block, thereby enabling the operator to resolve the deviation and complete execution of the current instance of the digital procedure.
In one implementation, the computer system can: receive a procedure authoring request to resolve a deviation in a current instance of the digital procedure from an operator device associated with the operator; scan words, phrases, and images in the procedure authoring request for a set of language signals; correlate a first language signal, in the set of language signals, with a first action prompt related to the particular equipment unit; and, based on the first action prompt, create an instance with the agentic platform, which selects the AI agent instances to create and a priority score for each AI agent role, instantiate multiple AI agents to act as an ensemble of experts that reaches a consensus on the next steps to respond to the deviation, and, via the agentic authoring model, generate: 1) a sequence of steps (e.g., a text document) predicted to resolve the deviation of the first instructional block; and 2) guidance media (e.g., images, video, augmented reality) to support the operator in performing the sequence of steps predicted to resolve the deviation.
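One simple way to realize the "scan for language signals, then correlate each signal with a prompt" step is a lexicon lookup over the tokens of the authoring request, as in the Python sketch below. The lexicon contents, unit identifiers, and signal types are hypothetical; the specification leaves the signal-extraction mechanism open (and a learned language model could replace the lookup).

```python
import re

# Hypothetical signal lexicon: maps surface phrases in an authoring request
# to (signal type, correlated prompt) pairs.
SIGNAL_LEXICON = {
    "spill": ("risk", "spill-containment"),
    "calibrate": ("action", "calibration"),
    "bioreactor": ("equipment", "bioreactor-unit-07"),
}

def scan_language_signals(request_text: str):
    """Scan words and phrases in a procedure authoring request and return
    the language signals found, in order of appearance."""
    tokens = re.findall(r"[a-z]+", request_text.lower())
    return [SIGNAL_LEXICON[t] for t in tokens if t in SIGNAL_LEXICON]

signals = scan_language_signals(
    "Spill near the bioreactor; calibrate the level sensor before resuming."
)
# → [('risk', 'spill-containment'), ('equipment', 'bioreactor-unit-07'),
#    ('action', 'calibration')]
```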
Accordingly, the computer system can then: access an instructional block library containing a set of verified instructional blocks associated with approved digital procedures performed within a facility; initialize an unverified draft instructional block containing the sequence of steps predicted to resolve the deviation and the guidance media (e.g., images, videos) for the sequence of steps; and detect a set of language signals in the unverified draft instructional block. More specifically, the computer system can: correlate an equipment unit language signal, in the set of language signals, with a particular equipment unit located proximal the workspace and the operator within the facility; correlate an action language signal, in the set of language signals, with a first action prompt (e.g., calibration) related to the particular equipment unit; and correlate a regulation language signal, in the set of language signals, with a regulation prompt (e.g., a health or safety regulation) related to the particular equipment unit and associated with the facility. Thus, the computer system can identify a first verified instructional block, in the set of verified instructional blocks contained in the instructional block library, as analogous to the unverified draft instructional block in response to the first verified instructional block including language signals associated with the particular equipment unit, the first action prompt, and the regulation prompt.
The computer system can then insert the verified instructional block, in place of the unverified draft instructional block, in the current instance of the digital procedure to enable the operator to resolve the deviation and complete execution of the digital procedure at the facility.
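The "analogous block" test above can be read as a set-containment check: a verified block is analogous to the draft when its language signals cover the draft's equipment, action, and regulation signals. The Python sketch below illustrates that reading; the block identifiers, signal values, and the containment criterion itself are hypothetical choices, not prescribed by the specification.

```python
def signal_set(block):
    """Collect the (signal type, prompt) pairs detected in a block."""
    return set(block["signals"])

def find_analogous_verified_block(draft, library):
    """Return the first verified block whose language signals cover the
    equipment, action, and regulation signals of the unverified draft."""
    required = signal_set(draft)
    for verified in library:
        if required <= signal_set(verified):
            return verified
    return None  # no analogous block: fall back to the verification model

library = [
    {"id": "VB-12", "signals": [("equipment", "mixer-03"),
                                ("action", "calibration"),
                                ("regulation", "osha-lockout")]},
    {"id": "VB-31", "signals": [("equipment", "bioreactor-07"),
                                ("action", "calibration"),
                                ("regulation", "osha-lockout"),
                                ("regulation", "gmp-cleaning")]},
]
draft = {"id": "draft-1", "signals": [("equipment", "bioreactor-07"),
                                      ("action", "calibration"),
                                      ("regulation", "osha-lockout")]}
match = find_analogous_verified_block(draft, library)
# match["id"] == "VB-31": the only verified block covering all three signals
```

A production system would likely rank partial matches (e.g., by signal overlap) rather than require strict containment.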
Therefore, the computer system can: detect a deviation in a current instance of the digital procedure performed by an operator with a particular equipment unit proximal a workspace within a facility; create an instance with the agentic platform, which selects the AI agent instances to create and a priority score for each AI agent role, and instantiate multiple AI agents to act as an ensemble of experts that reaches a consensus on the next steps to respond to the deviation; implement the agentic authoring model to generate an unverified sequence of steps predicted to resolve the deviation; identify a verified instructional block, in an instructional block library, containing steps analogous to the unverified sequence of steps; and modify the current instance of the digital procedure to include the verified instructional block to enable the operator to resolve the deviation and complete execution of the digital procedure within the facility.
As described above, the sequence of steps generated from the procedure authoring model to resolve the deviation detected in a current instance of a digital procedure may correspond to an unverified sequence of steps (i.e., steps not yet approved for the facility) that can fall out-of-specification from regulations (e.g., health and safety regulations) and procedure conventions within the facility. Accordingly, the computer system can: initialize an unverified draft instructional block containing the sequence of steps generated from the agentic authoring model and the procedure authoring request; retrieve an agentic verification model associated with the particular equipment unit handled by the operator to perform a first instructional block in the digital procedure; and implement the agentic verification model to transform the unverified draft instructional block into a verified instructional block to enable the operator to complete the current instance of the digital procedure.
In one implementation, the computer system can: as described above, detect the risk event in the live video feed depicting the operator performing the first instructional block of the digital procedure proximal a particular equipment unit at the workspace within the facility; initialize a procedure authoring model associated with the particular equipment unit; correlate the risk event to a first risk language signal associated with execution of the first instruction with the particular equipment unit; and, based on the first risk language signal, create an instance with the agentic platform, which selects the AI agent instances to create and a priority score for each AI agent role, instantiate multiple AI agents to act as an ensemble of experts that reaches a consensus on the next steps to respond to the risk event, and, via the agentic authoring model, generate: 1) a sequence of steps predicted to resolve the risk event proximal the workspace; and 2) guidance media (e.g., images, videos) to support the operator in performing the sequence of steps to resolve the risk event and enable the operator to proceed with performance of the digital procedure. The computer system can then: initialize an unverified draft instructional block containing the sequence of steps predicted to resolve the risk event and the guidance media (e.g., images, videos) for the sequence of steps; scan the unverified draft instructional block for a set of language signals; correlate an equipment unit language signal, in the set of language signals, with the particular equipment unit proximal the workspace; and correlate a regulation language signal, in the set of language signals, to a regulation prompt related to the workspace within the facility and the particular equipment unit.
Accordingly, the computer system can then: based on the set of language signals and the agentic verification model, interpret a procedure verification score representing a degree of compliance of the sequence of steps in the unverified draft instructional block with regulation conventions (e.g., health and safety regulations) adhered to at the facility; and, in response to the procedure verification score exceeding a threshold verification score (e.g., 90-100 percent verification confidence), transform the unverified draft instructional block into a verified instructional block.
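The thresholding logic here is concrete even though the scoring model is not: only blocks whose verification score clears the threshold (e.g., 0.90) are promoted. The Python sketch below uses a deliberately simple proxy score, the fraction of the draft's regulation signals that match regulations adhered to at the facility, in place of the model-interpreted score; the regulation identifiers and the proxy itself are hypothetical.

```python
def procedure_verification_score(signals, facility_regulations):
    """Illustrative proxy for the model-interpreted compliance score:
    fraction of regulation prompts in the draft that match the facility's
    adhered-to regulations."""
    reg_prompts = [prompt for kind, prompt in signals if kind == "regulation"]
    if not reg_prompts:
        return 0.0  # nothing verifiable: treat as non-compliant
    compliant = sum(1 for p in reg_prompts if p in facility_regulations)
    return compliant / len(reg_prompts)

def verify_block(draft, facility_regulations, threshold=0.90):
    """Promote the draft block only if its score exceeds the threshold."""
    score = procedure_verification_score(draft["signals"], facility_regulations)
    status = "verified" if score >= threshold else "unverified"
    return status, score

facility_regulations = {"osha-lockout", "gmp-cleaning"}
draft = {"signals": [("equipment", "bioreactor-07"),
                     ("regulation", "osha-lockout"),
                     ("regulation", "gmp-cleaning")]}
status, score = verify_block(draft, facility_regulations)
# score == 1.0, so the draft block is promoted to "verified"
```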
Therefore, the computer system can: detect a risk event in a live video feed depicting an operator performing a current instance of a digital procedure proximal an assigned workspace within the facility; implement the agentic authoring model to generate an unverified sequence of steps predicted to resolve the risk event detected in the live video feed; implement the agentic verification model to transform the unverified sequence of steps into a verified instructional block compatible with the facility; and modify the current instance of the digital procedure performed by the operator to insert the verified instructional block, thereby enabling the operator to resolve the risk event and complete execution of the digital procedure.
The computer systems and methods described herein can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application, applet, host, server, network, website, communication service, communication interface, hardware/firmware/software elements of a user computer or mobile device, wristband, smartphone, or any suitable combination thereof. Other systems and methods of the embodiment can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions, the instructions executed by computer-executable components integrated with apparatuses and networks of the type described above. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component can be a processor, but any suitable dedicated hardware device can (alternatively or additionally) execute the instructions.
As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.
This Application claims the benefit of U.S. Provisional Application No. 63/547,301, filed on 3 Nov. 2023, which is hereby incorporated in its entirety by this reference. This Application is a continuation-in-part application of U.S. Non-Provisional application Ser. No. 18/658,257, filed on 8 May 2024, which claims the benefit of U.S. Provisional Application Nos. 63/522,840, filed on 23 Jun. 2023, and 63/522,843, filed on 23 Jun. 2023, each of which is hereby incorporated in its entirety by this reference. U.S. Non-Provisional application Ser. No. 18/658,257 is also a continuation-in-part of U.S. Non-Provisional application Ser. No. 18/204,837, filed on 1 Jun. 2023, which is a continuation of U.S. Non-Provisional application Ser. No. 17/690,944, filed on 9 Mar. 2022, which is a continuation of U.S. Non-Provisional application Ser. No. 16/678,992, filed on 8 Nov. 2019, which claims the benefit of U.S. Provisional Application No. 62/757,593, filed on 8 Nov. 2018, each of which is hereby incorporated in its entirety by this reference. This Application is also a continuation-in-part application of U.S. Non-Provisional application Ser. No. 18/440,334, filed on 13 Feb. 2024, which claims the benefit of U.S. Provisional Application Nos. 63/446,572, filed on 17 Feb. 2023, and 63/445,228, filed on 13 Feb. 2023, each of which is hereby incorporated in its entirety by this reference. U.S. Non-Provisional application Ser. No. 18/440,334 is also a continuation-in-part of U.S. Non-Provisional application Ser. No. 18/234,808, filed on 16 Aug. 2023, which claims the benefit of U.S. Provisional Application No. 63/399,137, filed on 18 Aug. 2022, each of which is hereby incorporated in its entirety by this reference. This Application is related to U.S. Non-Provisional application Ser. No. 17/984,996, filed on 10 Nov. 2022, and Ser. No. 17/719,120, filed on 12 Apr. 2022, each of which is incorporated in its entirety by this reference.
| Number | Date | Country |
| --- | --- | --- |
| 63547301 | Nov 2023 | US |
| 63522840 | Jun 2023 | US |
| 63522843 | Jun 2023 | US |
| 62757593 | Nov 2018 | US |
| 63446572 | Feb 2023 | US |
| 63445228 | Feb 2023 | US |
| 63399137 | Aug 2022 | US |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 17690944 | Mar 2022 | US |
| Child | 18204837 | | US |
| Parent | 16678992 | Nov 2019 | US |
| Child | 17690944 | | US |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 18658257 | May 2024 | US |
| Child | 18936551 | | US |
| Parent | 18204837 | Jun 2023 | US |
| Child | 18658257 | | US |
| Parent | 18440334 | Feb 2024 | US |
| Child | 18936551 | | US |
| Parent | 18234808 | Aug 2023 | US |
| Child | 18440334 | | US |