The present invention relates to a novel method for measuring and using human performance to optimize, and to ensure the reliable completion of, processes and human workflow integrations.
Complex processes operated by humans can be difficult to understand and manage. Ensuring high performance, as measured by the reliability of process outcomes, depends on many factors. Automation has emerged as a critical enabler for managing complex processes operated by humans, but until this invention there has been limited visibility into, and understanding of, the human/machine interface and the human factors that affect high performance while operating a complex process.
For example, a significant number of cybersecurity breaches can be directly linked to human error during the execution of high-risk processes. Cyber insurance providers currently have no way to quantify the risk associated with human performance. This invention allows enterprises and interested partners, such as cyber insurers, to see how well an organization is performing with respect to human performance and to model how human performance affects overall cybersecurity risk.
In this invention, we fill this gap by providing methods to accurately measure human performance, ensuring that the operationalized process has the structure needed to enable the calculation of human performance scores. We provide a method for computing one or more human performance scores that directly measure the effectiveness of a single operator, a team of operators and/or teams of operators on the desired outcomes of a process, its associated runbooks and its governing policies. We also show how these human performance scores can be calculated, analyzed and visualized in order to track, both in real time and historically, how the operator, team of operators and/or teams of operators are performing, and to inform a higher-level workflow management system.
The present invention provides a means to measure the human performance of an individual, a team of individuals or teams of individuals performing a process. The invention ensures that the process being performed includes critical measures of human performance by articulating how and where that information is obtained within the process. The invention analyzes the operational state and state transitions of the process in order to perform measurements of human performance. The invention provides a visualization of the current operational state of the process as well as access to summary statistics that can be used to track human performance over time. Additionally, the invention includes the option to offer recommendations to improve the process itself, its associated runbooks and governing policies, and/or to improve the performance of any humans interacting with the process.
Here is a list of definitions for terms that we'll use in the following description:
These next sections describe the standard system. As currently practiced and shown in
As shown in
The next sections describe how the present invention is novel and modifies the standard system to enable highly reliable human interactions. The modified standard system is referred to as the “system” for the remainder of this document.
Management System—Typically implemented by the execution of digital code in a computing system. The software supports the features needed to manage a standard system. Additionally, the software supports the present invention's modification of the standard system and management of the resulting system. The Management System tracks changes made to the system. Operators of the system should use the Management System to make their changes to the system.
Each event will have a timestamp when it occurs. An action will have a timestamp when the transition initiates and a timestamp when it completes. Actions that cannot be completed successfully, or time out, will cancel the state transition and restore the current state. A system failure occurs when the system is not behaving as defined by governing policies.
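In one embodiment, this timestamping and rollback behavior may be implemented in software along the lines of the following illustrative sketch (shown here in Python); the class and method names, such as Procedure and run_action, are illustrative only and are not required by the invention.

```python
import time

class Procedure:
    """Illustrative sketch: actions are timestamped at initiation and completion,
    and a failed or timed-out action cancels the transition and restores state."""

    def __init__(self, initial_state):
        self.state = initial_state
        self.log = []  # recorded events and actions with timestamps

    def record_event(self, event):
        # Each event receives a timestamp when it occurs.
        self.log.append({"event": event, "timestamp": time.time()})

    def run_action(self, action, next_state, timeout_seconds):
        # The transition is timestamped when it initiates and when it completes.
        started = time.time()
        saved_state = self.state
        try:
            action()                      # unattended work performed by the system
        except Exception:
            self.state = saved_state      # failure cancels the transition
            self.log.append({"action": action.__name__, "started": started,
                             "completed": None, "result": "failed"})
            return False
        completed = time.time()
        if completed - started > timeout_seconds:
            self.state = saved_state      # timeout also cancels the transition
            self.log.append({"action": action.__name__, "started": started,
                             "completed": completed, "result": "timed_out"})
            return False
        self.state = next_state           # successful transition to the next state
        self.log.append({"action": action.__name__, "started": started,
                         "completed": completed, "result": "ok"})
        return True
```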
An event may be triggered by a human or by other means. Events behave identically regardless of how they are triggered.
As shown in
After successful completion of all steps, the alert finishes and the procedure transitions to the next state. A failure of any step will cause the procedure transition to stop and restore the current state. Examples of failure criteria include but are not limited to the time it takes to complete the step, measured or estimated numerical values of various parameters associated with the step, the capability and/or status of measurement equipment if used, the credentials, knowledge, health, and location of the person performing the step, etc. Many other criteria required for ensuring the successful completion of any given step can be identified by one skilled in the art. If there are any unexpected issues that cannot be resolved by the human, an escalation (defined in paragraph 37) is raised. During the escalation, the current procedure is halted until the escalation resolves the issue and provides direction on how to proceed.
If an alert has enabled a completion timer and the time expires by exceeding a predetermined maximum time value, the alert will terminate with an error and the procedure will restore the current state. Each alert may have different predetermined expiration time limits or acceptable ranges of time for step completion.
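For example, the per-alert completion timer and the escalation of unresolved issues may be handled by logic similar to the following sketch; the run_alert function, its parameters and its return values are hypothetical and are shown only to illustrate the behavior described above.

```python
import time

def run_alert(steps, max_seconds, escalate):
    """Illustrative alert handler: steps must finish before the alert's
    predetermined completion timer expires; otherwise the alert terminates
    with an error and the caller restores the current state."""
    start = time.time()
    for step in steps:
        ok = step()                        # a human-performed step, reported back to the system
        if time.time() - start > max_seconds:
            return "error_timeout"         # completion timer expired; restore current state
        if not ok:
            escalate(step)                 # unresolved issue: raise an escalation and halt
            return "halted_for_escalation"
    return "completed"                     # alert finishes; procedure transitions to next state
```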
Training mode can be enabled to train or test humans on the current procedure. In testing mode, humans are periodically or randomly tested on various parameters including but not limited to their identity, health and knowledge of any part of the procedure, parent process, runbooks or overarching policy prior to performing any actions. Periodic testing helps ensure humans are fully capable of successfully completing the steps they are assigned to perform when operating the system.
The Management System may offer the human Recommendations (defined in paragraph 55) for improvements on handling the alert. The human has discretion to incorporate the Recommendations or handle the alert normally.
Relevant information about the human servicing the alert is saved. Any steps performed are logged. Timestamps for the notification, when the alert starts, when steps are started and completed, and when the alert finishes are saved.
Unexpected action condition—Actions are unattended by humans and performed automatically by the system, typically through the execution of digital code in a computing system. If an action encounters any unexpected conditions when performing the transition to the next state, the software executing the action may be configured to raise an alert.
The human processing the alert will attempt to resolve the issue to allow the procedure to continue. It is possible to configure the software executing the action to automatically restore the current state before raising the alert. This allows a human to investigate the error without blocking the procedure.
Watch Team Confirmation—as shown in
The verification steps should confirm that the associated action or alert has been completed properly. It is typical but not required that the procedure specifies the expected results of performing the action or alert, which can be referenced during verification. Examples of verification steps include but are not limited to confirming each step was completed on time, confirming the measured or estimated numerical values of various parameters associated with each step are within an acceptable range, confirming the capability and/or status of measurement equipment if used, confirming the credentials, knowledge, health, and location of the person performing each step, etc. Many other criteria required for confirming any given step can be identified by one skilled in the art.
If the Watch Team Confirmation detects any discrepancies, an alert is raised and the procedure remains halted until the alert can be cleared.
It is typical but not required that at least one Watch Team Confirmation is present for every procedure to confirm the final results. If desired, a Watch Team Confirmation may be used randomly or for every action/alert of the procedure.
Unacknowledged event—Events are explicitly defined within a procedure. It is expected each event will have a defined action or alert that will execute the necessary steps to transition to the next state. However, an event may arrive for a state where the event is not expected. These unacknowledged events do not have defined actions or alerts. This could happen if an unforeseen external change to the system occurred, generating the event, or there was an error with the procedure implementation.
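In one embodiment, the detection of unacknowledged events may be implemented as a simple dispatch check, as in the following illustrative sketch; the dispatch_event function and its parameters are hypothetical.

```python
def dispatch_event(state, event, handlers, raise_alert):
    """Illustrative dispatch: each expected event has a defined action or alert
    for the current state; an event with no handler is unacknowledged."""
    handler = handlers.get((state, event))
    if handler is None:
        # Unacknowledged event: no defined action or alert for this state,
        # e.g. an unforeseen external change or a procedure implementation error.
        raise_alert(f"unacknowledged event {event!r} in state {state!r}")
        return state                      # remain in the current state
    return handler()                      # handler returns the next state
```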
As shown in
Escalations—An escalation is raised whenever an issue cannot be resolved. They are typically serviced by more than one human. Relevant data triggering the escalation is maintained along with the occurrence timestamp and count of how often the underlying issue has been observed. Each step in the escalation is logged along with any related timestamps. The humans participating in the escalation should have relevant experience to best handle the issue.
Anomalies—an anomaly occurs when one or more procedures is in a state that cannot be derived from the series of recorded events, actions, alerts and escalations. This can easily occur if the system was changed outside of the Management System responsible for tracking changes.
Detection of anomalies can be done directly by monitoring the state of the system and recognizing that the system state is not the same as the expected state maintained in the Management System. Indirect detection of anomalies can occur when actions and alerts are being applied and the system is found to be in a state inconsistent with the Management System.
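Direct anomaly detection may, for example, be implemented as a periodic comparison of the observed system state against the expected state maintained by the Management System, as sketched below; the function and parameter names are illustrative.

```python
def detect_anomalies(expected_states, observed_states, raise_alert):
    """Illustrative direct detection: compare each procedure's observed state
    against the expected state maintained by the Management System."""
    anomalies = []
    for procedure_id, expected in expected_states.items():
        observed = observed_states.get(procedure_id)
        if observed != expected:
            anomalies.append(procedure_id)
            # The detection initially starts as an alert, which may then
            # initiate an escalation and root cause analysis.
            raise_alert(f"procedure {procedure_id}: expected {expected!r}, observed {observed!r}")
    return anomalies
```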
In most cases, the detection of the anomaly will initially start as an alert which will then initiate an escalation. The root cause analysis should then categorize the issue as an anomaly.
Sometimes an anomaly is caused by an error in the governing policy, runbook and/or process that did not properly define the expected system state. Correcting the governance error should address the issue.
Near miss—a near miss is an anomaly that has reached the point of imminent system failure before it was mitigated. The steps to mitigate a near miss should be reflected back into the governing procedure, runbook, process and/or policy.
Feedback—feedback may be given about any part of the system or about the performance of any operator of the system. Feedback may be provided anonymously. The timestamp of when the feedback is entered is maintained. Feedback is expected to be reviewed in a timely manner. A timestamp on when the review starts and completes is maintained.
Any feedback that requires mitigation steps must have an alert generated.
Human Performance Score—the human performance score can be calculated for each work category within a system. A work category is a classification of human work experience needed to perform a specific task. As such, any one individual may have one or more performance scores, each one associated with a work category. An overall performance score, which is a weighted average of the work category performance scores, is maintained. More weight may be given to more recent work. The human performance scores can be used to provide an accurate breakdown of human performance across different categories of the system.
Calculating the human performance score—Updates occur when humans are doing work or when timeouts expire. The score is calculated over a period of time. Higher scores represent a higher level of human performance. Positive scores represent beneficial outcomes. Negative scores reflect detrimental outcomes.
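In one embodiment, a work category score may be computed by accumulating recency-weighted positive and negative indicator values, with the overall score taken as a weighted average of the category scores, as in the following illustrative sketch; the half-life recency weighting and the function names are illustrative assumptions, not requirements of the invention.

```python
import time

def category_score(indicator_events, half_life_seconds=30 * 24 * 3600):
    """Illustrative score for one work category: each indicator carries a
    positive (beneficial) or negative (detrimental) value, and more recent
    work is weighted more heavily via an exponential recency weight."""
    now = time.time()
    score = 0.0
    for event in indicator_events:            # e.g. {"timestamp": ..., "value": +1.0 or -2.0}
        age = now - event["timestamp"]
        recency_weight = 0.5 ** (age / half_life_seconds)
        score += recency_weight * event["value"]
    return score

def overall_score(category_scores, category_weights):
    """Overall score as a weighted average of the work category scores."""
    total_weight = sum(category_weights.values())
    return sum(category_scores[c] * w for c, w in category_weights.items()) / total_weight
```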
Indicators that increase the human performance score:
Indicators that decrease the human performance score:
Human Performance Score Relative Weightings—the following are example relative weightings that affect how the human performance score can be calculated. In one embodiment of the invention, the categories are listed below in order of increasing magnitude of effect on the human performance score. Other embodiments of the invention may change the weighting order or weighting categories.
Human Performance Score Modifiers—under normal conditions, a human is expected to perform tasks at or near their given human performance score. Other factors affecting the human may have a positive or negative effect on the human performance. These factors include:
Hierarchy—An overall human performance score is calculated for each individual human. A team's human performance score is a weighted average of the human performance scores of all team members. An organization's human performance score is a weighted average of each team's human performance score within an organization.
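This hierarchy may be computed with simple weighted averages, as in the following illustrative sketch; equal member and team weights are assumed by default, although an embodiment may weight members or teams differently.

```python
def team_score(member_scores, member_weights=None):
    """A team's score as a weighted average of its members' scores."""
    weights = member_weights or {m: 1.0 for m in member_scores}
    total = sum(weights.values())
    return sum(member_scores[m] * weights[m] for m in member_scores) / total

def organization_score(team_scores, team_weights=None):
    """An organization's score as a weighted average of its teams' scores."""
    weights = team_weights or {t: 1.0 for t in team_scores}
    total = sum(weights.values())
    return sum(team_scores[t] * weights[t] for t in team_scores) / total
```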
Process Efficiency—a process that does not experience any errors, unacknowledged events, or escalations over a period of time is 100% efficient. In one embodiment of the invention, errors reduce the process efficiency by a small amount, unacknowledged events reduce by a moderate amount and escalations reduce by a large amount.
If the policy or runbook governing the process is not active, the process efficiency is 0% during the time the policy is inactive.
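In one embodiment, process efficiency may be computed as sketched below; the specific penalty values for errors, unacknowledged events and escalations are illustrative placeholders for the small, moderate and large reductions described above.

```python
def process_efficiency(errors, unacknowledged_events, escalations,
                       policy_active=True,
                       error_penalty=0.01, unack_penalty=0.05, escalation_penalty=0.20):
    """Illustrative process efficiency: 100% with no issues over the period;
    errors reduce it by a small amount, unacknowledged events by a moderate
    amount, escalations by a large amount; 0% while the governing policy or
    runbook is inactive. The penalty values are examples only."""
    if not policy_active:
        return 0.0
    efficiency = 1.0
    efficiency -= errors * error_penalty
    efficiency -= unacknowledged_events * unack_penalty
    efficiency -= escalations * escalation_penalty
    return max(efficiency, 0.0)
```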
Recommendations—The Management System may offer recommendations about improving policies, runbooks, processes, procedures, alerts, escalations, human performance and other topics. The recommendations may be based on data from the current organization, from other organizations, from artificial intelligence/machine learning analysis and/or from other sources.
Human Workflow Management System Definition—as shown in
Using Human Performance Scores—
Defining Levels—ranges of human performance scores can be used to define levels. These levels can then be applied in other areas of the human workflow management system.
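For example, levels may be defined as named ranges of scores, as in the following illustrative sketch; the thresholds and level names shown are hypothetical and would be configured by the organization.

```python
# Illustrative score ranges; actual thresholds would be configured per organization.
LEVELS = [(90, "expert"), (70, "advanced"), (40, "intermediate"), (0, "novice")]

def level_for_score(score):
    """Map a human performance score to a named level by range."""
    for threshold, name in LEVELS:
        if score >= threshold:
            return name
    return "unrated"
```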
Leaderboards, Charts, and Tables—individual human and team leaderboards can be used to gauge overall human performance of the organization. Charts, tables and other graphics can show trends, statistics and any other indicators of human performance. This data can be used to help improve the organization's human performance scores and ability to execute various processes reliably.
Incentives and Achievements—various incentives and achievements can be provided to humans and teams on reaching and maintaining human performance milestones.
Determining Qualified Humans for a Task—the human performance score can be used to filter out unqualified humans from performing complex tasks. A task may specify a minimum human performance score to be eligible to work on the task. Highly specialized work may require a high human performance score for one or more categories before a human is allowed to perform the task.
More generalized work may only require a low human performance score for a single category or may not have a human performance score requirement in any work categories, but instead specify a minimum overall human performance score to be eligible for the task.
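One illustrative way to filter candidates against per-category and overall minimums is sketched below; the data layout and function name are assumptions made for the example.

```python
def qualified_humans(humans, category_minimums=None, overall_minimum=None):
    """Illustrative filter: a task may require minimum scores in specific work
    categories (highly specialized work) and/or a minimum overall score
    (more generalized work)."""
    qualified = []
    for human in humans:   # e.g. {"name": ..., "overall": 72, "categories": {"firewall": 85}}
        if overall_minimum is not None and human["overall"] < overall_minimum:
            continue
        if category_minimums and any(
                human["categories"].get(cat, 0) < minimum
                for cat, minimum in category_minimums.items()):
            continue
        qualified.append(human)
    return qualified
```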
Triggering Specialized Human Workflows—changes to human performance scores may trigger specialized human workflows. These workflows can be used to execute human procedures that may be difficult to fully define within a process. For example, failing to complete an alert on time may trigger a workflow task for a supervisor to reassign the task. These types of human behaviors may be better suited for a human workflow management system rather than attempting to enumerate the human behavior within a process.
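For example, a change in a human performance score may be translated into a workflow trigger along the lines of the following sketch; the workflow names and the threshold for a positive change are hypothetical.

```python
def on_score_change(human, old_score, new_score, start_workflow):
    """Illustrative trigger: a drop in a human performance score (for example,
    after failing to complete an alert on time) starts a specialized human
    workflow, such as a supervisor task to reassign the work."""
    if new_score < old_score:
        start_workflow("supervisor_review", {"human": human,
                                             "old_score": old_score,
                                             "new_score": new_score})
    elif new_score >= old_score + 10:       # illustrative threshold for a positive change
        start_workflow("recognize_achievement", {"human": human,
                                                 "new_score": new_score})
```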
Real-time Responses to Human Performance Scores—a human workflow management system can react in real-time to changes in human performance scores.
Positive Changes May Have the Following (or Other) Effects:
Application of Present Invention—the invention is generic and can be applied to a wide range of applications. The following are examples of applications that benefit from the invention: