The disclosure relates to the field of workflow analysis and monitoring in browser-based applications, and more particularly to systems and methods for privacy-preserving replay and analysis of user interactions in regulated environments.
Modern enterprises increasingly rely on browser-based applications to execute critical business processes. These applications, ranging from customer relationship management (CRM) systems to healthcare records platforms, handle sensitive operations that require careful monitoring for both efficiency and compliance purposes. Traditional approaches to workflow monitoring have focused on direct screen recording or extensive logging of all user interactions, creating significant privacy and storage challenges while failing to provide actionable insights into process efficiency and compliance.
The challenge is particularly acute in regulated industries such as healthcare, financial services, and customer service, where privacy requirements and compliance standards impose strict limitations on data capture and analysis. Current solutions force organizations to choose between comprehensive monitoring and privacy protection. Screen recording solutions, while providing complete visibility, create privacy risks and massive storage requirements. Log-based approaches, though more storage-efficient, typically lack the context needed for meaningful analysis and fail to capture the nuanced interactions that define complex workflows.
Training and quality assurance processes in these environments are significantly hampered by current limitations. Organizations struggle to create realistic training scenarios without exposing sensitive data, while quality assurance teams lack tools to efficiently review and analyze workflow execution without compromising privacy. Current browser-based applications typically implement basic event logging but lack sophisticated replay and analysis capabilities that would enable process improvement while maintaining data privacy. What is needed is a system and method for capturing and replaying workflow interactions
in browser-based applications that maintains privacy while enabling detailed analysis. Such a system should provide comprehensive visibility into business processes without compromising sensitive data, support multiple use cases including training and compliance monitoring, and enable efficient identification of improvement opportunities without requiring extensive storage resources or creating privacy risks.
Accordingly, the inventor has conceived and reduced to practice a system and method for privacy-preserving workflow analysis that monitors user interactions with an application interface while maintaining data privacy. The system generates event data that excludes sensitive content while preserving workflow sequence information, enabling reconstruction and analysis of user workflows without exposing protected information. Event data is stored and used to reconstruct workflow sequences, which can be presented through a user interface for analysis, training, and compliance monitoring. The system supports multiple privacy contexts and can substitute synthetic data during workflow reconstruction, allowing organizations to analyze business processes without compromising sensitive information. The system enables comprehensive workflow monitoring and analysis while addressing privacy requirements in regulated environments, supporting use cases including quality assurance, training, and process optimization.
According to a preferred embodiment, a system for privacy-preserving workflow analysis is disclosed, comprising: a computing device comprising at least one processor and memory storing instructions that, when executed, cause the computing device to: capture workflow events from a browser-based application; generate event pointers for the captured workflow events, wherein each event pointer comprises temporal data and interaction type data while excluding sensitive content; store the event pointers in a sequence corresponding to the workflow events; reconstruct the workflow events using the stored event pointers; and display the reconstructed workflow events in a user interface.
According to another preferred embodiment, a method for privacy-preserving workflow analysis is disclosed, comprising the steps of: capturing workflow events from a browser-based application; generating event pointers for the captured workflow events, wherein each event pointer comprises temporal data and interaction type data while excluding sensitive content; storing the event pointers in a sequence corresponding to the workflow events; reconstructing the workflow events using the stored event pointers; and displaying the reconstructed workflow events in a user interface.
According to an aspect of an embodiment, reconstructing the workflow events comprises: substituting synthetic data for sensitive content in the reconstructed workflow events.
According to an aspect of an embodiment, the user interface comprises: playback controls for controlling replay of the reconstructed workflow events; a timeline showing the sequence of workflow events; and an event details panel displaying metadata about current workflow events.
According to an aspect of an embodiment, the instructions further cause the computing device to: analyze the sequence of workflow events to identify workflow patterns; generate workflow efficiency metrics; and detect compliance violations.
According to an aspect of an embodiment, each event pointer comprises: a timestamp; an event type identifier; a target element identifier; a template identifier; and compliance status data.

According to an aspect of an embodiment, the instructions further cause the computing device to: maintain different privacy contexts for training, quality assurance, and compliance monitoring.
According to an aspect of an embodiment, the browser-based application comprises a customer relationship management system.
According to an aspect of an embodiment, the instructions further cause the computing device to: enable addition of annotations during replay of the reconstructed workflow events.
According to an aspect of an embodiment, the instructions further cause the computing device to: validate reconstructed workflow events against predefined compliance rules.
According to an aspect of an embodiment, displaying the reconstructed workflow events comprises: displaying visual indicators highlighting current interaction points in the workflow.
The accompanying drawings illustrate several aspects and, together with the description, serve to explain the principles of the invention according to the aspects. It will be appreciated by one skilled in the art that the particular arrangements illustrated in the drawings are merely exemplary, and are not to be considered as limiting of the scope of the invention or the claims herein in any way.
The inventor has conceived and reduced to practice a system and method for privacy-preserving workflow analysis that monitors user interactions with an application interface while maintaining data privacy. The system generates event data that excludes sensitive content while preserving workflow sequence information, enabling reconstruction and analysis of user workflows without exposing protected information. Event data is stored and used to reconstruct workflow sequences, which can be presented through a user interface for analysis, training, and compliance monitoring. The system supports multiple privacy contexts and can substitute synthetic data during workflow reconstruction, allowing organizations to analyze business processes without compromising sensitive information. The system enables comprehensive workflow monitoring and analysis while addressing privacy requirements in regulated environments, supporting use cases including quality assurance, training, and process optimization.
According to another aspect of the invention, the replay system implements a privacy-preserving architecture that enables workflow monitoring and analysis while protecting sensitive data and maintaining compliance with privacy regulations. The system employs multiple layers of privacy controls while retaining the ability to analyze workflow patterns and efficiency. Through this architecture, organizations can gain valuable insights into their business processes without compromising data privacy or regulatory compliance requirements.
An event capture module implements selective data capture mechanisms that separate workflow metadata from sensitive content, utilizing a sophisticated data classification framework. Rather than recording complete screen contents or raw form data, the system captures event pointers that describe the type, timing, and context of user interactions without retaining the actual data values. For example, when capturing form field entries, the system may record that a phone number field was populated with a valid format at a specific time, without storing the actual phone number. The framework employs configurable rules to identify sensitive data types including personal identifying information (PII), financial data, healthcare information, and proprietary business data, while supporting custom rules for enterprise-specific data privacy requirements.
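By way of non-limiting illustration, the selective capture described above may be sketched as follows, shown here in Python for brevity. The field names, validation rules, and pointer layout are hypothetical examples rather than a required implementation; the salient point is that only metadata (format validity, timing, interaction type) is retained, never the entered value itself.

```python
import re
import time

# Hypothetical classification rules mapping sensitive field names to
# expected-format patterns. In practice this rule set is configurable.
SENSITIVE_FIELD_RULES = {
    "phone": re.compile(r"^\+?[\d\-\s]{7,15}$"),
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def capture_field_event(field_name, raw_value, timestamp=None):
    """Create an event pointer for a form-field entry.

    Only metadata is retained: whether the value matched the expected
    format, never the value itself.
    """
    rule = SENSITIVE_FIELD_RULES.get(field_name)
    return {
        "timestamp": timestamp if timestamp is not None else time.time(),
        "event_type": "FORM_FIELD_ENTRY",
        "target": field_name,
        "valid_format": bool(rule.match(raw_value)) if rule else None,
        # Note: raw_value is deliberately NOT stored anywhere.
    }
```

A pointer produced this way records that a phone number field was populated with a valid format at a specific time, consistent with the example above, while the actual phone number never leaves the input handler.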
The system comprises a replay orchestrator that implements dynamic data anonymization during workflow reconstruction through a comprehensive set of privacy-preserving techniques. When replaying workflow sequences, the system substitutes sensitive data with anonymized placeholders that maintain the structural characteristics of the original data while removing identifying information. The anonymization engine supports multiple techniques including format-preserving tokenization for structured data, consistent hashing for repeated values, range-based generalization for numeric values, and pattern-based masking for semi-structured data. This approach ensures that workflow replays maintain their utility for analysis while protecting sensitive information.
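Three of the anonymization techniques enumerated above may be sketched as follows. These are minimal illustrative implementations, not the complete anonymization engine; in particular, format-preserving tokenization for structured data is simplified here to a deterministic token.

```python
import hashlib

def consistent_token(value, prefix="TKN"):
    """Consistent hashing: the same input always yields the same token,
    so repeated values remain correlatable across a replay without
    revealing the underlying data."""
    digest = hashlib.sha256(value.encode()).hexdigest()[:8]
    return f"{prefix}-{digest}"

def generalize_range(value, bucket=10):
    """Range-based generalization: replace a numeric value with the
    bucket it falls into."""
    low = (value // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def mask_pattern(value, keep_last=4):
    """Pattern-based masking: preserve structure (length, separators)
    while hiding all but the trailing characters."""
    masked = []
    visible_threshold = sum(c.isalnum() for c in value) - keep_last
    seen = 0
    for c in value:
        if c.isalnum():
            seen += 1
            masked.append(c if seen > visible_threshold else "*")
        else:
            masked.append(c)
    return "".join(masked)
```

For example, masking "555-867-5309" yields "***-***-5309", preserving the structural characteristics of the original while removing identifying information.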
An analytics engine employs privacy-preserving computation techniques that enable workflow analysis without access to sensitive data. The system may implement aggregate statistics computation over anonymized data, pattern analysis using data structure rather than content, workflow efficiency metrics based on timing and event sequences, compliance verification through metadata analysis, and anomaly detection using anonymized behavioral patterns. These capabilities are complemented by a role-based access control system that governs visibility of workflow replays and analysis results, allowing supervisors and analysts to be granted granular permissions that limit access to specific workflow categories, data elements, time periods, analysis functions, and export capabilities.
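As a minimal sketch of the timing-based efficiency metrics described above, the following hypothetical function derives aggregate statistics purely from event-pointer timestamps; no field contents are consulted at any point.

```python
from statistics import mean, median

def efficiency_metrics(event_pointers):
    """Compute workflow efficiency metrics from timing metadata alone.

    Each pointer is assumed (for illustration) to be a dict carrying a
    numeric "timestamp" key; nothing else is read.
    """
    times = sorted(p["timestamp"] for p in event_pointers)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "total_duration": times[-1] - times[0],
        "event_count": len(times),
        "mean_gap": mean(gaps),
        "median_gap": median(gaps),
    }
```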
The system maintains comprehensive audit capabilities through a dedicated audit subsystem that tracks all replay and analysis activities. This includes detailed records of access attempts and authorizations, data anonymization operations, analysis queries and results, export and sharing activities, and configuration changes. The audit system works in conjunction with a configurable compliance framework that enforces privacy requirements across all components, supporting General Data Protection Regulation (GDPR) compliance through data minimization and right to erasure, Health Insurance Portability and Accountability Act (HIPAA) compliance for healthcare workflows, Payment Card Industry Data Security Standard (PCI DSS) requirements for payment processes, and custom privacy rules for enterprise requirements.
While maintaining strict privacy controls, the system enables comprehensive workflow monitoring capabilities that focus on process efficiency analysis, compliance verification, training needs identification, quality assurance, process optimization, and error pattern detection. These monitoring capabilities concentrate on workflow patterns and metadata rather than content, enabling analysis of task completion times, process sequence variations, error rates and recovery patterns, resource utilization, user interaction patterns, and system response characteristics. This approach allows organizations to derive valuable insights from their workflow data without compromising privacy protections.
The privacy-preserving capabilities can be implemented through a layered architecture comprising specialized components for data classification, anonymization, access control, audit tracking, privacy-preserving analytics, and compliance enforcement. These layers work in concert to ensure that workflow monitoring and analysis capabilities remain fully functional while maintaining data privacy and regulatory compliance. By implementing technological controls rather than relying solely on policy enforcement, the system provides comprehensive visibility into business processes while protecting sensitive information through architectural safeguards.
The system's approach to privacy-preserving workflow monitoring represents a significant advance in balancing the competing requirements of process visibility and data protection. Through its architecture and privacy-preserving mechanisms, the system enables organizations to maintain complete oversight of their business processes while ensuring that sensitive data remains protected at all times. This capability is particularly valuable in regulated industries where both process monitoring and data privacy are critical requirements that must be satisfied simultaneously.
One or more different aspects may be described in the present application. Further, for one or more of the aspects described herein, numerous alternative arrangements may be described; it should be appreciated that these are presented for illustrative purposes only and are not limiting of the aspects contained herein or the claims presented herein in any way. One or more of the arrangements may be widely applicable to numerous aspects, as may be readily apparent from the disclosure. In general, arrangements are described in sufficient detail to enable those skilled in the art to practice one or more of the aspects, and it should be appreciated that other arrangements may be utilized and that structural, logical, software, electrical and other changes may be made without departing from the scope of the particular aspects. Particular features of one or more of the aspects described herein may be described with reference to one or more particular aspects or figures that form a part of the present disclosure, and in which are shown, by way of illustration, specific arrangements of one or more of the aspects. It should be appreciated, however, that such features are not limited to usage in the one or more particular aspects or figures with reference to which they are described. The present disclosure is neither a literal description of all arrangements of one or more of the aspects nor a listing of features of one or more of the aspects that must be present in all arrangements.
Headings of sections provided in this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way.
Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more communication means or intermediaries, logical or physical.
A description of an aspect with several components in communication with each other does not imply that all such components are required. To the contrary, a variety of optional components may be described to illustrate a wide variety of possible aspects and in order to more fully illustrate one or more aspects. Similarly, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may generally be configured to work in alternate orders, unless specifically stated to the contrary. In other words, any sequence or order of steps that may be described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any order practical. Further, some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the aspects, and does not imply that the illustrated process is preferred. Also, steps are generally described once per aspect, but this does not mean they must occur once, or that they may only occur once each time a process, method, or algorithm is carried out or executed. Some steps may be omitted in some aspects or some occurrences, or some steps may be executed more than once in a given aspect or occurrence.
When a single device or article is described herein, it will be readily apparent that more than one device or article may be used in place of a single device or article. Similarly, where more than one device or article is described herein, it will be readily apparent that a single device or article may be used in place of the more than one device or article.
The functionality or the features of a device may be alternatively embodied by one or more other devices that are not explicitly described as having such functionality or features. Thus, other aspects need not include the device itself.
Techniques and mechanisms described or referenced herein will sometimes be described in singular form for clarity. However, it should be appreciated that particular aspects may include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. Process descriptions or blocks in figures should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of various aspects in which, for example, functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art.
“Modal dialog” or sometimes referred to herein as “Modal dialog box”, “Modal dialog popup”, and “Modal dialog window” refers to a dialog that appears on top of the main content and moves the system into a special mode requiring user interaction. This dialog disables the main content until the user explicitly interacts with the modal dialog. In contrast, nonmodal (or modeless) dialogs and windows do not disable the main content: showing the dialog box doesn't change the functionality of the user interface. The user can continue interacting with the main content (and perhaps even move the window, minimize, etc.) while the dialog is open.
“Event pointer” as used herein refers to a data structure that contains metadata about a workflow event without storing the complete event data. Event pointers comprise at least a timestamp, event type identifier, and reference data sufficient to recreate the event during replay.
“Workflow sequence” as used herein refers to an ordered series of event pointers that collectively represent a complete workflow execution path.
“Pattern signature” as used herein refers to a mathematical representation of a workflow sequence that enables efficient comparison and matching of similar workflows.
According to the embodiment, system 900 comprises a cloud-based integration server 910, one or more client devices 920, a plurality of 3rd party services 930 such as private service providers, customer relationship management (CRM) systems 932, and 3rd party API services 934, and a communication network 140 which allows one or more of these components (and other components not shown) to be communicatively coupled to facilitate bi-directional data exchange over a suitable communication network such as, for example, the Internet or a cellular telephone network.
The enhanced architecture comprises several components operating within the client device 920. An event capture module 923 operates within the workflow engine 922 and is configured to intercept and record workflow-related events and API calls occurring within the browser-based application. The event capture module 923 creates and stores event pointers rather than complete screen recordings or full data captures, thereby minimizing storage requirements while maintaining the ability to reconstruct workflow sequences. Event pointers may comprise metadata about each captured event including, but not limited to, timestamp data, event type identifiers, API endpoint information, and contextual parameters that enable later replay of the workflow sequence.
A replay orchestrator 925 operates within the web browser and is configured to manage the reconstruction and replay of captured workflow sequences. This may comprise coordinating with the workflow engine 922 for event reconstruction. The replay orchestrator 925 processes stored event pointers to recreate workflow sequences by re-executing API calls and reconstructing user interactions in their original temporal context. According to an embodiment, replay orchestrator 925 supports variable playback speeds, allowing accelerated or decelerated replay of workflow sequences while maintaining the relative timing between events. In some implementations, replay orchestrator 925 further comprises DVR-like control capabilities including, but not limited to, pause, resume, fast-forward, and rewind functionality for detailed workflow analysis. Replay orchestrator 925 may be further configured to interface with the CRM user interface for visualization functionality.
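The variable-speed replay behavior may be sketched as follows. This is an illustrative skeleton only: `apply_event` stands in for whatever mechanism re-executes an interaction, and the sleep function is injectable so that timing behavior can be exercised without actually waiting.

```python
import time

def replay(event_pointers, apply_event, speed=1.0, sleep=time.sleep):
    """Replay events in temporal order, preserving the relative timing
    between events scaled by `speed` (2.0 = twice as fast, 0.5 = half
    speed).

    `apply_event` is a callback that re-executes one interaction; its
    implementation is outside the scope of this sketch.
    """
    prev_ts = None
    for pointer in sorted(event_pointers, key=lambda p: p["timestamp"]):
        if prev_ts is not None:
            # Scale the original inter-event gap by the playback speed.
            sleep((pointer["timestamp"] - prev_ts) / speed)
        apply_event(pointer)
        prev_ts = pointer["timestamp"]
```

Pause, resume, and stepping controls can be layered on top of this loop by interposing on the injected sleep function.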
According to the embodiment, a device storage 924 may comprise a local event store 926 configured to maintain captured event pointers and associated metadata in an encrypted format. The local event store is structured to enable efficient retrieval and replay of workflow sequences while minimizing storage overhead. According to an aspect, local event store 926 implements a rolling retention policy wherein older event data may be automatically purged based on configurable retention rules.
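The rolling retention policy may be sketched with the following hypothetical in-memory store; encryption at rest is omitted here for brevity, and the retention rule shown (a single age cutoff) is only one of the configurable rules contemplated.

```python
class LocalEventStore:
    """Minimal sketch of a local event store with rolling retention."""

    def __init__(self, retention_seconds):
        self.retention_seconds = retention_seconds
        self._events = []

    def append(self, pointer):
        self._events.append(pointer)

    def purge(self, now):
        """Drop pointers older than the configured retention window."""
        cutoff = now - self.retention_seconds
        self._events = [p for p in self._events if p["timestamp"] >= cutoff]

    def sequence(self):
        """Return retained pointers in temporal order for replay."""
        return sorted(self._events, key=lambda p: p["timestamp"])
```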
The cloud-based integration server 910 is configured with the addition of an analytics engine 911 configured to process replay data and generate insights about workflow patterns and compliance. The analytics engine 911 receives anonymized event pattern data from event capture module 923 and processes this data to identify workflow optimizations, compliance issues, and pattern-based insights.
The database 914 schema is configured to support several new data structures including a workflow template store 914a, a compliance rule store 914b, and an audit trail store 914c. The workflow template store maintains reference workflow patterns that can be used for comparison and validation. The compliance rule store comprises configurable rules that define workflow requirements and constraints. The audit trail store maintains secure records of workflow analyses and compliance checks.
According to an aspect, a differential analysis system 911a operates within analytics engine 911 and is configured to compare multiple workflow sequences to identify variations, anomalies, and potential compliance issues. The differential analysis system 911a can process multiple replays of the same workflow to identify consistencies and variations in execution patterns. In some implementations, the differential analysis system 911a can compare workflow executions across different system versions or configurations to validate that system changes do not disrupt existing business processes.
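One way to realize such a comparison, sketched here for illustration using sequence alignment over event types only (so no sensitive content is needed), is:

```python
from difflib import SequenceMatcher

def diff_workflows(seq_a, seq_b):
    """Compare two workflow executions by their event-type sequences.

    Returns the non-matching regions as (operation, types_from_a,
    types_from_b) tuples, where operation is "insert", "delete", or
    "replace". An empty list means the executions followed the same
    event-type path.
    """
    types_a = [p["event_type"] for p in seq_a]
    types_b = [p["event_type"] for p in seq_b]
    sm = SequenceMatcher(a=types_a, b=types_b, autojunk=False)
    return [(op, types_a[i1:i2], types_b[j1:j2])
            for op, i1, i2, j1, j2 in sm.get_opcodes()
            if op != "equal"]
```

Comparing many replays of the same workflow with this kind of diff surfaces where executions consistently diverge, which is the input the anomaly and compliance analyses described above operate on.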
A compliance monitoring system 911b operates in conjunction with the differential analysis system 911a and is configured to validate workflow sequences against stored compliance rules. The compliance monitoring system 911b can operate in real-time during workflow capture or during replay analysis to identify compliance violations or potential issues. According to an aspect, compliance monitoring system 911b can generate compliance reports and attestations based on analyzed workflow sequences.
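A minimal sketch of rule validation follows. The rule format shown (an ordering constraint requiring one event type to precede another) is hypothetical; the compliance rule store contemplated above would support richer rule types.

```python
def check_compliance(sequence, rules):
    """Validate a workflow sequence against simple ordering rules.

    Each rule is a tuple ("requires", X, Y), meaning an event of type Y
    must be preceded somewhere in the sequence by an event of type X.
    Returns a list of human-readable violation descriptions.
    """
    violations = []
    seen = []
    for pointer in sequence:
        t = pointer["event_type"]
        for kind, before, target in rules:
            if kind == "requires" and t == target and before not in seen:
                violations.append(f"{target} occurred without prior {before}")
        seen.append(t)
    return violations
```

Because this check reads only event types, it can run in real time during capture or later during replay analysis, as described above.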
According to some embodiments, a training and simulation system 913 leverages the replay capabilities to create interactive training scenarios based on real workflow sequences. The training and simulation system 913 can present trainees with actual workflow scenarios while providing interactive guidance and feedback. In some implementations, the training and simulation system 913 supports branching scenarios wherein trainees can explore alternative workflow paths and their consequences.
The architecture 900 implements a comprehensive security and privacy framework that ensures sensitive data remains under client control. The security and privacy framework can implement end-to-end encryption for stored event data and ensures that only anonymized pattern data is transmitted to cloud components. Authentication and access control mechanisms may be configured to support granular permissions for replay access and analysis capabilities.
During operation, the event capture module 923 continuously monitors browser-based application interactions, creating event pointers for significant workflow events. These event pointers are stored in the local event store 926 and can be used by the replay orchestrator 925 to reconstruct workflow sequences on demand. The analytics engine 911 processes anonymized workflow data to identify patterns and potential issues, while the differential analysis system 911a can compare multiple workflow sequences to identify variations and anomalies. The system 900 supports several operational modes including, but not limited to:
1. Live Capture Mode: Real-time capture of workflow events and storage of event pointers.
2. Replay Mode: Reconstruction and replay of captured workflow sequences with DVR-like controls.
3. Analysis Mode: Processing of workflow sequences for pattern identification and compliance validation.
4. Training Mode: Creation and execution of interactive training scenarios based on captured workflows.
In various embodiments, the system implements several performance optimizations including: efficient event pointer storage using compressed metadata structures; asynchronous analytics processing to minimize impact on live operations; intelligent caching of frequently accessed workflow patterns; and dynamic resource allocation based on system load and analysis requirements.
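The first of these optimizations, compact storage of event-pointer sequences, may be sketched as follows; a production implementation might use a purpose-built columnar layout rather than general-purpose compression, so this is illustrative only.

```python
import json
import zlib

def pack_sequence(event_pointers):
    """Serialize and compress a workflow sequence. Event pointers are
    highly repetitive (shared keys, recurring event types), so they
    compress well."""
    return zlib.compress(json.dumps(event_pointers).encode())

def unpack_sequence(blob):
    """Restore a workflow sequence for replay or analysis."""
    return json.loads(zlib.decompress(blob).decode())
```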
The system 900 provides several integration points including, but not limited to: standard APIs for external system integration; pluggable analytics modules for custom analysis capabilities; extensible event capture mechanisms for custom event types; and configurable replay controls for custom visualization requirements.
According to another aspect, the system supports parallel replay capabilities wherein multiple workflow sequences can be replayed simultaneously for comparison and analysis. This capability is useful for analyzing the impact of system changes or for identifying inconsistencies in workflow execution patterns.
According to an embodiment, a basic replay session demonstrates the system's ability to reconstruct and analyze a customer service workflow within a browser-based CRM application. In the following example, the workflow involves a customer service representative creating a new customer account and logging a support ticket.
During the original workflow execution, event capture module 923 records the sequence of events performed by the representative, such as form field entries, validation responses, and form submissions.
For each event in the workflow sequence, event capture module 923 creates an event pointer comprising multiple elements that enable accurate replay while maintaining data security. These elements may include, but are not limited to, a unique event identifier that distinguishes each interaction, precise timestamp information for temporal reconstruction, event type classification such as “FORM_FIELD_ENTRY” or “FORM_SUBMIT” to categorize different kinds of interactions, target element identifier indicating specific form fields or buttons involved, event data containing entered values and system responses, and user context information including user ID and role details.
The replay process begins when a supervisor selects this workflow sequence from the system's replay interface. The replay orchestrator 925 retrieves the stored event pointers and initiates sequential reconstruction of the workflow through a series of coordinated operations. These operations include resetting the CRM interface to its initial state, replaying each event in sequence with original timing, reconstructing system responses and state changes, and displaying visual indicators for user actions. Throughout the replay, supervisors have access to comprehensive playback controls that enable them to pause replay at any point, adjust replay speed through multiple presets including 1×, 2×, and 0.5× options, step through events one at a time for detailed analysis, view complete event details and metadata, and add annotations at specific points of interest.
Upon completion of the replay sequence, the system presents a detailed summary of the workflow that includes multiple metrics and analyses. This summary comprises the total workflow duration, providing insight into process efficiency, the number of events captured and replayed, clearly identified compliance issues such as the initial missing phone number, and sequence variations from standard procedure. These capabilities enable supervisors to perform multiple essential functions including comprehensive review of actual workflow execution, identification of specific training opportunities based on observed patterns, validation of compliance with established procedures, detailed analysis of process step efficiency, and thorough documentation of any workflow issues encountered.
This basic replay use case demonstrates the system's fundamental capability to capture, store, and reconstruct workflow sequences without requiring full screen recording or extensive data storage, while maintaining robust ability to analyze and improve business processes. The structured approach to event capture and replay enables detailed workflow analysis while minimizing storage requirements and maintaining system performance. Through this approach, the system provides comprehensive visibility into business processes while enabling effective supervision and continuous process improvement.
The pattern analysis module 1001 implements a two-stage detection process for identifying recurring workflow patterns and anomalies within captured event sequences. In a first stage, workflow sequences can be converted into pattern signatures using, for example, a locality-sensitive hashing algorithm that preserves temporal relationships between events while enabling efficient comparison. The pattern signatures can be clustered using a density-based clustering algorithm such as DBSCAN to identify common patterns and outliers. In the second stage, identified patterns undergo detailed analysis to extract key characteristics including, but not limited to: average execution time, frequency of occurrence, variance in event ordering, and correlation with business outcomes. The pattern analysis module 1001 maintains a pattern library that evolves over time as new patterns are discovered and existing patterns are refined based on new data. The differential analysis system 1002 performs comparative analysis between multiple workflow sequences, identifying variations and deviations. The compliance monitor 1003 validates workflow sequences against defined compliance rules and requirements.
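Full locality-sensitive hashing and DBSCAN are beyond a short sketch, but the core idea of a pattern signature that preserves temporal relationships can be illustrated with n-gram (shingle) sets compared by Jaccard similarity, grouped by a simple greedy threshold clustering:

```python
def signature(event_types, n=2):
    """Order-preserving signature: the set of n-grams (shingles) of the
    event-type sequence. Adjacent-event ordering is retained; absolute
    position is not."""
    return {tuple(event_types[i:i + n]) for i in range(len(event_types) - n + 1)}

def jaccard(sig_a, sig_b):
    """Similarity between two signatures: |intersection| / |union|."""
    if not sig_a and not sig_b:
        return 1.0
    return len(sig_a & sig_b) / len(sig_a | sig_b)

def cluster(sequences, threshold=0.6):
    """Greedy similarity clustering (a stand-in for DBSCAN here): each
    sequence joins the first cluster whose representative signature is
    similar enough, otherwise it starts a new cluster."""
    clusters = []
    for seq in sequences:
        sig = signature(seq)
        for rep_sig, members in clusters:
            if jaccard(sig, rep_sig) >= threshold:
                members.append(seq)
                break
        else:
            clusters.append((sig, [seq]))
    return [members for _, members in clusters]
```

Sequences that land in small or singleton clusters are natural anomaly candidates for the second-stage analysis described above.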
The middle layer comprises data processing and generation components: a data processing module 1004, a report generator 1005, and an optimization engine 1006. The data processing module 1004 performs initial processing and normalization of incoming workflow data through a multi-stage pipeline. In the first stage, raw event data undergoes validation and cleaning to handle missing or malformed data. The validation process may comprise timestamp normalization to account for different time zones and clock synchronization issues, event type standardization to ensure consistent categorization, and data format normalization to create a unified representation regardless of source.
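The first-stage validation and normalization described above may be sketched as follows; the alias table and field names are hypothetical stand-ins for the event-type standardization and unified representation the module would actually use.

```python
from datetime import datetime, timezone

# Hypothetical mapping from source-specific event names to a unified taxonomy.
EVENT_TYPE_ALIASES = {"btn_click": "click", "mouse_click": "click", "key_input": "input"}

def normalize_event(raw):
    """First-stage cleaning: UTC timestamps, standardized event types,
    and a unified field layout regardless of source."""
    ts = datetime.fromisoformat(raw["timestamp"])
    if ts.tzinfo is None:                      # naive timestamps are assumed UTC
        ts = ts.replace(tzinfo=timezone.utc)
    return {
        "timestamp": ts.astimezone(timezone.utc).isoformat(),
        "type": EVENT_TYPE_ALIASES.get(raw["type"], raw["type"]),
        "target": raw.get("target", "unknown"),   # tolerate missing fields
    }
```

An event recorded at 12:00 in a UTC+5 source zone would thus normalize to 07:00 UTC with a standardized type, regardless of which client produced it.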
According to the embodiment, the data processing module 1004 implements a sliding window mechanism for real-time event correlation in its second stage, with configurable window sizes based on workflow characteristics. Within each window, events may be analyzed for temporal and causal relationships using a directed acyclic graph (DAG) model. The DAG representation preserves event dependencies while enabling efficient traversal and analysis. The third stage performs feature extraction to generate derived metrics useful for analysis, including, but not limited to, inter-event timing statistics, event sequence transition probabilities, resource utilization metrics, error rate and recovery patterns, and user interaction patterns.
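The sliding-window DAG construction in the second stage may be sketched as below. The "same target element" heuristic used to infer a causal edge is an assumption for illustration; a real implementation would combine temporal, causal, and domain signals.

```python
from collections import defaultdict

def build_window_dag(events, window_seconds=5.0):
    """Link each event to later events inside the window whose target element
    matches, approximating a causal edge. The result is acyclic by
    construction, since edges only point forward in time."""
    edges = defaultdict(list)
    for i, (t_i, target_i, name_i) in enumerate(events):
        for t_j, target_j, name_j in events[i + 1:]:
            if t_j - t_i > window_seconds:
                break                     # events are time-ordered; leave window
            if target_j == target_i:      # simplistic causal heuristic
                edges[name_i].append(name_j)
    return dict(edges)
```

Because events are processed in timestamp order, the adjacency lists can be traversed without cycle checks, which is the efficiency property the DAG representation is meant to provide.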
The processed data is stored in an optimized format, for example, using a combination of columnar storage for efficient analytical queries and graph structures for maintaining relationship information. According to an aspect, the module implements a buffer management system that can temporarily store high-velocity event streams during peak processing periods to prevent data loss.
The bottom layer comprises storage and validation components: a pattern store 1007, a rules engine 1008, and an audit system 1009. According to an embodiment, pattern store 1007 implements a specialized storage system optimized for pattern retrieval and comparison operations using a multi-level storage architecture. This architecture comprises a fast access cache layer implemented using an in-memory grid structure for frequently accessed patterns, a primary storage layer, and a long-term archive layer for historical pattern data.
The primary storage layer of the pattern store 1007 uses a combination of a graph database for storing pattern structures and relationships, a columnar store for pattern metadata and statistics, and a time-series database for temporal pattern characteristics. Patterns may be stored using a composite key structure that includes pattern signature hash, pattern category identifiers, temporal validity ranges, confidence scores, and usage frequency metrics.
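A minimal sketch of the composite key structure might look like the following. The field names are assumptions modelled on the attributes listed above; confidence scores and usage frequency are shown as mutable metadata beside the key rather than inside it, since they change as patterns are refined.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class PatternKey:
    """Illustrative composite key for the pattern store."""
    signature_hash: str
    category: str
    valid_from: str
    valid_to: str

def make_key(events, category, valid_from, valid_to):
    """Derive a stable signature hash from the pattern's event sequence."""
    sig = hashlib.sha256(json.dumps(events, sort_keys=True).encode()).hexdigest()[:16]
    return PatternKey(sig, category, valid_from, valid_to)
```

Freezing the dataclass makes the key hashable, so it can index an in-memory cache layer directly while the same fields serve as a composite key in the underlying stores.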
The pattern store 1007 implements versioning to track pattern evolution over time and maintains bidirectional links between related patterns. An automatic pruning mechanism may be implemented which removes or archives patterns that become obsolete or are rarely matched. In some aspects, pattern matching operations may use an LSH (Locality-Sensitive Hashing) index to enable efficient similarity searches. The system provides configurable retention policies that can be customized based on pattern significance scores, usage frequency, resource constraints, and compliance requirements. A background optimization process continuously monitors access patterns and adjusts storage distribution across layers to optimize retrieval performance.
According to another embodiment, rules engine 1008 implements a flexible rule evaluation system supporting both simple conditional rules and complex temporal logic. According to an aspect, the engine uses a domain-specific language (DSL) for rule definition that enables expression of sequential constraints (A must occur before B), temporal constraints (A must occur within X time of B), conditional branching (If A occurs, then either B or C must follow), resource constraints (A must be performed by role X), data validation rules (Field A must match pattern X), and compliance requirements (A must be logged with attributes X, Y, Z). Rules may be compiled into an optimized internal representation using a directed
hypergraph structure that enables efficient evaluation of complex rule chains, detection of rule conflicts and circular dependencies, dynamic rule prioritization and resolution, and parallel rule evaluation where dependencies permit. The engine implements a two-phase evaluation strategy comprising fast-path evaluation using pre-compiled rule chains for common cases and full evaluation for complex scenarios requiring dynamic rule interpretation.
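Two of the constraint types expressible in the DSL, sequential and temporal, can be sketched as plain predicate functions compiled into a rule table; the rule contents (identity verification before account disclosure, case assignment within 30 seconds) are hypothetical examples, and the tuple-based rule table is only a stand-in for the optimized internal representation described above.

```python
def check_sequential(events, a, b):
    """Constraint: a must occur before b (if b occurs at all)."""
    names = [name for name, _ in events]
    return b not in names or (a in names and names.index(a) < names.index(b))

def check_temporal(events, a, b, within):
    """Constraint: b must occur within `within` seconds of a."""
    times = {name: t for name, t in events}
    if a not in times or b not in times:
        return False
    return 0 <= times[b] - times[a] <= within

# Rules "compiled" to (checker, args) pairs -- a simple stand-in for the
# hypergraph representation described above.
RULES = [
    (check_sequential, ("verify_identity", "disclose_account")),
    (check_temporal, ("create_case", "assign_owner", 30.0)),
]

def evaluate(events):
    """Fast-path style evaluation: run every pre-compiled rule check."""
    return all(fn(events, *args) for fn, args in RULES)
```

A workflow that disclosed account details without a prior identity verification event would fail the sequential rule, illustrating how compliance violations surface from the same event stream used for replay.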
The rules engine 1008 runtime rule evaluation employs incremental evaluation to avoid re-checking unchanged conditions, caching of intermediate results for frequently evaluated rules, just-in-time rule compilation for dynamically generated rules, and statistical tracking of rule evaluation patterns for optimization. The engine maintains an evaluation context that includes current workflow state, historical event data within the relevant window, environmental conditions and parameters, user and resource context information, and compliance status and requirements. Rule execution can be monitored and logged for performance optimization, compliance documentation, rule effectiveness analysis, and pattern discovery feedback.
During operation, workflow event data enters the system through either the event capture module or replay orchestrator. The central processing pipeline is anchored by the data processing module 1130, which receives input from both sources and performs initial data normalization and preparation. The processed data flows to two primary analysis modules shown on the right: the pattern analysis module 1140 for identifying workflow patterns and the compliance check module 1150 for validating workflow compliance. Analysis results flow to the report generator 1160 shown in the lower center, which produces standardized and custom reports based on the analysis results. The pattern store 1170 and audit store 1180 provide persistent storage for analysis results and maintain audit trails of all analyses.
The system supports both synchronous and asynchronous processing modes, allowing real-time analysis of live workflow events while simultaneously processing historical data for pattern analysis and optimization. The modular architecture enables new analysis components to be added without disrupting existing functionality, while the separated storage systems ensure that analysis results are properly preserved for future reference and audit purposes.
The reconstructed workflow view 1210 comprises a primary display area configured to present an accurate reproduction of the browser-based application interface as it appeared during the original workflow execution. The view implements a sandboxed execution environment that enables interaction replay without affecting production systems or data. According to the embodiment, the reconstructed view maintains visual fidelity with the original interface including, but not limited to, form layouts, button states, dynamic content updates, and system messages.
The reconstructed workflow view 1210 may comprise an event visualization system that highlights active interface elements during replay. Visual indicators 1222 comprise configurable markers such as, for example, highlighting, outlining, or animated overlays that track the current point of interaction within the workflow. The visualization system supports multiple concurrent indicators to represent complex interactions involving multiple interface elements.
Within the reconstructed workflow view 1210, the new customer form 1220 demonstrates the system's ability to recreate complex form states. The form reconstruction process may comprise: precise positioning and styling of form elements; population of field values in correct sequence; recreation of validation states and error messages; simulation of dynamic form behaviors; and maintenance of field focus and cursor positions.
The timeline component 1230 provides a visual representation of the complete workflow sequence and enables direct navigation within the replay. According to some embodiments, the timeline implements: Progress indication showing completed and pending events; visual markers for significant events or annotations; click-to-navigate functionality for random access; zoom controls for detailed timeline inspection; highlight regions for selected time ranges; and visual indicators for different event types.
The control panel 1250 implements a suite of replay management tools organized into functional groups. In some aspects, the panel employs a modular architecture that allows for the addition or customization of control components based on specific replay requirements.
The playback controls 1252 implement standard media control functions adapted for workflow replay. The controls may include, but are not limited to: play/pause toggle for starting and stopping replay; step forward/backward controls for event-by-event navigation; skip controls for jumping to next/previous significant events; return-to-start and jump-to-end controls; and configurable keyboard shortcuts for all control functions.
The speed control system 1254 enables dynamic adjustment of replay timing. According to an embodiment, the system provides: continuous speed adjustment (e.g., from 0.25× to 4× original speed); preset speed selections for common replay velocities; independent timing controls for events and animations; adaptive speed adjustment based on event density; and option(s) to normalize timing between events.
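The timing adjustment performed by the speed control system may be sketched as a single scaling function; the clamping range mirrors the 0.25x-4x example above, and the `max_gap` parameter is a hypothetical way to express the option of normalizing timing between events.

```python
def replay_delay(original_gap, speed, max_gap=None):
    """Scale the recorded inter-event gap by the selected replay speed.

    The speed is clamped to the illustrative 0.25x-4x range; an optional
    max_gap caps long idle periods so replays skip dead time.
    """
    speed = min(max(speed, 0.25), 4.0)
    delay = original_gap / speed
    return min(delay, max_gap) if max_gap is not None else delay
```

For example, a two-second recorded gap replays in one second at 2x speed, while a sixty-second idle period can be capped at five seconds when normalization is enabled.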
The event details panel 1256 provides a detailed view of the current event context including, but not limited to: precise timestamp and duration information; event type classification and parameters; target element identification and state; associated data values and changes; context information such as user role and system state; and related events and dependencies.
An annotation system 1258 may be present and configured to enable the addition and management of replay commentary. The system can implement one or more of: text annotation creation and editing; timestamp-based annotation anchoring; category classification for annotations; annotation search and filtering; export capabilities for annotation sets; and multi-user annotation support.
The interface implements a sophisticated synchronization framework that maintains consistent state across all components, ensuring synchronized updates of all visual components, consistent timing across reconstructed view and controls, real-time update of event details and annotations, coordinated response to user interactions, and state persistence during replay navigation. This framework is complemented by a configuration system that enables customization of visual indicator styles and behaviors, timeline display options and scale, control panel layout and visible components, keyboard shortcuts and control mappings, annotation categories and formatting, and event detail display preferences.
According to an embodiment, the replay interface extends its functionality through export and integration capabilities including, but not limited to, screenshot and video export capabilities, event sequence data export, and annotation export in multiple formats. The interface provides integration with training systems, APIs for external control and monitoring, and custom event handlers for system integration. These capabilities enable the interface to serve as a comprehensive platform for workflow analysis, training, and process improvement while maintaining flexibility for integration with existing systems and workflows.
According to the embodiment, a method for privacy-compliant workflow replay in a healthcare environment begins at step 1301 with an initial capture phase wherein the system initiates event capture for a clinical documentation session. During this phase, the system detects clinician authentication events, verifies user roles and permissions, initializes privacy rules based on context, and creates a unique session identifier. At step 1302 the system configures privacy filters for PHI detection by loading healthcare-specific data patterns, initializing synthetic data mappings, setting up field-type classification rules, and activating HIPAA compliance controls. Real-time event monitoring commences at step 1303 with tracking of user interface interactions, form field entries, navigation patterns, system responses, and compliance checkpoints.
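The privacy-filter configuration at step 1302 may be illustrated with a deliberately simplified sketch. The regular expressions and synthetic replacement values below are illustrative assumptions only; a production system would rely on vetted PHI detection patterns and the field-type classification rules described above.

```python
import re

# Illustrative healthcare-specific patterns -- NOT a complete PHI ruleset.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:# ]?\d{6,}\b"),
}

# Deterministic synthetic values keep replayed workflows structurally realistic.
SYNTHETIC_VALUES = {"ssn": "000-00-0000", "phone": "555-000-0000", "mrn": "MRN:000000"}

def apply_privacy_filter(text):
    """Replace detected PHI with synthetic stand-ins before capture."""
    for kind, pattern in PHI_PATTERNS.items():
        text = pattern.sub(SYNTHETIC_VALUES[kind], text)
    return text
```

Because the substitution happens before events are persisted, downstream replay and analysis operate entirely on synthetic values while preserving field formats.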
The event processing phase implements comprehensive processing of each workflow event through a series of operations at step 1304. The system generates unique event identifiers, records timestamps and durations, classifies event types and contexts, maps to workflow templates, and applies privacy filters. For each interaction at step 1305, the system creates event pointers that store interaction type and target, record template section identifiers, save field type classifications, log compliance status, and maintain temporal relationships. Each captured event undergoes validation at step 1306 to verify data completeness, check temporal consistency, validate event relationships, confirm privacy compliance, and verify pointer integrity.
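The event pointers created at step 1305 may be sketched as a lightweight record; the field names here are assumptions modelled on the attributes listed above, with the temporal relationship expressed as a link to the prior pointer's identifier.

```python
import uuid
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventPointer:
    """Illustrative event-pointer record for step 1305."""
    event_id: str
    interaction_type: str
    target: str
    template_section: str
    field_type: str
    compliant: bool
    previous_id: Optional[str]  # temporal link to the prior event

def capture(interaction_type, target, template_section, field_type,
            compliant=True, previous=None):
    """Mint a pointer with a unique identifier and chain it to its predecessor."""
    return EventPointer(uuid.uuid4().hex, interaction_type, target,
                        template_section, field_type, compliant,
                        previous.event_id if previous else None)
```

Storing pointers rather than raw interface content is what keeps the capture footprint small: the chain of identifiers preserves ordering for replay while the pointer fields carry only classification metadata.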
During the storage phase, the system prepares the event sequence for persistent storage through a structured process. The preparation may comprise organizing event pointers sequentially, creating session metadata, generating integrity checksums, and applying encryption where required. The storage operation itself encompasses saving the event pointer sequence, storing session context, recording compliance metadata, maintaining an audit trail, and updating index structures to support efficient retrieval and analysis operations.
The replay execution phase begins with workflow reconstruction at step 1403, wherein the system resets the interface state, initializes synthetic contexts, sets up event handlers, and prepares monitoring tools. As each event pointer is processed at step 1404, the system reconstructs the interface state, applies synthetic data, simulates system responses, updates visual indicators, and logs replay progress. Analysis components are continuously updated at step 1405 to calculate efficiency metrics, check compliance status, update quality indicators, record annotations, and generate alerts as necessary.
During the analysis operations phase, the system enables comprehensive reviewer interactions through multiple mechanisms. Reviewers can utilize navigation controls, input annotations, process comparison requests, handle export operations, and manage bookmarks. At step 1406 the system maintains the analysis context by tracking review progress, updating quality metrics, recording findings, generating summaries, and preparing reports. These capabilities enable detailed examination of workflow patterns while maintaining privacy controls.
The quality improvement phase processes analysis results through a structured approach that identifies workflow patterns, detects improvement opportunities, generates training recommendations, compiles compliance reports, and documents findings. The system actively supports improvement initiatives by enabling export of analysis data, generating recommendations, creating training materials, updating workflow templates, and enhancing compliance controls. Throughout all phases, the method maintains continuous privacy controls and compliance monitoring, with validation checks ensuring that privacy and security requirements are consistently met.
This method demonstrates the system's ability to support complex healthcare workflows while maintaining strict privacy controls and enabling detailed analysis for quality improvement purposes. The structured approach to event capture, processing, storage, replay, and analysis ensures comprehensive coverage of workflow monitoring needs while protecting sensitive healthcare information. The method's success in maintaining HIPAA compliance while enabling detailed workflow analysis illustrates its effectiveness in supporting privacy-sensitive use cases in highly regulated environments.
According to the embodiment, client device 120 comprises an operating system 121, a web browser 122 application, and device storage 124 such as a memory and/or non-volatile storage device (e.g., hard drive, solid state drive, etc.). Examples of client devices include, but are not limited to, personal computers, laptops, tablet computers, smart phones, smart wearables, personal digital assistants, and/or the like. Also present on client device 120 but not shown in the drawing is at least one processor operating on client device 120, the at least one processor configured to read, process, and execute a plurality of machine readable instructions which cause the client device to perform at least some of the functions and processes described herein. Operating within the web browser 122 of client device 120 is a session manager 123 and a workflow engine 125 which both provide support for third party service integration directly into web browser 122. According to various implementations, session manager 123 is configured to monitor and store session state information associated with the client device 120 user's current (e.g., ongoing) session with a browser based application such as, for example, a customer relationship management (“CRM”) system. According to an aspect, session state information may comprise session variables such as (non-limiting) session login time (e.g., time at which the user first logged into the CRM system), client device identification or identifiers (e.g., MAC address, IMEI number, ESN number, etc.), and user identification or identifiers (e.g., username, password, email address, group, privileges, etc.). In some implementations, session manager 123 may also store a client session token associated with client device 120 and received from authentication engine 112 responsive to a client session login request. 
When a client session login request is generated within the browser 122, the session state information may be retrieved by session manager 123 and sent to workflow engine 125 which sends the client session state data to cloud-based integration server 110.
According to the embodiment, workflow engine 125 is present to support third party service integration into the browser based application (e.g., browser-based CRM system) and configured to process browser based service requests originating from inside the browser user interface (e.g., CRM system user interface). When a service request is made, workflow engine 125 retrieves a client session token from session manager 123 and creates a wrapper token. In some implementations, the wrapper token comprises the client session token and any appropriate service request data which was received as part of the service request. Workflow engine 125 can send the wrapper token to authentication engine 112 for third party service integration. As an example, a client clicks on an interactive button for a service (e.g., service request) within the browser 122 UI, which causes an API call in the browser 122, wherein workflow engine 125 captures this API call and includes it in the wrapper token which is then sent to cloud-based integration server 110 wherein the service request may be fulfilled. Upon fulfillment of the request, workflow engine 125 may receive one or more third party service data objects (e.g., JSON, etc.) from an authentication engine 112 stored and operating within the cloud-based integration server 110 and then display the one or more data objects within the user interface of the browser 122.
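The wrapper-token construction described above may be sketched as below. Base64-encoded JSON is used here only as a convenient stand-in; the actual token format, field names, and any signing or encryption applied by the integration server are not specified by this sketch.

```python
import base64
import json

def make_wrapper_token(client_session_token, service_request):
    """Bundle the stored client session token with the intercepted
    service-request data into a single opaque token."""
    payload = {"session_token": client_session_token, "request": service_request}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def parse_wrapper_token(token):
    """Inverse operation, as the authentication engine might perform it."""
    return json.loads(base64.urlsafe_b64decode(token.encode()))
```

The round trip illustrates the division of labor: the workflow engine only bundles and forwards, while session validation against stored login data happens server-side.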
According to the embodiment, cloud-based integration server 110 may comprise an authentication engine 112 operating on cloud-based integration server 110 that authenticates a user (i.e., client) and presents a service integration token (or authentication identifier token) for integration through the operating system and software applications (e.g., web browser 122) on the client device 120, wherein interacting with the service integration token produces third party data objects to be used to execute customer relationship management (“CRM”) client workflows incorporating client application, context, and trust information. Cloud-based integration server 110 may comprise one or more computing devices, each of the one or more computing devices comprising at least one processor and a memory. In some implementations, cloud-based integration server 110 may be a distributed architecture wherein the components and functionality of the system may be distributed across two or more computing devices at a given location (e.g., a data center) or distributed across two or more computing devices at different geographic locations (e.g., two data centers).
According to the embodiment, authentication engine 112 can be configured to receive client session login requests or data from a browser based application operating on the client device 120. For example, a client may be a contact center agent and the browser based application may be a CRM system. In some implementations, the client session login request or data may comprise information related to the specific client device from which the request/data originated, session details (e.g., session state) associated with the client's current session within the browser based application (e.g., CRM system), and client information including, but not limited to, username, password, group, privileges, and/or the like. When authentication engine 112 receives a client session login request/data it may store the received data in a database 114. In some implementations, the stored data may be used to validate users (e.g., clients) associated with a received service request. Once the client session login data has been stored, authentication engine 112 may create and transmit a client session token to a session manager 123 operating within the web browser 122 of the client device 120. The client session token represents that the user of the client device has successfully logged into the cloud-based integration server 110 and can be used to authenticate the user during subsequent service requests from client device 120. In some implementations, the client session token may comprise session state information such as, for example, device ID, session ID, and user/client ID information.
According to the embodiment, authentication engine 112 is further configured to receive a wrapper token from workflow engine 125. A client may submit a service request (such as, for example, via pressing on an interactive element of the browser UI) which is intercepted by workflow engine 125 and passed as a wrapper token to authentication engine 112. Authentication engine 112 can parse the wrapper token to retrieve the session information embedded in the client session token. Authentication engine 112 can validate the user by comparing the parsed session information to stored session information in database 114. If the user cannot be validated, then the service request is terminated and, in some implementations, an error message may be displayed to the user via the browser 122 interface on the client device. If the user can be validated because the session information matches stored information in database 114, then authentication engine 112 may generate an authentication identifier token. In various implementations, the authentication identifier token may be logically linked to the wrapper token. In some aspects, authentication identifier token may comprise service request information and credentials. Authentication engine 112 can send the authentication identifier token to the third party service 150, 130 and/or third party microservice 151, 152 associated with the service request. Authentication engine 112 receives back from the third party service a payload (e.g., whatever data was necessary to fulfill service request) in the form of one or more third party data objects (e.g., JSON files, XML files, etc.) which may or may not be encrypted, and sends the payload to workflow engine 125 which causes the third party data objects to be displayed in the browser 122 user interface.
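The session-validation step described above may be sketched as a simple comparison against stored login records; the field names (`session_id`, `user_id`, `device_id`) are illustrative assumptions matching the session state information mentioned earlier.

```python
def validate_session(parsed, stored_sessions):
    """Compare session info parsed from a wrapper token against the session
    records stored in the database at login time."""
    stored = stored_sessions.get(parsed.get("session_id"))
    if stored is None:
        return False                      # unknown session: terminate request
    return (stored["user_id"] == parsed.get("user_id")
            and stored["device_id"] == parsed.get("device_id"))
```

Any mismatch, whether an unknown session identifier or a device that differs from the one recorded at login, causes validation to fail and the service request to be terminated.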
In operation, a user of client device 120 may submit a service request by interacting with an interactive element 202 of the CRM browser UI 201. For example, within the CRM browser UI 201 there is displayed a button that says “get order tracking information” which, when clicked upon by the CRM user (e.g., contact center agent), generates a service request for an order tracking service provided by a third party service provider 150, 130. The service request may be intercepted or obtained by workflow engine 125 which combines the service request information with the client session token stored in session manager 123 to form a wrapper token which is sent to authentication engine 112. As an example, clicking on interactive element 202 such as a button may generate a service request in the form of an application programming interface (“API”) call, wherein the API call may comprise various service request information (e.g., service address, requested data, client data and metadata, etc.). Authentication engine 112 parses the received wrapper token to first validate the user wherein, upon successful user validation, the service request is passed to the third party service as part of an authentication identifier token. Authentication engine 112 receives back from the third party service the payload (e.g., order tracking information) which may be in the form of one or more various types of data objects. The payload may be sent to workflow engine 125 which can display the third party data objects in a data display element 203 of the CRM browser UI 201.
In this way authentication engine 112 can provide improved security for the client and client device 120 by facilitating data exchange between the client device and a plurality of public and private 3rd party services and/or microservices, whereby the user does not have to directly send their credentials to the plurality of third party services, thus reducing the risk of malicious cyberattacks such as man-in-the-middle and other such network packet capturing/monitoring attack vectors. Additionally, workflow engine 125 and session manager 123 provide authentication and third party service and/or microservice integration functions wherein third party data objects may be used to execute various user defined workflows all while operating within the web browser 200 of a client device.
In some implementations, the workflow engine 125 is able to display more than one payload on the CRM browser UI 201 utilizing one or more display elements 203.
According to the aspect, workflow engine 600 comprises an event monitor 610 which may be configured to receive, retrieve, or otherwise obtain various CRM application process data and user behavior data to monitor the steps in a business workflow. In some embodiments, a database 630 may be present and configured to store a plurality of business rules and/or workflows which can be used to compare against received application data to determine when a business workflow step has been skipped or otherwise omitted by the CRM application user (e.g., agent, customer service representative, etc.). Event monitor 610 may analyze the plurality of received data to detect an event. Data associated with the event may be captured by event monitor 610 and used to determine the type of modal dialog that should be displayed within the CRM browser UI 201. In some implementations, event monitor 610 can access a plurality of event data and associated modal dialog data such that each event may have an associated one or more modal dialogs, wherein event monitor 610 selects a modal dialog based on its analysis of the captured event data.
According to the aspect, a selected modal dialog may be sent to an adapter 620 which formats the selected modal dialog for display within the CRM application UI. Adapter 620 may access database 630 to retrieve formatting rules for modal dialogs, wherein the formatting rules may be specific to an enterprise software suite. For example, a SALESFORCE™ CRM may follow different formatting (e.g., font size and type, window size and location on screen, etc.) requirements than that of a ZENDESK™ CRM. The displayed modal dialog may be represented as a box or pop-up window. The modal dialog may comprise a call-to-action or some other requirement that needs to be satisfied in order for the agent's normal workflow to continue. Once the modal dialog has been displayed, event monitor 610 continues to provide real-time monitoring of the agent's action taken responsive to the displayed modal dialog. If the agent's action is valid, that is, it satisfies the call-to-action or other requirement, then the agent may continue on with their workflow. If the agent's action is invalid (e.g., manually dismissing the modal dialog, etc.), then workflow engine 600 can repeatedly display the modal dialog until a valid action is determined to occur.
The methods and processes described herein are illustrative examples and should not be construed as limiting the scope or applicability of the workflow monitoring platform. These exemplary implementations serve to demonstrate the versatility and adaptability of the platform. It is important to note that the described methods may be executed with varying numbers of steps, potentially including additional steps not explicitly outlined or omitting certain described steps, while still maintaining core functionality. The modular and flexible nature of the workflow monitoring platform allows for numerous alternative implementations and variations tailored to specific use cases or technological environments. As the field evolves, it is anticipated that novel methods and applications will emerge, leveraging the fundamental principles and components of the platform in innovative ways. Therefore, the examples provided should be viewed as a foundation upon which further innovations can be built, rather than an exhaustive representation of the platform's capabilities.
According to the embodiment, a workflow engine 125 is stored and operating on a client device (e.g., PC, laptop, smart phone, tablet, smart wearable, etc.) and configured to integrate a plurality of 3rd party software applications into a browser-based CRM system, wherein the integration of third party services allows the client (i.e., contact center agent, customer service representative, etc.) to create bespoke workflows using third party service data which instantiate and execute within the browser-based CRM system all while improving network security by removing the direct exchange of personal identifying information (“PII”) between the client device and the various services and by reducing the required amount of different server connections (and therefore reducing the amount of potential opportunities for malicious cyberattacks on data in transit).
At step 704 event monitor 610 determines a modal dialog associated with the detected event. In some implementations, a database 630 may be used to compare against stored events wherein each event may have a modal dialog associated with it. The selected modal dialog may be sent to an adapter 620 at step 706 which formats the modal dialog based on one or more formatting rules which may be retrieved from database 630. Workflow engine 600 may then display the formatted modal dialog in the web-based CRM application at step 708. At step 710 an agent takes an action on the displayed modal dialog. At this point event monitor 610 receives application data associated with the action that was just taken by the agent and analyzes the action data to determine if the action was valid at step 712. A valid action is an action which directly addresses and corresponds to the displayed modal dialog. For example, a modal dialog may appear that asks and requires the system user to select one option from multiple displayed options, and a valid action would correspond to the agent selecting one of the options. An invalid action, therefore, is any action which does not address or otherwise satisfy the parameters of the modal dialog. For example, if an agent manually dismisses the modal dialog by clicking the CRM backdrop or a dismissal button, then event monitor could capture this application data, analyze the dismissal event data, and then determine that the manual dismissal is a form of invalid action. If at step 712 the action is determined to be invalid, then workflow engine 600 goes back to step 708 and repeats the process until the agent performs a valid action, i.e., performing the instructed step as disclosed in the modal dialog.
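The display-until-valid loop of steps 708 through 712 may be sketched as follows; representing agent actions as a simple sequence of strings is an assumption made purely for illustration.

```python
def run_modal(dialog_options, observed_actions):
    """Re-display the modal dialog until an observed agent action satisfies
    its call-to-action (i.e., selects one of the offered options).

    Returns the valid action and how many times the dialog was shown.
    """
    displays = 0
    for action in observed_actions:
        displays += 1                       # dialog shown before each action
        if action in dialog_options:        # valid action: workflow continues
            return action, displays
    raise RuntimeError("no valid action observed")
```

An agent who dismisses the dialog twice before selecting an offered option would see the dialog three times, matching the loop from step 712 back to step 708.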
If, instead, at step 712 the action is determined to be valid, then workflow engine 600 returns to step 702 and continues to provide real-time monitoring of application data and user behavior data to detect business workflow events and regulate agent compliance with enterprise-specific best practices and standards.
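The monitor-display-validate loop of steps 704 through 712 may be sketched as follows. This is a hedged sketch under stated assumptions: the class shapes, the `dialogStore` stand-in for database 630, and the event/action names are all illustrative and not part of the disclosed embodiment.

```typescript
// Illustrative sketch of the steps 704-712 loop; all names are assumptions.
type Action = { type: string; target: string };
type ModalDialog = { id: string; prompt: string; options: string[] };

// Stand-in for database 630: maps detected event types to modal dialogs (step 704).
const dialogStore = new Map<string, ModalDialog>([
  ["case.opened", { id: "d1", prompt: "Select a disposition", options: ["resolve", "escalate"] }],
]);

// Step 712: a valid action directly addresses the dialog, e.g., selecting
// one of its options; a dismissal (clicking the backdrop) is invalid.
function isValidAction(dialog: ModalDialog, action: Action): boolean {
  return action.type === "select" && dialog.options.includes(action.target);
}

// Steps 704-712: look up the dialog for an event, then re-display it
// (loop back to step 708) until the agent performs a valid action.
function runDialogLoop(eventType: string, actions: Action[]): boolean {
  const dialog = dialogStore.get(eventType);
  if (!dialog) return false; // no modal dialog associated with this event
  for (const action of actions) {
    if (isValidAction(dialog, action)) return true; // valid: return to monitoring (step 702)
    // invalid action: re-display the formatted dialog and wait again
  }
  return false;
}
```

In this sketch a manual dismissal is simply an action whose type is not `select`, which mirrors how the embodiment treats backdrop clicks and dismissal buttons as invalid actions.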
The exemplary computing environment described herein comprises a computing device 10 (further comprising a system bus 11, one or more processors 20, a system memory 30, one or more interfaces 40, one or more non-volatile data storage devices 50), external peripherals and accessories 60, external communication devices 70, remote computing devices 80, and cloud-based services 90.
System bus 11 couples the various system components, coordinating operation of, and data transmission between, those various system components. System bus 11 represents one or more of any type or combination of types of wired or wireless bus structures including, but not limited to, memory busses or memory controllers, point-to-point connections, switching fabrics, peripheral busses, accelerated graphics ports, and local busses using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) busses, Micro Channel Architecture (MCA) busses, Enhanced ISA (EISA) busses, Video Electronics Standards Association (VESA) local busses, Peripheral Component Interconnect (PCI) busses (also known as Mezzanine busses), or any selection of, or combination of, such busses. Depending on the specific physical implementation, one or more of the processors 20, system memory 30, and other components of the computing device 10 can be physically co-located or integrated into a single physical component, such as on a single chip. In such a case, some or all of system bus 11 can be electrical pathways within a single chip structure.
Computing device may further comprise externally-accessible data input and storage devices 12 such as compact disc read-only memory (CD-ROM) drives, digital versatile discs (DVD), or other optical disc storage for reading and/or writing optical discs 62; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices; or any other medium which can be used to store the desired content and which can be accessed by the computing device 10. Computing device may further comprise externally-accessible data ports or connections 12 such as serial ports, parallel ports, universal serial bus (USB) ports, and infrared ports and/or transmitter/receivers. Computing device may further comprise hardware for wired or wireless communication with external devices such as IEEE 1394 (“FireWire”) interfaces, IEEE 802.11 wireless interfaces, BLUETOOTH® wireless interfaces, and so forth. Such ports and interfaces may be used to connect any number of external peripherals and accessories 60 such as visual displays, monitors, and touch-sensitive screens 61, USB solid state memory data storage drives (commonly known as “flash drives” or “thumb drives”) 63, printers 64, pointers and manipulators such as mice 65, keyboards 66, and other devices 67 such as joysticks and gaming pads, touchpads, additional displays and monitors, and external hard drives (whether solid state or disc-based), microphones, speakers, cameras, and optical scanners.
Processors 20 are logic circuitry capable of receiving programming instructions and processing (or executing) those instructions to perform computer operations such as retrieving data, storing data, and performing mathematical calculations. Processors 20 are not limited by the materials from which they are formed or the processing mechanisms employed therein, but are typically comprised of semiconductor materials into which many transistors are formed together into logic gates on a chip (i.e., an integrated circuit or IC). The term processor includes any device capable of receiving and processing instructions including, but not limited to, processors operating on the basis of quantum computing, optical computing, mechanical computing (e.g., using nanotechnology entities to transfer data), and so forth. Depending on configuration, computing device 10 may comprise more than one processor. For example, computing device 10 may comprise one or more central processing units (CPUs) 21, each of which itself has multiple processors or multiple processing cores, each capable of independently or semi-independently processing programming instructions. Further, computing device 10 may comprise one or more specialized processors such as a graphics processing unit (GPU) 22 configured to accelerate processing of computer graphics and images via a large array of specialized processing cores arranged in parallel.
System memory 30 is processor-accessible data storage in the form of volatile and/or non-volatile memory. System memory 30 may be either or both of two types: non-volatile memory and volatile memory. Non-volatile memory 30a is not erased when power to the memory is removed, and includes memory types such as read only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and rewritable solid state memory (commonly known as “flash memory”). Non-volatile memory 30a is typically used for long-term storage of a basic input/output system (BIOS) 31, containing the basic instructions, typically loaded during computer startup, for transfer of information between components within computing device, or a unified extensible firmware interface (UEFI), which is a modern replacement for BIOS that supports larger hard drives, faster boot times, and more security features, and provides native support for graphics and mouse cursors. Non-volatile memory 30a may also be used to store firmware comprising a complete operating system 35 and applications 36 for operating computer-controlled devices. The firmware approach is often used for purpose-specific computer-controlled devices such as appliances and Internet-of-Things (IoT) devices where processing power and data storage space are limited. Volatile memory 30b is erased when power to the memory is removed and is typically used for short-term storage of data for processing. Volatile memory 30b includes memory types such as random access memory (RAM), and is normally the primary operating memory into which the operating system 35, applications 36, program modules 37, and application data 38 are loaded for execution by processors 20. Volatile memory 30b is generally faster than non-volatile memory 30a due to its electrical characteristics and is directly accessible to processors 20 for processing of instructions and data storage and retrieval.
Volatile memory 30b may comprise one or more smaller cache memories which operate at a higher clock speed and are typically placed on the same IC as the processors to improve performance.
Interfaces 40 may include, but are not limited to, storage media interfaces 41, network interfaces 42, display interfaces 43, and input/output interfaces 44. Storage media interface 41 provides the necessary hardware interface for loading data from non-volatile data storage devices 50 into system memory 30 and for storing data from system memory 30 to non-volatile data storage devices 50. Network interface 42 provides the necessary hardware interface for computing device 10 to communicate with remote computing devices 80 and cloud-based services 90 via one or more external communication devices 70. Display interface 43 allows for connection of displays 61, monitors, touchscreens, and other visual input/output devices. Display interface 43 may include a graphics card for processing graphics-intensive calculations and for handling demanding display requirements. Typically, a graphics card includes a graphics processing unit (GPU) and video RAM (VRAM) to accelerate display of graphics. One or more input/output (I/O) interfaces 44 provide the necessary support for communications between computing device 10 and any external peripherals and accessories 60. For wireless communications, the necessary radio-frequency hardware and firmware may be connected to I/O interface 44 or may be integrated into I/O interface 44.
Non-volatile data storage devices 50 are typically used for long-term storage of data. Data on non-volatile data storage devices 50 is not erased when power to the non-volatile data storage devices 50 is removed. Non-volatile data storage devices 50 may be implemented using any technology for non-volatile storage of content including, but not limited to, CD-ROM drives, digital versatile discs (DVD), or other optical disc storage; magnetic cassettes, magnetic tape, magnetic disc storage, or other magnetic storage devices; solid state memory technologies such as EEPROM or flash memory; or other memory technology or any other medium which can be used to store data without requiring power to retain the data after it is written. Non-volatile data storage devices 50 may be non-removable from computing device 10 as in the case of internal hard drives, removable from computing device 10 as in the case of external USB hard drives, or a combination thereof, but computing device will typically comprise one or more internal, non-removable hard drives using either magnetic disc or solid state memory technology. Non-volatile data storage devices 50 may store any type of data including, but not limited to, an operating system 51 for providing low-level and mid-level functionality of computing device 10, applications 52 for providing high-level functionality of computing device 10, program modules 53 such as containerized programs or applications, or other modular content or modular programming, application data 54, and databases 55 such as relational databases, non-relational databases, and graph databases.
Applications (also known as computer software or software applications) are sets of programming instructions designed to perform specific tasks or provide specific functionality on a computer or other computing devices. Applications are typically written in high-level programming languages such as C++, Java, and Python, which are then either interpreted at runtime or compiled into low-level, binary, processor-executable instructions operable on processors 20. Applications may be containerized so that they can be run on any computer hardware running any known operating system. Containerization of computer software is a method of packaging and deploying applications along with their operating system dependencies into self-contained, isolated units known as containers. Containers provide a lightweight and consistent runtime environment that allows applications to run reliably across different computing environments, such as development, testing, and production systems.
The memories and non-volatile data storage devices described herein do not include communication media. Communication media are means of transmission of information such as modulated electromagnetic waves or modulated data signals configured to transmit, not store, information. By way of example, and not limitation, communication media includes wired communications such as sound signals transmitted to a speaker via a speaker wire, and wireless communications such as acoustic waves, radio frequency (RF) transmissions, infrared emissions, and other wireless media.
External communication devices 70 are devices that facilitate communications between computing device and either remote computing devices 80, or cloud-based services 90, or both. External communication devices 70 include, but are not limited to, data modems 71 which facilitate data transmission between computing device and the Internet 75 via a common carrier such as a telephone company or internet service provider (ISP), routers 72 which facilitate data transmission between computing device and other devices, and switches 73 which provide direct data communications between devices on a network. Here, modem 71 is shown connecting computing device 10 to both remote computing devices 80 and cloud-based services 90 via the Internet 75. While modem 71, router 72, and switch 73 are shown here as being connected to network interface 42, many different network configurations using external communication devices 70 are possible. Using external communication devices 70, networks may be configured as local area networks (LANs) for a single location, building, or campus, wide area networks (WANs) comprising data networks that extend over a larger geographical area, and virtual private networks (VPNs) which can be of any size but connect computers via encrypted communications over public networks such as the Internet 75. As just one exemplary network configuration, network interface 42 may be connected to switch 73 which is connected to router 72 which is connected to modem 71 which provides access for computing device 10 to the Internet 75. Further, any combination of wired 77 or wireless 76 communications between and among computing device 10, external communication devices 70, remote computing devices 80, and cloud-based services 90 may be used. 
Remote computing devices 80, for example, may communicate with computing device through a variety of communication channels 74 such as through switch 73 via a wired 77 connection, through router 72 via a wireless connection 76, or through modem 71 via the Internet 75. Furthermore, while not shown here, other hardware that is specifically designed for servers may be employed. For example, secure socket layer (SSL) acceleration cards can be used to offload SSL encryption computations, and transmission control protocol/internet protocol (TCP/IP) offload hardware and/or packet classifiers on network interfaces 42 may be installed and used at server devices.
In a networked environment, certain components of computing device 10 may be fully or partially implemented on remote computing devices 80 or cloud-based services 90. Data stored in non-volatile data storage device 50 may be received from, shared with, duplicated on, or offloaded to a non-volatile data storage device on one or more remote computing devices 80 or in a cloud computing service 92. Processing by processors 20 may be received from, shared with, duplicated on, or offloaded to processors of one or more remote computing devices 80 or in a distributed computing service 93. By way of example, data may reside on a cloud computing service 92, but may be usable or otherwise accessible for use by computing device 10. Also, certain processing subtasks may be sent to a microservice 91 for processing with the result being transmitted to computing device 10 for incorporation into a larger processing task. Also, while components and processes of the exemplary computing environment are illustrated herein as discrete units (e.g., OS 51 being stored on non-volatile data storage device 50 and loaded into system memory 30 for use) such processes and components may reside or be processed at various times in different components of computing device 10, remote computing devices 80, and/or cloud-based services 90.
Remote computing devices 80 are any computing devices not part of computing device 10. Remote computing devices 80 include, but are not limited to, personal computers, server computers, thin clients, thick clients, personal digital assistants (PDAs), mobile telephones, watches, tablet computers, laptop computers, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, video game machines, game consoles, portable or handheld gaming units, network terminals, desktop personal computers (PCs), minicomputers, mainframe computers, network nodes, and distributed or multi-processing computing environments. While remote computing devices 80 are shown for clarity as being separate from cloud-based services 90, cloud-based services 90 are implemented on collections of networked remote computing devices 80.
Cloud-based services 90 are Internet-accessible services implemented on collections of networked remote computing devices 80. Cloud-based services are typically accessed via application programming interfaces (APIs) which are software interfaces which provide access to computing services within the cloud-based service via API calls, which are pre-defined protocols for requesting a computing service and receiving the results of that computing service. While cloud-based services may comprise any type of computer processing or storage, three common categories of cloud-based services 90 are microservices 91, cloud computing services 92, and distributed computing services 93.
Microservices 91 are collections of small, loosely coupled, and independently deployable computing services. Each microservice represents a specific computing functionality and runs as a separate process or container. Microservices promote the decomposition of complex applications into smaller, manageable services that can be developed, deployed, and scaled independently. These services communicate with each other through well-defined application programming interfaces (APIs), typically using lightweight protocols like HTTP or message queues. Microservices 91 can be combined to perform more complex processing tasks.
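The decomposition-and-recombination pattern of the preceding paragraph may be sketched in miniature as follows. This is purely illustrative: the two "services" are modeled as in-process functions behind a uniform request/response shape, and all names (`SvcRequest`, `tokenize`, `pipeline`) are assumptions, not elements of the disclosure.

```typescript
// Illustrative only: two "microservices" modeled as independent functions
// communicating through a well-defined, uniform API shape.
type SvcRequest = { text: string };
type SvcResponse = { result: string };

// Each service exposes one narrow, independently deployable capability.
const tokenize = (req: SvcRequest): SvcResponse => ({ result: req.text.split(" ").join("|") });
const upper = (req: SvcRequest): SvcResponse => ({ result: req.text.toUpperCase() });

// Microservices 91 can be combined to perform a more complex task:
// each service's response feeds the next service's request.
function pipeline(text: string, services: ((r: SvcRequest) => SvcResponse)[]): string {
  return services.reduce((acc, svc) => svc({ text: acc }).result, text);
}
```

In a deployed system each function would sit behind its own HTTP endpoint or message queue, but the composition logic is the same.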
Cloud computing services 92 are the delivery of computing resources and services over the Internet 75 from a remote location. Cloud computing services 92 provide additional computer hardware and storage on an as-needed or subscription basis. Cloud computing services 92 can provide large amounts of scalable data storage, access to sophisticated software and powerful server-based processing, or entire computing infrastructures and platforms. For example, cloud computing services can provide virtualized computing resources such as virtual machines, storage, and networks; platforms for developing, running, and managing applications without the complexity of infrastructure management; and complete software applications over the Internet on a subscription basis.
Distributed computing services 93 provide large-scale processing using multiple interconnected computers or nodes to solve computational problems or perform tasks collectively. In distributed computing, the processing and storage capabilities of multiple machines are leveraged to work together as a unified system. Distributed computing services are designed to address problems that cannot be efficiently solved by a single computer or that require large-scale computational power. These services enable parallel processing, fault tolerance, and scalability by distributing tasks across multiple nodes.
Although described above as a physical device, computing device 10 can be a virtual computing device, in which case the functionality of the physical components herein described, such as processors 20, system memory 30, network interfaces 42, and other like components can be provided by computer-executable instructions. Such computer-executable instructions can execute on a single physical computing device, or can be distributed across multiple physical computing devices, including being distributed across multiple physical computing devices in a dynamic manner such that the specific, physical computing devices hosting such computer-executable instructions can dynamically change over time depending upon need and availability. In the situation where computing device 10 is a virtualized device, the underlying physical computing devices hosting such a virtualized computing device can, themselves, comprise physical components analogous to those described above, and operating in a like manner. Furthermore, virtual computing devices can be utilized in multiple layers with one virtual computing device executing within the construct of another virtual computing device. Thus, computing device 10 may be either a physical computing device or a virtualized computing device within which computer-executable instructions can be executed in a manner consistent with their execution by a physical computing device. Similarly, terms referring to physical components of the computing device, as utilized herein, mean either those physical components or virtualizations thereof performing the same or equivalent functions.
The skilled person will be aware of a range of possible modifications of the various aspects described above. Accordingly, the present invention is defined by the claims and their equivalents.
Priority is claimed in the application data sheet to the following patents or patent applications, each of which is expressly incorporated herein by reference in its entirety: 19/044,966; 18/447,274; 18/072,364; 63/422,906.
| Number | Date | Country |
|---|---|---|
| 63422906 | Nov 2022 | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | 18447274 | Aug 2023 | US |
| Child | 19044966 | | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | 19044966 | Feb 2025 | US |
| Child | 19091804 | | US |
| Parent | 18072364 | Nov 2022 | US |
| Child | 18447274 | | US |