Modern network management requires analysis of network topologies and communication graphs for diverse tasks. However, the absence of a cohesive approach results in a challenging learning curve and heightened errors and inefficiencies.
The Detailed Description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of similar reference numbers in different instances in the description and the figures may indicate similar or identical items.
Modern network management requires analysis of network topologies and communication graphs for diverse tasks. However, the absence of a cohesive approach results in a challenging learning curve and heightened errors and inefficiencies. The present concepts include an innovative approach to transform network management by leveraging large artificial intelligence (LAI) models, such as large language models (LLMs) to generate task-specific code for graph manipulation through natural language queries. Stated another way, the present concepts enable natural-language-based network management experiences, leveraging LLMs to generate task-specific code from natural language queries. This technical solution tackles challenges relating to explainability, scalability, and privacy by allowing network operators to inspect the generated code. This eliminates the need to share network data with LLMs, and concentrates on application-specific requests combined with general program synthesis techniques. Implementations are evaluated using benchmark applications, showcasing high accuracy, cost-effectiveness, and the potential for further enhancements using advanced and complementary program synthesis techniques.
One critical aspect of contemporary network management involves analyzing and performing actions on network topologies and communication graphs for tasks such as capacity planning, configuration analysis, and traffic analysis. For instance, network operators may pose capacity planning questions such as “What is the most cost-efficient way to double the network bandwidth between these two data centers?” using network topology data. Similarly, they may ask diagnostic questions like “What is the required number of hops for data transmission between these two nodes?” using communication graphs. Network operators today rely on an expanding array of tools and domain-specific languages (DSLs) for these operations. The absence of a unified approach to these tasks leads to a steep learning curve and increased errors and inefficiencies in manual operations. The present concepts provide a unified approach that provides a technical solution that reduces the learning curve and minimizes errors and inefficiencies in manual operations.
Leveraging large artificial intelligence models, such as large language models (LLMs), offers a promising solution to these technical challenges by enabling network administrators and operators to streamline tasks using natural language. LLMs have demonstrated exceptional proficiency in interpreting human language and providing high-quality answers across various domains. The capabilities of LLMs can potentially bridge the gap between diverse tools and DSLs, leading to more efficient network management and a more cohesive approach to handling network-related questions and tasks.
Unfortunately, no existing systems facilitate graph manipulation using natural language, and asking LLMs to directly manipulate network topologies introduces three fundamental challenges related to explainability, scalability, and privacy. First, explaining the output of LLMs and enabling LLMs to reason about complex problems remain unsolved issues. This complicates the process of determining the methods employed by LLMs in deriving answers and evaluating their correctness. Second, LLMs are constrained by limited token window sizes, which restricts their capacity to process extensive network topologies and communication graphs. For example, state-of-the-art LLMs such as Bard, ChatGPT, and GPT-4 permit only 2k to 32k tokens in their prompts, which can accommodate merely a small network topology comprising tens of nodes and hundreds of edges. Third, network data may consist of personally identifiable information (PII), such as IP addresses, raising privacy concerns when transferring this information to LLMs for processing. The present technical solutions address these challenges to develop a more effective approach to integrating LLMs in network management tasks.
The present concepts involve a novel approach to revolutionize network management by leveraging the power of LLMs to create task-specific code for graph analysis and manipulation. This novel approach enhances network management by leveraging LLMs to create task-specific code for graph analysis and manipulation, which facilitates a natural-language-based network administration.
Stated another way, these FIGS. depict how an example system generates and executes LLM-produced code in response to a network operator's natural language query. This approach tackles the explainability challenge by allowing network operators to examine the LLM-generated code, enabling them to comprehend the underlying logic that fulfills the query. Additionally, it delegates computation to program execution engines, thereby minimizing arithmetic inaccuracies and LLM-induced hallucinations. Furthermore, this approach overcomes scalability and privacy issues by removing the need to share network data with LLMs.
A primary technical challenge lies in generating high-quality code to accomplish network management tasks. AI, through LLMs, has shown remarkable capabilities in general code generation. However, LLMs lack an understanding of domain- and application-specific requirements. To tackle this challenge, the present concepts involve a novel framework that combines application-specific requests with general program synthesis techniques to create customized code for graph manipulation tasks in network management. The architecture divides code generation into two components: (1) an application-specific element that provides context, instructions, or plugins, which enhance the LLMs' comprehension of network structures and terminology, and (2) a code generation element that leverages suitable libraries and program synthesis techniques. This architecture fosters independent innovation of the distinct components, and evidence indicates substantial code quality improvements.
This sequence of user-interfaces shows how the present implementations provide a technical solution to the explainability challenge by enabling users (e.g., network operators) to examine the code 106 and understand the techniques used by LLMs to derive answers while assessing their accuracy. Moreover, these implementations provide technical solutions that overcome both scalability and privacy concerns by removing the necessity to transfer network data to LLMs, as the input for LLMs is the natural language query and the output solely comprises LLM-generated code. The code is then applied to the network information rather than putting the network information into the LLM.
A core principle of the novel framework involves integrating application-specific requests with general program synthesis techniques to create code customized for graph manipulation tasks in network management. The novel architecture provides a technical solution that separates the process of generating high-quality code into two key components: (1) an application-specific element (e.g., application prompt generator) that supplies context, instructions, or plugins, which enhances the LLMs' comprehension of network structures, attributes, and terminology, and (2) a code generation element (e.g., code-gen prompt generator) that leverages suitable libraries and cutting-edge program synthesis techniques to produce code. This architecture fosters independent innovation of distinct components.
The novel aspects provide a technical approach to network management that employs LLMs for generating code for graph manipulation tasks, enabling a natural-language-based network administration experience. A benchmarking system is employed that encompasses two network administration applications: network traffic analysis and network lifecycle management. Applications are evaluated with three code generation techniques and four distinct LLMs to validate the capability of the present approach in generating high-quality code for graph manipulation tasks. These aspects are described in more detail below.
The description now examines graph analysis and the role of graph manipulation in network management, followed by discussing LLM advancements and their potential application in this domain.
Graph analysis and manipulation are useful in network management. Network management tends to involve an array of tasks such as network planning, monitoring, configuration, and/or troubleshooting. As networks expand in size and complexity, these tasks become progressively more challenging. For instance, network operators are required to configure thousands of network devices to enforce intricate policies and monitor these devices to guarantee their proper functionality. Numerous operations can be modeled as graph analysis and manipulation for network topologies or communication graphs. Two examples of these tasks are described below.
The first example involves network traffic analysis. Network operators analyze network traffic for various reasons, such as identifying bottlenecks, congestion points, underutilized resources, and/or traffic classification, among others. A valuable representation in traffic analysis is traffic dispersion graphs (TDGs) or communication graphs, in which nodes represent network components like routers, switches, and/or devices, and edges symbolize the connections or paths between these components. An example is illustrated in
The second example involves network lifecycle management. Managing the entire lifecycle of a network entails various phases, including capacity planning, network topology design, deployment planning, and/or diagnostic operations, among others. The majority of these operations necessitate an accurate representation of network topology at different abstraction levels and the manipulation of topology to achieve the desired network state. For example, network operators might employ a high-level topology to plan the network's capacity and explore different alternatives for increasing bandwidth between two data centers. Similarly, network engineers may utilize a low-level topology to ascertain the location of network devices and their connections to other devices.
For at least the above reasons, graph analysis and manipulation are crucial in network management. A unified interface capable of comprehending and executing these tasks has the potential to significantly streamline the process, saving network operators considerable time and effort.
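As a concrete illustration, the diagnostic hop-count question posed earlier can be answered with a few lines of graph code. The following sketch builds a small, hypothetical communication graph in NetworkX (the node names and the `bytes` attribute are illustrative assumptions, not drawn from any real network) and computes the number of hops between two hosts:

```python
import networkx as nx

# Hypothetical communication graph: nodes represent network components,
# edges carry illustrative traffic attributes.
G = nx.Graph()
G.add_edge("router-a", "switch-1", bytes=1200)
G.add_edge("switch-1", "host-x", bytes=800)
G.add_edge("router-a", "router-b", bytes=5000)
G.add_edge("router-b", "host-y", bytes=300)

# "What is the required number of hops for data transmission
# between these two nodes?" maps to a shortest-path length query.
hops = nx.shortest_path_length(G, "host-x", "host-y")
```

In this toy topology the answer is four hops (host-x, switch-1, router-a, router-b, host-y), illustrating how a natural-language diagnostic question reduces to a single graph-library call.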
The description now turns to LLMs and program synthesis. Automated program generation based on natural language descriptions, also known as program synthesis, has been a long-standing research challenge. Until recently, program synthesis had primarily been limited to specific domains, with general program synthesis considered to be out of reach. The breakthrough emerged with the advancement of LLMs, which are trained on extensive corpora of natural language text from the internet and massive code repositories such as GitHub. LLMs have demonstrated remarkable proficiency in learning the relationship between natural language and code, achieving state-of-the-art performance in domain-specific tasks, such as natural language to database queries as well as human-level performance in tasks such as programming competitions and mock technical interviews. Recently, these advancements have led to experimental plugins designed to solve mathematical problems and perform data analysis through code generation.
The recent breakthrough in program synthesis using LLMs has ignited a surge of research aimed at advancing the state of the art in this field. These techniques can generally be classified into three approaches: (1) code selection, which involves generating multiple samples with LLMs and choosing the best one based on the consistency of execution results or auto-generated test cases; (2) few-shot examples, which supply LLMs with several examples of the target program's input-output behavior; and (3) feedback and self-reflection, which incorporates a feedback or reinforcement learning outer loop to help LLMs learn from their errors. These advanced techniques continue to expand the horizons of program synthesis, empowering LLMs to generate more complex and accurate programs.
The system framework 200 is designed to offer flexibility regarding the code-gen prompt generator 218 and LLMs 220. This enables the use of various techniques for different applications 204. The application prompt generator 214 can be configured to utilize the application wrapper 202 to encapsulate domain-specific knowledge that includes definitions of nodes and edges and can be configured to transform application data into graph 210. Subsequently, the task-specific prompt 216 is combined with the general code-gen prompt generator 218 to generate a complete prompt 219. The complete prompt 219 is used to instruct the LLM 220 to produce/generate code 222 for graph analysis and manipulation tasks. Once generated, the code 222 is executed within an execution sandbox 224 on the graph representation 210 of the network. The generated code 222 can utilize plugins and libraries to respond to the user's queries 212 on the constructed graph 210. The LLM-generated code 222 and the resulting execution outcomes are displayed on a user-experience (UX) interface 226 and as shown in
The application wrapper 202 offers context-specific information related to the network management application and the network itself. For instance, the multi-abstraction-layer topology representation (MALT) wrapper can extract the graph of entities and relationships from the underlying data, describing entities (e.g., packet switches, control points, etc.) and relationships (e.g., contains, controls, etc.) in natural language. This information assists LLMs 220 in comprehending the network management application and the graph data structure. Additionally, the application wrapper 202 can provide application-specific plugins and/or code libraries to make LLM tasks more straightforward. The application wrapper also offers a secure environment to run the code 222 generated by the LLMs 220.
The purpose of the application prompt generator 214 is to accept both the user query 212 and the information from the application wrapper 202 as input, and then generate a prompt specifically tailored to the query and task for the LLM. To achieve this, the application prompt generator 214 can utilize a range of static and dynamic techniques. For instance, when working with MALT, the application prompt generator 214 can dynamically select relevant entities and relationships based on the user query 212, and then populate a prompt template with the contextual information. The system framework is designed to offer flexibility with regard to the code-gen prompt generator 218 and LLMs 220, enabling the use of various techniques for different applications 204.
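The two-stage prompt assembly can be sketched as follows. The prompt strings and function names below are illustrative placeholders for explanation only, not the actual templates used by the application prompt generator 214 or the code-gen prompt generator 218:

```python
# Application-specific element: injects domain context alongside the query.
def application_prompt(query: str, context: str) -> str:
    return f"Network context:\n{context}\n\nTask: {query}"

# Code-generation element: appends general program-synthesis instructions.
def codegen_prompt(task_prompt: str, library: str) -> str:
    return (task_prompt
            + f"\n\nWrite Python code using {library} that answers the task."
            + "\nReturn only code.")

context = "Nodes are switches and routers; edges have a 'bytes' attribute."
query = "Find the busiest link in the graph."
complete_prompt = codegen_prompt(application_prompt(query, context), "networkx")
```

The separation mirrors the architecture described above: the first function can evolve with the application (e.g., selecting relevant entities dynamically) while the second can adopt new program synthesis techniques independently.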
The execution sandbox 224 offers a secure environment for running the code generated by LLMs. The execution sandbox 224 can be established using virtualization or containerization techniques, ensuring limited access to program libraries and system calls. Additionally, this element provides a chance to enhance the security of both code and system by validating network invariants and/or examining output formats.
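A minimal sketch of such a sandbox follows, assuming a convention (an assumption for illustration) in which the generated code stores its answer in a `result` variable. It restricts the builtins and libraries exposed to the generated code; a production deployment would instead rely on virtualization or containerization, as noted above:

```python
import networkx as nx

# Only a small allowlist of builtins and the graph library are exposed;
# no file, network, or os access is available to the generated code.
ALLOWED_GLOBALS = {"__builtins__": {"len": len, "max": max, "min": min,
                                    "sorted": sorted, "print": print},
                   "nx": nx}

def run_generated_code(code: str, graph: nx.Graph):
    env = dict(ALLOWED_GLOBALS)
    env["G"] = graph
    exec(code, env)
    return env.get("result")  # convention: generated code sets `result`

G = nx.path_graph(4)
result = run_generated_code("result = len(G.nodes)", G)
```

This is also a natural place to validate network invariants or check output formats before results reach the operator.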
This novel system framework 200 is designed to revolutionize network management by utilizing LLMs to generate task-specific code. This system framework is founded on two key insights. First, numerous network management operations can be transformed into graph analysis and manipulation tasks. This transformation allows for a unified design and a more focused task for program synthesis (e.g., code generation). Second, the prompt generation techniques can be separated between domain-specific techniques and general program-synthesis techniques. This separation enables high-quality code generation by combining the strengths of domain specialization and advanced program synthesis techniques. This allows the system framework to generate high-quality code for network management tasks.
The description now turns to implementation and evaluation of the LLM generated code execution. This aspect can be achieved by a benchmark system.
The benchmark system 300 can evaluate the effectiveness of LLM-based network management systems.
The description first turns to the golden answer selector 302. For each test input user query, a “golden answer” is created with the help of human experts. These verified answers are stored in a golden answer selector's dictionary file. The golden answer acts as the ground truth for evaluating LLM-generated code.
The results evaluator 304 provides evaluation and analysis of the LLM-generated code (222,
The results logger 306 allows the system to log the results of each user query 212, including the LLM-generated code 222, the golden answer, and the comparison results provided by the evaluation and analysis module 312. The results logger 306 also records any code execution errors that may have occurred during the evaluation process. These logs facilitate the analysis of the LLM's performance and the identification of potential improvements.
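The benchmark loop can be sketched as follows. The golden-answer dictionary format and the convention that generated code sets a `result` variable are assumptions for illustration:

```python
# Illustrative golden-answer store mapping queries to verified answers.
golden_answers = {"count nodes": 4}

def evaluate(query: str, generated_code: str) -> bool:
    env = {}
    try:
        exec(generated_code, env)
    except Exception:
        return False  # execution errors are recorded as failures
    # Compare the execution result against the ground-truth golden answer.
    return env.get("result") == golden_answers[query]

ok = evaluate("count nodes", "result = 2 + 2")
bad = evaluate("count nodes", "result = 5")
```

Comparing execution results, rather than code text, allows syntactically different but semantically equivalent LLM outputs to pass.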
The description now explains the experimental setup utilized to test the performance of the present implementations.
Two applications were implemented and evaluated relating to network traffic analysis and network lifecycle management. In relation to network traffic analysis, synthetic communication graphs were generated with varying numbers of nodes and edges. Each edge represents communication activities between two nodes with random weights in bytes, connections, and packets. 24 queries were developed by curating trial users' queries, encompassing common tasks such as topology analysis, information computation, and graph manipulation.
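Such synthetic communication graphs might be generated along the following lines; the attribute names and value ranges here are illustrative assumptions:

```python
import random
import networkx as nx

def make_synthetic_graph(num_nodes: int, num_edges: int, seed: int = 0):
    rng = random.Random(seed)
    # Random topology with a controlled number of nodes and edges.
    g = nx.gnm_random_graph(num_nodes, num_edges, seed=seed)
    # Each edge carries random weights in bytes, connections, and packets.
    for u, v in g.edges:
        g.edges[u, v]["bytes"] = rng.randint(1, 10**6)
        g.edges[u, v]["connections"] = rng.randint(1, 100)
        g.edges[u, v]["packets"] = rng.randint(1, 10**4)
    return g

g = make_synthetic_graph(50, 200)
```

Controlling node and edge counts in this way also makes it possible to test the strawman baseline described below within LLM token limits.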
In relation to network lifecycle management, the example MALT model was utilized and converted into a directed graph with 5493 nodes and 6424 edges. Each node represents one or more device or node types in a network, such as packet switches, chassis, and ports, with different node types containing various attributes. Directed edges encapsulate relationships between devices, like control or containment associations. Nine network management queries were developed focusing on operational management, WAN capacity planning, and topology design.
The queries are grouped into three complexity levels (“Easy,” “Medium,” and “Hard”) according to their solution length. Table 1 displays an example query from each category.
Four state-of-the-art LLMs were studied, including GPT-4, GPT-3, text-davinci-003 (a variant of GPT-3.5), and Google Bard. It is contemplated that the present concepts can be applied to future LLMs. Two open LLMs, StarCoder and InCoder, were also examined, but the results were inconsistent. With all OpenAI LLMs, the temperature was set to 0 to ensure consistent output across multiple trials. Since the temperature of Google Bard cannot be changed, each query was sent five times and the average passing probability was calculated.
Three code-generation approaches were implemented, including NetworkX, Pandas, and SQL.
In relation to NetworkX, the network data was represented as a NetworkX graph, which offers flexible APIs for efficient manipulation and analysis of network graphs.
In relation to Pandas, the network data was represented using two Pandas dataframes: a node dataframe, which stores node indices and attributes, and an edge dataframe, which encapsulates the link information among nodes through an edge list. Pandas provides many built-in data manipulation techniques, such as filtering, sorting, and grouping.
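The Pandas representation can be sketched as follows; the column names are illustrative assumptions:

```python
import pandas as pd

# Node dataframe: node indices and attributes.
nodes = pd.DataFrame({"node": ["a", "b", "c"],
                      "role": ["router", "switch", "host"]})

# Edge dataframe: link information among nodes as an edge list.
edges = pd.DataFrame({"src": ["a", "a", "b"],
                      "dst": ["b", "c", "c"],
                      "bytes": [500, 1500, 200]})

# Example manipulation using built-in grouping: total bytes per source.
bytes_per_src = edges.groupby("src")["bytes"].sum()
```

Filtering, sorting, and grouping operations like this cover many traffic-analysis queries, though graph-native operations (e.g., path finding) must be expressed indirectly.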
In relation to SQL, the network data was represented as a database queried through SQL, consisting of a table for nodes and another for edges. The table schemas are similar to those in Pandas.
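A minimal sketch of the SQL representation, using an in-memory SQLite database with illustrative schemas mirroring the Pandas layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One table for nodes, another for edges (schemas are illustrative).
conn.executescript("""
    CREATE TABLE nodes (node TEXT PRIMARY KEY, role TEXT);
    CREATE TABLE edges (src TEXT, dst TEXT, bytes INTEGER);
    INSERT INTO nodes VALUES ('a','router'), ('b','switch'), ('c','host');
    INSERT INTO edges VALUES ('a','b',500), ('a','c',1500), ('b','c',200);
""")

# Example query: the heaviest link by bytes.
heaviest = conn.execute(
    "SELECT src, dst, bytes FROM edges ORDER BY bytes DESC LIMIT 1"
).fetchone()
```

As with Pandas, tabular queries handle attribute filtering well, but multi-hop graph traversals require more involved SQL (e.g., recursive common table expressions).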
A (strawman) baseline was also evaluated that directly feeds the original network graph data in JSON format to the LLM and requests it to address the query. However, in light of the token constraints on LLMs, the evaluation of this approach was limited to synthetic graphs for network traffic analysis, where data size can be controlled.
Table 2 summarizes the benchmark system (e.g., code correctness) results for network traffic analysis and network lifecycle management. Three key points were observed. First, using LLMs for network management code generation significantly outperforms the strawman baseline in both applications. Second, employing a graph library (NetworkX) greatly enhances code accuracy compared to Pandas and SQL, as LLMs can leverage or directly map natural-language graph operations to NetworkX's graph manipulation APIs to streamline or simplify the generated code. This trend is consistent across all four tested LLMs. Finally, pairing NetworkX with the state-of-the-art GPT-4 model produces the highest (e.g., best) results (88% and 78%, respectively), making it a promising strategy for network management code generation.
To understand the impact of task difficulty, the accuracy results are broken down in Tables 2 and 3. The accuracy of LLM-generated code decreases as task complexity increases. This trend is consistent across all LLMs and approaches, with the performance disparities becoming more pronounced for network lifecycle management (Table 2). Analysis of the LLM-generated code reveals that the complex relationships in the MALT dataset make LLMs more prone to errors in challenging tasks.
The description now turns to case studies. For the NetworkX approach across all four LLMs, there are 36 failures out of 96 tests (24×4) for network traffic analysis and 17 failures out of 36 tests (9×4) for network lifecycle management, respectively. Table 5 summarizes the error types. More than half of the errors are related to syntax errors or imaginary attributes. A case study was utilized to see whether using complementary and more advanced program synthesis techniques could correct these errors.
Two techniques were assessed: (1) pass@k, where the LLM is queried k times with the same question, and it is deemed successful if at least one of the answers is correct. This method reduces errors arising from the LLM's inherent randomness and can be combined with code selection techniques for improved results; (2) self-debug, which involves providing the error message back to the LLM, encouraging it to correct the previous response.
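Both techniques can be sketched in a few lines. The `llm` callable below is a stand-in placeholder for a real model client, and the demo model is a deterministic fake used only to exercise the control flow:

```python
def pass_at_k(llm, prompt, is_correct, k=5):
    # Query the model k times; succeed if any sample passes the check.
    for _ in range(k):
        code = llm(prompt)
        if is_correct(code):
            return code
    return None

def self_debug(llm, prompt, run, max_rounds=3):
    # Feed execution errors back to the model so it can repair its code.
    code = llm(prompt)
    for _ in range(max_rounds):
        try:
            run(code)
            return code
        except Exception as err:
            prompt = f"{prompt}\nPrevious attempt failed with: {err}\nFix it."
            code = llm(prompt)
    return None

# Demo with a fake model that fails once, then succeeds.
_samples = iter(["result = 1/0", "result = 2 + 2"])
fake_llm = lambda prompt: next(_samples)
chosen = pass_at_k(fake_llm, "add two and two",
                   lambda code: "2 + 2" in code, k=2)

_attempts = iter(["1/0", "x = 4"])
fake_llm2 = lambda prompt: next(_attempts)
fixed = self_debug(fake_llm2, "compute", lambda code: exec(code))
```

In practice, pass@k requires an automated correctness check (e.g., execution-result consistency or test cases), while self-debug depends on the sandbox surfacing informative error messages.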
A case study was employed using the Bard model and three unsuccessful network lifecycle queries with the NetworkX approach. Table 6 shows that both pass@k (k=5) and self-debug significantly enhance code quality, resulting in improvements of 100% and 67%, respectively. These results indicate that applying complementary techniques has considerable potential for further improving the accuracy of LLM-generated code in network management applications.
The LLM cost was examined utilizing GPT-4 pricing on Azure for the network traffic analysis application. LLM models and pricing are provided for purposes of explanation and are expected to evolve over time.
Recent advancements in LLMs have paved the way for new opportunities in network management. The present concepts include a system framework (200,
As the evaluation demonstrates, the LLM-generated code (222,
In summary, the present concepts offer a pioneering step in introducing a general framework for using LLMs in network management, presenting a new frontier for streamlining and/or simplifying network operators' tasks.
As described above, LLM-generated code can tackle explainability, scalability, and privacy challenges in LLM-based network management. However, merely applying existing approaches is inadequate for network management tasks, as existing techniques do not comprehend the domain-specific and application-specific requirements. One of the key technical challenges lies in harnessing recent advancements in LLMs and general program synthesis to develop a unified interface capable of accomplishing network management tasks, which forms the design requirements for the present solutions.
The example user-interface described relative to
Several implementations are described in detail above.
At block 504, the method can provide a natural language prompt for a graph manipulation task to the LLM.
At block 506, the method can receive code from the LLM that addresses the graph manipulation task. Thus, the method receives the code from the LLM without providing the graph or graph information to the LLM, and data privacy can be maintained. Instead, the code is applied locally to manipulate the graph from the original graph to an updated graph. Further, the user can inspect the code and decide whether to run the code on the graph. Further still, the user can then decide whether to retain the updated graph or revert to the original graph.
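Blocks 504 and 506 can be sketched as follows; the generated code string is an illustrative stand-in for actual LLM output:

```python
import networkx as nx

# The private graph never leaves the local environment; only the
# natural-language prompt is sent to the LLM.
original = nx.Graph([("a", "b"), ("b", "c")])

# Stand-in for code received back from the LLM.
generated_code = "updated.add_edge('a', 'c')"

# Apply the code locally to a copy, preserving the original graph.
updated = original.copy()
exec(generated_code, {"updated": updated})

# The operator inspects the result and chooses to keep it or revert.
keep_update = True
final = updated if keep_update else original
```

Because the code runs on a copy, reverting to the original graph is always possible until the operator commits the updated network state.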
The order in which the disclosed methods are described is not intended to be construed as a limitation, and any number of the described acts can be combined in any order to implement the method, or an alternate method. Furthermore, the methods can be implemented in any suitable hardware, software, firmware, or combination thereof, such that a computing device can implement the method. In one case, the methods are stored on one or more computer-readable storage media as a set of instructions such that execution by a processor of a computing device causes the computing device to perform the method.
Computing devices 602 can include a communication component 608, a processor 610, storage resources (e.g., storage) 612, and/or graph analysis and manipulation tool 614.
The graph analysis and manipulation tool 614 can be configured to receive a natural language prompt relating to a network management activity and to access a graph resource and to generate code that addresses the network management activity as a graph manipulation task. Example graph resources can include plugins, libraries, application program interfaces (APIs), and/or instructions relating to graphs, among others. The graph analysis and manipulation tool 614 can implement and/or interact with the system frameworks 200 and 300 introduced above relative to
In configuration 616(1), the graph analysis and manipulation tool 614 can be manifest as part of the operating system 620. Alternatively, the graph analysis and manipulation tool 614 can be manifest as part of the applications 618 that operate in conjunction with the operating system 620 and/or processor 610. In configuration 616(2), the graph analysis and manipulation tool 614 can be manifest as part of the processor 610 or a dedicated resource 626 that operates cooperatively with the processor 610.
In some configurations, each of computing devices 602 can have an instance of the graph analysis and manipulation tool 614. However, the functionalities that can be performed by the graph analysis and manipulation tool 614 may be the same or they may be different from one another when comparing computing devices. For instance, in some cases, each graph analysis and manipulation tool 614 can be robust and provide all of the functionality described above and below (e.g., a device-centric implementation).
In other cases, some devices can employ a less robust instance of the graph analysis and manipulation tool 614 that relies on some functionality to be performed by a graph analysis and manipulation tool 614 on another device.
The term “device,” “computer,” or “computing device” as used herein can mean any type of device that has some amount of processing capability and/or storage capability. Processing capability can be provided by one or more processors that can execute data in the form of computer-readable instructions to provide a functionality. Data, such as computer-readable instructions and/or user-related data, can be stored on/in storage, such as storage that can be internal or external to the device and is configured to store the data. The storage can include any one or more of volatile or non-volatile memory, hard drives, flash storage devices, and/or optical storage devices (e.g., CDs, DVDs etc.), remote storage (e.g., cloud-based storage), among others. As used herein, the term “computer-readable media” can include signals. In contrast, the term “computer-readable storage media” excludes signals. Computer-readable storage media includes “computer-readable storage devices.” Examples of computer-readable storage devices include volatile storage media, such as RAM, and non-volatile storage media, such as hard drives, optical discs, and flash memory, among others.
As mentioned above, device configuration 616(2) can be thought of as a system on a chip (SOC) type design. In such a case, functionality provided by the device can be integrated on a single SOC or multiple coupled SOCs. One or more processors 610 can be configured to coordinate with shared resources 624, such as storage 612, etc., and/or one or more dedicated resources 626, such as hardware blocks configured to perform certain specific functionality. Thus, the term “processor” as used herein can also refer to central processing units (CPUs), graphical processing units (GPUs), field programmable gate arrays (FPGAs), controllers, microcontrollers, processor cores, hardware processing units, or other types of processing devices.
Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed-logic circuitry), or a combination of these implementations. The term “component” as used herein generally represents software, firmware, hardware, whole devices or networks, or a combination thereof. In the case of a software implementation, for instance, these may represent program code that performs specified tasks when executed on a processor (e.g., CPU, CPUs, GPU or GPUs). The program code can be stored in one or more computer-readable memory devices, such as computer-readable storage media. The features and techniques of the components are platform-independent, meaning that they may be implemented on a variety of commercial computing platforms having a variety of processing configurations.
Various examples are described above. Additional examples are described below. One example includes a device-implemented method comprising providing a graph library to a large language model (LLM), providing a natural language prompt for a graph manipulation task to the LLM, and receiving code from the LLM that addresses the graph manipulation task.
Another example can include any of the above and/or below examples where providing a natural language prompt comprises providing a task-specific prompt to the LLM.
Another example can include any of the above and/or below examples where the graph manipulation task relates to an original graph, and wherein providing a natural language prompt for a graph manipulation task to the LLM does not include providing the original graph to the LLM.
Another example can include any of the above and/or below examples where the method further comprises causing a user-interface to be generated that presents the code for a user that generated the graph manipulation task.
Another example can include any of the above and/or below examples where the user-interface further allows the user to select to execute the code on the original graph or discard the code.
Another example can include any of the above and/or below examples where the method further comprises executing the code on the original graph without exposing the original graph to the LLM.
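The privacy property of this example can be sketched as follows. The canned code string below stands in for LLM output, and the adjacency-mapping graph format is an illustrative assumption; the key point is that the original graph exists only in the local environment and is never transmitted to the LLM.

```python
# Minimal illustration (hypothetical): the LLM returns code as text;
# after operator inspection, the code runs locally against the
# original graph. The graph data never leaves the local environment.

generated_code = """
def solve(graph):
    # Count nodes reachable from 'A' via breadth-first search.
    seen, frontier = {'A'}, ['A']
    while frontier:
        node = frontier.pop()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return len(seen)
"""

# The original graph stays local as a plain adjacency mapping.
original_graph = {"A": ["B"], "B": ["C"], "C": [], "D": []}

namespace = {}
exec(generated_code, namespace)  # define solve() from the LLM's text
result = namespace["solve"](original_graph)  # executes locally only
```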
Another example can include any of the above and/or below examples where the method further comprises causing the user-interface to present both an updated graph that reflects the code execution and the original graph.
Another example can include any of the above and/or below examples where the method further comprises causing the user-interface to allow the user to select the updated graph.
Another example can include any of the above and/or below examples where, in a condition where the user selects the updated graph, the method further comprises updating a network state to reflect the updated graph.
Another example can include a system comprising hardware, and a graph analysis and manipulation tool configured to receive a natural language prompt relating to a network management activity, to access a graph resource, and to generate code that addresses the network management activity as a graph manipulation task.
Another example can include any of the above and/or below examples where the graph resource comprises plugins, libraries, and/or instructions relating to graphs.
Another example can include any of the above and/or below examples where the system further comprises an application wrapper configured to encapsulate domain-specific knowledge comprising definitions of nodes and edges of a network and configured to transform application data of the network into a graph.
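A possible shape for such an application wrapper is sketched below. The record format, field names (`src_router`, `capacity_gbps`), and class names are illustrative assumptions; the point is that the wrapper encapsulates the domain-specific definitions of nodes and edges and transforms raw application data into a graph.

```python
# Hedged sketch of an application wrapper: it encapsulates the
# domain-specific definition of nodes (routers) and edges (links)
# and turns raw application records into a graph structure.

from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)  # (src, dst, capacity)

class NetworkAppWrapper:
    """Transforms application link records into a graph."""

    def to_graph(self, link_records):
        graph = Graph()
        for record in link_records:
            src, dst = record["src_router"], record["dst_router"]
            graph.nodes.update({src, dst})
            graph.edges.append((src, dst, record.get("capacity_gbps", 0)))
        return graph

records = [
    {"src_router": "r1", "dst_router": "r2", "capacity_gbps": 100},
    {"src_router": "r2", "dst_router": "r3", "capacity_gbps": 40},
]
graph = NetworkAppWrapper().to_graph(records)
```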
Another example can include any of the above and/or below examples where the system further comprises a code-generation prompt generator configured to utilize a task-specific prompt to cause a large language model (LLM) to produce code for the task-specific prompt as a graph analysis and manipulation task.
Another example can include a system comprising a storage configured to store computer executable instructions for executing an application wrapper configured to encapsulate domain-specific knowledge comprising definitions of nodes and edges and configured to transform application data into a graph, an application prompt generator configured to utilize the encapsulated domain-specific knowledge and the graph to create a task-specific prompt for a large language model (LLM), and a code-generation prompt generator configured to utilize the task-specific prompt to cause the LLM to produce code for the task-specific prompt as a graph analysis and manipulation task.
Another example can include any of the above and/or below examples where the system is configured to execute the code on the graph in a sandbox.
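One way to approximate such a sandbox in-process is shown below. This is an illustrative simplification (a production system would more likely use an isolated process or container); it restricts the builtins available to the generated code so that it cannot, for example, import modules or open files.

```python
# Illustrative in-process "sandbox" (an assumption, not the described
# implementation): execute generated code with a restricted set of
# builtins so it cannot import modules or touch the filesystem.

SAFE_BUILTINS = {"len": len, "range": range, "min": min, "max": max, "sum": sum}

def run_in_sandbox(code: str, graph):
    namespace = {"__builtins__": SAFE_BUILTINS}
    exec(code, namespace)          # define solve() with restricted builtins
    return namespace["solve"](graph)

code = "def solve(graph):\n    return len(graph)"
node_count = run_in_sandbox(code, {"A": [], "B": []})
```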
Another example can include any of the above and/or below examples where the system is configured to present the code for approval before running the code.
Another example can include any of the above and/or below examples where the system is configured to update the graph with the code and send the updated graph back to the application wrapper to modify a network state and record input and output for future prompt enhancements.
Another example can include any of the above and/or below examples where the system includes the LLM or wherein the system accesses the LLM.
Another example can include any of the above and/or below examples where the graph represents a network state and the updated graph represents an updated network state.
Another example can include any of the above and/or below examples where the code-generation prompt generator is configured to utilize the task-specific prompt to cause the LLM to produce the code without exposing the graph or the network state to the LLM.
The description includes graph analysis and manipulation concepts. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims, and other features and acts that would be recognized by one skilled in the art are intended to be within the scope of the claims.
This utility patent application is a non-provisional of, and claims priority to, U.S. Provisional Application 63/528,756 filed on Jul. 25, 2023, which is hereby incorporated by reference in its entirety.
Number | Date | Country
---|---|---
63528756 | Jul 2023 | US