Operational telemetry data can be collected by monitoring elements of communication systems, computing systems, software applications, operating systems, user devices, or other devices and systems. The operational telemetry data can indicate a state of operation for various nodes of a communication network, and is typically accumulated into logs or databases over periods of time. The various networks and systems for which telemetry data is observed can include many physical, logical, and virtualized communication elements which might experience problems during operation. These problems can arise from increased traffic, from overloaded communication pathways and associated data or communication processing elements, or from other sources. However, detection of problems in large communication systems can be difficult. These problems can be especially difficult to detect when the communication systems include geographically distributed computing and communication systems, such as those employed in large multi-user network conferencing platforms.
Systems, methods, and software for operational anomaly detection in communication systems are provided herein. An exemplary method includes obtaining a measured sequence of state information associated with the communication system during a first timeframe, processing the measured sequence of state information to determine a predicted sequence of state information for the communication system during a second timeframe, and monitoring current state information for the communication system over at least a portion of the second timeframe. The method also includes determining operational anomalies associated with the communication system based at least on a comparison between the current state information and the predicted sequence of state information.
This Overview is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. It should be understood that this Overview is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Many aspects of the disclosure can be better understood with reference to the following drawings. While several implementations are described in connection with these drawings, the disclosure is not limited to the implementations disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.
Operational telemetry data can be collected by monitoring elements of communication systems, computing systems, software applications, operating systems, user devices, or other devices and systems. The operational telemetry data can indicate a state of operation for various nodes of a communication network, and is typically accumulated into logs or databases over periods of time. Detection of problems and anomalies with communication systems can be difficult when the communication systems include geographically distributed computing and communication systems, such as those employed in large multi-user network conferencing platforms. For example, communications related to Skype for Business and other network telephony and conferencing platforms can transit many communication elements which transport user traffic over various elements of the Internet, packet networks, private networks, or other communication networks and systems.
The various examples herein discuss enhanced anomaly detection in communication systems, or other computing systems. These anomalies can indicate deviations from expected behavior of a particular communication system or computing system, which can vary in severity. For example, a deviation from expected behavior can be due to unpredicted traffic or overloading of an affected element, or can instead occur due to lower than expected loading or traffic patterns. Other deviations can exist, and can be detected using the predictive anomaly detection discussed herein. Advantageously, the predictive anomaly detection processes and platforms discussed herein provide the technical effects of faster determination of failures and issues, increased uptime for computer networks and communication systems, automated alerting to operators, and more reliable communication systems, among other technical effects.
In many communications systems, prevailing operating behavior is considered normal. Anomalies indicate system behavior which is undesirable or unpredicted, and can indicate failures, errors, overloading, malicious attacks, or other events. Operators of the communication systems typically have access to a range of real-time measurements including performance counters, system events, event logs, streaming operational status, or other telemetry data. For example, for a communication service system, telemetry information can be collected that indicates a number of concurrent user connections, processor utilization, memory utilization, average network latency, and the like for particular nodes or elements of the communication system as well as for the communication system as a whole. The telemetry information can be measured, observed, collected, received, or otherwise accumulated into an anomaly detection platform. Taken together, the telemetry information forms a vector of measurements that describes the current state of the system. Anomaly detection maps the telemetry information to an anomaly reading. The reading can be categorical, i.e., “normal” versus “anomaly”, or quantitative, such as a number describing the degree or severity of the anomaly.
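For illustration, the following sketch (in Python, with hypothetical field names and a placeholder scoring function that are not drawn from this description) shows one way a telemetry measurement vector for a node could be mapped to both a quantitative anomaly score and a categorical reading.

```python
# Illustrative sketch only: field names, the scoring function, and the
# threshold are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class StateVector:
    concurrent_connections: int
    processor_utilization: float     # fraction of capacity, 0.0 to 1.0
    memory_utilization: float        # fraction of capacity, 0.0 to 1.0
    avg_network_latency_ms: float

def anomaly_reading(state: StateVector, score_fn, threshold: float):
    """Map a telemetry vector to a quantitative score and a categorical label."""
    score = score_fn(state)          # degree/severity of anomaly
    return score, ("anomaly" if score > threshold else "normal")

# Example usage with a trivial stand-in scoring function.
sample = StateVector(1200, 0.92, 0.75, 48.0)
score, label = anomaly_reading(sample, lambda s: s.processor_utilization, 0.9)
```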
Anomaly detection can take an indicated telemetry measurement vector and compare it against a collection of telemetry measurement vectors from a history of the system. Mathematically, this methodology can include assessing a density of a probability distribution of the points in n-dimensional space of real numbers, where each point corresponds to a vector of telemetry measurements. An anomaly can be declared when the density estimate is low, or low enough according to some predetermined threshold. Some example anomaly detection methods include: one-class classification (such as a one-class Support Vector Machine), reconstruction error of neural net auto-encoders, clustering approaches such as density-based spatial clustering of applications with noise (DBSCAN), and others. These classical methods can also be applied when the vector being evaluated is expanded to include the history of measurements over time, not just at a single time instance. There are a variety of ways to do so. One way is to merely concatenate the measurement vectors across a number of equally spaced time instances spanning some time range. Another way is to concatenate averages over a number of intervals. For example, the intervals may comprise the last hour, the last 10 hours, and the last 100 hours, among others.
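As a non-limiting illustration, the following sketch (assuming Python with the NumPy and scikit-learn libraries; the window sizes and model parameters are illustrative) shows one way measurement vectors could be expanded with concatenated history and interval averages and then evaluated with a one-class classifier.

```python
# Illustrative sketch: expand each telemetry vector with recent history
# (concatenation plus interval averages) and fit a one-class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

def expand_with_history(X, k=5, avg_windows=(10, 100)):
    """X: (T, d) array of telemetry vectors ordered in time."""
    rows = []
    start = max(k, max(avg_windows))
    for t in range(start, X.shape[0]):
        recent = X[t - k:t].ravel()                             # concatenated history
        means = [X[t - w:t].mean(axis=0) for w in avg_windows]  # interval averages
        rows.append(np.concatenate([X[t], recent, *means]))
    return np.asarray(rows)

history = np.random.rand(1000, 4)                    # stand-in for logged telemetry
model = OneClassSVM(nu=0.01).fit(expand_with_history(history))
flags = model.predict(expand_with_history(history))  # -1 marks suspected anomalies
```

In this sketch, a prediction of -1 from the one-class classifier corresponds to a low density estimate for the expanded vector, i.e., a candidate anomaly.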
In the examples discussed herein, a prediction of a ‘tail’ of a telemetry sequence is determined based on a ‘head’ of the telemetry sequence. Deviation and degree of variation between the prediction and an observed tail can indicate anomalous behavior, among other indications. Anomaly determination is based upon predictions of a future part of a sequence of measurements based on knowledge of a past part of the sequence. If the prediction quality is good, then the anomaly detection system concludes the system is behaving normally or nominally. If the prediction is significantly off from measured telemetry, the anomaly detection system can declare anomalous behavior, such as by alerting an operator of the system. The resulting anomaly detection methods are typically interpretable by operators of the system, in part because the predictions are based on past system behavior. The predictions may also serve other needs in addition to anomaly detection, such as capacity forecasting or setting operator expectations ahead of time, even when the predicted events are not aberrations or anomalies.
As a first example of telemetry event correlation,
In operation, telemetry source 130 can provide telemetry information, such as sequences of state information related to communication elements, to anomaly processing system 110. This telemetry information can include telemetry data, event data, status data, state information, or other information that can be monitored or measured by telemetry source 130 for associated communication elements which can include software, hardware, or virtualized elements. For example, telemetry source 130 can include application monitoring services which provide a record or log of events associated with usage of associated applications or operating system elements. In other examples, telemetry source 130 can include hardware monitoring elements which provide sensor data, environmental data, user interface event data, or other information related to usage of hardware elements. These hardware elements can include computing systems, such as personal computers, server equipment, distributed computing systems, or can include discrete sensing systems, industrial or commercial equipment monitoring systems, sensing equipment, or other hardware elements. In further examples, telemetry source 130 can monitor elements of a virtualized computing environment, which can include hypervisor elements, operating system elements, virtualized hardware elements, software defined network elements, among other virtualized elements.
The telemetry information, once obtained by anomaly processing system 110, can be analyzed to determine sequences of state information over various timeframes for associated communication elements. Anomaly processing system 110, along with sequence prediction platform 111 and anomaly detection platform 112, can be employed to process the sequences of state information according to the desired analysis operations to detect and report anomalies in the operation of the communication elements. Operator interface system 120 can provide an interface for a user to control the operations of anomaly processing system 110 as well as receive information related to anomalies or predicted behavior of the communication elements.
To further explore example operation of the elements of
This state information can be obtained from telemetry source 130 over link 150, and can comprise telemetry data which is processed to determine the state information. Sequences of the state information can be determined by monitoring or observing operation of communication elements 131 over various timeframes. In a specific example, a first sequence of measured state information is transferred by telemetry source 130 as sequence 140 that covers time period ΔT1. Anomaly processing system 110 can receive sequence 140 over link 150.
Anomaly processing system 110 processes (212) the measured sequence of state information to determine a predicted sequence of state information for the communication system during a second timeframe. The predicted sequence of state information indicates a predicted behavior for the communication system during the second timeframe. In
Sequence prediction platform 111 can process measured sequence 140 of state information using one or more machine learning algorithms. For example, sequence prediction platform 111 can process measured sequence 140 of state information using a recurrent neural network (RNN) process that determines the predicted sequence of state information based at least on measured sequence 140 of state information. Initial training of the RNN process to determine the predicted sequence of state information can include using past state information observed for the communication system. Training the RNN process using the past state information can be provided by at least subdividing the past state information into a historical portion and a future portion, selecting the historical portion as an input to the RNN process, and iteratively evolving the historical portion using the RNN process until the future portion is predicted by the RNN process to within a predetermined margin of error. Other training methods and processes can be employed, and these can include both automated and supervised training processes.
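For illustration, the following sketch (Python with NumPy; the names and window lengths are hypothetical) shows one way past state information could be subdivided into historical and future portions to form training examples for such a predictor.

```python
# Illustrative sketch: build (historical, future) training pairs from logged
# past state information for a sequence predictor such as an RNN.
import numpy as np

def make_training_pairs(past_states, n, m, stride=1):
    """past_states: (T, d) array; returns heads of length n and tails of length m."""
    heads, tails = [], []
    for start in range(0, past_states.shape[0] - (n + m) + 1, stride):
        window = past_states[start:start + n + m]
        heads.append(window[:n])      # historical portion fed to the predictor
        tails.append(window[n:])      # future portion the predictor must reproduce
    return np.asarray(heads), np.asarray(tails)
```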
Anomaly processing system 110 monitors (213) current state information for the communication system over at least a portion of the second timeframe, where the current state information indicates an observed behavior of the communication system during the second timeframe. In some examples, anomaly detection platform 112 observes this current state information for anomaly detection. In
Anomaly processing system 110 determines (214) operational anomalies associated with the communication system based at least on a comparison between the current state information and the predicted sequence of state information. When differences are detected between the current state information and the predicted sequence of state information, an anomaly might be occurring, and one or more alerts can be issued to an operator via operator interface system 120 and link 152. The one or more alerts can provide information related to the operational anomalies.
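As a non-limiting illustration, the following sketch (Python with NumPy; the severity bands and the alert sink are hypothetical placeholders) shows one way the comparison and alerting step could be realized.

```python
# Illustrative sketch of the comparison-and-alerting step.
import numpy as np

def check_and_alert(current, predicted, warn=1.0, critical=3.0, alert=print):
    """current, predicted: (m, d) arrays of state vectors for the second timeframe."""
    deviation = np.linalg.norm(current - predicted)   # distance of deviation
    if deviation >= critical:
        alert(f"CRITICAL anomaly: deviation {deviation:.2f} from predicted behavior")
    elif deviation >= warn:
        alert(f"Warning: deviation {deviation:.2f} from predicted behavior")
    return deviation
```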
In
Referring back to the elements of
Sequence prediction platform 111 and anomaly detection platform 112 each comprise various telemetry data processing modules which provide machine learning-based data processing, analysis, and prediction. In some examples, sequence prediction platform 111 and anomaly detection platform 112 are included in anomaly processing system 110, although elements of sequence prediction platform 111 and anomaly detection platform 112 can be distributed across several computing systems or devices, which can include virtualized and physical devices or systems. Sequence prediction platform 111 and anomaly detection platform 112 each can include algorithm repository elements which maintain a plurality of data processing algorithms. Sequence prediction platform 111 and anomaly detection platform 112 can also include various models for evaluation of the algorithms to determine output performance across past datasets, supervised training datasets, and other test/simulation datasets. A further discussion of machine learning examples is provided below.
Operator interface system 120 comprises network interface circuitry, processing circuitry, and user interface elements. Operator interface system 120 can also include user interface systems, network interface card equipment, memory devices, non-transitory computer-readable storage mediums, software, processing circuitry, or some other communication components. Operator interface system 120 can be a computer, wireless communication device, customer equipment, access terminal, smartphone, tablet computer, mobile Internet appliance, wireless network interface device, media player, game console, or some other user computing apparatus, including combinations thereof.
Telemetry source 130 comprises one or more monitoring elements and computer-readable storage elements which observe, monitor, and store telemetry data for various operational elements, such as communication elements 131. The telemetry elements can include monitoring portions composed of hardware, software, or virtualized elements that monitor operational events and related data. Telemetry source 130 can include application monitoring services which provide a record or log of events associated with usage of associated applications or operating system elements. In other examples, telemetry source 130 can include hardware monitoring elements which provide sensor data, environmental data, user interface event data, or other information related to usage of hardware elements. In further examples, telemetry source 130 can be included within each of the communication elements 131 employed in a communication system or communication network that handles packet-based or network-provided telephony, video conferencing, audio conferencing, or other communication services.
Communication elements 131 can each include network telephony routing and control elements, and can perform network telephony routing and termination for endpoint devices. Communication elements 131 can comprise session border controllers (SBCs) in some examples which can handle one or more session initiation protocol (SIP) trunks between associated networks. Communication elements 131 can include endpoints, end user devices, or other elements in a network telephony environment. Communication elements 131 each can include computer processing systems and equipment which can include communication or network interfaces, as well as computer systems, microprocessors, circuitry, cloud-based systems, or some other processing devices or software systems, and can be distributed among multiple processing devices. Examples of communication elements 131 can include software such as an operating system, routing software, logs, databases, utilities, drivers, networking software, and other software stored on a computer-readable medium.
Communication links 150-154 each use metal, glass, optical, air, space, or some other material as the transport media. Communication links 150-154 each can use various communication protocols, such as Internet Protocol (IP), transmission control protocol (TCP), Ethernet, Hypertext Transfer Protocol (HTTP), synchronous optical networking (SONET), Time Division Multiplex (TDM), asynchronous transfer mode (ATM), hybrid fiber-coax (HFC), circuit-switched, communication signaling, wireless communications, or some other communication format, including combinations, improvements, or variations thereof. Communication links 150-154 each can be a direct link or may include intermediate networks, systems, or devices, and can include a logical network link transported over multiple physical links. In some examples, links 150-154 comprise wireless links that use the air or space as the transport media.
Turning now to further examples of anomaly detection and sequence prediction,
In
An anomaly score can be computed, i.e., score=Distance(Sf, Sp)=Distance(Sf, Predict(Sh)), and the use of the score can include thresholds. For example, a threshold can be set as a value corresponding to the 99th percentile of scores in a sufficiently large and representative collection of examples. One example anomaly detection might look at the whole sequence of measurements S=[x1, . . . , xn, . . . , xn+m], in order to determine how rare a given instance is relative to many others already observed, using a variety of mathematical methods.
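For illustration, the following sketch (Python with NumPy; the predictor is passed in as a placeholder callable) shows the score computation together with one way a percentile-based threshold could be calibrated from historical examples.

```python
# Illustrative sketch: score = Distance(Sf, Predict(Sh)), with a threshold
# calibrated at a high percentile of scores over historical examples.
import numpy as np

def anomaly_score(head, tail, predict):
    return np.linalg.norm(tail - predict(head))        # Distance(Sf, Predict(Sh))

def calibrate_threshold(historical_heads, historical_tails, predict, pct=99):
    """Set the threshold at, e.g., the 99th percentile of scores on past examples."""
    scores = [anomaly_score(h, t, predict)
              for h, t in zip(historical_heads, historical_tails)]
    return np.percentile(scores, pct)
```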
More generally, an RNN consists of a number of chained cells. A single cell is shown on
The cells are chained as shown in example 330 of
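As a non-limiting illustration, the following sketch (Python with NumPy; the cell equations and dimensions are illustrative rather than a specific implementation) shows a simple chained cell that consumes the measured head and then feeds its own predictions back in to produce a predicted tail.

```python
# Illustrative sketch of a chained recurrent cell and tail rollout.
import numpy as np

class RNNCell:
    def __init__(self, d, h, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(scale=0.1, size=(h, d))   # input-to-hidden weights
        self.Wh = rng.normal(scale=0.1, size=(h, h))   # hidden-to-hidden weights
        self.Wy = rng.normal(scale=0.1, size=(d, h))   # hidden-to-output weights
        self.b = np.zeros(h)

    def step(self, x, h_prev):
        h = np.tanh(self.Wx @ x + self.Wh @ h_prev + self.b)   # updated hidden state
        return self.Wy @ h, h                                   # predicted next vector, new state

def predict_tail(cell, head, m):
    """Run the chained cells over the measured head, then roll forward m steps."""
    h = np.zeros(cell.Wh.shape[0])
    for x in head:                         # consume measured head x1..xn
        y, h = cell.step(x, h)
    tail = []
    for _ in range(m):                     # feed predictions back in for the tail
        tail.append(y)
        y, h = cell.step(y, h)
    return np.asarray(tail)
```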
To train the RNN process into a reliable predictor, various techniques can be employed. A large number of n+m long sequences can be collected, such as from a history of the system measurements. These can then be employed as training examples. The RNN is characterized by a set of model parameters, also referred to as weights. A search is performed in the space of weights, using numerical optimization techniques, in order to find the set of weights that minimizes the training error, i.e., the disparity between the predicted tail of the sequence Sp and the actual tail Sf, for all the examples in the training set. In other words, supervised learning methodologies are applied to the structures shown in
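For illustration, the following sketch (assuming the PyTorch library; the architecture, layer sizes, and hyperparameters are illustrative only) shows one way such supervised training could be set up, minimizing the disparity between the predicted tail Sp and the actual tail Sf over a set of training examples.

```python
# Illustrative training sketch using PyTorch; not the platform's actual code.
import torch
import torch.nn as nn

class TailPredictor(nn.Module):
    def __init__(self, d, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(d, hidden, batch_first=True)
        self.out = nn.Linear(hidden, d)

    def forward(self, head, m):
        _, h = self.rnn(head)                  # encode the measured head
        y = self.out(h[-1])                    # first predicted step
        preds = []
        for _ in range(m):                     # iteratively evolve the prediction
            preds.append(y)
            _, h = self.rnn(y.unsqueeze(1), h)
            y = self.out(h[-1])
        return torch.stack(preds, dim=1)       # (batch, m, d) predicted tail Sp

def train(model, heads, tails, epochs=100, lr=1e-3):
    """heads: (N, n, d) tensor; tails: (N, m, d) tensor of training examples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                     # disparity between Sp and actual Sf
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(heads, tails.shape[1]), tails)
        loss.backward()
        opt.step()
    return model
```

Gated cells such as LSTM or GRU cells could be substituted for the plain RNN layer within the same structure.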
To illustrate specific examples of RNN training,
Turning now to
Computing system 601 may be implemented as a single apparatus, system, or device or may be implemented in a distributed manner as multiple apparatuses, systems, or devices. Computing system 601 includes, but is not limited to, processing system 602, storage system 603, software 605, communication interface system 607, and user interface system 608. Processing system 602 is operatively coupled with storage system 603, communication interface system 607, and user interface system 608.
Processing system 602 loads and executes software 605 from storage system 603. Software 605 includes anomaly processing environment 606, which is representative of the processes discussed with respect to the preceding Figures. When executed by processing system 602 to enhance anomaly detection and telemetry prediction processing, software 605 directs processing system 602 to operate as described herein for at least the various processes, operational scenarios, and environments discussed in the foregoing implementations. Computing system 601 may optionally include additional devices, features, or functionality not discussed for purposes of brevity.
Referring still to
Storage system 603 may comprise any computer readable storage media readable by processing system 602 and capable of storing software 605. Storage system 603 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of storage media include random access memory, read only memory, magnetic disks, resistive memory, optical disks, flash memory, virtual memory and non-virtual memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other suitable storage media. In no case is the computer readable storage media a propagated signal.
In addition to computer readable storage media, in some implementations storage system 603 may also include computer readable communication media over which at least some of software 605 may be communicated internally or externally. Storage system 603 may be implemented as a single storage device, but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system 603 may comprise additional elements, such as a controller, capable of communicating with processing system 602 or possibly other systems.
Software 605 may be implemented in program instructions and among other functions may, when executed by processing system 602, direct processing system 602 to operate as described with respect to the various operational scenarios, sequences, and processes illustrated herein. For example, software 605 may include program instructions for implementing the anomaly processing environments and platforms discussed herein.
In particular, the program instructions may include various components or modules that cooperate or otherwise interact to carry out the various processes and operational scenarios described herein. The various components or modules may be embodied in compiled or interpreted instructions, or in some other variation or combination of instructions. The various components or modules may be executed in a synchronous or asynchronous manner, serially or in parallel, in a single threaded environment or multi-threaded, or in accordance with any other suitable execution paradigm, variation, or combination thereof. Software 605 may include additional processes, programs, or components, such as operating system software or other application software, in addition to or that include anomaly processing environment 606. Software 605 may also comprise firmware or some other form of machine-readable processing instructions executable by processing system 602.
In general, software 605 may, when loaded into processing system 602 and executed, transform a suitable apparatus, system, or device (of which computing system 601 is representative) overall from a general-purpose computing system into a special-purpose computing system customized to facilitate anomaly detection and operational state prediction in communication systems and various computing systems. Indeed, encoding software 605 on storage system 603 may transform the physical structure of storage system 603. The specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 603 and whether the computer-storage media are characterized as primary or secondary storage, as well as other factors.
For example, if the computer readable storage media are implemented as semiconductor-based memory, software 605 may transform the physical state of the semiconductor memory when the program instructions are encoded therein, such as by transforming the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. A similar transformation may occur with respect to magnetic or optical media. Other transformations of physical media are possible without departing from the scope of the present description, with the foregoing examples provided only to facilitate the present discussion.
Anomaly processing environment 606 includes one or more software elements, such as OS 621 and applications 622. These elements can describe various portions of computing system 601 with which users, operators, telemetry elements, machine learning environments, or other elements interact. For example, OS 621 can provide a software platform on which applications 622 are executed and provide for detecting performance anomalies in a communication system, obtaining a measured sequence of state information associated with the communication system during a first timeframe, processing the measured sequence of state information to determine a predicted sequence of state information for the communication system during a second timeframe, monitoring current state information for the communication system over at least a portion of the second timeframe, and determining operational anomalies associated with the communication system based at least on a comparison between the current state information and the predicted sequence of state information.
In one example, telemetry handling service 623 can obtain measured sequences of state information associated with a communications system, receive datasets from telemetry elements or other data sources, store various status, telemetry, or state data for processing in storage system 603, and transfer anomaly information to users or operators. In
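As a non-limiting illustration, the following sketch (Python; the class name, methods, and in-memory storage are hypothetical placeholders rather than the actual service) shows one way such a telemetry handling interface could be organized.

```python
# Illustrative sketch of a telemetry handling interface with in-memory storage.
from typing import Dict, List, Sequence

class TelemetryHandlingService:
    def __init__(self):
        self._store: Dict[str, List[Sequence[float]]] = {}   # stand-in for a storage system

    def ingest(self, element_id: str, state_vector: Sequence[float]) -> None:
        """Obtain and store a measured state vector for a monitored element."""
        self._store.setdefault(element_id, []).append(state_vector)

    def measured_sequence(self, element_id: str, length: int) -> List[Sequence[float]]:
        """Return the most recent measured sequence for prediction and comparison."""
        return self._store.get(element_id, [])[-length:]

    def report_anomaly(self, element_id: str, severity: float) -> None:
        """Transfer anomaly information to an operator (placeholder: log to stdout)."""
        print(f"anomaly on {element_id}: severity {severity:.2f}")
```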
Communication interface system 607 may include communication connections and devices that allow for communication with other computing systems (not shown) over communication networks (not shown). Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry. The connections and devices may communicate over communication media, such as metal, glass, air, or any other suitable communication media, to exchange communications with other computing systems or networks of systems. Physical or logical elements of communication interface system 607 can receive data from telemetry sources, transfer telemetry data and control information between one or more machine learning algorithms, and interface with a user to receive data selections and provide anomaly alerts and information related to anomalies, among other features.
User interface system 608 is optional and may include a keyboard, a mouse, a voice input device, a touch input device, or other devices for receiving input from a user. Output devices such as a display, speakers, web interfaces, terminal interfaces, and other types of output devices may also be included in user interface system 608. User interface system 608 can provide output and receive input over a network interface, such as communication interface system 607. In network examples, user interface system 608 might packetize display or graphics data for remote display by a display system or computing system coupled over one or more network interfaces. Physical or logical elements of user interface system 608 can receive data or data selection information from operators, and provide anomaly alerts or information related to predicted system behavior to operators. User interface system 608 may also include associated user interface software executable by processing system 602 in support of the various user input and output devices discussed above. Separately or in conjunction with each other and other hardware and software elements, the user interface software and user interface devices may support a graphical user interface, a natural user interface, or any other type of user interface. In some examples, portions of API 626 are included in elements of user interface system 608.
Communication between computing system 601 and other computing systems (not shown) may occur over a communication network or networks and in accordance with various communication protocols, combinations of protocols, or variations thereof. Examples include intranets, internets, the Internet, local area networks, wide area networks, wireless networks, wired networks, virtual networks, software defined networks, data center buses, computing backplanes, or any other type of network, combination of networks, or variation thereof. The aforementioned communication networks and protocols are well known and need not be discussed at length here. However, some communication protocols that may be used include, but are not limited to, the Internet protocol (IP, IPv4, IPv6, etc.), the transmission control protocol (TCP), and the user datagram protocol (UDP), as well as any other suitable communication protocol, variation, or combination thereof.
Certain inventive aspects may be appreciated from the foregoing disclosure, of which the following are various examples.
A method of detecting performance anomalies in a communication system, the method comprising obtaining a measured sequence of state information associated with the communications system during a first timeframe, processing the measured sequence of state information to determine a predicted sequence of state information for the communication system during a second timeframe, monitoring current state information for the communication system over at least a portion of the second timeframe, and determining operational anomalies associated with the communication system based at least on a comparison between the current state information and the predicted sequence of state information.
The method of Example 1, further comprising determining when the comparison between the current state information and the predicted sequence of state information indicates deviations between the current state information and the predicted sequence of state information, and determining the operational anomalies based on a distance of deviation between the current state information and the predicted sequence.
The method of Examples 1-2, where the distance of deviation corresponds to a severity level in the operational anomalies.
The method of Examples 1-3, further comprising indicating one or more alerts to an operator system that provide information related to the operational anomalies.
The method of Examples 1-4, further comprising processing the measured sequence of state information using a recurrent neural network (RNN) process that determines the predicted sequence of state information based at least on the measured sequence of state information.
The method of Examples 1-5, where the RNN process is trained to determine the predicted sequence of state information using past state information for the communication system.
The method of Examples 1-6, further comprising training the RNN process using past state information observed for the communication system by at least subdividing the past state information into a historical portion and a future portion, selecting the historical portion as an input to the RNN process, and iteratively evolving the historical portion using the RNN process until the future portion is predicted by the RNN process to within a predetermined margin of error.
The method of Examples 1-7, where the predicted sequence of state information indicates a predicted behavior for the communication system during the second timeframe, and where the current state information indicates an observed behavior of the communication system during the second timeframe.
The method of Examples 1-8, where the state information associated with the communications system comprises operational telemetry information retrieved from one or more communication nodes of the communication system, the operational telemetry information comprising one or more indications of concurrent user connections, node processor utilization, node memory utilization, and network latency.
An apparatus comprising one or more computer readable storage media, and program instructions stored on the one or more computer readable storage media. The program instructions, when executed by a processing system, direct the processing system to at least obtain a measured sequence of state information associated with the communications system during a first timeframe, process the measured sequence of state information to determine a predicted sequence of state information for the communication system during a second timeframe, monitor current state information for the communication system over at least a portion of the second timeframe, and determine operational anomalies associated with the communication system based at least on a comparison between the current state information and the predicted sequence of state information.
The apparatus of Example 10, comprising further program instructions that, when executed by the processing system, direct the processing system to at least determine when the comparison between the current state information and the predicted sequence of state information indicates deviations between the current state information and the predicted sequence of state information, and determine the operational anomalies based on a distance of deviation between the current state information and the predicted sequence.
The apparatus of Examples 10-11, where the distance of deviation corresponds to a severity level in the operational anomalies.
The apparatus of Examples 10-12, comprising further program instructions that, when executed by the processing system, direct the processing system to at least indicate one or more alerts to an operator system that provide information related to the operational anomalies.
The apparatus of Examples 10-13, comprising further program instructions that, when executed by the processing system, direct the processing system to at least process the measured sequence of state information using a recurrent neural network (RNN) process that determines the predicted sequence of state information based at least on the measured sequence of state information.
The apparatus of Examples 10-14, where the RNN process is trained to determine the predicted sequence of state information using past state information for the communication system.
The apparatus of Examples 10-15, comprising further program instructions that, when executed by the processing system, direct the processing system to at least train the RNN process using past state information observed for the communication system by at least subdividing the past state information into a historical portion and a future portion, selecting the historical portion as an input to the RNN process, and iteratively evolving the historical portion using the RNN process until the future portion is predicted by the RNN process to within a predetermined margin of error.
The apparatus of Examples 10-16, where the predicted sequence of state information indicates a predicted behavior for the communication system during the second timeframe, and where the current state information indicates an observed behavior of the communication system during the second timeframe.
The apparatus of Examples 10-17, where the state information associated with the communications system comprises operational telemetry information retrieved from one or more communication nodes of the communication system, the operational telemetry information comprising one or more indications of concurrent user connections, node processor utilization, node memory utilization, and network latency.
A method of processing telemetry data, the method comprising obtaining an initial sequence of telemetry data measured during a first timeframe, processing the initial sequence of telemetry data to determine a predicted sequence of telemetry data during a second timeframe, observing current telemetry data over at least a portion of the second timeframe, determining deviations between the predicted sequence of telemetry data and the current telemetry data, and reporting the deviations as one or more alerts indicating operational anomalies for the current telemetry data.
The method of Example 19, further comprising processing the initial sequence of telemetry data using a recurrent neural network (RNN) process that determines the predicted sequence of telemetry data based at least on the initial sequence of telemetry data, where the RNN process is trained using past telemetry data by at least subdividing the past telemetry data into a historical portion and a future portion, selecting the historical portion as an input to the RNN process, and iteratively evolving the historical portion using the RNN process until the future portion is predicted by the RNN process to within a predetermined margin of error.
The functional block diagrams, operational scenarios and sequences, and flow diagrams provided in the Figures are representative of exemplary systems, environments, and methodologies for performing novel aspects of the disclosure. While, for purposes of simplicity of explanation, methods included herein may be in the form of a functional diagram, operational scenario or sequence, or flow diagram, and may be described as a series of acts, it is to be understood and appreciated that the methods are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a method could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.
The descriptions and figures included herein depict specific implementations to teach those skilled in the art how to make and use the best option. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate variations from these implementations that fall within the scope of the invention. Those skilled in the art will also appreciate that the features described above can be combined in various ways to form multiple implementations. As a result, the invention is not limited to the specific implementations described above, but only by the claims and their equivalents.