The present description generally relates to anomaly detection for data stored in databases, including detecting anomalies in data related to applications executing on one or more devices.
Existing techniques for anomaly detection of log files can require manual review by developers or system administrators, which can be difficult and inefficient. Other approaches for anomaly detection of log files may utilize rules provided by expert systems that can require frequent updating to adapt to changes in behavior from executing applications.
Certain features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several implementations of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
For a given computing device, execution anomalies from applications executing on the computing device, including erroneous behavior or unexpectedly long response times, may cause losses of revenue and/or unsatisfactory user experiences. In an example, such anomalies may be caused by hardware problems, network communication congestion, and/or software bugs. Computing devices, in many cases, generate log messages related to events that occur during application execution and store them in log files for troubleshooting by developers and administrators. These log files may also be stored in a database system for future querying, such as to extract information for analyzing the log files.
Computing devices may include software systems that may be used, in conjunction with hardware resources on such devices, to perform complex processes. An example of such a process may involve a computing device, with telephony capabilities, making a call to another device. Events that occur during such a process may be used to determine a set of performance measures in the form of Key Performance Indicators (KPIs). As used herein, a key performance indicator (KPI) is a quantifiable measure of a software application's performance. In one or more implementations, the subject technology may utilize KPIs to determine events that are related to failures of a process performed by the software application (e.g., voice call drop failure rate indicating a percentage of call failures).
Implementations of the subject technology described herein can also detect a root cause for software regressions (e.g., a software bug that makes a particular software feature stop functioning as intended after a certain event, such as a system upgrade, system patching, or a change to daylight saving time) in an automated and timely manner. Further, the subject technology minimizes the need to manually monitor and analyze reports on a periodic basis, and enables building a scalable solution that may eliminate manual data analysis, such as analysis by intuition and/or ad-hoc measures.
The network environment 100 includes an electronic device 110, and a server 120. The network 106 may communicatively (directly or indirectly) couple the electronic device 110 and/or the server 120. In one or more implementations, the network 106 may be an interconnected network of devices that may include, or may be communicatively coupled to, the Internet. For explanatory purposes, the network environment 100 is illustrated in
The electronic device 110 may be, for example, a desktop computer, a portable computing device such as a laptop computer, a smartphone, a peripheral device (e.g., a digital camera, headphones), a tablet device, a wearable device such as a watch, a band, and the like. In
The electronic device 110 may include a framework that provides access to different software libraries. A framework can refer to a software environment that provides particular functionality as part of a larger software platform. In one or more implementations, the electronic device 110 may include a framework that is able to access and/or execute telephony related functionality (e.g., setting up a call, etc.), which may be provided in a particular software library in one implementation.
The electronic device 110 may execute applications that populate one or more log files with log entries. For example, an application may execute code that prints out (e.g., writes) log entries into log files when performing operations in accordance with running the application, such as for debugging, monitoring, and/or troubleshooting purposes. The log entries may correspond to error messages and/or to unexpected application behavior that can be detected as anomalies using the subject technology. Examples of anomalies include errors in a workflow that occurs during execution of the application (e.g., a call setup failure), while other anomalies relate to low performance, where execution takes much longer than expected in normal cases even though the execution path is correct.
As illustrated, the computing environment 200 includes a memory 250 including application logs 252. In an example, each of the application logs 252 may be stored as one or more log files with multiple log lines in the memory 250. Each log entry may correspond to a log line in a given log file. Applications 240 may be executing on the electronic device 110 and provide log entries that are stored within one or more of the application logs 252. Each of the applications 240 may include one or more threads (e.g., a single-threaded or multi-threaded application) in which a thread may be performing operations for the application. A given application with multiple threads can therefore perform respective operations concurrently in respective threads.
As further shown, the computing environment 200 includes an anomaly detector 210 which is configured to perform anomaly detection using at least local outlier factor (LOF) techniques and conditional mutual information, and which further identifies trends in data related to features to facilitate eliminating particular features that are irrelevant to an anomaly, as further described herein. Although LOF techniques are discussed in the examples herein, it is appreciated that other techniques for determining anomalies may be utilized by the subject technology. The anomaly detector 210 may also perform pre-processing operations on data stored in the application logs 252, such as data filtering and/or aggregation of data based on different criteria as discussed further herein. Additionally, the anomaly detector 210 may automatically generate rules to address an anomaly, which are then pushed to a device to potentially handle or avoid the anomaly at a future time.
Depending on the number of values for the features 310, the total number of rows 320 in the table 300 can be significant. For explanatory purposes, the table 300 illustrates details pertaining to VoLTE (Voice over Long-Term Evolution) call functionality. However, log data from any other functionality may also be collected and utilized in a similar manner to detect anomalies.
The table 300, as shown in
In the table 300, a time series dimension 330 could correspond to an actual time and/or actual date; however, it is appreciated that the time series dimension as utilized by the subject technology may be a changeable dimension other than time (e.g., a software version). Each feature in a respective column of the table 300 may be considered a different dimension that can be grouped according to the time series dimension. As mentioned above, the KPI 350 relates to the voice call drop failure rate (e.g., the percentage of time where the call has failed). In one or more implementations, the anomaly detector 210 groups the entire dataset by the time series dimension with the different feature values, and then determines the KPI. Each of the features in respective columns of the table 300 can have an effect on the KPI 350.
In one or more implementations, the anomaly detector 210 removes statistically insignificant feature value(s) and/or feature combinations from the table 300, e.g., feature values and/or feature combinations that do not significantly impact the KPI 350. As referred to herein, a feature combination includes a combination of respective values corresponding to respective features (e.g., as shown in a particular row of the features 310 in the table 300). In an example, the anomaly detector 210 may utilize statistical filtering on the input data to remove decoys, e.g., feature values that do not impact the KPI 350, and focus on the feature values that impact the KPI 350. Such statistical filtering may be based on at least one threshold (e.g., a cutoff threshold), although more than one threshold may be utilized in some implementations (e.g., two separate thresholds corresponding to a lower bound threshold and an upper bound threshold). In an example, the anomaly detector 210 may plot the data as a probability density function (PDF) and determine a cutoff from the PDF to remove features and/or feature combinations without a sufficient sample size, such as those having a very low occurrence in the input data set (e.g., the application logs 252). In an example, the threshold may be configured such that, after filtering, 90% of the dataset remains in the input data. The filtered input data may then be applied through a local outlier factor algorithm as discussed further below.
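As a minimal sketch of this filtering step, assuming the log data has been flattened into a pandas DataFrame with one row per log entry (the DataFrame and column names below are hypothetical), the low-occurrence feature combinations could be dropped while retaining roughly 90% of the rows:

```python
import pandas as pd

def filter_rare_combinations(df: pd.DataFrame, feature_cols, keep_fraction=0.9):
    """Drop feature combinations whose counts fall in the low tail of the
    occurrence distribution, keeping roughly `keep_fraction` of the rows."""
    counts = df.groupby(feature_cols).size().sort_values(ascending=False)
    # Keep the most common combinations until their cumulative share of the
    # data reaches the configured fraction (e.g., 90% of the dataset).
    cumulative_share = counts.cumsum() / counts.sum()
    kept = counts[cumulative_share <= keep_fraction].index
    return df[df.set_index(feature_cols).index.isin(kept)]

# Hypothetical usage with columns from the table 300:
# filtered = filter_rare_combinations(logs_df, ["network", "model", "ratInfo", "type", "country"])
```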
In one or more implementations, the anomaly detector 210 utilizes the LOF technique to detect anomalies, based on feature combinations, in the input data (e.g., after filtering based on thresholds as discussed above). As an example, the LOF technique determines anomalous data points by measuring the local deviation of a given data point with respect to its neighbors. The anomaly detector 210 utilizes the LOF technique for assigning an anomaly score to each data point by computing the ratio of the average densities of the data point's neighbors to the density of the data point itself as follows:
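(One plausible reconstruction of the omitted expression, under the assumption that the density of a data point is taken as the inverse of its average distance to its k nearest neighbors, is:)

$$\mathrm{LOF}_k(p) \;=\; \frac{\tfrac{1}{k}\sum_{x \in N(p)} \mathrm{density}(x)}{\mathrm{density}(p)}, \qquad \mathrm{density}(p) \;=\; \left(\frac{1}{k}\sum_{x \in N(p)} d(p, x)\right)^{-1}$$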
where k indicates the number of neighbors, N(p) is the set of k nearest neighbors and d(p, x) is the distance between two data points p and x.
In the example of
In an example, one advantage of the LOF technique is that it produces an outlier prediction value (e.g., an anomaly score) and not a binary decision indicating whether a data point is an anomaly or not. Based on the anomaly score (e.g., the outlier prediction value), an anomaly class is generated to classify the corresponding data point as an anomaly or not an anomaly. For example, data points with a high anomaly score are classified as anomalies depending on data quantiles.
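A minimal sketch of this scoring and classification step, assuming scikit-learn's LocalOutlierFactor is used as the LOF implementation and that the anomaly cutoff is taken at a data quantile (the 95th percentile here is an assumed value):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def lof_anomaly_class(X: np.ndarray, k: int = 20, quantile: float = 0.95):
    """Assign an LOF anomaly score to every row of X and derive a binary
    anomaly class from a data quantile of those scores."""
    lof = LocalOutlierFactor(n_neighbors=k)
    lof.fit(X)
    # negative_outlier_factor_ holds the negated LOF scores; flip the sign so
    # that larger values mean "more anomalous".
    scores = -lof.negative_outlier_factor_
    cutoff = np.quantile(scores, quantile)
    anomaly_class = (scores > cutoff).astype(int)  # 1 = anomaly, 0 = not an anomaly
    return scores, anomaly_class
```

Rows whose score exceeds the chosen quantile cutoff map to an anomaly class of 1; all other rows are treated as non-anomalous.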
In one or more implementations, a given KPI can have a one-dimensional failure rate (e.g., related to a single feature) or a multi-dimensional failure rate (e.g., related to multiple features). In an example related to a one-dimensional failure rate, where the system is only tracking a failure rate of a single feature, the LOF technique is not applied by the anomaly detector 210 to detect anomalies and a ranking algorithm may be utilized instead (e.g., picking the highest or lowest failure rates).
Alternatively, in an example related to a multi-dimensional failure rate, the anomaly detector 210 can track a failure type (e.g., receiving a failure indicator such as a rejected call 403 code or a rejected call 404 code) related to the voice call drop failure rate (e.g., the KPI 350). For a given table (e.g., the table 300) that includes a KPI (e.g., the KPI 350) corresponding to feature combinations, the anomaly detector 210 checks a multi-dimensional vector and applies the vector as an input to the LOF technique. For example, if a particular feature combination has a specific failure type with a given failure rate, and a different feature combination has the exact same failure rate but with a completely different failure type, then that different combination may be scored higher by the local outlier factor algorithm.
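A sketch of how such a multi-dimensional input might be assembled, assuming the raw log rows carry hypothetical `call_failed` and `failure_type` columns; each feature combination maps to a vector of per-failure-type rates that can then be scored with the LOF routine sketched above:

```python
import pandas as pd

def failure_rate_vectors(df: pd.DataFrame, feature_cols,
                         failure_type_col="failure_type", failed_col="call_failed"):
    """Build, for each feature combination, a vector of failure rates broken
    out by failure type (e.g., one column per reject code), so that two
    combinations with the same overall failure rate but different failure
    types map to different points in the LOF input space."""
    totals = df.groupby(feature_cols).size()
    failures = (df[df[failed_col] == 1]
                .groupby(feature_cols + [failure_type_col]).size()
                .unstack(fill_value=0))
    # Normalize the per-type failure counts by the total samples per combination.
    return failures.div(totals, axis=0).fillna(0.0)

# Hypothetical usage, feeding the vectors into the LOF scoring sketched above:
# X = failure_rate_vectors(logs_df, ["network", "model"]).to_numpy()
# scores, anomaly_class = lof_anomaly_class(X)
```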
As illustrated,
After the groups of feature values from the rows of the table 300 are classified according to anomaly classes, the anomaly detector 210 may utilize an information theory approach to determine which feature is more important for the KPI.
For a given dataset, it is crucial to understand which features contribute most to an anomaly in the monitored KPI. Without calculating mutual information, subject matter knowledge or manual iteration over the various dimensions/features would be required, potentially taking more time to determine the impact on the monitored KPI.
In an example, the subject technology utilizes information theory techniques to rank the importance of features. In particular, the anomaly detector 210 uses mutual information, which is a measure of the mutual dependence between two random variables (e.g., a particular feature and an anomaly class). The mutual information quantifies the amount of information obtained about one random variable by observing the other random variable. In at least one implementation, mutual information techniques can exploit the dependency measure between the features of the dataset and the anomaly class that was generated (e.g., as discussed in
As illustrated in
In one or more implementations, the anomaly detector 210 calculates mutual information between each feature and the anomaly class random variable (e.g., the anomaly class from the column 560). The anomaly detector 210 may then divide the value of the mutual information by the entropy of the anomaly class to normalize the calculation. For example, respective features that are part of the mutual information calculation may be related to variables such as a network model, roaming, radio access technology information, pre-conditions, etc. The anomaly detector 210 utilizes the mutual information technique to determine an importance value of each feature with respect to the KPI. Thus, the importance value may be indicative of how impactful each feature is with respect to the KPI.
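A minimal sketch of this calculation, assuming discrete feature values and using scikit-learn's mutual_info_score as the mutual information estimate (the DataFrame and column names are hypothetical):

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mutual_info_score

def normalized_importance(feature: pd.Series, anomaly_class: pd.Series) -> float:
    """Mutual information between one feature and the anomaly class,
    normalized by the entropy of the anomaly class."""
    mi = mutual_info_score(feature, anomaly_class)
    probs = anomaly_class.value_counts(normalize=True).to_numpy()
    class_entropy = float(-(probs * np.log(probs)).sum())
    return mi / class_entropy if class_entropy > 0 else 0.0

# Hypothetical usage: rank every candidate feature by its importance value.
# importances = {col: normalized_importance(df[col], df["anomaly_class"])
#                for col in ["network", "model", "ratInfo", "type", "country"]}
```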
In one or more implementations, the anomaly detector 210 sorts respective importance values. In particular, features are ranked according to an importance value defined by a ratio denoted as the following:
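(Reconstructing the ratio from the surrounding description, with I(F; C) denoting the mutual information between a feature F and the anomaly class C, and H(C) the entropy of the anomaly class:)

$$\mathrm{importance}(F) \;=\; \frac{I(F;\, C)}{H(C)}$$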
which corresponds to the mutual information between a feature and the anomaly class, divided by the entropy of the anomaly class.
For a particular KPI, the anomaly detector 210 may rank the importance values as determined using mutual information. As further illustrated in
Important features, as determined above, may be used for aggregation purposes. In some instances, however, there are correlated features with high importance values (e.g., network, country) that may not be considered significant contributing features to an anomaly based on an additional calculation of conditional mutual information, as discussed further below.
In one or more implementations, the anomaly detector 210 may select a feature with a highest importance value based on mutual information and calculate, in an iterative manner, conditional mutual information to provide an additional re-ranking of features. The goal of this additional re-ranking using conditional mutual information is to potentially eliminate features that do not include significant additional information with respect to being correlated to a particular KPI being tracked. In this manner, the anomaly detector 210 utilizes conditional mutual information to eliminate redundant features in the data set.
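A sketch of this iterative re-ranking, assuming discrete feature values; for simplicity the conditioning here is on the most recently selected feature, although the conditioning set could equally be grown to include all previously selected features (the function and column names are hypothetical):

```python
import pandas as pd
from sklearn.metrics import mutual_info_score

def conditional_mutual_info(x: pd.Series, y: pd.Series, z: pd.Series) -> float:
    """Estimate I(x; y | z) for discrete variables as the z-weighted average of
    the mutual information between x and y within each value of z.  All three
    series are assumed to share the same index (rows of one DataFrame)."""
    total = len(z)
    cmi = 0.0
    for _, idx in z.groupby(z).groups.items():
        cmi += (len(idx) / total) * mutual_info_score(x.loc[idx], y.loc[idx])
    return cmi

def rerank_features(df: pd.DataFrame, feature_cols, class_col="anomaly_class"):
    """Greedy re-ranking: pick the feature with the highest mutual information
    first, then repeatedly pick the feature with the highest conditional
    mutual information given the most recently selected feature."""
    remaining = list(feature_cols)
    first = max(remaining, key=lambda f: mutual_info_score(df[f], df[class_col]))
    ranked, remaining = [first], [f for f in remaining if f != first]
    while remaining:
        nxt = max(remaining,
                  key=lambda f: conditional_mutual_info(df[f], df[class_col], df[ranked[-1]]))
        ranked.append(nxt)
        remaining.remove(nxt)
    return ranked
```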
The example of
In the table 702, the feature of “network” has the highest importance value based on mutual information. In the table 710, the feature of “model” has the highest importance value based on conditional mutual information. In the table 720, the feature of “ratInfo” (e.g., radio access technology information) has the highest importance value based on further conditional mutual information, conditioned on the features of “network” and “model”. Using information aggregated from the table 702, the table 710, and the table 720, the anomaly detector 210 sorts the features to generate a ranked set of features 730 based on the respective importance values. In this example, the feature of “network” has the highest ranking, with the features of “model”, “ratInfo”, “type”, and “country” following in descending order. Even though the feature of “country” was initially indicated as having a relatively high importance value, after iteratively undergoing conditional mutual information processing by the anomaly detector 210, this feature was re-ranked as the least important feature in the set of features 730.
After the features are re-ranked using the iterative approach in determining conditional mutual information, as discussed in
Based on this time series analysis, the anomaly detector 210 can then determine a particular point in the time series (e.g., as depicted in a chart 800) where a significant regression occurred in the data. In the example chart 800, a significant regression is identified, at a point in time 810, for a software version corresponding to build “15A5318g”. After the point in time 810 is identified, the anomaly detector 210 may perform feature profiler techniques based on data related to the point in time 810.
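A minimal sketch of such a regression check, assuming the KPI has already been aggregated per time-series value (e.g., per software build) into a pandas Series; the three-standard-deviation threshold is an assumed value, not one taken from the description:

```python
import pandas as pd

def find_regression_point(kpi_by_version: pd.Series, threshold: float = 3.0):
    """Return the first index (e.g., software build) where the KPI jumps by
    more than `threshold` standard deviations above the values seen so far."""
    values = kpi_by_version.to_numpy(dtype=float)
    for i in range(3, len(values)):  # require a short history before testing
        history = values[:i]
        spread = history.std() or 1e-9  # guard against a perfectly flat history
        if (values[i] - history.mean()) / spread > threshold:
            return kpi_by_version.index[i]
    return None

# Hypothetical usage: find_regression_point(kpi_by_build) might return "15A5318g".
```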
In an example where the input data includes ten different dimensions (e.g., features), and the input data has been aggregated using the most important dimension (e.g., based on conditional mutual information as discussed above), there could still be several remaining dimensions that the anomaly detector 210 may determine as relevant or not relevant. Using the point in time (e.g., the point in time 810 where the regression occurred in the data) determined from the time series analysis discussed above, the anomaly detector 210 may apply feature profiler techniques for all the remaining dimensions. For each of the remaining dimensions, a KPI distribution, with respect to different values related to a particular dimension at the point in time, may be plotted to determine relevancy or irrelevancy. In
In an example, the anomaly detector 210 automatically checks a normalized failure rate distribution related to the KPI for different values of a particular feature. In one or more implementations, the anomaly detector 210 utilizes a standard deviation threshold on a given plot to determine whether the plot is relatively uniform and, in turn, whether the feature related to the plot is irrelevant or relevant. For example, the plot 910 is indicated as being irrelevant because its distribution is relatively uniform (e.g., has a low standard deviation). Alternatively, when the distribution of the failure rate, with respect to the values of this feature, shows a high standard deviation, the anomaly detector 210 can determine that this feature is relevant, as shown in the plot 950. In this manner, the anomaly detector 210 can perform this process for each of the remaining features to eliminate irrelevant dimensions or determine the relevant dimensions from the remaining dimensions.
After determining the relevant features from the data set and the relevant mapping of values of those features (e.g., based on the point in time where a regression was identified), the anomaly detector 210 can send the feedback, in the form of one or more rules, to a device as discussed in the following.
In an implementation, for features with many different values, the subject technology may utilize the following techniques to determine if a particular dimension is relevant or not. For example, a Coefficient of Variation Method (Standard Deviation/Mean) may be implemented, which includes the following operations: 1) calculate a Coefficient of Variation (CV) for the KPI distribution for a feature; 2) if the CV is less than the uniform distribution threshold, determine that the feature is irrelevant; 3) if the CV is greater than the uniform distribution threshold, determine that the feature is relevant, remove values until the distribution is uniform, and provide all removed values as a “root cause” for the anomaly. Alternatively, a weighted average check may be implemented which determines whether the KPI for a specific value(s) of a feature is above or below a predefined threshold compared to the weighted average KPI.
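A sketch of the coefficient-of-variation check under these assumptions (the uniform-distribution threshold of 0.25 below is an illustrative value, not one taken from the description):

```python
import pandas as pd

def cv_relevance(kpi_by_value: pd.Series, uniform_threshold: float = 0.25):
    """Coefficient-of-variation check for a single feature: `kpi_by_value`
    maps each value of the feature to its KPI (e.g., normalized failure rate)
    at the regression point.  Returns (is_relevant, root_cause_values)."""
    dist = kpi_by_value.sort_values(ascending=False)
    root_cause = []
    # Peel off the worst-offending values until the remaining distribution is
    # roughly uniform (CV below the threshold); the removed values are the
    # candidate root cause for the anomaly.
    while len(dist) > 1 and dist.std() / dist.mean() > uniform_threshold:
        root_cause.append(dist.index[0])
        dist = dist.iloc[1:]
    return len(root_cause) > 0, root_cause

# Hypothetical usage for the "network" feature at the regression point:
# relevant, offending_networks = cv_relevance(df.groupby("network")["failure_rate"].mean())
```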
In one or more implementations, the anomaly detector 210 may generate expert systems rules for sending to a particular device. Such rules may indicate how the device may avoid or address the failure related to the anomaly. In an implementation, the subject technology provides 1) a rule-based expert system with an inference engine that supports forward chaining, and 2) a tamper-proof secure system to provide OTA (over the air) rules that are expressed in a declarative language for a device in the field.
As illustrated, a workflow 1010 shows example operations for the anomaly detector 210 to auto-create rules for updating a configuration of a device. The anomaly detector 210 may perform automated table analysis as previously discussed. In one or more implementations, as also illustrated in the workflow 1010, the anomaly detector 210 identifies a particular set of feature values as a set of conditions attributed to causing the anomaly. A rule is generated based on the identified feature values and, subsequently, pushed OTA (over the air and/or through a network connection, etc.) to the device. After receiving the rule, the device may take a particular optimization action as a workaround to the issue and/or capture more debug information. For example, the rule could provide instructions that, when encountering this anomaly with the indicated feature values, the device should perform a particular action such as avoiding handovers to a network that had setting(s) related to the feature values and/or capturing more detailed logs.
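As an illustration only, a rule derived from the identified feature values might be serialized as a small declarative payload before being signed and pushed OTA; the field names and action string below are hypothetical and do not represent the actual rule format:

```python
import json

def build_rule(root_cause_values: dict, action: str) -> str:
    """Assemble a declarative rule: when a device observes all of the listed
    feature values, it performs the indicated action (e.g., avoid a handover
    to the offending network, or capture more detailed logs)."""
    rule = {
        "conditions": [{"feature": name, "equals": value}
                       for name, value in root_cause_values.items()],
        "action": action,
    }
    return json.dumps(rule)

# Hypothetical usage:
# payload = build_rule({"network": "carrier-x", "ratInfo": "LTE"},
#                      action="capture_detailed_logs")
# The payload would then be signed and delivered OTA to devices in the field.
```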
The anomaly detector 210 receives an input data set, the input data set including rows of values for different features (1210). In an example, each row may include a different combination of values for the features. The anomaly detector 210 classifies one or more of the rows of values as an anomaly based on anomaly scores determined for each of the rows of values (1212). The anomaly detector 210 determines a subset of the different features that affect the anomaly scores of the one or more rows classified as the anomaly (1214). The anomaly detector 210 determines a root cause for at least one of the rows classified as the anomaly based on values of the subset of the different features for the at least one of the rows (1216). Further, the anomaly detector 210 provides an indication of the root cause to a device to enable the device to perform an action when encountering conditions corresponding to the root cause at a subsequent time (1218).
The bus 1308 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1300. In one or more implementations, the bus 1308 communicatively connects the one or more processing unit(s) 1312 with the ROM 1310, the system memory 1304, and the permanent storage device 1302. From these various memory units, the one or more processing unit(s) 1312 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 1312 can be a single processor or a multi-core processor in different implementations.
The ROM 1310 stores static data and instructions that are needed by the one or more processing unit(s) 1312 and other modules of the electronic system 1300. The permanent storage device 1302, on the other hand, may be a read-and-write memory device. The permanent storage device 1302 may be a non-volatile memory unit that stores instructions and data even when the electronic system 1300 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 1302.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 1302. Like the permanent storage device 1302, the system memory 1304 may be a read-and-write memory device. However, unlike the permanent storage device 1302, the system memory 1304 may be a volatile read-and-write memory, such as random access memory. The system memory 1304 may store any of the instructions and data that one or more processing unit(s) 1312 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 1304, the permanent storage device 1302, and/or the ROM 1310. From these various memory units, the one or more processing unit(s) 1312 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 1308 also connects to the input and output device interfaces 1314 and 1306. The input device interface 1314 enables a user to communicate information and select commands to the electronic system 1300. Input devices that may be used with the input device interface 1314 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 1306 may enable, for example, the display of images generated by electronic system 1300. Output devices that may be used with the output device interface 1306 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” means displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to”, “operable to”, and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.