Modern technologies such as IoT, Big Data, Cloud, and data center consolidation demand smarter IT infrastructure and operations. These systems continuously generate voluminous logs reporting their operational activities. Efficient operation and maintenance of the infrastructure require many of the least glamorous applications, such as troubleshooting, debugging, monitoring, and security breach detection in real-time. A log is a semi-structured record which carries operational information. Complex systems such as nuclear power plants and data centers emit logs daily. Logs arrive periodically with high volume and extreme velocity. Although logs are scattered, they reflect a system's operational status, and thus are very useful to administrators for capturing a snapshot of a system's running behaviors.
Logs provide fundamental information for system administrators and are useful for diagnosing the root cause of a complex problem. Log analysis is the process of monitoring and extracting valuable information from logs to resolve a problem. It has a variety of uses in security and audit compliance, forensics, system operation and management, and system troubleshooting. Because of the high volume, velocity, and variety of log data, analyzing logs in real-time is a daunting task for system administrators, and so they require a scalable, automated log analysis solution.
In one aspect, systems and methods are disclosed for processing a stream of logged data by: creating one or more models from a set of training logs during a training phase; receiving testing data in real-time and generating anomalies using the models created during the training phase; updating the one or more models during real-time processing of a live stream of logs; and detecting a log anomaly from the live stream of logs.
In another aspect, a log analysis framework collects continuous logs from multiple sources and performs log analysis using previously built models. Log analysis has two parts: learning/training and testing. The framework builds a training model from normal logs and performs anomaly detection in real-time. Multiple anomaly detectors find unusual behavior of the system by identifying anomalies in logs. Log analysis can be stateful or stateless. In stateful analysis, temporal data flow/transaction information is kept in memory or state, e.g., a database transactional log sequence. Stateless analysis does not require this, e.g., filtering error logs.
Implementations may include one or more of the following. The system provides an end-to-end framework for heterogeneous log analytics: it provides a reference architecture for streaming log analytics with a clear workflow for "stateful", "stateless", and "time-series" log analysis. The framework provides a dynamic log model management component, which allows dynamically updating, changing, and deleting log models in data streaming frameworks, a capability not provided by existing systems.
Advantages of the system may include one or more of the following. The log analysis tools and solutions are easy to use, as automation eases the human burden. In particular, the system provides an easy-to-use workflow to develop log analytics tools. In one embodiment called LogLens, a streaming log analytics application provides a reference architecture and workflows for analyzing logs in real-time. The system is a comprehensive service-oriented architecture with a dynamic model update mechanism for streaming infrastructures. The system provides unsupervised learning: the automated log analyzer works from scratch without any prior knowledge or human supervision. For logs from new sources, it does not require any inputs from the administrator. The system handles heterogeneity: logs can be generated by different applications and systems, and each system may generate logs in multiple formats. The log analyzer can handle any log format irrespective of its origin. The system is deployable as a service, since log volume and velocity can be very high. It is scalable, and operates without any service disruption while handling drift in log data behaviors.
LogLens collects continuous logs from multiple sources and performs log analysis using previously built models. Log analysis has two parts: learning/training and testing. LogLens builds a training model from normal logs and performs anomaly detection in real-time. Multiple anomaly detectors in LogLens find unusual behavior of the system by identifying anomalies in logs. Log analysis can be stateful or stateless. In stateful analysis, temporal data flow/transaction information is kept in memory or state, e.g., a database transactional log sequence. Stateless analysis does not require this, e.g., filtering error logs.
The LogLens architecture is divided into several components. At a high level, models are first created from a set of training logs and saved into the database. One implementation uses Spark Streaming for the instant testing phase, which receives testing data in real-time and generates anomalies using the models generated during the training phase. A model manager with dynamic model update functionality allows models to be updated during real-time processing of data in the instant streaming analytics. The system has the following modules:
Service Layer 1: The service layer is a RESTful API service which provides a mechanism for users to send and receive service layer requests for new data sources. Transactions from a service request can be either for the training dataset, in order to create training data models, or for the testing dataset, to generate anomalies from the testing data.
Training Engine 2: The training engine takes input training logs and generates models, which are passed to the model manager 4. Training requests from the service layer are forwarded to the training engine, along with parameters to support training the models.
Testing Module 3: This component takes a streaming input of logs and transforms these logs into tokenized key-value pairs based on patterns/regular expressions from the log parsing model generated in the training phase. The tokenized logs are then taken as input to generate anomalies based on the training models.
Model Manager 4: The model manager is the instant model management component that receives new models from the training component, manages model loading into the Spark testing component, and dynamically updates or deletes previous models from existing broadcast variables.
Anomaly Database 5: The anomaly database stores all reported anomalies, which are presented to the end-user in real-time. It also responds to user-interface requests with the anomalies requested.
For instance, a sample log is shown in the accompanying drawings.
Turning to the model generation (202), the framework includes a platform for extracting a variety of profiles/models, which are aggregated together in a global model database. These models can be used later by anomaly detection service components to find relevant anomalies and alert the user.
a. Content Profile Model: This model creates a frequency profile of the various values observed for each key in the pattern/regular expression of a category of logs.
b. Sequence Order Model: This model extracts sequential ordering relationships between various patterns. For instance, an observed transaction could be defined as: Pattern 1 (P1) may be followed by Pattern 2 (P2) within a maximum time difference of 10 seconds, and at most 3 such transactions can happen concurrently.
c. Volume Model: This model maintains a frequency distribution of the number of logs of each pattern received within a fixed time interval. It is then used for detecting unusual spikes of certain patterns, which are reported as alerts to the user (see the sketch after this list).
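By way of illustration only, the following is a minimal Scala sketch of such a volume model; the class name VolumeModel, the interval parameter, and the spike rule are assumptions made for this example rather than details of the actual implementation.

```scala
import scala.collection.mutable

// Hypothetical volume model: per-pattern log counts bucketed by a fixed time interval.
class VolumeModel(intervalMs: Long) {
  // (patternId, intervalIndex) -> count observed during training
  private val counts = mutable.Map.empty[(String, Long), Long].withDefaultValue(0L)

  def record(patternId: String, timestampMs: Long): Unit =
    counts((patternId, timestampMs / intervalMs)) += 1

  // Frequency distribution of per-interval counts for one pattern.
  def distribution(patternId: String): Map[Long, Int] =
    counts.collect { case ((p, _), c) if p == patternId => c }
      .groupBy(identity).map { case (c, xs) => c -> xs.size }

  // Flag an unusual spike: the count exceeds the training maximum by a margin.
  def isSpike(patternId: String, observed: Long, slack: Double = 1.5): Boolean = {
    val maxSeen = counts.collect { case ((p, _), c) if p == patternId => c }
      .foldLeft(0L)(math.max)
    observed > maxSeen * slack
  }
}
```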
Agent (301): The agent sends log data in a real-time streaming fashion to the log parser for tokenization. It controls the incoming log rate and identifies the log source.
Log Parser (302): The log parser component takes incoming streaming logs and the log-pattern model from the model manager as input. It preprocesses and creates a parsed output for each input log in a streaming fashion, and forwards it to the log sequence anomaly detector. All unparsed logs are reported as anomalies and presented to the user. The log parser is an example implementation of a stateless anomaly detection algorithm.
Tokenized logs generated as output from the log parser are in the form of <key, value> pairs, where each key is either a word, a number, etc. The tokenized log also contains special keys for fields denoting IP addresses or timestamps.
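For illustration, a minimal Scala sketch of this tokenization is shown below; the regular expressions and the special key names ("ts1", "ip1", "word0", ...) are assumptions for the example, not the actual model's vocabulary.

```scala
// Hypothetical tokenization of one log line into <key, value> pairs.
val log = "2016-11-10 09:12:01 ERROR 10.0.0.5 connection refused"

val tsRe = """\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}""".r  // timestamp field
val ipRe = """\b\d{1,3}(\.\d{1,3}){3}\b""".r            // IP-address field

val ts = tsRe.findFirstIn(log)
val ip = ipRe.findFirstIn(log)
val words = tsRe.replaceAllIn(ipRe.replaceAllIn(log, ""), "")
  .trim.split("\\s+").zipWithIndex
  .map { case (w, i) => s"word$i" -> w }

// Special keys "ts1"/"ip1" mark the timestamp and IP fields; remaining tokens become words.
val tokens: Map[String, String] =
  (ts.map("ts1" -> _) ++ ip.map("ip1" -> _)).toMap ++ words
// tokens: Map(ts1 -> 2016-11-10 09:12:01, ip1 -> 10.0.0.5, word0 -> ERROR, ...)
```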
The streaming compute layer uses models learned during the training phase to process incoming log streams and generate anomalies. From an implementation point of view, these are all implementations of the instant Violation Checker abstraction (explained in Step 306), and can be classified into three different types of analytics.
Stateless Anomaly Detection Module (303): Stateless algorithms, as the name suggests, do not keep any state across multiple consecutive logs. This ensures that this class of anomaly detection algorithms is completely parallelizable. Examples of stateless anomaly detection are log tokenization and detecting logs which do not match any previous patterns. For log parsing and tokenization, each log can be independently processed and does not depend on any previous log, making the whole process completely parallelizable. The content profile model in Step 202 is stateless. Later, we describe an exemplary stateless anomaly detection algorithm in detail.
Stateful Anomaly Detection Module (304): Stateful anomaly detection is the category of anomaly detection algorithms which require causal information to be kept in an "in-memory" state, as the anomaly depends on a number of logs and the order in which they occur. This means that the logs must be processed in their temporal order, and an "in-memory" state should be maintained which allows processing of a log given the existing state calculated from all previous logs. An example of stateful anomaly detection is complex event processing, wherein a model depicts stateful automata over logs. For instance, database logs can have "start" events and "stop" events for each transaction, and there are expected time frames within which these events must finish. Detecting such anomalies requires knowledge of previous "start" events when processing "stop" events. The sequence order model in Step 202 is stateful. Later, we describe an exemplary stateful anomaly detection algorithm in detail.
Time-Series Analyzer (305): Several time-series-based anomaly detection techniques can be used. In the context of LogLens, time-series-based anomaly detection either works directly on time-series data (CPU usage, memory usage, etc.) or on time-series generated from text logs (frequency of logs of each pattern at a fixed time resolution). The time-series analyzer is a stateful algorithm. A Violation Checker interface provides several abstractions for this purpose.
Violation Checker (306): The Violation Checker is an abstraction created for the streaming compute system and the model management system to provide ease of model update for each anomaly detection algorithm. The Violation Checker essentially allows for a "plug and play" framework, where new anomaly detection algorithms can simply implement the interface, and the LogLens framework will take care of all the common tasks through this abstraction. Some of the interfaces provided by the Violation Checker are listed next, followed by a sketch of the interface:
Read Model: This abstraction integrates with the model manager to read the model from the model database, parse it, and save it in memory on the Spark driver. This avoids requiring each analytical algorithm to develop an end-to-end connection with the model database.
Update Model: Once the model has been read using the "read model" abstraction, the update model abstraction is executed to update the model, using the instant novel dynamic model update procedure, across the distributed stream processing cluster.
Delete Model: Similar to update model, the delete model abstraction deletes the relevant model from distributed memory by invalidating it and removing it from all nodes in the cluster.
Execute Stream: The execute stream interface takes as input the log stream, and transforms the log stream according to the anomaly algorithm.
Execute Multiple Streams (time-series): This interface takes as input multiple log streams, including time-series streams, processes the multi-stream input using the anomaly algorithms, and transforms it into anomalies.
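A minimal Scala sketch of such a plug-and-play interface is shown below; the trait name and the placeholder types (Model, LogEvent, Anomaly) are assumptions for illustration and do not reproduce the actual LogLens code.

```scala
import org.apache.spark.streaming.dstream.DStream

// Placeholder types assumed for the sketch.
case class Model(payload: Array[Byte])
case class LogEvent(tokens: Map[String, String])
case class Anomaly(kind: String, detail: String)

// Hypothetical Violation Checker abstraction: each anomaly detector implements
// these hooks, and the framework handles model I/O and stream wiring.
trait ViolationChecker {
  def readModel(modelId: String): Model                        // fetch and parse from the model DB
  def updateModel(m: Model): Unit                              // dynamic update across the cluster
  def deleteModel(modelId: String): Unit                       // invalidate on all nodes
  def executeStream(logs: DStream[LogEvent]): DStream[Anomaly] // single-stream analytics
  def executeMultipleStreams(streams: Seq[DStream[LogEvent]]): DStream[Anomaly] // time-series
}
```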
Turning now to the model manager, its sub-components are described next.
As to Model Consolidation (402), this sub-component handles model updates to support incremental or distributed learning processes. The update of the models themselves depends on the learning algorithm and the model profile. For instance, the instant volume model can be easily updated by taking min/max values and merging them with those from the newer model (a sketch of such a merge follows).
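As an illustration of such a consolidation, the sketch below merges two hypothetical per-pattern volume summaries by taking elementwise min/max; the VolumeSummary structure is an assumption for the example.

```scala
// Hypothetical per-pattern summary kept by a volume model.
case class VolumeSummary(minCount: Long, maxCount: Long)

// Consolidate an existing model with a newer one (e.g., from incremental learning).
def merge(oldM: Map[String, VolumeSummary],
          newM: Map[String, VolumeSummary]): Map[String, VolumeSummary] =
  (oldM.keySet ++ newM.keySet).map { p =>
    val s = (oldM.get(p), newM.get(p)) match {
      case (Some(a), Some(b)) =>
        VolumeSummary(math.min(a.minCount, b.minCount), math.max(a.maxCount, b.maxCount))
      case (Some(a), None) => a
      case (None, Some(b)) => b
      case (None, None)    => VolumeSummary(0L, 0L)  // unreachable: p came from one key set
    }
    p -> s
  }.toMap
```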
The Model Database (403) has a hierarchical schema with each model kept as follows:
1. Timestamp—Time at which model is generated
2. TimeRange—The time range of the training logs during which the model was created
3. Category—The category of model
4. Model—The model can be saved as a BLOB entry or a separate table
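A compact Scala rendering of this schema could look as follows; the field names are assumptions derived from the list above.

```scala
// Hypothetical model-database record mirroring the hierarchical schema above.
case class ModelRecord(
  timestamp: Long,        // time at which the model was generated
  timeRangeStart: Long,   // start of the training-log window
  timeRangeEnd: Long,     // end of the training-log window
  category: String,       // e.g., "content-profile", "sequence-order", "volume"
  model: Array[Byte]      // serialized model, stored as a BLOB or in a separate table
)
```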
The Model Manager also handles the Service API with a module 404 that supports the following service APIs:
a. Distributed Learning: Large training log sessions may sometimes require distributed learning to speed up the learning process. This requires a novel model consolidation process, which can lock/unlock updates to the global model database and allow update queries to existing models.
b. Incremental Learning: Similar to distributed learning, models can be periodically updated with newer training data.
c. Query Models: At testing time, querying models is a requirement; queries can depend on time range, source, etc.
d. Model Database: A schema for model management in a storage database.
The model manager also performs several important functions for the instant compute framework:
1. Stores models generated by the training phase into a model database.
2. Fetches and updates the model in the instant testing compute system (Spark Streaming).
3. Deletes the model from the testing compute system (Spark Streaming).
In particular, operations (2) and (3) use a unique dynamic model update mechanism, exemplified next.
The instant modification has been made for micro-batch-oriented streaming architectures like Spark, which split incoming data into small data chunks called micro-batches before executing them. The dynamic model update queues all model update requests in an internal queue and applies them between subsequent micro-batches. This is done by a modification in the scheduler, which checks the model queue for any new models before starting the execution of the data stream.
For new models, a model entry is added to the model hash-map inside the driver memory, and this is fetched by each worker whenever it looks up the model. For model updates, the existing entry of the model object in the model hash-map in the driver memory is updated. When the workers execute, they detect the change and dynamically fetch the new model.
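One plausible way to realize this in Spark Streaming, sketched below, is to keep a mutable reference to a broadcast variable on the driver and swap it between micro-batches so that workers pick up the new model lazily; the ModelHolder object, the Model type, and applyModels are assumptions for the example.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

case class Model(patternRegex: String)  // placeholder model type

// Holds the current broadcast models; update() swaps in a fresh broadcast so that
// workers fetch the new models at the next micro-batch.
object ModelHolder {
  @volatile private var bc: Broadcast[Map[String, Model]] = _

  def get: Broadcast[Map[String, Model]] = bc

  def update(sc: SparkContext, models: Map[String, Model]): Unit = synchronized {
    if (bc != null) bc.unpersist(blocking = false)  // invalidate stale copies on workers
    bc = sc.broadcast(models)
  }
}

// Inside the streaming job, resolve the broadcast per micro-batch, e.g.:
//   stream.transform { rdd =>
//     val models = ModelHolder.get.value
//     rdd.map(line => applyModels(models, line))  // applyModels: assumed user function
//   }
```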
Thus, the framework includes the following components.
Agent collects logs from multiple log sources and sends them to the log manager periodically. The agent is a daemon process and sends logs using Apache Kafka topics. It tags source information onto logs. The agent operates in two ways: it sends real-time logs, e.g., syslog, or it can simulate a real-time log stream from a large log file, e.g., simulating a log stream from a database.
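A minimal sketch of such an agent using the standard Kafka producer API is shown below; the topic name "logs", the source tag, and the replayed file path are assumptions.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import scala.io.Source

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
val sourceId = "host1:/var/log/syslog"  // assumed source tag attached to each log

// Tag each line with its source and publish to an assumed "logs" topic;
// replaying a file like this simulates a real-time log stream.
for (line <- Source.fromFile("/var/log/syslog").getLines()) {
  producer.send(new ProducerRecord[String, String]("logs", sourceId, line))
}
producer.close()
```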
Log Manager receives raw logs from agents, performs pre-processing, and sends them to the log parser. It controls the incoming log rate and identifies the log source. It attaches this information to each log, turning the raw log into semi-structured data. Finally, it stores logs into log storage.
Log Storage stores logs. It uses distributed storage with proper indexing while storing logs. Stored logs can be used for building models during log analysis. They can also be used for future log replay for further analysis. Log storage is connected to the graphical user interface so that humans can dig through logs and give their feedback.
Model Builder builds a training model for each anomaly detector. It takes raw logs, assuming that they represent normal behavior, and uses unsupervised techniques to build models which are used for real-time anomaly detection. As a log stream may evolve over time, models need to be updated periodically. Therefore, the model builder collects logs from the log store, builds models accordingly, and stores them in the model database.
Model DB stores models. It uses distributed storage with proper indexing while storing models. All the anomaly detectors read models directly from the model database. Furthermore, the models are directly accessible from the graphical user interface, so that users can validate them and update them if required.
Model Manager reads models from the model DB and notifies the model controller of model updates. LogLens supports both automation and human interaction inside the model manager. For example, it has an automatic configuration which tells the model builder to update models at the end of each day. Alternatively, a human expert can directly modify a model and update it in the model database using the model manager.
Model Controller gets notifications from the model manager and sends model control instructions to the anomaly detectors. A model can be inserted, updated, or deleted, and each operation needs separate instructions. Moreover, for an updated model, the location from which the anomaly detector will look up/read the model is clearly defined in the model instructions. Anomaly detectors read the control instructions and update their models accordingly.
Log Parser parses incoming streaming logs from the log manager. It reads each log and finds its pattern. It identifies logs as anomalies when it cannot parse them and stores them in the anomaly database. The log parser uses a model that is built in an unsupervised manner. The model controller notifies it to update its model when required, and it reads the model from the model DB. As the log parser parses one log at a time, it is stateless.
Log Sequence Anomaly Detector captures anomalous log sequences of an event or transaction in real-time. It uses an automaton to identify abnormal log sequences. As it requires transaction state, it is stateful. Its automata are built in an unsupervised manner, and it reads the model from the model DB when the model controller notifies it. It stores all anomalies in the anomaly database.
Heartbeat Controller identifies open transactions in the log sequence anomaly detector. Each transaction has begin, end, and intermediate states. Using the heartbeat controller, the anomaly detector easily detects open states in an event log sequence and reports them as anomalies.
Anomaly DB stores all kinds of anomalies for human validation. It uses distributed storage with proper indexing while storing anomalies. Moreover, each anomaly has a type, severity, reason, location in the logs, etc. All of them are stored in an anomaly database which is connected to a graphical user interface for user feedback.
Visualization Dashboard provides a graphical user interface and dashboard to the end user. It gets data from log storage and the anomaly database. Each dashboard contains a particular anomaly type, reason, detailed information, time of occurrence, etc. Users receive this information in real-time or in a batch report and take necessary steps if required. Humans can easily validate anomalies and give their feedback.
Model Learning/Training Workflow is described next. LogLens requires a model for each of its anomaly detectors. These models are built from normal logs using machine learning techniques. The log manager collects logs from agents and sends them to log storage periodically, where each log is kept with its timestamp. The model manager instructs the model builder to build a model for an anomaly detector. Inside the model manager, the LogLens configuration can be set so that it instructs the model builder to build a model for an anomaly detector at a certain time interval, e.g., at the end of the day. The model builder can also be instructed manually to build a model when required. As soon as the model builder gets an instruction from the model manager, it collects the required logs from log storage; the model manager tells the model builder the time period of logs to use for building the model. Different anomaly detectors use different techniques to build their models. For example, the log parser uses an unsupervised clustering technique to build its model, while the event-based log sequence anomaly detector uses unsupervised event ID discovery to generate its model. After building a model, the model builder stores it in the model database. A human expert validates the model and changes it if required; the updated models are stored in the model database via the model manager. After the model is built, the model manager notifies the model controller, and the model controller informs the anomaly detector to update its model. The anomaly detector reads the model directly from the model database.
Anomaly Detection Workflow is described next. Real-time anomaly detection is the key part of LogLens. The agent continuously sends logs. The log parser collects these logs via the log manager and parses them using its model, which contains a set of rules that discover log patterns. Logs with unknown patterns that cannot be parsed properly are reported as anomalies. After that, all the parsed logs go to the log sequence anomaly detector, whose model contains automata with a set of rules to find an anomalous sequence for an event transaction. Logs belonging to a particular sequence are reported as anomalies if they do not follow the rules in the automata. All of the reported anomalies are stored in the anomaly database for human validation. Capturing an anomalous sequence from incoming logs is a stateful operation as it requires storing the transaction. On the other hand, parsing logs is a stateless operation. Thus, LogLens covers both stateful and stateless anomaly detection.
Model Update Workflow is described next. The behavior of logs may change over time, and sometimes a new log source may appear. These require dynamic model updates. LogLens performs model updates through its model manager, model controller, and model builder. A model update can be instructed automatically or manually from the model manager to the model builder. The model builder builds a new model and stores it in the model database. The model manager then notifies the model controller. The model controller sends control instructions to the anomaly detector to update its model. Finally, the anomaly detector reads the updated model from the model database.
Real-time Stateful Workflow is described next. Real-time stateful anomaly detection requires controlling open states. For example, in log sequence based anomaly detection, some states may remain open for a long time because they have not reached the end state of the transaction. LogLens has a Heartbeat (HB) controller module to control these open states. Basically, it sends an HB message, and when the anomaly detector finds this message, it scans through all the open states and reports missing end event anomalies for those that have been open too long. Finally, these anomalies are stored in the anomaly database.
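A minimal sketch of the open-state scan triggered by an HB message is shown below; the OpenTxn record and the age threshold are assumptions.

```scala
import scala.collection.mutable

case class OpenTxn(id: String, startedAtMs: Long)  // assumed open-transaction record
case class Anomaly(kind: String, idContent: String)

// On each HB message, report transactions whose end event is overdue.
def onHeartbeat(nowMs: Long,
                open: mutable.Map[String, OpenTxn],
                maxAgeMs: Long): Seq[Anomaly] = {
  val stale = open.values.filter(t => nowMs - t.startedAtMs > maxAgeMs).toSeq
  stale.foreach(t => open.remove(t.id))
  stale.map(t => Anomaly("missing-end-event", t.id))
}
```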
Next, real-time anomaly detection, dynamic model update, and heartbeat control are detailed.
Log Parser parses logs and extracts patterns from heterogeneous logs without any prior information. It reports an anomaly when it fails to extract a pattern from a log. The log parser requires training and testing for extracting patterns. The system can either use a user-provided log format as the training model or generate the log format automatically. LogLens uses the following steps to generate a model for log parsing:
The log parser uses patterns discovered during the learning stage for anomaly detection. If a log message does not match any pattern in the model, it is reported as an anomaly. If a match is found, the parser splits the log into various fields based on the matched pattern format. We implement the log parser using simple MapReduce logic in Spark. Pattern models are read from the model storage and broadcast to all workers. In the workers, the Map function does the parsing using the discovered patterns and tags all logs as matched or unmatched; the Reduce function simply collects all logs. After that, we filter unmatched logs and send them to the anomaly storage, while matched logs are sent to the log sequence anomaly detector for identifying anomalous log sequences.
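A condensed Scala sketch of this map-and-filter logic is shown below; the Pattern type and the overall wiring are simplifying assumptions rather than the actual implementation.

```scala
import org.apache.spark.SparkContext
import scala.util.matching.Regex

case class Pattern(id: String, regex: Regex)  // assumed pattern-model entry

// Tag each log with its matching pattern (if any); unmatched logs are anomalies.
def parse(sc: SparkContext, lines: Seq[String], patterns: Seq[Pattern]) = {
  val patternsBc = sc.broadcast(patterns)     // pattern models read from model storage
  val tagged = sc.parallelize(lines).map { line =>
    val hit = patternsBc.value.find(_.regex.pattern.matcher(line).matches())
    (line, hit.map(_.id))                     // Some(patternId) if matched
  }
  val unmatched = tagged.filter(_._2.isEmpty).keys  // sent to anomaly storage
  val matched   = tagged.filter(_._2.isDefined)     // sent to the sequence detector
  (matched, unmatched)
}
```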
LogLens provides stateful log analysis using an event log sequence based anomaly detector, which gets matched logs from the log parser as input. It groups logs belonging to an event as a sequence or transaction and detects an anomaly if they do not follow certain rules. For this, it keeps track of log flow information in memory or state. It requires training and testing for its anomaly detection.
LogLens uses an unsupervised technique to discover event identifiers (IDs) from logs and builds automata using them. Each automaton contains a set of rules which are used to detect anomalous log sequences. Event IDs in streams of system event logs are the type of content which may appear identically in multiple log instances, take many unique values over the history, occur at stable locations within the same log event type, or occur in stable structure across multiple log event types. They allow deterministic association of logs representing system/service behaviors such as database transactions, operational requests, work job scheduling events, and so on.
The event identifier discovery technique is discussed next. First, a reverse index of the training logs is constructed by LogLens. It generates a reverse index table where the indexing key is event ID content and the value is a list of log patterns that have this key; the result is a hash table. Algorithm 1 describes the procedure based on the log formats.
LogLens takes training logs as input and generates a hash table as output. It initializes a hash table H where the key is an index key and the value is an object set. For each training log Li, it finds the format pattern Pattern-x matching Li (e.g., through regular expression testing) among the recognized log formats; then, for each variable field PxFj in Pattern-x, it assigns the value v matched in Li and inserts into H under the key v as H(v).insert((PxFj, Li)).
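A plain Scala sketch of Algorithm 1 follows; it assumes each pattern's variable fields are named capture groups in its regular expression, which is a simplification for the example.

```scala
import scala.collection.mutable
import scala.util.matching.Regex

// Assumed pattern model: variable fields are named capture groups in the regex.
case class Pattern(name: String, regex: Regex, fields: Seq[String])

// Algorithm 1 sketch: reverse index keyed by candidate event-ID content.
// H: field value v -> set of (patternField PxFj, training-log index Li).
def buildReverseIndex(logs: Seq[String], patterns: Seq[Pattern])
    : mutable.Map[String, mutable.Set[(String, Int)]] = {
  val H = mutable.Map.empty[String, mutable.Set[(String, Int)]]
  for ((log, i) <- logs.zipWithIndex;
       p <- patterns.find(_.regex.findFirstMatchIn(log).isDefined);
       m <- p.regex.findFirstMatchIn(log);
       f <- p.fields) {
    val v = m.group(f)  // value of variable field PxFj in log Li
    H.getOrElseUpdate(v, mutable.Set.empty) += ((s"${p.name}.$f", i))
  }
  H
}
```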
The log parser uses the model to extract patterns from heterogeneous logs. Any log message that does not match any pattern in the model is reported as an anomaly. Matched logs are sent to the log sequence anomaly detector for identifying anomalous log sequences.
The log parser extracts log patterns in real-time. It is built using Apache Spark and uses an efficient MapReduce-based pattern discovery technique.
ID field discovery is done on pattern field pairs. This produces an event ID set that covers all logs in training. It uses an association rule mining technique to produce the event ID field set. Algorithm 2 describes it. It takes log pattern field sets grouped under hash table keys as input and generates an event ID field set as output. It initializes a hash table T where the key is a composite index key and the value is an object set. For each entry under the key k in the hash table H, it creates a composite key ck which includes all the pattern fields in H(k), and inserts into T under the key ck all the log numbers Li in H(k). After that, it initializes another hash table F where the key is a composite index key and the value is an integer initialized to 0. For each entry under the key k in the hash table T, it assigns the integer i as the total number of unique logs in T(k), and for each unique 2-field pair p = (PiFx, PjFy) derived from the pattern fields contained in the composite key k, it updates F so that F(p) = F(p) + i. Finally, for each entry under the key k = (PiFx, PjFy) in the hash table F, it inserts k = (PiFx, PjFy) into the pattern ID field set container IDs if F(k) equals the number of training logs matching pattern Pi or Pj.
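Algorithm 2 can be sketched as follows, continuing from the reverse index H built in the Algorithm 1 sketch; the helper logsMatching (number of training logs matching a pattern field's pattern) is an assumed input.

```scala
import scala.collection.mutable

// H: value -> set of (patternField, logIndex), from the Algorithm 1 sketch.
def discoverIdFields(H: mutable.Map[String, mutable.Set[(String, Int)]],
                     logsMatching: String => Int  // assumed: #training logs per pattern field
                    ): Set[(String, String)] = {
  // Group log ids under the composite key of all pattern fields sharing a value.
  val T = mutable.Map.empty[Set[String], mutable.Set[Int]]
  for ((_, entries) <- H) {
    val ck = entries.map(_._1).toSet
    T.getOrElseUpdate(ck, mutable.Set.empty) ++= entries.map(_._2)
  }
  // Accumulate co-occurrence support for every 2-field pair.
  val F = mutable.Map.empty[(String, String), Int].withDefaultValue(0)
  for ((ck, logIds) <- T; pair <- ck.toSeq.sorted.combinations(2))
    F((pair(0), pair(1))) += logIds.size
  // Keep pairs whose support covers all logs of either pattern field.
  F.collect { case (k @ (f1, f2), cnt)
      if cnt == logsMatching(f1) || cnt == logsMatching(f2) => k }.toSet
}
```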
Event automata modeling corresponds to the process of profiling and summarizing event behaviors, in automata, over log sequence sets grouped by ID content.
1. Event automata modelling procedure. For the training logs, it builds the automata model based on the ID field knowledge. Algorithm 3 shows the details of the event automata modelling procedure. It takes training logs and the event ID field set as input and generates automata as output. Initially, it groups logs by ID content. It initializes a hash table G where the key is a composite index key and the value is an ordered object list. For each log Li in the training logs, it finds the log pattern with associated ID fields and creates a composite key k which consists of the log content matching those ID fields. It then inserts into the hash table G as G(k).insert((TimeStamp(Li), IDs(Pj))), where the ordered object list is sorted by the timestamps and IDs(Pj) denotes the ID fields of the log format Pj.
2. Event automata generation based on log groups. Next, it initializes an automaton model set M. For each entry under the key k in the hash table G, it finds the begin, end, and intermediate ID field sets in the ordered list G(k): let IDs(Pb) be the earliest based on the time order, IDs(Pe) be the latest based on the time order, and IDs(Pi) be the rest, i.e., the intermediate ID field sets. If the model set M has no event automaton whose begin event pattern matches IDs(Pb) and whose end event pattern matches IDs(Pe), it creates a new event automaton model in M, with its begin event type set to IDs(Pb), its end event pattern set to IDs(Pe), the min/max duration between its begin event type and its end event pattern set to the difference between the timestamps of IDs(Pb) and IDs(Pe), the intermediate event types added from IDs(Pi), and the min/max concurrency of the intermediate event types set based on their frequency in IDs(Pi). Otherwise, there is already an event automaton model in the model set M whose begin event pattern matches IDs(Pb) and whose end event pattern matches IDs(Pe); it updates that model's min/max duration between its begin event type and its end event pattern based on the difference between the timestamps of IDs(Pb) and IDs(Pe), and also updates the intermediate event types and their min/max concurrency based on IDs(Pi) accordingly.
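A condensed sketch of the automaton model maintained by Algorithm 3 is shown below; the Automaton fields and the per-transaction update are simplifying assumptions.

```scala
import scala.collection.mutable

// Hypothetical event automaton profiled from groups of ID-related logs.
case class Automaton(
  beginType: Set[String],                             // IDs(Pb)
  endType: Set[String],                               // IDs(Pe)
  var minDurMs: Long,                                 // min duration, begin to end
  var maxDurMs: Long,                                 // max duration, begin to end
  intermediate: mutable.Map[Set[String], (Int, Int)]  // event type -> (min, max) concurrency
)

// Fold one time-ordered transaction group G(k) into the model set M.
def updateModels(M: mutable.Buffer[Automaton], beginT: Set[String], endT: Set[String],
                 durMs: Long, inter: Map[Set[String], Int]): Unit =
  M.find(a => a.beginType == beginT && a.endType == endT) match {
    case Some(a) =>                                   // existing automaton: widen its bounds
      a.minDurMs = math.min(a.minDurMs, durMs)
      a.maxDurMs = math.max(a.maxDurMs, durMs)
      for ((t, c) <- inter) {
        val (lo, hi) = a.intermediate.getOrElse(t, (c, c))
        a.intermediate(t) = (math.min(lo, c), math.max(hi, c))
      }
    case None =>                                      // first sighting: create a new automaton
      M += Automaton(beginT, endT, durMs, durMs,
        mutable.Map(inter.map { case (t, c) => t -> (c, c) }.toSeq: _*))
  }
```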
Event sequence anomaly detection takes heterogeneous logs collected from the same system for event sequence behavior testing. The process uses the event automata for profiling and detecting abnormal behaviors of log sequence sets grouped by ID content, which is defined by the event ID field content.
Algorithm 4 shows the details of the event sequence anomaly detection procedure for the testing phase. It takes the testing logs, the event ID field set, and the automata as input and reports anomalies as output. It initializes a hash table E for active event automata instances, where the ID content is the key and the active automaton instance is the value; initially it is empty. Here, logs are grouped based on ID content. For each arriving log Li from the testing log stream, if its matching format pattern does not contain any ID field in any automaton model, the log is skipped. Otherwise, the procedure finds the event automaton matching the log sequence group. For the ID content k and the ID fields in the log Li: if there is no active automaton instance in the hash table E under the key k, and Li's ID fields do not match the begin event type of any automaton, it reports an alert message for log Li about missing other expected events based on the automaton model it matches, and goes back for the next log. If there is no active automaton instance in the hash table E under the key k, and Li's ID fields match the begin event type of some automaton, it inserts into the hash table E under the key k a new active automaton instance and goes back for the next log. When there is an active automaton instance A in the hash table E under the key k, it first checks whether Li's ID fields match the end event type of A; if so, it checks A's model parameters for violations of the (min, max) duration and (min, max) intermediate event concurrency based on the past logs with the same ID content k and the log Li. If there is any violation, it reports an alert message for the automaton instance A about the logs causing the model violation; it then removes automaton instance A from the hash table E and goes on to the next log. Otherwise, if Li's ID fields do not match the end event type of A, it updates A's model parameters on the related (min, max) intermediate event concurrency based on the past logs with the same ID content k and the log Li; if there is any violation, it reports an alert message for log Li causing the model violation and goes back for the next log.
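A per-log sketch of Algorithm 4 follows, continuing the Automaton type from the previous sketch; the Instance record and the simplified violation checks are assumptions made for the example.

```scala
import scala.collection.mutable

case class Anomaly(kind: String, idContent: String)
case class Instance(automaton: Automaton, startedAtMs: Long,
                    seen: mutable.Map[Set[String], Int])  // intermediate event counts

// One step of Algorithm 4. E: ID content -> active automaton instance.
def step(E: mutable.Map[String, Instance], models: Seq[Automaton],
         idContent: String, idFields: Set[String], tsMs: Long): Option[Anomaly] =
  E.get(idContent) match {
    case None =>
      models.find(_.beginType == idFields) match {
        case Some(a) =>                               // begin event opens an instance
          E(idContent) = Instance(a, tsMs, mutable.Map.empty[Set[String], Int]); None
        case None =>                                  // no begin event was ever seen
          Some(Anomaly("missing-expected-events", idContent))
      }
    case Some(inst) if inst.automaton.endType == idFields =>
      E.remove(idContent)                             // end event closes the instance
      val d = tsMs - inst.startedAtMs
      if (d < inst.automaton.minDurMs || d > inst.automaton.maxDurMs)
        Some(Anomaly("duration-violation", idContent))
      else None
    case Some(inst) =>                                // intermediate event: check concurrency
      val c = inst.seen.getOrElse(idFields, 0) + 1
      inst.seen(idFields) = c
      val max = inst.automaton.intermediate.get(idFields).map(_._2).getOrElse(Int.MaxValue)
      if (c > max) Some(Anomaly("concurrency-violation", idContent)) else None
  }
```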
Table 1 shows all the anomaly types. The automata in the training model contain all the rules related to normal event log sequences; any violation will generate one of these anomalies.
Real-time anomaly detection uses Apache Spark in one embodiment. LogLens uses Apache Spark to detect anomalous log sequences in real-time. The MapWithState API in Spark Streaming is an ideal fit for sequence analysis as it stores temporal information in state/memory.
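A skeletal use of mapWithState for this purpose might look as follows; the ParsedLog and TxnState types are assumptions, the detailed automaton checks are elided, and the state timeout doubles as the heartbeat-style open-state alert.

```scala
import org.apache.spark.streaming.{Durations, State, StateSpec}
import org.apache.spark.streaming.dstream.DStream

case class ParsedLog(idFields: Set[String], tsMs: Long)  // assumed parsed-log shape
case class TxnState(startedAtMs: Long)                   // assumed per-transaction state
case class Anomaly(kind: String, idContent: String)

// Keyed by event ID content; the state holds the open transaction, if any.
val trackTxn = (id: String, log: Option[ParsedLog], state: State[TxnState]) => {
  if (state.isTimingOut()) {
    // The idle timeout plays the heartbeat role: the transaction never reached its end event.
    Some(Anomaly("missing-end-event", id))
  } else log match {
    case Some(l) if !state.exists() =>
      state.update(TxnState(l.tsMs)); None               // begin event opens the transaction
    case Some(_) =>
      state.remove(); None                               // end event closes it (checks elided)
    case None => None
  }
}

def detect(parsed: DStream[(String, ParsedLog)]): DStream[Anomaly] =
  parsed.mapWithState(
    StateSpec.function(trackTxn).timeout(Durations.seconds(60))
  ).flatMap(x => x)
```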
By way of example, a block diagram of a computer or controller or analyzer to support the invention is discussed next. The computer or controller preferably includes a processor, random access memory (RAM), a program memory (preferably a writable read-only memory (ROM) such as a flash ROM), and an input/output (I/O) controller coupled by a CPU bus. The computer may optionally include a hard drive controller which is coupled to a hard disk and the CPU bus. The hard disk may be used for storing application programs, such as the present invention, and data. Alternatively, application programs may be stored in RAM or ROM. The I/O controller is coupled by means of an I/O bus to an I/O interface. The I/O interface receives and transmits data in analog or digital form over communication links such as a serial link, local area network, wireless link, and parallel link. Optionally, a display, a keyboard, and a pointing device (mouse) may also be connected to the I/O bus. Alternatively, separate connections (separate buses) may be used for the I/O interface, display, keyboard, and pointing device. The programmable processing system may be preprogrammed, or it may be programmed (and reprogrammed) by downloading a program from another source (e.g., a floppy disk, CD-ROM, or another computer).
Each computer program is tangibly stored in a machine-readable storage medium or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of the computer when the storage medium or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims priority to Provisional Application Ser. No. 62/420,034 filed Nov. 10, 2016, the content of which is incorporated by reference.