1. Field of the Invention
This invention relates to computing and in particular to methods and systems for analyzing data relationships within a KStore interlocking trees data structure.
2. Description of Related Art
Corporations from all industries routinely store vast amounts of data in databases. The stored data can range from economic data relating to financial expenditures to scientific data collected during an experiment. Database users then take this data and query, or question, the database in the expectation of retrieving valuable information. Based on how present day databases are maintained and used, there are two scenarios that occur when a user queries a database.
In the first scenario, the user knows what types of information are contained in the database, knows the relationships between the data elements they are looking for, and knows of a way to search for them. The first scenario is most often characterized by the application of a single analytic, known to produce results, to the database. Examples of the first scenario are those in which the user desires to create graphs or charts, such as a graph of the rate of profit increase at a financial institution or a chart of a chemical company's research data showing changes in chemical diffusion across a cellular membrane. The output generated when an analytic is applied is an answer to a known query of a known relationship between known pieces of data.
The second scenario occurs when the user does not know what, if any, relationships exist between data within a database or databases. The user is presented with the daunting task of finding answers to questions based on these unknown relationships. Because of this, the users must focus not on what they know about the data, but rather, on what they do not know about the data. It is in this second scenario where the user employs a process called Data Mining, or Knowledge Discovery in Databases (KDD). The mining of databases through the application of analytics enhances the user's understanding of the data that is being collected.
Data Mining is the process by which raw data, collected and stored in a database warehouse, is analyzed using single or multiple analytics to find previously unknown relationships or patterns between the data. The result of the query is not the pattern of data that the user knows about, but rather the pattern, or more frequently patterns, the user does not know about. Although the application of single or multiple analytics to a database can theoretically generate millions of patterns, the user will only want to retrieve relationships that contain useful knowledge or, in other words, are interesting. Once the user mines the database and finds interesting patterns, the user can then limit the search fields of the applied analytics to focus the knowledge gained from Data Mining onto specific variables, further increasing the specificity or exactness of understanding of the knowledge contained in the database.
In the current state of the art, the process of mining a database for knowledge is common and well known to those skilled in the art. First, before the Data Miner application can be applied to a given database, the user determines what type of database the Data Miner will be applied to. Examples of the varying types of databases include static databases, such as warehouses, and dynamic databases, such as those used in real-time data sampling. The user then decides what Data Miner applications can be used and whether any optimizations are necessary to prevent the retrieval of uninteresting or useless patterns. If the user determines that no current Data Miner application exists for their particular situation, the user then creates a Data Miner application that fits his/her needs. The Data Miner then applies varying analytics, as prescribed by the user, to a database and attempts to find interesting relationships therein.
With the current art, the application of analytics is a standard operation. First, the user must either use an existing database or “seed” a new database with raw data. Then, the user must determine what types of data are needed to solve his particular need. The user then either devises and implements a script that mines the database and retrieves the needed data, or the user implements a canned script already prepared by an outside source. Because the database is populated only with raw data, with no relational information contained therein, the script often requires the setting up of tables that will be populated with the mined data before the analytic can be applied. If the database is not in a form proper for the previously prepared analytic, for example if key data is not in the indexes searched by the Data Miner, the database may need to be reconstructed. Once the table or tables are constructed and populated with the mined data, the script looks through the information and returns an output using the algorithm implemented by the analytic.
Methods for mining large amounts of complex data are fairly common in the art. For example, U.S. Patent Application No. 2004/0010505, entitled “Method and system for data mining automation in domain-specific analytic applications” teaches methods for using predefined data mining algorithms to mine data from a data schema.
U.S. Patent Application No. 2005/0069863, entitled “Systems and methods for analyzing gene expression data for clinical diagnostics” teaches methods, computer programs and computer systems for constructing a classifier for classifying a specimen into a class. The classifiers are models. Each model includes a plurality of tests. Each test specifies a mathematical relationship (e.g., a ratio) between the characteristics of specific cellular constituents.
U.S. Patent Application No. 2002/0077790, entitled “Analysis of retail transactions using Gaussian mixture models in a data mining system” teaches a computer-implemented data mining system that analyzes data using Gaussian Mixture Models. The data is accessed from a database, and then an Expectation-Maximization (EM) algorithm is performed in the computer-implemented data mining system to create the Gaussian Mixture Model for the accessed data. The EM algorithm generates an output that describes clustering in the data by computing a mixture of probability distributions fitted to the accessed data.
There are several limitations with the current state of the art of analytics and in turn, current Data Mining applications. First, it may take excessive human capital to implement an analytic. Data is collected and stored in raw form in a database. If the database is not indexed in the format necessary for a canned analytic to mine the database, either the database administrator must reconfigure the database or the administrator must modify the analytic so it can work within their particular database. This requires human capital because either the database administrator must compare how the user's database is formulated and alter it in a way that the canned analytic can be applied, or the corporation must enlist the help of programmers to re-write the analytic script so that it may be applied to their particular database, or, the programmers may have to write an entirely new analytic depending on the amount of changes that are required.
Second, valuable computer resources are taken away from computing and reallocated towards the application of an analytic. If a database is not indexed in the format needed to apply a particular analytic, the database would either need to be re-indexed or be completely reconstructed. The application of an analytic often requires the generation of tables. If the tables need to be updated based upon a determination that the database contains new data, the analytic must repopulate the tables with an entirely fresh set of data, which includes not only any new or updated data, but also the already mined data. In addition, if subsequent applications of different analytics require information that is not contained in the existing tables, new tables would need to be created or the existing tables would need to be expanded with the additional data required for the new analytic. If the previous table contains excess information, or if the tables have to be updated or refreshed with new data, the system will have to unnecessarily populate these tables with extra data carried forth from the previous analytic.
All references cited herein are incorporated herein by reference in their entireties.
A data analysis system for performing an analytic to obtain an analytic result in a computing device having memory associated therewith, the data analysis system including a data analyzer interface, at least one interlocking trees datastore within the associated memory of the computing device, and at least one analytic application executed by the computing device. The data analysis system of the invention also includes a plurality of interlocking trees datastores wherein the at least one interlocking trees datastore is selected from the plurality of interlocking trees datastores in accordance with the data analyzer interface. The system can include a plurality of data sources wherein the at least one interlocking trees datastore is created from a data source selected from the plurality of data sources in accordance with the data analyzer interface. The at least one interlocking trees datastore can be a static interlocking trees datastore or a dynamic interlocking trees datastore. The at least one interlocking trees datastore continuously records new data.
The at least one interlocking trees datastore includes records of data and the at least one interlocking trees datastore continuously receives updates of the records of data. The at least one analytic application is selected from the plurality of analytic applications in accordance with the data analyzer interface. The at least one analytic application analyzes a static interlocking trees datastore or a dynamic interlocking trees datastore. The at least one analytic application can be any type of analytic, including an accounting/mathematical functional category analytic, such as a sum analytic, a statistical functional category analytic, a classification functional category analytic, a relationship functional category analytic, a visualization functional category analytic, a meta-data functional category analytic or any other functional category analytic. The data analyzer interface provides access to at least one administration application.
A data analysis method for performing an analytic to obtain an analytic result in a data processing device having a memory associated therewith includes providing a data analyzer interface for the data processing device and storing at least one interlocking trees datastore in the memory of the data processing device. At least one analytic application is executed in accordance with the at least one interlocking trees datastore. The associated memory of the data processing device can further include a plurality of interlocking trees datastores, the at least one interlocking trees datastore being selected from the plurality of interlocking trees datastores in accordance with the data analyzer interface. The data processing device can further include a plurality of data sources, the at least one interlocking trees datastore being created from a data source selected from the plurality of data sources in accordance with the data analyzer interface. The data processing device can further include a plurality of analytic applications, the method further comprising selecting the at least one analytic application from the plurality of analytic applications in accordance with the data analyzer interface.
The KStore Data Analyzer overcomes the inherent limitations associated with prior art Data Analysis or Mining systems that use traditional relational databases, by using KStores that model the data in combination with the application of a unique set of analytics called KStore Analytics. These KStore Analytics take advantage of the information contained in the Knowledge Store (KStore) interlocking trees data structure. As described in U.S. patent application Ser. Nos. 10/385,421 and 10/666,382, both entitled “System and method for storing and accessing data in an interlocking trees datastore,” the KStore data structure does away with the distinction between transactional data and stored (relational) data.
It is through this combination, the use of a KStore structure and analytics specifically designed for that structure, that many of the limitations with the prior art are overcome. First, human capital costs are reduced. When the KStore Engine is applied to static data or data from an existing database that has been previously populated, or dynamic data that is being populated on a timely basis, the KStore Engine formulates all the relationships upon data entry. Therefore, an interlocking tree datastore administrator or user does not need to verify that the data is set up in a specific way because the KStore Engine has already performed the task prior to analytic application. Also, because the KStore Engine models data in a consistent manner based on specific rules, the interlocking trees datastore administrator or user does not need to determine if certain analytics can be applied to the data while others cannot. Because the analytics use the structure of the KStore, various analytics in varying combinations, if desired, can be applied to the KStores regardless of the original data input.
Second, computer resources are not unnecessarily used for processes such as table generation or excess data updating. The KStore Data Analyzer implements analytics that take advantage of the relational information already contained in the KStore, removing the need to create tables to determine that information, as is the case in the prior art. The process by which KStore Analytics analyze the data allows for the application of various analytics to interlocking trees datastores without the need to generate a table for each analytic. Further, because no tables are generated, valuable computing resources are not needed to repopulate tables with excess data should a user want to use more than one analytic on a data set when those analytics require different data. The KStore Data Analyzer, using KStore Analytics on KStores, uses only minimal resources because the KStore Engine has already learned and developed the KStore structure based on all possible relationships between the data.
Because the present invention overcomes the limitations of the previous art, the KStore Data Analyzer provides levels of flexibility and agility for the user previously not found in prior art Data Mining techniques. Not only can various analytics in various combinations be applied to the same data without the need to generate tables, the same analytic can also be applied to various KStores because all analytics are optimized to work on the same modeling of information by the KStore Engine. KStore Analytics also provide the flexibility of implementing queries that are able to run while the structure is being populated.
The KStore Analytics also provide flexibility in personnel support. KStore administrators would need little or no understanding of the structure of the data or of the information contained therein. The KStore Analytics mine the data and implement analytics based on the knowledge the KStore Engine generates while populating the interlocking trees data store. An administrator would only need to know that the data had been placed in a KStore structure in order to be able to use any of the KStore Analytics.
The invention will be described in conjunction with the following drawings in which like reference numerals designate like elements and wherein:
Referring now to
When the KStore Engine processes particles of a data stream, the KStore Engine may record the events by generating Nodes based on relationships between two pieces of information. The resulting Nodes, which do not connect but rather relate two pieces of information, may contain two pointers, one pointer being the Case and the other the Result. Each time the same relationship between the same two pieces of information recurs, or more accurately, each time the same Node is traversed during a learn operation, the KStore Engine may increment a counter field to indicate the number of times the same relationship has been recorded into the KStore. The KStore Engine, along with building pointers and updating counts in the Node, may also build two pointer lists into the KStore interlocking trees data store for each Node. The first list may contain pointers to other Nodes that reference the current Node as a Case Node. The other pointer list may contain pointers to other Nodes that reference the current Node as a Result Node.
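The Node layout and learn operation described above may be sketched as a minimal Python class. The field and function names (`case`, `result`, `count`, `as_case`, `as_result`, `learn_pair`) are illustrative assumptions, not the identifiers used by the KStore Engine.

```python
class Node:
    """Illustrative sketch of a KStore Node: it relates a Case and a
    Result, keeps a traversal count, and holds the two pointer lists."""
    def __init__(self, case=None, result=None):
        self.case = case        # pointer to the Case node
        self.result = result    # pointer to the Result node
        self.count = 0          # times this relationship was recorded
        self.as_case = []       # Nodes that reference this Node as Case
        self.as_result = []     # Nodes that reference this Node as Result

    def record(self):
        """Called each time the same Node is traversed during a learn."""
        self.count += 1


def learn_pair(case, result, existing):
    """Return the Node for (case, result), creating and linking it if new."""
    key = (id(case), id(result))
    node = existing.get(key)
    if node is None:
        node = Node(case, result)
        existing[key] = node
        if case is not None:
            case.as_case.append(node)      # new node uses `case` as its Case
        if result is not None:
            result.as_result.append(node)  # new node uses `result` as its Result
    node.record()
    return node
```

Recording the same relationship twice thus produces a single Node whose count is 2, rather than a second Node.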
Since it is possible to retrieve every possible count of every value in every context represented in a KStore, a KStore is capable of supporting any possible analytic, descriptive or predictive, static or in real-time. Therefore, the KStore Analytics implemented by the KStore Data Analyzer may return useful patterns containing knowledge using any analysis technique from either a static or dynamic KStore. The KStore Data Analyzer uses the knowledge from the pointers and pointer lists contained in the Nodes to retrieve relational information about the data and uses the count fields to perform statistical analysis of those relationships. In addition, the sequences of events captured within the interlocking trees data store may also be used for analysis of the data.
The KStore Data Analyzer may exist in either a batch environment or in an interactive environment. The various KStore applications, including Analytics, Utilities, and Data Sources that the KStore Data Analyzer utilizes may also exist in either a batch or interactive mode, depending upon the requirements of the specific KStore environment. In a preferred embodiment, the KStore Data Analyzer is used in an interactive environment and may use at least two types of Graphical User Interfaces (GUIs) to assist the user in performing data mining operations on interlocking tree datastores.
The first type of GUI is a KStore Administration interface which provides access to administration functions, including definition of data sources, as well as all the analytics currently available to the user. This interface performs the functions of the data analyzer 12, including selecting a specific analytic application from applications 10 and specific data sources from data source applications 8. In addition, the interface may provide access to functions other than analytics in the KStore applications 10 which, for instance, may include Save/Restore routines that provide persistence for the KStore data structure.
The second type of GUI provides a specific interface for a user selected analytic application as shown in applications 10. The format for an analytic interface depends upon which analytic was chosen and may contain various fields, or directives which include, among others, the focus variable currently in use, any constraints, results required, and what KStores are being mined.
Along with the previously mentioned fields and directives, in order to help the user sort through and narrow the resulting knowledge to a desired specificity, the analytic may display selectable constraint lists and focus variables. A constraint list contains constraints, which are variables that limit the records a query will process, whereas the focus is generally a variable value that is the subject of interest, usually within a context defined by a set of constraints. For example, a basic query could return the total number of widgets sold. To reduce the total number of records analyzed, the user could constrain the KStore by a specific salesman in order to determine the total number of widgets sold by that salesman. In the preceding example, the focus would be the number of widgets sold and the constraint would be the particular salesman.
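The distinction between focus and constraint may be illustrated with a short sketch. The salesman name “Smith”, the records, and the quantities below are invented for illustration; they are not taken from the data records discussed later.

```python
# Invented sample records: each sale lists salesman, item, and quantity.
sales = [
    {"salesman": "Smith", "item": "widget", "qty": 5},
    {"salesman": "Smith", "item": "widget", "qty": 2},
    {"salesman": "Jones", "item": "widget", "qty": 4},
]

# Focus only: total number of widgets sold across all records.
total_widgets = sum(r["qty"] for r in sales if r["item"] == "widget")

# Focus plus constraint: widgets sold, limited to records for salesman Smith.
smith_widgets = sum(r["qty"] for r in sales
                    if r["item"] == "widget" and r["salesman"] == "Smith")
```

The constraint reduces the set of records the query processes; the focus determines the value the query reports.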
KStore Analytics
KStore Analytics use information recorded by the KStore Engine and implement special analytic scripts that capitalize on this information. KStore Analytics use information contained in the KStore such as the number of occurrences of a variable and the relationship of that variable with the rest of the data in the KStore.
It will be understood that the analytics set forth herein are not intended to be exhaustive of all of the analytics possible in keeping with the spirit and scope of the invention. Rather, they are intended to be merely representative of the analytics that may be performed according to the invention.
KStore analytics may be implemented against a KStore by applying a focus and possibly one or more constraints to the KStore to obtain a result. The results obtained by the KStore Analytic are based on the result requested. The results include values such as numeric values or particle sequence values. Since the order in which values are recorded by a KStore is, in itself, information, sequence information is also a result that may be obtained by an analytic. An example of the use of sequence information by an analytic is an analysis of timings of banking transactions.
KStore Analytics may be grouped into any number of functional categories. The accounting/mathematical functional category includes such analytics as “Sum,” “Distinct Count,” and “Data Aggregation.” The statistical functional category includes analytics such as “Single Variable Prediction.” The classification functional category includes analytics such as “Contexted Classification,” “Bayes Classification,” and “Dynamic Decision Tree.” The relationship functional category includes analytics such as “Associated Rules”. The visualization functional category includes analytics such as “Chart Generator” and “Field Chart.” The meta-data functional category includes analytics such as “Constraint Manager.” Additionally, analytics can be divided into categories based on any criteria a user may find convenient. For example, a user may define a category of analytics that tend to be useful to users analyzing the results of drug studies. A user may also define a category of analytics that tend to be useful to users studying amino acids. Thus, the number of such functional categories is unlimited. The functional categories and the analytics in each functional category can be stored by the data analyzer 12 in
KStore Utilities
In addition to the functional analytics, the KStore Data Analyzer may provide access to various tools and utilities. These utilities may be used to load, save, restore, or simulate data, or to develop KStore-related GUI applications, among other functions.
In the following discussion, sample analytics and utilities will be defined and an example will be used with screen shots to show how each of these analytics may be accomplished. The examples are not meant to be an exhaustive list of examples, but are merely included to show how the KStore Analytics work with the information in KStore to analyze data.
Referring now to
Data records such as the data records shown in the Table below can be imported into the interlocking trees datastore 250. The methods for building a KStore such as the K 14a
Accordingly, the fifteen data records of the Table set forth the information for a total of fifteen transactions which can be stored as shown in the datastore 250. The presence of fifteen data records in the datastore 250 is indicated by the count of the end of thought node 350 which is the sum of the counts of all end product nodes within the datastore 250. It will be understood that the term ‘transactions’ herein includes both the trials and the outright sales shown in the data records of the Table.
The paths representing the fifteen transactions of the Table within the interlocking trees datastore 250 include the K paths that contain the ‘Bill’ subcomponent node 252 and K paths that contain the ‘Tom’ subcomponent node 300. The ‘Bill’ paths 262, 278, 290 are the paths extending from the BOT node 340 through the Bill subcomponent node 252. The ‘Tom’ paths 310, 328 are the K paths extending from the BOT node 340 through the Tom subcomponent node 300.
Using the interlocking trees datastore 250 it is possible to determine, for example, that Bill had six sold transactions on Tuesday in Pennsylvania by referring to K path 262. Furthermore, it is possible to determine that he had one sold transaction on Monday in New Jersey by referring to K path 278. Additionally, it is possible to determine the total number of items sold by either Bill or Tom by determining the number of times ‘sold’ is used within the interlocking trees datastore 250. This information can be determined by obtaining the count of the sold elemental root node 346. The count in the sold elemental root node 346 is nine.
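The count retrievals described above may be checked against a hedged reconstruction of the datastore's end product counts. Only K paths 262 and 278 are fully described in the text; the remaining field values below are assumptions chosen so that the stated totals (fifteen records, a ‘sold’ count of nine) come out right.

```python
# Hedged reconstruction: one entry per K path with its end product count.
# Values for paths 290, 310, and 328 are invented to match stated totals.
k_paths = [
    ("Bill", "Tuesday", "PA", "sold",  6),   # K path 262
    ("Bill", "Monday",  "NJ", "sold",  1),   # K path 278
    ("Bill", "Monday",  "NJ", "trial", 3),   # K path 290 (values assumed)
    ("Tom",  "Monday",  "PA", "sold",  2),   # K path 310 (values assumed)
    ("Tom",  "Monday",  "PA", "trial", 3),   # K path 328 (values assumed)
]

# Count of the end of thought node 350: sum of all end product node counts.
end_of_thought_count = sum(count for *_, count in k_paths)

# Count of the 'sold' elemental root node 346: counts of paths containing 'sold'.
sold_root_count = sum(count for *fields, count in k_paths
                      if "sold" in fields)
```

Summing every end product count recovers the fifteen transactions, and restricting the sum to paths containing ‘sold’ recovers the count of nine held by the sold elemental root node.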
KStore User Interface
Refer to
In the following discussion of the KStore Analytics, the user may start from the main window 710.
Accounting/Mathematical Functional Category
Many analytics provide basic math functions against the data, for instance the summing of columns. This functional category of analytics may include the analytics “Sum Column,” “Distinct Count,” and “Data Aggregation.” Each is discussed below.
Sum Column
The “Sum Column” analytic may return the sum of numeric values in a data set. Optionally, constraints may be added to reduce the data set to specific records to sum. For example, the Sum Column analytic may calculate how many sofas Tom sold, or, if the data set includes sales amounts, the analytic may calculate the total sales amount for a specific salesperson, such as Bill. The nodes on the asResult list of the Bill elemental root node (not shown) may be followed to the Bill subcomponent node 252 to determine the set of K paths which include Bill, paths 262, 278, 290. Traversing to the end product nodes 264, 280, 292 of Bill's K paths 262, 278, 290, a determination can be made whether any of these K paths also include the value “sold”. A determination is therefore made that K paths 262, 278 include the value “sold”. The corresponding end product nodes 264, 280 have counts 6 and 1, respectively. Additionally, Bill's K paths 262, 278 also include the values 100 and 103 for the Amount field, respectively. Thus, the “Sum Column” analytic for the amount returns the sum of (100×6)+(103×1) or 703.
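The Sum Column walk above may be sketched as follows. The per-path layout is a reconstruction; path 290's values are not given in the text and are assumptions (a trial carries no sale amount here).

```python
# Bill's K paths: (day, state, status, amount, end product count).
# Paths 262 and 278 follow the text; path 290's values are assumed.
bill_paths = [
    ("Tuesday", "PA", "sold",  100, 6),   # K path 262 / end node 264
    ("Monday",  "NJ", "sold",  103, 1),   # K path 278 / end node 280
    ("Monday",  "NJ", "trial",   0, 3),   # K path 290 (no amount assumed)
]

# Keep the paths that also include "sold" and weight each Amount by the
# end product count of its path.
total = sum(amount * count
            for _day, _state, status, amount, count in bill_paths
            if status == "sold")
```

The weighted sum reproduces the figure in the text: (100×6)+(103×1) = 703.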
Refer to
Distinct Count
The “Distinct Count” analytic returns the number of distinct values in a given data set. With Distinct Count, duplicate values are not counted. For example, for the category or focus field “SalesPerson” in a given exemplary data set, there are only two values “Bill” and “Tom”. While there may be hundreds of occurrences of “Bill” and “Tom,” duplicates are not counted; only two distinct values for the focus “SalesPerson” are returned.
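The Distinct Count behavior may be sketched as follows; the record layout is an assumption for illustration.

```python
# Invented records: many occurrences, but only two distinct SalesPerson values.
records = [
    {"SalesPerson": "Bill", "Status": "sold"},
    {"SalesPerson": "Bill", "Status": "trial"},
    {"SalesPerson": "Tom",  "Status": "sold"},
    {"SalesPerson": "Tom",  "Status": "trial"},
]

def distinct_count(records, focus):
    """Count distinct values of the focus field; duplicates count once."""
    return len({r[focus] for r in records})
```

For the focus field “SalesPerson”, duplicates of “Bill” and “Tom” are not counted, and the analytic returns 2.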
Refer to
Data Aggregation
Data aggregation is any process in which information is gathered and expressed in a summary (or aggregated) form for purposes such as statistical analysis. For example, daily sales data may be aggregated so as to compute monthly or annual total amounts. The KStore Data Aggregation analytic finds co-existence of items in a record and also performs numeric calculations on data as identified in user-defined queries. In one preferred embodiment, it performs a summation calculation. In alternate preferred embodiments of the invention it may perform calculations such as averaging, distinct count, distinct count percentage, distinct count ratio, record count, record count percentage, record count ratio, among others. The structure and methods of the KStore Data Aggregation analytic have been described in patent application Serial No. (TN406), entitled, “Data Aggregation User Interface and Analytic Adapted for a KStore.”
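The monthly-totals aggregation mentioned above may be sketched as follows; the dates and amounts are invented for illustration.

```python
from collections import defaultdict

# Invented daily sales figures, keyed by ISO date.
daily = [("2005-01-03", 100), ("2005-01-17", 103), ("2005-02-02", 250)]

# Aggregate daily amounts into monthly totals (key by the YYYY-MM prefix).
monthly = defaultdict(int)
for date, amount in daily:
    monthly[date[:7]] += amount
```

Each daily amount is folded into the total for its month, expressing the data in summary form for subsequent statistical analysis.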
It will be understood by those skilled in the art that any number of additional analytics in the Accounting/Mathematical Functional Category can be defined by a user in keeping with the spirit and scope of the invention. For example, many such analytics are set forth in the Appendix. A person of ordinary skill in the art can determine the operations performed by other analytics in the Accounting/Mathematical Functional Category, whether or not they are listed in the Appendix. The skilled artisan can then write programs to implement the analytics according to the specifications of KStore technology in the same manner as such programs can be written according to the specifications of other types of database technologies.
Statistical Functional Category
Analytics that perform statistical calculations fall into this category. This functional category includes the analytic “Single Variable Prediction.”
Single Variable Prediction
The Single Variable Prediction analytic returns the probability of a focus variable. Any one of the variables in the data set may be designated as the focus variable. The probability of the focus variable is equal to the number of records containing the focus variable over the total number of records. The scope of the prediction may be optionally limited by constraints, which are typically one or more values that determine which records will be isolated for analysis. In this case, the probability of the focus variable is equal to the number of records containing the focus variable over the total number of records within the set of constrained records.
Using the Table of data records above, upon application of the KStore Engine to the data, the KStore would have learned that there are 9 occurrences of the variable ‘sold’ in the 15 total records of the Table. Therefore, selecting ‘sold’ as the focus variable, the probability of it occurring in all the records is 9/15 or 60%. If the user selects ‘Bill’ as the constraint variable, then only the records containing ‘Bill’ are considered. Upon application of the KStore Engine to the data, the KStore would have learned that there are 7 occurrences of ‘sold’ in the total of 10 occurrences of ‘Bill.’ Therefore, the probability of the focus variable ‘sold,’ constrained by the variable ‘Bill’ is 7/10 or 70%. The data set can be constrained by more than one variable. Taking the data set above, in the context of ‘Bill’ and ‘Tuesday’ the probability of ‘sold’ is 100%. Some examples of the uses of this type of analytic are finding the probability of a single variable, or, in trend analysis using a series of single variable predictions using time as the constraint.
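The worked figures above may be checked with a short sketch. The fifteen records are reconstructed to match the counts stated in the text; since the Table itself is not reproduced here, field values not quoted in the text are assumptions.

```python
# Reconstructed record set consistent with the stated counts:
# 15 records, 9 'sold', 10 containing 'Bill' of which 7 are 'sold'.
records = (
    [("Bill", "Tuesday", "sold")]  * 6 +
    [("Bill", "Monday",  "sold")]  * 1 +
    [("Bill", "Monday",  "trial")] * 3 +
    [("Tom",  "Monday",  "sold")]  * 2 +   # Tom's days are assumed
    [("Tom",  "Monday",  "trial")] * 3
)

def predict(records, focus, constraints=()):
    """P(focus | constraints): share of constrained records containing focus."""
    pool = [r for r in records if all(c in r for c in constraints)]
    hits = [r for r in pool if focus in r]
    return len(hits) / len(pool)
```

Unconstrained, `predict(records, "sold")` gives 9/15 = 0.6; constrained by ‘Bill’ it gives 7/10 = 0.7; constrained by ‘Bill’ and ‘Tuesday’ it gives 1.0, matching the text.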
Refer to
Refer to
It will be understood by those skilled in the art that any number of additional analytics in the Statistical Functional Category can be defined by a user in keeping with the spirit and scope of the invention. For example, many such analytics are set forth in the Appendix. A person of ordinary skill in the art can determine the operations performed by other analytics in the Statistical Functional Category, whether or not they are listed in the Appendix. The skilled artisan can then write programs to implement the analytics according to the specifications of KStore technology in the same manner as such programs can be written according to the specifications of other types of database technologies.
Classification Functional Category
This functional category includes the analytics “Contexted Classification,” “Bayes Classification,” and “Dynamic Decision Tree,” each of which are explained below. Classification is a form of data analysis that can be used to extract models describing important data classes used for making business decisions. For example, a classification analytic may be used to categorize bank loan applications as either safe or risky.
Contexted Classification
The Contexted Classification analytic returns the classification of a sample X within a context. The data set is constrained by the sample variables so that only the records containing all the variables in the sample are considered, and the highest probability variable of the classification field is chosen. This analytic will return no value if there are no instances of the specified context and therefore has limited use when a decision is required. The variables are selected in a manner similar to the Single Variable Prediction analytic. Using the example record set above, if the sample X were ‘Bill’+‘Monday,’ there would be 4 records in the set. The probability of ‘sold’ would be ¼ and the probability of ‘trial’ would be ¾. Therefore, the classification of the sample X would be ‘trial.’ This type of analytic can be used for such queries as credit risk analysis, churn analysis and customer retention.
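The Contexted Classification steps above may be sketched as follows, reusing a reconstruction of the record set consistent with the counts stated in the text (field values not quoted in the text are assumptions).

```python
# Reconstructed record set matching the stated counts.
records = (
    [("Bill", "Tuesday", "sold")]  * 6 +
    [("Bill", "Monday",  "sold")]  * 1 +
    [("Bill", "Monday",  "trial")] * 3 +
    [("Tom",  "Monday",  "sold")]  * 2 +   # Tom's days are assumed
    [("Tom",  "Monday",  "trial")] * 3
)

def contexted_classification(records, sample, class_values):
    """Constrain to records containing every sample variable, then pick
    the class value with the highest probability in that context."""
    pool = [r for r in records if all(v in r for v in sample)]
    if not pool:
        return None  # no instances of the context: no classification
    return max(class_values,
               key=lambda c: sum(1 for r in pool if c in r))
```

For the sample ‘Bill’+‘Monday’ the constrained set has 4 records (1 ‘sold’, 3 ‘trial’), so the classification returned is ‘trial’; a context with no instances returns no value.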
Refer to
Refer now to
Bayes Classification
Bayes classification is known to come in two probability models: naïve and full. This KStore analytic uses the Naïve Bayes probability model. Naïve Bayes is a technique for estimating the probabilities of individual feature values, given a class, from data, and then using those probabilities to classify new records. A Naïve Bayes classifier is a simple probabilistic classifier. Naïve Bayes classifiers are based on probability models that incorporate strong independence assumptions, which often have no bearing in reality and are hence (deliberately) naïve. The probability model is derived using Bayes' Theorem (credited to Thomas Bayes). In spite of their naïve design and apparently over-simplified assumptions, Naïve Bayes classifiers often work well in many complex real-world situations, such as diagnosis and classification tasks.
The Naïve Bayes Classification analytic returns the classification of a sample X using Bayes theorem. For example, if the user wanted to classify the sample X (Tom, Tuesday) using the class variables as shown in column 4 of the sample data (sold and trial), the user would select the X variables and the class. Upon application of the KStore engine to the data, the KStore would have learned the number of occurrences of each variable and the relation of the variable to other variables. The analytic performs preliminary calculations:
P(sold)= 9/15=0.6
P(trial)= 6/15=0.4
P(Tom|sold)= 2/9=0.22
P(Tom|trial)= 3/6=0.5
P(Tuesday|sold)= 6/9=0.67
P(Tuesday|trial)= 0/6=0 (a small number such as 0.001 is actually used)
The resulting P(X|sold)P(sold)=0.15×0.6=0.09 and P(X|trial)P(trial)=0.0005×0.4=0.0002. Therefore the Naïve Bayes classifier predicts X=“sold.” Given “Tom” and “Tuesday,” the probability of items “sold” is higher than it is for items on “trial.”
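The calculation above can be sketched as follows. This is an illustrative Naïve Bayes sketch, not the KStore implementation; the record set is reconstructed from the stated counts, and the 0.001 stand-in for a zero count follows the note in the text.

```python
from collections import Counter

# Reconstructed 15-record sample set, consistent with the counts the
# text gives (names and days only, other columns omitted).
RECORDS = (
    [("Bill", "Tuesday", "sold")] * 6
    + [("Bill", "Monday", "sold")]
    + [("Bill", "Monday", "trial")] * 3
    + [("Tom", "Monday", "sold")] * 2
    + [("Tom", "Monday", "trial")] * 3
)
EPS = 0.001  # small stand-in for zero counts, as noted in the text

def naive_bayes_classify(records, sample):
    """Return the class maximizing P(class) * prod(P(x_i | class))."""
    class_counts = Counter(r[-1] for r in records)
    scores = {}
    for cls, n_cls in class_counts.items():
        score = n_cls / len(records)                # prior P(class)
        for x in sample:
            n_x = sum(1 for r in records if r[-1] == cls and x in r)
            score *= (n_x / n_cls) if n_x else EPS  # likelihood P(x|class)
        scores[cls] = score
    return max(scores, key=scores.get), scores

cls, scores = naive_bayes_classify(RECORDS, ("Tom", "Tuesday"))
print(cls)  # sold
```

Running this reproduces the preliminary calculations in the text: the score for ‘sold’ is 0.6 × 2/9 × 6/9 ≈ 0.09, and the score for ‘trial’ is 0.4 × 0.5 × 0.001 = 0.0002.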
Refer to
Refer now to
Dynamic Decision Tree
The Dynamic Decision Tree analytic creates a hierarchical tree representation of a given data set that may be used to classify a sample X. A tree consists of nodes and branches starting from a single root node. Nodes of the tree represent decisions that may be made in the classification of the sample. The goal is to be able to make a classification for a sample using the fewest number of decisions or, in other words, by traversing the fewest number of nodes. Following each decision node, the data set is partitioned into smaller and smaller subsets until the sample has been classified. The analytic creates a decision tree by performing an analysis on the remaining categories or attributes at each node of the tree; depending on the results of the analysis, another set of branches and nodes is created. This process is followed until each tree path ends with a value of the desired classifier category. In this manner, a prediction (class assignment) may be made for a particular sample. Refer to
A focus or classification variable is selected, in this case ‘sold’. At each node, the decision of which category variables to use for the branches is based on which variable contains the greatest number of the focus variable. Different decision trees may use different criteria for determining which categories to choose at each node level. Initially, the analytic reviews all categories over all the records. The records containing ‘Bill’ also contain the largest number of ‘sold’ (7 of the 10 ‘Bill’ records also contain ‘sold’). So the category or column containing ‘Bill’ and ‘Tom’ is used to create the first branches. In the context (the set) of the ‘Bill’ records, all 6 of the ‘Tuesday’ records also contain ‘sold’, so the column containing ‘Tuesday’ and ‘Monday’ is used to create the next branches under ‘Bill’. The branching is complete when all the focus variables are accounted for. In the context of ‘Tom’, the column containing ‘103’ and ‘100’ is used to create the next branch. The column that contains ‘PA’ and ‘NJ’ could also have been used, as the data distribution happens to be the same as for ‘103’ and ‘100’. A user may want to classify the sample X (Bill, Tuesday) using the class variables in column 4 (sold and trial). Classification can either be done visually by the user with the aid of the analytic GUI or presented as a response by the analytic itself. In this case, X has a probability for ‘sold’ of 100%. This type of analytic could be used for performing such queries as credit risk analysis, churn analysis, customer retention or advanced data exploration.
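The branching rule described above (at each node, choose the column whose single value co-occurs most often with the focus variable) can be sketched in Python. The column names and record encoding here are illustrative reconstructions from the stated counts, not the patent's data structures.

```python
from collections import Counter

# Reconstructed sample records (name, day, status columns only).
RECORDS = (
    [{"name": "Bill", "day": "Tuesday", "status": "sold"}] * 6
    + [{"name": "Bill", "day": "Monday", "status": "sold"}]
    + [{"name": "Bill", "day": "Monday", "status": "trial"}] * 3
    + [{"name": "Tom", "day": "Monday", "status": "sold"}] * 2
    + [{"name": "Tom", "day": "Monday", "status": "trial"}] * 3
)

def best_branch_column(records, columns, focus_field, focus_value):
    """Pick the column whose most frequent value co-occurs with the
    focus variable most often -- the branching rule in the text."""
    def top_count(col):
        counts = Counter(r[col] for r in records
                         if r[focus_field] == focus_value)
        return counts.most_common(1)[0][1] if counts else 0
    return max(columns, key=top_count)

# Root: 'Bill' co-occurs with 'sold' 7 times, more than any day value.
print(best_branch_column(RECORDS, ["name", "day"], "status", "sold"))
# name
```

Repeating the same selection on each constrained subset (here, on the ‘Bill’ records and then the ‘Tom’ records) grows the branches exactly as the walkthrough above describes.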
Refer to
Refer to
Each node represents the occurrences of “Bill” and “Tom” in the constrained data up to that point, and selecting that node changes the values in the “Results” box.
It will be understood by those skilled in the art that any number of additional analytics in the Classification Functional Category can be defined by a user in keeping with the spirit and scope of the invention. For example, many such analytics are set forth in the Appendix. A person of ordinary skill in the art can determine the operations performed by other analytics in the Classification Functional Category, whether or not they are listed in the Appendix. The skilled artisan can then write programs to implement the analytics according to the specifications of KStore technology in the same manner as such programs can be written according to the specifications of other types of database technologies.
Relationship Functional Category
This category may be used to discover relationships among the data. This functional category may include the analytics “Associated Rules” and “Market Basket.”
Associated Rules
The Associated Rules analytic searches for interesting relationships among items in a given data set and returns a list of variables, and combinations of variables, with their probability of co-occurring with one or more focus variables. In practical use, association rules describe events that tend to occur together. The variables are selected in a manner similar to the Single Variable Prediction analytic. This type of analytic could be used for queries such as advanced data exploration.
Using the sample data set, if the focus variable is “sold,” the analytic would use the information in KStore to perform calculations such as the following:
Refer to
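The co-occurrence calculations just described can be sketched as below. This is an illustrative sketch under the same reconstructed record set used earlier; the function name and the choice to stop at variable pairs are assumptions for brevity.

```python
from itertools import combinations

# Reconstructed sample set (names and days only, other columns omitted).
RECORDS = (
    [("Bill", "Tuesday", "sold")] * 6
    + [("Bill", "Monday", "sold")]
    + [("Bill", "Monday", "trial")] * 3
    + [("Tom", "Monday", "sold")] * 2
    + [("Tom", "Monday", "trial")] * 3
)

def associated_rules(records, focus):
    """For every variable and variable pair, report the probability of
    co-occurring with the focus variable."""
    rules = {}
    variables = sorted({v for r in records for v in r[:-1]})
    for size in (1, 2):
        for combo in combinations(variables, size):
            matching = [r for r in records if all(v in r for v in combo)]
            if matching:
                hits = sum(1 for r in matching if focus in r)
                rules[combo] = hits / len(matching)
    return rules

rules = associated_rules(RECORDS, "sold")
print(rules[("Bill",)])            # 0.7 (7 of the 10 'Bill' records)
print(rules[("Bill", "Tuesday")])  # 1.0
```

Combinations that never occur together (such as ‘Bill’ with ‘Tom’) are simply absent from the result.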
Market Basket
Market Basket Analysis may be used to determine which products sell together. In data mining, Market Basket Analysis is an algorithm that examines a list in order to determine the probability with which items within the list occur together. It takes its name from the idea of a person in a supermarket throwing all of their items into a shopping cart (a “market basket”). The results may be particularly useful to any company that sells products, whether in a store, through a catalog, or directly to the customer. For example, market studies have shown that people who go into a convenience store to purchase one item, such as diapers, tend to also purchase an unrelated item, such as beer.
The KStore Market Basket analytic searches for interesting relationships among items in a given data set and returns a list of variables and combinations of variables and their probability of co-occurring with a focus variable.
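A minimal sketch of the basket co-occurrence idea follows. The transaction list, item names, and function name are hypothetical illustrations, not the KStore analytic itself.

```python
from collections import Counter

# Hypothetical transactions: the set of items in each "market basket".
BASKETS = [
    {"diapers", "beer"},
    {"diapers", "beer", "chips"},
    {"diapers", "milk"},
    {"beer", "chips"},
    {"milk", "bread"},
]

def market_basket(baskets, focus):
    """P(item appears in a basket | the focus item is in the basket)."""
    with_focus = [b for b in baskets if focus in b]
    counts = Counter(i for b in with_focus for i in b if i != focus)
    return {item: n / len(with_focus) for item, n in counts.items()}

# Beer appears in 2 of the 3 baskets that contain diapers.
print(market_basket(BASKETS, "diapers"))
```

Ranking the returned probabilities surfaces the “diapers and beer” style of relationship described above.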
Refer to
Refer to
It will be understood by those skilled in the art that any number of additional analytics in the Relationship Functional Category can be defined by a user in keeping with the spirit and scope of the invention. For example, many such analytics are set forth in the Appendix. A person of ordinary skill in the art can determine the operations performed by other analytics in the Relationship Functional Category, whether or not they are listed in the Appendix. The skilled artisan can then write programs to implement the analytics according to the specifications of KStore technology in the same manner as such programs can be written according to the specifications of other types of database technologies.
Visualization Functional Category
This functional category may include the analytics “Chart Generator” and “Field Chart.” The structure and methods of KStore Chart Generator and Field Chart have both been described in patent application U.S. Ser. No. 11/014,494 filed Dec. 16, 2004.
Chart Generator
KStore Chart Generator is a general method for providing a display of data, such as charts and graphs, from an interlocking trees datastore in a graphical display system having a graphic display device. The KStore Chart Generator analytic graphs the counts of the fields and values selected.
Field Chart
The KStore Field Chart analytic graphs the occurrences of the categories selected.
It will be understood by those skilled in the art that any number of additional analytics in the Visualization Functional Category can be defined by a user in keeping with the spirit and scope of the invention. For example, many such analytics are set forth in the Appendix. A person of ordinary skill in the art can determine the operations performed by other analytics in the Visualization Functional Category, whether or not they are listed in the Appendix. The skilled artisan can then write programs to implement the analytics according to the specifications of KStore technology in the same manner as such programs can be written according to the specifications of other types of database technologies.
Meta-Data Functional Category
This functional category includes the analytic “Constraint Manager.”
Constraint Manager
KStore Constraint Manager enables the user to see associations or relationships that are not obvious in the raw data. Through the use of user-defined “constraints” (a field value or a field name/field value pair that limits a data set to only those records containing it) and “field categories” (a constraint set having a user-defined logical relation between its members), the KStore Constraint Manager analytic is able to associate information in an interlocking trees data store.
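The constraint idea defined above can be sketched as follows. This is a minimal illustration under assumed field names; it models only the definitions given in the text (a field name/value pair limiting a record set, and a logical relation combining such pairs), not the Constraint Manager itself.

```python
# Illustrative records with hypothetical field names.
RECORDS = [
    {"name": "Bill", "day": "Monday", "status": "trial"},
    {"name": "Bill", "day": "Tuesday", "status": "sold"},
    {"name": "Tom", "day": "Monday", "status": "sold"},
]

def apply_constraints(records, constraints, relation="AND"):
    """Keep records matching all (AND) or any (OR) field/value pairs --
    the 'field category' combining constraints by a logical relation."""
    combine = all if relation == "AND" else any
    return [r for r in records
            if combine(r.get(f) == v for f, v in constraints)]

print(apply_constraints(RECORDS, [("name", "Bill"), ("status", "sold")]))
# [{'name': 'Bill', 'day': 'Tuesday', 'status': 'sold'}]
```

Switching the relation to "OR" widens the set instead of narrowing it, which is the distinction the field-category definition captures.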
It will be understood by those skilled in the art that any number of additional analytics in the Meta-Data Functional Category can be defined by a user in keeping with the spirit and scope of the invention. For example, many such analytics are set forth in the Appendix. A person of ordinary skill in the art can determine the operations performed by other analytics in the Meta-Data Functional Category, whether or not they are listed in the Appendix. The skilled artisan can then write programs to implement the analytics according to the specifications of KStore technology in the same manner as such programs can be written according to the specifications of other types of database technologies.
KStore Utilities
Besides the functional analytics discussed above, the KStore Data Analyzer provides access to various utilities, some of which may be used to load, save and restore, simulate data, and develop KStore-related GUI applications. Each of these is discussed briefly below; all are the subject of co-pending patent applications.
Save and Restore
“Save” and “Restore” refer to the structure and methods of saving an interlocking trees data store from memory to permanent storage and of restoring an interlocking trees data store from permanent storage to memory.
To use this feature, the user may select the “Tools” tab 717 from the KStore Administration main window 710
“Save” and “Restore” have been described in patent application U.S. Ser. No. 10/958,830 filed Oct. 5, 2004 entitled, “Saving and restoring an interlocking trees datastore.”
Data Simulation and Load
“Data Simulation” is a method for generating simulated data that randomly generates instances of data sequences (records). The simulator can be directed to generate one or multiple threads, to test processor usage or to allow for the simulation of complicated data sets such as streaming data from multiple cash registers or sales people. It also allows for the simulation of data sets that include data in different formats from different sources, such as sales data and inventory data. “Load” refers to a method for loading data into the K engine.
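The record-generation idea can be sketched as below. This is a hypothetical single-threaded illustration, not the patented simulator; the field definitions and function name are assumptions.

```python
import random

# Hypothetical field definitions; the real simulator's formats differ.
FIELDS = {
    "salesperson": ["Bill", "Tom"],
    "day": ["Monday", "Tuesday"],
    "status": ["sold", "trial"],
}

def simulate_records(fields, n, seed=None):
    """Randomly generate n record instances from per-field value lists."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    return [{f: rng.choice(vals) for f, vals in fields.items()}
            for _ in range(n)]

for rec in simulate_records(FIELDS, 3, seed=1):
    print(rec)
```

Running several such generators concurrently, each with its own field definitions, would approximate the multi-stream behavior (multiple cash registers or sales people) the text describes.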
To use this feature, the user may select the “Tools” tab 717 from the KStore Administration main window 710
To use “Load,” the user may select the “Data Source” tab 716 from the KStore Administration main window 710 from
A method for Data Simulation has been described in patent application U.S. Serial No. ______, filed on Apr. 13, 2005 entitled, “Multiple stream data simulation adapted for a KStore,” owned by the assignee of the present application.
Application Designer
The KStore Application Designer can be used to design and develop GUI applications that incorporate and associate the KStore analytics with the user's live data. In a single session, the user can design and test a KStore application, using live production data that has been loaded into KStore. Because of the unique data structure of KStore, no data corruption can occur. The user does not have to wait for runtime to see if the application worked as designed. Because the user is using live data, it is instantly obvious (as the application is built) if the analytics are working with the data as designed and the GUI design shows the data properly.
The Application Designer also provides a method and system for rapidly developing applications without having to understand how the code behind each KStore analytic works. Using simple drag-and-drop technology, the programmer can build applications that use the KStore analytics and other KStore tools that enable the programmer to build and define data constraints. The programmer simply needs to understand what each KStore analytic is pre-programmed to accomplish when it is associated with a field or group of fields; there is no need to understand the code behind the analytics.
To use this feature, the user may select the “Tools” tab 717 from the KStore Administration main window 710
KStore Application Designer has been described in patent application U.S. Ser. No. 11/150,063 filed Jun. 10, 2005, entitled, “KStore Application Designer.”
Those skilled in the art will appreciate that any number of such analytics can be conceived and implemented on various types of known data manipulation technologies. Furthermore, it will be understood that any analytic that can be conceived and implemented on known and future data manipulation technologies can be implemented on an interlocking trees datastore as well. In order to implement such analytics the skilled artisan can use the examples shown herein to illustrate the manner in which any other defined analytics can be implemented within interlocking trees datastore technology.
Thus, the number of different analytics that can be performed within interlocking trees datastores is limited only by the number of analytics that a user can conceive and implement. Just as the skilled artisan can develop and implement methods for performing desired analytics in known data structures according to the specifications of the data structures used, the skilled artisan can use the techniques for developing analytics demonstrated herein and any other techniques known to the skilled artisan to provide analytics.
Conditional Logit
Least Squares
Moore-Penrose Matrix
Correlation
Least Squares
Multiple Regression
Correlation Coefficient
Least Squares
Nonlinear Least
Correlation
Least Squares
Normal Equation
Correlation Index
Linear Regression
Probability Paper
Correlation Ratio
Loess Local
Pseudoinverse
Figure-of-Merit Function
Logistic Regression
Regression
Gasser-Müller Technique
Matrix 1-Inverse
Regression Coefficient
Least Squares Fitting
Merit Function
Residual
Least Squares Fitting-. . .
Minimax Polynomial
Statistical Correlation
Absolute Frequency
Fisher Information
Quantile
Adjacent Value
Frequency Distribution
Quartile
Almost Surely
Frequency Polygon
Quartile Deviation
Batch
Full Width at Half
Quartile Variation
Benford's Law
Gauss's Inequality
Record Setting
Bimodal
H-Spread
Regression to the
Bimodal Distribution
High-Water Mark
Relative Cumulative
Bin
Hinge
Relative Frequency
Bowley Skewness
Indicator
Reversion to the Mean
Clarity
Interquartile Range
Running Maximum
Class
Mean Absolute Deviation
Signed Deviation
Class Boundaries
Mesokurtic
Statistical Depth
Class Interval
Midrange
Statistical Dispersion
Class Limits
Mode
Statistical Median
Class Mark
Multimodal
Statistical Range
Cumulative Frequency
Outlier
Step
Factor Level
Percentile
Trimodal
Far Out
Percentile Rank
I-Statistic
Fence
Plotting Position
Unimodal
Fisher Information
Population
Zipf's Law
Source 6
Quality Control
Absolute Error
Accuracy
Arbitrary Precision
Confidence Interval
Confidence Limits
Deviation
Equiripple
Error
Error Propagation
Estimate
Fixed Precision
Margin of Error
Minimax Approximation
Outlier
Percentage Error
Precision
Relative Error
Significance Arithmetic
Significant Digits
Source 7
Estimator
S
Biased Estimator
Estimator
Estimator Bias
Expectation Value
Fisher's Estimator Ine . . .
h-Statistic
k-Statistic
L-Estimate
M-Estimate
Maximum Likelihood
Maximum Likelihood Est . . .
Maximum Likelihood Method
Point Estimator
Polyache
Polykay
R-Estimate
Robust Estimator
Sample Central Moment
Sample Mean
Sample Variance
Unbiased Estimator
Wald's Equation
Source 8
Markov Processes
Chapman-Kolmogorov Equ . . .
Markoff Chain
Markov Chain
Markov Process
Markov Sequence
Smith's Markov Process . . .
Stochastic Matrix
Source 9
Moments
Absolute Deviation
Absolute Moment
Average Absolute Devia . . .
Berry-Esséen Theorem
Bessel's Correction
Bessel's Formulas
Central Moment
Characteristic Function
Charlier's Check
Covariance
Cumulant
Cumulant-Generating Fu . . .
Excess
Factorial Moment
Gamma Statistic
h-Statistic
Heteroscedastic
Homoscedastic
k-Statistic
Kendall Operator
Kurtosis
L-Moment
Leptokurtic
Mean
Mean Deviation
Mesokurtic
Moment
Moment-Generating Func . . .
Moment Problem
Moment Sequence
Momental Skewness
Pearson Mode Skewness
Pearson's Skewness Coe . . .
Polyache
Polykay
Population Mean
Population Variance
Raw Moment
Relative Deviation
Robbin's Inequality
Root-Mean-Square
Sample Central Moment
Sample Mean
Sample Raw Moment
Sample Variance
Sample Variance Comput . . .
Sample Variance Distri . . .
Sheppard's Correction
Skewness
Standard Deviation
Standard Deviation Dis . . .
Standard Error
Standard Unit
Standardized Moment
Variance
Variation Coefficient
Source 10
Multivariate
Statistics
Bagging
Bivariate
Bivariate Normal Distr . . .
Boosting
Cluster Analysis
Discriminant Analysis
FindClusters
Kendall Operator
Multinormal Distribution
Multivariate
Multivariate Normal Di . . .
Principal Component An . . .
Trivariate Normal Dist . . .
Univariate
Wishart Distribution
Source 11
Functions
Absolute Value
Absolutely Monotonic F . . .
Additive Function
Almost Periodic Function
Antiperiodic Function
Arithmetic Function
Bilinear Function
Borsuk-Ulam Theorem
Closed Map
Codomain
Complete Biorthogonal . . .
Complete Convex Function
Complete Orthogonal Sy . . .
Complete Set of Functions
Completely Monotonic F . . .
Completely Multiplicat . . .
Complex Map
Complex Modulus
Complex Variable
Constant Map
Decreasing Function
Domain
Doubly Periodic Function
Elementary Function
Euler's Homogeneous Fun . . .
Even Function
Exponentially Decreasi . . .
Exponentially Increasi . . .
Function
Function Centroid
Function Convex Hull
Function Space
Function Value
Fundamental Theorem of . . .
Gram-Schmidt Orthonorm . . .
Hamburger Moment Problem
Homogeneous Function
Image
Implicit Function
Inverse Function
Inverse Function Theorem
Jensen's Theorem
Kepler's Equation
Lacunary Function
Least Period
Linear Function
Linearly Dependent Fun . . .
Liouville's Principle
Lipschitz Function
Logarithmically Concav . . .
Logarithmically Convex . . .
Logarithmically Decrea . . .
Logarithmically Increa . . .
Many-to-One
Map
Map Germ
Map Orbit
Masser-Gramain Constant
Möbius Periodic Function
Monotone Function
Multilinear
Multiple-Valued Function
Multiplicative Function
Multivalued Function
Multivariate Function
Natural Boundary
Natural Domain
Negative Part
Nested Function
Normal Function
Numerica Function
Odd Function
Operation
Orthogonal Functions
Orthonormal Functions
Oscillating Function
Oscillation
Particularly Well-Beha . . .
Plurisubharmonic Function
Positive Definite Func . . .
Positive Part
Pringsheim's Theorem
Range
Real Analytic Function
Real Function
Real Variable
Rectifiable Set
Reflection Relation
Regular Sequence
Riemann's Moduli Problem
Riemann's Moduli Space
Rodrigues Representation
Saltus
Scalar Function
Scalar-Valued Function
Schwartz Function
Schwartz Space
Schwartz's Inequality
Semianalytic
Sharkovsky's Theorem
Single-Valued Function
Singleton Function
Singly Periodic Function
Smooth Function
Special Function
Surjection
Symmetric Function
Totally Multiplicative . . .
Transcendental Equation
Transcendental Function
Triply Periodic Function
Unary Operation
Univalent Function
Univariate Function
Unknown
Value
Variable
Implicit Function Theorem
Increasing Function
Injection
Integer Function
Path Trace
Period
Periodic Function
Periodic Point
Weighting Function
Zero Map
Source 12
Web Sites
1—http://www.microstrategy.com/QuickTours/HTML/MSTR7/content7.htm
2—http://www.cas.lancs.ac.uk/glossary_v1.1/nonparam.html#nonparat
3—http://www.cas.lancs.ac.uk/glossary_v1.1/catdat.html#chigof
4—http://staff.washington.edu/bskiver/ratlab/stats-notes.html
5—http://mathworld.wolfram.com/StatisticalTest.html
6—http://mathworld.wolfram.com/topics/DescriptiveStatistics.html
7—http://mathworld.wolfram.com/topics/ErrorAnalysis.html
8—http://mathworld.wolfram.com/topics/Estimators.html
9—http://mathworld.wolfram.com/topics/MarkovProcesses.html
10—http://mathworld.wolfram.com/topics/Moments.html
11—http://mathworld.wolfram.com/topics/MultivariateStatistics.html
12—http://mathworld.wolfram.com/topics/Functions.html
13—http://mathworld.wolfram.com/topics/StatisticalPlots.html
14—http://www.itl.nist.gov/div898/handbook/eda/section3/bihistog.htm
15—http://www.halfbakery.com/idea/chernoff_20face_20stock_20screens
16—
Number | Date | Country
---|---|---
60625922 | Nov 2004 | US