The present invention relates generally to databases. More particularly, the present invention relates to establishing relationships among data held in separate databases.
It has been common in the computer industry, and more specifically in the management of data, to build relationships between the many types of data that a business and its trading partners encounter on a recurring basis. This is a common issue that a business in a growth phase must accommodate, especially when enhancing existing data management systems or incorporating new ones. Tasks associated with these efforts are often referred to as integration or conversion projects.
Examples of when a business might encounter an integration or conversion project include:
The integration or conversion project typically requires domain expertise of the source and target systems at the data field level. This expert knowledge helps determine the proper data field mappings required to exchange data correctly for integration or communication. These tasks are also referred to as mapping projects. Mapping projects can be described in detail, and the tasks then delegated to other teams that do not necessarily require specific expertise. In any case, these tasks are usually very time- and resource-intensive.
In the past few years several tools have been brought to market that facilitate these tasks, particularly with graphical user interfaces and simple conversion functions. But the data field mapping remains a manual process, and expert domain knowledge is still required to facilitate the process.
A need exists today to automate the development of field-level relationships established between two or more databases, tables, and files, using domain knowledge somewhat below the expert level. This automated data mapping must be faster than existing manual methods and must require fewer resources. There should also be a means for verifying the accuracy of the mappings, for overriding some of the mappings if necessary, and for adding relationships known as a result of expert domain knowledge of the data.
The present invention is designed to fulfill the above listed needs. The invention provides a tool that can build field level relationships between two or more disparate databases, tables, and files in an automated fashion, and do so without expert knowledge of the databases, tables, and files.
The foregoing and other objects and advantages will become more apparent when viewed in light of the accompanying drawings and following detailed description.
It is therefore a feature and advantage of the present invention to provide an intelligent engine that builds field level relationships between disparate databases, tables and files, allowing for a singular and functional view of these relationships.
In one embodiment of the invention a database merging apparatus includes a database pair generator that creates a database pair from a first database, a probe set generator that creates a database probe set from a second database and a comparator in communication with the database pair generator and the probe set generator. The comparator determines if the database probe set correlates to the database pair. An identifier is in communication with the comparator to identify a correlation between the database pair and the database probe set so that correlating data from the first database and the second database can be accessed if there is a correlation.
In another embodiment of the invention, a method for merging two or more databases includes the steps of generating one or more database pairs from a first database, generating a database probe set from a second database, and determining if the database pairs correlate to the database probe set. A correlation between the database pairs and the database probe set is identified so that correlating data from the first database and the second database can be accessed if there is a correlation.
In an alternate embodiment of the invention, a system for merging two or more databases includes a means for generating database pairs from a first database, a means for generating a database probe set from a second database and a means for determining if the database pairs correlate to the database probe set. A means for identifying a correlation between said database pairs and said database probe set is provided so that correlating data from the first database and the second database can be accessed if there is a correlation.
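The embodiments above share four cooperating parts: a pair generator, a probe set generator, a comparator, and an identifier. A minimal sketch of that structure follows; the class names, the dictionary-based schema and data model, and the 90% correlation threshold are illustrative assumptions, not details taken from the disclosure.

```python
class PairGenerator:
    """Enumerates (table, field) pairs from the first database's schema."""
    def __init__(self, schema):          # schema: {table: [field, ...]}
        self.schema = schema

    def pairs(self):
        return [(t, f) for t, fields in self.schema.items() for f in fields]


class ProbeSetGenerator:
    """Draws a probe set of known values from a field of the second database."""
    def __init__(self, data):            # data: {(table, field): [values]}
        self.data = data

    def probe_set(self, table, field, limit=2000):
        return self.data[(table, field)][:limit]


class Comparator:
    """Determines whether a probe set correlates to a candidate pair."""
    def __init__(self, data, threshold=0.9):
        self.data = data                 # field values of the first database
        self.threshold = threshold       # assumed configurable cut-off

    def correlates(self, pair, probe_set):
        if not probe_set:
            return False
        values = set(self.data[pair])
        hits = sum(1 for v in probe_set if v in values)
        return hits / len(probe_set) >= self.threshold


def identify(comparator, pairs, probe_set):
    """The identifier: report every pair that correlates with the probe set."""
    return [pair for pair in pairs if comparator.correlates(pair, probe_set)]
```

In this sketch, a correlation reported by `identify` is what would let correlating data from the two databases be accessed together.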
There has thus been outlined, rather broadly, the more important features of the invention in order that the detailed description thereof that follows may be better understood, and in order that the present contribution to the art may be better appreciated. There are, of course, additional features of the invention that will be described below and which will form the subject matter of the claims appended hereto.
In this respect, before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein, as well as the abstract, are for the purpose of description and should not be regarded as limiting.
As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for the designing of other structures, methods and systems for carrying out the several purposes of the present invention. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the present invention.
Databases, and other sources of data that can be accessed as if they were databases, can be described by four distinct characterizations:
These characterizations of a database can be extended to span multiple databases using the same analysis that was applied to a single database. And within one large database, there can be many tables partitioned into groups. These groups are not explicitly or implicitly linked to each other via referential integrity or naming. Instead, relationships must be known via expert domain knowledge, or must be uncovered by in-depth analysis of each table based on the type of information and on contents, using the naming of fields only as very broad hints.
In the context of this invention, a relationship exists where two fields within a database are determined to represent the same entity. A unique element is a field whose only relationship to other data fields is that it appears in the same database row as those other data elements.
The HAP performs an intra-database analysis to find the explicit and implicit relationships between fields. Explicit relationships are those declared by the database structure itself, by means of foreign indices, which list the links that exist between tables. This information is obtained directly from the database using metadata, or “data about data.”
The implicit relationships are determined by analyzing the types of fields and the field contents. The analysis of the data types determines the set of possible relationships, and produces a set of pairs to test for commonality. A pair is defined as a table and field combination.
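The type-driven step above can be sketched as a simple pairing over two schemas: only fields with compatible declared types become candidates for probing. The dictionary schema model and type names below are assumptions for illustration.

```python
def candidate_pairs(schema_a, schema_b):
    """Pair fields whose declared types are compatible.

    schema_*: {(table, field): type_name}. Type analysis prunes the
    search space before any data is probed: only same-typed fields
    are plausible relationship candidates.
    """
    pairs = []
    for (ta, fa), type_a in schema_a.items():
        for (tb, fb), type_b in schema_b.items():
            if type_a == type_b:
                pairs.append(((ta, fa), (tb, fb)))
    return pairs
```

A real implementation would likely also treat widening-compatible types (e.g. SMALLINT vs INTEGER) as matches; exact equality is used here only to keep the sketch short.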
Testing is done by probing the database with known values. Once a pair of fields has been identified, values are fetched from one table and used to probe corresponding fields in the other table. Adjustable heuristics are defined that determine whether a match is found. These heuristics can be processed in one of three strategies:
This heuristic operates as a confidence measure for the data being examined. So, for example, if one field out of twenty in a record matches, then this is a poor fit. However, if eighteen out of twenty match, then it is a good fit.
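The ratio behind the one-in-twenty versus eighteen-in-twenty example can be written directly; this is a sketch of the heuristic only, and any threshold applied to the ratio would be configurable.

```python
def match_ratio(probe_values, target_values):
    """Fraction of probe values found among the target field's values.

    A low ratio (e.g. 1/20) indicates a poor fit; a high ratio
    (e.g. 18/20) indicates a good fit.
    """
    if not probe_values:
        return 0.0
    target = set(target_values)          # set lookup keeps probing fast
    return sum(1 for v in probe_values if v in target) / len(probe_values)
```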
This three-tiered matching is essential. Consider the case where in one table there is a last name, a first name, and a middle initial. In a second table is a last name and a first name. In addition, assume that there are names in each table that do not appear in the other. The intersection of the two sets representing the two tables is the result of interest. The intersection is compared to each table as a percentage of the probe set, not of the total table size. If the matching percentage of the intersection set exceeds a pre-set and configurable value, the entire probe set is considered a match and a relationship is reported.
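The intersection test on the name example can be sketched as follows; the sample names and the 80% threshold are illustrative assumptions, the threshold standing in for the pre-set, configurable value mentioned above.

```python
def is_relationship(probe_set, other_field_values, threshold=0.8):
    """Report a relationship when the intersection, measured as a
    percentage of the probe set (not of the total table size),
    meets or exceeds the configurable threshold."""
    if not probe_set:
        return False
    intersection = set(probe_set) & set(other_field_values)
    return len(intersection) / len(set(probe_set)) >= threshold
```

Measuring against the probe set rather than the table keeps names that appear in only one table from drowning out the overlap of interest.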
Several criteria can be set across all three strategies, when appropriate. These include not checking “flag fields” (fields of type “BIT” and length one), not checking Binary Large OBject (BLOB) fields, and not checking fields that are all zeros or all nulls (zeros and nulls can represent uninitialized data). There is also a mechanism to limit the size of the sections of a database that are analyzed. Smaller sections are analyzed faster, but are less accurate. In one embodiment of the invention, a section size of around 2,000 can be used to obtain the desired results.
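The exclusion criteria and the section-size limit can be combined into one gate, sketched below; the field-descriptor dictionary is an assumed representation, not one specified in the disclosure.

```python
def probe_worthy(field, values, max_section=2000):
    """Apply the exclusion criteria: skip one-bit flag fields and
    BLOB fields, skip fields whose values are all zeros or NULLs
    (likely uninitialized data), and cap the size of the analyzed
    section. Returns the section to probe, or [] when the field
    should be skipped."""
    if field["type"] == "BIT" and field["length"] == 1:
        return []                        # flag field
    if field["type"] == "BLOB":
        return []                        # binary large object
    section = values[:max_section]       # smaller = faster, less accurate
    if all(v in (0, None) for v in section):
        return []                        # all zeros / NULLs
    return section
```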
Probe sets follow a retry-on-error model, defined as the scenario where more time is invested in the process when no match is found. Similar to exception processing within some computer programming languages, processing proceeds quickly under normal circumstances, but slows when errors (in this case, no matches) occur and additional time is devoted to discovering alternatives.
Probe sets are constructed in several ways. For small tables, the entire table can be used as a probe set. For larger tables, a subset of the data is used, and selection can follow differing strategies, including taking the first portion of a table, alternating record selection, or randomized record selection. The strategy for probe set selection is normally automated, and depends on field characteristics as well as data characteristics. For example, if the data for a particular field is sorted, the application will choose an alternating or randomized record selection strategy for the probe set.
A retry mechanism will automatically switch to a different selection strategy when a particular probe set fails. This automated reselection occurs on failure in order to eliminate false negative results.
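The selection strategies and the retry-on-failure behavior described above can be sketched together; the function names, the fixed random seed, and the rule for sorted data are illustrative assumptions.

```python
import random


def first_portion(values, n):
    return values[:n]


def alternating(values, n):
    return values[::2][:n]


def randomized(values, n, seed=0):
    rng = random.Random(seed)            # fixed seed for reproducibility
    return rng.sample(values, min(n, len(values)))


def probe_with_retry(values, n, test, sorted_data=False):
    """Try selection strategies in turn; a failed probe triggers a
    retry with the next strategy, to eliminate false negatives.
    When the field's data is sorted, the first-portion strategy is
    skipped in favor of alternating or randomized selection."""
    strategies = ([alternating, randomized] if sorted_data
                  else [first_portion, alternating, randomized])
    for strategy in strategies:
        probe = strategy(values, n)
        if test(probe):
            return strategy.__name__, probe
    return None, []                      # all strategies failed
```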
The selection criteria for probe sets can be specified at run time if the defaults are not preferred. Relaxing control over probe set selection criteria makes probes more accurate; selecting stricter control makes the application run faster.
HAP results in three sets of information:
The set of intra-database relationships, also referred to as inner relationships
The set of inter-database relationships, also referred to as outer relationships
A composite schema which represents all of the databases presented for analysis. Unique fields are represented in the schema, and can be logically represented by what remains of the schema when the inner and outer relationships are subtracted.
These three sets are what enable the navigation of the aggregation of databases provided to the Sypherlink Harvester for analysis.
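The “subtraction” that yields the unique fields of the composite schema can be sketched with sets of (table, field) names; all names below are illustrative.

```python
def unique_fields(all_fields, inner_relationships, outer_relationships):
    """Unique fields are what remains of the composite schema after
    every field participating in an inner (intra-database) or outer
    (inter-database) relationship is removed."""
    related = set()
    for pair in inner_relationships + outer_relationships:
        related.update(pair)             # both ends of each relationship
    return set(all_fields) - related
```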
It is not necessary to run the Harvester Analyzer (HAP) prior to each start of the run time component of Harvester; ideally, the HAP will be run only once, or run again if the organization of the source data sets changes in a material way.
In practice, it is anticipated that only a structural change to a source data set would constitute a material change, and then require another run of the HAP.
When an instance of the Harvester Runtime Component (HRC) is started, the three outputs from the HAP are loaded. As mentioned above, these outputs are the composite schema and the intra- and inter-database relationships. A connection to each physical data set is initiated, and a persistent cache is created using local ODBC connections. The HRC is then ready to process ODBC requests as specified by the ODBC application programming interface (API).
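The startup sequence just described can be sketched as follows. The JSON file format for the HAP outputs is an assumption, and `sqlite3` stands in for the ODBC connections to the physical data sets and the persistent cache; a real HRC would use an ODBC driver manager.

```python
import json
import sqlite3  # stand-in for ODBC connections; an assumption for this sketch


def start_runtime(schema_path, inner_path, outer_path, source_paths):
    """Load the three HAP outputs (composite schema, intra- and
    inter-database relationships), open a connection per physical
    data set, and create a local cache database."""
    with open(schema_path) as f:
        schema = json.load(f)
    with open(inner_path) as f:
        inner = json.load(f)
    with open(outer_path) as f:
        outer = json.load(f)
    connections = {name: sqlite3.connect(path)
                   for name, path in source_paths.items()}
    cache = sqlite3.connect(":memory:")  # in-memory stand-in for the cache
    return {"schema": schema, "inner": inner, "outer": outer,
            "connections": connections, "cache": cache}
```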
Within the HRC, where Structured Query Language (SQL) commands are to be processed for read operations, the application will
When SQL commands are to be processed for write operations, the application will
The many features and advantages of the invention are apparent from the detailed specification, and thus, it is intended by the appended claims to cover all such features and advantages of the invention which fall within the true spirit and scope of the invention. Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
This application claims priority to the provisional U.S. patent application entitled, System and Method for Making Multiple Databases Appear as a Single Database, filed May 31, 2002, having Ser. No. 60/384,101, the disclosure of which is incorporated herein by reference.
| Number | Date | Country |
|---|---|---|
| 20030225780 A1 | Dec 2003 | US |
| Number | Date | Country |
|---|---|---|
| 60384101 | May 2002 | US |