The data ingestion and conversion process is generally known as data mining. The creation of robust systems to handle this problem is the subject of much research, and has spawned many specialized languages (e.g., Perl) intended to make the process easier. Unfortunately, while there have been some advances, the truth of the matter is that none of these ‘mining’ languages really provides anything more than a string manipulation library embedded into the language syntax itself. In other words, such languages are nothing more than shorthand for the equivalent operations written as a series of calls to a powerful subroutine library. A prerequisite for any complex data processing application, specifically a system capable of processing and analyzing disparate data sources, is a system that can convert structured, semi-structured, and unstructured information sources into their equivalent representation in the target ontology, thereby unifying all sources and allowing cross-source analysis.
For example, in a current-generation data-extraction script, the extraction code works its way through the text from beginning to end, trying to recognize delimiting tokens, extracting any text found within the delimiters, and assigning it to the output data structure. When there is a one-to-one match between source data and target representation, this is a simple and effective strategy. As the gap between the two widens, however, such as by introducing multiple inconsistent sources, increasing the complexity of the source, nesting information in the source to multiple levels, cross-referencing arbitrarily to other items within the source, or distributing and interspersing the information necessary to determine an output item within a source, the situation rapidly becomes unmanageable by this technique and highly vulnerable to the slightest change in source format or target data model. This mismatch is at the heart of all problems involving the need for multiple different systems to intercommunicate meaningful information, and it makes conventional attempts to mine such information prohibitively expensive to create and maintain. Unfortunately for conventional mining techniques, much of the most valuable information that might be used to create truly intelligent systems comes from publishers of various types. Publishing houses make their money from the information they aggregate, and thus are not in the least bit interested in making such information available in a form that is susceptible to standard data mining techniques. Furthermore, most publishers deliberately introduce inconsistencies and errors into their data, both to detect intellectual property rights violations by others and to make automated extraction as difficult as possible. Each publisher, and indeed each title from any given publisher, uses different formats and an arrangement that is custom tailored to the needs of the particular publication. The result is that we are faced with a variety of source formats on CD-ROMs, databases, web sites, and other legacy systems that completely stymie standard techniques for acquisition and integration. Very few truly useful sources are available in a neatly tagged form such as XML, and thus relying on markup languages such as XML to aid in data extraction is a woefully inadequate approach in real-world situations.
One of the basic problems that makes the extraction process difficult is that the control-flow based program that is doing the extraction has no connection to the data itself (which is simply input) and must therefore invest huge amounts of effort extracting and keeping track of its ‘state’ in order to know what it should do with information at any given time. What is needed, then, is a system in which the content of the data itself actually determines the order of execution of statements in the mining language and automatically keeps track of the current state. In such a system, whenever an action was required of the extraction code, the data would ‘tell’ it to take that action, and all of the complexity would melt away. Assuming such a system were further tied to a target system ontology, the mining problem would become quite simple. Ideally, such a solution would tie the mining process to compiler theory, since that is the most powerful formalized framework available for mapping source textual content into defined actions and state in a rigorous and extensible manner. It would also be desirable to have an interpreted language that is tied to the target ontology (totally different from the source format), and for which the order of statement execution could be driven by source data content.
The system of this invention takes the data mining process to a whole new level of power and versatility by recognizing that, at the core of our past failings in this area, lies the fact that conventional control-flow based programming languages are simply not suited to the desired system, and must be replaced at the fundamental level with a more flexible approach to software system generation. There are two important characteristics of the present invention that help create this paradigm shift. The first is that, in the preferred embodiment, the system of the present invention includes a system ontology such that the types and fields of the ontology can be directly manipulated and assigned within the language without the need for explicit declarations. An ontology is an explicit formal specification of how to represent the objects, concepts and other entities that are assumed to exist in some area of interest and the relationships that hold among them. For example, to assign a value to a field called “notes.sourceNotes” of a type, the present invention would only require a statement of the form “notes.sourceNotes = value”. The second, and one of the most fundamental characteristics, is that the present invention gives up on the idea of a control-flow based programming language (i.e., one where the order of execution of statements is determined by the order of those statements within the program) in order to dramatically simplify the extraction of data from a source. In other words, the present invention represents a radical departure from all existing “control” notions in programming.
The present invention, hereinafter referred to as MitoMine, is a generic data extraction capability that produces a strongly-typed, ontology-defined collection referencing (and cross-referencing) all extracted records. The input to the mining process tends to be some form of text file delimited into a set of possibly dissimilar records. MitoMine contains parser routines and post-processing functions, known as ‘munchers’. The parser routines can be accessed either via a batch mining process or as part of a running server process connected to a live source. Munchers can be registered on a per-data-source basis in order to process the records produced, possibly writing them to an external database and/or a set of servers. The present invention embeds an interpreted ontology-based language within a compiler/interpreter (for the source format) such that the statements of the embedded language are executed as a result of the source compiler ‘recognizing’ a given construct within the source and extracting the corresponding source content. In this way, the execution of the statements in the embedded program will occur in a sequence that is dictated wholly by the source content. This system and method therefore make it possible to bulk-extract free-form data from such sources as CD-ROMs, the web, etc., and have the resultant structured data loaded into an ontology-based system.
Other improvements and extensions to this system will be defined herein.
The present invention builds upon these concepts and, in the preferred embodiment, uses a number of other key technologies and concepts. The patent applications listed below (which are expressly incorporated herein) disclose all the components necessary to build up a system capable of auto-generating all user interface, storage tables, and querying behaviors required in order to create a system directly from the specifications given in an ontology description language (ODL). These various building-block technologies have been previously described in the following patent applications:
1) Appendix 1—Memory Patent (page 55), now U.S. Pat. No. 7,103,749, issued on Sep. 5, 2006.
2) Appendix 2—Lexical Patent (page 68), now U.S. Pat. No. 7,328,430, issued on Feb. 5, 2008.
3) Appendix 3—Parser Patent (page 89), now U.S. Pat. No. 7,210,130, issued on Apr. 24, 2007.
4) Appendix 4—Types Patent (page 112), now U.S. Pat. No. 7,158,984, issued on Jan. 2, 2007.
5) Appendix 5—Collections Patent (page 140), now U.S. Pat. No. 7,308,449, issued on Dec. 11, 2007.
6) Appendix 6—Ontology Patent (page 199), now U.S. Pat. No. 7,240,330, issued on Jul. 3, 2007.
In the Parser Patent, a system was described that permits execution of the statements in the embedded program in a sequence that is dictated wholly by the source content, in that the ‘reverse polish’ operators within that system are executed as the source parse reaches an appropriate state and, as further described in that patent, these operators are passed a plug-in hint string when invoked. In the preferred embodiment, the plug-in hint string will be the source for the interpreted ontology-based language, and the plug-ins themselves will invoke an inner-level parser in order to execute these statements. The Ontology Patent introduced an ontology-based language, known as C*, that is an extension of the C language. This is the preferred ontology-based language for the present invention. We will refer to the embedded form of this language as C**, the extra ‘*’ symbol being intended to imply the additional level of indirection created by embedding the language within a source format interpreter. The output of a mining process will be a set of ontology-defined types (see Types Patent) within a flat data-model collection (see Memory Patent and Collections Patent) suitable for instantiation to persistent storage and subsequent query and access via the ontology (see Ontology Patent).
In the preferred embodiment, a MitoMine parser is defined using three basic types of information:
1) A named source-specific lexical analyzer specification
2) A named BNF specification for parsing the source
3) A set of predefined plug-in functions capable of interpreting the source information via C** statements.
The BNF format may be based upon any number of different BNF specifications. MitoMine provides the following additional built-in parser plug-ins, which greatly facilitate the process of extracting unstructured data into run-time type manager records:
<@1:1> and <@1:2>
These two plug-ins delimit the start and end of an arbitrary, possibly multi-lined string to be assigned to the field designated by the following call to <@1:5:fieldPath=$>. This is the method used to extract large arbitrary text fields. The token sequence for these plug-ins is always of the form <@1:1><1:String><@1:2>; that is, any text occurring after the appearance of the <@1:1> plug-in on the top of the parsing stack will be converted into a single string token (token #1), which will be assigned on the next <@1:5> plug-in. The arbitrary text will be terminated by the occurrence of any terminal in the language (defined in the .LEX specification) whose value is above 128. Thus the following snippet of BNF will cause the field ‘pubName’ to be assigned whatever text occurs between the tokens <PUBLICATION> and <VOLUME/ISSUE> in the input file:
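A representative form of such a snippet (the production name ‘pub_info’ is hypothetical; the plug-in sequence follows the form described above) would be:

pub_info ::= <PUBLICATION> <@1:1> <1:String> <@1:2> <@1:5:pubName=$> <VOLUME/ISSUE>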
In the preferred embodiment, when extracting these arbitrary text fields, all trailing and leading white space is removed from the string before assignment, and all occurrences of LINE_FEED are removed to yield a valid text string. The fact that tokens below 128 will not terminate the arbitrary text sequence is important in certain situations where a particular string is a terminal in the language and yet might also occur within such a text sequence, where it should not be considered to have any special significance. All such tokens can be assigned token numbers below 128 in the .LEX specification, thus ensuring that no confusion arises. The occurrence of another <@1:1> or a <@1:4> plug-in causes any previous <1:String> text accumulated to be discarded. A <@1:5> causes execution of its C** statements, which generally cause extracted information to be assigned to the specified field, and then clears the accumulated text. If a plug-in hint consisting of a decimal number follows the <@1:1>, as in <@1:1:4>, that number specifies the maximum number of lines of input that will be consumed by the plug-in (four in this example). This is a useful means of handling input where the line number or count is significant.
<@1:3>
In the preferred embodiment, the occurrence of this plug-in indicates that the extraction of a particular record, initiated by the <@1:4> plug-in, is complete and that the record should be added to the collection of records extracted.
<@1:4:typeName>
In the preferred embodiment, the occurrence of the plug-in above indicates that the extraction of a new record of the type specified by the ‘typeName’ string is to begin. The ‘typeName’ will preferably match a known type manager type, either defined elsewhere or within the additional type definitions supplied as part of the parser specification.
<@1:5: . . . >
In the preferred embodiment, the plug-in above is used to assign values to either a field or a register. Within the assigned expression, the previously extracted field value may be referred to as ‘$’. Fields may be expressed as a path to sub-fields of the structure to any depth using normal type manager path notation (same as for C). As an example, the field specifier “description[$aa].u.equip.specifications” refers to a field within the parent structure that is within an array of unions. The symbol ‘$aa’ is a register designator. There are 26*26 registers, ‘$aa’ to ‘$zz’, which may be used to hold the results of calculations necessary to compute field values. A single-character register designator may also be used instead; thus ‘$a’ is the same as ‘$aa’, ‘$b’ is the same as ‘$ba’, etc. Register names may optionally be followed by a text string (no spaces) in order to improve readability (as in $aa:myIndex), but this text string is ignored by the C** interpreter. The use of registers to store extracted information and context is key to handling the distributed nature of information in published sources. In the example above, ‘$a’ is being used as an index into the array of ‘description’ fields. To increment this index, a “<@1:5:$a=$a+1>” plug-in call would be inserted in the appropriate part of the BNF (presumably after extraction of an entire ‘description’ element). All registers are initially set to zero (integer) when the parse begins; thereafter their value is entirely determined by the <@1:5> plug-ins that occur during the extraction process. If a register is assigned a real or string value, it adopts that type automatically until a value of another type is assigned to it. Expressions may include calls to functions (of the form $FuncName), which provide a convenient means of processing the inputs extracted into certain data types for assignment. These functions provide capabilities comparable to the string processing libraries commonly found with older generation data mining capabilities.
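By way of illustration, a repeating ‘description’ element might be handled as in the sketch below (the production and token names are hypothetical; the field path and register idiom are those described above):

description_list ::= description description_list | description
description ::= <DESC> <@1:1> <1:String> <@1:2> <@1:5:description[$a].u.equip.specifications=$; $a=$a+1>

Each time a ‘description’ element is recognized, the extracted text is assigned to the next element of the array and the index register ‘$a’ is incremented.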
When assigning values to fields, the <@1:5> plug-in performs intelligent type conversions, for example:
1) If the token is a <1:String> and the field is a ‘charHdl’, a handle is created and assigned to the field; similarly for a ‘charPtr’. If the field is a fixed-length character array, the string is copied into it; if it won't fit, a bounds error is flagged. If the field is already non-empty (regardless of type), then the <@1:5> plug-in appends any new text to the end of the field value (if possible). Note that registers do not append automatically unless you use the syntax $a=$a+"string".
2) If the field is numeric, appropriate type conversions from the extracted value occur, and range checking may be performed automatically. Multiple assignments may be separated by semicolons. The full syntax supported within the ‘assignment’ string is defined by the system BNF language “MitoMine” (described below).
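As an illustration of multiple semicolon-separated assignments (the field name is taken from the earlier example; the register usage is hypothetical), a single plug-in might read:

<@1:5:notes.sourceNotes=$; $a=$a+1>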
Note that because the order of evaluation of commutative operators (e.g., “+”) is guaranteed to be left-to-right, multiple non-parenthesized string concatenation operations can be safely expressed as a single statement, as in:
fieldname = "Hello" + $FirstCapOnly($a) + "do you like" + $b + "\n"
The <@1:5> plug-in may also be used to support limited conditional statements, which may be performed using the ‘if’ and ‘ifelse’ keywords. The effect of the ‘if’ is to conditionally skip the next element of the production that immediately follows the <@1:5> containing the ‘if’ (there can be only one statement within an ‘if’ or ‘ifelse’ block). For example:
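A sketch of such a production (the nonterminal names are hypothetical) might be:

myProduction ::= <@1:5:if ($a > 0)> optional_prod rest_prod

Here ‘optional_prod’ is parsed only when the condition holds; otherwise it is skipped and parsing continues with ‘rest_prod’.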
More significantly, since it is possible to discard any element from the production in this manner, the prudent use of conditional <@1:5> evaluation can be used to modify the recognized syntax of the language. Consider the following production:
myProduction ::= <@1:5:ifelse ($a >= 0)> positive_prod negative_prod
In this example, the contents of register ‘$a’ determine which of the two possible productions will be evaluated next. This can be a very powerful tool for solving non-context-free language ambiguities (normally intractable to this kind of parser) by remembering the context in one of the registers and then resolving the problem later when it occurs. The results of misusing this capability can be very confusing, and the reader is referred to the incorporated materials of the Parser Patent for additional details. That having been said, the following simplified guideline should help to ensure correctness:
Ensure that the ‘if’ thenClause is not NULLABLE and, if necessary (depending on other occurrences of nextElement), include a production elsewhere {that may never be executed} to ensure that First(nextElement) is entirely contained within Follow(preElement).
Note that all plug-ins may contain multiple lines of text by use of the <cont> symbol (see Parser Patent). This may be required in cases where a <@1:5> statement exceeds the space available on a single line (e.g., many parameters to a function). The maximum size of any given plug-in text in the preferred embodiment is 8 KB.
The present invention also permits the specification of the language specific parser to include any user dialogs and warnings that might be required for the parser concerned, any additional type definitions that might be required as part of parser operation, and any custom annotations and scripts (see Collections Patent) that might be necessary.
Within the <@1:5> plug-in, in addition to supporting conditional, additive, multiplicative, and assignment operators, this package preferably provides a number of built-in functions that are useful in manipulating extracted values in order to convert them to a form suitable for assignment to typed fields. These functions are loosely equivalent to the string processing library of conventional mining languages. Function handlers may be registered (via a registry API—see Parser Patent for further details) to provide additional built-in functions. In the built-in function descriptions below, the type of a given parameter is indicated between square brackets. The meaning of these symbols is as follows:
[I]—Integer value (64 bit)
[F]—Floating point value (double)
[S]—String value
The following is a partial list of predefined built-in functions that have been found to be useful in different data mining situations. New functions may be added to this list, and it is expected that use of the system will often include the step of adding new functions. In such a case, if a feature is not provided, it can be implemented and registered as part of any particular parser definition. On the other hand, none of the features listed below is required, meaning that a much smaller set of functions could also be used. In the preferred embodiment, however, the following functions (or ones having similar functionality) would be available.
1) [F] $Date( )
2) [F] $StringToDate([S] dateString,[S] calendar)
3) [S] $TextAfter([S] srcStr,[S] delimStr)
4) [S] $TextBefore([S] srcStr,[S] delimStr)
5) [S] $TextBetween([S] srcStr,[S] startStr,[S] endStr)
6) [I] $Integer([S] aString)
7) [F] $Real([S] aString)
8) [I] $IntegerWithin([S] aString,[I] n)
9) [F] $RealWithin([S] aString,[I] n)
10) [S] $StripMarkup([S] aString)
11) [S] $SourceName( )
12) [S] $SetPersRefInfo([S] aString)
13) [S] $FirstCapOnly([S] aString)
14) [S] $TextNotAfter([S] srcStr,[S] delimStr)
15) [S] $TextNotBefore([S] srcStr,[S] delimStr)
16) [S] $TextNotBetween([S] srcStr,[S] startStr,[S] endStr)
17) [S] $TruncateText([S] srcStr,[I] numChars)
18) [S] $TextBeforeNumber([S] srcStr)
19) [S] $TextWithout([S] srcStr,[S] sequence)
20) [S] $WordNumber([S] srcStr,[I] number)
21) [S] $Ask([S] promptStr)
22) [S] $TextWithoutBlock([S] srcStr,[S] startDelim,[S] endDelim)
23) [S] $ReplaceSequence([S] srcStr,[S] sequence,[S] nuSequence)
24) [S] $AppendIfNotPresent([S] srcStr,[S] endDelim)
25) [S] $ProperNameFilter([S] srcStr,[I] wordMax,[S] delim)
26) [S] $Sprintf([S] formatStr, . . . )
27) [S] $ShiftChars([S] srcStr,[I] delta)
28) [S] $FlipChars([S] srcStr)
29) [S] $ReplaceBlockDelims([S] srcStr,[S] startDelim,[S] endDelim,[S] nuStartDelim, [S] nuEndDelim,[I] occurrence, [I] reverse)
30) [S] $RemoveIfFollows([S] srcStr,[S] endDelim)
31) [S] $RemoveIfStarts([S] srcStr,[S] startDelim)
32) [S] $PrependIfNotPresent([S] srcStr,[S] startDelim)
This function determines if ‘srcStr’ starts with ‘startDelim’ and if not prepends ‘startDelim’ to ‘srcStr’ returning the result.
33) [S] $NoLowerCaseWords([S] srcStr)
34) [S] $ReplaceBlocks([S] srcStr,[S] startDelim,[S] endDelim,[I] occurrence,[S] nuSequence)
35) [S] $AppendIfNotFollows([S] srcStr,[S] endDelim)
36) [I] $WordCount([S] srcStr)
37) [S] $PreserveParagraphs([S] srcStr)
38) [I] $StringSetIndex([S] srcStr,[I] ignoreCase,[S] setStr1 . . . [S] setStrN)
39) [S] $IndexStringSet([I] index,[S] setStr1 . . . [S] setStrN)
40) [S] $ReplaceChars([S] srcStr,[S] char,[S] nuChar)
41) [S] $Sentence([S] srcStr,[I] index)
42) [S] $FindHyperlink([S] srcStr,[S] domain, [I] index)
43) [S] $AssignRefType([S] aString)
This function allows you to assign directly to the typeID sub-field of a persistent reference field rather than assigning to the name. The function result is equal to ‘aString’, but the next assignment made by the parser will be to the typeID sub-field (‘aString’ is assumed to be a valid type name), not the ‘name’ sub-field.
44) [I] $RecordCount( )
45) [S] $Exit([S] aReason)
46) [I] $MaxRecords( )
47) [I] $SetMaxRecords([I] max)
48) [I] $FieldSize([S] fieldName)
49) [I] $TextContains([S] srcText,[S] subString)
50) [I] $ZapRegisters([S] minReg,[S] maxReg)
51) [I] $CRCString([S] srcText)
Note that parameters to routines may be either constants (of integer, real, or string type), field specifiers referring to fields within the current record being extracted, registers, $ (the currently extracted field value), or evaluated expressions, which may include embedded calls to other functions (built-in or otherwise).
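For example, an assignment whose value is computed by nested built-in calls might look like the following (the field name and delimiter strings are hypothetical; both functions appear in the list above):

<@1:5:notes.sourceNotes=$StripMarkup($TextBetween($,"<NOTE>","</NOTE>"))>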
This essentially creates a complete programming language for the extraction of data into typed structures and collections. The C** programming language provided by the <@1:5> plug-ins differs from a conventional programming language in that the order of execution of the statements is determined by the BNF for the language and the contents of the data file being parsed. In the preferred embodiment, the plug-in 5 MitoMine parser, in addition to recognizing registers, $, $function names, and type field specifications, can also preferably recognize and assign further token types, including numeric, string, and character constants.
Character constants can be a maximum of 8 characters long; during input, they are not sign extended. A number of custom parser options, over and above those supported by the basic parse package, would preferably also be supported.
These options may be specified for a given parser language by adding the corresponding hex value to the parser options line; for example, a specification might set the kTraceAssignments+kpLineTrace options in addition to those supported by the basic parse package.
The lexical analyzer options line can also be used to specify additional white-space and delimiter characters to the lexical analyzer as a comma-separated list; for example, a specification might cause the characters ‘a’ and ‘b’ to be treated as whitespace (see LX_AddWhiteSpace) and the characters ‘Y’ and ‘Z’ to be treated as delimiters (see LX_AddDelimiter).
Appendix A (page 28) provides a sample of the BNF and LEX specifications that define the syntax of the <@1:5> plug-in (i.e., C**) within MitoMine (see Parser Patent for further details). Note that most of the functionality of C** is already provided by the predefined plug-in functions (plug-in 0) supplied by the basic parser package. A sample implementation of the <@1:5> plug-in (plug-in one) and a sample implementation of a corresponding resolver function are also provided.
As described previously, the lexical and BNF specifications for the outermost parser vary depending on the source being processed (an example is given below); however, the outer parser also has a single standard plug-in and resolver. A sample implementation of this standard plug-in (plug-in one) and a sample implementation of a corresponding resolver function are also provided in Appendix A.
The listing below gives the API interface to the MitoMine capability for the preferred embodiment although other forms are obviously possible. Appendix A provides the sample pseudo code for the API interface.
In the preferred embodiment, a function, hereinafter called MN_MakeParser( ), initializes an instance of the MitoMine and returns a handle to the parser database which is required by all subsequent calls. A ‘parserType’ parameter could be provided to select a particular parsing language to be loaded (see PS_LoadBNF) and used.
In the preferred embodiment, a function, hereinafter called MN_SetRecordAdder( ) determines how (or if) records once parsed are added to the collection. The default record adder creates a set of named lists where each list is named after the record type it contains.
In the preferred embodiment, a function, hereinafter called MN_SetMineFunc( ), sets the custom mine function handler for a MitoMine parser. Additional functions could thus be defined over and above those provided by MitoMine within the <@1:5: . . . > plug-in context. A sample mine function handler follows:
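The sketch below assumes a plausible handler prototype (the actual pseudo code appears in Appendix A; the signature, parameter names, and the custom $Reverse function are all hypothetical):

#include <stdbool.h>
#include <string.h>

/* Hypothetical prototype: called when C** encounters an unknown $function. */
static bool myMineFunc(void *parserDB,       /* handle from MN_MakeParser() */
                       const char *funcName, /* name following the '$'      */
                       const char *args,     /* evaluated argument text     */
                       char *result,         /* buffer receiving the result */
                       size_t resultSize)
{
    if (strcmp(funcName, "Reverse") == 0) {  /* implements a custom $Reverse() */
        size_t n = strlen(args);
        if (n >= resultSize)
            n = resultSize - 1;
        for (size_t i = 0; i < n; i++)       /* copy the argument reversed */
            result[i] = args[n - 1 - i];
        result[n] = '\0';
        return true;                         /* we handled this function */
    }
    return false;                            /* defer to the built-in set */
}

Such a handler would be installed with MN_SetMineFunc( ) and would presumably be consulted whenever a $function name is not one of the built-ins.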
In the preferred embodiment, a function, hereinafter called MN_SetMaxRecords( ), sets the maximum number of records to be mined for a MitoMine parser. This is the number returned by the built-in function $MaxRecords( ). If the maximum number of records is not set (i.e., is zero), all records are mined until the input file(s) are exhausted.
In the preferred embodiment, a function, hereinafter called MN_SetMineLineFn( ), sets the MitoMine line processing function for a given MitoMine parser. A typical line processing function might appear as follows:
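A minimal sketch under assumed conventions (the actual pseudo code appears in Appendix A; the signature and the ‘skipComments’ parameter value are hypothetical):

#include <stdbool.h>
#include <string.h>

/* Hypothetical prototype: called for each input line before parsing. */
static bool myMineLineFn(void *parserDB,             /* handle from MN_MakeParser()   */
                         char *lineBuffer,           /* the line about to be parsed   */
                         const char *aMineLineParam) /* set via MN_SetMineLineParam() */
{
    /* Example: honor a "skipComments" parameter by blanking comment lines,
       thereby altering the input stream before the parser sees it. */
    if (strcmp(aMineLineParam, "skipComments") == 0 && lineBuffer[0] == '#')
        lineBuffer[0] = '\0';
    return true;                                     /* continue mining */
}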
These functions can be used to perform a wide variety of useful tasks, such as altering the input stream before the parser sees it, adjusting parser debugging settings, etc. The ‘aMineLineParam’ parameter above is an arbitrary string and can be formatted any way you wish in order to transfer the necessary information to the line processing function. The current value of this parameter is set using MN_SetMineLineParam( ).
In the preferred embodiment, a function, hereinafter called MN_SetMineLineParam( ), sets the string parameter to a MitoMine line processing function.
In the preferred embodiment, two functions, hereinafter called MN_SetParseTypeDB( ) and MN_GetParseTypeDB( ), can be used to associate a type DB (probably obtained using MN_GetMineLanguageTypeDB) with a MitoMine parser. This is preferable so that the plug-ins associated with the extraction process can determine type information for the structures unique to the language. In the preferred embodiment, the function MN_GetParseTypeDB( ) would return the current setting of the parser type DB.
In the preferred embodiment, a function, hereinafter called MN_SetFilePath( ), sets the current file path associated with a MitoMine parser.
In the preferred embodiment, a function, hereinafter called MN_GetFilePath( ), gets the current file path associated with a MitoMine parser.
In the preferred embodiment, a function, hereinafter called MN_SetCustomContext( ), may be used to set the custom context value associated with a given MitoMine parser. Because MitoMine itself uses the parser context (see PS_SetContextID), it provides this alternative API to allow custom context to be associated with a parser.
In the preferred embodiment, a function, hereinafter called MN_GetCustomContext( ), may be used to get the custom context value associated with a given MitoMine parser. Because MitoMine itself uses the parser context (see PS_SetContextID), it provides this alternative API to allow custom context to be associated with a parser.
In the preferred embodiment, a function, hereinafter called MN_GetParseCollection( ), returns the collection object associated with a parser.
In the preferred embodiment, a companion function, hereinafter called MN_SetParseCollection( ), allows this value to be altered. By calling MN_SetParseCollection( . . . ,NULL) it is possible to detach a collection from the parser. This is useful in cases where it is preferable to permit the collection to survive the parser teardown process.
In the preferred embodiment, a function, hereinafter called MN_GetMineLanguageTypeDB( ), returns a typeDB handle to the type DB describing the structures utilized by the specified mine language. If the specified typeDB already exists, it is simply returned, otherwise a new type DB is created by loading the type definitions from the designated MitoMine type specification file.
In the preferred embodiment, a function, hereinafter called MN_KillParser( ), disposes of the Parser database created by MN_MakeParser( ). A matching call to MN_KillParser( ) must exist for every call to MN_MakeParser( ). This call would also invoke MN_CleanupRecords( ) for the associated collection.
In the preferred embodiment, a function, hereinafter called MN_Parse( ), invokes the MitoMine parser to process the designated file. The function is passed a parser database created by a call to MN_MakeParser( ). When all calls to MN_Parse( ) are complete, the parser database must be disposed of using MN_KillParser( ).
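Putting these calls together, a minimal batch-mining sequence might look like the sketch below (the prototypes shown are assumptions for illustration; the actual pseudo code appears in Appendix A, and the parser language name is hypothetical):

#include <stddef.h>

/* Hypothetical prototypes; see Appendix A for the actual forms. */
extern void *MN_MakeParser(const char *parserType, int options);
extern void MN_Parse(void *parserDB, const char *filePath);
extern void MN_InvokeMineMuncher(void *parserDB);
extern void MN_KillParser(void *parserDB);

void mineOneFile(const char *filePath)
{
    void *parserDB = MN_MakeParser("CIAWorldFactbook", 0); /* select a parsing language */
    if (parserDB == NULL)
        return;
    MN_Parse(parserDB, filePath);   /* run the extraction over the file        */
    MN_InvokeMineMuncher(parserDB); /* post-process and dispose of the records */
    MN_KillParser(parserDB);        /* must match the MN_MakeParser() call     */
}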
In the preferred embodiment, a function, hereinafter called MN_RunMitoMine( ), runs the selected MitoMine parser on the contents of a string handle. A parameter could also be passed to the MN_MakeParser( ) call and can thus be used to specify various debugging options.
In the preferred embodiment, a function, hereinafter called MN_CleanupRecords( ), cleans up all memory associated with the set of data records created by a call to MN_RunMitoMine( ).
In the preferred embodiment, a function, hereinafter called MN_RegisterMineMuncher( ), can be used to register by name a function to be invoked to post process the set of records created after a successful MitoMine run. The name of the registered Muncher function would preferably match that of the mining language (see MN_Parse for details). A typical mine-muncher function might appear as follows:
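A minimal sketch under assumed conventions (the actual pseudo code appears in Appendix A; the signature and parameter types are hypothetical):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical prototype: invoked once with the records of a successful run. */
static void myMineMuncher(void *scanP,      /* same 'scanP' as the file filters */
                          void *collection) /* records produced by the run      */
{
    /* A real muncher might write each record to an external database or
       forward it to a set of servers; this sketch merely logs and cleans up. */
    printf("muncher invoked: collection=%p\n", collection);
    free(scanP); /* e.g., dispose of state left behind by a file filter */
}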
The ‘scanP’ parameter is the same ‘scanP’ passed to the file filter function and can thus be used to communicate between file filters and the muncher, or alternatively to clean up any leftovers from the file filters within the ‘muncher’. Custom ‘muncher’ functions can be used to perform a wide variety of complex tasks; indeed, the MitoMine approach has been used successfully to extract binary (non-textual) information from very complex sources, such as encoded database files, by using this technique.
In the preferred embodiment, a function, hereinafter called MN_DeRegisterMineMuncher( ), de-registers a previously registered mine muncher function.
In the preferred embodiment, a function, hereinafter called MN_InvokeMineMuncher( ), invokes the registered ‘muncher’ function for the records output by a run of MitoMine (see MN_RunMitoMine). If no function is registered, the records and all associated memory are simply disposed using MN_CleanupRecords( ).
In the preferred embodiment, a function, hereinafter called MN_RegisterFileFilter( ), can be used to register by name a file filter function to be invoked to process files during a MitoMine run. If no file filter is registered, files are treated as straight text files, otherwise the file must be loaded and pre/post processed by the file filter. A typical file filter function might appear as follows:
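A minimal sketch under assumed conventions (the actual pseudo code appears in Appendix A; the signature and the NULL-means-plain-text convention are hypothetical):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical prototype: pre/post-processes a file for the parser. */
static bool myFileFilter(void *scanP,      /* shared with the muncher        */
                         const char *path, /* the file about to be mined     */
                         char **textOut)   /* receives the text to be parsed */
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return false;
    /* A real filter might decode a proprietary or binary format here and
       return the decoded text; this sketch defers to normal text handling. */
    fclose(f);
    *textOut = NULL; /* hypothetical: NULL lets the file be read as straight text */
    return true;
}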
In the preferred embodiment, a function, hereinafter called MN_ListFileFilters( ), obtains a string list of all known MitoMine file filter functions.
In order to illustrate how MitoMine is used to extract information from a given source and map it into its ontological equivalent, we will use the example of the ontological definition of the Country record pulled from the CIA World Factbook. The extract provided in Appendix B (page 45) is a portion of the first record of data, for the country Afghanistan, taken from the 1998 edition of this CD-ROM. The format of the information in this case appears to be a variant of SGML, but it is clear that this approach applies equally to almost any input format. The lexical analyzer and BNF specifications for the parser to extract this source into a sample ontology are also provided in Appendix B. The BNF necessary to extract country information into a sample ontology is one of the most complex scripts thus far encountered in MitoMine applications, due to the large amount of information that is being extracted from this source and preserved in the ontology. Because this script is so complex, it probably best illustrates a less-than-ideal data-mining scenario, but it also demonstrates the use of a large number of different built-in mining functions. Some of the results of running this extraction script can be seen in the portions of the Ontology Patent relating to auto-generated UI.
Note that in the BNF provided in Appendix B, a number of distinct ontological items are created, not just a country. The BNF starts out by creating a “Publication” record that identifies the source of the data ingested; it also creates a “Government” record, which is descended from Organization. The Government record is associated with the country and forms the top level of the description of the government/organization of that country (of which the military branches created later are a part). In addition, other records could be created and associated with the country; for example, the “opt_figure” production assigns a variety of information to the ‘stringH’ field of the “mapImage” field, which describes a persistent reference to the file that contains the map image. When the data produced by this parse is written to persistent storage, this image file is also copied to the image server and, through the link created, can be recalled and displayed whenever the country is displayed (as is further demonstrated in the UI examples of the Ontology Patent). In fact, as a result of extracting a single country record, perhaps 50-100 records of different types are created by this script and associated in some way with the country, including government personnel, international organizations, resources, population records, images, cities and ports, neighboring countries, treaties, notes, etc. Thus it is clear that what was flat, unrelated information in the source has been converted to richly interconnected, highly computable and usable ontological information after the extraction completes. This same behavior is repeated for all the diverse sources that are mined into any given system, and the information from all such sources becomes cross-correlated and therefore far more useful than it was in its separate, isolated form. The power of this approach over conventional data mining technologies is clear.
The foregoing description of the preferred embodiments of the invention has been presented for the purposes of illustration and description. For example, although described with respect to the C* programming language, any programming language that includes the appropriate extensions could be used to implement this invention. Additionally, the claimed system and method should not be limited to the particular API disclosed. The descriptions of the header structures should also not be limited to the embodiments described. While the sample pseudo code provides examples of the code that may be used, the plurality of implementations that could in fact be developed is nearly limitless. For these reasons, this description is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
This application is a continuation of application Ser. No. 10/357,290 filed on Feb. 3, 2003 now abandoned, titled “A System And Method For Mining Data,” which claims the benefit of U.S. Provisional Application Ser. No. 60/353,487 filed on Feb. 1, 2002, titled “Integrated Multimedia Intelligence Architecture,” both of which are incorporated herein by reference in their entirety for all that is taught and disclosed therein.
Number | Name | Date | Kind |
---|---|---|---|
4905138 | Bourne | Feb 1990 | A |
5105353 | Charles et al. | Apr 1992 | A |
5214785 | Fairweather | May 1993 | A |
5276880 | Platoff et al. | Jan 1994 | A |
5303392 | Carney et al. | Apr 1994 | A |
5339406 | Carney et al. | Aug 1994 | A |
5375241 | Walsh | Dec 1994 | A |
5410701 | Gopalraman | Apr 1995 | A |
5487147 | Brisson | Jan 1996 | A |
5586329 | Knudsen et al. | Dec 1996 | A |
5596752 | Knudsen et al. | Jan 1997 | A |
5677835 | Carbonell et al. | Oct 1997 | A |
5682535 | Knudsen | Oct 1997 | A |
5694523 | Wical | Dec 1997 | A |
5748975 | Van De Vanter | May 1998 | A |
5768580 | Wical | Jun 1998 | A |
5794050 | Dahlgren et al. | Aug 1998 | A |
5819083 | Chen et al. | Oct 1998 | A |
5870608 | Gregory | Feb 1999 | A |
5897642 | Capossela et al. | Apr 1999 | A |
5903756 | Sankar | May 1999 | A |
5915255 | Schwartz et al. | Jun 1999 | A |
5963742 | Williams | Oct 1999 | A |
5991539 | Williams | Nov 1999 | A |
5995920 | Carbonell et al. | Nov 1999 | A |
6061675 | Wical | May 2000 | A |
6076088 | Paik et al. | Jun 2000 | A |
6083282 | Caron et al. | Jul 2000 | A |
6094650 | Stoffel et al. | Jul 2000 | A |
6115782 | Wolczko et al. | Sep 2000 | A |
6139201 | Carbonell et al. | Oct 2000 | A |
6163785 | Carbonell et al. | Dec 2000 | A |
6182281 | Nackman et al. | Jan 2001 | B1 |
6199034 | Wical | Mar 2001 | B1 |
6219830 | Eidt et al. | Apr 2001 | B1 |
6237005 | Griffin | May 2001 | B1 |
6263335 | Paik et al. | Jul 2001 | B1 |
6289338 | Stoffel et al. | Sep 2001 | B1 |
6353925 | Stata et al. | Mar 2002 | B1 |
6366933 | Ball et al. | Apr 2002 | B1 |
6453321 | Hill et al. | Sep 2002 | B1 |
6487545 | Wical | Nov 2002 | B1 |
6507833 | Hichwa et al. | Jan 2003 | B1 |
6539460 | Castelli et al. | Mar 2003 | B2 |
6564263 | Bergman et al. | May 2003 | B1 |
6591274 | Smith et al. | Jul 2003 | B1 |
6640231 | Andersen et al. | Oct 2003 | B1 |
6654953 | Beaumont et al. | Nov 2003 | B1 |
6658627 | Gallup et al. | Dec 2003 | B1 |
6678677 | Roux et al. | Jan 2004 | B2 |
6704737 | Nixon et al. | Mar 2004 | B1 |
6721723 | Gibson et al. | Apr 2004 | B1 |
6728692 | Martinka et al. | Apr 2004 | B1 |
6748481 | Parry et al. | Jun 2004 | B1 |
6748585 | Proebsting et al. | Jun 2004 | B2 |
6826744 | McAuley | Nov 2004 | B1 |
6847979 | Allemang et al. | Jan 2005 | B2 |
6862610 | Shuster | Mar 2005 | B2 |
6883087 | Raynaud-Richard et al. | Apr 2005 | B1 |
7158984 | Lewis | Jul 2005 | B2 |
6950793 | Ross et al. | Sep 2005 | B2 |
7003764 | Allison | Feb 2006 | B2 |
7240330 | Ingberg | Mar 2006 | B2 |
7027975 | Pazandak et al. | Apr 2006 | B1 |
7062760 | Tonouchi | Jun 2006 | B2 |
7210130 | Nahar | Jun 2006 | B2 |
7100153 | Ringseth et al. | Aug 2006 | B1 |
7103749 | Fairweather | Sep 2006 | B2 |
7111283 | Fraser et al. | Sep 2006 | B2 |
7143087 | Fairweather | Nov 2006 | B2 |
20040044836 | Wong et al. | Mar 2004 | A1 |
Number | Date | Country | |
---|---|---|---|
20060235811 A1 | Oct 2006 | US |
Number | Date | Country | |
---|---|---|---|
60353487 | Feb 2002 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10357290 | Feb 2003 | US |
Child | 11455304 | US |