Generally speaking, a global computer network, e.g., the Internet, is formed of a plurality of computers coupled to a communication line for communicating with each other. Each computer is referred to as a network node. Some nodes serve as information bearing sites while other nodes provide connectivity between end users and the information bearing sites.
The explosive growth of the Internet makes it an essential component of the strategy of every business, organization and institution, and leads to massive amounts of information being placed in the public domain for people to read and explore. The type of information available ranges from information about companies and their products, services, activities, people and partners, to information about conferences, seminars and exhibitions, to news sites, to information about universities, schools, colleges, museums and hospitals, to information about government organizations, their purpose, activities and people. The Internet has become the venue of choice for every organization for providing pertinent, detailed and timely information about itself, its cause, services and activities.
The Internet essentially is nothing more than the network infrastructure that connects geographically dispersed computer systems. Every such computer system may contain publicly available (shareable) data that are available to users connected to this network. However, until the early 1990s there was no uniform way or standard convention for accessing these data. Users had to use a variety of techniques to connect to remote computers (e.g., telnet, ftp, etc.) using passwords that were usually site-specific, and they had to know the exact directory and file name that contained the information they were looking for.
The World Wide Web (WWW or simply Web) was created in an effort to simplify and facilitate access to publicly available information from computer systems connected to the Internet. A set of conventions and standards were developed that enabled users to access every Web site (computer system connected to the Web) in the same uniform way, without the need to use special passwords or techniques. In addition, Web browsers became available that let users navigate easily through Web sites by simply clicking hyperlinks (words or sentences connected to some Web resource).
Today the Web contains more than one billion pages that are interconnected with each other and reside in computers all over the world (thus the term “World Wide Web”). The sheer size and explosive growth of the Web has created the need for tools and methods that can automatically search, index, access, extract and recombine information and knowledge that is publicly available from Web resources.
The following definitions are used herein.
Web Domain
Web domain is an Internet address that provides connection to a Web server (a computer system connected to the Internet that allows remote access to some of its contents).
URL
URL stands for Uniform Resource Locator. Generally, URLs have four parts: the first describes the protocol used to access the content pointed to by the URL, the second identifies the domain (the Web server) where the content resides, the third contains the directory in which the content is located, and the fourth contains the file that stores the content:
<protocol>://<domain><directory><file>
For example:
Commonly, the <protocol> part may be missing. In that case, modern Web browsers access the URL as if the http:// prefix had been used. In addition, the <file> part may be missing. In that case, the convention calls for the file “index.html” to be fetched.
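These defaulting conventions are simple enough to sketch. The helper below is illustrative only; the function name and the exact defaults applied are assumptions based on the description above.

```python
from urllib.parse import urlsplit

def normalize_url(url: str) -> str:
    """Apply the two defaulting conventions described above: a missing
    protocol defaults to http://, and a missing file defaults to
    index.html. Query strings are ignored in this sketch."""
    if "://" not in url:                      # default the protocol
        url = "http://" + url
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    if path == "" or path.endswith("/"):      # default the file
        path = (path or "/") + "index.html"
    return f"{scheme}://{netloc}{path}"

assert normalize_url("www.example.com") == "http://www.example.com/index.html"
assert normalize_url("http://www.example.com/docs/") == "http://www.example.com/docs/index.html"
```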
For example, the following are legal variations of the previous example URLs:
Web Page
Web page is the content associated with a URL. In its simplest form, this content is static text stored in a text file indicated by the URL. However, very often the content contains multi-media elements (e.g., images, audio, video, etc.) as well as non-static text or other elements (e.g., news tickers, frames, scripts, streaming graphics, etc.). Very often, more than one file forms a Web page; however, there is only one file that is associated with the URL, and that file initiates or guides the Web page generation.
Web Browser
Web browser is a software program that allows users to access the content stored in Web sites. Modern Web browsers can also create content “on the fly”, according to instructions received from a Web site. This concept is commonly referred to as “dynamic page generation”. In addition, browsers can commonly send information back to the Web site, thus enabling two-way communication between the user and the Web site.
As our society's infrastructure becomes increasingly dependent on computers and information systems, electronic media and computer networks progressively replace traditional means of storing and disseminating information. There are several reasons for this trend, including cost of physical vs. computer storage, relatively easy protection of digital information from natural disasters and wear, almost instantaneous transmission of digital data to multiple recipients, and, perhaps most importantly, unprecedented capabilities for indexing, search and retrieval of digital information with very little human intervention.
Decades of active research in the Computer Science field of Information Retrieval have yielded several algorithms and techniques for efficiently searching and retrieving information from structured databases. However, the world's largest information repository, the Web, contains mostly unstructured information, in the form of Web pages, text documents, or multimedia files. There are no standards on the content, format, or style of information published in the Web, except perhaps the requirement that it should be understandable by human readers. Therefore, the power of structured database queries that can readily connect, combine and filter information to present exactly what the user wants is not available in the Web.
Trying to alleviate this situation, search engines that index millions of Web pages based on keywords have been developed. Some of these search engines have a user-friendly front end that accepts natural language queries. In general, these queries are analyzed to extract the keywords the user is possibly looking for, and then a simple keyword-based search is performed through the engine's indexes. However, this essentially corresponds to querying one field only in a database and lacks the multi-field queries that are typical of any database system. The result is that Web queries cannot become very specific; therefore they tend to return thousands of results, of which only a few may be relevant. Furthermore, the “results” returned are not specific data, similar to what database queries typically return; instead, they are lists of Web pages, which may or may not contain the requested answer.
In order to leverage the information retrieval power and search sophistication of database systems, the information needs to be structured, so that it can be stored in database format. Since the Web contains mostly unstructured information, methods and techniques are needed to extract data and discover patterns in the Web in order to transform the unstructured information into structured data.
The Web is a vast repository of information and data that grows continuously. Information traditionally published in other media (e.g., manuals, brochures, magazines, books, newspapers, etc.) is now increasingly published either exclusively on the Web, or in two versions, one of which is distributed through the Web. In addition, older information and content from traditional media is now routinely transferred into electronic format to be made available in the Web, e.g., old books from libraries, journals from professional associations, etc. As a result, the Web is gradually becoming the primary source of information in our society, with other sources (e.g., books, journals, etc.) assuming a secondary role.
As the Web becomes the world's largest information repository, many types of public information about people become accessible through the Web. For example, club and association memberships, employment information, even biographical information can be found in organization Web sites, company Web sites, or news Web sites. Furthermore, many individuals create personal Web sites where they themselves publish all kinds of personal information not available from any other source (e.g., resume, hobbies, interests, “personal news”, etc.).
In addition, people often use public forums to exchange e-mails, participate in discussions, ask questions, or provide answers. E-mail discussions from these forums are routinely stored in archives that are publicly available through the Web; these archives are great sources of information about people's interests, expertise, hobbies, professional affiliations, etc.
Employment and biographical information is an invaluable asset for employment agencies and hiring managers who constantly search for qualified professionals to fill job openings. Data about people's interests, hobbies and shopping preferences are priceless for market research and target advertisement campaigns. Finally, any current information about people (e.g. current employment, contact information, etc) is of great interest to individuals who want to search for or reestablish contact with old friends, acquaintances or colleagues.
As organizations increase their Web presence through their own Web sites or press releases that are published on-line, most public information about organizations becomes accessible through the Web. Any type of organization information that a few years ago would only be published in brochures, news articles, trade show presentations, or direct mail to customers and consumers is now also routinely published on the organization's Web site, where it is readily accessible by anyone with an Internet connection and a Web browser. The information that organizations typically publish in their Web sites includes the following:
One purpose of the present invention is to collect publicly available information about people and organizations published in the Web. Usually information about organizations is published in Web sites maintained by the organizations themselves and includes the above-mentioned information. However, very often relevant information can be collected from press releases, news articles, product reviews and other independent sources.
As to the present invention collecting publicly available information about people from Web sources, such information may include:
This information is usually published in the Web either by people who publish their own resume, or by organizations who publish biographical and other information about their employees. In addition, other sources of such information include news sites, club and association sites, etc.
In the preferred embodiment of the invention, computer apparatus and method for extracting data from a Web page implements the steps of:
The step of refining includes rejecting predefined (common phrase) formal names as not being people names of interest. Further, the step of refining includes determining aliases of respective people and organization names in the combined set, so as to reduce effective duplicate names.
In the preferred embodiment, the step of finding further finds addresses, telephone numbers, email addresses, professional titles and organization for which a person named on the given Web page holds that title. The step of finding further includes determining educational background and other biographical information (i.e., employment history) relating to a person named on the given Web page. The determined educational background information includes at least one of name of institution, degree earned from the institution and date of graduation from the institution.
Preferably, the invention apparatus and method is rules based. In one embodiment, the invention apparatus and method determine type/structure of Web page, structure or arrangement of contents of the Web page, type or purpose of each line and/or regular recurrence of a certain type of line (or pattern of elements) in the subject Web page. As such, desired people/organization information is extracted as a function of pattern/placement of the contents or determined line and/or page types and determined boundaries of elements of interest.
In accordance with another aspect of the present invention, subsets of lines are grouped together to form text units. The invention extracts from the formed text units desired people and/or organization information.
In accordance with a further aspect of the invention, additional information regarding a person or organization named on a given Web page is deduced. The additional information supplements information found on another Web page of a same Web site as the given Web page.
In a preferred embodiment, a database stores the extracted information, and a post processor normalizes (standardizes, reduces duplicates, etc.) the stored data.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIGS. 5a and 5b are block diagrams of working database records storing information extracted by the Extractor 41.
With reference to the drawings, the major components of the preferred embodiment, the Crawler 11, the Extractor 41 and the Loader 43, are described next.
The Crawler 11
The component referred to as “Crawler” 11 is a software robot that “crawls” the Web visiting and traversing Web sites with the goal of identifying and retrieving pages 12 with relevant and interesting information.
The Extractor 41
The “Extractor” 41 is the component that performs data extraction on the pages 12 retrieved by the Crawler 11. This data extraction in general is based on Natural Language Processing techniques and uses a variety of rules to identify and extract the relevant and interesting pieces of information.
The Loader 43
Data produced by the extractor 41 are saved into a database 45 by the “Loader” 43. This component 43 also performs many post-processing tasks to clean up and refine the data before storing information in database 45. These tasks include duplicate removal, alias resolution, correlating data produced from different Web sites, filtering and/or combining information, etc.
In the preferred embodiment, the Crawler 11 is a version of the software robot described in U.S. patent application Ser. No. 09/821,908 filed on Mar. 30, 2001 for a “Computer Method and Apparatus for Collecting People and Organization Information from Web Sites” and assigned to the assignee of the present invention. Specific rules are used to identify pages that contain organization information or relevant people information (e.g., name, employment, contact info, etc.). Examples include pages with a street address of the organization, press release pages, product list pages, and pages that contain information about the management team or an employee directory. All the interesting pages 12 that the Crawler 11 collects are then passed (through a local storage 48) to the Extractor 41 for further processing and data extraction.
The role of the Extractor 41 is to extract information about people and/or organizations from a single Web page. For people, the extractor 41 has to find all mentions of a person, identify information related to people and associate it with the right person. For organizations, the extractor 41 must identify all occurrences of organization names, identify information related to the organizations and recognize descriptive paragraphs of texts related to an organization.
The original source of data on which the extractor 41 operates is in the form of text (in possibly different formats: plain text, html, rtf, etc.). These texts are first converted to a standard format in which the boundary of each sentence is clearly located, each individual line of text is assigned a type (sentence, header line, copyright notice, other indications of purpose, etc.), and each line is associated with a series of style elements (bold, underlined, font size, etc.).
Before specific data extraction is applied, the text is analyzed with Natural Language Processing (NLP) tools in order to obtain the following information:
In the preferred embodiment of the present invention, these are obtained through the NLP techniques described in U.S. patent application Ser. No. 09/585,320 filed on Jun. 2, 2000 for a “Method and Apparatus for Deriving Information from Written Text”.
The Extractor 41 relies on rules and algorithms to identify people and organizations and to identify and link related information. Those rules and algorithms have many possible variations. In general, a variation on a rule or algorithm will result in a trade-off between coverage and accuracy. Depending on the final application for the extracted data, higher accuracy or higher coverage may be desirable. For instance, if the data is used as a database for a general search engine, more coverage is desirable even at the cost of lower accuracy. On the other hand, if the data is used to create mailing lists, higher accuracy is desirable. Everything described hereafter is understood in this context; the description of specific rules and algorithms is done in a general way and is meant to include such variations.
Recursively identifying the page structure: Many pages contain lists of elements, often within a hierarchy. Once noun phrase types, specific headers and style tags have been identified, it is possible to deduce the overall structure of the page by recursively looking for patterns using the method described later in step 114. For instance, a page could consist of a list of states, then within each state a list of cities, then within each city a list of companies, and then within each company a mailing address and a list of people. This is recognized by first locating the boundaries of the smallest, most deeply embedded sections, by identifying header lines that are more prominent than what follows or by locating clusters of repeated patterns using the method of step 114 (discussed later). At this point, clusters and headers at a higher level can be detected to recognize higher levels of the hierarchy. Ultimately, this produces a structure of the complete page, which makes it possible, among other things, to attribute the correct semantic type to noun phrases that could not be identified through regular noun phrase classification.
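By way of illustration only, the following minimal sketch shows this kind of recursive grouping, assuming each line has already been assigned a numeric prominence score; the names and the prominence heuristic are assumptions, not the patented rules.

```python
from dataclasses import dataclass

@dataclass
class Line:
    text: str
    prominence: int   # assumed: derived from style tags (font size, bold, header keywords)

def build_structure(lines):
    """Recursively group lines under the most prominent lines above them,
    yielding a nesting such as state -> city -> company -> people."""
    if not lines:
        return []
    top = max(l.prominence for l in lines)
    sections, header, body = [], None, []
    for line in lines:
        if line.prominence == top:          # a header at the current level
            if header is not None or body:
                sections.append((header, build_structure(body)))
            header, body = line, []
        else:
            body.append(line)               # belongs under the current header
    sections.append((header, build_structure(body)))
    return sections

page = [Line("MASSACHUSETTS", 3), Line("Boston", 2), Line("Acme Inc.", 1),
        Line("NEW YORK", 3), Line("Albany", 2), Line("Leda Corp.", 1)]
structure = build_structure(page)   # two state sections, each nesting a city, then a company
```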
Recognizing Specific Headers: Applicants have devised a mechanism to recognize specific headers and specific elements within a page (e.g., navigation map, copyright notice, index). This mechanism is based on rules that specify keywords or families of keywords along with the way those keywords should appear (e.g., by themselves, preceded by an organization name, at the end of a line, etc.). Some headers allow specific information to be deduced.
Assigning Style Tags to lines: In order to recognize the structure of a page, it is necessary to recognize that some lines are more prominent than others and that different lines correspond to the same structural element. In order to do this, it is necessary, at least, to compute a style tag for each line on the page (step 112).
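A minimal sketch of such a computation follows; the particular style elements and the tuple encoding are illustrative assumptions.

```python
import re

def style_tag(line_html: str, font_size: int) -> tuple:
    """Compute a coarse style signature for one line of a page. Lines
    with equal tags are treated as the same structural element; a
    'greater' tag marks a more prominent line, such as a header."""
    visible = re.sub(r"<[^>]+>", "", line_html)                 # strip markup
    bold = "<b>" in line_html.lower() or "<strong>" in line_html.lower()
    all_caps = visible.isupper()
    return (font_size, bold, all_caps)

# The section header outranks the entries below it, and the two
# entries share a single tag (same structural element).
assert style_tag("<b>MANAGEMENT TEAM</b>", 14) > style_tag("John Smith", 10)
assert style_tag("John Smith", 10) == style_tag("Mary Jones", 10)
```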
After the line type is identified in step 112, Extractor 41 performs a lexical analysis (step 113), detailed below.
Inside a text, all relevant pieces of information (names of people, titles, names of organizations, phone numbers, fax numbers, addresses, etc.) must be identified as such (step 21).
Noun Phrase Detection: With the use of a tagger/disambiguator 23, the lexical analysis 113 recognizes all noun phrases in a text. The noun phrase recognition mechanism is rendered more precise by adding to the dictionary some lexical elements useful for name recognition. Those words are coupled with a corresponding semantic flag:
Noun Phrase Typing: With the use of a noun phrase classifier joiner, the lexical analysis 113 identifies all noun phrases that could potentially correspond to a person's or organization's name (step 25). An example noun phrase classifier joiner is disclosed in U.S. patent application Ser. No. 09/585,320 filed Jun. 2, 2000, herein incorporated by reference. In order to identify such noun phrases, rules describing the composition of a NAME must be defined. Those rules define the different parts of a name and the different orders in which they can appear. In the preferred embodiment, names of people have seven possible parts: Address, FirstName, Initial, MiddleName, NameParticle (e.g., van, de), LastName, NameSuffix.
Names of organizations have specific organization keywords at the end (e.g., Inc., Ltd., LLC, etc.) or at the beginning (e.g., Bank of, Association, League of, etc.). Certain organization names are followed by a respective stock ticker symbol (e.g., “ . . . Acme (NASDAQ:ACME) . . . ”).
Each rule describes a possible combination of those parts where such a combination can serve as a valid name. Each rule is a succession of “tokens”. Each token specifies four things (elements in parentheses are the symbols used in the preferred implementation):
All unrecognized capitalized noun phrases on a page are compared with all domains on the page. Those domains come from e-mail addresses, links, and/or explicit URLs. When a domain is matched, the unknown noun phrase is retyped as being an organization name. Matching is done by scanning each letter of the domain from left to right, trying to match at least the first letter of each word in the noun phrase (backtracking if necessary). For instance, “Federal Express” will match “FedEx” and “International Business Machines” will match “IBM”. A domain may contain more than one string separated by a period (“.”). For instance, “Apple Corporation” will match “info.apple”. Different conditions may be imposed on the match depending on the desired trade-off between coverage and accuracy. In particular, it is possible to allow that not all words in the noun phrase be matched to at least one letter of the domain. For instance, a maximum number of unmatched words may be specified.
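The scan-with-backtracking just described can be sketched as follows; this is an illustrative reconstruction (with an assumed default of at most one unmatched word), not the patented routine.

```python
def _match(domain, words, max_unmatched):
    """Recursively consume the domain with leading letters of each word,
    backtracking over how many letters each word contributes."""
    if domain == "":
        return len(words) <= max_unmatched    # remaining words may stay unmatched
    if not words:
        return False
    word = words[0].lower()
    n = 1
    while n <= min(len(word), len(domain)) and domain[:n] == word[:n]:
        if _match(domain[n:], words[1:], max_unmatched):
            return True
        n += 1
    return False

def domain_matches(domain, noun_phrase, max_unmatched=1):
    """'Federal Express' matches 'fedex'; 'Apple Corporation' matches
    'info.apple' (each period-separated component is tried in turn)."""
    words = noun_phrase.split()
    return any(_match(part, words, max_unmatched)
               for part in domain.lower().split("."))

assert domain_matches("fedex", "Federal Express")
assert domain_matches("ibm", "International Business Machines")
assert domain_matches("info.apple", "Apple Corporation")
assert not domain_matches("oracle", "Microsoft Corporation")
```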
Referring now to noun phrase semantics (step 27), semantic types are assigned to the identified noun phrases.
Noun Phrase Joining: In some cases, names of people span across more than one noun phrase. In particular, this is the case when commas “,” appear within a name (e.g.: “John Smith, Jr.”, “Smith, John”). With the use of a noun phrase joiner (see patent application Ser. No. 09/585,320 filed on Jun. 2, 2000 for a “Method and Apparatus for Deriving Information from Written Text”), rules have been defined to (i) detect such construction, (ii) join the different parts in a single noun phrase and (iii) assign the correct name part to each word.
Noun Phrase Splitting: In many cases, improper punctuation or irregular format creates a situation where the name of a person is immediately followed by a title or something else (e.g.: “John Smith Vice-President”). With the use of a noun phrase splitter (see patent application Ser. No. 09/585,320 filed on Jun. 2, 2000 for a “Method and Apparatus for Deriving Information from Written Text”), rules are defined to (i) detect such constructions, (ii) split the noun phrase into two parts at the appropriate point, and (iii) reanalyze the name so that correct name parts are assigned.
In some cases, the rules and algorithms described so far are not sufficient to identify the type of a particular noun phrase. This usually happens when the noun phrase is not surrounded by sufficient evidence. For example, there is not enough evidence to recognize a noun phrase such as “Kobir Malesh” as a NAME if it is not preceded by an address, does not contain a middle initial, and does not contain a known surname. However, analyzing the larger context where this noun phrase appears, it may be found that it is part of a list that follows a specific pattern, for example:
John Williams, CEO and President, ADA Inc.
Ted Brown, COO, Leda Corp.
Kobir Malesh, President, Round Technologies Corp.
Likewise, some organization names use a different format and may be recognized by a certain pattern. For instance, law firms often have names of the pattern “Name, Name, . . . & Name”.
In these cases, identifying the pattern within the text offers a way to assign the proper type to the unknown noun phrase. Thus, step 114 pattern detection follows or is employed with the lexical analysis of step 113.
Rules are defined that recognize the repetition of certain line types and noun phrases (for instance, a succession of lines where a NAME is followed by a TITLE) and that can reassign the proper type to noun phrases recognized as being part of such a pattern. For the purpose of pattern matching in step 114, only lines without verbs (not sentences) are considered for retyping, and any succession of sentences and breaks is considered as one element. A pattern is recognized when at least two combinations of lines, sentences and breaks with the same number of elements contain the same type of noun phrase in the same position on the same line. Furthermore, many variations are possible depending on the desired trade-off between coverage and accuracy. Those trade-offs concern:
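To make the repetition test concrete before moving on, here is a minimal sketch operating on lines already reduced to sequences of noun phrase types, with "?" standing for an unclassified phrase; the representation and the two-occurrence threshold are assumptions.

```python
from itertools import groupby

def retype_by_pattern(lines):
    """lines: lists of noun phrase types, e.g. ['NAME', 'TITLE', 'COMPANY'].
    When at least two consecutive lines of equal length carry the same
    known type at a position, unknown phrases ('?') there are retyped."""
    for _, grp in groupby(enumerate(lines), key=lambda p: len(p[1])):
        grp = [line for _, line in grp]
        if len(grp) < 2:
            continue
        for pos in range(len(grp[0])):
            known = [line[pos] for line in grp if line[pos] != "?"]
            if len(known) >= 2 and len(set(known)) == 1:
                for line in grp:
                    if line[pos] == "?":
                        line[pos] = known[0]

lines = [["NAME", "TITLE", "COMPANY"],   # John Williams, CEO and President, ADA Inc.
         ["NAME", "TITLE", "COMPANY"],   # Ted Brown, COO, Leda Corp.
         ["?",    "TITLE", "COMPANY"]]   # Kobir Malesh, President, Round Technologies Corp.
retype_by_pattern(lines)
assert lines[2][0] == "NAME"
```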
Referring back to the overall flow of Extractor 41, name aliasing (step 115) is performed next.
Organization name aliasing resolves IBM, International Business Machines Corporation, IBM Corp., IBM Corporation, and International Business Machines Corp. to the same organization. This is accomplished first by finding each word in the shorter name within the longer name. Organization identifiers such as Corporation are aliased so that Corp. and Corporation match each other. If all of the words in the shorter string match words in the longer string and in the right order and there are no leftover words in the shorter string, they are said to match (indicate the same organization). If there are leftover words in one string or the other (but not both) that are basic organization identifiers, like Corporation, they are also said to match.
If the names do not match according to the above process, but the shorter name contains an acronym, the aliasing step 115 checks if there is a string of words in the longer name such that one can construct the acronym by taking one or more letters from each word, in the right order. For example, IBM and International Business Machines Corp. or FedEx Corporation and Federal Express Corp. or Digital Equipment Corporation and DEC or American Express and AMEX. If there is such a group of words, the names are said to match.
Name Aliasing for unclassified noun phrases is performed as follows. Names that could not be recognized through normal noun phrase classification 113, pattern detection 114 or special construction can still be discovered by comparing them to the list of names found on the page. The Extractor 41 program looks at all capitalized noun phrases of one to three words that did not receive any semantic type. It then tries to see if any of those could match one of the names found. This is done by considering one-word noun phrases as either a first name or a last name, two-word noun phrases as “first name”+“last name”, and three-word noun phrases as “first name”+“middle name”+“last name”. It then applies the aliasing mechanism described above. This would allow, for instance, linking “Kobir” to “Mr. Kobir Malesh”.
Name Rejection: In some cases, names identified through the methods described will not be valid people's or organization names. Different methods are used to reject names that were recognized by mistake:
Dictionary Checking: for instance, if the last name is a dictionary word (e.g.: “Paul Electricity” vs. “Paul Wood”), the Extractor 41 program checks if the last name is also flagged as being a potential family name. If not, it is rejected.
An example aliasing software routine 115 is as follows.
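A minimal sketch of such a routine, assuming a small identifier table and the word-and-acronym rules described above (illustrative only; the preferred implementation may differ):

```python
ORG_IDS = {"corp": "corp", "corp.": "corp", "corporation": "corp",
           "inc": "inc", "inc.": "inc", "incorporated": "inc",
           "ltd": "ltd", "ltd.": "ltd", "limited": "ltd"}

def _norm(name):
    """Lowercase, split, and canonicalize organization identifiers."""
    return [ORG_IDS.get(w, w) for w in name.lower().split()]

def _words_match(shorter, longer):
    """All words of the shorter name appear, in order, in the longer one;
    leftover words must be basic organization identifiers."""
    it, leftovers = iter(longer), []
    for w in shorter:
        for c in it:
            if c == w:
                break
            leftovers.append(c)
        else:
            return False
    leftovers.extend(it)
    return all(w in set(ORG_IDS.values()) for w in leftovers)

def _acronym_match(acronym, words):
    """Build the acronym from one or more leading letters of at least
    two consecutive words (e.g. DEC / Digital Equipment Corporation)."""
    def rec(a, ws, used):
        if a == "":
            return used >= 2
        if not ws:
            return False
        n = 1
        while n <= min(len(ws[0]), len(a)) and a[:n] == ws[0][:n]:
            if rec(a[n:], ws[1:], used + 1):
                return True
            n += 1
        return False
    return any(rec(acronym, words[i:], 0) for i in range(len(words)))

def org_names_match(name_a, name_b):
    a, b = _norm(name_a), _norm(name_b)
    shorter, longer = sorted((a, b), key=len)
    if _words_match(shorter, longer):
        return True
    core = [w for w in shorter if w not in set(ORG_IDS.values())]
    return len(core) == 1 and _acronym_match(core[0], longer)

assert org_names_match("IBM Corp.", "IBM Corporation")
assert org_names_match("IBM", "International Business Machines Corp.")
assert org_names_match("DEC", "Digital Equipment Corporation")
assert org_names_match("FedEx Corporation", "Federal Express Corp.")
assert not org_names_match("IBM Research", "IBM")
```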
After aliasing 115, Extractor 41 determines Information Boundaries and handles elements of information that span multiple lines.
With regard to the former, Information Boundary, the following rules are used to identify the section where information about a person or organization is to be found.
For elements of information that span across more than one line, Extractor 41 proceeds as follows. Some combinations of lines have a special structure and are recognized by defining rules that describe this structure in terms of noun phrase types and successions of specific elements. This is the case, for instance, with addresses, where the whole address is recognized as one logical unit of information for the purpose of pattern matching and information extraction.
Similarly, paragraphs of company/organization information such as organization description, product description and organization mission are processed as one logical unit of information. Description paragraphs are preferably located by checking for some conditions and establishing a score. The best overall description on the whole subject Web site is considered the organization description. The following is a pseudo code description of a preferred implementation. Many variations are understood to be possible, and the below description is for purposes of illustration and not limitation of the present invention.
For a paragraph to be considered, it must obey the following conditions:
Information about a person or organization can also be found outside of its cluster. The following cases are recognized in the preferred embodiment.
Continuing with the operation of Extractor 41, press release pages receive special handling.
The press release organization can be identified among a list of noun phrase candidates using a Bayesian Engine or heuristics. Relevant tests can make use of the following information: presence in the first sentence of the first paragraph, presence in the contact section, number of occurrences and aliases, stock ticker symbol matching, being the subject of verbs like “announced”, following the word “about”, etc.
When pieces of personal information appear in a connected sentence, the logical relationship between each element (e.g.: title, company/organization, date) is expressed through the rules of the English language. In order to understand how those pieces of information are related, Natural Language Processing is employed in information extraction step 118. Sentences are syntactically parsed to obtain lexical frames representing potential relationships between words (see patent application Ser. No. 09/585,320 filed on Jun. 2, 2000 for a “Method and Apparatus for Deriving Information from Written Text”). Alternatively, those relationships can also be obtained through other NLP methods such as deterministic parsing. Those syntactical relations (or trees) are then searched for the appearance of pre-defined patterns corresponding to information that is of interest to the Extractor 41. Those patterns are referred to as “Semantic Frames”.
In the preferred embodiment, a list of semantic frames is defined for (a) sentences that express a relationship of employment between a company and a person. This includes, for instance, such semantic frames as “work Subject:[PERSON] as:[TITLE] for:[COMPANY]”, and (b) sentences that express that a person holds a certain degree, for instance “graduated Subject:[PERSON] from:[INSTITUTION] with:[DEGREE] in:[DISCIPLINE]”. Included in the former are semantic frames that recognize an organization as an object of certain verbs, such as “joined” (as in “ . . . joined ACME in 1998”) and “was employed” (as in “ . . . was employed by ACME . . . ”). Other semantic frames for other types of personal or organization information can be defined using the same method. Semantic frames can also indicate how the resulting database record 16, 17 should be constructed from elements matching the frame (see patent application Ser. No. 09/585,320 filed on Jun. 2, 2000 for a “Method and Apparatus for Deriving Information from Written Text”). Once a sentence has been parsed, all possible semantic frames are applied. Successful matches lead to the creation of database or working records 16, 17.
In one embodiment, the database/working records 16, 17 are structured as follows and illustrated in FIGS. 5a and 5b.
Keyed by the person's name are one or more employment records 16b, i.e., a different employment record 16b for each position of employment held by the subject person. Each employment record 16b has a field indicating title of the person's position and corresponding organization's/employer's name and dates that position/title was held. The employment record 16b also has a flag (bit field) 51 indicating whether this employment record represents the person's primary employment. There are also fields indicating the geographic location of the respective employer (city, state, region) and a link to personal contact data records 16e for the subject person. The contact data records 16e include the person's street address, phone number, facsimile number and email address.
Also keyed by the subject person's name are one or more education records 16c, i.e., a different education record 16c for each degree earned by the person. Each education record 16c has a respective field for indicating degree earned, major (or field of study), institution awarding the degree and graduation date.
A copy of the biographical text or original text from which Extractor 41 reaped the information for records 16a, b, c, e is stored in a record 16d. Record 16d is keyed by the subject person's name.
In a like manner for organizations, there is one working record 17a per subject organization. The main working record 17a indicates name of the organization, stock ticker symbol (if any) and a unique identification code 19 which links or points to records 16 of individuals associated with the organization. Keyed off the organization name are site records 17b containing address, phone/fax number and domain URL for each of the various sites of the organization. Product records 17c hold product information, one record 17c per product. History records 17d store organization mission statement, organization description and other historical company information in a time ordered fashion, i.e., a different record 17d for each different year of the organization's existence.
Other records 16, 17 with other fields of information are suitable.
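For illustration, the records just described might be modeled as follows; the field names are assumptions based on the description, and the actual layout is given by FIGS. 5a and 5b.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EmploymentRecord:                 # record 16b
    title: str
    organization: str
    dates: Optional[str] = None
    is_primary: bool = False            # flag 51: primary employment

@dataclass
class EducationRecord:                  # record 16c
    degree: str
    major: Optional[str] = None
    institution: Optional[str] = None
    graduation_date: Optional[str] = None

@dataclass
class PersonRecord:                     # record 16a, keyed by the person's name
    name: str
    employment: List[EmploymentRecord] = field(default_factory=list)
    education: List[EducationRecord] = field(default_factory=list)
    source_text: str = ""               # record 16d: original biographical text

@dataclass
class OrganizationRecord:               # record 17a
    name: str
    ticker: Optional[str] = None
    org_id: Optional[int] = None        # identification code 19 linking people records
```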
Returning to the information extraction step 118, additional constructions are handled as follows.
Finally, some information of interest within sentences is not expressed through syntax but simply by the concatenation of pieces of information with the use of punctuation. For instance, “Mr. John Smith, President, Acme Inc., will give a talk . . . ”. Rules are employed to recognize such occurrences. Those rules are sensitive to the succession of specific noun phrase types and punctuation within a sentence.
A person's or organization's name can appear along with relevant information on a non-sentence line separated by punctuation or formatting characters or within a succession of lines. Different methods have been devised by Applicants to construct desired database records 16, 17 from those cases.
In particular, a series of rules is utilized to express how pertinent information can appear. Those rules state the type and order of noun phrases and how to create the corresponding database records 16, 17. For instance, the succession on three different lines of a NAME, then a TITLE, and then a COMPANY can allow the creation of a work record comprising those three elements. Within some specific headers or some specific groups of lines (as recognized through the methods described in step 111), it is possible to know with more accuracy how the information is going to be presented. Rules similar to those presented are then written, but those rules apply only to specific sections.
Exemplary pseudo code for information extraction 118 in the preferred embodiment is as follows.
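As one illustration of such a rule, a minimal sketch of the line-succession case just described (NAME, then TITLE, then COMPANY on consecutive lines) follows; the typed-line representation and record fields are assumptions.

```python
def extract_work_records(typed_lines):
    """Apply the succession rule described above: a NAME line, then a
    TITLE line, then a COMPANY line yields one work record.
    typed_lines is a list of (noun_phrase_type, text) pairs."""
    records = []
    for (t1, v1), (t2, v2), (t3, v3) in zip(typed_lines,
                                            typed_lines[1:],
                                            typed_lines[2:]):
        if (t1, t2, t3) == ("NAME", "TITLE", "COMPANY"):
            records.append({"name": v1, "title": v2, "organization": v3})
    return records

lines = [("NAME", "Kobir Malesh"),
         ("TITLE", "President"),
         ("COMPANY", "Round Technologies Corp.")]
assert extract_work_records(lines) == [{"name": "Kobir Malesh",
                                        "title": "President",
                                        "organization": "Round Technologies Corp."}]
```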
Further, it is useful to associate a list 17e of keywords with each subject organization. In the preferred embodiment, the keyword computation specifies:
Pages used for keyword searching:
Keywords to retain for computation:
Keywords to retain at the end of the process:
Continuing with the post-processing of extracted information (steps 31 through 39):
Beginning with step 31, certain title modifiers are removed and the tense of records 16, 17 is determined. As information is extracted on a noun phrase basis, certain adjectival modifiers might be present at the beginning of a title. Such modifiers are inspected and, depending on their meaning, are:
Next organization names are detected in extracted job title information (step 33). That is, as information is extracted on a noun phrase basis, organization names might be included at the beginning of titles (for instance: “Acme President” and “International Robotic Association Vice-President of public relations”). Those names are recognized and separated at step 33. This is done by evaluating different split points in the title and attempting to identify the string resulting from such a split as an organization name by (a) matching with other occurrences of organization names on the page or site, (b) recognizing an organization name through semantic typing rules, (c) matching with a list of names of well-known organizations, or (d) matching the organization name against domain names appearing in URLs on the page.
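A minimal sketch of this split-point evaluation, with a simple set-membership test standing in for the four recognition strategies (a) through (d):

```python
def split_org_from_title(title: str, known_orgs: set):
    """Try every split point in the title; if the leading words are
    recognizable as an organization name, return (organization, title).
    'known_orgs' stands in for the page/site matches, semantic typing
    rules, well-known-name list, and domain checks described above."""
    words = title.split()
    for i in range(len(words) - 1, 0, -1):   # prefer the longest prefix
        prefix = " ".join(words[:i])
        if prefix in known_orgs:
            return prefix, " ".join(words[i:])
    return None, title

orgs = {"Acme", "International Robotic Association"}
assert split_org_from_title("Acme President", orgs) == ("Acme", "President")
assert split_org_from_title(
    "International Robotic Association Vice-President of public relations",
    orgs) == ("International Robotic Association",
              "Vice-President of public relations")
```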
Connecting people with company/organization through page type and headers is performed at step 35.
Deducing organization names in biographical texts follows in step 37.
Once all information for a person or organization has been extracted from a page, it is necessary to identify which of the different elements of information is the most important for this person/organization (e.g., which title is the main title for this person, which occupation is currently the most important one, or which name is the current one for this organization). Also, when there is a chronology of past employment or company history, it is necessary to order this information. This is accomplished at step 39.
First the main record 16, 17 is identified. This is based on a certain order of preference:
Next, the chronology is established. It cannot be assumed that a biography will present the order of employment in a strict, uniform fashion. Biographical texts must be analyzed to differentiate between different styles. In the preferred embodiment, step 39 does this in two prongs. In one prong, an ordering of employment at each different organization is made. This may be (i) from past to present, or (ii) from present to past. This only indicates the general order of groups of sentences related to the same organization, but not the order within each paragraph.
In the second prong, step 39 places in order the extracted titles of the subject person within the same organization. Each paragraph or group of lines can use a different style, and different paragraphs within the same biography can have different styles. There are three possibilities: (i) from most recent, (ii) from least recent, (iii) the first sentence is the most recent position, but then the text continues with the least recent and onwards.
Rules for establishing this chronology are based on keywords (e.g.: “started”, “joined”, “later”, etc.), explicit dates, and sentence construction (e.g.: “X came from Acme where . . . ”). Similarly, chronology of organizations history (events) is established through respective rules based on keywords, explicit dates and sentence construction.
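A minimal sketch of such keyword-based ordering, with an illustrative keyword table (the actual rule set also weighs explicit dates and sentence construction):

```python
ORDER_HINTS = {
    "started": "oldest-first", "joined": "oldest-first",
    "began": "oldest-first", "later": "oldest-first",
    "previously": "newest-first", "prior to": "newest-first",
    "came from": "newest-first", "before that": "newest-first",
}

def detect_chronology(sentences):
    """Vote on the ordering style of a biography paragraph based on
    ordering keywords; explicit dates would override these votes."""
    votes = {"oldest-first": 0, "newest-first": 0}
    for s in sentences:
        s = s.lower()
        for kw, style in ORDER_HINTS.items():
            if kw in s:
                votes[style] += 1
    return max(votes, key=votes.get)

bio = ["Mr. Smith joined Acme in 1990.",
       "He later became Vice-President of sales."]
assert detect_chronology(bio) == "oldest-first"
```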
Information extracted through the processes described here will contain a certain proportion of errors. Those errors can be due to a variety of sources such as orthographic and grammatical mistakes in documents, non-standard document formats, highly complex documents, etc. Many methods to detect and possibly correct errors are employed in the post-processing phase 119. These include, among others: (i) reformatting and standardization of titles, and (ii) reformatting and standardization of organization names.
Furthermore, because various methods are used to locate and link information and because each method can have different trade-offs between coverage and accuracy, it is possible to associate a confidence level with each piece of information and with the collection of information within a record 16, 17.
Referring back to the overall system, the Loader 43 operates as follows.
Each individual person can appear in multiple locations on the Web, either on several pages within a Web site or on multiple Web sites. In order to provide the maximum value in the results database 45, information about the same person collected from these different locations must be identified and combined.
The first step towards identifying two people as being the same actual person is to match the names. A name consists of five parts: a prefix (Mr., Ms., Dr., etc.), a first name (Jennifer, Jen, William, Bill, etc.), a middle name (Alex, A., etc.), a last name (Johnson, Smith, Jones, etc.), and a suffix (Jr., Sr., III, etc.).
In order for two given names of individuals to match, the last name must match exactly. The first names must either match exactly, or they must be valid aliases or “nicknames” for each other (Jim and James, for example). A list of valid first name aliases compiled from U.S. Census data is employed by loader 43.
The prefix, suffix, and middle names must not conflict, but do not necessarily need to match. This means that if one of the given names has one of these fields, but the other does not, they can match. So, Mr. Jean Smith and Jean A. Smith III are valid matches, but Mr. Jean A. Smith and Ms. Jean A. Smith are not. Similarly, abbreviations can be matched, so Jean Angus Smith and Jean A. Smith match.
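A minimal sketch of these name matching rules, assuming a small nickname table in place of the Census-derived alias list, and a rough prefix-compatibility heuristic:

```python
NICKNAMES = {("jim", "james"), ("bill", "william"), ("bob", "robert"),
             ("jen", "jennifer")}   # stand-in for the Census alias list

def _first_names_match(a, b):
    a, b = a.lower(), b.lower()
    return a == b or (a, b) in NICKNAMES or (b, a) in NICKNAMES

def _compatible(a, b):
    """Optional parts (prefix, middle, suffix) must not conflict; a
    missing part matches anything, and an abbreviation matches its
    expansion (Angus vs. A.). This heuristic is deliberately coarse."""
    if not a or not b:
        return True
    a, b = a.lower().rstrip("."), b.lower().rstrip(".")
    return a == b or a.startswith(b) or b.startswith(a)

def names_match(n1, n2):
    """n1, n2 are dicts with keys prefix, first, middle, last, suffix."""
    return (n1["last"].lower() == n2["last"].lower()
            and _first_names_match(n1["first"], n2["first"])
            and all(_compatible(n1.get(k), n2.get(k))
                    for k in ("prefix", "middle", "suffix")))

a = {"prefix": "Mr.", "first": "Jean", "middle": None, "last": "Smith", "suffix": None}
b = {"prefix": None, "first": "Jean", "middle": "A.", "last": "Smith", "suffix": "III"}
assert names_match(a, b)          # Mr. Jean Smith vs. Jean A. Smith III
b["prefix"] = "Ms."
assert not names_match(a, b)      # conflicting prefixes
```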
Once a potential match has been identified, the organization names as stored in corresponding employment records 16b must be compared to see if they match. Many organizations will have two people with the same name, so a match between the organization names and the person's name is not a 100% guarantee that they are the same person. However, the odds that both people will be found on the Internet by this system are low, so they can generally be considered to be the same person. Errors of this nature are considered acceptable.
Matching two given organization names is complicated, since IBM, International Business Machines Corporation, IBM Corp., IBM Corporation, and International Business Machines Corp. are all the same organization. The first step is to find each word in the shorter name within the longer name. Organization identifiers such as “Corporation” must be aliased so that “Corp.” and “Corporation” match each other. If all of the words in the shorter string match words in the longer string in the right order and there are no leftover words in the shorter string, they can be said to match. If there are leftover words in one string or the other (but not both) that are basic organization identifiers, like “Corporation”, they can also be said to match.
If the loader 43 does not produce a match, but the shorter name contains an acronym or a word in all capital letters, the loader 43 checks if there is a string of words in the longer name such that (i) the words start with those letters, in order, or (ii) one can construct the acronym by taking one or more letters from each word, in the right order. For example, IBM and International Business Machines Corp., or American Express and AMEX. If there is such a group of words, the given strings can be said to match.
Another test for organization name matching is to compare the organization Web site domains, if known. For example, if www.dragon.com is the Web site domain for both Dragon Systems Inc. and DSI, then it can be inferred that DSI is probably an alias of Dragon Systems Inc. (the smaller string is usually considered to be an alias of the longer string).
A person in their lifetime can be associated with several organizations. Because information on the Internet can be dated, it is important to compare all organizations that a person has worked for when trying to find a match in organizations.
Locale can also be a factor in matching organizations. Many people's organizations are mentioned in relation to their geographical location (“The Internet is extraordinary,” said Jonathan Stern, CEO of Corex Technologies in Cambridge, Mass.). If locale information for the organization is available, it must not conflict. So, “Corex” matches “Corex in Cambridge, Mass.” and “Corex in Massachusetts”, but “Corex in Trenton, N.J.” does not match “Corex in Massachusetts”.
Titles can also be written in different ways yet mean basically the same thing. For example, Vice President and VP are completely interchangeable. The loader program 43 contains a list of common shorthand for titles, including VP, CEO (for Chief Executive Officer), CIO (Chief Information Officer), etc.
In addition, words within a title can be shuffled without changing the meaning, for example: Vice President of Marketing and Marketing VP, or Director of Quality Assurance and QA Director. Titles are aliased if they have identical meaning in English, as defined by the Extractor 41.
The problem can be even bigger when the title is paraphrased. For example, the titles President and CEO are interchangeable in many small companies, and the titles Manager and Director are often swapped. For this reason, the loader program 43 also contains a list of titles that are likely to be swapped.
At the database 45 level, the same process used to conclude that two given organizations are the same can be used to tie a person to an organization as well as to another person. Information about an organization is also stored in the database 45, including the host name, the location of the organization, a description, etc. By storing the database id 19 of an organization in the records 16 of associated individuals, each person is linked to the respective organization.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
For example, the term “organization” is meant to refer to various entities such as companies, business entities, non-profit groups, associations, etc. As such, individuals associated with any such organizations may be employees, staff, officers, members, and so forth. The foregoing discussion is understood to encompass these roles/positions and broader interpretations of the terms “organization” and “employment” or relationship to an organization.
The Loader 43 may also normalize the extracted data before storing records 16, 17 in database 45. Normalizing includes case usage (upper case letters versus lower case letters), punctuation, usage of special characters, spacing and the like.
It is understood that the various described modules (crawler 11, extractor 41 and loader 43) may be implemented in various combinations, architectures and the like. Distributed processing, network processing and so forth may execute these modules. Likewise the above-described functions and operations in the preferred embodiment of extractor 41 and loader 43 are for purposes of illustration and not limitation.
A further description of the invention is found in Appendix A attached hereto.
BACKGROUND OF THE INVENTION
Searching large text repositories for relevant documents is one of the most common methods of conducting research. These text repositories range from well-defined document sets, such as the archives of a periodical publication, to haphazard or random collections, such as what is found in Wikipedia or on the Internet. Tools such as Google, FAST, and Lucene have been created to index these repositories to support keyword searches.
Users generally enter keywords into these tools that are likely to be found in a document that would contain the information they are looking for. The tools return a list of documents that contain these terms, and the user then reads the documents to try to find the information he or she is looking for. The quality of the search tool is often based on its ability to put highly relevant documents at the beginning of the results so that the user reads as few documents as possible before finding the answer they are looking for.
For this process to work effectively on very large repositories containing millions or billions of documents, there must be a document that answers the user's question in an authoritative manner. Examples include searching for the average rainfall in the Amazon rainforest, which would probably be included in a document that gives a general overview of the Amazon rainforest, or searching for the symptoms of Tuberculosis, which would probably be found in an article about the disease and its treatment.
However, there are a variety of research questions for which there may be no single document that can be expected to answer the question authoritatively. These include situations where the answer changes over time or where there is actually a set of answers that would not normally all be mentioned in a single document. For example, searching for the stock price of a company is difficult in a document repository because stock prices fluctuate on a daily basis, so any document that contains the stock price is rendered meaningless by more up-to-date documents. Similarly, if a user is trying to put together an exhaustive list of books published by Houghton Mifflin, there are thousands of documents, each of which lists a small handful of books. He or she would have to read each one to collect all the books that make up the answer.
These problems are further compounded by the fact that some of the keywords specific to the search might have alternate meanings. For example, using the keyword “Columbia” in a search might bring back articles that refer to Columbia the university, Columbia the river, Columbia the country, Columbia the space shuttle, etc. Similarly, a user searching for the current job of someone named “Alexander Brown” would need to contend with the fact that there are likely multiple people with the name “Alexander Brown”, so he or she would need to assess whether each document they look at refers to the “Alexander Brown” in question, or simply another person with that name.
In general, these types of search problems can be referred to as entity attribute searches. A user wants to understand the value of a specific attribute of an entity. An entity could be a person, a company, a place, a disease, etc. Each entity would have its own set of attributes, some of which might have more than one value. For example, a person might have a name, a title, a company, a birthplace, etc., while a disease would have a cause, symptoms, and treatments.
Existing methods of addressing these problems are of limited use. One technique is to use entity-attribute extraction, whereby an automated software tool reads all of the documents and identifies entity-attribute relationships. Rather than reading all of the documents to see if the desired entity-attribute relationship is found, the user can just look at the extracted entity-attribute relationships for the specific answer. For example, a tool might extract products and their manufacturers, allowing a user to save time by searching the extracted relationships for the products of a specific manufacturer. However, the user must still worry about the possibility that there is more than one manufacturer with the same name (for example, there is more than one company named Universal Plumbing Supply). Also, if the attribute is something that is expected to change over time, the user must still resolve conflicting answers by considering the source documents to decide which is right.
Entity-attribute extraction also presents its own specific challenges of data interpretation. One issue is that there may be more than one way to refer to the same entity, making simple tools to remove duplications ineffective. Apple Computer might be found to manufacture “iPods”, “portable music players”, and “mp3 players”, but these in fact all refer to the same product. Also, there are many potentially ambiguous situations which could result in erroneous records, such as the sentence: “The company began manufacturing liquid crystal displays in January of 2005.” It is unclear what entity is being referred to by “the company.”
A second technique to improve the user's search experience is document co-referencing, where documents that refer to the same entity are grouped together. In this situation, a search for “Alexander Brown” returns several sets of document results, with each set referring to a different Alexander Brown. This reduces the number of documents that a user must consider, but the user still needs to review each document to search for relevant entity-attribute relationships. Furthermore, when the entity is heavily referenced in the document repository, the user still needs to review a substantial number of documents to find the answer. For example, when trying to find a list of all of the television appearances of a popular film star, the user will still need to sort through a massive number of documents to compile a complete list.
A system for Authoritative Conciliation of Entities moves beyond entity extraction and document co-referencing by examining the relationships between the various attributes of entities, conciliating similar information, resolving contradictions and establishing timelines in order to produce an organized, meaningful set of well-defined global entities, each with a high-level set of distinct conciliated attributes drawn from a large number of sources. Users can then search this authoritative entity repository without the need to review large sets of documents, allowing them to answer a wide range of entity-attribute related questions quickly and easily.
This invention makes use of an input set of entities and attributes, such as could be obtained using an automatic extraction system from a document repository, as is described in [patent: Computer method and apparatus for extracting data from web pages]. At minimum, each entity must contain at least one attribute. To support timeline creation, the entities must also contain dating information, either from the source document as a whole or as part of the attribute.
The invention for Authoritative Conciliation of Entities comprises the steps of:
This method works on a multitude of entities and associated attributes, although different techniques are used depending on attribute type. For most effective use of the invention on a specific entity-attribute domain, each attribute that will be authoritatively conciliated should be specified with the following behaviors:
Entities exist as a set of attributes and values that refer to an entity of a specific type. For example, an entity representing a businessperson might contain attributes such as a first name, last name, title, company name, and location, while an entity representing a company might have attributes such as a company name, address, industry, and product. Entities may also have an associated source document, such as for automatically extracted entities, which in turn may have its own attributes, such as publishing date, title, etc.
In order to automatically combine related entities and generate authoritative attribute representations, attributes from different entities must be compared to see if they are compatible. This is often non-trivial, as there can be many ways to refer to the same attribute, particularly when extracting the attributes from human-written source documents. For example, the headquarters location of Microsoft can be described as “Redmond, Wash.”, “Seattle, Wash.” (the larger municipality of which Redmond is a suburb), and the “Northwestern United States”. While the three values appear to be contradictory, they refer to the same place at different levels of precision. Similarly, a person who sits on a board of directors might be referred to as “Director”, “Board Member” or “Chairman” depending on the context.
For each attribute, a grammar or rule set must be created to determine how to compare attributes, decide if they are the same or not, and choose a representative form. In some cases, this grammar can be very simple, such as matching first names by providing a dictionary of well-known nicknames (e.g. Robert can be referred to as Rob, Bob, Robby, Bobby, etc.). More complex dictionaries can be built for richer data sets, such as a database of cities that includes fields for metro area, state, region, country, and continent.
Other attributes are more complicated and need an actual grammar in order to properly match values that were never seen before. For example, “tractors”, “yellow tractors”, and “big yellow tractors” are all ways of referring to a single product, but “car” and “car repair tools” are separate products. Each type of attribute will have its own grammar. Below are example grammars for various attributes of entities that might be extracted from a document repository:
The grammars must also be able to describe how to handle compound forms of attributes. For example, longer titles might actually be a combination of multiple shorter titles, such as “President and CEO”. In some cases, the grammar may allow for more than one interpretation of compound attributes, leading to ambiguity. For example, “mechanical and electrical engineering” can be interpreted as “mechanical engineering and electrical engineering”, but “yogurt and ice cream” is not interpreted as “yogurt cream and ice cream”. In cases of ambiguity, the best solution is to retain both possible interpretations, and then reconcile with other examples of the attribute at a later stage of conciliation to see if non-compound forms exist. For example, if “yogurt cream” is never found as an attribute for the entity on its own, but “yogurt” is found, then the correct interpretation can be deduced.
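A minimal sketch of two such comparison rules, using a nickname dictionary and compound-title expansion as illustrative examples (the actual grammars are attribute-specific and considerably richer):

```python
NICKNAMES = {"bob": "robert", "rob": "robert", "bobby": "robert",
             "bill": "william"}   # illustrative subset of a nickname dictionary

def first_name_key(name: str) -> str:
    """Canonicalize a first name through the nickname dictionary so
    that 'Bob' and 'Robert' compare as the same value."""
    n = name.lower()
    return NICKNAMES.get(n, n)

def expand_compound_title(title: str) -> list:
    """Split compound titles like 'President and CEO' into their
    component titles for individual comparison."""
    return [part.strip()
            for part in title.replace(",", " and ").split(" and ")
            if part.strip()]

assert first_name_key("Bob") == first_name_key("Robert")
assert expand_compound_title("President and CEO") == ["President", "CEO"]
```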
After rules or grammars for comparing attributes are created, each attribute must be categorized based on its effectiveness in grouping entities together. The possible categories are:
Minimally Required attributes represent data that must match for any entities to combine. In the case of people, the names must match in order to combine two people entities. However, minimally required attributes are generally not sufficient on their own for combination; many people can have the same name, for example.
Sufficient attributes represent data that can be used to group entities together, as long as the Minimally Required attributes match. For companies, the location attribute is Sufficient, meaning that two companies with the same name and the same location can be considered the same company. For people, the company attribute is Sufficient, meaning that two people with the same name and the same company can be considered the same person.
Contingent attributes represent data that could possibly be used to group entities together in the absence of contradictory information. These are often useful in data sets that contain incomplete records, such as a person with a title but no company, or a company with an industry but no location. If two entities have a match on a Contingent attribute, combination should only be allowed if they do not have conflicting Sufficient attributes. For example, two people with the same name and same title could be the same person, provided that they do not work for two separate companies.
Insufficient attributes represent data that is shared by a significant number of entities or is highly mutable, and is thus not useful for grouping.
Once each attribute has been assigned a category, it is possible to group entities together. Each entity is compared to other entities that have matching Minimally Required attributes, and if they are found to have matching Sufficient attributes as well, they are grouped together. As a second stage, entities that have already been grouped together are compared with entities that did not group during the first stage. If they share Minimally Required attributes and Contingent attributes without Sufficient attribute conflicts, they are also grouped together.
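The two-stage grouping described above might be sketched as follows; the entity representation (plain dictionaries) and the helper names are assumptions for demonstration:

```python
from itertools import combinations

def same(a: dict, b: dict, key: str) -> bool:
    return key in a and key in b and a[key] == b[key]

def conflict(a: dict, b: dict, key: str) -> bool:
    return key in a and key in b and a[key] != b[key]

def group_entities(entities, minimally_required, sufficient, contingent):
    groups = [[e] for e in entities]

    def merge_all(match):
        merged = True
        while merged:
            merged = False
            for g1, g2 in combinations(groups, 2):
                if any(match(a, b) for a in g1 for b in g2):
                    g1.extend(g2)
                    groups.remove(g2)
                    merged = True
                    break

    # Stage 1: Minimally Required attributes plus a matching Sufficient attribute.
    merge_all(lambda a, b: all(same(a, b, k) for k in minimally_required)
                           and any(same(a, b, k) for k in sufficient))
    # Stage 2: Contingent matches, allowed only without Sufficient conflicts.
    merge_all(lambda a, b: all(same(a, b, k) for k in minimally_required)
                           and any(same(a, b, k) for k in contingent)
                           and not any(conflict(a, b, k) for k in sufficient))
    return groups

people = [
    {"name": "Jane Smith", "company": "Acme"},
    {"name": "Jane Smith", "company": "Acme", "title": "CFO"},
    {"name": "Jane Smith", "title": "CFO"},  # no company: groups in stage 2
]
print(group_entities(people, ["name"], ["company"], ["title"]))
```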
Grouping entities together must avoid problems of overcombination (grouping together two entities that were not really the same) and undercombination (allowing two entities to remain distinct that were really the same). In order to minimize the frequency of these problems, the grouping stage can be further sub-divided into a series of rules, where the strength of the match of two attributes is considered as well. Rules with extremely high matching requirements are run first, and then increasingly looser matches are allowed with each pass. For example, when combining people, exact first name matches might be required in an earlier rule, and nickname matches (e.g., “Bob” and “Robert”) are allowed only in later rules. Restrictions can be placed on the later rules to prevent riskier rules from combining large groups, where group size can be measured by the number of sources, the number of distinct attribute values with a Sufficient category, etc.
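One possible sketch of such a rule cascade, with illustrative rules and an assumed size restriction on the riskier pass:

```python
def run_cascade(groups, rules):
    """rules: list of (match_fn, max_group_size), ordered strictest-first."""
    for match_fn, max_size in rules:
        merged = True
        while merged:
            merged = False
            for i, g1 in enumerate(groups):
                for g2 in groups[i + 1:]:
                    # Riskier (later) rules are barred from merging large groups.
                    if max_size is not None and (len(g1) > max_size or len(g2) > max_size):
                        continue
                    if any(match_fn(a, b) for a in g1 for b in g2):
                        g1.extend(g2)
                        groups.remove(g2)
                        merged = True
                        break
                if merged:
                    break
    return groups

rules = [
    (lambda a, b: a["first"] == b["first"], None),                    # pass 1: exact
    (lambda a, b: {a["first"], b["first"]} == {"Bob", "Robert"}, 5),  # pass 2: nicknames
]
print(run_cascade([[{"first": "Robert"}], [{"first": "Bob"}]], rules))
```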
One simple optimization to reduce computational complexity is to organize entities into buckets based on Minimally Required attributes, thereby reducing the number of entities that need to actually be compared with each other. For example, grouping of people entities could operate in segments based on last name, allowing the process to be more easily subdivided across multiple computers.
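A minimal sketch of this bucketing optimization, assuming last name is a Minimally Required attribute:

```python
from collections import defaultdict

def bucket_by(entities, key):
    """Partition entities so comparisons happen only within a bucket;
    buckets can also be farmed out to different machines."""
    buckets = defaultdict(list)
    for e in entities:
        buckets[e.get(key, "").lower()].append(e)
    return buckets

people = [{"last": "Smith"}, {"last": "Jones"}, {"last": "smith"}]
buckets = bucket_by(people, "last")
# Only the two Smith entities are ever compared with each other.
```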
Once entities have been grouped, individual attributes must be conciliated to arrive at the correct interpretation of the data. Different values can be present for the same attribute for several reasons: the values may describe the same fact at different levels of precision, the underlying fact may have changed over time, or some of the extracted values may simply be erroneous.
The first step is to group attribute values based on the grammar rules for that attribute, assigning each value a weight based on the number of appearances. Because the grammars may allow for multiple levels of specificity, more specific values can lend their weight to more general values. For example, if, for a product attribute, a company has 3 instances of “laser printers”, 9 instances of “ink jet printers”, and 7 instances of just “printers”, the “printers” weight could be increased from 7 to 19, because the first two products are also printers. In some cases, when there is a well-established hierarchy for the attribute, it may be desirable to eliminate more general values when preferable specific values are present. For example, if the target for location information is to find a city, then “Northwestern United States” values can be dropped in favor of “Seattle, Wash.”, because the two values are potentially compatible.
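The specificity-aware weighting of the printers example might be sketched as follows, using a suffix-style notion of generality as an assumption:

```python
from collections import Counter

def is_more_general(general: str, specific: str) -> bool:
    """True when 'specific' ends with the tokens of 'general' plus extra modifiers."""
    g, s = general.lower().split(), specific.lower().split()
    return len(g) < len(s) and s[len(s) - len(g):] == g

def weighted_values(counts: Counter) -> dict:
    weights = dict(counts)
    for specific, n in counts.items():
        for general in counts:
            if is_more_general(general, specific):
                weights[general] += n  # specific values lend weight to general ones
    return weights

counts = Counter({"laser printers": 3, "ink jet printers": 9, "printers": 7})
print(weighted_values(counts))
# {'laser printers': 3, 'ink jet printers': 9, 'printers': 19}
```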
Once each value for the attribute is sufficiently weighted, it is possible to order the values from most confident to least confident. This ordering can be optimized in many ways, such as detecting duplicate documents and reducing their weight accordingly. For example, a press release may be reprinted in many places and may improperly skew the weights if not detected.
Even the most confident value may change over time. For example, Jack Welch, the former chairman of General Electric, will have “Chairman of General Electric” strongly dominate the score for his job due to the preponderance of data associated with this relationship. Mr. Welch's post-GE role as a consultant for Fortune 500 business CEOs is less newsworthy and will continue to have a lower weight, even if it is well represented in the data.
In order to account for changes in the data over time, information about the source must also be considered. Available information about the sources will vary, but it may be possible to consider some sources more important than others. For example, information from a reputable news organization might be given stronger consideration than information from a personal website. Similarly, more recent information might be considered to be more important than information published several years ago.
Document importance can thus be factored into the weights. Exactly how strongly to factor in this data depends on the nature of the information and how quickly it can be expected to change. For example, a person may switch jobs very quickly, so appearing in a recent document from an important source would be expected to factor quite strongly, whereas a company's products change much more slowly, so scores would be impacted only if there have been no mentions of the product for some time.
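One way to sketch this weighting is to scale each mention by source importance and a per-attribute recency half-life; the particular half-life values below are illustrative assumptions about how quickly each attribute changes:

```python
from datetime import date

# Assumed half-lives: job titles change quickly, products slowly.
HALF_LIFE_DAYS = {"title": 365, "product": 5 * 365}

def mention_weight(attribute: str, source_importance: float,
                   published: date, today: date) -> float:
    age = (today - published).days
    half_life = HALF_LIFE_DAYS.get(attribute, 2 * 365)
    return source_importance * 0.5 ** (age / half_life)

# A recent mention in a reputable source outweighs an old personal-site one:
recent = mention_weight("title", 1.0, date(2006, 1, 1), date(2006, 5, 1))
old = mention_weight("title", 0.3, date(2003, 1, 1), date(2006, 5, 1))
print(recent > old)  # True
```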
Once the final weighting is achieved, the value with the highest score can be selected as the primary value. If the attribute should only have one value, as would be the case for values like a company's stock ticker symbol or a person's name, remaining values can be ignored.
If a field can have multiple values, a variety of metrics can be used to decide which values to keep for the authoritative record. The first aspect to consider is records that are grammatically compatible, such as a product of “printers” versus a product of “laser printers”. If more specific values are considered more desirable, then more general records can be dropped in favor of more specific records.
In order to clean potentially erroneous records, the score of weaker values can be compared to the highest score value, and an appropriate threshold to discard records can be determined experimentally, based on the desired use case.
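A minimal sketch of such threshold-based cleanup, where the cutoff ratio would be tuned experimentally:

```python
def prune(weights: dict, ratio: float = 0.1) -> dict:
    """Discard values scoring below `ratio` of the top score."""
    top = max(weights.values())
    return {v: w for v, w in weights.items() if w >= top * ratio}

print(prune({"printers": 19, "laser printers": 3, "fax machines": 1}))
# "fax machines" (1 < 1.9) is dropped as likely noise; "laser printers" survives.
```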
For values that have a timeline associated with them, such as a person's job history or a company's merger and acquisition history, dates within documents or specific timing values associated with the data can be used to create an ordered timeline. If dates are available, a date can be assigned to each value, based on its first seen, last seen, or average date (selected as appropriate), and values can be ordered based on these dates. Also, some values may contain explicit time-bound relationships to other values, such as extracting a person's job from a sentence. For example, the sentence, “Before Akamai, Ms. Smith worked at Digital Equipment Corp.”, clearly indicates that references to “Digital Equipment Corp.” come before references to “Akamai”, even if no dates are available.
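A sketch of timeline construction combining assigned dates with explicit before/after relations (the relation extraction itself is assumed to have happened upstream):

```python
from datetime import date

def order_timeline(dated_values, before_relations):
    """dated_values: {value: date}; before_relations: [(earlier, later)]."""
    ordered = sorted(dated_values, key=lambda v: dated_values[v])
    for earlier, later in before_relations:
        # Explicit relations override misleading document dates.
        if earlier in ordered and later in ordered and \
           ordered.index(earlier) > ordered.index(later):
            ordered.remove(earlier)
            ordered.insert(ordered.index(later), earlier)
    return ordered

jobs = {"Akamai": date(2000, 1, 1), "Digital Equipment Corp.": date(2001, 6, 1)}
print(order_timeline(jobs, [("Digital Equipment Corp.", "Akamai")]))
# -> ['Digital Equipment Corp.', 'Akamai'] despite the misleading dates
```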
This process is repeated for each attribute possessed by the entity, resulting in an authoritative conciliation of the entity. These new representations can then be loaded into a database or other search tool, allowing a user to query for entities by attribute. Rather than displaying the source data, entities can now be displayed as summaries of their associated attributes.
For example, records extracted about people with their title and company could be grouped using the names of companies, and then conciliated by comparing the document sources, frequencies and dating information from the documents to select the authoritative current job. This data can then be loaded into a database, allowing a user to search for people based on their name, title, or company. Rather than seeing many duplicate records or conflicting pieces of information, each person is represented as a single, well-defined entity, allowing the user to concentrate on their original search challenge.
This application is a continuation-in-part of U.S. application Ser. No. 09/910,169, filed Jul. 20, 2001 and U.S. application Ser. No. 09/918,312 filed Jul. 30, 2001, which claim the benefit of U.S. Provisional Application No. 60/221,750 filed on Jul. 31, 2000. The entire teachings of the above applications are incorporated herein by reference.
Number | Date | Country
---|---|---
60221750 | Jul 2000 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 09910169 | Jul 2001 | US
Child | 11436370 | May 2006 | US
Parent | 09918312 | Jul 2001 | US
Child | 11436370 | May 2006 | US