Generally speaking, a global computer network, e.g., the Internet, is formed of a plurality of computers coupled to a communication line for communicating with each other. Each computer is referred to as a network node. Some nodes serve as information bearing sites, while other nodes provide connectivity between end users and the information bearing sites.
The explosive growth of the Internet has made it an essential component of the strategy of every business, organization and institution, and has led to massive amounts of information being placed in the public domain for people to read and explore. The type of information available ranges from information about companies and their products, services, activities, people and partners, to information about conferences, seminars and exhibitions, to news sites, to information about universities, schools, colleges, museums and hospitals, to information about government organizations, their purpose, activities and people. The Internet has become the venue of choice for organizations to provide pertinent, detailed and timely information about themselves, their cause, services and activities.
The Internet essentially is nothing more than the network infrastructure that connects geographically dispersed computer systems. Every such computer system may contain publicly available (shareable) data accessible to users connected to this network. However, until the early 1990s there was no uniform way or standard convention for accessing these data. Users had to employ a variety of techniques to connect to remote computers (e.g., telnet, ftp, etc.) using passwords that were usually site-specific, and they had to know the exact directory and file name that contained the information they were looking for.
The World Wide Web (WWW or simply Web) was created in an effort to simplify and facilitate access to publicly available information from computer systems connected to the Internet. A set of conventions and standards was developed that enables users to access every Web site (a computer system connected to the Web) in the same uniform way, without the need to use special passwords or techniques. In addition, Web browsers became available that let users navigate easily through Web sites by simply clicking hyperlinks (words or sentences connected to some Web resource).
Today the Web contains more than one billion pages that are interconnected with each other and reside in computers all over the world (thus the term “World Wide Web”). The sheer size and explosive growth of the Web has created the need for tools and methods that can automatically search, index, access, extract and recombine information and knowledge that is publicly available from Web resources.
The following definitions of commonly used terms are used herein.
Web Domain
Web domain is an Internet address that provides connection to a Web server (a computer system connected to the Internet that allows remote access to some of its contents).
URL
URL stands for Uniform Resource Locator. Generally, URLs have four parts: the first describes the protocol used to access the content pointed to by the URL, the second names the Web domain on which the content resides, the third contains the directory in which the content is located, and the fourth contains the file that stores the content:
<protocol>://<domain>/<directory>/<file>
For example:
http://www.example.com/products/index.html
Commonly, the <protocol> part may be missing. In that case, modern Web browsers access the URL as if the http:// prefix were used. In addition, the <file> part may be missing. In that case, the convention calls for the file "index.html" to be fetched.
For example, the following are legal variations of the previous example URL:
www.example.com/products/index.html
http://www.example.com/products/
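These defaulting conventions are mechanical enough to capture in a few lines of code. The following is a minimal sketch; the function name, the example.com URLs and the choice of Python are illustrative assumptions, not part of the disclosure:

```python
from urllib.parse import urlparse

def normalize_url(url: str) -> str:
    """Apply the two defaulting conventions described above: a missing
    <protocol> defaults to http://, and a missing <file> defaults to
    index.html."""
    # Default protocol: treat "www.example.com/..." as "http://www.example.com/...".
    if "://" not in url:
        url = "http://" + url
    parsed = urlparse(url)
    path = parsed.path or "/"
    # Default file: a URL that ends in a directory fetches "index.html".
    if path.endswith("/"):
        path += "index.html"
    return f"{parsed.scheme}://{parsed.netloc}{path}"

# Both legal variations resolve to the same canonical URL:
assert normalize_url("www.example.com/products/index.html") == \
       normalize_url("http://www.example.com/products/")
```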
Web Page
Web page is the content associated with a URL. In its simplest form, this content is static text stored in a text file indicated by the URL. However, the content very often includes multimedia elements (e.g., images, audio, video, etc.) as well as non-static text or other elements (e.g., news tickers, frames, scripts, streaming graphics, etc.). Very often more than one file forms a Web page; however, only one file is associated with the URL, and that file initiates or guides the Web page generation.
Web Browser
Web browser is a software program that allows users to access the content stored in Web sites. Modern Web browsers can also create content "on the fly", according to instructions received from a Web site. This concept is commonly referred to as "dynamic page generation". In addition, browsers can commonly send information back to the Web site, thus enabling two-way communication between the user and the Web site.
Every Web site publishes its content packaged in one or more Web pages. Typically, a Web page contains a combination of text and multimedia elements (audio, video, pictures, graphics, etc.) and has a relatively small and finite size. There are of course exceptions, most notably pages that contain streaming media, which may appear to have "infinite" size, and pages that are produced dynamically, "on the fly". However, even in those cases, there is some basic HTML code that forms the infrastructure of the page and that may download or produce its contents dynamically.
In general, it is most useful to identify the contents of "static" pages, which are less likely to change over time and which can be downloaded into local storage for further processing. When the contents of a page are known, special data extraction tools can be used to detect and extract relevant pieces of information. For example, a page identified as containing contact information may be passed to an address extraction tool; pages that contain press releases may be given to search engines that index news; and so on. Furthermore, automatically identifying the content type may be useful in "filtering" applications, which filter out unwanted pages (e.g., porn filters). Simple filters used today work mostly on the basis of keyword searches. The current invention, however, uses a much more sophisticated and generic technique, which combines several test outcomes and their statistical probabilities to produce a list of potential content types, each given with a specific confidence level.
There are several applications that can significantly benefit from automatic Web page content identification; for example, see Inventions 4, 5 and 6 as disclosed in the related Provisional Application No. 60/221,750 filed on Jul. 31, 2000 for a “Computer Database Method and Apparatus”.
The purpose of this invention is to automatically identify and classify the contents of a Web page among some specific types, by assigning a confidence level to each type. For example, given the following list of potential content types:
{Contact Information, Press Release, Company Description, Employee List, Other}
The present invention analyzes the contents of some random Web page and produces a conclusion similar to the following:
{Contact Information: 93%, Press Release: 2%, Company Description: 26%, Employee List: 7%, Other: 11%}
This conclusion presents the probabilities that the given Web page contains each one of the pre-specified potential content types. In the above example, there is 93% probability that the given Web page contains contact information, 2% probability that it contains a press release, 26% that it contains company description, 7% that it contains an employee list, and 11% that its content actually does not fit in any of the above types.
The present invention method includes the steps of: (a) providing a predefined set of potential content types; (b) running on the subject Web page a plurality of tests whose results provide evidence of each potential content type; and (c) combining the test results so as to assign, for each potential content type, a respective probability that at least some contents of that type exist on the subject Web page.
Apparatus embodying the present invention thus includes a predefined set of potential content types and a test module utilizing the predefined set. The test module employs a plurality of processor-executed tests having test results which enable, for each potential content type, quantitative evaluation of at least some contents of the subject Web page being of the potential content type. For each potential content type, the test module (i) runs at least a subset of the tests, (ii) combines the test results, and (iii) assigns a respective probability that at least some contents of that type exist on the subject Web page.
The set of potential content types includes one or more of the following: contact information, press releases, company descriptions, employee lists, company locations, company products and conference information.
In a preferred embodiment of the present invention, the step of combining includes producing, for each potential content type, a respective confidence level that at least some content of the subject Web page is of that type. Further, a Bayesian network is used to combine the test results. The Bayesian network is trained using a training set of Web pages with respective known content types, such that statistics on the test results are collected over the training set of Web pages.
In accordance with one aspect of the present invention, the tests involve:
(i) determining whether a predefined piece of data or keyword appears in the page (e.g., people names, telephone numbers, etc.),
(ii) examining syntax or grammar or text properties (e.g., number of passive sentences, number of sentences without a verb, percentage of verbs in past tense, etc.),
(iii) examining the page format and style (e.g., number of fonts used, existence of tables, existence of bullet lists, etc.),
(iv) examining the links in the page (e.g., number of internal links, number of external links, number of links to media files, number of links to other pages, etc.), and/or
(v) examining the links that refer to this page (e.g., number of referring links, keywords in referring links, etc.). A sketch illustrating a few such tests follows this list.
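For illustration only, tests of categories (i) through (iv) might be implemented along the following lines; the function names, regular expressions, thresholds, and the use of Python are assumptions of this sketch rather than part of the disclosure:

```python
import re

def test_has_phone_number(page_text: str) -> bool:
    """Category (i): does a predefined piece of data appear on the page?
    Here, a simple North American telephone-number pattern."""
    return re.search(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]\d{4}", page_text) is not None

def test_mostly_past_tense(sentences: list[str]) -> bool:
    """Category (ii): a crude grammar property -- do many sentences
    contain a regular past-tense verb form?"""
    past = sum(bool(re.search(r"\b\w+ed\b", s)) for s in sentences)
    return past / max(len(sentences), 1) > 0.3

def test_uses_tables(html: str) -> bool:
    """Category (iii): page format and style -- does the page use tables?"""
    return "<table" in html.lower()

def test_many_external_links(html: str, domain: str) -> bool:
    """Category (iv): links in the page -- more than ten links pointing
    outside the page's own domain."""
    links = re.findall(r'href="(https?://[^"]+)"', html, flags=re.IGNORECASE)
    return sum(domain not in link for link in links) > 10
```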
In accordance with another aspect of the present invention, storage means (e.g., a database) receives and stores indications of the assigned probabilities of each content type per Web page as determined by the test module. The storage means thus provides a cross reference between a Web page and respective content types of contents found on that Web page.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Every Web page is simply a container of information. There are no restrictions or standard conventions about the type of information it contains, its style, or its format. Therefore it is very difficult for computer programs to automatically extract information from a random Web page, since there are no rules or standards that could help them locate the information, or simply determine if the information exists at all.
The present invention helps solve the second part of this problem, namely to determine what kind of information a given Web page contains. Once the content type of a Web page is known, specialized techniques can be used to locate and extract this content (e.g. see Inventions 5 and 6 as disclosed in the related Provisional Application No. 60/221,750 filed on Jul. 31, 2000 for a “Computer Database Method and Apparatus”).
A simplistic approach to identifying the contents of a Web page is to develop and use a set of rules similar to the following:
If a page contains long paragraphs, and it contains at least one stock ticker symbol, and it contains a section entitled “About . . . ”, and it contains at least one phone number or at least one address at the end, then it is a press release.
However, there are several problems associated with developing and using these kinds of rules, for example:
The present invention circumvents all these problems by replacing the rule-based approach with a series of tests, statistical training, and mathematical combination of the test results to produce a list of the potential content types and an accurate measure of the confidence level associated with each type. In a preparation phase, the user defines the set of content types that the invention must recognize within Web pages, and prepares tests that provide evidence about one or more of these types. Next is a training phase. During this phase, the user runs all the tests on a set of Web pages with known content types. Then, the results of the tests are used to calculate statistical conditional probabilities of the form P(Test result|Hypothesis), i.e. the probability that a particular test result will appear for a particular test, given a particular hypothesis. The resulting table with probabilities can then be used for classification. Finally, in use, the user runs the tests prepared in the preparation phase on a subject Web page with unknown content types and collects the test results. Then, the user combines the test results using the probabilities from the training phase and calculates a confidence level for each of the potential content types, as they have been identified during the preparation phase.
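For concreteness, the statistics-gathering step of the training phase can be sketched in a few lines. Everything here is an illustrative assumption (the data layout, the function name, the use of Python); the disclosure itself prescribes only that conditional probabilities of the form P(Test result|Hypothesis) be estimated from the labeled training pages:

```python
from collections import defaultdict

def train(labeled_pages, content_type):
    """Estimate P(test = outcome | H) for the two-valued hypothesis
    H = "the page contains content of type `content_type`".

    labeled_pages: iterable of (results, label) pairs, where results is
    a dict {test_name: outcome} and label is the page's known content
    type. Assumes the training set contains pages of both classes.
    """
    counts = {True: defaultdict(lambda: defaultdict(int)),
              False: defaultdict(lambda: defaultdict(int))}
    totals = {True: 0, False: 0}
    for results, label in labeled_pages:
        h = (label == content_type)
        totals[h] += 1
        for test, outcome in results.items():
            counts[h][test][outcome] += 1
    # Turn raw co-occurrence counts into the conditional probabilities
    # P(test = outcome | H = h) used later for classification.
    return {h: {test: {out: n / totals[h] for out, n in outs.items()}
                for test, outs in counts[h].items()}
            for h in (True, False)}
```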
More specifically, the current invention uses the following steps:
A. Preparation
B. Training
C. Classification
With reference to
Yet another user may need to identify pages that contain conference-related information. In that case, the potential content types 10 may be the following:
The next step in preparation phase 24 is to prepare tests 15 that provide some evidence whether a page contains some of these content types 10 or not. For example, the following tests may be used to provide evidence about whether a page contains a press release or not:
Note that all these are binary tests, with two possible outcomes, "True" or "False". However, tests with more than two possible outcomes may also be used, such as the following:
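As a purely hypothetical illustration of such a multi-outcome test (the concrete pattern and bucketing are assumptions of the sketch, in the spirit of the multi-valued tests T15 through T21 in the worked example later):

```python
import re

MONTHS = (r"(January|February|March|April|May|June|July|August|"
          r"September|October|November|December)")

def test_date_count(page_text: str):
    """A test with more than two possible outcomes: count the fully
    spelled-out dates (e.g. "July 31, 2000") on the page and report
    1, 2, 3, 4 (meaning four or more), or False when none is found."""
    n = len(re.findall(MONTHS + r"\s+\d{1,2},\s+\d{4}", page_text))
    return False if n == 0 else min(n, 4)
```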
In general, the tests 15 may be anything that helps differentiate between two or more of the given types 10. For example, some possible types of tests 15 are the following:
The particular kind of tests 15 to develop and use depends, of course, on the task at hand, i.e. the kind of page content that the user is interested in identifying.
Now turning to
{Company Description, Company Locations, Company Products, Other} then a training set 23 of a few hundred Web pages is collected and the content type of each one is identified (step 20 in
In general, the accuracy of the classification achieved by this invention increases as the training set 23 becomes larger and more representative of the "real world". The ideal training set 23 is a random sample of a few hundred to a few thousand pages (the actual number depends on the number of target types and on how easily they are distinguishable from each other).
With a training set 23 in hand, the actual training phase/module 50 consists of the following steps as illustrated in
The test results 22 and the conditional probabilities 27 connected with each result provide evidence about the possibility that the page contains each one of the target content types 20. But a tool is still needed to combine and weight all these pieces of evidence, and produce the final conclusion. The mathematical tool that is used by the present invention is based on the concept of Bayesian Networks and is illustrated in
Bayesian Networks have emerged during the last decade as a powerful decision-making technique. A Bayesian Network is a statistical construct that can combine the outcomes of several tests, chaining probabilities to produce an optimal decision based on the given test results.
Bayesian Networks come in many forms; however, their basic building block is Bayes' theorem:

P(H|T) = P(T|H)·P(H)/P(T)

where H is a hypothesis and T is an observed test result.
One of the simplest types of Bayesian Networks is the Naïve Bayesian Network, which is based on the assumption that the tests are conditionally independent given the hypothesis; this simplifies the calculations considerably. In Naïve Bayesian Networks, the formula that calculates the probability of hypothesis Hi given the test results T1, T2, . . . , TN is the following:

P(Hi|T1, T2, . . . , TN) = Fi/(F1+F2+ . . . +FM)

where, for each of the M competing hypotheses Hj:

Fj = P(Hj)·P(T1|Hj)·P(T2|Hj)· . . . ·P(TN|Hj)
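This formula maps directly onto a few lines of code. The sketch below computes the posterior for one two-valued hypothesis from a conditional-probability table of the shape produced by the training sketch above; the smoothing constant for unseen outcomes is an added assumption:

```python
def confidence(test_results, table, prior=0.5):
    """Naive-Bayes combination: return P(H = True | test results).

    test_results: dict {test_name: observed outcome}
    table: table[h][test_name][outcome] = P(test = outcome | H = h)
    """
    f = {}
    for h in (True, False):
        p = prior if h else 1.0 - prior
        for test, outcome in test_results.items():
            # Multiply in P(T = outcome | H = h); smooth unseen outcomes.
            p *= table[h].get(test, {}).get(outcome, 1e-6)
        f[h] = p
    return f[True] / (f[True] + f[False])
```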
In order to produce a conclusion and the overall confidence level associated with each content type, several Bayesian Networks are used, one for every content type. Each Bayesian Network is capable of detecting the existence of one type of content, based on the test results. The output of each Bayesian Network is a probability, or confidence level, that the given page contains that type of content.
For example, to distinguish between the types {Company Description, Company Locations, Company Products, Other}, the following Bayesian Networks are used:
1. Company Description Bayesian Network
2. Company Locations Bayesian Network
3. Company Products Bayesian Network
The Company Description Bayesian Network has the following hypothesis:
Hypothesis: the given Web page contains a company description. This hypothesis has two possible values, True or False. Passing the test results through this Bayesian Network produces a value between 0 and 1, which corresponds to the probability that the hypothesis is True. For example, if this Bayesian Network outputs 0.83, there is an 83% confidence level that the given page contains a company description.
The other Bayesian Networks are used in the same way, and the end result is an array of values that correspond to confidence levels about the existence of the target content types.
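A compact sketch of this final step, under the same illustrative assumptions as the earlier code (one trained table and one prior per content type):

```python
def classify(test_results, tables, priors):
    """Run one Naive Bayesian Network per target content type and return,
    for each type, the confidence that the page contains such content."""
    return {ctype: confidence(test_results, tables[ctype], priors.get(ctype, 0.5))
            for ctype in tables}
```

Note that because each network evaluates its own independent hypothesis, the resulting confidence levels need not sum to 100%, exactly as in the example conclusion given earlier.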
Referring to
Illustrated in
In
The Bayesian Network module 52 implements step C (classification) above as previously discussed in conjunction with
As a comprehensive example, a Bayesian Network that recognizes pages that contain press release content is presented next.
The example Bayesian Network identifies whether a page contains press release content. Therefore the content types of interest are simply: {Press Release, Other}
In order to distinguish between these two target types, one Bayesian Network with the following hypothesis is sufficient: the given Web page contains press release content, with two possible values, True or False.
The following tests are defined to offer evidence regarding this hypothesis:
Staff Headers
Board Headers
A training set of 1,688 sample Web pages was used, of which 597 had press release content and 1,091 did not. For example, some of the pages used were the following:
Running all the defined tests on these sample Web pages and calculating the probabilities of occurrence for each test outcome, the following probabilities table is obtained:
P(H=True)=0.500000
P(H=False)=0.500000
P(T1=True|H=True)=0.986600
P(T1=True|H=False)=0.954170
P(T1=False|H=True)=0.013400
P(T1=False|H=False)=0.045830
P(T2=True|H=True)=0.383585
P(T2=True|H=False)=0.342805
P(T2=False|H=True)=0.616415
P(T2=False|H=False)=0.657195
P(T3=True|H=True)=0.276382
P(T3=True|H=False)=0.253896
P(T3=False|H=True)=0.723618
P(T3=False|H=False)=0.746104
P(T4=True|H=True)=0.216080
P(T4=True|H=False)=0.055912
P(T4=False|H=True)=0.783920
P(T4=False|H=False)=0.944088
P(T5=True|H=True)=0.447236
P(T5=True|H=False)=0.017415
P(T5=False|H=True)=0.552764
P(T5=False|H=False)=0.982585
P(T6=True|H=True)=0.557823
P(T6=True|H=False)=0.112455
P(T6=False|H=True)=0.442177
P(T6=False|H=False)=0.887545
P(T7=True|H=True)=0.755102
P(T7=True|H=False)=0.727932
P(T7=False|H=True)=0.119048
P(T7=False|H=False)=0.230955
P(T7=C|H=True)=0.125850
P(T7=C|H=False)=0.041112
P(T8=True|H=True)=0.428571
P(T8=True|H=False)=0.118501
P(T8=False|H=True)=0.571429
P(T8=False|H=False)=0.881499
P(T9=True|H=True)=0.348639
P(T9=True|H=False)=0.013301
P(T9=False|H=True)=0.651361
P(T9=False|H=False)=0.986699
P(T10=True|H=True)=0.132653
P(T10=True|H=False)=0.001209
P(T10=False|H=True)=0.867347
P(T10=False|H=False)=0.998791
P(T11=True|H=True)=0.319933
P(T11=True|H=False)=0.094409
P(T11=False|H=True)=0.661642
P(T11=False|H=False)=0.900092
P(T11=C|H=True)=0.018425
P(T11=C|H=False)=0.005500
P(T12=True|H=True)=0.139028
P(T12=True|H=False)=0.045830
P(T12=False|H=True)=0.772194
P(T12=False|H=False)=0.951421
P(T12=C|H=True)=0.088777
P(T12=C|H=False)=0.002750
P(T13=True|H=True)=0.048576
P(T13=True|H=False)=0.067828
P(T13=False|H=True)=0.951424
P(T13=False|H=False)=0.932172
P(T14=True|H=True)=0.003350
P(T14=True|H=False)=0.007333
P(T14=False|H=True)=0.996650
P(T14=False|H=False)=0.992667
P(T15=1|H=True)=0.979899
P(T15=1|H=False)=0.807516
P(T15=2|H=True)=0.020101
P(T15=2|H=False)=0.192484
P(T16=1|H=True)=0.100503
P(T16=1|H=False)=0.373052
P(T16=2|H=True)=0.899497
P(T16=2|H=False)=0.626948
P(T17=1|H=True)=0.986600
P(T17=1|H=False)=0.918423
P(T17=2|H=True)=0.013400
P(T17=2|H=False)=0.081577
P(T18=1|H=True)=0.979899
P(T18=1|H=False)=0.965170
P(T18=2|H=True)=0.020101
P(T18=2|H=False)=0.034830
P(T19=1|H=True)=0.686767
P(T19=1|H=False)=0.913841
P(T19=2|H=True)=0.313233
P(T19=2|H=False)=0.086159
P(T20=1|H=True)=0.797320
P(T20=1|H=False)=0.925756
P(T20=2|H=True)=0.202680
P(T20=2|H=False)=0.074244
P(T21=1|H=True)=0.537688
P(T21=1|H=False)=0.024748
P(T21=2|H=True)=0.075377
P(T21=2|H=False)=0.004583
P(T21=3|H=True)=0.207705
P(T21=3|H=False)=0.130156
P(T21=4|H=True)=0.112228
P(T21=4|H=False)=0.109074
P(T21=False|H=True)=0.067002
P(T21=False|H=False)=0.731439
The foregoing table of conditional probabilities produced during the Bayesian Network training is used subsequently to combine evidence that a given Web page has some press release content. For example, for the following pages (of unknown content type) the confidence levels obtained are the following:
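For illustration, the following sketch evaluates the press-release hypothesis for one hypothetical page using only three of the tests (T4, T5 and T10), with probabilities copied from the table above; the observed outcomes are invented for the example, and `confidence` is the naive-Bayes sketch given earlier:

```python
table = {
    True:  {"T4":  {True: 0.216080, False: 0.783920},
            "T5":  {True: 0.447236, False: 0.552764},
            "T10": {True: 0.132653, False: 0.867347}},
    False: {"T4":  {True: 0.055912, False: 0.944088},
            "T5":  {True: 0.017415, False: 0.982585},
            "T10": {True: 0.001209, False: 0.998791}},
}

# A hypothetical page on which T4, T5 and T10 all came out True.
# All three outcomes are far likelier under H=True, so the combined
# confidence comes out very close to 1.
print(confidence({"T4": True, "T5": True, "T10": True}, table))
```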
By choosing 0.85 as the confidence level threshold for accepting the hypothesis that a page contains press release content, it is straightforward to conclude which of these pages satisfy this hypothesis:
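The acceptance step itself is a simple filter. A sketch, assuming the confidence levels are held in a dict mapping each page URL to its computed confidence:

```python
def accept(confidences: dict[str, float], threshold: float = 0.85) -> dict[str, float]:
    """Keep only the pages whose press-release confidence clears the threshold."""
    return {url: p for url, p in confidences.items() if p >= threshold}
```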
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of Provisional Patent Application 60/221,750, filed Jul. 31, 2000, the entire teachings of which are incorporated herein by reference.