1. Field of the Invention
The present invention relates generally to an architecture for an indexer and, more particularly, to an architecture for an indexer that indexes tokens that may be variable-length, where variable-length data may be attached to at least one token occurrence.
2. Description of the Related Art
The World Wide Web (also known as WWW or the “Web”) is a collection of Internet servers that support Web pages, which may include links to other Web pages. A Uniform Resource Locator (URL) indicates the location of a Web page. Also, each Web page may contain, for example, text, graphics, audio, and/or video content. For example, a first Web page may contain a link to a second Web page.
A Web browser is a software application that is used to locate and display Web pages. Currently, there are billions of Web pages on the Web.
Web search engines are used to retrieve Web pages on the Web based on some criteria (e.g., entered via the Web browser). That is, Web search engines are designed to return relevant Web pages given a keyword query. For example, the query “HR” issued against a company intranet search engine is expected to return relevant pages in the intranet that are related to Human Resources (HR). The Web search engine uses indexing techniques that relate search terms (e.g., keywords) to Web pages.
An important problem today is searching large data sets, such as the Web, large collections of text, genomic information, and databases. The underlying operation needed for searching is the creation of large indices of tokens quickly and efficiently. These indices, also called inverted files, contain a mapping of tokens, which may be terms in text or more abstract objects, to their locations, where a location may be a document, a page number, or some other more abstract notion of location. The indexing problem is well known, and nearly all solutions are based on sorting all tokens in the data set. However, many conventional sorting techniques are inefficient.
Thus, there is a need for improved indexing techniques.
Provided are a method, system, and program for indexing data. A token is received. It is determined whether a data field associated with the token is a fixed width. When the data field is a fixed width, the token is designated as one for which fixed width sort is to be performed. When the data field is a variable length, the token is designated as one for which a variable width sort is to be performed.
Also, for each token in a set of documents, a sort key is generated that includes a document identifier that indicates whether a section of a document associated with the sort key is an anchor text section or a content section, wherein the anchor text section and the content section have a same document identifier; it is determined whether a data field associated with the token is a fixed width; when the data field is a fixed width, the token is designated as one for which a fixed width sort is to be performed; and, when the data field is a variable length, the token is designated as one for which a variable width sort is to be performed. The fixed width sort and the variable width sort are performed. For each document, the sort keys are used to bring together the anchor text section and the content section of that document.
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several implementations of the present invention. It is understood that other implementations may be utilized and structural and operational changes may be made without departing from the scope of the present invention.
Implementations of the invention provide a fast technique for creating indices by using a dual-path approach in which the task of sorting (indexing) is broken into two separate sort processes for fixed width and variable width data.
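As a sketch of this dual-path split, token occurrences may be routed by the width of their attached data field. The names, record layout, and one-byte fixed width below are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

FIXED_DATA_WIDTH = 1  # e.g., a one-byte attribute field


@dataclass
class TokenOccurrence:
    token_id: int
    location: int
    data: bytes  # arbitrary binary data attached to the occurrence


def dispatch(occurrences):
    """Split occurrences into the fixed-width and variable-width sort paths."""
    fixed_path, variable_path = [], []
    for occ in occurrences:
        if len(occ.data) == FIXED_DATA_WIDTH:
            fixed_path.append(occ)     # fast fixed-width sort path
        else:
            variable_path.append(occ)  # slower, general variable-width sort path
    return fixed_path, variable_path
```

The two resulting streams may then be sorted independently and brought back together in a later merge phase.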
The server computer 120 includes system memory 122, which may be implemented in volatile and/or non-volatile devices. A search engine 130 executes in the system memory 122. In certain implementations, the search engine includes a crawler component 132, a static rank component 134, a duplicate detection component 138, an anchor text component 140, an indexing component 142, and a tokenizer 144. Although components 132, 134, 138, 140, 142, and 144 are illustrated as separate components, the functionality of components 132, 134, 138, 140, 142, and 144 may be implemented in fewer or more or different components than illustrated. Additionally, the functionality of the components 132, 134, 138, 140, 142, and 144 may be implemented at a Web application server computer or other server computer that is connected to the server computer 120. Additionally, one or more server applications 160 execute in system memory 122. System memory 122 also includes one or more in-memory sort buffers 150.
The server computer 120 provides the client computer 100 with access to data in at least one data store 170 (e.g., a database). Although a single data store 170 is illustrated, for ease of understanding, data in data store 170 may be stored in data stores at other computers connected to server computer 120.
Also, an operator console 180 executes one or more applications 182 and is used to access the server computer 120 and the data store 170.
The data store 170 may comprise an array of storage devices, such as Direct Access Storage Devices (DASDs), Just a Bunch of Disks (JBOD), Redundant Array of Independent Disks (RAID), virtualization device, etc. The data store 170 includes data that is used with certain implementations of the invention.
The goal of text indexing is to take an input document collection (a “document-centric” view) and produce a term-centric view of the same data. In the term-centric view, data is organized by term, and for each term, there is a list of occurrences (also referred to as postings). A posting for a term consists of a list of the documents and offsets within the document that contain the term. In addition, it is useful (e.g., for ranking) to attach some extra attributes to each term occurrence. Some example attributes include whether the term occurs in a title, occurs in anchor text, or is capitalized.
Indexing is equivalent to a sorting problem, where a primary sort key is a term and a secondary sort key is an offset in the document. For example, document D1 contains “This is a test”, document D2 contains “Is this a test”, and document D3 contains “This is not a test”. Table A illustrates term information in tabular form, ordered as the terms would be read in from documents D1, D2, and D3 (in that order), in accordance with certain implementations of the invention.
The first column in Table A is the term, the second column is the document identifier, the third column is the offset in the document, and the fourth column is a data field. In this example, the data field indicates whether a term is case-folded (capitalized), where 1 indicates that the term is case-folded and 0 indicates that the term is not case-folded.
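Since Table A is described but not reproduced here, the following sketch reconstructs its rows from the example documents D1 through D3; the whitespace tokenization and capitalization test are simplifying assumptions for illustration:

```python
# Example documents from the text.
docs = {1: "This is a test", 2: "Is this a test", 3: "This is not a test"}

# Each row: (term, document identifier, offset, case-folded flag).
rows = []
for doc_id in sorted(docs):
    for offset, word in enumerate(docs[doc_id].split()):
        case_folded = 1 if word[0].isupper() else 0
        rows.append((word.lower(), doc_id, offset, case_folded))
```

Under these assumptions, the first row read from D1 is ("this", 1, 0, 1), and the three rows for "this" reproduce the posting list example given later in the text.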
Sorting is also accompanied by a compression, where the final index is written in a compact, easily searchable form. Table B illustrates sorted terms from Table A in accordance with certain implementations of the invention.
Posting lists may be represented more compactly by grouping together occurrences of a term in a list as illustrated in Table C:
For example, one posting list is: this (1,0,1), (2,1,0), (3,0,1). The posting lists may be stored in a compressed binary format, but, for illustration, the logical view of each posting list is provided. In certain implementations, the document identifier and offset values in the posting lists are delta encoded and then compressed.
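The delta encoding mentioned above can be sketched as follows; each value is stored as the difference from the previous one, which keeps the numbers small and therefore compressible (e.g., with a variable-length byte code). This illustrates the idea only and is not the patent's compressed binary format:

```python
def delta_encode(values):
    """Encode an ascending sequence as differences from the previous value."""
    prev, deltas = 0, []
    for v in values:
        deltas.append(v - prev)
        prev = v
    return deltas


def delta_decode(deltas):
    """Recover the original sequence by accumulating the differences."""
    values, running = [], 0
    for d in deltas:
        running += d
        values.append(running)
    return values
```

For instance, document identifiers 3, 7, 20 encode as 3, 4, 13.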
The documents are bundled in files in data store 170. That is, when documents are retrieved by the crawler component 132, one or more documents are stored into a file, called a bundle. The bundle includes as many documents as fit into the bundle, based on the size allocated for a bundle. In certain implementations, the bundle size defaults to 8 MB (megabytes). Having bundles improves performance by allowing large batches of documents to be fetched at once using sequential I/O. Otherwise, if each of the documents were retrieved as a separate file, the seek times would impact performance.
Doing a scan of the data store 170 is I/O bound. Each document in the data store contains tokenized document content (e.g., stored in data vectors), including both an original token and a stemmed token. Tokenized document content may be described as a document broken into individual terms, with tokens determining word boundaries. An original token refers to an original term in a document (e.g., “running”) that includes case-folding, while a stemmed token refers to a root for the original token (e.g., “run”).
An arbitrary binary data field may be attached to each token occurrence. In certain implementations, this data field is a one byte attribute field for document text tokens. In certain implementations, the data field may be of varying width (e.g., for document metadata tokens). For instance, some metadata tokens include the document keyword count, document unique keyword count, document path (e.g., URL or hash), site path (e.g., URL or hash), and document rank.
The tokenized content is stored in the data store 170 for performance reasons. That is, tokenization and parsing are slow and CPU intensive. Once the bulk of documents have been crawled, there are typically small incremental updates to the data store 170. Thus, rather than re-running the tokenizer each time an index is to be built (e.g., once a day), the stored tokenized document content is used to build the index.
In block 202, the static rank component 134 reviews the stored documents and assigns a rank to the documents. The rank may be described as the importance of the source document relative to other documents that have been stored by the crawler component 132. Any type of ranking technique may be used. For example, documents that are accessed more frequently may receive a higher rank.
Thus, document ranks are computed prior to indexing, and the rank value is available at indexing time as part of a document. In certain implementations, the rank is an ordinal document rank (e.g., 1, 2, 3, 4, etc.), so as the documents come in, the document rank may be used as the document identifier. The ordinal document ranks are monotonically increasing, though they are not necessarily sequential (e.g., there may be gaps as documents are being scanned). Documents need not be fed to the indexing component 142 in rank order; instead, documents may be fed in using an arbitrary ordering.
In block 204, an anchor text component 140 extracts anchor text and builds a virtual document with the anchor text. In block 206, optionally, the duplicate detection component 138 may perform duplicate detection to remove duplicate documents from data store 170. The duplicate detection processing may also occur as documents are being stored.
There are many variations on indexing and on compressing posting lists. One approach is a sort-merge approach, in which sets of data are read into memory, each set sorted, and each set copied to storage (e.g., disk), producing a series of sorted runs. The index is generated by merging the sorted runs.
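A minimal in-memory model of this sort-merge approach might look like the following, with heapq.merge standing in for the multi-way merge of on-disk sorted runs; the function names and the use of plain lists instead of disk files are illustrative assumptions:

```python
import heapq


def build_runs(postings, run_size):
    """Split postings into runs and sort each run (the 'index build' phase)."""
    return [sorted(postings[i:i + run_size])
            for i in range(0, len(postings), run_size)]


def merge_runs(runs):
    """Merge the sorted runs into one sorted stream (the 'index merge' phase)."""
    return list(heapq.merge(*runs))
```

In the real approach each sorted run would be written to storage and streamed back in during the merge, so only run-sized pieces ever need to fit in memory.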
In certain implementations, the indexing component 142 implements a sort-merge approach. The sort-merge approach provides good performance and is suitable for batch-style indexing in which the data is indexed all at once. Internally, the indexing component 142 has two phases: index build runs and index merge.
Although any hash function may be used with various implementations of the invention, one that is fast and that works well on large amounts of text tokens that have many common prefixes and suffixes is useful. In certain implementations, Pearson's hash function is used, which is described further in “Fast Hashing of Variable-Length Text Strings” by Peter K. Pearson, Communications of the ACM, 33(6):677-680, June 1990. Pearson's hash function is scalable.
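Pearson's hash function is compact enough to sketch: it folds each input byte through a fixed pseudo-random permutation table, producing an 8-bit hash (wider hashes can be derived by re-running with a modified first byte). The particular permutation below is generated from a fixed seed for illustration:

```python
import random

random.seed(0)
_T = list(range(256))
random.shuffle(_T)  # a fixed pseudo-random permutation of 0..255


def pearson_hash(data: bytes) -> int:
    """8-bit Pearson hash: fold each byte through the permutation table."""
    h = 0
    for byte in data:
        h = _T[h ^ byte]
    return h
```

Because each step is a single table lookup, the function is fast on text tokens regardless of shared prefixes and suffixes.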
The location 366 includes a 32 bit unique document identifier 368, a bit 370 that indicates whether the section associated with the sort key is content or anchor text, and a 31 bit offset 372 that identifies the offset of the term within the document. When a document is read from the data store 170, the location of the document is generated using the document rank and the token offset within the document. Since a full sort is later done on the location, documents may arrive in any order, and the indexing component 142 is able to reorder them properly in rank order. Also, multiple terms may occur at a same location (e.g., which is useful for adding metadata to mark terms or sequences of terms).
Thus, rather than having separate document identifier (ID) fields and offset fields, certain implementations of the invention use a global 64-bit location 366 to identify the locations of terms. With the use of this 64-bit location, up to 4 billion documents are supported in one index, with 4 billion locations in each document. In alternative implementations, other bit sizes may be used for the global location. Also, each posting occurrence may have arbitrary binary data attached to it (e.g., for ranking).
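A sketch of packing and unpacking the 64-bit location under the bit layout described above (a 32-bit document identifier, one anchor-text bit, and a 31-bit offset); the exact field order within the word is an assumption for illustration:

```python
def pack_location(doc_id: int, is_anchor: bool, offset: int) -> int:
    """Pack document identifier, anchor/content bit, and offset into 64 bits."""
    assert 0 <= doc_id < 2**32 and 0 <= offset < 2**31
    return (doc_id << 32) | (int(is_anchor) << 31) | offset


def unpack_location(loc: int):
    """Recover (doc_id, is_anchor, offset) from a packed 64-bit location."""
    return (loc >> 32) & 0xFFFFFFFF, bool((loc >> 31) & 1), loc & 0x7FFFFFFF
```

With this layout, a full sort on the location orders occurrences by document first, and within a document the content section (anchor bit 0) sorts before the anchor text section (anchor bit 1).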
By sorting on the bit encoding of the sort key, implementations of the invention simultaneously order the index by the token identifier. Additionally, the encoding of the sort key allows the content tokens to come before the metadata tokens (allowing the two paths for the content (fixed width) and metadata (variable width) tokens to be sorted independently and brought together in the merge phase). For each unique token identifier, the encoding of the sort key allows occurrences to be ordered by document identifier, and, if the document identifier is also the document rank, then the tokens are in document rank order. Anchor text and content portions of a document may be processed separately and brought together because both portions have the same document identifier. For document occurrences of each token, the offsets of the occurrences within the document are in occurrence order due to the sorting technique used (e.g., due to the stable sort property of a radix sort). Tokens within a document may be fed in order, while anchor documents may be fed in after the content documents.
In block 304, the indexing component 142 determines whether a sort key is for a fixed width sort. If so, processing continues to block 306; otherwise, processing continues to block 316.
In block 306, the indexing component 142 forwards the sort key to a fixed width sort via the second in-memory sort buffer and processing loops back to block 302. In block 316, the indexing component 142 forwards the sort key to a variable width sort via the third in-memory sort buffer and processing loops back to block 302. If all documents in the first in-memory sort buffer have been processed, the indexing component 142 scans a new set of documents into the in-memory sort buffer for processing, until all documents have been processed.
The in-memory sort buffer controls the size of the sorted runs, which in turn is related to the total number of sorted runs generated. In certain implementations, each in-memory sort buffer is made as large as possible, because some systems have low limits on the number of file descriptors that may be open at once. In particular, in certain implementations, each sort run is written to disk as a separate file, and, at merge time, all of these files are opened in order to perform the merging. In certain operating systems, each file that is currently open uses up a file descriptor, and, in some cases, there is a limit to the number of file descriptors (and thus files) that can be open at one time by a process. Additionally, the indexing component 142 uses a linear time in-memory sort technique so that performance does not change as the in-memory sort buffer size increases. In certain implementations, the in-memory sort buffer size defaults to 1.5 GB (gigabytes).
The indexing component 142 takes advantage of the fact that when inserting tokens of a document into an in-memory sort buffer, the locations of the tokens are being inserted in order. Since radix sort is a stable sort that preserves the inserted order of elements when there are ties, a 96-bit radix sort may be used, since the tokens within a document are inserted in order (these are the lower 32 bits of the sort key). Additionally, if there were no document location remapping, a 64-bit radix sort may be used. Because documents are inserted in arbitrary rank, and thus arbitrary document identifier order, a 96-bit radix sort is used.
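A least-significant-digit radix sort over 96-bit keys can be sketched as follows; it processes one byte per pass and is stable, so equal keys keep their insertion order, which is the property relied on above. The real component sorts packed binary records rather than Python integers:

```python
def radix_sort(keys, key_bytes=12):
    """Stable LSD radix sort of non-negative integers up to key_bytes wide."""
    for shift in range(0, key_bytes * 8, 8):   # low byte to high byte
        buckets = [[] for _ in range(256)]
        for k in keys:
            buckets[(k >> shift) & 0xFF].append(k)  # appending preserves order
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

Each of the twelve passes is linear in the number of keys, so the whole sort runs in linear time for fixed-width keys, unlike comparison sorts.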
Thus, when a sort buffer 344, 346 is full, the sort buffer 344, 346 is handed off to an appropriate sort thread that performs radix sort and writes the sorted run to storage.
In practice, a large percent (e.g., 99.9%) of the data fields are one byte long, while the remaining data fields are of variable width. Thus, the sorting may be separated into two sorts: one where the data fields are one byte and one where the data fields are variable width. This results in an optimizable fixed width field sorting problem in which a 128-bit sort key is composed of the term identifier in the upper 8 bytes and the location in the lower 8 bytes, along with a fixed-width data field (e.g., one byte) that is carried along with the sort key. Additionally, the variable width data field case may be handled by a slower, more general radix sort that works with variable width fields.
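The 128-bit fixed-width sort key described here might be modeled as follows, with the token identifier in the upper 8 bytes, the 64-bit location in the lower 8 bytes, and the one-byte data field carried alongside the key rather than compared; the representation as Python integers is an illustrative assumption:

```python
def make_sort_key(token_id: int, location: int) -> int:
    """Compose a 128-bit sort key: token identifier above, location below."""
    assert 0 <= token_id < 2**64 and 0 <= location < 2**64
    return (token_id << 64) | location


# Sorting (key, data) pairs on the key alone carries the data field along.
postings = [(make_sort_key(2, 5), b"\x01"), (make_sort_key(1, 9), b"\x00")]
postings.sort(key=lambda p: p[0])
```

Because the token identifier occupies the high bits, any location of a lower-numbered token sorts before every location of a higher-numbered token.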
In block 404, the indexing component 142 performs remapping. It is possible that documents fed into the indexing component 142 have gaps in their rank value (e.g., documents 1, 6, 7, etc. are received, leaving a gap between ranks 1 and 6). The indexing component 142 remaps the rank locations to close up the gaps, ensuring that they are in sequence. In certain implementations, this is done by keeping a bit vector of the document identifiers that were put in the indexing component 142. MaximumDocumentRank may be described as a value for a maximum document rank. In certain implementations, the bit vector is defined to be of the size of MaximumDocumentRank=50,000,000 bits so that the bit vector takes up about 48 MB. At the start of the multi-way merge phase, a remapping table is built that maps each document identifier to a new value. In certain implementations, this table is MaximumDocumentRank*(4 bytes)=191 MB in size. During the multi-way merge phase, each token document identifier is remapped to remove the gaps. In certain implementations, the remap table is not allocated until after the sort buffer (e.g., the second or third in-memory sort buffer referenced in blocks 308 and 318, respectively) has been deallocated.
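The gap-closing remapping can be sketched at toy scale: each document identifier present in the bit vector is mapped to its rank among the present identifiers, yielding a dense sequence. A Python list of booleans stands in for the bit vector, and the dict for the remap table:

```python
def build_remap_table(seen_bits):
    """seen_bits[i] is True if document identifier i was fed to the indexer.

    Returns a table mapping each present identifier to a gap-free new value.
    """
    remap, next_id = {}, 0
    for doc_id, present in enumerate(seen_bits):
        if present:
            remap[doc_id] = next_id
            next_id += 1
    return remap
```

With documents 1, 6, and 7 present (as in the example above), the table maps them to 0, 1, and 2.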
In block 406, the indexing component 142 associates anchor text with corresponding content. That is, the indexing component 142 also brings together multiple pieces of a document if they are read from the data store at different times. In certain implementations, two document pieces are supported: a content section and an anchor text section. In certain implementations, the content section arrives and then the anchor text section arrives at a later time. This feature of the indexing component 142 is used to append anchor text to a document by giving anchor text locations in which the upper bit of the offset is set. Prior to indexing, the anchor text is extracted and a virtual document is built with the anchor text. This allows for sending the content of a document separately from the anchor text that points to the document. The content section and anchor text section may be correlated using a common document identifier.
In block 408, the indexing component 142 generates a final index with a dictionary, a block descriptor, and postings.
Reordering posting lists in document rank order is useful so that documents with higher rank occur earlier in the posting list to allow for early termination of a search for terms using the index.
In certain implementations, 2-way Symmetric Multiprocessors (SMPs) may be utilized for the parallel processing.
In certain alternative implementations, rather than using a sort-merge approach, the indexing component 142 uses a Move To Front (MTF) hash table to accumulate posting lists for terms in-memory. The MTF hash table allows a sort to be performed on the unique tokens (which are the hash keys), rather than on all occurrences. When an MTF hash table is used, dynamically resizable vectors are maintained to hold the occurrences of each term, which requires CPU and memory resources.
Thus, the implementations of the invention provide a high performance, general purpose indexer designed for doing fast and high quality information retrieval. Certain implementations of the invention make use of a dual-path approach to indexing, using a high performance, pipelined radix sort. Additionally, implementations of the invention enable document rank ordering of posting lists, per-occurrence attribute information, positional token information, and the bringing together of documents in pieces (e.g., content and anchor text).
Thus, the sort for index creation takes as input a set of token occurrences (called postings) that include a token, the document the token occurs in, the offset within the document, and some associated attribute data. Postings are sorted by token, document, and offset, in that order, so the token is the primary sort key, the document is the secondary sort key, and the offset is the tertiary sort key. The data field is carried along through the sort process.
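This three-level ordering, with the data field carried along uncompared, can be shown as a plain tuple sort; the sample postings and field layout are illustrative:

```python
# Each posting: (token, document, offset, data field).
postings = [
    ("this", 2, 1, b"\x00"),
    ("is",   2, 0, b"\x01"),
    ("this", 1, 0, b"\x01"),
]

# Token is the primary key, document the secondary, offset the tertiary;
# the data field is never compared, only carried through the sort.
postings.sort(key=lambda p: (p[0], p[1], p[2]))
```

After the sort, all occurrences of a token are adjacent, grouped by document and then by offset, which is exactly the term-centric view the index needs.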
Implementations of the invention provide a high performance sort based on the fact that for many postings, the data field is of constant size. Using a fixed width token-ID to represent the token, the postings may be represented by a fixed width binary data structure. Also, the data field is a fixed width in many cases by construction. Therefore, a fast, fixed width sort may be used to sort the fixed width postings.
The postings that cannot be handled by a fixed width sort are ones that have a variable width data field. These postings are sorted by a variable width sort. The results of both sorts are combined during a multi-way merge phase. In alternative implementations, the merge may be avoided, and the two sets of sorted postings may be maintained separately.
The sort keys are encoded so that, during the sorting process, the index is simultaneously ordered by term, terms are ordered by document ID, and offsets within documents are also ordered.
The offsets are encoded within a document so that multiple parts of a document are brought together automatically by the sort. Thus, the parts of a document may be fed into the index build separately. Conventional indexers require that documents be fed in as a whole.
In certain implementations, a radix sort may be used because the radix sort sorts in linear time for fixed width sort keys and is stable (i.e., preserves the original ordering of sort elements when there are ties in a sort key). The stable sort property of radix sort is used to further improve indexing performance. In particular, if tokens in each part of a document are already in location order, the sort is performed only on the portion of the sort key that is needed to generate correct sort results.
If all tokens within a document are fed in order, the sort key need not use the offset portion, but can sort on token type, token, document identifier, and document section. If the document sections arrive in order, then the sort can be on token type, token, and document identifier. If the documents are in document rank order, the sort can be on token type and token.
Thus, the indexing component 142 may have different implementations, based on the order in which tokens, document sections, and documents arrive. For example, in certain implementations, the indexing component 142 recognizes that tokens of a given document are fed in order of appearance. In certain implementations, the indexing component 142 recognizes that for document sections, the content sections arrive before the anchor text sections. Also, in certain implementations, the indexing component 142 recognizes that the documents are fed in document rank order. Furthermore, in additional implementations, the indexing component 142 recognizes the order for two or more of tokens, document sections, and documents.
The described techniques for an architecture for an indexer may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The term “article of manufacture” as used herein refers to code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.) or a computer readable medium, such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, firmware, programmable logic, etc.). Code in the computer readable medium is accessed and executed by a processor. The code in which various implementations are implemented may further be accessible through a transmission media or from a file server over a network. In such cases, the article of manufacture in which the code is implemented may comprise a transmission media, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. Thus, the “article of manufacture” may comprise the medium in which the code is embodied. Additionally, the “article of manufacture” may comprise a combination of hardware and software components in which the code is embodied, processed, and executed. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise any information bearing medium known in the art.
The computer architecture 600 may comprise any computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop, handheld computer, telephony device, network appliance, virtualization device, storage controller, etc. Any processor 602 and operating system 605 known in the art may be used.
The foregoing description of implementations of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many implementations of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
This application is a continuation of and claims the benefit of U.S. Pat. No. 7,424,467, with application Ser. No. 10/764,800, filed on Jan. 26, 2004, the disclosure of which is incorporated herein by reference in its entirety.
7243301 | Bargeron et al. | Jul 2007 | B2 |
7257593 | Mazner et al. | Aug 2007 | B2 |
7293005 | Fontoura et al. | Nov 2007 | B2 |
7318075 | Ashwin et al. | Jan 2008 | B2 |
7356530 | Kim et al. | Apr 2008 | B2 |
7362323 | Doyle | Apr 2008 | B2 |
7424467 | Fontoura et al. | Sep 2008 | B2 |
7461064 | Fontoura et al. | Dec 2008 | B2 |
7499913 | Kraft et al. | Mar 2009 | B2 |
20010049671 | Joerg | Dec 2001 | A1 |
20020032677 | Morgenthaler et al. | Mar 2002 | A1 |
20020120685 | Srivastava et al. | Aug 2002 | A1 |
20020165707 | Call | Nov 2002 | A1 |
20020169770 | Kim et al. | Nov 2002 | A1 |
20030028564 | Sanfilippo | Feb 2003 | A1 |
20030046311 | Baidya et al. | Mar 2003 | A1 |
20030163454 | Jacobsen et al. | Aug 2003 | A1 |
20030177127 | Goodwin et al. | Sep 2003 | A1 |
20030187833 | Plu | Oct 2003 | A1 |
20030217052 | Rubenczyk et al. | Nov 2003 | A1 |
20030225763 | Guilak et al. | Dec 2003 | A1 |
20040044962 | Green et al. | Mar 2004 | A1 |
20040078387 | Benjamin et al. | Apr 2004 | A1 |
20040098399 | Risberg et al. | May 2004 | A1 |
20040111408 | Caudill et al. | Jun 2004 | A1 |
20040123104 | Boyen et al. | Jun 2004 | A1 |
20040243554 | Broder et al. | Dec 2004 | A1 |
20040243556 | Ferrucci et al. | Dec 2004 | A1 |
20040243560 | Broder et al. | Dec 2004 | A1 |
20040243581 | Weissman et al. | Dec 2004 | A1 |
20050033745 | Wiener et al. | Feb 2005 | A1 |
20050044411 | Somin et al. | Feb 2005 | A1 |
20050120004 | Stata et al. | Jun 2005 | A1 |
20050149499 | Franz et al. | Jul 2005 | A1 |
20050149576 | Marmaros et al. | Jul 2005 | A1 |
20050149851 | Mittal | Jul 2005 | A1 |
20050165800 | Fontoura et al. | Jul 2005 | A1 |
20050198076 | Stata et al. | Sep 2005 | A1 |
20060047825 | Streenstra et al. | Mar 2006 | A1 |
20060129538 | Baader et al. | Jun 2006 | A1 |
20070016583 | Lempel et al. | Jan 2007 | A1 |
20070198456 | Betz et al. | Aug 2007 | A1 |
20070282829 | Fontoura et al. | Dec 2007 | A1 |
20080294634 | Fontoura et al. | Nov 2008 | A1 |
20080301130 | Fontoura et al. | Dec 2008 | A1 |
20090083270 | Kraft et al. | Mar 2009 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---|
0809197 | Nov 1997 | EP |
10289246 | Oct 1998 | JP |
10293767 | Nov 1998 | JP |
2000339309 | Dec 2000 | JP |
9749048 | Dec 1997 | WO |
Related Publications
Number | Date | Country
---|---|---|
20070271268 A1 | Nov 2007 | US |
Related U.S. Application Data
Relation | Number | Date | Country
---|---|---|---|
Parent | 10764800 | Jan 2004 | US
Child | 11834556 | | US