The present invention relates in general to peer-to-peer networking and, in particular, to a system and method for providing a peer indexing service.
Computer networking continues to evolve. The earliest computer networks connected dumb terminals to monolithic centralized computers. Each terminal was limited to displaying only those services provided by the centralized computer. Later, personal computers revolutionized computing by enabling individual users to execute applications independently. Local area networks formed from interconnected personal computers facilitated intercomputer communication and resource sharing. Wide area networks combining diverse computing platforms, ranging from personal computers to legacy mainframes, have enabled access to information and computing services worldwide through interconnectivity to internetworks, such as the Internet.
Conventional local area and wide area network services typically include a centralized server to manage and coordinate network service activities for subscribing peer nodes. The use of such centralized servers results in the creation of two de facto “classes” of computer systems, whereby computational services are provided by one or more powerful server nodes, while various capable, but underutilized, client nodes are relegated to consuming information and services. Recent advances in peer-to-peer networking design attempt to rebalance these computational inequities by better utilizing the idle computational, storage and bandwidth resources found in the client nodes. When coupled with the services provided by conventional server nodes, peer-to-peer networking seeks to provide a higher level of network services at a lower overall cost.
Certain types of network services that are generally provided in a server-centric fashion, however, must be redefined when moving from the conventional client-server network model to a peer-to-peer network model. For example, information discovery and retrieval provided through on-line searching tools, such as the services offered by MSN and Google, has become increasingly popular among Internet users. These tools rely on a centrally located and managed indexing database, and information requests are resolved by performing a query against the indexing database. Building and efficiently accessing the indexing database remains an essential aspect of these tools, although recent efforts at distributing indexing databases amongst peer nodes have suffered in terms of scalability, success rate and real-time performance.
Providing remote access to distributed indexing information in both conventional IP subdomains and within peer-to-peer networks poses challenges with respect to availability and scalability. First, peer nodes frequently include local file storage and locally stored information that can be made available to other nodes over the network through various types of network file systems and file sharing arrangements. However, access to such information requires that the storing node be available. Most file access schemes fail when the storing node is unavailable either due to being off-line or inactive.
In a peer-to-peer system, the key of a key and value pair can be used to select the node that stores the pair. Preferably, the key maps to the node in a deterministic fashion, so that any node in possession of the key is able to readily find the node storing the value. Popular or frequently recurring keys, however, tend to create a logical “hotspot” within a network that overtaxes the node associated with the key. The node receives a disproportionate amount of query traffic and must provide additional processing, network bandwidth and storage capacity. Hotspots can be minimized through the use of non-deterministic key assignments, but ensuring consistent usage at every node in possession of a potential key in a distributed computing environment can be difficult or impracticable to manage.
There is a need for an approach to providing deterministic storage of indexing information for key and value pairs in a distributed peer-to-peer network. Preferably, such an approach would be scalable to support indexing at wide area network scale, including on the Internet. To support such high scalability, such an approach would distribute indexing information so as to avoid hotspots and offer close-to-real-time performance. Preferably, such an approach would also ensure the accessibility of indexing information at all levels through a combination of neighboring peer nodes and duplication.
An embodiment provides a system and method for providing a peer indexing service. A peer graph is formed by assigning published identifiers to each of one or more peer nodes that each map to network addresses. A signed number space is defined over the peer graph based on the published identifiers. Given a key, a closest peer node is determined by maintaining data identifying other peer nodes in the peer graph. Index information identifying the key and an associated value stored as a pair on a storing peer node in the peer graph is maintained. Further key and value pairs are maintained in a local indexing database. A plurality of hierarchically structured levels is organized in a peer indexing database as a factor of the number space size. A level group identifier is stored at a level of the peer indexing database determined as a function of the published identifier of the storing peer node, the level and the number space size factor. The key is hashed as a function of the initial level group identifier and the number space size factor to identify a closest peer node sharing the initial level group identifier. Key and value pairs are transiently maintained in a peer index cache.
Still other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein is described one embodiment of the invention by way of illustrating the best mode contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
The foregoing terms are used throughout this document and, unless indicated otherwise, are assigned the meanings presented above.
Peer Indexing Service System Overview
By way of example, the network domain 9 can include a plurality of individual clients 12, 13, 14 interconnected to an internetwork 11, such as the Internet, and a further plurality of individual clients 17, 18, 19 interconnected to an intranetwork 15, which is in turn connected to the internetwork 11 via a router 16 or similar gateway or network interfacing device. In the described embodiment, the clients interconnected to both the internetwork 11 and intranetwork 15 operate in accordance with the Transmission Control Protocol/Internet Protocol (TCP/IP), such as described in W. R. Stevens, “TCP/IP Illustrated,” Vol. 1, Chs. 1-3, Addison Wesley Longman, Inc., Reading, Mass., (1994), the disclosure of which is incorporated by reference. Other network domain topologies, organizations and arrangements are possible.
One or more of the clients from the network domain 9, such as clients 12, 13, 14, 18, 19, can be logically grouped to form a single security domain, referred to as a peer system 20. The peer system 20 includes one or more peer users 21, which correspond to the clients 12, 13, 14, 18, 19. Each peer user 21 includes a binding of a private and public key pair. A client lacking a private and public key pair is ineligible to participate as a peer user 21. The peer system 20 also includes a binding of a private and public key pair. The private and public key pair bindings of the peer system 20 and each peer user 21 are respectively evidenced by an authenticated system certificate and address certificate. In addition, each peer user 21 is issued a logon certificate to enable entry into the peer system 20. System certificates, address certificates, and logon certificates are further described below.
Each peer system 20 also includes at least one peer system management server node (MSN) 25, which is a well-known node that provides necessary centralized servicing to the peer users 21. Each management server node 25 provides membership management, name service bootstrapping, micropayment virtual banking, and gateway services. Other types of services are possible and different services can be provided by different system management server nodes 25.
The individual clients are general purpose, programmed digital computing devices consisting of a central processing unit (CPU), random access memory (RAM), non-volatile secondary storage, such as a hard drive or CD ROM drive, network interfaces, and can include peripheral devices, such as user interfacing means, such as a keyboard and display. Program code, including software programs, and data are loaded into the RAM for execution and processing by the CPU and results are generated for display, output, transmittal, or storage.
Peer Graph
Each peer node 23 corresponds physically to a client in the underlying network domain 9. The client identity of a peer node 23 is resolved through a peer name service, which is implemented in a fully distributed fashion by each peer node 23 in the peer graph 24, such as further described in commonly-assigned U.S. patent application Ser. No. 10/832,730, entitled “System And Method for Providing a Peer Name Service,” filed Apr. 27, 2004, pending, the disclosure of which is incorporated by reference. A client identity is required for each peer node 23 to access the underlying transport, network and link protocol layer network services. In the described embodiment, client identities are represented by physical internet protocol (IP) network addresses, such as defined by IP versions 4 and 6, although other forms of physical network addresses could also be used. The mapping of published identifier 31 and physical network address is provided by an address certificate uniquely associated with the peer node 23, as further described below.
Peer identifiers 32 identify specific peer users 21. Instance identifiers 33 identify specific appearances of peer user 21 within a peer graph 24. The peer identifier 32 is a hash of the public key of the peer user 21. The instance identifier 33 is randomly assigned. Instance identifiers 33 enable a peer user 21 to appear multiple times in the same peer graph 24 with each appearance being uniquely identified by a different instance identifier 33. In the described embodiment, each instance identifier 33 is a 128-bit randomly generated number, although other sizes of appearance instance identifiers are possible. The published identifier 31 forms a 256-bit signed integer including the 128-bit peer identifier 32 and 128-bit instance identifier 33, which thereby defines the number space within which the peer graph 24 operates.
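The construction just described can be illustrated with a brief sketch. The choice of hash algorithm (SHA-256 truncated to 128 bits) and the placement of the peer identifier in the high-order half of the 256-bit value are assumptions; the text fixes only the sizes of the two components.

```python
import hashlib
import secrets

PEER_ID_BITS = 128
INSTANCE_ID_BITS = 128

def peer_identifier(public_key: bytes) -> int:
    """Peer identifier 32: a 128-bit hash of the peer user's public key
    (the specific hash algorithm is an assumption)."""
    digest = hashlib.sha256(public_key).digest()
    return int.from_bytes(digest[:PEER_ID_BITS // 8], "big")

def instance_identifier() -> int:
    """Instance identifier 33: a 128-bit randomly generated number."""
    return secrets.randbits(INSTANCE_ID_BITS)

def published_identifier(public_key: bytes) -> int:
    """Published identifier 31: a 256-bit value combining the peer identifier
    (assumed high-order half) with the instance identifier (low-order half)."""
    return (peer_identifier(public_key) << INSTANCE_ID_BITS) | instance_identifier()
```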
Peer Graph Number Space
To improve performance, the published identifiers 31 are mapped into a logically circular number space to halve the magnitude T of the number space. In the described embodiment, the maximum distance is reduced to 2^255. Published identifiers 31 are treated as signed integers, which dictate the operations defined over the number space 41. Addition and subtraction operations are defined according to the rules applicable to signed integers. The maximum distance between any pair of peer nodes 23 is equal to the magnitude T of the number space.
For each published identifier N, the sign, absolute value, relational, and distance operators are redefined to operate within the circular number space. The sign of N is defined as follows:
where MSB(N) is the most significant bit of the published identifier N. The absolute value of N is defined as follows:
The relational operators between two published identifiers N1 and N2 differ from conventional relational operators for signed integers and are defined as follows:
N1>N2 If Sign(N2−N1)=−1
N1=N2 If Sign(N2−N1)=0
N1<N2 If Sign(N2−N1)=+1
Finally, the distance between two published identifiers N1 and N2 is defined as follows:
Dist(N1,N2)=Abs(N2−N1)
where:
If Dist(N1, N3)<Dist(N2, N3), then N1 is closer to N3 than N2.
If N1>N2, then N1 is at the positive side of N2, or N1 is to the right of N2.
If N1<N2, then N1 is at the negative side of N2, or N1 is to the left of N2.
By way of example, relative to peer node P0, peer nodes P1 and P2 are on the positive or right side 42 of peer node P0 and peer nodes P4, P5 and P3 are on the negative or left side 43 of peer node P0.
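The redefined operators can be captured in a minimal sketch over a 256-bit circular space. The bodies of Sign( ) and Abs( ) follow the ordinary two's-complement interpretation implied by the MSB-based definition above, and are assumptions only to that extent; Dist( ) and the relational test follow the definitions directly.

```python
BITS = 256                     # width of the published-identifier number space
M = 1 << BITS                  # modulus of the circular number space
MSB_MASK = 1 << (BITS - 1)

def sign(n: int) -> int:
    """Sign(N): 0 for zero, -1 when the most significant bit is set, +1 otherwise."""
    n %= M
    if n == 0:
        return 0
    return -1 if n & MSB_MASK else +1

def absolute(n: int) -> int:
    """Abs(N): two's-complement magnitude within the circular space."""
    n %= M
    return (M - n) if sign(n) < 0 else n

def dist(n1: int, n2: int) -> int:
    """Dist(N1, N2) = Abs(N2 - N1); never exceeds half the number space (2**255)."""
    return absolute((n2 - n1) % M)

def greater(n1: int, n2: int) -> bool:
    """N1 > N2 iff Sign(N2 - N1) = -1, i.e. N1 lies on the positive (right) side of N2."""
    return sign((n2 - n1) % M) == -1
```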
Peer Node Tree
The key 62 is also used to publish indexing information identifying the storing peer node 23 to one or more other peer nodes 23 in the peer node tree 51.
Each peer node 23 is assigned to a group 56 within each level 52-55. Group assignments are based on the published identifier 31 of the peer node 23 and the factor F. The group identifier GJ(P) for a given peer node P at one such level number J is determined in accordance with equation (1), as follows:
for J=0, 1, . . . , T/F, where T/F represents an upper bound on the number of levels. Similarly, the group RJ(P) for a given peer node P at one such level number J is determined in accordance with equation (2), as follows:
for J=0, 1, . . . , T/F.
The closest peer node 23 to which indexing information is published within a group 56 is selected by first applying a hashing function to the key 66 based on the published identifier 31 of the storing peer node 23 to determine an interim peer node PJ′. The hashing function HJ(P,K) for a given key K stored at a peer node P at one such level number J is determined in accordance with equation (3), as follows:
for J=0, 1, . . . , T/F, and where the hashing function Hash( ) maps the published identifier 31 into the number space 41.
Next, the closest peer node PJ, also known as level J group indexing peer node, is identified by applying a function to the result of the hashing function HJ(P, K) to return the closest neighboring peer node PJ determined in accordance with equation (4), as follows:
PJ=CJ(HJ(P,K))=CJ(HJ(GJ,K)) (4)
where GJ is the level J group for peer node P determined in accordance with equation (1) and CJ(P) is a function returning a closest neighboring peer node PJ to peer node P. In the described embodiment, the closest neighboring peer node PJ is determined by a peer name service, such as further described in commonly-assigned U.S. patent application Ser. No. 10/832,730, entitled “System And Method for Providing a Peer Name Service,” filed Apr. 27, 2004, pending, the disclosure of which is incorporated by reference. Other formulae and methodologies for determining a group identifier, group, and hash value are possible.
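One plausible realization of equations (1), (3) and (4) is sketched below. It assumes that a level J group is identified by the high-order T−J·F bits of a published identifier and that the hashed key fills the remaining low-order bits, so that the result always falls inside the storing node's level J group. The bit layout, the choice of SHA-256 for Hash( ), the example value of F, and the `closest` callback standing in for CJ( ) from the peer name service are all assumptions rather than the patent's exact formulae.

```python
import hashlib

T = 256   # magnitude of the number space, in bits
F = 16    # number space size factor: bits consumed per level, giving T/F levels (example value)

def hash_key(key: bytes) -> int:
    """Hash( ): maps a key into the number space 41 (SHA-256 is an assumption)."""
    return int.from_bytes(hashlib.sha256(key).digest(), "big") & ((1 << T) - 1)

def group_id(p: int, j: int) -> int:
    """Assumed form of equation (1): G_J(P) is the high-order T - J*F bits of the
    published identifier, so groups grow coarser with J until the single level-T/F
    group spans the entire peer graph."""
    return p >> (j * F)

def level_hash(p: int, key: bytes, j: int) -> int:
    """Assumed form of equation (3): H_J(P, K) keeps G_J(P) in the high-order bits
    and fills the low J*F bits from Hash(K), keeping the result inside P's group."""
    low = j * F
    return (group_id(p, j) << low) | (hash_key(key) & ((1 << low) - 1))

def level_indexing_peer(p: int, key: bytes, j: int, closest) -> int:
    """Equation (4): P_J = C_J(H_J(P, K)); `closest` stands in for the C_J( )
    closest-neighbor lookup provided by the peer name service."""
    return closest(level_hash(p, key, j))
```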
Peer Node
Messages
Peer nodes 71 communicate through message exchange, which enables each peer node 23 to publish and discover resources with other peer nodes 23 in a peer graph 24. In the described embodiment, messages are exchanged using the underlying transport, network and link protocol layers. Incoming messages 76 are processed by the peer node 71 with a message processor 73, while outgoing messages 77 are generated with a message generator 74. The peer indexing service 86 implements five basic message types, PUBLISH, DUPLICATE, QUERY, RESPONSE, and STATUS messages, as further described below. Other types of messages, both for the peer indexing service 86 and for other purposes, are possible.
Generically, each message exchanged by the peer indexing service 86 non-exclusively contains the following information:
Certificates
The identity of each peer node 71 is guaranteed by a set of certificates, which include a system certificate 80, logon certificate 81, and address certificate 82.
System Certificate
Each system certificate 80 is a self-signed certificate binding of the private and public key pair of a peer system 20. System certificates 80 are used for system identification and as the root of all trust relationships in the peer system 20. Each peer user 21 must have a copy of a system certificate 80 for communication and to establish trust with other peer users 21.
System certificates 80 contain the following information:
Logon Certificate
Each logon certificate 81 forms the basis for communications between peer users 21. A peer user 21 must first obtain a logon certificate 81 from a management server node 25 before communicating with other peer users 21. All data originating from a peer user 21 is signed by the private key of the peer user 21 and the signature is verified using a copy of the public key of the peer user 21 stored in the logon certificate 81. The logon certificate 81 of a destination peer user 21 is also used for encrypted data communication.
Logon certificates 81 contain the following information:
Peer Address Certificate
Each peer address certificate 82, or, simply, address certificate, forms a binding between a peer identifier 32 for a peer user 21 and the physical network address of the underlying client. At runtime, each peer node 23 maintains a cache of address certificates 82 for use by the peer indexing service 86 in performing name resolution. Each peer node 71 maintains only one address certificate 82 at any given time. If the physical network address of a peer node 71 changes, such as, for example, when using dynamic IP addresses, a new version of the address certificate 82 is created.
Address certificates 82 contain the following information:
Program Modules
The peer indexing service 86 is implemented at each peer node 71 to provide resource discovery and publishing. To implement the peer indexing service 86, each peer node 71 includes program modules for an optional peer name service 72, message processor 73, message generator 74, and index updater 75.
Peer Name Service
As a peer-to-peer network, each peer node 71 operates within the peer graph 24 presumptively without the complete knowledge of the identities of the other peer nodes 71 that constitute the peer graph 24. Instead, the peer node 71 relies on knowledge of other peer nodes 23 as maintained in a set of caches through an optional peer name service 72, which enables each peer node 71 to learn about closest neighboring peer nodes 71 and resolve the client identity of other peer nodes 71, such as described in commonly-assigned U.S. patent application Ser. No. 10/832,730, entitled “System And Method for Providing a Peer Name Service,” filed Apr. 27, 2004, pending, the disclosure of which is incorporated by reference. The peer indexing service 86 uses the client identity provided by the optional peer name service 72 to support resource discovery and publishing. Other types of peer name services could also be used. In a further embodiment, the peer graph 24 is implemented as a conventional subdomain network, such as an IP subdomain, using a centralized name resolution service in lieu of the peer name service 72.
Message Processor and Message Generator
The message processor 73 module processes incoming messages 76. The message generator 74 module generates outgoing messages 77. The message processor 73 and message generator 74 modules are responsible for receiving, processing and sending PUBLISH, DUPLICATE, QUERY, RESPONSE, and STATUS messages, as further described below.
Briefly, PUBLISH messages are used by a storing peer node to inform a group indexing peer at the initial level about the indexing information maintained by the storing peer node. PUBLISH messages are also used by a group indexing peer in a lower level to notify other group indexing peers at subsequent levels about the indexing information maintained in the lower level. Basically, a key and value pair is first published to all peer nodes in the same initial level group through the initial level group identifier peer node, then to the same next level group through the next level group identifier peer node, and so forth until the key and value pair is published to the T/F level group, which represents the entire peer graph 24. PUBLISH messages non-exclusively contain the following information:
DUPLICATE messages are used by a storing peer node to duplicate indexing information to other peer nodes, generally peer nodes neighboring the storing peer node, to improve availability and reliability. DUPLICATE messages non-exclusively contain the following information:
(1) Message header with the message code DUPLICATE.
(2) Count of indexing information.
(3) Array of indexing information.
Other information is possible.
QUERY messages are used by a peer node to discover indexing information regarding a key from a target peer node at a specified level in the peer node tree 51. Briefly, query processing begins from the root node of the peer node tree 51 and continues by gradually expanding the peer node tree 51. The root of the peer node tree 51 is the group 56 located at level T/F, which represents the entire peer graph 24.
To expand a node in the peer node tree 51 at any level, the searching peer node Q sends a QUERY message to a peer node PJ with a published identifier 23 determined in accordance with equation (5), as follows:
PJ=CJ(HJ(GJ(Q),K)) (5)
where CJ is a function returning the closest neighboring peer node 23 identified using the peer name service 72, HJ is a hashing function determined in accordance with equation (3), GJ(Q) is the level J group for searching peer node Q determined in accordance with equation (1). Upon receiving replies from the above sequences of queries, all intermediate peer nodes cache the indexing information received for a pre-defined amount of time. If the peer node PJ has already cached the indexing information in the local cache 93, peer node PJ sends a RESPONSE message containing the indexing information to the searching peer node Q. Otherwise, if peer node PJ has not cached the indexing information and the level of the target peer node is higher than the level of the query J, peer node PJ forwards the query request to a peer node PJ+1 with a published identifier 23 determined in accordance with equation (6), as follows:
PJ+1=CJ+1(HJ+1(GJ+1(Q),K)) (6)
where CJ+1 is a function returning the closest neighboring peer node 23 identified using the peer name service 72, HJ+1 is a hashing function determined in accordance with equation (3), GJ+1(Q) is the level J+1 group for searching peer node Q determined in accordance with equation (1). The process is repeated until the queried peer node either obtains the indexing information for the searching peer node Q or the level of the query matches the level of the target peer node 23. If the query and target peer node 23 levels match, the most recent target peer node 23 forwards the query request to a peer node PJi determined in accordance with equation (7), as follows:
PJi=CJ(HJ(GJi,K)) (7)
where CJ is a function returning the closest neighboring peer node 23 identified using the peer name service 72, HJ is a hashing function determined in accordance with equation (3), and GJi is the level J group identifier included in the QUERY message, which identifies the node in the peer node tree 51 to expand. The peer node PJi returns an authoritative answer, which includes a list of the published identifiers 31 for the peer nodes 23 that contain key and value pairs matching the key with a count of the matching key and value pairs for each peer node 23. Individual query requests are then issued directly against the local indexing database of each peer node 23.
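Equations (5), (6) and (7) share a single routing pattern, sketched below under the same bit-layout assumptions as the sketch following equation (4). Here `closest` and `hash_key` stand in for CJ( ) and Hash( ), `f` is the number space size factor F, and `group` is GJ(Q), GJ+1(Q) or GJi depending on whether equation (5), (6) or (7) is being applied; none of these names come from the text.

```python
def query_routing_target(group: int, key: bytes, j: int, f: int, closest, hash_key) -> int:
    """Common pattern behind equations (5)-(7): overlay the level-J group identifier
    on the hashed key and let the peer name service pick the closest node to that
    point in the number space."""
    low = j * f
    point = (group << low) | (hash_key(key) & ((1 << low) - 1))
    return closest(point)
```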
QUERY messages non-exclusively contain the following information:
(1) Message header with the message code QUERY.
(2) Query identifier.
(3) Level of query.
(4) Level of target peer node.
(5) Key that is the subject of the query request.
(6) Group identifier of the group being queried (GJi).
(7) Response count.
Other information is possible.
RESPONSE messages are sent by a target peer node in response to a query request. RESPONSE messages non-exclusively contain the following information:
(1) Message header with the message code RESPONSE.
(2) Query identifier.
(3) Indexing information.
Other information is possible.
Finally, STATUS messages are sent in reply to publishing and duplication requests. Each STATUS message has a status code indicating the operation results. In the described embodiment, status code 200 indicates complete success. Status codes 1XX indicate an operational success with extra processing instructions. Status codes 4XX indicate errors. STATUS messages non-exclusively contain the following information:
Index Updater
The index updater 75 module maintains the local index entries 84 and peer index entries 85 stored in a database 83 coupled to the peer node 71. Local index entries 84 are maintained for and published by applications and peer index entries 85 are maintained by a group indexing peer at each level, including the initial level. In a further embodiment, the local index entries 84 and peer index entries 85 are lazily updated by deferring the upward publishing of indexing information until a preset threshold is exceeded. Index updating is further described below.
In addition, the index updater 75 caches copies of the indexing information contained in the peer index entries 85 into a local cache 79. In the described embodiment, each cache entry includes a time-to-live value and cache entries are discarded upon the expiration of the associated time-to-live values.
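A minimal sketch of such a transient cache follows; the default time-to-live value and the eviction-on-access policy are illustrative assumptions, not details from the text.

```python
import time

class PeerIndexCache:
    """Transient cache of indexing information; each entry carries a time-to-live
    value and is discarded once the time-to-live expires."""

    def __init__(self, default_ttl=300.0):
        self._entries = {}                        # key -> (value, expiry timestamp)
        self._default_ttl = default_ttl           # seconds; illustrative default

    def put(self, key, value, ttl=None):
        """Cache a value under a key with an expiry time."""
        self._entries[key] = (value, time.time() + (ttl or self._default_ttl))

    def get(self, key):
        """Return the cached value, or None if absent or expired."""
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.time() >= expiry:                 # expired entries are dropped on access
            del self._entries[key]
            return None
        return value
```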
Peer Indexing Service Method Overview
From the standpoint of the peer indexing service 86, the peer node 71 executes a continuous processing loop (blocks 101-103) during which messages are processed (block 102). The peer indexing service 86 implements five basic message types, PUBLISH, DUPLICATE, QUERY, RESPONSE, and STATUS messages, as further described above with reference to
Publishing a Local Key/Value Pair
First, each new key and value pair is added as a local index entry 61 to the local indexing database (block 121). If a new local index entry 61 is created for the key (block 122), an instance count for the key is initialized (block 123) and a PUBLISH message is immediately sent to the initial level group indexing node with the key and the published identifier 31 of the storing peer node (block 125). If the key already exists in the local indexing database (block 122), the instance count for the key is incremented (block 124). If the instance count change is above a preset threshold (block 126), a PUBLISH message is sent to the initial level group indexing peer node to update the indexing information (block 127). If the instance count change is below the preset threshold (block 126), updating of the indexing information is deferred. The routine then returns.
The publishing process can logically continue at the initial level group indexing peer node as part of the PUBLISH message processing. The initial level group indexing peer node can send further PUBLISH messages to subsequent level group indexing peer nodes if the instance count change is above the preset threshold and the update process can continue until the maximum level is reached.
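The publishing flow of blocks 121-127 might look like the following sketch. The `node` object, its attribute names, and the `send_publish` helper addressed to the initial level group indexing peer node are assumptions used only to make the control flow concrete.

```python
def publish_local_pair(node, key, value, threshold):
    """Sketch of blocks 121-127: store the pair locally, publish a new key
    immediately, and lazily publish count changes once they exceed a threshold."""
    is_new = key not in node.local_index
    node.local_index.setdefault(key, []).append(value)        # block 121: add local index entry
    if is_new:                                                 # block 122: new key
        node.instance_count[key] = 1                           # block 123: initialize count
        node.last_published[key] = 1
        node.send_publish(level=1, key=key, count=1)           # block 125: immediate PUBLISH
    else:
        node.instance_count[key] += 1                          # block 124: increment count
        change = node.instance_count[key] - node.last_published[key]
        if change > threshold:                                 # block 126: threshold check
            node.last_published[key] = node.instance_count[key]
            node.send_publish(level=1, key=key,
                              count=node.instance_count[key])  # block 127: update indexing info
        # otherwise the upward update is deferred (lazy update)
```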
Unpublishing a Local Key/Value Pair
First, the instance count for the key is decremented (block 131). If the instance count drops to baseline, that is, zero (block 132), the key and value pair is removed as a local index entry 61 from the local indexing database (block 133) and a PUBLISH message is immediately sent to the level one group indexing node with an instance count 68 set to baseline, that is, zero (block 134), which signifies that the indexing information is to be unpublished. Similarly, if the instance count change is above a preset threshold (block 135), a PUBLISH message is immediately sent to the level one group indexing node with the new index count (block 136). Baseline values other than zero are also possible. The routine then returns.
If the instance count change at the level one group indexing peer node is above a preset threshold, the publishing process can logically continue at the initial level group indexing peer node as part of the PUBLISH message processing. The initial level group indexing peer node can send further PUBLISH messages to subsequent level group indexing peer nodes if the instance count change is above the preset threshold and the update process can continue until the maximum level is reached.
Publishing to a Higher Level Group
First, upon the receipt of a PUBLISH message (block 141), the peer indexing database is searched for an entry matching the key included with the publishing request (block 142). If the key is not found in the peer indexing table (block 143) and the instance count of the PUBLISH message changes from baseline, that is, is non-zero (block 144), the indexing information is added to the peer indexing database as a peer index entry 65 (block 145) and a PUBLISH message containing the indexing information is sent to the next level group indexing peer node (block 146). If the key is not found in the peer indexing table (block 143) and the instance count of the PUBLISH message is baseline, that is, zero (block 144), no further processing is necessary, except sending back a RESPONSE message.
Otherwise, if the key is already in the peer indexing table (block 143), the peer index entry 65 is updated (block 147) and the instance count for the key is recalculated (block 148). If the instance count of the peer index entry is baseline, that is, zero (block 149), the peer index entry 65 is removed from the local indexing database (block 150) and a new PUBLISH message with the instance count set to zero is sent to the next level group indexing peer node (Block 151). Otherwise, if the instance count of the peer index entry is not baseline, that is, non-zero (block 149) and the difference between the new instance count for the key and the last published instance count to upper level is above a preset threshold (block 152), a new PUBLISH message with updated indexing information is sent to the next level group indexing peer node (block 153) to cause a lazy update. Baseline values other than zero are also possible. Otherwise, no further processing is required and the routine returns.
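A sketch of this PUBLISH handling at a level J group indexing peer node follows. The `node` object again bundles the peer indexing database, per-key counts last published upward, and a `send_publish` helper; the acknowledgement back to the publisher is omitted, and propagation would stop once the maximum level T/F is reached. All of these names are assumptions.

```python
def handle_publish(node, level, key, publisher, count, threshold):
    """Sketch of blocks 141-153: maintain the peer index entry for (key, publisher)
    and lazily propagate a PUBLISH to the next-level group indexing peer node.
    A baseline (zero) count unpublishes the entry."""
    entry = node.peer_index.get(key)                           # block 142: search peer indexing database
    if entry is None:                                          # block 143: key not found
        if count != 0:                                         # block 144: non-baseline count
            node.peer_index[key] = {publisher: count}          # block 145: add peer index entry
            node.send_publish(level=level + 1, key=key, count=count)   # block 146
        return                                                 # acknowledgement to publisher omitted
    entry[publisher] = count                                   # block 147: update entry
    total = sum(entry.values())                                # block 148: recalculate instance count
    if total == 0:                                             # block 149: baseline count
        del node.peer_index[key]                               # block 150: remove peer index entry
        node.send_publish(level=level + 1, key=key, count=0)   # block 151: unpublish upward
    elif total - node.last_published.get(key, 0) > threshold:  # block 152: threshold exceeded
        node.last_published[key] = total
        node.send_publish(level=level + 1, key=key, count=total)   # block 153: lazy update
```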
Processing a Query
Query requests sent to level zero are queries to be executed against the local indexing database, whereas query requests sent to a higher level are requests to expand the peer node tree 51 of the sending peer node. Thus, upon the receipt of a QUERY message (block 161), if the level of the query equals baseline, that is, zero (block 162), the local indexing database is consulted to construct a set of key and value pairs (block 163), which are returned in a RESPONSE message (block 164). Otherwise, if the level of the query is larger than baseline, that is, zero (block 162), the level of the query and the key are checked to ensure that the local peer node is the appropriate peer node to answer the query request (block 165).
If the local peer node is not responsible for answering the query request (block 166), a RESPONSE message containing an error code is returned (block 167). Otherwise, if the local peer node should process the query request (block 166), the local cache 93 is consulted (block 168). If a matching and non-expired cache entry is found (block 169), the cache entry is returned in a RESPONSE message (block 170). Otherwise, if either no matching cache entry is found or an expired cache entry is found (block 169), the query request must be forwarded to another group identifier peer node in the peer node tree 51.
If the level of the target peer node is higher than the level of the query (block 171), the receiving peer node forwards the query request to the next higher level group indexing peer node for the key 66 (block 172) and returns the response from the subsequent level group identifier peer node in a RESPONSE message (block 173). Otherwise, if the level of the query equals the level of the target peer node (block 171), the peer group identifier 69 in the QUERY message is checked (block 174). If the local peer node is not in the peer group indicated by the peer group identifier 69 (block 176), the receiving peer node forwards the query request to a group identifier peer node (block 175) identified according to equation (7) and returns the response from the group identifier peer node in a RESPONSE message (block 173). Otherwise, the receiving peer node returns an authoritative answer in a RESPONSE message (block 177), which includes a list of the published identifiers 31 for the peer nodes 23 if the level of the query is at level one, or a list of the lower level group identifiers that contains matching peer nodes 23 if the query is above level one. Baseline values other than zero are also possible.
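The query handling of blocks 161-177 can be summarized in the following sketch. The `msg` fields mirror the QUERY layout listed above, and the `node` methods (responsibility check, cache lookup, forwarding, authoritative answer) are assumed helpers rather than interfaces defined by the text.

```python
def handle_query(node, msg):
    """Sketch of blocks 161-177: answer level-0 queries from the local indexing
    database, serve fresh cache entries, otherwise forward along the peer node tree."""
    if msg["level_of_query"] == 0:                                    # block 162: level-0 query
        return node.respond(node.local_index.get(msg["key"], []))     # blocks 163-164: local answer
    if not node.responsible_for(msg["level_of_query"], msg["key"]):   # blocks 165-166
        return node.respond_error("not responsible for this query")   # block 167
    cached = node.cache.get(msg["key"])                               # block 168: consult local cache
    if cached is not None:                                            # block 169: fresh cache hit
        return node.respond(cached)                                   # block 170
    if msg["level_of_target"] > msg["level_of_query"]:                # block 171
        reply = node.forward_next_level(msg)                          # block 172: climb one level
    elif node.group_id(msg["level_of_query"]) != msg["group_identifier"]:   # blocks 174, 176
        reply = node.forward_to_group(msg)                            # block 175: equation (7) target
    else:
        reply = node.authoritative_answer(msg)                        # block 177
    return node.respond(reply)                                        # block 173: relay the response
```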
While the invention has been particularly shown and described as referenced to the embodiments thereof, those skilled in the art will understand that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.