Large scale service providers providing data, content and applications via the Internet look to maximize availability and responsiveness of clustered server systems. They also seek to maintain minimal total costs of ownership for their systems. As more users access such information, demand for faster delivery and responsiveness increases.
Delivery systems have been developed whereby geographically dispersed networks of edge locations can each store copies of content. Each edge location can include one or multiple servers. Clients requesting the content are routed to the nearest edge location so the content is delivered with the best possible performance. To achieve the best possible performance, the edge locations are typically high performance data centers that are able to respond to requested loads during peak times.
The primary issue with this strategy is that the edge locations or “caches” need to manage the freshness or validity of their content. Edge locations expire the content and refresh it on a relatively frequent basis. The requirement for freshness creates cache misses which may end up invoking back end services at a higher cost. In some cases the content has expired based on a time-to-live (TTL) value, but the content may not actually have changed. In many systems, there is no mechanism to refresh the edge cache without executing the full heavyweight retrieval from the back end service. This results in a large amount of network traffic and back end service calls which yield no benefit to the service provider or end user. In current multi-tier cache systems, it may be difficult to ensure content freshness without forcing every caching layer to have shorter than desired TTL values. The cost of a refresh includes the cost of a proxy cache miss at every cache layer and back end server processing.
Technology is presented which uses a forward propagation mechanism in a resource delivery network to maximize resource availability at edge locations in the network. The technology retains cache semantics for look asides on read misses at the edge locations. Multiple layers of cache are kept fresh using pull-based forward propagation. The technology provides transparent support for multiple tiers of systems, allowing the technology to scale to support very large read loads while presenting fresh data at a lower total cost. Additional background CPU tasks at each cache layer may be used for partial pre-rendering of data, which increases effective machine utilization. The technology also eliminates cache miss storms which can brownout back end services.
The technology will be described herein using a document-centric discussion. In this context, a resource may be considered as a file. However, a resource may represent an element not stored in a file, such as a database or other service. A resource may also be interpreted as an arbitrary blob of data which can be stored in an arbitrarily extensible hierarchical storage mechanism, and the term subdirectory is interpreted as a group which may contain resources or other groups. The technology does not require an underlying file system. Resources are uniquely identified with a uniform resource identifier (URI) semantic and change history is recorded for each resource.
In addition, while resources are often described herein as written or copied to different systems, it should be recognized that resources may be stored in volatile memory of systems or in alternative forms of non-volatile memory such as hard disks, disc arrays, solid state systems or other forms of non-volatile memory. The technology allows misses against memory to reference disk while retaining overall high system performance.
Each tier 110, 120, 130, 140 includes one or more caching servers 102, 104, 106, 108 which are used to scale the system. Servers in, for example, tier 110 pull resources written to authoritative store 100. Servers in tiers 110, 120, 130, 140 pull resources from the next higher tier of servers. Cache servers in tiers 110, 120, 130 and additional tiers all use the same resource synchronization mechanism.
Within the architecture, routers (illustrated in
A set of all resources is stored on the authoritative store 100. Writes are made for resources to the authoritative server by means of a PUT or POST command using standard HTTP protocol. As noted above, while HTTP protocol may be used in one embodiment, alternative protocols other than HTTP may be utilized. Any protocol which allows long lived connections with a bi-directional conversation could be utilized. In still another alternative, a mix of protocols may be used in the same system. In accordance with the technology, each resource written to the authoritative store is copied down each tier until it reaches the edge tier servers (tier 140 in
Each tier 110, 120, 130, 140 replicates files on the authoritative store 100 using log entries from a next tier higher server sourcing the data. This allows a stacked layer strategy where each layer is slightly delayed from the prior layer but can represent a duplicate of the resources in the prior layer.
Because each tier uses the same synchronization mechanism to copy resources to their local store this can allow edge resources, including special purpose application servers, to continue operation even when the back end services which produced the resources are down. This can provide increased availability during failures and provides a window of operation during service failure without negatively impacting user experience.
As discussed below, each synchronization client at each tier may use local filtering to allow vertical partitioning of resources to improve system efficiency. One example of such partitioning is dedicating a subset of machines to a specific range of users. Another example is configuring servers at a higher tier to filter for specific data. This filtering can be used, for example, to filter for configuration data. In this instance, a service interested in only a subset of data, such as configuration information, can implement a portion of the replication client and use the pull techniques discussed herein to update a local database, foregoing the ability to serve up the data it acquires. In the configuration implementation, a centralized configuration store is kept completely independent of downstream services while allowing the downstream services to rapidly detect interesting changes which they may use to modify their internal configurations. The replication client which updates the service local data may be implemented in complete isolation from both the configuration store and the service, provided it can call the service configuration update API.
In one embodiment, each successive layer away from the authoritative store 100 increases the number of servers or server clusters by a factor of 4 to 1, e.g. there are 4 times as many servers in tier 120 as tier 110. This allows 1 write server to support 4 read servers in tier 110, 16 read servers in tier 120, 64 read servers in tier 130 and 256 read servers in tier 140. The number of layers and servers discussed herein is exemplary and the illustrations indicated in
Each tier 110, 120, 130, 140 beyond the authoritative store 100 is composed of a group of servers, with the number of servers in the tier (for example tier N+1) set so that when all servers are actively reading files at a defined throttle rate and concurrently reading files from the previous tier of servers (e.g. tier N), they will not degrade the maximum write performance of tier N servers by more than some percentage X %, where X may be about 30%.
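The fan-out described above lends itself to a short calculation. The following is a minimal sketch (not from the specification; all numbers and names are illustrative) showing how the per-tier server counts and the read load a tier places on its source tier can be estimated from the fan-out ratio and the replication throttle rate.

    # Illustrative sketch: estimating tier sizes and the replication read load a
    # tier places on its source tier. All numbers and names are assumptions.
    def servers_per_tier(fan_out: int, num_tiers: int) -> list:
        # Tier sizes below the authoritative store, e.g. 4 -> [4, 16, 64, 256].
        return [fan_out ** n for n in range(1, num_tiers + 1)]

    def read_load_fraction(fan_out: int, throttle_reads_per_sec: float,
                           write_capacity_per_sec: float) -> float:
        # Fraction of a tier N server's write capacity consumed by its tier N+1 readers.
        return fan_out * throttle_reads_per_sec / write_capacity_per_sec

    print(servers_per_tier(4, 4))                  # [4, 16, 64, 256]
    print(read_load_fraction(4, 75.0, 1000.0))     # 0.3, i.e. the roughly 30% bound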
Access to each layer is provided by routers (illustrated in
An external data store service 250 is accessible to the system 50. The external data store service may be a service optimized for handling large sized resources. An example of one external data store service is the Amazon S3 service, which provides unlimited storage through a simple web services interface.
Each of the tiers may be separated by one or more network links. The tiers may be physically proximate to each other or geographically separated. WAN links are used to connect physically separated servers. Tiers can be organized so WAN links are optimally used. For example, when replicating to remote servers, a small number of remote servers are configured to replicate from a given cluster across the WAN links. A larger number of servers at the same remote location are configured to replicate from the servers which have already copied the content locally. This minimizes the amount of redundant data transfer across WAN links, which are generally more expensive and slower than LAN links. Reducing the WAN traffic in this fashion can dramatically speed up availability of data on all servers of a remote cluster.
To maximize replication rates it is possible to use lower grade security and possibly not use encryption via SSL for replication between lower tier servers. Any tier which may be exposed to untrusted users or consumers outside a trusted network partition can utilize full encryption and security measures to protect the content. It will be noted that in such embodiments, the N−1 or write tier should be protected by firewalling techniques or another security mechanism, such as an approved list of servers allowed to contact the source server.
Each server 100, 102, 104, 106, 108 in
The web server 302 in each tier responds to HTTP GET/HEAD and HTTP PUT/POST requests to read and write information, respectively, to and from the storage 1304 of the server. For each write, the servers record an entry in an update log 314 using the log write component 320. The log 314 allows each server in a given layer N to deliver change history in small granular chunks to the next tier (N+1) of servers.
A clean up process 304 runs to create a summary log 312 by deleting repetitive entries for the same resource. For efficiency, a number of recent entries 360 in the update log 314 may be maintained in memory for rapid retrieval. The data transformation handler 306 is a background process allowing certain efficiencies to be created in certain portions of the process. The request handler 308 answers update requests from next tier servers, as described below.
Shown in server 102, and present in every tier server except the authoritative store, is a sync client 370. Sync client 370 acts as a pull agent on the next lower tier of servers to pull data from each successively lower tier. The sync client queries logs 314 in the lower layer tier and uses log information to retrieve resources listed therein using, for example, a standard HTTP GET. The files are recorded in a local resource store 316 of the querying server after fetch. Each resource store comprises the local storage of each server and may comprise any number of different storage elements, including but not limited to redundant storage arrays and storage area networks of any known type. In some instances the authoritative store may be a database which does not support miss pass through. In this instance the authoritative store is responsible only for making writes against the next higher tier for each resource changed.
A router 350 directs traffic between each of the servers in each of the tiers using VIP entries for the servers and tiers. This design may include a load balancer that can generally randomly distribute total traffic from the next tier across servers in a given tier. A router or load balancer that can route requests from the same client to the same server with session or IP affinity may also be used. A load dispatching proxy may be used in lieu of a traditional load balancer.
Returning to
Once new or updated resources have been determined, at 510 the client executes an HTTP GET for each resource it processes from the log and at 512 writes a copy of each resource in its local file store at the same relative location in the local store that the resource exists on the tier N server. Standard HTTP semantics send 1 GET request for each resource returned. This imposes at least 1 network round trip and the associated latency per resource fetched. HTTP persistent connections can be used by all clients 370 when requesting multiple resources. Allowing multiple items per request reduces the replication delay across WAN partitions where latency can be much higher.
At 514, the client updates the tier N+1 server local log for each item copied from the tier N server.
Many servers may have the capability to store memory caches of resources. When the replication process writes a new local version of a given resource it can invalidate that item in memory. This is supported using cache invalidation so the next reference to the resource will trigger a reload. At 516, the client invalidates the in memory cache for each file processed and at 518 updates the TTL age for the resource or group of resources on the tier N+1 server. In another embodiment, solid state or other forms of memory caches within each server may be used for performance optimization.
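A minimal sketch of the pull sequence just described (poll the log, GET each new resource, write it locally, record it in the local log, invalidate the in-memory copy, and reset the TTL age) is shown below. The endpoint paths, the JSON rendering of the log, and the local_store and local_log helpers are assumptions used for illustration, not the system's actual interfaces.

    # Sketch of the tier N+1 sync client loop (steps 504-518); helper objects and
    # endpoint paths are assumed for illustration.
    import requests

    TIER_N = "http://tier-n.example.internal"        # assumed VIP of the source tier
    session = requests.Session()                     # persistent connection, fewer round trips

    def passes_local_filter(uri: str) -> bool:
        return True                                  # replace with path prefix filtering

    def replicate_once(last_write_count: int, local_store, local_log) -> int:
        # Poll the tier N update log for entries after the last processed write count.
        resp = session.get(f"{TIER_N}/updatelog", params={"after": last_write_count})
        resp.raise_for_status()
        for entry in resp.json():                    # assumed JSON form of the log entries
            uri = entry["path"]
            if not passes_local_filter(uri):
                continue
            if entry["action"] == "D":
                local_store.delete(uri)              # process deletes recorded in the log
            else:
                body = session.get(f"{TIER_N}{uri}").content
                local_store.write(uri, body)         # same relative location locally
            local_log.append(entry)                  # step 514: record in the local log
            local_store.invalidate_memory(uri)       # step 516: drop any in-memory copy
            last_write_count = entry["write_count"]
        local_store.reset_ttl_age()                  # step 518: 0 or more updates reset TTL age
        return last_write_count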
In some instances files can contain mutual dependencies so that a change in a file would require information from one or more other files. In this instance, it may be desirable to defer processing of the file until the complete set of changes is written to all interdependent files. In this instance the requesting tier server may be modified so that it does not start the processing based on any single file or resource change. The processing is triggered by an activation or sentinel file which may be written into the same directory tree as the resource file. The replication client detects the sentinel file, which is used to trigger the processing chain. In this way, each tier server generating file changes is free to write new files at will, knowing that they will not trigger the next step in the processing chain until they write the sentinel file. If file versioning is enabled the sentinel file may include the version number for each file which can be processed for this group.
All files are processed in sequence of change because the log entries are processed in the order they occur in the logs. If resources which are referenced by another resource are written first, then the ordered delivery can provide referential integrity.
The client application generating file changes is responsible for recognizing success or failure of each write request against the authoritative store. It should only generate the sentinel file when all the dependent file changes have been acknowledged. Nothing in this section should be interpreted as support of a two-phase commit. If the client application fails to record all necessary files it is responsible for cleaning up resources and then re-writing those as needed.
This may be supported by allowing each resource change to generate a new resource URI (version), which frees the file change producers to generate new versions of the dependent files without concern for overwriting important changes before the downstream processors have finished their work. In this embodiment, each URI for each version of each resource may be unique and understood by the server from which the resource is requested to refer to the given resource.
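As a rough illustration, the sketch below writes a group of interdependent resources and only then writes a sentinel listing the acknowledged versions. The store URL, version header and sentinel name are assumptions, not part of the specification.

    # Sketch of the sentinel convention: write every dependent resource first, then
    # a sentinel naming the versions eligible for processing. Names are assumed.
    import json
    import requests

    AUTH_STORE = "http://write-store.example.internal"   # assumed authoritative store URL

    def write_group(base_path: str, files: dict) -> None:
        versions = {}
        for name, body in files.items():
            r = requests.put(f"{AUTH_STORE}{base_path}/{name}", data=body)
            r.raise_for_status()                          # proceed only on acknowledged writes
            versions[name] = r.headers.get("X-Resource-Version", "1")  # assumed header
        sentinel = json.dumps(versions).encode()
        requests.put(f"{AUTH_STORE}{base_path}/group.sentinel",
                     data=sentinel).raise_for_status()    # written last, triggers processing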
When a “read storm” occurs (a condition where a large number of reads occurs in a short period of time) any server subject to the storm may lose execution cycles. This may delay replication, which can result in increasing staleness of the data. This is addressed using the TTL: the TTL age for the server may increase, which will trigger TTL based cache misses, which will in turn trigger refresh of expired files. Each read server may use a sufficient number of threads that it can pull more than one resource at a time.
New servers obtain their initial state by initially reading the summary logs at 702. At 704, the new server identifies all unique resources which pass a local filter, if content filtering for the new server is implemented. At 706, the new server issues an HTTP GET for all resources identified in the summary log of the tier N server which pass its local filter and at 708, the resources are written to its local store. It should be noted that where a write occurs to the local store at 708, the write may be an in-memory write (to volatile memory), a write to disk (or other nonvolatile memory), or both. Because additional writes to the tier N server may occur during the replication process while writes are occurring at 708, the client in the N+1 layer checks the tier N write log 314 for subsequent writes to resources in the summary log. This occurs in a manner similar to a standard update where, at 710, the client issues a poll request (equivalent to 504) and at 712 reads the log entries for all changes that occurred after the start of the summary log (equivalent to 506 above). At 714, a local filter of the new resources is performed and at 716, the log file 314 is parsed to obtain each updated resource on the tier N server. At 718 an HTTP GET is issued for the new resources and at 720, the resources are written to the tier N+1 local store.
For new servers, a primary build application package including the components necessary for implementing instructions to run the components illustrated in
At startup each server references a known URI for this configuration information to discover the clusters in which it will participate. It then accesses the configuration information for those clusters to determine its local behavior, such as which layer VIP it is participating in, which data partition it is supporting, the partition mapping keys, and other information. Due to the number of servers which may be participating, a single resource is created at the default URI for each server by name which contains the URI for the clusters it is participating in. For example, a server named WN9018.internal.example.domain would have a resource created at servermanage.example.domain/WN9018 which would contain the needed information.
There is some risk of a server in a higher layer executing a read request which is routed to a different server on the previous layer than the original server, and that such server will be behind the original server in replication. If this occurs, the downstream server will detect the missing log entries when it cannot find the higher numbered log entries in the log. The replication server can re-try or wait for the other server to catch up with its most recent entries. This can be avoided by using session-based affinity for fetches to route synchronization requests to the same server whenever it has sufficient bandwidth. This is only an issue if a resource of the same name changes. If a write once unique URI strategy is utilized, then this would only trigger a cache miss which would be handled automatically.
Detection of excess replication lag is one indication of server failure. Upstream servers which detect that replication lag in one server is higher than in others may be able to use this information to request that the router remove or de-prioritize the lagging server when routing future requests.
The server ID 802 uniquely identifies a server or server cluster and remains the same for all writes on a single server or server cluster. The write count 804 is unique and is incremented for each write to the server or server cluster. No write should duplicate this number when using the same server ID. The time stamp 806 is measured from system count at the time of write. Some implementations may eliminate the timestamp. The action flag 808 indicates the type of action for this resource. Options include “W” for write and “D” for delete; other actions, including “M” for modified may be added. The local URI (path) written 810 is the local path of the file on the server. It is the server relative URI location for the file. Each entry may optionally include a file version number (not shown) in an integer form which increases by 1 for each new version of the resource written to the authoritative store 100. Using a file version allows additional features to be implemented in a more reliable fashion than when the file URI path is simply overwritten. An authoritative write store 100 enforces version numbering if needed for a given resource type.
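The sketch below captures the log entry fields just described in a small data structure; the single-line text layout produced by to_line() is illustrative rather than a prescribed format.

    # Sketch of a write log entry: server ID 802, write count 804, timestamp 806,
    # action flag 808, server relative path 810, optional version. Layout is illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LogEntry:
        server_id: str               # constant for all writes on one server or cluster
        write_count: int             # unique, incremented for each write
        timestamp: int               # system count at time of write (may be omitted)
        action: str                  # "W" write, "D" delete, optionally "M" modified
        path: str                    # server relative URI of the resource
        version: Optional[int] = None

        def to_line(self) -> str:
            ver = f" {self.version}" if self.version is not None else ""
            return f"{self.server_id} {self.write_count} {self.timestamp} {self.action}{ver} {self.path}"

    entry = LogEntry("WN9018", 10571, 1712345678, "W",
                     "../system/items/00019919/baseitem.ion")
    print(entry.to_line())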
The log entries are generally recorded as a result of a PUT or POST against an HTTP server (or other protocol resource write instructions). Log entries can also be captured in RPC or other handlers which update the underlying data.
Where the resources are stored in a database and writes to the database are to be logged, the same logging strategy is used. Each write to the database resource is sequenced and recorded to the log in the same way. All items in the database can be uniquely accessed via a URI style semantic. When used with database operations the recording of the log can be part of the code making the database update or captured in a database trigger.
The technology also supports removal of resources or files from the server's cached set of resources. This is desirable when content which was previously valid is no longer valid or when it can be removed for other reasons. To process the deletes, the log entry includes the action D for “delete”. The change of the action from W for write to D for delete allows the replication client to detect the desire for delete and remove the resource from its local store. It also invalidates any memory cache for that resource. The authoritative store can preserve D type actions during log consolidation for a period of time set by policy to ensure that all servers have had adequate time to process the removal. A D type action can be removed during log consolidation once a subsequent write for the same resource occurs, since the write would functionally act as a replacement of the original resource.
Returning to
In one embodiment, the write logs 314 are recorded and ordered to make incremental access fast and relatively inexpensive. This can be accomplished by a directory structure having the following form: for each day the local server creates a sub directory labeled by day such as ccyy-mm-dd (Century Year-Month-Day). Starting at midnight it creates a new sub directory and starts filling it with new log files. One file per minute, with the name hh-mm (hour-minute) zero padded, is created which contains all the log entries for all files and entries changed during that minute. If no entries changed during the last minute no file is created. If no files were changed during a day then no sub directory for that day is created. The one minute granularity may be adjusted upwards or downwards depending on the write load of the server.
The above structure is merely exemplary—alternative structures, times and filenames may be used.
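For illustration, the sketch below generates log paths under the day and minute convention described above; the updatelogs root name is an assumption.

    # Sketch of the ccyy-mm-dd / hh-mm log layout; the "updatelogs" root is assumed.
    from datetime import datetime, timezone
    from pathlib import Path

    def log_path_for(write_time: datetime, log_root: str = "updatelogs") -> Path:
        day_dir = write_time.strftime("%Y-%m-%d")      # one sub directory per day
        minute_file = write_time.strftime("%H-%M")     # one zero padded file per minute
        return Path(log_root) / day_dir / minute_file

    def append_log_line(line: str, now=None) -> None:
        now = now or datetime.now(timezone.utc)
        path = log_path_for(now)
        path.parent.mkdir(parents=True, exist_ok=True)  # day directory created only when used
        with path.open("a") as f:                       # idle minutes create no file at all
            f.write(line + "\n")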
As an alternative to writing log files to disk, individual log files can be stored in a database. The advantage of this approach is that no directory tree structure needs to be established per day. It also allows queries for changes across a unit of time, which may reduce the number of discrete fetches needed for a given set of changes.
All the log files may be made available via HTTP GET from the tier N server tier at a server relative path such as ../updatelogs/day/minutelogfiles.txt. Each server will also return a list of all logs in the directory which occur after a given time. The log can be consolidated to remove repetitive updates so a single larger log may replace a large number of the smaller logs.
Logs may themselves be partitioned in each server to allow read clients to effectively search only those resources which they are interested in. Each server can analyze path prefixes so the logs can be partitioned into separate sub directories. This allows read clients that are only interested in certain read prefixes to avoid the overhead of filtering out log entries.
For example, if two files are written to ../sites/joemerchant/joesite/web/joe.detail.layoutxml.1 and ../system/items/00019919/baseitem.ion, the write handler has the ability to detect the different ../sites and ../system/items prefixes and record the log entries in separate sub directories such as ../writelog/sites versus ../writelog/items. This is configured with simple path prefix matching. This mechanism may allow “path-prefix+file extension” to be used to determine the replication prefix.
As a result of the partitioning of the logs, the read cluster for resources in the subdirectory ../system/items may have a read load which is 1,000 times higher than required for the ../sites configuration data. Servers can process only the write log entries they are interested in without looking at the others. This optimization reduces the costs of analyzing log entries and applying the copy filter at the replication client. One implementation will include configuration based support for partitioning write logs based on simple path prefix matching.
As also illustrated in
The virtual handler 308 allows the tier N+1 server to call it repeatedly and block until a new update arrives. The server will recognize a new entry immediately and return that single line or multiple lines, which allows change recognition for an individual resource. After processing the updates the tier N+1 server will call the same resource but will use a new timestamp that is equal to the timestamp of the change of the last resource processed in the last call.
This handler 308 can also accept a request for changes after a write-count number, where the WriteCount is a serial number representing the last write the client processed. In this instance the server simply returns the first set of writes occurring after that WriteCount.
Handler 308 implements a simple limit of 5,000 records. A limit may be used because the replication client will take time to replicate the number of resources referenced. The client calls the same handler 308 again after processing the first set of changes and receives the next set based on the timestamp or write count of the last item in the prior batch. It repeats this loop until it is blocked waiting for changes (as discussed with respect to
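A blocking change-feed handler in the spirit of handler 308 might look like the sketch below: the call waits until an entry newer than the supplied write count exists and returns at most 5,000 entries. The in-memory list and threading details are assumptions rather than the described implementation.

    # Sketch of a blocking "changes after write count" handler with a 5,000 record
    # batch limit; data structures and threading are assumed for illustration.
    import threading

    class ChangeFeed:
        BATCH_LIMIT = 5000

        def __init__(self):
            self._entries = []                      # ordered log entries held in memory
            self._cond = threading.Condition()

        def record_write(self, entry: dict) -> None:
            with self._cond:                        # called by the write path
                self._entries.append(entry)
                self._cond.notify_all()

        def changes_after(self, write_count: int, timeout: float = 30.0) -> list:
            with self._cond:
                self._cond.wait_for(
                    lambda: self._entries and self._entries[-1]["write_count"] > write_count,
                    timeout=timeout)                # block until a newer entry arrives
                newer = [e for e in self._entries if e["write_count"] > write_count]
                return newer[: self.BATCH_LIMIT]    # caller loops for the next batch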
In memory entries 360 may be flushed to storage after every resource write. However, spooling of entries in memory and writing to the disk in later batches may also be implemented. If used in spooling mode, when an unclean shutdown occurs, the server can walk the entire local storage resource directory structure to find individual resource update timestamps and regenerate any that may have been lost as a result of the spool not being flushed. The immediate flush removes the need for such an extensive walk but could degrade maximum write rates for the server.
As indicated above, resource content filtering on each of the servers may be implemented (at, for example, steps 508 or 704 above). Every resource written has a predictable URI. This allows N+1 tier servers to read URIs from the next lower layer of server N (or the authoritative store) and filter them. For example, where the system 50 is used to implement a Web-based electronic commerce system having, for example, items for sale from a catalog, a server processing only catalog items could look only at a path prefix such as ../system/catalog/items and ignore items such as ../system/siteconfig. This capability allows resources which have read storm characteristics to be replicated more heavily and through more layers to guarantee high speed access. It can be particularly effective when the resources have low change rates relative to the maximum read rates. This type of filtering is particularly effective for application servers which only need a subset of the data for their local caches. It minimizes the cost of processing for unwanted fragments to a simple single line evaluation of the URI without the associated GET or an extra network round trip to fetch the unwanted resources.
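The single-line URI evaluation can be as simple as the sketch below; the prefixes shown are illustrative configuration, not prescribed values.

    # Sketch of path prefix filtering at the replication client; prefixes are illustrative.
    CATALOG_PREFIXES = ("../system/catalog/items",)

    def passes_local_filter(uri: str, prefixes=CATALOG_PREFIXES) -> bool:
        return uri.startswith(prefixes)             # one line, no GET, no extra round trip

    print(passes_local_filter("../system/catalog/items/00019919/baseitem.ion"))  # True
    print(passes_local_filter("../system/siteconfig/home.xml"))                  # False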
Access control within each resource store can be treated at the sub-directory or any child-directory by allowing an .htaccess file to be synchronized as one of the fragments. This requires the local web server 302 to include a security handler enhanced to reference these files and apply them to all files in a directory and all children of a directory. An alternative is to have larger .htaccess resources for the entire server in a location where they can be easily replicated. This latter approach would require triggering the reload of the .htaccess file on receipt of new changes or using a relatively short TTL.
In
Once all lines from a detailed log are recorded in the summary log at 908 then the detailed write log can be deleted 910.
A timestamp of the beginning of the log cleanup is recorded as part of the summary log. This timestamp is used by tier N+1 servers to determine where they should start processing detailed log entries from tier N servers. This approach allows the summary log to be created on a hot basis without blocking further writes or replication from the server. As noted above, the log summary process is a low priority so it does not affect the write or copy rates from the system 50. In one embodiment, the log update process is a lower priority process than the read process.
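A minimal sketch of the consolidation pass follows: it keeps only the most recent action per path and records when consolidation started so downstream servers know where to resume reading detailed logs. The entry fields follow the log format described earlier; everything else is an assumption.

    # Sketch of hot log consolidation: the latest entry per path wins, and the start
    # time is recorded in the summary so tier N+1 servers know where to resume.
    import time

    def consolidate(detailed_entries: list) -> dict:
        started_at = time.time()                         # recorded as part of the summary log
        latest = {}
        for entry in detailed_entries:                   # entries arrive in write order
            latest[entry["path"]] = entry                # a later entry replaces an earlier one
        ordered = sorted(latest.values(), key=lambda e: e["write_count"])
        return {"consolidation_start": started_at, "entries": ordered}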
In some cases, a GET request will be made to a tier N+1 server before a given resource has been replicated to that machine. This would normally result in the GET request failing with an HTTP 404. In one embodiment, instead of immediately returning the 404, each server is configured so the N+1 machine acts as a proxy and fetches the required file from the previous tier of servers.
This process is illustrated in
When the resource is sought from the tier N server at 1010, if the file is present it will be returned at 1012. However, if the resource requested from layer N is not available at layer N, the tier N server will also perform the method of
This proxy system can be augmented with TTL semantics which invalidate local resources and trigger a cache miss. Each resource or set of resources can be assigned a TTL value, preventing the resource from being returned in response to a GET request after expiration of the TTL value. The time for replication across all layers should generally be shorter than the shortest average TTL for any object in the cache tier. In a “worst case miss,” where the TTL expired at every layer, the delay time is only the sum of the latencies of the read layers and in most cases, this would only propagate through a fraction of the layers.
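The combined miss handling can be pictured as in the sketch below: a local copy is served while its TTL is unexpired, otherwise the request is proxied to the previous tier and the result is stored locally. The URL and store helpers are assumptions for illustration.

    # Sketch of read-miss pass-through with TTL semantics; helpers are assumed.
    import time
    import requests

    PREVIOUS_TIER = "http://tier-n.example.internal"

    def handle_get(uri: str, local_store, ttl_seconds: float):
        cached = local_store.lookup(uri)                 # assumed: (body, written_at) or None
        if cached is not None:
            body, written_at = cached
            if time.time() - written_at < ttl_seconds:
                return 200, body                         # fresh local copy, no proxying
        resp = requests.get(f"{PREVIOUS_TIER}{uri}")     # miss or expired: act as a proxy
        if resp.status_code == 200:
            local_store.write(uri, resp.content)         # keep a copy for later readers
            return 200, resp.content
        return 404, b""                                  # only after every tier has missed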
As noted above, a TTL value may be assigned to each resource or set of resources. In some cases, TTL based request storms can occur where a large number of resources expire at the same time. This can be eliminated by providing a rule which states that no file can be less fresh than the current replication lag on a given server. Since the server knows that no resource has changed that it does not know about, it can completely eliminate the traditional HTTP GET or HEAD needed to ensure its content is sufficiently fresh.
In one embodiment, an intelligent TTL handler 318 is provided which first compares the last time stamp of update from the log system and uses it to override the age calculations for individual resources. This allows an update of 0 or more files from the log stream to reset all TTL ages. This can completely eliminate all TTL related GET or HEAD requests even for very large file sets. It is reasonable to expect a single system to store 10 million or more resources. If each resource had a 10 minute TTL and if they were managed through a typical cache system and the server was heavily visited by robots, it could result in 1.4 billion cache misses per day. In contrast the intelligent TTL approach would have very close to 0 misses which effectively reduces the read load against the resource by 1.4 billion requests per day.
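In code, the idea reduces to the sketch below: the age used for expiry decisions is the replication lag rather than the per-resource age, so a healthy log stream keeps every resource fresh with no GET or HEAD traffic. Names are illustrative.

    # Sketch of the intelligent TTL check: replication lag overrides per-resource age.
    import time

    class IntelligentTTL:
        def __init__(self):
            self.last_log_update = time.time()

        def note_replication_progress(self) -> None:
            self.last_log_update = time.time()        # 0 or more copied files reset all TTL ages

        def is_fresh(self, ttl_seconds: float) -> bool:
            # If the replication lag is within the TTL, no resource can have changed
            # without this server knowing about it, so nothing needs to be re-fetched.
            return (time.time() - self.last_log_update) < ttl_seconds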
In general, update rates are limited by the maximum write rate of a given write cluster, e.g. the update rate of tier N+1 is limited by the write rate of tier N. In some applications, limiting write rates may be unacceptable, especially in large scale systems. In order to address this limitation, vertical partitioning is used in the system.
Vertical partitions are illustrated in
Each partition can meet the same availability and durability requirements of its authoritative store 100. This allows all writes for a subset of content to be routed to different servers within a given tier. The partitioning may also be driven based on total storage requirements rather than update loads. In general, partitions are reflected vertically to allow read rates that are much higher than write rates for any single partition. The new write rate limit is the sum of the maximum write rate for all partitions, and the system can scale to a larger number of partitions as needed.
In order to implement partitioning and efficient addressing, addressing is based on a 16 bit hash key and an assumption of 1,000 writes per second of 0.1K to 100K files per authoritative write store. The 16 bit hash key provides a maximum of 32,768 hash buckets which if all writing at maximum rates would allow 32.768 million writes per second.
Any single cache signature could be allocated to each partition, and a single partition may be dedicated to a given hash key, which allocates an entire cluster of hardware to service writes for a single cache key. It is possible that hash key overlaps could result in traffic that exceeds the capability of a full cluster. When this condition is detected a second tier lookup may be used. In this second index, the hash key is calculated using an alternative algorithm and then indexed to a specific cache partition using the second hash signature to index into a separate partition map. This is expected to be a rare occasion so the second hash map is treated as a sparse matrix.
Each server may be provided with a unique name. The unique name is mapped to a partition (p1, p2 . . . pn) and a layer (VIP0, VIP1, . . . VIPn) using a simple configuration file. Each server knows the partition to which it has been allocated. Each tier of servers of each vertical partition can have a unique VIP (DNS name). The router(s) handles routing of requests to all machines registered under that VIP domain name. The routing may be handled in a round-robin fashion, or by other balancing techniques. Each server at a given tier (tier N+1) can be configured to know the VIP name of the next lower tier (tier N) of servers. All configuration elements are represented as simple resource fragments that are replicated to all servers as part of the standard replication process.
A standard hashing algorithm is used to produce a 16 bit integer from the URI of the resource. This integer is used in a hash table lookup where it resolves to an integer between 1 and N, which is used to look up the partition number. This information may have a form similar to:
This file information is generated and stored as an ASCII resource and distributed through a non partitioned branch to all replication client servers. Each of the possible buckets is mapped to a partition. The simplest version of this file would contain a single entry per possible hash code, which at an estimated 16 bytes of memory per hash code would consume 512K of memory. Extension to support hash ranges may be added, which may reduce memory consumption. This is assumed to be an in memory hash table which allows rapid lookup of the bucket.
Once the system has identified the bucket number, the bucket number is resolved to a given URI for the front most edge of the partition where the data for a given URI path exists. This second lookup is used because it is unlikely that any system will actually use N partitions. In source form this would look as follows.
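Since the actual map listings are not reproduced here, the following is a purely hypothetical sketch of the two lookups: URI to bucket, bucket to partition number, and partition number to the VIP of the partition's front most edge. The map contents, VIP names and hash choice are invented for illustration only.

    # Hypothetical sketch of the bucket and partition lookups; contents are invented.
    import hashlib

    BUCKET_TO_PARTITION = {0: 1, 1: 1, 2: 2}           # in practice one entry per bucket
    PARTITION_TO_VIP = {1: "http://p1-edge.example.internal",
                        2: "http://p2-edge.example.internal"}

    def bucket_for(uri: str, num_buckets: int = 32768) -> int:
        digest = hashlib.md5(uri.encode()).digest()     # any stable hash function works
        return int.from_bytes(digest[:2], "big") % num_buckets

    def partition_vip(uri: str) -> str:
        bucket = bucket_for(uri)
        partition = BUCKET_TO_PARTITION.get(bucket, 1)  # default partition for unmapped buckets
        return PARTITION_TO_VIP[partition]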
The source files for these maps are stored at the authoritative write store for the cluster and are replicated like any other resource. They are replicated using the option to skip partitioning which allows them to be replicated to all partitions which effectively ensures they are available on all servers.
Other addressing schemes may be utilized in accordance with the present technology. As noted above, a path prefix analysis partitioning applied before hashing allows this partitioning to be extended to filtering as necessary.
In certain situations, a given client may try to access content at edge tier servers which may not contain the data needed. In the present implementation, it is desirable that each server at each subsequent tier (e.g. tier N+1) not be required to understand the vertical partitioning, allowing flexibility in the configuration or number of partitions.
The use of the HTTP redirect described in
In many cases the maximum write rate for given write partition is very similar to the maximum write rate for downstream cache servers so the partitioning may need to be replicated all the way to the edge cache. Consistent hashing is used to determine which partition the data for a given URI will be located.
All updates are effectively written over existing resources of the same path. To avoid the possibility of returning a resource which has been partially updated if a server were to request the resource while it is being updated, all replacement of existing resources can be written to a different key space such as “original file path+timestamp+.tmp.” Once the new copy has been fully written, the old version can be deleted and the new version renamed to the original key. If updating a database resource, any individual update is assumed to be atomic, which means the local replication client can download the entire resource and process its update in a single database transaction.
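For a file-backed store, the replace-by-rename rule can be sketched as follows; the fsync and naming details are assumptions about one reasonable way to apply it.

    # Sketch of write-then-rename so readers never see a partially written resource.
    import os
    import time

    def atomic_replace(path: str, body: bytes) -> None:
        tmp_path = f"{path}.{int(time.time())}.tmp"    # key space readers ignore
        with open(tmp_path, "wb") as f:
            f.write(body)
            f.flush()
            os.fsync(f.fileno())                        # make sure the bytes are durable
        os.replace(tmp_path, path)                      # atomic swap onto the original key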
With reference to
Write request handling is largely processed the same as a read request. The primary difference is that the write or PUT request is by default proxied by the receiving server to the appropriate partition.
As the system grows additional partitions will be needed. The process for adding new partitions is illustrated in
Partition resolution data can be changed and replicated when all network partitions are available. All replication clients will not receive data simultaneously so data moved from one partition to another can be reached at either point. Update write requests may be routed to the old partition for a period of time.
In some cases, content for the new partition is scattered randomly across existing partitions. New partition write servers retrieve the full list of all content from each partition by reading their summary logs. They apply the partitioning lookup on each resource listed in the summary log and determine if it belongs in their partition by applying the hash semantics. If so, a GET is used to fetch the resource and a local PUT is made of that resource to their local store. After the write has been confirmed the original content is deleted from the old location. This requires use of an additional query parameter which causes the old location to temporarily ignore its partitioning logic; otherwise it would issue an HTTP redirect.
Partition removal is handled in a similar manner. The primary partition configuration is left unchanged while a temporary configuration of the new server mappings is written and propagated to the authoritative write server of the partition to be removed. At this time, all write requests are proxied by the partition to be removed to their new location by the authoritative write servers in the removal partition. The authoritative write store walks its content tree and issues a PUT to the new partition for each resource based on its calculated location based on the new partition map. Then it issues a delete for the resource after receiving an acknowledgement from the PUT. Any read requests are proxied to the new calculated location. If the new location does not have the resource the local store is checked. This may be reversed for optimization.
During this time a portion of the content will be in the new partition and a portion will be in the old location and the amount will change as the copy and delete operation continues. When all content has been removed and copied to its new location the primary partition configuration is updated to reflect new data locations and is propagated to all servers.
After the configuration information has been propagated to all servers the old partition can be removed. It is ideal to leave it in place for a period of time and only remove it after it receives no requests for a period of time. It is also viable to remove all but one server and remap all layers to this server, which simply acts as a proxy until all servers have started using the new partition map.
Partition splitting may be performed in the same manner as the creation of a new partition. The main difference is that all resources which need to be moved reside on a single partition which allows the split to occur as a result of a single walk.
To minimize replication of files that may have been written on the server but did not really change on the client, an MD5 type hash can be used on the contents of the file. The reading server (tier N+1) can compare its hash code for the file's current contents with the hash in the write log. If the hash is identical, then the tier N+1 server can simply change the modification time of the resource to reflect the server timestamp and skip the GET. The unique hash code may be added to the write log immediately before the relative path as shown by the string “82828288” in the sample below:
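Since the actual sample line is not reproduced here, the sketch below shows a hypothetical log line carrying a content hash just before the relative path, and the check a tier N+1 server could apply to skip the GET when its local copy already matches. The line layout and store helper are assumptions.

    # Hypothetical log line and hash comparison used to skip unchanged resources.
    import hashlib

    def should_fetch(log_line: str, local_store) -> bool:
        # e.g. "WN9018 10572 1712345678 W 82828288 ../system/items/00019919/baseitem.ion"
        parts = log_line.split()
        content_hash, uri = parts[-2], parts[-1]
        local_body = local_store.read(uri)             # assumed to return None when absent
        if local_body is None:
            return True                                # no local copy yet, must fetch
        return hashlib.md5(local_body).hexdigest() != content_hash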
Depending on data freshness requirements it can be necessary to limit the size of items written into any partitioned write area. This is necessary because a number of larger resources such as images may delay replication of smaller fragments if they are mixed in the same log partition. By partitioning larger files into separate partitions they can be replicated at a different rate. This is based on the assertion that some files need to be replicated quickly while a large image or video would have less impact. A very large file can prevent propagation of the next file. The copy of a large multi gigabyte file could take several minutes over a traditional WAN link, which would reduce the freshness of any files in the queue after the larger file. If this condition persists for long enough, the replication lag could exceed the TTL for some content, which would trigger a larger number of cache misses for content with a TTL shorter than the replication lag.
To prevent large files from delaying replication of smaller files a special semantic is used. In lieu of replicating large files immediately, a smaller place holder or proxy file is written. This proxy file is replicated as normal. The replication client recognizes these proxy files and adds the need to replicate the larger file named in the proxy to a lower priority replication queue. This allows the replication client to move on to subsequent files with no extra delay. If the large file replaces an existing file, then the original file may be deleted at the time the proxy file is detected, or marked for special expiration so that if it has not been replaced by the time its TTL expires the original is deleted.
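A sketch of the client-side handling of such placeholder files follows; the .proxy naming, JSON fields and queue object are assumptions used only to make the flow concrete.

    # Sketch of proxy-file handling: small placeholders replicate normally while the
    # large files they name are queued for lower priority transfer. Names are assumed.
    import json

    def handle_replicated_resource(uri: str, body: bytes,
                                   large_file_queue, local_store) -> None:
        if uri.endswith(".proxy"):                     # assumed placeholder naming convention
            info = json.loads(body)                    # e.g. {"target": "...", "source": "..."}
            large_file_queue.put(info)                 # fetched later, at lower priority
            local_store.mark_for_expiration(info["target"])  # old copy expires if not replaced
        else:
            local_store.write(uri, body)               # normal small-resource path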
If a client attempts to access the larger file before it is replicated then the last version present would be served unless it has exceeded the TTL. If the TTL has expired or if the resource has not arrived then it is treated as a standard cache miss.
In some instances an excess number of large files may exceed the disk space available in the servers. In this instance, an alternative is used which allows the remote replication server to defer fetching the file until first accessed and to clean these files using standard least recently used (LRU) cache algorithms.
When many large files are stored, they may be saved in an external storage array or other system 250 optimized for large scale resource management. When using the external array, the client 370 or higher tier is responsible for writing a copy of the file to the array or media service 250, and generating a URI where the resource can be accessed. This URI is included in the proxy file which is replicated using the standard mechanism. When the replication client receives the proxy, it can choose to either retrieve the resource or to wait until the first request for the resource is made and then manage the total disk space usage using a LRU mechanism to clean out the least used resources.
When using large file optimized services 250, the reading server may retrieve large resources directly from those stores to minimize extra network overhead. In that instance the server may return an HTTP redirect to the reading server with the URI where the larger resource is available. This presumes the large resource storage can be accessed by the reading server.
When writing larger files, extra attention can be paid to ensuring the resource is written completely before the reading server is allowed to access the local copy. To ensure this occurs, the resource is written under a different name such as “requested path+timestamp+.tmp” and renamed when the write is complete.
Returning again to
Client 370 includes handlers 306 that allow transformation and writes to alternative resource names, which can be implemented in the same language as the client and dynamically loaded based on path matching and content type semantics. The pluggable transformation agents may also be used to call APIs in other services, which allows local services which have their own repositories or databases to be updated based on changes in the content.
Data transformation may be used in a number of contexts. One of these is summarizing the first page of reviews shown in a detail page. This data only changes as the reviews are approved so the data summary view changes relatively infrequently.
Transformed data may include elements of web pages which require updating, where the entire portion of the page does not require updating. Consider, for example, an electronic commerce system where a number of items is offered for sale. It may be desirable to determine the “best” offer from amongst a series of sales offers. In one embodiment, the calculation may be made and written to the authoritative store. In another embodiment, this calculation and accompanying data may be made at one or more of the tiers in system 50.
In some instances, a request may be received for a resource that would normally be generated during transformation and which has not yet been generated. This would generate a cache miss that in many instances would propagate all the way back to the authoritative store.
In one embodiment, the authoritative store handler 306 can dynamically generate the resource on demand. This eliminates any need for the lower tiers to have custom handlers for data transformation on the fly. In another embodiment, edge tier servers can detect the cache miss and compose the resource needed by accessing the other pre-transform resources.
It may be desirable to use the CPU resources present in one of the servers to dynamically generate the transformed resources when needed. In this instance, a handler which represents a virtual resource is used. The server first detects a cache miss and, before attempting to access the next lower tier, checks its list of handlers (which may be local or remote), and uses the matching handler to dynamically generate the missing resource using other resources or external data sources. Once this is done the server returns the resource as needed. It saves a local copy and writes the generated resource back to the authoritative store using a PUT. The write back is based on the presumption that if accessed once the same resource has a higher probability of being needed again and there is no guarantee that the next access for the same resource will land on the same server where the content was dynamically generated.
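The flow can be summarized in the sketch below: on a miss the server consults its handler list, builds the resource from other local data, keeps a copy, and PUTs it back to the authoritative store. The registry, store URL and helpers are assumptions.

    # Sketch of on-demand generation of a transformed resource with write-back.
    import requests

    AUTH_STORE = "http://write-store.example.internal"   # assumed write store URL
    HANDLERS = {}                                         # path prefix -> transform function

    def serve_with_transform(uri: str, local_store):
        body = local_store.lookup(uri)
        if body is not None:
            return body                                   # no miss, nothing to generate
        for prefix, transform in HANDLERS.items():
            if uri.startswith(prefix):
                body = transform(uri, local_store)        # built from pre-transform resources
                local_store.write(uri, body)              # keep a local copy
                requests.put(f"{AUTH_STORE}{uri}", data=body)   # share it with other servers
                return body
        return None                                       # fall through to the next lower tier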
Certain cases may occur where a file which contained data used in a transform changes, so that the transformed view must be invalidated. For example, using the electronic commerce example, a product summary record may have been generated using data from many sources including 1 . . . N offers. When one of the offers changes, the generated view needs to be deleted so it is not used and is forced to be regenerated. Generation of this type of transform should occur in the authoritative write store where the generation can be triggered at the time of change. However, this approach consumes resources in the authoritative store. Another approach is to scale the transforms such that this type of entry is regenerated on a sufficiently frequent basis that the new transform is available before the TTL in traditional caches expires. To support this in the servers, a list of dependent transforms is maintained for each atomic asset. This list is referenced whenever a given asset is changed and then the dependent transforms are scheduled for deletion. This analysis or detection can be assigned to a small number of servers at the lowest tier set of servers possible and the deletes can be written against the authoritative store using the standard process. This can be implemented so the priority deletion servers are present in each vertical partition. The servers responsible for deletion processing may be configured to handle fewer or no inbound cache requests so they can allocate a majority of their capacity to change detection. Priority deletion servers may be allocated a subset of the write logs using the standard filtering or log partitioning to guarantee rapid response.
Additional data transformations can be enabled at the server tiers closest to the edge (e.g. servers 140) which transform the basic file fragments into those optimized for rapid rendering of common pages. This transformation is done by low priority processes. Any new fragments generated can be replicated to other servers, which may receive a request for the same content. A specialized handler (not shown) may be implemented so that a cache miss of this content can cause the content to be rebuilt from the lower tier fragments present on the server. An alternative cache miss strategy is to allow the servers to rebuild the transformed representation of the data based on the lower tier fragments directly. Leveraging background processes in this fashion allows higher effective machine utilization during idle times while minimizing work done to yield final rendering forms of the data during peak times.
Due to the number of hosts participating in the edge cache, there is a substantial amount of unused CPU power during non peak moments on these machines. To maximize the benefit derived from these servers, partitioning of the data they traverse when building transformed data types can be utilized. The actual partitioning information can be replicated as file fragments and treated as a queue so each summarization process is awarded small units of work from the queue.
When summarization work is done by the edge cache, it may be replicated to all other servers serving the same type of data at the same tier of the system 50. Data may be written into all machines of a lower layer. By writing such data to a lower tier, the transformed data is automatically replicated towards all servers in the edge cache which deal with the same set of data.
It is ideal if the server layer supports registration of dependencies for transformed or summary views, so that if any of the file fragments referenced to build the transformed view change, any transform generated views that were built based on the content of those files are automatically invalidated. This is ideally extended to allow registration of that summary view for rebuild on a priority but less than real-time basis.
Some use cases mandate that the freshest data be used. A good example of this is in an electronic commerce system where a customer has recently changed a shipping address. In this case, the most recent shipping address should be provided on any page rendered by the web server 150, even though the page rendering may be from a different rendering server than the one responsible for the update request.
One solution is to identify such cases using a standard HTTP header cache-request-directive “no-cache”. If this is received by an edge server, then all system tiers may treat this as a cache miss and will proxy the request to the next lower layer until the first layer or authoritative store is reached. This technique can create request storms on relatively constrained hardware. Due to the ability of this directive to create request storms at lower tiers, one solution is to not honor the request and issue an appropriate error message.
Another solution is to allow the standard HTTP cache-request-directive “max-age” to specify that data can be fresh within a given time frame. This can be used in conjunction with “max-stale”, which allows the server to return data that may be stale but attach a warning 110 (Response is stale) if the content age exceeds the max-age. For example, if a max-age of 1 second is used for customer-shipping-address.xml, the server will check its recorded server age. If the replication age is older than 1 second, the server will check document ages and if the document is older than 1 second, the next lower server tier will either return the content or refer to the next lower tier until it reaches the authoritative write store. In most instances the replication will be complete before the client request arrives at the authoritative write store. However, if the data has only replicated through a portion of the server layers, it will be found at the highest layer it has made it to and then pulled forward. It is desirable to use the largest acceptable max-age to minimize cache misses.
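The check order can be sketched as follows; the helper names and the replication-age bookkeeping are assumptions for illustration.

    # Sketch of max-age handling: server replication age first, then document age,
    # then referral to the next lower tier. Helpers are assumed for illustration.
    import time

    def satisfy_max_age(uri: str, max_age: float, local_store, replication_age: float):
        if replication_age <= max_age:
            return local_store.read(uri)          # nothing newer can exist upstream
        doc_age = time.time() - local_store.modified_time(uri)
        if doc_age <= max_age:
            return local_store.read(uri)          # the document itself is fresh enough
        return None                               # caller refers the request to the next lower tier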
In the event of a network partition which prevents a server in one layer from reaching the next lower layer that tier will return the most recent data it has and will return the Warning 110 (i.e. response is stale). Each cache layer will update its local age for the content that is retrieved in this fashion to prevent the next cache miss. This warning can be returned through all layers to the reading server.
The tier servers may use standard HTTP HEAD or GET which allows the servers to return a 304 indicating the content has not been changed rather than copying content which has not changed. If the client receives the warning 110, it most likely indicates a network partition failure.
In the case of the customer shipping address, mentioned above, a small lag may be allowed before rendering the content. For content of this type, the use of relatively small files and small vertical partitions allows rapid propagation. Assuming a blocking read on the log changes (defined above) and a 4 to 1 server mix, a sub 2 second propagation can be delivered in a 3 tier, moderately loaded cache which provides 16 read servers. At an average read rate of 5,000 reads per server per second, this supports a max read rate of about 80,000 reads per seconds with a sub 2-second propagation delay.
As defined above, under normal operating conditions, all write requests are proxied to the authoritative write store for the vertical partition which currently owns the content for a given URI space. This creates a problem in the event of network partition failure where clients of the system need to perform updates.
An example of this would be using two geographically distinct data centers, where one data center, for example located in the United States, provides backup for a second data center, for example located in Europe, during outage conditions, and the authoritative store for a given set of data, such as customer profiles for European users, is normally located in Europe and replicated to the USA. During the failure condition, when the USA data center is in operation, assume that a customer wishes to change a component of their profile such as a shipping address. In the normal operating condition the European cache server would receive the request and simply proxy it to the proper local authoritative store. During the failure condition the USA server cannot reach the European server, which would prevent the write from occurring and would be presented to the end user as an availability issue.
There is a need for cache servers operating in a remote data center to optionally allow local spooling of writes during network partition failures. For data which can tolerate some inconsistency, the process of
As illustrated in
At 1204, the local servers in the layer of the partition elect a local master. This master goes into local write mode and acts as the authoritative server for the local partition. If the local master fails a new local master is elected. At 1206, peer servers at the same tier in the local partition are temporarily reconfigured to treat the elected master as local master. All write requests are routed to the locally elected proxy using a proxy mechanism such as that discussed above for partitioning. Local servers at same tier temporarily reconfigure to point at the elected local master to pull change logs.
At 1208, the local master saves the updates in its local store and records them in its local write log using a different machine ID to identify the log. At 1210, all changes are replicated to lower tier servers in the same network partition using the replication processes discussed above.
When restoration of network connectivity to the missing partition is detected at 1212, the local master processes and sorts its write log to find, for each resource, the most recent update at 1214. To perform this function, an index of the log may be maintained, allowing the local master to find the last write of a resource written while it was the local master. At 1216, a determination is made as to whether the local resource is newer than the remote resource. If so, the local master issues a GET against the local server and a PUT against the remote server, which updates the normal authoritative store. In another alternative, the log can be read sequentially, starting at the first item written using the new machine ID after election as local master. The remote store records this as a normal update and will end up overlaying the local version. If the remote content is newer at 1218, then an error is logged for manual reconciliation at 1220, the local content is copied to a new numbered resource name, and the new name is added to the error log to allow future reconciliation. The remote content is then fetched and overlays the local content at 1222.
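An illustrative sketch of this reconciliation pass follows. All of the helpers (`remote_version`, `push_to_remote`, `copy_to_numbered_name`, `overlay_from_remote`) and the log record shape are assumptions introduced only to show the control flow; they are not the specification's interfaces.

```python
# Hedged sketch of reconciliation after connectivity is restored: for each
# resource written under the local-master machine id, push the local version
# forward if it is newer; otherwise preserve it under a numbered name, log it
# for manual reconciliation, and let the remote copy overlay the local one.
from dataclasses import dataclass


@dataclass
class LogRecord:
    uri: str
    version: int  # monotonically increasing change number


def newest_local_writes(local_log, local_master_id):
    """Index the local write log: newest record per URI written as local master."""
    newest = {}
    for machine_id, rec in local_log:
        if machine_id == local_master_id:
            if rec.uri not in newest or rec.version > newest[rec.uri].version:
                newest[rec.uri] = rec
    return newest


def reconcile(local_log, local_master_id, remote_version, push_to_remote,
              copy_to_numbered_name, overlay_from_remote, error_log):
    for uri, rec in newest_local_writes(local_log, local_master_id).items():
        if rec.version > remote_version(uri):
            push_to_remote(uri)                    # local GET + remote PUT
        else:
            saved_as = copy_to_numbered_name(uri)  # keep the local copy for review
            error_log.append((uri, saved_as))
            overlay_from_remote(uri)               # remote copy wins locally
```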
Once all files are updated at 1224, then at 1226 the local master sets a special resource file which is detected by the local replication servers, shifting all peers at the same tier in the local partition to refer to the remote partitions. All servers in the local partition begin processing changes from the remote authoritative store starting from a point before the failure occurred. Eventually they are brought fully up to date when they have processed all the changes which occurred while network connectivity was down.
Alternatively, at 1214, each resource on the local server may be processed if the total number of changes in the local log with the new machine ID is greater than some threshold (either an absolute threshold or a percentage of total resources on the server, for example).
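A small sketch of that threshold decision is shown below; the threshold values are hypothetical and would be tuned per deployment.

```python
# Assumed sketch of the alternative at 1214: if the local-master log holds more
# changes than a threshold (absolute, or a fraction of resources hosted), it is
# cheaper to walk every local resource than to index the log per write.
def should_walk_all_resources(changes_as_local_master: int,
                              total_resources: int,
                              absolute_threshold: int = 10_000,
                              fraction_threshold: float = 0.25) -> bool:
    return (changes_as_local_master > absolute_threshold
            or changes_as_local_master > fraction_threshold * total_resources)
```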
Some data, for example banking transactions, does not allow the possibility of conflicting changes. To support this, servers can be configured to analyze the local path and not accept changes for data having strict consistency requirements. If data in the files can be updated at a finer granularity, such as at the level of an atomic data element in an XML structure, then the process may be applied at that finer granularity.
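As a minimal sketch of such a path-based check, the prefixes below are hypothetical configuration marking URI spaces whose data must not diverge during a partition.

```python
# Assumed sketch: refuse local (spooled) writes for strictly consistent data.
STRICT_PREFIXES = ("/banking/transactions/", "/orders/payments/")


def accept_local_write(path: str) -> bool:
    """Reject writes that may not diverge from the authoritative store."""
    return not path.startswith(STRICT_PREFIXES)
```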
Some consistency issues can be overcome by using the versioned numbered files identified above. If each version of each file is retained then it is possible to write automated or manual processes which can be used to reconcile the content across versions to derive a valid master version.
Processor 1300 may contain a single microprocessor, or may contain a plurality of microprocessors for configuring the computer system as a multiprocessor system. Memory 1302 stores instructions and data for execution by processor 1300. If the technology described herein is wholly or partially implemented in software, memory 1302 (which may include one or more memory devices) will store the executable code for programming processor 1300 to perform the processes described herein. In one embodiment, memory 1302 may include banks of dynamic random access memory, high speed cache memory, flash memory, other nonvolatile memory, and/or other storage elements.
Mass storage device 1304, which may be implemented with a magnetic disc drive or optical disc drive, is a nonvolatile storage device for storing data and code. In one embodiment, mass storage device 1304 stores the system software that programs processor 1300 to implement the technology described herein.
Portable storage device 1312 operates in conjunction with a portable nonvolatile storage medium, such as a floppy disc, CD-RW, flash memory card/drive, etc., to input and output data and code to and from the computing system.
Peripheral devices 1306 may include any type of computer support device, such as an input/output interface, to add additional functionality to the computer system. For example, peripheral devices 1306 may include a network interface for connecting the computer system to a network, a modem, a router, a wireless communication device, etc. Input devices 1310 provide a portion of a user interface, and may include a keyboard or pointing device (e.g., mouse, track ball, etc.). In order to display textual and graphical information, the computing system includes one or more output display devices.
The components depicted in the computing system are intended to represent a broad category of computer components that are well known in the art.
Numerous variations on the above technology are possible. Non-file-based stores can be updated using the same replication strategy. In this instance, the data source can be modified to provide the update logs, and the individual data records can be made available via HTTP GET at unique URIs.
The sync client 370 can be easily modified to update a local database in lieu of local files. It is equally viable to store the elements retrieved from a remote database as local files. In general, small static files can be served quickly and inexpensively from standard caching HTTP servers, delivering an overall cost benefit while requiring minimal investment to move data resources forward for high speed access.
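As a minimal sketch of the database variation, assuming SQLite purely for illustration, a sync client could apply each replicated record to a local table instead of a local file; the table and column names below are hypothetical.

```python
# Assumed sketch: apply a replicated record to a local SQLite table rather than
# writing it to a local file (table/column names are illustrative only).
import sqlite3


def apply_record(db_path: str, uri: str, body: bytes) -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS resources (uri TEXT PRIMARY KEY, body BLOB)")
        conn.execute(
            "INSERT OR REPLACE INTO resources (uri, body) VALUES (?, ?)",
            (uri, body))
```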
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application is a continuation of U.S. application Ser. No. 13/842,970, entitled “FORWARD-BASED RESOURCE DELIVERY NETWORK,” filed Mar. 15, 2013, which is a continuation of allowed U.S. application Ser. No. 12/652,541, entitled “DISTRIBUTION NETWORK WITH FORWARD RESOURCE PROPAGATION,” filed Jan. 5, 2010, and issuing as U.S. Pat. No. 8,667,088, each of which is incorporated by reference herein for all purposes.
US Patent Application Publication: 2015/0249579 A1, Sep. 2015, US.
Provisional Application: 61248291, Oct. 2009, US.
Parent Application: 13842970, Mar. 2013, US; Child Application: 14644031, US.
Parent Application: 12652541, Jan. 2010, US; Child Application: 13842970, US.