Many companies and other organizations operate large web sites that are used by their customers, as well as the organizations' employees, to obtain access to various types of information and services. Often, clients access the sites from locations that are geographically distributed around the world. As the sophistication and complexity of the content that is made available through the web sites increases, the number of different static and dynamically-generated components of individual web pages can also increase—for example, an HTTP (HyperText Transfer Protocol) request for a single URL (Uniform Resource Locator) may in some cases result in the transmission to the requester of several different image files of various kinds, numerous static text components, dynamically-generated results of several queries to a backend application server or database, and, in some cases, even content components retrieved dynamically from different third-party sources. Often the content provided is customized in some ways based on the preferences or profiles of the requester.
In at least some cases, the web sites are the primary interface through which the organizations market and sell their products—e.g., an online retailer may sell hundreds or thousands of products via its web sites. Especially in such scenarios, the perceived performance of the web site—e.g., how long it appears to take to navigate from one web page to another, or to retrieve search results, and so on—may be critical to the organization's financial success, as potential customers that are dissatisfied with the web site's responsiveness may take their business elsewhere.
With the increasing popularity in recent years of new web-enabled devices, such as smart phones and tablets, the problem of providing content fast enough to retain client interest and loyalty has become even more complicated, as the different devices (and the versions of web browsers installed on the devices) and the various types of network connections being used (e.g., over cellular links, public or home-based “wi-fi” links, or high-bandwidth corporate network links) may all have very different performance capabilities. Although a number of approaches to speed up perceived and actual web page delivery have been implemented, such as caching of web content at edge servers that are located geographically close to the requesting clients, asynchronous delivery of various types of web page components, and the like, slow responses to web requests remain a potential problem that can have a significant negative impact on an organization's business success.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include,” “including,” and “includes” mean including, but not limited to.
Various embodiments of methods and apparatus for content preloading using predictive models are described. Content such as text, images, video, audio, and the like that clients can access from various sources (such as web sites) over network connections may be referred to generally as “network content” or “web content” herein. Individual content components, such as a snippet of text, an image (e.g., a gif file or a jpg file), or browser-executable code elements written in a scripting language, may be termed “assets”, “network content assets” or “web content assets”. Network content provided to a client as a logical unit, e.g., in response to a single request for a particular URL or web page, may thus comprise one or more assets, some of which may be static (e.g., an unchanging corporate logo image may be included in a large number of the web pages of the corresponding corporation's web sites), while others may be dynamically generated (e.g., a list of products that is customized for a particular client, where the products are identified and sorted dynamically based on the client's preferences, previous request history, and/or the current prices of the products). Sophisticated web sites, such as an online retailer's site or sites, may include hundreds or thousands of web pages potentially reachable from one or more starting points (such as a site “home” page) via large numbers of feasible navigation and/or search paths.
A network content provider, such as an organization that sets up and maintains the web site, may be able to gather various types of information about the incoming client requests and the responses to those requests. Such information may include, for example, the times at which network content requests are received (e.g., at a web server, as indicated in the web server's request logs), the assets transmitted to the clients in response to the requests, as well as various client device or client software properties. For example, in embodiments where the HTTP protocol is in use, each client request may include various HTTP headers, such as a “Referer” header which indicates the address of the previous web page from which a link to the currently requested page was followed. (The word “referrer” was misspelled as “referer” in an early HTTP standards document, Request for Comments (RFC) 1945, and the incorrect spelling has been in general use in HTTP-related discussions and HTTP official documentation since then.) For example, a request for an html page http://www.website1.com/page10.html may include a Referer header “Referer: http://www.website1.com/main.html” indicating that “page10.html” was requested via a link on “main.html”. Another HTTP header may include a cookie that can be used to identify the requesting client's previous activity, an email address of the client making the request, or a “user-agent” that identifies the client's browser. In addition, the originating network address (e.g., an Internet Protocol or IP address) from which the client sends the request may be tracked as part of establishing the network connection over which the content is transmitted in some embodiments.
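To illustrate, the navigation information carried in a Referer header might be extracted along the following lines. This is a hedged sketch only: the function name and the dictionary-of-headers representation are assumptions for illustration, not part of any particular web server implementation.

```python
# Hypothetical sketch: extracting a navigation edge (previous page -> current
# page) from HTTP request headers. The headers-as-dict shape is an assumption.

def navigation_edge(request_url, headers):
    """Return a (previous_page, current_page) tuple, or None if no Referer."""
    referer = headers.get("Referer")  # note the historical misspelling
    if not referer:
        return None
    return (referer, request_url)

edge = navigation_edge(
    "http://www.website1.com/page10.html",
    {"Referer": "http://www.website1.com/main.html",
     "User-Agent": "ExampleBrowser/1.0"},
)
```

A sequence of such edges, collected from request logs, yields the navigation paths that a predictive model can be trained on.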
Other information about the client environment may be available, either directly (e.g., from headers) or indirectly—for example, if a browser version in use at a requesting client's cell-phone device is identified, it may be possible to determine, from some public source, the maximum size of that browser's cache.
Using some combination of these types of information (e.g., request time, response contents, header contents, requester IP address and so on), in some embodiments a predictive model may be generated to identify, given a particular request for some set of content assets from a client, one or more additional assets that the client is likely to request. For example, an analysis of request timing sequences may indicate that when a particular user issues requests within a few seconds for web page A followed by web page B, that user often requests web page C within the next few seconds or minutes. Similar predictions may also be possible from an analysis of HTTP request header information (e.g., Referer headers), which may indicate the path taken (i.e., the links clicked on) to reach a given content page. In some embodiments one or more preloader components of a content delivery system may be responsible for using predictions made by the model to proactively initiate delivery of content assets on behalf of clients. Continuing the discussion of the above example, where requests for pages A and B are found to frequently be followed by requests for page C, page C might comprise a plurality of network content assets such as one or more images, static text components as well as dynamic components. The preloader's model may thus be able to identify a predicted set of assets (e.g., assets associated with page C) that the client is anticipated to request after requesting the assets associated with pages A and B. In some embodiments the model may be able to provide probabilities of anticipated requests: e.g., based on its analysis, the model may indicate that there is a 75% probability that asset K will be requested by a client that has recently requested asset L, and the preloader may take the probability into account when determining whether to proactively initiate delivery of the predicted asset to the client.
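One simple way to realize the request-sequence analysis described above is a frequency-counting model: observed per-client request sequences are tallied, and the probability that a given page follows a given recent-request tuple is estimated from the counts. The class name, the training-data shape, and the use of plain counting (rather than any more sophisticated statistical technique) are all illustrative assumptions in this sketch.

```python
from collections import Counter, defaultdict

# Hedged sketch of a sequence-based predictive model. Training input is
# assumed to be a per-client ordered list of requested pages.

class SequenceModel:
    def __init__(self, history_depth=2):
        self.history_depth = history_depth
        # (tuple of recent requests) -> Counter of next-page occurrences
        self.counts = defaultdict(Counter)

    def train(self, request_sequence):
        d = self.history_depth
        for i in range(len(request_sequence) - d):
            key = tuple(request_sequence[i:i + d])
            self.counts[key][request_sequence[i + d]] += 1

    def predict(self, recent_requests):
        """Return (predicted_page, probability), or None if no data exists."""
        counter = self.counts.get(tuple(recent_requests))
        if not counter:
            return None
        page, hits = counter.most_common(1)[0]
        return page, hits / sum(counter.values())

model = SequenceModel()
model.train(["A", "B", "C"])
model.train(["A", "B", "C"])
model.train(["A", "B", "D"])
prediction = model.predict(["A", "B"])  # "C" followed ("A", "B") most often
```

The returned probability could then be compared against a threshold (such as the 75% figure in the example above) when the preloader decides whether to initiate proactive delivery.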
In at least some embodiments, a preloader may use one or more characteristics of a given client request to determine whether to perform an asset preload operation using the model. In one embodiment, one or more preload indicator assets, such as a particular browser-executable script file name “magic.js” in environments where the JavaScript™ scripting language is supported, may be identified to serve as triggers for preload operations. (JavaScript™ is just one example of a scripting language that may be used, and “magic.js” is just an example name of a script file that could be used. In general, languages other than JavaScript™, and techniques other than the use of dynamically generated scripts, may be used for preload triggering in various embodiments.) Various content providers (e.g., web site operators) may be notified in some embodiments that, in order to take advantage of the preload feature to obtain the best possible website performance, web pages provided from their sites should include respective requests for the preload indicator assets. For example, if the operator for a website with a home page URL www.website1.com/home.html wishes to use the preloader feature, the operator may be notified that the home.html page (and other pages of www.website1.com) should include a request for “magic.js”. In some scenarios more detailed guidance may be provided to the content provider as to how to use the preload indicator asset: for example, in implementations where the various static and dynamic components on a given page are expected to be processed by a web browser in order, the content provider may be instructed to include the request for “magic.js” in a component located close to the end of the page, so that any processing or overhead related to preload causes minimum interference with processing related to other non-preload-related components of the page.
In some embodiments, enhanced web browsers, or plugins for web browsers, may be configured to add requests for preloader indicator assets, and/or requests for predicted assets, to client requests.
Continuing with the “magic.js” example scenario, when the preloader determines that a given client request includes a request for magic.js, this may serve as a signal to the preloader to determine whether proactive delivery of assets should be attempted based on the client's current request or current request sequence. The preloader may cause the contents of “magic.js” to be generated dynamically based on the model's prediction of additional assets likely to be requested soon by the client. The contents of this dynamically generated “magic.js” script may then be transmitted to the client for execution by the client's browser. When the client's browser executes the dynamically generated script, requests for one or more of the predicted assets may be generated. Such requests may result in the predicted assets being transmitted to the client and stored in the client's browser cache in some embodiments, so that they are available locally at the client when and if the assets are actually requested. That is, in such embodiments, the assets of the predicted asset set may not be immediately displayed, but may be retrieved from the appropriate web server or other content source for later display. Any of a number of different types of preload indicator assets, at least some of which may not include scripts, may be used in different embodiments.
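The server-side generation of such a script's contents could be sketched as follows. The function name is hypothetical, and the use of `new Image().src` as the browser-side fetch mechanism is just one illustrative cache-warming technique; any approach that causes the browser to request and cache the predicted URLs without displaying them would serve the same purpose.

```python
# Hedged sketch: building the body of a dynamically generated "magic.js"
# response from the model's predicted asset URLs. Names are assumptions.

def generate_magic_js(predicted_urls):
    lines = ["// dynamically generated preload script"]
    for url in predicted_urls:
        # Requesting the asset warms the browser cache without displaying it.
        lines.append('new Image().src = "%s";' % url)
    return "\n".join(lines)

script = generate_magic_js(
    ["http://www.website1.com/pageC.html",
     "http://www.website1.com/images/banner.jpg"]
)
```

The generated text would be returned as the response body whenever a client requests “magic.js”, so each client can receive a script tailored to its own predicted asset set.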
In some embodiments, preloading may be targeted to caches at other entities, instead of, or in addition to, being targeted at client browser caches. For example, model predictions may be used to proactively load data into a content cache maintained at an edge server of a content delivery network (CDN), so that subsequent requests from clients may be handled more quickly than if the requested assets had to be loaded on demand when clients request them from the edge servers. Similarly, model predictions may be used to proactively load data from a database server into application server caches in some embodiments. In some embodiments at least some components of the preloader and/or the model may be resident on client devices—e.g., as subcomponents of an enhanced client browser, or as separately-installed components or applications on a client computer, laptop, phone, or tablet. In some scenarios, the preloader and/or the model may be implemented at multiple layers—e.g., with some components or instances at client devices or browsers, others at edge servers or web servers, and others at application servers or back-end databases. The delivery of the predicted assets may be targeted at multiple destination caches in some embodiments—e.g., a single prediction may result in assets being delivered to an edge server and to a client browser by cooperating preloader instances at different layers of the system. Model predictions may also be made based on observations of client request sequences at several different entities in some embodiments—e.g., at an individual user's devices from which the requests originate, or at a web server, an edge server, an application server or a database server.
In at least one embodiment, predictions made with the help of the model may be used for evicting contents from caches. For example, the model may be able to predict when the probability of additional requests for one or more cached assets falls below a certain threshold, making it less useful to retain the assets in the cache. Using this information, the assets may be discarded from the cache, or placed in a list of cached assets identified as good candidates for replacement from the cache. In some embodiments the model may be configured to proactively recommend eviction candidates, while in other embodiments the model may identify eviction candidates in response to specific requests from the preloader or other entities. Thus, in general, the model may be used for preloading assets for which additional requests are expected, and/or for discarding currently cached assets for which the probability of additional requests is deemed low. Just as model-based preloading may be targeted to caches in different layers of a system, model-based eviction operations may also be targeted at different cache layers, such as browser caches, edge server caches, or application server caches.
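A minimal sketch of such model-driven eviction, assuming the model can supply a per-asset re-request probability, might look like the following. The probability map, the threshold value, and the function name are all illustrative assumptions.

```python
# Hedged sketch: assets whose predicted re-request probability falls below a
# threshold become eviction candidates, ordered least-likely-first.

def eviction_candidates(cached_assets, request_probability, threshold=0.1):
    """Return cached assets unlikely to be requested again."""
    unlikely = [a for a in cached_assets
                if request_probability.get(a, 0.0) < threshold]
    return sorted(unlikely, key=lambda a: request_probability.get(a, 0.0))

candidates = eviction_candidates(
    ["logo.gif", "story_2013_01_10.html", "home.html"],
    {"logo.gif": 0.9, "story_2013_01_10.html": 0.02, "home.html": 0.4},
)
```

The resulting list could either be discarded immediately or merely recorded as preferred replacement candidates, matching the two eviction styles described above.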
In one simple implementation, the characteristic of a client request that is used to determine whether a preload operation is to be performed may simply be the presence (or absence) of a requested asset in the model's database of assets for which predictions are available. For example, the model in such a scenario may use a mapping M from each element of a set R of requested assets (or groups of requested assets), to a corresponding element of a set P of predicted assets (or groups of assets). When a client C requests an asset a1, the preloader may check whether a1 is an element of R (i.e., whether a prediction can be found for assets likely to be requested after a1 is requested by C). If such a prediction is found, a preload operation to deliver the corresponding assets in P to client C may be initiated; if R does not contain a1, no preload operation may be initiated. In some implementations, the model may be enhanced based on the actual requests that follow the request for a1, so that preloads based on requests for a1 can be initiated in the future.
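The mapping-based check described above reduces to a dictionary lookup. In this sketch the asset names and the mapping contents are illustrative; only the lookup structure reflects the description.

```python
# A minimal sketch of the mapping M from requested assets (the set R) to
# predicted asset sets (elements of P). Asset names are hypothetical.

M = {
    "a1": {"k1.jpg", "k2.css"},  # a request for a1 predicts k1.jpg and k2.css
    "b1": {"k3.html"},
}

def assets_to_preload(requested_asset, mapping=M):
    """Return the predicted set for this asset, or an empty set (no preload)."""
    return mapping.get(requested_asset, set())

preload_set = assets_to_preload("a1")  # prediction found: preload initiated
no_preload = assets_to_preload("z9")   # z9 not in R: no preload initiated
```

When no prediction exists (the empty-set case), the observed follow-on requests could be recorded so that the mapping can be enhanced for future requests, as noted above.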
If the preloader determines, based on characteristics such as the request for a previously specified indicator asset, that an asset preload operation is to be performed, the preloader may consult or query the model to identify the specific assets to be sent to the client. For example, in one implementation a model may provide a programmatic query interface. A request specifying the currently-requested network content assets, and identification information associated with the requesting client (e.g., an IP address of the client, or cookie-derived client identification), may be submitted as input via such an interface, and the model may respond with a predicted set of assets. The preloader may initiate the delivery of at least some of the identified assets of the predicted set to one or more destination caches on behalf of the client, e.g., for inclusion in the client's browser cache, and/or an edge server cache (which may be shared by many clients). A subset of the predicted set may be delivered in some scenarios, rather than the entire set, based for example on the preloader's determination that some of the predicted assets are likely to already be in the destination cache, or as a result of resource constraints, as described below in further detail. The delivery of the assets to a client device may involve participation of one or more content delivery or content generation system components, depending on the implementation. For example, in some scenarios, at least a portion of the preloader may be implemented as a component incorporated at an edge server of a content delivery network (CDN), where the edge server maintains a cache of the content generated at one or more content generators (which may be termed “origin servers” in some cases). In such a scenario, the preloader may check whether the predicted assets are already present in the edge server's cache.
If the predicted assets are present, they may be delivered from the edge server's cache to the client browser; if the predicted assets are not present, they may first be requested from the content generators so that they can be loaded into the edge server cache, and transmitted to the client after the edge server cache receives them. As noted above, in some embodiments the preloader may deliver the predicted assets to the edge server content cache, and may not necessarily transmit them all the way to client browsers. In some embodiments it may be possible for the preloader to direct a content source (such as an application server) to transmit the components directly to the client without intermediate server-side caching.
Not all requests or request sequences analyzed by the preloader may lead to the use of the model, or the proactive delivery of assets to the client. In one implementation, if the analysis of the characteristics of a client's request does not indicate that a preload operation is to be performed, the preloader may simply continue on to analyze the next client request, to determine whether that next client request leads to a preload. In some embodiments, the preloader may obtain feedback regarding the responsiveness of one or more content sites (e.g., using a custom or instrumented browser configured to measure content component load times and provide at least some metrics obtained from those measurements back to the preloader). If the feedback indicates that the performance is satisfactory for a given site, preloading may be reduced or discontinued for some portions of the sites.
In some embodiments, the preloader may be able to identify (e.g., using contents of a User-Agent HTTP header of a client request, or another similar information source) one or more properties of the browser or device from which the client requests are received. For example, the browser name and version may be obtained, which may in turn indicate the maximum size of the corresponding browser cache. Alternatively, in some embodiments it may be possible to deduce whether the requests are being received from a phone or tablet rather than a personal computer or server; this type of information may serve as an indication of the computing performance capabilities or likely memory size at the requesting device. Similarly, in some scenarios the type of network connection being used may be determined or deduced (e.g., whether a cellular phone network is being used, or whether a broadband connection or other high-throughput links are being used). Based on some or all of these factors, the preloader may be able to select an appropriate version of an asset to be preloaded for the client in some embodiments—e.g., for a phone device, a smaller or lower-resolution version of an image file may be preloaded than for devices with larger displays. The version of an asset selected for delivery to a client may be customized in any of several different ways—e.g., in one embodiment, a complete unabridged version of a document asset may be preloaded under certain conditions, while a summarized or abridged version may be preloaded if the preloader detects that the client device or browser is resource-constrained.
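Such version selection could be sketched as a simple dispatch on deduced device properties. The User-Agent substrings tested here and the variant file-name suffixes are illustrative assumptions; a real implementation would typically consult a richer device-capability database.

```python
# Hedged sketch: choosing an asset variant based on properties deduced from
# the User-Agent header. Substrings and variant names are hypothetical.

def select_image_variant(user_agent, base_name):
    ua = user_agent.lower()
    if "mobile" in ua or "phone" in ua:
        return base_name + ".low_res.jpg"   # smaller file for small displays
    return base_name + ".high_res.jpg"      # full resolution otherwise

variant = select_image_variant(
    "ExampleBrowser/2.0 (Mobile; Phone)", "banner")
```

The same dispatch point could also take the deduced network connection type into account, e.g., preferring the smaller variant over cellular links regardless of display size.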
The functioning of the model used for predictions of future requests may be controlled in part by a number of modifiable parameters in some embodiments. The parameters may include, for example, an input history depth parameter indicating the number of received network asset requests or URL requests to be used to determine the predicted set, and an output prediction length parameter indicating a targeted size of the predicted set (e.g., in terms of content assets, or in terms of entire URLs with all their associated assets). In a simple example, the history depth may be set to one URL requested, and the output prediction length may also be set to one URL, thus indicating that one predicted URL is to be identified for each requested URL when possible. In another example, a sequence of three requested URLs may be used to predict two anticipated image files. In another embodiment, in which the model provides probability estimates for its predictions, another model parameter may comprise a probability threshold indicating a minimum predicted request probability for an asset to initiate a preload operation for the asset. Cost-related parameters may be included in some embodiments, e.g., so that the estimated resource usage (such as network bandwidth or CPU usage) for preloading, at either the sending entity or the receiving entity or both, is factored into the decision as to whether to perform a given preload. Some or all of the model parameters may be automatically adjusted by the preloader (e.g., based on metrics of effectiveness of the preloads, such as the client-side performance metrics discussed above, or on offline comparisons of actual requests to predicted requests).
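The probability-threshold and cost-related parameters described above might gate a preload decision as in the following sketch; the parameter names and default values are assumptions chosen for illustration only.

```python
# Hedged sketch: a preload decision gated by a probability threshold and a
# resource-cost limit. Defaults are illustrative, not prescribed values.

def should_preload(predicted_probability, estimated_cost_bytes,
                   probability_threshold=0.75, cost_limit_bytes=512 * 1024):
    """Preload only if the request is likely enough and cheap enough."""
    return (predicted_probability >= probability_threshold
            and estimated_cost_bytes <= cost_limit_bytes)

decision = should_preload(0.8, 100 * 1024)       # likely and small: preload
skipped = should_preload(0.8, 5 * 1024 * 1024)   # too costly: skipped
```

Automatic tuning would then amount to adjusting `probability_threshold` and `cost_limit_bytes` based on observed preload effectiveness.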
In some embodiments, depending on the specific versions of content delivery protocols being used, any of various protocol-specific features may be used to preload the predicted assets. For example, in some implementations, the preloaded content assets may be delivered via the use of one or more embedded HTML elements inserted into a web page at the request of (or by) the preloader, or an inline HTML frame (iframe) such as a hidden iframe. Such hidden iframes or other embedded elements may serve as triggers for the preload operations to be initiated, in a manner analogous to the use of the example magic.js script described earlier. Guidance as to where within a page such an iframe or embedded HTML element should be placed (e.g., towards the end of a page, so as not to interfere with the non-preload-related components of the page) may be provided to content generators in some implementations. If the delivery protocol supports server-side push operations, the preloader may instruct a web server or other server-side component to transmit the preloaded assets to the client without actually receiving a corresponding request from the client browser. In some embodiments where preloader components or instances are incorporated into the client browser, the preloader may insert requests for one or more model-predicted assets into the original client requests. In some implementations, the preloaded assets may be accompanied by a marker or indicator notifying the receiving browser that the assets are to be cached for later display upon request—i.e., that the assets are not to be displayed immediately. The preloaded assets may include static components (e.g., images or static text) as well as dynamic components (e.g., dynamic HTML elements) in some embodiments.
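The hidden-iframe technique, with the placement guidance described above, could be sketched as a simple server-side page transformation. The string-replacement approach here is purely illustrative; a production implementation would more likely operate on a parsed HTML document.

```python
# Hedged sketch: inserting a hidden iframe that triggers preloading, placed
# just before </body> so it interferes minimally with the rest of the page.
# The URL and page content below are hypothetical.

def insert_preload_iframe(page_html, preload_url):
    iframe = '<iframe src="%s" style="display:none"></iframe>' % preload_url
    marker = "</body>"
    if marker in page_html:
        # Insert near the end of the page, per the placement guidance.
        return page_html.replace(marker, iframe + marker, 1)
    return page_html + iframe  # no </body> tag: append at the end

page = insert_preload_iframe(
    "<html><body><p>content</p></body></html>",
    "http://www.website1.com/preload.html",
)
```

Because the iframe is hidden, the browser fetches and caches the preload target without altering the visible page.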
In some embodiments the delivery of at least some of the predicted assets may be implemented asynchronously—i.e., the delivery may be scheduled as a background or low-priority task, relative to the priority of delivering other content components that have already been requested. In at least one implementation, the predicted asset set may include one or more assets from a different web site than at least some of the assets whose requests by the clients led to the prediction—i.e., the predictions may cross site boundaries in such cases.
The content assets of the predicted set corresponding to a given set of actual requests may change over time in some cases, and may also change from one user to another. For example, in the case of a news web site, one client C1 may start by visiting the site's home page, then visit the sports page, and then typically visit the “top” international news story page (where the “top” page is defined by the news web site, and may change from day to day or even hour to hour). The URL for the specific page that displays the top international news story, as well as at least some of the assets associated with that page, may differ on Jan. 10, 2013 from the corresponding “top” assets on Jan. 11, 2013, for example. Thus the predicted set of assets anticipated to be requested following the home page and the sports page may change over time—i.e., even for the same client and the same input request sequence, the predicted set determined by the model may change from one time period to another. Continuing the example, another client C2 may also start with the home page followed by the sports page, but may then typically visit the business page. Thus, the predicted asset set for C2, based on visits to the home page and sports page, may comprise assets of the business page and not the assets of the top international news page. In some embodiments, the model predictions may be based on request sequence histories from multiple users, from which the most likely anticipated asset sets may be derived for an entire user population considered as a group (e.g., all users whose requests originate from a particular domain may be considered as one group, or users from a particular geographical region may be considered a group, or all users regardless of their location or affiliation may be considered as one large group for making predictions). In other embodiments, predictions may be made on a per-user basis instead of being based on analysis of requests from numerous users. 
The extent to which predictions are to be customized for individual users may itself be governed by a tunable parameter in some embodiments—e.g., in one implementation, initially the predictions may be based on requests from all users, and later, as more request records are collected, predictions may be customized based on smaller groups or on request streams from individual users.
Example System Environments
In system 100, each edge server is shown with a respective server-side preloader instance 180—e.g., edge server 120A has associated preloader instance 180A, edge server 120B has a preloader instance 180B, and edge server 120C has preloader instance 180C. Each preloader instance may comprise one or more executable components and/or data sets in some embodiments. Each preloader instance may also have access to a predictive model, as shown in the depicted embodiment, where preloader instance 180A accesses predictive model instance 135A, preloader instance 180B accesses predictive model instance 135B, and preloader instance 180C accesses predictive model instance 135C. The various preloader instances may coordinate their functionality to varying degrees, depending on the implementation. For example, in some tightly-coupled implementations, each preloader instance may be expected to initiate proactive delivery of the same predicted set of assets for a given actual request sequence (and therefore each preloader instance may rely on the same model mappings); in other, more loosely-coupled implementations, each preloader instance may utilize a slightly different model that may make slightly different predictions, so the preloaded assets may differ for the same input request sequence. The term “preloader” may be used synonymously herein for the term “preloader instance”, and the term “model” may be used synonymously for the term “model instance”. Each model instance 135 may in turn comprise one or more executable components and/or data sets that may be used to generate and/or store predictions in some embodiments. It is noted that although a distributed model implementation (with multiple model instances) is depicted in
In some embodiments, each edge server 120 may be configured to generate request logs, or otherwise capture request sequence information (as well as request headers, IP addresses of clients 148, and the like), and the collected request sequences may be used to generate the predictive models 135. In some embodiments, request sequence information used for the model may also be obtained directly from client browsers 160. Depending on the implementation and on such factors as the volume of requests, the costs of sharing request sequence histories across networks, and so on, the predictions made by a given model instance 135 may be derived from request sequence information gathered at a single edge server, at multiple edge servers, or at browsers and one or more edge servers. In some implementations, edge servers 120 may serve only a subset of requests submitted by clients 148—for example, in one implementation, requests for some types of dynamically-generated content may be handled by the origin servers rather than the edge servers. In such cases, the models may be configured to utilize request sequence histories obtained from origin servers 112 as well. In at least one implementation, the origin servers themselves may comprise multiple tiers—e.g., application server tiers, database servers, and the like, and request sequence logs from one or more of such tiers may be utilized for model predictions. The model instances (e.g., server-side instances 135 and/or client-side instances 137) may be operable to determine a predicted set of one or more additional network content assets anticipated to be requested by a client 148 after the client has requested a given set of one or more network content assets, based at least in part on an analysis of a history of received network content asset requests obtained from one or more of the different sources discussed above.
In at least some embodiments, the client-side and server-side components or instances of the models and/or preloaders may cooperate closely with each other—e.g., predictions made by a client-side model may be based partly on request sequence analysis performed at the server-side and partly on analysis of requests from the client device, and corresponding preload operations may be performed by either client-side preloaders, server-side preloaders, or by both types of preloaders.
In the embodiment shown in
In addition to being used for proactive preloading of assets, models 135 and/or 137 may be used for eviction operations from caches 125 and/or 161 in some embodiments. The models may be able to predict when an asset that is currently cached is unlikely to be required in the future, or to identify the particular asset or assets that are least likely to be requested again. Accordingly, assets identified as having low request probabilities may be discarded from their respective caches, and/or placed in a list of candidates to be evicted from the caches in preference to other assets. In some embodiments, server-side models 135 and/or client-side models 137 may be used largely or exclusively for eviction decisions, while in other embodiments the models may be used largely or exclusively for preload operations.
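The eviction logic described above can be sketched in a few lines. The following is a minimal, illustrative example only; the function and field names (e.g., a per-asset re-request probability table produced by a model) are assumptions for illustration, not part of any actual preloader interface.

```javascript
// Hypothetical sketch: selecting cache-eviction candidates from a model's
// per-asset re-request probabilities (all names are illustrative).
function evictionCandidates(cachedAssets, reRequestProbability, threshold) {
  // Keep only assets whose predicted probability of being requested again
  // falls below the threshold, ordered so the least-likely are evicted first.
  return cachedAssets
    .filter((asset) => (reRequestProbability[asset] ?? 0) < threshold)
    .sort((a, b) => (reRequestProbability[a] ?? 0) - (reRequestProbability[b] ?? 0));
}

// Example: three cached assets with model-estimated re-request probabilities.
const probs = { "home.html": 0.9, "promo.jpg": 0.05, "search.js": 0.4 };
const candidates = evictionCandidates(Object.keys(probs), probs, 0.5);
// candidates → ["promo.jpg", "search.js"]
```

Assets below the threshold might be discarded immediately, or simply placed at the head of an eviction-candidate list, as described above.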
In the depicted embodiment, each web server 220 comprises a corresponding server-side preloader instance 180—e.g., web server 220A comprises preloader instance 180A, web server 220B comprises preloader instance 180B, and so on. The preloader instance may be implemented as a web server plugin in some embodiments. Each preloader instance 180 in turn may utilize a predictive model subcomponent 135—e.g., preloader 180A uses model 135A, preloader 180B uses model 135B, and preloader 180C uses model 135C. As discussed with reference to the preloader and model instances of
The predictive models 135 and 137 in the embodiment shown in
As briefly mentioned above, the origin servers 112 of
Predictive Model Components
In the embodiment shown in
Over time, the size of the mapping dataset may grow substantially, especially for popular content sites. Furthermore, some of the mappings may simply become inapplicable as the content that is available from a given content provider is modified—e.g., news stories may be removed from a news web site within some period of time, so a mapping that indicates that a given news story web page is likely to be requested after an earlier request for a different page may no longer be valid when the news story web page is removed from the site by the site owner. Accordingly, at least in some embodiments, some mappings 345 may be discarded or archived (e.g., placed in a mapping archive or deleted mapping repository 350) at appropriate times, thereby reducing the size of the current mapping dataset 320 and ensuring that invalid mappings do not lead to erroneous predictions and associated overhead.
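The pruning of the mapping dataset might look like the following sketch. The mapping structure and field names (a `predictedAsset` target, a `lastConfirmed` timestamp) are assumptions chosen for illustration.

```javascript
// Illustrative sketch of discarding stale or invalid request-sequence
// mappings from the current dataset (structure is an assumption).
function pruneMappings(mappings, validAssets, now, maxAgeMs) {
  const current = [];
  const archived = [];
  for (const m of mappings) {
    // A mapping not confirmed by actual requests for too long is stale.
    const expired = now - m.lastConfirmed > maxAgeMs;
    // A mapping whose predicted asset was removed from the site is invalid,
    // like the news-story example above.
    const invalid = !validAssets.has(m.predictedAsset);
    (expired || invalid ? archived : current).push(m);
  }
  return { current, archived };
}

const now = Date.now();
const result = pruneMappings(
  [
    { predictedAsset: "story1.html", lastConfirmed: now - 1000 },
    { predictedAsset: "removed-story.html", lastConfirmed: now - 1000 },
    { predictedAsset: "story2.html", lastConfirmed: now - 10 * 86400000 },
  ],
  new Set(["story1.html", "story2.html"]), // assets still served by the site
  now,
  7 * 86400000 // archive mappings not confirmed within a week
);
// result.current retains only the story1.html mapping
```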
A number of different modifiable parameters 325 may govern various aspects of model functionality in some embodiments. For example, an input history depth parameter may be used to determine how many received or past asset requests are to be used as input for identifying future asset requests—for example, whether a sequence of five requests is to be used to make a prediction, or a sequence of two requests is to be used for predictions. An output prediction size parameter may be used to determine how many future requests are to be identified for a given input request set—e.g., whether two future requests should be predicted, or whether four requests should be predicted. In some embodiments, one or more threshold probability parameters may be set, so that, for example, the model may be directed to provide predicted sets of assets only if the probability that those assets will be requested is estimated to exceed a specified threshold. If a threshold of 80% probability is set, for example, and the model is only able to predict that asset set X is likely to be requested with a 60% probability, the model may not provide the prediction, so that the overhead of a preload that may not turn out to be useful is avoided. Cost-related parameters may also be used in some embodiments, e.g., indicating maximum thresholds for estimated preloading-related resource usage (such as the amount of network bandwidth likely to be used, or the likely CPU overhead for performing the preload, at either the source of the preloaded data or the destination) to determine whether to perform a given preload. In some implementations, parameters related to cache eviction policies may also be implemented—e.g., the minimum probability of re-access required for an asset to be retained in a cache may be specified as a parameter.
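The interplay of these parameters can be sketched as follows. The parameter names, the mapping-table layout, and the `predict` function are hypothetical; the sketch only shows how the history depth, output size, and probability threshold described above might gate a model's output.

```javascript
// Hypothetical parameter set (names are illustrative, not from any real API).
const params = {
  inputHistoryDepth: 2,    // how many trailing requests form the model input
  outputPredictionSize: 2, // at most this many future requests are returned
  minProbability: 0.8,     // suppress predictions below this threshold
};

function predict(requestHistory, mappingTable, p) {
  // Use only the last `inputHistoryDepth` requests as the lookup key.
  const key = requestHistory.slice(-p.inputHistoryDepth).join(">");
  const entry = mappingTable[key];
  if (!entry || entry.probability < p.minProbability) {
    return []; // below threshold: skip the preload to avoid wasted overhead
  }
  return entry.assets.slice(0, p.outputPredictionSize);
}

const table = {
  "page1>page2": { probability: 0.9, assets: ["page3", "img3.jpg", "style3.css"] },
  "page4>page5": { probability: 0.6, assets: ["page6"] },
};
predict(["page0", "page1", "page2"], table, params); // → ["page3", "img3.jpg"]
predict(["page4", "page5"], table, params);          // → [] (0.6 < 0.8 threshold)
```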
In some embodiments, the model may be instructed, via one or more customization parameter settings, whether predictions are to be customized for individual clients or groups of clients, or whether predictions should be developed for aggregated or “average” users. Further details regarding some aspects of customized predictions are provided below in conjunction with the descriptions of
The interface manager 310 may implement one or more programmatic interfaces in some embodiments, such as application programming interfaces (APIs) through which prediction requests may be sent, and through which predicted asset sets 395 may be indicated. The execution engine may be responsible in some embodiments for a number of functions, such as extracting predictions from the dataset, implementing parameter changes (either in response to administrative requests, or as a result of self-tuning decisions based on various metrics of resource consumption and/or prediction accuracy), initiating the archival or deletion of mappings, maintaining a desired level of consistency between the datasets of different model instances in a distributed model implementation, and so on. The execution engine 315 may gather metrics regarding how much CPU, memory and/or storage space is being used by the various model components, and use those metrics to make auto-tuning decisions (or provide reports on the metrics to administrators) in various embodiments. A subcomponent of the execution engine may also be responsible for determining how successful the predictions are (e.g., by analysis of actual request sequences) and modifying/removing the prediction mappings in dataset 320 accordingly in some embodiments.
At least in some implementations, the prediction requests 305 may include some metadata or properties of the client on whose behalf the prediction is being requested—e.g., what kind of computing device the client is using, which browser the client is using, and so on. In some such implementations, the predicted asset set 395 may include versions of assets that are selected from a set of alternative versions based on the client properties—e.g., if the client is using a phone rather than a personal computer, a smaller image file with lower resolution may be provided.
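Selecting a device-appropriate version might be as simple as the following sketch; the version table, file names, and device categories are invented for illustration.

```javascript
// Illustrative table of alternative versions for a single logical asset
// (names and resolutions are assumptions, not from the source).
const versions = {
  "hero.jpg": {
    phone: "hero-320.jpg",
    tablet: "hero-768.jpg",
    desktop: "hero-1920.jpg",
  },
};

function selectVersion(asset, clientDevice) {
  const alternatives = versions[asset];
  // Fall back to the generic asset when no alternative versions exist.
  if (!alternatives) return asset;
  return alternatives[clientDevice] ?? asset;
}

selectVersion("hero.jpg", "phone");  // → "hero-320.jpg"
selectVersion("style.css", "phone"); // → "style.css" (no alternatives defined)
```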
In some embodiments in which the aggregated analysis approach shown in
It is noted that in at least some implementations, there may be a delay between successive requests of a request sequence from a given client 148. For example, a client 148A may request a web page P1, and spend some time reading the contents of P1 before clicking on a link to page P2. The request logs captured at the corresponding web site or edge server may (especially for busy sites) have hundreds or thousands of other entries between two successive request entries from the same client. Thus, a non-trivial amount of computing power and/or storage may be needed to extract or construct accurate request streams for various clients in some embodiments.
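The reconstruction of per-client request streams from such an interleaved log can be sketched as below; the log record shape (`clientId`, `url`, `timestamp` fields) is an assumption for illustration.

```javascript
// Sketch: rebuilding per-client request sequences from an interleaved
// edge-server request log (record layout is hypothetical).
function extractRequestStreams(logEntries) {
  const streams = new Map();
  for (const { clientId, url, timestamp } of logEntries) {
    if (!streams.has(clientId)) streams.set(clientId, []);
    streams.get(clientId).push({ url, timestamp });
  }
  // Order each client's stream by time; on a busy site, hundreds of other
  // clients' entries may fall between two successive requests from one client.
  for (const seq of streams.values()) seq.sort((a, b) => a.timestamp - b.timestamp);
  return streams;
}

const streams = extractRequestStreams([
  { clientId: "c1", url: "/p1", timestamp: 100 },
  { clientId: "c2", url: "/x", timestamp: 105 },
  { clientId: "c1", url: "/p2", timestamp: 900 }, // c1 read P1 for a while
]);
// streams.get("c1") → requests for /p1 then /p2, in order
```

At production scale this grouping would typically be done incrementally or in a batch pipeline rather than in memory, which is the computing and storage cost noted above.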
Reconsidering the example discussed above with respect to
In some embodiments, a hybrid approach combining aspects of the approaches shown in
Methods of Predicting and Preloading Network Assets
In some embodiments, guidance or instructions regarding the use of a preload indicator asset may be provided to content generators (e.g., an application server that forms a backend tier for a web server, or origin servers from which content is cached at edge servers of a content delivery network) or to client-side preloader instances or components, as shown in element 704. The guidance may indicate, for example, that in order to trigger the preload functionality as a result of a request for a given web page WP, that web page WP should itself include a request for a specified indicator asset such as a specified script file. An example of the use of such a script file as a preload indicator is provided in
The preloader may monitor incoming client asset requests, as indicated in element 707. This monitoring may serve at least two purposes in some embodiments. First, incoming request sequences may serve as inputs to the model, so that predicted assets (if any predictions are provided by the model) can be preloaded. Second, the incoming request sequences can be used to validate, enhance and/or update the model's predictions. In at least some implementations, it may be possible to determine some properties of the client device (e.g., personal computer, tablet or phone) being used for submitting the client's requests, and/or some properties of the client's browser (e.g., the browser vendor and version, which may in turn indicate browser cache size limits and the like) from the client's requests and/or the network connection being used (element 710). If such client-side properties can be ascertained, they may be used to select specific versions of assets to be preloaded, as described below in further detail with respect to element 719.
If, based on the characteristics of an incoming client request sequence (where the sequence size may be determined by a tunable parameter), a determination is made that assets are to be preloaded for that client (as determined in element 713), the model may be consulted to identify specifically which asset or assets should be preloaded (element 716). If a decision is made that the current client request sequence being considered should not lead to assets being preloaded (as also determined in element 713), the preloader may resume monitoring further client requests (element 707).
If a preload operation is to be performed, in the depicted embodiment the preloader may check whether any assets need to be evicted from or replaced in the destination cache(s), e.g., to accommodate the assets to be preloaded. If evictions are required, the preloader may optionally use the model to identify which assets to evict from the caches, and mark the assets as no longer needed or valid (element 714). The assets to be delivered to the appropriate destination cache or caches for the client may be obtained (element 719), either from a local cache accessible to the preloader, or from some other content source such as an application server, database server, or an origin server in the CDN scenario. In at least some implementations, the predicted set of assets may include content generated at a different website or web sites than the website(s) of the assets whose requests led to the prediction. For example, the model may be able to predict that after client C visits page P1 on website W1, C is very likely to access page P2 of website W2. If properties of the client device or client browser were identified, and device-specific or browser-specific versions of the content assets are available, the appropriate versions of the assets may be selected for transmission to the destination cache(s). For example, a summary version of an article may be preloaded instead of a full version. In some embodiments, multiple caches may be loaded as a result of a single prediction—e.g., both an edge server content cache and a client browser cache may be selected as destinations. In some implementations, the preloader may obtain, from the model, a list L1 of the assets that the model predicts will likely be requested soon, but the preloader may decide to initiate delivery of a list L2 of assets that does not exactly match the model's list L1. For example, the model's list L1 may include three large image files f1, f2 and f3 in one scenario.
However, the preloader may determine, based on its monitoring of earlier client requests and earlier preloads, that the client is very likely to already have f1 and f2 in its browser cache, so the preloader may initiate the delivery of f3 alone, instead of all three files. In some embodiments, the preloader may decide to deliver only a subset (or none) of the list of assets provided by the model based on other considerations, such as the current estimated resource utilization at the delivery server or the network connection to the client, or even the resource utilization levels at the client if client-side resource usage information is available. Thus, at least in some embodiments, it may be possible for the preloader to make the final determination as to which assets are to be transmitted to the client, which may differ from the recommendations of the model.
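The trimming of the model's list L1 down to a delivery list L2 amounts to a simple filter; in this sketch the function name and the "likely cached" set are hypothetical stand-ins for the preloader's bookkeeping of earlier preloads.

```javascript
// Illustrative sketch: the preloader derives delivery list L2 from the
// model's predicted list L1, skipping assets the client probably has cached.
function planDelivery(modelList, likelyCached) {
  return modelList.filter((asset) => !likelyCached.has(asset));
}

const L1 = ["f1.jpg", "f2.jpg", "f3.jpg"]; // model's prediction
// Earlier preloads suggest f1 and f2 are already in the client's browser cache.
const L2 = planDelivery(L1, new Set(["f1.jpg", "f2.jpg"]));
// L2 → ["f3.jpg"]: only the file the client likely lacks is transmitted
```

A fuller implementation might also drop assets from L2 for resource-utilization reasons, as described above, before initiating delivery.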
After the actual set of assets to be preloaded has been determined, the delivery of the assets to the selected destination(s) may then be initiated (element 722) using any of a number of possible techniques in different embodiments. For example, in the case where the destination is a client browser cache, the assets may be delivered as a result of execution by the client browser of a dynamically generated script, as a result of a request for a dynamically embedded HTML element or iframe, or as a result of a server-side push operation. In some implementations, if a plurality of assets is to be preloaded, the sequence in which the assets are delivered, and the time between initiation of delivery of each of the assets, may be determined by the preloader. For example, it may be possible to predict that a client C1 is very likely to request asset a1, and then, after a few seconds or minutes, asset a2. In such a scenario, the preloader may initiate the transmission of a1, and delay the transmission of a2 relative to a1 based on the expectation of when the assets are likely to be requested by the client. After delivery of the assets is initiated, the preloader may resume monitoring subsequent requests from clients (element 707 onwards) and repeat the process of determining whether additional preload operations are to be performed, and performing the additional preload operations as needed.
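The staggered-delivery idea can be sketched as follows. The scheduling interface and the per-asset expected delays are assumptions; a real preloader would derive the delays from the model's timing estimates.

```javascript
// Sketch: initiate each preload transmission after the delay at which the
// client is expected to need the asset (names and delays are illustrative).
function scheduleDeliveries(predictions, send, scheduler = setTimeout) {
  for (const { asset, expectedDelayMs } of predictions) {
    // e.g., delay a2's transmission relative to a1's, per the expectation
    // of when each asset is likely to be requested.
    scheduler(() => send(asset), expectedDelayMs);
  }
}

// Example with a synchronous stand-in for setTimeout that records delays:
const sent = [];
const delays = [];
scheduleDeliveries(
  [
    { asset: "a1", expectedDelayMs: 0 },    // needed immediately
    { asset: "a2", expectedDelayMs: 5000 }, // likely needed a few seconds later
  ],
  (a) => sent.push(a),
  (fn, ms) => { delays.push(ms); fn(); }
);
// sent → ["a1", "a2"], delays → [0, 5000]
```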
As indicated with respect to element 714 above, in some embodiments the model or models may optionally be used for cache eviction-related decisions, such as when a particular asset should be removed from the cache, or which specific assets are least likely to be accessed again and therefore are good candidates for removal from the cache. Such eviction operations may be performed independently of preload operations in some embodiments—e.g., an asset may be evicted or marked for replacement based on some schedule, independently of whether such an eviction is required for an impending delivery of a predicted asset. In some implementations a model may be configured to generate notifications when it is able to recommend asset evictions from one or more caches, and send the notifications to the preloader, a cache manager, or some other entity, without having received a specific request for such recommendations.
In at least some embodiments, the preloader may obtain feedback or measurements from a number of sources to help adjust or tune its operations. For example, in one embodiment, a special browser plugin (or a special browser version) may be implemented that provides asset load time measurements (obtained at the browser) to the preloader. Additional metrics regarding the utilization of various resources at the delivery servers from which assets are transmitted to the clients, and the network paths over which the assets are transmitted, may also be collected in some embodiments. Such measurements may be used to determine the effectiveness of the preload operations—for example, to determine whether the benefits of preloading justify the resources consumed for preloading.
<script type="text/javascript" src="magic.js"></script>
It is noted that the name “magic.js” is simply an example, and that any desired script name (or scripting language) may be used in various embodiments. When the included indicator is evaluated by a client browser, the browser submits a request for the specified script file such as “magic.js” to the delivery server (e.g., web server). When the request for the script is received at the delivery server (element 804), this may serve as a trigger to the preloader that the model is to be consulted to determine if a predicted set of assets should be preloaded for the client in the depicted embodiment. The list of assets to be preloaded may then be determined using the model (element 807). For example, the assets to be preloaded may include an image file named “heavyimagefile.jpg”. The contents of “magic.js” may then be dynamically generated (element 810) such that the execution of the script at the browser results in the browser requesting (but not immediately displaying) the predicted assets. In the “heavyimagefile.jpg” example, the preloader may generate a custom “magic.js” file with contents similar to the following, whose execution by the browser would result in the delivery of heavyimagefile.jpg to the browser cache:
<script language="JavaScript">
function preloader()
{
  var heavyImage = new Image();
  heavyImage.src = "heavyimagefile.jpg";
}
preloader();
</script>
When the browser runs this script, the predicted assets (heavyimagefile.jpg in this case) are requested by, and delivered to, the browser (element 813). The asset may be stored in the browser cache until it is explicitly requested for display (in some subsequent client request, if the model's prediction turns out to be accurate).
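On the server side, dynamically generating such a script for an arbitrary predicted asset list might look like the sketch below. The function name and template are assumptions; the sketch merely generalizes the hand-written example above to any list of predicted assets.

```javascript
// Hypothetical sketch: the delivery server generating the contents of the
// indicator script for a model-predicted asset list.
function generatePreloadScript(predictedAssets) {
  const loads = predictedAssets
    .map((url) => `(new Image()).src = ${JSON.stringify(url)};`)
    .join("\n  ");
  // Executing this in the browser requests (but does not display) each
  // predicted asset, warming the browser cache.
  return `function preloader() {\n  ${loads}\n}\npreloader();`;
}

const script = generatePreloadScript(["heavyimagefile.jpg"]);
// script contains: (new Image()).src = "heavyimagefile.jpg";
```

Non-image assets would need a different loading idiom (e.g., fetch requests or injected link elements), but the trigger-and-generate flow is the same.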
Use Cases
The techniques described above, of generating predictive models based on received client requests for network content, and then using the models to proactively deliver content to clients, may be used in any environment where it is possible to gather input request sequences from clients, and where sufficient compute and storage resources are available to analyze the request sequences to generate the kinds of predictions described. The techniques may be beneficial in a large number of scenarios, especially where content-rich website pages are set up such that many page requests typically result in several different static and dynamic assets being delivered.
In environments where it is possible to identify the types of devices being used by clients (e.g., smart phones versus tablets versus desktops/laptops), or to ascertain the limitations of the browsers being used, the ability to identify and preload device-appropriate versions (or browser-specific versions) of content assets may prove highly effective in improving overall customer satisfaction. The eviction-related features of the predictive models may be of great benefit in scenarios where, for example, the utilization of resources available for caches at various application layers (such as edge servers or web servers) is typically high.
Illustrative Computer System
In at least some embodiments, a server that implements a portion or all of one or more of the technologies described herein, including the techniques to implement the functionality of the server-side and/or client-side preloaders and the predictive models used by the preloaders, may include a general-purpose computer system that includes or is configured to access one or more computer-accessible media.
In various embodiments, computing device 3000 may be a uniprocessor system including one processor 3010, or a multiprocessor system including several processors 3010 (e.g., two, four, eight, or another suitable number). Processors 3010 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 3010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 3010 may commonly, but not necessarily, implement the same ISA.
System memory 3020 may be configured to store instructions and data accessible by processor(s) 3010. In various embodiments, system memory 3020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory 3020 as code 3025 and data 3026.
In one embodiment, I/O interface 3030 may be configured to coordinate I/O traffic between processor 3010, system memory 3020, and any peripheral devices in the device, including network interface 3040 or other peripheral interfaces. In some embodiments, I/O interface 3030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 3020) into a format suitable for use by another component (e.g., processor 3010). In some embodiments, I/O interface 3030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 3030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 3030, such as an interface to system memory 3020, may be incorporated directly into processor 3010.
Network interface 3040 may be configured to allow data to be exchanged between computing device 3000 and other devices 3060 attached to a network or networks 3050, such as other computer systems or devices as illustrated in
In some embodiments, system memory 3020 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above for
Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
The various methods as illustrated in the Figures and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
Number | Name | Date | Kind |
---|---|---|---|
5802292 | Mogul | Sep 1998 | A |
5931907 | Davies et al. | Aug 1999 | A |
5933811 | Angles et al. | Aug 1999 | A |
6009410 | Lemole et al. | Dec 1999 | A |
6134244 | Van Renesse et al. | Oct 2000 | A |
6185558 | Bowman et al. | Feb 2001 | B1 |
6233571 | Egger et al. | May 2001 | B1 |
6233575 | Agrawal et al. | May 2001 | B1 |
6260061 | Krishnan | Jul 2001 | B1 |
6266649 | Linden et al. | Jul 2001 | B1 |
6282534 | Vora | Aug 2001 | B1 |
6338066 | Martin et al. | Jan 2002 | B1 |
6361326 | Fontana et al. | Mar 2002 | B1 |
6385641 | Jiang | May 2002 | B1 |
6411967 | Van Renesse | Jun 2002 | B1 |
6421675 | Ryan et al. | Jul 2002 | B1 |
6438579 | Hosken | Aug 2002 | B1 |
6460036 | Herz | Oct 2002 | B1 |
6493702 | Adar et al. | Dec 2002 | B1 |
6493703 | Knight et al. | Dec 2002 | B1 |
6529953 | Van Renesse | Mar 2003 | B1 |
6542964 | Scharber | Apr 2003 | B1 |
6549896 | Candan et al. | Apr 2003 | B1 |
6564210 | Korda et al. | May 2003 | B1 |
6584504 | Choe | Jun 2003 | B1 |
6604103 | Wolfe | Aug 2003 | B1 |
6721744 | Naimark et al. | Apr 2004 | B1 |
6724770 | Van Renesse | Apr 2004 | B1 |
6738678 | Bharat et al. | May 2004 | B1 |
6742033 | Smith | May 2004 | B1 |
6757682 | Naimark et al. | Jun 2004 | B1 |
6842737 | Stiles et al. | Jan 2005 | B1 |
6850577 | Li | Feb 2005 | B2 |
6871202 | Broder | Mar 2005 | B2 |
6912505 | Linden et al. | Jun 2005 | B2 |
6920505 | Hals et al. | Jul 2005 | B2 |
6981040 | Konig | Dec 2005 | B1 |
6993591 | Klemm | Jan 2006 | B1 |
7010762 | O'Neil | Mar 2006 | B2 |
7039677 | Fitzpatrick et al. | May 2006 | B2 |
7181447 | Curtis et al. | Feb 2007 | B2 |
7216290 | Goldstein et al. | May 2007 | B2 |
7278092 | Krzanowski | Oct 2007 | B2 |
7296051 | Lasriel | Nov 2007 | B1 |
7333431 | Wen et al. | Feb 2008 | B2 |
7360166 | Krzanowski | Apr 2008 | B1 |
7392262 | Alspector et al. | Jun 2008 | B1 |
7440976 | Hart et al. | Oct 2008 | B2 |
7467349 | Bryar et al. | Dec 2008 | B1 |
7519990 | Xie | Apr 2009 | B1 |
7552365 | Marsh et al. | Jun 2009 | B1 |
7565425 | Van Vleet et al. | Jul 2009 | B2 |
7590562 | Stoppelman | Sep 2009 | B2 |
7594189 | Walker et al. | Sep 2009 | B1 |
7649838 | Fishteyn et al. | Jan 2010 | B2 |
7660815 | Scofield et al. | Feb 2010 | B1 |
7685192 | Scofield et al. | Mar 2010 | B1 |
7716425 | Uysal | May 2010 | B1 |
7774335 | Scofield et al. | Aug 2010 | B1 |
7797421 | Scofield et al. | Sep 2010 | B1 |
7831582 | Scofield et al. | Nov 2010 | B1 |
7860895 | Scofield et al. | Dec 2010 | B1 |
7966395 | Pope et al. | Jun 2011 | B1 |
8131665 | Wolfe | Mar 2012 | B1 |
8136089 | Snodgrass | Mar 2012 | B2 |
8140646 | Mickens | Mar 2012 | B2 |
8225195 | Bryar et al. | Jul 2012 | B1 |
8229864 | Lin | Jul 2012 | B1 |
8521664 | Lin | Aug 2013 | B1 |
8583763 | Kim | Nov 2013 | B1 |
8626791 | Lin | Jan 2014 | B1 |
8645494 | Altman | Feb 2014 | B1 |
8867807 | Fram | Oct 2014 | B1 |
8984048 | Maniscalco | Mar 2015 | B1 |
9037638 | Lepeska | May 2015 | B1 |
9106607 | Lepeska | Aug 2015 | B1 |
9436763 | Gianos | Sep 2016 | B1 |
20010037401 | Soumiya et al. | Nov 2001 | A1 |
20010053129 | Arsikere et al. | Dec 2001 | A1 |
20020055872 | Labrie et al. | May 2002 | A1 |
20020065933 | Kobayashi | May 2002 | A1 |
20020078230 | Hals et al. | Jun 2002 | A1 |
20020083067 | Tamayo et al. | Jun 2002 | A1 |
20020124075 | Venkatesan | Sep 2002 | A1 |
20020147788 | Nguyen | Oct 2002 | A1 |
20020174101 | Fernley et al. | Nov 2002 | A1 |
20020178259 | Doyle et al. | Nov 2002 | A1 |
20020178381 | Lee et al. | Nov 2002 | A1 |
20030028890 | Swart et al. | Feb 2003 | A1 |
20030040850 | Najmi et al. | Feb 2003 | A1 |
20030074409 | Bentley | Apr 2003 | A1 |
20030088580 | Desai | May 2003 | A1 |
20030115281 | McHenry et al. | Jun 2003 | A1 |
20030115289 | Chinn et al. | Jun 2003 | A1 |
20030121047 | Watson et al. | Jun 2003 | A1 |
20030187984 | Banavar | Oct 2003 | A1 |
20030193893 | Wen et al. | Oct 2003 | A1 |
20030212760 | Chen et al. | Nov 2003 | A1 |
20040031058 | Reisman | Feb 2004 | A1 |
20040073533 | Mynarski et al. | Apr 2004 | A1 |
20040093414 | Orton | May 2004 | A1 |
20040098486 | Gu | May 2004 | A1 |
20040111508 | Dias et al. | Jun 2004 | A1 |
20040193706 | Willoughby et al. | Sep 2004 | A1 |
20040236736 | Whitman et al. | Nov 2004 | A1 |
20040255027 | Vass et al. | Dec 2004 | A1 |
20050013244 | Parlos | Jan 2005 | A1 |
20050015626 | Chasin | Jan 2005 | A1 |
20050033803 | Vleet et al. | Feb 2005 | A1 |
20050044101 | Prasad et al. | Feb 2005 | A1 |
20050071221 | Selby | Mar 2005 | A1 |
20050071328 | Lawrence | Mar 2005 | A1 |
20050131992 | Goldstein et al. | Jun 2005 | A1 |
20050138143 | Thompson | Jun 2005 | A1 |
20050154701 | Parunak et al. | Jul 2005 | A1 |
20050182755 | Tran | Aug 2005 | A1 |
20050182849 | Chandrayana et al. | Aug 2005 | A1 |
20050210008 | Tran et al. | Sep 2005 | A1 |
20050234893 | Hirsch | Oct 2005 | A1 |
20050246651 | Krzanowski | Nov 2005 | A1 |
20050256866 | Lu et al. | Nov 2005 | A1 |
20050267869 | Horvitz | Dec 2005 | A1 |
20050289140 | Ford et al. | Dec 2005 | A1 |
20060004703 | Spivack et al. | Jan 2006 | A1 |
20060026153 | Soogoor | Feb 2006 | A1 |
20060059163 | Frattura et al. | Mar 2006 | A1 |
20060069742 | Segre | Mar 2006 | A1 |
20060080321 | Horn et al. | Apr 2006 | A1 |
20060085447 | D'urso | Apr 2006 | A1 |
20060095331 | O'malley et al. | May 2006 | A1 |
20060101514 | Milener et al. | May 2006 | A1 |
20060123338 | Mccaffrey et al. | Jun 2006 | A1 |
20060129916 | Volk et al. | Jun 2006 | A1 |
20060161520 | Brewer et al. | Jul 2006 | A1 |
20060165009 | Nguyen et al. | Jul 2006 | A1 |
20060176828 | Vasseur et al. | Aug 2006 | A1 |
20060184500 | Najork et al. | Aug 2006 | A1 |
20060190852 | Sotiriou | Aug 2006 | A1 |
20060193332 | Qian et al. | Aug 2006 | A1 |
20060200443 | Kahn et al. | Sep 2006 | A1 |
20060200445 | Chen et al. | Sep 2006 | A1 |
20060206428 | Vidos et al. | Sep 2006 | A1 |
20060206799 | Vidos et al. | Sep 2006 | A1 |
20060206803 | Smith | Sep 2006 | A1 |
20060242145 | Krishnamurthy et al. | Oct 2006 | A1 |
20060248059 | Chi | Nov 2006 | A1 |
20060259462 | Timmons | Nov 2006 | A1 |
20060265508 | Angel et al. | Nov 2006 | A1 |
20060288072 | Knapp et al. | Dec 2006 | A1 |
20060294124 | Cho | Dec 2006 | A1 |
20060294223 | Glasgow | Dec 2006 | A1 |
20070027830 | Simons et al. | Feb 2007 | A1 |
20070033104 | Collins et al. | Feb 2007 | A1 |
20070050387 | Busey | Mar 2007 | A1 |
20070055477 | Chickering | Mar 2007 | A1 |
20070067682 | Fang | Mar 2007 | A1 |
20070088955 | Lee et al. | Apr 2007 | A1 |
20070106751 | Moore | May 2007 | A1 |
20070112639 | Blumenau | May 2007 | A1 |
20070136696 | Matthews | Jun 2007 | A1 |
20070156761 | Smith, III | Jul 2007 | A1 |
20070156845 | Devanneaux | Jul 2007 | A1 |
20070180510 | Long et al. | Aug 2007 | A1 |
20070192485 | Mcmahan et al. | Aug 2007 | A1 |
20070244900 | Hopkins et al. | Oct 2007 | A1 |
20070255844 | Shen | Nov 2007 | A1 |
20080005273 | Agarwalla et al. | Jan 2008 | A1 |
20080040314 | Brave et al. | Feb 2008 | A1 |
20080065718 | Todd et al. | Mar 2008 | A1 |
20080133510 | Timmons | Jun 2008 | A1 |
20080141307 | Whitehead | Jun 2008 | A1 |
20080147971 | Hawkins | Jun 2008 | A1 |
20080201331 | Eriksen | Aug 2008 | A1 |
20080306959 | Spivack et al. | Dec 2008 | A1 |
20090028441 | Milo et al. | Jan 2009 | A1 |
20090063652 | Hwang et al. | Mar 2009 | A1 |
20090172773 | Moore | Jul 2009 | A1 |
20090254971 | Herz et al. | Oct 2009 | A1 |
20100049678 | Huang | Feb 2010 | A1 |
20100115388 | Nguyen | May 2010 | A1 |
20100174775 | Saiki | Jul 2010 | A1 |
20100180082 | Sebastian et al. | Jul 2010 | A1 |
20100281224 | Ho | Nov 2010 | A1 |
20100287191 | Price et al. | Nov 2010 | A1 |
20100332513 | Azar | Dec 2010 | A1 |
20110029641 | Fainberg | Feb 2011 | A1 |
20110029899 | Fainberg | Feb 2011 | A1 |
20110040777 | Stefanov | Feb 2011 | A1 |
20110087842 | Lu et al. | Apr 2011 | A1 |
20110131341 | Yoo | Jun 2011 | A1 |
20110167054 | Bailey | Jul 2011 | A1 |
20110173569 | Howes | Jul 2011 | A1 |
20110196853 | Bigham | Aug 2011 | A1 |
20110246406 | Lahav | Oct 2011 | A1 |
20110296048 | Knox et al. | Dec 2011 | A1 |
20120047445 | Rajagopal | Feb 2012 | A1 |
20120084343 | Mir | Apr 2012 | A1 |
20120096106 | Blumofe et al. | Apr 2012 | A1 |
20120143844 | Wang | Jun 2012 | A1 |
20120209942 | Zehavi | Aug 2012 | A1 |
20120233069 | Bulawa | Sep 2012 | A1 |
20120239598 | Cascaval | Sep 2012 | A1 |
20120246257 | Brown | Sep 2012 | A1 |
20120278476 | Agrawal | Nov 2012 | A1 |
20120284597 | Burkard | Nov 2012 | A1 |
20120323838 | Ulinski | Dec 2012 | A1 |
20130019159 | Civelli | Jan 2013 | A1 |
20130041881 | Wierman | Feb 2013 | A1 |
20130151652 | Brech | Jun 2013 | A1 |
20130226837 | Lymberopoulos | Aug 2013 | A1 |
20140019577 | Lobo | Jan 2014 | A1 |
20140365861 | Lasmarias | Dec 2014 | A1 |
20140373032 | Merry | Dec 2014 | A1 |
20150229733 | Yang | Aug 2015 | A1 |
Number | Date | Country |
---|---|---|
0125947 | Apr 2001 | WO |
Entry |
---|
U.S. Appl. No. 11/479,225, filed Jun. 30, 2006, Christoph L. Scofield, et al. |
U.S. Appl. No. 11/238,070, filed Sep. 28, 2005, Elmore Eugene Pop, et al. |
3bubbles.com, Frequently Asked Questions, http://web.archive.org/web/20060626213746/3bubbles.com/faq.php, 2006, 3 pages. |
U.S. Appl. No. 10/864,288, filed Jun. 9, 2004, Dennis Lee, et al. |
Cho, "Page quality: In search of an unbiased web ranking," SIGMOD, Jun. 14, 2005, pp. 1-13. |
Salton, “Search and retrieval experiments in real-time information retrieval,” Cornell University Technical Report No. 68-8, 1968, 34 pages. |
Amazon.com, "What are statistically improbable phrases?," http://web.archive.org/web/20050416181614/http://www.amazon.com/gp/search-inside/sipshelp.html, 2005, 1 page. |
Salton, et al., “Term weighting approaches in automatic text retrieval,” Information Processing and Management, v.24, No. 5, 1988, 11 pages. |
Rocchio, “Relevance Feedback in Information Retrieval,” in Salton, ed., “The Smart System—experiments in automatic document processing,” pp. 313-323, 1971, 13 pages. |
Forney, "The Viterbi algorithm," Proceedings of the IEEE, v. 61, No. 3, 1973, 11 pages. |
BlogPulse FAQs, www.blogpulse.com/about.html, 2005, 8 pages. |
Brin, et al., “Anatomy of a large-scale hypertextual web search engine,” Proceedings of the 7th International World Wide Web Conference, 1998, 20 pages. |
MyBlogLog FAQs, http://web.archive.org/web/20050307012413/www.mybloglog.com/help/, 2005, 2 pages. |
Net Applications, “Are all web site statistics reports created equal?”, Feb. 2005, 3 pages. |
Net Applications, “How to maximize the ROI from your web site,” Feb. 2005, 4 pages. |
IMNMotion Behavior Monitor, www.whitefrost.com/projects/mousetrax, 2003, 2 pages. |
Touchgraph Amazon Browser V1.01, http://web.archive.org/web/20050104085346/www.touchgraph.com/TGAmazonBrowser.html, 2005, 2 pages. |
Jeanson, et al., “Pheromone trail decay rates on different substrates in the Pharaoh's ant, Monomorium pharaonis,” Physiological Entomology v. 28, 2003, 7 pages. |
Martin, et al., “The privacy practices of web browser extensions,” Privacy Foundation, Dec. 6, 2000, 61 pages. |
Menkov, et al., “AntWorld: A collaborative web search tool,” Proceedings of Distributed Communities on the Web, Third International Workshop, 2000, 10 pages. |
Kantor, et al., “The information question: A dynamic model of user's information needs,” Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 1999, 10 pages. |
Dorigo, et al., “The ant system: Optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 26, No. 1, 1996, 26 pages. |
Mute: how ants find food, http://web.archive.org/web/20041209082357/mute-net.sourceforge.net/howAnts.shtml, 2004, 9 pages. |
Levy, “In the new game of tag, all of us are it,” Newsweek, Apr. 18, 2005, 2 pages. |
Harth, et al., “Collaborative filtering in a distributed environment: an agent-based approach,” Technical report, University of Applied Sciences Wurzburg, Germany, Jun. 2001, 7 pages. |
Shapira, et al., “The effect of extrinsic motivation on user behavior in a collaborative information finding system,” Journal of the American Society of Information Science and Technology, 2001, 27 pages. |
Panait, et al., “A pheromone-based utility model for collaborative foraging,” Proceedings of the 2004 International Conference on Autonomous Agents and Multiagent Systems, 8 pages. |
Theraulaz, et al., “The formation of spatial patterns in social insects: from simple behaviors to complex structures,” Philosophical Transactions of the Royal Society of London A, 2003, 20 pages. |
Andersson, et al., “Admission control of the Apache web server,” Proceedings of Nordic Teletraffic Seminar 2004, 12 pages. |
Andersson, et al., “Modeling and design of admission control mechanisms for web servers using non-linear control theory,” Proceedings of ITCOM 2003, 12 pages. |
Visitorville, “How it works (in a nutshell),” 2005, 3 pages. |
Alexa Web Search, “About the Alexa traffic rankings,” http://web.archive.org/web/20050527223452/pages.alexa.com/prod_serv/traffic_learn_more.html, 2005, 3 pages. |
Alexa Company Info—History, http://web.archive.org/web/20060830003300/www.alexa.com/site/company/history, 2005, 2 pages. |
Alexa Company Info—Technology, http://web.archive.org/web/20060830034439/www.alexa.com/site/company/technology, 2005, 2 pages. |
Alexa Web Information Service, http://web.archive.org/web/20041231034354/http://pages.alexa.com/prod_serv/WebInfoService.html, 2004, 2 pages. |
Vara, “New Search Engines Help Users Find Blogs,” Wall Street Journal Online, Sep. 7, 2005. |
Dowdell, “BlogPulse New & Improved Blog Search Engine,” Marketing Shift blog, http://www.marketingshift.com/2005/7/blogpulse-new-improved-blog-search.cfm, Jul. 20, 2005, 55 pages. |
Technorati Tour, “How Technorati Works,” http://web.archive.org/web/20050702025310/http://www.technorati.com/tour/page2.html, 2005, 1 page. |
Technorati, “About Us,” http://web.archive.org/web/20050703012613/www.technorati.com/about/, 2005, 1 page. |
Fry, About Anemone: http://web.archive.org/web/20041209174809/http://acg.media.mit.edu/people/fry/anemone/about/, 2004, 4 pages. |
Fry, “Organic Information Design,” Master's Thesis, Massachusetts Institute of Technology, http://acg.media.mit.edu/people/fry/thesis/thesis-0522d.pdf, May 2000, 97 pages. |
O'Reilly Radar—About, http://web.archive.org/web/20050421173629/radar.oreilly.com/about/, 2005, 3 pages. |
del.icio.us—About, http://del.icio.us/about/, 2008, 2 pages. |
About Stumbleupon, http://web.archive.org/web/20050107011918/www.stumbleupon.com/about.html, 2004, 2 pages. |
Van Renesse, et al., “Astrolabe: A robust and scalable technology for distributed system monitoring, management, and data mining,” ACM Transactions on Computer Systems, May 2003, 43 pages. |
Eirinaki, et al., "SEWeP: Using site semantics and a taxonomy to enhance the web personalization process," Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 2003, 10 pages. |
Wilson, “Pheromones,” Scientific American v. 208, 1963. |