With a vast and growing population of people going online to shop, read news, search for information, seek entertainment, check weather and traffic, transact business, and so forth, there are myriad opportunities and contexts within which providers can interact with people in meaningful ways. Each time a person visits a website and views a page, there is another opportunity to offer information tailored to that person, such as targeted advertisements, specialty articles, recommendations, and so forth. However, there is limited viewable space on a page, and the amount of space varies depending upon the computing device used to render the page. Getting the right information to the right audience in a timely manner presents an ongoing challenge.
There have been various approaches to selecting what content gets put on a page and where that content is placed. Manual placement is widely used, where site operators decide which items are placed in prescribed locations on the page. Ranking schemes have also been used to order lists of items or rank search results, so that items of more relevance are prioritized over items of less relevance. In the context of online advertising, auction algorithms have also been employed to allow advertisers to compete for locations on the page.
Once selection and placement decisions are made and the content is served, it is difficult to know whether the content presented was optimal for any given user. For instance, suppose a site operator is building a page that contains an article on mountaineering and the page has slots available for advertisements. Common sense might dictate that the most appropriate advertisements would pertain to mountaineering, such as climbing gear or vacation packages to destination mountain resorts. However, if the content appeals more generally to young adults interested in outdoor activities, many of whom are parents of small children, perhaps an advertisement for children's clothing would be more successful in driving sale activity than the vacation advertisement.
Accordingly, there remains a need for improved selection and placement of content that is tailored to individual users.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
This disclosure is directed to automated targeting of content to users. The content may be essentially any type of data or information that can be served electronically, such as news, search results, weather, articles, entertainment, financial data, traffic status, advertisements, recommendations, images, photographs, video, e-commerce items, and so on. The content is provided in manageable chunks or components. When a user visits a site, content components are selected and placed on a page, which is then served to users for viewing. User interaction with the content components on the page is observed to identify which types of users (as characterized by certain attributes) are likely to act on the content components. The users are segmented into groups according to the attributes and this information is fed back to aid in selection and placement of content components. In this manner, more granular targeting of content components may be achieved.
The automated targeting may be implemented in a number of ways. One example implementation is provided with reference to the following figures, as described below in more detail.
Example Environment and System Architecture
The user computing device 104 may be implemented as any number of computing devices, including as a personal computer, a laptop computer, a portable digital assistant (PDA), a cell phone, a set-top box, a game console, and so forth. Each user computing device 104 is equipped with one or more processors and memory to store applications and data. An application, such as a browser, running on the device 104 facilitates access to the site 106 over the network 108.
The content site 106 is hosted on one or more servers 110(1), . . . , 110(N) having processing and storage capabilities. In one implementation, the servers might be arranged in a cluster or as a server farm, although other server architectures may also be used to host the site. The site is capable of handling requests from many users and serving, in response, various pages of content that can be rendered at the user computing device 104 for viewing by the user 102. The content site 106 is representative of any number of sites, such as websites for e-commerce, news and information, search, entertainment, and so forth. Additionally, the site is representative of proprietary sites that receive requests and provide content over proprietary networks other than the Internet and public web.
In the illustrated example, the site 106 serves content as pages that can be rendered by a browser application running on the user computing device 104. A browser user interface (UI) 112 presents a page 114 having multiple slots 116(1), . . . , 116(S) designated to hold various forms of content. There may be any number of slots. In this illustration, there are horizontal slots, such as slot 116(1), and vertical slots, such as slot 116(S). As one example, the horizontal slots may hold articles or search results lists, and the vertical slots may be used for advertisements, images, or links to other relevant information. It is noted that the illustrated arrangement of horizontal and vertical slots is just one example. More generally, content may be arranged in any number of layouts and formats.
Depending upon various factors (e.g., page design, display size, type of content, bandwidth, etc.), there may be instances where not all slots are viewable at one time. For instance, when the page is initially rendered, a subset of the slots might be viewable (as represented by the solid line boxes within the UI 112) while other slots are not viewable (as represented by the dashed line boxes below the UI 112). These non-viewable slots become viewable when the user scrolls down the page or resizes the UI 112. In one implementation, when a page is rendered, all slots on the page are deemed to be exposed to the user, whether viewable or not. Thus, individual content components are considered exposed to the user whether or not the user actually views them.
A targeting manager 120 assists in deciding what content to target at the user 102 or a class of users to which the user 102 belongs. More particularly, the targeting manager 120 determines which content components to serve to the user 102 based on many factors, such as historical viewing patterns, associations, user surveys, and so forth. The targeting manager 120 further determines where to place the content components in the various slots 116(1)-116(S) of the page 114. The targeting manager 120 may not actually build the page, but it can designate or suggest where the content components are to be placed.
Once placed, the targeting manager 120 observes user interaction with the page to monitor how well the placement meets predefined objectives over time. As more individual content components are exposed to more users, the targeting manager 120 attempts to identify types of users (as characterized by certain attributes) that are likely to act on the content components (e.g., by clicking on the component, viewing the component for a prolonged period, interacting with the content, clicking on a promotion and buying the item being promoted, etc.). The targeting manager 120 splits the users into groups according to the attributes and feeds back this information for improved selection and placement of content components for users who exhibit the same attributes as the groups. User interaction continues to be observed, measured, and used to further differentiate users, thereby providing a continuous closed loop approach to automating the targeting of content to users.
The targeting manager 120 may be implemented as part of the site 106, where it is a set of software components that execute on one or more of the servers 110(1)-110(N). Alternatively, the targeting manager may be implemented separately from the site 106, yet interact with the site to direct content to be placed on the page and to monitor the outcome.
It is further noted that the targeting manager 120 may be used to automate placement of essentially any type of content to be served to the user 102. For instance, the targeting manager may be used to target news items to users. Alternatively, the targeting manager may be used to select and place recommendations that are targeted to the users. In another context, the targeting manager may be employed to suggest photographs, video clips, or entertainment options to users.
For purposes of continuing discussion, however, suppose that the targeting manager 120 is configured to assist in selection and placement of campaigns, such as advertising campaigns designed to interest a consumer in purchasing an item or service and merchandising campaigns crafted to market or commercialize brands. As illustrated in
The targeting manager 120 suggests one or more campaigns 122(1)-122(C) for placement in one or more slots 116(1)-116(S) of the page 114 to be rendered on the user computing device 104. The targeting manager maintains, or has access to, information on individual users or groups to which the user belongs. Initially, the targeting manager 120 may or may not know anything about the user. However, over time, user activity at the individual user level is collected and categorized. For instance, the targeting manager 120 may have data pertaining to the user's viewing patterns, purchase history, survey information provided by the user, demographic data, and behavior characteristics associated with such demographics. As more data is collected, attributes may be developed to characterize users. The targeting manager 120 chooses the campaigns based in part on what it knows about the user.
To illustrate this point, suppose that the targeting manager 120 knows that the user is interested in digital photography based on past viewing patterns. Further, the targeting manager 120 knows that the user has purchased children's DVDs in the past. Hence, the targeting manager 120 might suggest selection and placement of campaigns associated with digital photography (e.g., the camera advertisement campaign 122(1)) and child-relevant items (e.g., the teddy bear campaign 122(2)).
Once the campaigns are placed and served, the targeting manager 120 receives user interaction data from the site 106 to observe user behavior and measure performance of the campaigns. For example, the targeting manager 120 might track whether the user clicked on the digital camera campaign 122(1) and what further action was taken following this action. Did the user purchase the camera? Did the user return to the original page? Did the user request different information, such as information on other consumer electronics? Did the user leave the site altogether? In one implementation, the targeting manager 120 is designed to protect user privacy, and hence tracks raw data in association with aggregate groups of users.
The targeting manager 120 constructs a learning data structure 124 that records user interaction with the content components over time. As this user activity is collected, the learning data structure 124 facilitates segmentation of the user population into various groups so that content components may be targeted to various groups in the future. The learning data structure 124 may be implemented in many ways, including as decision trees, neural networks, decision tables, nearest neighbor algorithms, support vector machines, and so forth.
For purposes of continuing discussion, the learning data structure 124 is implemented as decision or segment trees 126(1), . . . , 126(P), which are constructed for associated placements of the campaigns 122 in the slots 116. Each of the segment trees 126(1)-126(P) characterizes user activity for a particular placement of a particular campaign. Thus, when a campaign 122 is placed into a slot 116 and exposed for viewing by the user 102, the targeting manager 120 constructs an associated tree 126. There is at least one tree per placement, although in some implementations, there may be more than one tree per placement. As users interact with the campaign placement, information is collected in the segment tree data structure. Over time, discernable patterns emerge that may be used to characterize viewer interaction with the campaign. For example, suppose a segment tree is built for the teddy bear campaign 122(2) and over time a pattern emerges that suggests users who have purchased baby clothing are more likely to interact with the teddy bear campaign than users who did not purchase baby clothing. Thus, the tree may branch to form two groups of users: those who have purchased baby clothes and those who have not. A more detailed discussion of the construction and use of the segment trees 126(1)-126(P) is provided below with reference to
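To make the structure concrete, a per-placement segment tree of the kind described above might be sketched as follows. This is a minimal illustration under assumed names and representation, not the actual implementation.

```python
# Hypothetical sketch of a per-placement segment tree. Each node either
# represents an undivided segment of users or branches on one boolean
# attribute (e.g., "purchased baby clothing"). Names are illustrative.

class SegmentNode:
    """One node of a segment tree; the root represents the universe of users."""

    def __init__(self, label="universe"):
        self.label = label
        self.split_attribute = None  # attribute chosen for branching, if any
        self.with_attr = None        # subtree: users who exhibit the attribute
        self.without_attr = None     # subtree: users who do not

    def split(self, attribute):
        """Branch this node into two subordinate groups along a key attribute."""
        self.split_attribute = attribute
        self.with_attr = SegmentNode(self.label + "/+" + attribute)
        self.without_attr = SegmentNode(self.label + "/-" + attribute)

    def segment_for(self, user_attributes):
        """Walk the tree to find which segment a given user falls into."""
        if self.split_attribute is None:
            return self.label
        branch = (self.with_attr if self.split_attribute in user_attributes
                  else self.without_attr)
        return branch.segment_for(user_attributes)

# The teddy bear example: split the universe on a baby-clothing purchase.
root = SegmentNode()
root.split("purchased_baby_clothing")
print(root.segment_for({"purchased_baby_clothing"}))  # universe/+purchased_baby_clothing
```

Under this sketch, users who have purchased baby clothing fall into one segment and all remaining users into the other, mirroring the two-way branch described for the teddy bear campaign.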
The targeting manager 120 implements a closed loop mechanism 128 to learn user behavior and leverage that knowledge into making improved campaign selections and placements. When placements are made, the targeting manager 120 monitors user activity. In one implementation, for example, the targeting manager 120 might track whether a user views a placement and purchases the item being advertised in the placement. In another implementation, the targeting manager might observe user activity for an entire session following some interaction with a placement. For instance, say the user 102 clicks on the advertisement placement of the teddy bear campaign 122(2), but ultimately buys a digital camera. The teddy bear campaign receives some credit for eventually driving a purchase of a digital camera, making the value of the teddy bear campaign increase. This activity is recorded and assimilated by the closed loop mechanism 128 to develop a better understanding of the user and other potential users who can be characterized by the same attributes. For instance, if enough people buy a digital camera after clicking through the teddy bear campaign, perhaps there is some association between digital cameras and teddy bears that might be worth assessing and leveraging in the future. Maybe this is attributable indirectly to the concept that parents interested in teddy bears for their young children may also be interested in digital cameras to capture the tender years.
In this way, the automated targeting manager 120 is able to learn automatically over time how to provide more targeted, and hence more relevant, information to users, and to develop a more sophisticated sense of what content to target to individual users. The targeting manager 120 may ascertain preferences of users and associations among various users that are not readily obvious to people who would otherwise manually select content components for various pages. As in the above example, a connection between teddy bears and cameras may not be readily obvious. But this association is discoverable by the targeting manager because it is able to start with a large audience, such as the universe of all users, and let the actual data dictate the possible associations.
Targeting Manager
The targeting manager 120 is implemented as software or computer-executable instructions stored in a memory 204 and executed by one or more processors 202. The targeting manager 120 may optionally include a campaign store 210 to hold the campaigns 122(1)-122(C). The campaigns specify campaign parameters, the content to be presented, and any other metadata. The campaigns may be defined by human managers, or automated processes, or through a combination of human managers and automated processes. It is noted that the campaign store may be maintained independently of the targeting manager, but the manager has access to the campaigns maintained in the store.
The targeting manager 120 further includes a campaign selection, placement, and monitoring (SPM) module 212 that is responsible for the selection and placement of the campaigns 122(1)-122(C) onto pages to be served to a population of users. The SPM module 212 may choose from any number of campaigns in the campaign store 210, with some being more relevant to the user and context than others. Further, the SPM module 212 may be handling requests for a large population of users and managing hundreds, thousands, or more campaigns simultaneously. Although the SPM module 212 is illustrated in this implementation as a single software module for implementing the functions of selection, placement, and monitoring, it is noted that these functions may be performed by any number of modules, or even by separate computing systems.
When a user 102 visits a site, the campaign SPM module 212 accesses any information pertaining to this particular user. Such information may include activity data accumulated in association with particular contexts. Examples of such information include browsing history, purchase history, behavior patterns, survey information, account information, and so forth. This information helps determine whether the user can be characterized in any way to help in selecting targeted campaigns. The campaign SPM module 212 may also ask a value assessor 214 to assign a value to the piece of content relative to being viewed by this particular user. This value may be based on many factors particular to the user or the group to which he/she belongs. For instance, in the example of an e-commerce site, value may be impacted by how frequently the user visits, the number of purchases made, the amount of purchases, the types of purchases, purchase trends, and so on. The value assessor 214 returns a value that a placement is expected to yield for this user. The value may be expressed in terms of dollars or some other measure.
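The text does not specify how the value assessor 214 computes its value, so the following is only an assumed sketch in which expected value is the product of a predicted response probability and an average purchase amount, scaled by a visit-frequency weight; all names and factors here are hypothetical.

```python
# Hypothetical value assessor sketch: the expected dollar yield of a
# placement for a given user. The factors and their combination are
# assumptions; the source only says value may reflect visit frequency,
# purchase counts, purchase amounts, and trends.

def assess_value(click_probability, avg_purchase_amount, visit_frequency_weight=1.0):
    """Return the dollar value a placement is expected to yield for a user."""
    return click_probability * avg_purchase_amount * visit_frequency_weight

# A user with a 5% predicted response and a $120 average order is worth $6.
print(assess_value(0.05, 120.0))  # 6.0
```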
Based on the value and any known attributes of the user, the campaign SPM module 212 selects campaigns from the campaign store 210 for placement in pages to be served to the user 102. As the pages are served and the campaigns exposed to the user, the SPM module 212 monitors user activity in response to exposure to the campaigns. The module 212 may directly track click activity or other metrics, or receive this data from the site 106.
In one possible implementation, aspects of the campaign SPM module 212 can be implemented as described in U.S. patent application Ser. No. 10/393,505, entitled “Adaptive Learning Methods for Selecting Web Page Components for Inclusion in Web Pages”, which was filed Mar. 19, 2003.
The campaign SPM module 212 stores the user activity metrics in a metrics store 216. Additionally, the campaign SPM module 212 forwards logs of user activity to a segment tree constructor 218. The user activity logs contain information pertaining to campaign placements, including such data as which slots the campaigns were placed in, the number of exposures of the campaign, any click actions on the placed campaign once exposed, whether any downstream purchases were made, and the like. Additionally, the user activity logs may include viewing data about what pages the user viewed, how long the user viewed the pages, and so on. This viewing data may be used to help characterize users according to a set of attributes. In the context of an e-commerce environment, for example, long term attributes may be established based on products and services purchased by users (e.g., digital cameras, DVDs, etc.) while short term attributes may be the items being viewed by the user and any trends in viewing behavior.
The segment tree constructor 218 builds the segment trees 126 based on the user activity metrics received in the logs from the SPM module 212. The segment trees are constructed over time as a way to learn and record which user attributes are predictive of user engagement with the campaigns. In the described implementation, the segment tree constructor 218 builds one segment tree for each placement of a campaign that is suggested by the campaign SPM module 212.
Each segment tree 126 has one or more nodes. Each node is representative of an attribute used to characterize the population of users. The tree therefore differentiates among groups of users according to the attributes represented by the nodes. The tree constructor 218 employs the user activity metrics to identify key attributes and makes branching decisions upon discovery of a key attribute that is predictive of user activity.
When the tree constructor 218 decides to segment the tree, the new segment is passed to a value assessor 214 for use in deriving more accurate values. In turn, the values are passed to the campaign SPM module 212 to enable more granular targeting of campaigns to users in the new segment. In this manner, the targeting manager 120 implements a closed loop mechanism in which the SPM module 212 places campaigns for users based on an assessed value supplied by the value assessor 214. User activity is monitored as users become exposed to the campaign and fed to the segment tree constructor 218 to construct segment trees that attempt to learn which user attributes have the most impact on user interaction with the campaign. As key attributes become ascertainable, the segment tree constructor 218 segments the users along the key attributes and feeds these segments to the value assessor 214. In response, the value assessor 214 adjusts the values given to the SPM module 212 for users meeting a specific set of attributes. This has an effect on which campaigns are selected and exposed to the users. User activity is further monitored and fed back to the segment tree constructor. As this closed loop feedback continues, the targeting manager 120 is able to target campaigns to increasingly refined sets of users.
Tree Construction
Associated with the root universe node 302 is a feature count table 304 to track user activity. The feature count table 304 contains a predefined list of features or attributes against which to measure user activity. There may be any number of attributes in the feature count table. The attributes are definable by system operators or campaign managers for a given context. Within our ongoing example of an advertisement placement, the attributes might relate to other products or services that may be of interest to, or purchased by, users in the universe in the past. For instance, in the context of placing an advertisement for an electronic mp3 player, the attributes might include products such as DVDs, digital cameras, game systems, computers, electronic magazines, fictional books, cellular phones, running equipment, and so on. In
When the advertising campaign is exposed, user activity is tracked in the feature count table 304. In one implementation, click actions taken by the user during a session following campaign exposure are tracked. A simple count of the clicks provides a satisfactory proxy of user activity. It is noted, however, that as an alternative to click counts other metrics may be employed, such as viewing duration, click rate or frequency, downstream purchases, and so forth.
To illustrate, suppose a user is exposed to a page on the e-commerce site, and the page contains the mp3 advertisement campaign. The e-commerce site has records of the user's purchase history, which reveals that the user has previously purchased a DVD and a gaming system. Now, if the user clicks on the mp3 campaign, counts are added in the table 304 for the attributes related to DVDs and game systems.
The counts are recorded in the feature count table 304. In the illustrated example, the count table 304 has at least two columns: a first column (i.e., the “w/” column) for users who are characterized by the associated attribute and a second column (i.e., the “w/o” column) for users who are not characterized by the associated attribute. Each count is recorded in a format X/Y, where the value X represents the number of clicks on the campaign and the value Y represents the total number of users in that column who have been exposed to the campaign. It is noted that other table structures and designs may be used to record count information or other metrics employed to value the performance of a placed campaign.
Initially, the counts are 0/0. Now, assume that the campaign is exposed to a first user who has previously purchased a DVD (e.g., attribute 1). The user decides not to click on the 10% discount campaign for the mp3 player. In this case, the Y count is incremented by one (i.e., 0/1). Later, suppose the campaign is exposed to another user who has previously purchased a DVD. This user decides to click on the campaign for an mp3 player. As a result, both the X and Y counts are incremented by one (i.e., 1/2). This continues for a predetermined duration or until a count threshold is reached.
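The walkthrough above can be sketched in a few lines. This is an illustrative model of the X/Y count updates, with a hypothetical dictionary layout standing in for the feature count table 304.

```python
# Sketch of the feature count table updates described above: every
# exposure increments Y, and a click also increments X, in the "w/" or
# "w/o" column depending on whether the user exhibits the attribute.

def record_exposure(table, attribute, has_attribute, clicked):
    """Update one attribute's X/Y counts after a campaign exposure."""
    column = "w/" if has_attribute else "w/o"
    x, y = table[attribute][column]
    table[attribute][column] = (x + (1 if clicked else 0), y + 1)

table = {"purchased_dvd": {"w/": (0, 0), "w/o": (0, 0)}}

# First user previously bought a DVD but does not click: 0/1.
record_exposure(table, "purchased_dvd", has_attribute=True, clicked=False)
# Second user also bought a DVD and clicks: 1/2.
record_exposure(table, "purchased_dvd", has_attribute=True, clicked=True)
print(table["purchased_dvd"]["w/"])  # (1, 2)
```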
In
Over time, patterns emerge in the feature count table 304 which suggest that the population of users may be segmented or split along one or more attributes so that the advertisement campaign may be more precisely targeted to a smaller segment of the population. The segment tree constructor 218 determines whether there is enough data for one or more attributes to be valid. In one implementation, a threshold exposure count is used to indicate whether the campaign has been exposed to a sufficient number of users who exhibit a particular attribute for the data to be considered valid. The threshold exposure count is a definable parameter that is set by the system operator or campaign manager.
When there are one or more valid attributes, the segment tree constructor 218 decides whether to split the users into two segments along the attribute that proves the most effective at driving traffic to the advertisement campaign. In one approach, the segment tree constructor 218 uses conversions (i.e., a click on the campaign following exposure) as a measure of which attribute had the greatest impact. The constructor 218 looks through all the attributes in the feature count table 304 in search of the largest differential between conversions in the two columns. Here, for example, attribute 1 may be found to have the largest separation between conversions (i.e., 16.7% for users characterized by the attribute v. 2.5% for users not characterized by the attribute), and thus a decision is made to split along attribute 1. The segment tree constructor 218 creates two nodes that are subordinate to the root node 302, where one node represents the segment of users to which attribute 1 applies, and the second node represents the remaining users.
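The split decision just described might be sketched as follows, using hypothetical counts chosen to mirror the 16.7% versus 2.5% example; the exposure threshold and table layout are assumptions.

```python
# Sketch of the split decision: compute conversion rates (clicks divided
# by exposures) for each attribute's "w/" and "w/o" columns, skip
# attributes without enough exposures to be valid, and return the
# attribute with the largest conversion differential.

def best_split(table, min_exposures=40):
    """Return the attribute with the largest conversion-rate differential."""
    best_attr, best_diff = None, 0.0
    for attribute, columns in table.items():
        (xw, yw), (xo, yo) = columns["w/"], columns["w/o"]
        if yw < min_exposures or yo < min_exposures:
            continue  # not enough data for this attribute to be valid
        diff = abs(xw / yw - xo / yo)
        if diff > best_diff:
            best_attr, best_diff = attribute, diff
    return best_attr

table = {
    "attribute_1": {"w/": (10, 60), "w/o": (1, 40)},  # 16.7% vs. 2.5%
    "attribute_2": {"w/": (3, 50), "w/o": (2, 45)},   # 6.0% vs. 4.4%
}
print(best_split(table))  # attribute_1
```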
It is noted that conventional decision tree splitting algorithms may also be employed to determine when a tree is ready to be split along a key attribute. Examples of well-known decision tree splitting algorithms include the gini index (GI), the ID3 algorithm, and the C4.5 or C5.0 algorithms. It is further noted that the decision of whether to split is time-sensitive, depending on the content being placed. For advertisements, a placement may only run for a few days to a few weeks. For news or weather items, a placement may run for only a few hours to a few days. For financial information, a placement may run for a few minutes to a few hours.
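As one example of the conventional criteria named above, the gini index scores a candidate split by how much it reduces the impurity of the click/no-click outcome. The following is a generic sketch of that criterion with hypothetical counts, not a description of the system's actual computation.

```python
# Gini impurity of a binary click/no-click outcome, and the impurity
# reduction obtained by splitting a parent population into two segments.
# The counts below are hypothetical.

def gini(clicks, exposures):
    """Gini impurity for a click probability of clicks/exposures."""
    if exposures == 0:
        return 0.0
    p = clicks / exposures
    return 2 * p * (1 - p)

def gini_gain(parent, left, right):
    """Impurity reduction from splitting parent into two (clicks, exposures) groups."""
    (cl, nl), (cr, nr) = left, right
    n = nl + nr
    weighted = (nl / n) * gini(cl, nl) + (nr / n) * gini(cr, nr)
    return gini(*parent) - weighted

# Splitting 11 clicks in 100 exposures into a 10/60 group and a 1/40 group:
print(round(gini_gain((11, 100), (10, 60), (1, 40)), 4))  # 0.0096
```

A positive gain indicates the split produces purer segments than the parent, so the attribute is a candidate for branching.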
Once again, user activity in the form of click counts is recorded in the feature count tables 406 and 408 as the campaign is exposed to the various user segments. Over time, patterns emerge in the feature count tables which tend to suggest that the users within the segments may be further split along one or more additional attributes. In
It is noted that one assumption made throughout this discussion is that user behavior is roughly the same over the observation period. Over prolonged periods, behavior may change. To account for this behavior shift, new segment trees may be grown at different times. Old trees may be kept or discarded. Further, if trees are deemed too stale and no longer represent reasonable segments of the users, the system may simply prune certain nodes from the tree or rebuild an entirely new tree.
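The staleness handling mentioned above might be sketched as a simple age check; the maximum age, the weekly figure, and the tree representation here are all assumptions, since the text does not specify a policy.

```python
# Hypothetical staleness check: a segment tree older than a maximum age
# is discarded and rebuilt from a fresh root node. The age limit is an
# assumed example (weekly, in the spirit of short-lived ad placements).
import time

MAX_TREE_AGE_SECONDS = 7 * 24 * 3600  # e.g., rebuild weekly

def refresh_tree(tree, built_at, now=None):
    """Return the existing tree, or a fresh root if the tree is too stale."""
    now = time.time() if now is None else now
    if now - built_at > MAX_TREE_AGE_SECONDS:
        return {"label": "universe", "children": {}}  # rebuild from scratch
    return tree
```

A variant of the same check could prune only individual stale nodes rather than discard the whole tree, matching the pruning option mentioned above.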
Operation
For discussion purposes, the process is described with reference to the architecture 100 of
At 704, the selected content components are placed for presentation to the users. As illustrated in the network server architecture of
At 706, the content components are exposed to the population of users. This operation may be performed by the same entity or system that performs the other operations, or by an independent entity (as represented visually in
At 708, user activity following exposure to the content components is monitored. There are many ways to implement this monitoring. In one implementation, direct user interaction with each of the content components is recorded, such as by counting clicks on the content components. In other implementations, other metrics may be tracked, such as measuring click-through frequency, measuring click-through trends, tracking page views, measuring duration of page views, measuring session duration, tracking downstream purchases, and other interactions.
At 710, the user activity metrics are employed to differentiate users according to a set of attributes. That is, based on this user activity, operation 710 ascertains one or more key attributes that are predictive of how users will respond to the content components. Identification of such key attributes may be accomplished in many ways. In one approach, attributes that correlate to the highest number of affirmative actions taken by the users on the content component are identified as being key predictors. In another approach, key attributes are uncovered based on statistical analysis of the click activity of the users. Consider the example illustrated in
It is noted that the attributes may be predefined or observable over time. In the example e-commerce scenario above, the attributes are based on a user's purchase history, such as whether a user had purchased a DVD or a gaming system. However, the attributes may be defined in other ways, such as user demographics, user browse history, and so on.
At 712, the users are segmented into multiple groups according to the key attributes. Segmentation creates different, smaller groups of users to whom content components may be targeted more granularly. In one implementation, segmentation is implemented by building a decision tree structure for individual placements of the content component. The decision tree has a root node representing a universe of users being observed and one or more subordinate nodes representing the attributes along which groups of users are segmented. These trees may then be saved in association with the content components and used to better target the content components to users who meet the set of key attributes defined by the tree.
As more user segments are created along various attribute boundaries, these segments are fed back for use in selecting and placing content components. This is represented by the feedback loop to 702. Thus, when a new user arrives at the site, the user can be evaluated to see which attributes best characterize him or her, and based on that, select and place content components having segment trees that suggest the components might appeal to the user.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.
20030126094 | Fisher et al. | Jul 2003 | A1 |
20030135625 | Fontes et al. | Jul 2003 | A1 |
20030139971 | Rescigno et al. | Jul 2003 | A1 |
20030220875 | Lam et al. | Nov 2003 | A1 |
20040111370 | Saylors et al. | Jun 2004 | A1 |
20040128211 | Tsai | Jul 2004 | A1 |
20040143547 | Mersky | Jul 2004 | A1 |
20040198308 | Hurst et al. | Oct 2004 | A1 |
20040225606 | Nguyen et al. | Nov 2004 | A1 |
20040267672 | Gray et al. | Dec 2004 | A1 |
20050027639 | Wong | Feb 2005 | A1 |
20050097037 | Tibor | May 2005 | A1 |
20050108153 | Thomas et al. | May 2005 | A1 |
20050125317 | Winkelman, III et al. | Jun 2005 | A1 |
20050149439 | Suisa | Jul 2005 | A1 |
20050154744 | Morinville | Jul 2005 | A1 |
20050166265 | Satomi | Jul 2005 | A1 |
20050167489 | Barton et al. | Aug 2005 | A1 |
20050198534 | Matta et al. | Sep 2005 | A1 |
20050240493 | Johnson et al. | Oct 2005 | A1 |
20050278222 | Nortrup | Dec 2005 | A1 |
20050278263 | Hollander et al. | Dec 2005 | A1 |
20060015458 | Teicher | Jan 2006 | A1 |
20060015463 | Gupta et al. | Jan 2006 | A1 |
20060080238 | Nielsen et al. | Apr 2006 | A1 |
20060136309 | Horn et al. | Jun 2006 | A1 |
20060212392 | Brown | Sep 2006 | A1 |
20060212393 | Lindsay Brown | Sep 2006 | A1 |
20060219776 | Finn | Oct 2006 | A1 |
20060248452 | Lambert et al. | Nov 2006 | A1 |
20060265489 | Moore | Nov 2006 | A1 |
20060277474 | Robarts et al. | Dec 2006 | A1 |
20070005495 | Kim | Jan 2007 | A1 |
20070073630 | Greene et al. | Mar 2007 | A1 |
20070078760 | Conaty et al. | Apr 2007 | A1 |
20070083433 | Lawe | Apr 2007 | A1 |
20070106606 | Pankratz et al. | May 2007 | A1 |
20070150299 | Flory | Jun 2007 | A1 |
20070157110 | Gandhi et al. | Jul 2007 | A1 |
20070179790 | Leitch et al. | Aug 2007 | A1 |
20070192245 | Fisher et al. | Aug 2007 | A1 |
20070198432 | Pitroda et al. | Aug 2007 | A1 |
20070226084 | Cowles | Sep 2007 | A1 |
20070283273 | Woods | Dec 2007 | A1 |
20070288364 | Gendler | Dec 2007 | A1 |
20070288370 | Konja | Dec 2007 | A1 |
20070299736 | Perrochon et al. | Dec 2007 | A1 |
20080015927 | Ramirez | Jan 2008 | A1 |
20080033878 | Krikorian et al. | Feb 2008 | A1 |
20080052226 | Agarwal et al. | Feb 2008 | A1 |
20080052343 | Wood | Feb 2008 | A1 |
20080097933 | Awaida et al. | Apr 2008 | A1 |
20080114709 | Dixon et al. | May 2008 | A1 |
20080134043 | Georgis et al. | Jun 2008 | A1 |
20080140524 | Anand et al. | Jun 2008 | A1 |
20080140564 | Tal et al. | Jun 2008 | A1 |
20080147506 | Ling | Jun 2008 | A1 |
20080168543 | von Krogh | Jul 2008 | A1 |
20080168544 | von Krogh | Jul 2008 | A1 |
20080177663 | Gupta et al. | Jul 2008 | A1 |
20080183574 | Nash et al. | Jul 2008 | A1 |
20080183757 | Dorogusker et al. | Jul 2008 | A1 |
20080189186 | Choi et al. | Aug 2008 | A1 |
20080195506 | Koretz et al. | Aug 2008 | A1 |
20080201643 | Nagaitis et al. | Aug 2008 | A1 |
20080208747 | Papismedov et al. | Aug 2008 | A1 |
20080221987 | Sundaresan et al. | Sep 2008 | A1 |
20080270293 | Fortune et al. | Oct 2008 | A1 |
20080275777 | Protheroe et al. | Nov 2008 | A1 |
20080320147 | DeLima et al. | Dec 2008 | A1 |
20090006995 | Error et al. | Jan 2009 | A1 |
20090024469 | Broder et al. | Jan 2009 | A1 |
20090037294 | Malhotra | Feb 2009 | A1 |
20090132969 | Mayer | May 2009 | A1 |
20090138379 | Scheman | May 2009 | A1 |
20090164442 | Shani et al. | Jun 2009 | A1 |
20090172551 | Kane et al. | Jul 2009 | A1 |
20090248467 | Bulman et al. | Oct 2009 | A1 |
20090259559 | Carroll et al. | Oct 2009 | A1 |
20090259574 | Thomsen et al. | Oct 2009 | A1 |
20090307134 | Gupta et al. | Dec 2009 | A1 |
20100049766 | Sweeney et al. | Feb 2010 | A1 |
20100121734 | Harper et al. | May 2010 | A1 |
20100197380 | Shackleton | Aug 2010 | A1 |
20100293048 | Singolda et al. | Nov 2010 | A1 |
20100299731 | Atkinson | Nov 2010 | A1 |
20100306078 | Hwang | Dec 2010 | A1 |
20110035289 | King et al. | Feb 2011 | A1 |
20110060629 | Yoder et al. | Mar 2011 | A1 |
20110117935 | Cho et al. | May 2011 | A1 |
20130074168 | Hao et al. | Mar 2013 | A1 |
20130136242 | Ross et al. | May 2013 | A1 |
Entry |
---|
“PayPal Security Key”, retrieved on Jun. 19, 2008 at <<https://www.paypal.com/securitykey>>, PayPal (2 pages). |
Quova, retrieved on May 29, 2009 at <<http://www.quova.com/>>, Quova Inc., USA, 5 pgs. |
Kessler, “Passwords—Strengths and Weaknesses”, retrieved at <<http://www.garykessler.net/library/password.html>>, 1996, pp. 1-7. |
Office action for U.S. Appl. No. 12/165,102, mailed on Apr. 1, 2011, Jesensky, James, “Automatic Approval”. |
Non-Final Office Action for U.S. Appl. No. 12/147,876, mailed on May 6, 2011, Isaac Oates, “Providing Information Without Authentication”. |
Non-Final Office Action for U.S. Appl. No. 12/165,102, mailed on Mar. 8, 2012, James Jesensky et al., “Automatic Approval”, 31 pages. |
Non-Final Office Action for U.S. Appl. No. 12/165,081, mailed on Jun. 4, 2012, Amit Agarwal et al., “Conducting Transactions with Dynamic Passwords”, 23 pages. |
Non-Final Office Action for U.S. Appl. No. 12/165,102, mailed on Jul. 3, 2012, Jesensky James et al., “Automatic Approval”, 30 pages. |
Apache HBase, Chapter 8 Architecture, retrieved from <<http://hbase.apache.org/book.html#architecture>>, available as early as Nov. 30, 2011, Apache Software Foundation, 8 pages. |
Chang et al, “Bigtable: A Distributed Storage System for Structured Data,” 7th USENIX Symposium on Operating Systems Design and Implementation, OSDI '06, Nov. 2006, 14 pages. |
Final Office Action for U.S. Appl. No. 12/147,876, mailed on May 6, 2011, Isaac Oates et al., “Providing Information Without Authentication”, 11 pages. |
Fielding et al, “Hypertext Transfer Protocol—HTTP/1.1”, Network Working Group, W3C/MIT, Jun. 1999, http://tools.ietf.org/pdf/rfc2616.pdf, 114 pages. |
Howstuffworks, “What is a packet?”, http://web.archive.org/web/20060708154355/http://computer.howstuffworks.com/question525.htm, last retrieved Sep. 1, 2011, 2 pages. |
Final Office Action for U.S. Appl. No. 12/035,618, mailed on Aug. 2, 2011, Michal Bryc, “Automated Targeting of Content Components”. |
Final Office Action for U.S. Appl. No. 12/165,102, mailed on Sep. 13, 2011, James Jesensky, “Automatic Approval”, 31 pages. |
Wikipedia, HTTP cookie, “http://web.archive.org/web/20080227064831/http://en.wikipedia.org/wiki/HTTP_cookie”, last retrieved Sep. 1, 2011, 18 pages. |
Wikipedia, MSISDN, http://web.archive.org/web/20071029015418/http://en.wikipedia.org/wiki/MSISDN, last retrieved Sep. 1, 2011, 3 pages. |
Office action for U.S. Appl. No. 12/165,102, mailed on May 17, 2013, Jesensky et al, “Automatic Approval”, 42 pages. |
Final Office Action for U.S. Appl. No. 12/165,102, mailed on Nov. 8, 2013, James Jesensky, “Automatic Approval”, 37 pages. |
Office Action for U.S. Appl. No. 12/165,081, mailed on Nov. 20, 2013, Amit Agarwal, “Conducting Transactions with Dynamic Passwords”, 25 pages. |
Office action for U.S. Appl. No. 12/165,081, mailed on Oct. 17, 2012, Agarwal et al., “Conducting Transactions with Dynamic Passwords”, 25 pages. |
Office action for U.S. Appl. No. 12/165,102, mailed on Nov. 9, 2012, Jesensky et al., “Automatic Approval”, 36 pages. |
U.S. Appl. No. 11/771,679, filed Jun. 29, 2007, Maynard-Zhang, et al., “Mapping Attributes to Network Addresses.”, 28 pages. |