Content delivery based on user terminal events

Information

  • Patent Grant
  • Patent Number
    8,898,217
  • Date Filed
    Thursday, May 6, 2010
  • Date Issued
    Tuesday, November 25, 2014
Abstract
Systems and methods are provided for selecting content based on user interactions with content previously presented to a user but failing to generate a conversion. In operation, a content delivery system delivers a content package to a user terminal, where the content package includes content designed to elicit a pre-defined response, such as a conversion. The user terminal then presents the content to a user and generates a journal of events occurring in response to the content package. The journal is then used to determine the proximity of the events in the journal to the pre-defined response. If the degree of proximity to the desired response is high, the user is likely prepared to complete the conversion and therefore the same or similar content can be selected for the user terminal. Otherwise, new content can be delivered to the user terminal.
Description
FIELD

The following relates to content delivery and more specifically relates to systems and methods for content delivery based on events at a user terminal.


BACKGROUND

Computer applications, websites, or other electronic content including offers for products and services generally require a user to explicitly select and/or interact with one or more portions of the content being presented to generate a conversion (e.g., completion of a sale or purchase, submission of information to a content provider, causing delivery of additional information to the user or any other pre-defined response for the content). For example, an advertisement for a product or service can require the user to select the advertisement and navigate to the online store offering the product for sale. At the online store, the user can then enter information to purchase or obtain additional information regarding the product or service.


In many types of electronic content maintained by content providers, the portions of the content offering products and services are generally not static. Rather, such (primary) content providers may offer portions, directly or via an agent, for use by one or more other (secondary) content providers. Thus, these portions can vary over time, depending on the arrangement between the primary and secondary content providers.


Typically, content from secondary content providers, such as advertisements, is presented and priced based on some type of arrangement between the primary and secondary content providers. For example, a secondary content provider may pay up front for a number of impressions (i.e., presentations of its advertisement) during a period of time. In another example, the secondary content provider may only pay for the number of times an impression results in a conversion.


Such models are generally based on the premise that advertisements and similar content are effective for generating interest in a product or service only if a conversion results. Unfortunately, consumer behavior can be unpredictable and accordingly a consumer may walk away prior to a conversion. This can occur for any number of reasons, including reasons unrelated to the advertisement. Thus, the existing metrics for determining the effectiveness of electronic campaigns may not accurately reflect the amount of actual interest in the product or service.


SUMMARY

Accordingly, the present technology provides systems and methods for selecting content, such as advertisements, to present to users based on user interactions that fail to generate a conversion. In operation, a content server delivers a content package to a user terminal, where the content package includes content, such as advertisements, designed to elicit a conversion or any other type of pre-defined user response. Upon receiving the content package, the user terminal presents the content to a user and generates a journal of events occurring at the user terminal during display of the content package. The journal is then used to determine the proximity of the events in the journal to the pre-defined response. If the degree of proximity to the pre-defined response is high, it is more likely than not that the user is prepared to complete the conversion and therefore a same or similar content can be selected for a next content package being delivered to the user terminal. Otherwise, new content can be delivered to the user terminal in the next content package.


The degree of proximity can be determined on the basis of a mapping and/or event weight scheme. Based on selected factors, such as an order and a quantity of the events, the events can be mapped to an event weight. The event weights can then be combined to generate a proximity score that indicates the degree of proximity to the desired response. In some configurations, the mapping and scoring can occur completely at a content delivery system serving the user terminal. In other configurations, the mapping and scoring can occur at the user terminal, which then forwards the score to the content server. In either case, the content delivery system can thereafter use the score to assemble future content packages for the user terminal.


The present technology also allows for managing electronic campaigns for multiple user terminals. Thus, when the same content package is delivered to multiple user terminals, the next content package for these multiple user terminals can be selected based on the proximity of actions at the various user terminals to the desired response. In one configuration, the aggregate proximity of the user terminals to the desired response can be evaluated by combining individual proximity scores. Thus, if the degree of aggregate proximity to the desired response is high, the same or related content is selected. Otherwise, new content is selected for the user terminals. In another configuration, the user terminals can be sorted into different groups or segments based on their individual proximity to the desired response. Thereafter, content packages for each group can be assembled, based on their relative proximity to the desired event. In either configuration, the content delivery system can be further configured to assemble future content packages based on bidding or premium pricing for targeting these multiple terminals.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an example computing device;



FIG. 2 illustrates an example system embodiment;



FIG. 3 is a flowchart illustrating a method embodiment for managing content delivery for a user terminal;



FIG. 4 is a flowchart illustrating a method embodiment for determining proximity scores;



FIG. 5 is a flowchart illustrating a method embodiment for managing electronic campaigns for multiple user terminals;



FIG. 6 is a flowchart illustrating a method embodiment for assembling a next content package for user terminals based on aggregate behavior at the user terminals; and



FIG. 7 is a flowchart illustrating a method embodiment for selecting a next content package for user terminals based on segmentation of the user terminals.





DESCRIPTION

Various embodiments of the disclosed methods and arrangements are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components, configurations, and steps may be used without departing from the spirit and scope of the disclosure.


With reference to FIG. 1, a general-purpose computing device 100 which can be portable or stationary is shown, including a processing unit (CPU) 120 and a system bus 110 that couples various system components including the system memory such as read only memory (ROM) 140 and random access memory (RAM) 150 to the processing unit 120. Other system memory 130 may be available for use as well. It can be appreciated that the system may operate on a computing device with more than one CPU 120 or on a group or cluster of computing devices networked together to provide greater processing capability. The system bus 110 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 140 or the like may provide the basic routine that helps to transfer information between elements within the computing device 100, such as during start-up. The computing device 100 further includes storage devices such as a hard disk drive 160, a magnetic disk drive, an optical disk drive, a tape drive, or the like. The storage device 160 is connected to the system bus 110 by a drive interface. The drives and the associated computer readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a tangible computer-readable medium in connection with the necessary hardware components, such as the CPU, bus, display, and so forth, to carry out the function. The basic components are known to those of skill in the art and appropriate variations are contemplated depending on the type of device, such as whether the device is a small, handheld computing device, a desktop computer, or a large computer server.


Although the exemplary environment described herein employs a hard disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), a cable or wireless signal containing a bit stream and the like, may also be used in the exemplary operating environment.


To enable user interaction with the computing device 100, an input device 190 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. The device output 170 can also be one or more of a number of output mechanisms known to those of skill in the art. For example, video output or audio output devices which can be connected to or can include displays or speakers are common. Additionally, the video output and audio output devices can also include specialized processors for enhanced performance of these specialized functions. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 100. The communications interface 180 generally governs and manages the user input and system output. There is no restriction on the disclosed methods and devices operating on any particular hardware arrangement and therefore the basic features may easily be substituted for improved hardware or firmware arrangements as they are developed.


For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks (including functional blocks labeled as a “processor”). The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software. For example the functions of one or more processors presented in FIG. 1 may be provided by a single shared processor or multiple processors. (Use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software.) Illustrative embodiments may include microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM) for storing software performing the operations discussed below, and random access memory (RAM) for storing results. Very large scale integration (VLSI), field-programmable gate array (FPGA), and application specific integrated circuit (ASIC) hardware embodiments may also be provided.


The logical operations of the various embodiments are implemented as: (1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general-use computer; (2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits.


The present system and method are particularly useful for delivering a first content package to user terminals and assembling a second content package to deliver to the user terminals based on user interactions with the content in the first content package. A system 200 is illustrated in FIG. 2 wherein electronic devices communicate via a network for purposes of exchanging content and other data. In some embodiments, the present system and method are carried out on a local area network such as that illustrated in FIG. 2. However, the present principles are applicable to a wide variety of network configurations that facilitate the intercommunication of electronic devices.


In system 200, a content package is delivered to user terminals 202 1 . . . 202 n (collectively “202”) connected to a network 204 by direct and/or indirect communications with a content delivery system 206. In particular, the content delivery system 206 receives a request for an electronic content, such as a web page, from one of user terminals 202. Thereafter, the content delivery system 206 assembles a content package in response to the request and transmits the assembled content package to the requesting one of user terminals 202. The content in the assembled content package can include text, graphics, audio, video, or any combination thereof. Further, the assembled content packages can include content designed to elicit a pre-defined response from the user and that can vary over time. The content delivery system can include a communications interface 207 to facilitate communications with the user terminals 202 and any other components in system 200.


The content delivery system 206 includes a content management module 208 that facilitates generation of the assembled content package that includes time-varying content, such as an advertisement. Specifically, the content management module can combine content from one or more primary content providers 210 1 . . . 210 n (collectively “210”) and content from one or more secondary content providers 214 1 . . . 214 n (collectively “214”) to generate the assembled content package for the user terminals 202.


Although primary and secondary providers 210, 214 are presented herein as discrete, separate entities, this is for illustrative purposes only. In some cases, the primary and secondary providers 210, 214 can be the same entity. Thus, a single entity may define and provide both the static and the time-varying content.


For example, in the case of a web page being delivered to a requesting one of user terminals 202, the content management module 208 can assemble a content package by requesting the data for the web page from one of the primary content providers 210 maintaining the web page. For the time-varying content on the web page provided by the secondary content providers 214, the content management module 208 can request the appropriate data according to the arrangement between the primary and secondary content providers 210 and 214. For example, the content from the secondary provider 214 can be selected based on a guaranteed number of impressions. Alternatively, the content from the secondary provider 214 can also be selected based on the context of the content provided by the primary content provider 210 in the web page. However, any other arrangements and configurations for selecting content from the secondary provider can also be used.


Although the content management module 208 can be configured to request that the data be sent directly from content providers 210 and 214, a cached arrangement can also be used to improve performance of the content delivery system 206 and improve overall user experience. That is, the content delivery system 206 can include a content database 212 for locally storing/caching content maintained by content providers 210 and 214. The data in the content database 212 can be refreshed or updated on a regular basis to ensure that the content in the database 212 is up to date at the time of a request from a user terminal. However, in some cases, the content management module 208 can be configured to retrieve data directly from content providers 210 and 214 if the metadata associated with the data in content database 212 appears to be outdated or corrupted.


In the various embodiments, the content delivery system 206 can also include a unique user identifier (UUID) database 215 that can be used for managing sessions with the various user terminal devices 202. The UUID database 215 can be used with a variety of session management techniques. For example, the content delivery system 206 can implement an HTTP cookie or other conventional session management methods (e.g., IP address tracking, URL query strings, hidden form fields, window name tracking, authentication methods, and local shared objects) for user terminals 202 connected to content delivery system 206 via a substantially persistent network session. However, other methods can be used as well. For example, in the case of mobile devices or other types of user terminals connecting using multiple or non-persistent network sessions, multiple requests for content from such devices may be assigned to a same entry in the UUID database 215. Such an assignment can be provided by analyzing requesting device attributes in order to determine whether such requests can be attributed to a same device. Such attributes can include device or group-specific attributes.
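
As an illustrative sketch of this attribute-based attribution, the following Python snippet maps requests that report the same device attributes to a single entry; the attribute names, the hash-based fingerprint, and the in-memory uuid_db stand-in for the UUID database 215 are assumptions, not structures defined here.

```python
import hashlib
import uuid

# Hypothetical in-memory stand-in for the UUID database 215.
uuid_db = {}

def attribute_fingerprint(attrs):
    """Derive a stable fingerprint from device or group-specific attributes.

    `attrs` is assumed to be a dict such as
    {"model": "...", "os": "...", "screen": "..."} (illustrative keys only).
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def resolve_uuid(attrs):
    """Assign requests with matching attributes to the same UUID entry."""
    fp = attribute_fingerprint(attrs)
    if fp not in uuid_db:
        uuid_db[fp] = str(uuid.uuid4())
    return uuid_db[fp]

# Two requests from a device reporting the same attributes map to one entry.
a = resolve_uuid({"model": "PhoneX", "os": "1.2", "screen": "320x480"})
b = resolve_uuid({"model": "PhoneX", "os": "1.2", "screen": "320x480"})
assert a == b
```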


As described above, content maintained by the content providers 210 and 214 can be combined according to a predefined arrangement between the two content providers, which can be embodied as a set of rules. In an arrangement where the content delivery system assembles the content package from multiple content providers, these rules can be stored in a rules database 216 in content delivery system 206 and content management module 208 can be configured to assemble the content package for user terminals 202 based on these rules. The rules can specify how to select content from secondary content providers 214 and the primary content providers 210 in response to a request from one of user terminals 202. For example, in the case of a web page maintained by one of primary providers 210 and including variable advertisement portions, the rules database 216 can specify rules for selecting one of the secondary providers 214. The rules can also specify how to select specific content from the selected one of secondary providers 214 to be combined with the content provided by one of primary providers 210.


Once assembled, the assembled content package can be sent to a requesting one of user terminals. However, the content package is not limited to the content from content providers 210 and 214. Rather, the content package can include other data generated at the content delivery system 206. In some embodiments, this other data can include code or instructions for generating and/or managing a journal or log of user interactions at the requesting one of user terminals during presentation of the assembled content. For example, the assembled content package can be delivered along with a server-side generated cookie or a server-side generated daemon or other application that generates the journal and delivers the journal back to the content delivery system 206. In another example, the assembled content package can be delivered with instructions for generating a terminal-side cookie or spawning an instance of a terminal-side daemon or other application for generating the journal and delivering the journal back to the content delivery system 206. In some cases, the code or instructions can be embedded within delivered portions of the content in the content package. In yet other embodiments, the user terminals 202 can be configured to automatically generate the journal upon receipt of a content package from the content delivery system 206.


Although generation of the journal can be triggered at the time of delivery and presentation of the content from the delivered content package, in other embodiments the journal generation can be triggered by other events. For example, in some embodiments the journal generation can be triggered at the time of the request at the user terminal 202 or upon request or delivery of a second content package, such as an advertisement, to the user terminal. In other embodiments, the journal generation can be triggered based on detection of explicit user input (e.g., when the user asks the user terminal to track his or her current location). Thus, by allowing the generation of the journal to begin prior to presentation of the content from the delivered content package, other data associated with the user terminal can be captured and used to subsequently evaluate the proximity scores for the content. For example, load times and other delays can be used to positively or negatively affect a subsequently computed proximity score, as described below.


Thereafter, the events in the journal can be used to generate scores for assembling a next content package to be delivered to the requesting one of user terminals 202 responsive to a next request. This is described below in greater detail with respect to FIGS. 3 and 4.



FIG. 3 is a flowchart illustrating a method 300 for managing content delivery for a user terminal. Method 300 begins at step 302 and continues on to step 304. At step 304, a first content package is delivered to one of user terminals 202 from content delivery system 206. As described above, this first content package delivered to a user terminal includes at least a first content designed to elicit a pre-defined response when presented to the user. For example, the first content can include advertisements, forms, or any other type of content specifically designed to require a specific user action or a set of specific user actions to result in a conversion.


After the first content package is delivered to a user terminal at step 304, method 300 proceeds to step 306. At step 306, the user terminal presents the content from the first content package. Concurrently, the user terminal begins to generate a journal of events occurring in response to the first content package, as described above.


For each of these events, the journal can include timestamp information, such as the date, time, and length of the event. Such events can include, for example, actions caused by a user interface device, such as a keyboard or keypad, a mouse or trackball, a touchpad or touch screen, or any other type of device for permitting a user to directly interact with a user terminal, as described above with respect to FIG. 1. In some cases, the occurrence of particular types of events, consisting of a series of sustained or multiple user actions, can be recorded in the journal. For example, in the case of a touch screen interface, events such as swiping, scrolling, tapping, pinching, and typing can be recorded as events in the journal. Additionally, the events in the journal can also include user terminal generated events, such as notifications for the user, generation of error messages, or any other type of activity not corresponding to a direct user input. Further, the journal can also record periods of inactivity as events in the journal.
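
A minimal sketch of what such a journal might look like is shown below; the JournalEvent and Journal names, the particular field set, and labels such as "near_ad" are illustrative assumptions rather than structures defined by this description.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class JournalEvent:
    # Timestamp information: when the event occurred and how long it lasted.
    timestamp: float
    duration: float
    # Classification hints: the type of event, its source, and its location.
    event_type: str          # e.g. "scroll", "pinch", "tap", "error", "idle"
    source: str              # "user" or "terminal"
    location: Optional[str]  # e.g. region of the display or content portion

@dataclass
class Journal:
    events: List[JournalEvent] = field(default_factory=list)

    def record(self, event_type, source, duration=0.0, location=None):
        """Append one event, stamping it with the current time."""
        self.events.append(JournalEvent(time.time(), duration,
                                        event_type, source, location))

# Example: the user scrolls near the advertisement, then the terminal
# records a period of inactivity.
journal = Journal()
journal.record("scroll", "user", duration=1.5, location="near_ad")
journal.record("idle", "terminal", duration=30.0)
```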


Following step 306, method 300 can determine at step 308 whether or not the journal needs to be used to select a next content for the user terminal. In particular, if a conversion has occurred, the journal may not be needed and method 300 can end and resume previous processing at step 314, including repeating method 300. However, if no conversion has occurred, method 300 proceeds to step 310 to utilize the journal to assemble the next content package.


At step 310, a proximity score for the user terminal is calculated based on the journal. That is, a score is calculated that indicates the proximity of the events in the journal to the pre-defined response for a content in the first content package. Calculation of this score will be described below in greater detail with respect to FIG. 4. Once the proximity score is calculated at step 310, the method 300 proceeds to step 312, where the content delivery system 206 assembles, and subsequently delivers, the second content package to the user terminal based on the proximity score.


The proximity score can be used in several ways to assemble the next content package. In one arrangement, the proximity score can be compared to a threshold value or other single proximity score criterion. Thus, if the proximity score exceeds or meets this single criterion, it is indicative that the events in the journal were close to occurrence of a conversion. Accordingly, the content management module 208 can assemble the next content package to include a second content related to the first content the next time the user terminal requests content from the content delivery system 206. For example, the second content can consist of the first content from the first content package. Alternatively, the content management module 208 can select a different content that is closely related to the first content. For example, such content can include content associated with a same electronic campaign, a same provider, or similar goods and services.


In some embodiments, more than one threshold value or proximity score criteria can be specified. For example, at least first and second threshold values can be provided to indicate different levels of proximity. In such an arrangement, if the proximity score exceeds both values, this can indicate a high degree of proximity. Thus, the same or substantially similar content can be selected, as described above. In contrast, if the proximity score falls below both values, this can indicate a low degree of proximity. Thus, different content or content from a different secondary provider should be selected. In the case that the proximity score falls between the two threshold values, this can indicate some degree of proximity. Therefore, related, but different content can be selected. Other threshold values can be specified to provide additional categorization for the user terminal. Once the next content package is selected at step 312, method 300 resumes previous processing at step 314, including repeating method 300.
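
A compact sketch of the two-threshold comparison described above follows; the threshold values and the returned labels are invented for illustration and are not defined by this description.

```python
def select_next_content(proximity_score, low=0.3, high=0.7):
    """Pick a content strategy from a proximity score and two thresholds.

    The threshold values (0.3, 0.7) and the labels returned are
    illustrative assumptions only.
    """
    if proximity_score >= high:
        return "same_or_substantially_similar_content"
    if proximity_score >= low:
        return "related_but_different_content"
    return "new_or_unrelated_content"

print(select_next_content(0.85))  # same_or_substantially_similar_content
print(select_next_content(0.5))   # related_but_different_content
print(select_next_content(0.1))   # new_or_unrelated_content
```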


Referring now to FIG. 4, there is provided a flowchart illustrating a method 400 for determining proximity scores. Method 400 begins at step 402 and continues to step 404. At step 404, the events in the journal are classified. In the various embodiments, the events in the journal can be classified in a variety of ways. For example, classification can be based on the type of events, a source or origin of the events (i.e., the user, the user terminal, etc.), and a location of the event (e.g., the portion of the user terminal display or of the presented content associated with the event). Thereafter, at step 406, the temporal relationship between the events in the journal can be determined. For example, an order, a relative timing, and/or a number of recurrences of the events can be determined.


Once the events are classified at step 404 and their temporal relationship is established at step 406, each of the events can be associated with an event weight or event score with respect to the first content. Therefore, each event can be mapped to a particular event weight or score based on its classification and relative temporal position in the journal at step 408.
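
One way such a mapping (step 408) could be expressed is a lookup keyed on an event's classification; the table contents, key format, and default weight below are assumptions for illustration only.

```python
# Hypothetical mapping from (event_type, location, source) to a base weight.
EVENT_WEIGHTS = {
    ("zoom",   "near_ad",     "user"):     0.9,
    ("scroll", "near_ad",     "user"):     0.6,
    ("scroll", "far_from_ad", "user"):     0.2,
    ("idle",   None,          "terminal"): 0.1,
}

def base_weight(event_type, location, source, default=0.3):
    """Map a classified event to a base event weight."""
    return EVENT_WEIGHTS.get((event_type, location, source), default)

print(base_weight("zoom", "near_ad", "user"))  # 0.9
print(base_weight("tap", "near_ad", "user"))   # falls back to the default
```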


In some embodiments, the event weights can rely at least partially on the type of the event. For example, in the case of a touch screen interface providing a web page with an advertisement, some types of events are commonly associated with a more focused or careful viewing of the web page. Such events can be user actions resulting in magnification of portions of the web page, slower scrolling of the web page, or any other event that could be interpreted as being indicative of the user reviewing the content of the web page more closely. Accordingly, when such events are detected in the journal, a higher event weight can be applied for these events. In contrast, other events can be classified as being associated with a less careful view of the page. Examples of such events can include a relatively quick scrolling of the web page or any other user action generally associated with a cursory or superficial inspection of the web page. Accordingly, when such events are detected in the journal, a lower event weight can be applied.


In some embodiments, the event weights can also rely at least partially on a location of the event. For example, in the case of a touch screen interface providing a web page with an advertisement, events classified as occurring within or near a portion of the web page including the advertisement can be associated with a higher event weight. In contrast, events classified as occurring far from the position of the advertisement in the web page can be associated with a relatively lower event weight. In another example, certain portions of the user interface that display the advertisement can be associated with different weights. Similarly, certain positions around the advertisement can be associated with different weights.


In some embodiments, the event weights can be based on a combination of type and location of one or more events. Some types of events can consist of a series of actions occurring over several portions of the user interface. For example, a cursor motion or a swiping motion on a touch screen device effectively consists of a motion along a series of points in the user interface in one or more directions. In such a configuration, different weights can be applied based on the aggregate of the individual motions. Thus, an aggregate weight can be generated, for example, by combining the weights of the individual actions. Alternatively, a weight can be generated based on a comparison of the motion to one or more references, each associated with a weight. Thus, a weight can be applied that is associated with the reference motion that is closest.


In some embodiments, the event weights can also rely at least partially on a source of the event. In many cases, events occurring on a user terminal consist of user-initiated events, user terminal initiated events, or combinations thereof. In general, an advertisement or other electronic content seeking a response requires some level of direct user interaction. Therefore, an event weight can be applied accordingly. For example, a higher event weight can be applied for events primarily initiated by users, depending on the level of user interaction. In contrast, user terminal initiated events can be associated with lower event weights depending again on the level of user interaction.


As described above, an event weight can also rely on the temporal relationship between the events in the journal. Accordingly, the event weights resulting from the classification process can be adjusted in response to the temporal relationship of the events. For example, a specific order or sequence of events can be associated with completion of a conversion. Thus, if the events in the journal indicate that a portion of the sequence has been completed, the events associated with this sequence can be provided a higher event weight. Additionally, the event weight can be further adjusted based on the portion completed. That is, the event weight can be proportional to the portion of the sequence that has been completed.


Similarly, the timing of the events can also affect the event weights. That is, even if a sequence of events associated with partially completing a conversion is detected in the journal, the separation in time between the events can be so great that it is more likely than not that the sequence was not associated with a user seeking to complete a conversion. Similarly, even if a sequence of events was detected in the journal, the inclusion of one or more additional events therebetween can affect the event weights. For example, if such intervening events are primarily user-initiated, this can be associated with an immediate lack of interest in the content and thus a lower event weight should be applied. In contrast, if such events are user terminal initiated, this indicates that the user may still be interested, but was interrupted by other, external factors. Thus, the significance of these intervening events is lower and thus a higher event weight can be applied.


Additionally, the recurrence of events (or lack thereof) can affect the event weights. For example, if a web page is presented at the user terminal and the user scrolls up and down repeatedly, this recurrence can be associated with a higher event weight. In contrast, if the web page is presented and the user scrolls down and does not continue on to scroll back and forth, this lack of recurrence can be associated with a lower event weight. In another example, recurring events can be associated with a lower event weight if they are part of a typical user interaction with an electronic content. For example, in the case of a mobile device, repeatedly scrolling and zooming as a web page is examined can be a typical user interaction that does not necessarily correspond to a conversion. Thus, a lower event weight can be applied for such types of recurrences.


Further, weights can be generated via a mathematical function that takes the original weights as inputs and produces a new weight depending on certain conditions, such as the order or sequence of actions, the type of actions, or content or user metadata that provides context in which these actions have been performed.
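
Read this way, the adjustment can be sketched as a function from an original weight and contextual conditions to a new weight; the particular scaling factors and parameter names below are invented for illustration and are not taken from this description.

```python
def adjust_weight(original_weight, *, sequence_fraction=0.0,
                  recurrences=0, interrupted_by_terminal=False):
    """Derive a new weight from an original weight and contextual conditions.

    sequence_fraction: portion of a conversion sequence completed (0..1).
    recurrences: how many times the event recurred in the journal.
    interrupted_by_terminal: whether intervening events were terminal-initiated.
    All scaling factors are illustrative assumptions.
    """
    weight = original_weight
    weight *= 1.0 + sequence_fraction          # reward partial conversion sequences
    weight *= 1.0 + min(recurrences, 3) * 0.1  # mild boost for recurrence
    if interrupted_by_terminal:
        weight *= 1.1                          # terminal interruptions matter less
    return weight

print(adjust_weight(0.5, sequence_fraction=0.6, recurrences=2))  # roughly 0.96
```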


Referring back to FIG. 4, once the event weights for the events are assigned at step 408, the event weights can be aggregated or combined at step 410 to determine a proximity score. Such a computation can occur in a variety of ways. For example, the proximity score can be a statistical measure of the event weights, such as the mean, the median, or the mode of the event weights. However, any other methods for combining or evaluating the event weights or distributions thereof can be used. Once the proximity score for the journal (i.e., the score for the user terminal) is determined at step 410, the method 400 resumes previous processing at step 412. Such processing can include repeating method 400 for other journals or performing and/or completing any other methodologies and processes described herein.
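
Combining the event weights into a proximity score (step 410) might then look like the following sketch, using the mean or median mentioned above; the helper name and sample weights are assumptions.

```python
from statistics import mean, median

def proximity_score(event_weights, measure="mean"):
    """Aggregate per-event weights into a single proximity score."""
    if not event_weights:
        return 0.0
    return mean(event_weights) if measure == "mean" else median(event_weights)

weights = [0.9, 0.6, 0.2, 0.55]
print(proximity_score(weights))            # mean, roughly 0.56
print(proximity_score(weights, "median"))  # median, roughly 0.58
```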


In the various embodiments, the location where proximity scores are calculated can vary. In some embodiments, the proximity scores can be calculated at the content delivery system 206. In other embodiments, the proximity scores can be calculated at the user terminals 202.


In the case of computing the proximity scores at the content delivery system 206, the user terminals 202 can be enabled to transmit the journal to the content delivery system 206. The journal can be delivered to the content delivery system 206 in several ways. For example, the journal can be received as part of a data package comprising a subsequent request to the content delivery system 206. Alternatively, the user terminals 202 can be configured to automatically generate and deliver a data package including the journal to the content delivery system 206 if a next request is being directed to a different content delivery system than the one providing the first content. The precise timing and format for the journal and/or the data package can be specified in the code or instructions associated with the content delivered to the user terminals 202 or can be pre-defined for the user terminals.


Upon receipt of the journals from the user terminals 202, the proximity scores can be computed by content management module 208 based on the rules database 216. In particular, the rules database 216 can be configured to include an events database 218, as shown in FIG. 2, listing the various types of events that can be scored. Further, the rules database 216 can include, separately or in combination with events database 218, a set of content event weights 220 1 . . . 220 i that specify the mapping of the events in events database 218 to event weights. In operation, the content management module 208 can first parse the journal to identify the events therein. Thereafter, scores can be associated with these events according to the rules database 216 and the content management module 208 can generate the proximity score for the journal.


In the case of computing the proximity scores at the user terminals 202, the process is similar to the one described above for the content delivery system 206. Thereafter, a data package, including the journal and/or the proximity score, can be assembled and delivered to the content delivery system. However, in such configurations the user terminals 202 would need to locally store or have remote access to the events database 218 and the associated content event weights 220 1 . . . 220 i. Although such a configuration requires performing the mapping of event weights and computation of proximity scores at the user terminal devices 202, such a configuration can be more desirable from a privacy standpoint. That is, since only proximity scores are transmitted from the user terminals to the content delivery system, little or no information is exchanged about particular events occurring at the user terminals 202. As with delivery of the journals to the content delivery system 206, the precise timing and format for the proximity scores can be specified in the code or instructions associated with the content delivered to the user terminals 202 or can be pre-defined for the user terminals.


As described above, in addition to selection of future content for the user terminals 202, the content delivery system 206 can also be used to manage and evaluate electronic campaigns. This is described below with respect to FIGS. 5, 6, and 7.



FIG. 5 is a flowchart illustrating a method 500 for managing electronic campaigns for multiple user terminals. The method 500 begins at step 502 and continues to step 504. At step 504, a substantially similar or related first content package is delivered by the content delivery system 206 to multiple ones of the user terminals 202. For example, a same advertisement is delivered to all of the user terminals 202. However, the first content delivered to each of the user terminals 202 need not be identical. Rather, as in a typical electronic campaign, several types of advertisements can be generated that are for the same or related goods and services. Thus, each of the user terminals 202 can receive a content package that includes content from a same electronic campaign, but that varies from user terminal to user terminal.


Once the first content package is delivered at step 504, the proximity scores for the user terminals 202 receiving this first content package can be determined at step 506. In particular, these scores can be based on the journals generated by each of the user terminals 202. The proximity scores for each user terminal can be generated as described above with respect to FIG. 4. Further, the proximity scores can also be generated at the content delivery system 206 or the user terminals 202, as also described above. Thereafter, the scores can be used to assemble subsequent content packages for the user terminals at step 508, by performing an evaluation based on content delivery criteria. Exemplary embodiments of such evaluations are described below with respect to FIGS. 6 and 7. Once the next content packages for the user terminals are assembled and subsequently delivered, method 500 can resume previous processing at step 510, including repeating method 500 or any other methods described herein.


As described above, evaluation of proximity scores, for purposes of managing an electronic campaign, can be performed in several ways. One method is to combine the various scores to identify content for future content packages, as shown in FIG. 6. FIG. 6 is a flowchart illustrating a method 600 for assembling a next content package for user terminals based on an aggregate behavior at the user terminals. Method 600 begins at step 602 and continues on to step 604. At step 604, an aggregate of the various proximity scores from the user terminals for the same or similar content is generated by the content management module 208. Such a computation can occur in a variety of ways. For example, a statistical measure of the various proximity scores can be generated, such as the mean, the median, or the mode of the scores. However, any other methods for combining or evaluating the scores or distributions thereof can be used.


Once the aggregate score is obtained at step 604, the aggregate score can be compared at step 606 to at least one proximity criterion, such as a threshold value or other criterion indicating a level of proximity to a conversion. Thereafter, if the aggregate score meets the criterion at step 608, this score can be indicative of a large amount of interest in the goods and services associated with the campaign. Therefore, method 600 can proceed to step 610 where the next content package is assembled to include the same content, substantially similar content, or related content with the expectation that a user will complete a conversion when the content is presented again. In contrast, if the aggregate score fails to meet the criterion at step 608, this can be indicative of a general lack of interest in the goods and services associated with the campaign. In such an instance, method 600 can instead proceed to step 612, where the next content package is assembled to include new or unrelated content or content from a different or revised electronic campaign. The method can thereafter end at step 614 and resume previous processing.
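
A brief sketch of this aggregate evaluation for a campaign follows; the mean is one of the statistical measures mentioned above, while the threshold value and the returned labels are invented for illustration.

```python
from statistics import mean

def next_campaign_content(terminal_scores, threshold=0.6):
    """Decide campaign content from an aggregate of per-terminal scores.

    `terminal_scores` is assumed to be a list of per-terminal proximity
    scores; the threshold and labels are illustrative assumptions.
    """
    aggregate = mean(terminal_scores)
    if aggregate >= threshold:
        return "same_or_related_campaign_content"
    return "new_or_revised_campaign_content"

print(next_campaign_content([0.8, 0.7, 0.65]))  # same_or_related_campaign_content
print(next_campaign_content([0.2, 0.3, 0.1]))   # new_or_revised_campaign_content
```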


Another method to manage the electronic campaign is to use the various proximity scores for segmentation, as shown in FIG. 7. FIG. 7 is a flowchart illustrating a method embodiment for selecting a next content for user terminals based on segmentation of the user terminals 202. Method 700 can begin at step 702 and proceed to step 704. At step 704, the user terminals 202 can be classified into one or more segments based on their individual proximity scores. This classification can be based on one or more threshold values or other proximity criteria. For example, if a single threshold value is specified, two groups or segments can be defined: (1) user terminals with proximity scores less than or equal to the threshold value and (2) user terminals with proximity scores greater than the threshold value. However, any number of threshold values or other criteria can be specified to define the number of groups.


After the segments are defined at step 704, the next content can be assembled for the user terminals at step 706. In particular, a next content package can be assembled for each segment of user terminals based on the range of proximity scores associated with each segment. Segments associated with higher proximity scores correspond to a higher degree of proximity, so the same or substantially similar content can be selected for user terminals in such a segment. In contrast, segments associated with lower proximity scores correspond to a lower degree of proximity, so different content or content from a different secondary provider should be selected. In the case that additional segments with intermediate proximity scores are defined, the content for these segments can be selected based on the relative values of these proximity scores. The method can thereafter end at step 708 and resume previous processing.
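
A sketch of the single-threshold segmentation described above; the threshold value, terminal names, and per-segment content labels are assumptions for illustration.

```python
SEGMENT_CONTENT = {
    # Illustrative per-segment content choices.
    "high_proximity": "same_or_substantially_similar_content",
    "low_proximity": "different_content_or_different_provider",
}

def segment_terminals(scores_by_terminal, threshold=0.5):
    """Split terminals into two segments around a single threshold value."""
    high = {t for t, s in scores_by_terminal.items() if s > threshold}
    low = set(scores_by_terminal) - high
    return {"high_proximity": high, "low_proximity": low}

def assemble_per_segment(segments):
    """Choose content for each segment; labels are illustrative only."""
    return {name: SEGMENT_CONTENT[name] for name in segments}

scores = {"terminal_1": 0.8, "terminal_2": 0.4, "terminal_3": 0.55}
segments = segment_terminals(scores)
print(segments)                    # terminal_1 and terminal_3 in high_proximity
print(assemble_per_segment(segments))
```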


Since the methods described herein are essentially generating leads and identifying potential customers for the goods and services advertised in the content, a bidding or premium pricing process can be used in conjunction with the various methods described above to generate additional revenues for an operator of a content delivery system. Accordingly, referring back to FIG. 2, the content delivery system 206 can further include a bid/pricing engine 222 for facilitating such processes. In such cases, the content management module 208 is configured to select content using the engine 222. That is, rather than automatically assemble a next content package by selecting the next content from one of the secondary providers 214 and thereafter bill the presentation of the next content according to the agreement between the primary and secondary providers, the engine can identify such a presentation as having a higher value. Thus, the engine 222 can be configured to change the pricing for delivery of the content in the next content package to the identified user terminals. Alternatively, the engine can send a request for bids to various content providers offering the same goods or services and provide content in the next content package from the highest bidding content provider. Thus, the engine 222 facilitates the generation of additional revenue for the content delivery system 206.
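
A hedged sketch of the bid-selection branch is shown below; the structure of the bids (a provider name mapped to a bid amount and a content identifier) is an assumption, not a format defined by this description.

```python
def select_by_bid(bids):
    """Pick content from the highest-bidding provider.

    `bids` is assumed to map provider names to (bid_amount, content) pairs;
    this structure is illustrative only.
    """
    provider, (amount, content) = max(bids.items(), key=lambda kv: kv[1][0])
    return provider, content

bids = {
    "provider_a": (1.25, "ad_for_product_x"),
    "provider_b": (2.10, "ad_for_product_x_variant"),
}
print(select_by_bid(bids))  # ('provider_b', 'ad_for_product_x_variant')
```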


Although the various embodiments described above are directed to basing the delivery of future content on a currently computed proximity score, proximity scores from the same user can be aggregated. For example, in one embodiment, the content delivery system 206 can be configured to store proximity scores for the content previously delivered to the user. Thus, when a proximity score is obtained for a current content, this score can be compared to previous proximity scores for the same or similar content. As a result, an aggregate proximity score can be generated, which can be used to more accurately estimate a user's interest in the content.


Alternatively or in combination with such aggregation, the current journal can be compared to a past journal to identify outliers. For example, a first journal is obtained for a user terminal that results in a high proximity score that indicates a high interest in the content already delivered, triggering delivery of a same or similar content to the user terminal. A subsequent journal obtained for this delivered content may result in a low proximity score, indicating a low interest in this related content. However, rather than automatically triggering delivery of a new content, the past and current journals can be first compared. Thus, if significant differences in events between the journals are discovered, this can confirm the low interest and the low proximity score. However, if the differences between the events in the journals are minor or are considered to be irrelevant with respect to a user's interest, the low proximity score can be considered an outlier and thus can be boosted prior to selection of a next content. For example, the events driving the low score can be removed and the proximity score can be recomputed to obtain a boosted proximity score. In another example, the current proximity score can be averaged or otherwise combined with a prior proximity score to provide the boosted proximity score. Such operations can be performed on the content delivery system or a user terminal.
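
A rough sketch of the outlier check and score boosting follows; the set-difference comparison of journals and the simple averaging rule are assumptions standing in for whatever journal comparison a real implementation would use.

```python
def boosted_score(current_score, prior_score, current_events, prior_events,
                  max_minor_differences=2):
    """Boost a low current score when the journals differ only slightly.

    Events are assumed to be represented as hashable tuples so that a simple
    symmetric set difference can stand in for a journal comparison; the
    difference threshold and averaging rule are illustrative only.
    """
    differences = len(set(current_events) ^ set(prior_events))
    if differences <= max_minor_differences:
        # Differences look irrelevant: treat the low score as an outlier
        # and combine it with the prior score.
        return (current_score + prior_score) / 2.0
    return current_score  # significant differences confirm the low interest

prior = [("scroll", "near_ad"), ("zoom", "near_ad"), ("tap", "near_ad")]
current = [("scroll", "near_ad"), ("zoom", "near_ad")]
print(boosted_score(0.2, 0.8, current, prior))  # 0.5 (boosted)
```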


In some embodiments, the journals and proximity scores can be used to enhance a user experience at the user terminal without the need to communicate with the content delivery system. For example, in some configurations, a user terminal can receive a collection of content to present to the user. Accordingly, based on a proximity score obtained when one of this collection of content is presented, the user terminal device can determine which of the locally stored content to present next. That is, the user terminal can determine whether to skip the content or wait an extended period of time before presenting it again (if a low proximity score is obtained), to present the content again immediately (if a high proximity score is obtained), or to present the content again after a short period of time (if an intermediate proximity score is obtained). Any scores obtained can be compared to proximity criteria, such as threshold values or other measures of proximity described above. Alternatively, the proximity score can be used to select a specific one of the collection of content to present. In such a configuration, a proximity score for each of the collection can be computed and a highest score can be used to select the next of the collection to present.
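
A sketch of this terminal-local decision is shown below; the threshold values, the returned labels, and the highest-score selection helper are assumptions for illustration.

```python
def local_presentation_decision(proximity_score, low=0.3, high=0.7):
    """Decide locally when to re-present cached content, without contacting
    the content delivery system. Thresholds and labels are illustrative."""
    if proximity_score >= high:
        return "present_again_immediately"
    if proximity_score >= low:
        return "present_again_after_short_delay"
    return "skip_or_wait_extended_period"

def pick_from_collection(scores_by_content):
    """Alternative: pick the cached content with the highest proximity score."""
    return max(scores_by_content, key=scores_by_content.get)

print(local_presentation_decision(0.75))                 # present_again_immediately
print(pick_from_collection({"ad_a": 0.4, "ad_b": 0.9}))  # ad_b
```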


Other implementations according to these examples include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such tangible computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures.


Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps.


Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.


Communication at various stages of the described system can be performed through a local area network, a token ring network, the Internet, a corporate intranet, 802.11 series wireless signals, fiber-optic network, radio or microwave transmission, etc. Although the underlying communication technology may change, the fundamental principles described herein are still applicable.


The various embodiments described above are provided by way of illustration only and should not be construed as limiting. Those skilled in the art may recognize various modifications and changes that may be made while following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the present disclosure.

Claims
  • 1. A method comprising: sending, by a processor, a first content package to a user terminal, the first content package including a first content designed to elicit a pre-defined response from a user of the user terminal; receiving, by the processor, a first data package from the user terminal, the first data package comprising a journal of events performed on the user terminal by the user of the user terminal in response to the first content; calculating, by the processor, a proximity score based on the journal of events, the proximity score indicating a proximity of the events performed by the user in response to the first content to the pre-defined response elicited by the first content; and assembling, by the processor, a second content package for the user terminal, wherein the second content package includes a second content related to the first content when the proximity score meets a first proximity score criteria for the first content.
  • 2. The method of claim 1, wherein the assembling further comprises selecting the second content to be the first content.
  • 3. The method of claim 1, wherein calculating the proximity score comprises: providing a classification for the events in the first journal; determining a temporal relationship among the events in the first journal; and computing the proximity score based on at least the classification and the temporal relationship.
  • 4. The method of claim 3, wherein providing the classification comprises identifying a type, a source, and a location for the events in the first journal.
  • 5. The method of claim 3, wherein determining the temporal relationship comprises identifying an order, a timing, and a recurrence of the events in the first journal.
  • 6. The method of claim 1, wherein the assembling of the second content package further comprises: providing a plurality of content for the second content package, each of the plurality of content associated with a proximity score value with respect to the first content; and selecting for the second content package at least one of the plurality of content corresponding to the proximity score.
  • 7. The method of claim 1, wherein the assembling further comprises: providing a plurality of groups of content for the second content package, the plurality of groups associated with a plurality of proximity score criteria with respect to the first content; comparing the proximity score to the plurality of proximity score criteria; and selecting content for the second content package from a one of the plurality of groups associated with a one of the plurality of proximity score criteria met by the proximity score.
  • 8. A non-transitory computer-readable medium having code for causing a computer to perform a method stored thereon, the method comprising: sending a first content package to one or more user terminals, the first content package comprising a first content designed to elicit a pre-defined response from a user of each of the one or more user terminals; storing one or more data packages received from the user terminals in response to the first content package, each of the data packages comprising a proximity score indicating a proximity of the events performed by the user of the user terminal from which the data package was received to the predefined response elicited by the first content of the first content package; and assembling at least one second content package for the user terminals, wherein the second content package includes a second content related to the first content when the proximity score associated with the one of the user terminals meets a first proximity score criteria for the first content.
  • 9. The non-transitory computer-readable medium of claim 8, wherein the step of assembling further comprises: generating an aggregate proximity score for the user terminals; comparing the aggregate proximity score to at least one of a plurality of proximity score criteria with respect to the first content; and selecting at least one content for the second content package from a plurality of content associated with a one of the plurality of proximity score criteria met by the aggregate proximity score.
  • 10. The non-transitory computer-readable medium of claim 8, wherein the step of assembling further comprises: categorizing the user terminals into one or more segments according to the proximity score associated with each of the user terminals and at least one proximity score category criteria with respect to the first content; and assembling the second content package for each segment to include at least one of a plurality of content associated with each of the segments.
  • 11. A content delivery system, comprising: a communications interface configured for sending a content package to a user terminal and receiving a data package from the user terminal, wherein the content package includes a first content designed to elicit a pre-defined response from a user of at least the user terminal and the data package comprises a dataset associated with events at the user terminal in response to the first content; and a content management module for assembling a next content package for the user terminal based on the received data package, wherein the content management module is configured for: determining from the received data package a proximity score indicating a proximity of the events to the pre-defined response elicited by the first content, and assembling the next content package based on the proximity score, wherein the next content package includes a second content related to the first content when the proximity score meets a first proximity score criteria for the first content.
  • 12. The system of claim 11, wherein the dataset comprises a journal of the events, and wherein the content management module is configured for calculating the proximity score based on the journal.
  • 13. The system of claim 12, wherein the content management module is further configured for calculating the proximity score based on a classification of the events in the journal and a temporal relationship among the events in the journal.
  • 14. The system of claim 13, wherein the content management module classifies the events in the journal by identifying a type, a source, and a location for the events in the journal and determines the temporal relationship among the events based on an order, a timing, and a recurrence of the events in the journal.
  • 15. The system of claim 11, wherein the content management module is further configured during the assembling for: responsive to receiving a plurality of data packages from a plurality of terminals in response to the previous content package, generating an aggregate proximity score for the plurality of user terminals and assembling the next content package for the plurality of terminals based on the aggregate proximity score.
  • 16. The system of claim 11, wherein the content management module is further configured during assembling for: responsive to receiving a plurality of data packages from a plurality of terminals in response to the previous content package, categorizing the plurality of user terminals into one or more segments based on an associated proximity score and at least one proximity score criteria with respect to the previous content package, and selecting content for the next content package for each of the segments from a plurality of content associated with each of the segments.
  • 17. The system of claim 11, further comprising: a bidding/pricing engine for requesting an alternate pricing for delivery of the second content.
  • 18. A method comprising: receiving, by a user terminal, a first content package including a first content designed to elicit a pre-defined response; generating, by the user terminal, a journal of events occurring during presentation of the first content package at the user terminal; computing a proximity score indicating a proximity of the events in the journal to the pre-defined response elicited by the first content; assembling a data package in response to the content package, the data package comprising the proximity score; and sending the data package to a source of the first content package.
  • 19. The method of claim 18, wherein the data package further comprises the journal.
  • 20. The method of claim 18, wherein the computing further comprises: providing a classification for the events in the journal; determining a temporal relationship among the events in the journal; and computing the proximity score based on the classification and the temporal relationship.
  • 21. The method of claim 20, wherein providing the classification comprises identifying a type, a source, and a location for the events in the journal.
  • 22. The method of claim 20, wherein determining the temporal relationship comprises identifying an order, a timing, and a recurrence of the events in the journal.
  • 23. The method of claim 18, further comprising: prior to the step of assembling, comparing the proximity score to at least one other proximity score associated with another content previously presented at the user terminal and associated with the first content, and adjusting the proximity score if the proximity score is determined to be an outlier.
  • 24. A user terminal, comprising: a communications interface for receiving a content package, the content package including a first content designed to elicit a pre-defined response from a user of the user terminal; at least one user interface for receiving a user input; and a processing element communicatively coupled to the user interface and the communications interface, the processing element configured to: present the content package at the user interface, generate a journal of events for the content package, the journal including input received by the user interface in response to the presented first content, and compute a proximity score, the proximity score indicating a proximity of the events to the pre-defined response elicited by the first content of the content package.
  • 25. The user terminal of claim 24, wherein the processing element is further configured to: assemble a data package in response to the content package, and cause the communications interface to send the data package to a source of the content package, wherein the assembled data package comprises at least one of the journal and the proximity score.
  • 26. The user terminal of claim 24, wherein the processing element generates the proximity score based on a classification of the events in the journal and a temporal relationship among the events in the journal.
  • 27. The user terminal of claim 24, wherein the content package comprises a plurality of content for presenting at different times at the user interface, and wherein the processing element is further configured to: generate the proximity score based on a currently presented content of the plurality of content, and select a next presented content of the plurality of content based on the proximity score.
  • 28. The user terminal of claim 27, wherein the processing element is further configured to: select a currently presented content as the next presented content if the proximity score meets a proximity score criteria for the currently presented content.
  • 29. The user terminal of claim 27, wherein the processing element is further configured to: exclude a currently presented content as the next presented content if the proximity score meets a proximity score criteria for the currently presented content.
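The following is a minimal sketch, not the patent's actual scoring algorithm, of the proximity score computation recited in claims 3-5 and 20-22: events in a journal are classified by type, source, and location, and the score also reflects their order, timing, and recurrence. All event names, fields, and weights below are illustrative assumptions.

```python
# Hypothetical proximity score from a journal of events (claims 3-5, 20-22).
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    type: str         # e.g. "click", "hover", "form_field" (assumed categories)
    source: str       # e.g. "ad_banner", "landing_page"
    location: str     # e.g. "checkout_button", "product_image"
    timestamp: float  # seconds since the content was presented


# Assumed classification weights: event types nearer in kind to the pre-defined
# response (a conversion) contribute more to the score.
TYPE_WEIGHTS = {"impression": 0.0, "hover": 0.1, "click": 0.4,
                "form_field": 0.7, "cart_add": 0.9}


def proximity_score(journal: List[Event]) -> float:
    """Return a score in [0, 1] indicating how close the journaled events
    came to the pre-defined response elicited by the content."""
    if not journal:
        return 0.0
    ordered = sorted(journal, key=lambda e: e.timestamp)
    score = 0.0
    for i, event in enumerate(ordered):
        class_weight = TYPE_WEIGHTS.get(event.type, 0.05)
        # Temporal relationship: events later in the sequence (closer to where a
        # conversion would have occurred) are weighted more heavily.
        order_weight = (i + 1) / len(ordered)
        score = max(score, class_weight * order_weight)
    # Recurrence: repeated events of the same type and location nudge the score up.
    repeats = len(ordered) - len({(e.type, e.location) for e in ordered})
    return min(1.0, score + 0.05 * repeats)
```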
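Claims 7 and 11 describe selecting the next content by comparing the reported proximity score against criteria associated with groups of candidate content. The sketch below illustrates one way that comparison could work; the thresholds, content names, and return shape are assumptions, not the claimed system's interface.

```python
# Illustrative server-side selection of a second content package (claims 7, 11).
from typing import Dict, List, Tuple

# (minimum score, candidate content) pairs for one first content, highest first.
CONTENT_GROUPS: List[Tuple[float, List[str]]] = [
    (0.7, ["same_ad_with_discount", "same_ad"]),  # user was close to converting
    (0.3, ["related_product_ad"]),                # some interest shown
    (0.0, ["new_campaign_ad"]),                   # little or no interest
]


def assemble_second_package(proximity_score: float) -> Dict[str, object]:
    """Pick content for the next content package based on the proximity score."""
    for minimum, candidates in CONTENT_GROUPS:
        if proximity_score >= minimum:
            return {"content": candidates[0], "criterion_met": minimum}
    return {"content": "default_ad", "criterion_met": None}


# Example: a score of 0.75 keeps the same (or closely related) content.
print(assemble_second_package(0.75))
```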
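Claims 9, 10, 15, and 16 cover aggregating proximity scores across user terminals and categorizing terminals into segments. A rough sketch under assumed data shapes follows; using the mean as the aggregate and three named segments are illustrative choices, not requirements of the claims.

```python
# Hypothetical aggregation and segmentation across terminals (claims 9-10, 15-16).
from statistics import mean
from typing import Dict, List, Tuple


def aggregate_score(scores_by_terminal: Dict[str, float]) -> float:
    """Aggregate proximity score across user terminals (mean is one choice)."""
    return mean(scores_by_terminal.values()) if scores_by_terminal else 0.0


def segment_terminals(scores_by_terminal: Dict[str, float],
                      boundaries: Tuple[float, float] = (0.7, 0.3)) -> Dict[str, List[str]]:
    """Categorize terminals into 'high', 'medium', and 'low' segments by score."""
    segments: Dict[str, List[str]] = {"high": [], "medium": [], "low": []}
    for terminal, score in scores_by_terminal.items():
        if score >= boundaries[0]:
            segments["high"].append(terminal)
        elif score >= boundaries[1]:
            segments["medium"].append(terminal)
        else:
            segments["low"].append(terminal)
    return segments


reports = {"terminal-1": 0.82, "terminal-2": 0.15, "terminal-3": 0.45}
print(aggregate_score(reports))    # approximately 0.473
print(segment_terminals(reports))  # terminal-1 high, terminal-3 medium, terminal-2 low
```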
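On the terminal side, claims 18, 23, and 25 describe computing the score locally, adjusting it when it is an outlier relative to scores for related content, and returning a data package containing the journal and the score. The following sketch assumes a simple z-score test and a pull-to-the-mean adjustment; the claims only require "adjusting," so this policy is an assumption.

```python
# Hypothetical terminal-side data package assembly with outlier adjustment
# (claims 18, 23, 25).
from statistics import mean, pstdev
from typing import Dict, List


def adjust_for_outlier(score: float, prior_scores: List[float],
                       z_threshold: float = 2.0) -> float:
    """Pull the score back toward the historical mean if it looks like an outlier."""
    if len(prior_scores) < 2:
        return score
    mu, sigma = mean(prior_scores), pstdev(prior_scores)
    if sigma > 0 and abs(score - mu) / sigma > z_threshold:
        return mu  # assumed adjustment policy
    return score


def build_data_package(journal: List[dict], score: float,
                       prior_scores: List[float]) -> Dict[str, object]:
    """Assemble the data package sent back to the source of the content package."""
    return {
        "journal": journal,  # per claim 19, the journal may accompany the score
        "proximity_score": adjust_for_outlier(score, prior_scores),
    }
```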
US Referenced Citations (244)
Number Name Date Kind
5408519 Pierce et al. Apr 1995 A
5459306 Stein et al. Oct 1995 A
5600364 Hendricks et al. Feb 1997 A
5613213 Naddell et al. Mar 1997 A
5678179 Turcotte et al. Oct 1997 A
5978775 Chen Nov 1999 A
5978833 Pashley et al. Nov 1999 A
6006197 d'Eon et al. Dec 1999 A
6009410 LeMole et al. Dec 1999 A
6023700 Owens et al. Feb 2000 A
6055512 Dean et al. Apr 2000 A
6055513 Katz et al. Apr 2000 A
6057872 Candelore May 2000 A
6097942 Laiho Aug 2000 A
6253189 Feezell et al. Jun 2001 B1
6286005 Cannon Sep 2001 B1
6334145 Adams et al. Dec 2001 B1
6338044 Cook et al. Jan 2002 B1
6345279 Li et al. Feb 2002 B1
6381465 Chern et al. Apr 2002 B1
6393407 Middleton et al. May 2002 B1
6405243 Nielsen Jun 2002 B1
6408309 Agarwal Jun 2002 B1
6446261 Rosser Sep 2002 B1
6502076 Smith Dec 2002 B1
6684249 Frerichs et al. Jan 2004 B1
6690394 Harui Feb 2004 B1
6698020 Zigmond et al. Feb 2004 B1
6718551 Swix et al. Apr 2004 B1
6738978 Hendricks et al. May 2004 B1
6795808 Strubbe et al. Sep 2004 B1
6886000 Aggarwal et al. Apr 2005 B1
6920326 Agarwal et al. Jul 2005 B2
6990462 Wilcox et al. Jan 2006 B1
7039599 Merriman et al. May 2006 B2
7072947 Knox et al. Jul 2006 B1
7149537 Kupsh et al. Dec 2006 B1
7168084 Hendricks et al. Jan 2007 B1
7203684 Carobus et al. Apr 2007 B2
7280818 Clayton Oct 2007 B2
7356477 Allan et al. Apr 2008 B1
7370002 Heckerman et al. May 2008 B2
7506355 Ludvig et al. Mar 2009 B2
7539652 Flinn et al. May 2009 B2
7558559 Alston Jul 2009 B2
7669212 Alao et al. Feb 2010 B2
7685019 Collins Mar 2010 B2
7730017 Nance et al. Jun 2010 B2
7734632 Wang Jun 2010 B2
7747676 Nayfeh et al. Jun 2010 B1
7870576 Eldering Jan 2011 B2
7882518 Finseth et al. Feb 2011 B2
7903099 Baluja Mar 2011 B2
7912843 Murdock et al. Mar 2011 B2
7921069 Canny et al. Apr 2011 B2
7984014 Song et al. Jul 2011 B2
8046797 Bentolila et al. Oct 2011 B2
8060406 Blegen Nov 2011 B2
8191098 Cooper et al. May 2012 B2
8196166 Roberts et al. Jun 2012 B2
8229786 Cetin et al. Jul 2012 B2
8380562 Toebes et al. Feb 2013 B2
20010044739 Bensemana Nov 2001 A1
20010047272 Frietas et al. Nov 2001 A1
20010051925 Kang Dec 2001 A1
20020006803 Mendiola et al. Jan 2002 A1
20020016736 Cannon et al. Feb 2002 A1
20020019829 Shapiro Feb 2002 A1
20020021809 Salo et al. Feb 2002 A1
20020052781 Aufricht et al. May 2002 A1
20020075305 Beaton et al. Jun 2002 A1
20020077130 Owensby Jun 2002 A1
20020078147 Bouthors et al. Jun 2002 A1
20020083411 Bouthors et al. Jun 2002 A1
20020099842 Jennings et al. Jul 2002 A1
20020120498 Gordon et al. Aug 2002 A1
20020137507 Winkler Sep 2002 A1
20020138291 Vaidyanathan et al. Sep 2002 A1
20020161770 Shapiro et al. Oct 2002 A1
20020164977 Link II et al. Nov 2002 A1
20020165773 Natsuno et al. Nov 2002 A1
20020175935 Wang et al. Nov 2002 A1
20030003935 Vesikivi et al. Jan 2003 A1
20030023489 McGuire et al. Jan 2003 A1
20030040297 Pecen et al. Feb 2003 A1
20030083931 Lang May 2003 A1
20030101454 Ozer et al. May 2003 A1
20030126015 Chan et al. Jul 2003 A1
20030126146 Van Der Riet Jul 2003 A1
20030130887 Nathaniel Jul 2003 A1
20030154300 Mostafa Aug 2003 A1
20030182567 Barton et al. Sep 2003 A1
20030188017 Nomura Oct 2003 A1
20030191689 Bosarge et al. Oct 2003 A1
20030197719 Lincke et al. Oct 2003 A1
20040003398 Donian et al. Jan 2004 A1
20040043777 Brouwer et al. Mar 2004 A1
20040045029 Matsuura Mar 2004 A1
20040054576 Kanerva et al. Mar 2004 A1
20040068435 Braunzell Apr 2004 A1
20040133480 Domes Jul 2004 A1
20040136358 Hind et al. Jul 2004 A1
20040158858 Paxton et al. Aug 2004 A1
20040185883 Rukman Sep 2004 A1
20040192359 McRaild et al. Sep 2004 A1
20040203761 Baba et al. Oct 2004 A1
20040203851 Vetro et al. Oct 2004 A1
20040204133 Andrew et al. Oct 2004 A1
20040209649 Lord Oct 2004 A1
20040259526 Goris et al. Dec 2004 A1
20050010641 Staack Jan 2005 A1
20050021397 Cui et al. Jan 2005 A1
20050060425 Yeh et al. Mar 2005 A1
20050071224 Fikes et al. Mar 2005 A1
20050075929 Wolinsky et al. Apr 2005 A1
20050125397 Gross et al. Jun 2005 A1
20050138140 Wen et al. Jun 2005 A1
20050228680 Malik Oct 2005 A1
20050228797 Koningstein et al. Oct 2005 A1
20050229209 Hildebolt et al. Oct 2005 A1
20050239495 Bayne Oct 2005 A1
20050239504 Ishii et al. Oct 2005 A1
20050249216 Jones Nov 2005 A1
20050267798 Panara Dec 2005 A1
20050273465 Kimura Dec 2005 A1
20050273833 Soinio Dec 2005 A1
20050289113 Bookstaff Dec 2005 A1
20060031327 Kredo Feb 2006 A1
20060040642 Boris et al. Feb 2006 A1
20060048059 Etkin Mar 2006 A1
20060059133 Moritani Mar 2006 A1
20060068845 Muller et al. Mar 2006 A1
20060075425 Koch et al. Apr 2006 A1
20060095511 Munarriz et al. May 2006 A1
20060117378 Tam et al. Jun 2006 A1
20060123014 Ng Jun 2006 A1
20060129455 Shah Jun 2006 A1
20060141923 Goss Jun 2006 A1
20060161520 Brewer et al. Jul 2006 A1
20060167747 Goodman et al. Jul 2006 A1
20060168616 Candelore Jul 2006 A1
20060194595 Myllynen et al. Aug 2006 A1
20060200460 Meyerzon et al. Sep 2006 A1
20060200461 Lucas et al. Sep 2006 A1
20060206586 Ling et al. Sep 2006 A1
20060276170 Radhakrishnan et al. Dec 2006 A1
20060276213 Gottschalk et al. Dec 2006 A1
20060282319 Maggio Dec 2006 A1
20060282328 Gerace et al. Dec 2006 A1
20060286963 Koskinen et al. Dec 2006 A1
20060286964 Polanski et al. Dec 2006 A1
20060288124 Kraft et al. Dec 2006 A1
20070004333 Kavanti Jan 2007 A1
20070011344 Paka et al. Jan 2007 A1
20070016743 Jevans Jan 2007 A1
20070022021 Walker et al. Jan 2007 A1
20070027703 Hu et al. Feb 2007 A1
20070027760 Collins et al. Feb 2007 A1
20070027762 Collins et al. Feb 2007 A1
20070037562 Smith-Kerker et al. Feb 2007 A1
20070047523 Jiang Mar 2007 A1
20070061195 Liu et al. Mar 2007 A1
20070061300 Ramer et al. Mar 2007 A1
20070067215 Agarwal et al. Mar 2007 A1
20070072631 Mock et al. Mar 2007 A1
20070074262 Kikkoji et al. Mar 2007 A1
20070078712 Ott et al. Apr 2007 A1
20070083602 Heggenhougen et al. Apr 2007 A1
20070088687 Bromm et al. Apr 2007 A1
20070088801 Levkovitz et al. Apr 2007 A1
20070088851 Levkovitz et al. Apr 2007 A1
20070094066 Kumar et al. Apr 2007 A1
20070100651 Ramer et al. May 2007 A1
20070100805 Ramer et al. May 2007 A1
20070105536 Tingo, Jr. May 2007 A1
20070113243 Brey May 2007 A1
20070117571 Musial May 2007 A1
20070118592 Bachenberg May 2007 A1
20070136457 Dai et al. Jun 2007 A1
20070149208 Syrbe et al. Jun 2007 A1
20070156534 Lerner et al. Jul 2007 A1
20070180147 Leigh Aug 2007 A1
20070192409 Kleinstern et al. Aug 2007 A1
20070198485 Ramer et al. Aug 2007 A1
20070208619 Branam et al. Sep 2007 A1
20070214470 Glasgow et al. Sep 2007 A1
20070233671 Oztekin et al. Oct 2007 A1
20070244750 Grannan et al. Oct 2007 A1
20070260624 Chung et al. Nov 2007 A1
20070288950 Downey et al. Dec 2007 A1
20070290787 Fiatal et al. Dec 2007 A1
20070300263 Barton et al. Dec 2007 A1
20080004046 Mumick et al. Jan 2008 A1
20080004958 Ralph et al. Jan 2008 A1
20080013537 Dewey et al. Jan 2008 A1
20080032703 Krumm et al. Feb 2008 A1
20080032717 Sawada et al. Feb 2008 A1
20080040175 Dellovo Feb 2008 A1
20080052158 Ferro et al. Feb 2008 A1
20080057947 Marolia et al. Mar 2008 A1
20080065491 Bakman Mar 2008 A1
20080070579 Kankar et al. Mar 2008 A1
20080071875 Koff et al. Mar 2008 A1
20080071929 Motte et al. Mar 2008 A1
20080082686 Schmidt et al. Apr 2008 A1
20080091796 Story Apr 2008 A1
20080133344 Hyder et al. Jun 2008 A1
20080140508 Anand et al. Jun 2008 A1
20080228568 Williams et al. Sep 2008 A1
20080243619 Sharman et al. Oct 2008 A1
20080249832 Richardson et al. Oct 2008 A1
20080262927 Kanayama et al. Oct 2008 A1
20080271068 Ou et al. Oct 2008 A1
20080281606 Kitts et al. Nov 2008 A1
20080288476 Kim et al. Nov 2008 A1
20080319836 Aaltonen et al. Dec 2008 A1
20090006194 Sridharan et al. Jan 2009 A1
20090029721 Doraswamy Jan 2009 A1
20090049090 Shenfield et al. Feb 2009 A1
20090063249 Tomlin et al. Mar 2009 A1
20090106111 Walk et al. Apr 2009 A1
20090125377 Somji et al. May 2009 A1
20090132395 Lam et al. May 2009 A1
20090138304 Aharoni et al. May 2009 A1
20090197619 Colligan et al. Aug 2009 A1
20090216847 Krishnaswamy et al. Aug 2009 A1
20090240677 Parekh et al. Sep 2009 A1
20090275315 Alston Nov 2009 A1
20090286520 Nielsen et al. Nov 2009 A1
20090298483 Bratu et al. Dec 2009 A1
20100030647 Shahshahani Feb 2010 A1
20100082397 Blegen Apr 2010 A1
20100082423 Nag et al. Apr 2010 A1
20100088152 Bennett Apr 2010 A1
20100114654 Lukose et al. May 2010 A1
20100125505 Puttaswamy May 2010 A1
20100138271 Henkin Jun 2010 A1
20100153216 Liang et al. Jun 2010 A1
20100161424 Sylvain Jun 2010 A1
20100169157 Muhonen et al. Jul 2010 A1
20100169176 Turakhia Jul 2010 A1
20110106840 Barrett et al. May 2011 A1
20110209067 Bogess et al. Aug 2011 A1
20110276401 Knowles et al. Nov 2011 A1
Foreign Referenced Citations (97)
Number Date Country
1015704 Jul 2005 BE
19941461 Mar 2001 DE
10061984 Jun 2002 DE
1061465 Dec 2000 EP
1073293 Jan 2001 EP
1107137 Jun 2001 EP
1109371 Jun 2001 EP
1220132 Jul 2002 EP
1239392 Sep 2002 EP
1280087 Jan 2003 EP
1365604 Nov 2003 EP
1408705 Apr 2004 EP
1455511 Sep 2004 EP
1509024 Feb 2005 EP
1528827 May 2005 EP
1542482 Jun 2005 EP
1587332 Oct 2005 EP
1615455 Jan 2006 EP
1633100 Mar 2006 EP
1677475 Jul 2006 EP
1772822 Apr 2007 EP
2343051 Apr 2000 GB
2369218 May 2002 GB
2372867 Sep 2002 GB
2406996 Apr 2005 GB
2414621 Nov 2005 GB
2424546 Sep 2006 GB
2002140272 May 2002 JP
2007199821 Aug 2007 JP
20060011760 Jul 2004 KR
9624213 Aug 1996 WO
9821713 May 1998 WO
0000916 Jan 2000 WO
0030002 May 2000 WO
0044151 Jul 2000 WO
0122748 Mar 2001 WO
0131497 May 2001 WO
0144977 Jun 2001 WO
0152161 Jul 2001 WO
0157705 Aug 2001 WO
0158178 Aug 2001 WO
0163423 Aug 2001 WO
0165411 Sep 2001 WO
0169406 Sep 2001 WO
0171949 Sep 2001 WO
0191400 Nov 2001 WO
0193551 Dec 2001 WO
0197539 Dec 2001 WO
0209431 Jan 2002 WO
0231624 Apr 2002 WO
0244989 Jun 2002 WO
02054803 Jul 2002 WO
02069585 Sep 2002 WO
02069651 Sep 2002 WO
02075574 Sep 2002 WO
02084895 Oct 2002 WO
02086664 Oct 2002 WO
02096056 Nov 2002 WO
03015430 Feb 2003 WO
03019845 Mar 2003 WO
03024136 Mar 2003 WO
03049461 Jun 2003 WO
03088690 Oct 2003 WO
2004084532 Sep 2004 WO
2004086791 Oct 2004 WO
2004100470 Nov 2004 WO
2004100521 Nov 2004 WO
2004102993 Nov 2004 WO
2004104867 Dec 2004 WO
2005020578 Mar 2005 WO
2005029769 Mar 2005 WO
2005073863 Aug 2005 WO
2005076650 Aug 2005 WO
2006002869 Jan 2006 WO
2006005010 Jan 2006 WO
2006016189 Feb 2006 WO
2006024003 Mar 2006 WO
2006027407 Mar 2006 WO
2006040749 Apr 2006 WO
2006093284 Sep 2006 WO
2006119481 Nov 2006 WO
2007001118 Jan 2007 WO
2007002025 Jan 2007 WO
2007087138 Apr 2007 WO
2007091089 Apr 2007 WO
2007060451 May 2007 WO
2007103263 Sep 2007 WO
2008013437 Jan 2008 WO
2008024852 Feb 2008 WO
2008045867 Apr 2008 WO
2008147919 Dec 2008 WO
2009009507 Jan 2009 WO
2009032856 Mar 2009 WO
2009061914 May 2009 WO
2009077888 Jun 2009 WO
2009099876 Aug 2009 WO
2009158097 Dec 2009 WO
Non-Patent Literature Citations (41)
Entry
“International Search Report and Written Opinion mailed on Aug. 26, 2011 for PCT/US2011/034927 titled “Content Delivery Based on User Terminal Events,” to Apple Inc”.
“Advertisement System, Method and Computer Program Product”, IP.com Prior Art Database Disclosure, Pub No. IPCOM000138557D, dated Jul. 24, 2006, IP.com, Amherst, NY (Available online at http://priorartdatabase.com/IPCOM/000138557, last visited Aug. 30, 2010)., Jul. 24, 2006.
“Combined Search and Examination Report”, for United Kingdom Patent Application No. GB 0816228.1 dated Jan. 2009, Jan. 6, 2009.
“Combined Search and Examination Report dated Mar. 7, 2008”, for United Kingdom Patent Application No. GB 0721863.9, Mar. 7, 2008.
“Communication (Combined Search and Examination Report under Sections 17 and 18(3)) dated Jan. 30, 2009 issued from the United Kingdom Patent Office”, in related United Kingdom Application No. GB.0818145.5 (8 pages), Jan. 30, 2009.
“Communication (European Search Report) dated Jun. 26, 2008”, in European Patent Application No. EP 08101394, Jun. 26, 2008.
“Communication (European Search Report) dated Oct. 17, 2008 issued by the European Patent Office”, in counterpart European Patent Application EP 08156763, Oct. 17, 2008.
“Communication (International Search Report along with Written Opinion of International Searching Authority) mailed Oct. 8, 2008 issued by the International Searching Authority”, in counterpart International Application PCT/EP 2008/056342, Oct. 8, 2008.
“Communication (Notification Concerning Transmittal of International Preliminary Report on Patentability, International Preliminary Report on Patentability, and Written Opinion of the International Searching Authority)”, issued in connection with related International Application PCT/EP 2008/051489 and mailed on Sep. 24, 2009 (6 pages), Sep. 24, 2009.
“Communication (Search Report under Section 17 along with Examination Report under Section 18(3)) dated Oct. 6, 2008 issued by the United Kingdom Intellectual Property Office”, in counterpart U.K. Application GB 0809321.3, Oct. 6, 2008.
“Communication Pursuant to Article 94(3) EPC (European Examination Report) dated Oct. 23, 2008”, issued in counterpart European Patent Application No. EP 08101394.8-1238, Oct. 23, 2008.
“Examination Report”, for counterpart European Patent Application No. 08153257.4 issued Jun. 2, 2009.
“Examination Report dated Nov. 9, 2009”, for European Patent Application No. EP 08159355.0, Sep. 11, 2009.
“Examination Report dated Jun. 17, 2009”, issued in counterpart U.K. Application No. GB 0803273.2 by U.K. Intellectual Property Office (4 pages).
“International Preliminary Report on Patentability and Written Opinion issued Nov. 24, 2009”, in International Application PCT/EP2008/056342 (7 pages), Nov. 24, 2009.
“International Search Report”, for International Application No. PCT/FI 2006/050455, dated Jul. 25, 2007.
“International Search Report and Written Opinion of the International Search Authority mailed Jun. 19, 2009”, for International Application No. PCT/EP 2008/056069, Jun. 19, 2009.
“International Search Report and Written Opinion of the International Searching Authority mailed Feb. 11, 2009, issued by the International Searching Authority”, in related International Application PCT/EP 2008/063839 (11 pages).
“International Search Report mailed Mar. 24, 2009”, in related PCT International Application No. PCT/EP 2008/063326 (4 pages), Mar. 24, 2009.
“Notice of Allowance dated Apr. 29, 2011”, U.S. Appl. No. 11/888,680, Apr. 29, 2011, 10 pages.
“Office Action dated Mar. 31, 2011 issue by the U.S. Patent Office”, in related U.S. Appl. No. 12/080,124 (29 pages), Mar. 31, 2011.
“Office Action issued from the USPTO dated Aug. 20, 2009”, issued in related U.S. Appl. No. 12/075,593 (14 pages), Aug. 20, 2009.
“Office Action issued Mar. 17, 2010”, in related U.S. Appl. No. 12/075,593 (11 pages), Mar. 17, 2010.
“Office Action Issued Oct. 15, 2010 by the U.S. Patent Office”, in related U.S. Appl. No. 12/080,124 (28 pages), Oct. 15, 2010.
“Search Report under Section 17 dated Jul. 7, 2008”, in related U.K. Application GB 0803273.2.
“U.K. Search Report under Section 17 dated Oct. 23, 2007”, in U.K. Application No. 0712280.7, Oct. 23, 2007.
“Written Opinion of the International Searching Authority mailed Mar. 24, 2009 issued from the International Searching Authority”, in related PCT International Application No. PCT/EP 2008/063326 (5 pages), Mar. 24, 2009.
“XP002456252—Statement in Accordance with the Notice from the European Patent Office dated Oct. 1, 2007”, concerning business methods (OJ Nov. 2007; p. 592-593), Nov. 1, 2007, 592-593.
Hillard, Dustin et al., “Improving Ad Relevance in Sponsored Search”, Proceedings of the third ACM international conference on Web search and data mining, WSDM'10, Feb. 4-6, 2010, Session: Ads, pp. 361-369, ACM, New York, New York, USA, 2010., Feb. 4, 2010, 361-369.
Internet Reference, “Specific Media Behavioral Targeting Index”, Specific Media, Inc., Irvine, CA, 2010, Available online at http://www.specificmedia.com/behavioral-targeting.php.
Langheinrich, Marc et al., “Unintrusive Customization Techniques for Web Advertising”, Computer Networks: The International Journal of Computer and Telecommunications Networking, vol. 31, No. 11, May 1999, pp. 1259-1272, Elsevier North-Holland, Inc., New York, NY, 1999., May 11, 1999, 1259-1272.
Mueller, Milton , “Telecommunication Access in Age of Electronic Commerce: Toward a Third-Generation Service Policy”, Nov. 1996, HeinOnline, 49. Fed. Comm L.J., Nov. 1, 1996, 655-665.
Perkins, Ed, “When to buy airfare”, http://www.smartertrael.com/travel-advice/when-to-buy-airfare.html?id=1628038, Nov. 21, 2006 (4 pages), Nov. 21, 2006.
Regelson, Moira et al., “Predicting Click-Through Rate Using Keyword Clusters”, Proceedings of the Second Workshop on Sponsored Search Auctions, EC'06, SSA2, Jun. 11, 2006, ACM, 2006., Jun. 11, 2006.
Richardson, Matthew et al., “Predicting Clicks: Estimating the Click-Through Rate for New Ads”, Proceedings of the 16th international conference on World Wide Web, Banff, Alberta, Canada, May 8-12, 2007, Session: Advertisements & click estimates, pp. 521-529, ACM, 2007., May 8, 2007, 521-529.
Shaikh, Baber M. et al., “Customized User Segments for Ad Targeting”, IP.com Prior Art Database Disclosure, Pub No. IPCOM000185640D, dated Jul. 29, 2009 UTC, IP.com, Amherst, NY (Available online at http://priorartdatabase.com/IPCOM/000185640, last visited Aug. 30, 2010)., Jul. 29, 2009.
“AdWords Reference Guide”, Google, 2004.
Ghose, Anindya et al., “An Empirical Analysis of Search Engine Advertising: Sponsored Search in Electronic Markets”, Management Science, Informs, 2009.
Karuga, Gilber G. et al., “AdPalette: An Algorithm for Customizing Online Advertisements on the Fly”, Decision Support Systems, vol. 32, 2001.
Science Dictionary, Definition of “dynamic”, 2002.
World English Dictionary, Definition of “relevant”, 1998.
Related Publications (1)
Number Date Country
20110276615 A1 Nov 2011 US