System, method and computer program product for human presence detection based on audio

Information

  • Patent Grant
  • Patent Number
    10,082,574
  • Date Filed
    Thursday, August 25, 2011
  • Date Issued
    Tuesday, September 25, 2018
Abstract
Methods, systems and computer program products that allow for the determination of human presence in a room where content is being presented. The audio that is associated with the content may be captured, along with the audio that is being generated collectively by whatever sources may be in the room including the presentation of the content. Features may be extracted from both the content audio and the room audio. These features may then be compared, and the differences may be quantified. If the differences are significant, then human presence may be inferred.
Description
BACKGROUND

For a number of reasons, it would be useful if a home entertainment device or system were able to determine if people were present in the room. If viewers leave the room in order to go to the kitchen, for example, the system could go into a low power consumption state, perhaps by dimming or powering down the display, or by shutting down completely. In this way, power could be conserved. If recorded media were being viewed, the playback could be automatically paused when a viewer leaves the room.


In addition, the next generation of smart televisions may be service platforms offering viewers several services such as banking, on-line shopping, etc. Human presence detection would also be useful for such TV-based services. For example, if a viewer were accessing a bank/brokerage account using the TV but then left the room without closing the service, a human presence detection capability could be used to automatically log off or shut down the service after a predetermined time. In another case, if another person enters the room while the on-line banking service is running, the human presence detection could be used to automatically turn off the banking service for security or privacy reasons.


Detecting human presence would also be useful to advertisers and content providers. Actual viewership could be determined. Content providers could determine the number of people viewing a program. Advertisers could use this information to determine the number of people who are exposed to a given advertisement. Moreover, an advertiser could determine how many people viewed a particular airing of an advertisement, i.e., how many people saw an ad at a particular time and channel, and in the context of a particular program. This in turn could allow the advertiser to perform cost benefit analysis. The exposure of an advertisement could be compared to the cost to produce the advertisement, to determine if the advertisement, as aired at a particular time and channel, is a worthwhile expense.





BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES


FIG. 1 is a block diagram of an exemplary environment in which embodiments of the systems, methods, and computer products described herein may operate.



FIG. 2 is a flow chart illustrating the processing of the systems, methods, and computer products described herein, according to an embodiment.



FIG. 3 is a more detailed flow chart illustrating the overall processing of the systems, methods, and computer products described herein, according to an embodiment.



FIG. 4 is a flow chart illustrating feature extraction of content audio, according to an embodiment.



FIG. 5 is a flow chart illustrating feature extraction of room audio, according to an embodiment.



FIG. 6 is a flow chart illustrating feature extraction of content audio in order to determine the presence of more than one person, according to an embodiment.



FIG. 7 is a flow chart illustrating feature extraction of room audio in order to determine the presence of more than one person, according to an embodiment.



FIG. 8 is a flow chart illustrating the comparison of features of room audio and content audio and the inference of human presence or absence, according to an embodiment.



FIG. 9 is a flow chart illustrating the normalization of data and the inference of human presence or absence based on normalized data, according to an embodiment.



FIG. 10 is a flow chart illustrating the inference of whether more than one person is present in the room, according to an embodiment.



FIG. 11 is a block diagram illustrating the components of a system in which the processing described herein may be implemented, according to an embodiment.



FIG. 12 is a block diagram illustrating the computing context of a firmware embodiment of the feature extraction process, according to an embodiment.



FIG. 13 is a block diagram illustrating the computing context of a software embodiment of the comparison, normalization, and inference processes, according to an embodiment.





In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.


DETAILED DESCRIPTION

An embodiment is now described with reference to the figures, where like reference numbers may indicate identical or functionally related elements. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will be apparent to a person skilled in the relevant art that this can also be employed in a variety of systems and applications other than what is described herein.


Disclosed herein are methods, systems and computer program products that may allow for the determination of human presence in a room where content is being presented. The audio that is associated with the content may be captured, along with the audio being generated in the room by whatever sources may be collectively present. Features may be extracted from both the content audio and the room audio. These features may then be compared, and the differences may be quantified. If the differences are significant, then human presence may be inferred. Insignificant differences may be used to infer the absence of people.


The overall context of the system is illustrated in FIG. 1, according to an embodiment. Content 110 may be provided to a user's home entertainment or computer system. In the illustrated embodiment, the content 110 may be received at a consumer electronics device such as a set-top box (STB) 120. In alternative embodiments, the content 110 may be received at another consumer electronics device, such as a home computer. The content 110 may be received from a content provider, such as a broadcast network, a server associated with a web site, or other source. Content 110 may be received via a data network, and may be communicated through fiber, wired or wireless media, or some combination thereof. In an alternative embodiment, the content 110 may not be received from an external source, but may be locally stored content that can be played by a user. Further, note that content 110 may include an audio component, shown as content audio 115.


Content 110 may be presented to a user through one or more output devices, such as television (TV) 150. The presentation of content 110 may be controlled through the use of a remote control 160, which may transmit control signals to STB 120. The control signals may be received by a radio frequency (RF) interface 130 at STB 120.


Room audio 170 may also be present, including all sound generated in the room. Sources for the room audio 170 may include ambient noise and sounds made by any users, including but not limited to speech. Room audio 170 may also include sound generated by the consumer electronics in the room, such as the content audio 115 produced by TV 150. The room audio may be captured by a microphone 140. In the illustrated embodiment, microphone 140 may be incorporated in STB 120. In alternative embodiments, the microphone 140 may be incorporated in TV 150 or elsewhere.


The processing of the system described herein is shown generally at FIG. 2 as process 200, according to an embodiment. At 210, room audio, which includes any content audio as heard in the room, and content audio may be received. In an embodiment, one or both may be recorded, or, in the case of content audio, extracted from the video stream as it is being transmitted in the room, in order to facilitate the processing described below. At 220, analogous features of both room audio and content audio may be extracted. At 230, the extracted features of the room audio may be compared with those of the content audio. At 240, the comparison may be used to infer either the presence or absence of people in the room.


Process 200 is illustrated in greater detail in FIG. 3, according to an embodiment. At 310, content audio may be received. At 320, the content audio may be sampled. In an embodiment, the content audio may be sampled at 8 kHz. In alternative embodiments, the content audio may be sampled at another frequency. At 330, the sampled content audio may be divided into intervals for subsequent processing. In an embodiment, the intervals may be 0.5 seconds long. At 340, features may be extracted from each interval of sampled content audio. The feature extraction process will be described in greater detail below. Generally, for each interval a statistical measure, such as the coefficient of variation, may be calculated and used as the feature for subsequent processing.


Room audio may be processed in an analogous manner. At 315, room audio may be received. As noted above, room audio may be captured using a microphone incorporated into an STB or other consumer electronics component in the room, and may then be recorded for processing purposes. At 325, the room audio may be sampled. In an embodiment, the room audio may be sampled at 8 kHz or any other frequency. At 335, the sampled room audio may be divided into intervals for subsequent processing. In an embodiment, the intervals may be 0.5 seconds long. The intervals of sampled room audio may correspond, with respect to time, to respective intervals of sampled content audio. At 345, features may be extracted from each interval of sampled room audio. As in the case of content audio, a coefficient of variation or other statistical measure may be calculated for each interval and used as the feature for subsequent processing.
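As a concrete illustration of the sampling and interval decomposition described for 320-335, the following Python sketch splits an 8 kHz signal into 0.5 second intervals of 4,000 samples each. The sketch is not part of the patent; the use of NumPy and the function name are assumptions made for illustration only.

```python
import numpy as np

SAMPLE_RATE_HZ = 8000     # sampling rate used in an embodiment
INTERVAL_SECONDS = 0.5    # interval length used in an embodiment

def split_into_intervals(samples, sample_rate=SAMPLE_RATE_HZ,
                         interval_seconds=INTERVAL_SECONDS):
    """Divide a 1-D array of audio samples into fixed-length intervals.

    Trailing samples that do not fill a complete interval are dropped.
    """
    samples = np.asarray(samples, dtype=np.float64)
    interval_len = int(sample_rate * interval_seconds)   # 4,000 samples per interval
    n_intervals = len(samples) // interval_len
    return samples[:n_intervals * interval_len].reshape(n_intervals, interval_len)

# Example: ten seconds of synthetic audio becomes twenty 0.5 second intervals.
rng = np.random.default_rng(0)
intervals = split_into_intervals(rng.normal(size=SAMPLE_RATE_HZ * 10))
print(intervals.shape)  # (20, 4000)
```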


At 350, the extracted features may be compared. In an embodiment, this includes comparison of the coefficients of variation as a common statistical measure, for temporally corresponding intervals of sampled room audio and sampled content audio. The comparison process will be described in greater detail below. In an embodiment, this may comprise calculating the difference between the coefficients of variation of the room audio and the content audio, for corresponding intervals. At 360, a normalization or smoothing process may take place. This may comprise calculation of a function of the differences between the coefficients of variation of the room audio and the content audio over a sequence of successive intervals. At 370, an inference may be reached regarding the presence of people in the room, where the inference may be based on the statistic(s) resulting from the normalization performed at 360. In an embodiment, if the coefficients of variation are sufficiently different between temporally corresponding intervals of room and content audio, then the presence of one or more people may be inferred.



FIG. 4 illustrates an embodiment of the process of feature extraction as may be performed for each interval of sampled content audio. At 410, the standard deviation may be determined for the interval. At 420, the mean may be determined. At 430, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero.
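A minimal sketch of the per-interval computation of FIGS. 4 and 5, assuming NumPy and a hypothetical helper name, with the zero-mean case handled as stated above:

```python
import numpy as np

def coefficient_of_variation(interval):
    """Return std/mean for one interval of sampled audio, or 0 if the mean is zero."""
    interval = np.asarray(interval, dtype=np.float64)
    mean = interval.mean()
    if mean == 0:
        return 0.0
    return interval.std() / mean
```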



FIG. 5 illustrates the process of feature extraction as may be performed for each interval of sampled room audio, according to an embodiment. At 510, the standard deviation may be determined for the sampled room audio interval. At 520, the mean may be determined. At 530, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero. At 540, the sampled room audio interval may be discarded. This may serve as a privacy precaution for the one or more persons that may be present in the room.


In an alternative embodiment, additional processing may be performed in conjunction with feature extraction. FIG. 6 illustrates such an embodiment of the process of feature extraction as may be performed for each interval of sampled content audio. At 604, a Fourier transform may be applied to the sampled content audio interval. This may allow the transfer of the signal to the frequency domain. At 607, band pass filtering may be performed, so that common speech frequencies may be retained. In an embodiment, the frequencies 85-1000 Hz may be retained, where speech energy may be most concentrated. At 610, the standard deviation may be determined for the output of 607 for this interval. At 620, the mean may be determined. At 630, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero.
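The frequency-domain variant of FIG. 6 (and FIG. 7 below) could be sketched as follows. This is an illustrative assumption rather than the patented implementation: the interval is transformed with NumPy's real FFT, only spectral bins in the 85-1000 Hz speech band are retained, and the coefficient of variation is computed over the retained magnitudes.

```python
import numpy as np

SPEECH_BAND_HZ = (85.0, 1000.0)  # band retained in an embodiment

def bandlimited_coefficient_of_variation(interval, sample_rate=8000,
                                         band=SPEECH_BAND_HZ):
    """Coefficient of variation of the magnitude spectrum within the speech band."""
    interval = np.asarray(interval, dtype=np.float64)
    spectrum = np.abs(np.fft.rfft(interval))                      # frequency-domain magnitudes
    freqs = np.fft.rfftfreq(len(interval), d=1.0 / sample_rate)   # bin frequencies in Hz
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])]   # band pass filtering
    mean = in_band.mean()
    if mean == 0:
        return 0.0
    return in_band.std() / mean
```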



FIG. 7 illustrates such an embodiment of the process of feature extraction as may be performed for each interval of sampled room audio. At 704, a Fourier transform may be applied to the sampled room audio interval. This may allow the transfer of the signal to the frequency domain. At 707, band pass filtering may be performed, so that common speech frequencies may be retained. As in the process of FIG. 6, the frequencies 85-1000 Hz may be retained, where speech energy may be most concentrated. At 710, the standard deviation may be determined for the output of 707 for this interval. At 720, the mean may be determined. At 730, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero. At 740, the room audio interval may be discarded.


The comparison of coefficients of variation is illustrated in FIG. 8, according to an embodiment. At 810, for each interval the difference between the coefficients of variation for room audio and for content audio may be determined. In an embodiment, this difference may be expressed as a percentage difference between the two coefficients. At 820, this percentage difference may be calculated. Given a series of content audio intervals and corresponding room audio intervals, the output of 820 may be a series of percentage differences. Each percentage difference may correspond to a pair of time-synchronized intervals, i.e., a content audio interval and a corresponding room audio interval.
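The patent does not fix a formula for the percentage difference; a hypothetical sketch, expressing the difference relative to the content audio coefficient, might be:

```python
def percentage_difference(cv_room, cv_content):
    """Percentage difference between room and content coefficients of variation.

    Expressed relative to the content audio coefficient; the choice of reference
    and the handling of a zero reference are assumptions for illustration.
    """
    if cv_content == 0:
        return 0.0 if cv_room == 0 else 100.0
    return abs(cv_room - cv_content) / abs(cv_content) * 100.0
```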


Note that the magnitude of the percentage difference may allow greater or lesser confidence in the human presence inference. If the percentage difference is less than the threshold, then human presence may be unlikely, as discussed above. If the percentage difference is significantly less than the threshold, e.g., close to zero, then this may suggest that the room audio and the content audio are extremely similar, so that a higher degree of confidence may be placed in the inference that human presence is unlikely. Conversely, if the percentage difference exceeds the threshold, then human presence may be likely. If the percentage difference exceeds the threshold by a significant amount, then this may suggest that the room audio and the content audio are very different, and a higher degree of confidence may be placed in the inference that human presence is likely.
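No numeric confidence measure is specified in the text; the hedged sketch below illustrates one possible mapping in which confidence grows with the distance of the percentage difference from the threshold (the logistic form and its scale parameter are assumptions, not part of the patent).

```python
import math

def presence_confidence(pct_difference, threshold=10.0, scale=10.0):
    """Return (presence_inferred, confidence) for one percentage difference.

    Confidence is 0.5 at the threshold and approaches 1.0 as the difference moves
    farther from it, on either side; the logistic shape and scale are assumptions.
    """
    distance = abs(pct_difference - threshold)
    confidence = 1.0 / (1.0 + math.exp(-distance / scale))
    return pct_difference > threshold, confidence
```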


In an embodiment, the data related to a given interval may be normalized by considering this interval in addition to a sequence of immediately preceding intervals. In this way, significance of outliers may be diminished, while the implicit confidence level of an interval may influence the inferences derived in succeeding intervals. Numerically, the normalization process may use any of several functions. Normalization may use a moving average of data from past intervals, or may use linear or exponential decay functions of this data.
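As one example of the decay-function alternative mentioned above (the specific form and the smoothing factor are assumptions for illustration), exponential smoothing of the per-interval percentage differences could be sketched as:

```python
def exponentially_smoothed(differences, alpha=0.3):
    """Exponentially smooth a sequence of per-interval percentage differences.

    alpha near 1 weights the current interval heavily; alpha near 0 lets past
    intervals dominate. The value 0.3 is an arbitrary example.
    """
    smoothed, value = [], None
    for d in differences:
        value = d if value is None else alpha * d + (1.0 - alpha) * value
        smoothed.append(value)
    return smoothed
```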



FIG. 9 illustrates normalization that may be performed using a moving average, along with subsequent inference, according to an embodiment. Here, a predetermined number of intervals may be used. In this embodiment, ten intervals may be used: the current interval and the nine preceding intervals. At 910, the percentage difference between coefficients of variation for room audio and content audio for each of the preceding nine intervals may be considered, along with the percentage difference in the current interval. This series of ten values may then be averaged at 920, yielding an average percentage difference. This average percentage difference may then be compared to a threshold value to determine, at 930, if human presence is to be inferred. If the average is within the threshold (e.g., 10% in an embodiment), then at 940 human presence may be deemed unlikely. Otherwise, human presence may be inferred at 950.
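A sketch of the moving-average normalization and threshold test of FIG. 9, using the ten-interval window and 10% threshold of the embodiment (the class and method names are illustrative, not from the patent):

```python
from collections import deque

class PresenceDetector:
    """Infer human presence from per-interval percentage differences (FIG. 9 sketch)."""

    def __init__(self, window=10, threshold_pct=10.0):
        # Window holds the current interval plus up to nine preceding intervals.
        self.window = deque(maxlen=window)
        self.threshold_pct = threshold_pct

    def update(self, pct_difference):
        """Add the latest interval's difference; return True if presence is inferred."""
        self.window.append(pct_difference)
        average = sum(self.window) / len(self.window)
        return average > self.threshold_pct


# Usage: feed one percentage difference per 0.5 second interval.
detector = PresenceDetector()
for diff in [2.0, 3.5, 1.0, 40.0, 55.0, 60.0, 48.0, 52.0, 45.0, 50.0]:
    present = detector.update(diff)
print("human presence inferred" if present else "human presence unlikely")
```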


The processes of FIGS. 6 and 7 may be used to extract features in the context of determining human presence as shown in FIG. 3 at 340 and 345 respectively. In alternative embodiments these processes may be used in a slightly different manner. Here, the process of FIG. 3 may take place as shown, where feature extraction 340 (for content audio) may take place as shown in FIG. 4, and feature extraction 345 (for room audio) may take place as shown in FIG. 5. If human presence has been inferred at 370, additional processing may be performed to determine if more than one person is present in the room.


This is shown in FIG. 10 according to an embodiment. At 1030, sampled content audio may be divided into intervals. At 1040, features may be extracted for an interval of content audio. In an embodiment, the features of the content audio interval may be extracted according to the process illustrated in FIG. 6 and discussed above. At 1035, sampled room audio may be divided into intervals. At 1045, features may be extracted for an interval of room audio. In an embodiment, the features of the room audio interval may be extracted according to the process illustrated in FIG. 7 and discussed above.


At 1050, the extracted features of a content audio interval and a room audio interval may be compared. This comparison may be performed in the same manner as shown in FIG. 8. At 1060, normalization and inference may be performed in the same manner as shown in FIG. 9. In this case, the inference may be made as to whether the presence of more than one person is likely or unlikely.


As noted above, the systems, methods and computer program products described herein may be implemented in the context of a home entertainment system that may include an STB and/or a smart television, or may be implemented in a personal computer. Moreover, the systems, methods and computer program products described herein may also be implemented in the context of a laptop computer, ultra-laptop or netbook computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth.


One or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term software, as used herein, refers to a computer program product including a computer readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein. The computer readable medium may be transitory or non-transitory. An example of a transitory computer readable medium may be a digital signal transmitted over a radio frequency or over an electrical conductor, through a local or wide area network, or through a network such as the Internet. An example of a non-transitory computer readable medium may be a compact disk, a flash memory, random access memory (RAM), read-only memory (ROM), or other data storage device.


An embodiment of a system that may perform the processing described herein is shown in FIG. 11. Here, the feature extraction may be embodied in firmware in a programmable integrated circuit (PIC). The comparison and normalization processing may be embodied in software.


A microphone 1105 may capture room audio 1107. Content audio 1117 may be received and routed to PIC 1110. The sampling of the room and content audio and the decomposition of these signals into intervals may be performed in PIC 1110 or elsewhere. After sampling and decomposing into intervals, the content and room audio may be processed by the feature extraction firmware 1115 in PIC 1110. As discussed above, the feature extraction process may produce coefficients of variation for each interval, for both sampled room audio and sampled content audio. In the illustrated embodiment, feature extraction may take place in the PIC 1110 through the execution of feature extraction firmware 1115. Alternatively, the feature extraction functionality may be implemented in an execution engine of system on a chip (SOC) 1120.


If feature extraction is performed at PIC 1110, the coefficients of variation may be sent to SOC 1120, and then made accessible to operating system (OS) 1130. Comparison of coefficients from corresponding room audio and content audio intervals may be performed by logic 1160 in presence middleware 1140. Normalization may be performed by normalization logic 1150, which may also be part of presence middleware 1140. An inference regarding human presence may then be made available to a presence-enabled application 1170. Such an application may, for example, put system 1100 into a low power state if it is inferred that no one is present. Another example of a presence-enabled application 1170 may be a program that collects presence inferences from system 1100 and others like it in other households, to determine viewership of a television program or advertisement.


As noted above with respect to FIGS. 6, 7 and 10, embodiments may also infer the presence of more than one person. In this case, if human presence is inferred, feature extraction may be repeated using Fourier transformation and bandpass filtering. In an embodiment, this functionality may be implemented in feature extraction firmware 1115. Comparison and normalization may then be performed on the generated coefficients of variation. This processing may be performed by comparison logic 1160 and normalization logic 1150 in middleware 1140.


Items 1105, 1110, 1120, and 1130 may all be located in one or more components in a user's home entertainment system or computer system, in an embodiment. They may be located in an STB, digital video recorder, or television, for example. Presence middleware 1140 and presence-enabled application 1170 may also be located in one or more components of the user's home entertainment system or computer system. In alternative embodiments, one or both of presence middleware 1140 and presence-enabled application 1170 may be located elsewhere, such as the facility of a content provider, for example.


Note that in some embodiments, the audio captured by the microphone 1105 may be muted. A user may choose to do this via a button on remote control 1180 or the home entertainment system. Such a mute function does not interfere with the mute function on remote controls, which mutes the audio coming out of the TV. A “mute” command for the microphone would then be sent to audio selection logic in PIC 1110. As a result of such a command, audio from microphone 1105 would not be received by OS 1130. Nonetheless, room audio 1107 may still be received at PIC 1110, where feature extraction may be performed. Such a capability may be enabled by the presence of the feature extraction firmware 1115 in the PIC 1110. The statistical data, i.e., the coefficients of variation, may then be made available to the OS 1130, even though the room audio itself has been muted. The nature of the coefficients of variation may be such that the coefficients may not be usable for purposes of recreating room audio 1107.



FIG. 12 illustrates an embodiment in which the feature extraction functionality may be embodied in firmware. As discussed above, such functionality may be incorporated as part of a PIC. System 1200 may include a processor 1220 and may further include a firmware device 1210. Device 1210 may include one or more computer readable media that may store computer program logic 1240. Firmware device 1210 may be implemented in a read-only memory (ROM) or other data storage component for example, as would be understood by a person of ordinary skill in the art. Processor 1220 and device 1210 may be in communication using any of several technologies known to one of ordinary skill in the art, such as a bus. Computer program logic 1240 contained in device 1210 may be read and executed by processor 1220. One or more ports and/or I/O components, shown collectively as I/O 1230, may also be connected to processor 1220 and device 1210.


Computer program logic 1240 may include feature extraction code 1250. This code may be responsible for determining the standard deviation and mean for intervals of sampled room audio and content audio, as discussed above. Feature extraction code 1250 may also be responsible for implementing Fourier transformation and bandpass filtering as discussed above with respect to FIGS. 6 and 7. Feature extraction code 1250 may also be responsible for calculation of coefficients of variation for each interval of sampled room and content audio.


A software embodiment of the comparison and normalization functionality is illustrated in FIG. 13. The illustrated system 1300 may include a processor 1320 and may further include a body of memory 1310. Memory 1310 may include one or more computer readable media that may store computer program logic 1340. Memory 1310 may be implemented as a hard disk and drive, a removable media such as a compact disk, a read-only memory (ROM) or random access memory (RAM) device, for example, or some combination thereof. Processor 1320 and memory 1310 may be in communication using any of several technologies known to one of ordinary skill in the art, such as a bus. Computer program logic 1340 contained in memory 1310 may be read and executed by processor 1320. One or more I/O ports and/or I/O devices, shown collectively as I/O 1330, may also be connected to processor 1320 and memory 1310.


Computer program logic 1340 may include comparison code 1350. This module may be responsible for comparing coefficients of variation of corresponding intervals of room audio and content audio, and generating a quantitative indication of the difference, e.g., a percentage difference, as discussed above. Computer program logic 1340 may include normalization code 1360. This module may be responsible for performing normalization of data generated by comparison code 1350 using a moving average or other process, as noted above. Computer program logic 1340 may include inference code 1370. This module may be responsible for generating an inference regarding the presence or absence of people, given the results of normalization code 1360.


The systems, methods, and computer program products described above may have a number of applications. If a viewer leaves a room, for example, the absence of people could be detected as described above, and the entertainment or computer system could go into a low power consumption state, perhaps by dimming or powering down the display, or by shutting down completely. In this way, power could be conserved. If recorded media were being viewed, the playback could be automatically paused when a viewer leaves the room.


In addition, service platforms may offer viewers services such as banking, on-line shopping, etc. Human presence detection as described above would be useful for such TV-based services. For example, if a viewer were accessing a bank/brokerage account using the TV but then left the room without closing the service, a human presence detection capability could be used to automatically log off or shut down the service after a predetermined time. In another case, if another person enters the room while the on-line banking service is running, the human presence detection could be used to automatically turn off the banking service for security or privacy reasons.


Detecting human presence would also be useful to advertisers and content providers. Actual viewership could be determined. Content providers could determine the number of people viewing a program. Advertisers could use this information to determine the number of people who are exposed to a given advertisement. Moreover, an advertiser could determine how many people viewed a particular airing of an advertisement, i.e., how many people saw an ad at a particular time and channel, and in the context of a particular program. This in turn could allow the advertiser to perform cost benefit analysis. The exposure of an advertisement could be compared to the cost to produce the advertisement, to determine if the advertisement, as aired at a particular time and channel, is a worthwhile expense.


Methods and systems are disclosed herein with the aid of functional building blocks illustrating the functions, features, and relationships thereof. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.


While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the methods and systems disclosed herein. Thus, the breadth and scope of the claims should not be limited by any of the exemplary embodiments disclosed herein.

Claims
  • 1. A machine-implemented method, comprising: sampling a content audio from a content;sampling room audio from a microphone proximate to a consumer electronics device, the room audio comprising all sounds generated in a room, including the content audio;dividing each of the sampled content audio and the sampled room audio into intervals;computing a coefficient of variation for each of the intervals of each of the content audio and the room audio, by dividing a standard deviation of the interval by a mean of the interval, if the mean is non-zero;comparing the coefficient of variation of each interval of the room audio with the corresponding coefficient of variation of the content audio; anddetermining that a person is present if a difference between the coefficients of variation of a predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds a threshold.
  • 2. The machine-implemented method of claim 1, wherein the determining includes: inferring that the person is being exposed to content presented at the consumer electronics device if the difference between the coefficients of variation of the content audio and the coefficients of variation of the room audio exceeds the threshold.
  • 3. The machine-implemented method of claim 1, further including: normalizing results of the comparing over the predetermined number of intervals; and performing the determining based on the normalized results of the comparing.
  • 4. The machine-implemented method of claim 3, wherein, for each interval: the comparing includes determining a percentage difference between the coefficients of variation of the predetermined number of intervals of the room audio and the corresponding coefficients of variation of the content audio; andthe normalizing includes averaging the percentage difference of the intervals with the percentage differences of a predetermined number of preceding intervals.
  • 5. The machine-implemented method of claim 1, wherein the computing includes, for each interval of the room audio and the content audio: determining the standard deviation of the interval; anddetermining the mean of the interval.
  • 6. The machine-implemented method of claim 1, wherein the determining includes: determining a level of confidence that the person is present based upon an extent to which the difference between the coefficients of variation of the content audio and the coefficients of variation of the room audio exceeds the threshold.
  • 7. The machine-implemented method of claim 1, further comprising: generating a frequency domain representation of each interval of the room audio and the content audio; and wherein:the computing includes computing a coefficient of variation of the frequency domain representation of each interval of the content audio and the room audio; andthe determining includes determining that the person is present if a difference between the coefficients of variation of the predetermined number of intervals of the frequency domain representations of the content audio and the corresponding coefficients of variation of the frequency domain representation of the room audio exceeds the threshold.
  • 8. The machine-implemented method of claim 7, further comprising: band pass filtering the frequency domain representation of each interval of the content audio and the room audio to remove frequencies outside of a frequency range of human speech.
  • 9. The machine-implemented method of claim 1, further including: placing the consumer electronics device in a reduced power consumption state if the difference between the coefficients of variation of the predetermined number of intervals of content audio and the corresponding coefficients of variation of the room audio is below the threshold.
  • 10. The machine-implemented method of claim 9, further including: pausing presentation of content at the consumer electronics device if the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio is below the threshold.
  • 11. An apparatus, comprising: a programmable integrated circuit (PIC); andan execution engine in communication with the PIC;wherein the PIC is to: sample a content audio from a content;sample room audio from a microphone proximate to a consumer electronics device, the room audio comprising all sounds generated in a room, including the content audio;divide each of the sampled content audio and the sampled room audio into intervals; andcompute a coefficient of variation for each interval of each of the content audio and the room audio, by dividing a standard deviation of the interval by a mean of the interval, if the mean is non-zero;and wherein the execution engine is to: compare each coefficient of variation of the room audio with the corresponding coefficient of variation of the content audio; anddetermine that a person is present if a difference between the coefficients of variation of a predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds a threshold.
  • 12. The apparatus of claim 11, wherein the execution engine is further to: infer that the person is being exposed to content presented at the consumer electronics device if the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds the threshold.
  • 13. The apparatus of claim 11, wherein the execution engine is further to: normalize results of the comparisons over the predetermined number of intervals; anddetermine that the person is present based on the normalized results of the comparisons.
  • 14. The apparatus of claim 13, wherein the execution engine is further to, for each interval of the predetermined number of intervals: determine a percentage difference between the coefficient of variation of the room audio and the corresponding coefficient of variation of the content audio; andaverage the percentage difference of the interval with the percentage differences of a predetermined number of preceding intervals.
  • 15. The apparatus of claim 11, wherein the PIC is further to, for each interval of the room audio and the content audio: determine the standard deviation of the interval; anddetermine the mean of the interval.
  • 16. The apparatus of claim 11, wherein the execution engine is further to: determine a level of confidence that the person is present based upon an extent to which the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds the threshold.
  • 17. The apparatus of claim 11, wherein the PIC is further to: generate a frequency domain representation of each interval of the room audio and the content audio;compute the coefficient of variation of each interval of the content audio and the room audio as a coefficient of variation of the respective frequency domain representation; andthe execution engine is further configured to determine that the person is present if a difference between the coefficients of variation of the frequency domain representation of the content audio and the corresponding coefficients of variation of the frequency domain representation of the room audio exceeds the threshold.
  • 18. The apparatus of claim 17, wherein the PIC is further to: band pass filter the frequency domain representation of each interval of the content audio and the room audio to remove frequencies outside of a frequency range of human speech.
  • 19. The apparatus of claim 11, wherein the execution engine is further to: place the consumer electronics device in a reduced power consumption state if the difference between the coefficients of variation of each of the predetermined intervals of the content audio and the corresponding coefficients of variation of the room audio is below the threshold.
  • 20. The apparatus of claim 11, wherein the execution engine is further to: pause presentation of content at the consumer electronics device if the difference between the coefficients of variation of each of the predetermined intervals of the content audio and the corresponding coefficients of variation of the room audio is below the threshold.
  • 21. A non-transitory computer readable media encoded with a computer program that includes instructions to cause a processor to: sample a content audio from a content;sample room audio from a microphone proximate to a consumer electronics device, the room audio comprising all sounds generated in a room, including the content audio;divide each of the sampled content audio and the sampled room audio into intervals;compute a coefficient of variation for each interval of the content audio and the room audio, by dividing a standard deviation of the interval with a mean of the interval, if the mean is non-zero;compare the coefficient of variation of each interval of the room audio with the corresponding coefficient of variation of the content audio; anddetermine that a person is present if a difference between the coefficients of variation of a predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds a threshold.
  • 22. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to: infer that the person is being exposed to content presented at the consumer electronics device if the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds the threshold.
  • 23. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to: normalize results of the comparisons over the predetermined number of intervals; anddetermine that the person is present based on the normalized results of the comparison.
  • 24. The non-transitory computer readable media of claim 23, further including instructions to cause the processor to, for each interval of the predetermined number of intervals: determine a percentage difference between the coefficient of variation of the room audio and the coefficient of variation of the content audio; andaverage the percentage difference of the interval with the percentage differences of a predetermined number of preceding intervals.
  • 25. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to, for each interval of the room audio and the content audio: determine the standard deviation of the interval; anddetermine the mean of the interval.
  • 26. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to: determine a level of confidence that the person is present based upon an extent to which the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio exceeds the threshold.
  • 27. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to: generate a frequency domain representation of each interval of the room audio and the content audio;compute the coefficient of variation of each interval of the content audio and the room audio as a coefficient of variation of the respective frequency domain representation; anddetermine that the person is present if a difference between the coefficients of variation of the frequency domain representation of the content audio and the corresponding coefficients of variation of the frequency domain representation of the room audio exceeds the threshold.
  • 28. The non-transitory computer readable media of claim 27, further including instructions to cause the processor to: bandpass filter the frequency domain representation of each interval of the content audio and the room audio to remove frequencies outside of a frequency range of human speech.
  • 29. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to: place the consumer electronics device in a reduced power consumption state if the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio is below the threshold.
  • 30. The non-transitory computer readable media of claim 21, further including instructions to cause the processor to: pause presentation of content at the consumer electronics device if the difference between the coefficients of variation of the predetermined number of intervals of the content audio and the corresponding coefficients of variation of the room audio is below the threshold.
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/US2011/049228 8/25/2011 WO 00 6/23/2014
Publishing Document Publishing Date Country Kind
WO2013/028204 2/28/2013 WO A
US Referenced Citations (183)
Number Name Date Kind
4769697 Gilley et al. Sep 1988 A
5515098 Cartes May 1996 A
5644723 Deaton et al. Jul 1997 A
5661516 Carles Aug 1997 A
5911773 Mutsuga et al. Jun 1999 A
6334110 Walter et al. Dec 2001 B1
6338065 Takahashi et al. Jan 2002 B1
6401034 Kaplan et al. Jun 2002 B1
6708335 Ozer et al. Mar 2004 B1
6941197 Murakami et al. Sep 2005 B1
6947881 Murakami et al. Sep 2005 B1
7072942 Maller Jul 2006 B1
7134130 Thomas Nov 2006 B1
7231198 Loughran Jun 2007 B2
7349513 Bhullar et al. Mar 2008 B2
7363151 Nomura et al. Apr 2008 B2
7366683 Altberg et al. Apr 2008 B2
7457828 Wenner et al. Nov 2008 B2
7546619 Anderson et al. Jun 2009 B2
7567800 Uematsu et al. Jul 2009 B2
7636785 Shahine et al. Dec 2009 B2
7657626 Zwicky Feb 2010 B1
7698236 Cox et al. Apr 2010 B2
7730509 Boulet et al. Jun 2010 B2
7761345 Martin et al. Jul 2010 B1
7778987 Hawkins Aug 2010 B2
7831384 Bill Sep 2010 B2
7835859 Bill Nov 2010 B2
7904461 Baluja et al. Mar 2011 B2
7974873 Simmons et al. Jul 2011 B2
8065227 Beckman Nov 2011 B1
8108405 Marvit et al. Jan 2012 B2
8307068 Schuler Nov 2012 B2
8429685 Yarvis et al. Apr 2013 B2
8515828 Wolf et al. Aug 2013 B1
8527336 Kothari et al. Sep 2013 B2
20010049620 Blasko Dec 2001 A1
20010056461 Kampe et al. Dec 2001 A1
20020023002 Staehelin Feb 2002 A1
20020023010 Rittmaster et al. Feb 2002 A1
20020072952 Hamzy et al. Jun 2002 A1
20020078444 Krewin et al. Jun 2002 A1
20020129368 Schlack et al. Sep 2002 A1
20020174025 Hind et al. Nov 2002 A1
20020188609 Fukuta et al. Dec 2002 A1
20030023678 Rugelj Jan 2003 A1
20030037333 Ghashghai et al. Feb 2003 A1
20030134632 Loughran Jul 2003 A1
20030135499 Schirmer et al. Jul 2003 A1
20030191812 Agarwalla et al. Oct 2003 A1
20040003392 Trajkovic et al. Jan 2004 A1
20040039629 Hoffman et al. Feb 2004 A1
20040064443 Taniguchi et al. Apr 2004 A1
20040133923 Watson et al. Jul 2004 A1
20040205810 Matheny et al. Oct 2004 A1
20040240676 Hashimoto et al. Dec 2004 A1
20040254942 Error et al. Dec 2004 A1
20050079343 Raybould et al. Apr 2005 A1
20050097595 Lipsanen et al. May 2005 A1
20050160002 Roetter et al. Jul 2005 A1
20050216345 Altberg et al. Sep 2005 A1
20050283699 Nomura et al. Dec 2005 A1
20060015637 Chung Jan 2006 A1
20060090131 Kumagai Apr 2006 A1
20060090185 Zito et al. Apr 2006 A1
20060106944 Shahine et al. May 2006 A1
20060155608 Bantz et al. Jul 2006 A1
20060184625 Nordvik et al. Aug 2006 A1
20060212350 Ellis et al. Sep 2006 A1
20060241862 Ichihara et al. Oct 2006 A1
20060242315 Nichols Oct 2006 A1
20060253453 Chmaytelli et al. Nov 2006 A1
20060271425 Goodman et al. Nov 2006 A1
20060282304 Bedard et al. Dec 2006 A1
20070010942 Bill Jan 2007 A1
20070073477 Krumm et al. Mar 2007 A1
20070073682 Adar et al. Mar 2007 A1
20070088801 Levkovitz et al. Apr 2007 A1
20070106468 Eichenbaum et al. May 2007 A1
20070121845 Altberg et al. May 2007 A1
20070157262 Ramaswamy et al. Jul 2007 A1
20070220010 Ertugrul Sep 2007 A1
20070220146 Suzuki Sep 2007 A1
20070226320 Hager et al. Sep 2007 A1
20070239527 Nazer et al. Oct 2007 A1
20070239533 Wojcicki et al. Oct 2007 A1
20070255617 Maurone et al. Nov 2007 A1
20070270163 Anupam et al. Nov 2007 A1
20070294773 Hydrie et al. Dec 2007 A1
20070299671 McLachlan Dec 2007 A1
20080021632 Amano Jan 2008 A1
20080027639 Tryon Jan 2008 A1
20080033841 Wanker Feb 2008 A1
20080036591 Ray Feb 2008 A1
20080040370 Bosworth et al. Feb 2008 A1
20080040475 Bosworth et al. Feb 2008 A1
20080052082 Tsai Feb 2008 A1
20080052168 Peters et al. Feb 2008 A1
20080086477 Hawkins Apr 2008 A1
20080097822 Schigel et al. Apr 2008 A1
20080104195 Hawkins et al. May 2008 A1
20080114651 Jain et al. May 2008 A1
20080120105 Srinivasan May 2008 A1
20080120308 Martinez et al. May 2008 A1
20080134043 Georgis et al. Jun 2008 A1
20080154720 Gounares et al. Jun 2008 A1
20080162186 Jones Jul 2008 A1
20080187114 Altberg et al. Aug 2008 A1
20080201472 Bistriceanu et al. Aug 2008 A1
20080215425 Guldimann et al. Sep 2008 A1
20080221987 Sundaresan et al. Sep 2008 A1
20080222283 Ertugrul et al. Sep 2008 A1
20080235088 Weyer et al. Sep 2008 A1
20080275899 Baluja et al. Nov 2008 A1
20080290987 Li Nov 2008 A1
20080306808 Adjali et al. Dec 2008 A1
20090006995 Error et al. Jan 2009 A1
20090049097 Nocifera et al. Feb 2009 A1
20090091426 Barnes et al. Apr 2009 A1
20090106415 Brezina et al. Apr 2009 A1
20090172035 Lessing et al. Jul 2009 A1
20090172721 Lloyd et al. Jul 2009 A1
20090177528 Wu et al. Jul 2009 A1
20090204706 Ertugrul et al. Aug 2009 A1
20090216704 Zheng et al. Aug 2009 A1
20090240569 Ramer et al. Sep 2009 A1
20090307205 Churchill et al. Dec 2009 A1
20090327486 Andrews et al. Dec 2009 A1
20100031335 Handler Feb 2010 A1
20100042317 Tajima et al. Feb 2010 A1
20100049602 Softky Feb 2010 A1
20100063877 Soroca et al. Mar 2010 A1
20100076997 Koike et al. Mar 2010 A1
20100082432 Feng et al. Apr 2010 A1
20100094878 Soroca et al. Apr 2010 A1
20100106603 Dey et al. Apr 2010 A1
20100106673 Parks Apr 2010 A1
20100114864 Agam et al. May 2010 A1
20100161492 Harvey et al. Jun 2010 A1
20100162285 Cohen Jun 2010 A1
20100220679 Abraham et al. Sep 2010 A1
20100250361 Torigoe et al. Sep 2010 A1
20100251304 Donoghue et al. Sep 2010 A1
20100281042 Windes et al. Nov 2010 A1
20100293048 Singolda et al. Nov 2010 A1
20100299225 Aarni et al. Nov 2010 A1
20100316218 Hatakeyama et al. Dec 2010 A1
20100325259 Schuler Dec 2010 A1
20100325655 Perez Dec 2010 A1
20110010433 Wilburn et al. Jan 2011 A1
20110072452 Shimy et al. Mar 2011 A1
20110078720 Blanchard Mar 2011 A1
20110106436 Bill May 2011 A1
20110137975 Das et al. Jun 2011 A1
20110154385 Price et al. Jun 2011 A1
20110161462 Hussain et al. Jun 2011 A1
20110213800 Saros et al. Sep 2011 A1
20110214148 Gossweiler, III et al. Sep 2011 A1
20110246213 Yarvis et al. Oct 2011 A1
20110246214 Yarvis et al. Oct 2011 A1
20110246283 Yarvis et al. Oct 2011 A1
20110246300 Yarvis et al. Oct 2011 A1
20110246469 Yarvis et al. Oct 2011 A1
20110247029 Yarvis et al. Oct 2011 A1
20110247030 Yarvis et al. Oct 2011 A1
20110251788 Yarvis et al. Oct 2011 A1
20110251918 Yarvis et al. Oct 2011 A1
20110251990 Yarvis et al. Oct 2011 A1
20110258203 Wouhaybi et al. Oct 2011 A1
20110264553 Yarvis et al. Oct 2011 A1
20110264613 Yarvis et al. Oct 2011 A1
20110268054 Abraham et al. Nov 2011 A1
20110288907 Harvey Nov 2011 A1
20110295719 Chen et al. Dec 2011 A1
20110321073 Yarvis et al. Dec 2011 A1
20120079521 Garg et al. Mar 2012 A1
20120126868 Machnicki et al. May 2012 A1
20120246000 Yarvis et al. Sep 2012 A1
20120246065 Yarvis et al. Sep 2012 A1
20120246684 Yarvis et al. Sep 2012 A1
20120253920 Yarvis et al. Oct 2012 A1
20120304206 Roberts et al. Nov 2012 A1
20130013545 Agarwal et al. Jan 2013 A1
Foreign Referenced Citations (55)
Number Date Country
101159818 Apr 2008 CN
102223393 Oct 2011 CN
102316364 Jan 2012 CN
102612702 Jul 2012 CN
1 003 018 May 2000 EP
1 217 560 Jun 2002 EP
1 724 992 Nov 2006 EP
1 939 797 Jul 2008 EP
10-301905 Nov 1998 JP
2000-76304 Mar 2000 JP
2000-198412 Jul 2000 JP
2002-366550 Dec 2002 JP
2004-108865 Apr 2004 JP
2004-171343 Jun 2004 JP
2004-258872 Sep 2004 JP
2006-333531 Dec 2006 JP
2006-350813 Dec 2006 JP
2007-179185 Jul 2007 JP
2007-249413 Sep 2007 JP
2008-152564 Jul 2008 JP
2008-171418 Jul 2008 JP
2008-242805 Oct 2008 JP
2008-546075 Dec 2008 JP
2009-076041 Apr 2009 JP
2009-528639 Aug 2009 JP
10-2002-0024645 Apr 2002 KR
10-2006-0122372 Nov 2006 KR
10-2006-0122375 Nov 2006 KR
10-2007-0061601 Jun 2007 KR
10-2009-0014846 Feb 2009 KR
1999007148 Feb 1999 WO
2002032136 Apr 2002 WO
2002032136 Apr 2002 WO
2002071298 Sep 2002 WO
2002082214 Oct 2002 WO
2002082214 Oct 2002 WO
2006130258 Dec 2006 WO
2006130258 Dec 2006 WO
2007101263 Sep 2007 WO
2008064071 May 2008 WO
2008096783 Aug 2008 WO
2009002999 Dec 2008 WO
2009099876 Aug 2009 WO
2011075119 Jun 2011 WO
2011075120 Jun 2011 WO
2011075137 Jun 2011 WO
2011130034 Oct 2011 WO
2011130034 Oct 2011 WO
2011163411 Dec 2011 WO
2011163411 Dec 2011 WO
2012006237 Jan 2012 WO
2012006237 Jan 2012 WO
2012135239 Oct 2012 WO
2012135239 Oct 2012 WO
2013028204 Feb 2013 WO
Non-Patent Literature Citations (106)
Entry
Final Office Action received for U.S. Appl. No. 13/163,968, dated Aug. 16, 2012, 8 pages.
Office Action received for U.S. Appl. No. 13/163,968, dated Nov. 27, 2012, 9 pages.
Final Office Action received for U.S. Appl. No. 13/163,968, dated Apr. 2, 2013, 8 pages.
Office Action received for U.S. Appl. No. 13/163,968, dated Oct. 3, 2013, 8 pages.
Final Office Action received for U.S. Appl. No. 13/163,968, dated Feb. 13, 2014, 8 pages.
Office Action received for U.S. Appl. No. 13/163,984, dated Feb. 28, 2012, 15 pages.
Bindley, Katherine, “Verizon Files Patent for DVR That Watches Viewers, Delivers Targeted Ads Based on What It Sees”, The Huffington Post, Published Dec. 5, 2012.
International Search Report and Written Opinion received for PCT Application No. PCT/US2009/068131, dated Sep. 1, 2010, 9 pages.
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2009/068689, dated Aug. 26, 2010, 11 pages.
Related Publications (1)

  • Publication Number
    20140340992 A1
  • Date
    Nov. 2014
  • Country
    US