SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR HUMAN PRESENCE DETECTION BASED ON AUDIO

Information

  • Patent Application
  • Publication Number
    20140340992
  • Date Filed
    August 25, 2011
  • Date Published
    November 20, 2014
Abstract
Methods, systems and computer program products that allow for the determination of human presence in a room where content is being presented. The audio that is associated with the content may be captured, along with the audio that is being generated collectively by whatever sources may be in the room including the presentation of the content. Features may be extracted from both the content audio and the room audio. These features may then be compared, and the differences may be quantified. If the differences are significant, then human presence may be inferred.
Description
BACKGROUND

For a number of reasons, it would be useful if a home entertainment device or system were able to determine if people were present in the room. If viewers leave the room in order to go to the kitchen, for example, the system could go into a low power consumption state, perhaps by dimming or powering down the display, or by shutting down completely. In this way, power could be conserved. If recorded media were being viewed, the playback could be automatically paused when a viewer leaves the room.


In addition, the next generation of smart televisions may be service platforms offering viewers several services such as banking, on-line shopping, etc. Human presence detection would also be useful for such TV-based services. For example, if a viewer were accessing a bank/brokerage account using the TV, but then leaves the room without closing the service, a human presence detection capability could be used to automatically log off or shut down the service after a predetermined time. In another case, if another person enters the room while the on-line banking service is running, the human presence detection could be used to automatically turn off the banking service for security or privacy reasons.


Detecting human presence would also be useful to advertisers and content providers. Actual viewership could be determined. Content providers could determine the number of people viewing a program. Advertisers could use this information to determine the number of people who are exposed to a given advertisement. Moreover, an advertiser could determine how many people viewed a particular airing of an advertisement, i.e., how many people saw an ad at a particular time and channel, and in the context of a particular program. This in turn could allow the advertiser to perform cost benefit analysis. The exposure of an advertisement could be compared to the cost to produce the advertisement, to determine if the advertisement, as aired at a particular time and channel, is a worthwhile expense.





BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES


FIG. 1 is a block diagram of an exemplary environment in which embodiments of the systems, methods, and computer products described herein may operate.



FIG. 2 is a flow chart illustrating the processing of the systems, methods, and computer products described herein, according to an embodiment.



FIG. 3 is a more detailed flow chart illustrating the overall processing of the systems, methods, and computer products described herein, according to an embodiment.



FIG. 4 is a flow chart illustrating feature extraction of content audio, according to an embodiment.



FIG. 5 is a flow chart illustrating feature extraction of room audio, according to an embodiment.



FIG. 6 is a flow chart illustrating feature extraction of content audio in order to determine the presence of more than one person, according to an embodiment.


FIG. 7 is a flow chart illustrating feature extraction of room audio in order to determine the presence of more than one person, according to an embodiment.



FIG. 8 is a flow chart illustrating the comparison of features of room audio and content audio and the inference of human presence or absence, according to an embodiment.



FIG. 9 is a flow chart illustrating the normalization of data and the inference of human presence or absence based on normalized data, according to an embodiment.



FIG. 10 is a flow chart illustrating the inference of whether more than one person is present in the room, according to an embodiment.



FIG. 11 is a block diagram illustrating the components of a system in which the processing described herein may be implemented, according to an embodiment.



FIG. 12 is a block diagram illustrating the computing context of a firmware embodiment of the feature extraction process, according to an embodiment.



FIG. 13 is a block diagram illustrating the computing context of a software embodiment of the comparison, normalization, and inferences processes, according to an embodiment.





In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.


DETAILED DESCRIPTION

An embodiment is now described with reference to the figures, where like reference numbers may indicate identical or functionally related elements. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will be apparent to a person skilled in the relevant art that this can also be employed in a variety of systems and applications other than what is described herein.


Disclosed herein are methods, systems and computer program products that may allow for the determination of human presence in a room where content is being presented. The audio that is associated with the content may be captured, along with the audio being generated in the room by whatever sources may collectively be present. Features may be extracted from both the content audio and the room audio. These features may then be compared, and the differences may be quantified. If the differences are significant, then human presence may be inferred. Insignificant differences may be used to infer the absence of people.


The overall context of the system is illustrated in FIG. 1, according to an embodiment. Content 110 may be provided to a user's home entertainment or computer system. In the illustrated embodiment, the content 110 may be received at a consumer electronics device such as a set-top box (STB) 120. In alternative embodiments, the content 110 may be received at another consumer electronics device, such as a home computer. The content 110 may be received from a content provider, such as a broadcast network, a server associated with a web site, or other source. Content 110 may be received via a data network, and may be communicated through fiber, wired or wireless media, or some combination thereof. In an alternative embodiment, the content 110 may not be received from an external source, but may be locally stored content that can be played by a user. Further, note that content 110 may include an audio component, shown as content audio 115.


Content 110 may be presented to a user through one or more output devices, such as television (TV) 150. The presentation of content 110 may be controlled through the use of a remote control 160, which may transmit control signals to STB 120. The control signals may be received by a radio frequency (RF) interface I/O 130 at STB 120.


Room audio 170 may also be present, including all sound generated in the room. Sources for the room audio 170 may include ambient noise and sounds made by any users, including but not limited to speech. Room audio 170 may also include sound generated by the consumer electronics in the room, such as the content audio 115 produced by TV 150. The room audio may be captured by a microphone 140. In the illustrated embodiment, microphone 140 may be incorporated in STB 120. In alternative embodiments, the microphone 140 may be incorporated in TV 150 or elsewhere.


The processing of the system described herein is shown generally at FIG. 2 as process 200, according to an embodiment. At 210, room audio, which includes any content audio as heard in the room, and content audio may be received. In an embodiment, one or both may be recorded, or, in the case of content audio, extracted from the video stream as it is being transmitted in the room, in order to facilitate the processing described below. At 220, analogous features of both room audio and content audio may be extracted. At 230, the extracted features of the room audio may be compared with those of the content audio. At 240, the comparison may be used to infer either the presence or absence of people in the room.


Process 200 is illustrated in greater detail in FIG. 3, according to an embodiment. At 310, content audio may be received. At 320, the content audio may be sampled. In an embodiment, the content audio may be sampled at 8 kHz. In alternative embodiments, the content audio may be sampled at another frequency. At 330, the sampled content audio may be divided into intervals for subsequent processing. In an embodiment, the intervals may be 0.5 second long. At 340, features may be extracted from each interval of sampled content audio. The feature extraction process will be described in greater detail below. Generally, a statistical measure, such as the coefficient of variation, may be calculated for each interval and used as the feature for subsequent processing.
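As an illustration of the sampling and interval parameters above, the segmentation step reduces to a few lines. The following is a minimal Python/NumPy sketch under the 8 kHz, 0.5 second embodiment; the names divide_into_intervals, SAMPLE_RATE_HZ, and INTERVAL_SEC are illustrative rather than taken from the specification, and an already-digitized signal is assumed.

```python
import numpy as np

SAMPLE_RATE_HZ = 8000  # sampling frequency named in the embodiment
INTERVAL_SEC = 0.5     # interval length named in the embodiment

def divide_into_intervals(samples: np.ndarray) -> np.ndarray:
    """Split a 1-D array of audio samples into equal-length intervals.

    Trailing samples that do not fill a whole interval are dropped.
    Returns an array of shape (n_intervals, samples_per_interval).
    """
    samples_per_interval = int(SAMPLE_RATE_HZ * INTERVAL_SEC)  # 4000 samples
    n_full = len(samples) // samples_per_interval
    return samples[: n_full * samples_per_interval].reshape(
        n_full, samples_per_interval)
```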


Room audio may be processed in an analogous manner. At 315, room audio may be received. As noted above, room audio may be captured using a microphone incorporated into an STB or other consumer electronics component in the room, and may then be recorded for processing purposes. At 325, the room audio may be sampled. In an embodiment, the room audio may be sampled at 8 kHz or any other frequency. At 335, the sampled room audio may be divided into intervals for subsequent processing. In an embodiment, the intervals may be 0.5 second long. The intervals of sampled room audio may correspond, with respect to time, to respective intervals of sampled content audio. At 345, features may be extracted from each interval of sampled room audio. As in the case of content audio, a coefficient of variation or other statistical measure may be calculated for each interval and used as the feature for subsequent processing.


At 350, the extracted features may be compared. In an embodiment, this includes comparison of the coefficients of variation, as a common statistical measure, for temporally corresponding intervals of sampled room audio and sampled content audio. The comparison process will be described in greater detail below. In an embodiment, this may comprise calculating the difference between the coefficients of variation of the room audio and the content audio, for corresponding intervals. At 360, a normalization or smoothing process may take place. This may comprise calculation of a function of the differences between the coefficients of variation of the room audio and the content audio over a sequence of successive intervals. At 370, an inference may be reached regarding the presence of people in the room, where the inference may be based on the statistic(s) resulting from the normalization performed at 360. In an embodiment, if the coefficients of variation are sufficiently different between temporally corresponding intervals of room and content audio, then the presence of one or more people may be inferred.



FIG. 4 illustrates an embodiment of the process of feature extraction as may be performed for each interval of sampled content audio. At 410, the standard deviation may be determined for the interval. At 420, the mean may be determined. At 430, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero.
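Written out, the computation at 410-430 is compact. A minimal sketch, assuming NumPy and one interval of samples per call; the name coefficient_of_variation is illustrative.

```python
import numpy as np

def coefficient_of_variation(interval: np.ndarray) -> float:
    """Feature for one interval: standard deviation divided by mean.

    Returns zero when the mean is zero, as described above.
    """
    mean = interval.mean()
    if mean == 0:
        return 0.0
    return float(interval.std() / mean)
```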



FIG. 5 illustrates the process of feature extraction as may be performed for each interval of sampled room audio, according to an embodiment. At 510, the standard deviation may be determined for the sampled room audio interval. At 520, the mean may be determined. At 530, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero. At 540, the sampled room audio interval may be discarded. This may serve as a privacy precaution for the one or more persons that may be present in the room.


In an alternative embodiment, additional processing may be performed in conjunction with feature extraction. FIG. 6 illustrates such an embodiment of the process of feature extraction as may be performed for each interval of sampled content audio. At 604, a Fourier transform may be applied to the sampled content audio interval. This may allow the transfer of the signal to the frequency domain. At 607, band pass filtering may be performed, so that common speech frequencies may be retained. In an embodiment, the frequencies 85-1000 Hz may be retained, where speech energy may be most concentrated. At 610, the standard deviation may be determined for the output of 607 for this interval. At 620, the mean may be determined. At 630, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero.
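One plausible reading of 604-630 follows, as a sketch. The specification does not state whether the statistics are taken over magnitudes or complex values of the retained frequency bins; magnitudes are assumed here, and the name speech_band_cv is illustrative.

```python
import numpy as np

SAMPLE_RATE_HZ = 8000
SPEECH_BAND_HZ = (85.0, 1000.0)  # retention range named in the embodiment

def speech_band_cv(interval: np.ndarray) -> float:
    """Coefficient of variation over the speech band of one interval.

    The interval is moved to the frequency domain, bins outside
    85-1000 Hz are discarded, and std/mean is computed over the
    magnitudes of the retained bins (zero if the mean is zero).
    """
    spectrum = np.abs(np.fft.rfft(interval))
    freqs = np.fft.rfftfreq(len(interval), d=1.0 / SAMPLE_RATE_HZ)
    band = spectrum[(freqs >= SPEECH_BAND_HZ[0]) & (freqs <= SPEECH_BAND_HZ[1])]
    mean = band.mean()
    return float(band.std() / mean) if mean != 0 else 0.0
```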



FIG. 7 illustrates such an embodiment of the process of feature extraction as may be performed for each interval of sampled room audio. At 704, a Fourier transform may be applied to the sampled room audio interval. This may allow the transfer of the signal to the frequency domain. At 707, band pass filtering may be performed, so that common speech frequencies may be retained. As in the process of FIG. 6, the frequencies 85-1000 Hz may be retained, where speech energy may be most concentrated. At 710, the standard deviation may be determined for the output of 707 for this interval. At 720, the mean may be determined. At 730, the coefficient of variation may be determined, by dividing the standard deviation by the mean, if the mean is not zero; otherwise the coefficient of variation is set to zero. At 740, the room audio interval may be discarded.


The comparison of coefficients of variation is illustrated in FIG. 8, according to an embodiment. At 810, for each interval the difference between the coefficients of variation for room audio and for content audio may be determined. In an embodiment, this difference may be expressed as a percentage difference between the two coefficients. At 820, this percentage difference may be calculated. Given a series of content audio intervals and corresponding room audio intervals, the output of 820 may be a series of percentage differences. Each percentage difference may correspond to a pair of time-synchronized intervals, i.e., a content audio interval and a corresponding room audio interval.
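A sketch of the comparison at 810-820 follows. The specification does not fix the base of the percentage; taking it relative to the content-audio coefficient is one plausible choice, and the handling of a zero content coefficient is an assumption made for the sketch.

```python
def percentage_differences(room_cvs, content_cvs):
    """Percentage difference between room and content coefficients of
    variation, one value per pair of time-synchronized intervals.
    """
    diffs = []
    for room_cv, content_cv in zip(room_cvs, content_cvs):
        if content_cv == 0:
            # Assumed convention: identical silence is 0% different,
            # anything against a zero baseline is maximally different.
            diffs.append(0.0 if room_cv == 0 else 100.0)
        else:
            diffs.append(abs(room_cv - content_cv) / content_cv * 100.0)
    return diffs
```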


Note that the magnitude of the percentage difference may allow greater or lesser confidence in the human presence inference. If the percentage difference is less than the threshold, then human presence may be unlikely, as discussed above. If the percentage is significantly less than the threshold, e.g., close to zero, then this may suggest that the room audio and the content audio are extremely similar, so that a higher degree of confidence may be placed in the inference that human presence is unlikely. Conversely, if the percentage difference exceeds the threshold, then human presence may be likely. If the percentage difference exceeds the threshold by a significant amount, then this may suggest that the room audio and the content audio are very different, and a higher degree of confidence may be placed in the inference that human presence is likely.


In an embodiment, the data related to a given interval may be normalized by considering this interval in addition to a sequence of immediately preceding intervals. In this way, the significance of outliers may be diminished, while the implicit confidence level of an interval may influence the inferences derived in succeeding intervals. Numerically, the normalization process may use any of several functions. Normalization may use a moving average of data from past intervals, or may use linear or exponential decay functions of this data.



FIG. 9 illustrates normalization that may be performed using a moving average, along with subsequent inference, according to an embodiment. Here, a predetermined number of intervals may be used. In this embodiment, ten intervals may be used: the current interval and the nine preceding it. At 910, the percentage difference between coefficients of variation for room audio and content audio for each of the preceding nine intervals may be considered, along with the percentage difference in the current interval. This series of ten values may then be averaged at 920, yielding an average percentage difference. This average percentage difference may then be compared to a threshold value to determine, at 930, if human presence is to be inferred. If the average does not exceed the threshold (e.g., 10% in an embodiment), then at 940 human presence may be deemed unlikely. Otherwise, human presence may be inferred at 950.
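A sketch of 910-950 under the ten-interval, 10% embodiment; the names and default values are illustrative.

```python
def infer_presence(percent_diffs, window=10, threshold_pct=10.0):
    """Moving-average normalization followed by the threshold test.

    Averages the percentage differences over the current interval and
    up to window - 1 preceding intervals; returns True when the average
    exceeds the threshold, i.e., when presence may be inferred.
    """
    recent = percent_diffs[-window:]  # fewer than `window` early on
    avg = sum(recent) / len(recent)
    return avg > threshold_pct
```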


The processes of FIGS. 6 and 7 may be used to extract features in the context of determining human presence, as shown in FIG. 3 at 340 and 345 respectively. In alternative embodiments these processes may be used in a slightly different manner. Here, the process of FIG. 3 may take place as shown, where feature extraction 340 (for content audio) may take place as shown in FIG. 4, and feature extraction 345 (for room audio) may take place as shown in FIG. 5. If human presence has been inferred at 370, additional processing may be performed to determine if more than one person is present in the room.


This is shown in FIG. 10 according to an embodiment. At 1030, sampled content audio may be divided into intervals. At 1040, features may be extracted for an interval of content audio. In an embodiment, the features of the content audio interval may be extracted according to the process illustrated in FIG. 6 and discussed above. At 1035, sampled room audio may be divided into intervals. At 1045, features may be extracted for an interval of room audio. In an embodiment, the features of the room audio interval may be extracted according to the process illustrated in FIG. 7 and discussed above.


At 1050, the extracted features of a content audio interval and a room audio interval may be compared. This comparison may be performed in the same manner as shown in FIG. 8. At 1060, normalization and inference may be performed in the same manner as shown in FIG. 9. In this case, the inference may be made as to whether the presence of more than one person is likely or unlikely.
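Composed end to end, the two-stage flow described above might look as follows. This sketch reuses the illustrative helpers from the earlier snippets (coefficient_of_variation, speech_band_cv, percentage_differences, infer_presence); how the stages compose is an assumption consistent with FIGS. 3, 6, 7, and 10, not code from the specification.

```python
def detect_occupancy(room_intervals, content_intervals):
    """Stage 1: time-domain features decide whether anyone is present.
    Stage 2: speech-band features decide whether more than one person
    is likely present. Inputs are lists of interval sample arrays.
    """
    base_diffs = percentage_differences(
        [coefficient_of_variation(i) for i in room_intervals],
        [coefficient_of_variation(i) for i in content_intervals])
    if not infer_presence(base_diffs):
        return "absent"
    band_diffs = percentage_differences(
        [speech_band_cv(i) for i in room_intervals],
        [speech_band_cv(i) for i in content_intervals])
    return "multiple people" if infer_presence(band_diffs) else "one person"
```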


As noted above, the systems, methods and computer program products described herein may be implemented in the context of a home entertainment system that may include an STB and/or a smart television, or may be implemented in a personal computer. Moreover, the systems, methods and computer program products described herein may also be implemented in the context of a laptop computer, ultra-laptop or netbook computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth.


One or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term software, as used herein, refers to a computer program product including a computer readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein. The computer readable medium may be transitory or non-transitory. An example of a transitory computer readable medium may be a digital signal transmitted over a radio frequency or over an electrical conductor, through a local or wide area network, or through a network such as the Internet. An example of a non-transitory computer readable medium may be a compact disk, a flash memory, random access memory (RAM), read-only memory (ROM), or other data storage device.


An embodiment of a system that may perform the processing described herein is shown in FIG. 11. Here, the feature extraction may be embodied in firmware in a programmable integrated circuit (PIC). The comparison and normalization processing may be embodied in software.


A microphone 1105 may capture room audio 1107. Content audio 1117 may be received and routed to PIC 1110. The sampling of the room and content audio and the decomposition of these signals into intervals may be performed in PIC 1110 or elsewhere. After sampling and decomposing into intervals, the content and room audio may be processed by the feature extraction firmware 1115 in PIC 1110. As discussed above, the feature extraction process may produce coefficients of variation for each interval, for both sampled room audio and sampled content audio. In the illustrated embodiment, feature extraction may take place in the PIC 1110 through the execution of feature extraction firmware 1115. Alternatively, the feature extraction functionality may be implemented in an execution engine of system on a chip (SOC) 1120.


If feature extraction is performed at PIC 1110, the coefficients of variation may be sent to SOC 1120, and then made accessible to operating system (OS) 1130. Comparison of coefficients from corresponding room audio and content audio intervals may be performed by logic 1160 in presence middleware 1140. Normalization may be performed by normalization logic 1150, which may also be part of presence middleware 1140. An inference regarding human presence may then be made available to a presence-enabled application 1170. Such an application may, for example, put system 1100 into a low power state if it is inferred that no one is present. Another example of a presence-enabled application 1170 may be a program that collects presence inferences from system 1100 and others like it in other households, to determine viewership of a television program or advertisement.


As noted above with respect to FIGS. 6, 7 and 10, embodiments may also infer the presence of more than one person. In this case, if human presence is inferred, feature extraction may be repeated using Fourier transformation and bandpass filtering. In an embodiment, this functionality may be implemented in feature extraction firmware 1115. Comparison and normalization may then be performed on the generated coefficients of variation. This processing may be performed by comparison logic 1160 and normalization logic 1150 in middleware 1140.


Items 1105, 1110, 1120, and 1130 may all be located in one or more components in a user's home entertainment system or computer system, in an embodiment. They may be located in an STB, digital video recorder, or television, for example. Presence middleware 1140 and presence-enabled application 1170 may also be located in one or more components of the user's home entertainment system or computer system. In alternative embodiments, one or both of presence middleware 1140 and presence-enabled application 1170 may be located elsewhere, such as the facility of a content provider, for example.


Note that in some embodiments, the audio captured by the microphone 1105 may be muted. A user may choose to do this via a button on remote control 1180 or the home entertainment system. Such a mute function does not interfere with the mute on remote controls that mutes the audio coming out of the TV. A “mute” command for the microphone would then be sent to audio selection logic in PIC 1110. As a result of such a command, audio from microphone 1105 would not be received by OS 1130. Nonetheless, room audio 1107 may still be received at PIC 1110, where feature extraction may be performed. Such a capability may be enabled by the presence of the feature extraction firmware 1115 in the PIC 1110. The statistical data, i.e., the coefficients of variation, may then be made available to the OS 1130, even though the room audio itself has been muted. The nature of the coefficients of variation may be such that the coefficients may not be usable for purposes of recreating room audio 1107.



FIG. 12 illustrates an embodiment in which the feature extraction functionality may be embodied in firmware. As discussed above, such functionality may be incorporated as part of a PIC. System 1200 may include a processor 1220 and may further include a firmware device 1210. Device 1210 may include one or more computer readable media that may store computer program logic 1240. Firmware device 1210 may be implemented in a read-only memory (ROM) or other data storage component, for example, as would be understood by a person of ordinary skill in the art. Processor 1220 and device 1210 may be in communication using any of several technologies known to one of ordinary skill in the art, such as a bus. Computer program logic 1240 contained in device 1210 may be read and executed by processor 1220. One or more ports and/or I/O components, shown collectively as I/O 1230, may also be connected to processor 1220 and device 1210.


Computer program logic 1240 may include feature extraction code 1250. This code may be responsible for determining the standard deviation and mean for intervals of sampled room audio and content audio, as discussed above. Feature extraction code 1250 may also be responsible for implementing Fourier transformation and bandpass filtering as discussed above with respect to FIGS. 6 and 7. Feature extraction code 1250 may also be responsible for calculation of coefficients of variation for each interval of sampled room and content audio.


A software embodiment of the comparison and normalization functionality is illustrated in FIG. 13. The illustrated system 1300 may include a processor 1320 and may further include a body of memory 1310. Memory 1310 may include one or more computer readable media that may store computer program logic 1340. Memory 1310 may be implemented as a hard disk and drive, removable media such as a compact disk, a read-only memory (ROM) or random access memory (RAM) device, for example, or some combination thereof. Processor 1320 and memory 1310 may be in communication using any of several technologies known to one of ordinary skill in the art, such as a bus. Computer program logic 1340 contained in memory 1310 may be read and executed by processor 1320. One or more I/O ports and/or I/O devices, shown collectively as I/O 1330, may also be connected to processor 1320 and memory 1310.


Computer program logic 1340 may include comparison code 1350. This module may be responsible for comparing coefficients of variation of corresponding intervals of room audio and content audio, and generating a quantitative indication of the difference, e.g., a percentage difference, as discussed above. Computer program logic 1340 may include normalization code 1360. This module may be responsible for performing normalization of data generated by comparison code 1350 using a moving average or other process, as noted above. Computer program logic 1340 may include inference code 1370. This module may be responsible for generating an inference regarding the presence or absence of people, given the results of normalization code 1360.


The systems, methods, and computer program products described above may have a number of applications. If a viewer leaves a room, for example, the absence of people could be detected as described above, and the entertainment or computer system could go into a low power consumption state, perhaps by dimming or powering down the display, or by shutting down completely. In this way, power could be conserved. If recorded media were being viewed, the playback could be automatically paused when a viewer leaves the room.


In addition, service platforms may offer viewers services such as banking, on-line shopping, etc. Human presence detection as described above would be useful for such TV-based services. For example, if a viewer were accessing a bank/brokerage account using the TV, but then leaves the room without closing the service, a human presence detection capability could be used to automatically log off or shut down the service after a predetermined time. In another case, if another person enters the room while the on-line banking service is running, the human presence detection could be used to automatically turn off the banking service for security or privacy reasons.


Detecting human presence would also be useful to advertisers and content providers. Actual viewership could be determined. Content providers could determine the number of people viewing a program. Advertisers could use this information to determine the number of people who are exposed to a given advertisement. Moreover, an advertiser could determine how many people viewed a particular airing of an advertisement, i.e., how many people saw an ad at a particular time and channel, and in the context of a particular program. This in turn could allow the advertiser to perform cost benefit analysis. The exposure of an advertisement could be compared to the cost to produce the advertisement, to determine if the advertisement, as aired at a particular time and channel, is a worthwhile expense.


Methods and systems are disclosed herein with the aid of functional building blocks illustrating the functions, features, and relationships thereof. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.


While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the methods and systems disclosed herein. Thus, the breadth and scope of the claims should not be limited by any of the exemplary embodiments disclosed herein.

Claims
  • 1. A method, comprising: receiving room audio and content audio at a consumer electronics device; dividing each of the room audio and content audio into corresponding equal length intervals; extracting features from each of the room audio and the content audio; comparing the features of the room audio with the features of the content audio; and if there is a significant difference between the features of the content audio and the features of the room audio, inferring the presence of one or more people in a room in which the consumer electronics device is located.
  • 2. The method of claim 1, wherein: the content audio comprises audio from the content, and said inferring comprises inferring that the one or more people in the room were exposed to the content.
  • 3. The method of claim 1, wherein the room audio and content audio are received at the consumer electronics device while content is being presented through the device.
  • 4. The method of claim 1, wherein said extracting comprises, for each interval of the room audio and each interval of the content audio: determining a standard deviation; determining a mean; and calculating a coefficient of variation equal to the standard deviation divided by the mean, if the mean is not zero, otherwise setting the coefficient of variation to zero.
  • 5. The method of claim 4, where for each interval of the room audio, said extracting is followed by discarding of the room audio.
  • 6. The method of claim 4, further comprising: transferring each of the room audio and content audio to a frequency domain by applying a Fourier transform; and applying a band pass filter to output of the Fourier transform, performed prior to said extraction.
  • 7. The method of claim 6, wherein said application of the band pass filter comprises filtering out frequencies except for a frequency range in which human voice frequencies are concentrated.
  • 8. The method of claim 4, wherein said comparing comprises: determining the percentage difference between the coefficients of variation of corresponding intervals of the room audio and the content audio, thereby producing a series of percentage differences, one for each pair comprising a room audio interval and a corresponding content audio interval.
  • 9. The method of claim 8, further comprising: normalizing the percentage differences by averaging the percentage differences for a predetermined number of preceding intervals, producing a normalized percentage difference, performed after said comparison.
  • 10. The method of claim 9, wherein said inferring comprises: if the normalized percentage difference exceeds a threshold, inferring that one or more people are present in the room; and if the normalized percentage difference is less than the threshold, inferring that no one is present in the room.
  • 11. A system, comprising: a programmable integrated circuit (PIC) that comprises a first processor; and PIC memory circuitry in communication with said first processor, wherein said PIC memory circuitry stores a first plurality of processing instructions configured to direct said first processor to extract features from each of sampled room audio and sampled content audio, wherein the sampled room audio and sampled content audio have each been divided into corresponding equal length intervals; a second processor; and a memory device in communication with said second processor, wherein said memory stores a second plurality of processing instructions configured to direct said second processor to compare the features of the room audio with the features of the content audio; and if there is a significant difference between the features of the content audio and the features of the room audio, infer the presence of one or more people in a room in which the system is located.
  • 12. The system of claim 11, wherein: the content audio comprises audio from content, and the inferring comprises inferring that the one or more people in the room were exposed to the content.
  • 13. The system of claim 11, wherein said first plurality of processing instructions is further configured to direct said first processor to mute said sampled room audio in response to a user command and prevent said sampled room audio from reaching an operating system.
  • 14. The system of claim 11, wherein said first plurality of processing instructions configured to direct said first processor to extract features comprises processing instructions configured to direct said first processor to: for each interval of the room audio and each interval of the content audio, determine a standard deviation; determine a mean; and calculate a coefficient of variation equal to the standard deviation divided by the mean if the mean is not zero, otherwise set the coefficient of variation to zero.
  • 15. The system of claim 14, wherein said first plurality of processing instructions further comprises processing instructions configured to direct said first processor to discard the sampled room audio after extracting features from the sampled room audio.
  • 16. The system of claim 14, wherein said first plurality of processing instructions further comprises processing instructions configured to direct said first processor to perform the following, prior to directing said first processor to extract the features: transfer each of the room audio and content audio to a frequency domain by applying a Fourier transform; and apply a bandpass filter to the output of the Fourier transform.
  • 17. The system of claim 16, wherein said application of the band pass filter comprises filtering out frequencies except for a frequency range in which human voice frequencies are concentrated.
  • 18. The system of claim 14, wherein said second plurality of processing instructions configured to direct said second processor to compare features comprises instructions configured to direct said second processor to: determine the percentage difference between the coefficients of variation of corresponding intervals of the room audio and the content audio, thereby producing a series of percentage differences, one for each pair comprising a room audio interval and a corresponding content audio interval.
  • 19. The system of claim 18, wherein said second plurality of processing instructions further comprises instructions configured to direct said second processor to: normalize the percentage differences by averaging the percentage differences for a predetermined number of preceding intervals, producing a normalized percentage difference.
  • 20. The system of claim 19, wherein said plurality of processing instructions configured to direct said second processor to infer presence comprises processing instructions configured to direct said second processor to: infer that one or more people are present in the room if the normalized percentage difference exceeds a threshold; and infer that no one is present in the room if the normalized percentage difference is less than the threshold.
  • 21. A computer program product including non-transitory computer readable media having computer program logic stored therein, the computer program logic comprising: logic to cause a first processor to extract features from each of sampled room audio and sampled content audio, wherein the sampled room audio and sampled content audio have each been divided into corresponding equal length intervals; logic to cause a second processor to compare the features of the room audio with the features of the content audio; and logic to cause the second processor to infer the presence of one or more people if there is a significant difference between the features of the content audio and the features of the room audio.
  • 22. The computer program product of claim 21, wherein: the content audio comprises audio from a piece of content, and said logic to cause the second processor to infer the presence of one or more people comprises logic to cause the second processor to infer that the one or more people in the room were exposed to the content.
  • 23. The computer program product of claim 21, wherein the room audio and content audio have been sampled at 8 kHz.
  • 24. The computer program product of claim 21, wherein said logic to cause the first processor to extract features comprises logic to cause the first processor to: for each interval of the room audio and each interval of the content audio, determine a standard deviation; determine a mean; and calculate a coefficient of variation equal to the standard deviation divided by the mean, if the mean is not zero, otherwise set the coefficient of variation to zero.
  • 25. The computer program product of claim 24, further comprising logic to cause the first processor to discard the sampled room audio after extracting features from the sampled room audio.
  • 26. The computer program product of claim 24, further comprising: logic to cause the first processor to transfer each of the room audio and content audio to the frequency domain by applying a Fourier transform; and logic to cause the first processor to apply a bandpass filter to output of the Fourier transform, wherein the application of the Fourier transform and the application of the bandpass filter are performed prior to the extracting of the features.
  • 27. The computer program product of claim 26, wherein said logic to cause the first processor to apply a band pass filter comprises: logic to cause the first processor to filter out frequencies except for a frequency range in which human voice frequencies are concentrated.
  • 28. The computer program product of claim 24, wherein said logic to cause the second processor to compare features comprises: logic to cause the second processor to determine the percentage difference between the coefficients of variation of corresponding intervals of the room audio and the content audio, thereby producing a series of percentage differences, one for each pair comprising a room audio interval and a corresponding content audio interval.
  • 29. The computer program product of claim 28, further comprising: logic to cause the second processor to normalize the percentage differences by averaging the percentage differences for a predetermined number of preceding intervals, producing a normalized percentage difference.
  • 30. The computer program product of claim 29, wherein said logic to cause the second processor to infer the presence of one or more people comprises: logic to cause the second processor to infer that one or more people are present in the room if the normalized percentage difference exceeds a threshold; and logic to cause the second processor to infer that no one is present in the room if the normalized percentage difference is less than the threshold.
PCT Information
  • Filing Document: PCT/US2011/049228
  • Filing Date: 8/25/2011
  • Country: WO
  • Kind: 00
  • 371(c) Date: 6/23/2014