This disclosure relates generally to audience tracking, and, more particularly, to systems, apparatus, and related methods to estimate audience exposure based on engagement level.
Media providers as well as advertising companies, broadcasting networks, etc., are interested in viewing behavior of audience members. Media usage and/or exposure habits of audience members in a household can be obtained using a metering device associated with a media presentation device.
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc., are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
As used herein, “processor circuitry” is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmed with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmed microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs). For example, an XPU may be implemented by a heterogeneous computing system including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign computing task(s) to whichever one(s) of the multiple types of the processing circuitry is/are best suited to execute the computing task(s).
There is a desire to monitor behavior of users of a media presentation device, such as a television, to verify user attention during operation of the media presentation device and, thus, exposure to (e.g., viewing of) content presented by the media presentation device. Audience measurement entities can perform television audience measurements using a television audience measurement (TAM) meter to track the content a user (e.g., audience member) chooses to access (e.g., television programs a user chooses to watch) and corresponding audience demographics associated with the presented content. Such information can be used to, for example, schedule commercials to optimize television content exposure to a target audience.
In some known examples, users (e.g., audience members) of a media presentation device such as a television are prompted to enter viewing panel information (e.g., user identification information) at predefined intervals during the presentation of media content using a TAM remote control. However, upon registering and/or entry of viewing panel information, there may be no additional measure of attention that the user is giving to the content presented on, for example, a screen of the media presentation device (e.g., a television). Yet the user may engage in other activities during presentation of the media content, causing the user's attention level with respect to the media content to vary over the duration of the content viewing period. Put another way, despite the presence of the user relative to the media presentation device while content is presented on the device, the user may be distracted. For example, within the media content viewing period, a user may engage in activities using other user device(s), such as typing a text message on his or her smartphone while the content is presented via a television. In some examples, the user may walk away from the media presentation device, turn his or her head away from the media presentation device, etc. As such, viewer registration alone may not accurately reflect the user's attention to the media content over the duration for which the content is presented.
Example systems, methods, apparatus, and articles of manufacture disclosed herein monitor an audience member's attention relative to content presented on a media presentation device by accounting for user activity associated with user movement and/or user device usage (e.g., a smartphone, an electronic tablet, a wearable device such as a smartwatch, etc.) identified (e.g., detected, predicted) during presentation of the content. In examples disclosed herein, attention indicators can be synchronized with viewing timelines recorded by the TAM meter. Examples disclosed herein identify changes in a panelist's engagement with content over time. In some examples, different distraction factors are identified based on varying attention indicators associated with user activities. For example, television remote control usage by the user can be indicative of attention because the user is likely viewing the television screen while using the remote to select content for viewing. Conversely, user mobile phone usage can indicate that the user is distracted (i.e., not paying attention to the content presented on the screen). Examples disclosed herein assign or classify user activities (e.g., user movement, remote control usage, smartphone usage) based on distraction factors and/or attention factors to obtain a relative measure indicating an overall attention level associated with a given media viewing session (e.g., television session, etc.). In some examples, user activity can be determined (e.g., detected, identified, predicted) based on data captured from one or more devices such as remote controls, motion sensors, mobile phones, electronic tablets, biometric wearable devices, etc. during the given media viewing event (e.g., television session). In examples disclosed herein, data for each user action can be filtered, processed, and/or weighted to estimate a level of impact to a panelist's attention and/or distraction over time. As such, examples disclosed herein provide a measurement indicative of a user's attention level during a media viewing event (e.g., television session) based on detection and/or prediction of varying user activities.
Although examples disclosed herein are discussed in connection with viewing media, disclosed examples apply to monitoring media exposure more generally. Thus, although examples disclosed herein refer to, for instance, a viewing area, examples disclosed herein more generally apply to a media exposure area. Examples disclosed herein apply to, for instance, television monitoring, audio/radio monitoring, and/or other types of media exposure.
In the illustrated example, the media presentation device 119 is located in an example primary media exposure area or a primary viewing area 106 (e.g., a living room) of the house 102. For example, as illustrated in
In some examples, the user(s) 110, 112 have access to user device(s) 114 (e.g., mobile phone, smartwatch, tablet, etc.) other than the media presentation device 119 while in the primary viewing area 106. In some examples, the user(s) 110, 112 interact with the user device(s) 114 at one or more instances while content is presented by the media presentation device 119. The example user device(s) 114 can be stationary or portable computers, handheld computing devices, smart phones, and/or any other type of device that may be connected to a network (e.g., the Internet). In the example of
The example system 100 of
In the example of
When the media presentation device 119 is presenting content, one or more of the users 110, 112 may enter, move about, or exit the primary viewing area 106. Thus, respective ones of the users 110, 112 may be exposed to the content presented via the media presentation device 119 at different times and/or for varying durations of time. As such, the example system 100 of
In some examples, during presentation of content via the media presentation device 119, the user(s) 110, 112 provide input(s) via the remote control device 108. Input(s) from the remote control device 108 can indicate whether the user is changing a channel presented via the screen of the media presentation device 119 and/or whether the user is adjusting other setting(s) associated with the media presentation device 119 (e.g., increasing volume, decreasing volume, initiating a recording, etc.). While in the example of
In some examples, during presentation of content via the media presentation device 119, the user interacts with (i.e., provides input(s) at) one or more of the user device(s) 114 (e.g., laptop 115, smartwatch 116, electronic tablet 117, smartphone 118). In the example of
As disclosed herein, in some examples, the user device(s) 114 include a wearable device (e.g., the smartwatch 116). In addition to or alternatively to information about a screen state of the wearable device, the wearable device can generate motion data, biometric data, and/or user-based physiological data. As disclosed herein, the user attention analyzing circuitry 124 can use the data from the wearable device 116 to determine user activity information (e.g., level of user motion, whether the user is sleeping, etc.). In some examples, this data can be used in combination with other input data source(s) (e.g., remote control device 108, image sensors 121, etc.) to assess user activity. In some examples, in addition to or as an alternative to capturing user activity from devices such as sensors (e.g., motion sensors), user activity data can be captured using different network protocols and/or application programming interfaces (APIs) to obtain an indication of user activity status from the user devices 114 (e.g., the smartphone 118, the electronic tablet 117) connected to the network 126. For example, the status of a user device 114 (e.g., smartphone, tablet, media player) can be obtained via a communicative coupling between the smart device 114 and the user attention analyzing circuitry 124 through a network protocol, where the user attention analyzing circuitry 124 makes queries regarding application status, device status, etc. and the device responds (e.g., via the screen status identifier circuitry 123, another application, etc.). For example, user activity can be identified by remote queries to the smartphone 118 to retrieve data from an application or other software that provides information about other smart devices in, for instance, the user's home that are controlled by the smartphone (e.g., a smart kettle, a smart oven, etc.). For instance, an indication that the oven is in use may indicate a level of distraction of the user. In other examples, a smart home device (e.g., smart kettle, smart oven, etc.) can directly respond to queries from the user attention analyzing circuitry regarding operational status via the network 126. In some examples, user activity can be obtained by a remote query to the smart user device 114 (e.g., smartphone, tablet, etc.) to detect which application is running in the foreground of the user device 114 and the application's state. For example, a mobile game that is running in the foreground on the smartphone 118 implies a higher distraction factor as compared to usage of a web browser to browse information about a given movie being displayed on the media presentation device 119. In some examples, the user activity can be captured and/or assessed based on monitoring of network traffic to predict and/or estimate usage of the target user device(s).
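By way of illustration only, the following Python sketch shows one way the user attention analyzing circuitry 124 might query a user device 114 over the network 126 for its screen and foreground-application status. The HTTP endpoint, port, and response fields are hypothetical placeholders; the actual protocol and payload exposed by the screen status identifier circuitry 123 (or a companion application) may differ.

import json
import urllib.request

# Hypothetical status endpoint exposed by the screen status identifier
# circuitry (or a companion app) running on a user device such as the
# smartphone 118. The path and response fields are assumptions.
STATUS_URL = "http://{address}:8080/device-status"

def query_device_status(address, timeout_s=2.0):
    """Poll a user device for screen state and foreground application."""
    try:
        with urllib.request.urlopen(STATUS_URL.format(address=address),
                                    timeout=timeout_s) as response:
            payload = json.loads(response.read().decode("utf-8"))
    except OSError:
        # Device unreachable or timed out; treat as "no activity information".
        return None
    return {
        "screen_on": payload.get("screen_on", False),
        "foreground_app": payload.get("foreground_app"),  # e.g., "mobile_game"
        "app_state": payload.get("app_state"),            # e.g., "active"
    }

# Example: a foreground mobile game suggests a higher distraction factor
# than a web browser showing information about the presented movie.
status = query_device_status("192.168.1.25")
if status and status["screen_on"] and status["foreground_app"] == "mobile_game":
    print("User likely distracted by the smartphone")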
In some examples, a metering device 120 includes one or more image sensors 121 (e.g., camera(s)). The image sensors 121 generate image data of at least a portion of the primary viewing area 106. In some examples, user-based movement can be detected based on the signals output by the motion sensor(s) 122. In some examples, the image sensor(s) 121 can be used to generate image data of the primary viewing area 106 in response to detection of motion by the motion sensor(s) 122. For privacy purposes, the image sensor(s) 121 and/or the metering device 120 can include flash indicator(s) to alert individuals (e.g., the users 110, 112) in the primary viewing area 106 that the image sensor(s) 121 are capturing images.
In the example of
The example user attention analyzing circuitry 124 of
In some examples, the user attention analyzing circuitry 124 analyzes signals output by the motion sensor(s) 122 and/or the wearable device(s) 116 to determine (e.g., predict, recognize) if any of the user(s) 110, 112 have entered, left, or substantially moved about the primary viewing area 106. In some examples, in response to detection of a change in movement relative to the primary viewing area 106, the user attention analyzing circuitry 124 can generate a request to verify which user(s) 110, 112 are present in the primary viewing area 106. In some examples, the user attention analyzing circuitry 124 causes one or more devices (e.g., the media presentation device 119) to output the request or prompt to verify which user(s) 110, 112 are present in the primary viewing area 106.
The user attention analyzing circuitry 124 analyzes user input(s) received via the remote control device 108, the user device(s) 114, the screen status identifier circuitry 123, and/or the image data generated by the image sensor(s) 121 to determine activities of the user(s) 110, 112 in the primary viewing area 106 at a particular time. For example, the user attention analyzing circuitry 124 can analyze image data generated by the image sensor(s) 121. In some examples, user position and/or posture can be identified to determine whether the user 110, 112 is looking forward relative to the media presentation device 119 (and, thus, likely looking at the screen of the media presentation device 119 of
In some examples, the user attention analyzing circuitry 124 analyzes input(s) from the remote control device 108 to detect user-based inputs to change a channel, pause the content shown on the screen of the media presentation device 119, etc. In some examples, the user attention analyzing circuitry 124 analyzes biometric data from the smartwatch 116 or other wearable device to detect whether, for instance, the user 110, 112 is sleeping based on heart rate data collected by the wearable device. In some examples, the user attention analyzing circuitry 124 can analyze input(s) from the screen status identifier circuitry 123 to detect whether the screen of the smartphone 118 is in an “off” state or an “on” state, track the duration of time the screen is in the “on” state, etc. For example, the user attention analyzing circuitry 124 can determine that the duration of the “on” state of the smartphone 118 surpasses a threshold based on data from the screen status identifier circuitry 123. In this example, the user attention analyzing circuitry 124 can determine that the user is likely using the smartphone 118 and therefore is distracted from the content shown on the screen of the media presentation device 119.
Physical activity (e.g., movement) and/or usage of other electronic devices (e.g., user device(s) 114) during a media session (e.g., content presented using the media presentation device 119) can represent a degree to which a panelist (e.g., user 110, 112) is paying attention to the media content. Different activities can have different impacts to a panelist's attention. For example, a given activity can indicate distraction (e.g., writing a text message on the smartphone 118 while watching television) or attention (e.g., using the remote control device 108 to browse content on the television). In some examples, the user activities can vary in intensity, duration, and/or frequency, etc., which affects the level of impact on attention and/or distraction level(s) during the period of time for which the media is presented.
In some examples, the user attention analyzing circuitry 124 correlates the user activity information and/or attention levels determined therefrom with signatures and/or watermarks, etc. of the particular media content presented via the media presentation device 119 and captured by the metering device 120 to identify user activity and/or attention levels during presentation of particular media content. In some examples, the user attention analyzing circuitry 124 assigns timestamps to the signals output by the motion sensor(s) 122 and/or data captured by the user device(s) 114, the wearable device(s) 116, the image sensor(s) 121, the screen status identifier circuitry 123, etc. The user attention analyzing circuitry 124 can correlate the timestamps indicative of user activity with the presentation of the media content to determine user-based attention level(s) during operation of the media presentation device 119. In particular, the user attention analyzing circuitry 124 can provide outputs identifying which content the user may have missed or not fully paid attention to because he or she was distracted when the content was presented. The user attention information can be stored in a database for access by, for instance, a media broadcaster. The data generated by the user attention analyzing circuitry 124 in connection with the data generated by the metering device 120 can be used to determine the media presented to the member(s) 110, 112 of the household 104, which media each individual user 110, 112 was exposed to, a duration of time for which the user(s) 110, 112 were exposed, the attention and/or distraction level(s) of the user(s) 110, 112 during the media presentation, etc.
In the illustrated example, the metering device 120 includes example processor circuitry 201, example memory 202, an example wireless transceiver 204, and an example power source 206. The power source 206, which can be, for instance, a battery and/or transformer and AC/DC converter, provides power to the processor circuitry 201 and/or other components of the metering device 120 communicatively coupled via an example bus 210.
The example memory 202 of
The example metering device 120 of
The wireless transceiver 204 of the example metering device 120 can communicate with the remote control device 108 (
In the example of
In the example of
In the example of
For example, the user device interface circuitry 301 can receive input(s) from the screen status identifier circuitry 123 of one of the user devices 114 indicating that a state of the screen of that user device 114 has changed from an “off” state to an “on” state. As another example, the user device interface circuitry 301 can receive input(s) from the remote control device 108 in response to the user 110, 112 changing channels. In some examples, the user device interface circuitry 301 receives input(s) from the motion sensor(s) 122 in response to detection of movement within a range of the motion sensor(s) 122. In some examples, the user device interface circuitry 301 receives the input(s) in substantially real-time (e.g., near the time the data is collected). In some examples, the user device interface circuitry 301 receives the input(s) at a later time (e.g., periodically and/or aperiodically based on one or more settings, but sometime after the activity that caused the sensor data to be generated has occurred, such as seconds after a user moves around the viewing area 106 or after a change in the operative state of the screen prompts the screen status identifier circuitry 123 to transmit data to the user attention analyzing circuitry 124). In some examples, the user attention analyzing circuitry 124 is instantiated by processor circuitry executing user attention analyzing circuitry 124 instructions and/or configured to perform operations such as those represented by the flowchart of
The user activity identifier circuitry 302 identifies (e.g., predicts) the occurrence of user activity (UA) during presentation of content via the media presentation device 119 based on the input(s) received via the user device interface circuitry 301. For example, overall attention of a panelist can be monitored based on data indicative of user engagement with one or more user devices (e.g., remote control device(s) 108, motion sensor(s) 122, user device(s) 114 including wearable device(s) 116, electronic tablet(s) 117, smartphone(s) 118, etc.) during a media viewing session (e.g., a television session). In some examples, the user activity identifier circuitry 302 identifies user actions performed over time (e.g., a first user action, a second user action, etc.). In some examples, the user activity identifier circuitry 302 determines (e.g., predicts) a type of user activity performed (e.g., user movement, remote control keypress, phone screen activity, etc.), as disclosed in connection with
For example, the user activity identifier circuitry 302 can determine (e.g., predict, recognize) user activity based on one or more user activity identification rules associated with usage of the user device(s) 114. For example, if the screen status identifier circuitry 123 reports that the screen of a user device 114 such as the smartphone 118 has been in an “on” state for a duration of five seconds, there is a high probability that the user is actively using the user device 114 (e.g., the smartphone 118). The user activity identification rule(s) can indicate that when the screen status identifier circuitry 123 reports that a screen of a user device 114 is in an “on” state for a threshold duration of time, the user activity identifier circuitry 302 should determine that the user is actively using the user device (e.g., the smartphone 118). The user activity identification rule(s) can be defined by user input(s) and stored in the data store 314. In some examples, the user activity identifier circuitry 302 determines user activity based on screenshots captured using the screen status identifier circuitry 123. For example, the user activity identifier circuitry 302 can determine user activity based on the type(s) of active application(s) that can be used to indicate user attention and/or distraction (e.g., text messaging application, etc.). For example, the user activity identifier circuitry 302 can determine user attention based on queries over the network 126 to monitor the status of a user device (e.g., smart oven, etc.) and/or monitor the status of a custom application that operates on the smartphone (e.g., to monitor operating system events, etc.). In some examples, the user activity identification rule(s) are generated based on machine-learning training.
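A minimal Python sketch of the screen-state rule described above is shown below. The five-second threshold comes from the example in the preceding paragraph; the event structure and field names are otherwise assumptions.

from dataclasses import dataclass

# Threshold from the example above: a screen that stays "on" for at least
# five seconds suggests the user is actively using the device.
SCREEN_ON_THRESHOLD_S = 5.0

@dataclass
class ScreenEvent:
    device_id: str
    state: str        # "on" or "off"
    timestamp_s: float

def detect_device_usage(events):
    """Apply a user activity identification rule to screen-state events.

    Returns a list of (device_id, start_s, end_s) intervals during which
    the user is inferred to be actively using the device.
    """
    usage = []
    on_since = {}
    for event in sorted(events, key=lambda e: e.timestamp_s):
        if event.state == "on":
            on_since.setdefault(event.device_id, event.timestamp_s)
        elif event.state == "off" and event.device_id in on_since:
            start = on_since.pop(event.device_id)
            if event.timestamp_s - start >= SCREEN_ON_THRESHOLD_S:
                usage.append((event.device_id, start, event.timestamp_s))
    return usage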
In some examples, the user activity identifier circuitry 302 can determine occurrence(s) of user activity based on input(s) from wearable device(s) (e.g., the smartwatch 116). For example, the user activity identifier circuitry 302 can receive accelerometer data from the wearable device 116 indicative of a particular rate of movement. Based on the user activity identification rule(s) and the accelerometer data, the user activity identifier circuitry 302 can predict whether the user 110, 112 is sitting still or walking in the viewing area 106. As another example, based on the user activity identification rule(s) and biometric data from the wearable device 116, the user activity identifier circuitry 302 can identify whether the user is resting. For example, a particular heartrate can be identified in the user activity identification rule(s) as indicative of the user sleeping.
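The wearable-based rule(s) could take a similar form. In the sketch below, the accelerometer and heart-rate thresholds are illustrative placeholders rather than values from this disclosure; in practice, such rule(s) would be defined by user input(s) or machine-learning training and stored in the data store 314.

# Illustrative thresholds; real user activity identification rule(s) would be
# defined by user input(s) or machine-learning training and stored in the
# data store 314.
WALKING_ACCEL_THRESHOLD = 1.5   # movement intensity cutoff (assumed units)
SLEEPING_HEART_RATE_BPM = 55    # resting/sleeping heart-rate cutoff (assumed)

def classify_wearable_sample(accel_magnitude, heart_rate_bpm):
    """Map one wearable sample to a coarse user activity label."""
    if heart_rate_bpm is not None and heart_rate_bpm < SLEEPING_HEART_RATE_BPM:
        return "sleeping"
    if accel_magnitude >= WALKING_ACCEL_THRESHOLD:
        return "walking"
    return "sitting_still"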
In some examples, the user activity identifier circuitry 302 performs image analysis to identify certain activities in image data from the image sensor(s) 121. Based on the image analysis, the user activity identifier 302 can recognize that the user 110, 112 is in a certain position and/or has a certain posture while in the viewing area 106. For instance, based on the image analysis, the user activity identifier 302 can recognize that the user is looking in a particular direction (e.g., user is looking downwards, sideways relative to the media presentation device 119, or in the direction of the media presentation device 119). The user activity identifier circuitry 302 can be trained to perform image analysis based on machine learning training.
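As a simplified illustration of the image-based analysis, the sketch below assumes that an upstream pose-estimation step (not specified here) has already produced head-orientation angles relative to the media presentation device 119 and merely decides whether the user is facing the screen. The angle thresholds are assumptions.

def facing_screen(head_yaw_deg, head_pitch_deg,
                  yaw_limit_deg=30.0, pitch_limit_deg=25.0):
    """Return True if the estimated head orientation points toward the screen.

    head_yaw_deg / head_pitch_deg are assumed to be measured relative to the
    direction of the media presentation device (0, 0 = facing the screen).
    """
    return (abs(head_yaw_deg) <= yaw_limit_deg and
            abs(head_pitch_deg) <= pitch_limit_deg)

# e.g., a user looking down at a phone might report a pitch of -50 degrees:
print(facing_screen(head_yaw_deg=5.0, head_pitch_deg=-50.0))  # False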
The user activity identifier circuitry 302 can determine user activities over the duration for which the media presentation device 119 is operative and based on input(s) received from the image sensor(s) 121, the motion sensor(s) 122, the remote control device 108, the screen status identifier circuitry 123, and/or the user device(s) 114 over time. Thus, in some instances, the user activity identifier circuitry 302 identifies two or more activities for the respective users 110, 112 over time during operation of the media presentation device 119. In some examples, the user activity identifier circuitry 302 is instantiated by processor circuitry executing user activity identifier circuitry 302 instructions and/or configured to perform operations such as those represented by the flowchart of
The classifier circuitry 304 classifies a given user activity identified by the user activity identifier circuitry 302 as a user activity indicating distraction or attention on the part of the user with respect to the media content presented on the media presentation device 119. For example, the user activities can be classified as distraction or attention based on the type of activity. For example, when the user activity identifier circuitry 302 detects that a screen of the smartphone 118 is turned on and, thus, the user is likely looking at the smartphone screen rather than the media presentation device 119, the classifier circuitry 304 associates such activity with distraction on the part of the user. When the user activity identifier circuitry 302 identifies movement by the user within the viewing area 106, such movement can indicate an activity that reduces the user's focus on the television screen. As such, the classifier circuitry 304 classifies the user's movement as a distraction. In some examples, the classification of the user's movement as a distraction or attention is based on analysis of user activity from two or more input(s) (e.g., image sensor input, motion sensor input, etc.) to verify whether the user's movement is more likely to be associated with a distraction or attention (e.g., a movement such as readjusting a sitting position in a chair versus a movement such as a user lifting a user device, etc.). As described in connection with an example computing system 350 of
For example, as disclosed in more detail in connection with
The example classifier circuitry 304 of
In the example of
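One possible (non-limiting) way to express the type-based classification described in the preceding paragraphs is a lookup from activity type to class, as sketched below in Python. The mapping follows the examples given above (remote control usage indicates attention; smartphone screen usage and in-room movement indicate distraction); the activity-type labels are placeholders, and the mapping could instead be learned as described in connection with the example computing system 350.

ATTENTION = "attention"
DISTRACTION = "distraction"

# Mapping from detected activity type to its class, following the examples
# above. Entries could be refined by machine-learning training rather than
# hard-coded as shown here.
ACTIVITY_CLASS = {
    "remote_control_keypress": ATTENTION,
    "phone_screen_on": DISTRACTION,
    "user_movement": DISTRACTION,
    "sleeping": DISTRACTION,
}

def classify_activity(activity_type, default=DISTRACTION):
    """Classify a user activity as attention-based or distraction-based."""
    return ACTIVITY_CLASS.get(activity_type, default)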
The factor assigner circuitry 306 assigns a distraction factor or an attention factor to a given user activity (e.g., a first user activity, a second user activity, etc.) classified by the classifier circuitry 304. For example, each classified user activity can be filtered, processed and/or weighted based on an estimated or probable impact of the activity to a user's (e.g., panelist's) attention or distraction level over time. In the example of
The aggregator circuitry 308 aggregates the distraction factors (DFs) and attention factors (AFs) over time for a particular viewing interval after the factor assigner circuitry 306 has assigned a distraction factor or an attention factor to a given user activity, as disclosed in more detail in connection with
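The factor assignment and aggregation described in the two preceding paragraphs might be sketched as follows. The per-activity weights are illustrative assumptions; in practice they would reflect the estimated impact of each activity (and its intensity, duration, and/or frequency) on the panelist's attention or distraction.

from collections import namedtuple

# A classified user activity with an assigned weight (factor value) and the
# time interval it covers within the viewing session. Weights are assumed.
WeightedActivity = namedtuple(
    "WeightedActivity", ["kind", "factor", "start_s", "end_s"])

# Illustrative factor weights per activity type (not values from this filing).
FACTOR_WEIGHTS = {
    "remote_control_keypress": ("attention", 0.8),
    "phone_screen_on": ("distraction", 0.9),
    "user_movement": ("distraction", 0.5),
}

def assign_factor(activity_type, start_s, end_s):
    """Assign a distraction factor or an attention factor to an activity."""
    kind, weight = FACTOR_WEIGHTS[activity_type]
    return WeightedActivity(kind, weight, start_s, end_s)

def aggregate_factors(weighted_activities, t):
    """Sum attention factors AF(t) and distraction factors DF(t) for the
    activities that are active at time t within the viewing interval."""
    af = sum(a.factor for a in weighted_activities
             if a.kind == "attention" and a.start_s <= t <= a.end_s)
    df = sum(a.factor for a in weighted_activities
             if a.kind == "distraction" and a.start_s <= t <= a.end_s)
    return af, df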
The distraction level identifier circuitry 310 combines the attention factors over time (e.g., AF(t)) with the distraction factors over time (e.g., DF(t)) and normalizes the result relative to a range of full attention (AFULL) (e.g., peak value) to obtain the relative overall distraction level (e.g., DL(t)) for each user 110, 112 with respect to content presented via the media presentation device 119, as disclosed in more detail in connection with
In some examples, the distraction level identifier circuitry 310 determines the overall distraction level using example Equation Set 1, below. However, the distraction level can be determined using other type(s) of method(s). In the example of Equation Set 1, the distraction factors (e.g., DF(t)) and attention factors (e.g., AF(t)) are combined to determine the overall distraction level (e.g., DL(t)). For example, the distraction level (DL) at a particular point in time (t) can be assigned a value of one when the difference between the distraction factor (DF) and the attention factor (AF) is greater than or equal to one, and the DL can be assigned a value of zero when the difference between the DF and the AF is less than or equal to zero. Likewise, the DL can be assigned the value of the difference between the DF and the AF when that value is greater than zero and less than one, as shown below in connection with Equation Set 1:
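Reconstructed from the description above (the notation, and possibly the exact form, of the original Equation Set 1 may differ):

DL(t) =
\begin{cases}
1, & \text{if } DF(t) - AF(t) \geq 1 \\
DF(t) - AF(t), & \text{if } 0 < DF(t) - AF(t) < 1 \\
0, & \text{if } DF(t) - AF(t) \leq 0
\end{cases}
\qquad \text{(Equation Set 1)}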
Furthermore, DL(t) can be normalized to the relative scale (e.g., range {0, 1}), as previously disclosed above, in order to provide a common, compatible scale and/or comparable results to facilitate subsequent processing and/or analysis. As such, the attention level (AL) can be determined as follows: AL(t)=1−DL(t). In some examples, normalization is not performed. In such examples, the attention level can be determined using an identifier for the full attention level (e.g., AFULL), such that AL(t)=AFULL−DL(t), where AFULL represents the range of full attention (e.g., peak value). In some examples, the distraction level identifier circuitry 310 is instantiated by processor circuitry executing distraction level identifier circuitry 310 instructions and/or configured to perform operations such as those represented by the flowchart of
The attention level identifier circuitry 312 determines the overall attention level (e.g., AL(t)) of each user 110, 112 during a given media session (e.g., television session) for the media presentation device 119. For example, the user(s) 110, 112 can be assumed to have full attention during a media session by default. As such, the attention level identifier circuitry 312 can subtract the overall distraction level (e.g., DL(t)) from full attention (e.g., AFULL) to obtain the overall attention level AL(t) during the media session for a corresponding user 110, 112, as disclosed in connection with
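A short Python sketch of the distraction-to-attention conversion follows, assuming the normalized case in which AFULL = 1 (so that AL(t) = AFULL − DL(t) reduces to 1 − DL(t)):

def distraction_level(df, af):
    """Equation Set 1: clamp the difference DF(t) - AF(t) to the range [0, 1]."""
    return min(1.0, max(0.0, df - af))

def attention_level(df, af, a_full=1.0):
    """AL(t) = AFULL - DL(t); with normalization, AFULL = 1."""
    return a_full - distraction_level(df, af)

# Example: aggregated factors at one instant of the viewing session.
print(attention_level(df=0.9, af=0.3))  # 0.4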
The synchronizer circuitry 313 synchronizes determined attention levels over time for each user 110, 112 with the media content presented via the media presentation device 119 of
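The synchronization could, for example, be performed by aligning timestamped attention-level samples with the metered media timeline, as in the sketch below. The data structures and field names are assumptions rather than the format actually used by the TAM meter.

import bisect

def synchronize(attention_samples, media_segments):
    """Attach an attention level to each metered media segment.

    attention_samples: list of (timestamp_s, attention_level), sorted by time.
    media_segments: list of (start_s, end_s, media_id) from the TAM meter.
    """
    times = [t for t, _ in attention_samples]
    synced = []
    for start_s, end_s, media_id in media_segments:
        lo = bisect.bisect_left(times, start_s)
        hi = bisect.bisect_right(times, end_s)
        levels = [level for _, level in attention_samples[lo:hi]]
        avg = sum(levels) / len(levels) if levels else None
        synced.append({"media_id": media_id, "start_s": start_s,
                       "end_s": end_s, "attention_level": avg})
    return synced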
The data store 314 can be used to store any information associated with the user device interface circuitry 301, user activity identifier circuitry 302, classifier circuitry 304, factor assigner circuitry 306, aggregator circuitry 308, distraction level identifier circuitry 310, attention level identifier circuitry 312, and/or synchronizer circuitry 313. The example data store 314 of the illustrated example of
In some examples, the apparatus includes means for receiving user device input. For example, the means for receiving user device input may be implemented by user device interface circuitry 301. In some examples, the user device interface circuitry 301 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for identifying user activity. For example, the means for identifying user activity may be implemented by user activity identifier circuitry 302. In some examples, the user activity identifier circuitry 302 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for classifying. For example, the means for classifying may be implemented by classifier circuitry 304. In some examples, the classifier circuitry 304 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for assigning factors. For example, the means for assigning factors may be implemented by factor assigner circuitry 306. In some examples, the factor assigner circuitry 306 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for aggregating factors. For example, the means for aggregating factors may be implemented by aggregator circuitry 308. In some examples, the aggregator circuitry 308 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for identifying a distraction level. For example, the means for identifying a distraction level may be implemented by distraction level identifier circuitry 310. In some examples, the distraction level identifier circuitry 310 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for identifying an attention level. For example, the means for identifying an attention level may be implemented by attention level identifier circuitry 312. In some examples, the attention level identifier circuitry 312 may be instantiated by processor circuitry such as the example processor circuitry 912 of
In some examples, the apparatus includes means for synchronizing an attention level. For example, the means for synchronizing an attention level may be implemented by synchronizer circuitry 313. In some examples, the synchronizer circuitry 313 may be instantiated by processor circuitry such as the example processor circuitry 912 of
While an example manner of implementing the user attention analyzing circuitry 124 of
While an example manner of implementing the computing system 350 is illustrated in
A flowchart representative of example hardware logic circuitry, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the user attention analyzing circuitry 124 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., as portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and/or stored on separate computing devices, wherein the parts when decrypted, decompressed, and/or combined form a set of machine executable instructions that implement one or more operations that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the machine readable instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example operations of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc., may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, or (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
As used herein, singular references (e.g., “a,” “an,” “first,” “second”, etc.) do not exclude a plurality. The term “a” or “an” object, as used herein, refers to one or more of that object. The terms “a” (or “an”), “one or more,” and “at least one” are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., the same entity or object. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
In response to data received from one or more of the user devices 114, the screen status identifier circuitry 123, the image sensors 121, the motion sensors 122, the wearable device(s) 116, and/or the remote control device 108, the user activity identifier circuitry 302 determines (e.g., predicts) the user activity based on the input data (e.g., user movement, user device usage based on an on/off screen status, remote control device usage based on a keypress, etc.) (block 418).
In the example of
In the example of
The aggregator circuitry 308 determines an aggregate 620 of the respective ones of the distraction factor(s) and/or the attention factor(s) 615 (e.g., summation of the factor(s)). For example, the aggregator circuitry 308 sums the attention factors associated with attention-based activities to obtain an overall attention factor for the duration of the media presentation session and sums the distraction factors associated with distraction-based activities to obtain an overall distraction factor for the duration of the media presentation session. The distraction level identifier circuitry 310 combines and normalizes the aggregated attention factors and aggregated distraction factors 625 over a range of full attention (AFULL) to obtain the relative overall distraction level (e.g., DL(t)) 630. In some examples, the distraction level identifier circuitry 310 removes user activity data and/or the corresponding attention or distraction factors associated with time periods when the media presentation device 119 is not presenting media. The attention level identifier circuitry 312 converts 635 the overall distraction level to an overall attention level 640 (e.g., AL(t)) for the media session. In the example of
Thus, in the example of
In the first example of
In the example of
In the example of
In the second example of
In the third example of
In the second example of
In the third example of
The processor platform 800 of the illustrated example includes processor circuitry 812. The processor circuitry 812 of the illustrated example is hardware. For example, the processor circuitry 812 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 812 may be implemented by one or more semiconductor based (e.g., silicon based) devices. In this example, the processor circuitry 812 implements the user device interface circuitry 301, the user activity identifier circuitry 302, the classifier circuitry 304, the factor assigner circuitry 306, the aggregator circuitry 308, the distraction level identifier circuitry 310, the attention level identifier circuitry 312, and/or the example synchronizer circuitry 313.
The processor circuitry 812 of the illustrated example includes a local memory 813 (e.g., a cache, registers, etc.). The processor circuitry 812 of the illustrated example is in communication with a main memory including a volatile memory 814 and a non-volatile memory 816 by a bus 818. The volatile memory 814 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory 816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 814, 816 of the illustrated example is controlled by a memory controller 817.
The processor platform 800 of the illustrated example also includes interface circuitry 820. The interface circuitry 820 may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a Peripheral Component Interconnect (PCI) interface, and/or a Peripheral Component Interconnect Express (PCIe) interface.
In the illustrated example, one or more input devices 822 are connected to the interface circuitry 820. The input device(s) 822 permit(s) a user to enter data and/or commands into the processor circuitry 812. The input device(s) 822 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 824 are also connected to the interface circuitry 820 of the illustrated example. The output devices 824 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or speaker. The interface circuitry 820 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
The interface circuitry 820 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network 826. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-site wireless system, a cellular telephone system, an optical connection, etc.
The processor platform 800 of the illustrated example also includes one or more mass storage devices 828 to store software and/or data. Examples of such mass storage devices 828 include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices, and DVD drives.
The machine executable instructions 832, which may be implemented by the machine readable instructions of
The processor platform 900 of the illustrated example includes a processor 912. The processor 912 of the illustrated example is hardware. For example, the processor 912 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example neural network processor 360, the example trainer 358, and the example training controller 356.
The processor 912 of the illustrated example includes a local memory 913 (e.g., a cache). The processor 912 of the illustrated example is in communication with a main memory including a volatile memory 914 and a non-volatile memory 916 via a bus 918. The volatile memory 914 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 916 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 914, 916 is controlled by a memory controller.
The processor platform 900 of the illustrated example also includes an interface circuit 920. The interface circuit 920 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 922 are connected to the interface circuit 920. The input device(s) 922 permit(s) a user to enter data and/or commands into the processor 912. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 924 are also connected to the interface circuit 920 of the illustrated example. The output devices 924 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 920 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 920 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 926. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-site wireless system, a cellular telephone system, etc.
The processor platform 900 of the illustrated example also includes one or more mass storage devices 928 for storing software and/or data. Examples of such mass storage devices 928 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
The machine executable instructions 932, which may be implemented by the machine readable instructions of
The cores 1002 may communicate by an example bus 1004. In some examples, the bus 1004 may implement a communication bus to effectuate communication associated with one(s) of the cores 1002. For example, the bus 1004 may implement at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the bus 1004 may implement any other type of computing or electrical bus. The cores 1002 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 1006. The cores 1002 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 1006. Although the cores 1002 of this example include example local memory 1020 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 1000 also includes example shared memory 1010 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 1010. The local memory 1020 of each of the cores 1002 and the shared memory 1010 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory 814, 816 of
Each core 1002 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 1002 includes control unit circuitry 1014, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU) 1016, a plurality of registers 1018, the L1 cache 1020, and an example bus 1022. Other structures may be present. For example, each core 1002 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 1014 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 1002. The AL circuitry 1016 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 1002. The AL circuitry 1016 of some examples performs integer-based operations. In other examples, the AL circuitry 1016 also performs floating point operations. In yet other examples, the AL circuitry 1016 may include first AL circuitry that performs integer-based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 1016 may be referred to as an Arithmetic Logic Unit (ALU). The registers 1018 are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 1016 of the corresponding core 1002. For example, the registers 1018 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 1018 may be arranged in a bank as shown in
Each core 1002 and/or, more generally, the microprocessor 1000 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 1000 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry.
More specifically, in contrast to the microprocessor 1000 of
In the example of
The configurable interconnections 1110 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry 1108 to program desired logic circuits.
The storage circuitry 1112 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 1112 may be implemented by registers or the like. In the illustrated example, the storage circuitry 1112 is distributed amongst the logic gate circuitry 1108 to facilitate access and increase execution speed.
The example FPGA circuitry 1100 of
Although
In some examples, the processor circuitry 812, 912 of
A block diagram illustrating an example software distribution platform 1205 to distribute software such as the example machine readable instructions 832, 932 of
From the foregoing, it will be appreciated that example systems, methods, and apparatus disclosed herein provide for determination (e.g., prediction, estimation) of audience exposure to media based on engagement level. Examples disclosed herein identify changes in a panelist's engagement with media content over time based on an analysis of user activities performed over the media presentation session (e.g., a television session). In some examples, user activity can be determined (e.g., identified, predicted) based on inputs from, for instance, remote controls, motion sensors, mobile phones, tablets, biometric wearables, etc. that collect data indicative of user activities during the given media viewing event. Some examples disclosed herein apply distraction factors based on varying impacts to user attention when a user is performing different activities (e.g., user movement, user remote control usage, user mobile phone usage, etc.). Distraction factors and/or attention factors can be assigned to user activities detected over time during presentation of the media (e.g., a first detected user activity, a second detected user activity, etc.) to obtain a relative measure indicating an overall attention level associated with a given media viewing session (e.g., television session, etc.).
Example methods, apparatus, systems, and articles of manufacture to estimate audience exposure based on engagement level are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an apparatus, comprising at least one memory, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to identify a user activity associated with a user during exposure of the user to media based on an output from at least one of a user device, a remote control device, an image sensor, or a motion sensor, classify the user activity as an attention-based activity or a distraction-based activity, assign a distraction factor or an attention factor to the user activity based on the classification, and determine an attention level for the user based on the distraction factor or the attention factor.
Example 2 includes the apparatus of example 1, wherein the user activity is a first user activity and the processor circuitry is to identify a second user activity by the user during the exposure of the user to the media, assign a distraction factor or an attention factor to the second user activity, and determine the attention level based on (a) the assigned distraction factor or attention factor for the first user activity and (b) the assigned distraction factor or attention factor for the second user activity.
Example 3 includes the apparatus of example 2, wherein the processor circuitry is to assign the distraction factor to the first user activity and the attention factor to the second user activity, the processor circuitry to determine a distraction level for the user based on the distraction factor and the attention factor, and determine the attention level based on the distraction level.
Example 4 includes the apparatus of example 2, wherein the processor circuitry is to assign the attention factor to the first user activity and the attention factor to the second user activity, the processor circuitry to aggregate the attention factor for the first user activity and the attention factor for the second user activity, and determine the attention level based on the aggregation of the attention factors.
Example 5 includes the apparatus of example 1, wherein the processor circuitry is to determine an estimated level of impact of the user activity on the attention level of the user over time based on the assigned distraction factor or the assigned attention factor.
Example 6 includes the apparatus of example 1, wherein the processor circuitry is to identify the user activity based on an operative state of the user device.
Example 7 includes a non-transitory machine readable storage medium comprising instructions that, when executed, cause processor circuitry to at least identify a first user activity and a second user activity associated with a user during a media presentation session, assign a first distraction factor to the first user activity, assign a second distraction factor to the second user activity, generate an aggregated distraction factor based on the first distraction factor and the second distraction factor, and determine an attention level associated with the user during the media presentation session based on the aggregated distraction factor.
Example 8 includes the non-transitory machine readable storage medium of example 7, wherein the instructions, when executed, cause the processor circuitry to identify the first user activity based on an output from at least one of a user device, a remote control device, an image sensor, or a motion sensor.
Example 9 includes the non-transitory machine readable storage medium of example 7, wherein the instructions, when executed, cause the processor circuitry to execute a machine learning model to classify the first user activity as a distraction-based activity.
Example 10 includes the non-transitory machine readable storage medium of example 7, wherein the instructions, when executed, cause the processor circuitry to time-synchronize the attention level with media content associated with the media presentation session.
Example 11 includes the non-transitory machine readable storage medium of example 10, wherein the instructions, when executed, cause the processor circuitry to generate a mapping of the attention level to the media content of the media presentation session.
Example 12 includes the non-transitory machine readable storage medium of example 7, wherein the instructions, when executed, cause the processor circuitry to identify the first user activity based on an output of a wearable device indicative of movement by the user.
Example 13 includes the non-transitory machine readable storage medium of example 7, wherein the instructions, when executed, cause the processor circuitry to identify a third user activity associated with the user during the media presentation session, assign an attention factor to the third user activity, and determine the attention level based on the aggregated distraction factor and the attention factor.
Example 14 includes an apparatus, comprising means for identifying a first user activity associated with a user during presentation of media by a media presentation device, means for classifying the first user activity as an attention-based activity or a distraction-based activity, means for assigning a distraction factor or an attention factor based on the classification, means for aggregating the assigned distraction factor or the assigned attention factor for the first user activity with a corresponding one of an assigned distraction factor or assigned attention factor for a second user activity during the presentation of the media, and means for identifying an attention level of the user based on the aggregated distraction factors or the aggregated attention factors.
Example 15 includes the apparatus of example 14, wherein the means for identifying the first user activity is to identify the first user activity based on an output from at least one of a user device, a remote control device, an image sensor, or a motion sensor.
Example 16 includes the apparatus of example 15, wherein the output from the user device is indicative of an operative state of a screen of the user device, and the means for identifying is to identify the first user activity based on the operative state of the screen.
Example 17 includes the apparatus of example 16, wherein the means for classifying is to classify the first user activity as a distraction-based activity when the operative state is a first operative state and as an attention-based activity when the operative state of the screen is a second operative state different from the first operative state.
Example 18 includes the apparatus of example 14, wherein the means for identifying is to determine an estimated level of impact of the first user activity on the attention level of the user over time based on the assigned distraction factor or the assigned attention factor.
Example 19 includes the apparatus of example 15, wherein the user device is a wearable device and the output is indicative of user activity in an environment including the media presentation device, the user activity associated with at least one of movement or audio generated by the user.
Example 20 includes the apparatus of example 14, further including means for synchronizing the attention level with the media over time.
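Examples 10, 11, and 20 above refer to time-synchronizing the determined attention level with the presented media and generating a mapping of attention level to media content. The following Python sketch illustrates one possible form of such a mapping; the segment boundaries, timestamps, and per-interval attention values shown are hypothetical assumptions and are not specified by the disclosure.

```python
from bisect import bisect_right

# Hypothetical per-interval attention levels computed for a viewing session,
# keyed by interval start time in seconds from session start (illustrative only).
attention_timeline = [
    (0,   0.9),   # opening segment, few distractions detected
    (300, 0.5),   # mobile phone usage detected
    (600, 0.8),   # attention-based remote control activity detected
]

# Hypothetical media segments within the same session (e.g., program vs. ads).
media_segments = [
    (0,   "program segment 1"),
    (450, "commercial break"),
    (540, "program segment 2"),
]


def level_at(t_seconds):
    """Return the attention level in effect at a given media timestamp."""
    starts = [start for start, _ in attention_timeline]
    idx = bisect_right(starts, t_seconds) - 1
    return attention_timeline[max(idx, 0)][1]


def map_attention_to_media():
    """Map each media segment to the attention level in effect at its start time."""
    return {name: level_at(start) for start, name in media_segments}


if __name__ == "__main__":
    for segment, level in map_attention_to_media().items():
        print(f"{segment}: attention {level:.1f}")
```

A keyed lookup of this kind is only one possible representation; time-synchronized attention data could equally be stored per frame, per second, or per detected media event.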
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
US 20240073484 A1, Feb. 2024, US.