This disclosure relates generally to audience measurement, and, more particularly, to methods and apparatus to correct age misattribution.
Audience measurement entities measure exposure of audiences to media such as television, music, movies, radio, Internet websites, streaming media, etc. The audience measurement entities generate ratings based on the measured exposure. Ratings are used by advertisers and/or marketers to purchase advertising space and/or design advertising campaigns. Additionally, media producers and/or distributors use the ratings to determine how to set prices for advertising space and/or to make programming decisions.
Techniques for monitoring user access to media have evolved significantly over the years. Some prior systems perform such monitoring primarily through server logs. In particular, entities serving media on the Internet can use such prior systems to log the number of requests received for their media at their server.
Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Examples disclosed herein may be used to generate age correction models that correct age misattribution in impression records. To measure audiences, an audience measurement entity (AME) may use instructions (e.g., Java, JavaScript, or any other computer language or script) embedded in media to collect information indicating when audience members are accessing media on a computing device (e.g., a computer, a laptop, a smartphone, a tablet, etc.). Media to be monitored is tagged with these instructions. When a device requests the media, both the media and the instructions are downloaded to the client. The instructions cause information about the media access to be sent from the device to a monitoring entity (e.g., the AME) and/or a database proprietor (e.g., Google, Facebook, Experian, Baidu, Tencent, etc.). Examples of tagging media and monitoring media through these instructions are disclosed in U.S. Pat. No. 6,108,637, issued Aug. 22, 2000, entitled “Content Display Monitor,” which is incorporated by reference in its entirety herein.
Additionally, the instructions cause one or more user and/or device identifiers (e.g., an international mobile equipment identity (IMEI), a mobile equipment identifier (MEID), a media access control (MAC) address, an app store identifier, an open source unique device identifier (OpenUDID), an open device identification number (ODIN), a login identifier, a username, an email address, user agent data, third-party service identifiers, web storage data, document object model (DOM) storage data, local shared objects (also referred to as “Flash cookies”), browser cookies, an automobile vehicle identification number (VIN), etc.) located on the computing device to be sent to a partnered database proprietor to identify demographic information (e.g., age, gender, geographic location, race, income level, education level, religion, etc.) for the audience member of the computing device collected via a user registration process. For example, an audience member may be exposed to an advertisement entitled “When Pigs Fly” in a media streaming website on a tablet. In that instance, in response to instructions executing within the website, a user/device identifier stored on the tablet is sent to the AME and/or a partner database proprietor to associate the instance of media exposure (e.g., an impression) with corresponding demographic information of the audience member. The database proprietor can then send logged demographic impression data to the AME for use by the AME in generating, for example, media ratings and/or other audience measures.
In some examples, the partner database proprietor does not provide individualized demographic information (e.g., user-level demographics) in association with logged impressions. Instead, in some examples, the partnered database proprietor provides aggregate demographic impression data (sometimes referred to herein as “aggregate census data”). For example, the aggregate demographic impression data provided by the partner database proprietor may show that eighteen hundred males age 18-23 were exposed to the advertisement entitled “When Pigs Fly” in the last seven days via computing devices. However, the aggregate demographic information from the partner database proprietor does not identify individual persons (e.g., is not user-level data) associated with individual impressions. In this manner, the database proprietor protects the privacy of its subscribers/users by not revealing their identities, and thus their user-level media access activities, to the AME.
The AME uses this aggregated demographic information to calculate ratings and/or other audience measures for corresponding media. However, during the process of registering with the database proprietor, a subscriber may lie or may otherwise provide inaccurate demographic information. For example, during registration, the subscriber may provide an inaccurate age or location. These inaccuracies cause errors in the aggregate demographic information from the partner database proprietor, and can lead to errors in audience measurement. To combat these errors, the AME recruits panelist households that consent to monitoring of their exposure to media. During the recruitment process, the AME obtains detailed demographic information from the members of the panelist household. While the self-reported demographic information (e.g., age, etc.) reported to the database proprietor is generally considered to be potentially inaccurate, the demographic information collected from the panelist (e.g., via a survey, etc.) by the AME is considered highly accurate. As used herein, the term “true age” refers to age information collected from the panelist by the AME.
The AME also retrieves activity data from the partnered database proprietor. The database proprietor activity data includes self-reported demographic data (e.g., age, high school graduation year, profession, marital status, etc.), subscriber metadata (e.g., number of connections, median age of connections, etc.), and subscriber use data (e.g., frequency of login, frequency of posts, devices used to login, privacy settings, etc.). Examples of retrieving the activity data from the partnered database subscriber(s) are disclosed in U.S. patent application Ser. No. 14/864,300, filed Sep. 24, 2015, entitled “Methods and Apparatus to Assign Demographic Information to Panelists,” which is incorporated by reference in its entirety herein.
The AME develops age correction model(s) (e.g., decision tree models, regression tree models, etc.) to assign an age category (e.g., an age-based demographic bucket), an age category probability density function (PDF), and/or a discrete age to an audience member corresponding to a logged impression. The PDFs indicate probabilities that the audience member falls within certain ones of the respective age categories. The age correction models are generated using the database proprietor activity data of panelists and the detailed demographic information supplied by the panelist to the AME. To generate the age correction models, the database proprietor activity data is organized into attribute-value pairs. In the attribute-value pairs, the attribute is a category in the activity data (e.g., marital status, post frequency, reported age, etc.) and the value is the corresponding value (e.g., single, five times per week, twenty-seven, etc.) of the attribute. For example, an attribute-value pair may be [percentage_connections_female, 50]. Examples for generating age correction models are disclosed in U.S. patent application Ser. No. 14/928,468, filed Oct. 30, 2015, entitled “Methods and Apparatus to Categorize Media Impressions by Age,” which is incorporated by reference in its entirety herein.
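As a rough illustration (a minimal sketch; the field names and values below are hypothetical and do not reflect any database proprietor's actual schema), the activity data for one panelist might be flattened into attribute-value pairs as follows:

# Hypothetical activity record for one panelist; the keys are illustrative
# attributes and the values are the corresponding reported/observed values.
activity_record = {
    "reported_age": 27,
    "marital_status": "single",
    "posts_per_week": 5,
    "percentage_connections_female": 50,
}

# Each (attribute, value) pair becomes one feature the age correction model
# can split on, e.g., [percentage_connections_female, 50].
attribute_value_pairs = list(activity_record.items())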
The AME maintains a database of audience member records that associate the database proprietor activity (e.g., collected from the database proprietor) and demographic information (e.g., collected by the AME). For example, the audience member records may associate a self-reported age (e.g., from a database proprietor) with a true age. The audience member records are divided into a training set and a validation set. Because the composition of the training sets and the validation sets affects performance of the age correction model, the audience member records are randomly divided into the training sets and the validation sets. For example, the audience member records may be randomly divided into a first training set and a first validation set, then the audience member records may be also randomly divided into a second training set and a second validation set, etc. Candidate models are developed from the training sets. Additionally, the candidate models are evaluated using the validation sets. For each of the candidate models, results of applying the validation sets are fused, resulting in an estimate of the actual performance of the candidate model.
Examples disclosed herein may be used to objectively validate the candidate models. To evaluate the candidate models, the AME generates a validation score (Sv) based on a broad score (Sb) and a targeted score (St). The AME uses the validation score (Sv) to determine which one of the generated candidate models to use when determining the age to associate with a media impression. In some examples, the validation score (Sv) is a weighted average of the broad score (Sb) and the targeted score (St), where the weights are determined by business interests, which may include the proportion of campaigns that are targeted campaigns.
In examples disclosed herein, the broad score (Sb) is used to capture the accuracy of the corrective model in cases in which the composition of the target audience members is similar to the composition of the possible audience members as a whole. For example, the composition of the target audience members may include all of the demographic groups (e.g., age categories) that make up the population of the target region. The broad score (Sb) is based on a weighted prediction error of multiple validation sets.
In examples disclosed herein, the targeted score (St) is used to capture the accuracy of the model in cases of a targeted audience, where (i) the composition of the target audience members is narrow compared to the composition of the audience members as a whole, and/or (ii) the composition of the target audience members approaches a pure sample (e.g., audience members with the same demographic characteristics). For example, for an age-based targeted ad campaign, the ideal age distribution of audience members exposed to ads of the campaign may consist of one or two age-based demographic groups. The targeted score (St) is based on an impulse response of the age-correction model when audience member records associated with individual demographic groups are used to validate the age-correction model. The impulse response is the percentage of the audience member records in an age category for which the candidate model correctly predicts the age category. For example, for 1000 audience member records of the validation set having true ages between 25-34, the age-correction model may predict that 97 audience member records are in the 18-24 age category, 855 audience member records are in the 25-34 age category, 42 audience member records are in the 35-54 age category, and 6 audience member records are in the 55+ age category. In such an example, the impulse response is approximately 0.86.
In the illustrated example, member(s) of the panelist household (e.g., a head of household) provide(s) detailed demographic information 114 (e.g., true age, ethnicity, first name, middle name, gender, household income, employment status, occupation, rental status, level of education, etc.) of the member(s) of the panelist household to the AME 104. In the illustrated example, the detailed demographic information 114 is provided via the computing device 112 through the registration website, or any other suitable website. The example computing device 112 sends an example registration message 116 that includes the AME ID 106 and the detailed demographic information 114. Alternatively, in some examples, the AME 104 collects the detailed demographic information 114 through other suitable means, such as a telephone survey, a paper survey, an in-person survey, etc.
In the illustrated example, when a member of the panelist household uses the computing device 112 to visit a website and/or use an app associated with a database proprietor 102, the database proprietor 102 sets or otherwise provides, on the computing device 112, a database proprietor identifier (DPID) 118 associated with subscriber credentials (e.g., user name and password, etc.) used to access the website and/or the app. In some examples, the DPID 118 is a cookie or is encapsulated in a cookie. Alternatively, the DPID 118 could be any other user and/or device identifier. The example DPID extractor 110 extracts the DPID 118 (e.g., from a cookie, etc.). The example collector 108 collects the DPIDs 118 on the computing device 112 and sends an example ID message 120 to the example AME 104. In the illustrated example, the ID message 120 includes the extracted DPID(s) 118 and the AME ID 106 corresponding to the panelist household. In some examples, the DPID extractor 110 remembers the DPIDs 118 that have been extracted and sends the ID message 120 when a new panelist DPID 118 has been extracted.
In the illustrated example, the AME 104 includes an example panelist manager 122, an example panelist database 124, an example demographic retriever 126, an example age modeler 128, an example model validator 130, and an example age corrector 132. The example panelist manager 122 receives the registration message 116 and the ID message(s) 120 from the computing device 112. Based on the registration message 116 and the ID message(s) 120, the panelist manager 122 generates a panelist household record 134 that associates the AME ID 106 to the detailed demographic information 114 and the DPID(s) 118 of the members of the panelist household. The example panelist manager 122 stores the example panelist household record 134 in the panelist database 124.
The example demographic retriever 126 is structured to retrieve database proprietor activity data 136 from the example database proprietor 102. In the illustrated example, the database proprietor 102 provides an application program interface (API) that provides access to a subscriber database 138 based on DPIDs (e.g., the DPIDs 118, etc.). The example subscriber database 138 includes the database proprietor activity data 136 of the subscribers to the database proprietor 102. The example demographic retriever 126 sends queries 140 to the database proprietor 102 that include the DPIDs 118 associated with the example panelist household records 134 in the example panelist database 124. In the illustrated example, in response to the queries 140, the database proprietor 102 sends query responses 142 to the AME 104. The example query responses 142 include the database proprietor activity data 136 corresponding to the panelist DPID 118 of the example query 140. The example demographic retriever 126 stores the database proprietor activity data 136 in association with the corresponding panelist household record 134 in the panelist database 124.
The example age modeler 128 generates example candidate models 144 based on the panelist household records 134 in the example panelist database 124. To generate the candidate models 144, the age modeler 128 splits the panelist household records 134 into audience member records that each represent a member of one of the panelist households. For example, a panelist household may have three members (e.g., a father, a son, and a daughter, etc.). In such an example, the age modeler 128 creates three audience member records, with each of the audience member records including a portion of the detailed demographic data 114 and the database proprietor activity data 136 corresponding to the respective member of the panelist household.
The example age modeler 128 generates multiple training sets and multiple validation sets. For each one of the training sets and each one of the corresponding validation sets, the example age modeler 128 randomly or pseudo-randomly assigns the audience member records to either the training set or the validation set. For example, the audience member records may be split into a first training set and a first validation set, and then the audience member records may be split into a second training set and a second validation set. In such an example, the composition of the audience member records in the first training set is different from the composition of the audience member records in the second training set. In some examples, 80% of the audience member records are assigned to the training set, and the remaining 20% of the audience member records are assigned to the validation set. In the illustrated example, the example age modeler 128 generates the candidate models 144 using the training sets. In some examples, the age modeler 128 uses different modeling techniques (e.g., decision tree, regression, etc.) to generate the candidate models 144.
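A minimal sketch of the repeated random partitioning described above, assuming the audience member records are held in a Python list (the 80%/20% proportion follows the example; the helper name and structure are assumptions):

import random

def split_records(records, train_fraction=0.8, seed=None):
    """Randomly partition audience member records into a training set and a
    validation set (e.g., 80% / 20%)."""
    rng = random.Random(seed)
    shuffled = list(records)      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Repeating the split with different seeds yields multiple training/validation
# pairs whose compositions differ, as described above.
records = ["member_%d" % n for n in range(1000)]
train_1, validation_1 = split_records(records, seed=1)
train_2, validation_2 = split_records(records, seed=2)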
The example model validator 130 selects one of the candidate models 144 to be an age correction model 146 that is used by the age corrector 132 and/or the database proprietor 102 to correct the ages associated with media impressions. As described in more detail in connection with
In some examples, when the AME 104 has access to database subscriber activity data 136 associated with individualized logged impressions, the age corrector 132 receives the age correction model 146 from the model validator 130. In some such examples, the example age corrector 132 uses the age correction model 146 to assign an age category, an age-based PDF and/or a discrete predicted age to the individualized logged media impression. For example, based on the subscriber activity data 136, the age correction model 146 may assign an age of 23 to the individualized logged media impression.
Alternatively, in some examples, the AME 104 sends the age correction model 146 to the database proprietor 102. In some such examples, when the database proprietor 102 logs a media impression associated with a subscriber, the database proprietor 102 uses the age correction model 146 to assign the age category, the age-based PDF and/or the discrete age to the logged media impression. In some such examples, because the age-based PDFs are fixed through the generation of the age correction model 146, the database proprietor 102 assigns a PDF identifier that identifies a particular age-based PDF to the logged impression. In some such examples, the database proprietor 102 aggregates the logged impressions based on the PDF identifier. For example, the aggregate logged impression data from the database proprietor 102 may indicate that two thousand subscribers assigned to the “M7” age-based PDF were exposed to a “Waffle Barn” advertisement in the last seven days. In such an example, the “M7” age-based PDF may indicate that the probability of the subscribers associated with the aggregate logged impression data being in the 18-21 age category is 3.2%, the probability of the subscribers being in the 22-27 age category is 86.9%, the probability of the subscribers being in the 28-33 age category is 9.4%, and the probability of the subscribers being in the 34-40 age category is 0.5%. In such an example, of the two thousand subscribers, the AME 104 would assign 64 subscribers to the 18-21 age category, 1738 subscribers to the 22-27 age category, 188 subscribers to the 28-33 age category, and 10 subscribers to the 34-40 age category.
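A minimal sketch of how aggregate impressions assigned to a fixed age-based PDF could be spread across age categories, using the hypothetical “M7” figures from the example above (the function name and data layout are assumptions):

# Hypothetical "M7" age-based PDF: the probability that a subscriber assigned
# this PDF identifier falls in each age category.
M7_PDF = {"18-21": 0.032, "22-27": 0.869, "28-33": 0.094, "34-40": 0.005}

def distribute_impressions(total_impressions, pdf):
    """Allocate aggregate logged impressions to age categories per the PDF."""
    return {category: round(total_impressions * p) for category, p in pdf.items()}

# Two thousand "M7" impressions yield 64, 1738, 188, and 10 impressions in the
# respective age categories, matching the example above.
assigned = distribute_impressions(2000, M7_PDF)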
The example broad scorer 202 calculates the broad scores (Sb) for the example candidate models 144 based on the validation sets. The broad scores (Sb) measure the reliability of the candidate models 144 when the media impressions from a media campaign encompass a variety of demographic groups (e.g., the possible audience as a whole, etc.). For example, an advertisement campaign may be designed and deployed so that audience members in the 13-17 age category, the 18-24 age category, the 25-34 age category, and the 35-54 age category are likely to be exposed to the advertisement.
To calculate the broad scores (Sb), the example broad scorer 202 applies one or more of the validation sets to the candidate models 144. Initially, the example broad scorer 202 calculates an error (e) for each of the demographic groups. The example broad scorer 202 calculates the error (e) based on Equation 1 below.
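The equation itself is not reproduced here; inferred from the variable definitions in the next paragraph and the worked 13-34 example (a reconstruction, not the original figure — in particular, the denominator is assumed to use the actual counts T), Equation 1 would take the form:

e_j = \sqrt{ \frac{\sum_{i=1}^{n_i} (P_{i,j} - T_{i,j})^2}{\sum_{i=1}^{n_i} T_{i,j}^2} }   (Equation 1)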
In Equation 1 above, ni is the number of validation sets applied to the candidate model 144 being scored, Pi,j is the predicted number of audience member records in the jth demographic group of the ith test set, and Ti,j is the actual number of audience member records in the jth demographic group of the ith test set. Table 1 below illustrates example predicted numbers of audience members (P) and example actual numbers of audience members (T) in a particular demographic group (j) for different test sets (i).
In the example illustrated in Table 1 above, the error (e) for the 13-34 age category demographic group is 0.15 (sqrt((100+196+441)/(10000+9025+12100))).
The example broad scorer 202 calculates the broad scores (Sb) based on Equation 2 below.
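Inferred from the definitions in the next paragraph and the worked value 0.88 = 1 − (49.5/425), Equation 2 would take the form (a reconstruction, not the original figure):

S_b = 1 - \frac{\sum_{j=1}^{n_g} w_j e_j}{\sum_{j=1}^{n_g} w_j}   (Equation 2)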
In Equation 2 above, ng is a number of demographic groups, the error (ej) is calculated based on Equation 1 above, and wj is the weight of the jth demographic group. The weight (w) for each demographic group in the illustrated example is defined as the number of audience members in that demographic group in the validation set. For example, if there are 342 audience member records in the 13-17 age category demographic group, the weight (w) for the 13-17 age category demographic group is 342. Table 2 below illustrates example demographic groups, example errors (e), and example weights (w).
In the example illustrated in Table 2 above, the broad score (Sb) is 0.88 (1-(49.5/425)).
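A minimal sketch of the broad-score calculation, assuming Equations 1 and 2 take the forms reconstructed above (the per-set predicted counts shown are hypothetical values chosen only to reproduce the squared differences 100, 196, and 441 from the 13-34 example):

import math

def category_error(predicted, actual):
    """Equation 1 (as reconstructed above): root of the summed squared
    prediction errors for one demographic group, normalized by the summed
    squared actual counts."""
    numerator = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    denominator = sum(a ** 2 for a in actual)
    return math.sqrt(numerator / denominator)

def broad_score(errors, weights):
    """Equation 2 (as reconstructed above): one minus the weighted mean of the
    per-group errors."""
    return 1 - sum(e * w for e, w in zip(errors, weights)) / sum(weights)

# 13-34 example: actual counts 100, 95, 110 and hypothetical predicted counts
# 110, 109, 131 give squared differences 100, 196, 441 and an error of ~0.15.
error_13_34 = category_error([110, 109, 131], [100, 95, 110])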
The example targeted scorer 204 calculates the targeted scores (St) for the example candidate models 144 based on the validation sets. The targeted scores (St) measure the reliability of the candidate models 144 when the media impressions from a media campaign encompass a narrow set of demographic groups (e.g., one or two demographic groups, etc.). For example, an advertisement campaign may be designed and deployed so that audience members in the 13-17 age category are likely to be exposed to the advertisement.
To calculate the targeted scores (St), the example targeted scorer 204 divides each of the validation sets into subsets that include a single demographic group. For example, the validation set may have a first subset of the audience member records in the 13-34 age category demographic group, a second subset in the 35-54 age category demographic group, and a third subset in the 55+ age category demographic group. The subsets are applied to the candidate models 144, and the predictions for each subset form an impulse response matrix M. An example impulse response matrix M is illustrated in Table 3 below.
In the example impulse response matrix (M) represented in Table 3 above, 85% of the audience member records in the 13-34 age category demographic group were predicted to be in the 13-34 age category demographic group, 12% of the audience member records in the 13-34 age category demographic group were predicted to be in the 35-54 age category demographic group, and 3% of the audience member records in the 13-34 age category demographic group were predicted to be in the 55+ age category demographic group. As a result, in the example of Table 3 above, for the 13-34 age category demographic group, the particular candidate model 144 misattributed 15% of the audience member records in the 13-34 age category demographic group of the validation set. In the example, the misattribution includes the 12% of the audience member records in the 13-34 age category demographic group that were predicted to be in the 35-54 age category demographic group and the 3% of the audience member records in the 13-34 age category demographic group that were predicted to be in the 55+ age category demographic group (e.g., demographic groups other than the actual demographic group).
Based on the impulse response matrix (M), the targeted scorer 204 calculates the targeted score (St) based on Equation 3 below.
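Inferred from the definitions in the next paragraph and the worked example below, Equation 3 would take the form (a reconstruction, not the original figure):

S_t = \frac{\sum_{j=1}^{n_g} w_j M_{j,j}}{\sum_{j=1}^{n_g} w_j}   (Equation 3)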
In Equation 3 above, ng is a number of demographic groups, Mj,j is a value in the jth row and the jth column of the impulse response matrix (M), and wj is the weight of the jth demographic group. In the illustrated example, the weight (w) for each demographic group is defined as the number of audience member records in that demographic group in the validation set. For example, if the number of audience member records in the 13-34 age category demographic group is 200, the number of audience member records in the 35-54 age category demographic group is 150, and the number of audience member records in the 55+ age category demographic group is 75, the targeted score (St) of the example corrective model represented by the example impulse response matrix M illustrated on Table 3 above is 0.88 (e.g., 0.88=St=(0.85*200+0.88*150+0.98*75)/(200+150+75)).
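A minimal sketch of the targeted-score calculation, assuming Equation 3 takes the form reconstructed above and using the diagonal entries and weights from the example (the data layout and function name are assumptions):

# Diagonal entries M[j][j] of the example impulse response matrix in Table 3:
# the fraction of validation records in each age group predicted into that
# same group.
diagonal = {"13-34": 0.85, "35-54": 0.88, "55+": 0.98}

# Weights: validation-set record counts per age group from the example above.
weights = {"13-34": 200, "35-54": 150, "55+": 75}

def targeted_score(diagonal, weights):
    """Equation 3 (as reconstructed above): weighted average of the diagonal
    entries of the impulse response matrix."""
    total_weight = sum(weights.values())
    return sum(diagonal[g] * weights[g] for g in diagonal) / total_weight

# (0.85*200 + 0.88*150 + 0.98*75) / (200 + 150 + 75) is approximately 0.88.
targeted = targeted_score(diagonal, weights)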
In the illustrated example, the model evaluator 206 retrieves and/or otherwise receives the broad scores (Sb) for the candidate models 144 from the broad scorer 202 and the targeted scores (St) for the candidate models 144 from the targeted scorer 204. The example model evaluator 206 calculates the validation scores (Sv) for the candidate models 144 based on the corresponding broad scores (Sb) and the corresponding targeted scores (St). In some examples, the model evaluator 206 calculates a weighted average of the broad score (Sb) and the targeted score (St) with a broad weight (Wb) and a targeted weight (Wt), respectively. In some such examples, the validation score (Sv) is calculated based on Equation 4 below.
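Inferred from the weighted-average description above and the worked example in the next paragraph, Equation 4 would take the form (a reconstruction, not the original figure):

S_v = \frac{W_b S_b + W_t S_t}{W_b + W_t}   (Equation 4)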
In some examples, the broad weight (Wb) is a quantity of broad campaigns that were executed over a time period (e.g., one year, five years, etc.) and the target weight (Wt) is a quantity of narrow campaigns that were executed over the same time period. For example, for one of the candidate models 144, if the broad weight (Wb) is 256 and the target weight (Wt) is 649, the broad score (Sb) is 0.92 and the targeted score (St) is 0.62, the validation score (Sv) is 0.70 ((0.92*256+0.62*649)/(256+649)).
The example model selector 208 selects one of the candidate models 144 to be the age correction model 146 based on the validation scores (Sv) calculated by the example model evaluator 206. In some examples, the model selector 208 selects the candidate model 144 that is associated with the highest validation score (Sv). Example validation scores (Sv) for the example candidate models 144 are shown on Table 4 below.
On Table 4 above, the broad weight (Wb) is 505 and the targeted weight (Wt) is 706. In the example shown on Table 4 above, the model selector 208 may select the fourth candidate model because the fourth candidate model is associated with the highest validation score (Sv). Alternatively or additionally, in some examples, the model selector 208 selects one of the candidate models 144 that satisfies (e.g., is greater than) a threshold validation score. In some such examples, if none of the candidate models 144 satisfy the threshold validation score, the model selector 208 does not select any of the candidate models 144. In the example shown on Table 4 above, if the threshold validation score is 0.80, the model selector 208 does not select any of the candidate models 144. In some such examples, the model selector 208 instructs the age modeler 128 (
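A minimal sketch of the selection step, assuming Equation 4 takes the form reconstructed above (the candidate score pairs below are hypothetical; the broad and targeted weights 505 and 706 follow the Table 4 example):

def validation_score(broad, targeted, w_broad, w_targeted):
    """Equation 4 (as reconstructed above): weighted average of the broad
    score and the targeted score."""
    return (w_broad * broad + w_targeted * targeted) / (w_broad + w_targeted)

def select_model(candidates, w_broad, w_targeted, threshold=None):
    """Return the candidate with the highest validation score, or None when a
    threshold is given and no candidate exceeds it (so new candidate models
    can be requested)."""
    scored = {name: validation_score(sb, st, w_broad, w_targeted)
              for name, (sb, st) in candidates.items()}
    best = max(scored, key=scored.get)
    if threshold is not None and scored[best] <= threshold:
        return None, scored
    return best, scored

# Hypothetical (broad score, targeted score) pairs for four candidate models.
candidates = {"model_1": (0.90, 0.55), "model_2": (0.85, 0.60),
              "model_3": (0.88, 0.58), "model_4": (0.92, 0.62)}
best, scores = select_model(candidates, w_broad=505, w_targeted=706)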
While an example manner of implementing the model validator 130 of
Flowcharts representative of example machine readable instructions for implementing the example model validator 130 of
As mentioned above, the example processes of
Otherwise, the broad scorer 202 selects an age category (j) (block 508). For example, the broad scorer 202 may select the 13-17 age category. The example broad scorer 202 determines the error (ej) for the age category selected at block 508 based on the predicted age categories for the audience member records of the validation sets (block 510). In some examples, the broad scorer 202 determines the error (ej) for the age category according to Equation 1 above. The example broad scorer 202 determines whether there is another age category for which to determine the error (block 512). If there is, the example broad scorer 202 selects the next age category (block 508). Otherwise, the broad scorer 202 calculates the broad score (Sb) based on the errors (ej) calculated at block 510 (block 514). In some examples, the broad scorer 202 calculates the broad score (Sb) based on Equation 2 above. The example program of
The example targeted scorer 204 executes the candidate model 144 retrieved at block 602 to determine the predicted age categories for the audience member records in the validation set that have a true age in the age category selected at block 604 (block 606). For example, for 105 audience member records in the validation set with the true age in the 19-34 age category, the candidate model 144 may predict that 13 of the audience member records are in the 13-18 age category, 79 of the audience member records are in the 19-34 age category, and 13 of the audience member records are in the 35-54 age category. The example targeted scorer 204 determines the impulse response of the age category selected at block 604 (block 608). In the example above, the impulse response of the 19-34 age category is 0.75. In some examples, the targeted scorer 204 applies the weight (w) to the impulse response. In some such examples, the weight is equal to the quantity of audience member records in the validation set with the true age in the selected age category. In the example above, the weight (w) may be 105 and the weighted impulse response for the 19-34 age category may be 78.75. In some examples, the weight is also affected by other demographic measures, such as the percentage of the population in that age category. For example, the weight (w) for the 19-34 age category may be 105×0.21, and the weighted impulse response for the 19-34 age category may be 16.54.
The example targeted scorer 204 determines whether there is another age category for which to calculate another impulse response (block 610). If there is another age category, the example targeted scorer 204 selects the next age category (block 604). Otherwise, the targeted scorer 204 determines the targeted score (St) based on the weighted impulse responses of the age categories (block 612). The example program of
The processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. In the illustrated example, the processor 712 is structured to include the example broad scorer 202, the example targeted scorer 204, the example model evaluator 206, and the example model selector 208.
The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller.
The processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.
In the illustrated example, one or more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and commands into the processor 712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example. The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.
The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data. Examples of such mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives.
Coded instructions 732 of
From the foregoing, it will be appreciated that examples disclosed herein allow objective evaluation of age correction models before the age correction model(s) is/are deployed. As such, the examples disclosed herein reduce processor resource use (e.g., processor cycles, etc.) by reducing and/or eliminating the verification of the model after live audience member records are processed. That is, the results of the age correction model on the live audience member records do not need to be revalidated.
Furthermore, examples disclosed herein solve a problem specifically arising in the realm of computer networks in the Internet age. Namely, as a large variety of media is increasingly accessed via the Internet by more people, the AME cannot rely on traditional techniques (e.g., telephone surveys, panelist logbooks, etc.) to measure audiences of the variety of the media. Additionally, because the database proprietor data used to measure the audiences is self-reported, the database proprietor data may include inaccuracies that cannot be corrected or verified by the AME through the traditional techniques. For example, because the audience member interacts with the database proprietor in a first Internet domain, the AME in a second Internet domain, and the media in a third Internet domain, the AME cannot verify the demographic information (e.g., true age, etc.) of the audience member using the traditional techniques (e.g., a survey, etc.). Examples disclosed herein solve this problem by using demographic information and activity data of known audience members (e.g., the panelists) that interact with the database proprietor in the first Internet domain and the AME in the second Internet domain to correct the demographic information of unknown audience members (e.g., audience members that interact with the database proprietor in the first Internet domain without interacting with the AME in the second Internet domain).
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
This patent claims benefit of U.S. Provisional Application Ser. No. 62/167,768, which was filed on May 28, 2015, and is hereby incorporated herein by reference in its entirety.