Systems and methods for generating a schedule

Information

  • Patent Grant
  • Patent Number
    11,336,940
  • Date Filed
    Wednesday, May 5, 2021
  • Date Issued
    Tuesday, May 17, 2022
  • Inventors
    • Pinkney; Sean (Reston, VA, US)
    • Kohn; Jordan (Reston, VA, US)
  • Examiners
    • Hong; Michael H
  • Agents
    • BakerHostetler
Abstract
Data indicative of times at which at least one item of supplemental video content was output via a plurality of client devices may be received. A first time at which the at least one item of supplemental video content was output via a greatest quantity of client devices of the plurality of client devices may be determined. A second time at which the at least one item of supplemental video content was output via a second greatest quantity of client devices of the plurality of client devices may also be determined. A schedule associated with output of the at least one item of supplemental video content may be generated based at least on the first time and the second time.
Description
BACKGROUND

A company or an individual may choose to advertise for a variety of different reasons. For example, a company or an individual may create an advertising campaign (e.g. ad campaign) in order to create or improve brand awareness for a new or an existing product or service. Similarly, a company or an individual may create an ad campaign in order to drive sales of a product or service. Accordingly, it may be desirable to have a clear understanding of the times at which advertisements, such as those associated with an ad campaign, are consumed by viewers.


SUMMARY

Methods and systems are disclosed for generating an advertising schedule (e.g. ad schedule). At least one item of supplemental video content, such as an advertisement or a copy of an advertisement, may be received. The at least one item of supplemental video content may be received, for example, from an advertiser. The at least one item of supplemental video content may be sent to a computing device, such as a computing device at a remote location. The computing device may utilize the at least one item of supplemental video content to determine data indicative of times at which the at least one item of supplemental video content was output via a plurality of client devices. For example, the data indicative of times at which the at least one item of supplemental video content was output via the plurality of client devices may be determined, at least in part, using automatic content recognition (ACR).


The data indicative of times at which the at least one item of supplemental video content was output via the plurality of client devices may be received, for example, from the computing device. Based on the received data, a first time at which the at least one item of supplemental video content was output via a greatest quantity of client devices of the plurality of client devices may be determined. Based on the received data, a second time at which the at least one item of supplemental video content was output via a second greatest quantity of client devices of the plurality of client devices may also be determined. The first time and the second time may be utilized, at least in part, to determine a start time and an end time associated with output of the at least one item of supplemental video content.


Based at least on the first time and the second time, a schedule associated with output of the at least one item of supplemental video content may be generated. For example, if the first time and the second time are utilized to determine the start time and the end time associated with output of the at least one item of supplemental video content, the schedule may indicate the start time and the end time. The schedule may be indicated and/or output. The schedule may be utilized to determine when an advertisement was output, such as when an advertisement was output live. Content viewed by at least one user may be identified based at least in part on the schedule.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments and together with the description, serve to explain the principles of the methods and systems:



FIG. 1 illustrates a block diagram of an example environment.



FIG. 2 illustrates an example of a system in which data indicative of times at which at least one item of supplemental video content was output can be generated.



FIG. 3 illustrates an example of a data sample indicative of times at which an item of supplemental video content was output.



FIG. 4 illustrates an example of a chart displaying a sample of data indicative of times at which an item of supplemental video content was output.



FIG. 5 illustrates an example of a sample of an ad schedule.



FIG. 6 illustrates an example method for generating an ad schedule.



FIG. 7 illustrates a block diagram of an example computing device.





DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

A company or an individual that has chosen to advertise, for example via an ad campaign, may want to gain an understanding of the times at which an advertisement has been output, such as on live television. The company or individual may also want to gain an understanding of how many viewers the advertisement reached at these times and/or which network output the advertisement at these times. To gain such an understanding, the company or individual may utilize an ad schedule. An ad schedule may provide useful information about the output of an advertisement, such as the times at which the advertisement was output live, network information, impression information, etc.


However, a company or individual that wants an ad schedule may need to wait a significant amount of time for the ad schedule to be generated. For example, it may take weeks to process the data necessary for generation of the ad schedule. This delay may be undesirable for a company or individual that does not want to wait weeks for delivery of the ad schedule. An additional downside to current ad schedule generation techniques is that they are often riddled with errors. For example, errors in an ad schedule may result from changes to television schedules. Changes to television schedules may include, for example, live interruptions and/or live events running over their scheduled times, such as a sports game going into overtime. As another example, errors in an ad schedule may result from human error in capturing the advertisement. As yet another example, errors in an ad schedule may result from targeted and/or hyper-local advertisements, which are difficult for ad schedule providers to accurately capture.


Accordingly, techniques for generating a more accurate ad schedule in a more efficient manner are desirable. Such a technique utilizes a large, behaviorally tracked television panel to detect when advertisements were output (e.g. played) across the panel. The time(s) at which the greatest number of client devices output the advertisement may be the live television occurrence of the advertisement, as it is likely that a large group of households viewed the advertisement at the same time only during the live television occurrence of the advertisement. The start time and end time for the advertisement may be determined by analyzing a window of time near this peak time(s). An ad schedule may be generated, at least in part, based on the start time and end time for the advertisement. This technique allows the ad schedule to be generated soon after the behaviorally tracked television panel becomes available—thus eliminating the long wait-time associated with traditional ad schedule generation techniques.



FIG. 1 illustrates an example hardware and network configuration in which the systems and methods described herein may be implemented. Such a hardware and network system 100 includes a processor 102, a database 110, at least one external resource 112, at least one user interface device 114, at least one content source 120, and an advertiser network 118. The processor 102, database 110, at least one external resource 112, at least one user interface device 114, at least one content source 120, and advertiser network 118 are in communication via a network 116.


The processor 102 may be in communication with the database 110. The processor 102 may receive data stored in the database 110. The database 110 may store a variety of different data. For example, the database 110 may store data received from at least one external resource 112. In an embodiment, the external resource(s) 112 may include a system configured to provide automatic content recognition (ACR) services. The system configured to provide ACR services may collect data associated with primary content and/or supplemental content (e.g. advertising) viewing habits from user devices, such as televisions, smart televisions, etc. The system configured to provide ACR services may collect such data from a large number of user devices, such as millions of user devices. The data collected by the system configured to provide ACR services may be near real-time data and may be delivered, such as to the database 110, the same day. The data collected by the system configured to provide ACR services may enable advertisers, networks, agencies, and/or ad tech companies to better understand audience viewing trends.


An exemplary system 200 configured to provide ACR services is depicted in FIG. 2. While the exemplary system 200 provides digital fingerprinting ACR services, in another embodiment a system configured to provide ACR services may provide digital watermarking ACR services in addition to, or as an alternative to, digital fingerprinting ACR services. The system 200 includes at least one content source 201, a media processor 202, and a plurality of client devices 207a-n.


The media processor 202 may include an ingestion server 204, a fingerprint server 206, and/or a fingerprint database 212. The media processor 202 may be configured to receive content from the content source(s) 201. The content from the content source(s) 201 may include, for example, television programming from various sources. If the content includes television programming, the television programming may be received at the media processor 202 in advance of the television programming passing through the remainder of the distribution network on its way to viewers' homes. For example, the television programming may be received at the media processor 202 approximately 3 to 5 seconds in advance. If the content includes satellite television content, the satellite television content may be received even further in advance, such as 5 to 8 seconds in advance.


The content from the content source(s) 201 may be ingested by the ingestion server 204. The ingestion server 204 may be in communication with the fingerprint server 206. The fingerprint server 206 may process the ingested content, such as the ingested television signals. For example, the fingerprint server 206 may include a fingerprint generator 209 configured to generate digital fingerprints associated with the ingested content. To generate the digital fingerprints associated with the ingested content, the fingerprint server 206 may identify, extract, and then summarize characteristic components of the content, enabling that content to be uniquely identified. For example, the fingerprint server 206 may identify portions of the content that are unique to that piece of content, such as a handful of pixels that appear at a particular time during playback of the content. A digital fingerprint may be created at any point in the content, from any copy of the content.
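The fingerprint-generation step described above might be sketched as follows. This is a deliberately minimal illustration, assuming a fingerprint is simply a cryptographic hash over a handful of sampled pixel values at a given playback time; real ACR systems use perceptual features that tolerate compression and scaling, and the function and field names here are hypothetical.

```python
import hashlib


def generate_fingerprint(frame_pixels, timestamp):
    """Summarize characteristic components of a frame into a compact digest.

    `frame_pixels` is assumed to be a sequence of (x, y, value) tuples for a
    handful of sampled pixel positions -- an illustrative stand-in for the
    "handful of pixels" described above.
    """
    payload = ",".join(f"{x}:{y}:{v}" for x, y, v in frame_pixels)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return {"timestamp": timestamp, "digest": digest}


# Two copies of the same content produce the same fingerprint...
fp_a = generate_fingerprint([(10, 20, 128), (30, 40, 64)], timestamp=5.0)
fp_b = generate_fingerprint([(10, 20, 128), (30, 40, 64)], timestamp=5.0)
assert fp_a["digest"] == fp_b["digest"]

# ...while even slightly different content produces a different one.
fp_c = generate_fingerprint([(10, 20, 129), (30, 40, 64)], timestamp=5.0)
assert fp_a["digest"] != fp_c["digest"]
```

Because the digest is derived only from the content itself, a fingerprint can be created at any point in the content, from any copy of the content, consistent with the description above.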


The digital fingerprints generated by the fingerprint generator 209 may be stored in a database, such as the fingerprint database 212. The digital fingerprints generated by the fingerprint generator and stored in the fingerprint database 212 may be used as reference points for identifying or matching unknown video or audio content, such as television programming, from a client device, such as the client devices 207a-n, in a remote location, such as a private home of a user of the client devices 207a-n.


The content may then be distributed to the plurality of client devices 207a-n. Each of the plurality of client devices 207a-n may include a client processor 208a-n. Each of the plurality of client devices 207a-n may include a television, a set-top box, a tablet, a phone, a computer, or a laptop. The client processor 208a-n may be configured to take samples (e.g. video segments) of the programming actually being displayed via the client devices 207a-n at any given point in time. The client processor 208a-n may send digital fingerprints of these samples back to the fingerprint server 206.


In an embodiment, the fingerprint server 206 includes a fingerprint matcher 210. The fingerprint matcher 210 may be configured to compare the samples received from the client processor 208a-n to the digital fingerprints already generated by the fingerprint generator 209 and stored in the fingerprint database 212. The fingerprint matcher 210 may compare the samples received from the client processor 208a-n to the digital fingerprints already generated by the fingerprint generator 209 and stored in the fingerprint database 212 to identify unknown video or audio content, such as television programming, from the client devices 207a-n. For example, to identify unknown content from the client devices 207a-n, the fingerprint matcher 210 may compare the samples received from the client processor 208a-n to the digital fingerprints already generated by the fingerprint generator 209 and stored in the fingerprint database 212 to find a “match.” A “match” may occur if a sample received from the client processor 208a-n is identical to an existing, reference digital fingerprint stored in the fingerprint database 212. As the identity of content associated with the reference digital fingerprints stored in the fingerprint database 212 is known, finding a “match” may identify the previously unknown content from the client devices 207a-n.


In an embodiment, the content received from the content source(s) 201 by the media processor 202 includes one or more items of supplemental content (e.g. advertisements) and/or copies of such supplemental content. If the content received by the media processor 202 includes one or more items of supplemental content and/or copies of such supplemental content, the content may be received by a system for generating an ad schedule, such as the system 100 of FIG. 1. The one or more items of supplemental content and/or copies of such supplemental content may include advertisements for which an ad schedule is being generated by the system 100. For example, if a company or individual that is running an ad campaign wants to generate an ad schedule associated with the campaign, the company or individual may do so themselves and/or hire an advertisement agency, such as the advertiser network 118, to create the ad campaign. A system, such as the system 100, may be used to generate the desired ad schedule. To do so, the system may send, to the media processor 202, one or more items of supplemental content and/or copies of such supplemental content.


The one or more items of supplemental content and/or copies of such supplemental content may be ingested by the ingestion server 204. As discussed above, the ingestion server 204 may be in communication with the fingerprint server 206. The fingerprint server 206 may process the ingested supplemental content and/or copies of such supplemental content. For example, the fingerprint generator 209 may generate digital fingerprints associated with the ingested supplemental content and/or copies of such supplemental content. To generate the digital fingerprints associated with the ingested supplemental content and/or copies of such supplemental content, the fingerprint server 206 may identify, extract, and then summarize characteristic components of the supplemental content and/or copies of such supplemental content, enabling that supplemental content to be uniquely identified. For example, the fingerprint server 206 may identify portions of the supplemental content and/or copies of such supplemental content that are unique to that piece of supplemental content, such as a handful of pixels that appear at a particular time during playback of the supplemental content or a sampling of audio that occurs at a particular time during playback of the supplemental content. A digital fingerprint may be created at any point in the supplemental content, from any copy of the supplemental content.


Once generated by the fingerprint generator 209, the digital fingerprints associated with the item(s) of supplemental content may be stored in the fingerprint database 212. The digital fingerprints associated with the item(s) of supplemental content stored in the fingerprint database 212 may be used as reference points for identifying unknown video or audio content from the client devices 207a-n as the item of supplemental content. For example, as described above, each of the client devices 207a-n may include the client processor 208a-n configured to take samples (e.g. audio and/or video segments) of the programming actually being displayed via the client devices 207a-n at any given point in time. The client processor 208a-n may send digital fingerprints of these samples back to the fingerprint server 206.


The fingerprint server 206 may include the fingerprint matcher 210. The fingerprint matcher 210 may be configured to compare the samples received from the client processor 208a-n to the digital fingerprints associated with the item(s) of supplemental content stored in the fingerprint database 212. The fingerprint matcher 210 may compare the samples received from the client processor 208a-n to the digital fingerprints associated with the item(s) of supplemental content to determine whether unknown audio or video content from the client devices 207a-n includes the item(s) of supplemental content. For example, the fingerprint matcher 210 may determine whether the client devices 207a-n output the item(s) of supplemental content.


To determine whether one or more of the client devices 207a-n output the item(s) of supplemental content, the fingerprint matcher 210 may compare the samples received from each of the client processors 208a-n to the digital fingerprints associated with the item(s) of supplemental content stored in the fingerprint database 212 to determine if a “match” is found for any of the samples. A “match” may be found if a sample, or at least a portion of a sample, received from the client processor 208a-n is identical to an existing, reference digital fingerprint associated with the item(s) of supplemental content stored in the fingerprint database 212. If a “match” is found for a particular client device 207a-n, this may indicate that that particular client device output the item of supplemental content.
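The exact-match comparison described above reduces, in the simplest case, to a dictionary lookup against the reference fingerprints. A minimal sketch, assuming each reference digest maps to a known content identifier; the names `find_matches`, `reference_db`, and the record fields are hypothetical.

```python
def find_matches(sample_fingerprints, reference_db):
    """Identify which client samples match a known reference fingerprint.

    `reference_db` maps digest -> content identifier (e.g. an ad ID).
    A "match" here is an exact digest comparison, per the exact-match
    description above; each match records which client device output
    the identified content and when.
    """
    matches = []
    for sample in sample_fingerprints:
        content_id = reference_db.get(sample["digest"])
        if content_id is not None:
            matches.append((sample["client_id"], content_id, sample["timestamp"]))
    return matches


reference_db = {"abc123": "ad-42"}
samples = [
    {"client_id": "01", "digest": "abc123", "timestamp": "05:23:22"},
    {"client_id": "02", "digest": "zzz999", "timestamp": "05:24:10"},
]
# Only client "01" output the known item of supplemental content.
assert find_matches(samples, reference_db) == [("01", "ad-42", "05:23:22")]
```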


The media processor 202 may store the results of the “matching” performed by the fingerprint matcher 210 in a database, such as the fingerprint database 212 or a separate database. The stored results may indicate each client device 207a-n that output the item of supplemental content. For each client device 207a-n that output the item of supplemental content, the stored results may indicate a network (e.g. channel) that output the item of supplemental content to the client device 207a-n. For each client device 207a-n that output the item of supplemental content, the stored results may indicate a designated market area (DMA) (e.g. geographic location) associated with that client device and/or a household identification number associated with that client device. A DMA is a region of the United States that is used to define television and radio markets. For each client device 207a-n that output the item of supplemental content, the stored results may indicate a start time and/or an end time associated with output of the supplemental content.



FIG. 3 depicts an exemplary data sample 300 representative of the results of the “matching” performed by the fingerprint matcher 210. For example, the data sample 300 may be stored in a database, such as the fingerprint database 212 or a separate database. The data sample 300 may be indicative of client devices, such as the client devices 207a-n, that output an item of supplemental content. Each row in the data sample 300 may be representative of a different client device. For example, each row in the data sample 300 may be representative of a different television, smart television, tablet, laptop, etc. Each row in the data sample 300 may include one or more of a household identification number 301, a Designated Market Area (DMA) 302 (e.g. geographic location), an adjustment value 304, a start date 306, a start time 308, an end date 310, an end time 312, or a network number 314 associated with the respective client device.


For example, the first row of the data sample 300 is representative of a first client device. The data sample 300 indicates that the first client device is associated with a household identification number of “01,” and is associated with the Des Moines-Ames DMA. The data sample 300 also indicates that the first client device has an adjustment value of “2.376.” The data sample 300 indicates that the item of supplemental content was output starting at a time of “05:23:22” on Apr. 3, 2019 and ending at a time of “05:23:37” on Apr. 3, 2019. The difference between the start time and the end time may indicate the duration of output of the item of supplemental content on this particular client device. For example, the data sample 300 indicates that the first client device output the item of supplemental content for 15 seconds. The data sample 300 also indicates that the network associated with network number “9” output the item of supplemental content on the first client device. The remaining rows of the data sample 300, such as those representative of the client devices associated with household identification numbers “02” to “05,” indicate similar information about these other client devices, such as DMA, start and end time/date, and/or network number.
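The duration calculation described above (end time minus start time) can be expressed directly, assuming dates and times in the form shown in the data sample; the function name is illustrative.

```python
from datetime import datetime


def output_duration(start_date, start_time, end_date, end_time):
    """Seconds for which a client device output the item of supplemental content."""
    fmt = "%b %d %Y %H:%M:%S"
    start = datetime.strptime(f"{start_date} {start_time}", fmt)
    end = datetime.strptime(f"{end_date} {end_time}", fmt)
    return (end - start).total_seconds()


# The first row of the data sample: 05:23:22 to 05:23:37 on Apr. 3, 2019,
# i.e. a 15-second output of the item of supplemental content.
assert output_duration("Apr 03 2019", "05:23:22", "Apr 03 2019", "05:23:37") == 15.0
```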


In an embodiment, the time at which an item of supplemental content aired may be different than the time at which the item of supplemental content was detected (e.g. recognized) by ACR, such as by the media processor 202 of FIG. 2. The adjustment value associated with a client device may indicate a measurement of time between when the item of supplemental content aired and when the item of supplemental content was detected by ACR. Accordingly, the start time, such as the start time 308, associated with the respective client device may indicate the time of detection/recognition, but not necessarily the airing time.
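The relationship between the detection time and the airing time might be sketched as follows. This assumes the adjustment value (e.g. “2.376” in the data sample) is a latency measured in seconds that is subtracted from the detection time; the description above does not fix the units or direction of the adjustment, so this is illustrative only.

```python
from datetime import datetime, timedelta


def estimated_airing_time(detected_time, adjustment_seconds):
    """Shift an ACR detection time back by the adjustment value.

    Approximates the original airing time, given that the start time
    recorded by ACR reflects detection/recognition rather than airing.
    """
    detected = datetime.strptime(detected_time, "%H:%M:%S")
    aired = detected - timedelta(seconds=adjustment_seconds)
    return aired.strftime("%H:%M:%S")


# Detection at 05:23:22 with a ~2.4-second adjustment implies airing at ~05:23:19.
assert estimated_airing_time("05:23:22", 2.376) == "05:23:19"
```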


Referring back to FIG. 1, the processor 102 may be configured to process and/or analyze data stored in the database 110. The processor 102 may include a variety of different components configured to process and/or analyze data. For example, the processor may include a data grouping component 104, a data analysis component 106, and/or a schedule generation component 108. One or more of these components 104-108 may be used to process and/or analyze data stored in the database 110 in a manner that facilitates the generation of an ad schedule.


For example, the processor 102 may be configured to process and/or analyze data received by the database 110 from a system configured to provide ACR services, such as the system 200. The data received by the database 110 from a system configured to provide ACR services, such as the system 200, may resemble the exemplary data sample 300 of FIG. 3. However, the data received by the database 110 from the system 200 may be much larger than the exemplary data sample 300. For example, the data received by the database 110 from the system 200 may include hundreds, thousands, or even millions of rows, with each row representative of a particular client device that has output a particular item of supplemental content.


In an embodiment, the processor 102 includes the data grouping component 104. The data grouping component 104 may group data based on a variety of different characteristics. For example, the data grouping component 104 may group data by time. If the data is received from a system configured to provide ACR services, such as the system 200 (e.g. data that resembles the exemplary data sample 300), the data grouping component 104 may group the data by time windows, such as 15-minute time windows, 30-minute time windows, or any other window of time. For example, each client device that output the item of supplemental content within the same time window may be grouped together. In other words, each client device that belongs to a single group may have output the item of supplemental content within the same 15-minute (or any other duration) window.


As another example, the data grouping component 104 may group data based on location (e.g. geographic location). If the data is received from a system configured to provide ACR services, such as the system 200 (e.g. data that resembles the exemplary data sample 300), the data grouping component 104 may group the data by DMA (e.g. DMA 302). For example, each client device that output that item of supplemental content in the same DMA may be grouped together. In other words, each client device that belongs to a single group may be associated with a user that resides in the same DMA, such as Des Moines-Ames, San Francisco-Oak-San Jose, Denver, etc.


As yet another example, the data grouping component 104 may group data based on network. If the data is received from a system configured to provide ACR services, such as the system 200 (e.g. data that resembles the exemplary data sample 300), the data grouping component 104 may group the data by network number (e.g. network number 314). For example, each client device that output that item of supplemental content via the same network may be grouped together. In other words, each client device that belongs to a single group may have output the item of supplemental content via the same network.


In an embodiment, the data grouping component 104 may group data based on more than one characteristic, such as those discussed above or different characteristics entirely. For example, if the data is received from a system configured to provide ACR services, such as the system 200 (e.g. data that resembles the exemplary data sample 300), the data grouping component 104 may group the data based on time, location, and network. If the data grouping component 104 groups data based on all three of these characteristics, then each client device in a single group may have output the item of supplemental content within the same time window, in the same DMA, and via the same network. Alternatively, or additionally, the data grouping component 104 may group the data based on just one or two of time, location, or network.
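The multi-characteristic grouping described above can be sketched as bucketing each row by a composite (time window, DMA, network) key. This is a simplified illustration, assuming each row carries a start time expressed in seconds since midnight along with the DMA and network-number fields of the data sample; the function and field names are hypothetical.

```python
from collections import defaultdict


def group_detections(rows, window_minutes=15):
    """Group client-device detections by (time window, DMA, network).

    Devices in the same group output the item of supplemental content
    within the same time window, in the same DMA, and via the same network.
    """
    window_seconds = window_minutes * 60
    groups = defaultdict(list)
    for row in rows:
        window = row["start_seconds"] // window_seconds  # integer window index
        key = (window, row["dma"], row["network"])
        groups[key].append(row["household_id"])
    return dict(groups)


rows = [
    {"household_id": "01", "start_seconds": 19402, "dma": "Des Moines-Ames", "network": 9},
    {"household_id": "02", "start_seconds": 19410, "dma": "Des Moines-Ames", "network": 9},
    {"household_id": "03", "start_seconds": 19410, "dma": "Denver", "network": 9},
]
groups = group_detections(rows)
# Households 01 and 02 share a 15-minute window, DMA, and network; 03 does not.
assert groups[(19402 // 900, "Des Moines-Ames", 9)] == ["01", "02"]
```

Grouping on one or two of the characteristics, as the paragraph above notes, would simply use a shorter key.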


In an embodiment, the processor 102 includes the data analysis component 106. The data analysis component 106 may analyze data received by the processor, such as data received from a system configured to provide ACR services, such as the system 200 (e.g. data that resembles the exemplary data sample 300). The data analysis component 106 may analyze data that has already been processed by the data grouping component 104. For example, the data analysis component 106 may analyze a data group that has been created by the data grouping component 104.


In an embodiment, the data analysis component 106 may analyze data received from the system 200 that has already been grouped by the data grouping component 104 based on time, location, and network. For example, the data analysis component 106 may analyze one or more groups of data, with each individual group representative of client devices that have output the item of supplemental content within the same time window, in the same DMA, and via the same network. The data analysis component 106 may analyze a large number of these groups, with each group representative of a large number of client devices (e.g. hundreds, thousands, or millions of client devices).


The data analysis component 106 may analyze each of these data groups to determine, for each group, the time(s) at which the item of supplemental content was output live. To determine the time(s) at which the item of supplemental content was output live, the data analysis component 106 may analyze the data to identify a large number of client devices that output the item of supplemental content at the same time or approximately at the same time (e.g. within a predetermined time threshold, such as one second). If a large number of client devices output the item of supplemental content at the same time or approximately at the same time, this may indicate that the item of supplemental content was output live (e.g. playing live on that channel).


The data analysis component 106 may analyze each of these data groups to determine, for each group, the start time and end time of the live output of the item of supplemental content. To determine the start time and the end time of the live output of the item of supplemental content, the data analysis component 106 may identify a first time within the time window associated with the group. The first time may occur when the greatest number of client devices output the item of supplemental content. For example, if 500,000 client devices are associated with a group, the first time may occur at the time that the greatest percentage of these 500,000 client devices were all outputting the item of supplemental content.


To determine the start time and the end time of the live output of the item of supplemental content, the data analysis component 106 may additionally identify a second time within the time window associated with the group. The second time may occur when the second greatest number of client devices output the item of supplemental content. For example, if 500,000 client devices are associated with a group, the second time may occur at the time that the second greatest percentage of these 500,000 client devices were all outputting the item of supplemental content.


The first time and the second time may be representative of the start time and the end time of output of the item of supplemental content (or vice versa). For example, the first time may be representative of the start time of output of the item of supplemental content and the second time may be representative of the end time of output of the item of supplemental content. Conversely, the second time may be representative of the start time of output of the item of supplemental content and the first time may be representative of the end time of output of the item of supplemental content. To determine whether the first time or the second time is representative of the start time of output of the item of supplemental content, the data analysis component 106 may determine which of the first time and the second time occurs first-in-time. For example, the data analysis component 106 may determine whether the first time occurs before the second time, or vice versa. If the first time occurs before the second time, then the first time may be representative of the start time of output of the item of supplemental content and the second time may be representative of the end time of output of the item of supplemental content. Conversely, if the second time occurs before the first time, then the second time may be representative of the start time of output of the item of supplemental content and the first time may be representative of the end time of output of the item of supplemental content.
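The ordering logic described above reduces to a single comparison: whichever of the two peak times occurs first-in-time is the start time, and the other is the end time. A minimal sketch, with times expressed as seconds since midnight; the function name is illustrative.

```python
def start_and_end(first_time, second_time):
    """Order the two peak times: the earlier is the start, the later the end."""
    if first_time < second_time:
        return first_time, second_time
    return second_time, first_time


# Here the highest peak (first time) occurs after the second-highest peak,
# so the second time is the start and the first time is the end.
assert start_and_end(19417, 19402) == (19402, 19417)
```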


If the second time is representative of either the start time or the end time of the item of supplemental content, then it may occur within a predetermined time frame of the first time. For example, the second time and the first time may not be separated from each other by a period of time that is larger than the duration of the item of supplemental content. Accordingly, to determine the second time, the data analysis component 106 may first determine an appropriate time frame within which to identify the second time. The time frame may include the first time and may have a duration equal to a multiple (e.g. twice) of the duration of the item of supplemental content. By ensuring that the second time occurs within this time frame, the data analysis component 106 may identify a second time that is not separated from the first time by a period of time that is larger than the duration of the item of supplemental content.
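The selection logic described above can be sketched in a short routine. This is a minimal illustration, not the patented implementation: it assumes per-second device counts are available as a dictionary keyed by integer timestamps, and that the search frame for the second time spans twice the ad's duration centered on the first time.

```python
def find_start_end(counts, ad_duration):
    """Identify the start and end of a live airing from per-second counts.

    counts: dict mapping a second (int timestamp) to the number of client
    devices outputting the item of supplemental content at that second.
    ad_duration: duration of the item of supplemental content, in seconds.
    """
    # First time: the second at which the greatest number of devices
    # output the item of supplemental content.
    first = max(counts, key=counts.get)
    # Restrict the search for the second time to a frame that includes the
    # first time and spans twice the ad duration, so the two times are
    # never separated by more than the duration of the content itself.
    frame = {t: n for t, n in counts.items()
             if abs(t - first) <= ad_duration and t != first}
    second = max(frame, key=frame.get)
    # Whichever time occurs first-in-time is the start; the other is the end.
    return (first, second) if first < second else (second, first)
```

For a group whose highest peak falls at the end of the airing rather than the start, the final comparison still returns the times in (start, end) order.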


In an embodiment, the data analysis component 106 may generate a chart or graph representative of a data group in order to analyze it. FIG. 4 illustrates an exemplary chart 400 representative of a data group. The x-axis of the chart 400 is representative of time and may represent a window of time that is identical or similar to the window of time associated with the data group. The y-axis of the chart 400 is representative of density, but in other embodiments the y-axis of the chart may be representative of percentage of client devices (e.g. percentage of client devices that output the item of supplemental content), number of client devices (e.g. quantity of client devices that output the item of supplemental content), or any other metric that indicates when the item of supplemental content was output live.


To generate the chart 400, the data analysis component 106 may, based on the data in the data group, aggregate the number of client devices that output the item of supplemental content, second-by-second. The peaks on the chart may be representative of the first time and the second time. As discussed above, the first time may occur when the greatest number of client devices output the item of supplemental content. Accordingly, the highest peak on the chart, such as the peak 402, may be representative of the first time. As also discussed above, the second time may occur when the second greatest number of client devices output the item of supplemental content. Accordingly, the second highest peak on the chart, such as the peak 404, may be representative of the second time.
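The second-by-second aggregation behind the chart can be sketched as follows. This is an assumption-laden illustration: it takes each client device's output as a simple (start, end) pair of integer seconds, which is a simplification of the matching data described elsewhere in this document.

```python
from collections import Counter

def aggregate_by_second(rows):
    """Aggregate, second-by-second, how many client devices were
    outputting the item of supplemental content.

    rows: iterable of (start_ts, end_ts) pairs, one per client device,
    marking the span during which that device output the content.
    Returns a Counter mapping each second to a device count; its peaks
    correspond to the first and second times on a chart like chart 400.
    """
    counts = Counter()
    for start, end in rows:
        for second in range(start, end + 1):
            counts[second] += 1
    return counts
```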


As depicted by the chart 400, the first and second times are separated from each other by a time period equal to the duration of the item of supplemental content (e.g. 14.3 seconds). As the peak 402 occurs before the peak 404, the first time is representative of the start time of live output of the item of supplemental content and the second time is representative of the end time of live output of the item of supplemental content. However, as noted above, for other data groups, the second time may be representative of the start time of output of the item of supplemental content if it occurs before the first time.


Referring back to FIG. 1, in an embodiment, the processor 102 includes the schedule generation component 108. The schedule generation component 108 may be configured to generate an ad schedule. The ad schedule may indicate the estimated time(s) an item of supplemental content was output live, by network and/or location. The schedule generation component 108 may generate the ad schedule based on the results/output of the data analysis component 106. For example, the schedule generation component 108 may generate an ad schedule for each data group analyzed by the data analysis component 106. The generated ad schedule may indicate, for each output (e.g. airing) of the item of supplemental content, the start time and end time of output of the item of supplemental content, as determined by the data analysis component 106. The ad schedule may also indicate, for each output (e.g. airing), the network number and/or the number of impressions. The number of impressions for an airing indicates how many client devices output the item of supplemental content at that time on that network.
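One way a schedule row per analyzed data group could be assembled is sketched below. The field names and the shape of the analysis output are assumptions for illustration; the patent does not prescribe a data format.

```python
from dataclasses import dataclass

@dataclass
class ScheduleRow:
    start: int          # start time of the airing (epoch seconds)
    end: int            # end time of the airing (epoch seconds)
    network: int        # network number that output the content
    impressions: int    # devices that output the content during the airing

def generate_schedule(analyzed_groups):
    """Build one ad-schedule row per analyzed data group.

    analyzed_groups: iterable of dicts with 'start', 'end', 'network',
    and 'impressions' keys, as a data analysis step might produce.
    """
    return [ScheduleRow(g["start"], g["end"], g["network"], g["impressions"])
            for g in analyzed_groups]
```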


The ad schedule may be specific to a particular DMA (designated market area). For example, the ad schedule may indicate the estimated time(s) an item of supplemental content was output live in a particular DMA. Alternatively, the ad schedule may not be specific to a particular DMA. If the ad schedule is not specific to a particular DMA, the ad schedule may be aggregated across multiple DMAs or geographic regions. Times may be shifted for broadcast networks based on time-zone, as determined by the DMA.
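The time-zone shift mentioned above can be sketched as a lookup from DMA to local zone. The `DMA_TIMEZONES` mapping here is hypothetical; a real system would resolve a DMA's time zone from market reference data.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical mapping from DMA name to IANA time zone, for illustration only.
DMA_TIMEZONES = {
    "New York": "America/New_York",
    "Los Angeles": "America/Los_Angeles",
}

def shift_for_dma(utc_ts, dma):
    """Convert a UTC airing timestamp into local time for a DMA."""
    local_zone = ZoneInfo(DMA_TIMEZONES[dma])
    return datetime.fromtimestamp(utc_ts, tz=timezone.utc).astimezone(local_zone)
```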


The individual or company that placed this item of supplemental content may use the generated ad schedule to gain an understanding of the times at which the item of supplemental content was output live, such as on live television. The individual or company may also use the ad schedule to gain an understanding of how many viewers the item of supplemental content reached at these times and/or which network output the item of supplemental content at these times. By using a large, behaviorally tracked television panel to generate the ad schedule, the processor 102 is able to generate the ad schedule soon after the behaviorally tracked television panel becomes available—thus eliminating the long wait-time associated with traditional ad schedule generation techniques. Accordingly, the individual or company is able to gain an understanding of its ad campaign's success and/or reach in a more efficient manner.



FIG. 5 depicts an exemplary ad schedule 500 associated with an item of supplemental content. The ad schedule 500 may be generated, for example, by the schedule generation component 108. The ad schedule 500 may be generated based, at least in part, on data that has already been grouped by the data grouping component and/or data that has already been analyzed by the data analysis component 106. For example, the ad schedule 500 may be generated based on the results/output of the data analysis component 106.


Each row in the ad schedule 500 is representative of a particular output (e.g. airing) of the item of supplemental content. For example, the ad schedule 500 may be representative of six different outputs, or airings, of the item of supplemental content. For each airing, the ad schedule 500 indicates a start time/date 502 and an end time/date 504. The start time/date 502 and/or the end time/date 504 for a particular airing may have been determined by the data analysis component 106. For each airing, the ad schedule 500 also indicates a duration 506. The duration 506 indicates how long the item of supplemental content was output during that particular airing. The duration 506 may be equal to the difference between the start time/date 502 and the end time/date 504.


For each airing, the ad schedule 500 also indicates a network number 508. The network number 508 may be determined based on the network number(s) 314 listed in the exemplary data sample 300, discussed above with reference to FIG. 3. The network number 508 associated with a particular airing may indicate which network and/or channel was responsible for outputting the item of supplemental content.


For each airing, the ad schedule 500 also indicates a number of impressions 510. The number of impressions 510 for an airing indicates how many client devices output the item of supplemental content at that time (on the network associated with the network number 508). For example, the first row of the ad schedule 500 indicates that 1929 client devices output the item of supplemental content on the network associated with network number 35, starting at 6:08 on Apr. 1, 2019 and ending at 6:09 on Apr. 1, 2019. The first row of the ad schedule 500 also indicates that these 1929 client devices output the item of supplemental content for 17 seconds.


Referring back to FIG. 1, the content source(s) 120 may include a content database. The content database may include content, such as the content 122. The content source(s) 120 may be associated, for example, with a website provider or with a video content provider. If the content source(s) 120 is associated with a website provider, the content 122 may include, for example, a website. If the content source(s) 120 is associated with a video content provider, the content 122 may include, for example, video content. Video content may refer generally to any video content produced for viewer consumption regardless of the type, format, genre, or delivery method. Video content may comprise video content produced for broadcast via over-the-air radio, cable, satellite, or the internet. Video content may comprise digital video content produced for digital video streaming or video-on-demand. Video content may comprise a movie, a television show or program, an episodic or serial television series, or a documentary series, such as a nature documentary series. As yet another example, video content may include a regularly scheduled video program series, such as a nightly news program. The content 122 may be associated with one or more content distributors that distribute the content 122 to viewers for consumption.


The advertiser network 118 may include an advertisement database. The advertisement database may include supplemental content, such as advertisements. The advertisements may be received from the advertiser network 118 and may be stored, for example, in the database 110. The user interface device(s) 114 may include at least one of a television, set-top box, tablet, phone, computer, or laptop.


The processor 102, electronic storage 110, user interface device(s) 114, external resources 112, advertiser network 118, and content source(s) 120 may each comprise one or more computing devices and/or network devices. The network 116 may comprise one or more public networks (e.g., the Internet) and/or one or more private networks. A private network may include a wireless local area network (WLAN), a local area network (LAN), a wide area network (WAN), a cellular network, or an intranet. The network 116 may comprise wired network(s) and/or wireless network(s).


As discussed above, it may be desirable to have a more efficient mechanism for generating a more accurate ad schedule. For example, it may be desirable to generate an ad schedule based on a large behaviorally tracked television panel. Such an ad schedule may be generated shortly after this panel data becomes available, which eliminates the traditional long wait-time associated with ad schedule generation. FIG. 6 illustrates an exemplary method 600 for generating an ad schedule, such as one that resembles the exemplary ad schedule 500 of FIG. 5. The method 600 may be performed, for example, by the processor 102 of FIG. 1.


To generate the ad schedule, an item of supplemental content, such as an advertisement, or a copy of the item of supplemental content may be received. For example, the item of supplemental content may be received from an advertiser. The item of supplemental content may be sent to a computing device (such as the media processor 202 of FIG. 2). The computing device may be at a remote location and may be part of a different system than the system that received the item of supplemental content. For example, the system 100 may receive the item of supplemental content from an advertiser, and then the system 100 may forward the item of supplemental content to the media processor 202 of FIG. 2.


The computing device may utilize the item of supplemental content to determine data indicative of times at which the item of supplemental content was output via a plurality of client devices. This data may be sent back to the system 100, such as the processor 102 of system 100. The data may be utilized to determine a start time and an end time associated with output of the item of supplemental content via the plurality of client devices. An ad schedule may be generated, at least in part, based on the determined start time and end time. The generated ad schedule may be indicated and utilized in a variety of different ways, such as to determine content viewed by at least one user.


To generate an ad schedule for an item of supplemental content (e.g. an advertisement), a copy of that item of supplemental content may be needed. At 602, at least one item of supplemental video content may be received. The item of supplemental video content may be, for example, a copy of the item of supplemental video content in a video format (e.g. MP4). The item of supplemental video content may be received from an advertiser, or any other entity or system that has a copy of the item of supplemental video content in a video format. To receive the item of supplemental video content, a request for the item of supplemental video content may be sent, such as to the advertiser or the other entity or system that has a copy of the item of supplemental video content in a video format.


The item of supplemental video content may be used to determine times at which the item of supplemental video content was output to users (e.g. viewed by users). At 604, the item of supplemental video content may be sent to a computing device. The computing device may be located remote to the device that received the item of supplemental video content. For example, the computing device may belong to a different system than the device that received the item of supplemental video content at step 602. The computing device may be, for example, the media processor 202 described above with respect to FIG. 2. Alternatively, the computing device may belong to the same system and/or be the same device as the device that received the item of supplemental video content, such as the processor 102 of FIG. 1. The computing device may utilize the item of supplemental content to determine times at which the item of supplemental video content was output to users (e.g. viewed by users).


In an embodiment, the computing device may include an ingestion server, such as ingestion server 204, a fingerprint server, such as fingerprint server 206, and/or a fingerprint database, such as fingerprint database 209. The item of supplemental content may be ingested by the ingestion server. The ingestion server may be in communication with the fingerprint server.


The fingerprint server may include a fingerprint generator, such as fingerprint generator 209 and/or a fingerprint matcher, such as fingerprint matcher 210. The fingerprint server may process the ingested item of supplemental content. For example, the fingerprint generator may generate digital fingerprints associated with the ingested item of supplemental content. To generate the digital fingerprints associated with the ingested item of supplemental content, the fingerprint generator may identify, extract, and then summarize characteristic components of the item of supplemental content, enabling that item of supplemental content to be uniquely identified. For example, the fingerprint generator may identify portions of the item of supplemental content that are unique to that item of supplemental content, such as a handful of pixels that appear at a particular time during playback of the item of supplemental content. A digital fingerprint may be created at any point in the item of supplemental content, from any copy of the supplemental content.
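The idea of summarizing characteristic components per frame can be sketched with a toy fingerprint generator. This is emphatically not how production ACR works: real systems use perceptual features robust to re-encoding and scaling, whereas this illustration simply samples a handful of pixel values from each frame and hashes them.

```python
import hashlib

def generate_fingerprints(frames, pixels_per_frame=8):
    """Toy fingerprint generator: hash a handful of pixel values from each
    frame so the content can be looked up later.

    frames: list of frames, each a flat list of pixel intensities (0-255).
    Returns a list of (frame_index, digest) pairs. Raw-pixel hashing is a
    stand-in for the robust perceptual features a real ACR system extracts.
    """
    fingerprints = []
    for index, frame in enumerate(frames):
        # Sample a few evenly spaced pixels as the frame's "characteristic
        # components", then summarize them with a digest.
        step = max(1, len(frame) // pixels_per_frame)
        sample = bytes(frame[::step][:pixels_per_frame])
        fingerprints.append((index, hashlib.sha256(sample).hexdigest()))
    return fingerprints
```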


Once generated by the fingerprint generator, the digital fingerprints associated with the item of supplemental content may be stored in the fingerprint database. The digital fingerprints associated with the item of supplemental content stored in the fingerprint database may be used as reference points for identifying unknown video or audio content from a plurality of client devices (e.g. client devices 207a-n) as the item of supplemental content. For example, each of the plurality of client devices may include a client processor (e.g. client processor 208a-n) configured to take samples (e.g. video segments) of the programming actually being displayed via the client devices at any given point in time. The client processor from each client device may send digital fingerprints of these samples back to the fingerprint server.


The fingerprint matcher may be configured to compare the samples received from the client processor(s) to the digital fingerprints associated with the item of supplemental content stored in the fingerprint database. The fingerprint matcher may compare the samples received from the client processor(s) to the digital fingerprints associated with the item of supplemental content to determine whether unknown video or audio content from the client devices includes the item of supplemental content. For example, the fingerprint matcher may determine whether the client devices output the item of supplemental content.


To determine whether one or more of the client devices output the item of supplemental content, the fingerprint matcher may compare the samples received from each of the client processors to the digital fingerprints associated with the item of supplemental content stored in the fingerprint database to determine if a “match” is found for any of the samples. A “match” may be found if a sample, or at least a portion of a sample, received from a client processor is identical to an existing, reference digital fingerprint associated with the item of supplemental content stored in the fingerprint database. If a “match” is found for a particular client device, this may indicate that that particular client device output the item of supplemental content.
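The "match" test described above can be sketched as a set lookup. This assumes fingerprints are represented as (timestamp, digest) pairs, consistent with the toy generator above but not mandated by the patent.

```python
def match_samples(sample_fps, reference_fps):
    """Check whether any sampled fingerprint matches a reference fingerprint.

    sample_fps: fingerprints computed from what a client device displayed.
    reference_fps: reference fingerprints for the item of supplemental
    content, as stored in a fingerprint database.
    Returns True if a match is found, indicating that the client device
    likely output the item of supplemental content.
    """
    reference_set = {digest for _, digest in reference_fps}
    return any(digest in reference_set for _, digest in sample_fps)
```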


The results of the “matching” performed by the fingerprint matcher may be stored in the fingerprint database or a separate database. The stored results may indicate each client device that output the item of supplemental content. For each client device that output the item of supplemental content, the stored results may indicate a network (e.g. channel) that output the item of supplemental content to the client device. For each client device that output the item of supplemental content, the stored results may indicate the DMA associated with that client device and/or a household identification number associated with that client device. A DMA is a region of the United States that is used to define television and radio markets. For each client device that output the item of supplemental content, the stored results may indicate a start time and/or an end time associated with output of the item of supplemental content.


The stored results, or the “matching” data, may be sent back to the device that received the item of supplemental content, such as the processor 102. At 606, data indicative of times at which at least one item of supplemental video content was output via a plurality of client devices may be received. The received data may be, for example, the “matching” data determined by the computing device. In an embodiment, the received data may resemble the data sample 300 of FIG. 3. However, the amount of received data may be much larger than the exemplary data sample 300. For example, the received data may include hundreds, thousands, or even millions of rows, with each row representative of a particular client device that has output a particular item of supplemental content.


The received data may be manipulated and/or analyzed in order to generate an ad schedule. For example, the received data may be grouped by a data grouping component (e.g. data grouping component 104) and analyzed by a data analysis component (e.g. data analysis component 106).


As described above, the data grouping component may group data based on a variety of different characteristics. In an embodiment, the data grouping component may group the received data based on more than one characteristic, such as based on time, location, and network. If the data grouping component groups the received data based on all three of these characteristics, then each client device in a single group may have output the item of supplemental content within the same time window, in the same DMA, and via the same network. Alternatively, or additionally, the data grouping component 104 may group the data based on just one or two of time, location, or network.
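Grouping on all three characteristics can be sketched as a keyed bucketing pass. The record fields and the fixed one-hour window are illustrative assumptions; the patent leaves the window size and data layout open.

```python
from collections import defaultdict

def group_records(records, window_seconds=3600):
    """Group matching records by time window, DMA, and network.

    records: iterable of dicts with 'start' (epoch seconds), 'dma', and
    'network' keys, one per client device that output the ad. Each
    resulting group holds devices that output the content within the same
    time window, in the same DMA, and via the same network.
    """
    groups = defaultdict(list)
    for record in records:
        window = record["start"] // window_seconds  # same-window bucket
        key = (window, record["dma"], record["network"])
        groups[key].append(record)
    return groups
```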


The data analysis component may analyze data that has already been grouped by the data grouping component based on time, location, and network. For example, the data analysis component may analyze one or more groups of data, with each individual group representative of client devices that have output the item of supplemental content within the same time window, in the same DMA, and via the same network. The data analysis component may analyze a large number of these groups, with each group representative of a large number of client devices (e.g. hundreds, thousands, or millions of client devices).


The data analysis component may analyze each of these data groups to determine, for each group, the time(s) at which the item of supplemental content was output live. To determine the time(s) at which the item of supplemental content was output live, the data analysis component may analyze the group of data to identify a large number of client devices that output the item of supplemental content at the same time or approximately at the same time (e.g. within a predetermined time threshold, such as one second). If a large number of client devices output the item of supplemental content at the same time or approximately at the same time, this may indicate that the item of supplemental content was output live (e.g. playing live on that channel).
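The live-output heuristic above can be sketched as a simple concurrency check. The 5% threshold is an invented placeholder; the patent only says "a large number" of devices outputting at approximately the same time.

```python
def was_output_live(counts, total_devices, threshold=0.05):
    """Decide whether the ad likely aired live within a group's time window.

    counts: per-second device counts for the group. If, at any single
    second, at least `threshold` of the group's devices were outputting
    the ad simultaneously, treat that as evidence of a live airing.
    """
    if not counts:
        return False
    return max(counts.values()) >= threshold * total_devices
```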


The data analysis component may analyze each of these data groups to determine, for each group, the start time and end time of the live output of the item of supplemental content. To determine the start time and the end time of the live output of the item of supplemental content, the data analysis component may identify a first time within the time window associated with the group. At 608, a first time at which the at least one item of supplemental content was output via a greatest quantity of client devices of the plurality of client devices may be determined. The first time may occur when the greatest number of client devices output the item of supplemental content. For example, if 500,000 client devices are associated with a group, the first time may occur at the time that the greatest percentage of these 500,000 client devices were all outputting the item of supplemental content.


To determine the start time and the end time of the live output of the item of supplemental content, the data analysis component may additionally identify a second time within the time window associated with the group. At 610, a second time at which the at least one item of supplemental video content was output via a second greatest quantity of client devices of the plurality of client devices may be determined. The second time may occur when the second greatest number of client devices output the item of supplemental content. For example, if 500,000 client devices are associated with a group, the second time may occur at the time that the second greatest percentage of these 500,000 client devices were all outputting the item of supplemental content.


As an example, the data analysis component may generate a chart or graph representative of a data group in order to determine the first and/or second times. As discussed above, FIG. 4 illustrates an exemplary chart 400 representative of a data group. The highest peak on the chart 400, such as the peak 402, may be representative of the first time, whereas the second highest peak on the chart 400, such as the peak 404, may be representative of the second time.


The first time and the second time may be representative of the start time and the end time of output of the item of supplemental content (or vice versa). For example, the first time may be representative of the start time of output of the item of supplemental content and the second time may be representative of the end time of output of the item of supplemental content. Conversely, the second time may be representative of the start time of output of the item of supplemental content and the first time may be representative of the end time of output of the item of supplemental content. At 612, a start time and end time associated with output of the at least one item of supplemental video content may be determined. The start time and the end time may be determined based on the first time and the second time.


To determine if the first time or the second time is representative of the start time of output of the item of supplemental content, the data analysis component may determine which of the first time and the second time occurs first-in-time. For example, the data analysis component may determine whether the first time occurs before the second time, or vice versa. If the first time occurs before the second time, then the first time may be representative of the start time of output of the item of supplemental content and the second time may be representative of the end time of output of the item of supplemental content. Conversely, if the second time occurs before the first time, then the second time may be representative of the start time of output of the item of supplemental content and the first time may be representative of the end time of output of the item of supplemental content.


As an example, referring again to FIG. 4, because the peak 402 occurs before the peak 404, the first time is representative of the start time of live output of the item of supplemental content and the second time is representative of the end time of live output of the item of supplemental content. However, as noted above, for other data groups, the second time may be representative of the start time of output of the item of supplemental content if it occurs before the first time.


If the second time is representative of either the start time or the end time of the item of supplemental content, then it may occur within a predetermined time frame of the first time. For example, the second time and the first time may not be separated from each other by a period of time that is larger than the duration of the item of supplemental content. Accordingly, to determine the second time, the data analysis component 106 may first determine an appropriate time frame within which to identify the second time. The time frame may include the first time and may have a duration equal to a multiple (e.g. twice) of the duration of the item of supplemental content. By ensuring that the second time occurs within this time frame, the data analysis component 106 may identify a second time that is not separated from the first time by a period of time that is larger than the duration of the item of supplemental content. As depicted by the chart 400, the first and second times are separated from each other by a time period equal to the duration of the item of supplemental content (e.g. 14.3 seconds).


At 614, a schedule (e.g. an ad schedule) may be generated based on the results/output of the data analysis component. For example, a schedule generation component, such as the schedule generation component 108, may generate an ad schedule for each data group analyzed by the data analysis component. The schedule may resemble, for example, the exemplary ad schedule 500 of FIG. 5. The schedule may be associated with output of the at least one item of supplemental content. The generated schedule may indicate, for each live output (e.g. airing) of the item of supplemental content, the start time and end time of output of the item of supplemental content, as determined by the data analysis component. The schedule may also indicate, for each output (e.g. airing), the network number and/or the number of impressions. The number of impressions for an airing indicates how many client devices output the item of supplemental content at that time on that network.


The ad schedule may be specific to a particular DMA. For example, the ad schedule may indicate the estimated time(s) an item of supplemental content was output live in a particular DMA. Alternatively, the ad schedule may not be specific to a particular DMA. If the ad schedule is not specific to a particular DMA, the ad schedule may be aggregated across multiple DMAs or geographic regions. Times may be shifted for broadcast networks based on time-zone, as determined by the DMA.


At 616, the schedule may be indicated (e.g. output, stored, forwarded, etc.). For example, the schedule may be indicated to the individual or company that placed this item of supplemental content. The individual or company may use the generated schedule to gain an understanding of the times at which the item of supplemental content was output live, such as on live television. Additionally, or alternatively, the schedule may be indicated to any other entity that wants to gain an understanding of the times at which the item of supplemental content was output live.


At 618, content viewed by at least one user may be identified based at least in part on the schedule. The schedule may be used to gain an understanding of how many viewers the item of supplemental content reached at these times and/or which network output the item of supplemental content at these times. Utilizing a large, behaviorally tracked television panel to generate an ad schedule allows the ad schedule to be generated soon after the behaviorally tracked television panel becomes available—thus eliminating the long wait-time associated with traditional ad schedule generation techniques. Accordingly, the individual or company is able to gain an understanding of its ad campaign's success and/or reach in a more efficient manner.



FIG. 7 depicts a computing device that may be used in various aspects. With regard to the example environment of FIG. 1, one or more of the processor 102, electronic storage 110, user interface device(s) 114, external resources 112, advertiser network 118, or content source(s) 120 may be implemented in an instance of a computing device 700 of FIG. 7. The computer architecture shown in FIG. 7 shows a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, PDA, e-reader, digital cellular phone, or other computing node, and may be utilized to execute any aspects of the computers described herein, such as to implement the methods described in FIG. 6.


The computing device 700 may include a baseboard, or “motherboard,” which is a printed circuit board to which a multitude of components or devices may be connected by way of a system bus or other electrical communication paths. One or more central processing units (CPUs) 704 may operate in conjunction with a chipset 706. The CPU(s) 704 may be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computing device 700.


The CPU(s) 704 may perform the necessary operations by transitioning from one discrete physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements may generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements may be combined to create more complex logic circuits including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like.


The CPU(s) 704 may be augmented with or replaced by other processing units, such as GPU(s) 705. The GPU(s) 705 may comprise processing units specialized for but not necessarily limited to highly parallel computations, such as graphics and other visualization-related processing.


A user interface may be provided between the CPU(s) 704 and the remainder of the components and devices on the baseboard. The interface may be used to access a random access memory (RAM) 708 used as the main memory in the computing device 700. The interface may be used to access a computer-readable storage medium, such as a read-only memory (ROM) 720 or non-volatile RAM (NVRAM) (not shown), for storing basic routines that may help to start up the computing device 700 and to transfer information between the various components and devices. ROM 720 or NVRAM may also store other software components necessary for the operation of the computing device 700 in accordance with the aspects described herein. The user interface may be provided by one or more electrical components such as the chipset 706.


The computing device 700 may operate in a networked environment using logical connections to remote computing nodes and computer systems through a local area network (LAN) 716. The chipset 706 may include functionality for providing network connectivity through a network interface controller (NIC) 722, such as a gigabit Ethernet adapter. A NIC 722 may be capable of connecting the computing device 700 to other computing nodes over the network 716. It should be appreciated that multiple NICs 722 may be present in the computing device 700, connecting the computing device to other types of networks and remote computer systems.


The computing device 700 may be connected to a storage device 728 that provides non-volatile storage for the computer. The storage device 728 may store system programs, application programs, other program modules, and data, which have been described in greater detail herein. The storage device 728 may be connected to the computing device 700 through a storage controller 724 connected to the chipset 706. The storage device 728 may consist of one or more physical storage units. A storage controller 724 may interface with the physical storage units through a serial attached SCSI (SAS) interface, a serial advanced technology attachment (SATA) interface, a Fibre Channel (FC) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units.


The computing device 700 may store data on a storage device 728 by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of a physical state may depend on various factors and on different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the physical storage units and whether the storage device 728 is characterized as primary or secondary storage and the like.


For example, the computing device 700 may store information to the storage device 728 by issuing instructions through a storage controller 724 to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computing device 700 may read information from the storage device 728 by detecting the physical states or characteristics of one or more particular locations within the physical storage units.


In addition or alternatively to the storage device 728 described herein, the computing device 700 may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media may be any available media that provides for the storage of non-transitory data and that may be accessed by the computing device 700.


By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, transitory computer-readable storage media and non-transitory computer-readable storage media, and removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, or any other medium that may be used to store the desired information in a non-transitory fashion.


A storage device, such as the storage device 728 depicted in FIG. 7, may store an operating system utilized to control the operation of the computing device 700. The operating system may comprise a version of the LINUX operating system. The operating system may comprise a version of the WINDOWS SERVER operating system from the MICROSOFT Corporation. According to additional aspects, the operating system may comprise a version of the UNIX operating system. Various mobile phone operating systems, such as IOS and ANDROID, may also be utilized. It should be appreciated that other operating systems may also be utilized. The storage device 728 may store other system or application programs and data utilized by the computing device 700.


The storage device 728 or other computer-readable storage media may also be encoded with computer-executable instructions, which, when loaded into the computing device 700, transforms the computing device from a general-purpose computing system into a special-purpose computer capable of implementing the aspects described herein. These computer-executable instructions transform the computing device 700 by specifying how the CPU(s) 704 transition between states, as described herein. The computing device 700 may have access to computer-readable storage media storing computer-executable instructions, which, when executed by the computing device 700, may perform the methods described in relation to FIG. 6.


A computing device, such as the computing device 700 depicted in FIG. 7, may also include an input/output controller 732 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller 732 may provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, a plotter, or other type of output device. It will be appreciated that the computing device 700 may not include all of the components shown in FIG. 7, may include other components that are not explicitly shown in FIG. 7, or may utilize an architecture completely different than that shown in FIG. 7.


As described herein, a computing device may be a physical computing device, such as the computing device 700 of FIG. 7. A computing node may also include a virtual machine host process and one or more virtual machine instances. Computer-executable instructions may be executed by the physical hardware of a computing device indirectly through interpretation and/or execution of instructions stored and executed in the context of a virtual machine.


One skilled in the art will appreciate that the systems and methods disclosed herein may be implemented via a computing device that may comprise, but is not limited to, one or more processors, a system memory, and a system bus that couples various system components, including the processor, to the system memory. In the case of multiple processors, the system may utilize parallel computing.


For purposes of illustration, application programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device, and are executed by the data processor(s) of the computer. An implementation of service software may be stored on or transmitted across some form of computer-readable media. Any of the disclosed methods may be performed by computer-readable instructions embodied on computer-readable media. Computer-readable media may be any available media that may be accessed by a computer. By way of example and not meant to be limiting, computer-readable media may comprise “computer storage media” and “communications media.” “Computer storage media” comprise volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Exemplary computer storage media comprise, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by a computer. Application programs and the like and/or storage media may be implemented, at least in part, at a remote system.


As used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps, or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect.


It will be apparent to those skilled in the art that various modifications and variations may be made without departing from the scope or spirit. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims.
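The schedule-generation approach summarized in this disclosure (identifying a first time at which the item of supplemental content was output via the greatest quantity of client devices, identifying a second time at which it was output via the second greatest quantity within a time frame based on the item's duration, and treating the earlier and later of the two as the start and end times) can be sketched in a few lines. This is a minimal, non-limiting illustration under stated assumptions: the function name, the integer-second timestamps, and the window multiple of two are hypothetical choices for illustration, not the claimed implementation.

```python
from collections import Counter

def generate_schedule(output_times, ad_duration, window_multiple=2):
    """Sketch: derive a (start, end) schedule from device output times.

    output_times: timestamps (e.g. seconds) at which client devices were
    observed outputting the item of supplemental video content.
    ad_duration: duration of the item, in the same time units.
    """
    counts = Counter(output_times)
    # First time: output via the greatest quantity of client devices.
    first_time = max(counts, key=counts.get)
    # Search for the second time within a window around the first time
    # whose size is a multiple of the item's duration (assumed: 2x).
    window = window_multiple * ad_duration
    candidates = {t: n for t, n in counts.items()
                  if t != first_time and abs(t - first_time) <= window}
    # Second time: second greatest quantity of devices within the window.
    second_time = max(candidates, key=candidates.get)
    # The earlier of the two times is the start; the later is the end.
    start, end = sorted((first_time, second_time))
    return start, end

# Example: most devices output the item at t=100, second most at t=130;
# the observation at t=400 falls outside the duration-based window.
times = [100] * 5 + [130] * 3 + [95] * 2 + [400] * 4
print(generate_schedule(times, ad_duration=30))  # → (100, 130)
```

A grouping of device observations by time (as in claim 14) could replace the simple `Counter` here, and the window multiple corresponds to the duration-based time frame of claims 10 and 18.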

Claims
  • 1. A method comprising: receiving data indicative of different times at which at least one item of supplemental video content was output via a plurality of client devices; determining, based on the data, a first time at which the at least one item of supplemental video content was output via a greatest quantity of client devices of the plurality of client devices; determining, based on the data, a second time at which the at least one item of supplemental video content was output via a second greatest quantity of client devices of the plurality of client devices, the second time being different from the first time; and generating, based on the first time and the second time, a schedule indicating a start time and an end time of output of the at least one item of supplemental video content.
  • 2. The method of claim 1, further comprising, before receiving the data indicative of the different times at which the at least one item of supplemental video content was output via the plurality of client devices: receiving the at least one item of supplemental video content; and sending, to a computing device at a remote location, the at least one item of supplemental video content.
  • 3. The method of claim 1, further comprising: indicating the schedule indicating the start time and the end time of the output of the at least one item of supplemental video content.
  • 4. The method of claim 1, further comprising: identifying, based on the schedule, content viewed by at least one user.
  • 5. The method of claim 1, further comprising determining the start time and the end time of the output of the at least one item of supplemental video content by: comparing the first time with the second time to determine that the first time occurs before the second time; and determining that the first time is the start time associated with the output of the at least one item of supplemental video content and that the second time is the end time associated with the output of the at least one item of supplemental video content.
  • 6. The method of claim 1, further comprising determining the start time and the end time of the output of the at least one item of supplemental video content by: comparing the first time with the second time to determine that the second time occurs before the first time; and determining that the second time is the start time associated with the output of the at least one item of supplemental video content and that the first time is the end time associated with the output of the at least one item of supplemental video content.
  • 7. The method of claim 1, wherein the data indicative of the different times at which the at least one item of supplemental video content was output via the plurality of client devices is indicative of a network that caused output of the at least one item of supplemental video content via the plurality of client devices, and wherein the schedule indicating the start time and the end time of the output of the at least one item of supplemental video content is associated with the network.
  • 8. The method of claim 1, wherein the different times at which the at least one item of supplemental video content was output via the plurality of client devices comprise different times during a predetermined window of time at which the at least one item of supplemental video content was output via the plurality of client devices.
  • 9. The method of claim 1, wherein determining the first time at which the at least one item of supplemental video content was output via the greatest quantity of client devices of the plurality of client devices comprises: identifying a time the at least one item of supplemental video content was output by a maximum quantity of client devices of the plurality of client devices.
  • 10. The method of claim 1, wherein determining the second time at which the at least one item of supplemental video content was output via the second greatest quantity of client devices of the plurality of client devices comprises: determining a time frame including the first time; and identifying a time, during the time frame, the at least one item of supplemental video content was output via the second greatest quantity of client devices of the plurality of client devices.
  • 11. The method of claim 1, wherein the schedule indicating the start time and the end time of the output of the at least one item of supplemental video content is indicative of an estimated time period during which the at least one item of supplemental video content was output live.
  • 12. The method of claim 1, wherein the at least one item of supplemental video content comprises an advertisement.
  • 13. The method of claim 1, wherein the plurality of client devices comprises at least one television, set-top box, tablet, phone, computer, or laptop.
  • 14. The method of claim 1, further comprising grouping the plurality of client devices that output the at least one item of supplemental video content by time.
  • 15. The method of claim 1, wherein the greatest quantity of client devices of the plurality of client devices and the second greatest quantity of client devices of the plurality of client devices are based on client devices that share a characteristic.
  • 16. The method of claim 1, wherein the first time and the second time are within a time window based on a duration of the at least one item of supplemental video content.
  • 17. The method of claim 2, wherein the data indicative of the different times at which the at least one item of supplemental video content was output via the plurality of client devices comprises data determined using automatic content recognition (ACR) by the computing device at the remote location.
  • 18. The method of claim 10, wherein determining the time frame including the first time comprises: determining a duration of the at least one item of supplemental video content; and determining a window of time including the first time and having a duration equal to a multiple of the duration of the at least one item of supplemental video content.
  • 19. A system comprising: at least one processor; and at least one memory storing instructions that, when executed, cause the at least one processor to: receive data indicative of different times at which at least one item of supplemental video content was output via a plurality of client devices; determine, based on the data, a first time at which the at least one item of supplemental video content was output via a greatest quantity of client devices of the plurality of client devices; determine, based on the data, a second time at which the at least one item of supplemental video content was output via a second greatest quantity of client devices of the plurality of client devices, the second time being different from the first time; and generate, based on the first time and the second time, a schedule indicating a start time and an end time of output of the at least one item of supplemental video content.
  • 20. A non-transitory computer-readable medium storing instructions that, when executed, cause: receiving data indicative of different times at which at least one item of supplemental video content was output via a plurality of client devices; determining, based on the data, a first time at which the at least one item of supplemental video content was output via a greatest quantity of client devices of the plurality of client devices; determining, based on the data, a second time at which the at least one item of supplemental video content was output via a second greatest quantity of client devices of the plurality of client devices, the second time being different from the first time; and generating, based on the first time and the second time, a schedule indicating a start time and an end time of output of the at least one item of supplemental video content.
US Referenced Citations (1)
Number Name Date Kind
10250952 Haberman Apr 2019 B2