The present invention generally relates to the field of devices which transmit, receive and process audiovisual content. More particularly, the disclosed embodiments relate to techniques for watermark-based metadata acquisition and processing.
This section is intended to provide a background or context to the disclosed embodiments that are recited in the claims. The description herein may include concepts that could be pursued but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
Watermarks can be embedded in audio, images, video, or audiovisual content. New television standards, such as HbbTV and ATSC 3.0, allow applications to run on a TV to provide interactive services, targeted advertising with local ad replacement, audience measurement, video-on-demand, etc. In HbbTV-compliant devices, a terminal such as a television or a set-top box (STB) performs application discovery. When application discovery is performed using watermarks, both an STB and a downstream television may attempt to discover the HbbTV service associated with the broadcast. Problems can arise if both the STB and the downstream television launch the HbbTV service; the result may be duplicated or confusing displays on the screen. In addition, there are a variety of redistribution scenarios based on the metadata extracted from the ATSC content comprising audio/video essence as well as metadata/signaling. A signaling file contains metadata and signaling data about the broadcast service being presented, including URLs that can be used to access signaling information for supplementary signaling and content. Signaling files are delivered to the receiver by a recovery server via broadband in the redistribution scenarios. First timing information is represented as an interval code that identifies the interval of content in which a watermark payload value is embedded. The watermark payload includes at least the following other fields: 1) a server code field that identifies a server which acts as the starting point for acquisition of supplementary content; and 2) a query flag field, whose value changes to announce an event or to indicate the availability of a new signaling file. The first timing information extracted from watermarks and second timing information carried in the ATSC content may need to be reconciled.
Furthermore, it is desirable to perform audience measurement of media consumption on media applications. Such apps can be resident on a platform such as Roku, a set-top box (STB), or a digital video recorder (DVR). It is also desirable to perform audience measurement of consumption of content via a web browser.
Audience data that is currently available to broadcasters is not consistent across consumption platforms. Most broadcasters have access to precise viewing data from browsers and mobile devices via Google Analytics, Omniture/Adobe Analytics, etc. Tools for other platforms (e.g., Roku) don't always provide viewing data with the same level of precision or the additional data that the mobile platforms provide, or if they do, they charge premiums for the information.
As a result, the currently available audience measurement data for application-based viewing is not accepted by advertisers and cannot be used for marketing ad placement in their content.
Audiovisual (AV) content services that are delivered over broadband, cable, or satellite to the home may be accessed by consumers using an AV service application running on a media adapter or set-top box that is connected to a display system via an HDMI interface. A problem that has been found to exist in this scenario is that in some cases, while the service application on the media player is actively presenting content for viewing, it may be unable to reliably determine whether the display system is showing the content coming from the media player to a viewer. This may result in wasted resources consumed by the media player. Also, the service's records regarding what the viewer has watched will be wrong, and this may cause advertisers to be misled regarding the exposure of a household to their advertisements and to pay for delivery of ads that could not possibly have been viewed, because the TV was turned off.
This section is intended to provide a summary of certain exemplary embodiments and is not intended to limit the scope of the embodiments that are disclosed in this application.
The disclosed embodiments include techniques for confirming an association between two media devices by augmenting media content with a watermark using an upstream device, wherein the watermark conveys an identifier. The media content is sent from the upstream device to a downstream device and the watermark is detected using the downstream device. The identifier is transmitted to a server upon the detection of the watermark using the downstream device. The server identifies an association between the upstream and downstream devices upon receipt of the identifier from the downstream device.
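By way of illustration, a minimal sketch of the downstream side of this association protocol is shown below; the endpoint URL, payload fields, and function names are hypothetical, not part of the disclosed embodiments:

```typescript
// Minimal sketch (assumptions: a detector callback that yields the watermark
// identifier, and a hypothetical server endpoint https://example.com/associate).
async function onWatermarkDetected(identifier: string, deviceId: string): Promise<void> {
  // Report the detected identifier so the server can associate the
  // upstream (embedding) device with this downstream (detecting) device.
  await fetch("https://example.com/associate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ identifier, downstreamDeviceId: deviceId }),
  });
}
```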
These and other advantages and features of disclosed embodiments, together with the organization and manner of operation thereof, will become apparent from the following detailed description when taken in conjunction with the accompanying drawings.
In the following description, for purposes of explanation and not limitation, details and descriptions are set forth in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments that depart from these details and descriptions.
Additionally, in the subject description, the word “exemplary” is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word exemplary is intended to present concepts in a concrete manner.
Introduction
Problem Statement
In regular HbbTV broadcast application signaling, the trigger signal (AIT) always goes to the device that has the HbbTV runtime environment, and it always ends there. In some cases, the trigger ends up in a terminal that doesn't understand HbbTV. In such cases HbbTV doesn't work, but no other harm is done. With the current solution for broadband application discovery, something similar applies. The discovery is always performed by the terminal that receives the digital broadcast. This may be the TV that receives the digital broadcast directly, or the Set Top Box (STB), but never a Terminal (TV) that is downstream from the digital receiver (STB).
With Application Discovery using watermarking, this situation changes. Any device that has access to the (unencrypted) baseband signals may try to discover the HbbTV service associated with the broadcast. This may be a device (TV) that is downstream from the device that terminates the broadcast, such as the STB. When a TV is connected to an STB, the following situations may occur that may be considered problematic by one or more participants in the value chain: 1) The TV discovers, and then launches, an HbbTV service for a broadcast, while the STB also supports HbbTV and may also have launched the HbbTV service. This may result in overlapping calls to action on the screen, which may confuse the user. 2) The HbbTV app can be programmed to automatically show its full user interface. The specification states that this is “usually only used on radio and data services” (that are received on a TV set, a common case with DVB services), but the behavior is legal. This could result in two full user interfaces being shown. A user may want to make one interface disappear, with the other one still showing, or a user may start interacting with one of the two user interfaces, which may or may not result in visible feedback, depending on whether the user uses the remote for the STB (the underlying graphics) or the TV (the graphics that are on top). 3) In the above, the broadcast-signaled and broadband-discovered apps may be largely the same, or the broadcaster may have decided that the discovered app has a different look, feel, and/or functionality from the broadcast-signaled one. This is fully legal according to the specs; it is even encouraged for the broadcaster to tailor the app to the discovery mechanism (e.g., because there is no event mechanism in application discovery, knowing about events becomes purely an app functionality, while in broadcast signaling, events are supported “natively” by HbbTV signaling). 4) The TV may show HbbTV functionality on top of a graphical menu that the user interacts with on the STB. This is considered undesirable by TV manufacturers. 5) A completely new element in watermarking-based discovery is that the HbbTV terminal will also attempt to retrieve an app for a recorded program. This is not possible, or at least not done, for broadcast-signaled apps or for app discovery relying on DVB-I metadata. This may result in confusion, but it may also open up new possibilities.
Terminology
Active Application Device means the device on which the application is determined to run according to the pairing protocol.
Active Application Device State is a Boolean value indicating whether the application on a paired device is determined to run or not according to the pairing protocol. If the value is true, the application is determined to run. Otherwise, the application is determined to terminate.
A web session is a sequence of HTTP requests and responses associated with the same web client, application, or user.
A session object is a data structure with temporal data stored on a datastore on a remote server.
A session identifier (sessionID) is a random number generated by the server that uniquely identifies a web session within the scope of the server (or a cluster of servers).
Device Position indicates the position of a device in the paired devices. The device is the upstream device if its value is ‘upstream’ or the downstream device if the value is ‘downstream’.
A pairing token stored as a cookie (or in other web storage) on a device contains the pairing result of the paired device. It consists of the Active Application Device State, the Device Position of the pairing, and the validity of the token. It may also contain DeviceInfo and associated ServiceInfo.
Upstream device is the first of the two devices connected in series (e.g., via HDMI connection).
Downstream device is the second of the two devices connected in series (e.g., via HDMI connection.).
Upstream application is a web application running on a browser on the upstream device.
Downstream application is a web application running on a browser on the downstream device.
DeviceInfo describes the device being paired, and may include a device identifier, device model identifier, device manufacturer identifier, device operating system and version, device geographic location, browser information and supported APIs, device capability identifier, and application platform information (e.g., HbbTV 1.0, HbbTV 1.5, HbbTV 2.0, or ATSC 3.0), including whether the device supports audio and/or video watermark detection and the HTML5 WebSocket API.
ServiceInfo describes the current service associated with the launched application, and may include the current service/channel identifiers, country code associated with the service, service types (e.g., linear service, on-demand, or time-shift service), and broadcast systems (e.g., ATSC 3.0, DVB, etc.).
Phase 2 Application Discovery (P2AD): A version of the HbbTV specifications that specifies use of ATSC VP1 watermarks to enable application discovery.
Dual Terminal Scenario: Use case where two HbbTV terminals are connected in series and the downstream terminal is a P2AD Terminal.
Legacy Terminal: An HbbTV terminal that is not a P2AD Terminal.
P2AD Terminal: An HbbTV terminal that implements the P2AD specification.
Upstream Terminal: The terminal in a dual terminal scenario which is the first of the two HbbTV terminals connected in series.
Downstream Terminal: The terminal in a dual terminal scenario which is the second of the two HbbTV terminals connected in series.
Use Cases
There are four scenarios where the devices can be connected in series, as shown in the table below.
The pairing protocol described in this disclosure is applicable to use case #2, where the upstream device is a legacy device (e.g., HbbTV 1.0 or HbbTV 1.5) and the downstream device is a watermark-capable device. A legacy device is an HbbTV device that does not support watermark detection or the signaling recovery protocol, including the pairing protocol. A watermark-capable device supports watermark detection and signaling recovery and may also support WebSocket as defined in the W3C HTML5 APIs.
The pairing protocol is also applicable to use case #4, where two watermark-capable devices equipped with HbbTV app2app capability can directly negotiate over a WebSocket connection using the app2app protocol specified in the HbbTV specifications, or reach agreement on where the application should run using the pairing-server-mediated approach.
Communication between Upstream and Downstream Applications
One-way communication from upstream application to downstream application via watermark
When upstream and downstream devices are connected in series (e.g., via HDMI connection), at least audiovisual content is passed from the upstream device to the downstream device. A one-way communication channel can be established by embedding watermarks by the upstream application and detecting such watermarks by the downstream application.
Presentations rendered by the upstream application are usually combined with the base audiovisual content received at the upstream device, and such combined audiovisual content is provided to the downstream device.
In one embodiment, watermarks can be embedded by the upstream application in the base audiovisual content without accessing such base audiovisual content. For example, the application creates a graphic or image containing watermarks and overlays it at a specific position on the base video content, or creates audio content containing watermarks to be mixed with the base audio content.
In another embodiment, watermarks can be embedded by the upstream application in the base audiovisual content by accessing and directly modifying such base audiovisual content. For example, the application may use the WebAudio technology provided by the HTML5 API to embed audio watermarks by accessing and modifying the base audio content.
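A minimal sketch of the second approach follows, assuming a browser environment with the Web Audio API; a simplistic amplitude-modulation scheme stands in for a real watermark codec, and the payload, bit duration, and modulation depth are illustrative values only:

```typescript
// Minimal sketch: route the base audio through a GainNode and apply a tiny
// gain ripple keyed to the payload bits (illustrative scheme, not a real codec).
const PAYLOAD_BITS = [1, 0, 1, 1, 0, 0, 1, 0]; // hypothetical payload bits
const BIT_DURATION_S = 0.05;   // 50 ms per bit (assumption)
const MOD_DEPTH = 0.005;       // 0.5% gain ripple, intended to be inaudible

function embedWatermark(audioCtx: AudioContext, media: HTMLMediaElement): void {
  const source = audioCtx.createMediaElementSource(media);
  const gain = audioCtx.createGain();
  source.connect(gain).connect(audioCtx.destination);

  // Schedule one gain step per payload bit on the audio timeline.
  const start = audioCtx.currentTime;
  for (let i = 0; i < PAYLOAD_BITS.length; i++) {
    const t = start + i * BIT_DURATION_S;
    const value = 1 + (PAYLOAD_BITS[i] ? MOD_DEPTH : -MOD_DEPTH);
    gain.gain.setValueAtTime(value, t);
  }
}
```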
Watermarks can be used to carry various types of messages for the one-way communication. Watermarks are also used to identify the downstream device: if an application on a device detects watermarks carrying a predefined type of message, this device is identified as the downstream device. For example, if an application on a device detects watermarks carrying a sessionID, the device is identified as the downstream device in the pairing process.
Session ID
As discussed below, upstream and downstream applications can communicate with each other using a web session, mediated by the server.
In order for two applications on two devices to have access to the session information of a session on a server, a single session ID that uniquely identifies the session must be available to both applications.
The session ID is embedded as watermarks by the upstream application in the audio and/or video content and detected by the downstream application.
WebSocket Server Address
Watermarks carrying WebSocket (WS) server address(es) can be embedded into audio and/or video content by the upstream application, if a WebSocket server is available to that application, and detected by the downstream application.
Once the downstream application recovers the WebSocket server address, it can establish a direct two-way communication channel with the upstream application to negotiate the Active Application Device.
The WebSocket server address can be a full or partial URL that indicates the WebSocket server IP address on the upstream device. Note that if a partial WebSocket server address is used, the application must have the knowledge needed to construct the full WebSocket server address and use it to establish a WebSocket connection with the upstream device.
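A minimal sketch of this reconstruction-and-connect step is shown below; the address format and the "/hbbtv/app2app" path are assumptions for illustration, not values taken from the HbbTV specifications:

```typescript
// Minimal sketch (assumption: the watermark carries only the host/port part,
// e.g. "192.168.1.23:8900", and the path is known a priori to the application).
function connectToUpstream(partialAddress: string): WebSocket {
  const url = `ws://${partialAddress}/hbbtv/app2app`; // reconstruct the full URL
  const socket = new WebSocket(url);
  socket.onopen = () => socket.send(JSON.stringify({ type: "pairing-request" }));
  socket.onmessage = (ev) => handleNegotiation(JSON.parse(ev.data));
  return socket;
}

function handleNegotiation(msg: { type: string }): void {
  // Negotiate which device becomes the Active Application Device.
  console.log("negotiation message:", msg.type);
}
```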
Active Application Device State
If a decision is made by the upstream application as described in the section “Decision made by upstream device” below, such decision can be sent to the downstream application using watermarks.
Server-Mediated Communication Between Applications Using Shared Session
Session Management
Two application instances on two different devices can access the same session object identified using the shared session ID for pairing.
Session ID is generated by a web server and can be stored on the user agent as cookies or in other web storage. A session object contains a session ID, its expiration, and associated device information and the current service information being paired.
The general idea is to use a server-generated session ID shared by two devices for communication between two devices. The session ID is passed from the application instance on the upstream device to the application on the downstream device using watermarks.
Session Object
For example, a session object can be represented as a data structure with key-value pairs:
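One possible shape for such a session object is sketched below; the field names are illustrative assumptions drawn from the descriptions above, not a normative schema:

```typescript
// Illustrative session object (field names are assumptions based on the text:
// session ID, expiration, DeviceInfo, ServiceInfo, and the pairing state).
interface SessionObject {
  sessionID: string;            // large random number identifying the session
  expiration: number;           // expiry timestamp (ms since epoch)
  upstreamDevice?: DeviceInfo;
  downstreamDevice?: DeviceInfo;
  serviceInfo?: ServiceInfo;
  activeApplicationDevice?: "upstream" | "downstream";
}

interface DeviceInfo {
  deviceId: string;
  model?: string;
  platform?: string;            // e.g., "HbbTV 2.0", "ATSC 3.0"
  supportsWatermarkDetection?: boolean;
  supportsWebSocket?: boolean;
}

interface ServiceInfo {
  serviceId: string;
  countryCode?: string;
  serviceType?: "linear" | "on-demand" | "time-shift";
  broadcastSystem?: string;     // e.g., "ATSC 3.0", "DVB"
}
```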
The session object is like a data locker for web application instances or users, and the key for the locker is the session ID. The server only allows a request with a session ID to access the data locker corresponding to that session ID.
If the applications connect to multiple web servers (as is typical in a cluster environment), session state must be shared between the cluster nodes that are running the web server software. Methods and systems for sharing session state between nodes in a cluster are widely known and available.
Direct Communications Between Upstream and Downstream Applications Using WebSocket
When there are direct communications between upstream and downstream applications using protocols such as WebSocket or WebRTC, the protocol should be applied after exhausting attempts to establish direct communications between the application instances on more than one HbbTV device. For example, if the browser on which an HbbTV application is launched on a legacy device supports WebSocket/WebRTC, the application may attempt to establish a WebSocket/WebRTC connection with another device running a WebSocket/WebRTC server according to a known discovery protocol if the WebSocket server is not known to the application, or attempt to establish such a connection by scanning all possible IP addresses and ports within the home network if there is no known discovery protocol.
The session ID in this case is generated locally by the application for each associated service/channel. For example, it can be a large random number stored as a cookie for each WebSocket request and response. A session object contains a session ID, its expiration, and the associated device information and current service information being paired.
Active Application Device Determination
Once two devices are paired, the decision on which is the Active Application Device can be made using several approaches, as described below.
Decision Made by Upstream Device
Once two devices are paired, the application currently running on the upstream device takes leadership to determine which one of the paired devices is the Active Application Device.
The event message may be embedded in a single video frame, an interval of audio samples, or continuously in a period of audio and/or video content.
Decision Made by Pairing Server
Once two devices are paired on a server using session management, the server may determine which of the paired devices is the Active Application Device based on predefined rules using the DeviceInfo and ServiceInfo about the devices and service involved in the pairing process.
If the upstream device is determined to be the Active Application Device, the server sets the Active Application Device State to true for the upstream device and false for the downstream device in the session object stored on the server, respectively. Otherwise, the server sets the Active Application Device State to false for the upstream device and true for the downstream device in the session object stored on the server, respectively. For example, an event message containing the Active Application Device State value ‘false’ may be sent to the downstream device, while the server returns true to the upstream device (i.e., the upstream device is the Active Application Device).
After the Active Application Device is determined, the Active Application Device State value associated with the requesting device is included in the response to each HTTP request from both upstream and downstream devices.
Decision Made by the User of the Upstream and Downstream Devices
Once two devices are paired, the applications on both upstream and downstream devices can prompt the user to decide which device is the Active Application Device.
The application on the upstream device or downstream device can present a user interface that allows the user to make the decision. For example, two choices are presented to the user for selection: click here to select the upstream device (i.e., the set-top box) on which the interactive application will run, or click here to select the downstream device (e.g., the TV) on which the interactive application will run. The application sets the Active Application Device State for the upstream device to true if the user selects the first choice, or to false if the user selects the second choice.
Decision Made by Negotiation Between Upstream and Downstream Devices
Once two devices are paired using WebSocket communication between applications on the paired devices (e.g., the app2app protocols specified in HbbTV and ATSC 3.0), the applications can directly negotiate which device is the Active Application Device.
Pairing Token
A pairing token contains the pairing result for a paired device. It can be stored as a cookie (or in other web storage) on a device.
A pairing token contains at least the following information: the Active Application Device State, the Device Position of the pairing, and the validity of the token.
A pairing token may also contain DeviceInfo, ServiceInfo, the date and time when the two devices were paired, and the session ID used for the pairing.
A web application is sandboxed according to its web origin. Thus, a pairing token is created for all applications on a device with the same web origin. In addition, the pairing token can contain a service/channel identifier as described in ServiceInfo, which allows a different pairing token to be created for each service/channel associated with one or more applications that share the same web origin (e.g., a single application is provided for multiple services of a broadcaster, or a broadcaster offers multiple applications that have the same web origin).
For example, if a broadcast station has 3 channels and a single application is developed for all these channels (or three different applications for the channels share the same web origin), the pairing protocol can be applied to each of the channels, resulting in a different pairing token for each channel. A different pairing token for each channel allows the station to set a different pairing preference for each channel. For example, the upstream device may be a legacy device and the downstream device may be a watermark-capable device in the paired devices. The station may prefer to select the upstream device as the Active Application Device for channel #1, and the downstream device as the Active Application Device for channel #2, because the application associated with channel #2 requires advanced capabilities that are available only on the watermark-capable device.
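For illustration, a pairing token might be represented and stored as sketched below, reusing the DeviceInfo and ServiceInfo shapes from the session object sketch above; all field names are assumptions:

```typescript
// Illustrative pairing token stored per web origin and per service/channel.
interface PairingToken {
  activeApplicationDeviceState: boolean;   // true if this device runs the app
  devicePosition: "upstream" | "downstream";
  validUntil: number;                       // token validity (ms since epoch)
  serviceId?: string;                       // enables one token per channel
  deviceInfo?: DeviceInfo;                  // see session object sketch above
  serviceInfo?: ServiceInfo;                // see session object sketch above
  pairedAt?: number;                        // date/time of pairing
  sessionID?: string;                       // session used for the pairing
}

// Stored as a cookie or in web storage, keyed by service/channel:
function storePairingToken(token: PairingToken): void {
  localStorage.setItem(
    `pairingToken:${token.serviceId ?? "default"}`,
    JSON.stringify(token),
  );
}
```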
Device Pairing Protocols
Direct Negotiation Between Upstream and Downstream Applications Using WebSocket
This pairing protocol is applicable to the use case where both upstream and downstream devices are watermark capable, and WebSocket is available to both upstream and downstream applications. For further details, see the Section “Dual Terminal Scenario Analysis” below.
Active Application Device Determined by Upstream Application
This pairing protocol is applicable to the use case where:
This pairing protocol uses watermarks to deliver the Active Application Device decision made by the upstream application to the downstream application. For further details, see the Section “Dual Terminal Scenario Analysis” below.
Server-Mediated Device Pairing Protocol
This pairing protocol is applicable to the use case where:
Pairing Process for Application on Legacy Device
Apply the following pairing protocol on a legacy device once an application is launched on such a device. Set the devicePosition to “upstream”. Note that the legacy device is always the upstream device in the pairing process.
Pairing Process for Application on Watermark-Capable Device
If the device is watermark-capable and the application is launched through the watermark-capable signaling recovery process, apply the following pairing protocol. Note that a watermark-capable device can be either an upstream or downstream device.
Pairing Process at Server
Examples of Pairing Process
Session Hijacking
The Session Hijacking attack compromises the sessionID by stealing or predicting a valid session token to gain unauthorized access to the web server, using common attack methods such as a predictable session ID, session sniffing, client-side attacks (XSS, malicious JavaScript code, Trojans, etc.), and man-in-the-middle attacks.
The sessionID is only valid during a session for pairing, which typically lasts for a few seconds. A large and unique random number generated by the server using a secure hash function such as SHA-256 can make the sessionID less predictable. HTTPS can be used for client-server communication during the pairing process to mitigate these common attacks. In addition, to prevent an unauthorized device from accessing the sessionID by performing watermark detection on the content output from the downstream device, the watermarks carrying the dynamic event containing the sessionID can be removed or overwritten by the watermark detector or web application on the downstream device after the sessionID has been detected.
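As one hedged example of the random-number generation suggested above, a Node.js server might derive the sessionID as follows; the function name is illustrative:

```typescript
import { randomBytes, createHash } from "crypto";

// Minimal sketch (Node.js): derive a hard-to-predict session ID by hashing
// cryptographically random bytes with SHA-256, as suggested in the text.
function generateSessionID(): string {
  const entropy = randomBytes(32);                  // 256 bits of randomness
  return createHash("sha256").update(entropy).digest("hex");
}
```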
Re-Initiation of Pairing Process
A new pairing process may be initiated in the following cases:
De-Duplication of Usage Data
There are several approaches to ensure accurate viewership/usage data is reported; in other words, only the usage data collected on the Active Application Device should be reported. Note that usage data can be collected and/or reported to a usage data server by the application and/or device functions on the Active Application Device.
For background information, see:
Dual Terminal Scenario Analysis
This analysis defines a means to avoid conflict between HbbTV applications running on both terminals in a dual terminal scenario.
Candidate Solutions
The following solutions may be used alone or in combination.
Network Signaling
The Network Signaling (“NS”) solution employs network communication between the applications running on the upstream and downstream terminal to identify and respond to the dual-terminal scenario.
This solution has the following functional requirements:
Req. NS1 is expected to be met in any P2AD environment because the P2AD function itself requires access to a broadband server for DNS resolution, AIT recovery, application acquisition, etc. Since both terminals must be able to reach a WAN server for these functions, by definition there is a network path between the two terminals in the dual terminal scenario. It is also possible that other network paths will exist between the two terminals; e.g. the two terminals may be able to connect over a LAN.
Req. NS2 is non-trivial to fulfill. Issues to address include:
Possible means of fulfilling Req. NS2 and Req. NS3 are described below.
Device Pairing
The user performs a configuration step wherein the devices are “paired” through the establishment of a data record identifying the devices and their connection relationship. This data record is accessed by applications running on one or both devices to identify the presence of the dual terminal condition and coordinate the application behavior on the two terminals to avoid conflict. The pairing step may involve an action performed on both devices, similar to Bluetooth or YouTube device pairing. The pairing may be established in a peer-to-peer fashion (directly negotiated between the two terminals), or may be established with the assistance of an intermediary (e.g., with each terminal communicating with one or more network servers), or a combination of the two. For purposes of pairing, the terminals can be identified either using the HbbTV unique device identifiers or third-party cookies (in which case the pairing can be maintained across services) or alternately using first-party cookies or the local (web) storage API (in which case the pairing may only be maintained within a service). Ideally, a pairing will persist across application sessions, but pairings may be transient (e.g., within a session).
An example of how device pairing might be accomplished among devices that support terminal discovery and application-to-application communication (e.g. HbbTV 2.0 and later) is as follows:
An example of how device pairing might be accomplished among devices that do not support terminal discovery (e.g., HbbTV 1.0 and 1.5) is similar to the above, but communication between devices must be brokered by a central server. Devices can't discover each other, so either the central server needs to identify pairing candidates within a household or the user needs to initiate pairing on the devices.
Once pairing is established, applications running on paired devices can negotiate to resolve conflicts, which may be fully automated or may include requests for user input to direct the resolution.
In-Band Signaling
The In-Band Signaling (“IS”) solution to the dual terminal scenario uses modifications to the audio or video introduced by an application running on an upstream terminal to transmit a message that can be received by a downstream terminal (either at the device level or in the running application). The message provides a means for the application on the upstream terminal to make the downstream terminal (or the application running on it) aware of modifications that it is making to the presentation, enabling the downstream terminal to avoid making conflicting modifications.
IS may be performed using one or more of the techniques described below.
Video Watermark Insertion
The application running on an upstream terminal can insert a video watermark message using the ATSC-defined technology by placing a graphic overlay on the top two lines of video. Such graphic overlays are supported in all HbbTV specification versions, so they should be supported in any upstream terminal. With this approach, downstream terminals would be required to support video watermark detection.
The message could be the display_override_message as defined by ATSC, which directs the downstream device to have applications suppress any modifications to the received audio and video for up to 15 seconds. The message would need to be sent periodically, every 15 seconds, during time periods when audio or video modifications made by the application on the downstream terminal could conflict with those of the application on the upstream terminal.
Alternately, the video watermark message could convey a dynamic_event_message carrying a pairing token that uniquely identifies the application instance running on the upstream terminal. This token can be used by both terminals, in combination with NS, to pair the two terminals. An example pairing process is as follows:
Once pairing is established, applications running on paired devices can negotiate to resolve conflicts, which may be fully automated or may include requests for user input to direct the resolution.
Note that the insertion of a video watermark message by the upstream terminal may overwrite pre-existing video watermark messages in the video content. For this reason, the insertion should be periodic and very brief (e.g., as short as a single frame). Because terminals in a dual terminal scenario are directly connected by high-quality video connections, very brief video watermark messages may be embedded at very low luminance levels and still be reliably detected.
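A minimal sketch of such an overlay-based insertion is shown below; it assumes a simple one-bit-per-symbol luminance coding on the top two lines, standing in for the actual ATSC A/335 symbol coding, which is not reproduced here:

```typescript
// Minimal sketch (assumptions: the overlay canvas matches the video plane,
// and payload bits map to luminance symbols across the top two lines).
function renderWatermarkOverlay(canvas: HTMLCanvasElement, bits: number[]): void {
  const ctx = canvas.getContext("2d")!;
  const symbolWidth = canvas.width / bits.length;
  for (let i = 0; i < bits.length; i++) {
    // Very low luminance levels suffice over a high-quality HDMI link.
    ctx.fillStyle = bits[i] ? "rgb(8,8,8)" : "rgb(0,0,0)";
    ctx.fillRect(i * symbolWidth, 0, symbolWidth, 2); // top two lines only
  }
}
```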
Audio Watermark Insertion
Audio watermark modification, LSB encoding, audio replacement, DTMF, or similar techniques could be used. HbbTV 2.0 supports WebAudio, which can be used for insertion. Different means of insertion are needed for v1.0 or v1.5.
User Device Configuration
P2AD terminals may provide a built-in user configuration setting whereby the P2AD function may be activated or deactivated:
Technique without WAN Pairing Server
With this approach, if the upstream device is a legacy device, it always gets the interactivity. The broadcaster decides on a “leadership position” (either upstream or downstream) for the application, which is the terminal position where interactivity is immediately initiated, in advance of the initial pairing attempt, when dual 2.0.1 terminals are connected in series.
If the application is running on a v1.0/1.5 terminal:
If the application is running on a version 2.0.1 terminal:
On launch:
Subsequently:
Verifier-Based Signaling File Generation
Introduction
Other embodiments relate to media devices implementing new television standards, such as ATSC 3.0, which includes audio/video essence and metadata/signaling. These embodiments include techniques for creating a signaling file that contains metadata and signaling data about the broadcast service being presented. The signaling file may include URLs that can be used to access signaling information for supplementary signaling and content. The signaling also contains a mapping between a first timing system and a second timing system. The first timing system may comprise the watermark timeline represented in intervals and the second timing system may comprise a DASH presentation time.
This embodiment is related to U.S. patent application US20160057317A1, which is incorporated herein by reference. This section describes a method to populate signaling files for redistribution scenarios based on the metadata extracted from the Advanced Television Systems Committee (ATSC) content comprising audio/video essence as well as metadata/signaling, and a mapping between the first timing information extracted from watermarks and the second timing information carried in the ATSC content. For further details about ATSC, including ATSC 3.0, see patent application US20150264429, which is incorporated herein by reference.
A signaling file contains metadata and signaling data about the broadcast service being presented including URLs that can be used to access signaling information for supplementary signaling and content. Signaling files are delivered to the receiver by a recovery server via broadband in the redistribution scenarios.
The first timing information is represented as an interval code that identifies the interval of content in which the watermark payload value is embedded. The watermark payload includes at least the following other fields: 1) a server code field that identifies a server which acts as the starting point for acquisition of supplementary content; and 2) a query flag field, whose value change announces an event or indicates the availability of a new signaling file.
Signaling File Generation
U.S. patent application US20160057317A1, incorporated herein by reference, and the accompanying drawings describe a signaling file generation process comprising the following steps:
1. Receive a multimedia content including audio and video components.
2. Embed watermarks into one or both of the audio or video components, the embedded watermarks including first timing information.
3. Process the embedded content for transmission (including packaging the embedded content with metadata into an ATSC emission).
4. Obtain, at a receiver device, a first version of the processed multimedia content through a communication channel such as over-the-air broadcast, broadband connection, physical connector, or shared file system.
5. Perform decoding and rendering of the received first version of the processed multimedia content, and simultaneously perform the following steps during decoding and rendering:
6. Once an interval code is extracted in step 5, immediately create a signaling file that may contain the following information and make the signaling file available to receivers for content signaling in redistribution scenarios:
Mapping Between DASH Timeline and Watermark Timeline
This section describes in detail the mappings between the watermark timeline represented in interval codes and the DASH presentation time used in ATSC 3.0 for media presentation.
For content delivered in ISOBMFF format, samples 0-89 are shown in the accompanying figure.
When the media segments are delivered to the receiver according to DASH, the receiver is required to have a source for wall-clock UTC time. A Network Time Protocol (NTP) source hosted on the OS or a Content Delivery Network (CDN) server is typically used for delivery of UTC time to the receiver.
The availabilityStartTime field in DASH MPD defines a Segment availability time when the first content segment is available to the distribution network. All content segments of a given presentation described in the MPD have a common availability timeline.
For live broadcast, a fixed delay offset is sometimes specified in the MPD to make sure that the live content is presented on all receivers at approximately the same time. This fixed delay is represented as a time relative to availabilityStartTime and defined as the suggestedPresentationDelay field in the MPD. For example, if availabilityStartTime is midnight on Jan. 1, 2016 and the fixed delay is 10 seconds, a receiver should present the first sample of the live content at 00:00:10, Jan. 1, 2016, no matter when the sample is available at the receiver for presentation. Thus, the sum of availabilityStartTime and suggestedPresentationDelay represents the earliest presentation time in an MPD.
Calculation of the Anchor Time
DASH presentation time is represented as wall clock time in absolute UTC time. Watermark timeline is represented in the interval codes embedded in the content. Incremental interval codes are sequentially embedded in the order of the content presentation. An interval code is a value that identifies the interval of content in which a watermark payload value is embedded. A watermark segment is a continuous interval of watermarked content that contains incremental interval codes.
To map from the watermark timeline to the DASH presentation timeline, an anchor value (T) on the DASH presentation timeline is established for a watermark segment. For a watermark segment, T represents the DASH presentation time of the first sample (S) of the first content interval from which a specific interval code (N) is extracted.
T can be calculated as follows:
T = EarliestPresentationTime + PeriodStart + offset(S), where:
For a given T of a watermark segment, an interval code n can be mapped to the DASH presentation time as: PT(n) = 1.5*(n − N) + T, where n is an interval code value extracted from a marked content interval in the watermark segment and PT(n) is the DASH presentation time of the first sample of the marked content interval. For example, the presentation time of the first sample (sample 45) of the marked content interval containing interval code 1 is 00:00:11.5000, Jan. 1, 2016, as shown in the accompanying figure.
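The two formulas above can be restated as a short sketch (times in seconds since the UTC epoch; variable names mirror the text):

```typescript
// Minimal sketch of the mapping above (assumes 1.5 s watermark intervals).
const INTERVAL_DURATION_S = 1.5;

function anchorTime(earliestPresentationTime: number, periodStart: number,
                    offsetS: number): number {
  // T = EarliestPresentationTime + PeriodStart + offset(S)
  return earliestPresentationTime + periodStart + offsetS;
}

function presentationTime(n: number, N: number, T: number): number {
  // PT(n) = 1.5 * (n - N) + T
  return INTERVAL_DURATION_S * (n - N) + T;
}

// Example from the text: T = 00:00:10 on Jan. 1, 2016 (UTC), N = 0, n = 1.
const T = Date.UTC(2016, 0, 1, 0, 0, 10) / 1000;
console.log(presentationTime(1, 0, T)); // corresponds to 00:00:11.5, Jan. 1, 2016
```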
Alignment Between DASH and Watermark Timelines
A watermark timeline is created when VP1 payloads containing consecutive interval codes are continuously embedded in a content stream. It is expected that the same timeline is recovered with extracted VP1 payloads in a watermark detector.
In order to have a precise mapping between DASH and watermark timelines, alignment between these timelines must be maintained. When overlaps or gaps are introduced in the DASH timeline (e.g., resulting from Media Segments with actual presentation duration of the media stream longer or shorter than indicated by the Period duration), the offsets represented by such overlaps or gaps at the position where overlaps or gaps are present need to be considered in mapping from the watermark timeline to the DASH timeline or vice versa. For example, such overlaps or gaps need to be subtracted from or added to the anchor time T as described in the previous section.
DASH allows for seamless switching between different Representations with different bitrates of a media segment for adaptive delivery of the content. Additional seamless transitions between DASH elements are possible with constraints defined in a system and signaling information described in the MPD:
In order to support seamless switching between the elements, the multiple versions of the same element (Representation, Adaptation Set, or Period) are aligned (i.e., their boundaries are aligned in the media data). If each of the versions is embedded with identical watermarks, a continuous watermark timeline is maintained even if a presentation is created by switching between these switchable elements. Otherwise, the offset between the boundary of the content segment carrying a watermark payload and the boundary of the switchable elements needs to be considered in mapping between timelines.
Extension/Variations/Embodiments
Signaling File Generator for Each Service
A device that performs steps 4-6 above is needed to generate signaling files for a broadcast service.
Optional Watermark Detection
Once the mapping is established, the watermark detection step 5.a (shown in the accompanying figure) becomes optional.
Broadcast Monitoring and Verification
Once the signaling file generator detects an unexpected discontinuity in the interval codes extracted by the watermark detector, a notification is provided to the broadcasters. Such a discontinuity may be caused by the watermark embedder, content encoding and packaging, or transmission.
Prediction of Signaling File
To avoid possible latency introduced by the signaling file generator, a signaling file can be generated prior to detection of the interval code that is associated with the signaling file. For example, assume a mapping between the first and second timing systems has been established, continuous interval codes are embedded in the current service, and the content identifier is not expected to change. If the currently detected interval code is n, a new signaling file can be created which contains the interval code value n+1 and the rest of the information in the current signaling file that is associated with interval code n.
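A hedged sketch of this prediction step follows; the SignalingFile shape is an assumption, with only the interval code field made explicit:

```typescript
// Illustrative signaling file shape: interval code plus opaque remaining fields.
interface SignalingFile {
  intervalCode: number;
  [field: string]: unknown;   // remaining signaling/metadata fields carry over
}

function predictNextSignalingFile(current: SignalingFile): SignalingFile {
  // Pre-generate the file for interval n+1 to hide generator latency.
  return { ...current, intervalCode: current.intervalCode + 1 };
}
```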
Timing Information Using Fingerprint
As disclosed in U.S. patent application US20150264429 described above, the first timing information can also be obtained using fingerprint extraction and matching.
Application-Based Reporting of Media Consumption
In another embodiment, a system and method for application-based reporting of media consumption is disclosed. Information from Over-The-Top (OTT) metadata (such as content IDs, sources, and original broadcast time) is combined with timing from the application (including start and stop times) to populate data in Consumption Data Messages (CDMs). This allows a broadcast timeline to be built against which the consumer's viewing can be mapped. When a consumer begins OTT viewing, the system captures the metadata, which includes the air date and time of the content. The air date and time is used in the CDM as broadcastStartTime. When viewing ends (completes playing or is paused or stopped by the consumer), the duration of the viewing is used to determine the broadcastEndTime. If a consumer skips within the content, the application's data on viewing position is used to determine the position of the consumed content on the broadcast timeline. The system allows comparison of consumption of OTT data with broadcast data by standardizing the data reported, thus bridging the gap between broadcast data used for ATSC-defined CDMs and data available for OTT content.
This solution allows comparison of consumption of OTT data with broadcast data by standardizing the data reported. The software populates the ATSC-defined CDMs using metadata and other data available from OTT applications (e.g., playback start and stop times). Unlike the use of watermark or fingerprinting data, this analysis is unobtrusive and does not require direct access to the content. Current OTT usage reporting is specific to the technology and platform used. Most OTT applications provide proprietary reporting data that is not standardized; thus, usage data is not comparable across delivery mechanisms, or even between application providers.
Other forms of measurement use fingerprinting or watermarking to determine content being consumed. This is challenging for application developers as the content is not usually accessible for detection purposes. This software combines information from the OTT metadata (content IDs and sources as well as original broadcast time) with timing from the application (start and stop times) to populate data in the CDM, bridging the gap between broadcast data used for ATSC-defined CDMs and data available for OTT content.
The metadata allows the software to build a broadcast timeline against which the consumer's viewing can be mapped. When a consumer begins OTT viewing, the software captures the metadata which includes the air date and time of the content. The air date and time is used in the CDM as broadcastStartTime. When viewing ends (completes playing or is paused or stopped by the consumer), the duration of the viewing is used to determine the broadcastEndTime. If a consumer skips within the content, the application's data on viewing position is used to determine the position of the consumed content on the broadcast timeline.
This solution allows comparison of consumption of OTT data with broadcast data by standardizing the data reported. The solution consists of lightweight usage reporting client (LURC) software that is resident on a media consumption device using a media application, and a processing entity called a CDM Builder (CDMB) that uses data from the client to create Consumption Data Messages (CDMs) as defined by ATSC. Using CDM-based reporting data allows comparison across delivery mechanisms or application providers, unlike current OTT usage reporting, which is specific to the technology and platform used.
Unlike the use of watermark or fingerprinting data used by other forms of measurement, this analysis is unobtrusive and does not require direct access to the content, which is crucial as this content is not usually accessible to media applications for detection purposes.
In addition, many consumption devices have extremely limited resources available for processing metadata and reporting usage. The use of an intermediary to construct the CDM usage data alleviates many of the concerns that application developers may have regarding the addition of reporting to their systems.
The LURC reports playback start and stop times along with a media ID that can be used as a reference key to obtain additional data about the content. The CDMB populates CDMs using the timing data from the LURC along with metadata obtained using the media ID, e.g., via queries to other entities or to data stored at a database managed by the CDMB.
Consumption Data Messages (CDM's) from Over-The-Top (OTT) Metadata and Playout Information Via an Intermediary Entity
The metadata allows the CDMB to build a broadcast timeline against which the consumer's viewing can be mapped. When a consumer begins OTT viewing, the LURC captures the media ID, which can be used by the CDMB to determine the air date and time of the content. The air date and time is used in the CDM as broadcastStartTime. When viewing ends (completes playing or is paused or stopped by the consumer), the duration of the viewing is used to determine the broadcastEndTime. If a consumer skips within the content, the media application's data on viewing position is used to determine the position of the consumed content on the broadcast timeline.
Application-Based Reporting of Media Consumption—Data Collection and Presentation
Introduction
This section discusses audience measurement as it pertains to applications, or “apps”. These apps can be resident on a platform such as Roku, a set-top box (STB) or a digital video recorder (DVR), or on a mobile device. The measurement described in this document may also apply to user interaction with content via a web browser.
Platforms include
Data that is currently available to broadcasters is not consistent across consumption platforms. Most broadcasters have access to precise viewing data from browsers and mobile devices via Google Analytics, Omniture/Adobe Analytics, etc. Tools for other platforms (e.g., Roku) don't always provide viewing data with the same level of precision or the additional data that the mobile platforms provide, or if they do, they charge premiums for the information.
The present embodiment will be discussed in the context of the Aspect watermark-based audience measurement product available from Verance Corporation (www.verance.com). Potential Aspect customers report that the information they have for application-based viewing is not accepted by advertisers and cannot be used for marketing ad placement in their content.
The intention of Aspect Reporting for Applications is to provide data that is consistent across platforms, not to replace the detailed second-by-second consumption information offered by some application platforms.
Use Cases
Users interact with streaming apps in a different way than they interact with linear broadcasts, so measurement via apps will necessarily be different from measurement of linear broadcast television.
However, it is possible to create an aggregate measurement for broadcast and streaming of some content. In cases such as live streaming, the total audience may be calculated by summing the number of viewers of the linear broadcast with those viewers using streaming in its many forms. This will be discussed later in this document as there are cautions around ad loads that come into play when aggregating different delivery mechanisms.
App reporting has a number of distinct use cases:
In the first two cases, cross-platform measurement is fairly straightforward. The third case will be challenging, as only portions of the content will correspond to broadcast.
The last use case will not involve mapping to broadcast time and audience, as there is no broadcast for comparison. However, reporting on this usage is still important to provide consistency.
Live Streaming
Live Streaming is viewing the current broadcast using an OTT mechanism at the same time as it is broadcast over the air (OTA) and/or via cable or satellite. For our purposes, this includes any advertisements and interstitials as broadcast.
If different advertisements are shown during streaming, that would be considered mixed content. However, if the content is streamed as it was broadcast but different advertisements are presented prior to the initiation of streaming, the supplemental ad portion would be “Non-Broadcast” and the program content would be Live Streaming.
Time Shifted Broadcast
Time-Shifted Broadcast refers to viewing a prior broadcast, including advertisements and interstitials as they were broadcast.
As with Live Streaming, if different advertisements are shown, that would be considered mixed content. However, if the content is presented as it was broadcast but different advertisements are presented prior to the initiation of viewing, the supplemental ad portion would be “Non-Broadcast” and the program content would be Time-Shifted Broadcast.
Mixed Content
Mixed Content is a combination of content that was broadcast and content that was not broadcast. This can occur when:
In all these cases, the content that was broadcast will be compared to broadcast viewing, and the content that was not broadcast will be measured separately.
Non-Broadcast Content
Non-broadcast content is content that is exclusively available via apps, such as the new Star Trek series that is only available via subscription to CBS All Access. This content will be measured and the measurement results presented, but not compared to broadcast data.
Usage Metrics
Key usage metrics are:
Views
The basic unit of measurement for apps will be Views. This term is distinct from “Viewers”, which implies individual persons. While linear broadcast measurement assumes that all members of a household are potential viewers, the number of viewers of an app can vary greatly depending on the platform on which the app is being run. Viewers of mobile apps are typically one individual (unless the content is sent to a large screen via a device such as a Chromecast), but viewers of other apps can be an entire household if the app is on a device such as a Roku.
Every streaming start to a target device is considered a View. Both Total and Unique Views will be used, as described below.
For example, if the viewer starts watching at 6:00 pm, stops at 6:10 pm and then starts watching again at 6:42 pm, this is counted as two Total Views, and one Unique View for the time period from 6 pm to 7 pm.
To be considered a View, the content must be actually consumed. Content that is downloaded but not viewed, for example, is not counted as a View. A Live View requires that 30 seconds or more of content be consumed during the reporting time period. Average Quarter Hour (AQH) viewing will be calculated by summing the number of viewing seconds for each unique device. A device with a total of more than 300 seconds within a 15-minute period is counted as one AQH View. For minute-by-minute viewing trends, a View is a unique device that consumes more than 30 seconds of content during a given minute.
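A minimal sketch of the AQH rule above is shown below, assuming viewing is reported as per-device playback spans; the Span shape is illustrative:

```typescript
// Minimal sketch of the View-counting rules (assumption: viewing is reported
// as per-device lists of (startMs, endMs) playback spans).
interface Span { startMs: number; endMs: number }

function secondsViewed(spans: Span[], windowStartMs: number, windowEndMs: number): number {
  let total = 0;
  for (const s of spans) {
    const overlap = Math.min(s.endMs, windowEndMs) - Math.max(s.startMs, windowStartMs);
    if (overlap > 0) total += overlap / 1000;
  }
  return total;
}

// A device counts as one AQH View if it accumulates more than 300 s of
// viewing within a 15-minute window, per the rule above.
function isAqhView(spans: Span[], quarterHourStartMs: number): boolean {
  return secondsViewed(spans, quarterHourStartMs, quarterHourStartMs + 15 * 60_000) > 300;
}
```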
Viewers
Because we are measuring content consumption at the device level, it is possible to tell what is being consumed, as well as when viewing occurred and for how long, but it is not possible to determine precisely who is watching the content. Demographic data about household composition will be available, but the exact individuals consuming the content will not be known.
Content Impressions
The number of people exposed to a program or piece of content. This is based on a TBD minimum percentage of the content consumed, as the duration of streaming clips tends to vary widely.
Potential Views
Data that would be useful to determine Potential Views (akin to HUT or Universe in the broadcast world) includes:
Comparison to Broadcast
The app audience can be compared to or combined with the broadcast audience when the broadcast and app content overlap. When content differs between the distribution methods, this will be indicated in reports, for example showing the measurement in dashed lines or an alternate color.
Reporting Sources
Table 1 shows a matrix of different types of content, their sources and delivery methods, and what entity (device or software) could be used to report usage.
Architecture and Components
The reporting system for applications begins as content is made available, at the point of broadcasting, multicasting, or unicasting. Content is packaged into segments for streaming via technologies such as DASH (Dynamic Adaptive Streaming over HTTP) or HLS (HTTP Live Streaming). Note that, although this document refers specifically to broadcast content, the components and techniques described could be used for non-broadcast content (e.g., supplemental content streams, VOD, etc.).
Components of the system include:
An overview of the App Reporting System is shown in
Audio and/or video content is watermarked prior to multiplexing for broadcast. The watermark can indicate information such as the broadcast source and time. This content is multiplexed with metadata and other broadcast information.
The Publishing tool takes this information and content and creates a data stream or files that are suitable for streaming. This may include detection of watermarks in content to create Timed Metadata. If the content is not watermarked, or if watermark detection is not present in the Publishing tool, the Timed Metadata can be created based on the broadcast metadata (broadcast time, source, Media ID, etc.). If Timed Metadata is not created, the Consumption Device will use standard metadata for reporting.
If using watermarks, the publishing tool also calculates the watermark offset, which is the number of milliseconds since the start of the first 1.5-second watermark segment.
The processed content and metadata are passed to the OTT tool for packaging and distribution. The Consumption Device receives the content, presents it to the end user, and provides the metadata to the Reporting Client. The Reporting Client extracts the key information from the metadata, along with consumption information, and passes this to the CDM Builder (which can be part of the Consumption Device or can be a separate service or entity).
The CDM Builder retrieves recovery data (based on the watermark) and/or metadata information (based on the Media ID) and uses this information, in conjunction with the consumption data, to construct CDMs. The data flow for data preparation is shown in
The data processing at the streaming client can be performed in a number of ways, depending both on the incoming data stream and the client features. See
Timed Metadata is joined with consumption data (playback timing, etc.) by the Reporting Client. This information is passed to a CDM Builder that can be a separate service or internal to the Consumption Device.
Reporting Data
Data is reported in the CDM format. The CDM schema allows additional properties to be defined; additional fields can be added to CDMs and they will still be compliant with the ATSC specifications.
Consumption Data
The watermark is used to determine both the source of the content and the time at which it was broadcast. A number of metadata fields exist to determine the content source, but special processing is required to determine the broadcast time.
Sources of Timeline Data
The best available source should be used to obtain the timeline information. The sources, in order of preference, are:
While Timed Metadata is the most straightforward method of determining the broadcast timeline, the watermark is preferred as it is integral to the content (tags may be removed or altered by intermediaries). Timed Metadata with watermark payloads will provide more consistent and precise timing information than Timed Metadata with Media IDs. Both are more precise and consistent across platforms than standard metadata.
Mapping to Broadcast Timeline
Playback data from the app can be used to determine which content was consumed, and this can be mapped to the broadcast timeline. Apps have access to information about the position of the content being presented relative to the content stream (i.e., the current position within the stream).
For example, suppose the first segment of content was broadcast at 8:02 PM. If the app reports that the user watched the entire first minute of the content, then skipped ahead three minutes and watched two more minutes, we can determine that content between 8:02 and 8:03, and between 8:06 and 8:08, was viewed.
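A minimal sketch of this mapping, assuming playback spans are reported as offsets in seconds from the start of the stream and that the stream position maps one-to-one onto the broadcast timeline (function name and data layout are illustrative):

```python
from datetime import datetime, timedelta

def to_broadcast_timeline(broadcast_start, watched_spans):
    """Map (start_offset_s, end_offset_s) playback spans onto wall-clock
    broadcast times, assuming live, unedited content."""
    return [(broadcast_start + timedelta(seconds=a),
             broadcast_start + timedelta(seconds=b))
            for a, b in watched_spans]

start = datetime(2022, 3, 23, 20, 2)   # first segment aired at 8:02 PM
spans = [(0, 60), (240, 360)]          # first minute; skip 3 min; 2 more min
for s, e in to_broadcast_timeline(start, spans):
    print(s.strftime("%H:%M"), "-", e.strftime("%H:%M"))
# 20:02 - 20:03 and 20:06 - 20:08
```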
Timing Data Source
As the precision of the timing data will vary between different sources, the CDMs should include a field that indicates the data source. Values should include:
Location
The data reported must include information about where the content consumption occurred geographically. For example, we would like to determine whether the consumption happened inside the market area of the content's origin or outside it. Ideally, this data would be provided by the platform's geolocation service (latitude and longitude); if this is not available, an IP address lookup could be used.
Reporting Device
Device IDs will be assigned per application, not per device. For example, if a tablet has station KTAN's news alert application, Chrome, and the HuluLive app, that tablet will have three device IDs. The device ID included in the usage report will be the device ID that corresponds to the application that was used to consume the content.
Device fields that are tracked currently include:
Data about the reporting device should also include the delivery method and reporting platform and version, as reports will need to specify these (e.g., number of Views on Roku vs. Amazon vs. station app on mobile).
Additional Data
Additional data may be available via the content metadata or other sources. To take full advantage of application-based data, CDMs can be extended to include extra data available to apps such as tags (e.g., “local”, “city_hall”, “pets”), duration of VOD clip, ad data, etc.
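For example, a hypothetical extended-CDM fragment might carry such additional properties alongside the standard fields (the extension field names below are illustrative, not normative):

```python
# Hypothetical extended CDM fragment. Extension fields such as "tags",
# "clipDurationSeconds", and "adData" are illustrative additions; they
# remain schema-compliant because the CDM schema permits extra properties.
cdm_extension = {
    "AVService": [{
        "serviceID": 1001,
        "reportInterval": [{
            "startTime": "2022-03-23T18:00:12Z",
            "endTime": "2022-03-23T18:07:45Z",
            "tags": ["local", "city_hall", "pets"],  # extension
            "clipDurationSeconds": 453,              # extension
            "adData": {"adId": "EXAMPLE-AD-1"},      # extension
        }],
    }],
}
```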
Data Validation and Processing
Data validation will follow Media Rating Council (MRC) standards for data filtration of suspicious data. Bridging values and other details will be determined during the design phase, in accordance with guidelines such as those set forth by the MRC and others as appropriate.
Reports
Total Live Views
Between 6:00 pm and 6:30 pm, how many total views of live content were measured within the market area? The objective of this report is to determine how many views of live content occurred during a specific time period or program for a minimum amount of time. The user will enter the market, station, apps (one, multiple or all), date range, and time period or program name. This report will show a summary of all live views and will allow the user to drill down into the data by platform type and platform type detail.
Filters and Selectors
Summary
A summary of the total live views by time period is shown in
By Platform Type
Platform Type Detail
Geographical Distribution of Views
Between 6:00 pm and 6:30 pm, how many total views of live content were measured, and where were the viewers located? The objective of this report is to determine where views of live content occurred during a specific time period or program for a minimum amount of time. The user will enter the market, station, apps (one, multiple or all), date range, and time period or program name. This report will show a summary of all live views and will indicate where the views were measured. The user can drill down into the data by platform type and platform type detail.
Filters and Selectors
Summary
By View Location
By Platform Type and Platform Type Detail
The report shows further tables and charts of geographical viewing breakdowns by platform and platform type.
Average Quarter Hour Audience and Rating
Between 6:00 pm and 6:30 pm, what is the Average Quarter Hour (AQH) Audience and Rating (defined as watching 5 minutes within a 15-minute period) of live content? The objective of this report is to determine how many QHVs of live content occurred during a specific time period or program. The user will enter the market, station, apps (one, multiple or all), date range, and time period or program name. For this report, the user will also select the market population base (such as the 2+, 12+, or 18+ population of the market) or input their own value for the AQH Rating denominator. The value and value source will be shown in the report. This report will show a summary of all live QHVs and will allow the user to drill down into the data by platform type and platform type detail.
Filters and Selectors
Summary
By Platform Type
Time Period Summary
Unique Live Views
Between 6:00 pm and 6:30 pm, how many unique viewers of live content were measured within the market area? The objective of this report is to determine how many unique views of live content occurred during a specific time period or program. The user will enter the market, station, apps (one, multiple or all), date range, and time period or program name. This report is the same as the Market Live Views report; the only difference is that this report shows unique views instead of total views.
Total Live Views by Duration
Between 6:00 pm and 6:30 pm, what is the frequency distribution of viewing duration for Total Views or Unique Views? The objective of this report is to analyze the duration of live content views during a specific time period or program for a minimum amount of time. The user will enter the market, station, apps (one, multiple or all), date range, and time period or program name. The user selects whether to show total views, unique views, or both. This report will show a summary of view durations and will allow the user to drill down into the data by platform type and platform type detail.
Filters and Selectors
Summary
By Platform Type
By Platform Type Detail
The report shows further tables and charts of viewing duration breakdowns by platform type.
Live View Trend
Between 6:00 pm and 6:30 pm, when were Live Views measured? The objective of this report is to determine how the number of unique concurrent live content views varies during a specific time period or program for a minimum amount of time. The user will enter the market, station, apps (one, multiple or all), date range, and time period or program name. The user selects whether to show total views, unique views, or both. This report will show a trend of concurrent views over the selected period and will allow the user to drill down into the data by platform type and platform type detail.
Filters and Selectors
Summary
By Platform Type
By Platform Type Detail
The report shows further tables and charts of viewing duration breakdowns by platform type.
Definitions and Abbreviations
Definitions
Media Content is digital data, whether in encrypted or unencrypted form, that represents some combination of digitally encoded audio, video, and/or image content intended for output by the Product in a predetermined, continuous, sequential manner.
Media ID uniquely identifies content, allowing the system to determine the content source and broadcast time.
Playback is any operation performed by a Product for the purpose of producing or having produced a contemporaneous display of Media Content.
Abbreviations
Usage Data from Applications
This section outlines the data that is required in usage reporting from apps developed for specific local broadcasters.
Use Cases
Content played via an app may be any of the following locally-sourced content:
CDM Data
This table is based on the Extended CDM Format described in Service Usage Reporting: Verance Extensions. Please refer to that document, and to ATSC A/333, for additional details on the fields described below as well as the related schemas. An illustrative example assembled from these fields follows the field definitions.
The following information uses examples from fictional station WKAW. A set of values for each station will be provided separately.
protocolVersion—This field shall contain the major and minor protocol versions of the syntax and semantics of the CDM, coded as hexadecimal values each in the range 0x0 to 0xF. The overall protocolVersion will be coded as a concatenated string of the form “0x<major protocol version as hexadecimal digit><minor protocol version as hexadecimal digit>”. A change in the major version level shall indicate a non-backward-compatible level of change; the major version shall have an initial value of 0 and shall be incremented by one each time the structure of the CDM is changed in a non-backward-compatible manner from a previous major version. A change in the minor version level shall indicate a backward-compatible level of change within a given major version; the minor version shall have an initial value of 0 and shall be incremented by one each time the structure of the CDM is changed in a backward-compatible manner from a previous minor change (within the scope of a major revision).
DeviceInfo—The consumption device information.
DeviceInfo.deviceID—A field that shall identify the consumption device identifier. A value of “NOTREPORTED” indicates that the consumption device identifier is intentionally not revealed.
DeviceInfo.deviceModel—A field that shall identify the consumption device model (e.g., XYZ-NG3400). A value of “NOTREPORTED” indicates that the consumption device model is intentionally not revealed.
DeviceInfo.deviceManufacturer—A field that shall identify the consumption device manufacturer (e.g. ABC company). A value of “NOTREPORTED” indicates that the consumption device manufacturer is intentionally not revealed.
DeviceInfo.deviceOS—A field that shall identify the consumption device operating system and version (e.g., iOS 9.0.2, Android 5.0.1). A value of “NOTREPORTED” indicates that the consumption device operating system is intentionally not revealed.
DeviceInfo.peripheralDevice—A field that shall identify if the consumption device is an external peripheral (e.g., an ATSC tuner dongle). A value of “NOTREPORTED” indicates that it is intentionally not revealed if the consumption device is external or not.
DeviceInfo.deviceLocation—An object that shall identify the last known location of the consumption device.
DeviceInfo.deviceLocation.latitude—A field that shall contain the latitude of the last known device location coded in decimal degrees format (e.g., “[+−]DDD.DDDDD”) as a string. A value of “NOTREPORTED” indicates that the device location is intentionally not revealed.
DeviceInfo.deviceLocation.longitude—A field that shall contain the longitude of the last known device location coded in decimal degrees format (e.g., “[+−]DDD.DDDDD”) as a string. A value of “NOTREPORTED” indicates that the device location is intentionally not revealed.
DeviceInfo.clockSource—An unsigned integer that shall contain the source of the time that has been set in the device clock.
0—device clock has been set manually by the user
1—device clock has been set automatically by a service
AVService—This element contains the list of zero or more elements describing activity intervals based on content delivered continuously. For Aspect reporting, there shall be one or more elements.
country—Country code associated with the primary administrative entity under which the value provided in bsid is assigned, using the applicable alpha-2 country code format as defined in ISO 3166-1.
bsid—Identifier of the whole broadcast stream.
serviceID—This value of serviceID identifies the service associated with the usage data in this AVService element.
globalServiceID—This globally unique URI identifies the service associated with the usage data in this AVService element.
serviceType—The value of the field @serviceCategory.
reportInterval—One or more periods of display of content for this AVService.
reportInterval.startTime—The UTC dateTime at the beginning of the event. Intervals shall begin when display of the content begins.
reportInterval.endTime—The UTC dateTime at the end of the event. Intervals shall end when display of the content ends.
DestinationDeviceType—An unsigned integer denoting the class of usage or device type (presentation device). Defined values are:
0—Content is presented on a Primary Device
1—Content is presented on a Companion Device
2—Content is sent to a Time-shift-buffer
3—Content is sent to Long-term storage
4 to 255—Reserved.
ContentID—This field shall identify the content associated with this instance of reportInterval. This field is required if the ContentID is available to the device.
ContentID.type—A field that is required when the ContentID element is included. Two values are defined currently by ATSC:
ContentID.cid—A field that is required when the ContentID element is included and that provides the content identification for this reportInterval element. The type of content identifier shall be as given in the ContentID.type attribute. Either an EIDR (34-character canonical form with hyphens) or an Ad-ID (11 or 12-character canonical form) can be included.
For example, if a clip of unknown length has an EIDR of
For a 2-minute clip with an EIDR of
For a live clip with an EIDR of
Component—Content component type, role, name, ID and time interval information. A component is present and shall be reported in the Component field only if it is presented on a Primary Device or Companion Device or sent to a Time-shift-buffer or Long-term storage, as specified in DestinationDeviceType field. A component shall not be reported in the Component field if it is not presented on any Primary Device or Companion Device, nor sent to any Time-shift-buffer or Long-term storage.
Component.componentType—The type of component is indicated. Value of 0 shall indicate an audio component. Value of 1 shall indicate a video component. Value of 2 shall indicate a closed caption component. Value of 3 shall indicate an application component. Values 4 to 255 shall be reserved.
Component.componentRole—An unsigned byte that shall represent the role or kind of the component. In this case, the componentRole attribute shall be interpreted as follows:
Component.componentName—A string representing the human-readable name of the component.
Component.componentId—A string representing component identifier.
Component.componentLang—A string representing component language.
Component.startTime—The UTC dateTime at the beginning of the event. Interval shall begin when display of this content component begins within the time period defined by the reportInterval instance. The value shall not be less than the value of the startTime attribute of this reportInterval instance.
Component.endTime—The UTC dateTime at the end of the event. Interval shall end when display of this content component ends within the time period defined by the reportInterval instance. The value shall not be greater than the value of the endTime attribute of this reportInterval instance.
Component.SourceDeliveryPath—Delivery path used for or the source of the content component indicated by the parent Component element.
SourceDeliveryPath.type—
SourceDeliveryPath.startTime—The UTC dateTime at the beginning of the event, within the time interval defined by the parent Component element. Interval shall begin when the delivery of the content component begins on the path or from the source indicated by the value of the type attribute. The value shall not be less than the value of the startTime attribute of the parent Component element.
SourceDeliveryPath.endTime—The UTC dateTime at the end of the event, within the time interval defined by the parent Component element. Interval shall end when the delivery of the content component ends on the path or from the source indicated by the value of the type attribute. The value shall not be greater than the value of the endTime attribute of the parent Component element.
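The following is an illustrative, non-normative sketch of a CDM assembled from the field definitions above; all values (device identifiers, bsid, times, and the EIDR) are fictional placeholders.

```python
import json

# Minimal sketch of a CDM built from the fields defined above.
cdm = {
    "protocolVersion": "0x00",  # initial major and minor versions (both 0)
    "DeviceInfo": {
        "deviceID": "NOTREPORTED",
        "deviceModel": "XYZ-NG3400",
        "deviceManufacturer": "ABC company",
        "deviceOS": "Android 5.0.1",
        "peripheralDevice": "FALSE",
        "deviceLocation": {"latitude": "+032.71500",
                           "longitude": "-117.16100"},
        "clockSource": 1,  # device clock set automatically by a service
    },
    "AVService": [{
        "country": "US",
        "bsid": 8086,
        "serviceID": 1001,
        "serviceType": "linear",
        "reportInterval": [{
            "startTime": "2022-03-23T12:27:40.572Z",
            "endTime": "2022-03-23T12:28:13.594Z",
            "DestinationDeviceType": 0,  # presented on a Primary Device
            "ContentID": {"type": "EIDR",
                          "cid": "10.5240/XXXX-XXXX-XXXX-XXXX-XXXX-C"},
            "Component": [{
                "componentType": 1,  # video component
                "componentRole": 0,
                "componentName": "main video",
                "componentId": "v1",
                "componentLang": "en",
                "startTime": "2022-03-23T12:27:40.572Z",
                "endTime": "2022-03-23T12:28:13.594Z",
            }],
        }],
    }],
}
print(json.dumps(cdm, indent=2))
```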
Use Case 1: Live Stream
Use Case 2: Recorded Content Stream
This example includes the case where the device is showing content on a remote screen, and the broadcast time is known by the application. The differences between this example and Use Case 1 are highlighted in bold.
Use Case 3: Recorded Content Stream from Another Station
This example includes the case where the device is using Spanish language audio, from station WTAN. The differences between this example and Use Case 1 are highlighted in bold.
The TV Off Problem
Additional embodiments disclose solutions to the TV off problem. This problem arises with Audiovisual (AV) content services that are delivered over broadband, cable, or satellite to the home and that may be accessed by consumers using an AV service application running on a media adapter or set-top box that is connected to a display system via an HDMI interface.
Contemporary examples of such AV content services with such service applications are Hulu (broadband), Xfinity TV (cable), and DISH (satellite). Contemporary examples of such media adapters include Apple TV, Chromecast, and Roku Streaming Stick. Contemporary examples of such set-top boxes include X1 TV (cable) and The Hopper (satellite). Contemporary examples of a display system include a television set or an AV receiver or similar device that receives AV content and forwards it to a television set.
For simplicity, and without intending to limit the applicability of our description, we will refer to the AV content service as the “service,” the media adapter or set-top box as the “media player”, and the display system as the “TV”.
A problem that has been found to exist in this scenario is that in some cases, while the service application on the media player is actively presenting content for viewing, it may be unable to reliably determine whether the display system is showing the content coming from the media player to a viewer.
This is important for several reasons. First, if the service is not being presented to a viewer, then any resources consumed by the media player are wasted. Electricity and network bandwidth are finite resources and will have to be paid for by the viewer, the service, and (in some respects) the community at large. Second, the service's records regarding what the viewer has watched will be wrong. A viewer who returns to an on-demand service expecting to be offered the next episode of a series following the last one that they watched may instead be presented with an episode that is 8 or 10 episodes later, because the intervening ones were presented to a turned-off TV. For a service, this may lead to a misunderstanding of the relative popularity or viewership of a program; e.g., it may think many viewers remained tuned in for the post-game show after the end of a sporting event, when in fact they turned their TVs off. And finally, it may cause advertisers to be misled regarding the exposure of a household to their advertisements and to pay for delivery of ads that could not possibly have been viewed, because the TV was turned off. We refer to this as the “TV off” problem.
Limitations of Known Solutions
The HDMI interface provides some mechanisms to assist devices in determining the power state of devices to which they are connected.
One such mechanism is the “hot plug detection” (HPD) pin on the HDMI interface that can indicate to a media player whether the device at the other end of an HDMI cable is powered on. This mechanism is imperfect, however, because the HPD of a television set may indicate that it is powered on when it is actually in a low-power “sleep” state and has its screen turned off. And if the display system includes an AV receiver in between the television set and the media player, the HPD pin may indicate the power state of the AV receiver rather than the television set. So, if the television set is turned off but the AV receiver is turned on, the media player could see power on the HPD pin and incorrectly believe the television sitting behind the AV receiver is displaying its content.
Another mechanism is the HDMI CEC feature, which allows devices connected by HDMI to pass messages to one another about user actions. When the HDMI CEC feature is enabled on devices connected by HDMI, they will send and receive messages with one another whenever their power is turned on or off. This mechanism is also imperfect, however, because most devices allow the user to deactivate the HDMI CEC messaging feature, and manufacturers of some home entertainment devices (universal remote controls, in particular) instruct consumers to deactivate HDMI CEC in all of their equipment.
One solution is to adapt the techniques developed for the Dual Terminal problem, discussed above, to address “TV Off”.
The upstream device is a network-connected media player running at least one service application. The service application may be executable software that has been installed on the media player by its manufacturer (either at the time of manufacture or subsequently, as an update) or it may have been installed on the media player at the request of the user (e.g., via an “app store” or similar). It could alternatively be a broadcaster application, such as is defined by the HbbTV and ATSC 3.0 A/344 specifications, that is discovered and launched when an associated broadcast service is tuned by the media player, in which case there may be two service applications running on the media player simultaneously: one that is persistently installed and run on the media player and used for broadcast service selection (among other functions) and a broadcaster application that is temporarily installed and run on the media player only while AV content from an associated service is selected for viewing. In any case, one or more service applications provide a user interface for content selection and playback, e.g., from a channel guide or searchable library, among other things, employing functionality and hardware provided by the media player for viewer interaction and for accessing and presenting content selected by the viewer.
The downstream device is a network-connected television that supports detection of a video watermark, such as ATSC A/335. It may also support detection of an audio watermark such as ATSC A/334 for purposes of launching broadcaster applications.
The disclosed embodiments confirm that the television is on by having a service application on the upstream device (e.g., media player) embed a video watermark into the content that is being presented containing a video watermark message that includes an identification code that can be associated with the playback event. The downstream device (e.g., television), if turned on and presenting the content from the upstream device, decodes the video watermark to obtain the identification code and processes it to perform a confirmation protocol that includes sending a data message over a network to a network server as confirmation that the content from the upstream device in which the video watermark was embedded was presented to a viewer. This confirmation provides evidence to the network server that the TV was not off.
Some available methods for enacting the different elements of the embodiments are described in greater detail in the following sections.
Embedding of the Video Watermark Message
The service application may employ known techniques for video watermark generation to produce the video watermark as a graphical data object and direct the media player to compose this graphical data object with the AV content video to produce a watermarked AV content presentation for output from the media player.
In some embodiments, the video watermark generation is performed in the manner specified by the Advanced Television Systems Committee ATSC 3.0 next generation broadcast standard A/335 “Video Watermark Emission,” available at: https://www.atsc.org/atsc-documents/type/3-0-standards/. For additional technical details of video watermark technology that may be suitable for use with the embodiments see: U.S. Pat. Nos. 10,218,994; 10,848,821; and 10,477,285, owned by Verance Corporation.
The graphical data object could be in the form of a pixel map, HTML5 canvas, or encoded image format such as PNG. The composition could be in the form of an overlay or a border. If the composition is in the form of a border, the service application may direct the media player to scale the AV content to create a border region where the video watermark will be placed. If the video watermark is embedded only sporadically, rather than continuously, this AV content scaling may be continued even when the video watermark is not embedded to avoid creating a visible artifact when the scaling is changed. The graphical data object may include a transparency or blending component so that the composition output includes individual video pixels whose values are a combination of those of the graphical data object and the AV content video.
This technique for embedding the video watermark is applicable in the case where the video watermark is embedded by a service application that does not have the ability to easily modify the AV content video data but does have the ability to render graphics on or around the AV content video. Examples of use cases where this approach to embedding the video watermark may be applicable include when the service application is relying on an HLS or DASH player implemented in the firmware or middleware of the media player to retrieve and play the AV content. Another circumstance where this technique may be useful is if the service application has access to the AV content video data in encoded form but does not have access to it in decoded form, such as in the case of a service application that implements an HLS or DASH media player in an HTML5 browser, accesses AV content data from a server, and provides it to a MediaElement for decoding and presentation using Media Source Extensions. Another circumstance where this technique may be useful is if the service application is running as native code on the media player and is using graphics APIs such as OpenGL or X.
The service application may embed the video watermark, but does not necessarily need to embed the video watermark continuously throughout the AV content. In fact, it may be particularly advantageous for the service application to embed the video watermark only at selected moments in time. Some video watermark technologies, like ATSC A/335, are not designed to carry more than one watermark message in a particular video frame, and the embedding of a message by the service application may cause a pre-existing message that was embedded into the AV content upstream from the media player to be obscured. In such cases, it is typical for each message to be embedded only occasionally but repeatedly. Occasional transmission allows for other messages to be time-division multiplexed. Repeated transmission allows for messages to be received even if a particular transmission is lost in transmission or overwritten by a downstream embedder. In the case of HDMI transmission from a media player to the television, the AV content is not expected to be transcoded, processed, or undergo further watermark embedding, so each watermark message embedded by the service application can be expected to be accurately decoded by the television. It may therefore be advantageous for the service application to embed each watermark message only very briefly (e.g., for a duration of only one video frame at a time), infrequently (e.g., once every 5 seconds), and with low watermark embedding strength (e.g., luminance). With this approach, interference between the video watermarks embedded by the service application and any preexisting video watermarks as well as any reduction in video quality are minimized.
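As a rough illustration of the overlay approach (not a conforming ATSC A/335 embedder; the frame dimensions, luma levels, and one-line bit layout are assumptions for the sketch), a graphical data object could be generated as follows:

```python
import numpy as np

def make_overlay(bits, width=1920, height=1080, lo=0, hi=40):
    """Return (luma, alpha) arrays for a one-frame overlay: opaque only
    on the top video line, where each bit is held for width//len(bits)
    pixels at a low luma level, to be alpha-composited over the content."""
    luma = np.zeros((height, width), dtype=np.uint8)
    alpha = np.zeros((height, width), dtype=np.uint8)
    px_per_bit = width // len(bits)
    for i, b in enumerate(bits):
        luma[0, i * px_per_bit:(i + 1) * px_per_bit] = hi if b else lo
    alpha[0, :] = 255  # the overlay replaces only the top line of video
    return luma, alpha

bits = [int(b) for b in format(0xA5, "08b")]  # example 8-bit payload
luma, alpha = make_overlay(bits)
```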
Contents of the Video Watermark Message
The identification code included in the video watermark message may be constructed in any number of different ways, so as to accomplish a variety of different benefits from the disclosed embodiment.
As is generally known in digital communications, and as discussed specifically with respect to video watermark messages in ATSC A/336 and other similar watermark standards, the message may contain a single data field or multiple data fields. These data fields may be packaged together within a single message or multiple different message types may be defined. The messages may include fields containing error detection or error correction codes. One or more data fields may be encoded via encryption or hashing. Data fields may be present that specify the format or contents of values in subsequent data fields. The video watermark message may optionally be conveyed in a format that conforms with a “dynamic event” message, “stream event” message, “VP1” message, or similar format as currently defined in ATSC A/336, ATSC A/337, HbbTV Application Discovery over Broadband, MPEG ISO BMFF, MPEG-2 Transport Stream, MPEG DASH, or any other format.
In particular, any number of the following data fields (as well as other data fields not described) may be present in video watermark messages used in the disclosed embodiments and with meaning and purpose as described herein.
Media Player Identifier: A value associated with the media player device, such as a model identifier, firmware version identifier, operating system version identifier, or device serial number.
Service Application Identifier: A value associated with the service application, such as an application name, application version number, or service identifier.
Account Identifier: A value associated with the user, such as a user name, subscriber identifier, account identifier, account type identifier (e.g., ad-supported vs. ad-free), or advertising identifier.
Asset Identifier: A value associated with the content that is being played by the service application, such as a content identifier, program identifier, song identifier, promo identifier, or ad identifier, or asset type identifier (e.g., program vs. advertisement), or asset duration identifier.
Timecode: A value represented in a timing system that is associated with a wall-clock time or a point in time in the AV content, such as an absolute time, a time offset from reference time in the content, or a “milestone” in the content such as a fractional marker of asset or asset segment (e.g., start, 25% duration, 50% duration, 75% duration, or end point).
Confirmation Server identifier: A value associated with a network server that can receive a confirmation, such as a hostname, IP address, URL, confirmation service identifier or other value that is associated with a confirmation server.
Avail Identifier: A value associated with a location within an AV content or service at which an opportunity exists for presenting alternate content, such as an advertisement, promotional or informational message.
Confirmation Identifier: A value that is associated with a particular confirmation attempt or confirmation message.
Confirmation Process Identifier: An identifier of the particular confirmation process or protocol that the downstream device should engage with respect to processing of video watermark messages from this AV content.
Tracking URL: A fully formed URL or part thereof that contains tracking information.
Any of these identifiers, if present in the video watermark, or a combination of them, may comprise an identification code used for the purpose of confirming that the TV is on. In some cases, video watermark messages are segmented into parts, the parts are embedded separately in accordance with a message segmentation protocol, and they must be reassembled in connection with message decoding. Additionally, multiple different video watermark messages may be multiplexed for serial transmission, for example in a carousel fashion.
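For illustration, the fields above could be carried in a structure such as the following sketch; the field names and the combination rule for forming an identification code are assumptions, not part of any watermark standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoWatermarkMessage:
    """Illustrative container for the data fields enumerated above;
    which fields are present, and how they are encoded, would be
    dictated by the chosen message format."""
    media_player_id: Optional[str] = None
    service_app_id: Optional[str] = None
    account_id: Optional[str] = None
    asset_id: Optional[str] = None
    timecode: Optional[str] = None
    confirmation_server: Optional[str] = None
    avail_id: Optional[str] = None
    confirmation_id: Optional[str] = None
    confirmation_process_id: Optional[str] = None
    tracking_url: Optional[str] = None

    def identification_code(self) -> str:
        """Combine available identifiers into one identification code."""
        parts = [self.service_app_id, self.account_id,
                 self.asset_id, self.confirmation_id]
        return ":".join(p for p in parts if p)
```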
Embedding of Audio Watermark Messages
In addition to specifying the use of video watermarking technology for carriage of messages in AV content, ATSC 3.0 and HbbTV both specify the use of audio watermarking technology for the purpose of enabling televisions to execute broadcaster applications to be discovered and run together with AV content. Details of the capabilities are given in the public specifications ATSC A/334, ATSC A/336, CTA CEB-32.10, HbbTV Specification, and HbbTV Application Discovery over Broadband.
In any case, the audio watermark generally includes at least sufficient information necessary to reliably identify the specific broadcaster application intended to be run during presentation of the AV content. In the case of the ATSC 3.0 and HbbTV applications, this consists of an identifier of the service (a “server code”) and of the time in the service content (an “interval code”).
In some implementations of the embodiments, it is beneficial that an audio watermark be present in the AV content output from the media player. It is generally preferred that the audio watermark be embedded within the AV content prior to its distribution to the media player. Practical means of embedding the audio watermark include use of audio watermark embedding software integrated in a linear broadcast chain appliance or file-based embedding using application software in a local or cloud-based computer, prior to encoding of the audio portion of the AV content for distribution.
Detection of the Watermark Messages
A television may incorporate video watermark detection technology in hardware, software, or a combination of the two. When the television is powered on and receiving and displaying AV content, the video watermark detector processes all or a part of the video in the received AV content to detect the presence of video watermarks and decode the messages carried therein.
The television may further incorporate audio watermark detection technology in hardware, software, or a combination of the two. When the television is powered on and receiving and displaying AV content, the audio watermark detector processes all or a part of the audio in the received AV content to detect the presence of audio watermarks and decode the messages carried therein.
When the specified audio watermark is present in AV content received over HDMI and other interfaces, televisions with audio watermark readers and HTML5 browsers (as specified by ATSC 3.0 and HbbTV, for example) can detect the audio watermark from the audio component of the AV content, decode the watermark message, form a URL using data from the watermark message, retrieve a signaling file from a network server using the URL, retrieve a broadcaster application URL from the signaling file, retrieve an HTML5-based broadcaster application associated with the AV content from a network server using the application URL, and execute the broadcaster application. (The term “broadcaster application” is used herein to be consistent with the usage of this term used in the HbbTV and ATSC 3.0 specifications. However, in the disclosed embodiments, it is quite possible that neither the AV content nor the application itself have been broadcast and the application may be from a party who is not a broadcaster.)
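A simplified, non-normative sketch of this recovery flow follows; the URL template, hostname, and signaling-file key used here are illustrative assumptions, as the normative construction is specified in ATSC A/336 and the HbbTV specifications.

```python
import json
import urllib.request

def recover_broadcaster_app(server_code: int, interval_code: int) -> str:
    """Form a recovery URL from the watermark's server and interval codes,
    fetch the signaling file, and return the broadcaster application URL.
    The URL scheme and JSON key are hypothetical placeholders."""
    url = (f"https://{server_code:08x}.vp1.example"
           f"/a336/{interval_code:08x}.json")
    with urllib.request.urlopen(url) as resp:  # retrieve signaling file
        signaling = json.load(resp)
    return signaling["applicationURL"]         # broadcaster app to launch
```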
Processing of the Video Watermark Message
Depending on the capabilities of the television and the format of the video watermark message, the television may process the video watermark message itself or it may pass the video watermark message to the broadcaster application for processing by the broadcaster application.
A particularly beneficial arrangement for processing of the video watermark message by a broadcaster application can be achieved if: (1) the audio watermark is present in the AV content being played out from the media player; and (2) the television has the capability of launching broadcaster applications in accordance with HbbTV, ATSC 3.0 or similar specifications. These specifications designate certain video watermark message formats as application messages and certain other message formats as receiver messages. When receiver messages are decoded by a television, the data in the messages are intended for use in interpretation and action by the television firmware itself. When application messages are received by a television, the data in the messages are intended for use by a broadcaster application and the television is expected to pass the message to the broadcaster application for interpretation and action. In this arrangement, the audio watermark is used to launch a broadcaster application selected by the service provider, the service application can use an application message format in the video watermark, the television will deliver the video watermark message to the broadcaster application, and the service provider's own broadcaster application can process the message. A benefit of this arrangement is that the service provider is in control of both the generation of the video watermark message in the media player that is playing the AV content and the processing of it on the television that is presenting it.
In any case, processing of the video watermark message includes parsing and decoding of the fields in the message and performing a confirmation protocol with one or more network servers.
Parsing and decoding the message typically requires multiple steps of decoding data fields to identify message or data field formats, parsing and decoding selected data field values in accordance with those identified formats, and, as necessary, repeating the process until all data fields are parsed and decoded. In some cases, decryption of messages or of parsed and decoded data fields is also required.
The confirmation protocol includes identifying a network server, establishing a connection to the network server, and transmitting data to and receiving data from the network server. The primary purpose of the confirmation protocol is to deliver confirmation information to the network server.
The confirmation protocol could consist simply of posting a message to a server containing confirmation information, for example by using an HTTP operation such as GET using a URL. Alternately, it may include establishing a connection with the server and exchanging a sequence of messages with the server, which may include a protocol negotiation regarding the format and content of the confirmation information as well as delivery of that confirmation information.
If the video watermark message contains a fully formed message such as a tracking URL, the protocol may entail simply performing an HTTP operation such as GET using the URL without the need for further processing. The protocol may require that the television or broadcaster application process the watermark message data to construct messages for use in the confirmation protocol. The procedure may include converting data from the watermark message into one or more strings by way of one or more format conversion, table lookup, DNS lookup, decryption, encryption, decompression, or other data processing operations. The resulting strings may be appended to one another or with one or more additional strings that are pre-stored or generated in the downstream device, such as protocol prefixes, protocol identifiers, operators, separators, device identifiers (such as manufacturer, firmware or operating system name or version identifiers, serial numbers, advertising identifiers), time codes, broadcaster application identifiers, user identifiers, cookies, pre-stored tokens, or session identifiers. Generation of strings from the downstream device may also include the use of encryption or hashing functions.
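A minimal sketch of this simplest form of the protocol (the query parameter names and the appended device and timecode values are illustrative assumptions):

```python
import urllib.parse
import urllib.request

def send_confirmation(tracking_url: str, device_id: str, timecode: str):
    """Append device- and session-derived parameters to a tracking URL
    recovered from the watermark message and issue an HTTP GET as the
    confirmation. Parameter names are hypothetical."""
    query = urllib.parse.urlencode({"dev": device_id, "tc": timecode})
    sep = "&" if "?" in tracking_url else "?"
    with urllib.request.urlopen(tracking_url + sep + query) as resp:
        return resp.status  # e.g., 200/204 acknowledges the confirmation
```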
The confirmation protocol may be performed immediately by the downstream device upon processing of a video watermark message. Alternatively, it may be deferred and performed subsequently depending on the contents of the video watermark message together with the presence (or absence) and contents of subsequently received video watermark messages. For example, if a downstream device receives watermark message data indicating that an advertisement asset of 15 seconds duration is being played by the upstream device and that the beginning of the advertisement is currently being played and presented, the downstream device may choose to wait for additional messages indicating that later portions of the advertisement are presented (e.g. start milestone, 50% milestone, end milestone) before engaging a confirmation protocol that will confirm the amount of the advertisement that was presented.
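Deferred confirmation of the kind described above could be sketched as follows; the milestone labels and the completion metric are illustrative assumptions.

```python
class MilestoneTracker:
    """Accumulate milestone messages for one asset and defer the
    confirmation until the end milestone arrives, so the confirmation
    can report how much of the asset was presented."""
    ORDER = ["start", "25%", "50%", "75%", "end"]

    def __init__(self):
        self.seen = set()

    def on_message(self, milestone: str):
        self.seen.add(milestone)
        if milestone == "end":        # defer until playback completes
            return self.completion()  # now trigger the confirmation
        return None                   # keep waiting for later milestones

    def completion(self) -> float:
        """Fraction of milestones observed, a proxy for exposure."""
        return len(self.seen & set(self.ORDER)) / len(self.ORDER)
```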
Use of the Confirmation Information by Network Servers
Servers in receipt of confirmation information obtained from downstream devices via confirmation protocols may forward that information to additional servers, store the information in a database, or compile it into reports. The confirmation information can be used as the basis for important operations such as determining audience size, global impression counts, household exposure, advertising pricing, account payments, or other accounting functions. For example, an advertiser on a service may be charged a higher rate for an advertisement that is played by a service application on a media device in the event that confirmation information is received from a television indicating that the advertisement was actually presented on a screen. The confirmation information received by the server may also be matched or reconciled with other measurement information, as already known in the art, such as playout confirmations received from the service application on the media player (e.g., VAST (https://iabtechlab.com/standards/vast/) tracking URI), in order to verify or confirm the separately received measurement information.
Additionally, such servers may transmit confirmation information or data derived from it to the specific upstream device that is embedding the video watermark from which the confirmation information was derived. This is facilitated when the video watermark message and derived confirmation information include a media player identifier, service application identifier, account identifier, or other sufficiently unique identifier that enables the particular service application to be efficiently determined and addressed with the information.
Use of Confirmation Information by Service Applications
Service applications may change their behaviors upon receiving such information. For example, a service application that is playing content without having received confirmation that the television is turned on may choose to halt playback after a particular period of time without any user input being received (possibly prompting the user for input first) to avoid wasting service or user resources or to protect advertisers against wasted ad purchases, on the assumption that the TV is likely to be off. The same service application may choose to not halt playback or to not prompt the user for input for an increased period of time, or at all, in the event that confirmation that the TV is turned on is received.
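As a sketch of such behavior (assuming the confirmation server relays a session "TV On" status back to the service application, as described; the timeout values are arbitrary assumptions):

```python
def idle_timeout_s(tv_on_confirmed: bool,
                   base=4 * 3600, extended=12 * 3600) -> int:
    """Return how long playback may continue without user input before
    the service application halts (or prompts the viewer). The window
    is extended when a TV On confirmation has been received."""
    return extended if tv_on_confirmed else base
```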
In step 2, a streaming service app embeds video watermarks with confirmation data (e.g., tracking URL, token, session/asset/avail identifiers, etc.) as an invisible graphical overlay. Note that no media player OS support is required.
In step 3, a connected television (CTV) detects audio watermarks in HDMI content and runs the streaming service's broadcaster app with TV On confirmation logic while the streaming service's content is presented. The broadcaster app can remain visible.
In step 4, the streaming service app notifies the confirmation server of assets played out in the viewing session.
In step 5, the CTV detects video watermarks in HDMI content and forwards them to the broadcaster app.
In step 6, the broadcaster app with TV on confirmation logic uses confirmation data from the video watermark to notify a confirmation server.
In step 7, the confirmation server reconciles instances of asset playout and TV on confirmations.
In step 8, the confirmation server can notify the streaming service app of session TV on status.
It is understood that the various embodiments of the present invention may be implemented individually, or collectively, in devices comprised of various hardware and/or software modules and components. These devices, for example, may comprise a processor, a memory unit, and an interface that are communicatively connected to each other, and may range from desktop and/or laptop computers, to consumer electronic devices such as media players, mobile devices and the like. For example,
Referring back to
Various embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Therefore, the computer-readable media that is described in the present application comprises non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
The foregoing description of embodiments has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit embodiments of the present invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. The embodiments discussed herein were chosen and described in order to explain the principles and the nature of various embodiments and its practical application to enable one skilled in the art to utilize the present invention in various embodiments and with various modifications as are suited to the particular use contemplated. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products.
This application is a Continuation-in-part of U.S. patent application Ser. No. 17/702,728 filed on Mar. 23, 2022, which is a continuation of U.S. patent application Ser. No. 16/625,656 filed on Dec. 20, 2019, now patented, which is a 371 of International Application No. PCT/US18/038832 filed on Jun. 21, 2018, which further claims the benefit of priority of U.S. Provisional Patent Application No. 62/523,164 filed on Jun. 21, 2017, U.S. Provisional Patent Application No. 62/524,426 filed on Jun. 23, 2017, and U.S. Provisional Patent Application No. 62/654,128 filed on Apr. 6, 2018, the entire contents of which are incorporated by reference as part of the disclosure of this document.
Related U.S. Application Data
Provisional applications: 62/654,128, filed Apr. 2018 (US); 62/524,426, filed Jun. 2017 (US); 62/523,164, filed Jun. 2017 (US).
Continuation data: parent application Ser. No. 16/625,656, filed Dec. 2019 (US); child application Ser. No. 17/702,728.
Continuation-in-part data: parent application Ser. No. 17/702,728, filed Mar. 2022 (US); child application Ser. No. 17/847,121.