For content associated with supplemental features (e.g., interactive features), challenges arise in synchronizing the content and the supplemental features associated therewith. For example, with respect to live programs, there is a time delta (e.g., a delay) between when the live event which is the subject of the live program actually takes place, when the live content is broadcast from the source, and when the live content is rendered at a user device. Some of the time delta is unavoidable (e.g., due to FCC regulations) and some of the time delta may be due to network conditions (e.g., path length, congestion). Inaccurate timing information (or the lack thereof) leads to a desynchronization between content and supplemental features. When content and supplemental features are not synchronized, users may not be able to interact with the supplemental features properly and may feel confused or frustrated, thus degrading the user experience. A poor user experience makes the audience less likely to engage with the content.
It is to be understood that both the following general description and the following detailed description are exemplary and explanatory only and are not restrictive. Methods and systems for monitoring latency associated with a network and granting or blocking access to one or more supplemental features based thereon are described. Time tags, or similar markings or information in content, may be identified and/or monitored so as to determine the latency of the network. The timing (e.g., start time, end time, duration, etc.) of events in content can be compared to the latency, and access to supplemental features can be granted or blocked based on the comparison.
The accompanying drawings, which are incorporated in and constitute a part of this specification, show examples and together with the description, serve to explain the principles:
As used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. If such a range is expressed, another configuration includes from the one particular value and/or to the other particular value. Similarly, if values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another configuration. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
“Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes cases where said event or circumstance occurs and cases where it does not.
Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude, for example, other components, integers or steps. “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal configuration. “Such as” is not used in a restrictive sense, but for explanatory purposes.
It is to be understood that if combinations, subsets, interactions, groups, etc. of components are described, then, while specific reference to each individual and collective combination and permutation of these may not be explicitly made, each is specifically contemplated and described herein. This applies to all parts of this application including, but not limited to, steps in described methods. Thus, if there are a variety of additional steps that may be performed it is understood that each of these additional steps may be performed with any specific configuration or combination of configurations of the described methods.
As will be appreciated by one skilled in the art, hardware, software, or a combination of software and hardware may be implemented. Furthermore, a computer program product may be implemented on a computer-readable storage medium (e.g., non-transitory) having processor-executable instructions (e.g., computer software) embodied in the storage medium. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, memristors, Non-Volatile Random Access Memory (NVRAM), flash memory, or a combination thereof.
Throughout this application reference is made to block diagrams and flowcharts. It will be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, respectively, may be implemented by processor-executable instructions. These processor-executable instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the processor-executable instructions which execute on the computer or other programmable data processing apparatus create a device for implementing the functions specified in the flowchart block or blocks.
These processor-executable instructions may also be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the processor-executable instructions stored in the computer-readable memory produce an article of manufacture including processor-executable instructions for implementing the function specified in the flowchart block or blocks. The processor-executable instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the processor-executable instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowcharts support combinations of devices for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, may be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
“Content items,” as the phrase is used herein, may also be referred to as “content,” “content data,” “content information,” “content asset,” “multimedia asset data file,” or simply “data” or “information.” Content items may be any information or data that may be licensed to one or more individuals (or other entities, such as a business or group). Content may be electronic representations of video, audio, text and/or graphics, which may be but is not limited to electronic representations of videos, movies, or other multimedia, which may be but is not limited to data files adhering to MPEG2, MPEG, MPEG4 UHD, HDR, 4K, Adobe® Flash® Video (.FLV) format or some other video file format whether such format is presently known or developed in the future. The content items described herein may be electronic representations of music, spoken words, or other audio, which may be but is not limited to data files adhering to the MPEG-1 Audio Layer 3 (.MP3) format, CableLabs 1.0, 1.1, 3.0, AVC, HEVC, H.264, Nielsen watermarks, V-chip data, Secondary Audio Programs (SAP), the Adobe® Sound Document (.ASND) format, or some other format configured to store electronic audio whether such format is presently known or developed in the future. In some cases, content may be data files adhering to the following formats: Portable Document Format (.PDF), Electronic Publication (.EPUB) format created by the International Digital Publishing Forum (IDPF), JPEG (.JPG) format, Portable Network Graphics (.PNG) format, dynamic advertisement insertion data (.csv), Adobe® Photoshop® (.PSD) format or some other format for electronically storing text, graphics and/or other information whether such format is presently known or developed in the future. Content items may be any combination of the above-described formats.
This detailed description may refer to a given entity performing some action. It should be understood that this language may in some cases mean that a system (e.g., a computer) owned and/or controlled by the given entity is actually performing the action.
The system 100 may comprise a content source 102, an encoder 104, a secondary content source 106, a supplemental feature source 108, a media device 120, and/or any of the other devices in
The content source 102 may be configured to send content (e.g., video, audio, movies, television, games, applications, data, etc.) to one or more devices such as the encoder 104, a network component 129, a first access point 123, a mobile device 124, a second access point 125, the media device 120, or any other device of a content distribution network. The content source 102 may receive a content request from, for example, the media device 120 and/or the encoder 104 (e.g., on behalf of the media device 120 or another device). The content source 102 may be configured to send streaming media, such as broadcast content, video on-demand content (e.g., VOD), content recordings, combinations thereof, and the like.
The content may comprise live content (e.g., “live programming”). “Live programming” may refer to, for example, television, internet, or radio content that is broadcast in real-time, as it is happening, without long delays (e.g., hours, days, weeks, etc.) between the event and its transmission to viewers or listeners. Live programming typically includes events, shows, or broadcasts that are happening in the present moment and are intended to be experienced by the audience as they occur. For example, most movies are not “live content,” but many sporting events, news broadcasts, talk shows, awards ceremonies, reality television, and game shows are broadcast as they occur.
The content may comprise one or more latency flags. The one or more latency flags may comprise, for example, one or more of: a time tag, a digital signature, any other data or metadata associated with a moment in time (e.g., in an absolute sense and/or in a relative sense), combinations thereof, and the like. The one or more latency flags may be inserted by any devices and at any point in the distribution of the content. For example, the one or more latency flags may be generated and/or inserted into the video data during compression, during formatting and containerization, during transcoding, during distribution, or at any other time.
Thus, as the latency flag passes through the distribution network it may be sent to and received by various devices along the way. One or more devices may interact with the latency flag. For example, the one or more devices may modify the latency flag, add information (e.g., data) to it, or delete information from it. For example, one or more headers or footers may be modified, added, or deleted, one or more payloads may be modified, information may be added to or removed from the one or more payloads, or one or more packets or frames may be encapsulated. The one or more encapsulations may include the latency flag and/or other timing information.
For example, during compression, the encoder 104 may add a timestamp to one or more content segments of the content indicating when the content was received by the encoder 104, when encoding was complete, and when the content was sent from the encoder 104 to another device. For example, during formatting and containerization, timing data may be incorporated into the MP4, MOV, or AVI format that encapsulates the compressed video data. In a similar sense as the encoder 104, whichever device is formatting the content may add one or more timestamps indicating when the content was received, when it was formatted, and/or when it was sent by that device. Similarly, various devices in the distribution network (e.g., one or more access points, one or more head-ends, user devices, media devices) may add timing data as the content makes its way to the user device.
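The accumulation of timing data described above can be sketched as follows. This is an illustrative Python sketch only; the dictionary structure, field names (e.g., "events," "device," "time"), and device identifiers are assumptions for illustration, not a defined latency flag format.

```python
import time

def stamp_event(latency_flag, device_id, event, timestamp=None):
    """Append a (device, event, time) record to a latency flag as content
    passes through a device in the distribution chain."""
    latency_flag.setdefault("events", []).append({
        "device": device_id,
        "event": event,  # e.g., "received", "encoded", "sent"
        "time": timestamp if timestamp is not None else time.time(),
    })
    return latency_flag

# An encoder might record when a segment arrived, finished encoding, and left:
flag = {"content_id": "game-123", "events": []}
stamp_event(flag, "encoder-104", "received", 100.0)
stamp_event(flag, "encoder-104", "encoded", 100.8)
stamp_event(flag, "encoder-104", "sent", 101.0)
```

Downstream devices (packagers, head-ends, access points) could call the same helper, so the flag arrives at the user device carrying a hop-by-hop timing history.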
The content source 102 may be managed by third party content providers, service providers, online content providers, over-the-top content providers, combinations thereof, and the like. The content may be sent based on a subscription, individual item purchase or rental, combinations thereof, and the like. The content source 102 may be configured to send the content via a packet switched network path, such as via an IP based connection. The content may comprise a single content item, a portion of a content item (e.g., content fragment), a content stream, a multiplex that includes several content items, combinations thereof, and the like. The content may comprise one or more data packets. The content may be accessed by users via applications, such as mobile applications, television applications, STB applications, gaming device applications, combinations thereof, and the like. An application may be a custom application (e.g., by content provider, for a specific device), a general content browser (e.g., web browser), an electronic program guide, combinations thereof, and the like.
The content may comprise signaling data. The signaling data may include the one or more latency flags. The signaling data may be inserted into the content at the content source 102 or at the encoder 104. The signaling data may be inserted in a Moving Picture Experts Group (MPEG) bitstream, MPEG Supplemental Enhancement Information (SEI) messages, MPEG-2 Transport Stream (TS) packet, MPEG-2 Packetized Elementary Stream (PES) header data, ISO Base Media File Format (BMFF) data, ISO BMFF box, or in any data packet. The signaling data may comprise one or more markers. For example, the signaling data may comprise Society of Cable and Television Engineers 35 (SCTE-35) markers and/or Society of Cable and Television Engineers 224 (SCTE-224) markers. The Society of Cable Telecommunications Engineers 35 (SCTE-35) and Society of Cable and Television Engineers 224 (SCTE-224) are hereby incorporated by reference in their entirety. The Society of Cable Telecommunications Engineers 30 (SCTE-30) and the Society of Cable Telecommunications Engineers 130 (SCTE-130) are also hereby incorporated by reference in their entirety.
The encoder 104 may receive the content. The encoder 104 may process the content and insert the one or more markers. The encoder 104 may generate a marker ID associated with the marker. The encoder 104 may generate the marker ID based on any parameter associated with the content, for example, a destination of the content. The encoder 104 may send, to the media device 120, the marker and/or the marker ID. The marker may be sent to the media device 120 along with the content (e.g., from the content source 102). The marker may be sent to the media device 120 separate from the content. The marker may be sent to the media device 120 in a data packet. The marker ID may be inserted, by the encoder 104, into the SCTE-35 marker.
The content may comprise or otherwise be associated with a content ID. The content ID may be configured to identify the content. For example, the content may comprise one or more data packets, one or more content segments, or the like which may comprise the content ID. The content ID may be a string of characters, numbers, symbols, etc. The content ID may, for example, be a title or description of the content. The content may be associated with a content destination. The content destination may be any one or more devices of the system 100 or the like. For example, the content destination may be the media device 120. The content destination may be associated with a content destination ID. The content destination ID may be a MAC address or some other identifier associated with the one or more devices of system 100. For example, the content destination may be a MAC address or IP address associated with the media device 120 and/or the second access point 125 (e.g., a headend or VDE). The content may comprise one or more data packets, and the one or more data packets may comprise one or more fields. The one or more fields may comprise a destination header configured to contain the destination ID.
For example, the content source 102 may receive a content request from the media device 120. The content request may comprise a device ID associated with the media device 120. The content source 102 may determine the device ID associated with the media device 120 is the destination ID and insert, into the destination field of the one or more data packets, the device ID and/or the destination ID (which may be the same or may be different). Similarly, in another embodiment, the encoder or some other device may make a content request on behalf of the media device.
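The destination addressing described above can be sketched as follows. The packet and request representations, field names, and the example MAC address are illustrative assumptions, not a defined packet layout.

```python
def address_packets(content_request, packets):
    """Copy the requesting device's ID into each outgoing packet's
    destination header, treating the device ID as the destination ID."""
    destination_id = content_request["device_id"]
    for packet in packets:
        packet["destination_header"] = destination_id
    return packets

# A content source resolving a request from a media device:
packets = [{"payload": b"segment-1"}, {"payload": b"segment-2"}]
address_packets({"device_id": "AA:BB:CC:DD:EE:FF"}, packets)
```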
The media device 120 may receive the content and determine the content ID. The media device 120 may determine, based on the content ID, one or more policies associated with the content ID. The media device 120 may send one or more queries based on the content ID, wherein the one or more queries are configured to determine the one or more policies associated with the content ID. For example, the one or more policies may comprise one or more timing policies, one or more location policies, one or more demographic policies, one or more subscription policies, one or more service level policies, one or more jurisdictional/legal policies, combinations thereof, and the like. For example, the one or more policies may indicate that a given user device (e.g., media device 120) is authorized to enable one or more supplemental features if one or more conditions are met.
For example, the restriction information may indicate a time delta between a live event occurring and the live event being output at the media device 120. For example, the restriction information may indicate that access to one or more supplemental features (e.g., the one or more wagering opportunities) should be restricted if the time delta is greater than a time delta threshold.
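The time-delta restriction described above can be sketched as follows. The threshold value and function names are illustrative assumptions; in practice the threshold would come from the restriction information.

```python
TIME_DELTA_THRESHOLD = 5.0  # seconds; an arbitrary illustrative value

def feature_allowed(event_time, output_time, threshold=TIME_DELTA_THRESHOLD):
    """Grant access to a supplemental feature only if the output lags
    the live event by no more than the threshold."""
    time_delta = output_time - event_time
    return time_delta <= threshold

# A device 3 seconds behind the live event may access the feature;
# a device 10 seconds behind is restricted:
feature_allowed(100.0, 103.0)
feature_allowed(100.0, 110.0)
```

A restriction like this could, for example, prevent wagers from being placed on an event that has already concluded upstream.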
For example, the restriction information may indicate the media device 120 is located in a location where users are allowed (e.g., by virtue of location and local rules) to gamble on a sporting event. Based on determining the media device 120 is authorized to enable the one or more supplemental features, the encoder may encode supplemental feature data into the content. For example, the encoder 104 may encode betting related information into the video transport stream. For example, the encoder 104 may encode an interface into the video transport stream, wherein the interface is configured to receive one or more user inputs (e.g., user placed bets). Similarly, the encoder 104 may encode signaling data (e.g., one or more markers) configured to cause the media device 120 to open (e.g., “launch”) one or more applications. For example, the encoder 104 may encode a SCTE-224 marker with a flag or other indicator configured to cause the media device 120 (e.g., a set-top-box) to launch one or more locally stored applications and/or one or more remote applications.
The media device 120 may receive the content and the one or more markers. The media device 120 may comprise a user device such as a set-top-box, computer, mobile phone, combinations thereof, and the like. The media device 120 may be a digital streaming device, a gaming device, a media storage device, a digital recording device, a computing device, a mobile computing device (e.g., a laptop, a smartphone, a tablet, etc.), combinations thereof, and the like. The media device 120 may determine the marker in the content. For example, the media device 120 may parse the SCTE-35 messages carried within the MPEG transport stream of the content. For example, the media device 120 may determine the XML namespace prefix identifying the marker as an SCTE-35 marker. For example, the media device 120 may identify a cue_identifier_descriptor and/or a cue_stream_type value associated with the marker such as “splice_insert,” “splice_null,” “splice_schedule,” combinations thereof, and the like. A person skilled in the art will appreciate that the aforementioned examples are non-limiting.
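The marker classification described above can be sketched as follows. Real SCTE-35 parsing operates on binary splice_info sections in the transport stream; here the markers are simplified to dictionaries, and the namespace value and field names are illustrative assumptions.

```python
# Splice command names drawn from the examples above.
KNOWN_SPLICE_COMMANDS = {"splice_insert", "splice_null", "splice_schedule"}

def classify_marker(marker):
    """Return the splice command type if the marker is identified as
    SCTE-35 and carries a known cue_stream_type; otherwise None."""
    if marker.get("namespace") != "scte35":
        return None
    command = marker.get("cue_stream_type")
    return command if command in KNOWN_SPLICE_COMMANDS else None

classify_marker({"namespace": "scte35", "cue_stream_type": "splice_insert"})
```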
The one or more markers may comprise one or more beacons. The one or more beacons may be configured to cause one or more user devices (e.g., the media device 120) that receive the one or more markers to send one or more beacon calls. The one or more beacon calls may comprise messages sent from a device based on receiving the one or more beacons. The one or more beacon calls may comprise, for example, one or more feature queries, one or more content output indications, combinations thereof, and the like. For example, the one or more beacons may comprise one or more HTTP request methods (e.g., GET, POST, etc.). The one or more beacons may comprise one or more uniform resource locators (URLs) configured to cause the media device 120 to send the one or more feature queries.
The one or more feature queries may comprise one or more content identifiers (e.g., a title, a channel, a content identifier, combinations thereof, and the like). The one or more feature queries may comprise timing data (e.g., a time at which the one or more markers were received, a time at which the content was received, a time at which the content was output, a time at which the content was requested by the media device 120, combinations thereof, and the like). The one or more feature queries may comprise location information (e.g., a location of the media device 120 such as latitude and longitude, a geographic region, a syscode, a jurisdiction, combinations thereof, and the like). The one or more feature queries may comprise one or more user device identifiers. The one or more user device identifiers may be associated with the one or more user devices (e.g., the media device 120). For example, the one or more user device identifiers may comprise a unique string of characters, letters, numbers, symbols, etc. For example, the device identifier may comprise an OUI, a MAC address, an IP address, a model number, a brand name, or any other identifier.
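A feature query carrying the fields described above can be sketched as follows. The dictionary keys, coordinates, and identifiers are illustrative assumptions, not a defined wire format; a device might serialize such a structure into the body or parameters of an HTTP request triggered by a beacon.

```python
def build_feature_query(device, content_id, marker_received_at):
    """Assemble a feature query from a device's identifier, location,
    and the time at which the marker was received."""
    return {
        "content_id": content_id,
        "device_id": device["mac_address"],
        "location": {"lat": device["lat"], "lon": device["lon"]},
        "timing": {"marker_received_at": marker_received_at},
    }

query = build_feature_query(
    {"mac_address": "AA:BB:CC:DD:EE:FF", "lat": 39.95, "lon": -75.16},
    "channel-7-live",
    1_700_000_000.0,
)
```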
Similarly, based on receiving the marker, the media device 120 may send a content output indication. The content output indication may comprise at least one of the marker ID, the marker timestamp, a media device ID, and/or a content ID, the location information, the timing data, and/or the one or more user device identifiers. The media device ID may comprise a unique identifier associated with the media device 120. For example, the media device ID may comprise a MAC address. The MAC address may be associated with a geographic ID (e.g., a “syscode”). The syscode may comprise a four digit code determined by National Cable Communications (NCC) to represent a specific geography available for advertisement insertion. The syscode may represent a specific geographic zone, grouping of zones, a cable interconnect or grouping of cable interconnects, combinations thereof, and the like. The content ID may comprise a unique ID associated with the content. For example, the content ID may comprise at least one of a channel ID, a frequency ID, a content title, a television network name, combinations thereof, and the like.
As shown in
The availability of the one or more supplemental features may be determined based on timing information and/or one or more policies. The timing module 134 may be configured to process timing information. For example, the timing module may determine one or more time deltas associated with content. For example, the timing module may be configured to determine the one or more time deltas based on one or more markers in content (e.g., one or more latency flags). The timing module 134 may be configured to determine (e.g., based on the one or more time deltas) a latency associated with the network of the user device. For example, the content may comprise a latency flag that indicates a time (e.g., real, absolute, relative, or otherwise) at which the content originated from the source. The one or more latency flags may include additional information indicating when the content was sent to, received by, and/or passed through various devices in the content distribution network. Based on when the user device (e.g., the media device 120, the mobile device 124) receives the content, the timing module may determine a time difference (e.g., latency) between when real world events happened (e.g., when the content was created) and when the content is received by the user device and/or output via the user device or an associated display device.
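The timing-module computation described above can be sketched as follows. The latency flag's field name and the example timestamps are illustrative assumptions.

```python
def compute_latency(latency_flag, received_at):
    """Time delta between when the content originated at the source
    (per the latency flag) and when the device received it."""
    return received_at - latency_flag["origin_time"]

# Content that originated at t=100.0 and arrived at t=107.5 is
# 7.5 seconds behind the live event:
latency = compute_latency({"origin_time": 100.0}, 107.5)
```

The resulting latency value is what the supplemental feature module can compare against a policy's time-delta threshold.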
The supplemental feature module 136 may be configured to activate or deactivate (e.g., block, grant access to, output, receive user inputs associated with) one or more supplemental features. The one or more supplemental features may comprise one or more interactive features. For example, the one or more supplemental features may comprise one or more wagering opportunities, one or more game show answer opportunities, one or more polling opportunities, combinations thereof, and the like as described herein. For example, the one or more supplemental features may comprise one or more wagering opportunities, one or more live statistics, one or more interactive features (e.g., quiz games, polling questions), closed-caption data, one or more shopping opportunities (e.g., purchase opportunities), combinations thereof, and the like.
The supplemental feature module 136 may be configured to receive, send, store, generate, determine, or otherwise process one or more supplemental features associated with the content. For example, the supplemental feature module 136 may be configured to cause the media device to activate one or more supplemental content features. For example, the one or more supplemental content features may comprise one or more interactive features. For example, the one or more interactive features may comprise one or more wagering opportunities (e.g., one or more in-game betting features, one or more user interfaces configured to facilitate wagering on one or more content items), one or more audience polling features, one or more game show participation features, one or more reality TV features, one or more closed caption features, one or more content interaction features (e.g., one or more trick play features such as rewind, fast-forward, pause, record, volume change, or any change in content such as changing a resolution associated with content), combinations thereof, and the like.
For example, a first policy of the one or more policies may relate to in-game betting associated with content while a second policy of the one or more policies may relate to content blackouts in one or more geographic regions. The one or more policies may be associated with one or more user device identifiers, one or more content identifiers, one or more geographic regions, one or more time periods, one or more demographic profiles (e.g., age ranges), combinations thereof, and the like.
The policy module 130 may be configured to store one or more content policies. The one or more content policies may comprise, for example, a content gaming policy. The one or more content policies may be configured to indicate one or more geographic regions, one or more user devices, one or more content items, one or more rules/regulations, combinations thereof, and the like associated with one or more supplemental content features. The one or more supplemental content features may comprise one or more interactive content features. For example, the one or more content policies may comprise geographic content output data, subscription output data, timing output data, blackout data, combinations thereof, and the like. For example, the one or more policies may indicate that the content is associated with one or more supplemental features. For example, the one or more policies may indicate the one or more supplemental features are associated with one or more geographic regions. The policy module 130 may determine a violation of the one or more content policies. For example, the policy module 130 may receive a feature query. The policy module 130 may determine one or more of the location information in the feature query, the user device identifier in the feature query, or other information in the feature query. The policy module 130 may compare, for example, the location information and the user device identifier to a supplemental feature policy. The supplemental feature policy may indicate that one or more supplemental features are authorized in a first geographic region. The location information in the content output indication may indicate the user device is located in the first geographic region. As such, the policy module 130 may determine the user device in the first geographic region may receive/execute/generate/or otherwise process one or more supplemental features.
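The policy comparison described above can be sketched as follows. The policy and query structures, the region labels, and the feature name are illustrative assumptions for this sketch only.

```python
def evaluate_policy(policy, feature_query):
    """Return True if the querying device's region is listed as
    authorized in the supplemental feature policy."""
    return feature_query["region"] in policy["authorized_regions"]

# A policy authorizing a wagering feature in a first geographic region:
policy = {
    "feature": "in-game-wagering",
    "authorized_regions": {"region-1"},
}

# A device in region-1 may process the feature; a device in region-2 may not:
evaluate_policy(policy, {"region": "region-1"})
evaluate_policy(policy, {"region": "region-2"})
```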
The location module 132 may be configured to process the location data in the feature query. The location module 132 may be configured to determine whether the location information in the feature query complies with a content policy stored in the policy module 130. For example, the location module 132 may determine the location information in the feature query indicates a second geographic region. The location module may query the policy module 130 to determine whether the second geographic region is listed on the content policy as a geographic region where the supplemental feature is authorized or not. For example, the location module may determine the second geographic region is a region where the supplemental feature is not authorized.
For example, the supplemental feature module 136 may be configured to send and/or receive one or more SCTE-224 messages configured to activate an application (e.g., a gambling application) on the media device 120. Additionally and/or alternatively, the supplemental feature module 136 may be configured to cause the media device 120 to download one or more software applications configured to facilitate one or more user interactions with the content.
Returning to the components of system 100, the network 116 may comprise a network component 129. The network component 129 may be any device, module, combinations thereof, and the like communicatively coupled to the network 116. The network component 129 may also be a router, a switch, a splitter, a packager, a gateway, an encoder, a storage device, a multiplexer, a network access location (e.g., tap), physical link, combinations thereof, and the like.
The media device 120 may comprise a demodulator, decoder, frequency tuner, combinations thereof, and the like. The media device 120 may be directly connected to the network (e.g., for communications via in-band and/or out-of-band signals of a content delivery network) and/or connected to the network 116 via a communication terminal 122 (e.g., for communications via a packet switched network). The media device 120 may implement one or more applications, such as content viewers, social media applications, news applications, gaming applications, content stores, electronic program guides, combinations thereof, and the like. Those skilled in the art will appreciate that the signal may be demodulated and/or decoded in a variety of equipment, including the communication terminal 122, a computer, a TV, a monitor, or a satellite dish. The communication terminal 122 may be located at the user location 119. The communication terminal 122 may be configured to communicate with the network 116. The communication terminal 122 may be a modem (e.g., cable modem), a router, a gateway, a switch, a network terminal (e.g., optical network unit), combinations thereof, and the like. The communication terminal 122 may be configured for communication with the network 116 via a variety of protocols, such as IP, transmission control protocol, file transfer protocol, session initiation protocol, voice over IP (e.g., VoIP), combinations thereof, and the like. The communication terminal 122, for a cable network, may be configured to facilitate network access via a variety of communication protocols and standards, such as Data Over Cable Service Interface Specification (DOCSIS).
A first access point 123 (e.g., a wireless access point) may be located at the user location 119. The first access point 123 may be configured to provide one or more wireless networks in at least a portion of the user location 119. The first access point 123 may be configured to facilitate access to the network 116 to devices configured with a compatible wireless radio, such as a mobile device 124, the media device 120, the display device 121, or other computing devices (e.g., laptops, sensor devices, security devices). The first access point 123 may be associated with a user managed network (e.g., local area network), a service provider managed network (e.g., public network for users of the service provider), combinations thereof, and the like. It should be noted that in some configurations, some or all of the first access point 123, the communication terminal 122, the media device 120, and the display device 121 may be implemented as a single device.
The user location 119 is not necessarily fixed. A user may receive content from the network 116 on the mobile device 124. The mobile device 124 may be a laptop computer, a tablet device, a computer station, a personal data assistant (PDA), a smart device (e.g., smart phone, smart apparel, smart watch, smart glasses), a GPS device, a vehicle entertainment system, a portable media player, combinations thereof, and the like. The mobile device 124 may communicate with a variety of access points (e.g., at different times and locations or simultaneously if within range of multiple access points), such as the first access point 123 or the second access point 125.
The content source 210 may comprise a content module 212 and a communications module 214. The communications module 214 may be configured to receive, send, store, generate, or otherwise process data. The content module 212 may be configured to receive, send, store, generate, or otherwise process content. For example, the content source 210 may be configured to send content stored in the content module 212. The content module 212 may receive live content (e.g., from a content feed (not pictured)). For example, the content feed may be a camera configured to capture image data and generate a camera feed (e.g., the live content or live programming). The content source 210 may be a computing device configured to receive the camera feed and process it for distribution. For example, the content source 210 may send the content to the encoder 220.
The content source 210 may be configured to insert one or more latency flags into the content. The one or more latency flags may be inserted into the content and associated with one or more events in the content. The one or more events in the content may be real-world events (e.g., the beginning or end of a game, the beginning or end of a play, the beginning or end of a game show or reality TV scene) or content events (e.g., a break between content segments). The one or more latency flags may include timing data associated with the one or more real-world events and/or timing data associated with the one or more content events.
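The insertion of a latency flag at the content source may be sketched as follows. This is a minimal illustration; the `LatencyFlag` fields and the metadata layout are assumptions for this sketch, not drawn from any standard or from a specific implementation.

```python
from dataclasses import dataclass
from typing import Optional
import time

# Hypothetical latency flag carrying timing data for an event in the
# content; field names are illustrative assumptions.
@dataclass
class LatencyFlag:
    event_id: str       # identifies the real-world or content event
    event_type: str     # e.g., "play_start", "segment_break"
    origin_time: float  # epoch seconds when the flag was inserted

def insert_latency_flag(content_metadata: dict, event_id: str,
                        event_type: str, now: Optional[float] = None) -> dict:
    """Attach a latency flag to the content's metadata at the source."""
    stamp = time.time() if now is None else now
    content_metadata.setdefault("latency_flags", []).append(
        LatencyFlag(event_id, event_type, stamp))
    return content_metadata

meta = insert_latency_flag({}, "play-17", "play_start", now=1700000000.0)
```

A downstream device could read `origin_time` back out of the flag to measure how long the content took to arrive.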
The encoder 220 may receive the content. The encoder 220 may comprise a marker module 222 and a communications module 224. The communications module 224 may be configured to receive, send, store, generate, or otherwise process data. The marker module 222 may be configured to insert, into the content, one or more markers. The one or more markers may comprise, for example, one or more SCTE-35 markers and/or one or more SCTE-224 markers. The encoder 220 may send the content and the one or more markers to the user device 230. The one or more markers may comprise one or more beacons.
The user device 230 may receive the content and the one or more markers. For example, the user device 230 may comprise a set-top-box (STB), a mobile device (e.g., a smartphone), a computer, a laptop, combinations thereof, and the like. The user device 230 may comprise a content module 232, a supplemental feature module 234, and a communications module 236. The communications module 236 may be configured to receive, send, store, generate, or otherwise process data. The content module 232 may be configured to receive, send, store, generate, or otherwise process content. The content module 232 may be configured to output the content. For example, the user device 230 may be configured to cause display of the content via a display device. The user device 230 may comprise the display device and/or the display device may be separate from the user device 230. The user device 230 may receive the content and determine the one or more markers. Based on determining the one or more markers, the user device may send one or more feature queries. For example, the user device may receive a SCTE-35 marker and may send, via a SCTE-224 signal (e.g., an out-of-band signal), the one or more feature queries. For example, the user device 230 may receive the SCTE-35 marker and may send, via a SCTE-224 or SCTE-250 message, the one or more feature queries. Similarly, an upstream device may receive the SCTE-35 marker and may implement (e.g., enforce) a policy for one or more downstream devices. The one or more feature queries may comprise, for example, one or more user device identifiers associated with the user device, location data associated with the user device, timing data associated with the content and/or timing data associated with the one or more markers (e.g., a time at which the user device received the content, a time at which the user device output the content, a time at which the user device received the one or more markers).
The one or more feature queries may comprise location data associated with the content and/or location data associated with the user device 230. For example, the location data associated with the content may indicate a geographic location within which the content is to be distributed. The location data may indicate a geographic region within which the content originated (e.g., a live-feed). The one or more feature queries may comprise one or more content identifiers such as a title, channel, production company, copyright owner, distributor, content source, combinations thereof, and the like.
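A feature query assembled by the user device after detecting a marker might look like the following sketch. All field names here are assumptions for illustration; they are not fields defined by SCTE-35 or SCTE-224.

```python
# Illustrative sketch: build a feature query from a detected marker plus
# device timing and location data. Field names are hypothetical.
def build_feature_query(device_id, marker, received_at, output_at, location):
    return {
        "device_id": device_id,
        "marker_id": marker.get("id"),
        "content_id": marker.get("content_id"),
        "received_at": received_at,  # when the device received the content
        "output_at": output_at,      # when the device output the content
        "location": location,        # e.g., a geographic region
    }

query = build_feature_query("stb-42", {"id": "m1", "content_id": "nba-001"},
                            received_at=100.0, output_at=101.5, location="NV")
```

The supplemental feature device could then use `content_id` and `location` together to select an applicable policy.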
The user device 230 may send the one or more feature queries to the supplemental feature device 240. The supplemental feature device 240 may comprise the rights management device 106 (e.g., a linear rights management (LRM) device) and/or the sportsbook 108 of
The one or more policies may comprise policy location information (e.g., a geographic region or other location where a given policy is or is not applicable), an applicability indication (e.g., applicable, not applicable), timing information (e.g., one or more policy start times, one or more policy end times), content information (e.g., one or more content titles, one or more content sources, one or more channels, one or more frequencies, one or more copyright owners, one or more distribution rights, one or more content locators such as a uniform resource locator (URL), combinations thereof, and the like). The one or more policies may indicate one or more rules (e.g., rules, regulations, laws, etc.) associated with one or more geographic regions (e.g., one or more jurisdictions). For example, a first piece of content may comprise a sports game (e.g., a live broadcast of an NBA game) and a first policy of the one or more policies may indicate gambling on the first piece of content is allowed in a first jurisdiction associated with a first user device while gambling on the first piece of content is not allowed in a second jurisdiction associated with a second user device. The one or more policies may indicate trick play (e.g., fast-forward) of the content is allowed in a first geographic region but not allowed in a second geographic region, or allowed for a first group of user devices but not for a second group of user devices.
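Evaluating such a policy against a query can be sketched as below. The policy dictionary layout is a hypothetical simplification of the policy fields described above (location, applicability, timing, and content information), with a default-deny when no policy matches.

```python
# Minimal sketch of jurisdiction- and time-based policy evaluation.
# The policy structure is an assumption for illustration.
def feature_allowed(policies, content_id, feature, jurisdiction, at_time):
    for p in policies:
        if (p["content_id"] == content_id and p["feature"] == feature
                and jurisdiction in p["jurisdictions"]
                and p["start"] <= at_time <= p["end"]):
            return p["allowed"]
    return False  # default-deny when no applicable policy is found

# Example: gambling on an NBA game allowed in one jurisdiction only.
policies = [{"content_id": "nba-001", "feature": "gambling",
             "jurisdictions": {"NV"}, "start": 0, "end": 10800,
             "allowed": True}]
```

With these policies, a query from a device in "NV" during the game window is allowed, while the same query from another jurisdiction is denied.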
The feature module 244 may be configured to send and receive supplemental feature data to and from the user device 230 and/or any other device of
For example, the user device 230 may send the one or more feature queries to the supplemental feature device 240. Based on the information in the one or more feature queries, the supplemental feature device 240 may determine the user device is located in Las Vegas Nevada and is receiving and/or outputting an Atlanta Hawks NBA game. The supplemental feature device 240 may determine, based on the one or more policies, that gambling on the Atlanta Hawks game is allowed in Las Vegas. Based on determining that gambling on the Atlanta Hawks game is allowed in Las Vegas, the feature module 244 may send a first supplemental feature message to the user device 230. The first supplemental feature message may be configured to cause the user device to launch a gambling applet (e.g., via the supplemental feature module 234) on the user device 230. The gambling applet may be configured to send and receive data. For example, the gambling applet may be configured to receive and display information related to the Atlanta Hawks game. The gambling applet may be configured to receive one or more user inputs and send and receive one or more subsequent supplemental feature messages based thereon. For example, the gambling applet may display odds related to the Atlanta Hawks game and may receive one or more wagers via a user interface associated with the gambling applet. The supplemental feature module may send the one or more wagers to the supplemental feature device where they may be processed by the feature module 244.
In operation, any one or more of the content provider 301, the gaming rules database 302 (e.g., a gaming authority such as the Nevada Gaming Control Board or any other similar authority), the state authority database 303 (e.g., a state legislature or other body), any one or more gaming leagues databases 304, or wagering organizations may provide rules to the ingest adapter 311. For example, the one or more rules may comprise an indication of whether or not a jurisdiction, league, organization, etc. allows betting, whether a betting window is only available during a given time period, whether betting is only allowed for a specific age group or device, etc. In operation, content providers may send video (and/or other data) for distribution to consumers (e.g., the media device 309).
The LRM 333 may aggregate and ingest the one or more policies, and convert them into an event scheduling and notification interface (ESNI) or digital rights management (DRM) standard format configured to apply the one or more policies. For example, the LRM 333 may ingest a gambling policy associated with an NBA basketball game and convert the gambling policy to the SCTE-224 format. The LRM 333 may store the aggregated rules in storage (e.g., cloud storage). The decision manager 313 may determine the one or more SCTE-35 markers in the content and may determine to apply one or more rules based on the content ID and/or the destination ID. The decision manager 313 may trigger the one or more supplemental features (e.g., a user interface overlay to be displayed over the content, wherein the user interface overlay is configured to facilitate placing wagers on an underlying gaming event). The LRM 333 may be configured to aggregate the one or more policies and, in response to a query from the encoder/packager 307 comprising the content ID and/or the destination ID, determine one or more supplemental features associated with the content and/or the destination (e.g., restriction information). The LRM 333 may send the restriction information indicating the one or more supplemental features (and/or the availability or non-availability thereof) to the encoder/packager 307. The encoder/packager 307 may, based on the restriction information, encode supplemental feature signaling data into the outbound stream bound for the destination. For example, by taking the gaming data from all the source inputs (League, Location, Gaming Authorities, etc.), converting this data into a SCTE-224 metadata feed (e.g., one or more SCTE-224 markers), and then integrating that SCTE-224 metadata feed into an outbound stream (e.g., the transport stream), key stakeholders such as content providers and other services may activate an in-game betting pop-up, applet, overlay, or similar functionality.
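The conversion step performed by the LRM can be sketched as below. The output structure is loosely inspired by SCTE-224 Media/MediaPoint concepts but is a simplified assumption for illustration, not a conforming implementation of the standard.

```python
# Rough sketch: convert an ingested gambling policy into a
# SCTE-224-style metadata entry. Element names are simplified
# assumptions, not the normative SCTE-224 schema.
def policy_to_scte224(policy):
    return {
        "Media": {
            "id": policy["content_id"],
            "MediaPoints": [{
                "matchTime": policy["start"],
                "expires": policy["end"],
                "Apply": {
                    "feature": policy["feature"],
                    # audience restriction, e.g., jurisdictions
                    "audience": sorted(policy["jurisdictions"]),
                },
            }],
        }
    }

entry = policy_to_scte224({"content_id": "nba-001", "feature": "gambling",
                           "jurisdictions": {"NV"}, "start": 0, "end": 10800})
```

In a fuller implementation this entry would be serialized to the XML representation that SCTE-224 actually specifies and integrated into the outbound metadata feed.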
Thus, at the start of a gaming event, a SCTE-224 Decision Engine (e.g., the decision manager 313) may respond with the playout rights, including whether or not in-game betting is allowed based on the destination ID, the content ID, local rules, etc. If betting is allowed, the player may set a flag indicating that in-game betting is allowed. Thus, if the user clicks on the in-game betting app, the player knows that it can present an in-game betting application to the user.
For example, the one or more SCTE-224 markers may be configured to cause the media device 120 to launch one or more applications. The one or more applications may be associated with the one or more supplemental features. The media device 120 may launch the one or more applications based on receipt of the one or more SCTE-224 markers. For example, the one or more SCTE-224 markers may comprise one or more supplemental feature indicators in one or more fields of the one or more SCTE-224 markers. The one or more supplemental feature indicators may indicate one or more applications associated with the underlying content, subscription information associated with the underlying content, one or more device types, location information, or the like. The media device 120 may, for example, based on the one or more applications associated with the underlying content identified in the SCTE-224 marker, determine if an application of the one or more applications is currently installed on, or otherwise available to (e.g., via download or web interface by virtue of a subscription) the media device 120. If so, the media device 120 may launch (or prompt a user to launch) the application. As described in greater detail with respect to
Additionally and/or alternatively, the LRM 333 may determine, at the moment a user tries to access the in-game betting pop-up (e.g., based on a received user input), whether that user device (e.g., based on location, class of device, subscription information, etc.) is allowed to access the pop-up. Additionally and/or alternatively, beacon technology may be incorporated to auto-correct or provide real-time audit capabilities to ensure the betting rules are being enforced. For example, when there is content eligible for betting available, a beacon may be added to the playout instructions in a similar fashion to an ad beacon at the start of an ad (e.g., in the one or more SCTE-35 markers). The beacon may be configured to trigger a beacon call. Conversely, a user device not outputting the content (i.e., on slate or alternate content) does not trigger a beacon call, and therefore it may be determined that the user device is adhering to the policy. In this manner, one or more beacons may be collected and analyzed (e.g., compared against device restrictions and a policy footprint to determine if anyone is watching gaming content in violation of the policy). The one or more beacon calls may comprise information about the content, a location associated with the media device, an output indication, one or more feature queries, an impression count, combinations thereof, and the like. The one or more beacon calls may comprise one or more output indications. The one or more output indications may comprise one or more content identifiers (e.g., a title, a channel, a content identifier, combinations thereof, and the like). The one or more output indications may comprise timing data (e.g., a time at which the one or more markers were received, a time at which the content was received, a time at which the content was output, a time at which the content was requested by the media device 120, combinations thereof, and the like).
The one or more content output indications may comprise location information (e.g., a location of the media device 120 such as latitude and longitude, a geographic region, a syscode, a jurisdiction, combinations thereof, and the like). The one or more output indications may comprise one or more user device identifiers. The one or more user device identifiers may be associated with the one or more user devices (e.g., the media device 120). For example, the one or more user device identifiers may comprise a unique string of characters, letters, numbers, symbols, etc. For example, the device identifier may comprise an OUI, a MAC address, an IP address, a model number, a brand name, or any other identifier.
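The beacon audit described above can be sketched as a comparison of collected beacon calls against a policy footprint, flagging devices outputting gaming content in regions the policy does not permit. The data shapes are illustrative assumptions.

```python
# Sketch of a beacon audit: each beacon call carries a device ID, a
# content ID, and a region; the footprint maps content to the regions
# where output is permitted. Shapes are hypothetical.
def audit_beacons(beacon_calls, allowed_regions_by_content):
    violations = []
    for call in beacon_calls:
        allowed = allowed_regions_by_content.get(call["content_id"], set())
        if call["region"] not in allowed:
            violations.append(call["device_id"])
    return violations

calls = [{"device_id": "stb-1", "content_id": "nba-001", "region": "NV"},
         {"device_id": "stb-2", "content_id": "nba-001", "region": "UT"}]
footprint = {"nba-001": {"NV"}}
```

Here `audit_beacons(calls, footprint)` would flag only the device outputting the game outside the permitted region.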
For example, it may be determined that the granularity of wagering opportunities 401-403 is less than the time delta associated with the network. For example, it may be determined that the points in a tennis match are concluded every 3 seconds, while the time delta associated with the network is 4 seconds. Therefore, access to the wagering opportunities 401-403 is blocked because the real event (e.g., the tennis point played out in the real world) is resolved before the content can reach the user device. However, it may be determined that the granularity of the events associated with wagering opportunities 404-406 is 5 seconds while the latency associated with the network is 4 seconds. Therefore, access to the wagering opportunities 404-406 is allowed because those events will not be resolved before the content reaches the user device.
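The comparison above reduces to a single test: access is blocked when the event granularity (the time an event takes to resolve) is less than the network time delta, because in that case the event resolves before the content arrives. A minimal sketch:

```python
# Access is allowed only when the event takes at least as long to
# resolve as the content takes to reach the user device.
def wagering_allowed(event_granularity_s, time_delta_s):
    return event_granularity_s >= time_delta_s
```

So points resolving every 3 seconds against a 4-second delta are blocked, while events resolving every 5 seconds against the same delta are allowed.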
The interface 500 may comprise one or more fields, panes, areas, etc. The one or more fields may comprise, for example, a first field 501, a second field 503, one or more timing fields 505, one or more source fields 506 and 507, and one or more content fields 508A-D associated with one or more content items. The one or more timing fields 505 may be configured to indicate when content is available (e.g., airing or otherwise available or accessible to a user). For example, the one or more timing fields may indicate a start time, duration, end time, combinations thereof, and the like associated with available content items. The timing field 505 indicates content that may be available today, tomorrow, next week, or any time in the future. The one or more channel indicators 506 and 507 may be configured to indicate a channel or other means of accessing the available content. For example, channel indicator 506 indicates content available on ESPN while channel indicator 507 indicates content available on NBC. The one or more timing indicators 505 and the one or more channel indicators 506 and 507 may be dynamic (e.g., updated as time and content offerings change). The interface 500 may be configured as a navigable interface configured to receive one or more user inputs and update, based on the one or more user inputs, an output associated with the interface. For example, the interface 500 may be configured as a navigable menu wherein the navigable menu displays available content and may be configured to receive a selection of a content item of the available content from the user, via the user device. Based on the user input, the interface 500 may be updated to output information about the selected content. For example, in
The first field 501 may be configured to display primary content data associated with the selected content and supplemental feature data 502 associated with the selected content. The primary content data may indicate a title, subject matter, air time (or any other availability), content source, content owner, combinations thereof, and the like. For example, the first field 501 indicates an NBA Basketball game between the Atlanta Hawks and Washington Wizards is airing nationwide. The second field 503 may be configured to output (e.g., display) additional data associated with the selected content item. For example, the second field 503 in
The supplemental feature data 502 may indicate one or more supplemental features associated with the content. For example, the supplemental feature data 502 indicates the Atlanta vs. Washington game is associated with in-game betting, social media sharing, and AR/VR enhancement. The supplemental feature data 502 may indicate one or more supplemental feature icons 502A-D. The one or more supplemental feature icons 502A-D may correspond to or otherwise be associated with one or more supplemental features and may be output on the interface 500 (e.g., in the area indicated by 502, the one or more fields indicating the one or more available content items, etc.). For example, content item 508A, ATL vs. WSH (the Atlanta Hawks vs. the Washington Wizards), is associated with three icons (in-game betting, social media sharing, and AR/VR), while the LAL vs. LAC (Los Angeles Lakers vs. Los Angeles Clippers) game (associated with field 508B) is associated with only the AR/VR icon. NYC vs. BOS is associated with the AR/VR icon and the social media sharing icon, while a sporting event featuring teams from St. Louis and Chicago is associated only with in-game betting (as indicated by the dollar sign). The supplemental feature data 502 may indicate one or more geographic regions associated with the one or more supplemental features. The one or more geographic regions may indicate where the one or more supplemental features are available and whether or not the one or more supplemental features are available where the user is (e.g., by virtue of where a set-top-box, user device, or any other device is located).
The interface 620 in
The one or more supplemental features may comprise one or more interactive features. For example, the one or more supplemental features may comprise one or more wagering opportunities, one or more game show answer opportunities, one or more polling or voting opportunities, combinations thereof, and the like as described herein.
At 720, a latency of a network associated with the user device may be determined. The latency may be determined based on a time delta between an occurrence of an event featured in the content (e.g., an event in real life) and output of the event at the user device. The time delta may be determined based on information such as one or more time tags (e.g., one or more latency flags). The time delta may be determined based on optical character recognition. For example, the broadcast content (e.g., the live programming) may comprise visual timing information (e.g., a clock, a countdown timer, etc.). The visual timing information may be compared to timing information determined or generated by the receiving device (e.g., a clock on the media device). A difference may be determined. The one or more latency flags may comprise, for example, a time tag, a digital signature, or any other data or metadata associated with a moment in time (e.g., in an absolute sense and/or in a relative sense). The one or more latency flags may be inserted at the origin of the content (e.g., the source of the broadcast), and passed with the content through a content distribution network to a requesting user device (e.g., the media device 120). As the one or more latency flags pass through the content delivery/distribution network, they may be sent to and received by various devices along the communication paths. One or more devices may interact with the one or more latency flags so as to monitor, modify, add information to, or delete information (e.g., data) from the latency flag. For example, one or more headers or footers may be added or deleted, one or more payloads may be modified, information may be added to or removed from the one or more payloads, or one or more packets or frames may be encapsulated. The one or more encapsulations may include the latency flag and/or other timing information.
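Both determination methods described above amount to subtracting an origin-side timestamp from a device-side timestamp. A hedged sketch, with illustrative field names:

```python
# Sketch of determining the time delta two ways; both are simple
# timestamp differences. Field names are assumptions for illustration.
def delta_from_flag(flag, local_render_time):
    """Seconds between flag insertion at the origin and local rendering."""
    return local_render_time - flag["origin_time"]

def delta_from_clock(onscreen_clock_s, device_clock_s):
    """Alternative: compare a visually recognized on-screen clock value
    (e.g., obtained via optical character recognition) to the device's
    own clock."""
    return device_clock_s - onscreen_clock_s
```

Either delta can then be compared against the time thresholds associated with the supplemental features.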
At 730, access to the one or more supplemental features may be blocked. For example, the access to the supplemental feature may be blocked based on the time delta exceeding the time threshold. Blocking access to supplemental features may comprise not outputting the supplemental features, terminating output of the supplemental features, “locking” the supplemental features, not accepting or not sending user interactions associated with the supplemental features, combinations thereof, and the like.
The method may comprise granting access to a supplemental feature of the one or more supplemental features. The method may comprise receiving one or more user inputs associated with the one or more supplemental features. For example, the one or more user inputs may comprise one or more wagers, one or more game show answers, one or more poll responses, combinations thereof, and the like. The method may comprise determining a change in the latency. The method may comprise outputting, based on the change in the latency, the one or more supplemental features.
At 820, one or more supplemental features associated with the content may be determined. The one or more supplemental features may comprise one or more interactive features. For example, the one or more supplemental features may comprise one or more wagering opportunities, one or more game show answer opportunities, one or more polling opportunities, combinations thereof, and the like as described herein.
The one or more supplemental features may be associated with one or more time thresholds. The one or more time thresholds may indicate a timing granularity associated with the one or more supplemental features. The timing granularity may indicate a length of time (e.g., a time duration) typically required to resolve an event (e.g., a live event). For example, Major League Baseball employs a pitch-clock which mandates a time frame during which the pitcher must throw the ball without incurring a penalty. For example, game shows may employ timers to indicate a countdown during which contestants must take action or risk being disqualified or losing. Other live events may be associated with time thresholds indicating an average time for an event or series of events to resolve or conclude. For example, in the NFL, the average length (e.g., time duration) of a single play (e.g., a “down”) is four seconds but is not mandated by rule. On the other hand, teams have 40 seconds to begin a play after the previous play concludes. Further, using the NFL again as an example, while the total number of play minutes (e.g., time during which the game clock is running) is mandated, the actual length of time that a game may last is not mandated. For example, the average NFL game requires over 3 hours to complete the 60 minutes of play time. For example, some tennis organizations employ a service clock that mandates players begin the service motion within 25 seconds of the conclusion of the previous point.
The one or more time thresholds may be indicated in one or more policies associated with the content (e.g., associated with the content ID) and/or associated with the one or more supplemental features. The one or more policies may be sent with (and/or received with) the content. The one or more policies may be requested by the user device (e.g., the media device 120) based on receiving the content. The one or more policies may be stored on the user device or may be stored remotely.
At 830, a time delta may be determined. The time delta may be determined based on one or more time tags (e.g., one or more latency flags) in the content. The latency flag may comprise, for example, timing information such as a time tag, a digital signature, or any other data or metadata associated with a moment in time (e.g., in an absolute sense and/or in a relative sense). The latency flag may be inserted at the origin of the content (e.g., the source of the broadcast), and passed with the content through a content distribution network to a requesting user device (e.g., the media device 120). As the latency flag passes through the distribution network, it may be sent to and received by various devices along the way. One or more devices may interact with the latency flag so as to modify, add information to, or delete information (e.g., data) from the latency flag. For example, one or more headers or footers may be added or deleted, one or more payloads may be modified, information may be added to or removed from the one or more payloads, or one or more packets or frames may be encapsulated. The one or more encapsulations may include the latency flag and/or other timing information.
At 840, it may be determined that the time delta exceeds a time threshold of the one or more time thresholds. For example, the time delta may be 3 seconds, while a first time threshold of the one or more time thresholds is one second. A second time threshold of the one or more time thresholds may be longer (e.g., four seconds, one hour, etc.). For example, an arrival time may be determined (e.g., measured) at the user device. The time delta may be determined at any point of the delivery path.
At 850, access to a supplemental feature of the one or more supplemental features may be blocked. For example, the access to the supplemental feature may be blocked based on the time delta exceeding the time threshold. For example, if a play has already occurred, by the time the play is rendered the result may be generally known to the user via social media or other means, and therefore a bet cannot be placed. Blocking access to supplemental features may comprise not outputting the supplemental features, terminating output of the supplemental features, “locking” the supplemental features, not accepting or not sending user interactions associated with the supplemental features, combinations thereof, and the like. For example, it may be determined that the granularity of wagering opportunities is less than the time delta. For example, it may be determined that the points in a tennis match are concluded every 3 seconds, while the time delta associated with the network is 4 seconds. Therefore, access to the wagering opportunities is blocked because the real event (e.g., the tennis point played out in the real world) is resolved before the content can reach the user device. However, it may be determined that the granularity of the events associated with wagering opportunities is 5 seconds while the latency associated with the network is 4 seconds. Therefore, access to the wagering opportunities is allowed because those events will not be resolved before the content reaches the user device.
The method may comprise granting access to a supplemental feature of the one or more supplemental features. The method may comprise receiving one or more user inputs associated with the one or more supplemental features. For example, the one or more user inputs may comprise one or more wagers, one or more game show answers, one or more poll responses, combinations thereof, and the like.
At 920, a plurality of time thresholds associated with the plurality of supplemental features may be determined. The plurality of time thresholds may indicate a timing granularity associated with the one or more supplemental features. The timing granularity may indicate a length of time (e.g., a time duration) typically required to resolve an event (e.g., a live event). For example, Major League Baseball employs a pitch-clock which mandates a time frame during which the pitcher must throw the ball without incurring a penalty. For example, game shows may employ timers to indicate a countdown during which contestants must take action or risk being disqualified or losing. Other live events may be associated with time thresholds indicating an average time for an event or series of events to resolve or conclude. For example, in the NFL, the average length (e.g., time duration) of a single play (e.g., a “down”) is four seconds but is not mandated by rule. On the other hand, teams have 40 seconds to begin a play after the previous play concludes. Further, using the NFL again as an example, while the total number of play minutes (e.g., time during which the game clock is running) is mandated, the actual length of time that a game may last is not mandated. For example, the average NFL game requires over 3 hours to complete the 60 minutes of play time. For example, some tennis organizations employ a service clock that mandates players begin the service motion within 25 seconds of the conclusion of the previous point.
The plurality of time thresholds may be indicated in one or more policies associated with the content (e.g., associated with the content ID) and/or associated with the one or more supplemental features.
At 930, it may be determined that a first quantity of time thresholds of the plurality of time thresholds is greater than or equal to the time delta. The first quantity of time thresholds may be associated with a first quantity of supplemental features of the plurality of supplemental features.
At 940, it may be determined that a second quantity of time thresholds of the plurality of time thresholds is less than the time delta. The second quantity of time thresholds may be associated with a second quantity of supplemental features of the plurality of supplemental features.
At 950, the first quantity of supplemental features associated with the first quantity of time thresholds may be output. Outputting the first quantity of supplemental features associated with the first quantity of time thresholds may comprise displaying the first quantity of supplemental features, sending the first quantity of supplemental features, enabling the first quantity of supplemental features, granting access to the first quantity of supplemental features, combinations thereof, and the like.
At 960, access to the second quantity of supplemental features associated with the second quantity of time thresholds may be blocked. Blocking access to the second quantity of supplemental features may comprise not outputting the second quantity of supplemental features, terminating output of the second quantity of supplemental features, “locking” the second quantity of supplemental features, not accepting or not sending user interactions associated with the second quantity of supplemental features, combinations thereof, and the like.
The method may comprise receiving one or more user inputs associated with the plurality of supplemental features. The method may comprise determining a change in the time delta. The method may comprise granting access, based on the change in the time delta, to the second quantity of supplemental features.
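The partitioning described at steps 930–960, together with re-granting access when the time delta changes, can be sketched as follows. This is a minimal sketch under stated assumptions: the feature names, the function name, and the representation of time thresholds as a simple mapping from feature to granularity in seconds are all hypothetical, and are not part of the disclosed method.

```python
def partition_features(thresholds: dict[str, float], time_delta_s: float):
    """Hypothetical sketch of steps 930-960: split features into a first
    quantity (threshold >= time delta, output at 950) and a second quantity
    (threshold < time delta, blocked at 960)."""
    output = {f for f, t in thresholds.items() if t >= time_delta_s}
    blocked = {f for f, t in thresholds.items() if t < time_delta_s}
    return output, blocked

# Hypothetical per-feature time thresholds (seconds), e.g., from a policy
# associated with the content ID.
thresholds = {"pitch_wager": 20.0, "point_wager": 3.0, "game_poll": 40.0}

granted, blocked = partition_features(thresholds, time_delta_s=4.0)
# granted: pitch_wager and game_poll; blocked: point_wager

# If the time delta later changes (e.g., network latency drops to 2 seconds),
# the previously blocked features may be granted access.
granted_later, blocked_later = partition_features(thresholds, time_delta_s=2.0)
# blocked_later is empty
```

Re-running the partition whenever a change in the time delta is determined corresponds to the step of granting access, based on the change in the time delta, to the second quantity of supplemental features.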
The computer 1001 may operate on and/or comprise a variety of computer readable media (e.g., non-transitory). The readable media may be any available media that is accessible by the computer 1001 and may comprise both volatile and non-volatile media, removable and non-removable media. The system memory 1012 has computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory 1012 may store data such as the feature data 1007 and/or program modules such as the operating system 1005 and the feature software 1006 that are accessible to and/or are operated on by the one or more processors 1003. The machine learning module may comprise one or more of the feature data 1007 and/or the feature software 1006.
The computer 1001 may also comprise other removable/non-removable, volatile/non-volatile computer storage media.
Any quantity of program modules may be stored on the mass storage device 1004, such as the operating system 1005 and the feature software 1006. One or more of the operating system 1005 and the feature software 1006 (or some combination thereof) may comprise elements of the program modules. The feature data 1007 may also be stored on the mass storage device 1004. The feature data 1007 may be stored in any of one or more databases known in the art. Such databases may be DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like. The databases may be centralized or distributed across locations within the network 1015.
A user may enter commands and information into the computer 1001 via an input device (not shown). Examples of such input devices comprise, but are not limited to, a keyboard, a pointing device (e.g., a computer mouse, remote control), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, a motion sensor, and the like. These and other input devices may be connected to the one or more processors 1003 via a human machine interface 1002 that is coupled to the bus 1013, but may be connected by other interface and bus structures, such as a parallel port, a game port, an IEEE 1394 port (also known as a Firewire port), a serial port, the network adapter 1008, and/or a universal serial bus (USB).
The display device 1011 may also be connected to the bus 1013 via an interface, such as the display adapter 1009. It is contemplated that the computer 1001 may comprise more than one display adapter 1009 and the computer 1001 may comprise more than one display device 1011. The display device 1011 may be a monitor, an LCD (Liquid Crystal Display), a light emitting diode (LED) display, a television, a smart lens, smart glass, and/or a projector. In addition to the display device 1011, other output peripheral devices may comprise components such as speakers (not shown) and a printer (not shown), which may be connected to the computer 1001 via the Input/Output Interface 1010. Any step and/or result of the methods may be output (or caused to be output) in any form to an output device. Such output may be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. The display device 1011 and the computer 1001 may be part of one device, or separate devices.
The computer 1001 may operate in a networked environment using logical connections to one or more remote computing devices 1014A,B,C. A remote computing device may be a personal computer, computing station (e.g., workstation), portable computer (e.g., laptop, mobile phone, tablet device), smart device (e.g., smartphone, smart watch, activity tracker, smart apparel, smart accessory), security and/or monitoring device, a server, a router, a network computer, a peer device, edge device, and so on. Logical connections between the computer 1001 and a remote computing device 1014A,B,C may be made via a network 1015, such as a local area network (LAN) and/or a general wide area network (WAN). Such network connections may be through the network adapter 1008. The network adapter 1008 may be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in dwellings, offices, enterprise-wide computer networks, intranets, and the Internet.
Application programs and other executable program components such as the operating system 1005 are shown herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 1001, and are executed by the one or more processors 1003 of the computer. An implementation of the feature software 1006 may be stored on or sent across some form of computer readable media. Any of the described methods may be performed by processor-executable instructions embodied on computer readable media.
While specific configurations have been described, it is not intended that the scope be limited to the particular configurations set forth, as the configurations herein are intended in all respects to be possible configurations rather than restrictive.
Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; the quantity or type of configurations described in the specification.
It will be apparent to those skilled in the art that various modifications and variations may be made without departing from the scope or spirit. Other configurations will be apparent to those skilled in the art from consideration of the specification and practice described herein. It is intended that the specification and described configurations be considered as exemplary only, with a true scope and spirit being indicated by the following claims.