Dynamic binding of live video content

Information

  • Patent Grant
  • Patent Number
    9,953,347
  • Date Filed
    Thursday, September 11, 2014
  • Date Issued
    Tuesday, April 24, 2018
Abstract
A method of dynamically binding supplemental content to live video content includes receiving the live video content at a device and identifying a description of the live video content. The method also includes obtaining the supplemental content based on the description, where the supplemental content provides additional information about one or more products or services related to the live video content. The method further includes dynamically binding the supplemental content to the live video content and positioning the supplemental content in association with the live video content using a supplemental interactive display.
Description
TECHNICAL FIELD

This disclosure is directed generally to software and more specifically to dynamic binding of live video content.


BACKGROUND

It is well known that videos may be broadcast or provided through a number of media, such as television, the Internet, DVDs, and the like. To finance such video broadcasts, commercial advertisements are often placed in the videos. Commercials, however, require that a video be momentarily interrupted while the commercials are displayed. Not only is this annoying to viewers, but digital video recorders (DVRs) also allow video programs to be pre-recorded. When the pre-recorded programs are viewed, DVRs allow viewers to fast-forward through the commercials, thereby defeating their effectiveness and value. When commercials are devalued, costs are not adequately covered, and broadcast service quality suffers as a result. In many cases, costs are made up by charging viewers for video services.


In many conventional systems, a variety of content, including both videos and images, has little or no interactivity. For example, when a video is viewed, the different objects in the video are merely parts of a single video stream and cannot be separated out or interacted with individually. Static advertisements placed near the video stream, even when related to the video, are not very compelling because they are separated from the video in a way that does not encourage users to interact with them.


SUMMARY

This disclosure provides dynamic binding of live video content.


In a first embodiment, a method of dynamically binding supplemental content to live video content includes receiving the live video content at a device and identifying a description of the live video content. The method also includes obtaining the supplemental content based on the description, where the supplemental content provides additional information about one or more products or services related to the live video content. The method further includes dynamically binding the supplemental content to the live video content and positioning the supplemental content in association with the live video content using a supplemental interactive display.


In a second embodiment, an apparatus includes at least one memory and at least one processing device. The at least one memory is configured to receive and store live video content. The at least one processing device is configured to identify a description of the live video content and obtain supplemental content based on the description, where the supplemental content provides additional information about one or more products or services related to the live video content. The at least one processing device is also configured to dynamically bind the supplemental content to the live video content and position the supplemental content in association with the live video content using a supplemental interactive display.


In a third embodiment, a non-transitory computer-readable medium includes logic stored on the computer-readable medium. The logic is configured when executed to cause at least one processing device to receive live video content and identify a description of the live video content. The logic is also configured when executed to cause at least one processing device to obtain supplemental content based on the description, where the supplemental content provides additional information about one or more products or services related to the live video content. The logic is further configured when executed to cause at least one processing device to dynamically bind the supplemental content to the live video content and position the supplemental content in association with the live video content using a supplemental interactive display.


Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of this disclosure and its advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:



FIG. 1 illustrates an example communication system that can be utilized to facilitate communication between endpoints through a communication network according to this disclosure;



FIGS. 2A through 2E illustrate example dynamic bindings of supplemental content to base content according to this disclosure;



FIGS. 3A through 3C illustrate example displays that may be created with dynamic binding according to this disclosure;



FIG. 4 illustrates another example display that may be created with dynamic binding according to this disclosure;



FIG. 5 illustrates example servers storing supplemental content according to this disclosure;



FIG. 6 illustrates an example record stored in a server such as a database server according to this disclosure;



FIG. 7 illustrates an example decision engine according to this disclosure;



FIG. 8 illustrates an example process for ad-hoc binding of supplemental content to base content according to this disclosure;



FIG. 9 illustrates an example ad-hoc binding system according to this disclosure;



FIG. 10 illustrates an example process for dynamically binding supplemental content to video content according to this disclosure;



FIG. 11 illustrates an example process for dynamically binding supplemental content to live video content according to this disclosure;



FIG. 12 illustrates an example process for dynamically binding supplemental content to a content transactional item according to this disclosure; and



FIG. 13 illustrates an example computing device for dynamically binding supplemental content according to this disclosure.





DETAILED DESCRIPTION


FIGS. 1 through 13, discussed below, and the various embodiments used to describe the principles of this disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of this disclosure may be implemented in any suitably arranged system.



FIG. 1 illustrates an example communication system 100 that can be utilized to facilitate communication between endpoints through a communication network according to this disclosure. As shown in FIG. 1, the system 100 includes various endpoints 110, 120, and 130. In this document, the term “endpoint” generally refers to any device, system, or other structure that communicates with another endpoint. Example endpoints 110, 120, and 130 include but are not limited to servers (such as application servers and enterprise servers), desktop computers, laptop computers, netbook computers, tablet computers (such as APPLE IPADs), switches, mobile phones (such as IPHONE and ANDROID-based phones), networked glasses (such as GOOGLE GLASS), networked televisions, networked disc players, components in a cloud-computing network, or any other device or component suitable for communicating information to and from a communication network. The endpoints 110, 120, and 130 may support Internet Protocol (IP) or any other suitable communication protocol(s). The endpoints 110, 120, and 130 may additionally include medium access control (MAC) and physical layer (PHY) interfaces, such as those that conform to the IEEE 802.11 standard. Each endpoint 110, 120, and 130 can have a device identifier, such as a MAC address, and may have a device profile that describes the endpoint.


A communication network 140 facilitates communications between the endpoints 110, 120, and 130. Various links 115, 125, and 135 couple the endpoints 110, 120, and 130 to the communication network 140. The communication network 140 and associated links 115, 125, and 135 may include but are not limited to a public or private data network, a telephony network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a wireline or wireless network (such as GSM, CDMA, LTE, WIMAX, 5G, or the like), a local/regional/global communication network, portions of a cloud-computing network, a communication bus for components in a system, an optical network, a satellite network, an enterprise intranet, or any other communication links or combinations of the preceding. In particular embodiments, portions of the links 115, 125, 135 or the communication network 140 may be on or form a part of the Internet.


Although the endpoints 110, 120, and 130 generally appear as being in a single location in FIG. 1, various endpoints may be geographically dispersed, such as in cloud computing scenarios. Also, each endpoint could represent a fixed or mobile device. When the endpoints 110, 120, and 130 communicate with one another, any of a variety of security schemes may be utilized. As an example, in particular embodiments, the endpoints 110 and 120 may represent clients, and the endpoint(s) 130 may represent one or more servers in a client-server architecture. The server(s) may host a website, and the website may have a registration process whereby a user establishes a username and password to authenticate or log into the website. The website may additionally utilize a web application for any particular application or feature that may need to be served up to the website for use by the user. Additionally, in particular configurations, the communication between the endpoints 110 and 120 may be facilitated using a communication path through the endpoint 130.


Various embodiments described in this patent document may benefit from and/or utilize SMART CONTAINER technology from CINSAY, INC., which is briefly described below and is described more fully in U.S. Pat. No. 8,769,053 (which is hereby incorporated by reference in its entirety). This technology provides an innovative way for merchants to reach their customers online. In the traditional online sales model, merchants need to create search or display ads that show up when online consumers visit search engine sites or various web properties. If a consumer sees an interesting ad related to a product or service, the consumer needs to leave his or her current activity and visit some other web destination to discover more information or make an online purchase. Consumers have specific online behavior patterns. If consumers are actively shopping, this traditional multistep model is workable. However, because the model requires that a consumer stop what he or she is doing and visit some other online destination, it breaks down when consumers are on social sites interacting with friends, reading the news, playing games, or engaging in other online activities. In those situations, consumers are much less likely to leave their current activities to visit some external Internet destination.


The SMART CONTAINER model brings product information or a store to the consumer. The SMART CONTAINER code/technology virally syndicates across the web, for example, using components described with reference to FIGS. 1 and 5 or using other components. It is ideal for the types of destinations that online consumers tend to frequent, such as social networks and blogs. Regardless of whether the SMART CONTAINER code is located on a web page, a blog article, a social network page or wall, or a mobile device, a consumer can complete a transaction right there with no need to be diverted to some external destination.


SMART CONTAINER objects are intelligent Internet objects that virally syndicate and propagate across the web and other connected networks and mobile devices. They can be configured in a variety of ways to address the entire value chain of online marketing and shopping. This includes impressions, clicks, lead generation, and performing e-commerce transactions. A modern shopping experience works best when interactive media is used. One of the most appealing forms of media for sales and shopping is video. It allows a much more lifelike representation than text or static pictures. It also creates a much richer product browsing or shopping experience.


SMART CONTAINER code is normally configured with a video player window, a selection of products or services being offered, and a variety of related video clips. This collection of video clips allows a consumer to learn more about the products or services being offered. The consumer can select any of these offered items to get more details, all enclosed within the SMART CONTAINER technology.


The offered items (products or services) may be items being advertised or sold. Depending on the type, the SMART CONTAINER code may allow a consumer to request to be contacted, or even purchase the item, right there. The consumer need not leave his or her current activity or web page. Offered items could also include or be associated with discounts or coupons. They may even be an opportunity to donate to a charity or political campaign. Of course, sometimes it does make sense to visit another Internet destination, and if appropriate, the consumer can certainly be linked there as well.


Because the SMART CONTAINER code handles all the complexity, it can turn the simplest website into an instant e-commerce store. This enables anyone to transact online without having to deal with the complexity of setting up an e-commerce site. For merchants with an e-commerce site, it readily enables a much richer shopping experience. For the creative hobbyist or local band, it lets them readily sell directly to interested consumers. To support and promote them, supplemental items in the SMART CONTAINER code called ON-DEMAND merchandise can be offered. Merchants can custom design a selection of apparel with their art and graphics to be sold along with their own creations. ON-DEMAND fulfillment dynamically produces and ships their custom apparel for them, eliminating the need to manage inventory and providing their online customers with a richer line of products. Of course, because their instant e-commerce stores are based on SMART CONTAINER objects, it can also propagate out onto all forms of viral syndication methods as well.


The SMART CONTAINER code is also auto-customizing according to particular configurations. If a device is a traditional personal computer (PC) or laptop, the code will render using optimal technology, which for this purpose could be FLASH. On mobile devices such as IPHONEs, IPADs, or ANDROID phones, this means HTML5 or a native interactive app will likely be used. The items in the SMART CONTAINER code also know about each other according to particular configurations. When a video is playing, a container can update the product and service objects being shown so that they correspond with the particular sequence in a video segment. This allows a “mini QVC” shopping channel to be created and syndicated across the Internet. Beyond device type, there are other dimensions of customization. Smaller devices and some environments such as social sites restrict window sizes, so the SMART CONTAINER code adapts. In addition, it may be appropriate to provide different content based on geolocation, so the SMART CONTAINER code can customize for this as well.


The SMART CONTAINER code virally syndicates across the Internet following the more popular network paths. SMART CONTAINER objects can be hosted on traditional web pages or blogs, contained in emails, operated on mobile devices, or propagated across social networks. Because the SMART CONTAINER code is flexible, it can also be set up in the form factor of a display ad unit and distributed via ad servers on display advertising networks. When the code exists on social networks like FACEBOOK, it can ride the wave of user “likes.” For example, if a woman shopper likes some great shoes shown in a SMART CONTAINER object interface, the SMART CONTAINER object can propagate directly to her “wall.” Now all of her friends see the SMART CONTAINER object and can view or transact right there on their own walls. Of course, if any of her friends also “like” it, the SMART CONTAINER object propagates and rides the wave further out into that branch of the social network, yielding a potential exponential growth factor. The container does not necessarily involve products like shoes. As another example, a container can support a politician running for office. His or her supporters may be passionate about a message and “like” it, again making it available to their networks. Now, similarly-minded political supporters can view those messages and, if so moved, donate to the cause. Yet another example is sports, where a fan may wish to watch content on his or her high-definition (HD) large screen television. More and more users have interconnected devices such as ROKU and CHROMECAST devices, and the SMART CONTAINER code may be sent to such IP television boxes as well.


When merchants launch and syndicate their SMART CONTAINER objects onto the Internet, they want to know how their campaigns are performing. SMART CONTAINER objects report back status on events and transactions of interest, such as impressions, video views, clicks, leads, and sales. All such events/transactions can be sent back as they occur, providing details on how the campaigns are doing. Because the containers are smart, they can be instructed to change behavior, offer different clips, update products, or end when it is time to stop a marketing or sales campaign.


Another form of tracking relates to how the SMART CONTAINER code is propagated. A merchant may wish to use affiliates to help syndicate SMART CONTAINER objects and pay the affiliates a percentage based on the transactions resulting from their work. SMART CONTAINER objects can be tagged with affiliate tracking identifiers, allowing status reports and transactions from container instances or their descendants to be properly filtered. Another tracking usage may be for a politician to assign affiliate codes to his or her supporters and measure whose efforts result in the most new supporters.


SMART CONTAINER objects are designed to be highly scalable according to particular configurations. Rather than burden a single website with massive traffic (which would result from a traditional model of bringing all consumers to a store), SMART CONTAINER code operates in a distributed manner. For example, the SMART CONTAINER code can execute where it is located, such as on a blog, a social network, or a mobile device. SMART CONTAINER objects fetch their instructions when started and then gather their product items and video streams from a worldwide distributed content delivery network. This results in a highly scalable architecture, allowing millions of concurrent consumers.


By bringing the store to the customer, the SMART CONTAINER code enables many new ways for merchants to connect with their consumers without disrupting the consumers' web activities. The end result is to connect the consumers directly with the merchants, eliminating the middleman and promoting a much more natural shopping experience.


The functionality described above may be implemented using any suitable components, such as those described with reference to FIGS. 1 and 13 or other suitable components. The code itself may be written in any suitable format, including but not limited to Java, C++, C-sharp, HTML, HTML5, JAVASCRIPT, PYTHON, RUBY, and the like.


There exists a variety of base content (e.g., media content such as video and audio content) in the world that is independent, existing separate from any special containers such as the SMART CONTAINER code. Certain embodiments of this disclosure seek to harness the power of such content by dynamically binding supplemental content to the underlying base content. To “dynamically bind” base content (whether video, audio, or other type of content) to supplemental content, the supplemental content is associated with the base content in real time as the base content is being delivered to a device. As a simple example, a video may be streamed from a content server, such as is provided by one of many video streaming services. According to certain embodiments of this disclosure, supplemental content is added dynamically to such content. In one or more embodiments, “dynamically” may also be referred to as “real-time.” The base content is dynamically bound to the supplemental content through an interactive supplemental display. In an embodiment, the interactive supplemental display is similar to the SMART CONTAINER. The disclosure below, among other things, describes the addition of such supplemental content and the determination of which supplemental content to provide. This can be done based on the base content, a user profile, a device profile, or other factors.
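
As an illustration only, the meaning of "dynamically bind" can be expressed in a few lines of Python. Everything in this sketch (the BoundDisplay structure, the dynamically_bind function, and the lookup) is a hypothetical stand-in rather than an implementation defined by this disclosure:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class BoundDisplay:
    """A base-content stream paired, at delivery time, with supplemental content."""
    base_stream_url: str
    supplemental_items: List[Dict] = field(default_factory=list)
    position: str = "right-of-base"   # could also be "overlay", "below-base", etc.

def dynamically_bind(base_stream_url: str,
                     lookup: Callable[[str], List[Dict]]) -> BoundDisplay:
    # The binding happens on the fly: supplemental content is fetched while
    # the base content is being delivered, not authored into the video itself.
    return BoundDisplay(base_stream_url, lookup(base_stream_url))

# Invented lookup; a real one would query a supplemental content server.
display = dynamically_bind("https://video.example.com/stream/123",
                           lambda url: [{"product": "dress", "price": 79.99}])
print(display.position, display.supplemental_items)
```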



FIGS. 2A through 2E illustrate example dynamic bindings of supplemental content to base content according to this disclosure. As seen in FIGS. 2A through 2E, a base content 200 is generally shown. The base content 200 represents literally any type of visual or audio content—be it a picture, a streaming video, a live stream from a remote location, real-time content from the current location of a device, a web page, or other types of visual content. The supplemental content represents additional information related to the base content and/or a user accessing the base content. In one or more embodiments, the supplemental content can override the module playing the base content and expand the functionality of the module (such as with YOUTUBE).


In some embodiments, supplemental content may include additional information, configurable controls, selectable configurations, content transactional items such as products or services, and the like. Although the displayable area for the base content 200 is generally shown as having a rectangular boundary area, the displayable area for the base content 200 may take on other shapes. Additionally, the base content 200 may be shown in (or through) a virtually limitless number of devices, from mobile phones to computers to televisions.


As examples of the above, the base content 200 may be a video streamed through a video-based provider, such as YOUTUBE, VIMEO, NETFLIX, REDBOX INSTANT or others, being viewed on a computer, a mobile device, a television screen, or any other suitable device or devices. The base content 200 may also be a real-time view of content at a current location being viewed through an electronic device such as GOOGLE GLASS or a real-time view in a mobile computing device such as a tablet or phone. In yet other configurations, the base content 200 may be an image. In still other configurations, the base content 200 may be a web page.


Also shown in FIGS. 2A through 2E are non-limiting examples of the supplemental content 210a-210e that are configured to dynamically bind to the base content 200. Although certain examples are provided, it should be understood that such examples are non-limiting and other configurations may be utilized as will become apparent to one of ordinary skill in the art having read this disclosure. In some configurations, the supplemental content may overlay the base content, whether partially transparent or not. Examples of supplemental content 210b and 210e overlaying the base content 200 are shown in FIG. 2B (left position) and FIG. 2E. In other configurations, the supplemental content may be positioned outside of the base content 200, such as to the left, right, top, bottom, or other positions. Examples of supplemental content 210a, 210c, and 210d outside of a boundary area of the base content 200 are shown in FIG. 2A, FIG. 2C (left position), and FIG. 2D.


In certain configurations, the supplemental content may be selectively displayable and/or selectively “hideable,” such as due to user action or inaction. For example, in some configurations, a user interacting with a container for the base content may cause a menu with supplemental content to appear. Examples of these configurations are shown in FIGS. 2B and 2C with the double-edged arrows representing selective display-ability or selective hide-ability.


In still other configurations, the supplemental content may begin outside an area of the base content 200 and expand to cover, partially transparent or not, the base content 200. For example, as seen in FIG. 2D, the position of the supplemental content 210d on the left is just below a displayable area for the base content 200. However, in the position of the supplemental content 210d on the right (which may be the result of interactivity by a user), the supplemental content 210d expands to at least partially overlay the base content 200 (as shown by an area 210d′). A similar configuration is also shown in FIG. 2E except that the supplemental content 210e began as an overlay of the screen and an area 210e′ covers an entire edge of the displayable area for the base content 200.


In particular configurations, the supplemental content is independent of the base content and is bound dynamically as the base content is displayed. For example, in particular settings, a web page may have a container (such as an embed code) that instantiates (loads or invokes) (i) the base content and (ii) the supplemental content. According to certain configurations, a call for supplemental content can be based on what is being shown in the base content, with the supplemental content specifically relating to the base content. Additionally, the supplemental content may be based on other parameters, such as a user profile or a geolocation of the user viewing the base content. As another example, in other configurations, a page analyzer can review a web page to determine locations where base content is contained and overlay or adjust such base content.


According to this specification, the concept of “binding” refers to associating supplemental content with base content, whereas “dynamic binding” refers to associating content on the fly, such as upon detection of the base content. In particular configurations, the initial association may allow the subsequent sharing of both the supplemental content and the base content together, as will be described with reference to figures below. More particularly, in certain configurations, an initial dynamic binding yields a shareable container (which may or may not be instantiated by an embed code) that, upon being shared to a new device, instantiates the underlying base content and the supplemental content. In other configurations, no such container is created, and a dynamic binding or dynamic association of the supplemental content is done for every playing of the video. In yet other configurations, supplemental content may be bound to a video, and the particular content is dynamically determined when the video is requested for playback.


A variety of technologies may be used for the above-described dynamic binding. As an example non-limiting configuration, the supplemental content may be configured as one layer in a display, where the base content is another layer. In such configurations, the layer for the supplemental content may be placed in front of the layer for the base content to allow an overlay where appropriate. In other configurations, the supplemental content may simply be provided a positioning with respect to the base content.


In particular configurations, the supplemental content can be dynamically sized based on a determined size of the base content and/or the spacing configurations for the device on which the base content and the supplemental content will be displayed. In other configurations, given a particular size requested for the base content, a container of that same size may be used that itself requests a slightly reduced-size version of the base content, leaving extra room for the supplemental content. In implementing such a configuration, the technology can intercept a request for the base content and redirect it to request a container that, in turn, requests the base content and then the supplemental content. This latter configuration may be beneficial for scenarios where the supplemental content does not overlay the base content.
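
As a rough sketch of this latter configuration, assuming invented dimensions and names, a container can keep the originally requested footprint while reserving a band for the supplemental content:

```python
def layout_container(container_w: int, container_h: int,
                     supplemental_band: int = 120) -> dict:
    """Keep the container at the size originally requested for the base
    content, but request a slightly reduced base content, reserving a
    band along the bottom for the supplemental content."""
    base_h = container_h - supplemental_band
    if base_h <= 0:
        raise ValueError("container too small for a supplemental band")
    return {"container": (container_w, container_h),
            "base_content": (container_w, base_h),
            "supplemental": (container_w, supplemental_band)}

# A 640x480 request becomes a 640x360 base area plus a 640x120 band,
# so the supplemental content never overlays the base content.
print(layout_container(640, 480))
```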



FIGS. 3A through 3C illustrate example displays that may be created with dynamic binding according to this disclosure. With reference to FIG. 3A, a base content 300 is shown. Here, the base content 300 is a video, although as noted above other types of content may also be used for the base content 300. Two types of supplemental content are shown, namely supplemental content 310a that initially overlays the displayable area of the base content 300 and supplemental content 310b that initially does not overlay the displayable area of the base content 300.


The supplemental content 310a is an interactive toolbar that contains a variety of options, including play and audio options 311, share options 313, account login options 315, video quality options 317, and further information options 319. The functionality of the play and audio options 311 is apparent to one of ordinary skill in the art. Also shown is a play bar 312, which is likewise apparent to one of ordinary skill in the art. In particular configurations, the play bar 312 may replace a play bar that would otherwise co-exist for a display of the base content 300.


Upon clicking on the share options 313, a variety of other options may be provided. For example, a user may be given the opportunity to share a container of the dynamically bound content via networks such as FACEBOOK, MYSPACE, TWITTER, YAHOO, LINKEDIN, GOOGLE, or WORDPRESS. Furthermore, the user may be given the option to copy embed codes and share via email. Additionally, the user may be able to propagate the container by clicking the “like” thumb or “+1”ing on GOOGLE PLUS. The account login options 315 may allow a user to sign into a variety of networks including, for example, CINSAY, FACEBOOK, or GOOGLE. The video quality options 317 allow modification of the video, and the further information options 319 provide any of a variety of options that may be selected related to supplemental information.


The supplemental content 310b is shown as a product carousel that contains a plurality of interactive items corresponding to products or services shown in the video. In particular configurations, a user may interact with the displayable product carousel and purchase items or transact without leaving the displayable areas of the supplemental content 310b and base content 300. For example, with reference to a container (with the supplemental content and the base content 300) on a FACEBOOK wall of a friend, a user may purchase the product directly from the container for such items. In other configurations, a user may leave the container and be redirected to a website.


With reference to FIG. 3B, interactivity with a particular item 340 is shown, namely a product called SMYTHE. When a user mouses over an item, when eye tracking identifies a pause of the user's eyes over the item, or when the user clicks on the item, an overlay appears as shown in FIG. 3B. In another example, a user could touch a touch screen to select an item. Further information about the product or service is then shown. Additionally, when the user clicks on the “TAKE ACTION” button, the user is taken to the view shown in FIG. 3C.


With reference to FIG. 3C, a user is allowed to further interact with the overlay screen, including viewing even further additional information 350. In FIG. 3C, in some embodiments, the base content 300 may be completely overlaid; in other examples, the base content 300 may be partially overlaid. In certain configurations, the user is allowed to further interact with the overlay screen by purchasing the item displayed, sharing the item displayed, and/or closing the additional information and reverting to the screen shown in FIG. 3A or 3B. In an example embodiment, the user may access these functions by clicking a “TAKE ACTION” button 355a, a “SHARE” button 355b, and/or a “CLOSE” button 355c.



FIG. 4 illustrates another example display that may be created with dynamic binding according to this disclosure. In FIG. 4, a mobile phone 450 is positioned in front of an object. The object is displayed on the mobile phone 450, for example, as captured through a camera on the mobile phone. The display shown corresponds to underlying base content 400. Upon initiation of embodiments of this disclosure, supplemental content 410 can be provided on the display of the mobile phone 450.



FIG. 5 illustrates example servers 580a-580c storing supplemental content according to this disclosure. Three devices (namely a laptop, a mobile phone, and a networked television) are respectively showing base content 200a-200c. Once the base content 200a-200c has been identified, the appropriate supplemental content 210a-210c may be bound to the base content. The supplemental content may be located on one or more of the three different servers 580a-580c.


In different embodiments, certain supplemental content or identifiers for such supplemental content can be pre-authored to correspond to the base content and used when the base content is detected. For example, when a dress in a particular movie is shown, certain pre-authored supplemental content can be displayed. Alternatively, in other configurations, just an identifier for the supplemental content is pre-authored. For instance, a dress identifier may be pre-authored. When the base content is identified, the dress identifier may trigger the dynamic creation of content, which may include, among other things, a dynamically-changing price for the dress. Additionally, in particular configurations as discussed below, the supplemental content can be customized based on attributes of a user and/or a device displaying the base content and the supplemental content.



FIG. 6 illustrates an example record 682 stored in a server such as a database server according to this disclosure. The record 682 generically shows a value 684 that corresponds to either a supplemental content 686 or an identifier (or pointer) for the supplemental content. In particular configurations, when the base content is determined, the value 684 can be looked up to determine what supplemental content 686 should be obtained. For example, the base content may correspond to a particular movie that shows a dress. When the movie and its corresponding value are determined, the record for that value is looked up to yield the corresponding supplemental content (which may include items for the dress). As recognized by one of ordinary skill in the art, the record itself may simply contain pointers to the actual storage location of the supplemental content. In operation, the actual supplemental content and/or the identifiers (or pointers) for the supplemental content that correspond to a particular value can change over time.
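
The lookup that FIG. 6 describes can be sketched as follows; the record store, its keys, and the fetch helper are all hypothetical rather than defined by this disclosure:

```python
from typing import Dict, List

# Hypothetical record store mapping a content value 684 either to the
# supplemental content 686 itself or to a pointer where it can be fetched.
RECORDS: Dict[str, Dict] = {
    "movie-42": {"pointer": "https://supplemental.example.com/items/dress-7"},
    "movie-43": {"content": [{"product": "watch", "price": 249.00}]},
}

def fetch_from_server(url: str) -> List[Dict]:
    # Stand-in for an HTTP request to a supplemental content server.
    return [{"product": "dress", "source": url}]

def resolve_supplemental(value: str) -> List[Dict]:
    record = RECORDS.get(value)
    if record is None:
        return []                        # no record for this value
    if "content" in record:
        return record["content"]         # record stores the content directly
    return fetch_from_server(record["pointer"])   # record stores only a pointer

print(resolve_supplemental("movie-42"))
```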



FIG. 7 illustrates an example decision engine 702 according to this disclosure. The decision engine 702 may exist as a logical construction (such as software) on any suitable server or computer, which may include components described with reference to FIG. 13. According to particular embodiments, the decision engine 702 receives a value 684 corresponding to base content and user parameters 704, some of which are discussed below. Based on these inputs (and other inputs according to other configurations), the decision engine 702 determines which supplemental content is to be sent for the base content.


More than one supplemental content may correspond to a value 684. For example, as shown in FIG. 7, different supplemental contents 686a-686c correspond to the value 684. Accordingly, other items such as user parameters 704 may assist the decision engine 702 in finding the optimal supplemental content to send to a user. In addition to the supplemental content, other associated parameters, such as price 706, inventory 708, and the like, for the supplemental content may also be obtained, either based on input from the record for the supplemental content or other parameters. As will be recognized by one of ordinary skill in the art, these associated parameters may dynamically change over time.


As a non-limiting example of the above, a value may have a variety of supplemental content 686a-686c. The decision engine 702, based on dynamic feedback from previous transactions (such as from other users), may determine that the supplemental content 686c should be selected because it currently has the best transactional conversion rate for users of similar demographics (such as when using a random sampling of the items 686a-686c to determine the conversion rate). The decision engine 702 may also determine, based on user parameters, that the user is entitled to a discount because either (a) the user is a member of a loyalty rewards club or (b) the user is transacting at a discount time of the day. The above is one example of how the appropriate supplemental content can be dynamically selected based on the base content and user parameters, including parameters of a particular user and statistical parameters of other users.
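
A minimal sketch of such a decision engine follows, assuming precomputed conversion rates and invented discount rules; it illustrates the selection logic described above, not the engine of FIG. 7 itself:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Candidate:
    item_id: str
    conversion_rate: float   # observed rate from previous transactions
    base_price: float

def decide(candidates: List[Candidate], user: Dict) -> Dict:
    # Pick the candidate with the best observed conversion rate, as the
    # text describes (rates assumed precomputed from random sampling).
    best = max(candidates, key=lambda c: c.conversion_rate)
    price = best.base_price
    # Invented discount rules driven by the user parameters 704.
    if user.get("loyalty_member"):
        price *= 0.90            # assumed 10% loyalty-club discount
    if user.get("hour") in range(2, 6):
        price *= 0.95            # assumed off-peak discount window
    return {"item_id": best.item_id, "price": round(price, 2)}

candidates = [Candidate("686a", 0.012, 59.00),
              Candidate("686b", 0.019, 59.00),
              Candidate("686c", 0.031, 59.00)]
print(decide(candidates, {"loyalty_member": True, "hour": 3}))  # selects 686c
```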


As referenced above, a variety of technologies may be utilized to recognize the content. In some configurations, content fingerprinting is utilized. Almost every piece of content has certain identifying characteristics that can be used to uniquely identify the content. As a non-limiting example, audio has unique sound wave characteristics when the audio is played, and this remains true even with varying qualities of content. Examples of content fingerprinting include GOOGLE's use of image fingerprinting in its GOOGLE GOGGLES product, SHAZAM's audio fingerprinting, and GRACENOTE's audio fingerprinting. For video fingerprinting, the fingerprint of the video may be based on just the audio feed, just the video feed, or both. Additionally, for video fingerprinting, frames can be extracted and analyzed, where confidence rises as multiple frames of content are matched. Upon recognition of the fingerprint for the content, the content is identified, and appropriate supplemental information can be obtained. Yet other details surrounding content fingerprinting will become apparent to one of ordinary skill in the art after reviewing this specification.
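
A toy illustration of the multi-frame matching idea appears below; the hash values and index are invented, and a real system would use perceptual fingerprints rather than exact hashes:

```python
from typing import Dict, List, Optional

def identify_video(frame_hashes: List[str],
                   reference_index: Dict[str, str],
                   threshold: int = 3) -> Optional[str]:
    """Confidence in an identification rises as multiple extracted frames
    match the same reference video."""
    votes: Dict[str, int] = {}
    for h in frame_hashes:
        video_id = reference_index.get(h)
        if video_id is not None:
            votes[video_id] = votes.get(video_id, 0) + 1
            if votes[video_id] >= threshold:   # enough agreeing frames
                return video_id
    return None   # not enough matches to identify the content confidently

# Invented index mapping frame hashes to video identifiers.
index = {"a1f": "movie-42", "b07": "movie-42", "c93": "movie-42"}
print(identify_video(["a1f", "zzz", "b07", "c93"], index))   # -> movie-42
```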


In addition to the above content fingerprinting, other types of fingerprinting-type analysis can be done to either identify the content or enhance confidence that the content is actually the content it is believed to be. Parameters that can be evaluated include the IP address or domain name from which the content is obtained, the encoding parameters (such as the codec and the data transmission rate per second of video), the size of the content (such as the pixel dimensions and file size of an image), and specific metadata tags associated with the content. A variety of other content characteristics will become apparent to one of ordinary skill in the art after review of this disclosure.


As yet another example of content recognition, tags such as radio frequency identification (RFID) tags can be placed on objects to inform devices that read the tags of the objects' identities. As an example, in a store, a mannequin wearing a particular dress may have an RFID tag that informs devices that read the tag of the identity of the item. In a similar vein, a fashion show can broadcast a signal with an identifier of the content being shown to allow devices to determine what is being shown for the appropriate obtaining of the supplemental content.


As still another example of content recognition, the tagging of geospatial coordinates can be performed. For example, the geospatial coordinates of a statue can be tagged. When a device is in proximity of the statue or has a geospatial view (such as with a camera of the device), the items corresponding to such geospatial coordinates can be recognized.


Multiple content recognition techniques may also be used at the same time. For example, suppose the geoposition of a park is known, and it is also known that the park displays four different statues that play four different songs. The geoposition of the park, combined with the audio fingerprints of the particular songs known to be played in the park, can yield the particular statue.
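
The park example can be sketched as follows, with invented coordinates and song identifiers; the haversine distance check stands in for whatever proximity test a real system would use:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented park data: each statue is identified by the song it plays.
PARK = {"lat": 32.78, "lon": -96.80, "radius_m": 300}
STATUES_BY_SONG = {"song-A": "statue-1", "song-B": "statue-2",
                   "song-C": "statue-3", "song-D": "statue-4"}

def identify_statue(device_lat, device_lon, audio_fingerprint):
    # Step 1: the geoposition narrows the candidates to this park's statues.
    if haversine_m(device_lat, device_lon, PARK["lat"], PARK["lon"]) > PARK["radius_m"]:
        return None
    # Step 2: the audio fingerprint picks out the particular statue.
    return STATUES_BY_SONG.get(audio_fingerprint)

print(identify_statue(32.7801, -96.8002, "song-C"))  # -> statue-3
```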


A variety of other types of content recognition technologies may also be utilized according to this disclosure to recognize, among other things, audio, pictures, and video. This disclosure is not limited to any particular technology. For example, in addition to the above recognition techniques, other techniques may involve an actual electronic reading of a tag that is placed on an object in the real world.


In addition to recognizing the content itself, supplemental content can also be customized based on characteristics of a user, a device, and/or other statistical information. Non-limiting examples include a profile that has been developed corresponding to a user (including but not limited to FACEBOOK SHADOW profiles), geographical location, IP address, any suitable device identifier (such as MAC address), items posted in a header that identify a client (such as GOOGLE CHROME browser), and time of the day. Based on such information, the supplemental content can be customized to correspond to a particular user.



FIG. 8 illustrates an example process 800 for ad-hoc binding of supplemental content to base content according to this disclosure. The process 800 begins by detecting parameters of the base content at step 810. This may involve detecting parameters that can be used for fingerprint detection. This may also involve detecting a tag associated with the base content. This may further involve detecting other parameters associated with the content, such as geospatial coordinates. In particular embodiments, in order to detect the parameters, an intercept process may occur where content intended to be sent to a display area is intercepted for evaluation prior to being displayed. In yet other embodiments, a capturing device that can capture audio, sound, or images may be utilized.


At step 820, the base content is determined based on the parameters. Any suitable technique may be used for this process, including the fingerprinting techniques described above or other approaches. One non-limiting example includes detecting audio in the base content, which may indicate that a particular video is being played.


At step 830, parameters associated with a device are determined. Example parameters include but are not limited to a device type, a browser type, a geolocation, bandwidth (which may include a consideration of a simultaneously streamed file for the base content), an IP address, and a time of day. In some embodiments, this step may be optional.


At step 840, parameters associated with a user of the device are determined. Example parameters include but are not limited to a profile that has been developed corresponding to a user (such as a FACEBOOK SHADOW profile). In particular embodiments, a user may have logged into a website, or a cookie corresponding to the user may be created. As other examples, a profile associated with an IP address or a MAC identifier may associate a user with a particular device. In some embodiments, this step may be optional.


At step 850, based on these parameters and the detected base content, the appropriate supplemental content is selected. In particular configurations, a decision engine such as is shown in FIG. 7 or the apparatus shown in FIG. 13 may be utilized. A virtually limitless number of scenarios may involve the use of this process. The sketch below strings the steps together, and several non-limiting examples follow.
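
The following sketch strings steps 810 through 850 together; every helper in it is a hypothetical stand-in for the detection, determination, and selection machinery described above:

```python
def detect_parameters(signal: str) -> dict:
    # Stand-in for fingerprint/tag/geocoordinate detection (step 810).
    return {"fingerprint": sum(signal.encode()) % 1000}

def determine_base_content(params: dict) -> str:
    # Stand-in for matching the fingerprint against known content (step 820).
    return f"content-{params['fingerprint']}"

def select_supplemental(content: str, device_params: dict, user_params: dict) -> dict:
    # Stand-in for a decision engine such as the one of FIG. 7 (step 850).
    return {"for": content, "geo": device_params.get("geo"),
            "personalized": bool(user_params)}

def ad_hoc_bind(raw_signal: str, device: dict, user_profile: dict) -> dict:
    params = detect_parameters(raw_signal)                        # step 810
    base_content = determine_base_content(params)                 # step 820
    device_params = {"type": device.get("type"),                  # step 830 (optional)
                     "geo": device.get("geo")}
    user_params = user_profile or {}                              # step 840 (optional)
    return select_supplemental(base_content, device_params, user_params)  # step 850

print(ad_hoc_bind("audio-bytes", {"type": "tv", "geo": "US-TX"}, {"size": "M"}))
```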


As a first example, a networked television may display a variety of content. Such content can be intercepted and analyzed just prior to display. In particular embodiments, this could delay presentation of the content by anywhere from a few microseconds to a few seconds or more. The analysis may occur at the location of the networked television, remote from the networked television, or a combination thereof. The analysis may involve determining the content and the customization parameters for the particular user for such content. When an item, such as a dress, is shown on the networked television, supplemental content for the dress can be displayed. In particular configurations, the supplemental content could include an option to purchase the dress. Further, a pre-populated dress size may be shown, as determined by a user profile, which may include information based on previous purchases.


As another example, the display of content can be replicated from one device to another device. For instance, a television can display content. A device such as a computer, tablet, or mobile device can capture and recognize the content (using local analysis, remote analysis, or a combination thereof). Upon recognition, the content can then be replicated on the computer, tablet, or mobile device with supplemental content that is determined to be appropriate for the replicated content. As the content to be replicated may be subject to certain restrictions, any suitable authorization scheme may be utilized. If authorization cannot be obtained, an error message may be returned; if it can, the content can be displayed on the computer, tablet, or mobile device.



FIG. 9 illustrates an example ad-hoc binding system 900 according to this disclosure. The ad-hoc binding system 900 may utilize a communication system, such as the communication system 100 shown in FIG. 1. The ad-hoc binding system 900 here includes communication architecture 902, a television 904, a computing device 906, a content server 908, and a supplemental content server 910.


In this example embodiment, the television 904 may be displaying an over-the-air broadcast or other showing of a particular movie that displays a dress. The dress catches the eye of a particular user. Accordingly, the user grabs his or her computing device 906 (such as a computer, tablet, or mobile phone) to capture the movie as base content 200. Upon detection of the movie (such as by using an audio fingerprint or other capture techniques), the detected base content 200 can be displayed (subject to authorization in certain configurations) as base content 200a along with appropriate supplemental content 210a. The base content 200a could be presented on the computing device 906 in a manner that is substantially synchronized with the presentation of the base content 200, although this need not be the case. The supplemental content 210a may include the dress along with options to purchase or information about which local stores have the dress (based on a determined geolocation of the device). The supplemental content 210a may be provided by the supplemental content server 910 while the base content 200 may be provided by the content server 908.


Moreover, notwithstanding a potential lack of a “rewind” feature for the over-the-air or other broadcast, a user can rewind the content on his or her computer, tablet, or mobile phone because the content is being streamed from the content server 908 as opposed to the communication architecture 902. This feature avoids the need for one to capture in real-time the actual moment at which something is displayed. Capturing the content just after the item is displayed allows one to rewind to the moment at which the item was displayed. Additionally, in particular embodiments, the user may be allowed to play back an uninterrupted version of the content (such as without commercial interruptions).


As a technology such as the above may appear disruptive (such as to a broadcaster), the playback from the content server 908 may be limited in time. Alternatively, the broadcaster may have a fee-sharing agreement for revenues that may be generated as a result of a display of information from the content server 908 or the supplemental content server 910.


As another example, a user may be located in a particular book store and see a particular book. The user can capture the book with a camera of a mobile device. With appropriate software either on the phone or at a remote location (such as when the image is uploaded to a remote server), the book is recognized using any suitable technique (such as via image or bar code recognition). Additionally, a geolocation of the mobile device may be recognized (such as by using GPS, cell-tower triangulation, or the like). Additionally, the user may be recognized as a frequent shopper of the particular book store. Having this input, the appropriate supplemental content can be generated and bound to the base content. The supplemental content, for example, may include an option to purchase from the same particular store but at a discounted price compared to the current list price.


As yet another non-limiting example, an indicator or directory may indicate that a live broadcast of a football game is being shown between the WASHINGTON REDSKINS and the DALLAS COWBOYS. A further determination may yield the likelihood that a person watching the game is a fan of quarterback Robert Griffin III (“RG3”). Accordingly, when this quarterback is shown in the live broadcast, supplemental content may show an RG3 jersey for sale.



FIG. 10 illustrates an example process 1000 for dynamically binding supplemental content to video content according to this disclosure. The process 1000 may, for example, be performed by at least one processing device 1312 as shown in FIG. 13 and described below. In the following description, the at least one processing device 1312 is referred to as a controller, although the process 1000 could be performed by any other suitable device.


At step 1010, the controller receives video content at a display device. The video content could represent any suitable type of video content. Also, the video content could be received from any suitable source, such as a video service that provides video content (YOUTUBE, TWITTER, VINE, or the like).


At step 1020, the controller identifies at least one value related to one or more products or services. The at least one value is associated with the video content. This could be done in any suitable manner, such as by locally or remotely identifying characteristics of the media (its fingerprint, title, size, and the like). The value can also be provided through a data file, such as but not limited to an XML file. The value can further be retrieved by data mining Internet information associated with the video content, such as by identifying a website that provides values for the video content.
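
As a sketch of the data-file approach, the following uses Python's standard XML parser on an invented schema; the element and attribute names are not defined by this disclosure:

```python
import xml.etree.ElementTree as ET
from typing import List, Tuple

# Invented data-file schema associating a video with product/service values.
DATA_FILE = """
<video id="movie-42">
  <value t="00:01:12">dress-7</value>
  <value t="00:04:55">watch-3</value>
</video>
"""

def values_from_xml(xml_text: str) -> List[Tuple[str, str]]:
    root = ET.fromstring(xml_text)
    # Each <value> element carries a timestamp and a product/service value.
    return [(v.get("t"), v.text.strip()) for v in root.findall("value")]

print(values_from_xml(DATA_FILE))
# -> [('00:01:12', 'dress-7'), ('00:04:55', 'watch-3')]
```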


At step 1030, the controller obtains supplemental content related to the one or more products or services based on the at least one value. The supplemental content provides additional information about the one or more products or services. For example, the additional information could be pricing, descriptions, reviews, or the like about the one or more products or services. The supplemental content can also include controls related to the one or more products or services, the additional information, and configurations related to the one or more products or services. In some embodiments, the controls could be interactions available with the one or more products or services. Additionally, user parameters may be sent with the values to help identify the supplemental content to send to the user.


At step 1040, the controller dynamically binds the supplemental content to the video content through a supplemental interactive display. The supplemental interactive display can override the controls of the display previously displaying the video. At step 1050, the controller positions the supplemental content in association with the video content, such as adjacent to or overlaying the video content. At this point, the process 1000 terminates.



FIG. 11 illustrates an example process 1100 for dynamically binding supplemental content to live video content according to this disclosure. The process 1100 may, for example, be performed by the at least one processing device 1312 as shown in FIG. 13 and described below. Again, in the following description, the at least one processing device 1312 is referred to as a controller, although the process 1100 could be performed by any other suitable device.


At step 1110, the controller receives live video content through a device. Live video content represents real-time video content, such as video content that is being filmed at or near real-time by an endpoint. At step 1120, the controller searches for a description of the live video content. In some embodiments, the description may be in a programming directory, within a broadcast feed, searched for on the Internet, or the like.


At step 1130, the controller obtains supplemental content based on the description. The supplemental content provides additional information about one or more products or services related to the live video content. The supplemental content can be obtained from a supplemental content server. At step 1140, the controller dynamically binds the supplemental content to the live video content through a supplemental interactive display. The supplemental interactive display can override the controls of the display previously displaying the video. At step 1150, the controller positions the supplemental content in association with the video content, such as adjacent to or overlaying the video content. At this point, the process 1100 terminates.
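
A minimal sketch of the directory lookup in step 1120 follows, with an invented programming directory; a real system might instead parse metadata in the broadcast feed or search the Internet:

```python
# Hypothetical programming directory keyed by (channel, start time).
DIRECTORY = {
    ("channel-5", "20:00"): "NFL football: Washington at Dallas, live",
    ("channel-5", "23:00"): "Local news, live",
}

def describe_live_stream(channel: str, slot: str):
    # Step 1120: consult the programming directory; a real system might fall
    # back to metadata in the broadcast feed or an Internet search.
    return DIRECTORY.get((channel, slot))

description = describe_live_stream("channel-5", "20:00")
if description and "football" in description:
    # Step 1130: the description drives the choice of supplemental content.
    supplemental = [{"product": "team jersey", "reason": description}]
    print(supplemental)
```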



FIG. 12 illustrates an example process 1200 for dynamically binding supplemental content to a content transactional item according to this disclosure. The process 1200 may, for example, be performed by the at least one processing device 1312 as shown in FIG. 13 and described below. Once again, in the following description, the at least one processing device 1312 is referred to as a controller, although the process 1200 could be performed by any other suitable device.


At step 1210, the controller identifies a content transactional item through a device. A “content transactional item” represents a product or service available for purchase, lease, rental, or other transaction. In an example embodiment, a content transactional item could be a physical product or service, such as a book at a store. In another example, the content transactional item could be a product or service in a media stream or base content. Identifying the content transactional item could be done in any suitable manner, such as by capturing an image of an item and then sending that image or information related to that image to a server with a repository of images.


At step 1220, the controller identifies a description related to the content transactional item. Location information, such as a name of a store, geographical information, global positioning information, or the like, may also be obtained. The transactional item could further be identified by an RFID tag, a signal with information about the product, or a barcode. In an embodiment, identifying the description could be performed by obtaining one or more parameters of the image, such as an image histogram, a bar code, and/or a location of the device when the image was captured. For example, the bar code could be tied to a database of product descriptions, the location could provide information related to a location in a store (such as a jeans department), and the image histogram could provide information for fingerprinting the image and identifying objects in the image.
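
A sketch of how such captured parameters might be merged into a description appears below; all of the lookup tables are invented:

```python
# Hypothetical lookup tables of the kind a server-side repository might hold.
BARCODES = {"0123456789012": "denim jeans, straight fit"}
DEPARTMENTS = {("store-17", "zone-3"): "jeans department"}

def describe_item(barcode=None, store_zone=None, histogram=None) -> dict:
    """Sketch of step 1220: merge whatever parameters were captured with
    the image into a description of the content transactional item."""
    description = {}
    if barcode in BARCODES:
        description["product"] = BARCODES[barcode]          # barcode database
    if store_zone in DEPARTMENTS:
        description["location"] = DEPARTMENTS[store_zone]   # in-store location
    if histogram is not None:
        # A real system would fingerprint the image; here we only record
        # that image evidence was available.
        description["image_evidence"] = f"{len(histogram)}-bin histogram"
    return description

print(describe_item("0123456789012", ("store-17", "zone-3"), [0.1] * 16))
```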


At step 1230, the controller obtains supplemental content related to the content transactional item based on the description. The supplemental content provides additional information about the content transactional item. The supplemental content may include information related to the item, current discounts, user discounts, or the like. At step 1240, the controller dynamically binds the supplemental content to the content transactional item through a supplemental interactive display. The supplemental content can be displayed as one or more services or products related to the content transactional item. At step 1250, the controller positions the supplemental content in association with the video content, such as adjacent to or overlaying the video content. At this point, the process 1200 terminates.



FIG. 13 illustrates an example computing device 1300 for dynamically binding supplemental content according to this disclosure. The computing device 1300 here could be used to implement any of the techniques or functions described above, including any combination of them. The computing device 1300 may generally be adapted to execute any suitable operating system, including WINDOWS, MAC OS, UNIX, LINUX, OS/2, IOS, ANDROID, or other operating systems.


As shown in FIG. 13, the computing device 1300 includes at least one processing device 1312, a random access memory (RAM) 1314, a read only memory (ROM) 1316, a mouse 1318, a keyboard 1320, and input/output devices such as a disc drive 1322, a printer 1324, a display 1326, and a communication link 1328. In other embodiments, the computing device 1300 may include more, less, or other components. Computing devices come in a wide variety of configurations, and FIG. 13 does not limit the scope of this disclosure to any particular computing device or type of computing device.


Program code may be stored in the RAM 1314, the ROM 1316, or the disc drive 1322 and may be executed by the at least one processing device 1312 in order to carry out the functions described above. The at least one processing device 1312 can be any type(s) of processing device(s), such as one or more processors, microprocessors, controllers, microcontrollers, multi-core processors, and the like. The communication link 1328 may be connected to a computer network or a variety of other communicative platforms, including any of the various types of communication networks 140 described above. The disc drive 1322 may include a variety of types of storage media such as, for example, floppy drives, hard drives, CD drives, DVD drives, magnetic tape drives, or other suitable storage media. One or multiple disc drives 1322 may be used in the computing device 1300.


Note that while FIG. 13 provides one example embodiment of a computer that may be utilized with other embodiments of this disclosure, such other embodiments may utilize any suitable general-purpose or specific-purpose computing devices. Multiple computing devices having any suitable arrangement could also be used. Commonly, multiple computing devices are networked through the Internet and/or in a client-server network. However, this disclosure may use any suitable combination and arrangement of computing devices, including those in separate computer networks linked together by a private or public network.


The computing device 1300 could represent a fixed or mobile device, and various components can be added or omitted based on the particular implementation of the device. For example, mobile devices could include features such as cameras, camcorders, GPS features, and antennas for wireless communications. Particular examples of such mobile devices include IPHONE, IPAD, and ANDROID-based devices.


Although the figures above have described various systems, devices, and methods related to the dynamic binding of base content to supplemental content, various changes may be made to the figures. For example, the designs of various devices and systems could vary as needed or desired, such as when components of a device or system are combined, further subdivided, rearranged, or omitted and additional components are added. As another example, while various methods are shown as a series of steps, various steps in each method could overlap, occur in parallel, occur in a different order, or occur any number of times. In addition, examples of graphical presentations are for illustration only, and content can be presented in any other suitable manner. Well-known processes have not been described in detail and have been omitted for brevity. Although specific steps, structures, and materials may have been described, this disclosure is not limited to those specifics: others may be substituted, as is well understood by those skilled in the art, and various steps need not be performed in the sequences shown.


In some embodiments, various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.


It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in suitable computer code (including source code, object code, or executable code). The terms “transmit,” “receive,” and “communicate,” as well as derivatives thereof, encompass both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.


While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Other changes, substitutions, and alterations are also possible without departing from the invention as defined by the following claims.

Claims
  • 1. A method of dynamically binding supplemental content to live video content, the method comprising: capturing, at a first device, live video content from a first display screen; determining one or more user parameters for a user associated with the first device; transmitting one or more parameters of the captured live video content along with the one or more user parameters to a remote server; receiving, from the remote server, live video content data representing the live video content, at the first device; identifying a description of the live video content based on the one or more parameters of the captured live video content; obtaining supplemental content based on the description, the one or more parameters of the captured live video content, and the one or more user parameters, the supplemental content providing additional information about one or more products or services related to the live video content, the supplemental content having associated supplemental content data; dynamically binding supplemental content data and a content transaction interface to the live video content data to form a shareable container allowing sharing of the live video content and the supplemental content together to a second device, the shareable container also allowing a user to adjust playback of the live video content and completion of a transaction within the shareable container via the content transaction interface; and positioning the supplemental content in association with the live video content using at least one of: the first display screen and a second display screen.
  • 2. The method of claim 1, wherein obtaining the supplemental content comprises: sending the description to the remote server; and receiving the supplemental content associated with the description from the remote server.
  • 3. The method of claim 1, wherein obtaining the supplemental content comprises: sending the description to the remote server; and receiving the supplemental content associated with the description and one or more statistical parameters of one or more other users.
  • 4. The method of claim 1, wherein identifying the description comprises: retrieving the description from a directory associated with the live video content.
  • 5. The method of claim 1, wherein identifying the description comprises: retrieving the description by data mining information associated with the live video content.
  • 6. The method of claim 1, wherein the live video content comprises video content received at the first device from the second device or a third device that captures the live video content.
  • 7. The method of claim 1, further comprising: synchronizing the live video content with the first display screen.
  • 8. The method of claim 1, wherein the shareable container allows the user to rewind the live video content.
  • 9. The method of claim 1, further comprising: recognizing the live video content from the captured live video content; and requesting the live video content data from the remote server based on the recognition.
  • 10. The method of claim 1, wherein: the first display screen comprises a television; the supplemental content is positioned in association with the live video content using the second display screen; and the second display screen comprises a display of a smartphone or networked glasses.
  • 11. An apparatus comprising: at least one memory configured to receive and store video content; and at least one processing device communicatively coupled to the at least one memory and configured to: capture, at the apparatus, live video content from a first display screen; transmit one or more parameters of the captured live video content and one or more user parameters of a user associated with the apparatus to a remote server; receive, from the remote server, live video content data representing live video content associated with the captured live video content at the apparatus; identify a description of the live video content based on the one or more parameters of the captured live video content; obtain supplemental content data based on the description, the one or more user parameters, and the one or more parameters of the captured live video content, the supplemental content data associated with supplemental content, the supplemental content providing additional information about one or more products or services related to the live video content; dynamically bind the supplemental content data and a user interface to the live video content data to form a shareable container allowing sharing of the live video content and the supplemental content together to a second apparatus, the user interface of the shareable container also allowing user adjustment of playback of the live video content and completion of a transaction within the shareable container; and position the supplemental content in association with the live video content using at least one of: the first display screen and a second display screen.
  • 12. The apparatus of claim 11, wherein the at least one processing device is further configured to: send the description to the remote server; and receive at least one of the supplemental content data and the supplemental content associated with the description from the remote server.
  • 13. The apparatus of claim 11, wherein the at least one processing device is further configured to: send the description to the remote server; and receive at least one of the supplemental content data and the supplemental content associated with the description and one or more statistical parameters of one or more other users.
  • 14. The apparatus of claim 11, wherein the at least one processing device is configured to retrieve the description from a directory associated with the live video content.
  • 15. The apparatus of claim 11, wherein the at least one processing device is configured to retrieve the description by data mining information associated with the live video content.
  • 16. The apparatus of claim 11, wherein the live video content comprises live video content captured by the second apparatus or a third apparatus.
  • 17. A non-transitory computer-readable medium comprising logic stored on the computer-readable medium, the logic configured when executed to cause at least one processing device of a first device to: capture live video content at the first device from a first display screen; transmit one or more parameters of the captured live video content and one or more user parameters associated with the first device to a remote server; receive live video content data, the live video content data representing live video content associated with the captured live video content, at the first device from the remote server; identify a description of the live video content based on the one or more parameters of the captured live video content; obtain supplemental content data based on the description, the one or more user parameters, and the one or more parameters of the captured live video content, the supplemental content data associated with supplemental content providing additional information about one or more products or services related to the live video content; dynamically bind the supplemental content data and a content transaction interface to the live video content data to form a shareable container allowing sharing of the live video content and the supplemental content together to a second device, the shareable container configured to provide user-responsive controls to adjust playback of the live video content and completion of a transaction within the shareable container; and position the supplemental content in association with the live video content using at least one of: the first display screen and a second display screen.
  • 18. The non-transitory computer readable medium of claim 17, wherein the logic is configured when executed to cause the at least one processing device to: send the description to the remote server; and receive the supplemental content associated with the description from the remote server.
  • 19. The non-transitory computer readable medium of claim 17, wherein the logic is configured when executed to cause the at least one processing device to: send the description to the remote server; and receive the supplemental content associated with the description and one or more statistical parameters of one or more other users.
  • 20. The non-transitory computer readable medium of claim 17, wherein the logic is configured when executed to cause the at least one processing device to retrieve the description from a directory associated with the live video content.
  • 21. The non-transitory computer readable medium of claim 17, wherein the logic is configured when executed to cause the at least one processing device to retrieve the description by data mining information associated with the live video content.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY CLAIM

This disclosure claims priority under 35 U.S.C. § 119(e) to the following patent applications:
  • U.S. Provisional Patent Application No. 61/876,668 filed on Sep. 11, 2013 and entitled “DYNAMIC BINDING OF INTELLIGENT INTERNET OBJECTS;”
  • U.S. Provisional Patent Application No. 61/876,647 filed on Sep. 11, 2013 and entitled “AD-HOC DYNAMIC BINDING OF INTELLIGENT INTERNET OBJECTS;” and
  • U.S. Provisional Patent Application No. 61/883,809 filed on Sep. 27, 2013 and entitled “AD-HOC DYNAMIC BINDING.”
These provisional patent applications are hereby incorporated by reference in their entirety.
