The present technology pertains to outputting video content on a primary display, and more specifically pertains to navigation on a second-screen device with the effects of the navigation being shown on the primary display.
Current technology allows a user to watch a primary display, such as a television, while using a second-screen device, such as a tablet or a smart phone, to interact with the primary display. As such interactions become more popular, television viewers are using second-screen devices to find out what is on television, to run searches and queries, and to share media content related to the content on the television. However, the primary display and the second-screen device typically do not interact and, more specifically, do not share a visual connection. For example, a user sits in front of the television with a tablet on his or her lap and uses an application to find information related to a channel or program. Even if the application is related to what is on the television, the user has to make the connection between the two devices, for example, by watching and/or listening to the television while interacting with the second-screen device.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings.
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
Overview: Disclosed are systems, methods, and non-transitory computer-readable storage media for providing coordinated graphical user interfaces on a plurality of devices. A set top box can output first media content to a primary display, such as a television. A first application on the primary display can display a video stream associated with the first media content. The set top box can output second media content to a second-screen device, such as a tablet. A second application on the second-screen device can display a video stream associated with the second media content. The video stream of the second media content can be displayed in a video display area on the second-screen device. The video stream of the first media content and the video stream of the second media content can be associated with the same content and can be substantially synchronized. The second application can receive user gestures on a touchscreen of the second-screen device to control the video stream displayed on the primary display. For example, the gestures can include next channel up, next channel down, up peek, down peek, pause, play, fast forward and rewind. More specifically, gestures on the second-screen device can alter the displayed video stream on the primary display and/or can alter the displayed video stream on the second-screen device. As a result, the second-screen device acts as a bridge to affect the displayed video stream on the primary display.
The disclosed technology addresses the need in the art for a user to interact with a second-screen device, coordinated with the content displayed on a primary display, to control a video stream on the primary display and/or a video stream on the second-screen device. More specifically, a user is able to enter commands on a touchscreen of the second-screen device to control the video stream being displayed on the primary display. In addition, the second-screen device displays at least a partial video stream that is associated with the same content and can be substantially synchronized with the video stream that is displayed on the primary display. As a result, there is a visual connection between the primary display and the second-screen device, with the gestures entered on the second-screen device being reflected on the primary display and/or the second-screen device. By having the results of the entered command reflected on the second-screen device, the user can look at the second-screen device and know that the entered command was executed, without having to go back and forth between the primary display and the second-screen device.
Disclosed are systems, methods, and non-transitory computer-readable storage media for providing coordinated graphical user interfaces on a plurality of devices. An application can display a partial video stream in a video display area on a touchscreen of the second-screen device and display a contextual panel in an active display area on the touchscreen. The contextual panel can include content associated with the partial video stream. The displayed partial video stream can be associated with a video stream being displayed by a primary display, with the two displayed video streams being associated with the same content and substantially synchronized. The contextual panel can include messages associated with a social media feed, with the displayed messages corresponding to a selected time frame associated with the video stream being displayed on the touchscreen. The contextual panel can also include at least one time frame, with each time frame including information associated with a product for sale and being associated with the video stream displayed on the touchscreen.
The disclosed technology addresses the need in the art to allow messages from a social media application to tell the story of a video stream. More specifically, the rate of messages per time period can tell the story of the video stream. For example, when watching a football game, spikes in the number of tweets per minute are typically caused by big plays in the game. In response to each big play, people tweet about the play, causing the number of tweets per minute to rise. For each rise that is above a predetermined threshold, a new time frame can be created. For each time frame, a corresponding mini trends panel can be displayed showing the most relevant words or phrases in the messages. Thus, a user can watch clips of the football game based on the rate of tweets per minute. As a result, the user can watch a clip of the football game and can see the corresponding tweets about the play or plays associated with that clip. As for the shopping application, viewers are able to shop for products that are offered by television programs and/or programs available via the internet. Not only can the user view products that aired, or that the user may have missed when they aired, but the user can still see the current price even though the product is no longer being displayed on the user's device.
As used herein the term “configured” shall be considered to interchangeably be used to refer to configured and configurable, unless the term “configurable” is explicitly used to distinguish from “configured.” As used herein the term “transceiver” can mean a single device comprising a transmitter and receiver or can mean a separate transmitter and a separate receiver. The proper understanding of the term will be apparent to persons of ordinary skill in the art in the context in which the term is used.
Referring to
The primary display 104 can be a television, a smart television, or any other device that is capable of receiving and displaying media content. The primary display 104 can display the video stream and/or non-video content on a screen associated with the primary display 104. The primary display 104 can play the audio stream on one or more speakers associated with the primary display 104. The primary display 104 can include a first application 108 configured to receive and output the first media content. In some embodiments, the first application 108 can decode the received first media content. The first application 108 can be configured to output one or more of the video stream, audio stream and the non-video content. For example, a processor (not shown) in the primary display 104 can run the first application 108 and generate output for the primary display 104.
The second-screen device 106 can be a touchscreen device, such as a tablet, a smart phone, a laptop computer or any other device capable of receiving and displaying media content. The second-screen device 106 can display one or more video streams and/or non-video content on a touchscreen of the second-screen device 106. The second-screen device 106 can play the audio stream on one or more speakers associated with the second-screen device 106. In some embodiments, the second-screen device 106 can play a different audio stream compared to the audio stream played on the primary display 104. For example, the audio stream played on the second-screen device can be in a different language or can be an audio description, which can also be known as video description or visual description. The second-screen device 106 can receive inputs, such as navigational input, via the touchscreen. The second-screen device 106 can include a second application 110 configured to receive and output the second media content. In some embodiments, the second application 110 can decode the received second media content. The second application 110 can be configured to output one or more of the video stream, audio stream and the non-video content. For example, a processor (not shown) in the second-screen device 106 can run the second application 110 and generate output for the second-screen device 106.
The primary display 104 and second-screen device 106 can be communicatively coupled to the set top box 102. For example, the primary display 104 can be communicatively coupled to the set top box 102 via a cable and/or wirelessly. For example, the second-screen device 106 can be communicatively coupled to the set top box 102 via a cable and/or wirelessly. The cable can be an HDMI cable or any other suitable coupler for providing media content between the two devices. The wireless connection can be Bluetooth, Wi-Fi, or any other suitable wireless communication means for providing media content between the two devices.
As shown, the set top box 102 can include an input media feed 112, a tuner 114, a transceiver 116, memory 118 and a processor 120. Although
In some embodiments, in addition to receiving and presenting the second media content, the second screen device 106 can be configured to receive content from another source simultaneously. For example, the second screen device 106 might display auxiliary content coming from an auxiliary content provider 130. The auxiliary content can be transmitted to the second screen device 106 directly, or through the set top box 102. The auxiliary content can also be transmitted to the primary display 104.
In some embodiments, the auxiliary content provider 130 can be a provider of social media, such as FACEBOOK or TWITTER, or can be any other source of content that an application on the second screen device 106 attempts to access and display along with the second media content. In some embodiments, explained in more detail below, the auxiliary content from the auxiliary content provider 130 can be a curated collection of social media posts, i.e., TWEETS, or a feed of products offered for sale by a shopping network.
Referring to
The touchscreen 204 can display a video stream of the second media content and/or non-video content of the second media content. More specifically, the second application can cause the video stream of the second media content and/or the non-video content of the second media content to be displayed on the touchscreen 204. For example, the second-screen device 106 can be a tablet displaying part of the video stream of the second media content in a video display area 206 on the touchscreen 204 and/or displaying the non-video content of the second media content in an active display area 208 on the touchscreen 204. As shown, the video display area 206 and the active display area 208 can each be limited in size, for example, less than full screen. The video display area 206 can display the video stream of the second media content and/or the non-video content of the second media content. The active display area 208 can display non-video content associated with the second media content or with other media content. For example, the non-video content can be information associated with the video stream, such as a listing of cast members of a television show being displayed on the primary display 104 and the second-screen device 106. In some embodiments, the other media content can be media content not associated with the second media content. For example, the other media content can be information associated with a television show not being displayed on the primary display 104 and the second-screen device 106. As shown, the video display area 206 can be displayed near the top of the touchscreen 204 and the active display area 208 can be displayed below the video display area 206. In some embodiments, the video display area 206 and active display area 208 can be located in other locations on the second-screen device 106, such as switched, as shown in
The set top box 102 can transmit the first media content to the primary display 104 and can transmit the second media content to the second-screen device 106. More specifically, one or more transceivers 116 can transmit the first media content to the primary display 104 and can transmit the second media content to the second-screen device 106. In some embodiments, one or more transceivers 116 can be dedicated to only transmit first media content to one or more primary displays 104 and one or more transceivers 116 can be dedicated to only transmit second media content to one or more second-screen devices 106. In some embodiments, one or more transceivers 116 can transmit first media content and second media content to one or more primary displays 104 and to one or more second-screen devices 106.
The video stream being displayed on the screen 202 and the video stream being displayed on the touchscreen 204 can be associated with the same content and can be substantially synchronized. Synchronization of the two video streams can be accomplished using various known techniques or methodologies. In some embodiments, the processor 120 of the set top box 102 can synchronize the video stream for the primary display 104 and the video stream for the second-screen device 106. In such embodiments, the set top box 102 can act as a master and the primary display 104 and the second-screen device 106 can be slaves in a master-slave relationship. For example, the processor 120 can send, via one or more transceivers 116, the first media content and the second media content at the same time, so that the primary display 104 displays the video stream of the first media content and the second-screen device 106 displays the video stream of the second media content at about the same time, making the two video streams substantially synchronized. In another example, the processor 120 can send, via one or more transceivers 116, time coded segments of the video streams in a coordinated manner. For example, the processor 120 can send, via one or more transceivers 116, a stream of video streams that are time coded in some way, such as continuous streams (e.g., a broadcast) or fragmented streams (e.g., internet streamed content). In such embodiments, both the primary display 104 and the second-screen device 106 can have their playback position, such as the timecode of a given frame, coordinated such that both are displaying the same video frames at substantially the same time. In such embodiments, the set top box 102 can control the synchronization. In addition, the primary display 104 and the second-screen device 106 are able to maintain the temporal synchronization through normal playback and trick modes (such as skipping or playback at speeds other than normal playback).
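By way of illustration only, the timecode-coordinated, master-slave approach can be sketched in a few lines of Python. This is a hedged outline under stated assumptions, not an actual set top box implementation: the class, the method names, and the drift tolerance are all hypothetical.

```python
class PlaybackEndpoint:
    """A display whose playback position the master can query and adjust.
    All names in this sketch are hypothetical."""

    def __init__(self, name, position=0.0):
        self.name = name
        self.position = position  # playback position, in seconds

    def seek(self, position):
        self.position = position


def synchronize(master_position, endpoints, tolerance=0.1):
    """Master-slave synchronization: any endpoint whose playback position
    drifts from the master's timecode by more than `tolerance` seconds is
    snapped back to the master's position."""
    for endpoint in endpoints:
        if abs(endpoint.position - master_position) > tolerance:
            endpoint.seek(master_position)


# The set top box acts as master; the primary display and the
# second-screen device are slaves displaying the same frames.
primary = PlaybackEndpoint("primary display", position=120.00)
second_screen = PlaybackEndpoint("second-screen device", position=120.35)
synchronize(master_position=120.05, endpoints=[primary, second_screen])
```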
In some embodiments, the primary display 104 and the second-screen device 106 can access the content directly from the internet, with the set top box 102 having little to no involvement. In embodiments having set top box 102 involvement, the set top box 102 can act as a master and the primary display 104 and the second-screen device 106 can be slaves in a master-slave relationship. In embodiments having no set top box 102 involvement, the primary display 104 can act as a master and the second-screen device 106 can act as a slave. In such arrangements, the media content provided by the primary display 104 to the second-screen device 106 can use simple and low latency encoding over a connection, such as WiFi, with the video content temporally or spatially downsampled to minimize the required bandwidth. As a result, the displayed video content on the second-screen device 106 can be substantially synchronized with the displayed video content on the primary display 104. In other embodiments having no set top box 102 involvement, the second-screen device 106 can act as a master and the primary display 104 can act as a slave. In yet other embodiments, the functionalities described above with respect to the set top box 102 can be performed by a different entity, such as cloud computing.
The video display area 206 can serve as a bridge between the primary display 104 and the second-screen device 106. The video display area 206 can be used to enter commands to control the video stream being displayed on the primary display 104. In response to the touchscreen 204 sensing a gesture, the second-screen device 106 can send a command to the set top box 102, which can then respond to the received command. For some commands, the set top box 102 can respond by sending a corresponding command to the primary display 104, causing the primary display 104 to execute the command and thereby affecting the media content being displayed on the screen 202. For other commands, the set top box 102 can respond by changing and/or altering the media content being sent to the primary display 104 and/or the second-screen device 106. The active display area 208 can be used to enter commands to control the video stream being displayed on the second-screen device 106, as explained below.
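As a rough sketch of this bridging behavior, the routing decision can be modeled as below. The command names, the stand-in SetTopBox class, and the split between forwarded and altered commands are assumptions for illustration; which commands take which path would depend on the embodiment.

```python
class SetTopBox:
    """Minimal stand-in for the set top box's routing logic; the class and
    method names here are hypothetical."""

    def send_to_primary(self, command):
        # A corresponding command goes to the primary display, which
        # executes it and thereby affects what the screen 202 shows.
        print(f"forwarding {command!r} to the primary display")

    def alter_outgoing_media(self, command):
        # The set top box changes the media content it sends to the
        # primary display and/or the second-screen device.
        print(f"altering outgoing media content for {command!r}")


FORWARDED = {"toggle_pause"}  # commands the primary display executes itself
ALTERED = {"channel_up", "channel_down", "peek_up", "peek_down",
           "fast_forward", "rewind"}  # commands that change the sent media


def handle_command(stb, command):
    """Bridge a command sensed on the second-screen device to its effect
    on the primary display."""
    if command in FORWARDED:
        stb.send_to_primary(command)
    elif command in ALTERED:
        stb.alter_outgoing_media(command)


handle_command(SetTopBox(), "peek_up")
```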
Referring to
At block 302, first media content is outputted to a primary display and second media content is outputted to a second-screen device. For example, the set top box 102 outputs, via one or more transceivers 116, first media content to the primary display 104 and second media content to the second-screen device 106. A first application 108 on the primary device 104 causes a video stream associated with the received first media content to be displayed on the screen 202 of the primary device 104. A second application 110 on the second-screen device 106 causes a video stream associated with the received second media content to be displayed on the touchscreen 204 of the second-screen device 106. After outputting the first media content and the second media content, the method 300 can proceed to block 304.
Referring to
Returning to
After a gesture for a command is sensed, the method 300 can proceed to block 306.
At block 306, data associated with the sensed command is sent. For example, the second-screen device 106 can send, via a transceiver, data associated with a partial or full command to the set top box 102. In some embodiments, the data associated with the sensed command can be the touch data. The touch data can be the data associated with the gesture. For example, the data associated with the command gesture can include one or more of the following: coordinates of the original touch, coordinates of the last touch, the time from the original touch to the last touch, and whether the touch is maintained or released. The touch data can be sent in one or more messages. The data associated with the sensed command can include time data, such as how long the gesture was made. After the data associated with the sensed command is sent, the method 300 can proceed to block 308.
At block 308, the command is executed in response to the received data associated with the sensed command. For example, the processor 120 of the set top box 102 can receive, via a transceiver 116, the data associated with the sensed command; in response, the processor 120 can determine the sensed command based on the received data and can execute it. After executing the command in response to the received data, the method 300 can proceed to block 310.
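One possible way the processor 120 could determine a command from the received touch data is sketched below. The field names, the swipe threshold, and the exact gesture-to-command mapping are assumptions for illustration; the individual gestures are described in the paragraphs that follow.

```python
from dataclasses import dataclass


@dataclass
class TouchData:
    """Touch data sent by the second-screen device (hypothetical fields,
    mirroring the items listed above)."""
    x0: float          # coordinates of the original touch
    y0: float
    x1: float          # coordinates of the last touch
    y1: float
    duration: float    # time from the original touch to the last touch
    released: bool     # whether the touch was released or is maintained


def classify(touch, swipe_min=20.0):
    """Map touch data to a command. A short touch with little movement is
    a tap (pause/resume toggle); vertical swipes change channels when
    released and peek while maintained; horizontal holds scrub."""
    dx = touch.x1 - touch.x0
    dy = touch.y1 - touch.y0
    if abs(dx) < swipe_min and abs(dy) < swipe_min:
        return "toggle_pause"                   # tap: pause or resume
    if abs(dy) >= abs(dx):                      # vertical gesture
        direction = "up" if dy < 0 else "down"  # touchscreen y grows downward
        return f"channel_{direction}" if touch.released else f"peek_{direction}"
    if not touch.released:                      # horizontal hold
        return "fast_forward" if dx > 0 else "rewind"
    return None                                 # horizontal release: no command


print(classify(TouchData(x0=100, y0=400, x1=105, y1=180,
                         duration=0.3, released=False)))  # peek_up
```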
At block 310, the results of the executed command can be reflected on the primary device and/or on the second-screen device. For example, the processor 120 can change the first media content being sent to the primary display 104 and/or the second media content being sent to the second-screen device 106. Below, each of the commands is described in further detail and one or more of the blocks of method 300 are described in more detail.
Regarding a peek up command, the sensed gesture at block 304 can comprise an upward gesture starting in the video display area 206, continuing vertically upward in the video display area 206 and maintaining the touch in the video display area 206 as shown in
For a peek up command, the video stream displayed on top can be the tuned channel and the video stream displayed on bottom can be the newly tuned channel; for a peek down command, the video stream displayed on top can be the newly tuned channel and the video stream displayed on bottom can be the tuned channel. In other embodiments, the two video streams can be displayed in other manners, for example, vice versa or side by side. The percentage of each video stream being displayed can be in accordance with the distance from the original touch to the last touch. In response to the user moving the user's finger in the opposite direction, the percentage of each video stream being displayed can be adjusted accordingly. For example, the two video streams can be scrolled up and down with the percentages of each changing accordingly. In some embodiments, a peek distance threshold can be used to set the percentage of each displayed video stream at fifty-fifty (50%/50%). For example, the processor 120 can compare the distance traveled from the first touch to the last sensed touch to a peek distance threshold and, in the event the distance traveled is not less than the peek distance threshold, the percentage of each displayed video stream can be set to fifty-fifty (50%/50%). For distances below the threshold, the percentage can be in accordance with the traveled distance. For example, if the distance traveled is ten percent (10%) of the peek distance threshold, the percentages can be ten percent and ninety percent (10%/90%); if the distance traveled is twenty percent (20%) of the peek distance threshold, the percentages can be twenty percent and eighty percent (20%/80%), etc. The percentages can change in response to the user moving the user's finger up or down, with the percentages corresponding to the distance traveled from the original touch.
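The percentage logic above reduces to a small calculation, sketched here under the assumption that the traveled distance and the peek distance threshold are measured in the same units (e.g., pixels):

```python
def peek_percentages(distance_traveled, peek_distance_threshold):
    """Return (newly_tuned_pct, tuned_pct) for a peek gesture: the split
    tracks the traveled distance below the threshold and is capped at
    fifty-fifty at or beyond it."""
    if distance_traveled >= peek_distance_threshold:
        return 50.0, 50.0
    fraction = distance_traveled / peek_distance_threshold
    return 100.0 * fraction, 100.0 * (1.0 - fraction)


print(peek_percentages(25, 100))   # (25.0, 75.0): a quarter of the threshold
print(peek_percentages(120, 100))  # (50.0, 50.0): capped at fifty-fifty
```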
In response to the user releasing the user's touch on the touchscreen 204 prior to reaching the change channel distance threshold (discussed below), the peek up command or peek down command can end. As a result of the peek up or peek down command ending, the results can be reflected on the primary display 104 and/or on the second-screen device 106 at block 310, with the percentage of the video stream of the media content associated with the tuned channel increasing and the video stream of the media content associated with the newly tuned channel decreasing until the video stream associated with the tuned channel reaches one hundred percent (100%) of the screen 202 of the primary display 104 and of the video display area 206 of the second-screen device 106. In some embodiments, the percentages of the displayed video streams can change more quickly in response to an end peek up or end peek down command compared to how the percentages of the displayed video streams change in response to a peek up command or peek down command. By changing the percentages of the displayed video streams quickly, the video stream of the media content associated with the tuned channel can appear to slam or push the video stream of the media content associated with the newly tuned channel.
Referring to
Referring to
Regarding a channel up command, the sensed gesture at block 304 can comprise an upward gesture starting in the video display area 206, continuing vertically upward in the video display area 206 and being released in the video display area 206 as shown in
For a channel up command, the video stream displayed on top can be the tuned channel and the video stream displayed on bottom can be the newly tuned channel, with the video stream of the newly tuned channel moving up until it replaces the tuned or previously tuned channel. For a channel down command, the video stream displayed on the bottom can be the tuned channel and the video stream displayed on top can be the newly tuned channel, with the video stream of the newly tuned channel moving down until it replaces the tuned or previously tuned channel. In other embodiments, the two video streams can be displayed in other manners, for example, vice versa or side by side.
In some embodiments, a change channel distance threshold can be used. The change channel distance threshold can be different from the peek distance threshold. For example, the processor 120 can compare the distance traveled from the first touch to the last sensed touch with the change channel distance threshold. In the event the distance traveled is less than the change channel distance threshold, the percentage of each displayed video stream can be in accordance with the traveled distance. This can be the same as for the peek commands. For example, if the distance traveled is ten percent (10%) of the change channel distance threshold, the percentages can be ten percent and ninety percent (10%/90%); if the distance traveled is twenty percent (20%) of the change channel distance threshold, the percentages can be twenty percent and eighty percent (20%/80%), etc. However, in the event the traveled distance is not less than the change channel distance threshold and the user releases the user's touch on the touchscreen 204, the video stream from the newly tuned channel can continue to increase relative to the tuned or previously tuned channel until the video stream of the newly tuned channel occupies one hundred percent (100%) of the available display area of the screen 202 of the primary display 104 and one hundred percent (100%) of the available display area of the video display area 206 of the second-screen device 106.
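The release behavior can be summarized as a single decision, sketched here with hypothetical names: commit the channel change when the traveled distance has reached the change channel distance threshold, otherwise end the peek and snap back.

```python
def on_release(distance_traveled, change_channel_distance_threshold):
    """Outcome when the user lifts the touch: at or beyond the threshold
    the newly tuned channel grows to fill the display; short of it, the
    peek ends and the previously tuned channel snaps back to 100%."""
    if distance_traveled >= change_channel_distance_threshold:
        return "commit_channel_change"
    return "end_peek_snap_back"


print(on_release(180, change_channel_distance_threshold=150))  # commit
```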
Referring to
Regarding a pause command, the sensed gesture at block 304 can comprise a tap in the video display area 206 of the second-screen device 106 as shown in
Regarding a resume command, the sensed gesture at block 304 can comprise a tap in the video display area 206 of the second-screen device 106 as shown in
Regarding adjustment commands, such as a fast forward command or a rewind command, the processor 120 can execute the command by adjusting the video stream of the first media content and the video stream of the second media content being displayed by a time factor for as long as the user's touch on the screen is maintained. A fast forward command can be a touch in the video display area 206, continuing laterally to the right for a predetermined distance of the video display area 206 and being maintained in the video display area 206 as shown in
In the event the received command is a fast forward command and the memory 118 does not contain stored media content, the transmitted media content is not incremented by the time factor. In the event the time factor is not beyond the stored media content in memory 118, the transmitted media content can be incremented by the time factor. In the event the time factor is beyond the stored media content in memory 118, the transmitted media content can be the media content received from the input media feed 112.
In the event the received command is a rewind command and the memory 118 does not contain stored media content, the transmitted media content is not changed. In the event the time factor is not beyond the stored media content in memory 118, the transmitted media content can be decremented by the time factor. In the event the time factor is beyond the stored media content in memory 118, the transmitted media content can be the media content stored in the memory 118, starting at the beginning of the stored media content.
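The buffer handling in the two preceding paragraphs amounts to clamping the adjusted position against what the memory 118 actually holds. A sketch, with hypothetical names and positions measured in seconds:

```python
def adjust_position(current, time_factor, buffer_start, buffer_end, forward):
    """Apply a fast forward (forward=True) or rewind (forward=False) by
    `time_factor` seconds, clamped to the stored media content.

    With nothing stored, the transmitted media content is left unchanged.
    Within the buffer, the position is incremented or decremented by the
    time factor. Beyond the buffer, fast forward falls back to the live
    input media feed and rewind starts at the beginning of the stored
    content."""
    if buffer_start is None or buffer_end is None:     # nothing stored
        return current
    if forward:
        return min(current + time_factor, buffer_end)  # live edge
    return max(current - time_factor, buffer_start)    # oldest stored frame


print(adjust_position(300.0, 60.0, 0.0, 330.0, forward=True))   # 330.0
print(adjust_position(45.0, 60.0, 0.0, 330.0, forward=False))   # 0.0
```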
Regarding a full screen command, the processor 120 can execute the command by causing the video stream to be displayed on the second-screen device 106 full screen, for example, not only in the video display area 206. For example, a full screen command can be a touch in the video display area 206, continuing vertically downward beyond the video display area 206 and ending in the active display area 208. The full screen command can be sensed in block 304 as shown in
Referring to
Regarding the tear to unlock command, the processor 120 can execute the command by no longer requiring the video stream of the first media content and the video stream of the second media content to be substantially synchronized. A tear to unlock gesture command can be a touch to the left of the video display area 206, continuing to the right into the video display area 206 and ending in the video display area 206 as shown in
The above described method 300 and commands are directed to embodiments where the second-screen device 106 is a “dumb” device which sends touch data. In some embodiments, the second-screen device 106 can be a “smart” device and can interpret the sensed commands and send the “sensed command” to the set top box 102, which can execute the sensed command. For example, the second application 110 on the second-screen device 106 can sense a pause command on the touchscreen 204 and can send the sensed command to the set top box 102, which can execute the pause command. In some embodiments, the set top box 102 can receive the touch data or a sensed command and can execute the command by sending commands to the primary display 104 and/or second-screen device 106, with the first application 108 and/or the second application 110 executing the commands. In some embodiments, the second application 110 can determine the sensed command and can execute the command on the second-screen device 106. In such embodiments, the second-screen device 106 can send the touch data or the sensed command to the set top box 102, which can execute the command and have the results displayed on the primary display 104. Regardless of how the commands are executed, the user is able to enter commands via gestures on the second-screen device 106, with the set top box 102 causing the effects of the command on the primary display 104.
The processor 120 can send, via one or more transceivers 116, media content to the primary display 104 and the second-screen device 106. This can be done in various ways. In some embodiments, the processor 120 can send the same media content to both the primary display 104 and the second-screen device 106. In some embodiments, first media content can be sent to the primary display 104 and second media content can be sent to the second-screen device 106. In such embodiments, the first media content can comprise one or more video streams and one or more associated audio streams, and the second media content can comprise one or more video streams, one or more associated audio streams and non-video content. The non-video content can include information associated with the one or more video streams. In other embodiments, the media content sent to the primary display 104 and/or the second-screen device 106 can include a single video stream comprising the video stream associated with the tuned channel and the video stream associated with a newly tuned channel in accordance with the command. For example, for a peek command, the single video stream can have ninety percent (90%) of the tuned channel and ten percent (10%) of the newly tuned channel, with the percentage of each changing in accordance with the sensed gesture. In yet other embodiments, the media content can contain the video stream associated with the tuned channel and the video stream associated with the newly tuned channel, along with instructions on how to display the video streams in accordance with the sensed gesture. For example, the media content can comprise both video streams with instructions for each application, the first application 108 and the second application 110, to display ninety percent (90%) of the tuned channel and ten percent (10%) of the newly tuned channel, with the percentage of each changing in accordance with the sensed gesture. In yet another embodiment, one or more transceivers 116 can send media content comprising a video stream associated with the tuned channel, and one or more transceivers 116 can send media content comprising a video stream associated with the newly tuned channel, along with instructions on how to display the video streams in accordance with the sensed gesture. Regardless of how the media content is sent to the primary display 104 and the second-screen device 106, the primary display 104 and the second-screen device 106 display the video stream or video streams in accordance with the sensed gesture.
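For the variant that sends both video streams together with display instructions, the outgoing message might look something like the following sketch; the field names and the JSON encoding are assumptions for illustration, not a defined format:

```python
import json


def compose_peek_message(tuned_stream_id, new_stream_id, tuned_pct):
    """Build a media-content message carrying both video streams plus
    instructions telling each application what split to display."""
    return json.dumps({
        "streams": [
            {"id": tuned_stream_id, "display_pct": tuned_pct},
            {"id": new_stream_id, "display_pct": 100 - tuned_pct},
        ],
        # Both the first application 108 and the second application 110
        # apply the same split, keeping the two displays in step.
        "targets": ["primary_display", "second_screen_device"],
    })


# e.g., a peek showing 90% of the tuned channel and 10% of the new one
print(compose_peek_message("channel-7", "channel-8", tuned_pct=90))
```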
In some embodiments, graphic items, such as a touch indicator 220, line 222, time scale 224, time stamp 226 and an icon 228, 230, 232, 234, are displayed on the second-screen device 106 for an executed command that affects the playback position of the video content being displayed. These commands can include pause, resume, fast forward and rewind commands. In such embodiments, more, fewer and/or different graphic items can be displayed. For example, the time stamp 226 may not be displayed on the primary display 104 in response to an executed command. In other embodiments, one or more graphic items can be displayed for other executed commands, such as a peek command.
Referring to
At block 1702, a partial video stream is displayed in a video display area on a touchscreen of a second-screen device. For example, the second application 110 can display a partial video stream in the video display area 206 on the second-screen device 106. After displaying the partial video stream, the method 1700 can proceed to block 1704.
At block 1704, a contextual panel is displayed in an active display area on the touchscreen. For example, the second application 110 can display the contextual panel in the active display area 208. The display of the contextual panel can be in response to an application being selected from an application menu. Referring to
Referring to
The graphical user interface for the social media application can be created using analytics, such as a Twitter analytic. The following describes one way the graphical user interface can be created, although the interface is not limited to this description. First, a raw Twitter feed can be received. In some embodiments, the raw Twitter feed can be received by the auxiliary content provider 130, which can be a server configured to perform analytics on the raw Twitter feed to divide the raw Twitter feed into collections pertaining to events or scenes that are presented in the first media content. In some embodiments, this service can be performed by Twitter or a third party provider. The auxiliary content provider 130 can monitor social media feeds for hash tags or other identifiers indicating that social media commentary is relevant to live or recorded media content. The auxiliary content provider 130 can curate the social media feeds into individual collections that each pertain to related content. In some embodiments, the curating of social media feeds also involves parsing a collection of social media commentary relevant to media content into sub-collections relevant to a time period surrounding a scene or event that generates an increased amount or rate of social media commentary.
The auxiliary content provider 130 can first associate social media commentary with a video program by matching hash tags in a database. The auxiliary content provider 130 can further monitor the rate of messages or tweets on an ongoing basis. The rate can be tweets per minute (TPM). The rate can be measured on a predetermined basis. For example, the rate of tweets per minute can be measured every five seconds. In other words, the rate of tweets per minute can change every five seconds. When the TPM exceeds a predetermined threshold, or changes significantly from a prior period, the auxiliary content provider 130 can divide the feed into a new sub-collection and associate the sub-collection with the media content to which it pertains, over a period beginning a few seconds earlier than the point in time at which the change in the rate of social media commentary was detected (as there will be some delay) and ending at the beginning of the next sub-collection. Many sub-collections associated with time frames can potentially be associated with the media. Each sub-collection can include a reference pointer to the time frame to which the sub-collection is relevant. In some embodiments, the auxiliary content provider is associated with a cloud DVR service so that the reference pointer has a specific media file to reference.
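A simplified sketch of this segmentation follows; the sampling cadence, the threshold, the lead-in offset, and the data layout are all assumptions for illustration:

```python
def segment_by_tpm(tpm_samples, threshold, lead_in=3.0):
    """tpm_samples: (timestamp_seconds, tweets_per_minute) pairs sampled
    on a fixed cadence (e.g., every five seconds). Each upward crossing
    of the threshold opens a new time frame starting `lead_in` seconds
    before the spike was detected (to absorb reaction delay); each frame
    ends where the next one begins."""
    starts, above = [], False
    for timestamp, tpm in tpm_samples:
        if tpm > threshold and not above:
            starts.append(max(0.0, timestamp - lead_in))
        above = tpm > threshold
    return [(start, starts[i + 1] if i + 1 < len(starts) else None)
            for i, start in enumerate(starts)]


# Two spikes produce two sub-collections, each pointing back into the video
samples = [(0, 40), (5, 45), (10, 300), (15, 280), (20, 50), (25, 320)]
print(segment_by_tpm(samples, threshold=200))  # [(7.0, 22.0), (22.0, None)]
```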
These collections and sub-collections of social media content can be accessed by the second screen device 106 directly, or through the set top box 102, in association with the primary media content being displayed on the primary display device 104. In some embodiments, the collections of social media content can be made available through an API to the auxiliary content provider 130. The collection can be streamed to the second screen device 106 while the primary media content is displayed, and the sub-collections can be displayed in the contextual panels described herein.
In some embodiments, the tweets contained within a time frame and displayed within the interactive contextual panels can be dynamic. While one set of analytics, such as tweets per minute or other analytics, can be used to determine the time boundaries of an event that can become a contextual panel, other analytics can be used to curate the collection of tweets that are displayed within the contextual panel. A large number of tweets might be associated with one of the contextual panels, but over time, algorithms can be used to determine that some tweets are more important or interesting than others. For example, if a tweet was re-tweeted often, an occurrence of that tweet could be considered important and included in the curated collection, while the re-tweets are duplicates and can be filtered out of the collection displayed in the interactive contextual panel. Accordingly, over time, data will develop that will allow an algorithm to curate the collection of tweets into the most relevant collection. This has the consequence that two viewers viewing the same content at different times could experience a somewhat different collection of tweets pertaining to the same time frame of the content. Another way that the collection of tweets can be curated includes identifying zero-retweet tweets from verified twitter accounts with a high number of followers (e.g., >10,000), implying popular or well known users saying something more or less as it happens. Of course, any method of curating the social media collections could be used. In some embodiments, the curated collection could be supplemented with tweets that come from accounts that are followed by the viewing user.
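One possible curation pass combining the two signals described above (filtering duplicate re-tweets while keeping often-re-tweeted originals, and keeping zero-retweet posts from verified, widely followed accounts) could look like this sketch; the message fields and thresholds are assumptions:

```python
def curate(messages, retweet_threshold=100, follower_threshold=10_000):
    """messages: dicts with hypothetical fields 'retweet_of' (id of the
    original tweet, or None), 'retweet_count', 'verified' and 'followers'.
    Keeps often-re-tweeted originals and zero-retweet posts from verified,
    widely followed accounts; drops the duplicate re-tweets themselves."""
    curated = []
    for msg in messages:
        if msg["retweet_of"] is not None:
            continue  # re-tweets duplicate the original and are filtered out
        often_retweeted = msg["retweet_count"] >= retweet_threshold
        notable_author = (msg["retweet_count"] == 0
                          and msg["verified"]
                          and msg["followers"] > follower_threshold)
        if often_retweeted or notable_author:
            curated.append(msg)
    return curated
```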
Even though the contextual panels may relate to events that happened in the past, a user experiencing recorded content enhanced with the contextual panels can still provide his or her own commentary via social messaging. In some embodiments, the application on the second screen device that is providing the contextual panels and content therein can provide an interface for a user to add commentary. Since the application can identify what program the commentary is associated with, and can identify the time during the program at which the user is providing the commentary, the application can include such information (program and time marker) in metadata associated with the commentary. The commentary can then be posted on various social media outlets, and shared with the auxiliary content provider 130 for potential addition to the contextual panel.
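A minimal sketch of attaching the program and time marker metadata to outgoing commentary, with hypothetical field names:

```python
def tag_commentary(text, program_id, time_marker_seconds):
    """Attach the metadata the application already knows (which program,
    and where in it the viewer is) to the outgoing commentary."""
    return {
        "text": text,
        "metadata": {
            "program": program_id,               # which program is playing
            "time_marker": time_marker_seconds,  # position within it
        },
    }


post = tag_commentary("What a catch!", "game-1234", 2705.0)
# `post` can then be published to social media outlets and shared with the
# auxiliary content provider 130 for potential addition to the panel.
```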
Referring to
Returning to
At block 1708, a command is executed in response to the sensed command. For example, the second application 110 can determine the command based on the sensed gesture and can execute it. After executing the command in response to the sensed command, the method 1700 can proceed to block 1710.
At block 1710, the results of the executed command can be reflected at least on the second-screen device. For example, the second application 110 can change the display on the second-screen device 106. For some commands, the second application 110 can execute the command by sending a command or instructions to the primary display 104 for execution by the first application 108. The commands can be sent directly from the second-screen device 106 to the primary display 104 or can be sent to the set top box 102, which forwards them to the primary display 104.
Referring to
Referring to
Referring to
Referring to
Referring to
As addressed above, the interactive contextual panels, whether they display trending comments (or Tweets), products for sale, or any other content, can be linked to video content. The linking can be handled by a reference pointer, a bookmark in the video, or another technique known in the art.
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
This application claims priority to U.S. Provisional Patent Application No. 61/901,383, filed on Nov. 7, 2013, the content of which is incorporated herein by reference in its entirety.