The present disclosure relates to connected multi-screen social media applications.
A variety of devices in different classes are capable of receiving and playing video content. These devices include tablets, smartphones, computer systems, game consoles, smart televisions, and other devices. The diversity of devices combined with the vast amounts of available media content has created a number of different presentation mechanisms.
However, mechanisms for providing common experiences across different device types and content types are limited. Consequently, the techniques of the present invention provide mechanisms that allow users to have improved experiences across devices and content types.
The disclosure may best be understood by reference to the following description taken in conjunction with the accompanying drawings, which illustrate particular embodiments.
Reference will now be made in detail to some specific examples of the invention including the best modes contemplated by the inventors for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims.
For example, the techniques of the present invention will be described in the context of fragments, particular servers and encoding mechanisms. However, it should be noted that the techniques of the present invention apply to a wide variety of different fragments, segments, servers and encoding mechanisms. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. Particular example embodiments of the present invention may be implemented without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
Various techniques and mechanisms of the present invention will sometimes be described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. For example, a system uses a processor in a variety of contexts. However, it will be appreciated that a system can use multiple processors while remaining within the scope of the present invention unless otherwise noted. Furthermore, the techniques and mechanisms of the present invention will sometimes describe a connection between two entities. It should be noted that a connection between two entities does not necessarily mean a direct, unimpeded connection, as a variety of other entities may reside between the two entities. For example, a processor may be connected to memory, but it will be appreciated that a variety of bridges and controllers may reside between the processor and memory. Consequently, a connection does not necessarily mean a direct, unimpeded connection unless otherwise noted.
Overview
Disclosed herein are mechanisms and techniques that may be used to provide a connected, multi-screen social media application. Users may employ various types of devices to view media content such as video and audio. The devices may be used alone or together to present the media content. The media content may be received at the devices from various sources. According to various embodiments, different devices may communicate to present a common interface across the devices. The user interface may display a social media application. The social media application may be used to share comments, ratings, or other social content related to media content items accessed via a media system.
According to various embodiments, a connected multi-screen system may provide a common experience across devices while allowing multi-screen interactions and navigation. Content may be organized around content entities such as shows, episodes, sports categories, genres, etc. The system includes an integrated and personalized guide along with effective search and content discovery mechanisms. Co-watching and companion information is provided to allow for social interactivity and metadata exploration.
According to various embodiments, a connected multi-screen interface is provided to allow for a common experience across devices in a way that is optimized for various device strengths. Media content is organized around media entities such as shows, programs, episodes, characters, genres, categories, etc. In particular embodiments, live television, on-demand, and personalized programming are presented together. Multi-screen interactions and navigation are provided with social interactivity, metadata exploration, show information, and reviews.
According to various embodiments, a connected multi-screen interface may be provided on two or more display screens associated with different devices. The connected interface may provide a user experience that is focused on user behaviors, not on a particular device or service. In particular embodiments, a user may employ different devices for different media-related tasks. For instance, a user may employ a television to watch a movie while using a connected tablet computer to search for additional content or browse information related to the movie.
According to various embodiments, a connected personalized content guide may facilitate user interaction with content received from a variety of sources. For instance, a user may receive content via a cable or satellite television connection, an online video-on-demand provider such as Netflix, a digital video recorder (DVR), a video library stored on a network storage device, and an online media content store such as iTunes or Amazon. Instead of navigating and searching each of these content sources separately, a user may be presented with a digital content guide that combines content from the different sources. In this way, a user can search and navigate content based on the user's preferences without being bound to a particular content source, service, or device.
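A minimal sketch of how such a combined guide might be assembled is given below. The source names, data classes, and merge ordering are illustrative assumptions rather than part of the described system; Python is used only for illustration.

```python
from dataclasses import dataclass

@dataclass
class GuideEntry:
    title: str
    source: str        # e.g. "cable", "DVR", "Netflix", "iTunes"
    start_time: str    # ISO 8601 timestamp for scheduled content, empty for on-demand
    on_demand: bool

def build_combined_guide(sources):
    """Merge per-source listings into a single personalized guide.

    `sources` maps a source name to a list of GuideEntry objects obtained
    from that source's own catalog or schedule.
    """
    combined = []
    for name, entries in sources.items():
        combined.extend(entries)
    # Scheduled items first, ordered by start time; on-demand items after,
    # ordered alphabetically. Ranking by user preference could replace this rule.
    return sorted(combined, key=lambda e: (e.on_demand, e.start_time, e.title))

# Example usage with two hypothetical sources.
guide = build_combined_guide({
    "cable": [GuideEntry("Dexter", "cable", "2012-04-27T21:00:00", False)],
    "Netflix": [GuideEntry("Mad Men", "Netflix", "", True)],
})
```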
According to various embodiments, a social media application may facilitate the exchange of user-generated content. The user-generated content may be related to media content accessed via the media system. For instance, the user-generated content may include comments, recommendations regarding content, ratings of content, and other such content. The social media application may be provided by the media system or by a third party, such as a social networking service.
According to various embodiments, a social media application may facilitate interaction via a standalone social media system provided by the connected user interface provider. Alternately, or additionally, the social media application may facilitate interaction via a third party social media system such as YouTube, Twitter, or Facebook.
According to various embodiments, a user interface for presenting and/or interacting with media content may include various types of components. For instance, a user interface may include one or more media content display portions, user interface navigation portions, media content guide portions, related media content portions, media content overlay portions, web content portions, interactive application portions, or social media portions.
According to various embodiments, the media content displayed on the different devices may be of various types and/or derive from various sources. For example, media content may be received from a local storage location, a network storage location, a cable or satellite television provider, an Internet content provider, or any other source. The media content may include audio and/or video and may be television, movies, music, online videos, social media content, or any other content capable of being accessed via a digital device.
As shown in
According to various embodiments, a user interface may include one or more portions that are positioned on top of another portion of the user interface. Such a portion may be referred to herein as a picture in picture, a PinP, an overlaid portion, an asset overlay, or an overlay.
According to various embodiments, a user interface may include one or more navigation elements, which may include, but are not limited to: a media content guide element, a library element, a search element, a remote control element, and an account access element. These elements may be used to access various features associated with the user interface, such as a search feature or media content guide feature.
According to various embodiments, the techniques and mechanisms described herein may be used in conjunction with grid-based electronic program guides. In many grid-based electronic program guides, content is organized into “channels” that appear on one dimension of the grid and time that appears on the other dimension of the grid. In this way, the user can identify the content presented on each channel during a range of time.
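The grid structure described above can be sketched as a simple channel-by-time mapping; the slot size and tuple layout below are illustrative assumptions.

```python
from collections import defaultdict

def build_grid_guide(listings, slot_minutes=30):
    """Arrange listings into a channel-by-time grid.

    `listings` is an iterable of (channel, start_minute, duration, title)
    tuples, where start_minute is minutes past the start of the guide window.
    Returns {channel: {slot_index: title}}.
    """
    grid = defaultdict(dict)
    for channel, start, duration, title in listings:
        first_slot = start // slot_minutes
        last_slot = (start + duration - 1) // slot_minutes
        for slot in range(first_slot, last_slot + 1):
            grid[channel][slot] = title
    return grid

# A one-hour program fills two 30-minute slots on its channel.
grid = build_grid_guide([("AMC", 0, 60, "Mad Men"), ("SHO", 30, 30, "Dexter")])
```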
According to various embodiments, the techniques and mechanisms described herein may be used in conjunction with mosaic programming guides. In mosaic programming guides, a display includes panels of actual live feeds as a channel itself. A user can rapidly view many options at the same time. Using the live channel as a background, a lightweight menu-driven navigation system can be used to position an overlay indicator to select video content. Alternatively, numeric or text-based navigation schemes could also be used. Providing a mosaic of channels in a single channel instead of merging multiple live feeds into a single display decreases the complexity of a device application. Merging multiple live feeds requires individual, per-channel feeds of content to be delivered and processed at an end user device. Bandwidth and resource usage for delivery and processing of multiple feeds can be substantial. Less bandwidth is used for a single mosaic channel, as a mosaic channel simply requires a video feed from a single channel. The single channel could be generated by content providers, service providers, etc.
In
In
In
In
It should be noted that the user interfaces shown in
At operation 1a, an episode of the television show “Dexter” is playing on a television, which may also be referred to as a set top box (STB). According to various embodiments, the television show may be presented via any of various techniques. For instance, the television show may be received via a cable television network connection, retrieved from a storage location such as a DVR, or streamed over the Internet from a service provider such as Netflix.
According to various embodiments, the television or an associated device such as a cable box may be capable of communicating information to another device. For example, the television or cable box may be capable of communicating with a server via a network such as the Internet, with a computing device via a local network gateway, or with a computing device directly such as via a wireless network connection. The television or cable box may communicate information such as a current device status, the identity of a media content item being presented on the device, and a content management account associated with the device.
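One way such status information might be structured is sketched below. The field names, identifiers, and JSON encoding are assumptions made for illustration; the disclosure does not specify a message format.

```python
import json

# Hypothetical status message a set top box might send to a pairing server
# or to a companion device over the local network.
status_message = {
    "device_id": "stb-1234",
    "device_status": "active",
    "now_playing": {"title": "Dexter", "type": "episode"},
    "content_management_account": "account-5678",
}
payload = json.dumps(status_message)
# The payload could then be delivered over HTTP to a server, through a local
# network gateway, or directly over a wireless connection to another device.
```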
At operation 2a, a communication application is activated on a mobile device that is not already operating in companion mode. The communication application may allow the mobile device to establish a communication session for the purpose of entering into a companion mode with other media devices. When in companion mode, the devices may present a connected user interface for cross-device media display. In the example shown in
At operation 3a, the mobile phone receives a message indicating that the television is active and is playing the episode of the television show “Dexter.” Then, the mobile phone presents a message that provides a choice as to whether to enter companion mode or to dismiss the connection. When the user selects companion mode, the mobile phone initiates the communications necessary for presenting the connected display. For example, the mobile phone may transmit a request to a server to receive the information to display in the connected display.
In particular embodiments, the connected display may present an asset overlay for the content being viewed. For example, the asset overlay may display information related to the viewed content, such as other episodes of the same television program, biographies of the cast members, and similar movies or television shows. The asset overlay user interface may include a screen portion for displaying a small, thumbnail-sized video of the content being presented on the television. Then, the user can continue to watch the television program even while looking at the mobile phone.
In particular embodiments, a device may transmit identification information such as a content management account identifier. In this way, a server may be able to determine how to pair different devices when more than one connection is possible. When a device is associated with a content management account, the device may display information specific to the content management account such as suggested content determined based on the user's preferences.
In some embodiments, a device may automatically enter companion mode when an available connection is located. For instance, a device may be configured in an “auto-companion” mode. When a first device is in auto-companion mode, opening a second device in proximity to the first device causes the first device to automatically enter companion mode, for instance on the asset overlay page. Dismissing an alert message indicating the possibility of entering companion mode may result in the mobile phone returning to a previous place in the interface or in another location, such as a landing experience for a time-lapsed user. In either case, the television program being viewed on the television may be added to the history panel of the communication application.
In
At operation 1b1, the mobile device is displaying an asset overlay associated with the television program as discussed with respect to
At operation 2b, the user would like to switch to watching the television program in full screen video on the mobile device while remaining in companion mode. In order to accomplish this task, the user activates a user interface element, for instance by tapping and holding on the picture-in-picture portion of the display screen. When the user activates the selection interface, the mobile device displays a list of devices for presenting the content. At this point, the user selects the mobile device that the user is operating.
At operation 3b1, the device is removed from companion mode. When companion mode is halted, the video playing on the television may now be presented on the mobile device in full screen. According to various embodiments, the device may be removed from proximity of the television while continuing to play the video.
At operation 4b1, the user selects the asset overlay for display on top of, or in addition to, the video. According to various embodiments, various user interface elements may be used to select the asset overlay for display. For example, the user may swipe the touch screen display at the mobile device. As another example, the user may click on a button or press a button on a keyboard.
At operation 3b2, the electronic program guide or entity flow continues to be displayed on the mobile device. At the same time, the “bug” is removed from the picture-in-picture portion of the display screen. As used herein, the term “bug” refers to an icon or other visual depiction. In
At operation 4b2, the video is displayed in full screen mode. According to various embodiments, the video may be displayed in full screen mode by selecting the picture-in-picture interface. Alternately, the video may be automatically displayed in full screen mode when the device is no longer operating in companion mode.
At 1802, a connected user interface is presented on two or more media content playback devices. According to various embodiments, the content playback devices may be any devices capable of presenting media content items for playback. For instance, each content playback device may be a laptop computer, a desktop computer, a tablet computer, a mobile phone, or a television.
According to various embodiments, each of the content playback devices may perform various operations related to content management and/or playback. For example, one content playback device may present media content for playback, while a digital program guide or asset overlay is presented on another content playback device. As another example, one content playback device may present media content for playback in a full screen mode while another content playback device presents content in a windowed, picture-in-picture playback mode to allow other portions of a display screen to be used for other purposes.
At 1804, a media content item is presented for playback at one of the media content playback devices. According to various embodiments, the media content item may be retrieved from any of a variety of media content sources. For example, the media content item may be streamed or downloaded from an internet content service provider such as iTunes or Netflix. As another example, the media content item may be transmitted from the media system. As yet another example, the media content item may be retrieved from a local or network storage location.
According to various embodiments, the media content item may be presented when it is selected by a user. For instance, a user may select the media content item for playback from a digital content guide or from an asset overlay. In particular embodiments, the user may also select a media content device for presenting the media content item. For instance, the user may select any active media playback device associated with the content management account.
At 1806, a social media application relating to the media content is presented at another of the media content playback devices. According to various embodiments, the social media application may be any application capable of facilitating the exchange of user-generated content. For instance, the social media application may facilitate interaction via Facebook, Twitter, YouTube, or any other social media service.
According to various embodiments, the social media application may be provided from any of a variety of sources. For example, the social media application may be provided by the media system. As another example, the social media application may be provided by a social networking system such as Facebook, Twitter, or YouTube.
According to various embodiments, the social media application may facilitate interaction regarding the specific media content item presented for playback as discussed in operation 1804. For instance, the social media application may facilitate the exchange of content ratings or comments regarding the media content item. The presentation and updating of a social media application in a connected user interface are described in further detail with respect to
At 1902, a media content item presented in a connected user interface is identified. According to various embodiments, the media content item may be any media content item accessible via the media system. For instance, the media content item may be a television program, a movie, or any digital video or audio content. The media content item may be received from any of a variety of content sources, which may include, but are not limited to: a broadcast network such as cable or satellite television, an online digital content service such as Netflix or iTunes, or a local or network storage location.
At 1904, one or more social media applications related to the media content item are identified. According to various embodiments, the social media applications may include any applications for exchanging user-generated content related to the identified media content item. The identified social media applications may include general applications, such as a Facebook or Twitter application, or specific applications, such as a “Mad Men” community application. The identified social media applications may include applications that are focused on general content such as a television show or specific content such as a particular television show episode.
According to various embodiments, one or more social media applications may be associated with a content management account. For instance, a user may indicate that he or she has accounts associated with a designated list of social media services. Alternately, or additionally, a social media application may be associated with a media content item regardless of whether a particular content management account is associated with a user account for the social media application. For instance, a social media application may provide access to blogs and mainstream media sources related to a media content item. These media sources may not require a user account but may still facilitate the exchange of user-generated content such as comments.
According to various embodiments, the social media application may aggregate secondary media content concerning the specific media content item presented for playback. For example, for the television show “Mad Men”, the social media application may present the user with relevant articles from mainstream media and web logs (blogs) concerning the show in general or a particular episode. For instance, the social media application may present the user with links to or content from The New York Times' “Arts Beat” blog, providing a summary of the most recent episode, and Slate.com's “TV Club”, an online discussion of the show by staff journalists with comments by other readers. The social media application may also present links to non-mainstream media content, such as blog posts, that relate to the show in general or to a particular episode of the show. For example, the social media application may present links to discussions and articles from professional blogs such as “The A.V. Club” or “The Hitfix” blogs and/or fan-based blogs such as “Basket of Kisses.”
According to various embodiments, the social media application may facilitate the exchange of user-generated content. For instance, the social media application may highlight in particular mainstream news articles or blogs that are read by, shared, or “liked” by peers in the user's social network in order to make social recommendations of such content. For example, if a user's Facebook friend or Google Circle member expresses a preference for Slate.com's “TV Club” online discussion, then that secondary media source can be highlighted and/or positioned at the top of a list of recommended social media content presented in the social media application. As another example, the social media application may also allow the user to selectively subscribe to certain comment feeds and discussions and be alerted when new comments are made to a particular article, blog post, or Facebook comment thread.
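One possible ranking rule for such social recommendations is sketched below. The scoring scheme and source names are illustrative assumptions; the disclosure does not prescribe a particular algorithm.

```python
def rank_secondary_sources(sources, friend_activity):
    """Order secondary media sources so socially endorsed ones appear first.

    `sources` is a list of source names; `friend_activity` maps a source name
    to the number of the user's social connections who liked or shared it.
    """
    return sorted(sources, key=lambda s: friend_activity.get(s, 0), reverse=True)

# Sources liked or shared by more of the user's connections rise to the top.
ranked = rank_secondary_sources(
    ["Arts Beat", "TV Club", "Basket of Kisses"],
    {"TV Club": 3, "Arts Beat": 1},
)
# -> ["TV Club", "Arts Beat", "Basket of Kisses"]
```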
According to various embodiments, the social media application may allow a user to join a discussion via a commenting mechanism, indicate a “like” of the discussion on Facebook, or share the discussion with the user's social network via a social media service such as Facebook or Twitter. For instance, the user may subscribe to a particular comment thread of a discussion on Slate.com of the most recent episode of Mad Men. Then, through the social media application, the user may post his or her own comment to the discussion and share the comment with his or her social network. This sharing may then generate additional discussion or feedback by the user's friends and other social connections.
According to various embodiments, the social media application may aggregate secondary social media content concerning the specific media content item presented for playback. In some cases, this aggregation may be based on information available from social media services in which the user participates. For example, the social media application can generate content suggestions based on the user's friends, self-description, interests, and other information expressed or available on a social network such as Facebook, Twitter, or LinkedIn. For instance, if the user's occupation is in graphic design and the user lists “design” or “history” as a personal interest, the social media application may present the user with media content recommendations related to interior design, typography, or American mid-century history.
According to various embodiments, the social media application can aggregate social media content based on the user's viewing history and search history. For example, the user may have searched for media content featuring Jon Hamm, the lead actor in “Mad Men,” or may have a history of viewing movies and shows in which he stars. In this case, the social media application can suggest that the user “subscribe” to Jon Hamm's Facebook page or “follow” Jon Hamm's Twitter account.
According to various embodiments, the social media application may analyze aggregated social media and e-commerce information based on the user's social media activities. For example, a user may “pin” pictures from Mad Men onto their “My Style” board on the image sharing site Pinterest or post a collage of clothing inspired by the fashions of Mad Men to the collage sharing site Polyvore. Then, the social media application may recommend other Pinterest or Polyvore boards to follow that are similar in content or style. Also, the social media application may recommend other media content items that elicited similar social media reactions from other users. For example, the social media application can suggest that the user view social media content or media content items that are categorized as “vintage” or “retro.”
At 1906, a menu for selecting from among the social media applications is provided. According to various embodiments, an instruction for providing the menu may be transmitted from the media system to a client machine. In some instances, the client machine may be the machine at which the content item is presented. In other instances, the client machine may be another machine, such as a client machine displaying a content guide or other user interface portions.
At 1908, a selection of one or more of the social media applications is received. The selection of one or more of the social media applications may be received at the client machine and transmitted to the media system. In particular embodiments, the selection may be processed by the media system. Alternately, the selection may be transmitted to a third party social media service.
At 1910, a user account for accessing the selected social media application is identified. According to various embodiments, the user account may be identified by login information such as a username and password. This information may be provided by a user at the client machine. Alternately, or additionally, the information may be stored in association with the content management account and retrieved when the social media application is selected or accessed. In some cases, the identifying information may be associated with a third party account such as Facebook or Twitter. In other cases, the identifying information may identify the content management account associated with content presentation, for instance when the social media application is provided by the media system.
At 1912, the social media application is presented within the connected user interface. According to various embodiments, the social media application may be presented at any of the connected content playback devices. For instance, the social media application may be presented at one content playback device, while another content playback device presents the media content item. When the social media application is displayed, it may be displayed on the entire area of the display screen or within a portion of the display screen. For instance, a portion of the display screen may display a smaller scale, picture-in-picture version of a media content item, while another portion of the display screen is used to display the social media application.
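The sequence of operations 1906 through 1912 might be sketched as follows. The function, argument names, and return values are hypothetical stand-ins for the menu presentation, selection, account lookup, and presentation steps described above.

```python
def present_social_application(content_item, account, applications, credentials):
    """Sketch of operations 1906-1912: offer a menu of related social media
    applications, resolve the user's account for the chosen one, and return
    what the connected interface should present."""
    menu = sorted(applications)                           # 1906: menu of candidate apps
    selection = menu[0]                                   # 1908: stand-in for a user choice
    user_account = credentials.get((account, selection))  # 1910: stored login, if any
    if user_account is None:
        return {"action": "prompt_login", "application": selection}
    return {                                              # 1912: present within the interface
        "action": "present",
        "application": selection,
        "user_account": user_account,
        "content_item": content_item,
    }

result = present_social_application(
    content_item="Mad Men S05E08",
    account="account-5678",
    applications=["Facebook", "Twitter", "Mad Men community"],
    credentials={("account-5678", "Facebook"): "fb-user-42"},
)
```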
According to various embodiments, one or more of the operations shown in
At 2002, a request to update a social media application is received. According to various embodiments, the request may be received at the media system providing the connected user interface. Alternately, the request may be received at a third party social media service. In some cases, the request may be received from the client machine. For instance, the client machine may transmit a request for new information to display within the social media application. In other cases, the request may be received from the social media service. For example, the social media service may transmit a request to push information out to the social media application presented at the client machine.
At 2004, information for updating the social media application is identified. According to various embodiments, the information may be received from any of various sources. For instance, the information may be received from the media system, from a social media service, or from the client machine. The information may include any data capable of being used for client-side and/or server-side social media application updating.
At 2006, server-side social media application information is updated. According to various embodiments, updating the server-side social media application information may involve storing any new information at the media system and/or the social media service. The updated information may include, but is not limited to: new media content recommendations, the user's social media service contacts, information regarding other aspects of the social media service, inferences regarding the user's content viewing preferences, information related to the content management account, and any information discussed with respect to the presentation of the social media application in
At 2008, client-side social media application information is updated. According to various embodiments, updating the client-side social media application information may involve transmitting a message to the client machine instructing the client machine to display new information. The new information may include any information capable of being presented in conjunction with the client-side social media application. For instance, the information may include, but is not limited to: new comments or content recommendations, new information regarding members of a user's social network, new articles or news regarding a media content item or media content category, new social media service status information, and any information discussed with respect to the presentation of the social media application in
At 2010, a determination is made as to whether to perform additional updating of the social media application. According to various embodiments, the determination may be made at least in part based on whether the social media application continues to be presented at the client machine. If the social media application continues to be presented and if additional information for updating the social media application is received, then additional updating may be performed.
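The update cycle of operations 2002 through 2010 could be organized as a simple loop, as sketched below. The callables, polling interval, and stopping condition are illustrative assumptions about one possible implementation.

```python
import time

def run_update_loop(fetch_updates, apply_server_side, push_to_client,
                    still_presented, poll_seconds=30):
    """Sketch of operations 2002-2010: repeatedly obtain update information,
    store it server-side, push display updates to the client, and stop when
    the social media application is no longer presented."""
    while still_presented():                 # 2010: continue only while on screen
        updates = fetch_updates()            # 2002/2004: request and identify information
        if updates:
            apply_server_side(updates)       # 2006: persist at the media system/service
            push_to_client(updates)          # 2008: instruct the client to display it
        time.sleep(poll_seconds)
```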
The fragment server 2111 provides the caching layer with fragments for clients. The design philosophy behind the client/server application programming interface (API) minimizes round trips and reduces complexity as much as possible when it comes to delivery of the media data to the client 2115. The fragment server 2111 provides live streams and/or DVR configurations.
The fragment controller 2107 is connected to application servers 2103 and controls the fragmentation of live channel streams. The fragment controller 2107 optionally integrates guide data to drive the recordings for a global/network DVR. In particular embodiments, the fragment controller 2107 embeds logic around the recording to simplify the fragment writer 2109 component. According to various embodiments, the fragment controller 2107 will run on the same host as the fragment writer 2109. In particular embodiments, the fragment controller 2107 instantiates instances of the fragment writer 2109 and manages high availability.
According to various embodiments, the client 2115 uses a media component that requests fragmented MPEG-4 files, allows trick-play, and manages bandwidth adaptation. The client communicates with the application services associated with HTTP proxy 2113 to get guides and present the user with the recorded content available.
The fragment server 2211 provides the caching layer with fragments for clients. The design philosophy behind the client/server API minimizes round trips and reduces complexity as much as possible when it comes to delivery of the media data to the client 2215. The fragment server 2211 provides VoD content.
According to various embodiments, the client 2215 uses a media component that requests fragmented MPEG-4 files, allows trick-play, and manages bandwidth adaptation. The client communicates with the application services associated with HTTP proxy 2213 to get guides and present the user with the recorded content available.
According to various embodiments, the fragment writer command line arguments are the SDP file of the channel to record, the start time, the end time, and the names of the current and next output files. The fragment writer listens to RTP traffic from the live video encoders and rewrites the media data to disk as fragmented MPEG-4. According to various embodiments, media data is written as fragmented MPEG-4 as defined in MPEG-4 part 12 (ISO/IEC 14496-12). Each broadcast show is written to disk as a separate file indicated by the show ID (derived from the EPG). Clients include the show ID as part of the channel name when requesting to view a prerecorded show. The fragment writer consumes each of the different encodings and stores each as a separate MPEG-4 fragment.
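A hypothetical parsing of those arguments is sketched below; the positional argument names and the example values are assumptions, since the disclosure lists the arguments but not their exact syntax.

```python
import argparse

parser = argparse.ArgumentParser(description="fragment writer (illustrative sketch)")
parser.add_argument("sdp_file", help="SDP file of the channel to record")
parser.add_argument("start_time", help="recording start time")
parser.add_argument("end_time", help="recording end time")
parser.add_argument("current_output", help="name of the current output file")
parser.add_argument("next_output", help="name of the next output file")

# Example invocation with hypothetical values.
args = parser.parse_args(
    ["channel1.sdp", "2012-04-27T21:00:00", "2012-04-27T22:00:00",
     "show123.mp4", "show124.mp4"]
)
```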
In particular embodiments, the fragment writer writes the RTP data for a particular encoding and the show ID field to a single file. Inside that file, there is metadata information that describes the entire file (MOOV blocks). Atoms are stored as groups of MOOF/MDAT pairs to allow a show to be saved as a single file. At the end of the file there is random access information that can be used to enable a client to perform bandwidth adaptation and trick play functionality.
According to various embodiments, the fragment writer includes an option which encrypts fragments to ensure stream security during the recording process. The fragment writer will request an encoding key from the license manager. The keys used are similar to those used for DRM. The encoding format is slightly different in that the MOOF is encrypted. The encryption occurs once so that it does not create prohibitive costs during delivery to clients.
The fragment server responds to HTTP requests for content. According to various embodiments, it provides APIs that clients can use to get the headers required to decode the video, to seek to any desired time frame within the fragment, and to watch channels live. Effectively, live channels are served from the most recently written fragments for the show on that channel. The fragment server returns the media header (necessary for initializing decoders), particular fragments, and the random access block to clients. According to various embodiments, the APIs supported allow for optimization where the metadata header information is returned to the client along with the first fragment. The fragment writer creates a series of fragments within the file. When a client requests a stream, it makes requests for each of these fragments, and the fragment server reads the portion of the file pertaining to that fragment and returns it to the client.
According to various embodiments, the fragment server uses a REST API that is cache-friendly so that most requests made to the fragment server can be cached. The fragment server uses cache control headers and ETag headers to provide the proper hints to caches. This API also provides the ability to understand where a particular user stopped playing and to start play from that point (providing the capability for pause on one device and resume on another).
In particular embodiments, client requests for fragments follow the following format: http://{HOSTNAME}/frag/{CHANNEL}/{BITRATE}/[{ID}/]{COMMAND}[/{ARG}] e.g. http://frag.hostty.com/frag/1/H8QVGAH264/1270059632.mp4/fragment/42. According to various embodiments, the channel name will be the same as the backend-channel name that is used as the channel portion of the SDP file. VoD uses a channel name of “vod”. The BITRATE should follow the BITRATE/RESOLUTION identifier scheme used for RTP streams. The ID is dynamically assigned. For live streams, this may be the UNIX timestamp; for DVR this will be a unique ID for the show; for VoD this will be the asset ID. The ID is optional and not included in LIVE command requests. The command and argument are used to indicate the exact command desired and any arguments. For example, to request chunk 42, this portion would be “fragment/42”.
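Following the format above, a client-side URL builder might look like the sketch below; the function name and parameters are illustrative, and only the path structure is taken from the description.

```python
def fragment_url(hostname, channel, bitrate, command, arg=None, content_id=None):
    """Build a fragment request URL following the described format.
    The ID segment is omitted, for instance for LIVE command requests."""
    parts = [f"http://{hostname}/frag/{channel}/{bitrate}"]
    if content_id is not None:
        parts.append(str(content_id))
    parts.append(command)
    if arg is not None:
        parts.append(str(arg))
    return "/".join(parts)

# Requesting chunk 42 of an asset, mirroring the example URL above.
url = fragment_url("frag.hostty.com", "1", "H8QVGAH264",
                   "fragment", 42, content_id="1270059632.mp4")
# -> http://frag.hostty.com/frag/1/H8QVGAH264/1270059632.mp4/fragment/42
```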
The URL format makes the requests content delivery network (CDN) friendly because the fragments will never change after this point so two separate clients watching the same stream can be serviced using a cache. In particular, the head end architecture leverages this to avoid too many dynamic requests arriving at the Fragment Server by using an HTTP proxy at the head end to cache requests.
According to various embodiments, the fragment controller is a daemon that runs on the fragmenter and manages the fragment writer processes. A configured filter that is executed by the fragment controller can be used to generate the list of broadcasts to be recorded. This filter integrates with external components such as a guide server to determine which shows to record and which broadcast ID to use.
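One illustrative form such a configured filter might take is sketched below; the guide entry fields and selection rule are assumptions, since the disclosure only states that the filter integrates with a guide server to choose broadcasts.

```python
def select_broadcasts_to_record(guide_entries, wanted_shows):
    """Return the broadcast IDs that the fragment controller should record,
    based on guide data and a configured set of shows."""
    return [entry["broadcast_id"] for entry in guide_entries
            if entry["show"] in wanted_shows]

broadcasts = select_broadcasts_to_record(
    [{"show": "Dexter", "broadcast_id": "b-101"},
     {"show": "News", "broadcast_id": "b-102"}],
    wanted_shows={"Dexter"},
)
# -> ["b-101"]
```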
According to various embodiments, the client includes an application logic component and a media rendering component. The application logic component presents the user interface (UI) for the user, communicates to the front-end server to get shows that are available for the user, and authenticates the content. As part of this process, the server returns URLs to media assets that are passed to the media rendering component.
In particular embodiments, the client relies on the fact that each fragment in a fragmented MP4 file has a sequence number. Using this knowledge and a well-defined URL structure for communicating with the server, the client requests fragments individually as if it was reading separate files from the server simply by requesting URLs for files associated with increasing sequence numbers. In some embodiments, the client can request files corresponding to higher or lower bit rate streams depending on device and network resources.
Since each file contains the information needed to create the URL for the next file, no special playlist files are needed, and all actions (startup, channel change, seeking) can be performed with a single HTTP request. After each fragment is downloaded, the client assesses, among other things, the size of the fragment and the time needed to download it in order to determine if downshifting is needed or if there is enough bandwidth available to request a higher bit rate.
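A minimal version of that downshift/upshift decision is sketched below. The headroom factor and selection rule are assumptions; the disclosure only states that fragment size and download time drive the decision.

```python
def choose_next_bitrate(fragment_bytes, download_seconds, available_bitrates,
                        headroom=1.5):
    """Pick the bit rate for the next fragment request from the measured
    throughput of the last download."""
    throughput_bps = (fragment_bytes * 8) / download_seconds
    # Request a higher rate only when there is comfortable margin; otherwise
    # fall back to the lowest available rate (downshift).
    candidates = [b for b in available_bitrates if b * headroom <= throughput_bps]
    if candidates:
        return max(candidates)
    return min(available_bitrates)

# A 750 KB fragment that took 5 seconds implies ~1.2 Mbps of throughput,
# so the client downshifts to the 500 kbps stream in this example.
next_rate = choose_next_bitrate(
    fragment_bytes=750_000, download_seconds=5.0,
    available_bitrates=[500_000, 1_500_000, 3_000_000],
)
```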
Because each request to the server looks like a request to a separate file, the response to requests can be cached in any HTTP proxy or distributed over any HTTP-based content delivery network (CDN).
The fragment may be cached for a short period of time at caching layer 2403. The mediakit 2405 identifies the fragment number and determines whether resources are sufficient to play the fragment. In some examples, resources such as processing or bandwidth resources are insufficient. The fragment may not have been received quickly enough, or the device may be having trouble decoding the fragment with sufficient speed. Consequently, the mediakit 2405 may request a next fragment having a different data rate. In some instances, the mediakit 2405 may request a next fragment having a lower data rate. According to various embodiments, the fragment server 2401 maintains fragments for different quality of service streams with timing synchronization information to allow for timing accurate playback.
The mediakit 2405 requests a next fragment using information from the received fragment. According to various embodiments, the next fragment for the media stream may be maintained on a different server, may have a different bit rate, or may require different authorization. Caching layer 2403 determines that the next fragment is not in cache and forwards the request to fragment server 2401. The fragment server 2401 sends the fragment to caching layer 2403 and the fragment is cached for a short period of time. The fragment is then sent to mediakit 2405.
Particular examples of interfaces supported include Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control communications-intensive tasks such as packet switching, media control and management.
According to various embodiments, the system 2600 is a server that also includes a transceiver, streaming buffers, and a program guide database. The server may also be associated with subscription management, logging and report generation, and monitoring capabilities. In particular embodiments, the server can be associated with functionality for allowing operation with mobile devices such as cellular phones operating in a particular cellular network and providing subscription management capabilities. According to various embodiments, an authentication module verifies the identity of devices including mobile devices. A logging and report generation module tracks mobile device requests and associated responses. A monitor system allows an administrator to view usage patterns and system availability. According to various embodiments, the server handles requests and responses for media content related transactions while a separate streaming server provides the actual media streams.
Although a particular server is described, it should be recognized that a variety of alternative configurations are possible. For example, some modules such as a report and logging module and a monitor may not be needed on every server. Alternatively, the modules may be implemented on another device connected to the server. In another example, the server may not include an interface to an abstract buy engine and may in fact include the abstract buy engine itself. A variety of configurations are possible.
In the foregoing specification, the invention has been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the invention.
This application claims priority to Provisional U.S. Patent Application No. 61/639,689 by Billings et al., filed Apr. 27, 2012, titled “CONNECTED MULTI-SCREEN VIDEO”, which is hereby incorporated by reference in its entirety and for all purposes.