The disclosed implementations relate to presenting media content generally and switching between media streams in particular.
As computer technology has improved and become ubiquitous, users are increasingly able to use computer-based devices to consume media content. For example, users can listen to audio content or watch video content on a variety of computer-based electronic devices that are not receiving a predefined set of channels broadcast to multiple devices (e.g., via radio, broadcast television, or cable). In addition, advances in network technology have increased the speed and reliability with which information can be transmitted over computer networks to individual computers. As such, it is possible to select particular media items to play over computer networks as needed, rather than tuning in to a particular channel of a predefined broadcast transmission.
Despite the advances in networking speed and reliability, some solutions for switching between different media items are cumbersome and involve multiple steps that can be confusing and frustrating for a user. This is especially true when a user has to perform searches or navigate through menus and indexes to identify media items to play. In such circumstances, a user is less likely to switch between different media items while viewing media items on a device due to the inconvenience of doing so, thereby reducing the user's enjoyment of and satisfaction with the device.
Accordingly, there is a need for a method to reduce the time needed to switch between media items to provide a seamless user experience that enables a user to easily and intuitively switch between media items in a first sequence of media items and media items in different sequences of media items based on a direction of motion of an input. Such methods and interfaces may complement or replace conventional methods for switching between media items. Such methods and interfaces enhance the user experience, as the user is able to switch between media items quickly. In particular, users watching media items will be able to browse through different sequences of media items easily and intuitively.
In accordance with some implementations, a method for switching between media items is disclosed. The method is performed at an electronic device with one or more processors, memory, and a display. The electronic device obtains information about a plurality of sequences of media items, including a first sequence of media items and a second sequence of media items that is different from the first sequence of media items, and plays an initially-displayed media item of the first sequence of media items on a display. While playing the initially-displayed media item in a respective region of the display, the electronic device detects a media-change input. In response to detecting the media-change input, in accordance with a determination that the media-change input corresponds to movement in a first direction, the electronic device ceases to play the initially-displayed media item in the respective region of the display and plays a first media item in the respective region of the display. The first media item is different from the initially-displayed media item and is sequentially adjacent to the initially-displayed media item in the first sequence of media items. In accordance with a determination that the media-change input corresponds to movement in a second direction that is different from the first direction, the electronic device ceases to play the initially-displayed media item in the respective region of the display and plays a second media item in the respective region of the display, where the second media item is different from the initially-displayed media item and the first media item and is from the second sequence of media items.
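The directional behavior described above can be illustrated with a brief sketch. All names here (`MediaBrowser`, `handle_media_change`, the direction strings) are hypothetical and serve only to make the first-direction/second-direction distinction concrete; this is not the disclosed implementation.

```python
# Illustrative sketch of the directional media-change logic described above.
# All names are hypothetical, not taken from the disclosure.

class MediaBrowser:
    def __init__(self, sequences, seq_index=0, item_index=0):
        self.sequences = sequences      # list of sequences (each a list of media items)
        self.seq_index = seq_index      # which sequence is currently active
        self.item_index = item_index    # position within the active sequence

    @property
    def current_item(self):
        return self.sequences[self.seq_index][self.item_index]

    def handle_media_change(self, direction):
        """Movement in the first direction navigates within the current
        sequence; movement in the second direction switches sequences."""
        if direction == "left":         # first direction: next item in sequence
            if self.item_index + 1 < len(self.sequences[self.seq_index]):
                self.item_index += 1
        elif direction == "right":      # opposite of first direction: previous item
            if self.item_index > 0:
                self.item_index -= 1
        elif direction == "up":         # second direction: next sequence
            if self.seq_index + 1 < len(self.sequences):
                self.seq_index += 1
                self.item_index = 0     # start at the beginning of the new sequence
        elif direction == "down":       # opposite of second direction: previous sequence
            if self.seq_index > 0:
                self.seq_index -= 1
                self.item_index = 0
        return self.current_item
```

In this sketch, a swipe along one axis selects a sequentially adjacent item, while a swipe along the other axis replaces the playing item with one from a different sequence.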
In accordance with some implementations, a computer system (e.g., a client system or server system) includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing the operations of the method described above. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions which when executed by one or more processors, cause a computer system (e.g., a client system or server system) to perform the operations of the methods described above.
The implementations disclosed herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. Like reference numerals refer to corresponding parts throughout the drawings.
Client device 110-1 in
In some implementations, client device 110-1 is any one of: a personal computer, a mobile electronic device, a wearable computing device, a laptop, a tablet computer, a mobile phone, a digital media player, or any other electronic device able to prepare media content for presentation, control presentation of media content, and/or present media content. For example, server system 120-1 is operated and/or provided by a subscription-based media streaming service with which a user, optionally, has an account associated with account credentials that enable client device 110-1 to communicate with and receive content from content sources such as server system 120-1, P2P network 132, network cache 136, and/or redundant content host server(s) 138.
In some implementations, client device 110-1 includes a first electronic device (e.g., a controlling electronic device) and a second electronic device (e.g., a controlled electronic device), and both the first electronic device and the second electronic device are associated with a common user account (or associated user accounts) provided by a content provider with which server system 120-1 is associated. The first electronic device (e.g., a personal computer or a set top box) is optionally associated with account credentials and receives content from server system 120-1, and the second electronic device is a media presentation device (e.g., a set of speakers, a display, a television set, etc.) that receives the content from the first electronic device and presents that content to the user.
In some implementations, client device 110-1 includes a media content presentation and control application 104 (hereinafter “media application”). Media application 104 is able to control the presentation of media by client device 110-1. For example, media application 104 enables a user to navigate media content items, select media content items for playback on client device 110-1, select media streams for presentation, change currently displayed media streams, create and edit playlists, and other such operations.
In some implementations, media content is stored by client device 110-1 (e.g., in a local cache such as a media content buffer 105 and/or in permanent storage at client device 110-1). In some implementations, the media content is stored by a server system 120-1 (e.g., an origin server), which is located remotely from client device 110-1. In some implementations, the media content is stored by one or more computing devices in media delivery system 150, discussed in more detail below with reference to
In some implementations, the data sent from (or streamed from) server system 120-1 is stored/cached by client device 110-1 in a local cache such as one or more media content buffers 105 in the memory of client device 110-1. Media content stored in media content buffer(s) 105 is, typically, removed after the media content is presented by client device 110-1, allowing new media content data to be stored in media content buffer 105. At least some of the media content stored in media content buffer(s) 105 is, optionally, retained for a predetermined amount of time after the content is presented by client device 110-1 and/or until other predetermined conditions are satisfied. For example, the content is stored until the content has been presented by the client device, the content corresponding to a media tile is stored until the media corresponding to the media tile has reached an end of the content (e.g., an end of a movie/television show or sporting event), or the content corresponding to a first media tile is stored until the client device switches to playing content corresponding to a second media tile, to enable the user to play the content corresponding to the first media tile again without re-downloading the content (e.g., in response to activation of a “play again” or “replay” affordance in a media player user interface). Media content buffer 105 is configured to store media content from more than one media content stream. Storing data in a buffer while it is being moved from one place to another (e.g., temporarily storing compressed data received from a content source before it is processed by a codec and/or temporarily storing decompressed data generated by a codec before it is rendered by a renderer) is sometimes referred to as “buffering” data, and data stored in this way is sometimes referred to as “buffered” data.
“Buffered” data is typically, but optionally, removed (or marked for deletion) from the buffer in which it was stored after it is transmitted from the buffer to its destination (e.g., a codec or a renderer), rather than being stored for later use.
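The buffering and retention behavior described above can be sketched as follows. The class, its field layout, and the time-based retention policy are illustrative assumptions, not the disclosed buffer implementation.

```python
# Hypothetical sketch of a media content buffer that retains presented
# content for a predetermined amount of time before eviction. Names and
# the retention policy are illustrative assumptions.
import time

class MediaContentBuffer:
    def __init__(self, retention_seconds=60.0):
        self.retention_seconds = retention_seconds
        self._chunks = {}  # chunk_id -> (data, presented_at or None)

    def store(self, chunk_id, data):
        # Newly received content has not yet been presented.
        self._chunks[chunk_id] = (data, None)

    def mark_presented(self, chunk_id):
        # Record when the chunk was presented; retention is measured from here.
        data, _ = self._chunks[chunk_id]
        self._chunks[chunk_id] = (data, time.monotonic())

    def evict_stale(self):
        """Remove chunks presented more than retention_seconds ago,
        freeing space for new media content data."""
        now = time.monotonic()
        stale = [cid for cid, (_, presented_at) in self._chunks.items()
                 if presented_at is not None
                 and now - presented_at > self.retention_seconds]
        for cid in stale:
            del self._chunks[cid]
        return stale
```

A real buffer would also bound total size and support multiple concurrent streams; this sketch only captures the retain-then-evict lifecycle described above.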
In some implementations, when client device 110-1 includes a first electronic device and a second electronic device, media application 104 (e.g., on a set top box) is also able to control media content presentation by the second electronic device (e.g., a set of speakers or a television set or other display connected to the set top box), which is distinct from the first electronic device. Thus, in some circumstances, the user is able to use media application 104 to cause the first electronic device to act both as a media presentation device as well as a remote control for other media presentation devices. This enables a user to control media presentation on multiple electronic devices from within media application 104 and/or using a single user interface.
When a user wants to play back media on client device 110-1, the user is enabled to interact with media application 104 to send a media control request to server system 120-1. Server system 120-1 receives the media control request over one or more networks 115. For example, the user is enabled to press a button on a touch screen of client device 110-1 in order to send the media control request to server system 120-1. As described below, a media control request is, for example, a request to begin presentation of media content by client device 110-1. Though often used herein to describe requests to initiate or begin presentation of media by client device 110-1, media control requests optionally also include requests and/or signals to control other aspects of the media that is being presented on client device 110-1, including but not limited to commands to pause, skip, fast-forward, rewind, seek, adjust volume, change the order of items in a playlist, add or remove items from a playlist, adjust audio equalizer settings, change or set user settings or preferences, provide information about the currently presented content, begin presentation of a media stream, transition from a current media stream to another media stream, and the like. In some implementations, media controls control what content is being delivered to client device 110-1 (e.g., if the user pauses playback of the content, delivery of the content to client device 110-1 is stopped). However, the delivery of content to client device 110-1 is, optionally, not directly tied to user interactions with media controls.
For example, while the content that is delivered to client device 110-1 is selected based on a user request for particular content by the user, the content optionally continues to be delivered to client device 110-1 even if the user pauses playback of the content (e.g., so as to increase an amount of the content that is buffered and reduce the likelihood of playback being interrupted to download additional content). In some implementations, if user bandwidth or data usage is constrained (e.g., the user is paying for data usage by quantity or has a limited quantity of data usage available), client device 110-1 ceases to download content if the user has paused or stopped the content, so as to conserve bandwidth and/or reduce data usage.
Client-server environment 100 in
In some circumstances, the received media control request includes information identifying the client device (e.g., an IP address) to which server system 120-1 should forward the media control request. For example, a user, optionally, has multiple client devices that can present media received from server system 120-1, such as a mobile phone, a computer system, a tablet computer, a television, a home stereo, etc. The identifying information optionally includes a unique or semi-unique device identifier, such as an IP address, a Media Access Control (MAC) address, a user-specified device name, an International Mobile Equipment Identity (IMEI) number, or the like. Accordingly, the media control request will identify that a request is intended for the home stereo, for example, so that server system 120-1 can send the requested media and/or the media control request to the home stereo. Client device 110-1 optionally provides server system 120-1 with an indication of device capabilities of the device such as screen resolution, processing speed, video buffer size/availability, available bandwidth, target/desired bandwidth, codec availability, and the like, and the server system provides content to the electronic device in accordance with the device capabilities.
In some implementations, server system 120-1 includes a context database 126. Context database 126 stores data associated with the presentation of media content by client device 110-1 that includes, among other things, the current position in a media content stream that is being presented by client device 110-1, a playlist associated with the media content stream, previously played content, skipped pieces of media content, and previously indicated user preferences. For example, context database 126, optionally, includes information that a content stream to client device 110-1 is currently presenting a song at 1 minute and 23 seconds into the song, as well as all the songs played in the last hour and the next 20 songs in the playlist. In some circumstances, server system 120-1 transmits the context associated with a media content stream to client device 110-1 that is presenting the content stream so that one or more items of context information can be used by client device 110-1, such as for display to the user. When the client device to which the media content is being streamed changes (e.g., from client device 110-1 to client device 110-n), server system 120-1 transmits the context associated with the active media content to the newly active client device (e.g., client device 110-n).
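The kind of playback context described above, and its hand-off when the active client device changes, can be sketched as follows. All class and field names are assumptions made for illustration.

```python
# Illustrative context record mirroring the fields described for context
# database 126; names and structure are assumptions, not the disclosure's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlaybackContext:
    stream_id: str
    position_seconds: float                          # current position in the stream
    playlist: List[str] = field(default_factory=list)
    previously_played: List[str] = field(default_factory=list)
    skipped: List[str] = field(default_factory=list)

class ContextDatabase:
    def __init__(self):
        self._by_device = {}  # device_id -> PlaybackContext

    def update(self, device_id, context):
        self._by_device[device_id] = context

    def transfer(self, old_device_id, new_device_id):
        """When streaming moves to a new client device, hand it the
        context associated with the active media content."""
        context = self._by_device.pop(old_device_id)
        self._by_device[new_device_id] = context
        return context
```

The `transfer` method models the behavior described above in which the server transmits the active context to the newly active client device.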
When client device 110-1 sends a media control request to server system 120-1 for media content, server system 120-1 (e.g., media delivery module 122) responds to the request by utilizing source information to instruct one or more of the computing devices in media delivery system 150 to send media content associated with the media control request to client device 110-1 as requested or sends relevant source information to client device 110-1 that enables client device 110-1 to request the media content associated with the media control request from a source (e.g., P2P network 132, network cache 136, and/or redundant content host servers 138). Client device 110-1 optionally obtains media content associated with the media control request from a local cache such as media content buffer 105. Client device 110-1 optionally utilizes locally stored source information to request or obtain media content associated with the media control request from one or more computing devices in media delivery system 150 (e.g., P2P network 132, network cache 136, or redundant content host servers 138).
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. Memory 212 optionally stores a subset of the modules and data structures identified above. Furthermore, Memory 212 optionally stores additional modules and data structures not described above.
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. Memory 306 optionally stores a subset of the modules and data structures identified above. Furthermore, Memory 306 optionally stores additional modules and data structures not described above.
Although
Attention is now directed towards implementations of user interfaces (“UI”) and associated processes that are, optionally, implemented on an electronic device with a display and a touch-sensitive surface, such as electronic device 110.
Attention is now directed towards
Attention is now directed towards
Attention is now directed towards
Attention is now directed towards
In
In
In response to detecting the movement 518 of contact 514 in
Alternatively, in response to detecting downward movement 520 of contact 514 as shown in
In
In
In
In
A device (e.g., electronic device 110 as shown in
The plurality of sequences of media items optionally include a sequence of episodic media items (e.g., episodes of a television show) in episode order. The plurality of sequences of media items optionally include a user-curated sequence of media items (e.g., a user-generated queue) in a user-determined order (e.g., a playlist or video queue determined by a user of the device or a playlist or video queue determined by another user and subscribed to by the user of the device). The plurality of sequences of media items optionally include a sequence of live video streams (e.g., television channels in channel order). The plurality of sequences of media items optionally include a sequence of thematically related media items in an order determined in accordance with a user preference profile (e.g., a set of movies that are recommended for the user based on past viewing habits or ratings of other movies and/or television shows). The plurality of sequences of media items optionally include live television channels, such as Channels 2, 3, and 4, that correspond to an ordered sequence of broadcast television channels, cable channels, satellite channels, or the like. The plurality of sequences of media items optionally include time-shifted recordings of television content, as with a VCR or TiVo. The plurality of sequences of media items optionally include on-demand content, such as movies or past seasons of a TV show. The plurality of sequences of media items optionally include content genres, such as science fiction, news, sports, and drama. The plurality of sequences of media items optionally include sequential media, such as episodes 1, 2, and 3 of a show. The plurality of sequences of media items optionally include user activity lists, such as starred, history, and/or favorites. The plurality of sequences of media items optionally include lists curated by editors or experts, such as staff picks or favorites of an artist or director.
The plurality of sequences of media items optionally include suggestions generated based on a user preference profile, like featured, top picks for you. The plurality of sequences of media items optionally include subscribed social lists, like playlists, followed feeds, and/or media items shared/liked/commented on by friends.
In some implementations, the ordering of media items in a respective sequence of media items is based on a predefined content order, such as episodes 1, 2, and 3. The ordering of media items in a respective sequence of media items is optionally based on a human-generated order (such as the first, second, and third items the user chose when assembling a list) or an order resulting from user activity (a history list, for instance, showing the items the user most recently watched, in reverse chronological order). The ordering of media items in a respective sequence of media items is optionally based on a computer-generated order, such as a list of recommendations in which the first item is the algorithm's best guess for a content item the user would be interested in viewing now.
The device plays (704) an initially-displayed media item (e.g., an initial media item or a currently playing media item, as shown in
In some implementations, the initially-displayed media item is displayed without displaying representations of other media items (e.g., the initially-displayed media item is full-screen) and displaying the media-context user interface includes reducing a size of the initially-displayed media item and partially displaying representations of the one or more other media items in regions of the display that were previously occupied by the initially-displayed media item (e.g., the initially-displayed media item is pushed back so that edges of the other media items are displayed adjacent to the other media items). For example, in
A user performs (706) a media-change input (e.g., a touch on a touchscreen, a click and drag with a mouse, a wave gesture at a motion sensing device). The media-change input and the context-display input described above are, in some circumstances, part of a continuously detected gesture (e.g., a touch and swipe gesture or a wave and swipe gesture). While playing the initially-displayed media item in a respective (e.g., central) region of the display, the device detects (708) the media-change input. In some implementations, prior to detecting the media-change input, the device preloads at least a portion of one or more media items other than the initially-displayed media item, and in response to detecting the media-change input, displays a preloaded portion of one of the media items other than the initially-displayed media item. For example, the device requests media content corresponding to one of the adjacent media items before the adjacent media item is requested by a user of the device, so as to improve the responsiveness of the device. In some implementations, media items are not preloaded. In situations where not all four of the adjacent media items are preloaded (e.g., to conserve bandwidth), the right and top media items are preloaded while the other adjacent media items are not (e.g., because users are most likely to swipe down and to the left). Preloading media items that correspond to adjacent media streams is described in greater detail in U.S. Prov. Pat. App. No. 61/836,079, entitled “System and Method for Switching between Media Streams while Providing a Seamless User Experience,” filed Jun. 17, 2013, which is hereby incorporated by reference in its entirety.
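The bandwidth-constrained preloading described above can be sketched as follows. The priority order, the preload budget, and the fetch callback are illustrative assumptions; only the idea of preloading the most likely-to-be-revealed adjacent items comes from the text.

```python
# Sketch of directional preloading under a bandwidth constraint: when not all
# four adjacent items can be preloaded, fetch the ones most likely to be
# revealed. Per the text, users most often swipe down and to the left,
# revealing the top and right tiles, so those are prioritized here.

PRELOAD_PRIORITY = ["right", "top", "left", "bottom"]  # illustrative ordering

def preload_adjacent(adjacent_items, fetch, budget=2):
    """adjacent_items maps direction -> media item id (or None);
    fetch(item_id) downloads an initial portion of that item."""
    preloaded = []
    for direction in PRELOAD_PRIORITY:
        if len(preloaded) >= budget:
            break
        item = adjacent_items.get(direction)
        if item is not None:
            fetch(item)
            preloaded.append(item)
    return preloaded
```

With a budget of two, only the right and top neighbors are fetched, matching the prioritization described above.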
In response to detecting the media-change input, in accordance with a determination that the media-change input corresponds to movement (710) in a first direction, the device ceases (712) to play the initially-displayed media item in the respective (e.g., central) region of the display (e.g., sliding a video tile that corresponds to the initially-displayed media item off of the display in accordance with the media-change input) and plays (714) (e.g., starts to play) a first media item in the respective (e.g., central) region of the display. The first media item is different from the initially-displayed media item and is sequentially adjacent to the initially-displayed media item in the first sequence of media items. For example, the first sequence of media items includes episodes of a television show, and the first media item is the next episode of the television show, which comes after the episode that corresponds to the initially-displayed media item. For example, in
In contrast, in accordance with a determination that the media-change input corresponds to movement (716) in a second direction that is different from the first direction, the device ceases (718) to play the initially-displayed media item in the respective (e.g., central) region of the display and plays (720) (e.g., starts to play) a second media item in the respective (e.g., central) region of the display. The second media item is different from the initially-displayed media item and the first media item and is from the second sequence of media items. For example, the second sequence of media items is a user-selected queue of on-demand videos, and the second media item is a first item in the user-selected queue. For example, in
In some implementations, the initially-displayed media item has a position in the first sequence of media items that is after a beginning of the first sequence of media items and the second media item has a position at a beginning of the second sequence of media items. Thus, in some of these implementations, regardless of which media item in the first sequence of media items is being displayed, when switching to a media item in a different sequence of media items, the user gets the first (which is the most likely to be relevant, or which is intended to be viewed first) item of that sequence (swiping up from B7 leads to A1, and swiping up from B8 also leads to A1). For example, in
In some implementations, the initially-displayed media item has an ordinal position in the first sequence of media items and the second media item has the same ordinal position in the second sequence of media items. For example, for the sequences of media items shown in
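The two sequence-switching policies described above, always landing at the beginning of the new sequence versus preserving ordinal position, can be compared in a short sketch; the function and policy names are illustrative.

```python
# Sketch of the two sequence-switching policies described above; names are
# illustrative. `target_index` returns the landing index in the new sequence.

def target_index(policy, current_index, new_sequence):
    if policy == "beginning":
        # Always land on the first item of the new sequence: swiping up from
        # B7 leads to A1, and swiping up from B8 also leads to A1.
        return 0
    elif policy == "ordinal":
        # Preserve the ordinal position where possible (B7 -> A7),
        # clamped to the length of the new sequence.
        return min(current_index, len(new_sequence) - 1)
    raise ValueError(f"unknown policy: {policy}")
```

The "beginning" policy suits sequences meant to be viewed from the start (e.g., recommendations), while the "ordinal" policy suits parallel sequences with corresponding positions.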
In some implementations, in response to detecting the media-change input, in accordance with a determination that the media-change input corresponds to movement in a third direction that is substantially opposite to the first direction, the device ceases to play the initially-displayed media item in the respective (e.g., central) region of the display and plays (e.g., starts to play) a third media item in the respective (e.g., central) region of the display. The third media item is different from the initially-displayed media item, the first media item, and the second media item. For example, when the first direction is left-to-right, the first media item is a media item (e.g., a media item that corresponds to tile 506-3 in
In some implementations, in response to detecting the media-change input, in accordance with a determination that the media-change input corresponds to movement in a fourth direction that is substantially opposite to the second direction, the device ceases to play the initially-displayed media item in the respective (e.g., central) region of the display and plays (e.g., starts to play) a fourth media item in the respective (e.g., central) region of the display. The fourth media item is different from the initially-displayed media item, the first media item, the second media item, and the third media item, and is from a third sequence of media items that is different from the first sequence of media items and the second sequence of media items. For example, if the media-change input is an upward swipe input, then the fourth media item is from a row of media items below the first row (e.g., a row of media items in the “Sci-Fi” genre that are recommended for the user). In this example, in
In some implementations, the device presents a user interface using a respective language (e.g., a default language, a user-selected language, or a language selected based on predefined criteria such as a manufacturing location or operating location of the device). For example, when the respective language is a language that has a left-to-right primary reading direction (e.g., English), the first direction is along a left-to-right axis of the display. When the respective language is a language that has a top-to-bottom primary reading direction (e.g., Japanese or Chinese), the first direction is along a top-to-bottom axis of the display. In particular, if the written language is left-to-right, then leftward and rightward swipes cause navigation within a sequence of media items and a downward or upward swipe causes navigation between different sequences of media items; whereas if the written language is top-to-bottom, then a downward or upward swipe causes navigation within the sequence of media items, and a leftward or rightward swipe causes navigation between different sequences of media items.
Thus, in some implementations, the direction in which the sequence of media items is configured to be navigated is chosen based on culture. For example, if the device is set to use a particular language, or to operate in a particular geographic region, whose written language reads left-to-right, the device arranges the tiles to match the reading direction of that language or region. Likewise, if the device is set to use a particular language, or to operate in a particular geographic region, whose written language reads top-to-bottom, the device arranges the tiles to match that reading direction.
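The mapping from reading direction to navigation axes described above can be sketched as follows. The language table is an illustrative assumption and intentionally incomplete; it is not a claim about how any real locale system classifies these languages.

```python
# Sketch mapping a language's primary reading direction to navigation axes,
# as described above. The direction table is an illustrative assumption.

READING_DIRECTION = {
    "en": "left-to-right",
    "ja": "top-to-bottom",  # assumption: traditional vertical layout
}

def navigation_axes(language_code):
    """Return (within_sequence_axis, between_sequences_axis) so that
    within-sequence navigation follows the primary reading direction."""
    direction = READING_DIRECTION.get(language_code, "left-to-right")
    if direction == "left-to-right":
        return ("horizontal", "vertical")
    return ("vertical", "horizontal")
```

Unrecognized language codes fall back to left-to-right here purely for the sketch; a real device might instead consult its locale settings or operating region.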
In some implementations, while playing a currently playing media item, the device detects a directory-view input (e.g., a pinch gesture). For example, in
In some implementations, while displaying a directory view that includes a large number of video tiles, the device plays video in a recently active video tile while displaying non-video content (e.g., images or text corresponding to the content associated with the video tiles) in the other video tiles. For example, in
In some implementations, prior to playing the initially-displayed media item, the device displays a media item directory in a respective arrangement (e.g., with the first sequence of media items, the second sequence of media items and, optionally, one or more other sequences of media items displayed in a particular order on the display). For example, in
In some implementations, the initially-displayed media item corresponds to on-demand content, and ceasing to play the initially-displayed media item includes ceasing to play the initially-displayed media item at a stop point (e.g., a particular timestamp). After ceasing to play the initially-displayed media item that corresponds to on-demand content, the device detects a predefined input that corresponds to a request to resume playing the initially-displayed media item and in response to detecting the predefined input, the device resumes playing the initially-displayed media item at a predefined point relative to the stop point. For example, the device detects a predefined input that corresponds to movement that is substantially opposite to the movement that corresponds to the media-change input and, in response, resumes playing the initially-displayed media item at the stop point, a predefined interval before the stop point, or a predefined interval after the stop point. For example, if Media Item A is on-demand content, when the device navigates to Media Item B in response to a right-to-left swipe, as shown in
In some implementations, the initially-displayed media item corresponds to live content, and ceasing to play the initially-displayed media item includes ceasing to play the initially-displayed media item at a stop point (e.g., a particular timestamp). After ceasing to play the initially-displayed media item that corresponds to live content, the device detects a predefined input that corresponds to a request to resume playing the initially-displayed media item and, in response to detecting the predefined input, the device resumes playing the initially-displayed media item at a current point in the initially-displayed media item that is different from the stop point. For example, the device detects a predefined input that corresponds to movement that is substantially opposite to the movement that corresponds to the media-change input and, in response, the device resumes playing the live content “live,” without regard to the location of the stop point in the respective media content. For example, if Media Item A is live content, when the device navigates to Media Item B in response to a right-to-left swipe, as shown in
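The contrasting resume behaviors for on-demand and live content described above can be sketched as follows; the function signature, the `rewind` interval, and the `live_edge` callback are illustrative assumptions.

```python
# Sketch of the resume behavior described above: on-demand content resumes
# relative to its stop point, while live content rejoins the live edge.
# `live_edge` is an illustrative callback returning the current live position.

def resume_position(kind, stop_point, live_edge=None, rewind=0.0):
    if kind == "on-demand":
        # Resume at the stop point, or a predefined interval before it,
        # without going past the beginning of the content.
        return max(0.0, stop_point - rewind)
    elif kind == "live":
        # Resume "live", without regard to where playback stopped.
        return live_edge()
    raise ValueError(f"unknown content kind: {kind}")
```

For on-demand content the stop point is remembered; for live content it is deliberately ignored in favor of the current broadcast position.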
It should be understood that the particular order in which the operations in
Plural instances are, optionally, provided for components, operations, or structures described herein as a single instance. Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and optionally fall within the scope of the implementation(s). In general, structures and functionality presented as separate components in the example configurations are, optionally, implemented as a combined structure or component. Similarly, structures and functionality presented as a single component are, optionally, implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the implementation(s).
It will also be understood that, although the terms “first” and “second” are, in some circumstances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without changing the meaning of the description, so long as all occurrences of the “first contact” are renamed consistently and all occurrences of the “second contact” are renamed consistently. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined (that a stated condition precedent is true)” or “if (a stated condition precedent is true)” or “when (a stated condition precedent is true)” is, optionally, construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
The foregoing description included example systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative implementations. For purposes of explanation, numerous specific details were set forth in order to provide an understanding of various implementations of the inventive subject matter. It will be evident, however, to those skilled in the art that implementations of the inventive subject matter are, optionally, practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques have not been shown in detail.
The foregoing description, for purposes of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles and their practical applications, to thereby enable others skilled in the art to best utilize the implementations, with various modifications as are suited to the particular use contemplated.
This application claims priority to U.S. Provisional Patent Application Ser. No. 61/892,343, filed Oct. 17, 2013, entitled “System and Method for Switching between Media Items in a Plurality of Sequences of Media Items,” which application is incorporated by reference in its entirety. This application is related to U.S. Provisional Patent Application Ser. No. 61/836,079, filed Jun. 17, 2013, entitled “System and Method for Switching Between Media Streams while Providing a Seamless User Experience;” U.S. Provisional Patent Application Ser. No. 61/861,330, filed Aug. 1, 2013, entitled “Transitioning from Decompressing One Compressed Media Stream to Decompressing another Media Stream;” U.S. Provisional Patent Application Ser. No. 61/881,353, filed Sep. 23, 2013, entitled “System and Method for Efficiently Providing Media and Associated Metadata,” which applications are incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
5646866 | Coelho et al. | Jul 1997 | A |
5682207 | Takeda et al. | Oct 1997 | A |
6208354 | Porter | Mar 2001 | B1 |
6384869 | Sciammarella et al. | May 2002 | B1 |
6804824 | Potrebic et al. | Oct 2004 | B1 |
6919929 | Iacobelli et al. | Jul 2005 | B1 |
7519223 | Dehlin et al. | Apr 2009 | B2 |
7797713 | Dawson et al. | Sep 2010 | B2 |
8146019 | Kim et al. | Mar 2012 | B2 |
8341662 | Bassett et al. | Dec 2012 | B1 |
8341681 | Walter et al. | Dec 2012 | B2 |
8532464 | Randall | Sep 2013 | B2 |
8564728 | Petersson et al. | Oct 2013 | B2 |
8683377 | Zuverink et al. | Mar 2014 | B2 |
8736557 | Chaudhri | May 2014 | B2 |
20040003399 | Cooper | Jan 2004 | A1 |
20040221306 | Noh | Nov 2004 | A1 |
20050114885 | Shikata et al. | May 2005 | A1 |
20050138658 | Bryan | Jun 2005 | A1 |
20050234992 | Haberman | Oct 2005 | A1 |
20060015904 | Marcus | Jan 2006 | A1 |
20060075428 | Farmer et al. | Apr 2006 | A1 |
20060159184 | Jang | Jul 2006 | A1 |
20070028270 | Ostojic et al. | Feb 2007 | A1 |
20070067815 | Bowen et al. | Mar 2007 | A1 |
20070083911 | Madden et al. | Apr 2007 | A1 |
20070169156 | Zeng | Jul 2007 | A1 |
20070263066 | Henning et al. | Nov 2007 | A1 |
20080126919 | Uskali et al. | May 2008 | A1 |
20080155459 | Ubillos | Jun 2008 | A1 |
20090046545 | Blinnikka | Feb 2009 | A1 |
20090100380 | Gardner et al. | Apr 2009 | A1 |
20090119594 | Hannuksela | May 2009 | A1 |
20090195515 | Lee | Aug 2009 | A1 |
20100077441 | Thomas et al. | Mar 2010 | A1 |
20100153999 | Yates | Jun 2010 | A1 |
20100162180 | Dunnam et al. | Jun 2010 | A1 |
20100175026 | Bortner et al. | Jul 2010 | A1 |
20100180297 | Levine et al. | Jul 2010 | A1 |
20100235733 | Drislane et al. | Sep 2010 | A1 |
20100235746 | Anzures | Sep 2010 | A1 |
20100287586 | Walter et al. | Nov 2010 | A1 |
20110066703 | Kaplan et al. | Mar 2011 | A1 |
20110090402 | Huntington et al. | Apr 2011 | A1 |
20110119611 | Ahn et al. | May 2011 | A1 |
20110119711 | Marshall et al. | May 2011 | A1 |
20110242002 | Kaplan et al. | Oct 2011 | A1 |
20110289139 | McIntosh et al. | Nov 2011 | A1 |
20110289534 | Jordan et al. | Nov 2011 | A1 |
20110296351 | Ewing et al. | Dec 2011 | A1 |
20120030619 | Lee et al. | Feb 2012 | A1 |
20120054679 | Ma et al. | Mar 2012 | A1 |
20120079429 | Stathacopoulos et al. | Mar 2012 | A1 |
20120131459 | Ilama-Vaquero et al. | May 2012 | A1 |
20120137216 | Choi | May 2012 | A1 |
20120141095 | Schwesinger et al. | Jun 2012 | A1 |
20120158802 | Lakshmanan | Jun 2012 | A1 |
20120170903 | Shirron et al. | Jul 2012 | A1 |
20120180090 | Yoon et al. | Jul 2012 | A1 |
20120182384 | Anderson et al. | Jul 2012 | A1 |
20120204106 | Hill et al. | Aug 2012 | A1 |
20120213295 | Quere et al. | Aug 2012 | A1 |
20120216117 | Arriola et al. | Aug 2012 | A1 |
20120221950 | Chao et al. | Aug 2012 | A1 |
20120254793 | Briand et al. | Oct 2012 | A1 |
20120254926 | Takahashi et al. | Oct 2012 | A1 |
20120257120 | Nakai | Oct 2012 | A1 |
20120290933 | Rajaraman et al. | Nov 2012 | A1 |
20120311444 | Chaudhri | Dec 2012 | A1 |
20120323917 | Mercer et al. | Dec 2012 | A1 |
20130016129 | Gossweiler III et al. | Jan 2013 | A1 |
20130080895 | Rossman et al. | Mar 2013 | A1 |
20130145268 | Kukulski | Jun 2013 | A1 |
20130179925 | Woods et al. | Jul 2013 | A1 |
20130222274 | Mori et al. | Aug 2013 | A1 |
20130236158 | Lynch et al. | Sep 2013 | A1 |
20130263047 | Allen et al. | Oct 2013 | A1 |
20130265501 | Murugesan et al. | Oct 2013 | A1 |
20130275924 | Weinberg et al. | Oct 2013 | A1 |
20130283154 | Sasakura | Oct 2013 | A1 |
20130293454 | Jeon et al. | Nov 2013 | A1 |
20130307792 | Andres et al. | Nov 2013 | A1 |
20130309986 | Cox et al. | Nov 2013 | A1 |
20130332835 | Mace | Dec 2013 | A1 |
20140098140 | Tran et al. | Jan 2014 | A1 |
20140059479 | Hamburg et al. | Feb 2014 | A1 |
20140082497 | Chalouhi et al. | Mar 2014 | A1 |
20140108929 | Garmark et al. | Apr 2014 | A1 |
20140114985 | Mok et al. | Apr 2014 | A1 |
20140143725 | Lee | May 2014 | A1 |
20140157124 | Roberts et al. | Jun 2014 | A1 |
20140164984 | Farouki | Jun 2014 | A1 |
20140176479 | Wardenaar | Jun 2014 | A1 |
20140178047 | Apodaca et al. | Jun 2014 | A1 |
20140282281 | Ram et al. | Sep 2014 | A1 |
Number | Date | Country |
---|---|---|
1672923 | Jun 2006 | EP |
1775953 | Apr 2007 | EP |
2469841 | Jun 2012 | EP |
WO 2009088952 | Jul 2009 | WO |
WO 2011095693 | Aug 2011 | WO |
WO 2013022486 | Feb 2013 | WO |
Entry |
---|
Hoffert, Office Action U.S. Appl. No. 14/165,508, Apr. 21, 2014, 17 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,514, May 9, 2014, 19 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,512, May 28, 2014, 19 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,517, May 28, 2014, 18 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,507, May 14, 2014, 18 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,513, Jun. 6, 2014, 13 pgs. |
Hoffert, Final Office Action U.S. Appl. No. 14/165,508, Sep. 22, 2014, 24 pgs. |
Hoffert, Final Office Action U.S. Appl. No. 14/165,517, Oct. 7, 2014, 7 pgs. |
Hoffert, Notice of Allowance U.S. Appl. No. 14/165,512, Oct. 14, 2014, 5 pgs. |
Spotify AB, Invitation to Pay Additional Fees and Partial International Search Report, PCT/US2014/042571, Sep. 24, 2014, 6 pgs. |
Hoffert, Final Office Action U.S. Appl. No. 14/165,514, Oct. 23, 2014, 23 pgs. |
Hoffert, Final Office Action U.S. Appl. No. 14/165,513, Nov. 7, 2014, 14 pgs. |
Hoffert, Notice of Allowance, U.S. Appl. No. 14/165,517, Jan. 21, 2015, 6 pgs. |
Hoffert, Final Office Action U.S. Appl. No. 14/165,507, Oct. 22, 2014, 20 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,508, Jan. 5, 2015, 24 pgs. |
Spotify AB, International Search Report, PCT/US2014/042571, Dec. 12, 2014, 6 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,514, Mar. 3, 2015, 19 pgs. |
Hoffert, Office Action U.S. Appl. No. 14/165,513, Mar. 27, 2015, 16 pgs. |
Hoffert, Notice of Allowance U.S. Appl. No. 14/165,512, Mar. 2, 2015, 6 pgs. |
Hoffert, Notice of Allowance, U.S. Appl. No. 14/165,507, Mar. 16, 2015, 17 pgs. |
Hoffert, Notice of Allowance, U.S. Appl. No. 14/165,508, Mar. 2, 2015, 5 pgs. |
Hoffert, Notice of Allowance, U.S. Appl. No. 14/165,517, Apr. 28, 2015, 6 pgs. |
ISO/IEC 14496-12, Oct. 1, 2005, International Standard, ISO/IEC, XP55178146, 94 pgs. |
Siglin, “Unifying Global Video Strategies, MP4 File Fragmentation for Broadcast, Mobile and Web Delivery,” A Transitions in Technology White Paper, Nov. 16, 2011, 16 pgs. |
Spotify AB, International Search Report and Written Opinion, PCT/IB2014/002831, Mar. 19, 2015, 11 pgs. |
Spotify AB, Invitation to Pay Additional Fees and Partial Search Report, PCT/IB2014/002726, Mar. 31, 2015, 8 pgs. |
Zambelli, Alex, “IIS Smooth Streaming Technical Overview,” Mar. 1, 2009, Microsoft Corporation, downloaded from http://dfpcorec-p.international.epo.org/wf/storage/14C3247F2EA000308DF/originalPdf, 8 pgs. |
Number | Date | Country | |
---|---|---|---|
20150113407 A1 | Apr 2015 | US |
Number | Date | Country | |
---|---|---|---|
61892343 | Oct 2013 | US |