This application relates to the technical field of data processing, more specifically to methods and apparatuses associated with user effected adaptive streaming.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Existing web-based multi-media streaming methods often require a user to use one of a set of default resolutions (240p, 360p, 480p, 720p, etc.) for streaming and viewing the multi-media content. As a result, streaming of the multi-media content often defaults to either a website's default or the lowest common denominator (in the case of streaming to multiple users). If improving the streaming is desired, a user typically must manually select a lower or higher resolution (if available). Further, adjustment of resolution is typically made through an unfriendly form-type interface. Additionally, the user typically makes the adjustment without knowledge of the streaming context, such as available bandwidth, what resolution will provide good quality, and so forth. Thus, the user will typically make the adjustment on a trial-and-error basis: make an adjustment, observe whether the streaming progress bar suggests the content is being received faster than it is played back, and, if not, make another adjustment and repeat the process. However, the average user often does not understand this process, and will instead simply pause the media player, go do something else, and return sometime later when the higher quality stream has been received. The end result is generally a poor and frustrating user experience in consuming multi-media content.
There are commercial streaming mechanisms for automatically adjusting the streaming given the detected available bandwidth. However, these mechanisms typically remove the user and their requirements from the equation, and thus can also provide a frustrating user experience, especially if the user is willing to use a lower quality stream (e.g., when quickly scanning or reviewing some multi-media). Further, the server side typically has no knowledge of the resulting “window” size being used to display the multi-media content on the client device. Hence, streamed content is often not scaled for the display unit of the client device, and users are often forced to use a set window size.
The above problems are also evident in existing single/multi-user video conferencing and social networking videoconferencing. A user is typically unable to selectively adjust their viewing experience in view of their own streaming context. Further, in multi-user meeting/conference situations, a user is unable to increase the quality of one stream over other streams (e.g., to view the current speaker or a whiteboard more clearly, and other people in the meeting less clearly).
Embodiments of the present invention will be described by way of exemplary embodiments, but not limitations, illustrated in the accompanying drawings in which like references denote similar elements, and in which:
Methods, apparatuses and storage medium associated with multi-media streaming with user effected adaptation are disclosed. In various embodiments, a method may include receiving, by a device, streaming of a multi-media content from a multi-media server, and determining, by the device, current multi-media streaming context of the device. The method may further include providing, by the device, a user control for a user of the device to effect adaptation of the streaming of the multi-media content. The user control may include a plurality of control selections having associated qualitative descriptions of the control selections. Other embodiments may be disclosed or claimed.
Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that alternate embodiments may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that alternate embodiments may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments.
Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the illustrative embodiments; however, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Further, descriptions of operations as separate operations should not be construed as requiring that the operations be necessarily performed independently and/or by separate entities. Descriptions of entities and/or modules as separate modules should likewise not be construed as requiring that the modules be separate and/or perform separate operations. In various embodiments, illustrated and/or described operations, entities, data, and/or modules may be merged, broken into further sub-parts, and/or omitted.
The phrase “in one embodiment” or “in an embodiment” is used repeatedly. The phrase generally does not refer to the same embodiment; however, it may. The terms “comprising,” “having,” and “including” are synonymous, unless the context dictates otherwise. The phrase “A/B” means “A or B”. The phrase “A and/or B” means “(A), (B), or (A and B)”. The phrase “at least one of A, B and C” means “(A), (B), (C), (A and B), (A and C), (B and C) or (A, B and C)”.
In various embodiments, multi-media player 124 may be configured to render streamed multi-media content on display unit 108, through GPU 106. Multi-media player 124 may be configured to cooperate with multi-media server 132 to enable the multi-media content to be adaptively streamed. Cooperation may include determining the streaming context, which may include available bandwidth of a network connection between client device 102 and multi-media server 132, the processing capability of the GPU 106 (including decoding capability of an embedded or external decoder), the processing capability of processor and memory arrangement 104, the display capability (e.g., screen size) of display unit 108, and so forth. Cooperation may further include providing the determined information, and/or configuration information of the device, to the server. Further, cooperation may include jointly arriving with the server at the operation parameters of the streaming, such as resolution, color depth, encoding and/or compression scheme, bit rate, and so forth. Additionally, multi-media player 124 may be configured to provide a user control feature to enable a user to effect the adaptive streaming. As will be described in more detail below, the user control feature may be in view of the determined streaming context, and may include features that assist the user in effecting the adaptive streaming, thus potentially providing a better user experience in consuming the streamed multi-media content. Multi-media player 124 (except for the earlier described aspects) is otherwise intended to represent a broad range of media players known in the art.
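The context determination and parameter negotiation described above may be sketched as follows. This is a minimal illustrative sketch, not an implementation from the disclosure: the `StreamingContext` fields, the bandwidth thresholds, and the `negotiate_parameters` function are all assumptions chosen for clarity.

```python
# Hypothetical sketch of a client-side streaming-context probe and the
# joint derivation of streaming parameters; field names and thresholds
# are illustrative assumptions, not part of the disclosure.
from dataclasses import dataclass

@dataclass
class StreamingContext:
    bandwidth_kbps: int   # measured available network bandwidth
    gpu_decode: bool      # whether a hardware decoder is available
    screen_height: int    # display capability, e.g. 720 or 1080 lines

def negotiate_parameters(ctx: StreamingContext) -> dict:
    """Derive streaming parameters from the measured context."""
    # Pick the highest resolution both the bandwidth and display sustain.
    if ctx.bandwidth_kbps >= 5000 and ctx.screen_height >= 1080:
        resolution = "1080p"
    elif ctx.bandwidth_kbps >= 2500 and ctx.screen_height >= 720:
        resolution = "720p"
    elif ctx.bandwidth_kbps >= 1000:
        resolution = "480p"
    else:
        resolution = "240p"
    # Prefer a hardware-decodable scheme when a decoder is present.
    codec = "h264" if ctx.gpu_decode else "mjpeg"
    return {"resolution": resolution, "codec": codec}
```

In practice the server would validate these values against the representations it can actually serve before streaming begins.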
In various embodiments, as described earlier, processor and memory arrangement 104 may be configured to enable OS 122, including multi-media player 124, and media application 120 to be operated therein. Processor and memory arrangement 104 is intended to represent a broad range of processor and memory arrangements, including but not limited to arrangements with single or multi-core processors of various execution speeds and power consumptions, and memory of various architectures with one or more levels of caches, and of various types, dynamic random access, FLASH, and so forth.
In various embodiments, GPU 106 (with decoder 126) may be configured to provide video decoding and/or graphics processing functions to OS 122 and/or media application 120, through multi-media player 124, while display unit 108 may be configured to enable multi-media content, e.g., HD video, to be rendered thereon. Examples of graphics processing functions may include, but are not limited to, transform, lighting, triangle setup/clipping, polygon processing, and so forth.
OS 122 (except for multi-media player 124) and media application 120 are intended to represent a broad range of these elements known in the art. Examples of OS 122 may include, but are not limited to, Windows® operating systems, available from Microsoft Corporation of Redmond, Wash., Linux, available from, e.g., Red Hat of Raleigh, N.C., Android™, developed by the Open Handset Alliance, or iOS, available from Apple Computer of Cupertino, Calif. Examples of media application 120 may include, but are not limited to, videoconferencing applications, or generic application agents, such as a browser. Examples of a browser may include, but are not limited to, Internet Explorer, available from Microsoft Corporation of Redmond, Wash., or Firefox, available from Mozilla of Mountain View, Calif.
Similarly, multi-media server 132 and network(s) 134 are intended to represent a broad range of these elements known in the art. Examples of multi-media server 132 may include, but are not limited to, a video server from Netflix, Inc. of Los Gatos, Calif., or a video server from CNN of Atlanta, Ga. Network(s) 134 may include wired or wireless, local or wide area, private or public networks, including the Internet.
Referring now to
As illustrated, in various embodiments, media application 120 may include user interface 202 for rendering video images 204 of an adaptively streamed multi-media content. Further, user interface 202 may include user control feature 206 to enable a user to effect the adaptive streaming. In various embodiments, user control feature 206 may include a number of control selections 212 (e.g., resolutions 1080p, 720p, 480p, 360p and/or 240p) for the user to select and control the adaptive streaming. In alternate embodiments, control selections may be, e.g., 32-bit color depth, 24-bit color depth, 16-bit color depth, 256 colors, and/or monochrome, instead. Further, user control feature 206 may include a control selection of “audio only” 214, whereby streaming of video images will be halted. Additionally, in various embodiments, control selections 212 may have corresponding qualitative descriptions (e.g., “Low,” “OK,” “Normal,” “Good,” “Very Good,” and/or “Excellent” in terms of the overall quality of the audio/video rendering) to assist the user in selecting one of the control selections, accounting for the possibility that the user might be a non-technical user who does not fully appreciate the resolution or other control selections. User control feature 206 may also include a colored background 216 having a continuous spectrum of different shades of different colors (e.g., from dark red, medium dark red, and light red, through light green and medium dark green, to dark green) to further assist the user in selecting one of the control selections. In alternate embodiments, background 216 may be a continuous spectrum of grayscales instead.
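The pairing of control selections with qualitative descriptions might be represented as a simple lookup, sketched below. The exact pairings, including placing “audio only” on the same scale, are assumptions for illustration; the disclosure does not fix a particular mapping.

```python
# Illustrative mapping from control selections to the qualitative
# descriptions shown to the user; the specific pairings are assumptions.
SELECTIONS = ["audio only", "240p", "360p", "480p", "720p", "1080p"]
QUALITY_LABELS = ["Low", "OK", "Normal", "Good", "Very Good", "Excellent"]

def describe_selection(selection: str) -> str:
    """Return the qualitative label displayed next to a control selection."""
    return QUALITY_LABELS[SELECTIONS.index(selection)]
```

Such a mapping lets a non-technical user reason about “Good” versus “Excellent” rather than about pixel counts.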
In various embodiments, user control feature 206 may be presented in the form of a slider, with a slidable feature 218, operated using, e.g., a cursor control device or a finger/stylus (in the case of touch-sensitive screens), for the user to make a selection. User control feature 206 may also include recommendation indicator 220 to recommend to the user which control selection or selections to choose.
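One way the recommendation indicator could be driven by the determined streaming context is sketched below. The bitrate thresholds are hypothetical values, not figures from the disclosure.

```python
# Hypothetical helper for a recommendation indicator: choose which
# control selection to highlight given the measured bandwidth.
# Approximate bitrates needed per selection (kbps), in ascending order;
# these thresholds are assumptions for illustration.
RECOMMENDED_KBPS = {"240p": 700, "360p": 1000, "480p": 2000,
                    "720p": 4000, "1080p": 8000}

def recommend(bandwidth_kbps: int) -> str:
    """Return the highest selection whose bitrate fits the bandwidth."""
    best = "audio only"  # fall back to audio only if nothing fits
    for selection, needed in RECOMMENDED_KBPS.items():
        if bandwidth_kbps >= needed:
            best = selection
    return best
```

The slider could then render the indicator next to the returned selection, leaving the final choice to the user.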
In various embodiments, as described earlier, media application 120 may be a video conferencing application. Accordingly, video images 304a-e may be images of various participants of a videoconference. Thus, with respective user control features 306a-306e, a user may selectively and individually control the adaptive streaming of different conference participants, e.g., favoring one or a subset of the conference participants over other conference participants.
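The per-participant control described above, where one stream is favored over the others, might look like the following sketch. The class, the participant identifiers, and the default/high/low quality values are assumptions for illustration.

```python
# Sketch of per-participant stream control in a multi-user conference:
# the user raises one participant's quality and lowers the rest.
class ConferenceStreams:
    def __init__(self, participants):
        # Every participant starts at the same default quality.
        self.quality = {p: "360p" for p in participants}

    def favor(self, participant, high="720p", low="240p"):
        """Raise one participant's stream quality; lower all others."""
        for p in self.quality:
            self.quality[p] = high if p == participant else low
```

For example, favoring the current speaker frees bandwidth that would otherwise be spent on streams the user is not watching closely.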
At block 404, multi-media player 124 may cooperate with multi-media server 132 in adaptively streaming the multi-media content. As described earlier, as part of the cooperation, multi-media player 124 may determine the streaming context of client device 102. From block 404, method 400 may proceed to block 406.
At block 406, multi-media player 124 may provide user control feature 206/306a-e for a user to effect adaptive streaming as earlier described. If method 400 arrives at block 406 without having first passed through block 404, multi-media player 124 may likewise first make a determination of the streaming context of client device 102, before providing the user control feature. At block 406, method 400 may remain and await the user making a selection from the presented control selections. On receipt of a user selection, method 400 may proceed/return to block 404, wherein multi-media player 124 may cooperate with multi-media server 132 to (further) adapt streaming of the multi-media content, in view of the streaming context of client device 102 and the user selection. Thereafter, method 400 may proceed to block 406 again, and continue operation therefrom.
In alternate embodiments, after looping for a period of time waiting for user selection, method 400, in lieu of continuing to loop at block 406, may optionally proceed to block 408 instead (as denoted by the dashed lines). At block 408, method 400 may enter an idle state with user control feature 206/306a-e hidden. From block 408, method 400 may then proceed either to block 406 again, in response to a user request for the user control feature 206/306a-e as described earlier, or to block 404 again, in response to a change in the streaming context, e.g., a change in bandwidth, a change in device workload, and so forth. On return to block 404, method 400 may again first adapt the streaming in view of the changed context, e.g., changing resolution, changing color depth (including changing from color to monochrome), and then proceed to block 406 again to provide the user with a means to effect the adaptation, as earlier described.
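The transitions among blocks 404, 406, and 408 described above can be sketched as a small state machine. The event names (`"user_selection"`, `"timeout"`, `"context_change"`, `"user_request"`) are labels assumed for this sketch; the disclosure describes the transitions in prose rather than as named events.

```python
# Minimal state-machine sketch of method 400: block 404 (adapt),
# block 406 (present control / await selection), block 408 (idle).
def next_block(current: int, event: str) -> int:
    """Return the next block of method 400 given the current block and event."""
    if current == 404:                   # adaptation done
        return 406                       # present the user control
    if current == 406:
        if event == "user_selection":    # user picked a control selection
            return 404                   # re-adapt using the selection
        if event == "timeout":           # no selection for a period of time
            return 408                   # hide the control and idle
        return 406                       # otherwise keep waiting
    if current == 408:
        if event == "context_change":    # e.g., bandwidth or workload change
            return 404
        if event == "user_request":      # user asks for the control again
            return 406
        return 408
    raise ValueError(f"unknown block {current}")
```

Reading the transitions this way makes clear that block 404 is re-entered from both the user path and the context-change path.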
Accordingly, a better user experience in consuming streamed multi-media content may potentially be achieved.
Referring back to
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described, without departing from the scope of the embodiments of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments of the present disclosure be limited only by the claims and the equivalents thereof.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/CN11/84784 | 12/28/2011 | WO | 00 | 8/19/2014 |