The field relates generally to video display on a mobile device and in particular to video delivery on mobile devices that have processing units with limited threading capabilities.
There are 7.2 billion videos streamed over the Internet today from major video sharing sites. (See http://www.comscore.com/press/release.asp?press=1015.) In the month of December 2006 alone, 58 million unique visitors visited these sites. In the coming years this number is expected to triple.
The streaming of videos is currently very popular on desktop systems. However, it is not pervasive on mobile devices, such as mobile phones, due to the many constraints associated with mobile devices. One of those constraints is that most processing units on mobile devices have limited threading capability.
The thread scheduling on most embedded processing units, such as CPUs, is not very efficient, especially when one of the threads is decoding video data with high priority. As a result, the other, low priority thread that is streaming the data from the network is “starved,” or not given a chance to execute. This results in video playback that is frequently interrupted to buffer data from the network. Thus, it is desirable to provide a system and method to stream and render videos on mobile devices that have processing units with limited threading capability, and it is to this end that the system and method are directed.
The system and method are particularly applicable to a mobile phone with a limited threading capability processing unit for streaming and rendering video, and it is in this context that the system and method will be described. It will be appreciated, however, that the system and method have greater utility since they can be used with any device that utilizes a limited threading capability processing unit and where it is desirable to be able to stream and render digital data.
The system and method provide a technique to efficiently stream and render video on mobile devices that have limited threading and processing unit capabilities. Each mobile device may be a cellular phone, a mobile device with wireless telephone capabilities, a smart phone (such as the RIM® Blackberry™ products or the Apple® iPhone™) and the like which has sufficient processing power, display capabilities, connectivity (either wireless or wired) and the capability to display/play a streaming video. However, each mobile device has a processing unit, such as a central processing unit, that has limited threading capabilities. The system and method allow a user of each mobile device to watch streaming videos on the mobile device efficiently while conserving battery power of the mobile device and processing unit usage, as described below in more detail.
In the system, a user of a mobile device can connect to the one or more directory units 16 and locate a content listing, which is then communicated from the one or more directory units 16 back to the mobile device 12. The mobile device can then request the content from the content units 18 and the content is streamed to the video unit 12f that is part of the mobile device.
In operation, the video unit 12f executing on the mobile device 12 streams content, such as videos, from the link, and the video unit spawns child applications, each of which is involved in a specific task such as streaming video, decoding video, decoding audio, or rendering video to the screen. All such processes share a file mapped memory region, or “memory window,” through which video and audio data is transmitted from one process to another. There are two different types of mobile phone devices in use today: smart phones and feature phones. Smart phones are devices that have higher CPU processing capabilities, namely a 200-500 MHz CPU with optimizations to perform multimedia operations. Most multimedia functionality is supported and accelerated through the help of special purpose integrated circuits. Smart phones also have a general purpose operating system for which applications can be built. Feature phones, on the other hand, have limited CPUs specialized for executing voice related functions, and streaming or rendering video on such devices is generally not possible. Some newer feature phone models do have support for multimedia in a limited manner. Building an application to stream and render video and sound on such devices becomes an impossible task unless careful consideration is given to the implementation. A few techniques are employed to make this possible on smaller devices without the aid of specialized accelerating hardware components; a sketch of the file mapped memory window shared between processes is given below.
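The following is a minimal, illustrative sketch in C of how a file mapped memory window may be created and shared between a parent (streaming) process and a spawned child (decoding) process. It uses generic POSIX-style calls rather than the calls of any particular mobile operating system, and the file name and window size are assumptions made only for illustration.

/* Minimal POSIX-style sketch of a file-mapped "memory window" shared
 * between a parent (streaming) process and a child (decoding) process.
 * The file name and window size are illustrative assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define WINDOW_SIZE (256 * 1024)   /* hypothetical window size */

int main(void)
{
    /* Back the window with an ordinary file so that separate processes
     * (not threads) can map the same region. */
    int fd = open("video_window.bin", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, WINDOW_SIZE) != 0) {
        perror("window setup");
        return 1;
    }

    unsigned char *window = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
    if (window == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    pid_t pid = fork();            /* spawn the child "decoding" process */
    if (pid == 0) {
        /* Child: would consume video data from the shared window here. */
        _exit(0);
    }

    /* Parent: would stream data from the network into the window here. */
    memset(window, 0, WINDOW_SIZE);

    munmap(window, WINDOW_SIZE);
    close(fd);
    return 0;
}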
The streaming process 22 and the decoding process 24 share a file mapped memory window (video data window 12g, such as a portion of memory in the mobile device in one embodiment) through which data is shared, wherein the streaming process writes to the window 12g while the decoding process consumes from the window 12g. When the streaming process (which writes the streaming content data into the window) reaches the bottom of the window, it circulates back to the top (like a circular buffer) and starts writing at the top of the window, provided that the content at the top of the window has already been consumed by the decoding process 24. If the window 12g is full, or the decoding process 24 has not yet consumed the data in the portion of the window that the streaming process 22 is trying to write new content data into, then the writing by the streaming process will pause. In most video player implementations, memory blocks are transferred from one subsystem to the other, and these transfers hold up resources, including the processing unit, because the default shared memory offered by the mobile device system is not efficient without the above-mentioned windowing scheme. In systems that support hardware acceleration, both the video decoder and the audio decoder will leverage such acceleration.
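The following is a minimal sketch of the circular write/consume discipline described above for the video data window. The structure layout, field names, and synchronization details are assumptions for illustration only; a real implementation would also require proper inter-process synchronization around the indices.

/* Illustrative sketch of the circular window discipline: the streaming
 * process writes into the window and wraps to the top only when the
 * decoding process has already consumed that region. */
#include <stddef.h>

typedef struct {
    volatile size_t write_pos;   /* next byte the streaming process writes  */
    volatile size_t read_pos;    /* next byte the decoding process consumes */
    size_t          size;        /* usable size of the data area            */
    unsigned char   data[];      /* the file-mapped window itself           */
} video_window_t;

/* Returns the number of bytes written; returns 0 when the window is full,
 * in which case the streaming process pauses and retries later. */
static size_t window_write(video_window_t *w, const unsigned char *buf, size_t len)
{
    size_t free_space = (w->read_pos + w->size - w->write_pos - 1) % w->size;
    if (len > free_space)
        return 0;                        /* writer waits for the consumer */

    for (size_t i = 0; i < len; i++)     /* wrap to the top like a circular buffer */
        w->data[(w->write_pos + i) % w->size] = buf[i];

    w->write_pos = (w->write_pos + len) % w->size;
    return len;
}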
The decoding process 24 and the rendering process 26 may share another file mapped memory window (raw frame data window 12h, such as a portion of memory in the mobile device in one embodiment). As decoding happens, the decoding process 24 will write raw frame content data to this window 12h and the rendering process 26 consumes the raw frame data from this window 12h. The decoding process 24 may wait if it does not have enough video data to decode. The rendering process 26 may also wait until it has received at least a single frame to render. In case the video is paused by the user, the content of this shared window 30 is transferred into a memory cache of the mobile device. Then, when the content is played again, the content is moved from the cache onto the screen 12b for rendering. Since processes instead of threads are used in the system and method, the operating system of the mobile device will give each process equal priority and will not “starve” any single operation.
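As an illustration of the rendering side, the sketch below shows a rendering loop that waits until at least one complete frame is available in the raw frame window before drawing. The helpers frame_available(), next_frame(), and blit_to_screen() are hypothetical placeholders for the platform-specific frame window and screen calls; they are not part of the described system.

/* Sketch of the rendering process: it waits for at least one decoded frame
 * and sleeps briefly instead of busy-waiting, so other processes are not
 * starved of the processing unit. */
#include <unistd.h>

extern int  frame_available(void);                 /* nonzero once a full frame is in the window */
extern const unsigned char *next_frame(void);      /* returns the next raw frame to draw          */
extern void blit_to_screen(const unsigned char *frame);

static void render_loop(volatile int *playing)
{
    while (*playing) {
        if (!frame_available()) {      /* wait until at least one frame is decoded */
            usleep(10 * 1000);         /* yield briefly rather than spin            */
            continue;
        }
        blit_to_screen(next_frame());  /* draw the raw frame to the device screen   */
    }
}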
The system may also incorporate YUV color conversion. Video data in most codec implementations is handled by converting the video data into the known YUV color scheme because the YUV color scheme efficiently represents color and enables the removal of non-significant components that are not perceived by the human eye. However, this conversion process is very processing unit intensive: it consists of several small mathematical operations, and these operations in turn consume more processing unit cycles and computational power, which are scarce resources on mobile phones. The system uses an efficient methodology of providing file mapped lookup tables to perform this computation, completely avoiding the standard mathematical operations and resulting in efficient processing unit usage.
In one embodiment, the tables are implemented as follows:
Y_to_R[255], Y_to_G[255], Y_to_B[255], U_to_R[255], U_to_G[255], U_to_B[255],
V_to_R[255],V_to_G[255], and V_to_B[255].
The tables thus contain a conversion table for each YUV element to each RGB element so that a simple summation is sufficient for the calculation instead of multiplications. For instance, if the Y, U, V values of a pixel are y, u, v, then the corresponding r, g, b values for the pixel are calculated by summing the corresponding table entries for each channel, as illustrated in the sketch below.
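The sketch below illustrates one possible table-driven conversion consistent with the description above. The coefficients correspond to a common BT.601-style YUV-to-RGB conversion and the tables are sized at 256 entries to cover the full 8-bit range; the actual file mapped tables and constants used by the system may differ.

/* Sketch of the lookup-table based YUV-to-RGB conversion: each RGB channel
 * of a pixel is a sum of three table lookups, so no multiplications are
 * needed in the per-pixel conversion. Coefficients are BT.601-style
 * assumptions for illustration. */
#include <stdint.h>

static int16_t Y_to_R[256], Y_to_G[256], Y_to_B[256];
static int16_t U_to_R[256], U_to_G[256], U_to_B[256];
static int16_t V_to_R[256], V_to_G[256], V_to_B[256];

static void init_tables(void)
{
    for (int i = 0; i < 256; i++) {
        Y_to_R[i] = Y_to_G[i] = Y_to_B[i] = (int16_t)i;
        U_to_R[i] = 0;
        U_to_G[i] = (int16_t)(-0.344f * (i - 128));
        U_to_B[i] = (int16_t)( 1.772f * (i - 128));
        V_to_R[i] = (int16_t)( 1.402f * (i - 128));
        V_to_G[i] = (int16_t)(-0.714f * (i - 128));
        V_to_B[i] = 0;
    }
}

static uint8_t clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = clamp(Y_to_R[y] + U_to_R[u] + V_to_R[v]);   /* summation only */
    *g = clamp(Y_to_G[y] + U_to_G[u] + V_to_G[v]);
    *b = clamp(Y_to_B[y] + U_to_B[u] + V_to_B[v]);
}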
While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.
This application claims the benefit under 35 U.S.C. 119(e) and priority under 35 U.S.C. 120 to U.S. Provisional Patent Application Ser. No. 60/989,001 filed on Nov. 19, 2007 and entitled “Method to Stream and Render Video Data On Mobile Phone CPU's That Have Limited Threading Capabilities”, the entirety of which is incorporated herein by reference.