In general, this relates to providing 3D animation in an electronic device. In particular, this relates to selectively adjusting the frame rate during a 3D animation for the purpose of optimizing resource use in an electronic device.
3D animation is used in a host of electronic applications, ranging from the graphics on cellular phones and digital audio players such as the iPod™ to sophisticated video games and animated movies. Similar to traditional 2D animation, 3D animation fundamentally involves displaying a series of still images at a rate fast enough to create the optical illusion of motion. Each still image is displayed on an electronic screen by manipulating the color and intensity of the pixels that constitute the display.
The process of generating a 3D animation usually begins with describing the object of the animation (e.g. the ball in a bouncing ball animation) using a computer model, such as a wireframe model. The spatial and temporal trajectory of the animation sequence is created by providing multiple frames of the animated object in which the object is incrementally changing. The rate at which the frames are provided is referred to as the frame rate. Lastly, each frame (or a series of frames) is rendered to create a realistic 2D image from the 3D model contained in the frame. The rendering process generally uses the 3D model in each frame to determine the kind of texture and lighting that should be applied to the image so that the finished image has perspective and depth.
The process of creating animation, and in particular the rendering step, can often be computationally very expensive. This can be a particular problem in portable electronic devices, such as cellular phones and digital audio players, where power consumption, memory space, and CPU power are often limiting factors. Thus, there is a need in the art for systems and methods for creating resource-friendly 3D animation.
Accordingly, systems, methods, and computer-readable media are provided for generating 3D animation using limited hardware resources.
Nominally, the images that make up an animation sequence are provided to a screen at a constant rate. However, the amount of computational resources needed to generate each image is not constant. Generally, images that occupy more pixels (i.e., more screen area) require more resources to model and render. A large incremental change between consecutive images also tends to increase resource requirements. The present invention can limit the amount of resources needed to create a 3D animation by selectively decreasing the frame rate of the animation during segments that are deemed too resource intensive.
The invention can generate 3D animation with frame rates that are dependent on the computational complexity of rendering each image. The system may include hardware that is configured to store a collection of frames that compose an animation of a 3D object. An optimal frame rate for each frame may be computed based on the resources required for each frame. The system may then select a group of frames to render and provide the rendered images at their associated optimal frame rate to a screen, thereby creating a resource-limited animation.
In one embodiment, the animation is of a rotating object, and in particular, a 2D rotating object such as a music album cover. In this scenario, images of the object that are parallel and almost parallel to the screen are the most computationally expensive to render (i.e., require significant system resources). Thus, fewer frames of the object in these orientations are rendered and displayed during the animation. The exact point in the animation sequence at which to slow down the frame rate may be determined by plotting the relationship between the frame number of the animation sequence and the number of pixels occupied by the corresponding image. This plot may be monotonically increasing for a 2D object that is rotated 90 degrees. An upper limit placed on the number of pixels used may translate to the frame number where the frame rate should be decreased.
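This threshold computation can be sketched as follows. It is a minimal illustration assuming the projected screen area of a flat object rotating from edge-on (0 degrees) to face-on (90 degrees) grows as the sine of the rotation angle; the function names, object dimensions, and pixel limit are hypothetical and not taken from the source.

```python
import math

def projected_pixels(width_px, height_px, angle_deg):
    """Approximate screen area (in pixels) of a flat rectangle rotated
    angle_deg away from edge-on (0 = perpendicular to the screen,
    90 = parallel to the screen, i.e. fully facing the viewer)."""
    return width_px * height_px * math.sin(math.radians(angle_deg))

def slowdown_frame(num_frames, width_px, height_px, pixel_limit):
    """Return the first frame index whose image exceeds pixel_limit
    (i.e. where the frame rate should be decreased), or None if the
    limit is never exceeded during the 0-90 degree rotation."""
    for i in range(num_frames):
        angle = 90.0 * i / (num_frames - 1)  # even spacing over the rotation
        if projected_pixels(width_px, height_px, angle) > pixel_limit:
            return i
    return None
```

For example, with eleven frames of a 300x300-pixel album cover and a limit of 45,000 pixels, the pixel count first exceeds the limit at frame index 4, so the frame rate would be decreased from that frame onward.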
The computations performed to generate the 3D animation may be carried out on the same device or in two separate devices or pieces of software. In one embodiment, a preprocessor may be used to cache a collection of frames in memory. The preprocessor may compute the optimal frame rate of each frame and store it with the cached frame in, for example, a look up table.
During runtime, the same or a separate device/software may select a group of frames from the collection generated by the preprocessor and order the frames such that the desired animation sequence is created. The frames may then be rendered to create the final images that are seen by the user. Each image may be provided at its optimal frame rate.
In another embodiment, the frame rate during runtime of the system may be a predetermined constant due to the limitations of the hardware or for other reasons. In this case, images with optimal frame rates lower than the runtime frame rate may be held on the screen over multiple frame update cycles, thereby eliminating the need for re-rendering.
Persons of ordinary skill in the art will appreciate that at least some of the various embodiments described herein can be combined with one another or with other embodiments without departing from the spirit of the present invention.
The above and other objects and advantages of the invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
Referring to
User input component 102 is illustrated in
Device 100 can also have a communications port 103 that allows it to communicate with other electronic devices, such as computers, digital audio players, video players, et cetera. Communications port 103 may provide a direct, wired connection to other devices, or it may provide a wireless connection to other devices, or both. For example, port 103 can support one or more of USB, Wi-Fi (e.g., an 802.11 protocol), Ethernet, Bluetooth™ (which is a trademark owned by Bluetooth SIG, Inc.), high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, any other communications protocol, or any combination thereof.
One or more local servers 114 and corresponding database 115 may also be connected to network 111. Local server 114 and database 115 may be implemented on a personal computer, for example. Local server 114 and database 115 may interact with remote server 112 and remote database 113 to obtain the aforementioned data files, among other reasons. The data files may be downloaded by local server 114 automatically or in response to a user input. Local server 114 may or may not be directly connected to portable device 110, and the steps of certain computations and processes may be performed partially on local server 114 and partially on portable device 110.
Electronic device 200 can include control processor 202, storage 204, memory 206, communications circuitry 208, input/output circuitry 210, display circuitry 212 and/or power supply circuitry 214. In some embodiments, electronic device 200 can include more than one of each component or circuitry, but for sake of simplicity, only one of each is shown in
Processor 202 can be configured to perform any function. Processor 202 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, and/or any other application.
Storage 204 can be, for example, one or more storage mediums, including for example, a hard-drive, flash memory, permanent memory such as ROM, any other suitable type of storage component, or any combination thereof. Storage 204 may store, for example, media data (e.g., graphics data files), application data, firmware, wireless connection information data (e.g., information that may enable electronic device 200 to establish a wireless connection), subscription information data (e.g., information that keeps track of podcasts or audio/video broadcasts or other media a user subscribes to), contact information data (e.g., telephone numbers and email addresses), calendar information data, any other suitable data, or any combination thereof.
Memory 206 can include cache memory, semi-permanent memory such as RAM, and/or one or more different types of memory used for temporarily storing data. Memory 206 can also be used for storing data used to operate electronic device applications.
Communications circuitry 208 can permit device 200 to communicate with one or more servers or other devices using any suitable communications protocol. For example, communications circuitry 208 may support Wi-Fi (e.g., an 802.11 protocol), Ethernet, Bluetooth™ (which is a trademark owned by Bluetooth SIG, Inc.), high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, any other communications protocol, or any combination thereof.
Input/output circuitry 210 can convert (and encode/decode, if necessary) analog signals and other signals (e.g., physical contact inputs (from, e.g., a multi-touch screen), physical movements (from, e.g., a mouse), analog audio signals, etc.) into digital data. Input/output circuitry 210 can also convert digital data into any other type of signal, and vice versa. The digital data can be provided to and received from processor 202, storage 204, memory 206, or any other component of electronic device 200. Although input/output circuitry 210 is illustrated in
Display circuitry 212 can accept and/or generate signals for presenting media information (textual and/or graphical) on a display such as those discussed below. For example, display circuitry 212 can include a coder/decoder (CODEC) to convert digital media data into analog signals. Display circuitry 212 also can include display driver circuitry and/or circuitry for driving display driver(s). The display signals can be generated by processor 202 or display circuitry 212. The display signals can provide media information related to media data received from communications circuitry 208 and/or any other component of electronic device 200. In some embodiments, display circuitry 212, like any other component discussed herein, can be integrated into and/or electrically coupled to electronic device 200.
Power supply 214 can provide power to the components of device 200. In some embodiments, power supply 214 can be coupled to a power grid (e.g., a wall outlet or automobile cigarette lighter). In some embodiments, power supply 214 can include one or more batteries for providing power to a portable electronic device. As another example, power supply 214 can be configured to generate power in a portable electronic device from a natural source (e.g., solar power using solar cells).
Bus 216 can provide a data transfer path for transferring data to, from, or between control processor 202, storage 204, memory 206, communications circuitry 208, and any other component included in electronic device 200.
In some embodiments, electronic device 200 may be coupled to a host device (not shown) for performing any suitable operation that may require electronic device 200 and a host device to be coupled. The host device may perform operations such as data transfers and software or firmware updates. The host device may also execute one or more operations in lieu of electronic device 200 when memory 206 does not have enough memory space, or processor 202 does not have enough processing power to perform the operations efficiently. For example, if electronic device 200 is required to render images that are too large to be stored in memory 206, electronic device 200 may be coupled to a host device for the host device to execute the computations. Alternatively, the host device may perform one or more operations in conjunction with electronic device 200 so as to increase the efficiency of electronic device 200. For example, if electronic device 200 needs to perform several steps in a process, electronic device 200 may execute some of the steps while the host device executes the rest.
The host device may be any device that is suitable for executing operations that the host device may need to execute when coupled to electronic device 200. The host device may be a device that is capable of functioning like electronic device 200 (e.g., a device that is capable of producing 3D animation). In some embodiments, a plurality of electronic devices may be coupled to a host device to share data using the host device as a server. In other embodiments, an electronic device may be coupled to a plurality of host devices (e.g., for each of the host devices to serve as a backup for data stored in the electronic device).
Electronic device 200 may be coupled with a host device over a communications link using any suitable approach. As an example, the communications link may be any suitable wireless connection. The communications link may support any suitable wireless protocol such as, for example, Wi-Fi (e.g., an 802.11 protocol), Bluetooth®, infrared, GSM, GSM plus EDGE, CDMA, quadband, or any other suitable wireless protocol. Alternatively, the communications link may be a wired link that is coupled to both electronic device 200 and the host device (e.g., a wire with a USB connector or a 30-pin connector). A combination of wired and wireless links may also be used to couple electronic device 200 with a host device.
Referring now to
Object 301, for illustration purposes, is shown in
Time axis 316 indicates the temporal order in which frames 302-312 can be provided to the display screen to create the animation. In particular, frame 302 can be provided at time 0, frame 303 can be provided at time t1, frame 304 can be provided at time t2, et cetera. Frames 303-312 can be provided in succession at a specified rate to create a 3D animation of object 301 rotating 180 degrees clockwise. Although it is not explicitly shown in
Nominally, the frame rate of an animation, the rate at which the frames are updated, is constant during the entire length of the animation. For example, a commonly used frame rate is 30 frames per second (fps), although higher and lower frame rates may also be used in many applications. Thus, an eleven-frame animation operating at a constant frame rate of 30 fps would run for approximately T=0.37 seconds. However, in accordance with embodiments of the present invention, using a frame rate that is variable during an animation may have certain performance advantages for the reasons discussed below.
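The arithmetic behind the example duration is simply the frame count divided by the rate, as a minimal sketch shows (the function name is illustrative):

```python
def animation_duration(num_frames, fps):
    """Run time, in seconds, of an animation shown at a constant frame rate."""
    return num_frames / fps

# Eleven frames at a constant 30 fps run for about 0.37 seconds.
duration = animation_duration(11, 30)
```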
The amount of hardware resources used in rendering an image from a model is highly dependent on the complexity of the image. In many cases, the complexity of an image is correlated with the screen area (i.e., the number of pixels) that the image occupies. The incremental change in occupied screen area between consecutive frames also adds complexity. For example, it requires more hardware resources to model, render, and display frames 306-308 in an animation than frames 302-305 and 309-312, because the images resulting from frames 306-308 utilize more pixels than those of the other frames, and because the incremental change in pixel use between the images of frames 306-308 is greater than that between the other frames. Thus, a decrease in overall resource use in generating an animation sequence may be achieved by selectively decreasing the frame rate during an expensive segment of the sequence. In this manner, resource-intensive frames are provided fewer times.
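The two cost drivers described above, occupied screen area and frame-to-frame change, can be combined into a simple per-frame cost heuristic. This is an illustrative sketch, not the patent's actual metric; the equal weighting of the two terms is an assumption.

```python
def frame_cost(pixel_counts):
    """Heuristic rendering cost per frame: the screen area an image
    occupies plus the incremental change in area from the previous
    frame (the first frame's change is measured from a blank screen).
    Frames where both terms peak, e.g. a flat object near parallel
    to the screen, score highest."""
    costs = []
    prev = 0
    for px in pixel_counts:
        costs.append(px + abs(px - prev))
        prev = px
    return costs
```

For instance, `frame_cost([0, 100, 400, 900])` yields `[0, 200, 700, 1400]`: the later, larger, faster-changing frames carry most of the cost, and are the candidates for a reduced frame rate.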
Examples of resources that may be saved by using the scheme described above include memory, power, CPU operation, and other types of electronic resources. These savings are especially important for portable electronic devices, where hardware resources are limited and where heavy use of those resources can severely drain the battery, reducing the useful life of the product between charges. In particular, many portable electronic devices that do not contain a graphics card, or other hardware dedicated to performing the expensive computations required in animation, would benefit from the present invention.
With continued reference to
Although the above discussion focuses on animation sequences with two different frame rates, it is understood that any number of different frame rates may be used during an animation sequence to produce the desired resource-saving effects. The changes in frame rate may also be continuous rather than at discrete intervals.
Curve 351 provides a characterization of the animation trajectory and may take a variety of shapes. In a 90 degree rotation of a flat object, like the first half of the animation shown in
Referring now to
In one embodiment, preprocessor 401 may be a software program, such as iTunes™, that runs on a server or other computing device, such as local server 114 of
For example, preprocessor 401 may generate table 403 for a particular animation object, such as object 301 of
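A lookup table of the kind the preprocessor might produce can be sketched as follows. This is an assumption-laden illustration: the rule that frames exceeding a pixel budget receive half the base rate, and the specific ids, budget, and rates, are chosen to mirror the document's example rather than taken from it.

```python
def build_frame_table(frame_ids, pixel_counts, base_fps=10, pixel_limit=45000):
    """Map each cached frame id to an optimal frame rate: frames whose
    rendered images would occupy more than pixel_limit pixels are
    assigned half the base rate, so they are displayed less often."""
    return {
        fid: (base_fps // 2 if px > pixel_limit else base_fps)
        for fid, px in zip(frame_ids, pixel_counts)
    }
```

For example, `build_frame_table([305, 306, 307], [30000, 80000, 90000])` assigns frame 305 the full 10 fps and the larger frames 306 and 307 only 5 fps, matching the two-rate scheme described above.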
The remainder of the computations for generating an animation may be performed on runtime processor 402. In one embodiment, runtime processor 402 may be implemented in the circuitry or software of a portable electronic device, such as device 110 of
In some cases, runtime processor 402 may have a predetermined rate at which it updates the display. For example, runtime processor 402 may display 10 frames per second regardless of the animation sequence that is being shown. In these cases, frames that could be provided optimally at a lower rate than the frame rate of runtime processor 402 may be held over multiple frame cycles (i.e., rather than changing the frame and incurring expense in system resources, the frame remains as it was for one or more extra cycles). For example, if runtime processor 402 runs at 10 fps and selects all of the cached frames 404 from preprocessor 401 to display, it may hold frames 306-308 over two cycles because the frame rate of frames 306-308 is only 5 fps according to preprocessor 401. By holding frames 306-308 over multiple cycles, calculations such as rendering calculations need not be performed unnecessarily, thereby decreasing the use of system resources (or not increasing the use of those resources which would otherwise occur). Runtime processor 402 creates upcoming frames list 406 to reflect this detail by listing each of frames 306-308 twice. Using upcoming frames list 406, runtime processor 402 may render the images and display the images at the predetermined frame rate to create the 3D animation.
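The hold-over-multiple-cycles bookkeeping can be sketched like this, assuming each frame's optimal rate divides the runtime rate evenly, as in the 10 fps / 5 fps example above (the function and variable names are illustrative):

```python
def upcoming_frames(selected, rate_table, runtime_fps):
    """Expand a selected frame sequence into the per-cycle display list.
    A frame whose optimal rate is below runtime_fps is held (repeated)
    for runtime_fps // rate update cycles, so it is rendered once
    rather than re-rendered on every cycle."""
    display_list = []
    for fid in selected:
        display_list.extend([fid] * (runtime_fps // rate_table[fid]))
    return display_list
```

With frames 306-308 rated at 5 fps and a 10 fps runtime, each of those frames appears twice in the list, reproducing the doubled entries of upcoming frames list 406:
`upcoming_frames([305, 306, 307, 308], {305: 10, 306: 5, 307: 5, 308: 5}, 10)` yields `[305, 306, 306, 307, 307, 308, 308]`.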
Referring now to
At step 501, frames of one or more animated 3D objects are cached into memory. This may be done by a preprocessor that is part of or separate from the portable electronic device. For example, if the portable device is an iPod™, the frames may be cached by iTunes™ software in the memory of a personal computer. A selection of frames that are most likely to be displayed in the near future may be cached in the memory of the iPod™ as well. Alternatively, all the frames may be cached directly in the portable electronic device.
At step 502, the optimal frame rate for each cached frame is determined by assessing the computational complexity of rendering and displaying each cached frame. Persons of ordinary skill in the art will appreciate that, in accordance with some embodiments of the present invention, the assessment of optimal frame rate can be performed even earlier, offline, and downloaded with the frames into cache memory as part of step 501. As described previously, the computational complexity of generating an image generally increases as the number of pixels occupied by that image increases. The computational complexity can also increase when the incremental change between two consecutive images in an animation sequence is large. Since complex computations typically require more hardware resources, the frame rate, in accordance with embodiments of the present invention, is selectively reduced for such images.
For purposes of illustration, in one scenario, the animated object has a flat rectangular shape and undergoes a 3D rotation during the animation. As the object rotates from being perpendicular to the screen to being parallel with it, the frame rate is selectively decreased to limit the resources used by these expensive images. The cached frames and their associated frame rates may be organized into a table, such as table 403 of
Although variable frame rates are computed for each frame in step 502, the rate at which the screen of the portable electronic device is updated may be a predetermined constant. For example, the frame rate of a device might be a function of the speed in hertz of the processor. Additionally, if an animation comprises two or more objects, the computationally expensive frames of each object may occur at alternating times during the animation (e.g., two disks rotating out of sync). In this case, it may not be possible to slow down or increase the runtime frame rate of the device. Accordingly, the actual runtime frame rate at which the portable device operates is determined at step 503. This step may involve simply determining the predetermined frame rate of the portable device, or computing a frame rate that is a common multiple of the optimal frame rates found in step 502 for each object in the animation.
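One reading of this step is that each frame's optimal rate should divide the runtime rate evenly, so that every frame can be held for a whole number of update cycles; the smallest such runtime rate is the least common multiple of the optimal rates. The following is a sketch of that reading, not the patent's stated algorithm:

```python
from functools import reduce
from math import gcd

def common_runtime_rate(optimal_rates):
    """Smallest constant display rate (fps) at which every frame can be
    held for an integer number of update cycles: the least common
    multiple of the per-frame optimal rates."""
    return reduce(lambda a, b: a * b // gcd(a, b), optimal_rates)
```

For example, optimal rates of 5 and 10 fps yield a 10 fps runtime rate (the 5 fps frames are held two cycles), while out-of-sync objects rated at 6 and 10 fps would require a 30 fps runtime rate.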
The steps described above may occur during preprocessing. During runtime, the temporal layout of the animation may be determined. At step 504, a list of upcoming frames is created or modified. The frames on this list are displayed sequentially at the runtime frame rate calculated in step 503. Frames with optimal frame rates lower than the runtime frame rate may be held on the screen over multiple consecutive cycles. Since this list of upcoming frames may dynamically change in response to user input, such as in the Cover Flow user interface of the iPod™ and iPhone™, step 504 may be performed many times or even continuously.
At step 505, the frames on the list created in step 504 are rendered to create the images in the final animation. Rendering projects the 3D objects in the frames cached in step 501 onto a 2D screen. Rendering may add texture to the objects; for example, light reflection properties may be added to the objects to create the 3D effect. These rendered images are then displayed on a device screen in the order and at the frame rate specified in step 504, thereby creating a 3D animation at step 506.
It will be understood that the foregoing is only illustrative of the principles of the invention, and that various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention, and the present invention is limited only by the claims that follow.
This application claims priority to Bull et al., U.S. Provisional Patent Application No. 61/009,655 (Attorney Docket No. 104677-0183-001), filed Dec. 31, 2007, entitled “Selective Frame Rate Display of a 3D Object,” the entirety of which is incorporated herein by reference.