1. Field of the Invention
The present invention relates to a method for generating image frames, and more particularly, to a method for generating three-dimensional (3D) image frames without requiring 3D hardware.
2. Description of the Prior Art
Nowadays, 3D technology is blending into more and more aspects of consumers' everyday life, covering applications in architecture, medicine, entertainment, consumer electronics and so on. For instance, 3D films and games, as well as products such as 3D televisions and 3D digital photo frames, all deploy 3D technology heavily.
However, one of the drawbacks of 3D technology is the requirement of 3D hardware such as 3D chips, which are often costly and increase circuit space, heat dissipation and so on. In addition, certain peripherals such as 3D glasses may be needed for the user to experience the 3D effect.
Implementing 3D hardware on relatively compact (e.g. low-profile) devices, e.g. digital still cameras (DSC), mobile phones and digital photo frames, may be impractical due to factors such as size, design architecture and cost. Images displayed and user interfaces (UI) implemented on a compact device are mostly two-dimensional (2D) based, or 2D-based with switching effects (still 2D-based). Consequently, visual effects and the related user experience are limited on compact devices.
The present invention discloses a method for generating image frames. The method comprises setting a 3D display mode of the image frames; configuring at least one parameter of the image frames; displaying the image frames according to the 3D display mode and the at least one parameter of the image frames; when an input command is received while displaying the image frames, determining whether the input command corresponds to the at least one parameter of the image frames; and if the input command does not correspond to the at least one parameter, reconfiguring the at least one parameter according to the input command.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Please refer to
Step 102: start;
Step 104: setting a 3D display mode of the image frames;
Step 106: configuring at least one parameter of the image frames;
Step 108: displaying the image frames according to the 3D display mode and the at least one parameter of the image frames;
Step 110: determining if an input command is received while displaying the image frames; if so, proceed to step 112, otherwise proceed to step 108;
Step 112: decoding the received input command;
Step 114: determining whether the input command corresponds to any of the at least one parameter of the image frames; if so, proceed to step 108, otherwise proceed to step 116;
Step 116: reconfiguring the at least one parameter according to the input command.
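By way of illustration only, the overall flow of steps 102-116 may be sketched in Java as follows. This is a minimal, hedged rendering of the flowchart; the helper types (Renderer, FrameParameters) and the string-based command format are assumptions made for this example and form no part of the disclosure itself.

    import java.util.Optional;

    public class Method100Sketch {

        enum DisplayMode { PHOTO_WALL, SPIRAL, SPHERE }   // step 104 choices

        // Step 106: at least one parameter of the image frames (simplified).
        static class FrameParameters {
            double x, y, z;        // coordinates
            String movementTrace;  // e.g. "slide", "helix", "revolve"
            int frameRate;         // frames per second
        }

        // Stand-in for the output module (e.g. the frame buffer) and input source.
        interface Renderer {
            void display(DisplayMode mode, FrameParameters params); // step 108
            Optional<String> pollInput();                           // step 110
        }

        public static void run(Renderer renderer) {
            DisplayMode mode = DisplayMode.PHOTO_WALL;      // step 104
            FrameParameters params = new FrameParameters(); // step 106
            params.movementTrace = "slide";
            params.frameRate = 30;

            while (true) {
                renderer.display(mode, params);                  // step 108
                Optional<String> command = renderer.pollInput(); // step 110
                if (command.isEmpty()) {
                    continue;                                    // back to step 108
                }
                String decoded = command.get().trim().toLowerCase(); // step 112
                // Step 114: a command matching the current parameters is not new.
                if (!decoded.equals(params.movementTrace)) {
                    params.movementTrace = decoded;              // step 116
                }
            }
        }
    }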
Steps 102-106 describe the initialization of method 100 of the present invention, which determines how the image frames are presented. First, a 3D display mode for displaying the generated image frames is set; for instance, the image frames can be set in step 104 to display in modes such as photo wall, spiral and sphere. At least one parameter related to the image frames is then configured in step 106. Parameters comprise coordinates, a movement trace, a frame rate of the image frames and so on. The movement trace represents the trajectory and/or movement distance of the effect applied to the image frames. For instance, if the image frames are displayed in photo wall mode, the movement trace corresponds to a sliding trajectory of the image frames; if the image frames are displayed in spiral mode, the movement trace corresponds to a helix trajectory of the image frames; if the image frames are displayed in sphere mode, the movement trace corresponds to a revolving trajectory of the sphere.
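As an illustration of how the movement traces named above might be parameterized, the following sketch maps each display mode to a trajectory expressed as a function of a normalized time t in [0, 1]. The concrete coordinate formulas are assumptions for illustration only; the disclosure does not prescribe them.

    import java.util.function.DoubleFunction;

    public class MovementTraceSketch {

        // A point on the movement trace at normalized time t.
        record Point3D(double x, double y, double z) {}

        // Photo wall mode: frames slide along a horizontal trajectory.
        static final DoubleFunction<Point3D> SLIDE =
            t -> new Point3D(t * 100.0, 0.0, 0.0);

        // Spiral mode: frames follow a helix trajectory.
        static final DoubleFunction<Point3D> HELIX =
            t -> new Point3D(Math.cos(2 * Math.PI * t),
                             Math.sin(2 * Math.PI * t),
                             t * 10.0);

        // Sphere mode: frames revolve about the vertical axis of the sphere.
        static final DoubleFunction<Point3D> REVOLVE =
            t -> new Point3D(Math.cos(2 * Math.PI * t),
                             0.0,
                             Math.sin(2 * Math.PI * t));
    }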
After the image frames have been initialized, the image frames are displayed in step 108 according to the display mode and parameters set in steps 104 and 106 respectively. For instance, to display the image frames in step 108, a first frame data of the image frames is prepared according to the set mode and parameters. Frame data are then forwarded one by one to an output module such as an internal/external frame buffer of the compact device.
It is noted that prior to displaying a frame data of the image frames, an internal thread may be executed by the processing unit for requesting an identification of the frame data of the image frame to be displayed, as well as notifying the identification of the frame data of the image frame that is ready to be displayed. The internal thread may be a cached thread, for instance.
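Because the internal thread may be a cached thread, one plausible realization, shown purely as a hypothetical sketch, uses Java's cached thread pool to request frame data and to notify the identification of frame data that is ready; the integer identification scheme below is an assumption of this example.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    public class FramePipelineSketch {

        private final ExecutorService pool = Executors.newCachedThreadPool();
        private final BlockingQueue<Integer> readyIds = new LinkedBlockingQueue<>();

        // Request that the frame data with the given identification be prepared.
        public void requestFrame(int frameId) {
            pool.execute(() -> {
                // ... prepare the frame data for frameId here ...
                readyIds.add(frameId); // notify the identification of the ready frame
            });
        }

        // The display loop blocks here until the next ready identification arrives,
        // then forwards the corresponding frame data to the frame buffer.
        public int awaitNextReadyFrame() throws InterruptedException {
            return readyIds.take();
        }
    }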
In step 110, method 100 determines whether an input command is received while displaying the image frames. An input command may be generated externally, e.g. by a user input. Taking the photo wall mode as an example, the input command may be the user sliding through the photo wall or selecting a specific photo for viewing. In other embodiments, an input command may also be system generated, for instance, by a processing unit of the compact device.
When the processing unit does not have sufficient time/resources to process a frame data of the image frames, the processing unit generates an input command for displaying a substitute image frame, e.g. a symbol such as a sandglass is displayed to notify the user that the system is busy processing, until the frame data of the image frames has finished processing and is ready to be displayed. When a substitute frame is being displayed, the internal thread (e.g. the cached thread) notifies an identification of the frame data of the substitute image frame, and the substitute frame is forwarded to the frame buffer.
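A minimal sketch of this substitute-frame fallback follows, assuming a per-frame time budget after which a placeholder (e.g. a sandglass symbol) is forwarded instead of the still-processing frame; the timeout policy and the byte-array frame representation are assumptions of this example.

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class SubstituteFrameSketch {

        static byte[] nextFrame(Future<byte[]> pending, byte[] sandglassFrame,
                                long frameBudgetMs) {
            try {
                // Forward the real frame data if it is ready within the budget.
                return pending.get(frameBudgetMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                // Not ready yet: forward the substitute (sandglass) frame.
                return sandglassFrame;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return sandglassFrame;
            } catch (ExecutionException e) {
                // Preparation failed: keep showing the placeholder.
                return sandglassFrame;
            }
        }
    }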
When an input command is received and recognized, the input command is decoded in step 112 so as to identify the content of the input command and determine whether the received input command is new. The decoded content of the received input command is then compared to the parameters set in steps 104 and 106. If the decoded content of the received input command does not correspond to the set parameters of the image frames, the received input command is considered to be new, and the parameters of the image frames are reconfigured according to the new input command. For instance, an event handler may be invoked to reconfigure the parameters of the image frames according to the input command. Step 108 is then repeated to display the image frames according to the reconfigured parameters.
If the decoded content of the input command does correspond to the parameters of the image frames, i.e. the received input command is not new, then step 108 is repeated to display the image frames without reconfiguring their parameters, such that the image frames are displayed according to the parameters originally configured in step 106.
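Steps 112-116 can likewise be sketched as a small dispatcher that decodes the command, compares it against the current parameters, and invokes an event handler only when the command is new; the string-keyed parameter map below is illustrative and not prescribed by the disclosure.

    import java.util.Map;
    import java.util.function.Consumer;

    public class CommandDispatchSketch {

        private final Map<String, String> parameters; // current frame parameters
        private final Consumer<String> eventHandler;  // reconfigures the parameters

        CommandDispatchSketch(Map<String, String> parameters,
                              Consumer<String> eventHandler) {
            this.parameters = parameters;
            this.eventHandler = eventHandler;
        }

        void onInput(String rawCommand) {
            String decoded = rawCommand.trim().toLowerCase();           // step 112
            boolean matchesCurrent = parameters.containsValue(decoded); // step 114
            if (!matchesCurrent) {
                // The command is new: reconfigure the parameters (step 116)
                // before the display loop repeats step 108.
                eventHandler.accept(decoded);
            }
            // Otherwise step 108 simply repeats with the original parameters.
        }
    }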
Please refer to
For instance, if user interface 20 is utilized for a digital camera, the major classes represented by cubes W-Z can be set to "Camera", "Playback", "Video" and "Customize" respectively. Each major class comprises sub-classes. Taking the major class "Camera" represented by cube W as an example, the corresponding sub-classes represented by surfaces W1-W4 of cube W can be set to "Mode", "Picture Format", "Focus" and "Exposure" respectively. The options represented by slabs W1a-W1h corresponding to surface W1 (e.g. sub-class "Mode") can be set to "Manual", "Auto", "Aperture Priority", "Shutter Priority", "Program Mode", "Scene Mode", "Custom Mode 1" and "Custom Mode 2" respectively. This way, all options can be integrated into one user interface 20. The user can slide or tap cubes W-Z to reach desired options, without flipping through different pages or layers of the user interface.
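For illustration, the major class/sub-class/option hierarchy of cube W could be laid out as nested collections. Only the sub-class "Mode" is enumerated in the example above, so the remaining option lists are deliberately left empty in this hypothetical sketch.

    import java.util.List;
    import java.util.Map;

    public class UserInterface20Sketch {

        // Surfaces W1-W4 of cube W (major class "Camera"), each mapped to its slabs.
        static final Map<String, List<String>> CUBE_W = Map.of(
            "Mode", List.of("Manual", "Auto", "Aperture Priority",
                            "Shutter Priority", "Program Mode", "Scene Mode",
                            "Custom Mode 1", "Custom Mode 2"),
            "Picture Format", List.of(),  // options not enumerated above
            "Focus", List.of(),
            "Exposure", List.of());
    }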
Interface 20 is generated according to method 100 illustrated in
The user interface 20 is then displayed according to the set parameters. User interface 20 may start with a default display; for instance, surface W1 of cube W is selected and the corresponding slabs W1a-W1h are displayed by default. When a cube is selected, the border of the selected cube can be, for instance, rendered in bold or in a different color from the other cubes. Also, the selected surface of a cube faces the user, e.g. the selected surface is at the plane closest to the user. The cube can be slid to select the desired sub-class of options.
If a new input command is received while displaying the user interface 20, for instance when a user slides from surface W1 of cube W to surface W2 of cube W, the parameters are reconfigured: the movement trace of cube W is configured so as to display a sliding effect in response to the user's sliding action, and the coordinates and movement traces of slabs W1a-W1h and W2a-W2h are also reconfigured so as to perform the vertical transition from slabs W1a-W1h to slabs W2a-W2h. The vertical transition from slabs W1a-W1h to slabs W2a-W2h may require several frames to complete; hence the frame rate may be reconfigured according to how fast the user slides cube W.
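As a hedged example of reconfiguring the frame rate according to the slide speed, the following sketch maps the gesture speed to frames per second; the linear mapping and the 15-60 fps clamp are assumptions for illustration only.

    public class SlideFrameRateSketch {

        // Faster slides need more frames per second for a smooth transition.
        static int frameRateFor(double slideSpeedPxPerSec) {
            int fps = (int) Math.round(slideSpeedPxPerSec / 20.0);
            return Math.max(15, Math.min(60, fps)); // clamp to a sane range
        }
    }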
Please note that the above embodiment of user interface 20 is merely an exemplary illustration of the present invention; those skilled in the art can certainly make appropriate modifications according to practical demands, such as implementing a different number of major classes (e.g. cubes) or setting a different number of options (e.g. slabs) corresponding to one sub-class (e.g. a surface of a cube), which also belong to the scope of the present invention.
Please refer to
Please refer to
In the present embodiment, the transition of slabs corresponding to different surfaces, e.g. transiting from slabs Y1a-Y1h to slabs Y2a-Y2h, follows a vertical movement trace as shown in
Please refer to
Please refer to
Please refer to
Please refer to
Please note that the above embodiments utilizing method 100 are merely exemplary illustrations of the present invention; those skilled in the art can certainly make appropriate modifications according to practical demands, which also belong to the scope of the present invention.
In conclusion, the present invention provides a method for generating three-dimensional image frames. The method of the present invention generates image frames without requiring 3D hardware support, and is especially useful for devices that usually lack 3D hardware, such as handheld devices like digital still cameras, mobile phones and digital photo frames. Also, the method of the present invention does not require assistant peripherals such as a 3D display panel or 3D glasses to generate 3D effects. Similarly, the method of the present invention does not require software architecture such as OpenGL/ES or DirectX to generate image frames that achieve 3D effects. The existing installed hardware need not be altered to implement the method of the present invention. Moreover, the method of the present invention also provides expandability: different 3D effects other than the embodiments specified above may be implemented without altering the framework of the method of the present invention. This way, user experience can be greatly enhanced through the 3D user interface or 3D presentation generated by the method of the present invention.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention.