This Application claims priority of Taiwan Patent Application No. 100137645, filed on Oct. 18, 2011, the entirety of which is incorporated by reference herein.
1. Field of the Invention
The invention relates to an image editing method, and more particularly to real-time image editing during video recording.
2. Description of the Related Art
When a user records video files with a recorder, a mobile phone with a camera function, or another portable device, the recorded video files are usually transferred to a computer (a personal computer or a laptop) for editing. The video editing is usually performed by a video editing program, and if no video editing program is installed on the computer, or the format of the video file is not supported by the installed video editing program, the user has to find another video editing program. This causes inconvenience for the user.
An embodiment of the invention provides a real-time image editing method for an electronic device having a touch panel. The method includes the steps of executing a recording program, a timing program and an image editing program; showing an image editing option menu when an input signal is detected; receiving at least one selection signal from a user via the image editing option menu and generating an editing result; storing the editing result in a register; and, when the recording program stops, outputting, by the image editing program, an output video according to the editing result and a raw video generated by the recording program.
Another embodiment of the invention provides an electronic device that can apply real-time video editing to a raw video generated by a recording program while the recording program is running. The electronic device comprises a timer, a touch panel, an image editing program and a register. The timer is activated when the recording program is executed, and transmits a time of an image editing event to the image editing program. When the touch panel is touched, a signal is transmitted to the image editing program, and the image editing program then shows an image editing option menu to receive at least one selection signal from a user and generates an editing result. When the recording program stops, the image editing program outputs a video file according to the editing result and the raw video.
A detailed description is given in the following embodiments with reference to the accompanying drawings.
The present invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
When recording video, a control unit of the portable device shows a currently captured frame on a display device of the portable device. In this embodiment, the display device is a touch panel. In step S13, a controller of the portable device detects whether an input signal is received, wherein the input signal is generated when the user touches the display device. If the user does not touch the display device, the steps following step S13 are not performed until the user touches the display device or stops recording.
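A minimal Python sketch of this detection loop is given below; every name in it is a hypothetical placeholder rather than part of the disclosed device.

    # Illustrative sketch of the loop around step S13: editing is entered
    # only when a touch input signal is detected while recording continues.
    def recording_loop(capture_frame, show_frame, touch_detected,
                       stop_requested, handle_editing):
        while not stop_requested():
            frame = capture_frame()      # capture the current frame ...
            show_frame(frame)            # ... and show it on the display device
            if touch_detected():         # step S13: input signal received?
                handle_editing(frame)    # steps S14 to S16, described below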
When the controller detects that the user has touched the display device, step S14 is executed. The control unit of the portable device shows an image editing option menu on the display device of the portable device. Via the image editing option menu, the user selects desired image elements or special effects, or inputs text. In step S15, the control unit of the portable device or the image editing program generates an image editing result according to the user's selection. The control unit of the portable device or the image editing program transforms a plurality of image editing results into an image editing array, which is a 7×N array. The content of the image editing result can be shown as follows:
(No., component_name, time, Locate1_x, Locate1_y, Locate2_x, Locate2_y),
wherein the parameter No. indicates the number of times that the image or the video has been edited by the user during video recording. The parameter component_name represents the name of the user's selected image element or applied special effect. If the user selects a text input option to edit the image or video by inputting text, the parameter component_name comprises not only the name of the text input option, but also the content that the user has input. The parameter time represents the time at which the user touched the display device, wherein the time is synchronized with the recording time. For example, when the user starts recording, the timer counts time from 00:00:00. If the user touches the touch panel at the time 00:03:23, the time at which the image editing is made is stored as 00:03:23. The parameters Locate1_x, Locate1_y, Locate2_x and Locate2_y represent two coordinates of the image element selected by the user on the touch screen.
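For illustration only, such a seven-parameter editing result could be held in a simple data structure like the following Python sketch; the class, field and example values (including the element name) are hypothetical and merely mirror the parameters described above.

    from dataclasses import dataclass, astuple

    @dataclass
    class ImageEditingResult:
        no: int              # No.: running count of edits made during recording
        component_name: str  # selected image element, special effect, or text option plus the input text
        time: str            # touch time, synchronized with the recording time, e.g. "00:03:23"
        locate1_x: int       # first coordinate of the selected image element
        locate1_y: int
        locate2_x: int       # second coordinate of the selected image element
        locate2_y: int

    # One row of the 7xN image editing array (values are hypothetical):
    result = ImageEditingResult(1, "star_sticker", "00:03:23", 120, 80, 200, 160)
    row = astuple(result)  # (1, "star_sticker", "00:03:23", 120, 80, 200, 160)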
After the user finishes the image editing, the image editing program stores the image editing result in a register (step S16). When the user stops recording, step S17 is executed. The electronic device generates an output video file according to a raw video generated by a recording program and the image editing array, wherein the output video file comprises the image editing made by the user during the video recording.
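Purely as a non-limiting sketch, step S17 might be realized along the following lines, assuming a frame-based video interface; all names are hypothetical, and the assumption that an edit stays visible from its touch time onward is only one possible interpretation.

    # Illustrative sketch of step S17: overlay every entry of the 7xN image
    # editing array onto the raw video, starting from its recorded touch time.
    def render_output(frames_with_timestamps, editing_array, overlay, write_frame):
        # frames_with_timestamps: iterable of (frame, timestamp) pairs
        # editing_array: rows of (No., component_name, time, x1, y1, x2, y2)
        # overlay: draws one element/effect onto a frame; write_frame: emits a frame
        for frame, timestamp in frames_with_timestamps:
            for (_no, name, edit_time, x1, y1, x2, y2) in editing_array:
                if timestamp >= edit_time:   # assumed: edit persists from its touch time
                    frame = overlay(frame, name, (x1, y1), (x2, y2))
            write_frame(frame)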
For further illustration, please refer to
A user can move, rotate, zoom in or zoom out the element 31. If a multi-touch function is supported by the portable device 21, the element 31 is moved, rotated, zoomed in or zoomed out according to the user's gestures. When the location of the element 31 is determined, the image editing program stores and writes the coordinates of points L1 and L2 to an image editing array. It is noted that only the coordinate of point L2 changes when the element 31 is rotated, zoomed in or zoomed out; the coordinate of point L1 is fixed once the location of the element 31 is determined.
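As a rough geometric illustration, assuming point L1 serves as the fixed anchor about which the element is rotated or scaled, updating point L2 could look like the following Python sketch; the function and its parameters are hypothetical.

    import math

    # Illustrative sketch: only point L2 is recomputed when element 31 is
    # rotated or zoomed; point L1 stays fixed once the location is determined.
    def update_l2(l1, l2, angle_deg=0.0, scale=1.0):
        dx, dy = l2[0] - l1[0], l2[1] - l1[1]
        a = math.radians(angle_deg)
        rx = dx * math.cos(a) - dy * math.sin(a)   # rotate about L1
        ry = dx * math.sin(a) + dy * math.cos(a)
        return (l1[0] + rx * scale, l1[1] + ry * scale)  # then zoom about L1

    # e.g. zooming in by a factor of 2 about L1:
    # update_l2((120, 80), (200, 160), scale=2.0) -> (280.0, 240.0)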
When the recording program 42 is activated, the timer 44 is activated too. The timer starts counting time when the recording program starts recording. When an image editing program 47 is activated, the image editing program 47 detects whether the display device 45 has been touched by a user. In other embodiments, the display device 45 detects whether the user has touched the display device 45, and when the display device 45 has been touched, the display device 45 transmits a notification to the image editing program 47. When detecting that the user has touched the display device 45, the image editing program 47 shows an image editing option menu on the display device 45. The user can select desired image elements, special effects or input text via the image editing option menu. Furthermore, when detecting that the user has touched the display device 45, the image editing program 47 acquires a first time from the timer 44 or the timer 44 automatically transmits the first time to the image editing program 47, wherein the first time is the time that the user has touched the display device 45.
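The interaction among the display device 45, the timer 44 and the register 46 could be sketched as follows; the method names are hypothetical and only illustrate the sequence described above.

    # Illustrative sketch: when the display device reports a touch, the image
    # editing program acquires the first time from the timer, shows the image
    # editing option menu, and stores the resulting seven parameters.
    def handle_touch(timer, display, register, touch_point, edit_no):
        first_time = timer.current_time()          # time the display was touched
        selection = display.show_editing_menu()    # image element, effect, or text input
        result = (edit_no, selection.name, first_time,
                  touch_point[0], touch_point[1],
                  selection.second_point[0], selection.second_point[1])
        register.append(result)                    # one 7-parameter editing result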
After the image editing, the image editing program 47 transmits and stores the parameters generated according to the user's image editing operation in the register 46. The image editing result comprises seven parameters. Assuming the user has edited the video or image N times during the video recording, the register 46 stores a 7×N image editing array. After the video recording is finished, the image editing program 47 acquires the raw video and edits it according to the image editing array stored in the register 46 to output an edited video file.
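As a purely hypothetical example of the register contents, after three edits the 7×N array (here N = 3) might look like the following; every value shown is invented for illustration.

    # Hypothetical view of register 46 after N = 3 edits during recording;
    # each row holds the seven parameters of one image editing result.
    editing_array = [
        (1, "star_sticker",        "00:03:23", 120,  80, 200, 160),
        (2, "sepia_effect",        "00:07:02",   0,   0, 639, 479),
        (3, "text:Happy Birthday", "00:10:45",  40, 300, 400, 360),
    ]
    # After recording stops, this array and the raw video are used to
    # produce the edited output video file.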
The user then edits the raw video via the image editing option menu to generate an image editing result. The user selects an image element via the image editing option menu, and the image editing program shows a coordinate grid on the touch panel 53. The user then determines a location of the image element and may apply other operations to the image element, such as rotation, zooming in or zooming out. The image editing program generates an image editing result according to the user's operations via the image editing option menu, and the editing result is stored in a register. When the recording program stops, the image editing program outputs an output video file according to the image editing result and a raw video generated by the recording program.
Furthermore, the processor 52 may generate and store an image editing array according to a plurality of image editing results. When the recording program stops, the image editing program outputs the output video file according to the image editing array and the raw video generated by the recording program.
The content of the image editing result can be shown as follows:
(No., component_name, time, Locate1_x, Locate1_y, Locate2_x, Locate2_y),
wherein the parameter No. indicates the number of times that the image or the video has been edited by the user during video recording. The parameter component_name represents the name of the user's selected image element or applied special effect. If the user selects a text input option to edit the image or video by inputting text, the parameter component_name comprises not only the name of the text input option, but also the content that the user has input. The parameter time represents the time at which the user touched the display device, wherein the time is synchronized with the recording time. For example, when the user starts recording, the timer counts time from 00:00:00. If the user touches the touch panel at the time 00:03:23, the time at which the image editing is made is stored as 00:03:23. The parameters Locate1_x, Locate1_y, Locate2_x and Locate2_y represent two coordinates of the image element selected by the user on the touch screen.
In this embodiment, when the touch panel 53 has been touched, the timing program actively transmits a time to the image editing program, and the image editing program stores the time in the image editing result. In other embodiments, when detecting that the touch panel 53 has been touched, the image editing program transmits a request to the timing program, receives an editing time from the timing program, and stores the editing time in the image editing result.
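The two variants could be contrasted with the following minimal sketch; the object and method names are hypothetical.

    # Variant 1 (this embodiment): the timing program actively pushes the
    # time to the image editing program when the touch panel is touched.
    def register_push(timing_program, image_editing_program):
        timing_program.on_touch(image_editing_program.record_time)

    # Variant 2 (other embodiments): the image editing program detects the
    # touch, requests the time, and stores it itself.
    def on_touch_detected(timing_program, image_editing_program):
        editing_time = timing_program.request_current_time()
        image_editing_program.record_time(editing_time)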
While the invention has been described by way of example and in terms of the preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Number        Date         Country    Kind
100137645     Oct. 2011    TW         national