The present invention relates to a method of and a device for creating a modified video from an input video, for example, for editing an input video captured by a camcorder.
Video contents created by means of a video recorder, such as a camcorder, generally have a lower quality than professional video contents. Even after advanced user editing of the raw camcorder footage, the resulting quality is still not satisfactory to users who are used to watching professionally edited content.
One reason why video content generated by a camcorder looks worse than professional content is that a video scene is shot by a single camera, i.e. from a single recording angle. In the case of professional content, however, multiple cameras at different angles are used, which allows the angle to be switched within a scene, for example from wide-angle shots to close-ups.
Although some video editing software is currently available to users, such software requires specialized skills, making it difficult and time-consuming to use.
It is an object of the invention to provide a method of creating a modified video from an input video.
To this end, the method according to the present invention comprises the following steps: generating at least one sub-video corresponding to a sub-view of the input video; and integrating the generated sub-video into the input video along the time axis for creating the modified video.
The modified video may include some close-up content coming from the input video, as a result of which the modified video is more attractive than the original input video.
Advantageously, the step of generating further comprises a step of identifying a sub-view, and a step of extracting sub-views from the original input video.
Advantageously, the step of integrating comprises a step of replacing a clip of the input video by a generated sub-video.
Advantageously, the integrating step comprises a step of inserting the generated sub-video into the input video.
It is also an object of the invention to provide a device for creating a modified video from an input video.
To this end, the device according to the invention comprises a first module for generating at least one sub-video corresponding to a sub-view of said input video; and a second module for integrating said sub-video into said input video along the time axis for creating said modified video.
It is also an object of the invention to provide a video recorder comprising a device as described above, for creating a modified video from an input video.
Detailed explanations and other aspects of the invention will be given below.
Particular aspects of the invention will now be explained with reference to the embodiments described hereinafter and considered in connection with the accompanying drawings, in which identical parts or sub-steps are designated in the same manner:
The method comprises a step of generating 100 at least one sub-video corresponding to a sub-view of the input video, followed by a step of integrating 110 the generated sub-video into the input video along the time axis for creating a modified video.
The input video can be in any video format, for example MPEG-2, MPEG-4, DV, MPG, DAT, AVI, DVD or MOV. The input video can be captured by a video camera, for example a camcorder or the like.
According to the invention, a sub-view is a partial view of the image in the input video. For example,
According to the invention, a sub-video consists of frames containing the data of sub-views taken from successive frames of the input video, and is generated by the generating step 100. For example,
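By way of illustration only, the following Python sketch (relying on the OpenCV library; the helper name generate_sub_video, the rectangle coordinates and the frame range are assumptions made for this sketch, not features of the invention) shows how such a sub-video could be generated by cropping the same sub-view rectangle out of successive frames of the input video:

    import cv2  # OpenCV, assumed to be available

    def generate_sub_video(input_path, x, y, w, h, start_frame, end_frame):
        """Crop the sub-view rectangle (x, y, w, h) from frames
        start_frame..end_frame of the input video (illustrative helper)."""
        cap = cv2.VideoCapture(input_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
        sub_frames = []
        for _ in range(end_frame - start_frame):
            ok, frame = cap.read()
            if not ok:
                break
            sub_frames.append(frame[y:y + h, x:x + w])  # the sub-view of this frame
        cap.release()
        return sub_frames  # the successive sub-views form the sub-video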
It is noted that in the following drawings, only one picture per different video scene is shown, to facilitate the illustration.
Step 110 is used for integrating a sub-video into the input video.
It is to be understood by the person skilled in the art that the step of integrating 110 could be implemented by various methods according to the data content of the input video, as will be explained in detail herein below.
Alternatively, as depicted by the flow chart of
In order to identify a sub-view in a video, some preferences need to be given, for example the number of desired sub-views, their size and their shape.
As illustrated by
Advantageously, the step of identifying 101 further comprises a step of detecting an object from the input video to identify a sub-view according to the detected object.
For example, by analyzing the data content of the input video, a face, a moving object or a central object could be detected as an object. As illustrated by
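Purely as an illustrative sketch of such object detection, a sub-view around a detected face could, for instance, be identified with OpenCV's Haar-cascade face detector; the helper name identify_face_sub_view and the margin value below are assumptions made for this sketch:

    import cv2

    def identify_face_sub_view(frame, margin=40):
        """Return a sub-view rectangle (x, y, w, h) around the first detected
        face, enlarged by a margin of background pixels (illustrative only)."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None  # no object detected in this frame
        x, y, w, h = faces[0]
        H, W = frame.shape[:2]
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        x1, y1 = min(x + w + margin, W), min(y + h + margin, H)
        return x0, y0, x1 - x0, y1 - y0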
Alternatively, the step of identifying 101 further comprises a step of receiving a user input allowing a user to identify a sub-view.
The sub-views can also be identified entirely by a user input through the user interface. In this case, the user selects the object to be contained in the sub-view and determines the above-mentioned preferences.
As shown in the flow chart of
For example,
The extracting step 102 may rely on predefined criteria that specify how and where to extract the sub-views.
For example, in
In another example, the extracting criteria can be to extract the data of the sub-views by tracking the detected object, so that the object always remains in the sub-views, whether it is moving or not.
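One possible, simplified reading of this tracking criterion is sketched below: the object is re-detected in every frame by a detection callable (for instance the illustrative identify_face_sub_view sketch given earlier) and the extracted sub-view is kept centred on it; a dedicated tracker could equally be used. The helper name and parameters are assumptions:

    def extract_tracked_sub_views(frames, detect, view_w, view_h):
        """Extract a sub-view of fixed size (view_w, view_h) centred on the
        object returned by detect(frame) for each frame (illustrative sketch;
        assumes the sub-view is smaller than the frame)."""
        sub_frames = []
        for frame in frames:
            H, W = frame.shape[:2]
            rect = detect(frame)
            if rect is None:
                cx, cy = W // 2, H // 2                  # fall back to the centre
            else:
                x, y, w, h = rect
                cx, cy = x + w // 2, y + h // 2
            x0 = min(max(cx - view_w // 2, 0), W - view_w)
            y0 = min(max(cy - view_h // 2, 0), H - view_h)
            sub_frames.append(frame[y0:y0 + view_h, x0:x0 + view_w])
        return sub_frames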
In another example, the extracting criteria allow a set of sub-views to be extracted by gradually varying the background size.
For example,
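A minimal sketch of this zooming criterion, under the assumption that the sub-views are centred on a fixed point (cx, cy) and that the crop size varies linearly over the clip (all names below are illustrative):

    import cv2

    def extract_zooming_sub_views(frames, cx, cy, start_size, end_size, out_size):
        """Extract sub-views whose background size varies gradually from
        start_size to end_size (width, height), producing a zoom effect;
        assumes the crop stays inside the frame (illustrative sketch)."""
        n = len(frames)
        sub_frames = []
        for i, frame in enumerate(frames):
            t = i / max(n - 1, 1)
            w = int(start_size[0] + t * (end_size[0] - start_size[0]))
            h = int(start_size[1] + t * (end_size[1] - start_size[1]))
            x0, y0 = max(cx - w // 2, 0), max(cy - h // 2, 0)
            crop = frame[y0:y0 + h, x0:x0 + w]
            sub_frames.append(cv2.resize(crop, out_size))  # constant output size
        return sub_frames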
Alternatively, as illustrated in
For example, as illustrated in
Alternatively, the clip of the input video to be replaced may also have a time length different from that of the generated sub-video, i.e. the number of frames of the input video clip differs from the number of frames of the generated sub-video.
Alternatively, in the replacing step 111, the sub-video can also be used to replace any other clip of the same time length, i.e. a clip which did not provide the data of the sub-video. In this case, the audio associated with the video should be taken into account, because the corresponding audio is also replaced when the frames are replaced. In order to avoid disordered audio, the complete original audio can be removed or replaced with music during editing.
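As a purely illustrative sketch of the replacing step 111, assuming the video is available as a list of frames and the audio is handled separately (e.g. removed or replaced with music, as mentioned above), the frames of a clip could be replaced as follows; the function name and index are assumptions:

    def replace_clip(input_frames, sub_frames, start_index):
        """Replace the clip of the input video starting at start_index by the
        frames of the generated sub-video (illustrative; audio is handled
        separately)."""
        end_index = start_index + len(sub_frames)
        return input_frames[:start_index] + sub_frames + input_frames[end_index:]

If the clip to be replaced has a different number of frames than the sub-video, end_index can simply be chosen independently of len(sub_frames), in which case the total duration of the modified video changes.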
Alternatively, as illustrated in
For example,
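The inserting step 112 can, in a similarly illustrative sketch, be reduced to splicing the sub-video frames into the frame list at a chosen position, which lengthens the modified video (the function name and the insert position are assumptions):

    def insert_sub_video(input_frames, sub_frames, insert_index):
        """Insert the generated sub-video at insert_index along the time axis
        (illustrative sketch)."""
        return input_frames[:insert_index] + sub_frames + input_frames[insert_index:]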
Alternatively, as depicted in
For example,
Alternatively, the step of enlarging 107 further comprises a step of enhancing 108 the resolution of the enlarged sub-video.
One way of enhancing the resolution is, for example, up-scaling, which means that pixels are artificially added. For example, up-scaling from SD (standard definition, 576*480 pixels) to HD (high definition, 1920*1080 pixels) could be done by this step of enhancing 108 the resolution.
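An illustrative sketch of the enlarging step 107 combined with the resolution-enhancing step 108, using simple interpolation-based up-scaling (bicubic interpolation is assumed here; more advanced super-resolution techniques could equally be used, and the target size is only an example):

    import cv2

    def enlarge_and_enhance(sub_frames, target_size=(1920, 1080)):
        """Enlarge each sub-view frame to the target size, artificially adding
        pixels by bicubic interpolation (illustrative up-scaling only)."""
        return [cv2.resize(f, target_size, interpolation=cv2.INTER_CUBIC)
                for f in sub_frames]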
Alternatively, the method according to the invention further comprises a step of gradually moving 105 the position of said extracted sub-views along the time axis. This step allows the creation of a panning effect in the modified video.
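A minimal sketch of this moving step 105, assuming the sub-view position is interpolated linearly between a start and an end position along the time axis (function name and parameters are assumptions):

    def extract_panning_sub_views(frames, start_xy, end_xy, view_w, view_h):
        """Gradually move the extracted sub-view from start_xy to end_xy,
        creating a panning effect (illustrative sketch; assumes the sub-view
        stays inside the frame)."""
        n = len(frames)
        sub_frames = []
        for i, frame in enumerate(frames):
            t = i / max(n - 1, 1)
            x = int(start_xy[0] + t * (end_xy[0] - start_xy[0]))
            y = int(start_xy[1] + t * (end_xy[1] - start_xy[1]))
            sub_frames.append(frame[y:y + view_h, x:x + view_w])
        return sub_frames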
Alternatively, the method according to the invention further comprises a step of gradually fading in or fading out 106 the sub-video. Fading in here means causing the image or sound to appear or be heard gradually. Fading out here means causing the image or sound to disappear gradually.
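For the image part, the fading step 106 could be sketched as a gradual blend from black over a given number of frames (the helper name and the fade length are assumptions; the audio would be faded analogously on its samples):

    import cv2
    import numpy as np

    def fade_in(sub_frames, fade_len=25):
        """Fade the first fade_len frames of the sub-video in from black
        (illustrative sketch; fading out is the mirror operation)."""
        out = []
        for i, frame in enumerate(sub_frames):
            alpha = min(i / fade_len, 1.0)
            black = np.zeros_like(frame)
            out.append(cv2.addWeighted(frame, alpha, black, 1.0 - alpha, 0))
        return out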
The video modification device 1000 comprises a first module 1010 for generating at least one sub-video corresponding to a sub-view of the input video, and a second module 1020 for integrating the generated sub-video into the original input video along the time axis for creating a modified video.
The first module 1010 further comprises a first unit 1011 for identifying a sub-view from the data content of the original input video, and a second unit 1012 for extracting the identified sub-view from the original input video.
The first unit 1011 is used for identifying the sub-view according to predefined preferences and a given object. To detect an object, some kind of object detection unit can be used, such as a face detection unit, a moving object detection unit, a center object detection unit, etc. After detecting an object, the system identifies a sub-view including the detected object according to the predefined preferences, as previously described with reference to the method of the invention.
The second unit 1012 is used for extracting sub-views from the original input video, similarly to step 102 described above.
The second module 1020 is used for integrating a sub-video into an original input video for creating a modified video.
Alternatively, the second module 1020 further comprises a third unit 1021 for replacing clips of the input video by the generated sub-video, similarly to step 111 described above, according to the method of the invention.
Alternatively, the second module 1020 further comprises a fourth unit 1022 for inserting the generated sub-video into the original input video, similarly to step 112 described above according to the method of the invention.
Alternatively, the first module 1010 further comprises a fifth unit 1013 for receiving a user input allowing a user to identify a sub-view. The receiving unit 1013 receives the user input via a user interface. The user can either choose the sub-views provided by the system or select an object and identify the corresponding sub-views directly, similarly to the step of receiving a user input described above according to the method of the invention.
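By way of illustration only, the cooperation of the modules and units of device 1000 could be mirrored in code as follows; the class and method names are hypothetical and merely reflect the structure described above, reusing the illustrative helpers sketched earlier in this description:

    class FirstModule:
        """Module 1010: generates a sub-video from a sub-view (sketch)."""
        def generate(self, clip_frames):
            rect = identify_face_sub_view(clip_frames[0])        # unit 1011 (assumes an object is found)
            x, y, w, h = rect
            return [f[y:y + h, x:x + w] for f in clip_frames]    # unit 1012

    class SecondModule:
        """Module 1020: integrates the sub-video into the input video (sketch)."""
        def integrate(self, input_frames, sub_frames, index, replace=True):
            if replace:
                return replace_clip(input_frames, sub_frames, index)      # unit 1021
            return insert_sub_video(input_frames, sub_frames, index)      # unit 1022

    class VideoModificationDevice:
        """Device 1000 combining modules 1010 and 1020 (illustrative only)."""
        def __init__(self):
            self.generator, self.integrator = FirstModule(), SecondModule()
        def create_modified_video(self, input_frames, start, end):
            sub = self.generator.generate(input_frames[start:end])
            return self.integrator.integrate(input_frames, sub, start)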
This implementation comprises:
This implementation also comprises:
This implementation also comprises:
Memories 1182, 1184 and 1186 and processors 1181, 1183 and 1185 advantageously communicate via a data bus.
It is to be understood by the person skilled in the art that memories 1182, 1184, and 1186 could be combined into one memory, and that processors 1181, 1183, 1185 could be combined into a single processor.
It is also to be understood by the person skilled in the art that this invention could be implemented either by hardware or software or a combination thereof.
The present invention also relates to a video recorder for recording an input video, and comprising a device 1000 for creating a modified video from the input video. The video recorder, for example, corresponds to a camcorder or the like.
While the invention has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
Any reference sign in a claim should not be construed as limiting the claim. The word “comprising” does not exclude the presence of elements other than those listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
Foreign application priority data: Number 200710140722.7; Date Aug. 2007; Country CN; Kind national.
PCT filing data: Filing document PCT/IB2008/053119; Filing date Aug. 5, 2008; Country WO; Kind 00; 371(c) date Feb. 2, 2010.