Embodiments of the present disclosure relate to the field of computer technologies and, for example, to a video generation method and apparatus, an electronic device, and a storage medium.
At present, some video software provides a user with a video template, and the user may upload photos or video segments to the video template. Thus, the video software may synthesize the photos or video segments uploaded by the user into one video, simplifying the operations necessary for the user to generate the video.
However, in the related art, the video template simply synthesizes the photos or video segments uploaded by the user into the video. The content of the video segments is not very coherent, so a video with story logic cannot be generated.
Embodiments of the present disclosure provide a video generation method and apparatus, an electronic device, and a storage medium to improve the coherence of content between different video segments and generate a video with story logic.
In a first aspect, embodiments of the present disclosure provide a video generation method. The method includes the steps below.
A first trigger operation for using a target template is received.
In response to the first trigger operation, a target template homepage of the target template is displayed and scene description information about multiple target scenes of the target template is displayed in the target template homepage.
A second trigger operation for adding a scene video of any target scene is received.
In response to the second trigger operation, a scene video of a target scene corresponding to the second trigger operation is added.
Scene videos of the multiple target scenes are synthesized into a target video according to the sequence of the multiple target scenes in the target template homepage.
In a second aspect, embodiments of the present disclosure further provide a video generation apparatus. The apparatus includes a first trigger module, a homepage display module, a second trigger module, a video adding module, and a video synthesis module.
The first trigger module is configured to receive a first trigger operation for using a target template.
The homepage display module is configured to, in response to the first trigger operation, display a target template homepage of the target template and display scene description information about multiple target scenes of the target template in the target template homepage.
The second trigger module is configured to receive a second trigger operation for adding a scene video of any target scene.
The video adding module is configured to, in response to the second trigger operation, add a scene video of a target scene corresponding to the second trigger operation.
The video synthesis module is configured to synthesize scene videos of the multiple target scenes into a target video according to the sequence of the multiple target scenes in the target template homepage.
In a third aspect, embodiments of the present disclosure further provide an electronic device.
The electronic device includes one or more processors and a memory.
The memory is configured to store one or more programs. When executed by the one or more processors, the one or more programs cause the one or more processors to implement the video generation method according to the embodiments of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the video generation method according to the embodiments of the present disclosure.
The same or similar reference numerals in the drawings denote the same or similar elements. It is to be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Embodiments of the present disclosure are described in more detail hereinafter with reference to the drawings. The drawings illustrate some embodiments of the present disclosure, but it is to be understood that the present disclosure may be implemented in various manners and should not be construed as being limited to the embodiments set forth herein. On the contrary, these embodiments are provided to facilitate a more thorough and complete understanding of the present disclosure. It is to be understood that the drawings and embodiments of the present disclosure are merely illustrative and are not intended to limit the scope of the present disclosure.
It is to be understood that steps described in method implementations of the present disclosure may be performed in sequence and/or in parallel. Additionally, the method implementations may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term “include” and variations thereof used herein refer to “including, but not limited to”. The term “based on” refers to “at least partially based on”. The term “an embodiment” refers to “at least one embodiment”. The term “another embodiment” refers to “at least one other embodiment”. The term “some embodiments” refers to “at least some embodiments”. Definitions of other terms are given in the description hereinafter.
It is to be noted that concepts such as “first” and “second” in the present disclosure are used to distinguish between apparatuses, between modules, or between units and are not intended to limit the sequence or mutual dependence of the functions performed by these apparatuses, modules, or units.
It is to be noted that the modifiers “one” and “multiple” mentioned in the present disclosure are illustrative rather than restrictive and should be construed as “one or more” by those skilled in the art, unless otherwise clearly indicated in the context.
The names of messages or information exchanged between apparatuses in the implementations of the present disclosure are only used for illustrative purposes and are not intended to limit the scope of the messages or information.
In S101, a first trigger operation for using a target template is received.
The target template may be understood as a video generation template selected for generating the current video. The video generation template may be set in advance by a developer, and different video generation templates may be used for producing different types of videos. For example, the video generation template may be a food recipe template, a food tour template, a travel journal template, an unboxing review template, a product sharing template, an eating show template, a hotel experience template, or the like. Each video generation template may be provided with multiple scenes to be shot to produce the corresponding type of video. For example, the multiple scenes may be continuous scenes so that a user generates a video with a continuous plot by using the video generation template. The first trigger operation may be any trigger operation which can trigger an entry to a target template homepage of the target template, such as clicking on a certain recommended template in a creation homepage or clicking on the use control of a certain video generation template displayed in a template list page.
For example, as shown in
In S102, in response to the first trigger operation, the target template homepage of the target template is displayed and scene description information about multiple target scenes of the target template is displayed in the target template homepage.
A target scene may be understood as a scene which needs to be shot or uploaded and which conforms with the target template when a video of the type to which the target template belongs is produced. The target scene is determined based on the target template type (or the target video type), and different target template types (or target video types) correspond to different target scenes. For example, when the target template is the food recipe template, that is, when the target video is a food recipe video, the target scenes of the target template may include an opening introduction, an ingredient introduction, ingredient preparation, and the like. The scene description information about a target scene may include scene information about the target scene (such as a scene name and a sequence number indicating the order of the target scene in the target template homepage) and guidance information for shooting the scene video of the target scene (such as shot content information about the scene video). The target scenes correspond to the story logic of the target video. The target template homepage may be understood as the template homepage of the target template.
When the electronic device receives the first trigger operation for using the target template, in response to the first trigger operation, the currently displayed page is switched to the target template homepage of the target template, and the scene description information about the target scenes in the target template is displayed in the target template homepage. Thus, the user can determine, from the scene description information, the video content of the scene video which needs to be uploaded or shot in a corresponding target scene; that is, the user can add a video which meets the requirements of the corresponding target scene, which improves the coherence between the multiple plots in the finally generated target video. Multiple preset target scenes may exist in the target template. The multiple target scenes may be arranged in a preset scene sequence in the target template homepage so that the user uploads or shoots videos according to the order of the multiple target scenes in the target template homepage.
As shown in
In this embodiment, the target scenes in the target template homepage may or may not support modification by the user (for example, adding and/or deleting the target scene). For example, the target scenes may be configured to support the modification by the user so that a larger creative space is provided for the user and the user experience is improved.
In an implementation, after the scene description information about the multiple target scenes of the target template is displayed in the target template homepage, the method further includes: receiving a fourth trigger operation for deleting a second target scene in the target template homepage; and in response to the fourth trigger operation, deleting the second target scene displayed in the target template homepage.
The second target scene may be understood as a target scene which the user wants to delete. The fourth trigger operation may be any operation for deleting the target scenes in the target template homepage, such as a long press operation acting on the scene display area of a certain target scene or the operation of clicking on the deletion control of a certain target scene. The fourth trigger operation may be configured in advance by the developer as required.
For example, when the user wants to delete a certain target scene in the target template homepage, the user may perform a long press on the scene display area in which the target scene is located (for example, when the electronic device has an Android system). Alternatively, the user may swipe leftwards in the scene display area in which the target scene is located so that the electronic device displays the scene deletion control 50 of the target scene when detecting the operation of swiping leftwards, as shown in
In the preceding implementation, after receiving the fourth trigger operation, the electronic device may further display a deletion confirmation pop-up window which reminds the user to confirm whether to delete the second target scene. When detecting that the user clicks on a deletion confirmation control in the deletion confirmation pop-up window, the electronic device deletes the second target scene from the target template homepage; when detecting that the user clicks on a deletion cancellation control in the deletion confirmation pop-up window, the electronic device does not perform the subsequent deletion operation, thereby avoiding an accidental deletion.
In another implementation, after the scene description information about the multiple target scenes of the target template is displayed in the target template homepage, the method further includes: receiving a third click operation acting on a scene adding control in the target template homepage; and in response to the third click operation, switching the currently displayed page from the target template homepage to a newly added scene page for the user to input scene description information about a newly added target scene into the newly added scene page.
The newly added target scene may be understood as a target scene which the user needs to add this time.
For example, as shown in
In addition, as shown in
The scene identification information about the target scenes may be understood as information for identifying the multiple target scenes, such as scene names and/or the scene sequence numbers of the target scenes. Accordingly, the newly added scene identification information may be understood as scene identification information about the newly added target scene.
For example, the electronic device displays the sequence adjustment control in the newly added scene page. When the user wants to adjust the order of the newly added target scene in the target template homepage, the user clicks on the sequence adjustment control. Accordingly, when detecting that the user clicks on the sequence adjustment control in the newly added scene page, the electronic device determines that the fourth click operation is received and, in response to the fourth click operation, pops up the sequence adjustment window, in which the scene identification information about the newly added target scene and the scene identification information about the multiple original target scenes in the target template homepage are displayed according to the sequence of all the target scenes. Thus, the user may adjust the position of the newly added target scene among all the target scenes by dragging the newly added scene identification information. That is, when detecting the drag operation acting on the newly added scene identification information, the electronic device may control the newly added scene identification information to move with the control point of the drag operation and may adjust the sequence of each target scene according to the order of each piece of scene identification information in the sequence adjustment window after the movement. For example, the scene sequence number of each target scene is changed (for the case where the sequence of each target scene is identified by its scene sequence number), and/or the order of each target scene in the target template homepage is adjusted (for the case where the sequence of each target scene is identified by its order in the target template homepage).
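The drag-based reordering described above can be sketched as a simple list move followed by renumbering. This is an illustrative sketch only; the function name `move_scene` and the 1-based sequence numbers are assumptions, not part of the disclosed implementation.

```python
def move_scene(scene_names, from_index, to_index):
    """Move the scene identifier at from_index so it lands at to_index,
    then reassign 1-based scene sequence numbers to reflect the new order."""
    names = list(scene_names)
    item = names.pop(from_index)
    names.insert(to_index, item)
    # Each target scene receives a sequence number matching its new position.
    return [(seq, name) for seq, name in enumerate(names, start=1)]
```

For example, dragging a newly added scene from the last position to the second position both moves it in the list and renumbers every scene, covering both of the sequence-identification cases mentioned above.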
It is to be understood that when the user performs a deletion/addition operation on a target scene, the electronic device may respond to the operation regardless of the number of target scenes remaining in the target template homepage at the current time or the number of target scenes added by the user up to the current time. Alternatively, when a deletion operation on a target scene is detected, it may first be determined whether the number of target scenes in the target template homepage at the current time is less than or equal to a first preset number (such as 1); the deletion operation is responded to when the number is greater than the first preset number and is not responded to when the number is less than or equal to the first preset number, so that it is ensured that the number of target scenes provided in the target template homepage is not less than the first preset number. Additionally or alternatively, when an addition operation on a target scene is detected, it may be determined whether the number of target scenes added by the user in the target template homepage at the current time is greater than or equal to a second preset number (such as 10); the addition operation is responded to when the number is less than the second preset number and is not responded to when the number is greater than or equal to the second preset number, so that the user is prevented from adding too many new target scenes in the target template homepage.
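A minimal sketch of the scene-count checks described above, assuming the first preset number is 1 and the second preset number is 10 as in the examples given; the function names are hypothetical.

```python
MIN_SCENES = 1   # first preset number: homepage must keep at least this many scenes
MAX_ADDED = 10   # second preset number: cap on scenes the user may add

def can_delete_scene(current_scene_count):
    # Refuse the deletion when it would leave fewer than MIN_SCENES scenes.
    return current_scene_count > MIN_SCENES

def can_add_scene(user_added_count):
    # Refuse the addition once the user has already added MAX_ADDED scenes.
    return user_added_count < MAX_ADDED
```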
In S103, a second trigger operation for adding a scene video of any target scene is received.
The second trigger operation may be a trigger operation for adding a scene video of a certain target scene, for example, the operation of clicking on a control for uploading a video from a photo album or a shooting control in an addition mode selection window of the target scene in the target template homepage, or the operation of clicking on a video uploading control or a go-to-shoot control in the shooting guide page of the target scene.
As shown in
In S104, in response to the second trigger operation, the scene video of the target scene corresponding to the second trigger operation is added.
For example, when detecting that the user clicks on the shooting control in the addition mode selection window of a certain target scene or the go-to-shoot control in the shooting guide page, the electronic device may switch the currently displayed page from the target template homepage to the video shooting page and turn on a camera to shoot the scene video of the corresponding target scene; and/or when detecting that the user clicks on the control for uploading the video from the photo album in the addition mode selection window of a certain target scene or the video uploading control in the shooting guide page, the electronic device may switch the currently displayed page from the target template homepage to a photo album page and add the video selected by the user from the photo album page as the scene video of the corresponding target scene to the target template homepage.
It is to be understood that one or more scene videos may be added in a certain target scene, which is not limited in this embodiment. As shown in
In addition, scene video deletion controls 101 may be further displayed on the upper layers of the scene video thumbnails. Thus, when the user wants to delete a certain scene video which has been added, the user may click on the scene video deletion control 101 on the upper layer of the scene video thumbnail 100 of the scene video. Accordingly, when detecting that the user clicks on the scene video deletion control 101 on the upper layer of the scene video thumbnail 100, the electronic device deletes the scene video to which the scene video thumbnail 100 belongs. Alternatively, the user may view and edit a certain scene video by clicking on the scene video thumbnail 100 of the scene video. Accordingly, when detecting the click operation acting on the scene video thumbnail 100, the electronic device may switch the currently displayed page from the target template homepage to a video preview page, play the scene video in the video preview page, and display a clipping control 110 and a deletion control 111, as shown in
In S105, scene videos of the multiple target scenes are synthesized into a target video according to the sequence of the multiple target scenes in the target template homepage.
A rule for determining the sequence of the multiple target scenes may be flexibly configured. For example, the sequence of each target scene may be determined according to the scene sequence number of each target scene, for example, in ascending order of the sequence numbers; and/or the sequence of each target scene may be determined according to the order of each target scene in the target template homepage, for example, according to the arrangement positions of the multiple target scenes in the target template homepage from front to back, which is not limited in this embodiment.
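The ordering rule may be sketched as follows, assuming each target scene carries a numeric sequence number and its scene videos are kept in the order in which they were added; the data layout is an illustrative assumption, not the disclosed data structure.

```python
def order_scene_videos(scenes):
    """scenes: list of dicts with 'seq' (scene sequence number) and
    'videos' (scene videos of that scene, in their added order).
    Returns the flat playback order used for synthesis."""
    ordered = sorted(scenes, key=lambda s: s["seq"])  # ascending sequence numbers
    playlist = []
    for scene in ordered:
        playlist.extend(scene["videos"])
    return playlist
```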
In this embodiment, the multiple target scenes are coherent scenes matched with the multiple plots included in the corresponding type of video produced with the target template. That is, the sequence of the multiple target scenes is matched with the sequence of the multiple plots included in the corresponding type of video, and the scene names of the multiple target scenes may also be matched with the content of the multiple plots separately. Therefore, after the scene videos of the multiple target scenes in the target template homepage have all been added, the scene videos may be synthesized into the target video according to the sequence of the multiple target scenes. It can thus be ensured that the synthesized target video includes the plots of the corresponding type of video and that the multiple plots included in the target video are coherent, so that the synthesized target video is narrative, logical, and coherent, thereby better meeting the needs of the user and improving the user experience.
In this embodiment, the method for synthesizing the multiple scene videos into the target video may be configured as required. For example, the multiple scene videos may be directly connected to obtain the target video. Alternatively, a transition video may first be added between adjacent scene videos, corresponding video effects may be added to the multiple scene videos, and/or volume balance processing or the like may be performed on different scene videos, and then the multiple processed videos (such as the scene videos and transition videos) are connected to obtain the target video.
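One possible sketch of the connection step with optional transitions. The callback names are hypothetical: whether a transition is needed between two adjacent clips, and how it is produced, are left to the caller, since the disclosure leaves these configurable.

```python
def build_to_be_synthesized(scene_videos, needs_transition, make_transition):
    """Interleave transition clips between adjacent scene videos.
    needs_transition(prev, nxt) decides whether the cut is too abrupt;
    make_transition(prev, nxt) produces the transition clip to insert."""
    result = []
    for i, clip in enumerate(scene_videos):
        if i > 0 and needs_transition(scene_videos[i - 1], clip):
            result.append(make_transition(scene_videos[i - 1], clip))
        result.append(clip)
    return result
```

Passing a `needs_transition` that always returns `False` reduces this to the direct-connection case described above.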
According to the video generation method provided in this embodiment, the first trigger operation for using the target template is received; in response to the first trigger operation, the target template homepage of the target template is displayed, and the scene description information about the multiple target scenes of the target template is displayed in the target template homepage; the second trigger operation for adding the scene video of any target scene is received; in response to the second trigger operation, the scene video of the corresponding target scene is added; and after the addition of the scene videos is completed, the scene videos of the multiple target scenes are synthesized into the target video according to the sequence of the multiple target scenes in the target template homepage. With the preceding technical solution, the multiple target scenes are provided for the target template in advance according to the story logic of the video, and the user is guided by the scene description information about the multiple target scenes to add scene videos which meet the requirements of the corresponding target scenes. Thus, it is unnecessary for the user to storyboard the shot video manually, the difficulty in producing the video is reduced, and the coherence between the plots of the multiple scene videos can be improved, thereby making the generated video more narrative and logical.
A shooting guide control of each target scene is further displayed in the target template homepage, and the method further includes: receiving a first click operation acting on a target shooting guide control of the first target scene; and in response to the first click operation, displaying a shooting guide of the first target scene and a line input area of the first target scene for the user to input the target line of the target scene video of the first target scene into the line input area.
Synthesizing the scene videos of the multiple target scenes into the target video according to the sequence of the multiple target scenes in the target template homepage includes: receiving a fifth click operation acting on a first next control in the target template homepage; in response to the fifth click operation, processing the multiple scene videos; switching the currently displayed page from the target template homepage to a video editing page, sequentially playing multiple to-be-synthesized videos in the video editing page, and displaying a video editing track for the user to edit the multiple to-be-synthesized videos based on the video editing track, where the to-be-synthesized videos include the scene videos; receiving a sixth click operation acting on a second next control in the video editing page; and in response to the sixth click operation, synthesizing the multiple to-be-synthesized videos into the target video.
As shown in
In S201, the first trigger operation for using the target template is received.
In S202, in response to the first trigger operation, the target template homepage of the target template is displayed and the scene description information about the multiple target scenes of the target template and shooting guide controls of the multiple target scenes of the target template are displayed in the target template homepage.
In S203, the first click operation acting on the target shooting guide control of the first target scene is received.
In S204, in response to the first click operation, the shooting guide of the first target scene and the line input area of the first target scene are displayed so that the user inputs the target line of the target scene video of the first target scene into the line input area.
The first click operation may be understood as a click operation acting on the shooting guide control. Accordingly, the target shooting guide control may be the shooting guide control on which the first click operation acts, the first target scene may be a target scene corresponding to the target shooting guide control, the target scene video may be a scene video of the first target scene, and the target line is a line used when the target scene video is shot. In this embodiment, the shooting guide of the first target scene and the line input area of the first target scene may be displayed in the target template homepage or may be displayed in another page (for example, the shooting guide page) instead of the target template homepage, which is not limited in this embodiment. Illustration is provided below by using an example in which the shooting guide of the first target scene and the line input area of the first target scene are displayed in the shooting guide page.
As shown in
As shown in
In S205, the second trigger operation for adding the scene video of any target scene is received, where the second trigger operation includes the trigger operation for shooting the scene video of any target scene.
In S206, in response to the trigger operation for shooting the target scene video of the first target scene, the camera is turned on and the currently displayed page is switched to the video shooting page so that the target scene video is shot, where the target line of the target scene video is displayed in the video shooting page.
For example, as shown in
In this embodiment, as shown in
It is to be understood that in this embodiment, the developer may configure, in advance, preset lines when the multiple scene videos (including the target scene video of the first target scene) are shot, and when a shooting guide page of a certain target scene is displayed, a preset line is displayed in the line input area of the shooting guide page of the target scene for the user to modify. If the user does not modify a preset line of a scene video of a certain target scene, the preset line of the target scene may be determined as the target line of the target scene, and when the scene video of the target scene is shot, the target line is displayed in the video shooting page of the scene video of the target scene.
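The fallback from a user-edited line to the developer-configured preset line can be sketched as follows; the function name is illustrative.

```python
def resolve_target_line(user_line, preset_line):
    # Use the line the user entered in the line input area when one exists;
    # otherwise fall back to the preset line configured by the developer
    # for this target scene.
    if user_line is not None and user_line.strip():
        return user_line
    return preset_line
```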
In an implementation, the video generation method provided in this embodiment may further include: receiving a third trigger operation for adjusting the line display area of the target line in the video shooting page; and in response to the third trigger operation, adjusting the position and/or size of the line display area.
In the preceding implementation, the user may adjust the position and/or size of the line display area. As shown in
In another implementation, the video generation method provided in this embodiment may further include: receiving a second click operation acting on a teleprompter control in the video shooting page; and in response to the second click operation, switching the target line from a displayed state to a hidden state.
In the preceding implementation, as shown in
In S207, the fifth click operation acting on the first next control in the target template homepage is received.
In S208, in response to the fifth click operation, the multiple scene videos are processed.
The first next control 45 may be understood as a next control provided in the target template homepage. In this embodiment, as shown in
When the user completes the addition of all the scene videos of the multiple target scenes in the target template homepage and wants to instruct the electronic device to perform subsequent processing on the multiple scene videos, the user may click on the first next control in the target template homepage. Accordingly, when detecting that the user clicks on the first next control in the target template homepage, the electronic device may pop up a progress pop-up window 140. As shown in
In this embodiment, processing the multiple scene videos includes at least one of: adding a video effect corresponding to each target scene in the target template to the scene video of the target scene, for example, adding a filter, background music, and/or a subtitle style corresponding to the target scene to which each scene video belongs to the scene video; performing the volume balance processing on a scene video of each target scene, for example, automatically aligning the volume of each scene video so as to prevent the volume from fluctuating; or sequencing the multiple scene videos in the target template homepage according to the sequence of each target scene and the order of each scene video in the target scene to which the scene video belongs, and adding the transition video between adjacent scene videos which meet a preset condition, so as to avoid excessively abrupt switches between different scene videos. Here, the preset condition for adding the transition video may be that the difference degree between the first preset number (for example, 1 frame) of video frames at the end of a previous video in two adjacent scene videos and the first preset number of video frames at the start of a subsequent video in the two adjacent scene videos is greater than a preset difference degree threshold, where the difference degree may be determined by a pre-trained model. The transition video added between the two adjacent scene videos may be determined according to the second preset number (for example, 5 frames) of video frames at the end of the previous video and the second preset number of video frames at the start of the subsequent video.
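The transition decision and the volume balance processing described above can be sketched as follows, assuming the difference degree between boundary frames has already been computed (for example, by a pre-trained model) and that each clip's mean volume level is known. The threshold value and the choice of the average as the common target level are illustrative assumptions.

```python
def should_add_transition(diff_degree, threshold=0.5):
    # A transition video is inserted only when the predicted difference degree
    # between the boundary frames of two adjacent scene videos exceeds the
    # preset difference degree threshold.
    return diff_degree > threshold

def balance_volumes(mean_volumes, target=None):
    # Align every scene video's mean volume to a common target level
    # (here: the average across all clips) by returning per-clip gain factors,
    # preventing the volume from fluctuating between scene videos.
    if target is None:
        target = sum(mean_volumes) / len(mean_volumes)
    return [target / v for v in mean_volumes]
```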
In S209, the currently displayed page is switched from the target template homepage to the video editing page, multiple to-be-synthesized videos are sequentially played in the video editing page, and a video editing track is displayed so that the user edits the multiple to-be-synthesized videos based on the video editing track, where the to-be-synthesized videos include the scene videos.
If the transition videos are added when the multiple scene videos are processed, the to-be-synthesized videos further include the added transition videos so that the user edits the added transition videos, thereby improving the user experience.
In this embodiment, after completing the processing of the multiple scene videos added to the target template homepage and the addition of the transition videos, the electronic device may switch the currently displayed page from the target template homepage to the video editing page of the target template so that the user edits one or more to-be-synthesized videos (including the scene videos and the transition videos) in the video editing page. For example, as shown in
In S210, the sixth click operation acting on the second next control in the video editing page is received.
In S211, in response to the sixth click operation, the multiple to-be-synthesized videos are synthesized into the target video.
The second next control may be understood as a next control in the video editing page.
For example, as shown in
In this embodiment, when the electronic device displays the target template homepage, the user may click on a first save control in the target template homepage to save content currently edited in the target template homepage, and/or may click on a homepage closure control in the target template homepage to exit the target template homepage. Similarly, when the electronic device displays the video editing page, the user may click on a second save control in the video editing page to save content currently edited in the video editing page, and/or may click on a page return control in the video editing page to exit the video editing page. When a project file of the target video to be generated (that is, an edited target template) is saved through the save controls in the different pages, the saving states of the project file may be the same or different, which is not limited in this embodiment.
In an implementation, the file states of the project file of the target video saved through the save controls in the different pages may be different to facilitate differentiation by the user. In this case, the video generation method provided in this embodiment further includes: receiving a seventh click operation acting on the first save control in the target template homepage; and in response to the seventh click operation, saving the project file of the target video as a template draft file whose file state is a template draft state; and/or receiving an eighth click operation acting on the second save control in the video editing page; and in response to the eighth click operation, saving the project file of the target video as a clip draft file whose file state is a clip draft state.
The first save control may be a save control which is in the target template homepage and can be used for instructing the electronic device to save content edited by the user in the target template homepage. For example, the first save control may be a homepage save control which is always in a displayed state in the target template homepage, or the first save control may be a window save control in a closure confirmation window popped up when it is detected that the user clicks on the homepage closure control in the target template homepage. The second save control may be a save control which is in the video editing page and can be used for instructing the electronic device to save content edited by the user in the video editing page, for example, a save and return control in a return confirmation window popped up when it is detected that the user clicks on the page return control in the video editing page.
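As a hypothetical sketch of how the two file states might be recorded (the disclosure does not specify a data format, and all names below are illustrative), a project file saved from the target template homepage receives the template draft state, while one saved from the video editing page receives the clip draft state:

```python
from dataclasses import dataclass
from enum import Enum
import time

class FileState(Enum):
    TEMPLATE_DRAFT = "template_draft"  # saved via the first save control
    CLIP_DRAFT = "clip_draft"          # saved via the second save control

@dataclass
class ProjectFile:
    name: str
    file_state: FileState = None
    last_updated: float = 0.0

def save_project(project: ProjectFile, page: str) -> ProjectFile:
    """Save the project file with a file state recording which page it was
    saved from, so the draft box can reopen the matching page later."""
    if page == "template_homepage":
        project.file_state = FileState.TEMPLATE_DRAFT
    elif page == "video_editing_page":
        project.file_state = FileState.CLIP_DRAFT
    else:
        raise ValueError(f"unknown page: {page}")
    project.last_updated = time.time()
    return project

draft = save_project(ProjectFile("my_travel_video"), "template_homepage")
print(draft.file_state.value)  # template_draft
```

Recording the originating page in the file state is what later allows the draft box to display a target page matching the file state, as described below.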
For example, as shown in
For example, as shown in
In the preceding implementation, after the project file of the target video is saved as the template draft file and/or the clip draft file, the method may further include: receiving a trigger operation for displaying a draft box, displaying a draft box page, and displaying file information about project files of multiple unpublished videos in the draft box page; receiving a ninth click operation acting on file information about any target project file; and in response to the ninth click operation, displaying a target page corresponding to the file state of the target project file, where the target page is a template homepage or the video editing page.
The template homepage may include the target template homepage of the target template to which the target video belongs. The video editing page may include the video editing page of the target template to which the target video belongs. The file information about a project file may include the file cover, the file name, and the file state of the project file and may also include the time of the last update of the project file and the video duration of the project file. The target project file may be understood as the project file to which the file information clicked on by the user in the draft box page belongs. The target page corresponding to the template draft file may be the template homepage of the video production template corresponding to the template draft file, and the target page corresponding to the clip draft file may be the video editing page of the video production template corresponding to the clip draft file.
For example, if the user wants to continue editing the content in a template after exiting the template, the user may control the electronic device to display the draft box page through a corresponding trigger operation, for example, the user may control the electronic device to display the draft box page by clicking on a draft box control 23 (as shown in
According to the video generation method provided in this embodiment, when the user shoots a scene video, the line inputted by the user in advance is displayed in the video shooting page; when the switch to the video editing page is performed, a transition video is added between adjacent scene videos which meet the preset condition; and the user is supported in editing the multiple scene videos and the multiple transition videos separately in the video editing page. Thus, the difficulty in shooting and producing the video can be reduced, the synthesized target video better meets the user's wishes, the visual effect of the synthesized target video can be improved, and excessively abrupt transitions are avoided.
The first trigger module 1701 is configured to receive a first trigger operation for using a target template.
The homepage display module 1702 is configured to, in response to the first trigger operation, display a target template homepage of the target template and display scene description information about multiple target scenes of the target template in the target template homepage.
The second trigger module 1703 is configured to receive a second trigger operation for adding a scene video of any target scene.
The video adding module 1704 is configured to, in response to the second trigger operation, add a scene video of a target scene corresponding to the second trigger operation.
The video synthesis module 1705 is configured to synthesize scene videos of the multiple target scenes into a target video according to the sequence of the multiple target scenes in the target template homepage.
In the video generation apparatus provided in this embodiment, the first trigger module receives the first trigger operation for using the target template; in response to the first trigger operation, the homepage display module displays the target template homepage of the target template and displays the scene description information about the multiple target scenes of the target template in the target template homepage; the second trigger module receives the second trigger operation for adding the scene video of any target scene; in response to the second trigger operation, the video adding module adds the scene video of the corresponding target scene; and after the addition of the scene videos is completed, the video synthesis module synthesizes the scene videos of the multiple target scenes into the target video according to the order of the multiple target scenes in the target template homepage. With the preceding technical solution, the multiple target scenes are provided for the target template in advance according to the story logic of the video, and a user is guided by the scene description information about the multiple target scenes to add scene videos which meet the requirements of the corresponding target scenes. Thus, it is unnecessary for the user to storyboard the shot video manually, the difficulty in producing the video is reduced, and the coherence between the plots of the multiple scene videos can be improved, thereby making the generated video more narrative and logical.
The second trigger operation includes a trigger operation for shooting the scene video of any target scene, and the video adding module 1704 is configured to, in response to a trigger operation for shooting a target scene video of a first target scene, turn on a camera and switch a currently displayed page to a video shooting page to shoot the target scene video, where a target line of the target scene video is displayed in the video shooting page.
A shooting guide control of each target scene is further displayed in the target template homepage, and the video generation apparatus provided in this embodiment further includes: a first click module configured to receive a first click operation acting on a target shooting guide control of the first target scene; and a guide display module configured to, in response to the first click operation, display a shooting guide of the first target scene and a line input area of the first target scene for the user to input the target line of the target scene video of the first target scene into the line input area.
The video generation apparatus provided in this embodiment further includes: a third trigger module configured to receive a third trigger operation for adjusting a line display area of the target line in the video shooting page; and an area adjustment module configured to, in response to the third trigger operation, adjust the position and/or size of the line display area.
The video generation apparatus provided in this embodiment further includes: a second click module configured to receive a second click operation acting on a teleprompter control in the video shooting page; and a state switch module configured to, in response to the second click operation, switch the target line from a displayed state to a hidden state.
The video generation apparatus provided in this embodiment further includes: a fourth trigger module configured to receive a fourth trigger operation for deleting a second target scene in the target template homepage after the scene description information about the multiple target scenes of the target template is displayed in the target template homepage; and a scene deletion module configured to, in response to the fourth trigger operation, delete the second target scene displayed in the target template homepage.
The video generation apparatus provided in this embodiment further includes: a third click module configured to receive a third click operation acting on a scene adding control in the target template homepage after the scene description information about the multiple target scenes of the target template is displayed in the target template homepage; and a scene adding module configured to, in response to the third click operation, switch the currently displayed page from the target template homepage to a newly added scene page for the user to input scene description information about a newly added target scene into the newly added scene page.
The video generation apparatus provided in this embodiment further includes: a fourth click module configured to receive a fourth click operation acting on a sequence adjustment control in the newly added scene page; a window display module configured to, in response to the fourth click operation, display a sequence adjustment window, where scene identification information about the multiple target scenes is displayed in the sequence adjustment window, and the multiple target scenes include the newly added target scene; an identification drag module configured to receive a drag operation acting on newly added scene identification information about the newly added target scene; and a sequence adjustment module configured to, in response to the drag operation, adjust the order of the newly added scene identification information in the sequence adjustment window to adjust the order of the newly added target scene in all the target scenes.
The video synthesis module 1705 includes: a fifth click unit configured to receive a fifth click operation acting on a first next control in the target template homepage; a video processing unit configured to, in response to the fifth click operation, process multiple scene videos; an editing page display unit configured to switch the currently displayed page from the target template homepage to a video editing page, sequentially play multiple to-be-synthesized videos in the video editing page, and display a video editing track for the user to edit the multiple to-be-synthesized videos based on the video editing track, where the to-be-synthesized videos include the scene videos; a sixth click unit configured to receive a sixth click operation acting on a second next control in the video editing page; and a video synthesis unit configured to, in response to the sixth click operation, synthesize the multiple to-be-synthesized videos into the target video.
In the preceding solution, the video processing unit may be configured to perform at least one of: adding a video effect corresponding to each target scene in the target template to a scene video of the target scene; performing volume balance processing on a scene video of each target scene; or sequencing the multiple scene videos according to the sequence of each target scene and the order of each scene video in a corresponding target scene, and adding a transition video between two adjacent scene videos which meet a preset addition condition, where the transition video corresponds to the two adjacent scene videos. Accordingly, the to-be-synthesized videos further include the transition video.
The video generation apparatus provided in this embodiment further includes: a first save module configured to receive a seventh click operation acting on a first save control in the target template homepage and save, in response to the seventh click operation, a project file of the target video as a template draft file whose file state is a template draft state; and/or a second save module configured to receive an eighth click operation acting on a second save control in the video editing page and save, in response to the eighth click operation, a project file of the target video as a clip draft file whose file state is a clip draft state.
The video generation apparatus provided in the embodiment of the present disclosure may perform the video generation method provided in any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the video generation method. For technical details not described in detail in the embodiment, reference may be made to the video generation method provided in any embodiment of the present disclosure.
Reference is made to
As shown in
Generally, the following apparatuses may be connected to the I/O interface 1805: an input apparatus 1806 such as a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 1807 such as a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 1808 such as a magnetic tape and a hard disk; and a communication apparatus 1809. The communication apparatus 1809 may allow the electronic device 1800 to communicate wirelessly or by wire with another device to exchange data. Although
According to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, a computer program product is provided in the embodiments of the present disclosure. The computer program product includes a computer program carried on a non-transitory computer-readable medium. The computer program includes program codes for performing the methods illustrated in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from the network via the communication apparatus 1809, or may be installed from the storage apparatus 1808, or may be installed from the ROM 1802. When the computer program is executed by the processing apparatus 1801, the preceding functions defined in the methods according to the embodiments of the present disclosure are performed.
It is to be noted that the preceding computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination thereof. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical memory device, a magnetic memory device, or any appropriate combination thereof. In the present disclosure, the computer-readable storage medium may be any tangible medium including or storing a program. The program may be used by or in conjunction with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, where computer-readable program codes are carried in the data signal. The data signal propagated in this manner may be in various forms, including, but not limited to, an electromagnetic signal, an optical signal, or any appropriate combination thereof. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium. The computer-readable signal medium may send, propagate, or transmit a program used by or in conjunction with an instruction execution system, apparatus, or device.
The program codes embodied on the computer-readable medium may be transmitted via any appropriate medium including, but not limited to, an electrical wire, an optical cable, a radio frequency (RF), or any appropriate combination thereof.
In some implementations, clients and servers may communicate using any currently known or future developed network protocol such as the HTTP (HyperText Transfer Protocol) and may be interconnected with any form or medium of digital data communication (such as a communication network). Examples of the communication network include a local area network (“LAN”), a wide area network (“WAN”), an internet (such as the Internet), a peer-to-peer network (such as an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The preceding computer-readable medium may be included in the preceding electronic device or may exist alone without being assembled into the electronic device.
The preceding computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is configured to: receive a first trigger operation for using a target template; in response to the first trigger operation, display a target template homepage of the target template and display scene description information about multiple target scenes of the target template in the target template homepage; receive a second trigger operation for adding a scene video of any target scene; in response to the second trigger operation, add a scene video of a target scene corresponding to the second trigger operation; and synthesize scene videos of the multiple target scenes into a target video according to the sequence of the multiple target scenes in the target template homepage.
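The five operations that the electronic device is configured to perform can be sketched as a minimal session object. The class, method, and field names below are purely illustrative assumptions; the disclosure does not specify an API.

```python
class VideoTemplateSession:
    """Hypothetical sketch of the disclosed flow: display a template's
    scenes, collect scene videos, and synthesize them in scene order."""

    def __init__(self, template):
        self.template = template   # dict with ordered target scenes
        self.scene_videos = {}     # scene name -> list of added videos

    def on_use_template(self):
        # First trigger: show the homepage with scene descriptions.
        return [(s["name"], s["description"]) for s in self.template["scenes"]]

    def on_add_scene_video(self, scene_name, video):
        # Second trigger: attach a scene video to its target scene.
        self.scene_videos.setdefault(scene_name, []).append(video)

    def synthesize(self):
        # Concatenate scene videos following the scene order on the homepage.
        ordered = []
        for scene in self.template["scenes"]:
            ordered.extend(self.scene_videos.get(scene["name"], []))
        return ordered

template = {"scenes": [{"name": "opening", "description": "Greet the camera"},
                       {"name": "journey", "description": "Film the road"}]}
session = VideoTemplateSession(template)
session.on_use_template()
session.on_add_scene_video("journey", "journey.mp4")
session.on_add_scene_video("opening", "opening.mp4")
print(session.synthesize())  # ['opening.mp4', 'journey.mp4']
```

Note that the synthesis order follows the template's scene order, not the order in which the user added the videos, which is the core of the claimed method.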
Computer program codes for performing the operations in the present disclosure may be written in one or more programming languages or a combination thereof. The preceding programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program codes may be executed entirely on a user computer, executed partly on a user computer, executed as a stand-alone software package, executed partly on a user computer and partly on a remote computer, or executed entirely on a remote computer or a server. In the case of a remote computer, the remote computer may be connected to the user computer via any kind of network including a local area network (LAN) or a wide area network (WAN). Alternatively, the remote computer may be connected to an external computer (for example, over the Internet provided by an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate possible architectures, functions, and operations of the system, method, and computer program product according to the embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or part of code, which contains one or more executable instructions for implementing specified logical functions. It is also to be noted that in some alternative implementations, the functions marked in the blocks may be implemented in an order different from the order marked in the drawings. For example, two successive blocks may, in fact, be performed substantially in parallel or in a reverse order, depending on the functions involved. It is also to be noted that each block in the block diagrams and/or flowcharts and a combination of blocks in the block diagrams and/or flowcharts may be implemented by a special-purpose hardware-based system which performs specified functions or operations or by a combination of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or hardware. In a certain case, the name of a module does not constitute a limitation of the module itself.
The functions described herein above may be performed, at least partially, by one or more hardware logic components. For example, without limitation, example types of hardware logic components that may be used include: a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system on chip (SOC), a complex programmable logic device (CPLD), and the like.
In the context of the present disclosure, the machine-readable medium may be a tangible medium that may include or store a program used by or in conjunction with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device or any appropriate combination thereof. More specific examples of the machine-readable storage medium may include an electrical connection with one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical memory device, a magnetic memory device, or any appropriate combination thereof.
According to one or more embodiments of the present disclosure, example one provides a video generation method. The method includes the steps described below.
A first trigger operation for using a target template is received.
In response to the first trigger operation, a target template homepage of the target template is displayed and scene description information about multiple target scenes of the target template is displayed in the target template homepage.
A second trigger operation for adding a scene video of any target scene is received.
In response to the second trigger operation, a scene video of a target scene corresponding to the second trigger operation is added.
Scene videos of the multiple target scenes are synthesized into a target video according to the sequence of the multiple target scenes in the target template homepage.
According to one or more embodiments of the present disclosure, according to the method described in example one, in example two, the second trigger operation includes a trigger operation for shooting the scene video of any target scene, and in response to the second trigger operation, adding the scene video of the target scene corresponding to the second trigger operation includes the step described below.
In response to a trigger operation for shooting a target scene video of a first target scene, a camera is turned on and a currently displayed page is switched to a video shooting page so that the target scene video is shot, where a target line of the target scene video is displayed in the video shooting page.
According to one or more embodiments of the present disclosure, according to the method described in example two, in example three, a shooting guide control of each target scene is further displayed in the target template homepage, and the method further includes the steps described below.
A first click operation acting on a target shooting guide control of the first target scene is received.
In response to the first click operation, a shooting guide of the first target scene and a line input area of the first target scene are displayed so that the user inputs the target line of the target scene video of the first target scene into the line input area.
According to one or more embodiments of the present disclosure, according to the method described in example two, in example four, the method further includes the steps described below.
A third trigger operation for adjusting a line display area of the target line in the video shooting page is received.
In response to the third trigger operation, the position and/or size of the line display area is adjusted.
According to one or more embodiments of the present disclosure, according to the method described in example two, in example five, the method further includes the steps described below.
A second click operation acting on a teleprompter control in the video shooting page is received.
In response to the second click operation, the target line is switched from a displayed state to a hidden state.
According to one or more embodiments of the present disclosure, according to the method described in example one, in example six, after the scene description information about the multiple target scenes of the target template is displayed in the target template homepage, the method further includes the steps described below.
A fourth trigger operation for deleting a second target scene in the target template homepage is received.
In response to the fourth trigger operation, the second target scene displayed in the target template homepage is deleted.
According to one or more embodiments of the present disclosure, according to the method described in example one, in example seven, after the scene description information about the multiple target scenes of the target template is displayed in the target template homepage, the method further includes the steps described below.
A third click operation acting on a scene adding control in the target template homepage is received.
In response to the third click operation, the currently displayed page is switched from the target template homepage to a newly added scene page so that the user inputs scene description information about a newly added target scene into the newly added scene page.
According to one or more embodiments of the present disclosure, according to the method described in example seven, in example eight, the method further includes the steps described below.
A fourth click operation acting on a sequence adjustment control in the newly added scene page is received.
In response to the fourth click operation, a sequence adjustment window is displayed, where scene identification information about the multiple target scenes is displayed in the sequence adjustment window, and the multiple target scenes include the newly added target scene.
A drag operation acting on newly added scene identification information about the newly added target scene is received.
In response to the drag operation, the order of the newly added scene identification information is adjusted in the sequence adjustment window so as to adjust the order of the newly added target scene among all the target scenes.
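The reordering performed in response to the drag operation can be sketched as a simple list move. The function and scene names below are illustrative assumptions, not part of the disclosure.

```python
def move_scene(scene_order: list, scene_id: str, new_index: int) -> list:
    """Reorder target scenes after the user drags a scene's identification
    information to a new position in the sequence adjustment window."""
    order = list(scene_order)      # copy so the original order is kept
    order.remove(scene_id)
    order.insert(new_index, scene_id)
    return order

scenes = ["opening", "packing", "on_the_road", "arrival"]
# A newly added "highlights" scene initially lands at the end...
scenes.append("highlights")
# ...and the drag operation moves it to the second position.
print(move_scene(scenes, "highlights", 1))
# ['opening', 'highlights', 'packing', 'on_the_road', 'arrival']
```

Adjusting the identification order in the window thus directly determines the order in which the corresponding scene videos are later synthesized.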
According to one or more embodiments of the present disclosure, according to the method described in any one of examples one to eight, in example nine, synthesizing the scene videos of the multiple target scenes into the target video according to the sequence of the multiple target scenes in the target template homepage includes the steps described below.
A fifth click operation acting on a first next control in the target template homepage is received.
In response to the fifth click operation, multiple scene videos are processed.
The currently displayed page is switched from the target template homepage to a video editing page, multiple to-be-synthesized videos are sequentially played in the video editing page, and a video editing track is displayed so that the user edits the multiple to-be-synthesized videos based on the video editing track, where the multiple to-be-synthesized videos include the scene videos.
A sixth click operation acting on a second next control in the video editing page is received.
In response to the sixth click operation, the multiple to-be-synthesized videos are synthesized into the target video.
According to one or more embodiments of the present disclosure, according to the method described in example nine, in example ten, processing the multiple scene videos includes at least one of the steps described below.
A video effect corresponding to each target scene in the target template is added to a scene video of the target scene.
Volume balance processing is performed on a scene video of each target scene.
The multiple scene videos are sequenced according to the sequence of each target scene and the order of each scene video in a corresponding target scene, and a transition video is added between two adjacent scene videos which meet a preset addition condition, where the transition video corresponds to the two adjacent scene videos. Accordingly, the to-be-synthesized videos further include the transition video.
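As one hypothetical realization of the volume balance processing above (the disclosure leaves the balancing method unspecified), each clip's audio could be scaled to a common RMS level so the volume does not jump between adjacent scenes. The target level and function names are assumptions for illustration.

```python
import numpy as np

def balance_volumes(clips: list, target_rms: float = 0.1) -> list:
    """Volume balance across scene videos: scale each clip's audio samples
    so every clip reaches roughly the same RMS level."""
    balanced = []
    for samples in clips:
        samples = np.asarray(samples, dtype=np.float32)
        rms = float(np.sqrt(np.mean(samples ** 2)))
        gain = target_rms / rms if rms > 0 else 1.0
        # Clip to the valid sample range in case the gain overshoots.
        balanced.append(np.clip(samples * gain, -1.0, 1.0))
    return balanced

quiet = 0.01 * np.sin(np.linspace(0, 100, 1000))  # very quiet clip
loud = 0.5 * np.sin(np.linspace(0, 100, 1000))    # much louder clip
out = balance_volumes([quiet, loud])
# After balancing, both clips have approximately the same RMS level.
```

A production system would more likely use a perceptual loudness measure (such as LUFS) rather than raw RMS, but the aligning-to-a-common-level idea is the same.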
According to one or more embodiments of the present disclosure, according to the method described in example nine, in example eleven, the method further includes the steps described below.
A seventh click operation acting on a first save control in the target template homepage is received.
In response to the seventh click operation, a project file of the target video is saved as a template draft file whose file state is a template draft state.
An eighth click operation acting on a second save control in the video editing page is received.
In response to the eighth click operation, a project file of the target video is saved as a clip draft file whose file state is a clip draft state.
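The two save paths of example eleven differ only in the resulting file state. A minimal sketch, assuming a dictionary-based project file and illustrative page and state names:

```python
def save_project(project: dict, source_page: str) -> dict:
    """Save a project file with a state determined by the page it was saved
    from: the template homepage yields a template draft, the video editing
    page yields a clip draft."""
    states = {
        "template_homepage": "template_draft",  # first save control
        "video_editing": "clip_draft",          # second save control
    }
    draft = dict(project)
    draft["file_state"] = states[source_page]
    return draft
```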
According to one or more embodiments of the present disclosure, example twelve provides a video generation apparatus. The apparatus includes a first trigger module, a homepage display module, a second trigger module, a video adding module, and a video synthesis module.
The first trigger module is configured to receive a first trigger operation for using a target template.
The homepage display module is configured to, in response to the first trigger operation, display a target template homepage of the target template and display scene description information about multiple target scenes of the target template in the target template homepage.
The second trigger module is configured to receive a second trigger operation for adding a scene video of any target scene.
The video adding module is configured to, in response to the second trigger operation, add a scene video of a target scene corresponding to the second trigger operation.
The video synthesis module is configured to synthesize scene videos of the multiple target scenes into a target video according to the sequence of the multiple target scenes in the target template homepage.
According to one or more embodiments of the present disclosure, example thirteen provides an electronic device.
The electronic device includes one or more processors and a memory.
The memory is configured to store one or more programs.
When executed by the one or more processors, the one or more programs cause the one or more processors to implement the video generation method according to any one of examples one to eleven.
According to one or more embodiments of the present disclosure, example fourteen provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the video generation method according to any one of examples one to eleven.
In addition, although operations are illustrated in a particular order, this should not be construed as requiring that the operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Similarly, although several specific implementation details are included in the preceding discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any appropriate sub-combination.
Number | Date | Country | Kind
---|---|---|---
202011624042.4 | Dec 2020 | CN | national
This application is a continuation of International Patent Application No. PCT/CN2021/143197, filed on Dec. 30, 2021, which claims priority to Chinese Patent Application No. 202011624042.4 filed on Dec. 31, 2020, the disclosures of both of which are incorporated herein by reference in their entireties.
 | Number | Date | Country
---|---|---|---
Parent | PCT/CN2021/143197 | Dec 2021 | US
Child | 18217215 | | US