The present invention relates generally to maintaining AI-generated video.
The present invention provides a method for maintaining AI-generated video, said method comprising the steps of:
The present invention provides a method for maintaining and regenerating AI-generated videos, the method comprising the following steps:
The present invention provides a method for generating a video, implemented by one or more processors operatively coupled to a non-transitory computer readable storage device, on which are stored modules of instruction code that, when executed, cause the one or more processors to perform said method comprising the steps of:
According to some embodiments of the present invention, the promotion/marketing concept includes at least one of an educational, informative, or entertainment concept.
According to some embodiments of the present invention, the video style is limited by a personal style or by design limitations such as brand guidelines.
According to some embodiments of the present invention, the method further comprises the step of applying a generative AI model for determining an idea concept for a storyboard, wherein the generation of the script is further based on the storyboard.
According to some embodiments of the present invention, the defining of scenario parts/scenes is based on the determined script.
According to some embodiments of the present invention, the AI video model is generated by applying the following steps: receiving user instructions, receiving generated video options, receiving user selections of a video part sequence, and receiving user editing actions.
According to some embodiments of the present invention, the user interface is configured to receive user instructions, edit previous instructions, manually select more relevant media or use services to generate media, upload user media or text, delete scenes, and update the script; optionally, the user interface enables the user to approve the final version, perform manual editing, and make a final selection of a video segment.
According to some embodiments of the present invention, the method further comprises the step of creating and training a designated AI model configured to create a promotion/marketing concept/idea by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the method further comprises the step of creating and training a designated AI model configured to generate video by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the method further comprises the step of creating and training a designated AI model configured to generate scripts by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the method further comprises the step of designing a video layout and creating a concept by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the method further comprises the step of creating and training a designated AI model configured to design a video layout by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the method further comprises the step of creating and training a designated AI model configured to define/determine a video style by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the method further comprises the step of creating and training a designated AI model configured to select, determine, or generate content by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
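By way of a non-limiting illustration, the following Python sketch shows one possible way such preference-learning training data may be assembled from user text, selected videos, and user actions; all names used here (UserSession, PreferenceExample, build_training_set) are hypothetical and are not mandated by the invention.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserSession:
    """One user interaction with the video generation platform."""
    user_text: str                 # the user's original instruction/prompt
    offered_videos: List[str]      # IDs of the generated video options shown
    selected_video: str            # ID of the option the user chose
    editing_actions: List[str]     # e.g. "delete_scene:3", "update_script:..."

@dataclass
class PreferenceExample:
    """A training example pairing an instruction with the preferred outcome."""
    prompt: str
    chosen: str
    rejected: List[str] = field(default_factory=list)
    edits: List[str] = field(default_factory=list)

def build_training_set(sessions: List[UserSession]) -> List[PreferenceExample]:
    """Turn raw sessions into preference examples for a designated AI model."""
    examples = []
    for s in sessions:
        rejected = [v for v in s.offered_videos if v != s.selected_video]
        examples.append(PreferenceExample(
            prompt=s.user_text,
            chosen=s.selected_video,
            rejected=rejected,
            edits=s.editing_actions,
        ))
    return examples
```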
The present invention provides a system for generating a video, implemented by one or more processors operatively coupled to a non-transitory computer readable storage device, on which are stored modules of instruction code that, when executed, cause the one or more processors to perform the steps of:
According to some embodiments of the present invention, the AI video model is generated by applying the following steps: receiving user instructions, receiving generated video options, receiving user selections of a video part sequence, and receiving user editing actions.
According to some embodiments of the present invention, the video generation server is further configured to create and train a designated AI model configured to create a promotion/marketing concept/idea by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the video generation server is further configured to create and train a designated AI model configured to generate video by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
According to some embodiments of the present invention, the video generation server is further configured to create and train a designated AI model configured to generate scripts by learning user preferences in relation to the received user instructions, based on user text, selected videos, and user actions.
The present invention will be more readily understood from the detailed description of embodiments thereof made in conjunction with the accompanying drawings of which:
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is applicable to other embodiments or capable of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
The following is a table of definitions of the terms used throughout this application, adjoined by their properties and examples.
The video generation platform 50 comprises the following key components:
The Video Database Management Module is designed to store each video file, including a complete set of instructions used for video creation and the original user text prompts (802).
Additionally:
The video template generation module applies at least one of the following steps:
Optionally, saving metadata as a separate file associated with the video file using an ID, where the file containing the full instructions is saved at a remote server (160).
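As a non-limiting illustration, the following Python sketch shows one way such metadata may be saved as a separate sidecar file associated with the video file via its ID; the file layout and function name are illustrative assumptions only.

```python
import json
from pathlib import Path

def save_video_metadata(video_id: str, instructions: dict,
                        user_prompt: str, out_dir: str = "metadata") -> Path:
    """Persist the full creation instructions and the original user prompt
    as a sidecar file associated with the video file via its ID."""
    Path(out_dir).mkdir(exist_ok=True)
    sidecar = Path(out_dir) / f"{video_id}.json"   # e.g. metadata/abc123.json
    sidecar.write_text(json.dumps({
        "video_id": video_id,
        "instructions": instructions,            # complete set used for creation
        "user_original_text_prompt": user_prompt,
    }, indent=2))
    return sidecar
```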
The text server module applies at least one of the following steps:
All scene media parts are customized and personalized based on the requesting entity's (company or human user) branding/profile data; the branding can be provided by the user or obtained by smart analysis of any entity content, such as a website, logo, or press media (250).
Generating a new video by implementing a selected or new video template using the aggregated content, wherein the generated video complies with all analyzed requirements (260).
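By way of a non-limiting illustration, the following Python sketch shows one possible way branding/profile data may be applied to the scene media parts before the template is rendered (steps 250-260); the Branding fields and function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Branding:
    primary_color: str   # e.g. extracted from the entity's logo or website CSS
    font_family: str
    logo_url: str

def personalize_scene(scene: dict, brand: Branding) -> dict:
    """Apply the requesting entity's branding to one scene of the template."""
    scene = dict(scene)
    scene["background_color"] = brand.primary_color
    scene["font"] = brand.font_family
    scene["overlay_logo"] = brand.logo_url
    return scene

# Usage: personalize every scene before rendering the final video.
brand = Branding("#0a3d62", "Inter", "https://example.com/logo.png")
scenes = [{"text": "Welcome!"}, {"text": "Our new product"}]
branded = [personalize_scene(s, brand) for s in scenes]
```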
The user interface module executes a sequence of actions, including but not limited to the following steps:
Manually select relevant media or generate new media using services like DALL-E-2 (an illustrative generation call is sketched after this list).
Upload personal media or text content.
Delete scenes, update the script, or make other adjustments. After the video is created, provide correction instructions or brief changes to the video.
Approve the final version, with the option to continue manual editing as needed.
Final Selection (360): The user makes a final choice of at least one video segment to proceed with the completed video.
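As a non-limiting illustration of the media-generation step referenced above, the following Python sketch calls the OpenAI Images API to generate new scene media with DALL-E 2; the helper name and its integration point in the platform are assumptions for illustration only.

```python
from openai import OpenAI  # OpenAI Python SDK (pip install openai)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_scene_image(prompt: str) -> str:
    """Generate new media for a scene via DALL-E 2 and return its URL."""
    resp = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return resp.data[0].url
```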
The AI video bot module applies at least one of the following steps:
The AI director video bot module applies at least one of the following steps: enabling the user to select a video type: promotion, education, informative, entertainment, social, or personal (702A); alternatively, applying a designated generative AI model for determining the video type based on user instructions;
The AI director video bot module applies at least one of the following steps:
This process encompasses several sophisticated steps:
Checking the style of each class, making adaptations of styles, packing compatible classes, and formatting the technical properties of the classes;
Optionally, applying a generative AI model for determining an idea concept/general storyboard based on user instructions and/or the generated promotion/marketing concept/idea and/or the determined classes;
Optionally, applying a generative AI model for determining the video style based on the determined classes and/or user instructions and/or the determined concept, wherein the style includes emotion type, design format, length based on the promotion concept, animation, and sentence style (708B);
Applying a generative AI model for creating a text script, first based on adapting the script to the determined classes and only optionally based on the generated marketing concept, style, and format for the video using the text messages, wherein the script comprises scenes, each scene designed to match the layout structure of the video scenes (708);
Wherein the video comprises scenes, each scene having a motion layout format including definitions of the types of objects appearing, the types of object motion, the appearance order, the layout of objects, and the order of displaying objects;
Applying a generative AI model for selecting or generating content based on user instructions, the generated promotion/marketing concept/idea, the determined concept, and the determined style (711B);
For each scenario part, based on the defined script part, determining the layout style, context and/or content, emotion, theme, number, type, and properties of content objects, the layout of video frames, the order/sequence of displaying content, the functionality of objects, and optionally object customization options, and generating content using AI (712B).
Generating the video based on the defined scenario parts, the selected or generated content, and the determined classes (714B).
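By way of a non-limiting illustration, the following Python sketch shows one possible data structure for the scene motion layout format and render plan described above (steps 712B-714B); all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentObject:
    kind: str          # "text", "image", "video_clip", "audio"
    source: str        # asset URL or generated-content reference
    motion: str        # e.g. "fade_in", "slide_left" (type of object motion)

@dataclass
class Scene:
    """One scenario part with its motion layout format."""
    layout_style: str               # determined per the defined script part
    emotion: str                    # e.g. "upbeat", "calm"
    objects: List[ContentObject] = field(default_factory=list)
    display_order: List[int] = field(default_factory=list)  # order of display

def assemble_video(scenes: List[Scene]) -> List[dict]:
    """Flatten scenes into an ordered render plan for the video generator."""
    plan = []
    for i, scene in enumerate(scenes):
        for idx in scene.display_order or range(len(scene.objects)):
            obj = scene.objects[idx]
            plan.append({"scene": i, "kind": obj.kind,
                         "source": obj.source, "motion": obj.motion})
    return plan
```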
The Video Database Management Module applies at least one of the following steps:
In this section, we present a flowchart depicting the functionalities of the Video Database Management Module, as per several embodiments of the invention.
The Video Database Management Module performs a series of essential steps, including but not limited to:
This step involves the storage of a comprehensive set of instructions utilized in the creation of each video, along with the original text prompts provided by the user (referred to hereafter as the “user original text prompt”). This combination of data encompasses both the technical instructions and the user's creative input.
The Video Database Management Module is also responsible for classifying video instructions based on contextual factors and/or the user's original text. This classification helps organize and categorize the instructions for efficient retrieval and usage.
This step involves receiving text requests from users who are seeking to create new videos. These user-initiated text requests, referred to as a “User text request for new video,” are an essential input for generating customized videos.
The module's next task is to search the video instruction database to find the best matching instructions based on the user's request and the existing instructions. This search operation ensures that the video creation process is guided by the most relevant and suitable instructions.
Finally, upon locating the best match between the user's request and the available video instructions, the Video Database Management Module proceeds to edit the selected instructions to further optimize the match. This editing process, labeled “best match,” ensures that the final video output aligns with the user's expectations and creative intent.
In summary, the Video Database Management Module encompasses a series of steps, from storing comprehensive video instructions to processing user text requests, searching for the best-matching instructions, and fine-tuning the chosen instructions to produce high-quality, user-specific video content. This system facilitates a seamless and efficient video creation process, enhancing the overall user experience.
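As a non-limiting illustration, the following Python sketch shows one simple way the best-matching instructions may be located for a user text request; token-overlap (Jaccard) similarity is used purely for illustration, and an embedding-based search could be substituted.

```python
def _tokens(text: str) -> set:
    return set(text.lower().split())

def find_best_match(user_request: str, instruction_db: dict) -> tuple:
    """Search the video instruction database for the stored instructions
    whose original prompt best matches the user's new text request."""
    req = _tokens(user_request)
    best_id, best_score = None, -1.0
    for video_id, record in instruction_db.items():
        cand = _tokens(record["user_original_text_prompt"])
        score = len(req & cand) / max(len(req | cand), 1)  # Jaccard similarity
        if score > best_score:
            best_id, best_score = video_id, score
    return best_id, best_score

# Usage: retrieve, then edit the matched instructions for the new request.
db = {"vid1": {"user_original_text_prompt": "promo video for a coffee shop"},
      "vid2": {"user_original_text_prompt": "educational clip about recycling"}}
print(find_best_match("short promo for my coffee brand", db))  # -> ("vid1", ...)
```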
The Video Coding Representation Module applies at least one of the following steps:
Determining a comprehensive set of instructions which represents a unique video version (902);
Saving said comprehensive set of instructions as a video code, including instructions for the assets of the video: text, images, audio, video, and links to media objects (public media links) (904);
Creating a video file which, upon activation, uses this code for playing the unique video version (906);
Receiving a user text request for a new video (908);
Searching the video files database for videos having a code representing instructions similar to the user text (910).
In this section, we present a flowchart illustrating the functionalities of the Video Coding Representation Module, in accordance with various embodiments of the invention.
The Video Coding Representation Module is responsible for executing one or more of the following critical steps:
Determining Comprehensive Set of Instructions for Unique Video Version
This step involves the identification and determination of a comprehensive set of instructions that represent a unique version of a video (referred to hereafter as the “video version”). These instructions serve as the blueprint for creating distinct video content.
Saving Comprehensive Instructions as Video Code
Upon determining the comprehensive set of instructions, the module proceeds to save these instructions as video code. This video code encompasses instructions for all elements of the video, including text, images, audio, video clips, and links to external media objects (such as public media links). This comprehensive data is labeled as “904” and serves as the basis for video creation.
Creating Video File with Activation Code
The Video Coding Representation Module is also responsible for generating a video file that, upon activation, utilizes the stored video code to play the unique video version. This creation of the video file ensures that the video content aligns precisely with the defined instructions, resulting in a tailored video experience.
In this step, the module receives text requests from users who are seeking to create new videos. These user-generated text requests, denoted as “User text request for new video” (labeled “908”), are essential inputs for customizing video content to meet user preferences.
Subsequently, the Video Coding Representation Module searches the video files database for videos that possess codes representing instructions similar to the user's text request. This search process, identified as “910,” ensures that the module can quickly locate and provide relevant video content that aligns with the user's request.
In summary, the Video Coding Representation Module plays a pivotal role in the video creation process. It begins by determining unique video instructions, saving them as video code, and using this code to generate tailored video files upon activation. Additionally, the module facilitates the user experience by receiving text requests and efficiently retrieving video content that matches the user's expressed preferences, ultimately enhancing the overall effectiveness and personalization of video creation.
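By way of a non-limiting illustration, the following Python sketch shows one possible encoding of the comprehensive instruction set as a compact video code (steps 902-906); the JSON representation and function names are illustrative assumptions.

```python
import json

def instructions_to_video_code(instructions: dict) -> str:
    """Serialize the comprehensive instruction set as a compact video code
    covering text, images, audio, video clips, and public media links (904)."""
    return json.dumps(instructions, sort_keys=True, separators=(",", ":"))

def video_code_to_instructions(code: str) -> dict:
    """Decode a stored video code back into instructions for playback (906)."""
    return json.loads(code)

# A minimal illustrative instruction set for one unique video version (902).
instructions = {
    "scenes": [
        {"text": "Grand opening!", "image": "https://example.com/shop.jpg",
         "audio": "https://example.com/jingle.mp3", "duration_s": 4},
    ],
}
code = instructions_to_video_code(instructions)
assert video_code_to_instructions(code) == instructions  # lossless round trip
```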
The Video Designated Player applies at least one of the following steps: upon user activation, activating the video code (952);
Applying the comprehensive set of instructions represented by the video code (954); generating the unique video file on the fly based on the comprehensive set of instructions using the Text by Video Generation Server (956).
In this section, we introduce a flowchart that illustrates the operations of the Video Designated Player, in accordance with various embodiments of the invention.
The Video Designated Player is designed to execute at least one of the following crucial steps:
Upon user activation of the Video Designated Player, the player initiates the activation of a specific video code. This video code serves as the key to unlocking and playing the designated video content. (step 952)
Subsequently, the Video Designated Player applies a comprehensive set of instructions represented by the activated video code. These instructions provide detailed guidance on how the video content should be presented and rendered to the user. (step 954)
One of the primary functions of the Video Designated Player is to dynamically generate a unique video file in real-time based on the comprehensive set of instructions. This process involves the use of a Text-to-Video Generation Server (labeled as the “Text by Video Generation Server”) to transform textual instructions into visual and auditory elements, ultimately creating a customized video tailored to the user's specifications. (step 956)
In summary, the Video Designated Player serves as the bridge between user interaction and video content delivery. It activates video codes upon user initiation, applies comprehensive instructions embedded in the codes, and collaborates with a Text-to-Video Generation Server to produce unique video files on-the-fly. This dynamic and user-centric approach ensures that the video experience is highly customizable and responsive to user preferences, enhancing the overall effectiveness and personalization of video playback.
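As a non-limiting illustration, the following Python sketch outlines the designated-player flow of steps 952-956; the generation-server endpoint and request format are hypothetical placeholders.

```python
import json
import urllib.request

GENERATION_SERVER = "https://example.com/generate"  # placeholder endpoint

def play_video(video_code: str) -> bytes:
    """Designated-player flow: activate the video code (952), apply its
    instructions (954), and request on-the-fly generation of the unique
    video file from the text-to-video generation server (956)."""
    instructions = json.loads(video_code)           # step 954
    req = urllib.request.Request(
        GENERATION_SERVER,
        data=json.dumps(instructions).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:       # step 956
        return resp.read()                          # rendered video bytes
```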
In the above description, an embodiment is an example or implementation of the inventions. The various appearances of “one embodiment,” “an embodiment” or “some embodiments” do not necessarily all refer to the same embodiments.
Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
Reference in the specification to “some embodiments”, “an embodiment”, “one embodiment” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
It is to be understood that the phraseology and terminology employed herein is not to be construed as limiting and are for descriptive purposes only.
The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples.
It is to be understood that the details set forth herein are not to be construed as a limitation on the application of the invention.
Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
It is to be understood that the terms “including”, “comprising”, “consisting of” and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers.
If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional elements.
It is to be understood that where the claims or specification refer to “a” or “an” element, such reference is not to be construed as meaning that there is only one of that element.
It is to be understood that where the specification states that a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included.
Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
The term “method” may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the art to which the invention belongs.
The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
Any publications, including patents, patent applications and articles, referenced or mentioned in this specification are herein incorporated in their entirety into the specification, to the same extent as if each individual publication was specifically and individually indicated to be incorporated herein. In addition, citation or identification of any reference in the description of some embodiments of the invention shall not be construed as an admission that such reference is available as prior art to the present invention.
While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.
Number | Date | Country
---|---|---
63594507 | Oct 2023 | US