The present invention relates to video editing, and more particularly to a computer based system and method for editing video/audio files in real time.
Conventionally, stationary computers such as desktop computers and servers are used to perform video editing because they are provided with greater computing resources than mobile devices. Users typically capture videos using their mobile devices because mobile devices are more portable than stationary computers. Once users capture digital content (e.g., video content) on their mobile devices, the captured digital content must then be transferred from the mobile devices to the stationary computers for performing various operations on the captured digital content, such as viewing or editing.
Many people record video on their mobile devices and share those videos with others. In many cases, these recorded videos could benefit from modifications that can alter the appearance of the video or improve visual and aural qualities of the video. Editing video content, however, can require considerable computing power and current technologies do not allow for meaningful video enhancements to be performed on computing devices in real time during playback.
Thus, in light of the above-mentioned problems, it is evident that there is a need for a method and system which would enable a user to play one or multiple video and audio files in synchronization and, during playback, trigger manipulations and effects on the video and audio files using a graphical user interface.
It should be understood that this disclosure is not limited to the particular systems, and methodologies described herein, as there can be multiple possible embodiments of the present disclosure which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present disclosure.
It is an objective of the present invention to provide a method and system for playing and editing at least one video and audio file in real time. The method includes a step of receiving a first request, via a graphical user interface, for selecting and displaying a first video file in a first available channel; a step of receiving a second request, via the graphical user interface, for selecting and displaying a second video file in a second available channel; a step of receiving a third request, via the graphical user interface, for selectively attaching at least one audio file to the first available channel, or an additional video file in the first available channel or in the second available channel; and a step of receiving at least one command, via the graphical user interface, for selectively performing a predefined manipulation function associated with the command at a user-defined time frame for customizing the first video in the first available channel, the second video in the second available channel, and the at least one audio file during playback.
The method further includes a step of storing the customized first video in the first available channel, the customized second video in the second available channel, and the customized audio file, along with manipulation data of the customized first video, the customized second video, and the customized at least one audio file during playback; a step of receiving a mixing request, via the graphical user interface, for combining the customized first video, the customized second video, and the customized at least one audio file to create a final video based on the manipulation data; and a step of storing and displaying the final video in at least one master channel via the graphical user interface.
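The flow of the steps above, from loading video into channels through collecting the manipulation data used for mixing, can be sketched as follows. This is a minimal illustrative model only; the class and method names (`Session`, `Channel`, `ManipulationEvent`) are assumptions, not part of the claimed invention.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ManipulationEvent:
    """One predefined manipulation triggered at a user-defined time frame (hypothetical model)."""
    effect: str        # e.g. "blur", "echo"
    start: float       # seconds into playback
    duration: float
    intensity: float = 1.0

@dataclass
class Channel:
    """An available channel holding a video file, an optional attached audio file, and its events."""
    video_file: Optional[str] = None
    audio_file: Optional[str] = None
    events: List[ManipulationEvent] = field(default_factory=list)

class Session:
    def __init__(self, num_channels: int = 2):
        self.channels = [Channel() for _ in range(num_channels)]

    def load_video(self, channel: int, path: str) -> None:
        # First/second request: select and display a video file in an available channel.
        self.channels[channel].video_file = path

    def attach_audio(self, channel: int, path: str) -> None:
        # Third request: selectively attach an audio file to a channel.
        self.channels[channel].audio_file = path

    def apply(self, channel: int, event: ManipulationEvent) -> None:
        # Command: record a predefined manipulation at a user-defined time frame.
        self.channels[channel].events.append(event)

    def manipulation_data(self) -> list:
        # Stored alongside the customized files; later consumed by the mixing step.
        return [(i, e.effect, e.start, e.duration)
                for i, ch in enumerate(self.channels) for e in ch.events]
```

In this sketch, triggering a manipulation during playback simply appends an event to the selected channel, and `manipulation_data()` returns the record that the mixing request would later use.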
Another object of the present invention is to provide a real-time video performance instrument, which would enable a user to play one or multiple video and audio files in synchronization and during playback, splice and trigger manipulations and effects on the video and audio files using the graphical user interface.
Another object of the present invention is to provide the graphical user interface which allows a user to utilize an intensity lever to adjust the character and magnitude of each manipulation in real-time.
Another object of the present invention is to provide a method which is adapted to save and load manipulation data onto new audio/video files. The manipulation data include a plurality of effects, manipulations, changes, and splices as performed on the previously saved audio/video files. Further, the method allows the user to utilize non-destructive recording, in such a manner that a customized file remains secure and safe in case of any system failure. The method further allows each available video channel, including a master channel, to be processed separately. The user can choose to use audio from the video output or from an audio file. The user can record video or load video via the graphical user interface. The method further allows the user to record video at a special aspect ratio. The user can encapsulate the video with an audio-responsive waveform, resulting in the generation of visualization effects for the encapsulated video.
Further, in some implementations, a computer readable storage medium is provided to store instructions causing a processing device to perform the operations described above.
For illustrative purposes, the description presented below is applicable to video data/video files, but the systems, apparatuses, and methods described herein can similarly be applied to any type of media content item, including audio data, visual data (e.g., images), audio-visual data, or any combination thereof.
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized that such equivalent constructions do not depart from the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
For a complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawing, in which:
Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which example embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
Some embodiments of this invention, illustrating all its features, will now be discussed in detail with respect to
The mobile device 101 can be a portable computing device such as, but not limited to, a cellular telephone, a personal digital assistant (PDA), a portable media player, a notebook, a laptop computer, an electronic book reader, a tablet computer (e.g., one that includes a book reader application), and the like. The mobile device 101 can receive a media item, such as a digital video or a digital movie, from the database 107, the media storage 108, or the media library 103. The mobile device 101 can run an operating system (OS) that manages hardware and software on the mobile device 101.
Media items can be received from any source, including components of the mobile device 101, the server or server machine 115, another mobile device 101, etc., and can be stored in a storage unit. The storage unit comprises at least one of the database 107, the media storage 108, or the media library 103. For example, the storage unit can store a digital video captured by a video camera of the mobile device 101. The media storage 108 or the media library 103 can be a persistent storage unit that is capable of storing data. The persistent storage unit can be a local storage unit or a remote storage unit. The persistent storage unit can be a magnetic storage unit, an optical storage unit, a solid-state storage unit, an electronic storage unit (main memory), or a similar storage unit. The persistent storage unit can be a monolithic device or a distributed set of devices. The term ‘set’, as used herein, refers to any positive whole number of items. The data storage can be internal to the mobile device 101 or external to the mobile device 101 and accessible by the mobile device 101 via a network. As will be appreciated by those skilled in the art, in some implementations the data storage may be a network-attached file server or a cloud-based file server, while in other implementations the data storage might be some other type of persistent storage, such as an object-oriented database, a relational database, and so forth.
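Resolving a media item against the several storage units named above (database 107, media storage 108, media library 103) amounts to checking each source in turn. The sketch below is an assumption about how such a lookup could be structured; the function name and the dictionary-backed sources are purely illustrative.

```python
def locate_media(item_id, sources):
    """Return the first copy of item_id found among an ordered list of storage
    sources (e.g., local database, media storage, media library, remote server).
    Each source is modeled here as a simple mapping from item id to a path."""
    for source in sources:
        if item_id in source:
            return source[item_id]
    raise KeyError(f"media item not found: {item_id}")
```

Ordering the sources local-first means a network round trip is only paid when the item is not cached on the device.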
The server/server machine 115 can be a rack mount server, a router computer, a personal computer, a portable digital assistant, a laptop computer, a desktop computer, a media center, a tablet, a stationary machine, or any other computing device capable of performing enhancements of videos.
The present invention provides a method which can be executed by a real-time video performance apparatus having a processor-executable application stored in the memory. The application enables a user to play one or multiple video and audio files in synchronization and, during playback, splice and trigger manipulations and effects on the video and audio files using the graphical user interface 310. The method includes a step of receiving a first request, via the graphical user interface 310, for selecting and displaying a first video file in a first available channel; a step of receiving a second request, via the graphical user interface 310, for selecting and displaying a second video file in a second available channel; a step of receiving a third request, via the graphical user interface 310, for selectively attaching at least one audio file to the first and second video files in the first available channel and in the second available channel; and a step of receiving at least one command, via the graphical user interface 310, for selectively performing a predefined manipulation function associated with the command at a user-defined time frame for customizing the first video in the first available channel, the second video in the second available channel, and the at least one audio file during playback.
Further, the method includes a step of storing the customized first video in the first available channel, the customized second video in the second available channel, and the customized audio file, along with manipulation data of the customized first video, the customized second video, and the customized at least one audio file during playback; a step of receiving a mixing request, via the graphical user interface 310, for combining the customized first video, the customized second video, and the customized at least one audio file to create a final video based on the manipulation data; and a step of storing and displaying the final video in at least one master channel via the graphical user interface 310.
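The mixing request described above consumes the stored manipulation data to assemble the final video. A minimal sketch of that step is given below; the function name, the dictionary-shaped events, and the returned description are illustrative assumptions rather than the claimed implementation.

```python
def mix_channels(channel_files, manipulation_data):
    """Combine the customized channel sources with their recorded manipulations
    into a single master-channel description, ordered by trigger time.

    channel_files:     list of customized video/audio file paths (None = unused channel)
    manipulation_data: list of dicts, each with at least "start" (seconds) and "effect"
    """
    timeline = sorted(manipulation_data, key=lambda e: e["start"])
    return {
        "sources": [f for f in channel_files if f is not None],
        "edits": timeline,  # applied in order when rendering the final video
    }
```

A renderer would then walk `edits` in order, applying each effect to the corresponding source at its recorded time, and write the result to the master channel.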
In one embodiment of the present invention, the first available channel, the second available channel, and the at least one master channel include respective windows showing the first video, the second video, and the final video, along with their respective timelines, in the graphical user interface 310. Further, the first video, the second video, and the at least one audio file are received from a media library stored in the memory, from an online audio/video streaming server, or from a camera in real time.
In another embodiment of the present invention, as shown in
In another embodiment of the present invention, the software application having the graphical user interface 310 allows the user to save and load manipulation data onto a new audio/video file. The manipulation data include the effects, manipulations, changes, and splices as performed on the previously saved audio/video file. Further, the method allows the user to utilize non-destructive recording, so that the customized file remains secure and safe in case of any system failure. The method further allows each available video channel, including the master channel, to be processed separately. The user can choose to use audio from the video output or from an audio file. The user can record video or load video via the graphical user interface 310. The method allows the user to record video at a special aspect ratio. The method allows the user to encapsulate the video with an audio-responsive waveform visualization.
In an exemplary embodiment of the present invention, the user would first add or record audio or video to each respective channel and align each audio/video element in a specified timeline where desired. To create FX, the user would select a channel to process and trigger the play button, at which point all video and audio would start playing in synchronization. The user can utilize the graphical user interface to trigger effects and manipulations, which would then be recorded and stored during real-time playback; the effects and manipulations would be applied to the appropriate channel at the precise times and durations at which they were triggered. The user can continue to layer and record multiple FX by repeating this process. These FX and manipulations can be applied both to a single channel and to the master channel (if the master output channel (master channel) is selected during playback). For example, the user may be able to add different effects to a channel, and then add additional effects to the master output channel.
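Recording triggered FX with their precise times during real-time playback can be sketched as an event recorder. The class below is an illustrative assumption: it stores only the trigger events (channel, effect, intensity, and the playback-relative time), leaving the source media untouched, which is the essence of the non-destructive recording described earlier.

```python
import time

class FXRecorder:
    """Records effect triggers with playback-relative timestamps (hypothetical sketch).
    Only events are stored; source video/audio files are never modified."""

    def __init__(self):
        self.events = []
        self._t0 = None

    def start(self):
        # Called when the play button is triggered; marks the start of playback.
        self._t0 = time.monotonic()

    def trigger(self, channel, effect, intensity=1.0):
        # Called when the user taps an effect in the GUI during playback.
        self.events.append({
            "channel": channel,
            "effect": effect,
            "intensity": intensity,                 # e.g. set by the intensity lever
            "time": time.monotonic() - self._t0,    # seconds since playback began
        })
```

Layering FX by repeating the process then simply appends further events; a later pass replays them against the chosen channel or the master channel.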
In an exemplary embodiment of the present invention, the RTVPI application 105 is adapted to show the real time progress during playback and output one or more videos which are processed in one or more video channels via the graphical user interface 310.
In order to choose which channels should feed into the master channel, the user would select the master channel by tapping on the channel labeled as such in the wireframe (not shown explicitly), trigger the play button, and then, in real time during playback, tap the visual interface for channels 1-6 as desired, similar to how effects and manipulations are applied. In an exemplary embodiment, the user can select channel 1 during minute 1, channel 4 during minute 2, and so on. The channels can be selected at any instant of time by the user as desired. The time instant may be represented, for example, in minutes.
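The channel taps described above form a switching timeline: at any playback time, exactly one channel feeds the master output. A minimal sketch of resolving that timeline is shown below; the function name and the `(time, channel)` tuple format are assumptions for illustration.

```python
def channel_at(switches, t):
    """Given the recorded channel taps as (time_seconds, channel) pairs sorted by
    time, return the channel feeding the master output at playback time t."""
    current = switches[0][1]  # channel selected at the start of playback
    for when, channel in switches:
        if when <= t:
            current = channel  # this tap has already happened by time t
        else:
            break              # later taps do not apply yet
    return current
```

For example, with taps at minute 0 (channel 1) and minute 1 (channel 4), the master output shows channel 1 for the first minute and channel 4 afterwards.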
In another implementation of the present invention, a computing system is provided within which a set of instructions, for causing the machine to perform one or more of the methodologies discussed herein, may be executed. The machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, or an extranet. The machine may operate with a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
Additionally, as will be appreciated by those skilled in the art, the machine may include an image sensing module, an image capture device, a hardware media encoder/decoder, and/or a graphics processing unit (GPU). The image sensing module can include an image sensor (a camera, for example) capable of converting an optical image or images into an electronic signal.
Further, as used herein, the term “data storage” or any variations thereof may include a machine-readable storage medium (or more specifically a computer-readable storage medium) having one or more sets of instructions (e.g., the RTVPI (real-time video performance instrument) application 105 having the graphical user interface 310) embodying any one or more of the methodologies or functions described herein. Further, the video preview module may also reside, completely or at least partially, within main memory and/or within the processing device during execution thereof. As would be appreciated by those skilled in the art, the main memory and the processing device also constitute machine-readable storage media.
For simplicity of explanation of implementation, the methods have been described as a series of steps. However, the steps in accordance with this disclosure can occur in various orders and/or concurrently, and with other steps not presented and described herein. Furthermore, not all illustrated steps may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture (e.g., a computer readable storage medium) to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media.
The methods and systems described herein can be used in a wide variety of implementations, including as part of a mobile application (“app”), and can be part of photo- or video-related software including a mobile operating system. Applications installed on the mobile device can access the systems and methods via one or more application programming interfaces (APIs).
It will finally be understood that the disclosed embodiments are presently preferred examples of how to make and use the claimed invention, and are intended to be merely explanatory. Reasonable variations and modifications of the illustrated examples in the foregoing written specification and drawings are possible without departing from the scope of the invention as defined in the claims below.
This application claims the benefit of U.S. Provisional Application No. 62/376,708 filed on Aug. 18, 2016, and U.S. Provisional Application No. 62/442,979 filed on Jan. 6, 2017, which are incorporated by reference herein.
Number | Date | Country
---|---|---
62376708 | Aug 2016 | US
62442979 | Jan 2017 | US