On a social media platform that allows users to upload original content and interact with each other's content, viral trends commonly occur in which various users attempt to repeat an original concept, sometimes adding their own modifications. A derivative version of the original concept may even become more popular than the original, despite owing its start to the user who provided the original concept. In such a case, the original user may feel that their original concept was misappropriated. In addition, a platform hosting such uploaded content may present a high barrier to entry for new users who are not yet familiar with the various editing options available for generating content, or who may not feel creative enough to develop their own ideas into original content.
To address these issues, a computing system is provided herein that includes a client computing device including a processor. The processor may be configured to execute a client program to display a first video published by a first user on a video server platform, to a second user viewing the first video. The processor may be configured to execute the client program to display a graphical user interface. The graphical user interface may include a selectable input component configured to enable selection of an edits model of the first video. The edits model may include a series of edit operations applied to the first video. The processor may be configured to execute the client program to, in response to selection of the selectable input component, apply the edit operations to a second video. The processor may be configured to execute the client program to publish the second video by the second user on the video server platform for viewing by other users.
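The edits model described above can be thought of as an ordered series of recorded edit operations that may be replayed onto another video. The following is a minimal illustrative sketch of that idea; the class and field names are assumptions for explanation only, not the platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class EditOperation:
    kind: str    # e.g. "add_text_box", "apply_filter", "add_sticker"
    params: dict # operation-specific parameters (coordinates, timing, ...)

@dataclass
class EditsModel:
    model_id: str
    operations: list = field(default_factory=list)

    def apply_to(self, video: dict) -> dict:
        # Replay each recorded edit operation, in order, onto the target video.
        edited = dict(video)
        edited.setdefault("applied_edits", [])
        for op in self.operations:
            edited["applied_edits"].append((op.kind, op.params))
        return edited

# A second user selects the edits model of a first video and applies it
# to their own (second) video.
model = EditsModel("em-001", [
    EditOperation("add_text_box", {"text": "Try this!", "x": 10, "y": 20, "start_s": 0.5}),
    EditOperation("apply_filter", {"name": "vintage"}),
])
second_video = model.apply_to({"title": "my attempt"})
```

Modeling the edits as ordered data rather than baked-in pixels is what allows the same series of operations to be reapplied to entirely different footage.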
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
To address the above issues,
On the client side of the computing system 100, a first client computing device 18A, a second client computing device 18B, and other client computing devices 18C may be used by associated users to interact with the application server program 16. Each client computing device 18A-C may be of any suitable type such as a smartphone, tablet, personal computer, laptop, wearable electronic device, etc. able to access the video server platform 10 via an internet connection. The first client computing device 18A may include a processor 20A configured to execute a client program 22 to enact various client-side functions of the video server platform 10 on behalf of a first user. The first client computing device 18A may further include associated memory 24A for storing data and instructions, a display 26A, and at least one input device 28A of any suitable type, such as a touchscreen, keyboard, buttons, accelerometer, microphone, camera, etc., for receiving user input from the first user. In this example, the first user is a content originator who is providing new, original content on the video server platform 10 for consumption by other users.
First, the first user creates a first video 30 to be published on the video server platform 10. The processor 20A may be configured to execute the client program 22 to present a graphical user interface (GUI) 32 to the first user on the display 26A. The GUI 32 may include a plurality of pages, screens, windows, or sub-interfaces providing various functions. For example, a video publishing screen 34 may be used to finalize details and settings before publishing a finished video; a video viewing screen 36 may be used to select and view another user's published videos; a video sharing screen 38 may present a number of options to the viewing user for interacting with the viewed video such as adding the video to a list or favorites collection, reacting to the video, sharing a link to the video over a connected social media or communications account, downloading the video, and so on; and a video editing screen 40 may be used to film and/or edit a video to be published. Additional screens may be included to provide additional features.
The first client computing device 18A may prepare the first video 30 using the video editing screen 40. The first video 30 may be packaged inside a first video object 42 with metadata 44 such as a location, model, and operating system of the first client computing device 18A, and a sharing permission 46. The sharing permission 46 may apply to all options of the video sharing screen 38, or any individual options. The sharing permission 46 may be an account-wide setting or a setting for individual videos. The first user may be able to set the sharing permission 46 via a selectable GUI component such as a switch, tick box, drop down menu, etc. (see
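The packaging of the first video inside a video object with device metadata and a sharing permission might be sketched as follows. The field names and function are hypothetical assumptions chosen to illustrate the structure described above.

```python
def make_video_object(video_bytes, device_info, allow_edits_sharing, video_level=True):
    """Bundle a video with device metadata and a sharing permission.

    The sharing permission may be set per video (video_level=True) or
    inherited from an account-wide default, as described above.
    """
    return {
        "video": video_bytes,
        "metadata": {
            "location": device_info.get("location"),
            "model": device_info.get("model"),
            "os": device_info.get("os"),
        },
        "sharing_permission": {
            "allow_edits_model": allow_edits_sharing,
            "scope": "video" if video_level else "account",
        },
    }

# First user packages the first video with a per-video sharing permission.
obj = make_video_object(b"...", {"model": "PhoneX", "os": "OS 1.0"}, True)
```

Carrying the permission inside the video object itself lets the server enforce it at publish time and at every later sharing request without a separate lookup.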
The second client computing device 18B, similar to the first client computing device 18A, may include a processor 20B configured to execute the client program 22 to display the GUI 32 including at least the video viewing screen 36, the video sharing screen 38, and the video editing screen 40, as well as associated memory 24B, a display 26B, and at least one input device 28B. Each of these components corresponds to the same named component of the first client computing device 18A, and therefore the same description will not be repeated. As with the first client computing device 18A, more screens may be presented in the GUI 32 than are shown in
If the edits model 48 is not included together with the first video 30, then the second user may select a GUI component, for example, on the video sharing screen 38, to send an edits model request 64 indicating the edits model identifier 58 to the handler 62, as shown in
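The request flow just described, in which the client asks the server's handler for an edits model by its identifier when the model is not bundled with the first video, might be sketched as below. The in-memory store and function names are hypothetical; a real implementation would send the request over the network.

```python
# Hypothetical server-side store of published edits models, keyed by identifier.
SERVER_EDITS_MODELS = {
    "em-001": {"operations": [("apply_filter", {"name": "vintage"})]},
}

def handle_edits_model_request(request: dict) -> dict:
    """Server-side handler: look up a stored edits model by its identifier."""
    model = SERVER_EDITS_MODELS.get(request["edits_model_id"])
    if model is None:
        return {"status": "not_found"}
    return {"status": "ok", "edits_model": model}

def request_edits_model(model_id: str) -> dict:
    """Client side: build an edits model request indicating the identifier
    and 'send' it to the handler (a direct call stands in for the network)."""
    return handle_edits_model_request({"edits_model_id": model_id})

response = request_edits_model("em-001")
```

Shipping only the identifier with the video and fetching the model on demand keeps video objects small for the many viewers who never select the edits model.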
The second user may complete the second video 66 with the exact same edit operations 50 of the edits model 48, in which case the edits model 48 may be omitted from a publish request 68 if desired, and the edits model identifier 58 may be used to associate the already stored edits model 48 with the second video 66 on the server computing device 12. Alternatively, in some implementations, the second user may be permitted to further modify one or more of the edit operations 50 and send back a modified edits model 70 to the handler 62 of the application server program 16. The modified edits model 70 may be associated with the original edits model 48 so that the first user is still credited with inspiration for the second video 66. That is, the edits model 48 of the second video 66 may be the same as or partially different from the edits model 48 of the first video 30. By sending the publish request 68 including a second video object 72 including metadata 74, the second video 66, sharing permission 76, as well as the edits model identifier 58 and/or edits model 48 as discussed above, the client program 22 may cause the application server program 16 to publish the second video 66 by the second user on the video server platform 10 for viewing by other users. Other users may be able to view the second video 66 provided by a handler 78 of the application server program 16 via their own other client computing devices 18C providing the video viewing screen 36.
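The two publish paths above, sending only the edits model identifier when the operations are unchanged, or sending a modified model linked back to the original so the first user remains credited, might be sketched as follows. All field names are illustrative assumptions.

```python
def build_publish_request(second_video, metadata, sharing_permission,
                          original_model_id, modified_model=None):
    """Assemble a publish request for the second video.

    If modified_model is None, the unchanged stored edits model is referenced
    by identifier only; otherwise the modified model is included and linked
    back to the original for credit.
    """
    request = {
        "video_object": {
            "video": second_video,
            "metadata": metadata,
            "sharing_permission": sharing_permission,
        },
        "edits_model_id": original_model_id,
    }
    if modified_model is not None:
        # Associate the modified model with the original edits model so the
        # first user is still credited with inspiration.
        request["edits_model"] = dict(modified_model, derived_from=original_model_id)
    return request

# Case 1: operations modified, so the modified model travels with the request.
req = build_publish_request(b"...", {}, {"allow_edits_model": True}, "em-001",
                            modified_model={"operations": []})

# Case 2: operations unchanged, so only the identifier is sent.
req_unchanged = build_publish_request(b"...", {}, {}, "em-001")
```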
Turning to
Turning to
As mentioned above, the GUI 32 may further include a video editing screen 40.
In some instances, the video editing screen 40 further includes a reference video 150 of the first video 30 that is displayed over the second video 66. Here, the reference video 150 is illustrated as a thumbnail, but may be a full-size overlay or may be displayed in a split-screen formation. The second user may therefore be able to easily create the second video 66 to have the correct content at the correct time in order to follow the flow of the series of edit operations 50. A GUI component 152 may be selected to close the reference video 150 if desired. As can be seen by comparing corresponding frames 110A-D, 148A-D at the same timestamp, the reference video 150 may be configured to play and pause in sync with the second video 66 during video filming and/or editing of the second video 66. Accordingly, if the second user pauses recording of the second video 66 via a play/pause button 154, the reference video 150 may be paused at the same point and the two videos 66, 150 will not go out of sync. As such, the reference video 150 may be a useful aid for the second user to look at when creating the second video 66. The reference video 150 may be adjustable in at least one of transparency, size, and position by the second user in the video editing screen 40. For example, the second user may apply an input 156 to drag the reference video 150 across the screen to a new position in frame 148A. The second user may apply an input 158 in frame 148C to increase the size of the reference video, with a reverse action able to decrease the size instead. The second user may be able to access an opacity pane and adjust a selectable GUI component 160, which may be a slider bar or up/down arrow, etc., to adjust the transparency of the reference video 150.
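The synchronized play/pause behavior and the adjustable overlay properties (transparency, size, position) described above might be modeled as GUI state along the following lines. The class is a hypothetical sketch, not a rendering API.

```python
class ReferenceOverlay:
    """Hypothetical state of a reference video displayed over the second video."""

    def __init__(self):
        self.playing = False
        self.position_s = 0.0
        self.opacity = 1.0     # 0.0 fully transparent .. 1.0 fully opaque
        self.size = (120, 68)  # small thumbnail by default
        self.xy = (8, 8)       # on-screen position

    def toggle_play_pause(self):
        # Driven by the second video's play/pause button, so the reference
        # video and the second video cannot drift out of sync.
        self.playing = not self.playing
        return self.playing

overlay = ReferenceOverlay()
overlay.toggle_play_pause()   # recording starts: reference plays in sync
overlay.xy = (200, 40)        # drag input moves the reference video
overlay.size = (240, 136)     # pinch input enlarges it
overlay.opacity = 0.5         # slider input makes it semi-transparent
overlay.toggle_play_pause()   # pause recording: reference pauses at the same point
```

Keying the reference video's playback to the same control as the second video is the design choice that guarantees the corresponding frames stay aligned at every timestamp.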
In the video editing screen 40, the second user may have access to many edit functions. A plurality of selectable GUI components 162 may be displayed to switch between front and rear facing cameras, adjust the recording speed, adjust photography settings, apply a filter, set a filming delay timer, etc. An effects component 164 may be selectable to access a catalog of usable effects to be applied to the second video 66. An upload component 166 may be selectable to retrieve footage stored in a camera reel or remote storage of the second client computing device 18B rather than using the camera to record within the client program 22. An audio description 168 may include information about an audio track used with the second video 66, which may be original or selected from a catalog of available tracks. The default audio track may be the same audio track used in the first video 30 as part of the edits model 48 applied to the second video 66. Once the second user is finished with the second video 66, a cancel button 170 may be used to cancel the prepared video, or an accept button 172 may be used to proceed to final touches before publishing.
The second user may use the edits model 48 of the first video 30 as-is when publishing the second video 66. Alternatively, with reference to
Another example of the video viewing screen 36 is illustrated in
In some implementations, the edit operations may include at least one of adding a text box, formatting the text box, applying a filter, adding a sticker, adding or modifying an audio track, setting coordinates of the text box or sticker, or setting a timing of the text box or sticker. More types of edit operations may be included as well. Accordingly, the first user has many options available for making a creative video that can entice other users to follow suit. In some implementations, applying the edit operations to the second video is permitted based at least on a sharing permission of the first user. The sharing permission may be set at the video level or the account level. This gives the first user creative control over the first video, and other users are allowed to copy the edits model only if the first user is comfortable allowing them to do so.
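The permission gate described in this paragraph, where a video-level sharing permission takes precedence and an account-level setting serves as the fallback, might be sketched as follows. The function names are illustrative assumptions.

```python
def may_apply_edits_model(video_permission, account_permission):
    """Video-level setting wins when present; otherwise fall back to the
    account-level setting."""
    if video_permission is not None:
        return video_permission
    return account_permission

def apply_edits(operations, video, video_permission=None, account_permission=False):
    """Apply another user's edit operations only if their sharing permission allows it."""
    if not may_apply_edits_model(video_permission, account_permission):
        raise PermissionError("first user has not shared this edits model")
    return {"video": video, "applied": list(operations)}

# Permitted: the first user enabled sharing for this individual video.
result = apply_edits([("add_sticker", {"x": 5, "y": 5})], b"...",
                     video_permission=True)
```

Checking the video-level setting first lets a user share most videos by default while withholding the edits model of any individual video, or vice versa.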
At 1210, the method 1200 may include, after the edit operations are applied, including an indication of credit to the first user with the second video. In this manner, the first user is assured that the specific concept of their video edits will not be improperly attributed to someone that was copying them. Furthermore, the credit may include a portion of compensation earned by the second video, in some cases. At 1212, the method 1200 may include displaying a reference video of the first video over the second video in a video editing screen of the graphical user interface. The reference video may provide the second user with a quick and easy check while creating the second video to make sure that the footage and edit operations will match up well. At 1214, the method 1200 may include playing and pausing the reference video in sync with the second video during video filming and/or editing of the second video. In this manner, the second user will be able to pause and restart filming or playback as needed without worrying about finding the same timestamp on the reference video. At 1216, the method 1200 may include adjusting the reference video in at least one of transparency, size, and position in response to input by the second user in the video editing screen. Thus, the reference video may be flexibly modified to fit the circumstances of any individual video and user. Finally, at 1218, the method 1200 may include publishing the second video by the second user on the video server platform. Once published, the second video may be viewed by other users who may also want to try using the same edits model.
In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
Computing system 1300 includes a logic processor 1302, volatile memory 1304, and a non-volatile storage device 1306. Computing system 1300 may optionally include a display subsystem 1308, input subsystem 1310, communication subsystem 1312, and/or other components not shown in
Logic processor 1302 includes one or more physical devices configured to execute instructions. For example, the logic processor may be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the logic processor 1302 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic processor may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. It will be understood that, in such a case, these virtualized aspects may be run on different physical logic processors of various different machines.
Non-volatile storage device 1306 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device 1306 may be transformed—e.g., to hold different data.
Non-volatile storage device 1306 may include physical devices that are removable and/or built-in. Non-volatile storage device 1306 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology. Non-volatile storage device 1306 may include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device 1306 is configured to hold instructions even when power is cut to the non-volatile storage device 1306.
Volatile memory 1304 may include physical devices that include random access memory. Volatile memory 1304 is typically utilized by logic processor 1302 to temporarily store information during processing of software instructions. It will be appreciated that volatile memory 1304 typically does not continue to store instructions when power is cut to the volatile memory 1304.
Aspects of logic processor 1302, volatile memory 1304, and non-volatile storage device 1306 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The term “program” may be used to describe an aspect of computing system 1300 typically implemented in software by a processor to perform a particular function using portions of volatile memory, which function involves transformative processing that specially configures the processor to perform the function. Thus, a program may be instantiated via logic processor 1302 executing instructions held by non-volatile storage device 1306, using portions of volatile memory 1304. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term “program” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
When included, display subsystem 1308 may be used to present a visual representation of data held by non-volatile storage device 1306. The visual representation may take the form of a GUI. As the herein described methods and processes change the data held by the non-volatile storage device, and thus transform the state of the non-volatile storage device, the state of display subsystem 1308 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1308 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic processor 1302, volatile memory 1304, and/or non-volatile storage device 1306 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 1310 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity; and/or any other suitable sensor.
When included, communication subsystem 1312 may be configured to communicatively couple various computing devices described herein with each other, and with other devices. Communication subsystem 1312 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network, such as an HDMI over Wi-Fi connection. In some embodiments, the communication subsystem may allow computing system 1300 to send and/or receive messages to and/or from other devices via a network such as the Internet.
The following paragraphs provide additional support for the claims of the subject application. One aspect provides a computing system. The computing system comprises a client computing device including a processor configured to execute a client program to display a first video published by a first user on a video server platform, to a second user viewing the first video, display a graphical user interface, the graphical user interface including a selectable input component configured to enable selection of an edits model of the first video, the edits model including a series of edit operations applied to the first video, in response to selection of the selectable input component, apply the edit operations to a second video, and publish the second video by the second user on the video server platform for viewing by other users. In this aspect, additionally or alternatively, the edit operations include at least one of adding a text box, formatting the text box, applying a filter, adding a sticker, adding or modifying an audio track, setting coordinates of the text box or sticker, or setting a timing of the text box or sticker. In this aspect, additionally or alternatively, the client program is permitted to apply the edit operations to the second video based at least on a sharing permission of the first user. In this aspect, additionally or alternatively, the video server platform is configured to store the edits model in a video object including the first video, or include an edits model identifier in the video object referencing a stored location of the edits model. In this aspect, additionally or alternatively, after the edit operations are applied, the second video includes an indication of credit to the first user. In this aspect, additionally or alternatively, the graphical user interface is configured to, after the edit operations are applied, permit modifications of one or more of the edit operations by the second user before the second video is published. 
In this aspect, additionally or alternatively, the graphical user interface further includes a video editing screen in which a reference video of the first video is displayed over the second video. In this aspect, additionally or alternatively, the reference video is configured to play and pause in sync with the second video during video filming and/or editing of the second video. In this aspect, additionally or alternatively, the reference video is adjustable in at least one of transparency, size, and position by the second user in the video editing screen. In this aspect, additionally or alternatively, the client computing device is further configured to execute the client program to display a plurality of videos that include the edits model of the first video, or display a list of user accounts that published the plurality of videos.
Another aspect provides a method. The method comprises displaying a first video published by a first user on a video server platform, to a second user viewing the first video. The method comprises displaying a graphical user interface including a selectable input component configured to enable selection of an edits model of the first video, the edits model including a series of edit operations applied to the first video. The method comprises in response to selection of the selectable input component, applying the edit operations to a second video. The method comprises publishing the second video by the second user on the video server platform. In this aspect, additionally or alternatively, the edit operations include at least one of adding a text box, formatting the text box, applying a filter, adding a sticker, adding or modifying an audio track, setting coordinates of the text box or sticker, or setting a timing of the text box or sticker. In this aspect, additionally or alternatively, the applying the edit operations to the second video is permitted based at least on a sharing permission of the first user. In this aspect, additionally or alternatively, the method further comprises storing the edits model in a video object including the first video, or including an edits model identifier in the video object referencing a stored location of the edits model, on the video server platform. In this aspect, additionally or alternatively, the method further comprises, after the edit operations are applied, including an indication of credit to the first user with the second video. In this aspect, additionally or alternatively, the method further comprises displaying a reference video of the first video over the second video in a video editing screen of the graphical user interface. 
In this aspect, additionally or alternatively, the method further comprises playing and pausing the reference video in sync with the second video during video filming and/or editing of the second video. In this aspect, additionally or alternatively, the method further comprises adjusting the reference video in at least one of transparency, size, and position in response to input by the second user in the video editing screen.
Another aspect provides a computing system. The computing system comprises a server computing device of a video server platform. The server computing device is configured to receive a first video by a first user of a first client computing device, receive a sharing permission from the first user of the first client computing device indicating that an edits model of the first video can be shared with and used by other users of the video server platform, and publish the first video on the video server platform. The server computing device is configured to, in response to a viewing request by a second user of a second client computing device, send the first video to the second user for viewing. The server computing device is configured to send the edits model of the first video to the second user, the edits model including a series of edit operations applied to the first video, and publish a second video by the second user on the video server platform, the edit operations having been applied to the second video in response to selection by the second user of a selectable input component in a graphical user interface. In this aspect, additionally or alternatively, the edit operations include at least one of adding a text box, formatting the text box, applying a filter, adding a sticker, adding or modifying an audio track, setting coordinates of the text box or sticker, or setting a timing of the text box or sticker.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed. If used herein, the phrase “and/or” means any or all of multiple stated possibilities.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.