The present disclosure generally relates to multimedia content and more particularly, to systems and methods for quick decision editing of media content.
As smartphones and other mobile devices have become ubiquitous, people have the ability to take pictures and videos virtually any time. Furthermore, with an ever-growing amount of content available to consumers through the Internet and other sources, consumers have access to a vast amount of digital content. With existing media editing tools, users can manually edit digital images or videos to achieve a desired effect or style. However, while many media editing tools are readily available, the editing process can be complex and time-consuming for the casual user.
Briefly described, one embodiment, among others, is a method implemented in a media editing device for editing an image. The method comprises retrieving digital content and analyzing the content of the digital content. The method further comprises searching for a plurality of target attributes in the digital content, the plurality of target attributes being grouped into different categories, the different categories having corresponding editing tools. Based on identification of at least one target attribute in the digital content, at least one suggested editing tool in at least one category corresponding to the identified at least one target attribute is retrieved. The method further comprises populating at least one toolbar in a user interface with the retrieved at least one suggested editing tool.
Another embodiment is a system that comprises a memory comprising logic and a processor coupled to the memory. The processor is configured by the logic to retrieve digital content and analyze the digital content. The processor is further configured to search for a plurality of target attributes in the digital content, the plurality of target attributes being grouped into different categories, the different categories having corresponding editing tools. Based on identification of at least one target attribute in the digital content, the processor retrieves at least one suggested editing tool in at least one category corresponding to the identified at least one target attribute. The processor is further configured to populate at least one toolbar in a user interface with the retrieved at least one suggested editing tool.
Another embodiment is a non-transitory computer-readable storage medium having instructions stored thereon, wherein when executed by a processor, the instructions configure the processor to retrieve digital content depicting at least one individual and analyze attributes of the at least one individual depicted in the digital content. The instructions further configure the processor to determine whether any of the analyzed attributes match at least one of a plurality of target attributes, the plurality of target attributes being grouped into different categories, the different categories having corresponding editing tools. Based on a match with at least one target attribute, the instructions further configure the processor to retrieve at least one suggested editing tool in at least one category corresponding to the at least one matched target attribute. The instructions further configure the processor to populate at least one toolbar in a user interface with the retrieved at least one suggested editing tool.
Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
Various aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Various embodiments are disclosed for systems and methods for analyzing the content of videos or digital images and identifying target attributes for purposes of presenting suggested editing tools to the user. Based on the identified target attributes, a customized user interface for editing the digital content is presented to the user with the suggested editing tools. In this regard, systems and methods are described for reducing the complexity of the editing process, whereby specific editing tools are presented to the user based on the attributes of the digital content to be edited and based on the user's past usage of specific editing tools with respect to the same or similar attributes exhibited in the digital content currently being edited. As described in more detail below, the media editing device 102 may be further configured to track which of the suggested editing tools the user actually utilizes in order to further customize the user interface in the future.
A system for implementing an adaptive user interface that facilitates image editing is now described, followed by a discussion of the operation of the components within the system.
An effects applicator 104 executes on a processor of the media editing device 102 and configures the processor to perform various operations, as described in more detail below. The effects applicator 104 comprises various components including a tools retriever 106, a digital content analyzer 108, a peripheral interface 113, and a user interface generator 112. The peripheral interface 113 may be configured to receive digital content from a digital recording device (e.g., digital camera) capable of capturing digital content, where the media editing device 102 is coupled to the digital recording device via a cable or other interface. The peripheral interface 113 may support any one of a number of common computer interfaces, such as, but not limited to, IEEE-1394 High Performance Serial Bus (FireWire), USB, a serial connection, a Bluetooth® connection, and so on. Alternatively, the media editing device 102 may directly capture digital content via a camera module 116.
The digital content analyzer 108 configures the processor in the media editing device 102 to analyze the content of the digital content to be edited. For some embodiments, the digital content analyzer 108 comprises a target attributes identifier 110 that determines attributes of the content depicted in the digital content. In some embodiments, the digital content analyzer 108 may retrieve one or more target attributes from a data store 111 in the media editing device 102 for purposes of determining whether the current digital content possesses one or more of the target attributes. For some embodiments, a remote data store 121 storing the same data described in connection with the data store 111 may be implemented and maintained by a server 117, where the media editing device 102 is coupled to the server 117 via a network 118, such as the Internet or a local area network (LAN).
In accordance with various embodiments, the pre-defined target attributes may be grouped into different categories. For example, certain target attributes may be grouped in one category corresponding to facial features while other target attributes may be grouped in another category corresponding to lighting and contrast levels. The target attributes in the category corresponding to facial features may comprise, for example, facial contour, eye color, nose shape, and so on.
The presence of pre-defined target attributes may be determined by analyzing the content of the digital content and/or by extracting information from metadata associated with the digital content. In accordance with various embodiments, each category of target attributes has corresponding editing tools for purposes of specifically modifying the target attributes in that category. For example, the category corresponding to facial features may include tools to modify attributes relating to the skin tone, the face shape, the hair color, and so on.
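By way of a non-limiting illustration, the grouping of target attributes into categories with corresponding editing tools might be represented as a simple mapping such as the following sketch (in Python); the category names, attribute names, and tool names are hypothetical placeholders rather than the actual contents of the data store 111.

```python
# Hypothetical sketch of how target-attribute categories might map to editing
# tools. All names are illustrative placeholders, not data from the disclosure.
TARGET_ATTRIBUTE_CATEGORIES = {
    "facial_features": {
        "attributes": ["facial_contour", "eye_color", "nose_shape",
                       "skin_color", "wrinkles", "facial_blemishes"],
        "editing_tools": ["skin_tone_tool", "face_shape_tool", "hair_color_tool"],
    },
    "lighting_and_contrast": {
        "attributes": ["low_lighting_level", "low_contrast_level"],
        "editing_tools": ["brightness_tool", "contrast_tool", "auto_lighting_tool"],
    },
    "camera_motion": {
        "attributes": ["shaking", "zooming_panning"],
        "editing_tools": ["video_stabilization_tool"],
    },
}

def tools_for_attribute(attribute_name):
    """Return the editing tools of every category containing the given attribute."""
    return [tool
            for category in TARGET_ATTRIBUTE_CATEGORIES.values()
            if attribute_name in category["attributes"]
            for tool in category["editing_tools"]]
```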
The tools retriever 106 interfaces with the digital content analyzer 108 and retrieves editing tools from the data store 111 based on the analysis performed by the digital content analyzer 108. In particular, the tools retriever 106 analyzes historical usage data in the data store 111 of one or more users and retrieves editing tools based on the analysis, as described in more detail below. The user interface generator 112 constructs a user interface comprising the suggested editing tools obtained by the tools retriever 106. The user interface generator 112 also includes a user behavior monitor 115 configured to log and store information relating to the use of editing tools by the user. In this regard, the user behavior monitor 115 analyzes usage behavior by the user in connection with specific tools for editing the image.
The processing device 202 may include any custom made or commercially available processor, a central processing unit (CPU) or an auxiliary processor among several processors associated with the media editing device 102, a semiconductor-based microprocessor (in the form of a microchip), a macroprocessor, one or more application-specific integrated circuits (ASICs), a plurality of suitably configured digital logic gates, and other well-known electrical configurations comprising discrete elements both individually and in various combinations to coordinate the overall operation of the computing system.
The memory 214 can include any one of a combination of volatile memory elements (e.g., random-access memory (RAM), such as DRAM, SRAM, etc.) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CD-ROM, etc.). The memory 214 typically comprises a native operating system 216, one or more native applications, emulation systems, or emulated applications for any of a variety of operating systems and/or emulated hardware platforms, emulated operating systems, etc. For example, the applications may include application-specific software which may comprise some or all of the components of the media editing device 102 depicted in the accompanying drawings.
Input/output interfaces 208 provide any number of interfaces for the input and output of data. For example, where the media editing device 102 comprises a personal computer, these components may interface with one or more user input/output interfaces 208, which may comprise a keyboard or a mouse, as shown in the accompanying drawings.
In the context of this disclosure, a non-transitory computer-readable medium stores programs for use by or in connection with an instruction execution system, apparatus, or device. More specific examples of a computer-readable medium may include, by way of example and without limitation: a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), and a portable compact disc read-only memory (CD-ROM) (optical).
One grouping of target attributes may comprise, for example, certain facial attributes including, but not limited to, facial contour, eye color, nose shape, skin color, wrinkles, facial blemishes, facial feature size, and so on. Another grouping of target attributes may comprise the white balance and HSL (hue, saturation, lightness) levels. Yet another grouping of target attributes may comprise a threshold lighting level, a threshold contrast level, the presence of artifacts, the detection of zooming/panning motion within the digital content, the detection of shaking by the camera, a threshold level of detected motion, the presence of speech within the digital content, and so on. For target attributes associated with threshold levels (e.g., a threshold lighting level), a determination is made in those instances on whether the digital content exhibits an attribute (e.g., lighting level) that falls below (or above) the particular threshold. Yet another grouping of target attributes may comprise the identity of the individual(s) depicted in the image, the location where the image was captured, the time and day when the image was captured, and so on, where such information may be stored in metadata associated with the image.
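As a hedged sketch of how the threshold-based target attributes described above might be checked, the following assumes normalized luminance measurements and illustrative threshold values that are not taken from the disclosure.

```python
# Minimal sketch of threshold-based target-attribute checks. The threshold
# values and the way lighting/contrast are measured are assumptions made for
# illustration only.
LIGHTING_THRESHOLD = 0.35   # mean luminance (0..1) below which "poor lighting" is flagged
CONTRAST_THRESHOLD = 0.15   # luminance spread (0..1) below which "low contrast" is flagged

def detect_threshold_attributes(mean_luminance, luminance_spread):
    """Return the threshold-based target attributes exhibited by the content."""
    detected = []
    if mean_luminance < LIGHTING_THRESHOLD:
        detected.append("low_lighting_level")
    if luminance_spread < CONTRAST_THRESHOLD:
        detected.append("low_contrast_level")
    return detected
```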
In some embodiments, the digital content analyzer 108 identifies the presence of target attributes as a function of time, as certain target attributes may be exhibited throughout the duration of a video whereas other target attributes may be transient in nature. For example, the lighting level within a video may vary over time. In this regard, the user interface comprising suggested editing tools may vary as a function of time when the user is editing a video. Reference is made to the accompanying drawings for an illustration of this time-varying behavior.
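One possible (assumed) way to realize this time-varying identification is to analyze the video segment by segment; the segment layout and the analyze_segment helper below are hypothetical rather than part of the disclosure.

```python
# Sketch of per-segment attribute identification for a video, assuming the
# video has already been split into segments (dicts with a "start_time" key)
# and that a hypothetical analyze_segment(segment) helper returns the target
# attributes that segment exhibits.
def attributes_over_time(segments, analyze_segment):
    """Map each segment's start time to the target attributes it exhibits,
    so the suggested-tool toolbar can change as playback time advances."""
    timeline = {}
    for segment in segments:
        timeline[segment["start_time"]] = analyze_segment(segment)
    return timeline
```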
Referring back to the components of the media editing device 102, the digital content analyzer 108 forwards the identified target attributes to the tools retriever 106, which is configured to access editing tools from the data store 111. The editing tools may comprise a wide range of photo editing tools for modifying and enhancing digital content. As discussed earlier, one perceived shortcoming of conventional photo editing systems is that, given the wide range of photo or video editing tools, photo or video editing can be a complex and time-consuming process, particularly for a user who is relatively new to editing photos.
The tools retriever 106 analyzes the identified target attributes received from the digital content analyzer 108 and retrieves specific editing tools that will be presented to the user in order to facilitate the editing process. For example, if the digital content analyzer 108 determines the presence of a pre-defined target attribute in the digital content corresponding to an undesirable color balance level or an undesirable white balance level, the tools retriever 106 may retrieve a white balance adjustment tool for adjusting the color balance or an HSL color selection tool. Furthermore, if the identity of the user of the media editing device 102 is known, the tools retriever 106 may also retrieve user history data corresponding to the user.
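A minimal sketch of the kind of lookup the tools retriever 106 might perform is shown below; the load_tools_for_attribute and load_user_history helpers are hypothetical stand-ins for queries against the data store 111, and the attribute and tool names are placeholders.

```python
# Illustrative sketch only: gather candidate editing tools for the identified
# target attributes and, when the user is known, that user's editing history.
def retrieve_candidate_tools(identified_attributes, user_id,
                             load_tools_for_attribute, load_user_history):
    candidates = []
    for attribute in identified_attributes:
        # e.g., "undesirable_white_balance" -> ["white_balance_tool", "hsl_color_tool"]
        candidates.extend(load_tools_for_attribute(attribute))
    history = load_user_history(user_id) if user_id is not None else {}
    # Deduplicate while preserving the order in which tools were found.
    seen, unique_candidates = set(), []
    for tool in candidates:
        if tool not in seen:
            seen.add(tool)
            unique_candidates.append(tool)
    return unique_candidates, history
```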
The user history data stored and updated by the user behavior monitor 115 facilitates the selection of suggested editing tools by informing the tools retriever 106 which specific photo or video editing tools that particular user has utilized in the past for particular digital content. This increases the likelihood that the suggested editing tools will actually be utilized by the user. For example, the user history data may reflect that the user utilized four specific tools in the past for digital content depicting a particular individual. The tools retriever 106 takes this information into consideration when determining which editing tools to retrieve and present to the user. For some embodiments, the tools retriever 106 may be configured to examine the history data to identify editing tools that were previously retrieved (and suggested to the user) but that were ultimately not utilized by the user. In those instances, the tools retriever 106 may be configured to avoid retrieving those editing tools, based on their past non-use, thereby reducing the number of suggested editing tools.
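The following sketch illustrates one assumed way of filtering out previously suggested but unused tools; the shape of the history records is an assumption made for illustration.

```python
# Sketch (assumed record shape): drop tools that were suggested before but
# never actually used by this user, to keep the suggested-tool list short.
def filter_by_past_use(candidate_tools, user_history):
    """user_history maps tool name -> {"suggested": int, "used": int} (assumed)."""
    filtered = []
    for tool in candidate_tools:
        record = user_history.get(tool)
        previously_ignored = (record is not None
                              and record["suggested"] > 0
                              and record["used"] == 0)
        if not previously_ignored:
            filtered.append(tool)
    return filtered
```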
The user interface generator 112 obtains the suggested editing tools from the tools retriever 106 and constructs a user interface presenting the suggested editing tools to the user with the intent of facilitating the photo editing process. In some embodiments, the user behavior monitor 115 in the user interface generator 112 logs the editing activities of the user and stores this information as part of the user history data back into the data store 111. The user behavior monitor 115 may log, for example, which editing tools the user actually utilizes during the editing process. This may further comprise logging the user's activities reflecting whether the user utilized some or all of the suggested editing tools presented to the user to determine the accuracy with which the tools retriever 106 is retrieving tools. For example, if the user behavior monitor 115 determines that the user elected to utilize each of the suggested tools presented to the user, the user behavior monitor 115 stores this information for future use.
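A sketch of the kind of per-tool logging the user behavior monitor 115 might maintain is shown below; the record layout is an illustrative assumption consistent with the earlier sketches.

```python
# Sketch of per-tool usage logging; the record layout is assumed for
# illustration and is not the disclosed implementation.
import time

class UserBehaviorLog:
    def __init__(self):
        # tool name -> {"suggested": int, "used": int, "last_used": float or None}
        self.records = {}

    def _entry(self, tool):
        return self.records.setdefault(
            tool, {"suggested": 0, "used": 0, "last_used": None})

    def log_suggested(self, tool):
        """Record that a tool was presented to the user as a suggestion."""
        self._entry(tool)["suggested"] += 1

    def log_used(self, tool):
        """Record that the user actually applied the tool during editing."""
        entry = self._entry(tool)
        entry["used"] += 1
        entry["last_used"] = time.time()
```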
Reference is made to a flowchart in accordance with one embodiment for facilitating image editing performed by the media editing device 102.
Although the flowchart shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be rearranged relative to the order shown.
Beginning with block 410, the effects applicator 104 retrieves digital content to be edited and analyzes the content of the digital content. The effects applicator 104 then searches for a plurality of target attributes in the digital content, where the plurality of target attributes are grouped into different categories and the different categories have corresponding editing tools.
In block 440, based on identification of at least one target attribute in the content, the effects applicator 104 retrieves at least one suggested editing tool in at least one category corresponding to the identified at least one target attribute. In block 450, the effects applicator 104 populates at least one toolbar in the user interface with the retrieved at least one suggested editing tool.
For some embodiments, the media editing device 102 may be configured to compare each of the pre-defined target attributes in all of the groupings in the data store 111 with attributes of the digital content. For other embodiments, the media editing device 102 may be configured to compare each of the target attributes in a subset of the groupings in the data store 111 with attributes of the digital content, where the subset is determined based on past user behavior. For example, upon determining that a particular user is utilizing the media editing device 102, the media editing device 102 retrieves information from the data store 111 relating to that particular user's past editing activities and retrieves a subset of target attributes tailored specifically for that user.
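The per-user narrowing of groupings might, for example, be realized as in the following sketch, where usage_count_by_category is an assumed summary derived from the stored user history and the maximum number of groupings is an illustrative parameter.

```python
# Sketch: restrict the attribute comparison to a per-user subset of groupings,
# chosen from the categories the user has edited most often in the past.
def select_attribute_groupings(all_groupings, usage_count_by_category, max_groupings=3):
    """all_groupings maps category name -> list of target attributes."""
    ranked = sorted(all_groupings,
                    key=lambda category: usage_count_by_category.get(category, 0),
                    reverse=True)
    return {category: all_groupings[category] for category in ranked[:max_groupings]}
```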
For some embodiments, the digital content analyzer 108 performs this comparison between the target attributes and the attributes exhibited by the digital content.
It is understood that the flowchart described below provides merely an example of the different types of functional arrangements that may be employed to implement the operation of the various components of the media editing device 102 described herein.
Although the flowchart shows a specific order of execution, the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be switched relative to the order shown.
In block 710, in response to digital content being obtained by the media editing device 102, the effects applicator 104 accesses the data store 111 and retrieves the pre-defined target attributes, which are grouped into different categories having corresponding editing tools.
In block 720, the target attributes identifier 110 determines whether the digital content exhibits one or more of the target attributes and identifies the categories corresponding to any matching target attributes.
In block 730, historical data corresponding to editing tools from various categories is analyzed, and editing tools are retrieved based on the analysis. In some instances, the analysis of historical usage data assists the tools retriever 106 in retrieving a specific subset of editing tools within each of the identified categories, thereby narrowing the number of suggested editing tools. The historical usage data may contain such information as the number of instances each editing tool within a particular category has been used. The historical usage data may also contain information (e.g., date, time) reflecting when each editing tool was last used. The tools retriever 106 then retrieves suggested editing tools based on one or more pieces of data relating to historical usage of editing tools.
For some embodiments, the tools retriever 106 may determine which editing tools to retrieve based on a weighted combination of different pieces of data reflected in the historical usage data. For example, the tools retriever 106 may give some weight to the fact that a particular editing tool within a particular category was most recently used. However, the tools retriever 106 may be configured to give more weight to the fact that another editing tool within the same category was used the most. Thus, the tools retriever 106 may be configured to retrieve editing tools that meet or exceed a threshold weight value. The user interface generator 112 then populates a user interface with the suggested editing tools retrieved by the tools retriever 106.
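The weighting scheme described above might be sketched as follows; the specific weights, the recency decay, and the threshold value are illustrative assumptions rather than values from the disclosure, and the history record shape follows the earlier sketches.

```python
# Sketch of a weighted scoring scheme over historical usage data: usage count
# is weighted more heavily than recency, and only tools whose combined score
# meets a threshold are suggested. All numeric values are illustrative.
import time

USAGE_WEIGHT = 0.7      # weight for how often a tool has been used
RECENCY_WEIGHT = 0.3    # smaller weight for how recently it was used
SCORE_THRESHOLD = 0.25  # minimum score for a tool to be suggested

def score_tool(record, max_usage_count, now=None):
    """record: {"used": int, "last_used": float or None} (assumed shape)."""
    now = time.time() if now is None else now
    usage_score = record["used"] / max_usage_count if max_usage_count else 0.0
    if record["last_used"] is None:
        recency_score = 0.0
    else:
        days_since_use = (now - record["last_used"]) / 86400.0
        recency_score = 1.0 / (1.0 + days_since_use)  # decays toward 0 over time
    return USAGE_WEIGHT * usage_score + RECENCY_WEIGHT * recency_score

def suggest_tools(candidate_tools, history):
    max_usage = max((history.get(t, {"used": 0})["used"] for t in candidate_tools),
                    default=0)
    return [t for t in candidate_tools
            if score_tool(history.get(t, {"used": 0, "last_used": None}), max_usage)
            >= SCORE_THRESHOLD]
```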
Reference is now made to example user interfaces generated by the user interface generator 112 for presenting the suggested editing tools to the user.
It should be emphasized that the tools retriever 106 is not limited to retrieving editing tools on a category-by-category basis and may mix and match editing tools that are then suggested to the user. For example, the tools retriever 106 may pair various facial attribute enhancer tools (e.g., a hair toner tool) with one or more decorative tools (e.g., a text bubble tool), as shown in the example user interfaces.
In some embodiments, the editing tools within a grouping may be sorted and presented to the user based on which editing tools are more commonly used. In this regard, certain editing tools may be prioritized over other editing tools. In the example user interfaces, the more commonly used editing tools may be displayed more prominently within the toolbar.
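Such prioritization might amount to a simple sort over the assumed usage counts, as in the following sketch, which reuses the history record shape assumed earlier.

```python
# Sketch: order the suggested tools within a grouping so that more commonly
# used tools appear first in the toolbar.
def sort_by_popularity(suggested_tools, history):
    return sorted(suggested_tools,
                  key=lambda tool: history.get(tool, {}).get("used", 0),
                  reverse=True)
```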
Note that in the example user interfaces, the suggested editing tools correspond to the target attributes identified in the digital content being edited.
To illustrate, suppose that the digital content analyzer 108 identifies a target attribute corresponding to poor lighting in the digital content. Based on the presence of this particular target attribute (poor lighting), one or more corresponding editing tools are retrieved. For this particular example, one or more editing tools comprising lighting adjustment tools may be retrieved for allowing the user to incorporate better lighting into the digital content. As another example, suppose that the digital content analyzer 108 identifies a target attribute corresponding to a shaking effect within the digital content. For this particular example, one or more editing tools comprising video stabilization tools may be retrieved for allowing the user to address the shaking effect exhibited by the digital content. As yet another example, suppose that the digital content analyzer 108 identifies a target attribute corresponding to the presence of an individual's face within the digital content. For this particular example, one or more editing tools comprising motion tracking tools, object tracking tools, etc. may be retrieved for allowing the user to edit the individual's face within the digital content.
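These example correspondences might be captured in a lookup table along the lines of the following sketch; the attribute and tool names are placeholders for whatever tools the data store 111 associates with each target attribute.

```python
# Sketch of the attribute-to-tool correspondence described in the examples
# above; names are illustrative placeholders only.
EXAMPLE_ATTRIBUTE_TO_TOOLS = {
    "poor_lighting":  ["lighting_adjustment_tool"],
    "shaking_effect": ["video_stabilization_tool"],
    "face_present":   ["motion_tracking_tool", "object_tracking_tool"],
}

def example_suggestions(identified_attributes):
    suggestions = []
    for attribute in identified_attributes:
        suggestions.extend(EXAMPLE_ATTRIBUTE_TO_TOOLS.get(attribute, []))
    return suggestions
```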
The cloud computing device 1002 may comprise a server computer or any other system providing computing capability. Alternatively, a plurality of computing devices may be employed that are arranged, for example, in one or more server banks or computer banks or other arrangements. For example, cloud computing device 1002 may comprise a cloud computing resource, a grid computing resource, and/or any other distributed computing arrangement. Such computing devices may be located in a single installation or may be distributed among many different geographical locations.
Similar to the arrangement described above for the media editing device 102, the cloud computing device 1002 may comprise the effects applicator 104 and its various components for performing the editing operations described herein.
In the networked environment shown, the client device 1019 is coupled to the cloud computing device 1002 via a network 1018, such as the Internet or a local area network (LAN). The client device 1019 may be embodied as a computing device equipped with digital content recording capabilities such as, but not limited to, a digital camera, a smartphone, a tablet computing device, a digital video recorder, a laptop computer coupled to a webcam, and so on. A user interface for displaying suggested editing tools is displayed locally at the client device 1019.
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application claims priority to, and the benefit of, U.S. Provisional Patent Application entitled, “Systems and Methods for Quick Decision Editing of Media Content,” having Ser. No. 62/137,919, filed on Mar. 25, 2015, which is incorporated by reference in its entirety.