The present invention relates to the architecture, methods, and software for automatically creating storyshare products. Specifically, the present invention relates to simplifying the creation process for multimedia slideshows, collages, movies, photobooks, and other image products.
Digital assets typically include still images, videos, and music files, which are created or downloaded to personal computer (PC) storage for personal enjoyment. These digital assets are then accessed when desired for viewing, listening, or playing.
Many multimedia applications for the consumer focus on a single output type, such as video, video on CD/DVD, or print. The process for creating the output in these applications is predominantly manual and often time-consuming. It is left up to the user to choose which assets to use, what output to create, how to arrange the assets, what edits to apply to the assets, and what effects to apply to an asset. In addition, choices made for one output type are not maintained for application to an alternative output choice. Example applications include video editing programs and programs for creating DVDs, calendars, greeting cards, etc.
There are some programs available that have introduced a level of automation. In general, they still require the user to select the assets and, in some cases, to provide additional input such as text and then make a selection from a limited set of choices that dictates how effects and transitions will be applied to those assets. The application of those effects is fixed, random, or generic, and is typically not based on attributes of the image itself.
The present invention provides a solution to the shortcomings of the prior art described above by making available a computer application that intelligently derives information about the content of digital assets. This derived information guides the application of transitions, effects, and templates, including third party content provided on the computer or available over a network, toward the automatic creation of a desired output from a set of digital assets as input.
One preferred embodiment of the present invention pertains to a computer-implemented method for automatically selecting multimedia assets stored on a computer system. The method utilizes input metadata associated with the assets and generates derived metadata therefrom. The assets are then ranked based on their input metadata and derived metadata, and a subset of the assets is automatically selected based on the ranking. Another preferred embodiment includes storing user profile information, such as user preferences, and the step of ranking includes the user profile information. Another preferred embodiment of the invention includes using a theme lookup table that includes a plurality of themes having various thematic attributes, and comparing the input and derived metadata with those attributes to identify themes having substantial similarity with the input and derived metadata. The attributes can be related to events or subjects of interest such as birthdays, anniversaries, vacations, holidays, family, or sports. Typically, the assets are digital assets comprising pictures, still images, text, graphics, music, video, audio, multimedia presentations, or descriptor files.
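By way of illustration, the ranking and selection step could be sketched as follows in Python. The metadata field names (face_count, sharpness, favorite_subjects) and the scoring weights are hypothetical assumptions made for this example; the invention does not prescribe a particular schema or scoring formula.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    path: str
    input_metadata: dict = field(default_factory=dict)    # e.g. capture date, GPS
    derived_metadata: dict = field(default_factory=dict)  # e.g. face count, sharpness

def rank_assets(assets, user_preferences, top_n=10):
    """Score each asset from its combined metadata and return the top subset."""
    def score(asset):
        meta = {**asset.input_metadata, **asset.derived_metadata}
        s = 0.0
        s += 2.0 * meta.get("face_count", 0)       # people are usually the subject
        s += 5.0 * meta.get("sharpness", 0.0)      # favor well-focused images
        if meta.get("subject") in user_preferences.get("favorite_subjects", []):
            s += 10.0                              # boost user-profile matches
        return s
    return sorted(assets, key=score, reverse=True)[:top_n]
```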
Another preferred embodiment of the invention includes the use of programmable effects, such as zooming or panning, applied to the assets and governed by a rules database that constrains application of the effects to those assets that are best showcased by them. Themes and effects can be designed by the user or by third parties. Third party themes and effects include dynamic auto-scaling image templates, automatic image layout algorithms, video scene transitions, scrolling titles, graphics, text, poetry, audio, music, songs, and digital motion and still images of celebrities, popular figures, or cartoon characters. The assets are assembled into a storyshare descriptor file based on the selected themes, the assets, and the rules database. The file can be saved on a portable storage device or transmitted to other computer systems. Each descriptor file can be rendered on different output media and formats.
Another preferred embodiment of the invention is a computer system having access to stored multimedia assets and a component for reading metadata associated with the assets and for generating derived metadata. The computer system also has access to a theme descriptor file that includes effects applicable to the assets and thematic templates for presenting the assets in a preferred output format. The theme descriptor file comprises data selected from location information, background information, special effects, transitions, or music. A rules database accessible by the computer system comprises conditions for limiting application of effects to those assets that meet the conditions of the rules database. A tool accessible by the computer system is capable of assembling the assets into a storyshare descriptor file based on a selected output format and on the conditions of the rules database. The multimedia assets include digital assets selected from pictures, still images, text, graphics, music, video, audio, multimedia presentation, and descriptor files.
This invention provides methods, systems, and software for composing stories, which use a rules database for constraining the random use of assets and effects within a story.
Another aspect of this invention provides methods, systems, and software for composing stories in which a metadata database is constructed comprising input metadata, derived metadata, and metadata relationships. The metadata database is used to suggest themes for a story.
Another aspect of this invention provides methods, systems, and software for identifying appropriate assets and effects, based on the metadata database, to be used within a story. The assets and effects may be owned by the user or by a third party. They may be available on a user's computer system during story creation or they may be accessed remotely over a network.
In another aspect of the invention there is provided a system, method, and software for producing various output products from a storyshare descriptor file, output descriptor files, and presentation rules.
Other embodiments contemplated by the present invention include computer-readable media and program storage devices tangibly embodying or carrying a program of instructions readable by a machine or a processor, for having the machine or computer processor execute instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. Such computer-readable media can comprise physical computer-readable media such as RAM, ROM, EEPROM, CD-ROM, DVD, or other optical disk storage, or magnetic disk storage or other magnetic storage devices, for example. Any other media that can be used to carry or store software programs accessible by a general purpose or special purpose computer are considered within the scope of the present invention.
These, and other, aspects and objects of the present invention will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following description, while indicating preferred embodiments of the present invention and numerous specific details thereof, is given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the present invention without departing from the spirit thereof, and the invention includes all such modifications. The figures below are not intended to be drawn to any precise scale with respect to size, angular relationship, or relative position.
An asset is a digital file that consists of a picture, a still image, text, graphics, music, a movie, video, audio, a multimedia presentation, or a descriptor file. Several standard formats exist for each type of asset. The storyshare system described herein is directed to creating intelligent, compelling stories easily in a sharable format and delivering a consistently optimum playback experience across numerous imaging systems. Storyshare allows users to create, play, and share stories easily. Stories can include pictures, videos, and/or audio. Users can share their stories using imaging services, which handle the formatting and delivery of content for recipients. Recipients can then easily request output from the shared stories in the form of prints, DVDs, or custom output such as a collage, a poster, a picture book, etc.
As shown in
It will be understood that these digital multimedia objects can be digital still images, such as those produced by digital cameras, audio data, such as digitized music or voice files in any of various formats such as “WAV” or “MP3” audio file formats, or they can be digital video segments with or without sound, such as MPEG-1 or MPEG-4 video. Digital multimedia objects also include files produced by graphics software. A database of digital multimedia objects can comprise only one type of object or any combination.
With minimal user input, the storyshare system can intelligently create stories automatically. The storyshare architecture and workflow of a system made in accordance with the present invention is concisely illustrated by
In addition to the above, there are theme style sheets, which are the background and foreground assets for the themes. A foreground asset is an image that can be superimposed on another image. A background image is an image that provides a background pattern, such as a border or a location, to a subject of a digital photograph. Multiple layers of foreground and background assets can be added to an image for creating a unique product.
The initial story descriptor file 112 can be a default XML file, which can be used by any system, optionally, to provide default information. Once this file is fully populated by the composer 114, it becomes a composed story descriptor file 115. In its default version it includes basic information for composing a story; for example, a simple slideshow format can be defined that displays one line of text, reserves blank areas for some number of images, defines a display duration for each, and selects background music.
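As an illustration, a default story descriptor of this kind might be built with Python's standard XML library as sketched below. The element and attribute names (story, slides, display_duration_s, music) are illustrative assumptions; the invention does not define a concrete schema here.

```python
import xml.etree.ElementTree as ET

# Build a minimal default story descriptor: one line of title text,
# reserved slots for images with a per-slide duration, and background music.
root = ET.Element("story")
ET.SubElement(root, "title").text = "Untitled Story"
ET.SubElement(root, "slides", count="10", display_duration_s="4")
ET.SubElement(root, "music", src="background.mp3")

print(ET.tostring(root, encoding="unicode"))
```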
The composed story descriptor file provides the information necessary to describe a compelling story. A composed story descriptor file will contain, as described below, the asset information, theme information, effects, transitions, metadata, and all other information required to construct a complete and compelling story. In some ways it is similar to a storyboard; it can be a default descriptor, as described above, minimally populated with selected assets, or it may include a large number of user or third party assets including multiple effects and transitions.
Once this composed descriptor file 115 is created (which represents a story), this file, along with the assets related to the story, can be stored on a portable storage device or transmitted to, and used in, any imaging system that has the rendering component 116 to create a storyshare output product. This allows systems to compose a story, persist the information via the composed story descriptor file, and then create the rendered storyshare output file (slideshow, movie, etc.) at a later time, on a different computer, or for a different output.
The theme descriptor file 111 is another XML file, for example, which provides necessary theme information, such as artistic representation. This will include:
The theme descriptor file is, for example, in an XML file format and points to an image template file, such as a JPG file that provides one or more spaces designated to display an asset 110 selected from an asset collection. Such a template may show a text message saying “Happy Birthday,” for example, in a birthday template.
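A hypothetical birthday theme descriptor along these lines, parsed with Python's standard library, is sketched below; the tag and attribute names are assumptions made for illustration only.

```python
import xml.etree.ElementTree as ET

# A theme descriptor that points to a template image and declares where
# assets and default text belong.
THEME_XML = """
<theme name="birthday">
  <template src="birthday_template.jpg">
    <slot id="photo1" x="120" y="80" width="640" height="480"/>
    <text id="greeting" default="Happy Birthday"/>
  </template>
</theme>
"""

theme = ET.fromstring(THEME_XML)
for slot in theme.iter("slot"):
    print("asset slot:", slot.get("id"))   # -> asset slot: photo1
```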
The composer 114 uses theme descriptor files 111 containing the above information to develop a story. It is a module that takes input from the three earlier components and can optionally apply automatic image selection algorithms to compose the story descriptor file 115. The user can select the theme, or the theme can be algorithmically selected based on the content of the assets provided. The composer 114 utilizes the theme descriptor file 111 when building the composed storyshare descriptor file 115.
The story composer 114 is a software component, which intelligently creates a composed story descriptor file, given the following input:
With this input information, the composer component 114 will lay out the necessary information to compose the complete story in the composed story descriptor file, which contains all the information needed by the renderer. Any edits made by the user through the composer will be reflected in the story descriptor file 115.
Given the input, the composer will do the following:
The output descriptor file 113 is an XML file, for example, which contains information on what output will be produced and the information required to create the output. This file will contain the constraints based on:
The output descriptor file 113 is used by the renderer 116 to determine the available output formats.
The story renderer 116 is a configurable component comprised of optional plug-ins that correspond to the different output formats supported by the rendering system. It formats the storyshare descriptor file 115 depending on the selected output format for the storyshare product. The format may be modified if the output is intended to be viewed on a small cell phone, on a large screen device, or in print formats such as photobooks, for example. The renderer then determines the resolutions, etc., required for the assets based on output format constraints. In operation, this component will read the composed storyshare descriptor file 115 created by the composer 114 and act on it by processing the story and creating the required output 18, such as a DVD or other output format (slideshow, movie, custom output, etc.). The renderer 116 interprets the story descriptor file 115 elements and, depending on the output type selected, creates the story in the format required by the output system. For example, the renderer could read the composed storyshare descriptor file 115 and create an MPEG-2 slideshow based on all the information described in the composed story descriptor file 115. The renderer 116 will perform the following functions:
This component takes the created story and authors it by creating menus, titles, credits, and chapters appropriately, depending on the required output.
The authoring component 117 creates a consistent playback menu experience across various imaging systems. Optionally, this component will contain the recording capability. It is also comprised of optional plug-in modules for creating particular outputs, such as a slideshow plug-in using software implementing MPEG-2, a photobook plug-in for creating a photobook, or a calendar plug-in for creating a calendar, for example. Particular outputs in XML format may be capable of being fed directly to devices that interpret XML and so would not require special plug-ins.
After a particular story is described in the composed story descriptor file 115, this file can be reused to create various output formats of that particular story. This allows the story to be composed by, or on, one computer system and to persist via the descriptor file. The composed story descriptor file can be stored on any system or portable storage device and then reused to create various outputs on different imaging systems.
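One plausible way to organize such a configurable, plug-in-based renderer is a registry keyed by output format, as in the Python sketch below; the plug-in names and the story structure are hypothetical assumptions, not the architecture the invention mandates.

```python
from typing import Callable, Dict

RENDER_PLUGINS: Dict[str, Callable[[dict], None]] = {}

def register_plugin(output_format: str):
    """Register a rendering plug-in under a named output format."""
    def wrap(fn):
        RENDER_PLUGINS[output_format] = fn
        return fn
    return wrap

@register_plugin("mpeg2_slideshow")
def render_slideshow(story: dict) -> None:
    print(f"encoding {len(story['assets'])} assets as an MPEG-2 slideshow")

@register_plugin("photobook")
def render_photobook(story: dict) -> None:
    print(f"laying out {len(story['assets'])} assets as photobook pages")

def render(story: dict, output_format: str) -> None:
    plugin = RENDER_PLUGINS.get(output_format)
    if plugin is None:
        raise ValueError(f"no plug-in for output format {output_format!r}")
    plugin(story)

render({"assets": ["a.jpg", "b.jpg"]}, "photobook")
```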
In other embodiments of the present invention the story descriptor file 115 does not contain presentation information but rather it references an identifier for a particular presentation that has been stored in the form of a template. In these embodiments, a template library, such as described in reference to theme descriptor file 111, would be embedded in the composer 114 and also in the renderer 116. The story descriptor file would then point to the template files but not include them as a part of the descriptor file itself. In this way the complete story would not be exposed to a third party who may be an unintended recipient of the story descriptor file.
As described in a preferred embodiment, the three main modules within the storyshare architecture, i.e. the composer module 114, the preview module (not shown in
At step 640 the new derived metadata is stored together with the existing metadata, in association with a corresponding asset, to augment the existing metadata. The new metadata set is used to organize and rank-order the user's assets at step 650. The ranking is based on the outputs of the analysis and classification algorithms, using relevance or, optionally, an image value index, which provides a quantitative result as described above.
At decision step 660 a subset of the user's assets can be automatically selected based on the combined metadata and user preferences. This selection represents an edited set of assets produced using rank ordering and quality-determining techniques such as an image value index. At step 670 the user may optionally choose to override the automatic asset selection and manually select and edit the assets.

At decision 680 an analysis of the combined metadata set and selected assets is performed to determine if an appropriate theme can be suggested. A theme in this context is an asset descriptor such as sports, vacation, family, holidays, birthdays, anniversaries, etc., and can be automatically suggested by metadata such as a time/date stamp that coincides with a relative's birthday obtained from the user profile. This is beneficial because of the almost unlimited thematic treatments available today for consumer-generated assets; it is a daunting task for a user to search through this myriad of options to find a theme that conveys the appropriate emotional sentiment and that is compatible with the format and content characteristics of the user's assets. By analyzing the relationships and image content, a more specific theme can be suggested. For example, the face recognition algorithm may identify "Molly" while the user's profile indicates that "Molly" is the user's daughter. The user profile can also contain information that last year at this time the user produced a commemorative DVD of "Molly's 4th Birthday Party". Dynamic themes can be provided to automatically customize a generic theme such as "Birthday" with additional details. If the theme uses image templates that can be modified with automatic "fill in the blank" text and graphics, this enables changing "Happy Birthday" to "Happy 5th Birthday Molly" without requiring user intervention.

Box 690 is included in step 680 and contains a list of available themes, which can be provided locally via a removable memory device such as a memory card or DVD, or via a network connection to a service provider. Third party participants and copyrighted content owners can also provide themes on a pay-per-use type arrangement. The combined input and derived metadata, the analysis and classification algorithm output, and the organized asset collection are used to limit the user's choices to themes that are appropriate for the content of the assets and compatible with the asset types. At step 200 the user has the option to accept or reject the suggested theme. If no theme is suggested at step 680, or the user decides to reject the suggested theme at step 200, she is given the option to manually select a theme from a limited list of themes or from the entire library of available themes at step 210.
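The "Molly" example above can be sketched as a simple matching rule in Python; the profile structure and the matching logic are illustrative assumptions, not the algorithm the invention specifies.

```python
from datetime import date

def suggest_theme(capture_date, faces, profile):
    """Suggest a dynamic birthday theme when a recognized face matches a
    profile birthday on the asset's capture date."""
    for person, birthday in profile.get("birthdays", {}).items():
        same_day = (capture_date.month, capture_date.day) == (birthday.month, birthday.day)
        if same_day and person in faces:
            age = capture_date.year - birthday.year
            return f"Happy {age}th Birthday {person}"   # naive ordinal suffix
    return None

profile = {"birthdays": {"Molly": date(2002, 6, 14)}}
print(suggest_theme(date(2007, 6, 14), ["Molly"], profile))
# -> Happy 5th Birthday Molly
```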
A selected theme is used in conjunction with the metadata to acquire theme specific third party assets and effects. At step 220 this additional content and these treatments can be provided by a removable memory device, or can be accessed via a communication network from a service provider or via pointers to a third party provider. Arrangements between various participants concerning revenue distribution and terms of usage for these properties can be automatically monitored and documented by the system based on usage and popularity. These records can also be used to determine user preferences, so that popular theme specific third party assets and effects can be ranked higher or given a higher priority, increasing the likelihood of consumer satisfaction. These third party assets and effects include dynamic auto-scaling image templates, automatic image layout algorithms, video scene transitions, scrolling titles, graphics, text, poetry, music, songs, and digital motion and still images of celebrities, popular figures, and cartoon characters, all designed to be used in conjunction with user generated and/or acquired assets. The theme specific third party assets and effects as a whole are suitable for both hardcopy output, such as greeting cards, collages, posters, mouse pads, mugs, albums, and calendars, and soft copy output, such as movies, videos, digital slide shows, interactive games, websites, DVDs, and digital cartoons. The selected assets and effects can be presented to the user, for her approval, as a set of graphic images, a storyboard, a descriptive list, or a multimedia presentation. At decision step 230 the user is given the option to accept or reject the theme specific assets and effects; if she chooses to reject them, the system presents an alternative set of assets and effects for approval or rejection at step 250. Once the user accepts the theme specific third party assets and effects at step 230, they are combined with the organized user assets at step 240 and the preview module is initiated at step 260.
Referring now to
At step 300 the theme specific effects are applied to the arranged user and theme specific assets for the intended output type. At step 310 a virtual output type draft is presented to the user along with asset and output parameters, such as those provided in LUT 320, which includes output specific parameters such as image counts, video clip counts, clip durations, print sizes, photo album page layouts, music selection, and play duration. These details, along with the virtual output type draft, are presented to the user at step 310. At decision step 330 the user is given the option to accept the virtual output type draft or to modify the asset and output parameters. If the user wants to modify the asset/output parameters, she proceeds to step 340. One example of how this could be used would be to shorten a downloadable video from a 6-minute total duration to a 5-minute duration. The user could choose to manually edit the assets, or allow the system to automatically remove assets, shorten the presentation time of assets, speed up transitions, and the like, to shorten the video. Once the user is satisfied with the virtual output type draft at decision step 330, it is sent to the render module at step 350.
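The 6-minute-to-5-minute example could be automated by uniformly rescaling per-asset display times, as in the sketch below; uniform scaling is just one plausible strategy among those mentioned (removing assets, speeding up transitions).

```python
def fit_duration(durations_s, target_s):
    """Uniformly scale per-asset display times so the total matches target_s."""
    total = sum(durations_s)
    if total == 0:
        return durations_s
    scale = target_s / total
    return [d * scale for d in durations_s]

clips = [12.0, 45.0, 30.0, 273.0]   # 360 s of assets: a 6-minute draft
print(fit_duration(clips, 300.0))   # rescaled toward a 5-minute video
```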
Referring now to
Referring now to
Referring now to
Another means of context setting is referred to as "event segmentation", as described above. This uses time/date stamps to record usage patterns and, when used in conjunction with image histograms, provides a means to automatically group images, videos, and related assets into "events". This enables a user to organize and navigate large asset collections by event.
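A minimal event segmentation sketch using only time/date stamps is shown below; the six-hour gap threshold is an assumed parameter, and the histogram comparison mentioned above is omitted for brevity.

```python
from datetime import datetime, timedelta

def segment_events(timestamps, gap=timedelta(hours=6)):
    """Group sorted capture times into events, starting a new event
    whenever the gap between consecutive assets exceeds the threshold."""
    events = []
    for ts in sorted(timestamps):
        if events and ts - events[-1][-1] <= gap:
            events[-1].append(ts)   # same event: small time gap
        else:
            events.append([ts])     # large gap: start a new event
    return events
```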
The content of image, video, and audio assets can be analyzed using face, object, speech, and text identification algorithms. The number of faces and their relative positions in a scene or sequence of scenes can reveal important details that provide a context for the assets. For example, a large number of faces aligned in rows and columns indicates a formal posed context applicable to family reunions, team sports, graduations, and the like. Additional information refines the context: team uniforms with identified logos and text would indicate a "sporting event", matching caps and gowns would indicate a "graduation", assorted clothing may indicate a "family reunion", and a white gown, matching colored gowns, and men in formal attire would indicate a "Wedding Party". These indications, combined with additional extracted and derived metadata, provide an accurate context that enables the system to select appropriate assets, to provide relevant themes for the selected assets, and to provide relevant additional assets to the original asset collection.
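The cues described above could drive a simple rule-based classifier such as the following sketch; the cue names paraphrase the examples in the text and are purely illustrative.

```python
def infer_context(cues):
    """Map a set of detected visual cues to an event context."""
    if "faces_in_rows" in cues:                    # formal posed arrangement
        if "team_uniforms" in cues:
            return "sporting event"
        if "caps_and_gowns" in cues:
            return "graduation"
        if "white_gown" in cues and "formal_attire" in cues:
            return "wedding party"
        return "family reunion"                    # formal pose, assorted clothing
    return "unclassified"

print(infer_context({"faces_in_rows", "caps_and_gowns"}))  # -> graduation
```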
StoryShare—the Rules within Themes:
Themes are a component of storyshare that enhances the presentation of user assets. A particular story is built upon user provided content, third party content, and how that content is presented. The presentation may be hardcopy or softcopy; still, video, or audio; or a combination of these. The theme will influence the selection of third party content and the types of presentation options that a story utilizes. The presentation options include backgrounds, transitions between visual assets, effects applied to the visual assets, and supplemental audio, video, or still content. If the presentation is softcopy, the theme will also affect the time base, that is, the rate at which content is presented.
In a story, the presentation involves content and operations on that content. It is important to note that the operations will be affected by the type of content on which they operate. Not all operations that are included in a particular theme will be appropriate for all content that a particular story includes.
When a story composer determines the presentation of a story, it develops a description of a series of operations upon a given set of content. The theme may contain information that serves as a framework for that series of operations in the story. Comprehensive frameworks are used in “one-button” story composition. Less comprehensive frameworks are used when the user has interactive control of the composition process. The series of operations is commonly known as a template. A template can be considered to be an unpopulated story, that is, the assets are not specified. In all cases, when the assets are assigned to the template, the operations described in the template follow rules when applied to content.
In general, the rules associated with a theme take an asset as an input argument. The rules constrain what operations can be performed on what content during the composition of a story. Additionally, the rules associated with a theme can modify or enhance the series of operations, or template, so that the story may become more complex if assets contain specific metadata.
1) Not all image files have the same resolution. Therefore, not all image files can support the same range for a zoom operation. A rule to limit the zoom operation on a particular asset would be based on some combination of the metadata associated with the asset, such as resolution, subject distance, subject size, or focal length, for example (see the sketch following this list).
2) The operations used in the composition of a story will be based on the existence of an asset having certain metadata properties, or on the ability to apply a particular algorithm to that asset. If the existence or applicability condition cannot be met, then the operation cannot be included for that asset. For example, if the composition search property is looking for "tree" and there are no pictures containing trees in the collection, then no picture will be selected, and an algorithm that looks for "Christmas tree ornament" pictures cannot subsequently be applied.
3) Some operations require two (or possibly more) assets. Transitions are an example where two assets are required. The description of the series of operations must reference the correct number of assets that a particular operation requires. Additionally, the assets referenced by an operation must be of the appropriate type; that is to say, a transition cannot occur between an audio asset and a still image. In general, operations are type specific, as one cannot zoom in on an audio asset.
4) Depending on the operations used and the constraints imposed by the theme, the order of the operations performed on an asset might be constrained. That is, the composition process may require a pan operation to precede a zoom operation.
5) Certain themes may prohibit certain operations from being performed. For example, a story might not include video content, but only still images and audio.
6) Certain themes may restrict the presentation time that any particular asset or asset type may have in a story. In this case the display, show, or play operations would be limited. In the case of audio or video, such a rule will require the composer to do temporal preprocessing before including an asset in the description of the series of operations.
7) It is possible that a theme having a comprehensive framework includes references to operations that do not exist on a particular version of a composer. Therefore it is necessary for the theme to include operation substitution rules. Substitutions particularly apply to transitions: a "wipe" may have several blending effects when transitioning between two assets, and a simple sharp-edge wipe may be the substitute transition if the more advanced transitions cannot be described by the composer. One should note that the rendering device will also have substitution rules for cases where it cannot render the transition described by the story descriptor. In many cases it may be possible to substitute a null operation for an unsupported operation.
8) The rules of a particular theme may check whether or not an asset contains specific metadata. If a particular asset contains specific metadata, then additional operations can be performed on that asset constrained by the template present in the theme. Therefore, a particular theme may allow for conditional execution of operations on content. This gives the appearance of dynamically altering the story as a function of what assets are associated with a story or, more specifically, what metadata is associated with the assets that are associated with the story.
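Rules 1 and 8 above might be sketched as follows; the metadata keys (width, height, gps) and the zoom-limit formula are hypothetical assumptions made for illustration.

```python
def max_zoom(asset_meta):
    """Limit zoom range by image resolution (rule 1)."""
    pixels = asset_meta.get("width", 640) * asset_meta.get("height", 480)
    return min(4.0, pixels / 1_000_000)   # e.g. 1 MP -> 1x; capped at 4x

def plan_operations(asset_meta):
    """Build an operation list, adding operations conditionally on
    the metadata present in the asset (rule 8)."""
    ops = ["show"]
    zoom = max_zoom(asset_meta)
    if zoom > 1.0:
        ops.append(f"zoom(max={zoom:.1f})")
    if "gps" in asset_meta:               # conditional execution of operations
        ops.append("overlay_map")
    return ops

print(plan_operations({"width": 3000, "height": 2000, "gps": (40.7, -74.0)}))
# -> ['show', 'zoom(max=4.0)', 'overlay_map']
```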
Depending on the particular embodiment, a theme may place restrictions on operations depending on the sophistication or price of the composer or the privilege of a user. Rather than assign different sets of themes to different composers, a single theme would constrain the operations permitted in the composition process based on an identifier of composer or user class.
Presentation rules may be a component of a theme. When a theme is selected, the rules in the theme descriptor become embedded in the story descriptor. Presentation rules may also be embedded in the composer. A story descriptor can reference a large number of renditions that might be derived from a particular primary asset. Including more renditions will lengthen the time needed to compose a story, because the renditions must be created and stored somewhere within the system before they can be referenced in the story descriptor. However, the creation of renditions makes rendering of the story more efficient, particularly for multimedia playback. Similar to the rule described under theme selection, the number and formats of renditions derived from a primary asset during the composition process will be weighted most heavily by the renderings requested and logged in the user's profile, followed by those selected by the general population.
Rendering rules are a component of output descriptors. When a user selects an output descriptor, those rules help direct the rendering process. A particular story descriptor will reference the primary encoding of a digital asset. In the case of still images, this would be the Original Digital Negative (ODN). The story descriptor will likely reference other renditions of this primary asset. The output descriptor will likely be associated with a particular output device and therefore a rule will exist in the output descriptor to select a particular rendition for rendering.
Theme selection rules are embedded in the composer. User input to the composer and metadata present in the user content guide the theme selection process. The metadata associated with a particular collection of user content may lead to the suggestion of several themes. The composer will have access to a database indicating which of the themes suggested from the metadata has the highest probability of selection by the user. The rule weighs most heavily themes that fit the user's profile, followed by themes selected by the general population.
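The weighting described here could be sketched as a scoring function that favors the user's own theme history over general popularity; the weight of 10 and the field names are illustrative assumptions.

```python
def rank_themes(candidates, profile, popularity):
    """Order candidate themes: the user's own history dominates,
    general popularity breaks ties."""
    def weight(theme):
        return (10.0 * profile.get("theme_history", {}).get(theme, 0)
                + popularity.get(theme, 0.0))
    return sorted(candidates, key=weight, reverse=True)

profile = {"theme_history": {"birthday": 3}}
popularity = {"birthday": 0.4, "vacation": 0.9}
print(rank_themes(["vacation", "birthday"], profile, popularity))
# -> ['birthday', 'vacation']
```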
Referring to
It will be understood that, although specific embodiments of the invention have been described herein for purposes of illustration and explained in detail with particular reference to certain preferred embodiments thereof, numerous modifications and variations may be effected within the spirit of the invention and without departing from its scope.
Accordingly, the scope of protection of this invention is limited only by the following claims and their equivalents.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 60/870,976, filed on Dec. 20, 2006, entitled: “STORYSHARE AUTOMATION”. U.S. patent application Ser. No. 11/______, entitled: “AUTOMATED PRODUCTION OF MULTIPLE OUTPUT PRODUCTS”, filed concurrently herewith, is assigned to the same assignee hereof, Eastman Kodak Company, and contains subject matter related, in certain respect, to the subject matter of the present application. The above-identified patent applications are incorporated herein by reference.