The present invention relates to a user interface for creating digital image products, and more particularly to a context and content sensitive adaptable user interface for text and/or graphic element placement in the digital image product.
Current and prior art photo product creation applications for user-created photo products such as photo books, greeting cards, collages, and the like do not handle text well. The addition and positioning of text within image-based products is cumbersome, and these procedures are focused on technology or product elements, not on the user experience. Many applications treat text as a layer that exists on a picture layer or on the background layer of a digital image product. Different applications treat these layer implementations in unique ways, such as a linked annotation where text is placed adjacent to an image, a mode where users add text onto a picture, or a mode where users add text to a page, such as a caption. Other applications simply provide templates that illustrate where text can be placed, either on an image or adjacent to it. More sophisticated applications will adapt the text font size if the number of text characters exceeds the space allocated for text.
With imaging products, the page may be a collage background, one of the four surfaces of a folded card, or a photo book page, and the text creation method defines the link and associated behavior of the text. Each different condition, including a consumer-provided image, a graphic element such as a border or overlay, or a finished-product feature such as a fold, seam, or cut line, would provide a different real-time user interface process, feedback indication, text placement feature, and text style/color/configuration feature. Additionally, the actual content of the image, such as faces, objects, pets, animals, vehicles, and the like, is automatically identified via image analysis or provided metadata and is also used to modify the text placement user interface and text placement options relative to the proximity of the identified image content. This technique can also be applied to graphic elements (such as clip art) and extracted photographic objects.
According to the present invention, a context and content sensitive adaptable user interface for text and/or graphic element placement in digital image products is provided. The text placement and configuration application according to the present invention indicates to the user, via visual, haptic, and optionally audio feedback, the text placement options that are available on any given area of the image product. Image product areas include recognized objects; open spaces such as sky, sand, or water in a scene; template surround areas for composite images, collages, and album pages; and folds, gutters, and borders for post-printing converting, folding, trimming, and binding operations. As the user drags the text around the layout, a plurality of methods are used to convey the text and image product associations that are provided: the text conforms and adapts in real time to the spaces and colors of objects recognized in the image and to the formatting surrounding the image, allowing the user to make visual choices instead of selecting tools and initiating text formatting modifications.
According to the present invention, a user interface system for a computer device comprises a user interface control for an image product creation application for adding user-selected or user-supplied text or graphic elements to at least one image product, wherein the user interface control is responsive to the position of the text or graphic elements relative to at least one user-supplied image, a recognized object within the user-supplied image, or at least one image product related feature; wherein the user interface control provides an indication when the text or graphic elements are positioned proximal to the at least one user-supplied image, the recognized object within the user-supplied image, or the at least one image product related feature; and wherein the user interface control modifies at least one attribute of the color, font, size, shape, surround, or background of the text or graphic elements when they are placed proximal to the at least one user-supplied image, the recognized object within the user-supplied image, or the at least one image product related feature.
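By way of illustration only, a minimal sketch of how such a control might be structured follows; the class names, the region kinds, and the 40-pixel proximity threshold are assumptions made for the sketch, not limitations of the invention.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """An area the control reacts to: a user-supplied image, a recognized
    object within it, or a product feature such as a fold or gutter."""
    kind: str     # e.g. "image", "face", "fold", "gutter"
    box: tuple    # (x, y, w, h)

@dataclass
class TextElement:
    text: str
    x: float = 0.0
    y: float = 0.0
    color: str = "#000000"
    font: str = "Helvetica"

class AdaptiveTextControl:
    PROXIMITY = 40.0  # assumed threshold, in pixels

    def __init__(self, regions):
        self.regions = regions

    def on_drag(self, elem, x, y):
        """Called for every drag event; indicates and restyles in real time."""
        elem.x, elem.y = x, y
        for region in self.regions:
            if self._near(elem, region):
                self.indicate(region)       # visual/haptic/audio cue
                self.restyle(elem, region)  # adapt color/font/etc.

    def _near(self, elem, region):
        rx, ry, rw, rh = region.box
        cx, cy = rx + rw / 2, ry + rh / 2
        return ((elem.x - cx) ** 2 + (elem.y - cy) ** 2) ** 0.5 <= self.PROXIMITY

    def indicate(self, region):
        print(f"near {region.kind}")        # placeholder for UI feedback

    def restyle(self, elem, region):
        if region.kind == "fold":
            elem.color = "#FF0000"          # e.g. warn when near a crease line
```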
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
The user interface of the present invention includes many features useful for creating digital image products. In typical existing applications, by contrast, text added to a picture cannot be moved to the background. Text added to a page reacts either independently of a picture (handled as a separate node on the layout) or in relation to the picture (moved during a layout shuffle so that the text does not impinge on the image node), but the text is always tied to the background layer of the layout and cannot be tied or linked to the picture (as that requires adding text to the picture).
Typical image product creation applications do not provide both ways of adding text; they provide one, and the user must cope with the system's limitations to create the product they want.
The present invention is especially useful on touch-enabled tablet style devices or touch screen enabled kiosks, since text is added to a layout in a way that allows users to position and link the text as they see fit. The text would be added as the top layer of an imaging product, enabling the user to touch the text to drag it to a new location on the layout. Additionally, the "text node" could be stretched or shrunk to change the size of the text using standard multi-finger gestures, or it could automatically adapt to the content of the image. As the user drags the text around the layout, a plurality of methods are used to convey the text-plus-image associations that are provided. The visual and optional audio linking of the text with image content such as detected faces, animals, objects, clip art, or graphic design elements would convey the linkage of the text with that element. Additionally, product design features such as page gutters and crease locations for folded products like greeting cards would be conveyed to the user in the same manner.
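A sketch of the two-finger stretch/shrink behavior is given below, assuming a generic touch-event callback that reports fingertip positions; the event shape and the 8-to-120-point size clamp are assumptions.

```python
import math

class TextNode:
    def __init__(self, text, font_size=24):
        self.text = text
        self.font_size = font_size
        self._pinch = None  # (finger distance, font size) at gesture start

    def on_touch(self, touches):
        """touches: list of (x, y) fingertip positions on the text node."""
        if len(touches) != 2:
            self._pinch = None                  # gesture ended or not a pinch
            return
        d = math.hypot(touches[0][0] - touches[1][0],
                       touches[0][1] - touches[1][1])
        if self._pinch is None:
            self._pinch = (d, self.font_size)   # remember the gesture start
            return
        d0, size0 = self._pinch
        # Scale the font with the change in finger spread, clamped to sane bounds.
        self.font_size = max(8, min(120, size0 * d / d0))
```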
When the user drags the text adjacent to a picture, a visual method is used to convey the linkage, such as a visual outline around the two elements which would, for example, "flash" or "alternate colors" to convey that it is a linkage indicator rather than a graphic product design element such as a border. Additionally, the text and the image could be linked with a standard technique such as a Graphical User Interface insertion bar or an icon (lock, rope, or carabiner) to indicate that the image and text node are "tied" together. This would indicate that the two elements are linked, and subsequent actions relating to layout changes, such as moving a picture, shuffling elements within a layout, etc., would treat the text and element as a common unit. This relationship would be maintained unless the user chose to separate the elements. Dragging the text links the element as a caption that exists linked outside an image, as text that overlays an image, or as freeform text not linked to an image/element. Likewise, touching a linked text node in a defined manner (e.g., after a dwell period) would allow the user to modify the text+element linkage (e.g., a caption could be moved to be linked within the picture or become freeform/unlinked text).
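One way to model the three linkage outcomes of a drag is sketched below; the state names and the 20-pixel adjacency margin are assumptions.

```python
from enum import Enum

class Linkage(Enum):
    CAPTION = "caption"     # linked, positioned adjacent to the image
    OVERLAY = "overlay"     # linked, positioned over the image
    FREEFORM = "freeform"   # not linked to any image/element

def classify_drop(text_box, image_box, margin=20):
    """Boxes are (x, y, w, h); `margin` (assumed) is how close dropped
    text must be to the image to count as an adjacent caption."""
    tx, ty, tw, th = text_box
    ix, iy, iw, ih = image_box
    if tx < ix + iw and tx + tw > ix and ty < iy + ih and ty + th > iy:
        return Linkage.OVERLAY
    if (tx < ix + iw + margin and tx + tw > ix - margin and
            ty < iy + ih + margin and ty + th > iy - margin):
        return Linkage.CAPTION
    return Linkage.FREEFORM
```

Layout operations would then move a CAPTION- or OVERLAY-linked pair as a single unit until the user separates the elements.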
In addition, this arrangement facilitates the addition of new imaging products, in that the same adaptable text positioning rules will automatically adapt to the new imaging product. The user interface can be adapted to run on a processor-equipped device with a display, including a personal computer, a kiosk, a personal mobile device such as a smartphone or a tablet, or a remote display connected to a server computer on the network or in the cloud.
One example according to the present invention of image content and product element context sensitive text modification and placement is described below. A face is detected in the digital image. As the user drags the text, quotation marks are added to the text automatically when the text is within a predetermined proximity to the detected face, and a "speech or thought bubble" forms around the text with the "point" of the bubble aimed at the mouth of the detected face. Adjusting the position of the text away from the mouth of the detected face toward the head of the detected face turns the "speech bubble" into a "thought bubble".
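A minimal sketch of that proximity decision follows, assuming mouth and head-top landmark points are already available from a face-detection step; the 80-pixel radius is an assumption standing in for the "predetermined proximity".

```python
import math

def bubble_for(text_pos, mouth_pos, head_top_pos, radius=80):
    """Return "speech", "thought", or None based on landmark proximity.

    All positions are (x, y) pixel coordinates; `radius` (assumed)
    is the predetermined proximity at which quotation marks and a
    bubble are added automatically."""
    d_mouth = math.hypot(text_pos[0] - mouth_pos[0], text_pos[1] - mouth_pos[1])
    d_head = math.hypot(text_pos[0] - head_top_pos[0], text_pos[1] - head_top_pos[1])
    if min(d_mouth, d_head) > radius:
        return None                   # too far away: no bubble, no quotes
    return "speech" if d_mouth <= d_head else "thought"
```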
Additional examples are described below.
Additional features of the user interface of the present invention are described below. For example, text color automatically turns complementary or contrasting to an image background. Placed text pseudo-conforms to the shape and size of an object in an image. When the user hovers text over a simple object of a more or less uniform color in the scene, such as a balloon, the center of the text is expanded and distorted so that the text conforms to and fills the assumed "3D shape" of the balloon. Using the "balloon example", common identifiable simply shaped objects, such as balloons, balls, TV screens, mirrors, the sides of vehicles, faces, etc., would be assigned a compatible simple wireframe, not visible to the user. When text is "hovered" over the object, the text would re-scale and lay out according to the available area of the object and conform to the wireframe shape. The user interface of the present invention includes touch screen special features such as multi-finger stretch, gestures, audio commands, and pointing devices. Font style also adjusts to the text content; for example, "Are we having fun yet?" would be presented with a fun, casual, humorous font. Text font also adjusts to image content; for example, an image of "Big Ben" would be set in an "Old English" font, and a rodeo would provide a rope-like font (the existence of a rodeo, for example, could be semantically identified, or identified by event time/location metadata, social network tagging/captions, or analysis of user-supplied text, such as "we are at the rodeo").
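A sketch of the wireframe assignment and content-driven font choice might look like the following; the label-to-wireframe table and the specific font names are assumptions chosen to match the examples above.

```python
# Assumed label-to-wireframe table; the wireframe is invisible to the user.
WIREFRAMES = {
    "balloon": "sphere", "ball": "sphere", "face": "sphere",
    "tv screen": "plane", "mirror": "plane", "vehicle side": "plane",
}

# Assumed keyword-to-font hints matching the examples in the text.
FONT_HINTS = {
    "big ben": "Old English Text MT",
    "rodeo": "RopeFont",         # a rope-like display font (assumed name)
    "fun": "Comic Sans MS",      # a fun, casual, humorous font
}

def wireframe_for(label):
    """Wireframe that hovered text should re-scale to and conform to."""
    return WIREFRAMES.get(label.lower(), "plane")

def font_for(caption, recognized_labels, default="Helvetica"):
    """Pick a font from the caption text and recognized image content."""
    haystack = " ".join([caption.lower()] + [l.lower() for l in recognized_labels])
    for keyword, font in FONT_HINTS.items():
        if keyword in haystack:
            return font
    return default

# e.g. font_for("Are we having fun yet?", ["balloon"]) -> "Comic Sans MS"
```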
A simple Graphical User Interface ("GUI") according to the present invention would include a designated text input area (e.g., a graphical lasso to contain the user's text, with a carabiner to "lock" the text to an object, to a template frame, or to an "open" area (sky, sand, a flat surface, etc.) of an image). The GUI of the present invention could use a drop-shadow-like technique to "write in" sand, snow, foam, bubbles, foliage, and the like. The GUI according to the present invention would include an option for user editing of the text or graphic after placement (e.g., a paint brush stroke instead of crayon). The GUI of the present invention could also suggest text based on content analysis.
Object and face recognition, tags provided from social network comments, and camera-generated time and location metadata can be used to provide captions, supply suitable words for a caption, or suggest a caption. An example of this technique would involve using the camera location metadata and a map application to identify a location as a golf course, with a standing individual recognized as the user's father. The system could recommend the caption "Dad loves to golf" or "My Dad is quite the golfer". In addition, potential captions can be suggested as a random or prioritized list, a "word cloud" of suggested terms, and/or as audio phrases that the user can select via verbal commands such as "yes" or "no".
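A sketch of that suggestion step is shown below; the template table, the reverse-geocoding source of `place_type`, and the face-recognition source of `person_relation` are assumptions.

```python
def suggest_captions(place_type, person_relation):
    """Combine location metadata and face recognition into caption suggestions.

    `place_type` might come from reverse-geocoding the camera's GPS
    metadata against a map application; `person_relation` from a face
    recognition match (both assumed inputs)."""
    templates = {
        ("golf course", "Dad"): [
            "Dad loves to golf",
            "My Dad is quite the golfer",
        ],
    }
    generic = [f"{person_relation} at the {place_type}"]
    return templates.get((place_type, person_relation), []) + generic

# e.g. suggest_captions("golf course", "Dad")
# -> ["Dad loves to golf", "My Dad is quite the golfer", "Dad at the golf course"]
```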
The GUI of the present invention would assign a cascading level of importance to the content in a scene so as not to obscure important content with text or speech or thought bubbles. For example, in a group shot, the thought or speech bubble "speaker/thinker" indicator would be positioned appropriately at the head or mouth of the individual, but the text area would be extended so as not to obscure other faces. Foreground faces facing forward would be assigned a first priority, foreground faces facing sideways would be assigned a second priority, foreground objects would be assigned a third priority, and open/uncluttered spaces would not be assigned a priority and would accept text whenever hovered over.
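That cascade could be encoded as in the sketch below; the region fields and the rectangle-overlap test are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneRegion:
    kind: str                   # "face", "object", or "open"
    facing: str = ""            # "forward" or "sideways", for faces
    foreground: bool = False
    box: tuple = (0, 0, 0, 0)   # (x, y, w, h)

def priority(r):
    """Cascading importance; lower number = more important, None = no priority."""
    if r.kind == "face" and r.foreground and r.facing == "forward":
        return 1
    if r.kind == "face" and r.foreground:   # sideways foreground faces
        return 2
    if r.kind == "object" and r.foreground:
        return 3
    return None                             # open/uncluttered space

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and ax + aw > bx and ay < by + bh and ay + ah > by

def text_allowed(text_box, regions):
    """Accept the hovered text only where no prioritized content is obscured."""
    return all(priority(r) is None or not overlaps(text_box, r.box)
               for r in regions)
```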
It will be understood that, although specific embodiments of the invention have been described herein for purposes of illustration and explained in detail with particular reference to certain preferred embodiments thereof, numerous modifications and variations can be effected within the spirit and scope of the invention.
This application claims the benefit of U.S. Provisional Application No. 61/867,325, filed Aug. 19, 2013, entitled "Context Sensitive Adaptable User Interface," which is hereby incorporated by reference in its entirety.