This disclosure relates to the field of data processing, and more particularly, to techniques for assigning an attribute to an image asset displayed on a computing device, such as a device having a touch-sensitive screen.
Photographers review photographs taken during a photo shoot to winnow down a typically large number of images into a smaller group of winners, or so-called heroes. When using analog film, the photographer develops the negatives and examines either the negatives themselves or contact sheets to identify the images of particular interest. With digital photography, image assets can be viewed either directly on the camera or using a computer-implemented image processing application, such as Adobe Lightroom or Adobe Camera Raw, after the image data has been transferred from the camera to the computer. However, prior techniques do not permit the photographer to assign a status to a displayed image in situ using a touch screen gesture.
The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral.
As mentioned above, photographers often take many photographs from which relatively few are selected for use. The selection process can include simply marking each image as picked or rejected. However, when working with large numbers of images, this selection process can be very time consuming, particularly when the photographs are evaluated one at a time.
To this end, and in accordance with an embodiment of the present invention, techniques are disclosed for assigning, in situ, a status attribute to an image asset displayed on a computing device. Such computing devices can be, for example, devices having a touch-sensitive screen, including smart phones and tablets. The device can display images one at a time, and a user can browse through the images by swiping or gesturing horizontally (e.g., right to left or left to right) across the touch screen using a finger or stylus. The user may also use other gestures (e.g., pinching gestures) to zoom in or out on the displayed image. Each image has a status attribute indicating whether the image has been picked or rejected by the user (or another user), once a status has been initially assigned. At any time, the user can view and change the status of the displayed image using a vertical touch contact gesture. For example, an upward touch contact gesture may be used to assign a picked status to an image asset, remove a rejected status from the image asset, or both. Similarly, a downward touch contact gesture may be used to assign a rejected status to the image asset, remove a picked status from the image asset, or both. Regardless of which gesture (upward or downward) is used, any touch contact can invoke a user interface (UI) affordance configured to display the status and available choices that can be selected by the user via a touch contact gesture. By their design, touch-sensitive devices facilitate direct interaction with content, as opposed to indirect interaction using an input device such as a mouse or keyboard, thereby affording a solution that allows the user to quickly assign a status to an image using a single gesture on a touch-sensitive screen while the image is being displayed. Numerous configurations and variations will be apparent in light of this disclosure.
For example, embodiments can be used in conjunction with any touch-sensitive device and with any application that uses flagging attributes.
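The status transitions described above (an upward gesture moves an image toward the picked state, a downward gesture toward the rejected state, passing through the unassigned state) can be sketched as follows. This is a minimal illustration only; the function and state names are assumptions, not part of the disclosure:

```python
# States ordered like slot-machine reels: picked at the top,
# unassigned (None) in the middle, rejected at the bottom.
STATES = ["picked", None, "rejected"]

def next_status(current, direction):
    """Return the status selected by a single vertical gesture step.

    direction: +1 for an upward gesture (toward 'picked'),
               -1 for a downward gesture (toward 'rejected').
    A gesture past the end of the reel leaves the status unchanged.
    """
    i = STATES.index(current)
    # An upward gesture (+1) moves toward index 0; clamp at the reel ends.
    j = max(0, min(len(STATES) - 1, i - direction))
    return STATES[j]
```

Under this model, an upward gesture assigns a picked status to an unassigned image, while the same gesture removes (unflags) a rejected status, matching the "assign, remove, or both" behavior described above.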
As used in this disclosure, the term “in situ” refers to requesting or commanding performance of a function, such as assigning a status attribute, using an input applied to a device displaying an object, such as an image, upon which the function is to be performed. For example, a pick or reject function may be performed by applying a flick-like or swipe-like gesture to a touch-sensitive screen displaying an image without separately invoking a user interface or chrome prior to performing the flick-like or swipe-like gesture. As used in this disclosure, the term “chrome” refers to visible features of a graphical user interface (e.g., text, icons, cursors, buttons, checkboxes, sliders, frames, windows, interactive widgets, or other visible user interface elements).
As used in this disclosure, the term “UI affordance” refers to a visual representation of a functional object within the user interface of the device. The UI affordance may, for example, have a circular form (or other regular shape) in the center of the touch-sensitive screen (or other location) containing a flag graphic and a text string. The status of any given image asset can be “picked,” “rejected” or unassigned (e.g., null), or any other suitable qualifier. The UI affordance is configured to display the flag graphic and text string corresponding to the current status of the image asset being displayed, as well as to display an animated graphic in response to a vertical touch contact gesture for changing the status of the image asset, in accordance with an embodiment.
In one particular embodiment, the UI affordance is revealed to the user on the display when a single touch contact (e.g., one finger only) or vertical gesture is detected in situ with a displayed image. No other user interface or chrome is used prior to revealing the UI affordance. The vertical gesture results from a user touching the screen with a single finger and flicking or swiping the finger substantially upwards or downwards with respect to the screen orientation (e.g., portrait or landscape). A substantially vertical gesture may, for example, be one in which the vertical component of the gesture is larger than the horizontal component, if any. Once the UI affordance is revealed, the current status of the displayed image asset, if any, is shown in the affordance as a picked flag or rejected flag, respectively. If the status is unassigned, the UI affordance displays a suitable graphic or text string (e.g., “unassigned” or “unflag”). As the distance of the gesture increases (e.g., as the user continues to swipe the finger across the screen in a continuous vertical motion), the UI affordance is animated to display the status that will be selected if the user ends the touch contact at the current screen position (e.g., by lifting the finger off of the screen). For example, an upward gesture may cause the UI affordance to display “Pick” and a flag with a checkmark icon (representing a picked status) if there is no status currently assigned to the image asset, or, if the current status of the image asset is rejected, to display “Unflag” and then “Pick” as the gesture distance increases. Similarly, a downward gesture may cause the UI affordance to display “Reject” and a flag with a cross icon (representing a rejected status) if there is no status currently assigned to the image asset, or, if the current status of the image asset is picked, to display “Unflag” and then “Reject” as the gesture distance increases. Visually, this may appear on the screen to behave like a slot machine.
For instance, the circular window of the UI affordance can display at least three states: picked, unassigned, and rejected. The visual transitions between those three states occur much like the reels of a slot machine: graphics corresponding to the various states vertically scroll in and out of view, moving in response to the corresponding vertical gesture. Other types and forms of UI affordances will be apparent in light of this disclosure.
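The "substantially vertical" test described above (the vertical component of the gesture exceeds the horizontal component) can be sketched as a simple classifier; the function name and the screen-coordinate convention are assumptions made for illustration:

```python
def classify_gesture(dx, dy):
    """Classify a single-finger drag from its displacement (dx, dy).

    A gesture is 'substantially vertical' when the magnitude of its
    vertical component exceeds that of its horizontal component.
    Screen coordinates are assumed to grow downward, so a negative dy
    corresponds to an upward gesture.
    """
    if abs(dy) > abs(dx):
        return "up" if dy < 0 else "down"
    return "horizontal"
```

A horizontal classification would be routed to the image-browsing (swipe-between-images) behavior, while a vertical classification invokes the UI affordance.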
Example System
By way of example, the camera 110 can be configured to obtain a plurality of images and send each image frame to the processor 120. The processor 120 in turn can send one or more of the images to the display 130 so that a user can view the images. Additionally or alternatively, the processor 120 can send one or more of the images to an image store 140 or other suitable memory for storage and subsequent retrieval. The image store 140 may be an internal memory of the computing device 100 or an external database (e.g., a server-based database) accessible via a wired or wireless communication network, such as the Internet. The image store 140 can contain a low resolution queue 142 and a high resolution queue 144 for storing low and high resolution versions of the images, respectively.
Example Use Cases
In some embodiments, at least two image queues can be used for staging the first and second images 150, 160 (and any other images) in memory for display: a low resolution queue and a high resolution queue, such as the queues 142, 144 described with respect to
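The two-queue staging scheme can be sketched as follows: a low-resolution version of each image is staged for immediate display, and the high-resolution version is swapped in once it has been rendered. The class and method names here are illustrative assumptions, not part of the disclosure:

```python
from collections import deque

class ImageStore:
    """Minimal sketch of staging images in a low-resolution queue and a
    high-resolution queue, as described above."""

    def __init__(self):
        self.low_res_queue = deque()   # staged immediately for display
        self.high_res_queue = deque()  # populated as renders complete

    def stage(self, image_id):
        """Stage the quickly available low-resolution version."""
        self.low_res_queue.append(image_id)

    def promote(self, image_id):
        """Record that the high-resolution render has finished."""
        self.high_res_queue.append(image_id)

    def version_to_display(self, image_id):
        """Prefer the high-resolution version when it is ready."""
        if image_id in self.high_res_queue:
            return "high"
        return "low"
```

This lets the user begin browsing and flagging without waiting for full-resolution renders, with the displayed image upgraded in place when the high-resolution version becomes available.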
If the vertical contact gesture input 312 exceeds a predetermined distance threshold 316 (e.g., the distance between the initial point 314 and the end point of the touch contact is greater than the threshold), the UI affordance 310 displays the corresponding status indicator. For example, as shown in
Shortly after the gesture 312 ends, the UI affordance 310 may briefly display a confirmatory animation (e.g., a bubbling zoom-like animation of the visible flag and text) before disappearing from the display 130, leaving only the image 150 visible. The status of the displayed image 150 selected by the vertical gesture persists in memory or the image store 140. At this point, the user can again change the status of the image 150 using a vertical gesture, or select a different image, such as by using the horizontal swipe gestures 152 and 162 described above with respect to
Example Methodology
In one embodiment, if the speed at which the touch contact location moves vertically is fast (e.g., resulting from a flick motion), then the slot machine animation is similarly performed quickly in the direction of motion. Likewise, if the speed at which the touch contact location moves vertically is slow (e.g., resulting from a swipe motion), then the slot machine animation is similarly performed slowly in the direction of motion and commensurate with the speed at which the user is applying the gesture to the touch screen. In this manner, a user can quickly select a status by making a rapid flick gesture, or more slowly select the status by making a gradual swipe gesture. In the case where the user is making a gradual swipe gesture, the user has the option of canceling the status selection by swiping in the opposite direction or by ending the touch contact before the threshold distance has been reached from the initial contact location.
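The speed matching described above (a fast flick yields a fast animation, a slow swipe an animation commensurate with the finger speed) can be sketched as follows. The function, its parameter names, and the clamping bounds are assumptions made purely for illustration:

```python
def animation_duration(gesture_speed_px_s, scroll_px, min_s=0.05, max_s=0.6):
    """Return an animation duration (seconds) matched to gesture speed.

    gesture_speed_px_s: vertical speed of the touch contact in pixels/second.
    scroll_px: distance the slot-machine graphic must scroll, in pixels.
    The duration is clamped so the animation never becomes imperceptibly
    fast or distractingly slow (bounds are assumed values).
    """
    if gesture_speed_px_s <= 0:
        return max_s
    return max(min_s, min(max_s, scroll_px / gesture_speed_px_s))
```

A rapid flick thus completes the slot-machine scroll almost immediately, while a gradual swipe keeps the animation tracking the finger.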
In one embodiment, if the speed at which the touch contact location moves vertically is fast (e.g., resulting from a flick motion) and the touch contact input ends 704, and a status selection is available, then the selected status is displayed 710 in the center of the UI affordance with a bubbling zoom-like animation. If, on the other hand, no status selection is available (e.g., the user is swiping in a direction for which there is no selection available), the current status of the image is displayed 708 in the center of the UI affordance with a bubbling zoom-like animation. Likewise, if the speed at which the touch contact location moves vertically is slow (e.g., resulting from a swipe motion) and the touch contact location input ends 704 at or beyond a threshold distance from the initial contact location, and a status selection is available, then the selected status is displayed 710 in the center of the UI affordance with a bubbling zoom-like animation. If, on the other hand, no status selection is available, the current status of the image is displayed 708 in the center of the UI affordance without a bubbling zoom-like animation.
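The end-of-gesture decision described above can be sketched as a single rule: a fast flick commits the tentatively selected status immediately, while a slow swipe commits it only if the contact ends at or beyond the threshold distance; otherwise the current status is retained. The function and parameter names are assumptions for illustration:

```python
def status_on_release(current, selected, distance, threshold, was_flick):
    """Resolve the status when the touch contact ends.

    current:   status before the gesture began.
    selected:  status tentatively shown by the UI affordance, or None
               if no selection is available in the gesture direction.
    distance:  vertical travel from the initial contact location (px).
    threshold: minimum travel for a swipe to commit the selection (px).
    was_flick: True for a fast flick, which commits regardless of distance.
    """
    if selected is None:
        return current  # no selection available in this direction
    if was_flick or distance >= threshold:
        return selected
    return current  # swipe ended short of the threshold: cancel
```

This also captures the cancel behavior: ending a gradual swipe before the threshold distance leaves the image's status unchanged.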
Example Computing Device
The computing device 1000 includes one or more storage devices 1010 and/or non-transitory computer-readable media 1020 having encoded thereon one or more computer-executable instructions or software for implementing techniques as variously described in this disclosure. The storage devices 1010 may include a computer system memory or random access memory, durable disk storage (which may include any suitable optical or magnetic durable storage device), semiconductor-based storage (e.g., RAM, ROM, Flash, or a USB drive), a hard drive, CD-ROM, or other computer-readable media, for storing data and computer-readable instructions and/or software that implement various embodiments as taught in this disclosure. The storage device 1010 may include other types of memory as well, or combinations thereof. The storage device 1010 may be provided on the computing device 1000 or provided separately or remotely from the computing device 1000. The non-transitory computer-readable media 1020 may include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more USB flash drives), and the like. The non-transitory computer-readable media 1020 included in the computing device 1000 may store computer-readable and computer-executable instructions or software for implementing various embodiments. The computer-readable media 1020 may be provided on the computing device 1000 or provided separately or remotely from the computing device 1000.
The computing device 1000 also includes at least one processor 1030 for executing computer-readable and computer-executable instructions or software stored in the storage device 1010 and/or non-transitory computer-readable media 1020 and other programs for controlling system hardware. Virtualization may be employed in the computing device 1000 so that infrastructure and resources in the computing device 1000 may be shared dynamically. For example, a virtual machine may be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines may also be used with one processor.
A user may interact with the computing device 1000 through an output device 1040, such as a screen or monitor (e.g., the touch-sensitive display 130 of
The computing device 1000 may run any operating system, such as any of the versions of Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device 1000 and performing the operations described in this disclosure. In an embodiment, the operating system may be run on one or more cloud machine instances.
In other embodiments, the functional components/modules may be implemented with hardware, such as gate level logic (e.g., FPGA) or a purpose-built semiconductor (e.g., ASIC). Still other embodiments may be implemented with a microcontroller having a number of input/output ports for receiving and outputting data, and a number of embedded routines for carrying out the functionality described in this disclosure. In a more general sense, any suitable combination of hardware, software, and firmware can be used, as will be apparent.
As will be appreciated in light of this disclosure, the various modules and components of the system shown in
Numerous embodiments will be apparent in light of the present disclosure, and features described in this disclosure can be combined in any number of configurations. One example embodiment provides a system including a storage having at least one memory, and one or more processors each operatively coupled to the storage. The one or more processors are configured to carry out a process including displaying an image on a display device, the image having a status attribute associated therewith, the status attribute representing one of a plurality of states; detecting a touch contact input via a touch-sensitive input device; invoking a user interface affordance in response to the touch contact input, the user interface affordance configured to provide a visual indication of the plurality of states on the display device; receiving a user selection of one of the plurality of states via the user interface affordance based on the same touch contact input; providing a visual confirmation of the user selection via the user interface affordance on the display device; and assigning the state associated with the user selection to the status attribute. In some embodiments, the touch contact input includes a vertical touch gesture, and the invoking of the user interface affordance further comprises displaying the user interface affordance on the display device. In some such embodiments, the user interface affordance is completely displayed subsequent to the vertical touch gesture reaching or exceeding a threshold distance away from an initial touch contact location. In some other such embodiments, the user interface affordance is gradually displayed as a function of a distance the vertical touch gesture moves away from an initial touch contact location. 
In yet some other such embodiments, the visual indication of the plurality of states includes an animation of each of the plurality of states moving as a function of a distance the vertical touch gesture moves away from an initial touch contact location. In some embodiments, the process includes detecting an end of the touch contact input via the touch-sensitive input device, where the providing of the visual confirmation of the user selection occurs in response to the end of the touch contact input. In some such embodiments, the visual confirmation includes an animation. In some embodiments, the displayed image is a low resolution version of the image stored in a low resolution image queue, where the process includes rendering a high resolution version of the image to be stored in a high resolution image queue, and where the displayed image is changed to the high resolution version of the image after the image has been rendered. In some embodiments, the touch contact input includes a flick touch gesture or a swipe touch gesture. In some such embodiments, the user interface affordance is completely displayed subsequent to the flick touch gesture or subsequent to the swipe touch gesture. Another embodiment provides a non-transitory computer-readable medium or computer program product having instructions encoded thereon that when executed by one or more processors cause the one or more processors to perform one or more of the functions defined in the present disclosure, such as the methodologies variously described in this paragraph. As previously discussed, in some cases, some or all of the functions variously described in this paragraph can be performed in any order and at any time by one or more different processors.
The foregoing description and drawings of various embodiments are presented by way of example only. These examples are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Alterations, modifications, and variations will be apparent in light of this disclosure and are intended to be within the scope of the invention as set forth in the claims.