Not applicable.
Not applicable.
In recent years, computer users have become more and more reliant upon personal computers to store and present a wide range of digital media. For example, users often utilize their computers to store and interact with digital images. As millions of families now use digital cameras to snap thousands of images each year, these images are often stored and organized on their personal computers.
With the increased use of computers to store digital media, greater importance is placed on the efficient retrieval of desired information. For example, metadata is often used to aid in the location of desired media. Metadata consists of information relating to and describing the content portion of a file. Metadata is typically not the data of primary interest to a viewer of the media. Rather, metadata is supporting information that provides context and explanatory information about the underlying media. Metadata may include information such as time, date, author, subject matter and comments. For example, a digital image may include metadata indicating the date the image was taken, the names of the people in the image and the type of camera that generated the image. The discrete pieces of information stored as metadata are often referred to as “tags.” For example, a tag may be the keyword “John Smith,” and images depicting John Smith may receive this tag.
Metadata may be created in a variety of different ways. It may be generated when a media file is created or edited. For example, the user or a device may assign metadata when the media is initially recorded. Alternatively, a user may enter metadata via a metadata editor interface provided by a personal computer.
Given the increasingly important role metadata plays in interacting with desired media, it is important that computer users be provided tools for quickly and easily applying desired metadata. Without such tools, users may elect not to create metadata and, as a result, will be unable to locate media of interest. For example, metadata may indicate that a certain person is shown in various digital images. Without this metadata, a user would have to examine the images one-by-one to locate images depicting this person.
A number of existing interfaces are capable of assigning, or “tagging,” digital media with metadata. These existing interfaces, however, require the user to navigate among various menus and/or options before entry of metadata text is permitted. Further, today's metadata editor interfaces typically rely on keyboard entry of metadata text. Such navigation and keyboard entry can be time-consuming, especially with large sets of items requiring application of metadata.
The present invention meets the above needs and overcomes one or more deficiencies in the prior art by providing systems and methods for associating metadata with digital media. Tags that may be stored as metadata are associated with single-action user inputs. For example, a tag may be associated with user selection of an icon. Entry of one of the single-action user inputs is detected. For example, a user may select the icon with a mouse click. The tag associated with the detected input is stored as metadata associated with a selected item of digital media.
It should be noted that this Summary is provided to generally introduce the reader to one or more select concepts described below in the Detailed Description in a simplified form. This Summary is not intended to identify key and/or required features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The present invention is described in detail below with reference to the attached drawing figures, wherein:
The subject matter of the present invention is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the term “step” may be used herein to connote different elements of methods employed, the term should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. Further, the present invention is described in detail below with reference to the attached drawing figures, which are incorporated in their entirety by reference herein.
The present invention provides an improved system and method for associating metadata with digital media. An exemplary operating environment for the present invention is described below.
Referring initially to
The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The invention may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialized computing devices, etc. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
With reference to
Computing device 100 typically includes a variety of computer-readable media. By way of example, and not limitation, computer-readable media may comprise Random Access Memory (RAM); Read Only Memory (ROM); Electronically Erasable Programmable Read Only Memory (EEPROM); flash memory or other memory technologies; CD-ROM, digital versatile disks (DVD) or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; or any other medium that can be used to encode desired information and be accessed by computing device 100.
Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, nonremovable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 100 includes one or more processors that read data from various entities such as memory 112 or I/O components 120. Presentation component(s) 116 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
At 202, the method 200 associates tags with single-action user inputs. A tag may include any information acceptable for being associated as metadata with an item of media. A tag may identify keywords related to the subject matter depicted by the media. For example, the keywords may identify the people in an image or events associated with the image. As will be appreciated by those skilled in the art, keyword-based tags may be used to organize or to locate an item of media.
A tag may also express an action to be performed with respect to the digital media. For example, a user may desire that an image be printed or emailed. Accordingly, a tag may indicate the commands “email” or “print.” Subsequently, these commands may be used to trigger the emailing or printing of the image. As will be appreciated by those skilled in the art, a tag may indicate a variety of actions that a user intends to be performed with respect to the media.
Tags may originate from a variety of sources. For example, tags may be automatically created (i.e., predefined) by a software provider or other source. The tags may also be user-defined; such tags are created by the user and may be associated with an icon. As another example, tags may be automatically generated based on user actions. For example, an automatically generated tag may indicate the date a digital image was last printed.
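By way of illustration only, the following sketch (written in Python, with hypothetical names throughout) models a tag as a simple record that carries its text and its origin, whether predefined, user-defined or automatically generated:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tag:
    """A discrete piece of metadata that may be applied to an item of media."""
    text: str                 # e.g. "John Smith", "Beaches", or a command such as "Email"
    origin: str = "user"      # "predefined", "user", or "auto"

# Predefined tags might be supplied with the software.
PREDEFINED = [Tag("Email", origin="predefined"), Tag("Print", origin="predefined")]

# User-defined tags are created by the user, e.g. through a tag editor.
user_tags = [Tag("Beaches"), Tag("John Smith")]

# Automatically generated tags may record user actions, such as a last-printed date.
auto_tag = Tag(f"Last printed {date.today().isoformat()}", origin="auto")
```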
As previously mentioned, the method 200 associates tags with single-action user inputs. Any number of single-action user inputs are known in the art, and these inputs may vary based on input device. For example, a single-action user input may be the entry of a “hot key” keystroke. A hot key is a keystroke entry on a computer keyboard that indicates user assignment/association of a tag. A hot key may be associated with a single keyboard key or a combination of keyboard keys that are pressed simultaneously. As another example, the single-action user input may relate to the selection of an icon or widget displayed in a graphical interface. Such selection may be made, for example, with a mouse click or with a stylus input. It should be noted that the term icon, as it is used herein, refers to any graphical object that may be presented to a user and associated with a tag. A single-action user input may also be associated with a specialized button on, for example, a device, a computer keyboard or a remote control. For example, a digital camera may have an “email” button to be pressed when a user desires to tag a picture for emailing. Such specialized buttons/keys may be incorporated into any number of devices or keyboards.
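As a further non-limiting illustration, and assuming the hypothetical key names, icon identifiers and registry shown below, a single-action user input may simply be resolved against a table that maps hot keys, icon selections and specialized buttons to their associated tags:

```python
# Hypothetical registry mapping single-action user inputs to tags.
# A hot key may be a single key or a simultaneous key combination;
# an icon selection is identified here by an icon name.
input_to_tag = {
    ("hotkey", "Ctrl+E"): "Email",        # keyboard combination
    ("hotkey", "F5"): "Print",            # single key
    ("icon", "envelope"): "Email",        # mouse click or stylus selection of an icon
    ("icon", "child_photo"): "John Smith",
    ("button", "camera_email"): "Email",  # specialized button on a device
}

def tag_for_input(kind: str, identifier: str) -> str | None:
    """Return the tag associated with a detected single-action input, if any."""
    return input_to_tag.get((kind, identifier))
```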
At 204, the method 200 presents the item of digital media to the user. Any visual representation of the digital media may be acceptable for presentation at 204. For example, a digital image may be displayed if the media is a picture or a video. The method 200 also presents icons associated with the tags. Any number of icons may be presented to the user, and these icons may be customized to reflect commonly used tags. For example, an icon may be associated with the name of a user's child. Accordingly, each image that depicts the child may be quickly tagged with the child's name via selection of this icon.
The method 200, at 206, detects entry of a single-action user input. For example, a hot key keystroke may be detected, or the method 200 may detect a mouse click selecting an icon. Upon detection of the user input, the method 200, at 208, stores the tag associated with the detected input as metadata along with the item of digital media. A variety of techniques exist in the art for storing metadata with media. In one embodiment, the metadata may be used to identify key aspects of the underlying media. In this manner, items of interest may be located by searching for items having a certain tag. As will be appreciated by those skilled in the art, because the tags are stored with the underlying files, various applications and/or an operating system may access and utilize the metadata information.
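The sketch below ties steps 206 and 208 together. For simplicity it persists tags in a JSON sidecar file stored alongside the media item; this is merely one of the many storage techniques noted above, and every function and file name is hypothetical:

```python
import json
from pathlib import Path

# Hypothetical mapping of single-action inputs to tags (see the earlier sketch).
INPUT_TO_TAG = {("hotkey", "Ctrl+E"): "Email", ("icon", "envelope"): "Email"}

def store_tag(media_path: str, tag: str) -> None:
    """Store a tag as metadata associated with an item of digital media.

    This sketch persists tags in a sidecar file (e.g. photo.jpg.tags.json);
    an implementation could instead embed the tag in the media file itself.
    """
    sidecar = Path(media_path + ".tags.json")
    tags = json.loads(sidecar.read_text()) if sidecar.exists() else []
    if tag not in tags:
        tags.append(tag)
    sidecar.write_text(json.dumps(tags))

def on_single_action_input(kind: str, identifier: str, selected_media: str) -> None:
    """Steps 206 and 208: detect the input, look up its tag, and store it."""
    tag = INPUT_TO_TAG.get((kind, identifier))
    if tag is not None:
        store_tag(selected_media, tag)

# Example: a mouse click selecting the envelope icon tags the displayed photo.
on_single_action_input("icon", "envelope", "vacation_photo.jpg")
```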
The screen display 300 also includes a tag presentation area 304. The tags in the tag presentation area 304 may identify the subject matter of the presented image and/or may list actions to be performed with respect to the image. The tags may be derived from any number of inputs and/or sources. For example, a user may manually enter a tag in response to an image's display, or a tag may be created in response to user entry of a single-action user input. Alternatively, a tag may be communicated from a device (e.g., a digital camera) along with the presented image.
A tag icon area 306 is also included on the screen display 300. The tag icon area 306 includes four icons, and each of these icons is associated with a tag. Those skilled in the art will appreciate that any number of icons may be displayed in the tag icon area 306 and that these icons may have various associated tags. For example, an icon resembling an envelope resides in the tag icon area 306. This envelope icon may be associated with a tag containing the word “Email.” When a user desires to email the image presented in the image presentation area 302, the user may select the envelope icon to associate an “Email” tag with the presented image.
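As a rough illustration of this interaction (the widget toolkit, handler names and hot-key choice are assumptions made for illustration and do not describe the claimed interface), a clickable envelope icon and a hot key may both route to the same tagging handler:

```python
import tkinter as tk

selected_image = "beach_trip.jpg"   # the image shown in the presentation area

def apply_email_tag(event=None):
    # In a full system this would call the metadata-storage routine;
    # here it simply reports the association.
    print(f'"Email" tag associated with {selected_image}')

root = tk.Tk()
# An envelope-like icon in the tag icon area; clicking it is a single-action input.
tk.Button(root, text="\u2709 Email", command=apply_email_tag).pack()
# The same tag may also be bound to a hot key, e.g. Ctrl+E.
root.bind("<Control-e>", apply_email_tag)
root.mainloop()
```

In this manner, the single mouse click and the single keystroke combination are interchangeable single-action inputs for the same tag.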
The tag editor 308 may also allow the user to associate an icon from the tag icon area 306 with the “Beaches” tag. As illustrated by
Turning to
The screen display 400 also includes a tag presentation area 404 and a tag icon area 406. The tag presentation area 404 and the tag icon area 406 may be similar to the tag presentation area 304 and the tag icon area 306 of
A tree display area 408 is also included in the screen display 400. The tree display area 408 may include controls that allow a user to navigate among and organize their images. For example, the tree display area 408 includes a “Date Taken” entry. Upon user selection, this entry may be expanded to list the various dates on which photos were taken. When a date is selected, each of the photos taken on that day is displayed in the presentation area 402. Such tree interfaces are well known in the art.
One of the entries in the tree display area 408 is a “Tags” entry. When expanded, this entry provides various tag-related options. For example, the tree display area 408 may allow the user to create a new tag. The icons presented in the tag icon area 406 are also presented in the tree display area 408. When a user selects an icon from the tree, the images that have a tag associated with the selected icon are presented in the presentation area 402.
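This filtering behavior may be pictured, under the assumption of a simple in-memory index of images to tags (rather than any particular storage format), as follows:

```python
# Hypothetical index mapping each image to its set of tags.
image_tags = {
    "beach_trip.jpg": {"Beaches", "Email"},
    "birthday.jpg": {"John Smith"},
    "sunset.jpg": {"Beaches"},
}

def images_with_tag(tag: str) -> list[str]:
    """Return the images to show in the presentation area when a tag is selected."""
    return sorted(path for path, tags in image_tags.items() if tag in tags)

print(images_with_tag("Beaches"))   # ['beach_trip.jpg', 'sunset.jpg']
```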
The screen display 400 may allow the user to alter the tags of multiple images at the same time. For example, images having the “Email” tag may be presented in the presentation area 402. After emailing these images, the user may wish to delete the “Email” tag from each image, and the screen display 400 may provide a control allowing such removal from multiple images at the same time. Further, the user may wish to delete the “Email” tag from all images. As shown on
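Continuing under the same assumptions, the bulk-removal control might be sketched as shown below; a full implementation would also update the stored metadata of each affected file:

```python
# Hypothetical tag index (see the filtering sketch above).
image_tags = {
    "beach_trip.jpg": {"Beaches", "Email"},
    "birthday.jpg": {"Email"},
}

def remove_tag(tag: str, images: list[str] | None = None) -> None:
    """Remove a tag from the given images, or from all images when none are given."""
    targets = images if images is not None else list(image_tags)
    for path in targets:
        image_tags.get(path, set()).discard(tag)

remove_tag("Email")   # e.g. after the tagged images have been emailed
print(image_tags)     # the "Email" tag has been cleared from every image
```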
The system 500 also includes a user input interface 504. The user input interface 504 may be configured to receive single-action user inputs selecting one of the controls. For example, the user input interface 504 may receive a mouse click selecting an icon. As another example, the user input interface 504 may detect entry of a keystroke combination associated with a hotkey. As will be appreciated by those skilled in the art, any number of single-action user inputs may be entered by a user and received by the input interface 504.
The system 500 further includes a metadata control component 506. The metadata control component 506 may be configured to store tags as metadata with an identified item of digital media. The metadata control component 506 may determine whether one or more tags are associated with an input detected by the input interface 504. In one embodiment, a set of multiple tags may be associated with a single input. In this embodiment, the received single-action user input may indicate a user's desire to assign a set of multiple tags to an item of digital media.
If such tags are associated, the metadata control component 506 may incorporate the tag(s) into the media file as metadata, and the file may be stored in a data store. As will be appreciated by those skilled in the art, the metadata control component 506 may utilize any number of known data storage techniques to associate the metadata with the underlying media file. By storing tags as metadata, the tags will persist with the media, and any number of computer programs may use the tags when interacting with the media.
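One way to picture the metadata control component 506, offered as a sketch only (the class name, the mapping of a single input to a set of tags, and the sidecar persistence are illustrative assumptions), is as a small object that resolves a detected input to one or more tags and writes them alongside the media file:

```python
import json
from pathlib import Path

class MetadataControl:
    """Sketch of the metadata control component 506."""

    def __init__(self, input_to_tags: dict[tuple[str, str], set[str]]):
        # A single-action input may map to a set of multiple tags.
        self.input_to_tags = input_to_tags

    def handle_input(self, kind: str, identifier: str, media_path: str) -> None:
        """Determine whether tags are associated with the input and store them."""
        tags = self.input_to_tags.get((kind, identifier))
        if tags:
            self._store(media_path, tags)

    def _store(self, media_path: str, new_tags: set[str]) -> None:
        # Persist tags in a sidecar file; embedding them in the media file
        # itself is an equally valid storage technique.
        sidecar = Path(media_path + ".tags.json")
        existing = set(json.loads(sidecar.read_text())) if sidecar.exists() else set()
        sidecar.write_text(json.dumps(sorted(existing | new_tags)))

# A single icon selection assigns a set of multiple tags to the chosen image.
control = MetadataControl({("icon", "vacation"): {"Beaches", "Email"}})
control.handle_input("icon", "vacation", "beach_trip.jpg")
```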
Alternative embodiments and implementations of the present invention will become apparent to those skilled in the art to which it pertains upon review of the specification, including the drawing figures. Accordingly, the scope of the present invention is defined by the appended claims rather than the foregoing description.