The present invention relates to electronic devices that render digital images, and more particularly to a system and methods for tagging multiple digital images in a convenient and efficient manner to provide an improved organizational mechanism for a database of digital images.
Contemporary digital cameras typically include embedded digital photo album or digital photo management applications in addition to traditional image capture circuitry. Furthermore, as digital imaging circuitry has become less expensive, other portable devices, including mobile telephones, personal digital assistants (PDAs), and other mobile electronic devices, often include embedded image capture circuitry (e.g., digital cameras) and digital photo album or digital photo management applications in addition to traditional mobile telephony applications.
Popular digital photo management applications include several functions for organizing digital photographs. Tagging is one such function in which a user selects a digital photograph or portion thereof and associates a text item therewith. The text item is commonly referred to as a “text tag” and may provide an identification label for the digital image or a particular subject depicted within a digital image. Tags may be stored in a data file containing the digital image, including, for example, by incorporating the tag into the metadata of the image file. Additionally or alternatively, tags may be stored in a separate database which is linked to a database of corresponding digital images. A given digital photograph or image may contain multiple tags, and/or a tag may be associated with multiple digital images. Each tag may be associated with a distinct subject in a digital photograph, a subject may have multiple tags, and/or a given tag may be associated with multiple subjects whether within a single digital photograph or across multiple photographs.
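The many-to-many relation between tags and images described above may be sketched in code. The class and method names below are illustrative assumptions for this sketch, not part of any particular photo management application:

```python
from collections import defaultdict

class TagIndex:
    """Minimal sketch of a tag database kept separate from the image
    files. A given image may hold many tags, and a given tag may be
    linked to many images."""

    def __init__(self):
        self._tags_by_image = defaultdict(set)   # image id -> tags
        self._images_by_tag = defaultdict(set)   # tag -> image ids

    def tag(self, image_id, tag_text):
        # Record the association in both directions so lookups by
        # image or by tag are equally cheap.
        self._tags_by_image[image_id].add(tag_text)
        self._images_by_tag[tag_text].add(image_id)

    def images_with(self, tag_text):
        return sorted(self._images_by_tag[tag_text])

    def tags_of(self, image_id):
        return sorted(self._tags_by_image[image_id])
```

The same associations could equally be embedded in each image file's metadata; the separate index shown here simply makes the cross-image queries explicit.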
For example, suppose a digital photograph is taken which includes a subject person who is the user's father. A user may apply to the photograph one or more tags associated with the digital image such as “father”, “family”, and “vacation” (e.g., if the user's father was photographed while on vacation). The digital photograph may include other subject persons each associated with their own tags. For example, if the photograph also includes the user's brother, the photograph also may be tagged “brother”. Other photographs containing an image of the user's father may share tags with the first photograph, but lack other tags. For example, a photograph of the user's father taken at home may be tagged as “father” and “family”, but not “vacation”. As another example, a vacation photograph including only the user's mother also may be tagged “family” and “vacation”, but not “father”.
It will be appreciated, therefore, that a network of tags may be applied to a database of digital images to generate a comprehensive organizational structure of the database. In particular, the tagging of digital images has become a useful tool for organizing photographs of friends, family, objects, events, and other subject matter for posting on social networking sites accessible via the Internet or other communications networks, sharing with other electronic devices, printing and manipulating, and so on. Once the digital images in the database are fully associated with tags, they may be searched by conventional methods to access like photographs. In the example described above, a user who wishes to post vacation photographs on a social networking site may simply search a digital image database by the tag “vacation” to identify and access all the user's photographs of his vacation at once, which may then be posted on the social networking site. Similarly, should the user desire to access and/or post photographs of his mother, the user may search the database by the tag “mother”, and so on.
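The tag search described above reduces to a simple set-membership query. A minimal sketch, using hypothetical file names drawn from the father/family/vacation example:

```python
# Hypothetical in-memory database: image name -> set of tags,
# mirroring the father/family/vacation example above.
photos = {
    "beach.jpg":  {"father", "family", "vacation"},
    "home.jpg":   {"father", "family"},
    "mother.jpg": {"family", "vacation"},
}

def search(tags_wanted):
    """Return every image carrying all of the requested tags."""
    return sorted(name for name, tags in photos.items()
                  if tags_wanted <= tags)

print(search({"vacation"}))            # the two vacation photographs
print(search({"father", "vacation"}))  # only the father on vacation
```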
Despite the increased popularity and usage of tagging to organize digital photographs for manipulation, current systems for adding tags have proven deficient. One method of tagging is manual entry by the user. Manual tagging is time consuming and cumbersome if the database of digital images and contained subject matter is relatively large. In an attempt to reduce the effort associated with manual tagging, some tagging applications may maintain lists of most recent tags, commonly used tags, and the like from which a user may more readily select a tag. Even with such improvements, manual tagging still has proven cumbersome as to large numbers of digital images.
To overcome burdens associated with manual tagging, automatic tagging techniques have been developed which apply recognition algorithms to identify subject matter depicted in a database of digital images. In recognition algorithms, subject matter depicted in a digital image may be compared to a reference database of images in an attempt to identify the subject matter. Such recognition algorithms particularly have been applied to subject persons in the form of face recognition. Face recognition tagging, however, also has proven deficient. Face recognition accuracy remains limited, particularly as to a large reference database. There is a high potential that even modest “look-alikes” that share common overall features may be misidentified, and therefore mis-tagged. Mis-tagging, of course, would undermine the usefulness of any automatic tagging system. The accuracy of current automatic tagging systems diminishes further when such algorithms are applied to objects generally, for object recognition has proven difficult to perform accurately.
In addition, conventional manual and recognition tagging systems typically tag only one digital image at a time. As stated above, however, to provide a comprehensive organizational structure of a digital image database, it is often desirable for multiple digital images to share one or more common tags. Tagging each digital image individually is cumbersome and time consuming, even when using a recognition or other automatic tagging system.
Accordingly, there is a need in the art for an improved system and methods for the manipulation and organization of digital images (and portions thereof) that are rendered on an electronic device. In particular, there is a need in the art for an improved system and methods for text tagging multiple digital images at once with one or more common tags.
Therefore, a system for tagging multiple digital images includes an electronic device having a display for rendering a plurality of digital images. An interface in the electronic device receives an input of an area of interest within one of the rendered images, and receives a selection of images from among the rendered images to be associated with the area of interest. In one embodiment, the interface may be a touch screen interface or surface on the display, and the inputs of the area of interest and associated images selection may be provided by interacting with the touch screen surface with a stylus, finger, or other suitable input instrument. An input device in the electronic device receives a tag input based on the area of interest, which is then applied to the associated images. In one embodiment, the input device is a keypad that receives a manual input of tag text.
Alternatively, an automatic tagging operation may be performed. In automatic tagging, portions of the rendered images may be transmitted to a network tag generation server. The server may compare the image portions to a reference database of images to identify subject matter that is common to the image portions. The server may generate a plurality of suggested tags based on the common subject matter and transmit the suggested tags to the electronic device. The user may accept one of the suggested tags, and the accepted tag may be applied to each of the associated images.
Therefore, according to one aspect of the invention, an electronic device comprises a display for rendering a plurality of digital images. An interface receives an input of an area of interest within at least one of the plurality of rendered images, and receives a selection of images from among the plurality of rendered images to be associated with the area of interest. An input device receives an input of a tag based on the area of interest to be applied to the associated images, and a controller is configured to receive the tag input and to apply the tag to each of the associated images.
According to one embodiment of the electronic device, the input device is configured for receiving a manual input of the tag.
According to one embodiment of the electronic device, the controller is configured to extract an image portion from each of the associated images, the image portions containing common subject matter based on the area of interest. The electronic device comprises a communications circuit for transmitting the image portions to a tag generation server and for receiving a plurality of tag suggestions from the tag generation server based on the common subject matter. The input device receives an input of one of the suggested tags, and the controller is further configured to apply the accepted tag to each of the associated images.
According to one embodiment of the electronic device, each image portion comprises a thumbnail portion extracted from each respective associated image.
According to one embodiment of the electronic device, each image portion comprises an object print of the common subject matter.
According to one embodiment of the electronic device, the interface comprises a touch screen surface on the display, and the area of interest is inputted by drawing the area of interest on a portion of the touch screen surface within at least one of the rendered images.
According to one embodiment of the electronic device, the associated images are selected from among the plurality of rendered images by interacting with a portion of the touch screen surface within each of the images to be associated.
According to one embodiment of the electronic device, the electronic device further comprises a stylus for providing the inputs to the touch screen surface.
According to one embodiment of the electronic device, the interface comprises a touch screen surface on the display and the display has a display portion for displaying the inputted area of interest, and the associated images are selected by interacting with the touch screen surface to apply the displayed area of interest to each of the images to be associated.
According to one embodiment of the electronic device, a plurality of areas of interest are inputted for a respective plurality of rendered images, and the controller is configured to apply the tag to each image that is associated with at least one of the areas of interest.
According to one embodiment of the electronic device, at least a first tag and a second tag are applied to at least one of the associated images, wherein the first tag corresponds to a general image category and the second tag corresponds to a specific image category within the general image category.
According to another aspect of the invention, a tag generation server comprises a network interface for receiving a plurality of image portions from an electronic device, the image portions each being extracted from a plurality of respective associated images. A database comprises a plurality of reference images. A controller is configured to compare the received image portions to the reference images to identify subject matter common to the image portions, and configured to generate a plurality of tag suggestions based on the common subject matter to be applied to each of the associated images, wherein the tag suggestions are transmitted via the network interface to the electronic device.
According to one embodiment of the tag generation server, if the controller is unable to identify the common subject matter, the controller is configured to generate an inability to tag indication, wherein the inability to tag indication is transmitted via the network interface to the electronic device.
According to one embodiment of the tag generation server, the network interface receives from the electronic device a first group of image portions each being extracted from a first group of associated images, and a second group of image portions each being extracted from a second group of associated images. The controller is configured to compare the first group of image portions to the reference images to identify first subject matter common to the first group of image portions, and configured to generate a first plurality of tag suggestions based on the first common subject matter to be applied to each of the first associated images. The controller also is configured to compare the second group of image portions to the reference images to identify second subject matter common to the second group of image portions, and configured to generate a second plurality of tag suggestions based on the second common subject matter to be applied to each of the second associated images. The first and second plurality of tag suggestions are transmitted via the network interface to the electronic device.
According to one embodiment of the tag generation server, each reference image comprises an object print of a respective digital image.
According to another aspect of the invention, a method of tagging a plurality of digital images comprises the steps of rendering a plurality of digital images on a display, receiving an input of an area of interest within at least one of the plurality of digital images, receiving a selection of images from among the plurality of rendered images and associating the selected images with the area of interest, receiving an input of a tag to be applied to the associated images, and applying the inputted tag to each of the associated images.
According to one embodiment of the method, receiving the tag input comprises receiving a manual input of the tag.
According to one embodiment of the method, the method further comprises extracting an image portion from each of the associated images, the respective image portions containing common subject matter based on the area of interest, transmitting the image portions to a tag generation server, receiving a plurality of tag suggestions from the tag generation server based on the common subject matter, and applying at least one of the suggested tags to each of the associated images.
According to one embodiment of the method, the method further comprises receiving an input of a plurality of areas of interest for a respective plurality of rendered images, and applying the tag to each image that is associated with at least one of the areas of interest.
According to one embodiment of the method, the method further comprises applying at least a first tag and a second tag to at least one of the associated images, wherein the first tag corresponds to a general image category and the second tag corresponds to a specific image category within the general image category.
These and further features of the present invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the ways in which the principles of the invention may be employed, but it is understood that the invention is not limited correspondingly in scope. Rather, the invention includes all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
It should be emphasized that the terms “comprises” and “comprising,” when used in this specification, are taken to specify the presence of stated features, integers, steps or components but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Embodiments of the present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. It will be understood that the figures are not necessarily to scale.
In the illustrated embodiments, a digital image may be rendered and manipulated as part of the operation of a mobile telephone. It will be appreciated that aspects of the invention are not intended to be limited to the context of a mobile telephone and may relate to any type of appropriate electronic device, examples of which include a stand-alone digital camera, a media player, a gaming device, a laptop or desktop computer, or similar. For purposes of the description herein, the interchangeable terms “electronic equipment” and “electronic device” also may include portable radio communication equipment. The term “portable radio communication equipment,” which sometimes is referred to as a “mobile radio terminal,” includes all equipment such as mobile telephones, pagers, communicators, electronic organizers, personal digital assistants (PDAs), smartphones, and any communication apparatus or the like. All such devices may be operated in accordance with the principles described herein.
The electronic device 10 includes a display 22 for displaying information to a user regarding the various features and operating state of the mobile telephone 10. Display 22 also displays visual content received by the mobile telephone 10 and/or retrieved from a memory 90. As part of the present invention, display 22 may render and display digital images for tagging. In one embodiment, the display 22 may function as an electronic viewfinder for a camera assembly 12.
An input device is provided in the form of a keypad 24 including buttons 26, which provides for a variety of user input operations. For example, keypad 24/buttons 26 typically include alphanumeric keys for allowing entry of alphanumeric information such as telephone numbers, phone lists, contact information, notes, etc. In addition, keypad 24/buttons 26 typically includes special function keys such as a “send” key for initiating or answering a call, and others. The special function keys may also include various keys for navigation and selection operations to access menu information within the mobile telephone 10. As shown in
In one embodiment, digital images to be tagged in accordance with the principles described herein are taken with the camera assembly 12. It will be appreciated, however, that the digital images to be tagged as described herein need not come from the camera assembly 12. For example, digital images may be stored in and retrieved from the memory 90. In addition, digital images may be accessed from an external or network source via any conventional wired or wireless network interface. Accordingly, the precise source of the digital images to be tagged may vary.
Referring again to
Among their functions, to implement the features of the present invention, the control circuit 30 and/or processing device 92 may comprise a controller that may execute program code stored on a machine-readable medium embodied as tag generation application 38. Application 38 may be a stand-alone software application or form a part of a software application that carries out additional tasks related to the electronic device 10. It will be apparent to a person having ordinary skill in the art of computer programming, and specifically in application programming for mobile telephones, servers or other electronic devices, how to program an electronic device to operate and carry out logical functions associated with the application 38. Accordingly, details as to specific programming code have been left out for the sake of brevity. In addition, application 38 and its various components may be embodied as hardware modules, firmware, or combinations thereof, or in combination with software code. Also, while the code may be executed by control circuit 30 in accordance with exemplary embodiments, such controller functionality could also be carried out via dedicated hardware, firmware, software, or combinations thereof, without departing from the scope of the invention.
Application 38 may be employed to apply common text tags to multiple digital images in a more efficient manner as compared to conventional tagging systems.
The method may begin at step 100 at which a plurality of digital images are rendered. For example, multiple digital images may be rendered on display 22 of electronic device 10 by taking multiple images with the camera assembly 12, retrieving the images from a memory 90, accessing the images from an external or network source, or by any conventional means. At step 110, the electronic device may receive an input defining a particular area of interest within one of the rendered images. The inputted area of interest may define representative subject matter about which the desired tag may be based. At step 120, the electronic device may receive a user input selection of multiple images to be associated with each other as a group of images. At step 130, the electronic device may receive an input of a tag which may be based upon the area of interest as defined above. At step 140, the tag may be applied to each of the associated images.
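The flow of steps 100 through 140 may be sketched as follows. The callback parameters standing in for the display and input devices, and the dictionary representation of an image, are assumptions of this illustration:

```python
def tag_images(images, get_area_of_interest, get_selection, get_tag):
    """Sketch of steps 100-140 of the tagging method."""
    rendered = list(images)                  # step 100: render images
    area = get_area_of_interest(rendered)    # step 110: area of interest
    associated = get_selection(rendered)     # step 120: select a group
    tag = get_tag(area)                      # step 130: tag input
    for image in associated:                 # step 140: apply to all
        image.setdefault("tags", set()).add(tag)
    return associated
```

Because the callbacks are independent, reordering step 130 relative to the other steps (as discussed below) would not change the structure of the sketch.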
It will be appreciated that step 130 in particular (the input of the tag) may occur at any point within the tag generation process. For example, a tag input alternatively may be received by the electronic device at the outset of the method, after the images are rendered, after the area of interest is defined, or at any suitable time. In one embodiment, the multiple images may be stored or otherwise linked as an associated group of images, and tagged at some later time. In such an embodiment, the associated group of images may be shared or otherwise transmitted among various devices and/or image databases, with each corresponding user applying his or her own tag to the associated group of images.
As stated above,
An input of a tag may then be received based upon the thumbnail 18 of the area of interest 16. As seen in the lower-right sub-figure of
In this example, a user would have a variety of tagging options. For example, similar to the process of
In accordance with the above,
In this vein, tags may be applied to multiple images in a highly efficient manner. The system may operate in a "top-down" fashion: by selecting the tag Flower, images subsequently grouped under the more specific tags Daisy, Tulip, or Rose automatically would also be tagged Flower. The system also may operate in a "bottom-up" fashion: by defining an area of interest for the related but not identical subjects of Daisy, Tulip, and Rose, the system automatically may generate the tag Flower for the group in accordance with the tag tree. Similarly, in one embodiment only one Daisy-tagged image would need to be tagged Flower; by tagging that one image with the tag Flower, the tag Flower also may be applied automatically to every other Daisy-tagged image. As a result, common tagging of multiple images is streamlined substantially in a variety of ways.
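The "bottom-up" propagation through the tag tree can be sketched with a simple parent map. The Flower/Daisy hierarchy follows the example above; the dictionary layout is an assumption of this sketch:

```python
# Illustrative tag tree: child tag -> parent (more general) tag.
PARENT = {"Daisy": "Flower", "Tulip": "Flower", "Rose": "Flower"}

def expand(tag):
    """Walk up the tag tree so that tagging an image Daisy also
    implies the more general tag Flower."""
    tags = {tag}
    while tag in PARENT:
        tag = PARENT[tag]
        tags.add(tag)
    return tags

def apply_tag(database, image_id, tag):
    # Apply the specific tag plus every ancestor category tag.
    database.setdefault(image_id, set()).update(expand(tag))

db = {}
apply_tag(db, "img7", "Daisy")
# img7 now carries both the specific and the general category tag
```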
The various tags may be incorporated or otherwise associated with an image data file for each of the digital images. For example, the tags may be incorporated into the metadata for the image file, as is known in the art. Additionally or alternatively, the tags may be stored in a separate database having links to the associated image files. Tags may then be accessed and searched to provide an organizational structure to a database of stored images. For example, as shown in
In each of the above examples, the specific tag input was received by the electronic device by a manual entry inputted by the user with an input device such as a keypad. The tag was then applied automatically to an associated group of images. In other embodiments, the tag input itself may be received (step 130 of
Referring briefly back to
Referring to
Communications network 70 also may include a tag generation server 75 to perform operations associated with the present invention. Although depicted as a separate server, the tag generation server 75 or components thereof may be incorporated into one or more of the communications servers 72.
The method may begin at step 200 at which multiple digital images are rendered. At step 210, the electronic device may receive an input defining a particular area of interest within one of the rendered images. The inputted area of interest may define a representative image portion upon which the desired tag may be based. At step 220, the electronic device may receive a user input selection of multiple images to be associated with each other as a group of images. Note that steps 200, 210, and 220 are comparable to the steps 100, 110, and 120 of
At step 230, a portion of each associated image may be transmitted from the electronic device to an external or networked tag generation server, such as the tag generation server 75. In one embodiment, the image portions may comprise entire images. Referring briefly back to
In another embodiment, therefore, a partial image portion may be defined and extracted from each associated image. For example, a thumbnail image portion may be extracted from each associated image based on the point in the image in which a user touches the image with the stylus 14 on the touch screen surface 22a. As seen in
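Extracting a thumbnail portion around the touch point may be sketched as a clamped crop. The fixed patch size and the row-of-pixels image representation are assumptions for illustration:

```python
def extract_thumbnail(image, touch_x, touch_y, size=64):
    """Crop a size x size patch centred on the touch point.
    `image` is a list of pixel rows."""
    h, w = len(image), len(image[0])
    # Clamp the crop window so it stays inside the image bounds.
    left = max(0, min(touch_x - size // 2, w - size))
    top = max(0, min(touch_y - size // 2, h - size))
    return [row[left:left + size] for row in image[top:top + size]]
```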
As used herein, the term "object print" denotes a representation of an object depicted in the digital image that would occupy less storage capacity than the broader digital image itself. For example, the object print may be a mathematical description or model of an image or an object within the image based on image features sufficient to identify the object. The features may include, for example, object edges, colors, textures, rendered text, image miniatures (thumbnails), and/or others. Mathematical description and modeling of objects are known in the art and may be used in a variety of image manipulation applications. Object prints sometimes are referred to in the art as "feature vectors". By transmitting object prints to the tag generation server rather than the entire images, processing capacity may be used more efficiently.
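As a rough illustration of why an object print occupies less storage than the image itself, a crude color-histogram feature vector may be computed as follows. A real system would use richer features (edges, textures, text); this sketch only shows the compression from many pixels down to a short vector:

```python
def object_print(pixels, bins=4):
    """Compute a colour-histogram feature vector ('object print')
    from a list of (r, g, b) pixels. However many pixels come in,
    only bins * 3 numbers come out."""
    hist = [0] * (bins * 3)
    for r, g, b in pixels:
        hist[r * bins // 256] += 1              # red channel bins
        hist[bins + g * bins // 256] += 1       # green channel bins
        hist[2 * bins + b * bins // 256] += 1   # blue channel bins
    total = max(1, len(pixels))
    return [count / total for count in hist]    # normalised histogram
```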
As will be explained in more detail below, the tag generation server may analyze the transmitted image portions to determine a plurality of suggested common tags for the images. The tag generation server may generate a plurality of tag suggestions to enhance the probability that the subject will be identified, as compared to if only one tag suggestion were to be generated. Any number of tag suggestions may be generated. In one embodiment, five to ten tag suggestions may be generated. In addition, the tag suggestions may be ranked or sorted by the probability or degree of match to the subject matter to enhance the usefulness of the tag suggestions.
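Ranking the tag suggestions by degree of match may be sketched as follows. The inverse-distance similarity measure and the dictionary-based reference database are assumptions of this illustration, not requirements of the system:

```python
def suggest_tags(portion_print, reference_db, max_suggestions=10):
    """Rank candidate tags by similarity between the received object
    print and each reference print, best match first."""
    def similarity(a, b):
        # Simple inverse squared distance between feature vectors.
        return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

    scored = [(similarity(portion_print, ref), tag)
              for tag, ref in reference_db.items()]
    scored.sort(reverse=True)
    return [tag for _, tag in scored[:max_suggestions]]
```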
At step 240 of
For example,
A similar process may be applied to the digital images depicted in
The method may begin at step 300 at which the server receives from an electronic device a plurality of image portions, each extracted from a respective associated digital image rendered on the electronic device. As stated above, the image portions may be thumbnail portions extracted from the digital images, object prints of subject matter depicted in the images, or less preferably the entire images themselves. At step 310, the tag generation server may compare the received image portions to a database of reference images. Similar to the received image portions, the reference images may be entire digital images, but to preserve processing capacity, the reference images similarly may be thumbnail portions or object prints of subject matter extracted from broader digital images. At step 320, a determination may be made as to whether common subject matter in the received image portions can be identified based on the comparison with the reference image database. If so, at step 325 a plurality of tag suggestions may be generated based on the common subject matter, and at step 330 the plurality of tag suggestions may be transmitted to the electronic device. As stated above in connection with the mirror operations of the electronic device, a user may accept to apply one of the suggested tags or input a tag manually. Regardless, at step 333 the tag generation server may receive a transmission of information identifying the applied tag. At step 335, the tag generation server may update the reference database, so the applied tag may be used in subsequent automatic tagging operations.
If at step 320 common subject matter cannot be identified, at step 340 the tag generation server may generate an “Inability To Tag” indication, which may be transmitted to the electronic device at step 350. The user electronic device may then return to a manual tagging mode by which a manual input of a tag may be inputted in one of the ways described above. In such case, the tag generation server still may receive a transmission of information identifying the applied tag and update the reference database commensurately (steps 333 and 335).
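The server-side branch between generating tag suggestions (steps 325-330) and returning an "Inability To Tag" indication (steps 340-350) may be sketched as follows. The scoring function, the acceptance threshold, and the dictionary return format are assumptions of this illustration:

```python
def handle_tag_request(portions, reference_db, threshold=0.5):
    """Sketch of steps 300-350: find subject matter common to ALL
    received image portions, else report an inability to tag."""
    def score(portion, ref):
        # Inverse squared distance between feature vectors.
        return 1.0 / (1.0 + sum((x - y) ** 2
                                for x, y in zip(portion, ref)))

    candidates = []
    for tag, ref in reference_db.items():
        # A tag qualifies only if it matches every portion, i.e. the
        # subject matter is common to the whole associated group.
        worst = min(score(p, ref) for p in portions)
        if worst >= threshold:
            candidates.append((worst, tag))

    if not candidates:
        return {"status": "inability_to_tag"}          # step 340
    candidates.sort(reverse=True)
    return {"status": "ok",                             # step 325
            "suggestions": [tag for _, tag in candidates]}
```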
Automatic tagging with the tag generation server also may be employed to provide a plurality of tag suggestions, each pertaining to different subject matter. For example, the server may receive from the electronic device a first group of image portions extracted from a respective first group of associated images, and a second group of image portions extracted from a respective second group of associated images. The first and second groups of image portions each may be compared to the reference database to identify common subject matter for each group. A first plurality of tag suggestions may be generated for the first group of image portions, and a second plurality of tag suggestions may be generated for the second group of image portions. Furthermore, in the above examples, the subject matter of the images tended to be ordinary objects. Provided the reference database is sufficiently populated, tag suggestions may be generated even if a user does not know the precise subject matter depicted in the images being processed.
For example,
Similar to previous figures,
The image manipulations based on areas of interest 16a and 16b are distinguished in
Similarly, a user may employ the stylus 14 to select the second thumbnail 18b of the van. A user may then click or drag the thumbnail, thereby selecting one or more images 13b-f to be associated with the van. In
Methods comparable to those of
Tags, therefore, may be generated automatically for images depicting varying subjects, even when the user is unaware of the precise subject matter depicted in the digital images. The described system has advantages over conventional automatic tagging systems. The system described herein generates a plurality of image portions each containing specific subject matter for comparing to the reference images, as compared to a broad, non-specific single image typically processed in conventional systems. By comparing multiple and specific image portions to the reference images, the system described herein has increased accuracy as compared to conventional systems. Furthermore, in the above example tagging was performed automatically as to two groups of images. It will be appreciated that such tagging operation may be applied to any number of multiple groups of images (e.g., five, ten, twenty, or more).
In the previous examples, the tags essentially corresponded to the identity of the pertinent subject matter. Such need not be the case. For example, a user may not apply any tag at all. In such case, the electronic device may generate a tag. A device-generated tag may be a random number, thumbnail image, icon, or some other identifier. A user then may apply a device-generated tag to multiple images in one of the ways described above.
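A device-generated tag may be any opaque identifier. In this sketch a random UUID stands in for the random number, thumbnail, or icon mentioned above; the "auto-" prefix is an assumption of the illustration:

```python
import uuid

def device_generated_tag():
    """Generate an opaque tag when the user supplies none; the
    identifier carries no meaning but still groups the images."""
    return "auto-" + uuid.uuid4().hex[:8]
```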
A user also may define tags based on personal descriptions, feelings, attitude, characterization, or by any other user defined criteria.
Similar to previous figures,
The image manipulations based on thumbnails 18a and 18b are distinguished in
Similarly, a user may employ the stylus 14 to select the second thumbnail 18b. A user may then click or drag the thumbnail on the touch screen surface, thereby selecting one or more images 15b-f to be associated with the thumbnail 18b. In
As stated above, the various examples described herein are intended for illustrative purposes only. The precise form and content of the graphical user interface, databases, and digital images may be varied without departing from the scope of the invention.
It will be appreciated that the tagging systems and methods described herein have advantages over conventional tagging systems. The described system has enhanced accuracy and is more informative because tags may be based upon specific user-defined areas of interest within the digital images. Accordingly, there is no ambiguity as to which portion of an image should provide the basis for a tag.
Manual tagging is improved because a tag entered manually may be applied to sub-areas of numerous associated images. A user, therefore, need not tag each photograph individually. In this vein, by associating digital images with categorical tags of varying generality, a hierarchical organization of digital photographs may be readily produced. The hierarchical categorical tags also may be employed to simultaneously generate tags for a plurality of images within a given category. A user also may tag images based on characterization of content or other user-defined criteria, obviating the need for the user to know the specific identity of depicted subject matter.
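The hierarchical categorical tagging just described can be sketched as follows. This is a hedged illustration under assumed data structures: tags of varying generality (e.g., "father" under "family") form a parent/child hierarchy, and a categorical tag may be applied to every image already in that category in a single operation. The parent map, tag names, and batch tag are assumptions for illustration.

```python
# Hypothetical tag hierarchy: child tag -> more general parent tag (or None).
tag_parents = {"father": "family", "family": None, "vacation": None}

def expand_tags(tag):
    """Yield a tag together with all of its more general ancestors."""
    while tag is not None:
        yield tag
        tag = tag_parents.get(tag)

def tag_image(store, image, tag):
    """Applying a specific tag also applies its more general parents,
    producing the hierarchical organization described above."""
    store.setdefault(image, set()).update(expand_tags(tag))

def tag_category(store, category_tag, new_tag):
    """Simultaneously apply new_tag to every image within a given category."""
    for tags in store.values():
        if category_tag in tags:
            tags.add(new_tag)

store = {}
tag_image(store, "photo1.jpg", "father")       # gains "father" and "family"
tag_image(store, "photo2.jpg", "family")
tag_category(store, "family", "album")          # hypothetical batch categorical tag
```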
Automatic tagging also is improved as compared to conventional recognition tagging systems. The system described herein provides multiple image portions containing specific subject matter for comparison to the reference images, as opposed to the broad, non-specific single images typically processed in conventional systems. By comparing multiple image portions containing specific subject matter to the reference images, the system described herein achieves increased accuracy as compared to conventional recognition tagging systems. Accurate tags, therefore, may be generated automatically for images depicting varying subjects, even when the user is unaware of the precise subject matter being depicted.
Although the invention has been described with reference to digital photographs, the embodiments may be implemented with respect to other categories of digital images. For example, similar principles may be applied to a moving digital image or frames or portions thereof, a webpage downloaded from the Internet or other network, or any other digital image.
Referring again to
The mobile telephone 10 includes call circuitry that enables the mobile telephone 10 to establish a call and/or exchange signals with a called/calling device, typically another mobile telephone or landline telephone, or another electronic device. The mobile telephone 10 also may be configured to transmit, receive, and/or process data such as text messages (e.g., colloquially referred to by some as “an SMS,” which stands for short message service), electronic mail messages, multimedia messages (e.g., colloquially referred to by some as “an MMS,” which stands for multimedia messaging service), image files, video files, audio files, ring tones, streaming audio, streaming video, data feeds (including podcasts) and so forth. Processing such data may include storing the data in the memory 90, executing applications to allow user interaction with data, displaying video and/or image content associated with the data, outputting audio sounds associated with the data and so forth.
The mobile telephone 10 further includes a sound signal processing circuit 98 for processing audio signals transmitted by and received from the radio circuit 96. Coupled to the sound signal processing circuit 98 are a speaker 60 and a microphone 62 that enable a user to listen and speak via the mobile telephone 10, as is conventional (see also
The display 22 may be coupled to the control circuit 30 by a video processing circuit 64 that converts video data to a video signal used to drive the display. The video processing circuit 64 may include any appropriate buffers, decoders, video data processors and so forth. The video data may be generated by the control circuit 30, retrieved from a video file that is stored in the memory 90, derived from an incoming video data stream received by the radio circuit 96 or obtained by any other suitable method.
The mobile telephone 10 also may include a local wireless interface 69, such as an infrared transceiver, RF adapter, Bluetooth adapter, or similar component for establishing a wireless communication with an accessory, another mobile radio terminal, computer or another device. In embodiments of the present invention, the local wireless interface 69 may be employed as a communications circuit for short-range wireless transmission of images or image portions, tag suggestions, and/or related data among devices in relatively close proximity.
The mobile telephone 10 also may include an I/O interface 67 that permits connection to a variety of conventional I/O devices. One such device is a power charger that can be used to charge an internal power supply unit (PSU) 68. In embodiments of the present invention, the I/O interface 67 may be employed as a communication circuit for wired transmission of images or image portions, tag suggestions, and/or related data between devices sharing a wired connection.
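One possible payload for transmitting an image portion together with its tag suggestions over the wired or wireless links described above is sketched below. The field names and the JSON/base64 encoding are illustrative assumptions only; the invention does not prescribe a particular wire format.

```python
import base64
import json

def make_payload(image_bytes, tag_suggestions):
    """Bundle an image portion and its tag suggestions for transmission
    (hypothetical format; fields are illustrative)."""
    return json.dumps({
        "image_portion": base64.b64encode(image_bytes).decode("ascii"),
        "tag_suggestions": tag_suggestions,
    })

def read_payload(payload):
    """Recover the image portion and tag suggestions on the receiving device."""
    data = json.loads(payload)
    return base64.b64decode(data["image_portion"]), data["tag_suggestions"]

wire = make_payload(b"raw-image-portion-bytes", ["father", "family"])
img, tags = read_payload(wire)  # round-trips the portion and its suggestions
```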
Although the invention has been shown and described with respect to certain preferred embodiments, it is understood that equivalents and modifications will occur to others skilled in the art upon the reading and understanding of the specification. The present invention includes all such equivalents and modifications, and is limited only by the scope of the following claims.