This invention relates in general to digital image collections, and more particularly, to identifying popular landmarks in large digital image collections.
With the increased use of digital images, increased capacity and availability of digital storage media, and the interconnectivity offered by digital transmission media such as the Internet, ever larger corpora of digital images are accessible to an increasing number of people. Persons having a range of interests from various locations spread throughout the world take photographs of various subjects and can make those photographs available, for instance, on the Internet. For example, digital photographs of various landmarks and tourist sites from across the world may be taken by persons with different levels of skill in taking photographs and posted on the web. The photographs may show the same landmark from different perspectives and may be taken from the same or different distances.
To leverage the information contained in these large corpora of digital images, it is necessary that the corpora be organized. For example, at digital image web sites such as Google Photos or Picasa, starting at a high level menu, one may drill down to a detailed listing of subjects for which photographs are available. Alternatively, one may be able to search one or more sites that have digital photographs. Some tourist information websites, for example, have downloaded images of landmarks associated with published lists of popular tourist sites.
However, there is no known system that can automatically extract information such as the most popular tourist destinations from these large collections. As numerous new photographs are added to these digital image collections, it may not be feasible for users to manually label the photographs in a complete and consistent manner that will increase the usefulness of those digital image collections. What is needed, therefore, are systems and methods that can automatically identify and label popular landmarks in large digital image collections.
In one embodiment the present invention is a method for populating and updating a database of images of landmarks including geo-clustering geo-tagged images according to geographic proximity to generate one or more geo-clusters, and visual-clustering the one or more geo-clusters according to image similarity to generate one or more visual clusters.
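The two-stage pipeline described above (geo-clustering by geographic proximity, then visual clustering by image similarity) can be illustrated in miniature. The sketch below shows only the geographic stage, using a simple greedy centroid scheme chosen purely for illustration; the field names (`coords`, `members`) are hypothetical, and the patent does not prescribe any particular clustering algorithm.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def geo_cluster(images, radius_km=1.0):
    """Greedily group geo-tagged images: an image joins the first cluster
    whose centroid lies within radius_km, otherwise it starts a new cluster."""
    clusters = []  # each cluster: {"centroid": (lat, lon), "members": [...]}
    for img in images:
        for c in clusters:
            if haversine_km(img["coords"], c["centroid"]) <= radius_km:
                c["members"].append(img)
                # recompute the centroid as the mean of member coordinates
                lats = [m["coords"][0] for m in c["members"]]
                lons = [m["coords"][1] for m in c["members"]]
                c["centroid"] = (sum(lats) / len(lats), sum(lons) / len(lons))
                break
        else:
            clusters.append({"centroid": img["coords"], "members": [img]})
    return clusters

photos = [
    {"id": "a", "coords": (48.8584, 2.2945)},    # near the Eiffel Tower
    {"id": "b", "coords": (48.8590, 2.2950)},    # a few dozen meters away
    {"id": "c", "coords": (40.6892, -74.0445)},  # Statue of Liberty
]
print(len(geo_cluster(photos)))  # → 2
```

The visual stage would then subdivide each geo-cluster by image similarity, since a single geographic neighborhood may contain several distinct landmarks.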
In another embodiment, the present invention is a system for identifying landmarks from digital images, including the following components: a database of geo-tagged images; a landmark database; a geo-clustering module in communication with said database of geo-tagged images, wherein the geo-tagged images are grouped into one or more geo-clusters; and a visual clustering module in communication with said geo-clustering module, wherein the one or more geo-clusters are grouped into one or more visual clusters, and wherein visual cluster data is stored in the landmark database.
In a further embodiment the present invention is a method of enhancing user queries to retrieve images of landmarks, including the stages of receiving a user query; identifying one or more trigger words in the user query; selecting one or more corresponding tags from a landmark database corresponding to the one or more trigger words; and supplementing the user query with the one or more corresponding tags, generating a supplemented user query.
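The query-supplementation stages just listed can be sketched with a toy example. The `LANDMARK_TAGS` table below is a hypothetical stand-in for tag lookups against the landmark database, not the patent's actual implementation.

```python
# Hypothetical trigger-word → tag table standing in for the landmark database.
LANDMARK_TAGS = {
    "eiffel": ["Eiffel Tower", "Paris", "Champ de Mars"],
    "liberty": ["Statue of Liberty", "New York", "Liberty Island"],
}

def supplement_query(query, tag_table):
    """Identify trigger words in the user query and append the
    corresponding landmark tags, producing a supplemented query."""
    extra = []
    for word in query.lower().split():
        for tag in tag_table.get(word, []):
            # skip tags already present in the query or already appended
            if tag.lower() not in query.lower() and tag not in extra:
                extra.append(tag)
    return query if not extra else query + " " + " ".join(extra)

print(supplement_query("eiffel at night", LANDMARK_TAGS))
# → eiffel at night Eiffel Tower Paris Champ de Mars
```

The supplemented query can then be forwarded to an ordinary image search backend, which needs no knowledge of the landmark database.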
In yet another embodiment the present invention is a method of automatically tagging a new digital image, including the stages of: comparing the new digital image to images in a landmark image database, wherein the landmark image database comprises visual clusters of images of one or more landmarks; and tagging the new digital image with at least one tag based on at least one of said visual clusters.
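A minimal sketch of the auto-tagging idea follows, assuming image features can be reduced to sets of hashable descriptors; this deliberately simplifies matching against the visual clusters' image and feature templates, and all names and the threshold value are hypothetical.

```python
def auto_tag(new_features, clusters, min_matches=3):
    """Tag a new image with the tags of the best-matching visual cluster.
    Features are modeled as sets of hashable descriptors; a production
    system would match local image features against cluster templates."""
    best, best_score = None, 0
    for cluster in clusters:
        score = len(new_features & cluster["template"])
        if score > best_score:
            best, best_score = cluster, score
    if best is not None and best_score >= min_matches:
        return list(best["tags"])
    return []  # no confident match: leave the image untagged

clusters = [
    {"tags": ["Eiffel Tower"], "template": {1, 2, 3, 4, 5}},
    {"tags": ["Golden Gate Bridge"], "template": {10, 11, 12, 13}},
]
print(auto_tag({2, 3, 4, 99}, clusters))  # → ['Eiffel Tower']
```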
Reference will be made to the embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those skilled in the art with access to the teachings herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the invention would be of significant utility.
The present invention includes methods and systems for automatically identifying and classifying objects in digital images. For example, embodiments of the present invention may identify, classify, and prioritize the most popular tourist landmarks based on digital image collections that are accessible on the Internet. The methods and systems of the present invention can enable the efficient maintenance of an up-to-date list and collections of images for the most popular tourist locations, where the popularity of a tourist location can be approximated by the number of images of that location posted on the Internet by users.
A popular landmark recognition system 100 according to an embodiment of the present invention is shown in
The landmark database system 120 may include a landmark database 121 and associated indexes 122. The landmark database system 120 may be co-located on the same processing platform as module 101 or may be separately located. The landmark database 121 may include a collection of landmarks recognized by the system 100. The information stored for each landmark in landmark database 121 may include images or a list of images of the landmark, image and feature templates, and metadata from the images including geo-coordinates, time, and user information. The landmark database 121 may also contain the visual clustering and geo-clustering data required for the processing in processing module 101. The indexes 122 may include indexing that arranges the landmarks in landmark database 121 in order of one or more of, for example and without limitation, popularity, geographic region, time, or other user-defined criteria, such as subject of interest. The link 141 may be any one or a combination of interconnection mechanisms including, for example and without limitation, Peripheral Component Interconnect (PCI) bus, IEEE 1394 FireWire interface, Ethernet interface, or an IEEE 802.11 interface.
A user interface 130 allows a user or other external entity to interact with the processing system 101, the landmark database system 120, and the geo-tagged image corpus 110. The user interface 130 may be connected to other entities of the system 100 using any one or a combination of interconnection mechanisms including, for example and without limitation, PCI bus, IEEE 1394 Firewire interface, Ethernet interface, or an IEEE 802.11 interface. One or more of a graphical user interface, a web interface, and application programming interface may be included in user interface 130.
The geo-tagged image corpus 110 may include one or more digital geo-tagged image corpora distributed across one or more networks. A person skilled in the art will understand that the corpus 110 may also be implemented as a collection of links to accessible geo-tagged image collections that are distributed throughout a network. The corpus 110 may also be implemented by making copies (for example, downloading and storing in local storage) of all or some images available in distributed locations. In some embodiments, a part of the geo-tagged image corpus may exist on the same processing platform as the processing system 101 and/or landmark database system 120. The different collections of geo-tagged images that constitute the geo-tagged image corpus 110 may be interconnected through the Internet, an intra-network or other form of inter-network. The processing system 101 takes as input images made available from the geo-tagged image corpus. In some embodiments, the images from the distributed image collections may be converted to a standard graphic format such as GIF, either upon being stored in corpus 110 or before being input to processing module 101. Embodiments may also require that other forms of standardization or processing, such as reduction or enhancement of resolution, be performed on images either upon being stored in corpus 110 or before being input to processing module 101. The corpus 110 may be connected to other components of the system by links 142 and 143 using any one or a combination of interconnection mechanisms including, for example and without limitation, PCI bus, IEEE 1394 FireWire interface, Ethernet interface, or an IEEE 802.11 interface.
In some embodiments, visual cluster information including the associated images and/or references to associated images may be stored in a database such as landmark database 121. The images and/or the visual clusters stored in landmark database 121 may be accessible using one or more indexes 122 that allow access to stored visual clusters based on configurable criteria including popularity. For example, the stored visual clusters may be processed by a popularity module 104 that updates an index 122 to allow access in order of the number of unique users that have submitted images to each cluster.
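The popularity ordering described above might be sketched as follows, assuming each cluster records its submitting users; the field names here are hypothetical illustrations, not the patent's data model.

```python
def popularity_index(visual_clusters):
    """Order visual clusters by the number of unique users who submitted
    images to each cluster (descending), as a proxy for popularity."""
    def unique_users(cluster):
        return len({img["user"] for img in cluster["images"]})
    return sorted(visual_clusters, key=unique_users, reverse=True)

clusters = [
    {"name": "obscure", "images": [{"user": "u1"}, {"user": "u1"}]},
    {"name": "popular", "images": [{"user": "u1"}, {"user": "u2"}, {"user": "u3"}]},
]
print([c["name"] for c in popularity_index(clusters)])  # → ['popular', 'obscure']
```

Counting unique users rather than raw image counts keeps one prolific uploader from inflating a cluster's apparent popularity.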
In some embodiments, selected visual clusters may be subjected to review by a user and/or may be further processed by a computer program. For example, optionally, visual clusters satisfying specified criteria, such as having less than a predetermined number of images, may be subjected to review by a user. A user may modify one or more visual clusters by actions including deleting an image, adding an image, or re-assigning an image to another cluster. A user may also specify new tag information or modify existing tag information. A person skilled in the art will understand that processing the visual clusters according to external data received from a user or a computer program may require the system to perform additional functions to maintain the consistency of the geo-cluster and visual cluster information stored in the database system 120.
In the geo-cluster validation stage 302, each one of the geo-clusters generated in the create geo clustering stage 301 may be validated based on selected criteria. For example, in one embodiment of the present invention, the goal may be to ensure that each geo-cluster selected for further processing reasonably includes a tourist landmark, i.e., a popular landmark. Accordingly, a validation criterion may be to further process only geo-clusters having images from more unique users than a predetermined threshold. A validation criterion such as requiring that at least a predetermined number of unique users have submitted images of the same landmark is likely to filter out images of other buildings, structures and monuments, parks, mountains, landscapes, etc., that have little popular appeal. For example, an enthusiastic homeowner posting pictures of his newly built house of no popular appeal is unlikely to post a number of images of his house that is substantial when compared to the number of images of any popular landmark posted by all users of Internet digital image collection sites. In one embodiment, the threshold may be set per season and/or per geographic area. In other embodiments, the threshold may be derived by first analyzing the geo-clusters for the distribution of unique users. In yet other embodiments, the threshold may be set for each type of landmark. The foregoing descriptions of means for setting the threshold are only for illustration. A person skilled in the art will understand that there are many other means through which the geo-clusters can be validated according to the focus of each use.
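The unique-user validation criterion can be sketched as a simple filter; the threshold value and field names below are hypothetical, and as the text notes, other validation criteria may be substituted.

```python
def validate_geo_clusters(geo_clusters, min_unique_users=3):
    """Retain only geo-clusters whose images come from at least
    min_unique_users distinct users, filtering out low-appeal subjects
    (e.g. one user's photos of a private house) before visual clustering."""
    valid = []
    for cluster in geo_clusters:
        users = {img["user"] for img in cluster["members"]}
        if len(users) >= min_unique_users:
            valid.append(cluster)
    return valid

geo_clusters = [
    {"name": "home", "members": [{"user": "u1"}, {"user": "u1"}]},
    {"name": "tower", "members": [{"user": "u1"}, {"user": "u2"}, {"user": "u3"}]},
]
print([c["name"] for c in validate_geo_clusters(geo_clusters)])  # → ['tower']
```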
In stage 503, based on the index and the matches generated in stages 501-502, a match-region graph is generated. In the match-region graph, a node is an image, and the links between nodes indicate relationships between images. For example, a pair of images that match according to stage 502 would have a link between them. The match-region graph is used, in stage 504, to generate the visual clusters. Briefly, a visual cluster is a connected sub-tree in the match-region graph, after the weak links are pruned based on additional processing in stage 504. Where images are matched based on image or feature templates, weak links may be links with fewer than a threshold number of matching features. Some embodiments may consider links that do not match a specified set of features as weak links. Text label agreement, where available, between images in a cluster may be another criterion. Also, the number of images in a cluster may be considered when pruning weak links so as to minimize clusters with very few images. A person skilled in the art will understand that pruning weak links may be based on a variety of criteria, in addition to those described here. Lastly, the visual cluster data is saved in stage 505. The visual clusters may be saved to the landmark database 121. Along with the images and the object information of each visual cluster, other pertinent data including but not limited to, one or more text labels descriptive of the cluster, and one or more images particularly representative of the cluster, may be saved. A text label descriptive of the visual cluster may be generated, for example, by merging text labels of each constituent image of that cluster. One or more images particularly representative of a visual cluster may be useful to display in an index, for example, of popular tourist landmarks.
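The pruning and clustering of stages 503-504 can be sketched as follows, using only the "fewer than a threshold of matching features" criterion for weak links; a real system would combine the additional criteria described above, and the edge representation here is a hypothetical simplification.

```python
from collections import defaultdict

def prune_and_cluster(nodes, edges, min_matches=8):
    """Prune weak links (fewer than min_matches matching features) from a
    match-region graph, then return its connected components as clusters."""
    adj = defaultdict(set)
    for a, b, n_matches in edges:
        if n_matches >= min_matches:  # keep only strong links
            adj[a].add(b)
            adj[b].add(a)
    seen, clusters = set(), []
    for start in nodes:               # depth-first traversal per component
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

nodes = ["A", "B", "C", "D"]
edges = [("A", "B", 20), ("B", "C", 15), ("C", "D", 3)]  # (img1, img2, matches)
print(prune_and_cluster(nodes, edges, min_matches=8))
# two clusters: {A, B, C} and {D}, since the C-D link is pruned as weak
```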
In another embodiment of the present invention, user verification of the generated visual clusters is implemented.
Returning to
In another embodiment of the present invention, the landmark database is grown incrementally.
The system 100, having a landmark database 121, may enable many applications. For example, the landmark database 121 may be used to supplement user queries in order to make the queries more focused.
Another application, in one embodiment of the present invention, is shown in
In stage 1118, a determination is made as to whether there are more visual clusters to be displayed corresponding to the selected landmark. If no more visual clusters are to be displayed for the selected landmark, then in stage 1120, information about the landmark is displayed. For example, information such as the name and location of the landmark, popularity, number of images, etc. can be displayed. For each landmark displayed in stage 1120, a corresponding user input graphic may also be displayed and enabled for user input. For example, in
In an embodiment of the present invention, the system and components of the present invention described herein are implemented using well known computers. Such a computer can be any commercially available and well known computer capable of performing the functions described herein, such as computers available from International Business Machines, Apple, Silicon Graphics Inc., Sun, HP, Dell, Compaq, Digital, Cray, etc.
Any apparatus or manufacture comprising a computer usable or readable medium having control logic (software) stored therein is referred to herein as a computer program product or program storage device. This includes, but is not limited to, a computer, a main memory, a hard disk, or a removable storage unit. Such computer program products, having control logic stored therein that, when executed by one or more data processing devices, cause such data processing devices to operate as described herein, represent embodiments of the invention.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application is a continuation of, and claims priority to, U.S. Pat. No. 9,483,500 filed Apr. 6, 2015, which is a continuation of, and claims priority to, U.S. Pat. No. 9,014,511 filed Sep. 14, 2012, which is a divisional of, and claims priority to, U.S. Pat. No. 8,676,001 filed May 12, 2008, the entire contents of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6580811 | Maurer et al. | Jun 2003 | B2 |
6711293 | Lowe | Mar 2004 | B1 |
7340458 | Vaithilingam et al. | Mar 2008 | B2 |
7353114 | Rohlf et al. | Apr 2008 | B1 |
7702185 | Keating et al. | Apr 2010 | B2 |
7840558 | Wiseman et al. | Nov 2010 | B2 |
7870227 | Patel et al. | Jan 2011 | B2 |
8027832 | Ramsey et al. | Sep 2011 | B2 |
8037011 | Gadanho et al. | Oct 2011 | B2 |
8396287 | Adam | Mar 2013 | B2 |
9014511 | Brucher et al. | Apr 2015 | B2 |
9020247 | Adam | Apr 2015 | B2 |
20040064334 | Nye | Apr 2004 | A1 |
20050021202 | Russell et al. | Jan 2005 | A1 |
20050027492 | Taylor et al. | Feb 2005 | A1 |
20050036712 | Wada | Feb 2005 | A1 |
20060015496 | Keating et al. | Jan 2006 | A1 |
20060015497 | Keating et al. | Jan 2006 | A1 |
20060020597 | Keating et al. | Jan 2006 | A1 |
20060095521 | Patinkin | May 2006 | A1 |
20060095540 | Anderson et al. | May 2006 | A1 |
20060242139 | Butterfield et al. | Oct 2006 | A1 |
20070110316 | Ohashi | May 2007 | A1 |
20070115373 | Gallagher et al. | May 2007 | A1 |
20070154115 | Yoo | Jul 2007 | A1 |
20070174269 | Jing et al. | Jul 2007 | A1 |
20070208776 | Perry et al. | Sep 2007 | A1 |
20080005091 | Lawler et al. | Jan 2008 | A1 |
20080010262 | Frank | Jan 2008 | A1 |
20080080745 | Vanhoucke et al. | Apr 2008 | A1 |
20080086686 | Jing et al. | Apr 2008 | A1 |
20080104040 | Ramakrishna | May 2008 | A1 |
20080118160 | Fan et al. | May 2008 | A1 |
20080140644 | Franks et al. | Jun 2008 | A1 |
20080162469 | Terayoko | Jul 2008 | A1 |
20080268876 | Gelfand et al. | Oct 2008 | A1 |
20080292186 | Hamamura | Nov 2008 | A1 |
20080310759 | Liu et al. | Dec 2008 | A1 |
20080320036 | Winter | Dec 2008 | A1 |
20090049408 | Naaman et al. | Feb 2009 | A1 |
20090143977 | Beletski et al. | Jun 2009 | A1 |
20090161962 | Gallagher | Jun 2009 | A1 |
20090171568 | McQuaide, Jr. | Jul 2009 | A1 |
20090216794 | Saptharishi | Aug 2009 | A1 |
20090279794 | Brucher et al. | Nov 2009 | A1 |
20090290812 | Naaman et al. | Nov 2009 | A1 |
20090292685 | Liu et al. | Nov 2009 | A1 |
20090297012 | Brett et al. | Dec 2009 | A1 |
20100076976 | Sotirov et al. | Mar 2010 | A1 |
20100205176 | Ji et al. | Aug 2010 | A1 |
20100250136 | Chen | Sep 2010 | A1 |
Number | Date | Country |
---|---|---|
101228785 | Jul 2008 | CN |
1921853 | May 2008 | EP |
1995168855 | Jul 1995 | JP |
10134042 | May 1998 | JP |
2011328194 | Nov 1999 | JP |
2000-259669 | Sep 2000 | JP |
2002259976 | Sep 2002 | JP |
2002010178 | Nov 2002 | JP |
2004021717 | Jan 2004 | JP |
2007507775 | Mar 2007 | JP |
2007142672 | Jun 2007 | JP |
2007197368 | Aug 2007 | JP |
2007316876 | Dec 2007 | JP |
2007334505 | Dec 2007 | JP |
200833399 | Feb 2008 | JP |
2008129942 | Jun 2008 | JP |
2008165303 | Jul 2008 | JP |
2009526302 | Jul 2009 | JP |
10-2006-0026924 | Mar 2006 | KR |
101579634 | Feb 2011 | KR |
2006055514 | May 2006 | WO |
2007013432 | Feb 2007 | WO |
2007094537 | Aug 2007 | WO |
WO 2008045704 | Apr 2008 | WO |
2008055120 | May 2008 | WO |
2008152805 | Dec 2008 | WO |
Entry |
---|
SIPO, “First Office Action in Chinese Application No. 201410455635.0”, dated Mar. 1, 2017. |
CNOA, “Second Office Action in Chinese Application No. 201410455635.0”, dated Sep. 18, 2017, 6 pages. |
“Examination Report for CA Application No. 2,762,090”, dated Apr. 10, 2017, 4 Pages. |
Batur et al., “Adaptive Active Appearance Models”, IEEE Transactions on Image Processing vol. 14, No. 11, Nov. 2005, pp. 1707-1721. |
SIPO, Notification for Patent Registration Formalities and Notification on the Grant of Patent Right for Invention (with English translations) for Chinese Patent Application No. 201410455635.0, Jan. 17, 2018, 4 pages. |
Toyama, et al., “Geographic location tags on digital images”, Nov. 2003, pp. 156-166. |
USPTO, “Preinterview First OA in U.S. Appl. No. 15/663,796”, dated Jan. 26, 2018. |
EPO, “Office Action in European Application No. 10724937.7”, dated Nov. 9, 2017, 5 Pages. |
USPTO, Final Office Action for U.S. Appl. No. 15/663,796, dated Oct. 5, 2018, 16 pages. |
Notice of Allowance mailed in U.S. Appl. No. 14/683,643, dated Mar. 7, 2017, 8 pages. |
JPO Office Action mailed in Japanese Application No. 2012-511045, dated Apr. 9, 2014. |
SIPO Office Action mailed in Chinese Patent Application No. 200980127106.5, dated Aug. 24, 2012. |
JPO Notice of Allowance mailed in Japanese Application No. 2012-511045, dated Dec. 12, 2014. |
JPO Office Action mailed in Japanese Application No. 2012-511045, dated Dec. 3, 2013. |
KIPO Office Action mailed in KR Patent Application No. 10-2010-7027837, dated Feb. 27, 2015. |
USPTO Non-Final Rejection mailed in U.S. Appl. No. 13/759,916, dated Jan. 15, 2014. |
SIPO Office Action mailed in Chinese Application No. 201080030849.3, dated Jan. 17, 2014. |
SIPO Office Action mailed in Chinese Patent Application No. 201080030849.3, dated Jan. 19, 2015. |
SIPO Office Action mailed in Chinese Patent Application No. 200980127106.5, dated Jan. 30, 2014. |
SIPO Office Action mailed in Chinese Patent Application No. 201080030849.3, dated Jul. 10, 2014. |
USPTO Non-Final Rejection mailed in U.S. Appl. No. 12/119,359, dated Jun. 17, 2011. |
USPTO Final Rejection mailed in U.S. Appl. No. 13/759,916, dated Jun. 24, 2014. |
USPTO Non-Final Rejection mailed in U.S. Appl. No. 12/119,359, dated Jun. 4, 2013. |
PCT International Search Report and Written Opinion mailed in PCT Application No. PCT/US2009/002916, dated Mar. 2, 2010. |
USPTO Final Rejection mailed in U.S. Appl. No. 13/619,652, dated Mar. 25, 2014. |
USPTO Non-Final Rejection mailed in U.S. Appl. No. 12/119,359, dated May 21, 2012. |
SIPO Office Action mailed in Chinese Patent Application No. 200980127106.5, dated May 24, 2013. |
USPTO Final Rejection mailed in U.S. Appl. No. 12/119,359, dated Nov. 10, 2011. |
USPTO Non-Final Rejection mailed in U.S. Appl. No. 13/619,652, dated Nov. 6, 2013. |
USPTO Final Rejection mailed in U.S. Appl. No. 12/119,359, dated Nov. 8, 2012. |
EPO Office Action mailed in EP Patent Application No. 10 724 937.7, dated Oct. 2, 2014. |
JPO Office Action mailed in Japanese Patent Application No. 2014-021923, dated Oct. 30, 2014. |
USPTO Non-Final Rejection mailed in U.S. Appl. No. 13/619,652, dated Sep. 2, 2014. |
PCT International Search Report and Written Opinion mailed in PCT Application No. PCT/US2010/034930, dated Sep. 7, 2010, 12 pages. |
“Canadian Office Action”, CA Application No. 2,762,090, dated May 2, 2016. |
“KIPO Notice of Preliminary Rejection”, Korean Application No. 10-2011-7029949, dated Dec. 18, 2015. |
“Non-Final Office Action”, U.S. Appl. No. 14/680,000, dated Mar. 31, 2016. |
“Non-Final Office Action in U.S. Appl. No. 14/683,643”, dated Oct. 6, 2016, 11 pp. |
“Notice of Acceptance”, in Australian Office Action No. 2010248862, dated May 30, 2016, 2 pages. |
Ahern, et al., “World Explorer: visualizing aggregate data from unstructured text in geo-referenced collections”, JCDL '07, Canada, Jun. 17-22, 2007. |
Batur, et al., “Adaptive active appearance models”, IEEE transactions on image processing, vol. 14, No. 11, Nov. 2005, pp. 1707-1721. |
Buddemeier, et al., “Clustering Images Using an Image Region Graph”, U.S. Appl. No. 12/183,613, Jul. 31, 2008. |
Buddemeier, et al., “Systems and Methods for Descriptor Vector Computation”, U.S. Appl. No. 12/049,841, Mar. 17, 2008. |
Gronau, et al., “Optimal Implementations of UPGMA and Other Common Clustering Algorithms”, Information Processing Letters, 2007. |
Kandel, et al., “Photospread: A Spreadsheet for Managing Photos”, ACM Proc. Chi., Apr. 5, 2008. |
Kennedy, et al., “Generating diverse and representative image search results for landmarks”, Proceeding of the 17th international conference on World Wide Web, Apr. 21-25, 2008, pp. 297-306. |
Kennedy, et al., “How Flickr Helps us Make Sense of the World: Context and Content in Community-Contributed Media Collections”, MM' 07, Augsburg, Bavaria, Germany, Sep. 23-28, 2007. |
Li, et al., “Modeling and Recognition of Landmark Image Collections Using Iconic Scene Graphs”, Proceedings of ECCV 2008, Lecture Notes in Computer Science, Springer, Oct. 12, 2008, pp. 427-440. |
Lindeberg, et al., “On Scale Selection for Differential Operators”, Proc. 8th Scandinavian Conference on Image Analysis, 1993. |
Lowe, et al., “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, Jan. 5, 2004, 28 pages. |
“Object Recognition from Local Scale-Invariant Features”, Proc. of the International Conference on Computer Vision, 1999, pp. 1150-1157. |
Maurer, et al., “Tracking and Learning Graphs of Image Sequences of Faces”, Proceedings of International Conference on Artificial Neural Networks at Bochum, 2006. |
Takeuchi, “Evaluation of Image-Based Landmark Recognition Techniques”, Technical Report CMU-RI-TR-98-20, Carnegie Mellon University, Jul. 1, 1998, 16 pages. |
Toyama, et al., “Geographic Location Tags on Digital Images”, ACM, Nov. 2003, pp. 156-166. |
Tsai, et al., “Extent: Inferring image metadata from context and content”, Proc. IEEE International Conference on Multimedia and Expo, 2005, pp. 1154-1157. |
Vu, et al., “Image Retrieval Based on Regions of Interest”, IEEE Transactions on Knowledge and Data Engineering, Jul. 2003. |
Yamada, et al., “A sightseeing contents delivery system”, Report of Technical Study by the Institute of Electronics, Information and Communication Engineers, Japan, 2005. |
USPTO, Notice of Allowance for U.S. Appl. No. 15/663,796 dated Jan. 23, 2019, 6 pages. |
Number | Date | Country | |
---|---|---|---|
20170024415 A1 | Jan 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12119359 | May 2008 | US |
Child | 13619652 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14680000 | Apr 2015 | US |
Child | 15284075 | US | |
Parent | 13619652 | Sep 2012 | US |
Child | 14680000 | US |