Image search engines, photograph-sharing websites and desktop photograph management tools have made it easy for computer users to access and collect large numbers of images. However, image collections are usually unorganized, which makes finding desired photographs, or obtaining a quick overview of a collection, very difficult. Techniques such as thumbnails, tapestries, collages and feature-based processing have been attempted to help users find desired images within a collection.
Many image collections are accompanied by rich text information, and such image collections are referred to herein as tagged image collections. For example, images may be titled, tagged, and/or annotated by users. Images from web pages are often associated with accompanying text, which may be used for image indices in most existing image search engines. The text information usually reflects the semantic content of images, whereby users may obtain a semantic overview of a collection, but often contains significant “noise” that leads to undesirable results.
Any improvements in technology with respect to visual-based and textual-based techniques that help users deal with large image collections are thus desirable.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards a technology by which input, comprising images with associated text labels (e.g., a tagged image collection), is processed into a visual summary and a textual summary. The processing is based upon finding relationships within the images, finding relationships within the words that are in the text labels, and finding relationships between images and words. These relationships are used to produce the visual and textual summaries.
In one aspect, the relationships within the images comprise image similarity values between pairs of images. Similarly, the relationships within the words are word similarity values between pairs of words. The similarity values between images are used to compute the homogeneous messages to be propagated within images. The similarity values between words are used in the same way. Heterogeneous messages are computed based on the relationships (affinities) between images and words. A heterogeneous affinity propagation scheme is used to propagate the homogeneous messages within images and words and propagate heterogeneous messages across images and words. The visual and textual summaries are based upon the aggregation of the messages each image and word receives.
Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.
The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards computing visual and textual summaries for a tagged image collection that help characterize the content of the collection. In general, instead of computing such visual and textual summaries separately, they are computed together. To this end, there is described a scalar message propagation scheme over images and words, referred to as heterogeneous affinity propagation, to simultaneously find visual and textual exemplars. As will be understood, this is beneficial in that the technology integrates visual exemplar finding and textual exemplar finding, whereby each benefits from the other by taking advantage of the homogeneous relationships within images and texts as well as the heterogeneous relationships between them, and by operating on both together (instead of, for example, performing a clustering procedure followed by a post-process to find centers).
While some of the examples herein are directed towards visually and textually summarizing live image search results, such as to deliver both visual and textual summaries for live image search results using both visual content and surrounding text, it should be understood that any of the examples described herein are non-limiting examples. Any tagged image collection may be processed, regardless of how the images are obtained and/or stored. Further, any description of how the visual and textual summaries are used in an application or the like is non-limiting, e.g., the summaries may be used to suggest images of interest for a user to find, to form a visual and textual cover for a group to help consumers decide if it is necessary to create a new group, to provide a quick visual and textual overview for image search results, and so forth.
As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing and image and text processing in general.
Turning to FIG. 1, in general, the image processing logic 104 includes an image/word processing mechanism 110 that maintains relationships between the words 112 (taken from the text) and the images 114, including via an image/word relation matrix 116 as described below. An image similarity processing mechanism 118 determines similarities between pairs of images, e.g., based upon each image's features, such as distances between feature vectors representing the images. These data may be maintained as an image-to-image matrix 120. In one implementation, the image similarity may be determined from various low-level features (e.g., color moment, SIFT) that are extracted to represent each image, with similarity evaluated as a negative Euclidean distance over the low-level features, and normalized to have mean −1 and variance 1.
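By way of example and not limitation, the following is a minimal sketch of computing such an image-to-image similarity matrix from pre-extracted low-level feature vectors; the function name and use of NumPy are illustrative assumptions, and feature extraction is assumed to occur elsewhere:

```python
# Sketch only: similarity as negative Euclidean distance over low-level
# features, then shifted/scaled to mean -1 and variance 1 as described.
import numpy as np

def image_similarity_matrix(features):
    """features: (n, d) array, one low-level feature vector per image."""
    diffs = features[:, None, :] - features[None, :, :]
    sim = -np.linalg.norm(diffs, axis=2)          # negative Euclidean distance
    off_diag = sim[~np.eye(len(sim), dtype=bool)]  # stats from off-diagonal
    return (sim - off_diag.mean()) / off_diag.std() - 1.0
```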
Similarly, a word similarity processing mechanism 122 determines similarities between pairs of words. These data may be maintained as a word-to-word matrix 124. In one implementation, for finding word similarity, the known WordNet® similarity concepts may be used, which in general comprise a variety of semantic similarity and relatedness measures based on a large lexical database of an appropriate language (e.g., English), also normalized to have mean −1 and variance 1.
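By way of example and not limitation, WordNet-based similarity might be computed as in the following sketch using NLTK's WordNet interface; the specific measure (path similarity over first synsets) is an assumption, since the text only says that WordNet similarity concepts are used:

```python
# Sketch only: WordNet path similarity between words, normalized to
# mean -1 and variance 1. The choice of measure is an assumption.
# (Requires the WordNet corpus, e.g., nltk.download('wordnet').)
import numpy as np
from nltk.corpus import wordnet as wn

def word_similarity_matrix(words):
    m = len(words)
    sim = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            si, sj = wn.synsets(words[i]), wn.synsets(words[j])
            val = si[0].path_similarity(sj[0]) if si and sj else None
            sim[i, j] = val if val is not None else 0.0
    mask = ~np.eye(m, dtype=bool)
    sim[mask] = (sim[mask] - sim[mask].mean()) / sim[mask].std() - 1.0
    return sim
```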
As described below, the various data (e.g., matrices) are processed by a heterogeneous and homogeneous affinity propagation mechanism (algorithm) 126. In general, the image-to-word relationships are heterogeneous relationships, while the image-to-image and word-to-word relationships are homogeneous relationships.
In general, the task of computing both visual and textual summaries is formulated as finding a compact set of image and word exemplars from an image collection and the associated text labels. For such a task in a tagged image collection of images and text labels, there are the three aforementioned relationships: two homogeneous relationships within images and texts, including image similarity and text similarity data, and one heterogeneous relationship between images and text labels, e.g., their association relationships. Described herein is finding both visual and textual exemplars together, including by processing the three relationships together to visually and textually summarize a tagged image collection.
Given a set of n images, $I=\{I_1, I_2, \ldots, I_n\}$, a set of corresponding texts, $T=\{T_1, T_2, \ldots, T_n\}$, and the set of words within each text, $T_k=\{W_1^k, W_2^k, \ldots, W_{m_k}^k\}$, the task is to find a compact set of exemplars from the images and from the words.
Consider that the set of words W comprises m words, i.e., $W=\{W_1, \ldots, W_m\}$; then the association relationships between the n images and the corresponding texts can be represented by the relation matrix 116 having dimension $n \times m$, $R=[r_{ij}]$, where $r_{ij}=1$ if $W_j \in T_i$, and $r_{ij}=0$ otherwise. This relation is between heterogeneous data, namely images and words.
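By way of example and not limitation, a minimal sketch of building such a relation matrix from tokenized texts follows; the variable names are illustrative:

```python
# Sketch only: r_ij = 1 if word W_j appears in text T_i, else 0.
import numpy as np

def relation_matrix(texts, vocab):
    """texts: list of n token sets; vocab: list of m words W."""
    R = np.zeros((len(texts), len(vocab)), dtype=int)
    for i, tokens in enumerate(texts):
        for j, word in enumerate(vocab):
            if word in tokens:
                R[i, j] = 1
    return R
```

For instance, relation_matrix([{"beach", "sunset"}, {"sunset"}], ["beach", "sunset"]) yields [[1, 1], [0, 1]].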
The homogeneous relationships for images, maintained in the image-to-image relation matrix 120 of dimension $n \times n$, $S_I=[s_I(l, k)]$, represent the pair-wise similarities within the images. The homogeneous relationships for words, maintained in the word-to-word relation matrix 124 of dimension $m \times m$, $S_W=[s_W(l, k)]$, represent the pair-wise similarities within the words.
These three relations (one heterogeneous, two homogeneous) are depicted in the accompanying figures.
A set of image exemplars $\bar{I}$ can be denoted as $\bar{I}=\{I_{c_1}, I_{c_2}, \ldots, I_{c_n}\}$, where $c_k \in \{1, 2, \ldots, n\}$ is the exemplar image index of image $I_k$, and $\mathbf{c}=[c_1\ c_2\ \ldots\ c_n]^T$ is referred to as a label vector 130. A set of word exemplars is denoted analogously by a label vector $\mathbf{b}=[b_1\ b_2\ \ldots\ b_m]^T$, where $b_j \in \{1, 2, \ldots, m\}$ is the exemplar word index of word $W_j$. Exemplar identification seeks label vectors that satisfy two properties.
The first property concerns the information within images and text, and exploits the homogeneous relations for exemplar identification. Given a valid configuration of a label vector $\mathbf{c}$, the “representativeness degree” to which image $I_{c_i}$ serves as the exemplar of image $I_i$ is measured by their similarity $s_I(i, c_i)$. The total representativeness degree is then an aggregation summed over the images:
$$E_I(\mathbf{c})=\sum_{i=1}^{n} s_I(i, c_i). \qquad (1)$$
Similarly, the representativeness degree for the words is defined as:
$$E_W(\mathbf{b})=\sum_{j=1}^{m} s_W(j, b_j). \qquad (2)$$
The second property essentially considers the effect of the heterogeneous relations between images and words on exemplar identification. This effect is formulated as a function over a pair comprising an image and a word, $(i, j)$, and their corresponding exemplars $(c_i, b_j)$: $e_{ij}(c_i, b_j)$. The overall effect function is written as follows:
$$R(\mathbf{c},\mathbf{b})=\sum_{(i,j)\in \mathcal{E}_R} e_{ij}(c_i, b_j), \qquad (3)$$
where $\mathcal{E}_R$ is the set of image-word pairs linked in the relation matrix R, and $e_{ij}(c_i, b_j)$ aims to set different weights according to whether $c_i$ is equal to $i$ and whether $b_j$ is equal to $j$.
In one implementation, for finding the heterogeneous relations, $p(i, j)=\theta_1/|\mathcal{E}_{i\cdot}^R|$ and $p(j, i)=\theta_2/|\mathcal{E}_{\cdot j}^R|$, where $|\mathcal{E}_{i\cdot}^R|$ and $|\mathcal{E}_{\cdot j}^R|$ are the numbers of heterogeneous edges incident to image $i$ and to word $j$, respectively, and $\theta_1$ and $\theta_2$ are fixed negative values (e.g., −8 was found suitable). This penalizes inconsistency in the exemplar selection of linked heterogeneous nodes.
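By way of example and not limitation, the following sketch computes these penalty terms, reading the edge counts as the row and column degrees of the relation matrix R; that reading and the helper name are illustrative assumptions:

```python
# Sketch only: theta_1 / |E_i.| per image and theta_2 / |E_.j| per word,
# where the edge counts are the row and column degrees of R.
import numpy as np

def heterogeneous_penalties(R, theta1=-8.0, theta2=-8.0):
    deg_image = R.sum(axis=1)   # number of words linked to each image
    deg_word = R.sum(axis=0)    # number of images linked to each word
    p_image = np.divide(theta1, deg_image, where=deg_image > 0,
                        out=np.zeros(len(deg_image)))
    p_word = np.divide(theta2, deg_word, where=deg_word > 0,
                       out=np.zeros(len(deg_word)))
    return p_image, p_word
```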
To make $\mathbf{c}$ and $\mathbf{b}$ valid configurations, two extra constraints are introduced. The constraint for $\mathbf{c}$ is defined as
$$V_I(\mathbf{c})=\sum_{k=1}^{n}\delta_k(\mathbf{c}), \qquad (4)$$
where $\delta_k(\mathbf{c})=-\infty$ if $c_k \neq k$ while some image $i$ has chosen $k$ as its exemplar (i.e., $c_i=k$), and $\delta_k(\mathbf{c})=0$ otherwise.
For $\mathbf{b}$, a similar validity constraint is defined as
$$V_W(\mathbf{b})=\sum_{k=1}^{m}\eta_k(\mathbf{b}), \qquad (5)$$
where $\eta_k(\cdot)$ is defined similarly to $\delta_k(\cdot)$. In summary, the overall objective function for visual and textual exemplar identification is written as follows:
$$\max_{\mathbf{c},\mathbf{b}}\ E_I(\mathbf{c}) + E_W(\mathbf{b}) + R(\mathbf{c},\mathbf{b}) + V_I(\mathbf{c}) + V_W(\mathbf{b}). \qquad (6)$$
This overall objective function of Equation (6) can be represented using a factor graph.
Turning to the scalar-valued affinity propagation algorithm, one derivation is based on the known max-sum algorithm over a factor graph, which transmits vector-valued messages between function nodes and variable nodes. The max-sum algorithm is an iterative algorithm that exchanges two kinds of messages, namely one from function nodes to variable nodes, and the other from variable nodes to function nodes. For the factor graph of Equation (6), these messages are as follows.
There are two messages exchanged between $c_i$ and $\delta_k$: the message $\rho_{i\to k}$, sent from $c_i$ to $\delta_k$, comprising $n$ real numbers, one for each possible value of $c_i$, and the message $\alpha_{i\leftarrow k}$, sent from $\delta_k$ to $c_i$, also comprising $n$ real numbers.
There are likewise two messages exchanged between $c_i$ and $e_{ij}$: the message $\pi_{i\to e_{ij}}$, sent from variable $c_i$ to $e_{ij}$, comprising $n$ real numbers, one for each possible value of $c_i$, and the message $\nu_{i\leftarrow e_{ij}}$, sent from $e_{ij}$ to $c_i$, also comprising $n$ real numbers.
These vector-valued messages can be reduced to scalar-valued messages, making the propagation much more efficient than vector messaging. The messages are additionally propagated between heterogeneous data, namely images and words.
The idea behind the derivation is to analyze the propagated messages in the two cases of whether $c_i$ takes the value $i$ or not.
Derivation. Let $\tilde{\rho}_{i\to k}(c_i)$ denote the message $\rho_{i\to k}(c_i)$ normalized by subtracting a reference value that is constant over the remaining cases, and define $\tilde{\alpha}_{i\leftarrow k}$, $\tilde{\nu}_{i\leftarrow e_{ij}}$ and $\tilde{\pi}_{i\to e_{ij}}$ analogously. Scalar update rules are then derived for $\tilde{\rho}_{i\to k}(c_i=k)$ and $\tilde{\alpha}_{i\leftarrow k}(c_i=k)$, and similarly for $\tilde{\nu}$ and $\tilde{\pi}$. To obtain the exemplar assignment after convergence, the incoming messages to $c_i$ are summed together and the maximizing value $\hat{c}_i$ is taken.
It can be observed that only the variables $\tilde{\rho}_{i\to k}(c_i)$ and $\tilde{\alpha}_{i\leftarrow k}(c_i)$ for $c_i=k$, and $\tilde{\nu}_{i\leftarrow e_{ij}}(c_i)$ and $\tilde{\pi}_{i\to e_{ij}}(c_i)$ for $c_i=i$, are involved in the message passing. Therefore, scalar-valued variables are defined: $r(i, k)=\tilde{\rho}_{i\to k}(c_i=k)$, $a(i, k)=\tilde{\alpha}_{i\leftarrow k}(c_i=k)$, $\nu(i, j)=\tilde{\nu}_{i\leftarrow e_{ij}}(c_i=i)$, and $w(i, j)=\tilde{\pi}_{i\to e_{ij}}(c_i=i)$.
Described herein is a heterogeneous affinity propagation (HAP) algorithm, which processes both image data and word data together and identifies their exemplars. By viewing each data point as a node in a network, the process recursively transmits scalar-valued messages along edges of the network until a good set of exemplars emerges. HAP differs from the known affinity propagation algorithm in that HAP transmits not only the messages within images and within words, called homogeneous messages, but also messages across images and words, called heterogeneous messages.
With respect to homogeneous message propagation, in one implementation there are two kinds of messages exchanged among image points. The “responsibility” r(i, k), sent from data point i to data point k, reflects how well k serves as the exemplar of i considering other potential exemplars for i. The “availability” a(i, k), sent from data point k to data point i, reflects how appropriately i chooses k as its exemplar considering other potential points that may choose k as their exemplar. The messages are updated in an iterative manner, per Equations (19) and (21).
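Because Equations (19) and (21) are not reproduced in this text, the following sketch shows the standard affinity propagation responsibility/availability updates (per Frey and Dueck), with a clearly marked, assumed hook where the absorbability contribution from the heterogeneous side could enter; the exact placement in Equation (19) may differ:

```python
# Sketch only: standard AP updates. v_in is the aggregated absorbability
# contribution per image; adding it to the self-similarity is an
# assumption, not the exact Equation (19). Damping omitted for brevity.
import numpy as np

def homogeneous_update(S, A, v_in=None):
    """S: similarity matrix; A: current availabilities; returns (R, A)."""
    n = S.shape[0]
    S_eff = S.copy()
    if v_in is not None:
        S_eff[np.arange(n), np.arange(n)] += v_in  # assumed placement of v
    # Responsibility: r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
    M = A + S_eff
    idx = M.argmax(axis=1)
    first = M[np.arange(n), idx]
    M[np.arange(n), idx] = -np.inf
    second = M.max(axis=1)
    R = S_eff - first[:, None]
    R[np.arange(n), idx] = S_eff[np.arange(n), idx] - second
    # Availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
    Rp = np.maximum(R, 0)
    np.fill_diagonal(Rp, R.diagonal())
    A_new = Rp.sum(axis=0)[None, :] - Rp
    self_avail = A_new.diagonal().copy()  # a(k,k) = sum_{i' != k} max(0, r(i',k))
    A_new = np.minimum(A_new, 0)
    np.fill_diagonal(A_new, self_avail)
    return R, A_new
```

In practice, affinity propagation implementations typically damp each message update (averaging the new value with the previous one) to avoid oscillation.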
By comparison, the above two messages are similar to the original affinity propagation, but the difference lies in the responsibility r(i, j), which involves the absorbability message v(i, j), described below. Similar messages are defined for words.
With respect to heterogeneous message propagation, in one implementation there are two kinds of messages exchanged between images and words. The “absorbability” v(i, j), sent from word j to image i, reflects how well image i serves as an exemplar considering whether word j is an exemplar. The “distributability” w(i, j), sent from image i to word j, reflects how well word j serves as an exemplar when image i is an exemplar, considering other words that are related with image i. This is akin to adjusting the confidence of being an exemplar by consulting the associated heterogeneous data. The two messages are likewise updated iteratively, per Equations (22) and (23).
A value referred to as “belief” represents the belief that image $i$ selects image $j$ as its exemplar, and is derived as the sum of the incoming messages:
$$t(i, j)=a(i, j)+r(i, j).$$
The exemplar of image $i$ is then taken as the linked candidate with the highest belief:
$$\hat{c}_i=\arg\max_{j\in\mathcal{E}_i} t(i, j), \qquad (25)$$
where $\mathcal{E}_i$ denotes the set of candidate exemplars linked to image $i$. Beliefs and exemplar assignments for words are computed in the same manner.
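Under the reconstruction above, exemplar extraction reduces to a per-row argmax over the belief values; a minimal sketch, with illustrative names and dense matrices assumed:

```python
# Sketch only: belief t(i,j) = a(i,j) + r(i,j); exemplar c_hat[i] is the
# argmax over candidates j, per Equation (25).
import numpy as np

def assign_exemplars(A, R):
    T = A + R                 # belief matrix t(i, j)
    return T.argmax(axis=1)   # c_hat[i] = argmax_j t(i, j)
```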
Note that one possible implementation of heterogeneous affinity propagation takes $O(n^3+m^3+mn(m+n))$ time per iteration. However, by reusing some computations, the algorithm can be made to take only $O(|\mathcal{E}_I|+|\mathcal{E}_W|+|\mathcal{E}_R|)$ per iteration.
When computing the responsibility message in Equation (19) and the distributability message in Equation (23), the maximum and next-to-maximum values of the terms being maximized over can be computed once and reused for every candidate, rather than being recomputed for each pair. When computing the remaining messages, the required sums can likewise be computed once and then adjusted for each individual element.
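By way of illustration, the following sketch shows the standard reuse trick for the maximizations: the maximum over all candidates except one can be read off from the per-row maximum and next-to-maximum values, computed once per row:

```python
# Sketch only: max over k' != k of M[i, k'], for every (i, k), using one
# O(n) pass per row: the answer is the row maximum everywhere except at
# the argmax column, where it is the next-to-maximum.
import numpy as np

def max_excluding_self(M):
    n, m = M.shape
    idx = M.argmax(axis=1)
    first = M[np.arange(n), idx]
    tmp = M.copy()
    tmp[np.arange(n), idx] = -np.inf
    second = tmp.max(axis=1)
    out = np.tile(first[:, None], (1, m))
    out[np.arange(n), idx] = second
    return out
```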
By way of summary, FIG. 6 shows example steps of the overall process.
Step 604 represents determining the pair-wise similarity between each of the images. Step 605 represents determining the pair-wise similarity between each of the words. Step 606 represents determining the pair-wise relationship between images and words.
Step 607 initializes the homogeneous messages and heterogeneous messages to zero. Step 608 computes homogeneous messages for images based upon image similarities and the heterogeneous messages from words to images, by Equations (19) and (21). Step 609 computes homogeneous messages for words based upon word similarities and the heterogeneous messages from images to words, by Equations (19) and (21).
Steps 610 and 611 compute the heterogeneous messages from images to words and the heterogeneous messages from words to images, by Equations (22) and (23), respectively. Step 612 computes the exemplar images and words by Equation (25). Via step 613, steps 608-612 are performed in an iterative manner; the iteration terminates when the exemplar images and words obtained in step 612 do not change (at least not substantially) from the previous iteration.
Step 614 outputs the obtained exemplars for the tagged image collection.
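By way of example and not limitation, the iteration and termination logic of steps 607 through 614 may be sketched as follows, where step() is an illustrative stand-in for one round of the message updates (steps 608-611) followed by exemplar computation (step 612):

```python
# Sketch only: iterate message updates until the exemplar assignment
# stops changing (step 613), then return it (step 614).
import numpy as np

def run_until_stable(step, max_iters=200):
    """step(): performs steps 608-611 and returns the step-612 exemplars."""
    exemplars, prev = None, None
    for _ in range(max_iters):
        exemplars = step()
        if prev is not None and np.array_equal(exemplars, prev):
            break  # step 613: assignment unchanged since last iteration
        prev = exemplars
    return exemplars  # step 614: the obtained exemplars
```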
Exemplary Operating Environment
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to FIG. 7, an exemplary system for implementing various aspects of the invention may include a general-purpose computing device in the form of a computer 710.
The computer 710 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 710 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 710. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer-readable media.
The system memory 730 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 731 and random access memory (RAM) 732. A basic input/output system 733 (BIOS), containing the basic routines that help to transfer information between elements within computer 710, such as during start-up, is typically stored in ROM 731. RAM 732 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 720.
The computer 710 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
The drives and their associated computer storage media, described above, provide storage of computer-readable instructions, data structures, program modules and other data for the computer 710.
The computer 710 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 780. The remote computer 780 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 710, although only a memory storage device 781 has been illustrated. The logical connections depicted include a local area network (LAN) 771 and a wide area network (WAN) 773, but may also include other networks.
When used in a LAN networking environment, the computer 710 is connected to the LAN 771 through a network interface or adapter 770. When used in a WAN networking environment, the computer 710 typically includes a modem 772 or other means for establishing communications over the WAN 773, such as the Internet. The modem 772, which may be internal or external, may be connected to the system bus 721 via the user input interface 760 or other appropriate mechanism. A wireless networking component 774, comprising, for example, an interface and antenna, may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN. In a networked environment, program modules depicted relative to the computer 710, or portions thereof, may be stored in the remote memory storage device.
An auxiliary subsystem 799 (e.g., for auxiliary display of content) may be connected via the user interface 760 to allow data such as program content, system status and event notifications to be provided to the user, even if the main portions of the computer system are in a low power state. The auxiliary subsystem 799 may be connected to the modem 772 and/or network interface 770 to allow communication between these systems while the main processing unit 720 is in a low power state.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
7184959 | Gibbon et al. | Feb 2007 | B2 |
7917755 | Giliyaru et al. | Mar 2011 | B1 |
20020065844 | Robinson et al. | May 2002 | A1 |
20070156669 | Marchisio et al. | Jul 2007 | A1 |
20070271297 | Jaffe et al. | Nov 2007 | A1 |
20080089561 | Zhang | Apr 2008 | A1 |
20080092054 | Bhumkar et al. | Apr 2008 | A1 |
20080133488 | Bandaru et al. | Jun 2008 | A1 |
20080154886 | Podowski et al. | Jun 2008 | A1 |
20080215561 | Yu et al. | Sep 2008 | A1 |
20080253656 | Schwartzberg et al. | Oct 2008 | A1 |
20080298766 | Wen et al. | Dec 2008 | A1 |
20100205202 | Yang et al. | Aug 2010 | A1 |
20100223299 | Yun, II | Sep 2010 | A1 |
Entry |
---|
Li, et al., "Modeling and Recognition of Landmark Image Collections Using Iconic Scene Graphs", Retrieved at <<http://www.cs.unc.edu/~lazebnik/publications/eccv08.pdf>>, pp. 1-14. |
Liao, et al., "LiveImage: Organizing Web Images by Relevant Concept", Retrieved at <<http://wkd.iis.sinica.edu.tw/~deanliao/doc/wsa.pdf>>, pp. 210-220. |
Datta, et al., "Studying Aesthetics in Photographic Images Using a Computational Approach", Retrieved at <<http://infolab.stanford.edu/~wangz/project/imsearch/Aesthetics/ECCV06/datta.pdf>>, pp. 14. |
Blei, et al., "Modeling Annotated Data", Retrieved at <<http://www.cs.princeton.edu/~blei/papers/BleiJordan2003.pdf>>, SIGIR '03, Jul. 28-Aug. 1, 2003, Toronto, Canada, pp. 8. |
Barnard, et al., "Learning the Semantics of Words and Pictures", Retrieved at <<http://www.wisdom.weizmann.ac.il/~vision/courses/2003_2/barnard00learning.pdf>>, pp. 8. |
"Picasa Web Albums", Retrieved at <<http://picasaweb.google.com/>>, p. 1. |
"CPAN", Retrieved at <<http://search.cpan.org/dist/>>, p. 1. |
"WordNet", Retrieved at <<http://wordnet.princeton.edu/>>, 2006, pp. 2. |
"Flickr", Retrieved at <<http://www.flickr.com/groups/>>, pp. 2. |
Clough, et al., "Automatically Organizing Images using Concept Hierarchies", Retrieved at <<http://www.dcs.gla.ac.uk/~hideo/pub/sigir05/mmir05.pdf>>, pp. 7. |
Dhillon, Inderjit S., "Co-Clustering Documents and Words using Bipartite Spectral Graph Partitioning", Retrieved at <<http://delivery.acm.org/10.1145/510000/502550/p269-dhillon.pdf?key1=502550&key2=9628160321&coll=GUIDE&dl=GUIDE&CFID=16610564&CFTOKEN=24160575>>, KDD '01, San Francisco, CA, USA, ACM 2001, pp. 269-274. |
Dhillon, et al., "Information-Theoretic Co-Clustering", Retrieved at <<http://www.cs.utexas.edu/users/inderjit/public_papers/kdd_cocluster.pdf>>, SIGKDD '03, Aug. 24-27, 2003, Washington, DC, USA, 2003 ACM, pp. 10. |
Frey, et al., "Clustering by Passing Messages between Data Points", Retrieved at <<http://www.psi.toronto.edu/affinitypropagation/FreyDueckScience07.pdf>>, vol. 315, Feb. 15, 2007, pp. 23. |
Gao, et al., "Web Image Clustering by Consistent Utilization of Visual Features and Surrounding Texts", Retrieved at <<http://delivery.acm.org/10.1145/1110000/1101167/p112-gao.pdf?key1=1101167&key2=2248160321&coll=GUIDE&dl=GUIDE&CFID=16610745&CFTOKEN=50715619>>, MM '05, Nov. 6-11, 2005, Singapore, ACM 2005, pp. 112-121. |
Jaffe, et al., "Generating Summaries for Large Collections of Geo-Referenced Photographs", Retrieved at <<http://delivery.acm.org/10.1145/1140000/1135911/p853-jaffe.pdf?key1=1135911&key2=4358160321&coll=GUIDE&dl=GUIDE&CFID=16610854&CFTOKEN=38950020>>, WWW 2006, May 23-26, 2006, Edinburgh, Scotland, ACM, pp. 853-854. |
Frey, et al., "Factor Graphs and the Sum-Product Algorithm", Retrieved at <<http://www.psi.toronto.edu/pubs/2001/frey2001factor.pdf>>, IEEE Transactions on Information Theory, vol. 47, No. 2, Feb. 2001, pp. 498-519. |
Raguram, et al., "Computing Iconic Summaries for General Visual Concepts", Retrieved at <<http://cs.unc.edu/~lazebnik/publications/iconic.pdf>>, pp. 1-8. |
Rege, et al., "Graph Theoretical Framework for Simultaneously Integrating Visual and Textual Features for Efficient Web Image Clustering", Retrieved at <<http://delivery.acm.org/10.1145/1370000/1367541/p317-rege.pdf?key1=1367541&key2=8968160321&coll=GUIDE&dl=GUIDE&CFID=16086296&CFTOKEN=55911022>>, WWW 2008 / Refereed Track: Rich Media, Apr. 21-25, 2008, Beijing, China, ACM, pp. 317-326. |
Rother, et al., "Digital Tapestry", Retrieved at <<http://www.cs.cmu.edu/~skumar/paper_DigitalTapestry_CVPR05.pdf>>, pp. 8. |
Schmitz, Patrick, "Inducing Ontology from Flickr Tags", Retrieved at <<http://www.semanticmetadata.net/hosted/taggingws-www2006-files/22.pdf>>, WWW 2006, May 22-26, 2006, Edinburgh, UK, pp. 4. |
Simon, et al., "Scene Summarization for Online Image Collections", Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4408863&isnumber=4408819>>, 2007 IEEE, pp. 8. |
Wang, et al., "Picture Collage", Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1640779&isnumber=34373>>, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), pp. 8. |
Number | Date | Country | |
---|---|---|---|
20100284625 A1 | Nov 2010 | US |