This application is a continuation-in-part of co-pending U.S. patent application Ser. No. 09/615,726, titled “A METHOD AND MECHANISM FOR THE CREATION, MAINTENANCE, AND COMPARISON OF SEMANTIC ABSTRACTS,” filed Jul. 13, 2000, which is a continuation-in-part of U.S. Pat. No. 6,108,619, titled “METHOD AND APPARATUS FOR SEMANTIC CHARACTERIZATION,” issued Aug. 22, 2000, and of co-pending U.S. patent application Ser. No. 09/512,963, titled “CONSTRUCTION, MANIPULATION, AND COMPARISON OF A MULTI-DIMENSIONAL SEMANTIC SPACE,” filed Feb. 25, 2000, all commonly assigned.
This invention pertains to determining the semantic content of documents via computer, and more particularly to comparing the semantic content of documents to determine similarity.
U.S. patent application Ser. No. 09/615,726, titled “A METHOD AND MECHANISM FOR THE CREATION, MAINTENANCE, AND COMPARISON OF SEMANTIC ABSTRACTS,” filed Jul. 13, 2000, referred to as “the Semantic Abstract application” and incorporated by reference herein, describes a method and apparatus for creating and using semantic abstracts for content streams and repositories. A semantic abstract as described in the Semantic Abstract application comprises a set of state vectors. Storing the semantic abstract therefore requires storing each vector in the set, consuming considerable storage space. Further, measuring the distance between a semantic abstract and a summary of a document uses the Hausdorff distance function, a complicated function that requires numerous intermediate calculations to produce a single distance.
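For illustration only, the cost of the Hausdorff comparison can be sketched as follows; the Euclidean metric and the sample vectors are assumptions for the sketch, not taken from the Semantic Abstract application:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two sets of state vectors.

    Every pairwise distance must be examined, which is what makes
    comparing multi-vector semantic abstracts expensive.
    """
    # d(a, B): distance from a point a to its nearest neighbor in B
    d_ab = max(min(np.linalg.norm(a - b) for b in B) for a in A)
    d_ba = max(min(np.linalg.norm(b - a) for a in A) for b in B)
    return max(d_ab, d_ba)

A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
B = [np.array([1.0, 1.0])]
# each point of A is exactly 1 away from (1, 1), so the distance is 1.0
```

With m vectors in one abstract and n in the other, all m·n pairwise distances are computed for a single comparison; this is the overhead the single-vector approach below avoids.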
The Semantic Abstract application discusses techniques for simplifying the semantic abstract (e.g., by generating a centroid vector). Such techniques have limitations, however; most notably that particular information can be lost.
Accordingly, a need remains for a way to construct a single vector that captures the meaning of a semantic context represented by a clump of vectors without losing any information about the semantic context.
The invention is a method and apparatus for constructing a single vector in a topological vector space that represents a semantic abstract for the semantic content of a document. The semantic content is constructed for the document on a computer system. From the semantic content, lexemes and lexeme phrases are identified. State vectors are constructed for the lexemes/lexeme phrases. The state vectors are superpositioned into a single vector, which forms the semantic abstract for the document.
The foregoing and other features, objects, and advantages of the invention will become more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
Computer system 105 further includes software 130.
Although the above description of software 130 creates a single vector from the state vectors in the topological vector space, the state vectors can instead be divided into groups, or clumps. This produces a minimal set of state vectors, rather than a single vector, and prevents distant lexemes/lexeme phrases from being superpositioned and losing too much context.
In the preferred embodiment, clumps are located by performing vector quantization, which determines a distance between each pair of state vectors; vectors sufficiently close to each other can then be clumped together. For example, a vector can be determined to be in a clump if its distance is no greater than a threshold distance to any other vector in the clump.
A person skilled in the art will recognize that other techniques can be used to locate clumps: for example, dividing the vectors into groups so that each vector in a particular group has an angle within a certain range. The remainder of this description assumes that the state vectors form only a single clump, so vector quantization is not required; a person skilled in the art will recognize how the invention can be modified when vector quantization is used.
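The threshold-based clumping described above can be sketched as follows. The greedy single-linkage strategy and the threshold value are assumptions for the sketch; the text does not prescribe a particular algorithm:

```python
import numpy as np

def clump(vectors, threshold):
    """Group state vectors into clumps: a vector joins the first clump
    containing at least one vector within `threshold` of it; otherwise
    it starts a new clump. (A vector close to two existing clumps is
    not merged across them in this simple sketch.)"""
    clumps = []
    for v in vectors:
        for c in clumps:
            if any(np.linalg.norm(v - w) <= threshold for w in c):
                c.append(v)
                break
        else:
            clumps.append([v])
    return clumps

vectors = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([5.0, 5.0])]
clumps = clump(vectors, threshold=1.0)
# two clumps: the two nearby vectors together, the distant one alone
```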
Although the document from which semantic content 135 is determined can be found stored on computer system 105, this is not required.
On the Meaning of the Meaning of Meaning
Recall the definition of a vector space. A nonempty set V is said to be a vector space over a field F if V is an abelian group under an operation denoted by +, and if for every α, β ∈ F and v, w ∈ V the following are satisfied:
α(v+w)=αv+αw
(α+β)v=αv+βv
α(βv)=(αβ)v
1v=v
where “1” represents the unit element of F under multiplication.
As shown in co-pending U.S. patent application Ser. No. 09/512,963, titled “CONSTRUCTION, MANIPULATION, AND COMPARISON OF A MULTI-DIMENSIONAL SEMANTIC SPACE,” filed Feb. 25, 2000, a set S of lexemes can be represented as a vector space. This representation is accomplished by introducing a topology τ on S that is compatible with the sense of the lexemes, building a directed set from the lexemes, and then (given the separation axioms) showing an explicit one-to-one, continuous, open mapping θ from S to a subspace of the Hilbert coordinate space—a de facto vector space. This open mapping θ is continuous and open with respect to τ, of course.
How is θ expressed? By the coordinate functions gk: S → I1. And how are the gk defined? By applying Urysohn's lemma to the kth chain of the directed set, where A={S−root}, B=
As is well known, functions that are nicely behaved can form a vector space, and it so happens that step functions are very well behaved indeed. Consider the vector space Q spanned by the coordinate functions gk, where q ∈ Q is of the form Σλkgk, λk ∈ ℝ (the real numbers). Define an inner product on Q, of the form <q1, q2>=∫q1·q2, where it is understood that we integrate over S in a topologically consistent manner.
Given an inner product space Q, Q is a function space. In fact, Q is the function space spanned by the functions gk. The functions gk are defined by their corresponding chains. In fact the kth chain uniquely identifies gk, so that {gk} is more than simply a spanning set; it is a basis of Q.
Having built the metric space Q in such a way as to entail the topology on S, the next step is to coherently leverage S into a metric space via Q's structure. With the two metrics (of S and Q) commensurable, the goal of quantifying the notion of near and far in S will be accomplished.
By definition, if V is a vector space then its dual space is Hom(V, F). Hom(V, F) is the set of all vector space homomorphisms of V into F, also known as the space of linear functionals. So, the dual of Q (i.e., Hom(Q, ℝ)) is a function space on a function space.
Now, consider that for any s ∈ S, the function εs associates the function gk with an element of the real field: εs(gk)=gk(s). A simple check shows linearity, i.e., εs(gk+gn)=(gk+gn)(s)=gk(s)+gn(s)=εs(gk)+εs(gn). The reader can similarly verify scalar multiplication. So, what does this show? It shows that every element of S corresponds to an element of the dual of Q. The notations εs(gk) and s(gk) are used interchangeably.
Now the notion of the dual of Q is “properly restricted” (limited to a proper subspace) to those linear functionals in the span of S: Σλksk, λk ∈ ℝ, where it is understood that (λisi+λjsj)gk=λisi(gk)+λjsj(gk). When properly restricted, it can be shown that Q and its dual are isomorphic. Indeed, for the finite-dimensional case it is very easy to prove that a vector space and its dual are isomorphic. So the dimension of the dual space of Q—i.e., the dimension of the space spanned by S in its new role as a set of linear functionals—is equal to the dimension of Q. And what does the linear functional s “look” like? Well, s is the linear functional that maps g1 to g1(s), g2 to g2(s), . . . and gk to gk(s). In other words, metrized s=(g1(s), g2(s), . . . gk(s), . . . ). This last expression is nothing more or less than the result of the Construction application. But notice: deriving the result this way requires constructing the dual of Q, characterized as Σλksk, λk ∈ ℝ, sk ∈ S. In other words, the expression (λisi+λjsj) now has meaning in a way that is consistent with the original topology τ defined on S. The last statement above is the keystone for much that is to be developed below.
The point of all this discussion is that simple algebraic operations on the elements of S, namely vector addition and scalar multiplication, can be confidently done.
On the Plausibility of the Norm ∥q∥=∫|q|
A general line of attack to show that the metrics of S and Q are commensurable is to look for a norm on Q: a norm defined by the notion of the integral ∫|q| with respect to the topology τ on S. To firm up this notion, consider the following points:
The linear form εs is often called the Dirac measure at the point s. Note that we have implicitly adopted the premise that S is locally compact.
Given a positive Radon measure μ on S, μ can be extended to the upper integral μ* for positive functions on S. This leads to the definition of a semi-norm for functions on S, which in turn leads to the space L1(S, μ) (by completing Q with respect to the semi-norm). The norm on L1(S, μ) then reflects back (via duality) into S as ∥s∥=lim ∫|q·χCk|.
Note that if Q is convex, then S spans a set that sits on the convex hull of Q, just as one would expect that the so-called “pure” states should.
The point of all this discussion is that simple algebraic operations on the elements of S that are metric preserving can now be confidently performed: namely vector addition and scalar multiplication.
On the Nature of the Elements of S
Consider the lexemes si=“mother” and sj=“father.” What is (si+sj)? And in what sense is this sum compatible with the original topology τ?
(si+sj) is a vector that is very nearly co-linear with sn=“parent,” and indeed “parent” is an element (of the dual of Q) that is entailed by both “mother” and “father.” One might say that sn carries the potential to be instantiated as either si or sj. Viewing the elements of S as state vectors, and adducing from this (and other examples), it becomes apparent that vector addition can be interpreted as corresponding to a superposition of states.
While the vector sum “mother”+“father” intuitively translates to the concept of “parent,” other vector sums are less intuitively meaningful. Nevertheless, vector summation still operates to combine the vectors. What is “human”+“bird”? How about “turtle”+“electorate”? Even though these vector sums do not translate to a known concept in the dictionary, if the object is to combine the indicated vectors, superposition operates correctly.
Consider the (preliminary) proposition that the sum of two state vectors corresponds to the superposition of the states of the addends. If state vector addition corresponds to superposition of states, the question then naturally comes to mind, “What happens when we superpose a state with itself?” Ockham's razor suggests that the result of such an operation should yield the same state. From this we conjecture that if a state vector corresponding to a state is multiplied by any non-zero scalar, the resulting state vector represents the same state. Put more succinctly, semantic state is entailed in the direction of the state vector.
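The conjecture that semantic state is entailed in the direction of the state vector can be checked numerically; the coordinates below are purely illustrative, not actual lexeme measurements:

```python
import numpy as np

def direction(v):
    """Unit vector in the direction of v; by the conjecture above,
    this captures the semantic state that v represents."""
    return v / np.linalg.norm(v)

state = np.array([0.9, 0.1, 0.3])   # hypothetical state vector
scaled = 2.5 * state                # same state, different magnitude
same = bool(np.allclose(direction(state), direction(scaled)))
# same is True: a positive scalar multiple points in the same
# direction and so represents the same semantic state
```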
Determining Semantic Abstracts
Now that superposition of state vectors has been shown to be feasible, one can construct semantic abstracts representing the content of the document as a vector within the topological vector space.
The state vectors in semantic content 305 are superpositioned to form the semantic abstract. By taking the vector sum of the collected state vectors (the state vectors within semantic content 305), a single state vector 310 can be calculated as the semantic abstract.
Unit circle 315 marks all the points in the topological vector space that are a unit distance from the origin of the topological vector space. (In higher dimensional topological vector spaces, unit circle 315 becomes a unit hyper-sphere.) State vector 310 can be normalized to a unit distance (i.e., the intersection of state vector 310 and unit circle 315). Normalizing state vector 310 takes advantage of the (above-discussed) fact that semantic state is indicated by vector direction, and can compensate for the size of semantic content 305 used to construct state vector 310. One way to normalize state vector 310 is to divide the vector by its length: that is, if v is a state vector, v/∥v∥ is the unit vector in the direction of v.
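A minimal sketch of this construction, assuming the collected state vectors are available as numeric arrays (the coordinates here are illustrative only):

```python
import numpy as np

def semantic_abstract(state_vectors):
    """Superposition (vector sum) of the collected state vectors,
    normalized to unit length so that only direction -- the semantic
    state -- remains."""
    total = np.sum(state_vectors, axis=0)
    return total / np.linalg.norm(total)

# hypothetical state vectors for lexemes in the semantic content
vectors = [np.array([0.8, 0.2]), np.array([0.6, 0.4]), np.array([0.7, 0.1])]
abstract = semantic_abstract(vectors)
# the abstract lies on the unit circle (a unit hyper-sphere in
# higher-dimensional topological vector spaces)
```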
Measuring Distance between State Vectors
As discussed above, semantic state is entailed by the direction of the state vector. This makes sense, as the vector sum of a state with itself should still be the same state. It therefore makes the most sense to measure the distance between semantic abstract state vectors as the angle between them, and in the preferred embodiment distance is measured in exactly this way.
Distance can also be measured as the distance between the heads of the state vectors. But recall that changing the lengths of two state vectors changes the distance between their heads. Since semantic state is entailed by the direction of the state vector, the state vectors can be normalized, without affecting their states, before measuring distance as the difference of the state vectors. Normalizing the state vectors allows a given distance between vectors to have a consistent meaning across different bases and state vectors.
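Measuring distance as the angle between two state vectors reduces to the arccosine of the normalized dot product, a standard computation not specific to the invention:

```python
import numpy as np

def angle_distance(u, v):
    """Angle between state vectors u and v, in radians. The result is
    unchanged if either vector is scaled by a positive scalar, which is
    consistent with direction entailing semantic state."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
# orthogonal vectors are pi/2 apart; v and 3*v are 0 apart
```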
Procedural Implementation
Note that steps 515 and 525 are both optional. For example, the state vectors do not have to be weighted. Weighting the state vectors makes it possible to reduce the influence of lexemes that are part of the semantic content but less significant to the document. And normalizing the single vector, although highly recommended, is not required, since distance can be measured by angle.
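The two optional steps can be sketched together; the weights and vectors below are hypothetical, and mapping steps 515/525 to the `weights` and `normalize` parameters is an assumption of this sketch:

```python
import numpy as np

def build_abstract(state_vectors, weights=None, normalize=True):
    """Superposition with optional weighting (cf. step 515) and
    optional normalization (cf. step 525); either step may be
    skipped, as both are optional."""
    if weights is None:
        weights = [1.0] * len(state_vectors)   # unweighted by default
    total = np.sum([w * v for w, v in zip(weights, state_vectors)], axis=0)
    if normalize:
        total = total / np.linalg.norm(total)
    return total

vectors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
weighted = build_abstract(vectors, weights=[0.9, 0.1])  # emphasize first lexeme
raw = build_abstract(vectors, normalize=False)          # plain vector sum
```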
The advantage of superpositioning the state vectors into a single vector is the reduction in the amount of storage required for the semantic abstract. Whereas in the Semantic Abstract application storing the semantic abstract requires storing several multi-dimensional state vectors, the invention requires storing only one multi-dimensional state vector. And, as shown above, because superpositioning the state vectors does not lose information, storing the single state vector is as complete as storing the individual state vectors before superposition.
The above-described embodiments of the invention can be implemented as software stored on a computer readable medium. The program can then be operated on a computer to execute the software.
Having illustrated and described the principles of our invention in a preferred embodiment thereof, it should be readily apparent to those skilled in the art that the invention can be modified in arrangement and detail without departing from such principles. We claim all modifications coming within the spirit and scope of the accompanying claims.
Number | Name | Date | Kind |
---|---|---|---|
5276677 | Ramamurthy et al. | Jan 1994 | A |
5278980 | Pedersen et al. | Jan 1994 | A |
5317507 | Gallant | May 1994 | A |
5325298 | Gallant | Jun 1994 | A |
5390281 | Luciw et al. | Feb 1995 | A |
5539841 | Huttenlocher et al. | Jul 1996 | A |
5551049 | Kaplan et al. | Aug 1996 | A |
5619709 | Caid et al. | Apr 1997 | A |
5675819 | Schuetze | Oct 1997 | A |
5694523 | Wical | Dec 1997 | A |
5696962 | Kupiec | Dec 1997 | A |
5708825 | Sotomayor | Jan 1998 | A |
5721897 | Rubinstein | Feb 1998 | A |
5778362 | Deerwester | Jul 1998 | A |
5778378 | Rubin | Jul 1998 | A |
5778397 | Kupiec et al. | Jul 1998 | A |
5794178 | Caid et al. | Aug 1998 | A |
5799276 | Komissarchik et al. | Aug 1998 | A |
5822731 | Schultz | Oct 1998 | A |
5832470 | Morita et al. | Nov 1998 | A |
5867799 | Lang et al. | Feb 1999 | A |
5873056 | Liddy et al. | Feb 1999 | A |
5934910 | Ho et al. | Aug 1999 | A |
5937400 | Au | Aug 1999 | A |
5940821 | Wical | Aug 1999 | A |
5963965 | Vogel | Oct 1999 | A |
5966686 | Heidorn et al. | Oct 1999 | A |
5970490 | Morgenstern | Oct 1999 | A |
5974412 | Hazlehurst et al. | Oct 1999 | A |
5991713 | Unger et al. | Nov 1999 | A |
6006221 | Liddy | Dec 1999 | A |
6009418 | Cooper | Dec 1999 | A |
6078953 | Vaid et al. | Jun 2000 | A |
6085201 | Tso | Jul 2000 | A |
6097697 | Yao et al. | Aug 2000 | A |
6105044 | De Rose et al. | Aug 2000 | A |
6108619 | Carter et al. | Aug 2000 | A |
6122628 | Castelli | Sep 2000 | A |
6173261 | Arai et al. | Jan 2001 | B1 |
6205456 | Nakao | Mar 2001 | B1 |
6289353 | Hazlehurst et al. | Sep 2001 | B1 |
6295533 | Cohen | Sep 2001 | B2 |
6297824 | Hearst et al. | Oct 2001 | B1 |
6311194 | Sheth et al. | Oct 2001 | B1 |
6317708 | Witbrock et al. | Nov 2001 | B1 |
6356864 | Foltz et al. | Mar 2002 | B1 |
6363378 | Conklin et al. | Mar 2002 | B1 |
6459809 | Jensen et al. | Oct 2002 | B1 |
6470307 | Turney | Oct 2002 | B1 |
6493663 | Ueda | Dec 2002 | B1 |
6513031 | Fries et al. | Jan 2003 | B1 |
6523026 | Gillis | Feb 2003 | B1 |
6615208 | Behrens et al. | Sep 2003 | B1 |
6675159 | Lin et al. | Jan 2004 | B1 |