The present invention relates to a method of producing a visual summarization of text documents. More specifically, the present invention relates to a visual text analysis tool to aid users in analyzing a large collection of text documents. Locating critical information in a large collection of text documents, or gleaning useful insights from such a collection, can be time consuming and laborious. An example of a large collection of text documents is a collection of emails.
To help cope with large collections of text documents, a number of tools have been developed to facilitate analysis. While many tools allow a user to run a simple text search through such a collection of documents, such a text search is of limited value in identifying, for example, patterns in the appearance of particular terms. Further, data visualization tools have also been developed to facilitate data analysis, but none facilitates a comprehensive analysis that utilizes both metadata and document content.
The existing techniques are inadequate to support the complex analyses required by many real-world applications. An example of a real-world application is a document review process that occurs during an investigation or discovery phase of litigation, where a reviewer may wish to analyze a large collection of documents to quickly and easily identify documents relevant to a particular issue or topic. The existing techniques are not practical because they do not provide the in-depth analysis required or are too time consuming.
Embodiments of the present invention provide a text analysis tool that integrates an interactive visualization with text summarization to visually summarize a collection of text documents.
In particular, an embodiment of the invention provides a method of producing a visual text summarization. A plurality of topics may be extracted from a collection of text documents, where each of the topics comprises a distribution of topic keywords. An importance ranking for each of the topics may be determined, and an importance ranking for each of the topic keywords of a topic may also be determined. A graph may be displayed having a plurality of stacked layers representing a group of topics selected from the plurality of topics based on the importance ranking of the topics. A keyword cloud within each layer of the graph may also be displayed, where each keyword cloud is a group of topic keywords selected from the extracted topic keywords based on the importance ranking of the extracted topic keywords.
Another embodiment of the invention provides a system for producing a visual text summarization. A data storage device may store a collection of text documents. A text document pre-processor module may extract content and metadata from the collection of text documents. A topic summarizer module may extract a set of topics, associated probabilistic distributions and topic keywords from the content and metadata of the collection of text documents. A text visualizer module may transform the set of topics, the associated probabilistic distributions and the topic keywords into a visual text summarization. Lastly, a display device may display the visual text summarization.
Another embodiment of the invention provides a computer readable article of manufacture tangibly embodying computer readable instructions which, when executed, cause a computer to carry out the steps of the method described above.
Embodiments of the present invention provide a text analysis tool that integrates an interactive visualization with text summarization to visually summarize a collection of text documents. In embodiments, the text summarization may involve extracting topics and topic keywords from a collection of text documents, with a topic representing the thematic content common to the collection of text documents and each topic being characterized by a distribution over a set of topic keywords. Each keyword may have a probability that measures the likelihood of the keyword appearing in the related topic. The interactive visualization may be a visual text summarization, particularly, a stacked graph wherein each layer represents a different topic.
As shown in
In the preferred embodiment of the invention shown in
The preferred embodiment of the invention shown in
The LDA model is an unsupervised probabilistic topic modeling technique that automatically infers hidden variables reflecting the thematic structure of a collection of text documents. It is a word-based generative model that treats each document as a finite mixture over an underlying set of hidden themes. Moreover, each theme has a specific distribution of words. Given a collection of documents, statistical inference techniques are used to invert the generative process and derive the latent themes in the corpus as well as the document-specific topic mixtures.
More specifically, the corpus is denoted as $D = \{d_1, d_2, \ldots, d_M\}$, where $d_m$ is a document and $M$ is the corpus size. Each document has a sequence of words $W_m = \{w_{m,1}, w_{m,2}, \ldots, w_{m,N_m}\}$, where $N_m$ is the number of words in the document. The dictionary is denoted as $\mathcal{V} = \{v_1, v_2, \ldots, v_V\}$, where $V$ is the vocabulary size. Moreover, $z$ is a latent variable representing the latent topic associated with each observed word. We denote $Z_m = \{z_{m,1}, z_{m,2}, \ldots, z_{m,N_m}\}$ as the topic sequence associated with the word sequence $W_m$.
The generative procedure of LDA can be formally defined as:
1. For all the topics $k \in [1, K]$: choose a word distribution $\varphi_k \sim \mathrm{Dir}(\varphi \mid \beta)$.
2. For each document $m \in [1, M]$:
2.1. Choose the document length $N_m \sim \mathrm{Poisson}(\xi)$.
2.2. Choose a topic distribution $\theta_m \sim \mathrm{Dir}(\theta \mid \alpha)$.
2.3. For all the words $w_{m,n}$, where $n \in [1, N_m]$, in document $d_m$: choose a topic $z_{m,n} \sim \mathrm{Mult}(\theta_m)$, and then choose the word $w_{m,n} \sim \mathrm{Mult}(\varphi_{z_{m,n}})$.
Assuming $\varphi_k = (\varphi_{k,1}, \varphi_{k,2}, \ldots, \varphi_{k,V})^T \in \mathbb{R}^V$, where $\varphi_{k,i} = p(w = v_i \mid z = k)$, the parameters for the topic mixtures of LDA can be represented as $\Phi = (\varphi_1, \varphi_2, \ldots, \varphi_K)^T \in \mathbb{R}^{K \times V}$, where $K$ is the number of topics. Similarly, if we denote $\theta_m = (\theta_{m,1}, \theta_{m,2}, \ldots, \theta_{m,K})^T \in \mathbb{R}^K$, where $\theta_{m,k} = p(z = k \mid d_m)$, then the parameters for the document mixtures of LDA are $\Theta = (\theta_1, \theta_2, \ldots, \theta_M)^T \in \mathbb{R}^{M \times K}$.
Inferring a topic model given a set of training documents mainly involves estimating the document-topic distribution Θ and the topic-word distribution Φ. Since maximum a posteriori (MAP) estimation is intractable due to the coupling between the model parameters and the hyperparameters, approximations such as collapsed Gibbs sampling and variational techniques are often used.
An embodiment of the invention uses Gibbs sampling, which is a Markov chain Monte Carlo (MCMC) algorithm. In each iteration of Gibbs sampling, the embodiment of the invention samples a topic assignment for each word in each document based on topic-word co-occurrences and document-topic co-occurrences. Finally, after the result converges, the embodiment of the invention estimates the expected posteriors of the Dirichlet distributions for Θ and Φ using the following formulas:
where $u_{k,v_i}$ represents the count of topic $k$ being assigned to word $v_i$ and $u_{d_m,k}$ represents the count of topic $k$ being assigned to any word in document $d_m$.
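By way of illustration only, the following Python sketch shows the standard smoothed (posterior-mean) estimators for Θ and Φ computed from collapsed-Gibbs count matrices of this kind. The array layout, function name, and the symmetric hyperparameters alpha and beta are illustrative assumptions, not taken verbatim from the formulas referenced above.

```python
import numpy as np

def estimate_lda_parameters(topic_word_counts, doc_topic_counts, alpha=0.1, beta=0.01):
    """Estimate Theta (document-topic) and Phi (topic-word) from Gibbs-sampling
    count matrices, using the usual posterior-mean (smoothed) estimators.

    topic_word_counts : (K, V) array, u[k, i] = times word v_i was assigned topic k
    doc_topic_counts  : (M, K) array, u[m, k] = times topic k was assigned in doc d_m
    """
    u_kw = np.asarray(topic_word_counts, dtype=float)
    u_dk = np.asarray(doc_topic_counts, dtype=float)
    K, V = u_kw.shape

    # phi_{k,i} = (u_{k,v_i} + beta) / (sum_i u_{k,v_i} + V * beta)
    phi = (u_kw + beta) / (u_kw.sum(axis=1, keepdims=True) + V * beta)
    # theta_{m,k} = (u_{d_m,k} + alpha) / (sum_k u_{d_m,k} + K * alpha)
    theta = (u_dk + alpha) / (u_dk.sum(axis=1, keepdims=True) + K * alpha)
    return theta, phi

# Toy example: 2 topics, 3 words, 2 documents.
theta, phi = estimate_lda_parameters(np.array([[5, 1, 0], [0, 2, 4]]),
                                     np.array([[6, 0], [0, 6]]))
print(theta)  # each row sums to 1
print(phi)    # each row sums to 1
```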
Although LDA is effective in discovering latent topics, the native order of the topics and topic keywords resulting from LDA is not ideal for direct user consumption. The LDA-derived topics and topic keywords are statistically inferred, and some topics and topic keywords having a high statistical order may not be considered important by a user. Consider, for example, an email collection. An email collection may contain many emails with disclaimers or other boilerplate language, from which an LDA model would extract topics and topic keywords that a user would not consider important even though they have a high statistical value. Further, an email collection may contain a large amount of newsletters or junk mail, from which an LDA model would likewise extract topics and topic keywords that a user would not consider important. Similarly, in the case of topic keywords, if the collection of text documents were a collection of financial news documents, then common words in finance, for example ‘Wall’ and ‘Street’, would have a high statistical value but not a high importance ranking to the user.
Therefore, referring back to
In the preferred embodiment of the invention shown in
1. Represent each document $d_m$ as a node in a graph; its features are given by $\theta_m$.
2. Construct the T-nearest-neighbor graph based on a similarity matrix $S$, where $S_{ij} = \exp\{-d_{ij}^2 / 2\sigma^2\}$. Here, $d_{ij}$ can be either the Euclidean distance or the Hellinger distance.
3. Compute the graph Laplacian $L = D - S$, where $D$ is a diagonal matrix and $D_{ii} = \sum_{j=1}^{M} S_{ij}$ is the degree of the $i$th vertex.
4. For each topic $t_k = (\theta_{1,k}, \theta_{2,k}, \ldots, \theta_{M,k})^T \in \mathbb{R}^M$, let $\tilde{t}_k = t_k - \frac{t_k^T D \mathbf{1}}{\mathbf{1}^T D \mathbf{1}} \mathbf{1}$. Here, $\mathbf{1} = [1, 1, \ldots, 1]^T$.
5. Compute the Laplacian score of the $k$-th topic: $L_k = \frac{\tilde{t}_k^T L \tilde{t}_k}{\tilde{t}_k^T D \tilde{t}_k}$.
To find the T nearest neighbors of a node, a T-size heap is kept. For each document, its distances to all the other documents are computed, and each distance is checked to determine whether it should be inserted into the heap. Thus, the main time complexity lies in the graph Laplacian construction, which is $O(M^2 K + M^2 \log T)$.
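By way of illustration only, the following Python sketch implements the Laplacian-score ranking of steps 1-5 above over a T-nearest-neighbor document graph built from the topic mixtures θ_m. The choice of Euclidean distance, the handling of ties, and the assumption that a smaller Laplacian score indicates a more important (locality-preserving) topic are illustrative choices, not statements of the claimed method.

```python
import numpy as np

def laplacian_score_ranking(theta, T=10, sigma=1.0):
    """Rank topics by Laplacian score over a T-nearest-neighbor document graph.

    theta : (M, K) document-topic matrix; row m is the topic mixture of d_m.
    Returns topic indices ordered by ascending Laplacian score (assumed here
    to mean most important first).
    """
    M, K = theta.shape
    # Pairwise squared Euclidean distances between documents.
    sq = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq / (2.0 * sigma ** 2))
    # Keep only the T nearest neighbors of each document (symmetrized).
    keep = np.zeros_like(S, dtype=bool)
    for i in range(M):
        nn = np.argsort(sq[i])[1:T + 1]
        keep[i, nn] = True
    S = np.where(keep | keep.T, S, 0.0)
    D = np.diag(S.sum(axis=1))
    L = D - S
    ones = np.ones(M)
    scores = np.empty(K)
    for k in range(K):
        t = theta[:, k]
        # Remove the weighted mean: t_tilde = t - (t^T D 1 / 1^T D 1) 1
        t_tilde = t - (t @ D @ ones) / (ones @ D @ ones) * ones
        scores[k] = (t_tilde @ L @ t_tilde) / (t_tilde @ D @ t_tilde + 1e-12)
    return np.argsort(scores)
```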
The step of determining an importance ranking for each topic is not limited to calculating a Laplacian score.
In another preferred embodiment, the step of determining an importance ranking for each topic is carried out by multiplying a mean distribution of each topic over the collection of text documents by a standard deviation of each topic over the collection of text documents. In this way, the importance ranking combines both content coverage and topic variance. Specifically, the weighted mean distribution is calculated as
$\mu(z_k) = \frac{\sum_{m=1}^{M} N_m \theta_{m,k}}{\sum_{m=1}^{M} N_m}$
and the standard deviation is calculated as
$\sigma(z_k) = \sqrt{\frac{\sum_{m=1}^{M} N_m (\theta_{m,k} - \mu(z_k))^2}{\sum_{m=1}^{M} N_m}}$
where the weight $N_m$ is the document length.
Then the rank of a topic is defined as
$P_k = (\mu(z_k))^{\lambda_1} \, (\sigma(z_k))^{\lambda_2}$
where $\lambda_1$ and $\lambda_2$ are control parameters. Specifically, if $\lambda_1 = 1$ and $\lambda_2 = 0$, the ranking is determined purely by topic coverage. In contrast, if $\lambda_1 = 0$ and $\lambda_2 = 1$, the rank is determined purely by topic variance.
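A minimal Python sketch of this coverage/variance ranking follows, assuming that the weighted mean and standard deviation use the document lengths N_m as weights as described above; the function and parameter names are illustrative.

```python
import numpy as np

def coverage_variance_rank(theta, doc_lengths, lam1=1.0, lam2=0.0):
    """Rank topics by (weighted mean)^lam1 * (weighted std)^lam2.

    theta       : (M, K) document-topic matrix (theta[m, k] = p(z=k | d_m)).
    doc_lengths : (M,) document lengths N_m, used as weights.
    Returns topic indices with the highest-ranked topic first.
    """
    N = np.asarray(doc_lengths, dtype=float)
    w = N / N.sum()
    mu = w @ theta                          # weighted mean per topic
    sigma = np.sqrt(w @ (theta - mu) ** 2)  # weighted standard deviation per topic
    rank_score = (mu ** lam1) * (sigma ** lam2)
    return np.argsort(-rank_score)
```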
In yet another preferred embodiment, the step of determining an importance ranking for each topic is carried out by ranking the topics based on the greatest pair-wise mutual information. The mutual information of two topics measures the information they share, or how much knowing one of the topics reduces uncertainty about the other. Ranking first the topic with the greatest pair-wise mutual information therefore reduces the uncertainty about the other topics the most. Specifically, in the preferred embodiment, the following procedure is used to determine the rank of each topic.
1. For all $i, j$, first compute $MI(t_i, t_j)$ based on the topic distributions of $t_i$ and $t_j$. Then construct a complete graph $G$ where the weight of an edge $e_{t_i, t_j}$ is $MI(t_i, t_j)$.
2. Build the maximal spanning tree (MST) of the complete graph $G$.
3. Define the relevant topic set $R_t = \{t_1, t_2, \ldots, t_K\}$ and the corresponding edges in the MST.
4. While |Rt|>0,
5. Rank the topics according to the order in which they were removed. Rank the topic removed last the highest.
Prim's algorithm is used to construct the MST. Thus, computing the pairwise mutual information for topic importance takes $O(K^2 M)$ time. By using a heap to construct a priority queue, the MST can be built in $O(|E| \log |V|) = O(K^2 \log K)$ time, since a complete graph is used.
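By way of illustration only, the following Python sketch builds the maximal spanning tree of the complete mutual-information graph with Prim's algorithm. The MI matrix is assumed to be precomputed from the topic distributions, and the removal loop of steps 4-5 (whose exact criterion is not reproduced above) is omitted; per step 5, topics removed later in that loop rank higher.

```python
import numpy as np

def maximal_spanning_tree(mi):
    """Build the maximal spanning tree of the complete topic graph whose edge
    weights are pairwise mutual information, using Prim's algorithm.

    mi : (K, K) symmetric matrix with mi[i, j] = MI(t_i, t_j).
    Returns a list of (weight, i, j) tree edges.
    """
    K = mi.shape[0]
    in_tree = {0}
    best = mi[0].copy()            # best edge weight connecting each node to the tree
    parent = np.zeros(K, dtype=int)
    edges = []
    while len(in_tree) < K:
        outside = [k for k in range(K) if k not in in_tree]
        nxt = max(outside, key=lambda k: best[k])   # heaviest crossing edge
        edges.append((float(best[nxt]), int(parent[nxt]), nxt))
        in_tree.add(nxt)
        for k in outside:
            if k != nxt and mi[nxt, k] > best[k]:
                best[k], parent[k] = mi[nxt, k], nxt
    return edges
```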
In yet another preferred embodiment, the step of determining an importance ranking for each topic is carried out by using a topic similarity algorithm. The topic similarity algorithm is used to maximize topic diversity and minimize redundancy.
1. For all $i, j$, compute the similarity $s_{ij}$ between $\varphi_i$ and $\varphi_j$ based on the maximal information compression index.
2. Sort the similarities for each topic.
3. Define the reduced topic set $R_t = \{\varphi_1, \varphi_2, \ldots, \varphi_K\}$.
4. While $|R_t| > 0$, remove the $\varphi_j$ in $R_t$ which satisfies $j = \arg\max_j \max_i s_{ij}$.
5. The rank of a topic is determined by the topic removal order. The topic removed last ranks the highest.
In this algorithm, constructing the similarity scores takes $O(K^2 M)$ time and sorting the scores takes $O(K^2 \log K)$ time.
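The removal loop of steps 3-5 can be sketched directly in Python; here the similarity matrix (for example, derived from the maximal information compression index) is assumed to be precomputed, and the function name is illustrative.

```python
import numpy as np

def redundancy_removal_ranking(sim):
    """Rank topics by iteratively removing the most redundant one.

    sim : (K, K) symmetric topic-similarity matrix; the diagonal is ignored.
    The topic most similar to some other remaining topic is removed first;
    the topic removed last is ranked highest.
    """
    K = sim.shape[0]
    sim = sim.astype(float).copy()
    np.fill_diagonal(sim, -np.inf)
    remaining = list(range(K))
    removal_order = []
    while remaining:
        if len(remaining) == 1:
            removal_order.append(remaining.pop())
            break
        sub = sim[np.ix_(remaining, remaining)]
        # j = arg max_j max_i s_ij over the remaining topics
        j_local = int(np.argmax(sub.max(axis=0)))
        removal_order.append(remaining.pop(j_local))
    return list(reversed(removal_order))   # removed last -> ranked first
```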
In the preferred embodiment of the invention shown in
where $(\#\text{self reply})_{d_m}$ is the reply count by the email owner for document $d_m$, $(\#\text{other reply})_{d_m}$ is the reply count by others for document $d_m$, and $\lambda_1$ and $\lambda_2$ are control parameters. To incorporate $r_k(\text{reply})$, this value is multiplied with the values of importance computed from the application-independent methods described above; for example, in step 303, the values of importance computed by the Laplacian score method are multiplied by $r_k(\text{reply})$.
Referring again to
where TF represents the native value of importance for a topic keyword generated by the LDA model ($\mathrm{TF} = \varphi_{k,i}$). The topic proportion sum and topic proportion product are used, respectively, in Type-I and Type-II TFIDF to re-weight the TF scores.
The importance ranking for each of the topic keywords is based on their importance to a topic and to a specific time frame. The importance ranking of a topic keyword is computed for a time t.
Referring back to
First, the volatility of each topic layer is computed based on its curvature. The volatility metric corresponds to how much the topic strength varies over time. Second, the topic layers are sorted based on their volatility and start times. The least volatile topic with the earliest start time is placed nearest to the x-axis. Third, the next topic layer is added either on the opposite side of the reference line from the first topic layer or stacked on the first topic layer. The next topic layer is selected based on start time, volatility, semantic similarity with the previously added topic, and geometric complementariness with the previous topic. The above criteria can be weighted differently. Geometric complementariness is calculated using the following formula:
Here, the weights are $w_1 = 0.5$ and $w_2 = 0.5$; $d_{ij}(t)$ is the vertical distance between a pair of points $p_i$ of $T_i$ and $p_j$ of $T_j$ at time $t$; and $F_\sigma$ computes the standard deviation of the pair-wise distances.
The above approach balances all three layer-ordering criteria. First, it places the “flatter” topic currents toward the center of the graph and curvier ones along the edge to minimize the layer distortion. Second, it neighbors geometrically complementary topic currents to maximize the usable space within each layer. Third, it groups semantically similar topic currents together to facilitate topic association and comparison.
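By way of illustration only, the following Python sketch shows one plausible reading of the geometric-complementariness score described above, combining the average and the standard deviation (F_sigma) of the pairwise vertical distances d_ij(t) between two layer outlines with the weights w_1 and w_2. The exact formula is not reproduced in the text above, so this combination, the function name, and the use of layer thickness as a stand-in for the boundary points are assumptions.

```python
import numpy as np

def geometric_complementariness(layer_i, layer_j, w1=0.5, w2=0.5):
    """Score how well two topic layers fit together geometrically.

    layer_i, layer_j : layer thickness sampled at the same time points; the
    vertical gap d_ij(t) is approximated as their absolute difference at t.
    Combines the average gap and its standard deviation F_sigma; under this
    reading, a smaller score means the layers are more complementary.
    """
    d = np.abs(np.asarray(layer_i, float) - np.asarray(layer_j, float))
    return w1 * d.mean() + w2 * d.std()
```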
In both
In an embodiment of the present invention, the keyword placement method considers three factors: (1) temporal proximity, (2) content legibility, and (3) content amount. The first factor states that topic keywords be placed near the most relevant time coordinate. The second factor requires that keywords be legible, for example by avoiding keyword occlusions and overflow across topic boundaries. The third factor attempts to maximize the use of available space in a topic current to display as many keywords as allowed. The method follows a two-step algorithm to place topic keywords as a series of keyword clouds along the timeline within a topic current.
To locate suitable space for placing a set of topic keywords relevant to time $t$ within a topic current, the neighborhood of $t$ ($t \pm \sigma$) is searched. Let $\delta$ be the time unit ($\delta = t_{i+1} - t_i$), with $\sigma < 0.5\delta$ to ensure that the keywords are placed near $t$. To ensure the legibility of topic keywords, a minimal legible font size or above can be required. When evaluating the neighborhood of $t$, there are three possible outcomes.
First, if there is adequate space to fit a set of keywords (K ≥ 10) in the neighborhood of time t, the space is marked. Second, if there is no adequate space within the allowed neighborhood to fit even a single keyword at the minimal legible font size, these keywords are merged with those derived at time t + δ. The method then advances to the time point t + δ to look for suitable space. If it is still unable to find suitable space, the method drops the keywords at t to ensure temporal proximity of the keyword placement. Otherwise, the space found near t + δ is marked. Third, adequate space may be found that fits only a few keywords (K ≤ 3). To avoid visual clutter, there should be an attempt to minimize the number of scattered keyword clusters within a topic current. Thus, the method looks ahead to check the space around the time point t + δ. If there is ample space (area > φ), the keywords at t and t + δ are merged and placed near t + δ. If both usable areas near t and t + δ are small, the two areas are combined to fit the merged keywords. The above process repeats until every set of keywords has been processed.
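The neighborhood search described above can be sketched at a high level in Python. The sketch abstracts the geometric area search into a per-time-point keyword capacity and simplifies the look-ahead and area-combination details, so the function name, the capacity abstraction, and the thresholds full_fit and few_fit (standing in for K ≥ 10 and K ≤ 3) are illustrative assumptions.

```python
def place_keyword_clouds(space, keyword_sets, full_fit=10, few_fit=3):
    """Greedy placement of per-time-point keyword sets within a topic layer.

    space        : list giving, for each time point t, how many minimally
                   legible keywords fit in the neighborhood of t (a stand-in
                   for the actual area search).
    keyword_sets : list of keyword lists, one per time point.
    Returns {time index: keywords placed there}; mirrors the three outcomes
    described above (enough room, no room, room for only a few keywords).
    """
    placements = {}
    pending = []                                   # keywords carried forward to t + delta
    for t, words in enumerate(keyword_sets):
        words = pending + list(words)
        pending = []
        capacity = space[t]
        if capacity >= full_fit:                   # outcome 1: mark the space
            placements[t] = words[:capacity]
        elif capacity < 1:                         # outcome 2: merge toward t + delta
            if t + 1 < len(keyword_sets):
                pending = words
            # else: drop the keywords to preserve temporal proximity
        elif capacity <= few_fit and t + 1 < len(keyword_sets):
            pending = words                        # outcome 3: look ahead and merge
        else:
            placements[t] = words[:capacity]
    return placements
```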
Any keyword cloud packaging method can be used to package the topic keywords into topic keyword clouds. It is desirable that the method used pack the keywords as tightly as possible.
The strength of a topic $T_i$ at time $t$ may be computed by summing $I(e_k) \cdot P(T_i \mid e_k)$ over the emails $e_k$ in $E(t)$. Here, $e_k$ is the $k$th email in collection $E(t)$, which is the set of emails at time $t$. Function $I(e_k)$ computes the normalized length of email $e_k$, while $P(T_i \mid e_k)$ calculates the distribution of topic $T_i$ in email $e_k$. Topics with larger topic strength are covered by more documents in the collection. Visually, the topics with larger topic strength appear wider.
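By way of illustration, the topic-strength computation described above may be sketched as follows; the callable parameters normalized_length and topic_proportion stand in for the functions I(e_k) and P(T_i | e_k) and are illustrative names.

```python
def topic_strength(emails_at_t, topic_index, normalized_length, topic_proportion):
    """Strength of topic T_i at time t: the sum over emails e_k in E(t) of
    I(e_k) * P(T_i | e_k), i.e. the normalized email length times the
    topic's proportion in that email."""
    return sum(normalized_length(e) * topic_proportion(topic_index, e)
               for e in emails_at_t)
```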
To support more sophisticated text analytics such as topic trend analysis there is a need to drill down to a particular topic and derive additional information for each topic. In the preferred embodiment of the present invention shown in
Two types of general grouping constraints are “must-link” and “cannot-link”. For example, for time-based topic summarization, if all the documents are sorted based on their time stamps, then there is a must-link between documents from adjacent time stamps. For geographic-region-based topic summarization, there is a must-link between documents from the same geo-region and a cannot-link between those from different geo-regions. An author-based keyword selection can be formulated similarly. Without loss of generality, it is assumed that words can also have pairwise constraints.
An embodiment of the present invention uses a novel biHMRF (two-sided hidden Markov random field) regularized information theoretic co-clustering algorithm (biHMRF-ITCC) to group documents. Information theoretic co-clustering is used since it can co-cluster documents and words and find the cluster relationships. Employing document clustering is more flexible than using fixed topic segmentation. For example, for time-based topic segmentation, if a topic is segmented based on fixed time points, such as every month, and the document contents from two adjacent months are very similar, then the same set of topic keywords may be repeatedly displayed for both months. In contrast, with document clustering, document sets can be combined to avoid redundancy. Moreover, the information theoretic co-clustering algorithm does not restrict the number of document clusters and the number of word clusters to be the same. Thus, groups of documents can be extracted that share the same keyword clusters, so that different document clusters may share the same keywords.
A variational distribution $q(d_m, v_i, \hat{d}_{k_d}, \hat{v}_{k_v})$ is used to approximate $p(d_m, v_i)$ by minimizing the Kullback-Leibler (KL) divergence, where $\hat{d}_{k_d}$ and $\hat{v}_{k_v}$ are cluster indicators, $k_d$ and $k_v$ are cluster indices, and $K_d$ and $K_v$ are the numbers of document and word clusters.
As shown in
$p(D', V' \mid L_d, L_v) = \exp\bigl(-D_{KL}(p(D', V', \hat{D}, \hat{V}) \,\|\, q(D', V', \hat{D}, \hat{V}))\bigr) \, b_{\Phi_{KL}}(\cdot)$
where $b_{\Phi_{KL}}(\cdot)$ is a normalization constant determined by the divergence type, and $\hat{D}$ and $\hat{V}$ are the center sets.
Next, the prior distributions are formulated for both sets of latent labels. Here, the focus is on deriving the prior for $L_d$; the derivation for $L_v$ is relatively simple. First, for the latent variables $l_{d_m}$, a neighborhood graph is constructed based on the must-links and cannot-links. For a document $d_m$, the must-link set is denoted as $M_{d_m}$ and the cannot-link set as $C_{d_m}$. Moreover, the neighbor set for $d_m$ is defined as $N_{d_m} = \{M_{d_m}, C_{d_m}\}$. The random field defined on this neighborhood graph is a Markov random field satisfying the Markov property: $p(l_{d_m} \mid L_d \setminus \{l_{d_m}\}) = p(l_{d_m} \mid \{l_{d_n} : d_n \in N_{d_m}\})$. Then the configuration of the latent label set can be expressed as a Gibbs distribution. Following the generalized Potts energy function and its extension, the prior is $p(L_d) \propto \exp\bigl(-\sum_{d_m} \sum_{d_n \in N_{d_m}} V(d_m, d_n)\bigr)$.
For must-links, the energy function is:
$V(d_{m_1}, d_{m_2} \in M_{d_{m_1}}) = a_{ij} \, D_{KL}\bigl(p(V' \mid d_{m_1}) \,\|\, p(V' \mid d_{m_2})\bigr) \cdot \mathbf{1}_{l_{d_{m_1}} \neq l_{d_{m_2}}}$
For cannot-links, the energy function is:
$V(d_{m_1}, d_{m_2} \in C_{d_{m_1}}) = a_{ij} \, \bigl(D_{\max} - D_{KL}(p(V' \mid d_{m_1}) \,\|\, p(V' \mid d_{m_2}))\bigr) \cdot \mathbf{1}_{l_{d_{m_1}} = l_{d_{m_2}}}$
where $p(V' \mid d_{m_1})$ denotes a multinomial distribution based on the probabilities $(p(v_1 \mid d_{m_1}), \ldots, p(v_{V'} \mid d_{m_1}))^T$, $D_{\max}$ is the maximum value over all the $D_{KL}(p(V' \mid d_{m_1}) \,\|\, p(V' \mid d_{m_2}))$, and $\mathbf{1}_{\text{true}} = 1$ and $\mathbf{1}_{\text{false}} = 0$.
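By way of illustration only, the following Python sketch computes the two constraint energies in the form given above, with a standing in for the constraint weight a_ij, p1 and p2 for the word distributions p(V'|d_m1) and p(V'|d_m2), and d_max for D_max. The cannot-link form follows the reconstruction above and the standard HMRF formulation, so it should be read as an assumption rather than a definitive statement of the claimed method.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two (unnormalized) discrete distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def must_link_energy(p1, p2, label1, label2, a=1.0):
    """Penalty incurred when two must-linked documents get different labels."""
    return a * kl_divergence(p1, p2) if label1 != label2 else 0.0

def cannot_link_energy(p1, p2, label1, label2, d_max, a=1.0):
    """Penalty incurred when two cannot-linked documents get the same label."""
    return a * (d_max - kl_divergence(p1, p2)) if label1 == label2 else 0.0
```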
Therefore, the constraint co-clustering problem can be formulated as an MAP estimation problem for label configurations:
$p(L_d, L_v \mid D', V') \propto p(D', V' \mid L_d, L_v) \, p(L_d) \, p(L_v)$
As there are two HMRF priors for Ld and Lv, this is called biHMRF regularization. Moreover, the objective function can be rewritten as:
Since the two sets of latent variables as well as the variational probability are intractable to estimate simultaneously, an alternating expectation maximization (Alternating EM) algorithm is proposed to solve the problem.
The following is an alternating EM algorithm for biHMRF-ITCC.
Algorithm—Alternating EM for biHMRF-ITCC
where $p(V' \mid \hat{d}_{k_d})$ denotes a multinomial distribution based on the probabilities $(p(v_1 \mid \hat{d}_{k_d}), \ldots, p(v_{V'} \mid \hat{d}_{k_d}))^T$, with $p(v_i \mid \hat{d}_{k_d}) = p(v_i \mid \hat{v}_{k_v}) \, p(\hat{v}_{k_v} \mid \hat{d}_{k_d})$ and $p(v_i \mid \hat{v}_{k_v}) = p(v_i) / p(l_{v_i} = \hat{v}_{k_v})$ due to the hard clustering labels. Symmetrically, the probability for words can be defined: $p(D' \mid \hat{v}_{k_v})$ denotes a multinomial distribution based on the probabilities $(p(d_1 \mid \hat{v}_{k_v}), p(d_2 \mid \hat{v}_{k_v}), \ldots)^T$, with $p(d_i \mid \hat{v}_{k_v}) = p(d_i \mid \hat{d}_{k_d}) \, p(\hat{d}_{k_d} \mid \hat{v}_{k_v})$ and $p(d_i \mid \hat{d}_{k_d}) = p(d_i) / p(l_{d_i} = \hat{d}_{k_d})$ due to the hard clustering labels. Thus, the optimization process can be alternated: first, the algorithm fixes $L_v$ and minimizes the objective function with respect to $L_d$; then, it fixes $L_d$ and minimizes the objective function with respect to $L_v$, repeating until convergence.
When Lv is fixed, the objective function for Ld is rewritten as:
Optimizing this objective function is still computationally intractable. Here, a general EM algorithm is used to find the estimate. In the E-Step, the cluster labels are updated based on the model parameters q from the last iteration. In the M-Step, the model parameters q are updated with the cluster labels fixed.
In the E-Step, an iterated conditional modes (ICM) algorithm is used to find the cluster labels. ICM greedily solves the optimization problem by updating one latent variable at a time, keeping all the other latent variables fixed. Here, we derive the label $l_{d_m}$ by:
In the M-Step, since the latent labels are fixed, the update of variational function q is not affected by the must-links and cannot-links. Thus, the following function can be updated:
The algorithm “Alternating EM for biHMRF-ITCC” described above summarizes the main steps. The biHMRF-ITCC algorithm monotonically decreases the objective function to a local optimum. This is easy to prove, since the ICM algorithm decreases the objective to a local optimum given fixed q parameters, and the update of q then monotonically decreases the objective.
The time complexity of the biHMRF-ITCC algorithm is $O((n_z + n_c \cdot \mathrm{iter}_{ICM}) \cdot (K_d + K_v) \cdot \mathrm{iter}_{AEM})$, where $n_z$ is the number of nonzero document-word co-occurrences, $n_c$ is the number of constraints, $\mathrm{iter}_{ICM}$ is the number of ICM iterations in the E-Step, $K_d$ and $K_v$ are the cluster numbers, and $\mathrm{iter}_{AEM}$ is the number of iterations of the alternating EM algorithm.
Given the co-clustering results for a document set, the keywords for each document cluster can be extracted. The keyword selection algorithm has two main steps. First, a filter is used to smooth the labels estimated from the Markov random field: a window of size 50 is used to smooth adjacent document labels, and the cluster label is re-assigned to the label that appears most frequently in the window. The output labels are thereby smoothed and divided into several segments. Second, for each segment with a cluster label, its keywords are extracted. Given a cluster label, its corresponding keyword clusters can be obtained from $q(\hat{d}_{k_d}, \hat{v}_{k_v})$. The probabilities $q(\hat{d}_{k_d}, \hat{v}_{k_v})$ for $k_v = 1, 2, \ldots, K_v$ represent how strongly a keyword cluster is associated with a document cluster. Relevant keyword clusters are then defined as those whose probabilities are higher than the mean value of these probabilities. Then, the keyword rank is determined by:
The rank values are then sorted so that top keywords can be retrieved.
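The smoothing and cluster-selection steps described above can be illustrated with a short Python sketch; the centered majority-vote window and the function names are illustrative assumptions, and the final keyword-ranking formula referenced above is not reproduced here.

```python
import numpy as np
from collections import Counter

def smooth_labels(labels, window=50):
    """Majority-vote smoothing of document cluster labels along the timeline."""
    labels = list(labels)
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed

def relevant_keyword_clusters(q_dk_vk):
    """Indices of keyword clusters whose association q(d_hat, v_hat) with a
    given document cluster exceeds the mean association."""
    q = np.asarray(q_dk_vk, float)
    return np.nonzero(q > q.mean())[0]
```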
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.