Method of text classification using discriminative topic transformation

Information

  • Patent Grant
  • Patent Number
    9,069,798
  • Date Filed
    Thursday, May 24, 2012
  • Date Issued
    Tuesday, June 30, 2015
  • Field of Search
    • US
    • 704/1-10
    • 704/245
    • 704/251
    • 704/257
    • 704/270
    • CPC
    • G06F17/2785
    • G06F17/277
    • G06F17/2775
    • G06F17/30707
    • G06F17/3071
    • G06F17/27
    • G06F17/28
    • G06F17/30705
    • G06F17/21
    • G06F17/2211
    • G06F17/271
    • G06F17/30
    • G06F17/274
    • G06F17/2755
    • G06F17/278
    • G06F17/279
    • G06F17/2795
    • G06F17/30731
  • International Classifications
    • G06F19/24
    • G06F17/30
    • Term Extension
      485
Abstract
Text is classified by determining text features from the text and transforming the text features to topic features. Scores are determined for the topic features using a discriminative topic model. The model includes a classifier that operates on the topic features, wherein the topic features are determined by the transformation from the text features, and the transformation is optimized to maximize the scores of a correct class relative to the scores of incorrect classes. Then, a class label with the highest score is selected for the text. In situations where the classes are organized in a hierarchical structure, the discriminative topic models apply to classes at each level conditioned on previous levels, and scores are combined across levels to evaluate the highest scoring class labels.
Description
FIELD OF THE INVENTION

This invention relates generally to a method for classifying text, and more particularly to classifying text for a large number of categories.


BACKGROUND OF THE INVENTION

Text classification is an important problem for many tasks in natural language processing, such as user-interfaces for command and control. In such methods, training data derived from a number of classes of text are used to optimize parameters used by a method for estimating a most likely class for the text.


Multinomial Logistic Regression (MLR) Classifiers for Text Classification.


Text classification estimates a class $y$ from an input text $x$, where $y$ is a label of the class. The text can be derived from a speech signal.


In prior art multinomial logistic regression, information about the input text is encoded using a feature function

$f_{j,k} : (x, y) \mapsto \{0, 1\},$

typically defined such that








$$f_{j,k}(x, y) = \begin{cases} 1 & \text{if } t_j \in x \text{ and } y = I_k \\ 0 & \text{otherwise}, \end{cases}$$
In other words, the feature is 1 if the term $t_j$ is contained in the text $x$ and the class label $y$ is equal to category $I_k$, and 0 otherwise.
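As a concrete illustration, here is a minimal Python sketch of such a binary feature function; the specific term and category names are assumptions for the example only.

```python
# Sketch of the binary feature f_{j,k}: fires (value 1) only when term t_j
# occurs in the text x and the class hypothesis y equals category I_k.

def make_feature(term, category):
    def f(x, y):
        return 1 if term in x and y == category else 0
    return f

# Illustrative names: the feature fires for texts containing "weather"
# hypothesized as class "forecast".
f_weather_forecast = make_feature("weather", "forecast")
print(f_weather_forecast({"what", "is", "the", "weather"}, "forecast"))  # 1
print(f_weather_forecast({"play", "some", "music"}, "forecast"))         # 0
```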


A model used for the classification is a conditional exponential model of the form









$$p_\Lambda(y \mid x) = \frac{1}{Z_\Lambda(x)} \exp\Big( \sum_{j,k} \lambda_{j,k}\, f_{j,k}(x, y) \Big),$$

where

$$Z_\Lambda(x) = \sum_{y} \exp\Big( \sum_{j,k} \lambda_{j,k}\, f_{j,k}(x, y) \Big),$$

and $\lambda_{j,k}$ and $\Lambda$ are the classification parameters.
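Since $f_{j,k}(x, y)$ is nonzero only when $y = I_k$, the score of class $I_k$ reduces to $\sum_j \lambda_{j,k} \mathbb{1}[t_j \in x]$, which the following minimal Python sketch exploits; the array shapes and values are assumptions, not the patent's notation.

```python
import numpy as np

def mlr_posterior(lam, term_indicators):
    """Evaluate p_Lambda(y | x) over K classes.

    lam: (J, K) array of weights lambda_{j,k}.
    term_indicators: (J,) 0/1 vector, entry j = 1 if term t_j occurs in x.
    """
    scores = lam.T @ term_indicators   # scores[k] = sum_j lambda_{j,k} f_j(x)
    scores = scores - scores.max()     # shift for numerical stability
    p = np.exp(scores)
    return p / p.sum()                 # normalize by Z_Lambda(x)

# Example with J = 4 terms and K = 3 classes (values are illustrative).
lam = np.array([[1.0, 0.0, -1.0],
                [0.5, 0.5,  0.0],
                [0.0, 2.0,  0.0],
                [1.0, 0.0,  1.0]])
x = np.array([1, 0, 1, 0])             # terms t_1 and t_3 occur in the text
print(mlr_posterior(lam, x))
```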


The parameters are optimized on training pairs of texts $x_i$ and labels $y_i$, using an objective function








$$L_\Lambda = \sum_i \bigg[ \sum_{j,k} \lambda_{j,k}\, f_{j,k}(x_i, y_i) - \log \sum_{y} \exp\Big( \sum_{j,k} \lambda_{j,k}\, f_{j,k}(x_i, y) \Big) \bigg],$$
which is to be maximized with respect to $\Lambda$.


Regularized Multinomial Logistic Regression Classifiers


Regularization terms can be added to classification parameters in logistic regression to improve a generalization capability.


In regularized multinomial logistic regression classifiers, a general formulation using both the L1-norm and the L2-norm regularizers is








$$L_\Lambda = \sum_i \bigg[ \sum_{j,k} \lambda_{j,k}\, f_{j,k}(x_i, y_i) - \log \sum_{y} \exp\Big( \sum_{j,k} \lambda_{j,k}\, f_{j,k}(x_i, y) \Big) \bigg] - \alpha \sum_{j,k} \lambda_{j,k}^2 - \beta \sum_{j,k} |\lambda_{j,k}|,$$

where $\alpha \sum_{j,k} \lambda_{j,k}^2$ is the L2-norm regularizer, $\beta \sum_{j,k} |\lambda_{j,k}|$ is the L1-norm regularizer, and $\alpha$ and $\beta$ are weighting factors. This objective function is again to be maximized with respect to $\Lambda$.
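A minimal sketch of evaluating this regularized objective over a batch of training pairs follows; variable names and shapes are assumptions.

```python
import numpy as np

def regularized_objective(lam, X, y, alpha, beta):
    """L_Lambda for lam: (J, K), X: (N, J) term indicators, y: (N,) labels."""
    scores = X @ lam                                 # (N, K) class scores
    m = scores.max(axis=1, keepdims=True)            # stable log-sum-exp
    log_z = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    log_lik = scores[np.arange(len(y)), y] - log_z   # log p_Lambda(y_i | x_i)
    return (log_lik.sum()
            - alpha * (lam ** 2).sum()               # L2-norm regularizer
            - beta * np.abs(lam).sum())              # L1-norm regularizer
```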


Various methods can optimize the parameters under these regularizations.


Topic Modeling


In the prior art, probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) are generative topic models in which topics are multinomial latent variables, the distribution of topics depends on the particular document containing the text, and the words are distributed multinomially given the topics. If the documents are associated with classes, then such models can be used for text classification.


However, with generative topic models, the class-specific parameters and the topic-specific parameters are additive in the logarithm of the probability.


SUMMARY OF THE INVENTION

The embodiments of the invention provide a method for classifying text using discriminative topic transformations. The embodiments also perform classification in problems where the classes are arranged in a hierarchy.


The method extracts features from the text, transforms the features into topic features, and then classifies the text to determine scores.


Specifically, the text is classified by determining text features from the text, and transforming the text features to topic features. The text can be obtained from recognized speech.


Scores are determined for the topic features using a discriminative topic transformation model.


The model includes a classifier that operates on the topic features, wherein the topic features are determined by the transformation from the text features, and the transformation is optimized to maximize the scores of a correct class relative to the scores of incorrect classes.


Then, a set of class labels with the highest scores is selected for the text. The number of labels selected can be predetermined, or dynamic.


In situations where the classes are organized in a hierarchical structure, where each class corresponds to a node in the hierarchy, the method proceeds as follows. The hierarchy can be traversed in a breadth-first order.


The first stage of the method is to evaluate the class scores of the input text at the highest level of the hierarchy (level one) using a discriminative topic transformation model trained for the level-one classes in the same way as described above. Scores for each level-one class are produced by this stage and are used to select a set of level-one classes having the greatest scores. For each of the selected level-one classes, the corresponding level-two child classes are then evaluated using a discriminative topic transformation model associated with each level-one class. The procedure repeats for one or more levels, or until the last level of the hierarchy is reached. Scores from each classifier used on the path from the top level to any node of the hierarchy are combined to yield a joint score for the classification at the level of that node. The scores are used to output the highest scoring candidates at any given level in the hierarchy. The topic transformation parameters in the discriminative topic transformation models can be shared among one or more subsets of the models, in order to promote generalization within the hierarchy.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flow diagram of a text classification method and system according to embodiments of the invention, and



FIG. 2 is a flow diagram of a hierarchical text classification method and system according to embodiments of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The embodiments of the invention provide a method for classifying text using a discriminative topic transformation model.


The method extracts text features $f_{j,k}(x, y)$ from the text to be classified, where $j$ is an index for the type of feature, $k$ is an index of the class associated with the feature, $x$ is the text, and $y$ is a hypothesis of the class.


The text features are transformed to topic features using

$$g_{l,k}(x, y) = h_l\big( f_{1,k}(x, y), \ldots, f_{J,k}(x, y) \big),$$

where $h_l(\cdot)$ is a function that transforms the text features, and $l$ is an index of the topic features.


The term “topic features” is used because the features are related to semantic aspects of the text. As used in the art and herein, “semantics” relate to the meaning of the text in a natural language as a whole. Semantics focuses on a relation between signifiers, such as words, phrases, signs and symbols, and what the signifiers denote. Semantics is distinguished from the “dictionary” meaning of the individual words.


A linear transform,

$$h_l\big( f_{1,k}(x, y), \ldots, f_{J,k}(x, y) \big) = \sum_j A_{l,j}\, f_{j,k}(x, y),$$

parameterized by a feature transformation matrix $A$, produces the topic features

$$g_{l,k}(x, y) = \sum_{j} A_{l,j}\, f_{j,k}(x, y).$$
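For indicator text features, the transformation is a matrix-vector product, as in this small sketch (sizes and values are illustrative assumptions):

```python
import numpy as np

# Topic features g = A f: each row of A mixes the term indicators into one
# topic feature, g[l] = sum_j A[l, j] * f[j].

A = np.array([[0.9, 0.8, 0.0, 0.1, 0.0, 0.0],   # topic 1 loads on terms 1-2
              [0.0, 0.0, 0.7, 0.0, 0.9, 0.6]])  # topic 2 loads on terms 3,5,6
f = np.array([1, 1, 0, 0, 1, 0])                # terms present in the text

g = A @ f
print(g)                                         # topic feature values
```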
Then, our discriminative topic transformation model is









$$p_{\Lambda,A}(y \mid x) = \frac{1}{Z_{\Lambda,A}(x)} \exp\Big( \sum_{l,j,k} \lambda_{l,k}\, A_{l,j}\, f_{j,k}(x, y) \Big),$$

where

$$Z_{\Lambda,A}(x) = \sum_{y} \exp\Big( \sum_{l,j,k} \lambda_{l,k}\, A_{l,j}\, f_{j,k}(x, y) \Big).$$
We construct and optimize our model using training text. The model includes the set of classification parameters $\Lambda$ and the feature transformation matrix $A$. The parameters maximize the scores of the correct class labels. The model is also used to evaluate the scores during classification. The construction can be done in a one-time preprocessing step.
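A hedged Python sketch of evaluating this model for indicator text features follows; the shapes are assumptions.

```python
import numpy as np

def topic_model_posterior(lam, A, f):
    """Evaluate p_{Lambda,A}(y | x).

    lam: (L, K) weights lambda_{l,k}; A: (L, J) transformation matrix;
    f: (J,) term indicators for the text x.
    """
    g = A @ f                       # topic features g_l(x)
    scores = lam.T @ g              # scores[k] = sum_{l,j} lam[l,k] A[l,j] f[j]
    scores = scores - scores.max()  # numerical stability
    p = np.exp(scores)
    return p / p.sum()              # divide by Z_{Lambda,A}(x)
```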


The model parameters can also be regularized during optimization, using various regularizers designed for the feature transformation matrix $A$ and the classification parameters $\Lambda$.


One way uses a mixture of L2 and L1 regularizers,

$$\alpha \sum_{j,k} \lambda_{j,k}^2 \quad \text{and} \quad \beta \sum_{j,k} |\lambda_{j,k}|,$$

on the classification parameters $\Lambda$, and a combined L1/L2 regularizer

$$\gamma \sum_l \Big( \sum_j |A_{l,j}| \Big)^2$$

on the feature transformation matrix $A$, where $\alpha$, $\beta$, and $\gamma$ are weighting factors.
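Reading the combined L1/L2 term as a squared L1 norm per topic row of $A$ (as in the reconstructed formula above), the penalty can be sketched in one line; the function name is an assumption.

```python
import numpy as np

def combined_l1_l2_penalty(A, gamma):
    # gamma * sum_l (sum_j |A[l, j]|)^2
    return gamma * (np.abs(A).sum(axis=1) ** 2).sum()
```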


Objective Function for Training Model Parameters


Then, the objective function for training the model parameters $\Lambda$ and $A$ on training pairs of texts $x_i$ and labels $y_i$ is








$$L_{\Lambda,A} = \sum_i \log\big( p_{\Lambda,A}(y_i \mid x_i) \big) - \alpha \sum_{l,k} \lambda_{l,k}^2 - \beta \sum_{l,k} |\lambda_{l,k}| - \gamma \sum_l \Big( \sum_j |A_{l,j}| \Big)^2,$$
where $\alpha$, $\beta$, and $\gamma$ are weights controlling the relative strength of each regularizer, determined using cross-validation. This objective function is to be maximized with respect to $\Lambda$ and $A$.
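The patent does not prescribe an optimizer; as one plausible choice, the following hedged sketch takes a single (sub)gradient ascent step on $L_{\Lambda,A}$, using subgradients for the non-smooth L1 terms. All names and shapes are assumptions.

```python
import numpy as np

def train_step(lam, A, X, y, alpha, beta, gamma, lr=1e-2):
    """lam: (L, K); A: (L, J); X: (N, J) indicators; y: (N,) labels."""
    G = X @ A.T                                   # (N, L) topic features
    scores = G @ lam                              # (N, K) class scores
    scores -= scores.max(axis=1, keepdims=True)
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)             # p_{Lambda,A}(y | x_i)
    target = np.zeros_like(p)
    target[np.arange(len(y)), y] = 1.0
    resid = target - p                            # d log-lik / d scores
    grad_lam = G.T @ resid - 2 * alpha * lam - beta * np.sign(lam)
    row_l1 = np.abs(A).sum(axis=1, keepdims=True) # per-row L1 norms of A
    grad_A = lam @ resid.T @ X - 2 * gamma * row_l1 * np.sign(A)
    return lam + lr * grad_lam, A + lr * grad_A   # ascend the objective
```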


Scoring


Scores for each class $y$ given text $x$ can be computed using a formula similar to the objective function above, leaving out the constant terms:








$$s_{\Lambda,A}(y \mid x) = \sum_{l,j,k} \lambda_{l,k}\, A_{l,j}\, f_{j,k}(x, y).$$
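Because the partition function is constant in $y$, ranking the classes needs only these unnormalized scores, as in this sketch, where `n` plays the role of the predetermined number of labels:

```python
import numpy as np

def top_labels(lam, A, f, n=3):
    scores = lam.T @ (A @ f)              # s_{Lambda,A}(y = I_k | x) for all k
    return np.argsort(scores)[::-1][:n]   # indices of the n best classes
```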
Hierarchical Classification


We now consider the case where the classes are organized in a hierarchical structure. For each text $x$, we now have labels $y_d$, $d = 1, \ldots, D$, one for each level of the hierarchy. The label variable $y_d$ at each level $d$ takes values in a set $C_d$. The set of considered values for $y_d$ can be restricted to a subset $C_d(y_{1:(d-1)})$ according to the values taken by the label variables $y_{1:(d-1)} = y_1, \ldots, y_{d-1}$ at previous levels.


For example, in the case of a tree structure for the classes, each set $C_d(y_{1:(d-1)})$ can be defined as the set of children of the label $y_{d-1}$ at level $d-1$.


For estimating the class at each level $d$, we can construct classifiers for the text that depend on the hypotheses of the classes at the previous levels $d' \leq d-1$. The score for class $y_d$ is computed using the following formula:








$$s_{\Lambda^d(y_{1:(d-1)}),\, A}\big( y_d \mid x,\, y_{1:(d-1)} \big) = \sum_{l,j,k} \lambda^d_{l,k}\big( y_{1:(d-1)} \big)\, A_{l,j}\, f_{j,k}(x, y_d),$$
where $\Lambda^d(y_{1:(d-1)})$ is the set of parameters for classes at level $d$ given the classes at levels 1 to $d-1$. Optionally, the matrix $A$ can depend on the level $d$ and the previous levels' classes $y_{1:(d-1)}$, but there may be advantages to sharing it across levels.


In the case of a tree representation, one possibility is to simplify the above formula to










$$s_{\Lambda^d(y_{d-1}),\, A}\big( y_d \mid x,\, y_{d-1} \big) = \sum_{l,j,k} \lambda^d_{l,k}(y_{d-1})\, A_{l,j}\, f_{j,k}(x, y_d),$$
so that scoring only depends on the class of the previous level.


In this framework, inference can be performed by traversing the hierarchy, and combining scores across levels for combinations of hypotheses $y_{1:d}$.


Combining the scores across levels can be done in many ways. Here, we shall consider summing over scores from different levels:







$$s(y_{1:d} \mid x) = \sum_{d'=1}^{d} s_{\Lambda^{d'}(y_{1:(d'-1)}),\, A}\big( y_{d'} \mid x,\, y_{1:(d'-1)} \big).$$

In some contexts, it can be important to determine the marginal score $s(y_d \mid x)$ of $y_d$. In the case of conditional exponential models, this is given (up to an irrelevant constant) by







$$s(y_d \mid x) = \log \bigg( \sum_{y_{1:(d-1)}} \exp\big( s(y_{1:d} \mid x) \big) \bigg).$$
In the case of a tree, we simply have $s(y_d \mid x) = s(y_{1:d} \mid x)$, as there is only a single path that leads to $y_d$.
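A small sketch of the marginalization, implemented as a numerically stable log-sum-exp over the joint path scores:

```python
import numpy as np

def marginal_score(path_scores):
    """path_scores: joint scores s(y_{1:d} | x) of all paths ending in y_d."""
    m = np.max(path_scores)               # shift for numerical stability
    return m + np.log(np.exp(path_scores - m).sum())
```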


The combined scores for different hypotheses are used to rank the hypotheses and determine the most likely classes at each level for the input text.


Traversing the hierarchy can also be done in many ways; we traverse the hierarchy from the top in a breadth-first search strategy. In this context, we can speed up the process by eliminating from consideration hypotheses $y_{1:(d-1)}$ up to level $d-1$ whose scores are too low. At level $d$, we then only have to consider hypotheses $y_{1:d}$ that include the top-scoring $y_{1:(d-1)}$.
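A hedged sketch of this pruned breadth-first traversal follows; `score_children` stands in for the per-node discriminative topic models (it yields `(child, partial score)` pairs and yields nothing at a leaf), and its interface is an assumption.

```python
def traverse(root, text, score_children, n=5):
    beam = [(0.0, [root])]                   # (joint score, path y_{1:d})
    results = []
    while beam:
        expanded = []
        for joint, path in beam:
            for child, s in score_children(path[-1], text, path):
                expanded.append((joint + s, path + [child]))
        expanded.sort(key=lambda t: t[0], reverse=True)
        beam = expanded[:n]                  # eliminate low-scoring hypotheses
        results.extend(beam)                 # best candidates at this level
    return results
```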


The hierarchy can also be represented by a directed acyclic graph (DAG). An undirected graph can be converted into a DAG by choosing a total ordering of the nodes of the undirected graph, and orienting every edge between two nodes from the node earlier in the order to the node later in the order.
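The edge-orientation step can be sketched in a few lines (names are illustrative):

```python
def orient_edges(nodes, edges):
    order = {v: i for i, v in enumerate(nodes)}  # the chosen total ordering
    return [(u, v) if order[u] < order[v] else (v, u) for u, v in edges]

print(orient_edges(["a", "b", "c"], [("b", "a"), ("c", "b")]))
# [('a', 'b'), ('b', 'c')]
```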


Method



FIG. 1 shows a method for classifying text using discriminative topic transformation models according to embodiments of our invention.


As described above, we construct 105 our model 103 from known labeled training text 104 during preprocessing.


After the model is constructed, unknown unlabeled text can be classified.


Input to the method is text 101, where the text includes glyphs, characters, symbols, words, phrases, or sentences. The text can be derived from speech.


Output is a set of class labels 102 that most likely correspond to the unknown input text, i.e., class hypotheses.


Using the model, text features 111 are determined 110 from the input text 101. The text features are transformed 120 to topic features 121.


Class scores are determined 130 according to the model 103. Then, the set of class labels 102 with the highest scores is produced.


The steps of the above methods can be performed in a processor 100 connected to memory and input/output interfaces as known in the art.



FIG. 2 shows a method for classifying text using the above method in the case where the classes are arranged in a tree-structured hierarchy.


Parameters 202 are constructed according to the above method for performing classification at each level of the hierarchy. Scores for level 1 classes are evaluated 210 on unlabeled text 201 as above, producing scores for level 1 classes 203. One or more nodes in the next level 2 are then selected 220 based on the scores for level 1. Scores for selected nodes for level 2 are again evaluated 230 using the above method on unlabeled text 201, and are aggregated 204 with scores for the previous level.


The same method is performed at each subsequent level of the hierarchy, beginning with selection 220 of nodes for level i, evaluation 230 of scores at level i, and storage 204 of the scores up to level i.


After the scores up to the final level i=n have been aggregated, the scores are combined 240 across levels, and the set 205 of class labels for each level with the highest scores is produced.


EFFECT OF THE INVENTION

The invention provides an alternative to conventional text classification methods. Conventional methods can use features based on topic models. However, those features are not discriminatively trained within a framework of the classifier.


The use of topic features allows parameters to be shared among all classes, which enables the model to determine relationships between words across the classes, in contrast to only within each class, as in conventional classification models.


The topic features also allow the parameters for each class to be used for all classes, which can reduce noise and over-fitting during the parameter estimation, and improve generalization.


Relative to latent variable topic models, our model involves multiplication of the topic-specific and class-specific parameters in the log probability domain, whereas the prior art latent variable topic models involve addition in the log probability domain, which yields a different set of possible models.


As another advantage, our method uses a multivariate logistic function with optimization that is less sensitive to training text points that are far from a decision boundary.


The hierarchical operation of the classification, combined with the discriminative topic transformations, enables the system to generalize well from training data by sharing parameters among classes. It also enables backing off to higher-level classes if inference at lower levels cannot be performed with sufficient confidence.


Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims
  • 1. A method for classifying text, comprising steps of: acquiring text as input data in a processor, wherein the text is derived from one or more hypotheses from an automatic speech recognition system operating on a speech signal; determining text features from the text $x$, wherein the text features are $f_{j,k}(x, y)$; transforming the text features to topic features, wherein the transforming is according to $g_{l,k}(x, y) = h_l( f_{1,k}(x, y), \ldots, f_{J,k}(x, y) )$,
  • 2. The method of claim 1, wherein the topic features are a linear transformation of the text features.
  • 3. The method of claim 1, wherein parameters of the model are regularized using regularizers comprising L1-norm, L2-norm, and mixed-norm regularizers.
  • 4. The method of claim 1, wherein the topic features relate to semantic aspects of the text.
  • 5. The method of claim 1, wherein a linear transform $h_l( f_{1,k}(x, y), \ldots, f_{J,k}(x, y) ) = \sum_j A_{l,j} f_{j,k}(x, y)$ is parameterized by a feature transformation matrix $A$ to produce the topic features
  • 6. The method of claim 1, wherein the weights are determined by cross-validation.
  • 7. The method of claim 1, wherein the classifying is according to semantics of a natural language used by the text.
  • 8. The method of claim 1, wherein the classes are organized in a hierarchical structure, wherein each class corresponds to a node in the hierarchy, wherein nodes are assigned to different levels of the hierarchy, wherein different classification parameters are used for one or more of the levels of the hierarchy, wherein classification is performed by traversing the hierarchy to evaluate partial scores of the classes at each level conditioned on hypotheses of the classes at previous levels, and combining the partial scores of the classes at one or more of the levels to determine a joint score.
  • 9. The method of claim 8, wherein the hierarchy is represented as a tree.
  • 10. The method of claim 8, wherein the hierarchy is represented as a directed acyclic graph.
  • 11. The method of claim 8, wherein the hierarchy is traversed in a breadth-first manner.
  • 12. The method of claim 8, wherein the scores at one or more levels are used to eliminate hypotheses from consideration at other levels.
  • 13. The method of claim 12, wherein at a given level all but the highest scoring hypotheses are eliminated from further consideration.
  • 14. The method of claim 12, wherein at a given level all but n highest scoring hypotheses are eliminated from further consideration, for some positive integer n.
  • 15. The method of claim 8, wherein the joint score of a sequence of classes along a path from a top level to a class at another level is determined by summing the partial scores along the path.
  • 16. The method of claim 15, wherein the score of the class at a particular level is determined by marginalizing the joint scores of all paths leading to the class.
US Referenced Citations (28)
Number Name Date Kind
6233575 Agrawal et al. May 2001 B1
6253169 Apte et al. Jun 2001 B1
6507829 Richards et al. Jan 2003 B1
6751614 Rao Jun 2004 B1
7177796 Damerau et al. Feb 2007 B1
7529748 Wen et al. May 2009 B2
7584100 Zhang et al. Sep 2009 B2
7769751 Wu et al. Aug 2010 B1
8041669 Nigam et al. Oct 2011 B2
8239397 Stefik et al. Aug 2012 B2
8527523 Ravid Sep 2013 B1
20020087520 Meyers Jul 2002 A1
20030220922 Yamamoto et al. Nov 2003 A1
20050165607 Di Fabbrizio et al. Jul 2005 A1
20060026152 Zeng et al. Feb 2006 A1
20060095521 Patinkin May 2006 A1
20090100053 Boschee et al. Apr 2009 A1
20090204703 Garofalakis et al. Aug 2009 A1
20090234688 Masuyama et al. Sep 2009 A1
20110004463 Gryc et al. Jan 2011 A1
20110082688 Kim et al. Apr 2011 A1
20110252045 Garg et al. Oct 2011 A1
20110258229 Ni et al. Oct 2011 A1
20110307252 Ju et al. Dec 2011 A1
20120179634 Chen et al. Jul 2012 A1
20120296637 Smiley et al. Nov 2012 A1
20120330958 Xu et al. Dec 2012 A1
20130138641 Korolev et al. May 2013 A1
Foreign Referenced Citations (1)
Number Date Country
WO 03014975 Feb 2003 WO
Non-Patent Literature Citations (13)
Entry
Chen et al., “Diverse Topic Phrase Extraction through Latent Semantic Analysis”, Proceedings of the Sixth International Conference on Data Mining, IEEE, 2006, pp. 1-5.
Lewis, D., “Feature Selection and Feature Extraction for Text Categorization”, Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York pp. 212-217 (Feb. 1992).
Apte, C. et al., "Automated Learning of Decision Rules for Text Categorization", IBM Research Report RC 18879; ACM Transactions on Information Systems, vol. 12, Issue 3, accepted Mar. 1994, pp. 1-20.
Yang, Y. et al., “A Comparative Study on Feature Selection in Text Categorization”, International Conference on Machine Learning, pp. 412-420 (Jul. 1997).
Basu, Sugato et al., "Semi-Supervised Clustering by Seeding," Proceedings of the 19th International Conference on Machine Learning (ICML-2002), Sydney, Australia, Jul. 2002, pp. 19-26.
W. W. Cohen, “Improving a Page Classifier with Anchor Extraction and Link Analysis”, Neural Information Processing Systems Foundation, 2002.
Bryant et al., “Recognizing Intentions in Infant-Directed Speech: Evidence for Universals,” Universals in Infant-Directed Speech: Nov. 2006—in press, Psychological Science.
So-Jeong Youn et al., “Intention Recognition Using a Graph Representation,” World Academy of Science, Engineering and Technology 25, 2007, p. 13-18.
Chakrabarti et al., "Scalable Feature Selection, Classification and Signature Generation for Organizing Large Text Databases into Hierarchical Topic Taxonomies," VLDB Journal, Springer Verlag, Berlin, DE, vol. 7, No. 3, Aug. 1, 1998.
Hofmann et al. “Intention-Based Probabilistic Phrase Spotting for Speech Understanding,” Proc. of the Int. Symp. on Intelligent Multimedia, Video and Speech Processing, ISIMP 2001, Hong Kong.
D. Blei, J. McAuliffe. “Supervised topic models.” Neural Information Processing Systems 21, 2007.
Seungil Huh et al., "Discriminative Topic Modeling Based on Manifold Learning," Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '10), pp. 653-662, Jul. 28, 2010.
S. Lacoste-Julien et al. “DiscLDA: Discriminative learning for dimensionality reduction and classification.” Advances in Neural Information Processing Systems (NIPS) 21, 2009.
Related Publications (1)
Number Date Country
20130317804 A1 Nov 2013 US