Adapter for allowing both online and offline training of a text to text system

Information

  • Patent Grant
  • Patent Number
    7,624,020
  • Date Filed
    Friday, September 9, 2005
  • Date Issued
    Tuesday, November 24, 2009
Abstract
An adapter for text-to-text training. A main corpus is used for training, and a domain-specific corpus is used to adapt the main training according to the training information in the domain-specific corpus. The adaptation is carried out using a technique that may be faster than the main training. The parameter set from the main training is adapted using the domain-specific part.
Description
BACKGROUND

Text-to-text applications, such as machine translation systems, often operate based on training data. A machine translation system may automatically learn from translated documents. The quality of the actual translation is based on the amount and quality of the data, and the precision of the training process. The processing uses a tradeoff between the data quality and the speed of processing.


Machine translation systems learn from language pairs, and thereafter may translate documents from one language in the pair to the other language in the pair. Translation quality may be greatly improved by providing field specific translation information, that is, translation information that is specific to the field of the information that is being translated.


SUMMARY

The present application teaches a system which allows a generic training to be done by the translation developer. The translation developer can use a slow but accurate training technique for that generic training. Training for a text-to-text application system may thus be done in two parts: generic training carried out first, with emphasis on accuracy, followed by specific training carried out with emphasis on speed. The data from the specific training is used to “adapt” the data created by the generic training.


The system may be provided to the customer with the generic training already completed. The field-specific training can be done by the customer, as part of a customization process for the translation. The field-specific training uses a technique that is faster, but less accurate, than the generic training.


Techniques are disclosed for the faster training techniques, and for merging the two training parts without completely re-training.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a hardware block diagram of an embodiment.





DETAILED DESCRIPTION

The general structure and techniques are described herein, together with more specific embodiments that can be used to effect different ways of carrying out the more general goals.


Under the current system, a generic training is first carried out, and customers may also provide their specific training material. The software build is then customized to the training information. This may customize the software to work better than the generic translation software. However, it was noticed that customers often have data that they do not want to disclose. The data may be proprietary data, or may be classified data. For example, a military customer may have classified data which is available in both languages and could be used as very powerful specific training information. However, security restrictions may not allow that data to leave the customer's control.


According to an embodiment, a generic translation system is provided, which allows system adaptation on the customer's machine which is already running the pre-trained generic translation software. Alternately, the adaptation can be done at the same location as the generic training, to create an adapted parameter set that can be used for a text-to-text operation.


An embodiment is shown in FIG. 1. Generic training data 110 is used by the translator developer 115 to produce a set of generic parameters 120. The sources may be parallel corpora of multiple language information. Specifically, the sources may include translation memories, probabilistic and non-probabilistic word- and phrase-based dictionaries, glossaries, Internet information, parallel corpora in multiple languages, non-parallel corpora in multiple languages having similar subject matter, and human-created translations. The developer, at 115, can use a very rigorous system of learning from the generic training data. This may be a relatively time consuming process. As an example, the generic system may use a 100 million word database, and might take from one to four weeks to process the contents of the database.


The data information is also supplemented with user training data shown as 125. The user training data is optimized for use in the specific translation field, and hence is domain specific. The fast training module 130 processes this data. The fast training module 130 may use a different training technique than the in-house training system, one which is optimized for speed rather than accuracy. The user training data may include fewer words than the generic training data, typically between ½ million and 5 million words. The user training system may train 2-10 times faster than the in-house training system 115.


The user training creates the user domain specific parameter base 135.


The parameters are merged using a merge module 140 which creates a merged parameter base 145 that includes information from both generic training data 110 and the user training data 125. The merged parameter base is then used by the text to text application 100, which may be a general purpose computer or processor that is programmed for a text-to-text application, such as translation. A foreign language sentence 150 is input into the translator 100 and converted to an English-language translation 155 or vice versa. The translation is based on both sets of parameter databases 120 and 135.


The system used herein is called an adapter, and relies on two different kinds of adaptation: the so-called off-line adaptation at 115, which uses generic training data, and the online adaptation at 130, which adapts the generic data to the specific environment. The online adaptation uses a fast and lightweight word alignment model, similar to the models that are used in real-time speech recognition systems. The online adaptation is then merged with the generic system parameters using a set of mathematical formulas that allow for an effective parameter merge. The formulas calculate approximated parameter values as if both databases had been trained together using the off-line adaptation scheme. This avoids completely retraining the parameters from the generic data, and may speed up the adaptation process.


A previous solution to the issue of a general corpus and a domain corpus combined the two corpora and completely retrained the system. This was typically done “offline”, since it was a relatively time-consuming process. Moreover, it requires disclosure of the user's data, which, as described above, cannot always be done. The table merge described herein makes this system better able to operate in the off-line/online model.


The merged parameters 145 are used for the translator 100.


The Parameter Merge Module 140 combines parameters from the generic model 115 and the domain model 130. Each model is trained separately. A generic parameter is trained only from the generic data. This training is done at the developer, and may take more than two weeks, since the training data is large (over 100 million words). The user domain parameters are trained only from user data, which is typically small (less than 1 million words). Because the two models are trained separately, the adaptation process is quite fast, since only the user parameters need to be trained.


If more computational resources are spent on the adaptation, then one way of processing is to combine the generic data and user data, and to run the full training from scratch. It may also be important to duplicate the user data multiple times so that the user data has a significant effect on the model. As the data sizes are quite different (typically, generic data is about 100 million words, and user data is less than 1 million words), such user data duplication may be important. This method (combining the generic and the user data, and training an adapter from scratch) is called offline adaptation.
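As a concrete illustration, the offline scheme above can be sketched in a few lines. The corpora are represented here as plain token lists and the “training” is reduced to unigram counting; both are simplifying assumptions, not part of the patented system.

```python
# Sketch of offline adaptation: duplicate the small user corpus K times
# and train a single model on the concatenation. The token-list corpora
# and unigram-count "training" are illustrative stand-ins.
from collections import Counter

def offline_adapt(generic_corpus, user_corpus, K):
    """Concatenate the generic data with K copies of the user data."""
    combined = list(generic_corpus) + list(user_corpus) * K
    return Counter(combined)  # toy "training": unigram counts

counts = offline_adapt(["the", "cat"], ["reactor"], 3)
```

With K=3, the single user token carries three times its raw weight in the combined counts, which is exactly the duplication effect the merge formulas below emulate without retraining.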


The Basic Merge Formula


The following formulas are used for online adaptation.


For a conditional model P(e|f), two models Pg(e|f) and Pd(e|f) are merged as follows:










P(e|f) = λfK·Pg(e|f) + (1−λfK)·Pd(e|f)  (1)







where






λfK = Cg(f) / (Cg(f) + K·Cd(f))  (2)







The formula combines a generic model Pg(e|f) and a domain model Pd(e|f) by a linear mixture weight λfK. A different mixture weight is assigned for each model conditioning variable f. The weight is defined by the count of f in the general corpus, Cg(f), and in the domain corpus, Cd(f). The domain weight factor K is a global constant for all f.


Intuitively, the mixture weight is adjusted according to the word frequencies. If a word f is observed frequently in the generic data, but only infrequently in the domain data, the probability P(e|f) should stay the same as, or similar to, Pg(e|f). On the other hand, if a word f is observed very frequently in the domain data, then P(e|f) should be close to Pd(e|f).
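The count-weighted mixture of formulas (1) and (2) can be sketched as follows; the function names and the use of plain floats for the probabilities are illustrative assumptions.

```python
def mixture_weight(c_g, c_d, K):
    """Formula (2): lambda_f^K from the general/domain counts of f."""
    return c_g / (c_g + K * c_d)

def merged_prob(p_g, p_d, c_g, c_d, K):
    """Formula (1): count-weighted linear mixture of the two models."""
    lam = mixture_weight(c_g, c_d, K)
    return lam * p_g + (1 - lam) * p_d

# f is common in the general corpus (1000) but rarer in the domain
# corpus (50); with K=10 the domain still pulls the estimate noticeably.
p = merged_prob(p_g=0.5, p_d=0.4, c_g=1000, c_d=50, K=10)
```

Raising K, or observing f more often in the domain corpus, shifts the result toward Pd(e|f), matching the intuition stated above.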


Derivation


This section shows that the formula actually emulates the domain duplication effect. A bigram language model is used as an example here. Note that the adapter formula is applicable to any conditional model, such as a Model-1 translation model, an n-gram language model, or any other conditional model.


Consider, for example, two bigram language models, Pg(w2|w1) and Pd(w2|w1), that are separately trained from general data and from domain data.











Pg(w2|w1) = Cg(w1w2) / Cg(w1)  and  Pd(w2|w1) = Cd(w1w2) / Cd(w1)  (3)








where Cg(w) and Cd(w) are the number of occurrences of w in the general data and in the domain data, respectively.


When the domain data is duplicated by K times and concatenated with the general data, and a single language model is trained with this data, such language model Pg+Kd(w2|w1) is:











Pg+Kd(w2|w1) = (Cg(w1w2) + K·Cd(w1w2)) / (Cg(w1) + K·Cd(w1))  (4)

    = Cg(w1w2) / (Cg(w1) + K·Cd(w1)) + K·Cd(w1w2) / (Cg(w1) + K·Cd(w1))  (5)








By introducing duplicated term weights λw1K where










λw1K = Cg(w1) / (Cg(w1) + K·Cd(w1))  (6)

1 − λw1K = K·Cd(w1) / (Cg(w1) + K·Cd(w1))  (7)








the domain duplicated language model becomes











Pg+Kd(w2|w1) = λw1K · Cg(w1w2)/Cg(w1) + (1−λw1K) · K·Cd(w1w2)/(K·Cd(w1))  (8)

    = λw1K·Pg(w2|w1) + (1−λw1K)·Pd(w2|w1)  (9)







The last line shows that combining two separate language models Pg(w2|w1) and Pd(w2|w1) with duplicated term weights λw1K is equivalent to a single language model Pg+Kd(w2|w1), trained from concatenated training data with the domain data duplicated K times.
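This equivalence can be checked numerically on toy bigram counts; the variable names below are illustrative.

```python
# Check that mixing two separately trained bigram models with weight
# lambda_{w1}^K (formula 9) reproduces the model trained on data with
# the domain portion duplicated K times (formula 4).
def mixed_vs_direct(cg_big, cg_uni, cd_big, cd_uni, K):
    p_g = cg_big / cg_uni          # Pg(w2|w1), formula (3)
    p_d = cd_big / cd_uni          # Pd(w2|w1), formula (3)
    lam = cg_uni / (cg_uni + K * cd_uni)   # formula (6)
    mixed = lam * p_g + (1 - lam) * p_d    # formula (9)
    direct = (cg_big + K * cd_big) / (cg_uni + K * cd_uni)  # formula (4)
    return mixed, direct

mixed, direct = mixed_vs_direct(cg_big=30, cg_uni=60, cd_big=4, cd_uni=10, K=5)
```

For these counts both paths give 50/110, confirming that the term-weighted mixture and the duplication-trained model agree.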


Formula Application


The model parameter for this translation system may be formed of several different types of parameters. The adaptation formula above is modified differently according to the different types of parameters.


Word Probability Parameters


The formula above is directly applicable to parameters that are in the form of probabilities conditioned on a single word.


As seen in formula (1), a merged entry P(e|f) is independent of the domain data if its probability Pd(e|f) is zero, that is

P(e|f)=λfK·Pg(e|f) if Pd(e|f)=0  (10)


This happens when the words e and f do not co-occur in any sentence in the domain corpus.


Therefore, an efficient implementation is the following:










P(e|f) = Pov(e|f)   if Cd(e,f) ≠ 0
P(e|f) = λfK·Pg(e|f)   otherwise  (11)








where

Pov(e|f) = λfK·Pg(e|f) + (1−λfK)·Pd(e|f)  (12)


Cd(e,f) is the number of sentences (in the domain corpus) in which both e and f appear. The size of the override table Pov(e|f) is identical to the size of the word probability table from the domain corpus.


The override table is generated by the formula (12) at the end of the adaptation process. In the adapted version of the decoder, the override table is used in conjunction with the original generic word probability table Pg(e|f) as specified in the formula (11).


The following is a sample implementation of formulas (11) and (12):














# TO GENERATE OVERRIDE TABLE
PROCEDURE GENERATEOVERRIDEWPTABLE(GENERICWPTABLE,DOMAINWPTABLE,GENCNT,DOMCNT,K) :==
    FOR EACH (E,F) IN DOMAINWPTABLE(E,F)
        W = GENCNT(F) / (GENCNT(F) + K * DOMCNT(F))
        OVERRIDEWPTABLE(E,F) = W * GENERICWPTABLE(E,F) + (1−W) * DOMAINWPTABLE(E,F)

# TO USE IN THE DECODER
FUNCTION GETWPVALUE(E,F,GENERICWPTABLE,OVERRIDEWPTABLE,GENCNT,DOMCNT,K) :==
    IF (E,F) IS FOUND IN THE OVERRIDEWPTABLE(E,F)
    THEN
        RETURN OVERRIDEWPTABLE(E,F)
    ELSE
        W = GENCNT(F) / (GENCNT(F) + K * DOMCNT(F))
        RETURN W * GENERICWPTABLE(E,F)










The OverrideWPTable(e,f) may be a fixed-size array. Each element stores the precomputed value of formula (12). The size of the OverrideWPTable(e,f) is identical to the size of the word probability table from the domain corpus. The weight term W=GenCnt(f)/(GenCnt(f)+K*DomCnt(f)) implements the formula (2).
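A minimal Python sketch of the pseudocode above, assuming the tables are plain dictionaries keyed by (e, f) pairs rather than the fixed-size arrays the text describes:

```python
# Sketch of formulas (11)-(12): precompute an override table over the
# (small) domain vocabulary; fall back to a scaled generic probability
# for pairs never seen in the domain data. Dict-based tables are an
# assumption for illustration.
def generate_override_wp(generic_wp, domain_wp, gen_cnt, dom_cnt, K):
    override = {}
    for (e, f), p_d in domain_wp.items():
        w = gen_cnt[f] / (gen_cnt[f] + K * dom_cnt[f])   # formula (2)
        override[(e, f)] = w * generic_wp.get((e, f), 0.0) + (1 - w) * p_d
    return override

def get_wp(e, f, generic_wp, override, gen_cnt, dom_cnt, K):
    if (e, f) in override:
        return override[(e, f)]
    w = gen_cnt[f] / (gen_cnt[f] + K * dom_cnt.get(f, 0))
    return w * generic_wp.get((e, f), 0.0)

generic_wp = {("casa", "house"): 0.6, ("hogar", "house"): 0.4}
domain_wp = {("casa", "house"): 0.9}
gen_cnt, dom_cnt, K = {"house": 100}, {"house": 10}, 10
ov = generate_override_wp(generic_wp, domain_wp, gen_cnt, dom_cnt, K)
```

Only the domain-observed pair ("casa", "house") gets an override entry; the pair ("hogar", "house") falls through to the scaled generic probability, as formula (11) specifies.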


Phrase Probability Parameters


A simple application is possible for parameters that are in the form of probabilities conditioned on a sequence of words, or a phrase:











P(eee|fff) = λfffK·Pg(eee|fff) + (1−λfffK)·Pd(eee|fff)  (13)

where

λfffK = Cg(fff) / (Cg(fff) + K·Cd(fff))  (14)








and Cg(fff) and Cd(fff) are the numbers of occurrences of the phrase fff in the general corpus and the domain corpus. These phrase counts are provided from the generic data, in addition to the generic model parameters. If these phrase counts cannot be provided, they can be further approximated by assuming that the counts take a uniform value u over all possible phrases.










λfffK = u / (u + K·u) = 1 / (1 + K)  (15)







This turns out to be a linear model mixture in which the weight is determined by the domain corpus duplication factor K.










P(eee|fff) = 1/(1+K)·Pg(eee|fff) + K/(1+K)·Pd(eee|fff)  (16)







A similar approach for word probability parameters can be implemented as:










P(eee|fff) = Pov(eee|fff)   if Cd(eee,fff) ≠ 0
P(eee|fff) = 1/(1+K)·Pg(eee|fff)   otherwise  (17)








where











Pov(eee|fff) = 1/(1+K)·Pg(eee|fff) + K/(1+K)·Pd(eee|fff)  (18)







The following is an exemplary implementation of formulas (17) and (18):














# TO GENERATE OVERRIDE TABLE
PROCEDURE GENERATEOVERRIDEPHTABLE(GENERICPHTABLE,DOMAINPHTABLE,K) :==
    FOR EACH (EEE,FFF) IN THE DOMAINPHTABLE
        OVERRIDEPHTABLE(EEE,FFF) =
            (1/(1+K)) * GENERICPHTABLE(EEE,FFF)
            + (K/(1+K)) * DOMAINPHTABLE(EEE,FFF)

# TO USE IN THE DECODER
FUNCTION GETPHVALUE(EEE,FFF,GENERICPHTABLE,OVERRIDEPHTABLE,K) :==
    IF (EEE,FFF) IS FOUND IN THE OVERRIDEPHTABLE
    THEN
        RETURN OVERRIDEPHTABLE(EEE,FFF)
    ELSE
        RETURN GENERICPHTABLE(EEE,FFF) / (1+K)










The OverridePHTable is identical to the one from the domain corpus, except that each entry is replaced with the override value.
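A minimal Python rendering of this phrase-table pseudocode, again assuming dictionary-based tables; with uniform phrase counts the mixture weight collapses to the global constant 1/(1+K).

```python
# Sketch of formulas (17)-(18): override entries and the fallback share
# one global weight 1/(1+K). Dict-based tables are an assumption.
def generate_override_ph(generic_ph, domain_ph, K):
    w = 1.0 / (1.0 + K)
    return {pair: w * generic_ph.get(pair, 0.0) + (1 - w) * p_d
            for pair, p_d in domain_ph.items()}

def get_ph(pair, generic_ph, override, K):
    if pair in override:
        return override[pair]
    return generic_ph.get(pair, 0.0) / (1.0 + K)

gp = {("a b", "x y"): 0.5, ("c", "z"): 0.25}
dp = {("a b", "x y"): 0.8}
ov = generate_override_ph(gp, dp, 4)
```

Phrases seen in the domain table are replaced by the precomputed mixture; all others are simply down-weighted by 1/(1+K).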


Count-Based Parameters


Some model parameters are not in the form of probabilities, but rather are in the form of simple counts. For this type of parameter, the basic formula (1) cannot be applied. Instead, the following formula is applied to have the same effect.

C(e,f)=Cg(e,f)+K·Cd(e,f)  (19)


An efficient implementation is the following:










C(e,f) = Cov(e,f)   if Cd(e,f) ≠ 0
C(e,f) = Cg(e,f)   otherwise  (20)








where

Cov(e,f)=Cg(e,f)+K·Cd(e,f)  (21)


The override table Cov(e,f) is generated by the formula (21) at the end of the adaptation process. In the adapted version of the decoder, the override lexicon table is used in conjunction with the original generic table Cg(e,f) as specified in formula (20).


The following is an exemplary implementation of formula (20) and (21):














# TO GENERATE OVERRIDE TABLE
PROCEDURE GENERATECTABLE(GENERICCTABLE,DOMAINCTABLE,K) :==
    FOR EACH (E,F) IN THE DOMAINCTABLE(E,F)
        OVERRIDECTABLE(E,F) = GENERICCTABLE(E,F) + K * DOMAINCTABLE(E,F)

# TO USE IN THE DECODER
FUNCTION GETCVALUE(E,F,GENERICCTABLE,OVERRIDECTABLE,K) :==
    IF (E,F) IS FOUND IN THE OVERRIDECTABLE(E,F)
    THEN
        RETURN OVERRIDECTABLE(E,F)
    ELSE
        RETURN GENERICCTABLE(E,F)










The OverrideCTable(e,f) is a fixed-size array. Each element stores the precomputed value of formula (21). The size of the OverrideCTable(e,f) may be identical to the size of the parameter table from the domain corpus.
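A minimal Python rendering of the count-based merge, assuming dictionary-based count tables:

```python
# Sketch of formulas (20)-(21): count-based parameters merge by simple
# addition, with the domain counts scaled by K. Dict tables are an
# illustrative assumption.
def generate_override_c(generic_c, domain_c, K):
    return {pair: generic_c.get(pair, 0) + K * c_d
            for pair, c_d in domain_c.items()}

def get_c(pair, generic_c, override):
    return override.get(pair, generic_c.get(pair, 0))

gc = {("e1", "f1"): 7}
dc = {("e1", "f1"): 2, ("e2", "f1"): 1}
ov = generate_override_c(gc, dc, 3)
```

Pairs present in the domain table get the combined count; all other pairs fall back to the untouched generic count.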


Fast Train Module


The Fast Train Module uses a simpler version of the in-house Training System. It may use simple models such as bag-of-words translation models, and/or Hidden Markov models.


Simple Fast Training


The simplest form of the Fast Train Module is just to use such simple models without any additional data from the generic parameters. If the user data is sufficiently large, this scheme works well in practice.


Adaptive Fast Training


To improve the adaptation performance, especially for small user data, the generic parameters are additionally used to train the user data. In general, the training method builds a word-based conditional probability table based on the following formula:











Pn(e|f) = cn−1(e,f) / Σe cn−1(e,f)  (22)







Where cn(e,f) is the alignment count from the parameters at the n-th iteration.
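The normalization step of formula (22) can be sketched as follows; the flat {(e, f): count} table layout is an assumption.

```python
# Sketch of formula (22): each new conditional probability renormalizes
# the previous iteration's alignment counts over all e for a fixed f.
def update_probs(align_counts):
    """align_counts: {(e, f): c_{n-1}(e, f)} -> {(e, f): P_n(e|f)}"""
    totals = {}
    for (e, f), c in align_counts.items():
        totals[f] = totals.get(f, 0.0) + c
    return {(e, f): c / totals[f] for (e, f), c in align_counts.items()}

p = update_probs({("a", "f"): 3.0, ("b", "f"): 1.0})
```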


Seed Adaptation


In non-adapter systems, the initial parameter P0(e|f) is usually a uniform parameter. In the adapter environment, the generic parameter Pg(e|f) can be used instead.


Use Alignment Counts from General Corpus


If the generic parameter provides the final alignment counts cgN(e,f) from the general corpus, it can be used in each iteration of the user data training in the following way:











Pn(e|f) = [λfK·cgN(e,f) + (1−λfK)·cdn−1(e,f)] / Σe {λfK·cgN(e,f) + (1−λfK)·cdn−1(e,f)}  (23)
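Formula (23) can be sketched as follows, assuming flat count dictionaries and restricting the update to pairs present in the domain counts:

```python
# Sketch of formula (23): mix the fixed generic alignment counts with
# the current iteration's domain counts, then normalize over e for each
# f. The per-f weights and the dict layout are illustrative assumptions.
def adapted_update(c_g, c_d, lam):
    """c_g, c_d: {(e, f): count}; lam: {f: lambda_f^K}."""
    mixed = {(e, f): lam[f] * c_g.get((e, f), 0.0) + (1 - lam[f]) * c
             for (e, f), c in c_d.items()}
    totals = {}
    for (e, f), c in mixed.items():
        totals[f] = totals.get(f, 0.0) + c
    return {(e, f): c / totals[f] for (e, f), c in mixed.items()}

p = adapted_update({("a", "f"): 2.0, ("b", "f"): 2.0},
                   {("a", "f"): 4.0, ("b", "f"): 0.0},
                   {"f": 0.5})
```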








Use Generic Model for Each Iteration


Instead of using cgN(e,f), the generic model table Pg(e|f) can be used in each iteration.











Pn(e|f) = λ·Pg(e|f) + (1−λ)·cn−1(e,f) / Σe cn−1(e,f)  (24)








In this formula, a constant weight λ can be used.


Term Weighted Adaptation


The adaptation formula (1) can alternatively be used as:











Pn(e|f) = λfK·Pg(e|f) + (1−λfK)·cn−1(e,f) / Σe cn−1(e,f)  (25)








Use Prior Count Instead of Alignment Count


The alignment count cgN(e,f) from the general corpus can be approximated using the term frequency count Cg(f).

cgN(e,f) ≈ Cg(f)·Pg(e|f)  (26)
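This approximation reduces to a one-line lookup; the dictionary-based tables are illustrative assumptions.

```python
# Sketch of formula (26): when the generic alignment counts are not
# shipped with the model, approximate c_g^N(e,f) from the term frequency
# of f and the generic probability table.
def approx_alignment_count(e, f, term_freq_g, p_g):
    return term_freq_g.get(f, 0) * p_g.get((e, f), 0.0)

c = approx_alignment_count("e", "f", {"f": 200}, {("e", "f"): 0.3})
```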


The operations described herein can be carried out on any computer, e.g., an Intel processor, running Windows and/or Java. Programming may be done in any suitable programming language, including C and Java.


In one embodiment, a method may comprise first carrying out a first training using at least one corpus of language information, using a first training operation to obtain a first parameter set; second carrying out a second training using a domain specific corpus, using a second training operation which operates faster than said first training operation, and which is less accurate than said first training operation, to obtain a second parameter set; and using said second parameter set to adapt said first parameter set to carry out a text-to-text operation. In another embodiment, said first carrying out may be carried out at a first location, and said second carrying out may be carried out at a second location, different than said first location.


Although only a few embodiments have been disclosed in detail above, other embodiments are possible and are intended to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in other ways. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art. For example, other merging and training techniques may be used.


Also, only those claims which use the words “means for” are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims.

Claims
  • 1. A computer implemented method, comprising: first carrying out a first generic training using at least one corpus of language information based at least in part on Internet information, using a first generic training operation to obtain a first generic parameter set; second carrying out a second domain specific training using a fast train module associated with a domain specific corpus, said fast train module including a second domain specific training operation which operates faster than said first generic training operation, and which is less accurate than said first generic training operation, to obtain a second domain specific parameter set; merging said first generic parameter set and said second domain specific parameter set into a merged parameter set, and using said merged parameter set for a text to text operation, wherein said merging comprises a weighted merge between said first generic parameter set and said second domain specific parameter set; and using said second domain specific parameter set to adapt said first generic parameter set to carry out said text to text operation, wherein said using comprises using partial information from the first generic training and partial information from the second domain specific training, forming an original table and an override table, and using both said original table and said override table as part of said text to text operation.
  • 2. A computer implemented method as in claim 1, wherein said text to text operation is a translation between first and second languages.
  • 3. A computer implemented method as in claim 1, wherein said merging comprises an adaptive training merge between said first generic parameter set and said second domain specific parameter set.
  • 4. A computer implemented method as in claim 1, wherein said weighted merge is sensitive to frequency of specified terms in the corpus.
  • 5. A computer implemented method as in claim 1, wherein said second domain specific training operation uses parameters from said first generic training operation.
  • 6. A computer implemented method as in claim 5, wherein said second domain specific training operation uses a basic seed probability from the first generic training operation.
  • 7. A computer implemented method as in claim 1, wherein said merging uses an adaptive merging.
  • 8. A computer implemented method as in claim 7, wherein said adaptive merging uses a merge which is proportional to a frequency of a specified term in a training database.
  • 9. A computer implemented method as in claim 1, wherein said merging comprises adding indications of counts.
  • 10. A computer implemented method as in claim 1, wherein said merging comprises adding information that represent counts related to alignment.
  • 11. A computer implemented method as in claim 1, wherein said first carrying out is carried out at a first location, and said second carrying out is carried out at a second location, different than said first location.
  • 12. A computer implemented method as in claim 1, wherein the override table includes precomputed versions of specified formulas.
  • 13. A computer implemented method as in claim 1, wherein the partial information includes probabilities.
  • 14. A computer implemented method as in claim 1, wherein the partial information includes counts.
  • 15. An apparatus, comprising: a first training computer at a first location, carrying out a first generic training using at least one corpus of information based at least in part on Internet information, using a first generic training operation to obtain a first generic parameter set; and a second training computer, at a second location, different than the first location, carrying out a second domain specific training using a fast train module associated with a domain specific corpus that has different information than said at least one corpus, said fast train module including a second domain specific training operation which operates faster than said first generic training operation, and which is less accurate than said first generic training operation, to obtain a second domain specific parameter set, and using said first generic parameter set and said second domain specific parameter set together for a text to text operation, wherein said second training computer also operates to merge said first generic parameter set and said second domain specific parameter set into a merged parameter set, to use said merged parameter set for said text to text operation, and to carry out a weighted merge between said first generic parameter set and said second domain specific parameter set, and wherein said second training computer uses partial information from the first generic training and partial information from the second domain specific training, forms an original table and an override table, and uses both said original table and said override table as part of said text to text operation.
  • 16. An apparatus as in claim 15, wherein said text to text operation is a translation between first and second languages.
  • 17. An apparatus as in claim 15, wherein said second training computer carries out an adaptive training merge between said first generic parameter set and said second domain specific parameter set.
  • 18. An apparatus as in claim 15, wherein said override table represents information which is present in both the at least one corpus and the domain specific corpus.
  • 19. An apparatus as in claim 15, wherein the override table includes precomputed versions of specified formulas.
  • 20. An apparatus as in claim 15, wherein the partial information includes probabilities.
  • 21. An apparatus, comprising: a training part including at least one computer, which carries out a first generic training for a text to text operation using at least one corpus of training information based at least in part on Internet information, to obtain a first generic parameter set and, at a different time than the first generic training, carrying out a second domain specific training using a fast train module associated with a domain specific corpus that has different information than said at least one corpus, said fast train module including a second domain specific training operation which operates faster than said first generic training operation, and which is less accurate than said first generic training operation, to obtain a second domain specific parameter set, and using said second domain specific parameter set to adapt said first generic parameter set to create an adapted parameter set, and to use the adapted parameter set for a text to text operation, wherein said at least one training computer merges said first generic parameter set and said second domain specific parameter set into a merged parameter set, and uses said merged parameter set for said text to text operation, and carries out a weighted merge between said first generic parameter set and said second domain specific parameter set, and wherein said at least one training computer uses partial information from the first generic training and partial information from the second domain specific training, forms an original table and an override table, and uses both said original table and said override table as part of said text to text operation.
  • 22. An apparatus as in claim 21, wherein said text to text operation is a translation between first and second languages.
  • 23. An apparatus as in claim 21, wherein said training computer carries out an adaptive training merge between the first generic parameter set and said second domain specific parameter set.
  • 24. An apparatus as in claim 21, wherein said weighted merge is sensitive to frequency of specified terms in the corpus.
Related Publications (1)
Number Date Country
20070094169 A1 Apr 2007 US