Training tree transducers for probabilistic operations

Information

  • Patent Grant
  • 7698125
  • Patent Number
    7,698,125
  • Date Filed
    Tuesday, March 15, 2005
  • Date Issued
    Tuesday, April 13, 2010
Abstract
Tree transducers can be trained for use in probabilistic operations such as those involved in statistical based language processing. Given sample input/output pairs as training, and given a set of tree transducer rules, the information is combined to yield locally optimal weights for those rules. This combination is carried out by building a weighted derivation forest for each input/output pair and applying counting methods to those forests.
Description
BACKGROUND

Many different applications are known for tree transducers. They have been used in calculus and other forms of higher mathematics. Tree transducers are used for decidability results in logic, for mathematically modeling the theories of syntax-directed translations and program schemata, and for syntactic pattern recognition, logic programming, term rewriting and linguistics.


Within linguistics, automated language monitoring programs often use probabilistic finite state transducers that operate on strings of words. For example, speech recognition may transduce acoustic sequences to word sequences using left-to-right substitution. Tree-based models using probabilistic techniques have been used for machine translation, machine summarization, machine paraphrasing, natural language generation, parsing, language modeling, and others.


A special kind of tree transducer, often called an R transducer, operates top-down from the root toward the frontier, with R standing for “root to frontier”. At each point within the operation, the transducer chooses a production to apply. That choice is based only on the current state and the current root symbol. The traversal continues until there are no more state-annotated nodes.


The R transducer relates pairs of trees T1 and T2, specifying the conditions under which some sequence of productions applied to T1 results in T2. This is similar to what is done by a finite state transducer on strings.


For example, if a finite state transition from state q to state r eats symbol A and outputs symbol B, then this can be written as an R production q(A x0)→B(r x0).


The R transducer may also copy whole trees, transform subtrees, delete subtrees, and perform other operations.


SUMMARY

The present application teaches a technique of training tree transducers from sample input/output pairs. A first embodiment trains the tree transducers based on tree pairs, while a second embodiment trains the tree transducers based on tree/string pairs. Techniques are described that facilitate the computation and simplify the information as part of the training process.


An embodiment is described which uses these techniques to train transducers for statistical based language processing, e.g., language recognition and/or language generation. However, it should be understood that this embodiment is merely exemplary, and that other applications for the training of the tree transducers are contemplated.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects will now be described in detail with reference to the accompanying drawings, wherein:



FIG. 1 shows derivation trees and their simplifications;



FIG. 2A shows a flowchart;



FIG. 2B shows a speech engine that can execute the flowchart of FIG. 2A;



FIG. 2C shows a flowchart of a second embodiment; and



FIGS. 3 and 4 show a model and parameter table.





DETAILED DESCRIPTION

The present application describes training of tree transducers. The embodiment describes training of tree transducers, e.g., probabilistic R transducers. These transducers may be used for any probabilistic purpose. In an embodiment, the trained transducers are used for linguistic operations, such as machine translation, paraphrasing, text compression and the like. Training data may be obtained in the form of tree pairs. Linguistic knowledge is automatically distilled from those tree pairs and transducer information.


TΣ represents the set of trees over the alphabet Σ. An alphabet is a finite set of symbols. Trees may also be written as strings over the set Σ.


A regular tree grammar or RTG allows compactly representing a potentially infinite set of trees. A weighted regular tree grammar additionally associates a weight with each tree in the set. The grammar can be described as a quadruple G = (Σ, N, S, P), where Σ is the alphabet, N is the set of non-terminals, S is the starting (initial) non-terminal, and P is the set of weighted productions. The productions are written left to right. A weighted RTG can accept information from an infinite number of trees. More generally, the weighted RTG can be any list which includes information about the trees in a tree grammar, in a way that allows a weight to be updated rather than a new entry being added each time the same information is obtained again.


The RTG can take the following form:










TABLE I

Σ = {S, NP, VP, PP, PREP, DET, N, V, run, the, of, sons, daughters}
N = {qnp, qpp, qdet, qn, qprep}
S = q
P = {q →1.0 S(qnp, VP(VB(run))),
     qnp →0.6 NP(qdet, qn),
     qnp →0.4 NP(qnp, qpp),
     qpp →1.0 PP(qprep, qnp),
     qdet →1.0 DET(the),
     qprep →1.0 PREP(of),
     qn →0.5 N(sons),
     qn →0.5 N(daughters)}
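
Purely by way of illustration, the wRTG of TABLE I might be encoded in software roughly as follows. This is a minimal sketch: the dictionary layout, the tuple encoding of trees, and the name WRTG are assumptions made for the example, not part of the described system.

    # Illustrative encoding of the weighted RTG of TABLE I.
    # Trees are nested tuples: ("S", child1, child2, ...); leaves are plain strings.
    # Each nonterminal maps to a list of (weight, right-hand side) pairs, and a
    # right-hand side may itself contain nonterminals still to be expanded.
    WRTG = {
        "start": "q",
        "productions": {
            "q":     [(1.0, ("S", "qnp", ("VP", ("VB", "run"))))],
            "qnp":   [(0.6, ("NP", "qdet", "qn")),
                      (0.4, ("NP", "qnp", "qpp"))],
            "qpp":   [(1.0, ("PP", "qprep", "qnp"))],
            "qdet":  [(1.0, ("DET", "the"))],
            "qprep": [(1.0, ("PREP", "of"))],
            "qn":    [(0.5, ("N", "sons")),
                      (0.5, ("N", "daughters"))],
        },
    }

The weights of productions that share a left-hand side sum to one, mirroring the probabilities listed in TABLE I.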









The tree is parsed from left to right, so that the leftmost non-terminal is the next one to be expanded as the next item in the RTG. The leftmost derivations of G build a tree in pre-order from left to right according to

LD(G) ≡ {(t, ((p_1, r_1), . . . , (p_n, r_n))) ∈ D_G | ∀ 1 ≤ i < n: p_{i+1} >_lex p_i}


The total weight of t in G is given by W_G : T_Σ → ℝ, the sum of the weights of the leftmost derivations producing t:

W_G(t) ≡ Σ_{(t, h) ∈ LD(G)} Π_{i=1..n} w_i

where h = (h_1, . . . , h_n) and h_i = (p_i, (l_i, r_i, w_i)).







Therefore, for every weighted context free grammar, there is an equivalent weighted RTG that produces its weighted derivation trees. Weighted RTGs generate exactly the recognizable tree languages.
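
Continuing the sketch above (and reusing its WRTG dictionary, which is an assumption of the example rather than part of the described system), the total weight W_G(t) of a tree can be computed by recursively summing production weights over all derivations:

    def tree_weight(grammar, nonterminal, tree):
        """Total weight of deriving `tree` from `nonterminal`: the sum, over all
        derivations, of the product of the production weights used."""
        return sum(w * _match_weight(grammar, rhs, tree)
                   for w, rhs in grammar["productions"].get(nonterminal, []))

    def _match_weight(grammar, rhs, tree):
        # A nonterminal in the right-hand side is expanded against the whole subtree.
        if isinstance(rhs, str) and rhs in grammar["productions"]:
            return tree_weight(grammar, rhs, tree)
        # A terminal leaf must match exactly.
        if isinstance(rhs, str):
            return 1.0 if rhs == tree else 0.0
        # An internal node must agree in label and arity; children multiply.
        if not isinstance(tree, tuple) or rhs[0] != tree[0] or len(rhs) != len(tree):
            return 0.0
        product = 1.0
        for rhs_child, tree_child in zip(rhs[1:], tree[1:]):
            product *= _match_weight(grammar, rhs_child, tree_child)
        return product

    # Example: S(NP(DET(the), N(sons)), VP(VB(run))) has weight 1.0*0.6*1.0*0.5 = 0.3
    t = ("S", ("NP", ("DET", "the"), ("N", "sons")), ("VP", ("VB", "run")))
    # tree_weight(WRTG, WRTG["start"], t)  -> 0.3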


An extended transducer xR is also used herein. According to this extended transducer, an input subtree matching pattern in state q is converted into its right hand side (“rhs”), and its Q-paths are replaced by their recursive transformations. The right hand side of these rules may have no states for further expansion (terminal rules) or may have states for further expansion. In notation form








⇒_X ≡ {((a, h), (b, h·(i, (q, pattern, rhs, w)))) | (q, pattern, rhs, w) ∈ R ∧ i ∈ paths_a ∧ label_a(i) = q ∧ pattern(a(i·(1))) = 1 ∧ b = a[i ← rhs[p ← q′(a(i·(1)·i′)), ∀ p ∈ paths_rhs : label_rhs(p) = (q′, i′)]]}







where b is derived from a by application of a rule (q, pattern) → rhs to an unprocessed input subtree a(i), which is in state q.


Its output is replaced by the output given by rhs. Its non-terminals are replaced by the instruction to transform descendant input subtrees.
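
By way of a hedged illustration only, a single xR derivation step of this kind might be sketched as follows; the rule representation, the State wrapper, and the child-index convention are assumptions of the sketch, not the notation used above.

    from collections import namedtuple

    # A state-labeled pending subtree, e.g. q(A(C)) is State("q", ("A", ("C",))).
    State = namedtuple("State", ["q", "subtree"])

    def apply_rule(pending, pattern_label, rhs):
        """Apply one rule (q, pattern, rhs, w) to a pending subtree in state q.
        `pattern_label` is the root label the rule expects; in `rhs`, a pair
        ("q2", k) means "transform the k-th child of the matched input subtree
        in state q2".  Returns the instantiated output fragment, or None if the
        pattern does not match."""
        root, *children = pending.subtree
        if root != pattern_label:
            return None

        def instantiate(node):
            if isinstance(node, tuple) and len(node) == 2 and isinstance(node[1], int):
                q_next, child_index = node        # (state, child index): recurse later
                return State(q_next, children[child_index])
            if isinstance(node, tuple):           # output symbol with children
                return (node[0],) + tuple(instantiate(c) for c in node[1:])
            return node                           # plain output symbol

        return instantiate(rhs)

    # The production q(A x0) -> B(r x0) applied to the input subtree A(C) yields
    # B(r(C)): a B node whose child is still pending, to be transformed in state r.
    result = apply_rule(State("q", ("A", ("C",))), "A", ("B", ("r", 0)))
    # result == ("B", State(q="r", subtree=("C",)))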


The sources of a rule r = (q, l, rhs, w) ∈ R are the input paths referred to in the rhs:

sources(rhs) ≡ {i′ | ∃ p ∈ paths_rhs, q′ ∈ Q : label_rhs(p) = (q′, i′)}


The reflexive, transitive closure of ⇒_X is written ⇒*_X, and the derivations of X, written D(X), are the ways of transforming an input tree I (with its root in the initial state) into an output tree O:

D(X) ≡ {(I, O, h) ∈ T_Σ × T_Δ × (paths × R)* | (Q_i(I), ( )) ⇒*_X (O, h)}

The leftmost derivations of X transform the tree-preorder from left to right (always applying a transformation rule to the state-labeled subtree furthest left in its string representation):

LD(X) ≡ {(I, O, ((p_1, r_1), . . . , (p_n, r_n))) ∈ D(X) | ∀ 1 ≤ i < n: p_{i+1} >_lex p_i}

The total weight of (I, O) in X is given by W_X : T_Σ × T_Δ → ℝ, the sum of the weights of the leftmost derivations transforming I to O:

W_X(I, O) ≡ Σ_{(I, O, h) ∈ LD(X)} Π_{i=1..n} w_i

where h = (h_1, . . . , h_n) and h_i = (p_i, (l_i, r_i, w_i)).






The tree transducers operate by starting with the root in the initial state and recursively applying output-generating rules until no states remain, so that there is a complete derivation. In this way, the information (trees and transducer information) can be converted to a derivation forest, stored as a weighted RTG.


The overall operation is illustrated in the flow chart of FIG. 2A; FIG. 2B illustrates an exemplary hardware device which may execute that flowchart. For the application of language translation, a processing module 250 receives data from various sources 255. The sources may be the input and output trees and transducer rules described herein; specifically, these may be translation memories, dictionaries, glossaries, the Internet, and human-created translations. The processor 250 processes this information as described herein to produce translation parameters which are output as 260. The translation parameters are used by language engine 265 in making translations based on input language 270. In the disclosed embodiment, the speech engine is a language translator which translates from a first language to a second language. However, alternatively, the speech engine can be any engine that operates on strings of words, such as a language recognition device in a speech recognition system, a machine paraphraser, natural language generator, modeler, or the like.


The processor 250 and speech engine 265 may be any general purpose computer, and can be effected by a microprocessor, a digital signal processor, or any other processing device that is capable of executing the steps described herein.


The flowchart described herein can be instructions which are embodied on a machine-readable medium such as a disc or the like. Alternatively, the flowchart can be executed by dedicated hardware, or by any known or later discovered processing device.


The system obtains a plurality of input and output trees or strings, and transducer rules with parameters. The parameters may then be used for statistical machine translation. More generally, however, the parameters can be used for any tree transformation task.


At 210, the input tree, output tree and transducer rules are converted to a large set of individual derivation trees, a “derivation forest”.


The derivation forest effectively flattens the rules into trees of depth one. The root is labeled by the original rule. All the non-expanding Δ labeled nodes of the rule are deterministically listed in order. The weights of the derivation trees are the products of the weights of the rules in those derivation trees.



FIG. 1 illustrates an input tree 100 being converted to an output tree 110 and generating derivation trees 130. FIG. 1 also shows the transducer rules 120. All of these are inputs to the system; specifically, the input and output trees are the data obtained from various language translation resources 255, for example. The transducer rules are known. The object of the parsing carried out in FIG. 1 is to derive the derivation trees 130 automatically.


The input/output tree pairs are used to produce a probability estimate for each production in P that maximizes the probability of the output trees given the input trees. The result is a local maximum. The present system uses simplifications to find this maximum.


The technique uses memoization when creating the weighted RTGs. Memoization means that the possible derivations for a given combination are computed only once, which prevents the same combinations from being computed more than once. In this way, the table, here the wRTG, can store the answers for all past queries and return those instead of recomputing them.
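
As a rough, illustrative sketch of this memoization (using a deliberately simplified rule format with no copying or reordering; the names RULES, FOREST and derive are assumptions of the example, not part of the described system), the derivation forest can be built once per (state, input subtree, output subtree) combination:

    # Toy rule format: (state, input label, output label, child states, weight)
    # means "in `state`, an input node labeled `input label` becomes an output
    # node labeled `output label`, with each child processed in the given state".
    RULES = [
        ("q", "A", "X", ("q", "r"), 0.7),
        ("q", "A", "Y", ("q", "r"), 0.3),
        ("q", "a", "x", (), 1.0),
        ("r", "b", "y", (), 1.0),
    ]

    FOREST = {}   # derivation wRTG: nonterminal -> list of (rule, child nonterminals, weight)

    def derive(state, itree, otree):
        """Memoized: build (once) the derivation-forest entry for transforming
        input subtree `itree`, rooted in `state`, into output subtree `otree`.
        Returns the forest nonterminal, or None if no derivation exists."""
        key = (state, itree, otree)
        if key in FOREST:                        # answered before: reuse, don't recompute
            return key if FOREST[key] else None
        FOREST[key] = []
        for rule in RULES:
            s, in_label, out_label, child_states, w = rule
            if s != state or itree[0] != in_label or otree[0] != out_label:
                continue
            if len(itree) - 1 != len(child_states) or len(otree) != len(itree):
                continue
            kids = [derive(cs, ic, oc)
                    for cs, ic, oc in zip(child_states, itree[1:], otree[1:])]
            if all(k is not None for k in kids):
                FOREST[key].append((rule, tuple(kids), w))
        return key if FOREST[key] else None

    # Example: A(a, b) -> X(x, y) has exactly one depth-one production at the root.
    root = derive("q", ("A", ("a",), ("b",)), ("X", ("x",), ("y",)))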


Note the way in which the derivation trees are converted to weighted RTGs. At the start, rule one will always be applied, so the first entry in the RTG represents a 1.0 probability of rule one being applied. The arguments of rule one are 1.12 and 2.11. If 1.12 is applied, rule 2 is always used, while 2.11 can use either rule 3 or rule 4, with the different weightings for the different rules also being shown.


At 230, the weighted RTG is further processed to sum the weights of the derivation trees. This can use the “inside-outside” technique (K. Lari and S. Young, “The estimation of stochastic context-free grammars using the inside-outside algorithm,” Computer Speech and Language, 4, pp. 35-56). The inside-outside technique observes counts and determines each time a rule gets used; when a rule gets used, the probability of that rule is increased. More specifically, given a weighted RTG with parameters, the inside-outside technique enables computing the sums of weights of the trees derived using each production. Inside weights are the sum of all weights that can be derived for a non-terminal or a production; this is a recursive definition. The inside weight for a production is the sum of all the weights of the trees that can be derived from that production.








β_G(n ∈ N) ≡ Σ_{(n, r, w) ∈ P} w · β_G(r)

β_G(r ∈ T_Σ(N)) ≡ Π_{p ∈ paths_r(N)} β_G(label_r(p))







The outside weights for a non-terminal are the sum of weights of trees generated by the weighted RTG that have derivations containing it but exclude its inside weights, according to








α_G(n ∈ N) ≡ 1 if n = S, and otherwise

α_G(n ∈ N) ≡ Σ_{p, (n′, r, w) ∈ P : label_r(p) = n} w · α_G(n′) · Π_{p′ ∈ paths_r(N) − {p}} β_G(label_r(p′))

where the sum ranges over the uses of n in productions and the product ranges over the sibling non-terminals of each such use.









Expectation maximization (EM) training is then carried out at 240. This maximizes the expectation of the decisions taken over all possible ways of generating the training corpus, alternating an expectation step and a maximization step:








1. Estimating, by computing the expected count of each parameter:

∀ p ∈ parameters: counts_p ← E_{t ∈ training} [ ( Σ_{d ∈ derivations_t} (# of times p used in d) · weight_parameters(d) ) / ( Σ_{d ∈ derivations_t} weight_parameters(d) ) ]









2. Maximizing by assigning the counts to the parameters and renormalizing:








∀ p ∈ parameters: p ← counts_p / Z(p)










Each iteration increases the likelihood until a local maximum is reached.


The step 230 can be written in pseudocode as:








For each (i, o, w_example) ∈ T:    // Estimate
    i. Let D ← d_{i,o}, the derivation wRTG D = (R, N, S, P) for this pair
    ii. Compute α_D, β_D using the latest W    // inside-outside weights
    iii. For each production prod = (n, rhs, w) ∈ P whose root label_rhs(( )) is a rule in R:
        A. γ_D(prod) ← α_D(n) · w · β_D(rhs)
        B. Let rule ← label_rhs(( ))
        C. count_rule ← count_rule + w_example · γ_D(prod) / β_D(S)
    iv. L ← L + log β_D(S) · w_example
For each r = (q, pattern, rhs, w) ∈ R:    // Maximize
    i. w_r ← count_r / Z(counts, r)
δ ← (L − lastL) / L
lastL ← L, itno ← itno + 1
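
For illustration, the loop above might be rendered in code roughly as follows, reusing the toy forest encoding sketched earlier; the function names (inside, outside, em_iteration), the (forest, root, example weight) layout, and the grouping of counts by a rule's state are assumptions of this sketch, not the described implementation.

    import math
    from collections import defaultdict

    def inside(forest, weights, node, memo):
        """beta(node): sum over the node's productions of the rule weight times
        the product of the children's inside weights."""
        if node not in memo:
            memo[node] = sum(weights[rule] *
                             math.prod(inside(forest, weights, k, memo) for k in kids)
                             for rule, kids, _ in forest[node])
        return memo[node]

    def topological_order(forest, root):
        """Nodes reachable from the root, parents before children (reverse DFS post-order)."""
        order, seen = [], set()
        def visit(node):
            if node in seen:
                return
            seen.add(node)
            for _, kids, _ in forest[node]:
                for k in kids:
                    visit(k)
            order.append(node)
        visit(root)
        return list(reversed(order))

    def outside(forest, weights, order, beta):
        """alpha(node): 1 at the root; otherwise weight pushed down from every
        production that uses the node, times the inside weights of its siblings."""
        alpha = defaultdict(float, {order[0]: 1.0})
        for node in order:                         # parents are finished first
            for rule, kids, _ in forest[node]:
                for i, kid in enumerate(kids):
                    siblings = math.prod(beta[k] for j, k in enumerate(kids) if j != i)
                    alpha[kid] += weights[rule] * alpha[node] * siblings
        return alpha

    def em_iteration(examples, weights):
        """One EM pass over (forest, root, example weight) triples: expected rule
        counts via inside-outside, then renormalization of rules sharing a state."""
        counts, log_likelihood = defaultdict(float), 0.0
        for forest, root, w_example in examples:
            beta = {}
            if inside(forest, weights, root, beta) == 0.0:
                continue                           # no derivation for this pair
            order = topological_order(forest, root)
            alpha = outside(forest, weights, order, beta)
            for node in order:
                for rule, kids, _ in forest[node]:
                    gamma = alpha[node] * weights[rule] * math.prod(beta[k] for k in kids)
                    counts[rule] += w_example * gamma / beta[root]
            log_likelihood += w_example * math.log(beta[root])
        z = defaultdict(float)
        for rule, c in counts.items():
            z[rule[0]] += c                        # group counts by the rule's state
        return {r: c / z[r[0]] for r, c in counts.items()}, log_likelihood

Under the toy encoding above, one iteration could be run as, e.g., weights = {r: r[4] for r in RULES} followed by em_iteration([(FOREST, root, 1.0)], weights); the per-state grouping in the last step mirrors the normalization over rules sharing a state described in the next paragraph.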







By using the weighted RTGs, each expectation maximization iteration takes an amount of time that is linear in the size of the transducer. For example, the training may compute the sum of all the counts for rules having the same state, to provide model weights for a joint probability distribution of the input/output tree pairs. This joint normalization may avoid many different problems.


The above has described tree-to-tree transducers. An alternative embodiment, using tree-to-string transducers, is shown in the flowchart of FIG. 2C. This transducer is used when a tree is available only on the input side of the training corpus. Note that FIG. 2C is substantially identical to FIG. 2A other than the form of the input data.


The tree-to-string transduction is then parsed using an extended R transducer, as in the first embodiment. This is used to form a weighted derivation tree grammar. The derivation trees are formed by converting the input tree and the string into a flattened string of information which may include trees and strings; 285 of FIG. 2C simply refers to this as derivation information. The parsing of the tree-to-string transduction may be slightly different than the tree-to-tree transduction: instead of derivation trees, there may be output string spans, and a less constrained alignment may result.
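
One way to organize this (an illustrative assumption, not the described implementation) is to key derivation nonterminals by (state, input subtree, output span) instead of by an output subtree, enumerating how a rule's children divide the span; the helper below sketches that enumeration, including empty sub-spans, which is one source of the less constrained alignment:

    def split_points(i, j, k):
        """Yield every way to cut the output span [i, j) into k contiguous
        (possibly empty) sub-spans, as lists of (start, end) pairs."""
        if k == 1:
            yield [(i, j)]
            return
        for m in range(i, j + 1):
            for rest in split_points(m, j, k - 1):
                yield [(i, m)] + rest

    # Example: list(split_points(0, 2, 2)) ->
    # [[(0, 0), (0, 2)], [(0, 1), (1, 2)], [(0, 2), (2, 2)]]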


This is followed in FIG. 2C by operations that are analogous to those in FIG. 2A: specifically, creation of the weighted RTG, weight summing as in 230, and expectation maximization as in 240.


EXAMPLE

An example is now described herein of how to cast a probabilistic language model as an R transducer.


Table 2 shows a bilingual English-tree/Japanese-string training corpus.












TABLE 2

ENGLISH:  (VB (NN hypocrisy)
              (VB is)
              (JJ (JJ abhorrent)
                  (TO (TO to) (PRP them))))
JAPANESE: kare ha gizen ga daikirai da

ENGLISH:  (VB (PRP he)
              (VB has)
              (NN (JJ unusual) (NN ability))
              (IN (IN in) (NN english)))
JAPANESE: kare ha eigo ni zubanuke-ta sainou wo mot-te iru

ENGLISH:  (VB (PRP he)
              (VB was)
              (JJ (JJ ablaze)
                  (IN (IN with) (NN anger))))
JAPANESE: kare ha mak-ka ni nat-te okot-te i-ta

ENGLISH:  (VB (PRP i)
              (VB abominate)
              (NN snakes))
JAPANESE: hebi ga daikirai da

etc.











FIGS. 3 and 4 respectively show the generative model and its parameters. The parameter values that are shown are learned via expectation maximization techniques as described in Yamada and Knight 2001.


According to the model, an English tree becomes a Japanese string in four operations. FIG. 3 shows how the channel input is first reordered, that is, its children are permuted probabilistically. If there are three children, then there are six possible permutations whose probabilities add up to one. The reordering is done depending only on the child label sequence.
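
For illustration, the reorder table for one child label sequence might look as follows; the probability values here are placeholders invented for the sketch (the learned values are the ones shown in FIG. 4), but they illustrate that the six permutations of a three-child sequence sum to one:

    import itertools

    # Hypothetical reorder parameters for the child label sequence (PRP, VB, JJ);
    # the numbers are illustrative only, not the learned values of FIG. 4.
    CHILDREN = ("PRP", "VB", "JJ")
    REORDER = {
        CHILDREN: dict(zip(itertools.permutations(CHILDREN),
                           (0.05, 0.05, 0.10, 0.10, 0.10, 0.60)))
    }

    assert abs(sum(REORDER[CHILDREN].values()) - 1.0) < 1e-9   # six permutations sum to 1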


In 320, a decision is made at every node about inserting a Japanese function word. This is a three-way decision at each node, requiring determination of whether the word should be inserted to the left, to the right, or not inserted at all. This insertion technique at 320 depends on the labels of the node and the parent. At 330, the English leaf words are translated probabilistically into Japanese, independent of context. At 340, the internal nodes are removed, leaving only the Japanese string.


This model can effectively provide a formula for


P(Japanese string | English tree)


in terms of individual parameters. The expectation maximization training described herein seeks to maximize the product of these conditional probabilities based on the entire tree-string corpus.


First, an xRs tree-to-string transducer is built that embodies the probabilities noted above. This is a four-state transducer. For the main start state q, meaning “translate this tree”, there are three productions:


q x→i x, r x


q x→r x, i x


q x→r x


State i means “produce a Japanese word out of thin air.” There is an i production for each Japanese word in the vocabulary.


i x→“de”


i x→“kuruma”


i x→“wa”


. . .


State r means “reorder my children and then recurse”. For internal nodes, this includes a production for each parent/child sequence, and every permutation thereof:


r NN(x0:CD, x1:NN)→q x0, q x1


r NN(x0:CD, x1:NN)→q x1, q x0


. . .


The RHS then sends the child subtrees back to state q for recursive processing. For English leaf nodes, the process instead transitions to a different state t to prohibit any subsequent Japanese function word insertion:


r NN(x0:“car”)→t x0


r CC (x0:“and”)→t x0


. . .


State t means “translate this word”. There is a production for each pair of co-occurring English and Japanese words.


t “car”→“kuruma”


t “car”→*wa*


t “car”→*e*


. . .


Each production in the xRs transducer has an associated weight, and corresponds to exactly one of the model parameters.
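
As a hedged illustration of that correspondence (the numbers are placeholders, not learned values), the weights might be tabulated so that productions sharing a left-hand side sum to one:

    # Placeholder weights for a few of the productions listed above; each entry
    # is one model parameter, and entries with the same left-hand side sum to 1.
    XRS_WEIGHTS = {
        "q x -> i x, r x": 0.2,        # insert a function word to the left
        "q x -> r x, i x": 0.3,        # insert a function word to the right
        "q x -> r x":      0.5,        # insert nothing
        "t 'car' -> 'kuruma'": 0.7,
        "t 'car' -> *wa*":     0.2,
        "t 'car' -> *e*":      0.1,
    }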


The transducer is unfaithful in one respect: the insert-function-word decision is independent of context, whereas it should depend on the node label and the parent label. This is addressed by fixing the q and r productions. Start productions are used:


q x:VB→q.TOP.VB x


q x:JJ→q.TOP.JJ x


. . .


States such as q.TOP.VB are used, which mean something like “translate this tree, whose root is VB”. Every parent-child pair in the corpus gets its own set of insert-function-word productions:


q.TOP.VB x→i x, r x


q.TOP.VB x→r x, i x


q.TOP.VB x→r x


q.VB.NN x→i x, r x


q.VB.NN x→r x, i x


q.VB.NN x→r x


. . .


Finally, the r productions need to send parent-child information when they recurse to the q.parent.child states.


The productions stay the same. Productions for phrasal translations and others can also be added.


Although only a few embodiments have been disclosed in detail above, other modifications are possible, and this disclosure is intended to cover all such modifications, and most particularly, any modification which might be predictable to a person having ordinary skill in the art. For example, an alternative embodiment could use the same techniques for string-to-string training, based on tree-based models or based only on string pair data. Another application is to generate likely input trees from output trees, or vice versa. Also, and to reiterate the above, many other applications can be carried out with tree transducers, and the application of tree transducers to linguistic issues is merely exemplary.


Also, only those claims which use the words “means for” are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims.


All such modifications are intended to be encompassed within the following claims.

Claims
  • 1. A method for performing probabilistic operations with trained tree transducers, the method comprising: through execution of instructions stored in memory, obtaining tree transducer information including input/output pair information and transducer information; through execution of instructions stored in memory, converting said input/output pair information and said transducer information into a set of values in a weighted tree grammar; and through execution of instructions stored in memory, using said weighted tree grammar to solve a problem that requires information from the input/output pair information and transducer information.
  • 2. A method as in claim 1, wherein said tree transducer information includes an input tree and an output tree as said input/output pair.
  • 3. A method as in claim 1, wherein said tree transducer information includes an input tree and an output string as said input/output pair.
  • 4. A method as in claim 1, further comprising, through execution of instructions stored in memory, converting said tree transducer information into a derivation forest.
  • 5. A method as in claim 1, wherein said training further comprises maximizing an expectation of decisions.
  • 6. A method as in claim 1, wherein said using comprises training a linguistic engine which solves a linguistic problem.
  • 7. A method as in claim 6, wherein said linguistic problem includes training of a linguistic engine of a type that converts from one language to another.
  • 8. A method as in claim 1, wherein said set of values represents information about the tree transducer information and transducer information in a specified grammar, associated with a weight for each of a plurality of entries.
  • 9. A method as in claim 8, wherein said set of values are in a weighted regular tree grammar.
  • 10. A method as in claim 9, wherein said converting comprises storing the set of values as a weighted regular tree grammar, and returning certain stored information instead of recomputing said certain stored information.
  • 11. A method as in claim 1, further comprising, through execution of instructions stored in memory, further processing the set of values to sum weights based on information learned from said tree transducer information and said transducer information.
  • 12. A method as in claim 11, wherein said further processing comprises using an inside-outside algorithm to observe counts and determine each time a rule gets used and to adjust said rule weights based on said observing.
  • 13. A method as in claim 11, wherein said further processing comprises observing counts and determining each time a rule gets used, and increasing a probability of that rule each time the rule gets used.
  • 14. A method as in claim 1, wherein said training further comprises computing a sum of all the counts for rules having the same state, to provide model weights for one of a joint or conditional probability distribution of the tree transducer information.
  • 15. A method as in claim 1, wherein said using comprises solving a logic problem.
  • 16. A method as in claim 15, wherein said logic problem is a problem that uses a machine to analyze an aspect of at least one language.
  • 17. A method for training tree transducers for probabilistic operations, the method comprising: using a computer to obtain information in the form of a first-tree, second information corresponding to said first tree, and transducer information; andusing said computer to automatically distill information from said first tree, from said second information, and from said transducer information into a list of information in a specified tree grammar with weights associated with entries in the list and to produce locally optimal weights for said entries.
  • 18. A method as in claim 17, wherein said automatically distill comprises observing when a specified rule in said list of information is used, and increasing said weight associated with said specified rule when said specified rule is used.
  • 19. A method as in claim 17, further comprising using said list of information in said computer to solve a problem.
  • 20. A method as in claim 19, wherein said problem is a linguistic problem.
  • 21. A method as in claim 17, wherein said automatically distill comprises parsing the information from a first portion to a second portion.
  • 22. A method as in claim 21, wherein said second information comprises a second tree.
  • 23. A method as in claim 21, wherein said second information comprises a string.
Parent Case Info

This application claims the benefit of the priority of U.S. Provisional Application Ser. No. 60/553,587, filed Mar. 15, 2004 and entitled “TRAINING TREE TRANSDUCERS”, the disclosure of which is hereby incorporated by reference.

US Referenced Citations (66)
Number Name Date Kind
4502128 Okajima et al. Feb 1985 A
4599691 Sakaki et al. Jul 1986 A
4787038 Doi et al. Nov 1988 A
4814987 Miyao et al. Mar 1989 A
4942526 Okajima et al. Jul 1990 A
5146405 Church Sep 1992 A
5181163 Nakajima et al. Jan 1993 A
5212730 Wheatley et al. May 1993 A
5267156 Nomiyama Nov 1993 A
5311429 Tominaga May 1994 A
5432948 Davis et al. Jul 1995 A
5477451 Brown et al. Dec 1995 A
5510981 Berger et al. Apr 1996 A
5644774 Fukumochi et al. Jul 1997 A
5696980 Brew Dec 1997 A
5724593 Hargrave, III et al. Mar 1998 A
5761631 Nasukawa Jun 1998 A
5781884 Pereira et al. Jul 1998 A
5805832 Brown et al. Sep 1998 A
5806032 Sproat Sep 1998 A
5848385 Poznanski et al. Dec 1998 A
5867811 O'Donoghue Feb 1999 A
5870706 Alshawi Feb 1999 A
5903858 Saraki May 1999 A
5987404 Della Pietra et al. Nov 1999 A
5991710 Papineni et al. Nov 1999 A
6031984 Walser Feb 2000 A
6032111 Mohri Feb 2000 A
6092034 McCarley et al. Jul 2000 A
6119077 Shinozaki Sep 2000 A
6131082 Hargrave, III et al. Oct 2000 A
6182014 Kenyon et al. Jan 2001 B1
6205456 Nakao Mar 2001 B1
6223150 Duan et al. Apr 2001 B1
6236958 Lange et al. May 2001 B1
6278967 Akers et al. Aug 2001 B1
6285978 Bernth et al. Sep 2001 B1
6289302 Kuo Sep 2001 B1
6304841 Berger et al. Oct 2001 B1
6311152 Bai et al. Oct 2001 B1
6360196 Poznanski et al. Mar 2002 B1
6389387 Poznanski et al. May 2002 B1
6393388 Franz et al. May 2002 B1
6393389 Chanod et al. May 2002 B1
6415250 van den Akker Jul 2002 B1
6460015 Hetherington et al. Oct 2002 B1
6502064 Miyahira et al. Dec 2002 B1
6587844 Mohri Jul 2003 B1
6782356 Lopke Aug 2004 B1
6810374 Kang Oct 2004 B2
6904402 Wang et al. Jun 2005 B1
7013262 Tokuda et al. Mar 2006 B2
7107215 Ghali Sep 2006 B2
7113903 Riccardi et al. Sep 2006 B1
7143036 Weise Nov 2006 B2
7149688 Schalkwyk Dec 2006 B2
7346493 Ringger et al. Mar 2008 B2
7373291 Garst May 2008 B2
7389234 Schmid et al. Jun 2008 B2
20020188438 Knight et al. Dec 2002 A1
20020198701 Moore Dec 2002 A1
20040015342 Garst Jan 2004 A1
20040030551 Marcu et al. Feb 2004 A1
20040111253 Luo et al. Jun 2004 A1
20040193401 Ringger et al. Sep 2004 A1
20050125218 Rajput et al. Jun 2005 A1
Foreign Referenced Citations (6)
Number Date Country
0469884 Feb 1992 EP
0715265 Jun 1996 EP
0933712 Aug 1999 EP
07244666 Jan 1995 JP
10011447 Jan 1998 JP
11272672 Oct 1999 JP
Related Publications (1)
Number Date Country
20050234701 A1 Oct 2005 US
Provisional Applications (1)
Number Date Country
60553587 Mar 2004 US