Method and apparatus for speaker identification using mixture discriminant analysis to develop speaker models

Information

  • Patent Grant
  • Patent Number
    6,330,536
  • Date Filed
    Friday, March 16, 2001
  • Date Issued
    Tuesday, December 11, 2001
Abstract
A speaker identification system is provided that constructs speaker models using a discriminant analysis technique where the data in each class is modeled by Gaussian mixtures. The speaker identification method and apparatus determines the identity of a speaker, as one of a small group, based on a sentence-length password utterance. A speaker's utterance is received and a sequence of a first set of feature vectors is computed based on the received utterance. The first set of feature vectors is then transformed into a second set of feature vectors using transformations specific to a particular segmentation unit, and likelihood scores of the second set of feature vectors are computed using speaker models trained using mixture discriminant analysis. The likelihood scores are then combined to determine an utterance score and the speaker's identity is validated based on the utterance score. The speaker identification method and apparatus also includes training and enrollment phases. In the enrollment phase the speaker's password utterance is received multiple times. A transcription of the password utterance as a sequence of phones is obtained, and the phone string is stored in a database containing phone strings of other speakers in the group. In the training phase, the first set of feature vectors is extracted from each password utterance and the phone boundaries for each phone in the password transcription are obtained using a speaker independent phone recognizer. A mixture model is developed for each phone of a given speaker's password. Then, using the feature vectors from the password utterances of all of the speakers in the group, transformation parameters and transformed models are generated for each phone and speaker, using mixture discriminant analysis.
Description




BACKGROUND OF THE INVENTION




1. Field of Invention




This invention relates to determining the identity of a speaker as one of a small group, based on a sentence-length password utterance.




2. Description of Related Art




Security systems have long used passwords as a means to limit access to a single individual or groups of individuals. Passwords are commonplace for computer systems, building entry, etc., so that valuable information or materials may be protected.




Most secure systems requiring a password for entry require the user to enter alphanumeric text via a keyboard or keypad. However, with the advent of high-quality speech recognition systems, there is a need for an accurate and reliable speaker identification system that allows entry to computer systems, buildings, etc., using spoken passwords.




SUMMARY OF THE INVENTION




A speaker identification system is provided that constructs speaker models using a discriminant analysis technique where the data in each class is modeled by Gaussian mixtures. The speaker identification method and apparatus determines the identity of a speaker, as one of a small group, based on a sentence-length password utterance.




A speaker's utterance is received and a sequence of a first set of feature vectors is computed based on the received utterance. The first set of feature vectors is then transformed into a second set of feature vectors using transformations specific to a particular segmentation unit, and likelihood scores of the second set of feature vectors are computed using speaker models trained using mixture discriminant analysis. The likelihood scores are then combined to determine an utterance score and the speaker's identity is validated based on the utterance score.




The speaker identification method and apparatus also includes training and enrollment phases. In the enrollment phase the speaker's password utterance is received multiple times. A transcription of the password utterance as a sequence of phones is obtained, and the phone string is stored in a database containing phone strings of other speakers in the group.




In the training phase, the first set of feature vectors is extracted from each password utterance and the phone boundaries for each phone in the password transcription are obtained using a speaker independent phone recognizer. A mixture model is developed for each phone of a given speaker's password. Then, using the feature vectors from the password utterances of all of the speakers in the group, transformation parameters and transformed models are generated for each phone and speaker, using mixture discriminant analysis.




These and other features and advantages of this invention are described in or are apparent from the following detailed description of the preferred embodiments.











BRIEF DESCRIPTION OF THE DRAWINGS




The invention is described in detail with reference to the following drawings, wherein like numerals represent like elements, and wherein:





FIG. 1 is a block diagram of the speaker identification system;

FIG. 2 is a block diagram of the enrollment stage using password text;

FIG. 3 is a block diagram of the enrollment stage using a password utterance;

FIG. 4 is a block diagram illustrating the phone segmentation portion of the training phase;

FIG. 5 is a block diagram illustrating the mixture model development portion of the training phase;

FIG. 6 is a block diagram illustrating the creation of transformation parameters and transformed models in the training phase;

FIG. 7 is a block diagram illustrating the input portion of the identification phase;

FIG. 8 is a block diagram illustrating the scoring portion of the identification phase;

FIG. 9 is a block diagram illustrating the scoring portion of the identification phase using a threshold;

FIG. 10 is a flowchart of the enrollment phase of the invention;

FIG. 11 is a flowchart of the training phase of the invention; and

FIG. 12 is a flowchart of the identification phase of the invention.











DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS





FIG. 1 is a block diagram of an exemplary speaker identification system 100. The speaker identification system 100 includes a speaker independent phone recognizer 110, a speaker identifier 120 and a speaker model unit 130. However, it can be appreciated that this invention may be performed on any currently existing or later developed speaker identification system. Furthermore, the speaker independent phone recognizer 110 represents any currently existing or future speech recognition device.




Moreover, while the exemplary embodiment of the invention is described below using phones as the primary language segmentation unit, it may be appreciated that the invention may include the use of other language segmentation units, such as syllables or acoustic sub-units, for example.




When a user seeks access to a system or structure secured by a password, the user utters the password and it is input to the speaker independent phone recognizer 110. The password may be one word or a sentence, for example. The speaker independent phone recognizer 110 extracts features from the utterance and segments the utterance into a sequence of phones, which are then input into the speaker identifier 120. The speaker identifier 120 compares the sequence of features and the phone segments generated from the password utterance by the speaker independent phone recognizer 110 with previously trained speaker models for each member of the group, from the speaker model unit 130, to generate a list of utterance scores. The speaker identifier 120 determines the identity as the speaker whose model produces the highest score. The best score is compared to a threshold to determine whether the password utterance is accepted or rejected.
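As an illustration of this decision rule only, the following is a minimal Python sketch, assuming the per-speaker utterance scores and speaker-specific thresholds have already been computed; the function name identify_speaker and the example values are hypothetical and not taken from the patent.

```python
from typing import Dict, Optional

def identify_speaker(utterance_scores: Dict[str, float],
                     thresholds: Dict[str, float]) -> Optional[str]:
    """Pick the speaker whose model gives the highest utterance score, then
    accept or reject that best score against a speaker-specific threshold."""
    best_speaker = max(utterance_scores, key=utterance_scores.get)
    if utterance_scores[best_speaker] >= thresholds[best_speaker]:
        return best_speaker   # password utterance accepted as this speaker
    return None               # best score below threshold: rejected

# Hypothetical utterance scores for a three-speaker group.
scores = {"spk1": -42.7, "spk2": -51.3, "spk3": -48.9}
limits = {"spk1": -45.0, "spk2": -45.0, "spk3": -45.0}
print(identify_speaker(scores, limits))   # -> "spk1"
```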




The speaker identification process of the invention has essentially three phases: enrollment, training and identification. The enrollment phase is illustrated in the block diagrams in FIGS. 2 and 3 and the flowchart in FIG. 10; the training phase is illustrated in the block diagrams in FIGS. 4-6 and the flowchart in FIG. 11; and the identification phase is illustrated in the block diagrams in FIGS. 7-9 and the flowchart in FIG. 12.





FIG. 2 is a block diagram of the enrollment phase of the speaker identification process when the password input is assumed to be known text. The password text is input to the linguistic processor 200, which generates a phone transcription based on the password text input. (For example, if the word "cat" is input, the linguistic processor 200 would break down the word into the phone string /k/ /ae/ /t/.) The phone transcription of the password is obtained by a dictionary lookup or rule-based text-to-phoneme conversion. To complete the enrollment, the user will have to speak the password a few times.





FIG. 3 is a block diagram illustrating the enrollment phase of the speaker identification process using a password utterance input where the password text is not known. In this phase, the speaker utters the password, which is then input to the automatic phone transcriber 300. The automatic phone transcriber 300 generates a phone string such as that above, namely, in the "cat" example, /k/ /ae/ /t/. The phone string generated will not necessarily correspond to a "dictionary" pronunciation if the text of the password is not provided. A password dictionary is created that stores the phone transcription for each user's password.





FIGS. 4-6 illustrate the training phase of the speaker identification system 100. In the training phase, for example, N classifiers, one for each of the N phones in the password transcription, may be trained for a given group.





FIG. 4 illustrates the beginning of the training phase described above. Password utterances from all speakers in the group are input into the speaker independent phone recognizer 110. The speaker independent phone recognizer 110 segments the input string into individual phones; in the case of "cat", the individual phones are /k/ /ae/ /t/.





FIG. 5 is a block diagram illustrating the generation of mixture models. The Gaussian mixture model trainer 500 generates Gaussian mixture models from a collection of features for a given unit or phone and speaker. The Gaussian mixture model trainer 500 uses a K-means clustering algorithm to cluster the collection of features and initialize a Gaussian mixture model for the given phone and speaker. The means, covariances, and mixture weights of the initial model are updated using several (10, for example) iterations of a standard Expectation-Maximization algorithm. This process is repeated for each phone in the password transcription of every speaker. The result is a set of mixture models Λ_j^p for the j-th speaker and p-th phone.
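The mixture-model step can be sketched with scikit-learn's GaussianMixture, which supports k-means initialization followed by a bounded number of EM iterations. This is only a stand-in for the Gaussian mixture model trainer 500; the function name train_phone_model and the synthetic data are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_phone_model(features, n_mixtures=4, em_iterations=10, seed=0):
    """Train a Gaussian mixture model Lambda_j^p for one phone of one speaker.

    features: (n_frames, D) array of feature vectors collected from segments
    labelled with this phone across the speaker's password repetitions.
    """
    gmm = GaussianMixture(n_components=n_mixtures,
                          covariance_type="full",
                          init_params="kmeans",    # k-means clustering initializes the model
                          max_iter=em_iterations,  # then EM updates means, covariances, weights
                          random_state=seed)
    gmm.fit(features)
    return gmm   # gmm.weights_, gmm.means_, gmm.covariances_ hold the parameters

# Synthetic 12-dimensional features for one phone of one speaker.
rng = np.random.default_rng(0)
model = train_phone_model(rng.normal(size=(200, 12)))
print(model.weights_.round(3))
```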





FIG. 6 is a block diagram illustrating the training of the transformation parameters and the transformed models. The collection of features from a segment labelled by a given phone from all speakers in the group is input to the posterior probability estimator 610. The probability density function of the i-th D-dimensional feature vector x_i from a given phone and speaker j is modelled by:













$$P(x_i \mid \Lambda_j^p) = \left|2\pi\Sigma_p\right|^{-1/2} \sum_{m=1}^{M_j^p} c_{jm}^{p}\, e^{-D(x_i,\,\mu_{jm}^{p})/2} \qquad (1)$$













where Σ_p is the covariance common to all the speakers, c_jm^p and μ_jm^p are the mixture weights and the class means respectively, and D(x, μ) = (x − μ)^T Σ_p^{−1} (x − μ). The process described here is repeated for each phone in the password transcription. The estimated posterior mixture probability of the l-th mixture of the j-th speaker model is:
















$$\hat{p}(g_{jl} \mid x_i;\, \Lambda_j^p) = \frac{c_{jl}^{p}\, e^{-D(x_i,\,\mu_{jl}^{p})/2}}{\sum_{m=1}^{M_j^p} c_{jm}^{p}\, e^{-D(x_i,\,\mu_{jm}^{p})/2}}, \qquad l = 1, 2, \ldots, M_j^p \qquad (2)$$
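Equations (1) and (2) translate directly into code. The following numpy sketch assumes one speaker's mixture means and weights and the phone-specific covariance Σ_p are already available; all array and function names are illustrative.

```python
import numpy as np

def component_distances(x, means, sigma_p):
    """D(x, mu) = (x - mu)^T Sigma_p^{-1} (x - mu) for every mixture mean."""
    diffs = x - means                                    # (M, D)
    sigma_inv = np.linalg.inv(sigma_p)
    return np.einsum("md,de,me->m", diffs, sigma_inv, diffs)

def mixture_density(x, means, weights, sigma_p):
    """Equation (1): P(x | Lambda_j^p) with covariance Sigma_p shared across speakers."""
    d = component_distances(x, means, sigma_p)
    norm = np.linalg.det(2.0 * np.pi * sigma_p) ** -0.5
    return norm * np.sum(weights * np.exp(-0.5 * d))

def mixture_posteriors(x, means, weights, sigma_p):
    """Equation (2): posterior probability p-hat(g_jl | x_i; Lambda_j^p) of each mixture."""
    d = component_distances(x, means, sigma_p)
    unnorm = weights * np.exp(-0.5 * d)                  # numerator terms of (2)
    return unnorm / unnorm.sum()                         # normalize over the M_j^p mixtures
```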













The posterior probability estimator 610 computes the probability given by equation (2) for every mixture in each speaker model. These probabilities are input to the mixture discriminant analysis unit 620. The mixture discriminant analysis unit 620 also receives input from the mixture model unit 630 and outputs transformation parameters. These parameters are computed in steps 0-4 as shown below:




0. Let

$$M_p = \sum_{j=1}^{J} M_j^p$$

be the total number of mixtures over all speakers for phone p, where J is the number of speakers in the group. If there are N_p feature vectors segmented into phone p taken from all of the speakers in the group, compute an N_p × M_p matrix H_p as follows. The i-th row of H_p has M_p entries:

$$h_{(i)}^{p} = \left[\, h_1^{1p} \cdots h_{M_1^p}^{1p} \quad h_1^{2p} \cdots h_{M_2^p}^{2p} \quad \cdots \quad h_1^{Jp} \cdots h_{M_J^p}^{Jp} \,\right]$$

If the i-th observation belongs to the utterance of speaker j, then h_l^{jp} = p̂(g_{jl} | x_i; Λ_j^p) for l = 1, 2, . . . , M_j^p, and all other entries in the row are 0. Here p̂(g_{jl} | x_i; Λ_j^p) refers to the posterior probabilities in equation (2), estimated from training data.




1. Regression: Solve the equation H_p = X_p B_p for B_p to minimize the mean square error:

$$\varepsilon(\Theta_p, B_p) = \frac{1}{N_p} \sum_{k=1}^{K_p} \left\| H_p\,\theta_k^p - X_p\, b_k^p \right\|^2 \qquad (3)$$

where X_p is an N_p × D data matrix whose rows are the feature vectors x_i, B_p is a D × K_p matrix whose columns are the regressors b_k^p, and Θ_p is an M_p × K_p matrix of score vectors θ_k^p. Let the solution be B̃_p.




2. Optimal scoring: Obtain the score matrix Θ_p by setting it equal to the eigenvector matrix obtained in the eigen-decomposition of H_p^T Ĥ_p, with the normalization Θ_p^T H_p^T H_p Θ_p = I, where I is the identity matrix and Ĥ_p = X_p B̃_p is the estimated value of the H_p matrix from step 1.




3. Update the regression coefficient matrix B̃_p obtained in step 1 using the optimal scores: B̂_p = B̃_p Θ_p. The projected observation of dimension K_p is η_{K_p}(x) = (B̂_{K_p}^p)^T x, where B̂_{K_p}^p is the D × K_p sub-matrix of B̂_p that retains the first K_p columns.




4. Update p̂(g_{jl} | x_i; Λ_j^p) and c_{jl}^p using the standard Expectation-Maximization algorithm for Gaussian mixtures.




In this implementation example, two iterations of steps 1 through 4 are performed.
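As a concrete reading of steps 1 through 3, the following numpy/scipy sketch assumes the matrix H_p from step 0 and the data matrix X_p have already been assembled, and it omits the step 4 re-estimation. The use of a generalized symmetric eigensolver for the optimal-scoring step, the ridge term, and all names are implementation assumptions rather than the patent's prescribed procedure.

```python
import numpy as np
from scipy.linalg import eigh

def mda_transform(X_p, H_p, K_p, ridge=1e-8):
    """One pass of steps 1-3 for a single phone p.

    X_p : (N_p, D) data matrix of feature vectors for phone p from all speakers
    H_p : (N_p, M_p) matrix of posterior mixture probabilities built in step 0
    K_p : dimension of the discriminant subspace to keep (K_p < D)
    Returns the D x K_p projection matrix and the full score matrix Theta_p.
    """
    # Step 1: multivariate regression H_p ~ X_p B_p; least-squares solution B-tilde.
    B_tilde, *_ = np.linalg.lstsq(X_p, H_p, rcond=None)       # (D, M_p)
    H_hat = X_p @ B_tilde                                     # fitted values H-hat_p

    # Step 2: optimal scoring.  Eigen-decompose H_p^T H-hat_p against H_p^T H_p,
    # so that the normalization Theta_p^T H_p^T H_p Theta_p = I holds.
    A = H_p.T @ H_hat
    A = 0.5 * (A + A.T)                                       # symmetrize for stability
    G = H_p.T @ H_p + ridge * np.eye(H_p.shape[1])            # small ridge keeps G invertible
    eigvals, Theta_p = eigh(A, G)                             # generalized symmetric problem
    order = np.argsort(eigvals)[::-1]
    Theta_p = Theta_p[:, order]                               # leading score vectors first

    # Step 3: update the regression coefficients and keep the first K_p columns.
    B_hat = B_tilde @ Theta_p
    return B_hat[:, :K_p], Theta_p

def project(x, B_hat_Kp):
    """eta_Kp(x): project a D-dimensional feature vector into the K_p-dimensional subspace."""
    return B_hat_Kp.T @ x
```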





FIGS. 7-9 illustrate the identification phase of the speaker identification system 100. In the identification phase, classification is performed by computing utterance scores for each speaker in the group. FIG. 7 is a block diagram illustrating the first stage of the identification phase. Feature vectors belonging to a phone segment whose label is p, extracted from the password utterance, are input to a likelihood estimator 700. The likelihood estimator produces the log-likelihood of the K_p-dimensional projection of the feature vectors, where the value of K_p is less than the original feature dimension D (i.e., a low-dimensional discriminant subspace), using the transformation parameters and transformed models generated by the mixture discriminant analysis unit 620 during the training phase. The likelihood score, or frame-level likelihood score, for speaker j is given by:










$$L\left(\eta_{K_p}(x_i) \mid \Lambda_j^p\right) = -\frac{1}{2}\log\left|2\pi\Sigma_p\right| + \log\sum_{m=1}^{M_j^p} \hat{c}_{jm}^{p}\, e^{-D_{K_p}(x_i,\,\hat{\mu}_{jm}^{p})/2} \qquad (4)$$













where

$$D_{K_p}\left(x, \hat{\mu}_j^{p}\right) = \left\| E_p\left(\eta_{K_p}(x) - \hat{\mu}_j^{p}\right) \right\|^2 \qquad (5)$$

where η_{K_p}(x) is computed as explained in step 3 of the training stage and E_p is a K_p × K_p diagonal matrix with:











$$E_{k,k}^{p\,2} = \frac{1}{e_k^{p\,2}\left(1 - e_k^{p\,2}\right)} \qquad (6)$$

and e_k^{p2} is the mean square residual of the k-th optimally scored fit, that is, the k-th term in the summation in equation (3).
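A numpy sketch of the frame-level score of equations (4) through (6) is given below, assuming the projection matrix from training, the transformed means and mixture weights, and the optimal-scoring residuals are available; the argument names and the normalization term passed in as log_norm are illustrative assumptions.

```python
import numpy as np

def e_scaling(mean_square_residuals):
    """Diagonal of E_p from equation (6), returned as E_{k,k}^p so that squaring
    inside the norm of equation (5) gives E_{k,k}^{p2} = 1 / (e_k^2 (1 - e_k^2))."""
    e2 = np.asarray(mean_square_residuals, dtype=float)
    return 1.0 / np.sqrt(e2 * (1.0 - e2))

def frame_log_likelihood(x, B_hat_Kp, means_hat, weights_hat, E_p, log_norm):
    """Equation (4): frame-level likelihood score for one speaker j and phone p.

    x           : (D,) feature vector from a segment labelled p
    B_hat_Kp    : (D, K_p) projection matrix from the training phase
    means_hat   : (M, K_p) transformed class means mu-hat_jm^p
    weights_hat : (M,) updated mixture weights c-hat_jm^p
    E_p         : (K_p,) diagonal of the scaling matrix of equation (6)
    log_norm    : the -1/2 log|2 pi Sigma_p| term of equation (4)
    """
    eta = B_hat_Kp.T @ x                                  # eta_Kp(x), the projection of step 3
    # Equation (5): D_Kp(x, mu-hat) = || E_p (eta_Kp(x) - mu-hat) ||^2 for each mixture
    d = np.sum((E_p * (eta - means_hat)) ** 2, axis=1)
    return log_norm + np.log(np.sum(weights_hat * np.exp(-0.5 * d)))
```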





FIG. 8 is a block diagram illustrating the identification/rejection phase of the identification process. The frame-level likelihood scores generated from FIG. 7 are input into a score combiner 810. The score combiner 810 generates the utterance score:











$$s_j^l = \frac{1}{N} \sum_{p=1}^{P} \sum_{i=1}^{N_p} L\left(\eta_{K_p}(x_i) \mid \Lambda_j^p\right) \qquad (7)$$













where P is the number of phones in the utterance, N_p is the number of observations in phone p, and N = Σ_{p=1}^{P} N_p is the total number of feature vectors, for each speaker in the group. The utterance scores are then input to the score analysis unit 820, which either confirms that the password utterance belongs to one of the speakers in the group or rejects it as being uttered by an impostor, for example. The score analysis unit performs this task by comparing the utterance score to a threshold that has been established during the training process. This threshold is speaker specific and is adapted on input utterances.
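Equation (7) amounts to averaging the frame-level scores over every frame of the utterance; a small sketch, with hypothetical input grouping and example values, is shown below.

```python
import numpy as np

def utterance_score(frame_scores_by_phone):
    """Equation (7): average the frame-level scores L(eta_Kp(x_i) | Lambda_j^p)
    over all N feature vectors of the utterance, for one candidate speaker j.

    frame_scores_by_phone: list with one sequence of frame scores per phone segment.
    """
    all_scores = np.concatenate([np.asarray(s, dtype=float)
                                 for s in frame_scores_by_phone])
    return all_scores.mean()   # (1/N) * sum over the P phones and their N_p frames

# Two hypothetical phone segments scored against one speaker's models.
print(utterance_score([[-3.1, -2.8, -3.4], [-2.2, -2.9]]))
```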





FIG. 9 is a block diagram illustrating the confirmation and rejection processes of FIG. 8, but including a threshold scoring unit 910. The threshold scoring unit 910 takes the frame-level likelihood scores generated in FIG. 7 and outputs thresholded scores to be combined by the score combiner 810. The threshold scoring unit 910 assigns a speaker label k̂_i = j to each feature vector x_i such that the frame-level likelihood score of the feature vector using the model for speaker j is the maximum of all the speaker scores. The score combiner then computes the overall utterance score as:










$$s_j^d = \frac{1}{N} \sum_{i=1}^{N} 1_j(\hat{k}_i) \qquad (8)$$













where 1_j(k̂_i) = 1 if k̂_i = j and 0 otherwise.
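A sketch of this frame-labelling variant, equation (8), follows; the score matrix layout and the example values are hypothetical.

```python
import numpy as np

def voting_utterance_scores(frame_scores):
    """Equation (8): label each frame with the speaker whose model scores highest
    (k-hat_i), then score each speaker j by the fraction of frames labelled j.

    frame_scores: (J, N) array of frame-level likelihood scores, one row per speaker.
    """
    frame_scores = np.asarray(frame_scores, dtype=float)
    labels = frame_scores.argmax(axis=0)          # k-hat_i for every frame i
    J = frame_scores.shape[0]
    return np.array([(labels == j).mean() for j in range(J)])   # s_j^d

# Three speakers, five frames of hypothetical scores.
print(voting_utterance_scores([[-2.0, -1.5, -3.0, -2.2, -1.9],
                               [-2.5, -1.2, -2.0, -2.8, -2.4],
                               [-3.0, -2.0, -2.5, -1.9, -2.6]]))   # [0.4 0.4 0.2]
```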





FIG. 10 is a flowchart illustrating the enrollment phase. The process starts at step 1010 and proceeds to step 1020, where a determination is made as to whether the text of the password utterance is known or unknown. If the text is known, the process goes to step 1030, where the linguistic processor 200 processes the password utterance and produces a phone transcription of the password text. The process then proceeds to step 1040, where the linguistic processor 200 collects N repetitions of the transcribed password utterance from the speaker to generate training utterances used to train the speaker models. The phone transcriptions are then stored in a password dictionary. The process then goes to step 1070 and ends.




However, if the text is unknown, the process goes to step 1050, where the automatic phone transcriber 300 collects N repetitions of the password utterance from the speaker to generate training utterances used to train the speaker models. In step 1060, the automatic phone transcriber 300 processes the password utterance and produces a phone transcription of the password text. The phone transcriptions are then stored in a password dictionary. The process then goes to step 1070 and ends.





FIG. 11 is a flowchart of the training phase of the speaker identification process. The process begins in step 1110 and proceeds to step 1120, where the speaker independent phone recognizer 110 obtains password utterances from all speakers in the group and segments the utterances into phones. In step 1130, the Gaussian mixture model trainer 500 receives a collection of features for a given phone and speaker and outputs a mixture model.




In step 1140, the posterior probability estimator receives the collection of features and mixture models to output a posterior probability of a given vector. In step 1150, the mixture discriminant analysis unit 620 receives the posterior probabilities and mixture models and performs a mixture discriminant analysis.




The mixture discriminant analysis unit 620 then outputs the transformation parameters and transformed models for each speaker in step 1160, and the process goes to step 1170 and ends.





FIG. 12 is a flowchart of the identification phase of the speaker identification process. The process begins at step 1210 and proceeds to step 1220, where a likelihood estimator receives a password utterance and its extracted features. In step 1230, the likelihood estimator 700 develops frame-level likelihood scores from the features and the transformation parameters and models.




In step 1240, thresholding is performed on the frame-level likelihood scores by the threshold scoring unit 910. The thresholded scores are input into the score combiner 810, and the utterance scores are input to the score analysis unit 820. From the output of the score analysis unit 820, the password is confirmed or rejected in step 1250, and the process goes to step 1260 and ends.




While this invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the preferred embodiments of the invention as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention as described in the following claims.



Claims
  • 1. A method of determining an utterance score for identifying a speaker from a group of speakers based on a first set of feature vectors of an utterance from the speaker, comprising: transforming the first set of feature vectors into a second set of feature vectors using transformations specific to a segmentation unit; computing likelihood scores of the second set of feature vectors using speaker models trained by a mixture discriminant analysis using a collection of first sets of feature vectors from all the speakers in the group; and combining the likelihood scores to determine an utterance score.
  • 2. The method of claim 1, further comprising validating the identity of the speaker based on the utterance score.
  • 3. The method of claim 1, wherein the second set of feature vectors represents a low-dimensional discriminant subspace.
  • 4. The method of claim 1, wherein the segmentation unit is one of a phone, a syllable, and an acoustic sub-word unit.
  • 5. The method of claim 1, further comprising a training phase, the training phase comprising:extracting the collection of first sets of feature vectors and obtaining phone segments from a password utterance for each of the speakers in the group; developing a mixture model for each of the phone segments of the password utterance for each of the speakers in the group; estimating posterior mixture probabilities using the mixture models and the collection of first sets of feature vectors; performing the mixture discriminant analysis using the posterior mixture probabilities and the collection of first sets of feature vectors; and outputting transformation parameters specific to phone segments and common to all the speakers in the group and transformed models for each speaker in the group.
  • 6. The method of claim 1, further comprising an enrollment phase, the enrollment phase comprising:receiving a password utterance multiple times for one of the speakers in the group; converting the password utterance into a phone string; and storing the phone string in a database containing phone strings of the other speakers in the group.
  • 7. The method of claim 6, wherein the password utterance is known.
  • 8. The method of claim 6, wherein the password utterance is not known.
  • 9. The method of claim 1, wherein the utterance score is determined by averaging the likelihood scores.
  • 10. The method of claim 1, wherein the utterance score is based on threshold scores generated from the likelihood scores.
  • 11. An apparatus for determining an utterance score for identifying a speaker from a group of speakers based on a first set of feature vectors of an utterance from the speaker, comprising: a speaker independent phone recognizer that transforms the first set of feature vectors into a second set of feature vectors using transformations specific to a particular segmentation unit; a likelihood estimator that computes likelihood scores of the second set of feature vectors using speaker models trained by mixture discriminant analysis using a collection of first sets of feature vectors from all the speakers in the group; and a score combiner that combines the likelihood scores to determine an utterance score.
  • 12. The apparatus of claim 11, further comprising a score analysis unit that validates the identity of the speaker based on the utterance score.
  • 13. The apparatus of claim 11, wherein the second set of feature vectors represents a low-dimensional discriminant subspace.
  • 14. The apparatus of claim 11, wherein the segmentation unit is one of a phone, a syllable, and an acoustic sub-word unit.
  • 15. The apparatus of claim 11, wherein:the speaker independent phone recognizer extracts the collection of first sets of feature vectors and obtains phone segments from a password utterance for each of the speakers in the group; a Gaussian mixture model trainer develops a mixture model for each one of the phone segments of the password utterance for each speaker in the group; a posterior probability estimator estimates posterior mixture probabilities using the mixture models and the collection of first sets of feature vectors; and a mixture discriminant analysis unit performs a mixture discriminant analysis using the posterior mixture probabilities and the collection of first sets of feature vectors and outputs transformation parameters specific to phone segments and common to all the speakers in the group and outputs transformed models for each speaker in the group.
  • 16. The apparatus of claim 11, wherein:the speaker independent phone recognizer receives the password utterance for one of the speakers in the group multiple times, converts the password utterance into a phone string, and stores the phone string in a database containing phone strings of the other speakers in the group.
  • 17. The apparatus of claim 16, wherein the password utterance for the speakers in the group is known.
  • 18. The apparatus of claim 16, wherein the password utterance for the speakers in the group is not known.
  • 19. The apparatus of claim 11, wherein the score combiner determines the utterance score by averaging the likelihood scores.
  • 20. The apparatus of claim 11, further comprising a threshold unit, wherein:the score combiner determines the utterance score based on threshold scores generated from the likelihood scores by the threshold unit.
  • 21. A method for identifying a speaker of a group from an utterance having features represented by a first set of feature vectors, comprising: transforming the first set of feature vectors into a second set of feature vectors; comparing the second set of feature vectors to speaker models to generate likelihood scores; combining the likelihood scores to determine an utterance score; and comparing the utterance score to a speaker specific threshold, wherein the speaker models are trained by mixture discriminant analysis using a collection of first sets of feature vectors from all the speakers in the group.
  • 22. The method of claim 21, wherein transformations that are specific to a particular segmentation unit are used in the transforming step.
Parent Case Info

This is a continuation of application Ser. No. 09/198,579, U.S. Pat. No. 6,233,555, filed Nov. 24, 1998, which is in turn a non-provisional application that claims the benefit of U.S. Provisional Application No. 60/066,280, filed Nov. 25, 1997. The subject matter of both applications is incorporated herein by reference.

US Referenced Citations (9)
Number Name Date Kind
5054083 Naik et al. Oct 1991
5615299 Bahl et al. Mar 1997
5687287 Gandhi et al. Nov 1997
5754681 Watanabe et al. May 1998
5839103 Mammone et al. Nov 1998
5913192 Parthasarathy et al. Jun 1999
5995927 Li Nov 1999
6029124 Gillick et al. Feb 2000
6233555 Parthasarathy et al. May 2001
Non-Patent Literature Citations (9)
Entry
K. Fukunaga, Introduction to Statistical Pattern Recognition, Chapter 10, “Non Linear Mapping”, pp. 288-322, Academic Press, Inc. 1990.
L. Breiman and R. Ihaka, “Nonlinear Discriminant Analysis Via Scaling and ACE,” Technical Report, University of California, Berkeley, 1984.
T. Hastie, R. Tibshirani and Buja, “Flexible Discriminant Analysis by Optimal Scoring,” Journal of the American Statistical Association, 89, pp. 1255-1270, 1994.
T. Hastie and R. Tibshirani, “Discriminant Analysis by Gaussian Mixtures,” Journal of the Royal Statistical Society (Series B), 58, pp. 155-176, 1996.
D.X. Sun, "Feature Dimension Reduction Using Reduced-Rank Maximum Likelihood Estimation For Hidden Markov Models," Proc. Int. Conference on Spoken Language Processing, pp. 244-247, 1996.
A. E. Rosenberg, O. Siohan, and Parthasarathy, “Small Group Speaker Identification with Common Password Phrases,” submitted to RLA2C, 1998.
Joseph T. Buck, David K. Burton, and John E. Shore, "Text-Dependent Speaker Recognition using Vector Quantization," Proc. IEEE Int. Conf. Acoust. Speech, and Sig. Proc. ICASSP 85, Mar. 26-29, 1985, pp. 391-394.
A.E. Rosenberg, C.H. Lee, and F.K. Soong, “Sub-Word Unit Talker Verification using Hidden Markov Models,” Proc. 1990 Int. Conf. Acoust. Speech, and Sig. Proc. ICASSP 90, Apr. 3-6, 1990, pp. 269-272.
M. Sharma and R. Mammone, “Subword-based Text-dependent speaker Verification System with User-selectable Passwords,” Proc. 1996 Int. Conf. Acoust. Speech, and Sig. Proc. ICASSP 96, May 7-10, 1996, pp. 93-96.
Provisional Applications (1)
Number Date Country
60/066280 Nov 1997 US
Continuations (1)
Number Date Country
Parent 09/198579 Nov 1998 US
Child 09/809226 US