This application is a Submission Under 35 U.S.C. § 371 for U.S. National Stage Patent Application of International Application Number: PCT/EP2016/054581, filed Mar. 3, 2016 entitled “UNCERTAINTY MEASURE OF A MIXTURE-MODEL BASED PATTERN CLASSIFIER,” the entirety of which is incorporated herein by reference.
Embodiments presented herein relate to a method, a classification device, a computer program, and a computer program product for determining an uncertainty measure of a mixture-model based parametric classifier.
In general terms, audio mining is a technique by which the content of an audio signal (comprising an audio waveform) can be automatically analyzed and searched. It is commonly used in the field of automatic speech recognition, where the analysis tries to identify any speech within the audio. The audio signal will typically be processed by a speech recognition system in order to identify word or phoneme units that are likely to occur in the spoken content. In turn, this information can be used to identify a language used in the audio signal, which speaker is producing the audio waveform, the gender of the speaker producing the audio waveform, etc. This information may either be used immediately in pre-defined searches for keywords, languages, speakers, or gender (a real-time word spotting system), or the output of the speech recognizer may be stored in an index file. One or more audio mining index files can then be loaded at a later date in order to run searches for any of the above parameters (keywords, languages, speakers, gender, etc.).
$$x = \{x_n\}_{n=1}^{N} \quad (1)$$
All vectors xn are D-dimensional, i.e., $x_n = [x_{n1}, x_{n2}, \ldots, x_{nD}]$. The input sequence x is assumed to belong to one of M classes as represented by ωm:
$$\{\omega_m\}_{m=1}^{M} \quad (2)$$
The exact class to which the input sequence x belongs is unknown. The classifier module 110 is configured to assign the input sequence x to the correct class ωm*. This can be formalized through configuring the classifier module 110 to implement discriminant functions:
$$\{g_m(x)\}_{m=1}^{M} \quad (3)$$
The discriminant functions are used by the classifier module 110 to produce the index of the most likely class ωm* for the input sequence x as:

$$m^* = \underset{m \in \{1, \ldots, M\}}{\arg\max}\; g_m(x) \quad (4)$$

Here, the function argmax returns the argument (i.e., the value m* ∈ {1, 2, . . . , M}) that maximizes the function gm(x).
For probabilistic parametric classifiers 110 the discriminant function is most often defined as the log-likelihood of the input sequence, i.e., the log-likelihood of observing the input sequence x given the class ωm, expressed as:

$$g_m(x) = \log P(x \mid \omega_m) \quad (5)$$
To simplify the presentation it is assumed that all classes ωm are equally probable, i.e., that all a priori probabilities are equal: P(ω1)=P(ω2)= . . . =P(ωM). The skilled person would be able to extend the presentation to unequal a priori probabilities.
It is assumed that the probability of the input sequence x can be written as a product of probabilities of the individual vectors xn. Further, for mixture models these probabilities are modeled as a superposition of components, or clusters, denoted Φm,k. That is, P(xn|ωm) is a weighted sum of K clusters Φm,k, with weights um,k. This leads to the following expression for the discriminant functions (log-likelihoods) of mixture-model based pattern classification:

$$g_m(x) = \sum_{n=1}^{N} \log\left( \sum_{k=1}^{K} u_{m,k}\, \Phi_{m,k}(x_n) \right) \quad (6)$$
In its most commonly used form the clusters Φm,k are Gaussian densities determined by a set of means μm,k and covariances Σm,k:

$$\Phi_{m,k}(x_n) = \frac{1}{(2\pi)^{D/2}\,|\Sigma_{m,k}|^{1/2}} \exp\left( -\frac{1}{2} (x_n - \mu_{m,k})^{T}\, \Sigma_{m,k}^{-1}\, (x_n - \mu_{m,k}) \right) \quad (7)$$
Here, exp denotes the exponential function. The structure introduced in Equations (4)-(7) defines a commonly used pattern classification technique based on Gaussian Mixture Models (GMMs). At the training stage, as performed by the training module 120, the parameters $\{u_{m,k}, \mu_{m,k}, \Sigma_{m,k}\}_{k=1}^{K}$ for each class ωm are obtained from the part of a training sequence y that is associated with that class. During classification, as performed by the classifier module 110, for a given input sequence x the classifier module 110 implements Equation (6) for all m, finds the largest gm(x), and determines the index of the optimal class by implementing Equation (4).
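By way of illustration only, the classification stage of Equations (4)-(7) can be sketched in Python as follows; this is a minimal sketch under the equal-prior assumption, not the claimed implementation, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def discriminant(x, weights, means, covs):
    """Equation (6): sum over input vectors of the log of the weighted
    sum of Gaussian cluster densities (log-sum-exp for stability)."""
    # x: (N, D); weights: (K,); means: (K, D); covs: (K, D, D)
    log_terms = np.stack([
        np.log(w) + multivariate_normal.logpdf(x, mean=mu, cov=cov)
        for w, mu, cov in zip(weights, means, covs)
    ])                                    # shape (K, N)
    return float(np.sum(logsumexp(log_terms, axis=0)))

def classify(x, models):
    """Equation (4): index m* of the class with the largest g_m(x).
    models[m] = (weights, means, covs) for class m."""
    return int(np.argmax([discriminant(x, *m) for m in models]))
```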
Under the assumptions that the mixture model can capture the probability density function of the input sequence x, that the model parameters are correct, and that the length of the input sequence x approaches infinity, the mixture model based classifier approaches optimal classification performance.
However, for many practical applications the classification is performed on finite-length input sequences. For example, the statistics of a speech signal are rarely stationary for more than 200 ms, which can imply that the classification decision has to be taken with at most 20 feature vectors. In such a case the optimality of a mixture-model based classifier is not guaranteed. Further, it could be challenging to deduce the degree of uncertainty in the classification decision from the likelihood P(x|ωm) itself, because it depends on the shape of the probability density function, the particular input sequence x, etc.
Hence, there is still a need for an improved measure of the uncertainty of the classifier 110.
An object of embodiments herein is to provide an efficient measure of the uncertainty of a mixture-model based parametric classifier.
According to a first aspect there is presented a method for determining an uncertainty measure of a mixture-model based parametric classifier. The method is performed by a classification device. The method comprises obtaining a short-term frequency representation of a multimedia signal. The short-term frequency representation defines an input sequence. The method comprises classifying the input sequence to belong to one class of at least two available classes using the parametric classifier. The parametric classifier has been trained with a training sequence. The method comprises determining an uncertainty measure of the thus classified input sequence based on a relation between posterior probabilities of the input sequence and posterior probabilities of the training sequence.
Advantageously this provides an efficient measure of the uncertainty of a mixture-model based parametric classifier.
Advantageously this uncertainty measure can be used to give a degree of confidence in classification performed by a mixture-model based parametric classifier on short input sequences and where the expected source statistics might be disturbed.
According to a second aspect there is presented a classification device for determining an uncertainty measure of a mixture-model based parametric classifier. The classification device comprises processing circuitry. The processing circuitry is configured to cause the classification device to obtain a short-term frequency representation of a multimedia signal. The short-term frequency representation defines an input sequence. The processing circuitry is configured to cause the classification device to classify the input sequence to belong to one class of at least two available classes using the parametric classifier. The parametric classifier has been trained with a training sequence. The processing circuitry is configured to cause the classification device to determine an uncertainty measure of the thus classified input sequence based on a relation between posterior probabilities of the input sequence and posterior probabilities of the training sequence.
According to a third aspect there is presented a classification device for determining an uncertainty measure of a mixture-model based parametric classifier. The classification device comprises processing circuitry and a computer program product. The computer program product stores instructions that, when executed by the processing circuitry, cause the classification device to perform operations, or steps. The operations, or steps, cause the classification device to obtain a short-term frequency representation of a multimedia signal. The short-term frequency representation defines an input sequence. The operations, or steps, cause the classification device to classify the input sequence to belong to one class of at least two available classes using the parametric classifier. The parametric classifier has been trained with a training sequence. The operations, or steps, cause the classification device to determine an uncertainty measure of the thus classified input sequence based on a relation between posterior probabilities of the input sequence and posterior probabilities of the training sequence.
According to a fourth aspect there is presented a classification device for determining an uncertainty measure of a mixture-model based parametric classifier. The classification device comprises an obtain module configured to obtain a short-term frequency representation of a multimedia signal. The short-term frequency representation defines an input sequence. The classification device comprises a classify module configured to classify the input sequence to belong to one class of at least two available classes using the parametric classifier. The parametric classifier has been trained with a training sequence. The classification device comprises a determine module configured to determine an uncertainty measure of the thus classified input sequence based on a relation between posterior probabilities of the input sequence and posterior probabilities of the training sequence.
According to a fifth aspect there is presented a computer program for determining an uncertainty measure of a mixture-model based parametric classifier, the computer program comprising computer program code which, when run on a classification device, causes the classification device to perform a method according to the first aspect.
According to a sixth aspect there is presented a computer program product comprising a computer program according to the fifth aspect and a computer readable storage medium on which the computer program is stored.
It is to be noted that any feature of the first, second, third, fourth, fifth and sixth aspects may be applied to any other aspect, wherever appropriate. Likewise, any advantage of the first aspect may equally apply to the second, third, fourth, fifth, and/or sixth aspect, respectively, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following detailed disclosure, from the attached dependent claims as well as from the drawings.
Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the element, apparatus, component, means, step, etc.” are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
The inventive concept is now described, by way of example, with reference to the accompanying drawings.
The inventive concept will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the inventive concept are shown. This inventive concept may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. Like numbers refer to like elements throughout the description. Any step or feature illustrated by dashed lines should be regarded as optional.
With reference to
Reference is here made to
As noted above, it could be challenging to deduce the degree of uncertainty in the classification decision based on the likelihood P(x|ωm) itself, because it depends on the shape of the probability density function, the particular input sequence x, etc.
Another option is to use the posterior probabilities P(ωm|x), which represent the probability of the class being ωm given the input sequence x, also considering the impact of the input sequence x being of finite length. For example, the input sequence x could correspond to a length of 200 ms.
An example of the impact of the finite length of the input sequence x is presented in
In conclusion, if the models are complex (consisting of many clusters) and the input sequence x is short, P(ωm|x) will depend on which clusters are associated with the particular points (as defined by the vectors xn) in x. Since the model is global (a GMM outputs only a weighted sum of votes from the individual clusters), the classifier 110 cannot capture how the short-sequence factor impacts the certainty of the classification P(ωm|x).
The embodiments disclosed herein thus relate to determining an uncertainty measure of the mixture-model based parametric classifier 110. In order to obtain such an uncertainty measure there is provided a classification device 200a, 200b, a method performed by the classification device 200a, 200b, and a computer program product comprising code, for example in the form of a computer program, that, when run on a classification device 200a, 200b, causes the classification device 200a, 200b to perform the method.
Reference is now made to
S102: The classification device 200a, 200b obtains a short-term frequency representation of a multimedia signal. Examples of the short-term frequency representation and examples of the multimedia signal will be provided below. The short-term frequency representation defines an input sequence x.
S104: The classification device 200a, 200b classifies the input sequence x to belong to one class ωm* of at least two available classes ω1, ω2. The input sequence x is classified using the parametric classifier 110. The parametric classifier 110 has been trained with a training sequence y. In this respect the parametric classifier 110 implements operations which as such are known in the art for classifying the input sequence x into one class ωm* of the at least two available classes ω1, ω2.
S106: The classification device 200a, 200b determines an uncertainty measure of the thus classified input sequence x. The uncertainty measure is based on a relation between posterior probabilities of the input sequence x and posterior probabilities of the training sequence y. Examples of uncertainty measures will be provided below. Step S106 can be implemented by the uncertainty module 210a, 210b.
Determination of the uncertainty measure, as performed by the uncertainty module 210a, 210b, can thereby be added to a legacy classification scheme, as performed by the parametric classifier 110, without modifying the core classification determination implemented by the parametric classifier 110.
The determination of the uncertainty measure thus requires the classification device 200a, 200b to obtain the posterior probabilities of the input sequence x and to obtain the posterior probabilities of the training sequence y.
Embodiments relating to further details of determining the uncertainty measure of a mixture-model based parametric classifier 110 as performed by the classification device 200a, 200b will now be disclosed. Reference is now made to
There could be different kinds of parametric classifiers 110. According to an embodiment the parametric classifier 110 is based on Gaussian Mixture Models (GMMs).
There could be different properties that the at least two available classes ω1, ω2 represent. For example, the classification of the input sequence x can be performed to classify languages, speakers, and/or genders. Hence, according to an embodiment each of the at least two available classes ω1, ω2 represents a unique language, a unique speaker, or a unique gender.
There could be different ways for the classification device 200a, 200b to obtain the short-term frequency representation of the multimedia signal as in step S102. According to an embodiment the short-term frequency representation is provided by mel-frequency cepstral components (MFCCs). In this respect, the MFCCs are coefficients that collectively make up a mel-frequency cepstrum (MFC). The MFC is a representation of the short-term power spectrum of an audio waveform of the multimedia signal, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. According to one embodiment the MFCCs are made readily available to the classification device 200a, 200b. Hence, this embodiment requires a module configured to provide the MFCCs from an audio waveform of the multimedia signal. According to another embodiment the classification device 200a, 200b receives the audio waveform and extracts the MFCCs from the audio waveform. Hence, according to an embodiment the classification device 200a, 200b is configured to perform step S102a:
Step S102a: The classification device 200a, 200b extracts the MFCCs from the audio waveform of the multimedia signal. How to extract MFCCs from an audio waveform is as such known in the art and further description thereof is therefore omitted. Step S102a is performed as part of step S102.
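As an illustration only, such extraction could rely on a standard audio library; the sketch below uses librosa, and the file name and the choice of 13 coefficients are placeholder assumptions.

```python
import librosa

# Load the audio waveform; sr=None keeps the native sampling rate.
waveform, sr = librosa.load("speech.wav", sr=None)

# One vector of MFCCs per analysis frame; mfccs has shape (13, n_frames).
mfccs = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)

# Transpose so that each row is one D-dimensional input vector x_n.
x = mfccs.T
```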
Assuming that the input sequence x is divided into input vectors xn, each input vector xn can then correspond to a vector of MFCCs. Assuming that the audio waveform is composed of frames, there is then one vector of MFCCs per frame.
There could be different types of audio waveforms. According to an embodiment the audio waveform represents a speech signal.
There could be different types of audio classification of which the herein disclosed methods for parametric audio classification could form part. According to an embodiment the step S102 of obtaining, the step S104 of classifying, and the step S106 of determining are performed in an audio mining application. In this respect, when uncertainty is detected in phone recognition in an automatic speech recognition module, a language model could be adapted to compensate for the detected uncertainty. In another example, in a video or audio mining application, the length of the input sequence x could be extended until a desired level of certainty is reached.
Further, since the discriminant function, as defined in Equation (6), is additive in terms of the input sequence x, i.e., $g_m(x_1, \ldots, x_N) = g_m(x_1, \ldots, x_{N-1}) + g_m(x_N)$, delaying the decision and accommodating more data is straightforward and does not require re-computing the past contributions.
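As an illustration of this additivity, a classification decision could be delayed as sketched below; the class and method names are illustrative assumptions.

```python
import numpy as np

class RunningDiscriminant:
    """Accumulates g_m(x_1, ..., x_n) one input vector at a time,
    exploiting g_m(x_1..x_N) = g_m(x_1..x_{N-1}) + g_m(x_N)."""

    def __init__(self, num_classes):
        self.scores = np.zeros(num_classes)

    def update(self, log_likelihoods_of_new_vector):
        # Element m holds log P(x_n | omega_m) for the newest vector x_n;
        # past contributions stay cached in self.scores.
        self.scores += log_likelihoods_of_new_vector
        return int(np.argmax(self.scores))   # current best class index m*
```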
There can be different kinds of uncertainty measures. According to an embodiment the uncertainty measure describes a deviation from an optimal performance of the parametric classifier 110. According to some aspects the optimal performance is based on the posterior probabilities of the training sequence y.
There can be different kinds of posterior probabilities. According to an embodiment there is one posterior probability of the input sequence x for each available class ω1, ω2. The posterior probability for a given class ωm then represents the probability of the given class ωm for the input sequence x. Further, according to an embodiment there is one posterior probability of the training sequence y for each available class ω1, ω2. The posterior probability for a given class ωm then represents the probability of the given class ωm for the training sequence y. Examples of how the posterior probabilities can be determined will be provided below.
As will be further disclosed below, according to some embodiments the uncertainty measure is defined as the minimum of 1 and a ratio between the posterior probabilities of the input sequence x and the posterior probabilities of the training sequence y.
In the following it is assumed that the total set of parameters $\{u_{m,k}, \mu_{m,k}, \Sigma_{m,k}\}_{k=1}^{K}$ for all M classes is already available. These parameters could be estimated by the classification device 200a, 200b implementing an Expectation-Maximization (EM) algorithm. Next will be described how to determine the uncertainty measure associated with these classes.
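By way of orientation, the per-class EM estimation could be sketched with scikit-learn's GaussianMixture as below; this library choice and the assumption that the training sequence y is already segmented per class are illustrative, not prescribed by the embodiments.

```python
from sklearn.mixture import GaussianMixture

def train_class_models(training_segments, K):
    """Fit one GMM per class with EM, yielding the parameter sets
    {u_{m,k}, mu_{m,k}, Sigma_{m,k}} for m = 1, ..., M."""
    models = []
    for y_m in training_segments:   # y_m: (N_m, D) part of y for class m
        gmm = GaussianMixture(n_components=K, covariance_type="full")
        gmm.fit(y_m)                # EM parameter estimation
        models.append((gmm.weights_, gmm.means_, gmm.covariances_))
    return models
```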
Embodiments relating to the operations performed by the uncertainty module 210a will now be disclosed with reference to the classification device 200a of
According to the embodiment of
The embodiment of
According to a first example, if the entire training sequence y is available, the classification device 200a determines the normalization factor $\gamma_m^{opt}$ by implementing Equation (8):

$$\gamma_m^{opt} = P(\omega_m \mid y) \quad (8)$$
According to a second example, if the entire training sequence y is not available, or if it is computationally prohibitive for the classification device 200a to implement Equation (6), the classification device 200a is configured to determine $\gamma_m^{opt}$ by calculating the Bhattacharyya bound on a per-cluster basis and summing over all clusters and classes. This gives the total probability of error, which could be used as an estimate of (1−P(ωm|yinf)), where yinf denotes a training sequence of infinite length.
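The Bhattacharyya bound for a pair of Gaussian clusters is standard and can be sketched as follows; the summation over all clusters and classes described above is omitted, and the cluster priors p1, p2 are illustrative parameters.

```python
import numpy as np

def bhattacharyya_bound(mu1, cov1, mu2, cov2, p1=0.5, p2=0.5):
    """Bhattacharyya upper bound on the Bayes error between two Gaussian
    clusters; summed per cluster it estimates 1 - P(omega_m | y_inf)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    # Bhattacharyya distance between the two Gaussian densities
    d_b = (0.125 * diff @ np.linalg.solve(cov, diff)
           + 0.5 * np.log(np.linalg.det(cov)
                          / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return np.sqrt(p1 * p2) * np.exp(-d_b)
```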
The classification device 200a is configured to, at the classification stage, for the given input sequence x, obtain the index of the most likely class m* by implementing Equation (4), and then determine an adaptive factor $\gamma_{m^*}$ associated with the particular input sequence x by implementing Equation (9):

$$\gamma_{m^*} = P(\omega_{m^*} \mid x) \quad (9)$$
The classification device 200a is then configured to determine the uncertainty measure $\Lambda_{m^*}$ regarding the classification made in step S104, i.e., that the most likely class for the input sequence x is ωm*, by implementing Equation (10):

$$\Lambda_{m^*} = \min\!\left(1,\; \frac{\gamma_{m^*}}{\gamma_{m^*}^{opt}}\right) \quad (10)$$
Hence, according to the embodiment of
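By way of illustration only, this first uncertainty measure can be sketched as follows under the equal-prior assumption, where the posteriors P(ωm|x) follow from normalizing the per-class log-likelihoods; the helper name and the pre-computed array gamma_opt of training-stage posteriors are illustrative assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def uncertainty_lambda(log_likelihoods, gamma_opt):
    """Equations (9)-(10): posterior of the winning class for the input
    sequence x, normalized by the training-stage posterior gamma_opt."""
    # Equal priors: P(omega_m | x) is a softmax of the g_m(x) values.
    posteriors = np.exp(log_likelihoods - logsumexp(log_likelihoods))
    m_star = int(np.argmax(log_likelihoods))
    gamma_star = posteriors[m_star]                           # Equation (9)
    return m_star, min(1.0, gamma_star / gamma_opt[m_star])   # Equation (10)
```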
Embodiments relating to the operations performed by the uncertainty module 210b will now be disclosed with reference to the classification device 200b of
According to the embodiment of
From Equations (6)-(7) it follows that the likelihood is roughly determined by the association of xn with particular clusters Φm,k. For example, assuming that all covariances are equal, this association means that the distance $|x_n - \mu_{m,k}|$ is smallest, which also means that Φm,k(xn) is largest among all clusters. Therefore, for the posterior probability at a given cluster Φm,k the notation P(ωm|μm,k) is used, where μm,k is the mean of the cluster Φm,k.
The classification device 200b determines posterior probabilities for each cluster Φm,k. The classification device 200b could pre-store the determined posterior probabilities. The posterior probabilities could be determined analytically or empirically, similarly to the description above. For example, in order to perform an empirical determination of the posterior probability for a cluster with mean μm,k, the classification device 200b is configured to perform the following operations directly on the training sequence y. The classification device 200b stores in a variable $L_{m,k}$, for each cluster, the number of training data points in the training sequence y that belong to cluster Φm,k. The classification device 200b stores in a variable $L_{m,k}^{\omega_m}$ the number of those training data points that are generated from the class ωm itself, so that the per-cluster posterior probability can be estimated as $P(\omega_m \mid \mu_{m,k}) = L_{m,k}^{\omega_m} / L_{m,k}$.
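By way of illustration, this empirical counting could be sketched as follows under the equal-covariance (nearest-mean) simplification from above; class labels for the training points are assumed to be available, and all names are illustrative.

```python
import numpy as np

def per_cluster_posteriors(y, labels, means, m):
    """Empirical P(omega_m | mu_{m,k}): of the training points assigned
    to cluster k of class m, the fraction that carry the label m."""
    K = len(means)
    L = np.zeros(K)        # L_{m,k}: all points assigned to cluster k
    L_own = np.zeros(K)    # points from class omega_m assigned to cluster k
    for point, label in zip(y, labels):
        # nearest-mean assignment (equal-covariance simplification)
        k = int(np.argmin(np.linalg.norm(means - point, axis=1)))
        L[k] += 1
        L_own[k] += (label == m)
    return L_own / np.maximum(L, 1)   # guard against empty clusters
```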
In this respect, a cluster Φm,k that is maximized by many points from a class ωj≠m, relative to the data points generated from class ωm, does not have much discriminative power by itself; that cluster simply consists of points from different classes. At the same time, a cluster Φm,k that is maximized by many points from its own class ωm, relative to other classes ωj≠m, has a lot of discriminative power.
According to the embodiment of
The normalization factor $\Psi_m^{opt}$, determined at the training stage as the weighted sum of the per-cluster posterior probabilities,

$$\Psi_m^{opt} = \sum_{k=1}^{K} u_{m,k}\, P(\omega_m \mid \mu_{m,k}) \quad (12)$$

thus relates to the performance of the classifier 110 when performing classification of the training sequence y. As will be disclosed below, the normalization factor $\Psi_m^{opt}$ is used by the classification device 200b when determining the uncertainty measure of the classification of the input sequence x.
At the classification of the input sequence x the classifier 110 implements Equation (4) to obtain the index m* of the most likely class ωm*. The classification device 200b then determines an uncertainty measure associated with that decision.
According to the embodiment of
The classification device 200b is then configured to determine an adaptive factor $\Psi_{m^*}$ associated with the particular input sequence x by implementing Equation (13):

$$\Psi_{m^*} = \sum_{k=1}^{K} v_{m^*,k}\, P(\omega_{m^*} \mid \mu_{m^*,k}) \quad (13)$$

Here, the input weight factor $v_{m^*,k}$ denotes the fraction of the input vectors xn in the input sequence x that are associated with cluster Φm*,k.
The classification device 200b is then configured to determine the uncertainty measure $\Omega_{m^*}$ regarding the classification made in step S104, i.e., that the most likely class for the input sequence x is ωm*, by implementing Equation (14):

$$\Omega_{m^*} = \min\!\left(1,\; \frac{\Psi_{m^*}}{\Psi_{m^*}^{opt}}\right) \quad (14)$$
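Putting Equations (13) and (14) together, a minimal sketch could look as follows; the nearest-mean assignment of input vectors to clusters mirrors the simplification used above, and all names are illustrative.

```python
import numpy as np

def uncertainty_omega(x, means, cluster_posteriors, psi_opt):
    """Equations (13)-(14) for the winning class m*: weight the
    pre-stored per-cluster posteriors by the empirical input weights."""
    # v_{m*,k}: fraction of input vectors nearest to cluster mean k
    assign = np.argmin(
        np.linalg.norm(means[None, :, :] - x[:, None, :], axis=2), axis=1)
    v = np.bincount(assign, minlength=len(means)) / len(x)
    psi_star = float(v @ cluster_posteriors)      # Equation (13)
    return min(1.0, psi_star / psi_opt)           # Equation (14)
```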
Hence, according to the embodiment of
The classification device 200b is optionally configured to determine an additional indicator $\Theta_{m^*}$ by implementing Equation (15):

$$\Theta_{m^*} = \begin{bmatrix} P(\omega_{m^*} \mid \mu_{m^*,1}) & \cdots & P(\omega_{m^*} \mid \mu_{m^*,K}) \\ v_{m^*,1}/u_{m^*,1} & \cdots & v_{m^*,K}/u_{m^*,K} \end{bmatrix} \quad (15)$$
The upper-most row in $\Theta_{m^*}$ is defined by the pre-stored posterior probabilities of the most likely class, while the entries of the lower-most row provide information about the distortion in the data statistics due to a limited number of samples (caused by classification of an input sequence x of short length). According to an embodiment the classification device 200b is configured to perform steps S108 and S110 in order to implement Equation (15):
S108: The classification device 200b determines, for each cluster Φm*,k for k=1, . . . , K, a relation $\Theta_{m^*}$ between the input weight factors $v_{m^*,k}$ and the training weight factors $u_{m^*,k}$ for the class ωm* to which the input sequence x has been classified to belong.
S110: The classification device 200b stores the thus determined relation Θm*.
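A hypothetical sketch of steps S108 and S110, assuming the two-row form of Equation (15) with per-cluster posterior probabilities in the first row and the weight ratios in the second:

```python
import numpy as np

def theta_indicator(cluster_posteriors_m_star, v, u):
    """Equation (15) sketch: first row holds the pre-stored per-cluster
    posteriors of the winning class; the second row relates the input
    weights v_{m*,k} to the training weights u_{m*,k} (values far from 1
    signal distorted short-sequence statistics)."""
    theta = np.vstack([cluster_posteriors_m_star, v / u])
    return theta   # store for later inspection (step S110)
```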
In general terms, when the uncertainty measure (either Ωm* or Λm*) of classifying the input sequence x into class ωm* has a numerical value that is close to 1 there is a high certainty in the classification decision, and the more the uncertainty measure decreases the more uncertain the classification decision becomes.
A non-limiting illustrative example of at least some of the herein disclosed embodiments will be provided next. According to the illustrative example there are two classes ω1, ω2 with two clusters each, as represented by the cluster means μm,k. Table 1 illustrates numerical values of all posterior probabilities. However, as the skilled person understands, only the values for one of the classes need to be stored.
Reference is also made to
To simplify notation, all variances are assumed equal and are hence excluded from the calculations. All training weight factors um,k are also assumed to be equal, i.e., um,k=0.5. Given these weights and the posterior probabilities in Table 1, the classification device 200a, 200b determines $\Psi_1^{opt} = \Psi_2^{opt} = 0.5 \cdot 0.85 + 0.5 \cdot 0.55 = 0.7$ by implementing Equation (12).
Assume that the input sequence x is divided into 10 input vectors xn. Assume further that the input sequence x generates a higher value for the discriminant function of the first class ω1 than for the second class ω2, i.e., m*=1 according to Equation (4). Let the input sequence x have 9 out of 10 input vectors closer to μ1,2 and only one closer to μ1,1, which leads to v1,1=0.1 and v1,2=0.9. Then, by the classification device 200a, 200b implementing Equation (13) and Equation (14), the values Ψ1=0.58 and Ω1=0.8286 are obtained. Since most of the input vectors xn fall closest to μ1,2=11 (in a confusion region, i.e., where the classes ω1, ω2 are difficult to discriminate), the determined uncertainty measure Ω1=0.8286 is regarded as being significantly lower than 1.
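The arithmetic of the example can be checked directly; the values 0.85 and 0.55 are the Table 1 posterior probabilities for the two clusters of class ω1, as used above.

```python
import numpy as np

# Per-cluster posteriors P(omega_1 | mu_{1,k}) and equal training weights.
post = np.array([0.85, 0.55])
u = np.array([0.5, 0.5])
psi_opt = float(u @ post)                 # Equation (12): 0.7

# 1 of 10 input vectors nearest mu_{1,1}, 9 nearest mu_{1,2}.
v = np.array([0.1, 0.9])
psi_star = float(v @ post)                # Equation (13): 0.58
omega_1 = min(1.0, psi_star / psi_opt)    # Equation (14)

print(psi_opt, psi_star, round(omega_1, 4))   # 0.7 0.58 0.8286
```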
Particularly, the processing circuitry 310 is configured to cause the classification device 200a, 200b to perform a set of operations, or steps, S102-S110, as disclosed above. For example, the storage medium 330 may store the set of operations, and the processing circuitry 310 may be configured to retrieve the set of operations from the storage medium 330 to cause the classification device 200a, 200b to perform the set of operations. The set of operations may be provided as a set of executable instructions.
Thus the processing circuitry 310 is thereby arranged to execute methods as herein disclosed. The storage medium 330 may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. The classification device 200a, 200b may further comprise a communications interface 320 configured for communications with another device, for example to obtain the short-term frequency representation as in step S102 and to provide the uncertainty measure determined in step S106 as a result. As such the communications interface 320 may comprise one or more transmitters and receivers, comprising analogue and digital components. The processing circuitry 310 controls the general operation of the classification device 200a, 200b, e.g., by sending data and control signals to the communications interface 320 and the storage medium 330, by receiving data and reports from the communications interface 320, and by retrieving data and instructions from the storage medium 330. Other components, as well as the related functionality, of the classification device 200a, 200b are omitted in order not to obscure the concepts presented herein.
It should also be mentioned that even though the modules correspond to parts of a computer program, they do not need to be separate modules therein; the way in which they are implemented in software depends on the programming language used. Preferably, one or more or all functional modules 310a-310f may be implemented by the processing circuitry 310, possibly in cooperation with functional units 320 and/or 330. The processing circuitry 310 may thus be configured to fetch, from the storage medium 330, instructions as provided by a functional module 310a-310f and to execute these instructions, thereby performing any steps as disclosed above.
The classification device 200a, 200b may be provided as a standalone device or as a part of at least one further device. For example, the classification device 200a, 200b may be provided in an audio mining device. Alternatively, functionality of the classification device 200a, 200b may be distributed between at least two devices, or nodes. These at least two nodes, or devices, may either be part of the same network part or may be spread between at least two such network parts.
Thus, a first portion of the instructions performed by the classification device 200a, 200b may be executed in a first device, and a second portion of the instructions performed by the classification device 200a, 200b may be executed in a second device; the herein disclosed embodiments are not limited to any particular number of devices on which the instructions performed by the classification device 200a, 200b may be executed. Hence, the methods according to the herein disclosed embodiments are suitable to be performed by a classification device 200a, 200b residing in a cloud computational environment. Therefore, although a single processing circuitry 310 is illustrated in
In the example of
The inventive concept has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the inventive concept, as defined by the appended patent claims.