The present invention relates to a signal classification system, and more particularly, to an intrinsic, discriminant, dimension-based signal representation and classification system that is operable for determining the minimum-dimension of a feature set that is needed for an optimum signal representation and classification.
Signals are used for a variety of purposes. By way of example, radio-frequency signals carrying information can be used for communication, while radar pulses are often used to determine the existence of an object in space or on the ground. Signals are generally measured and classified to determine their information content. The information content of a signal is extracted in the form of features that characterize it. The features are then used by classifiers to classify the signal. To better classify a signal, it would be useful to know which features most accurately and uniquely represent the signal. To determine such a feature set, the prior art uses feature-extraction algorithms. In existing algorithms, signal-specific features are extracted for the representation, and then the optimum set of features is selected for the classification by applying techniques such as Principal Component Analysis and the minimization of mutual information. A problem with such algorithms is that they rely on signal-specific features, which are often difficult to ascertain when the signal is combined with background noise or corrupted by the presence of other signals.
Thus, a continuing need exists for a system to identify the minimum-dimension discriminant features that optimally represent and classify signals of interest using a set of non-signal-specific features (i.e., features based on the overall trend of signals or information content) that represent signals robustly.
The present invention is a method for determining the minimum-dimension of a feature set that is needed for optimal signal representation. The method comprises using a processor to perform acts of:
The act of determining the minimum number of features for optimal signal representation further comprises an act of considering a set of N features F = {F1, F2, …, FN}.
In another aspect, the act of determining the minimum number of features for optimal signal representation is performed according to the following:
In yet another aspect, the smallest subset of features that provides for optimal signal classification is determined according to the following, using the minimum feature set:
In the act of determining the EDBFM, the EDBFM is calculated according to the following:
Additionally, in the act of determining the EDBFM, the EDBFM is derived in a two-class problem having classes ω1 and ω2, according to acts of:
In yet another aspect, in the act of calculating an estimate of the final EDBFM, the final EDBFM is calculated in a multi-class problem according to the following:

Σ_EDBFM = Σ_{i=1}^{M−1} Σ_{j=i+1}^{M} p(ωi) p(ωj) Σ_DBFMij,

where M is the number of classes, Σ_DBFMij is the DBFM between classes ωi and ωj, and p(ωi) is the prior probability of class ωi.
Finally, as can be appreciated by one skilled in the art, the present invention also comprises a system and computer program product configured to cause a computer to perform the operations of the method described herein.
The objects, features, and advantages of the present invention will be apparent from the following detailed descriptions of the various aspects of the invention in conjunction with reference to the following drawings, where:
The present invention relates to a signal classification system, and more particularly, to an intrinsic, discriminant, dimension-based signal representation and classification system that is operable for determining the minimum-dimension of a feature set that is needed for an optimum signal representation and classification. The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of embodiments. Thus, the present invention is not intended to be limited to the embodiments presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification, (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6. In particular, the use of “step of” or “act of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
Before describing the invention in detail, first a glossary of terms used in the description and claims is given as a central resource for the reader. Second, a description of various principal aspects of the present invention is provided. Third, an introduction is provided for the reader with a general understanding of the present invention. Fourth, a description of various aspects of the present invention is provided to give an understanding of the specific details. Fifth, an exemplary simulation using the disclosed techniques is provided. Sixth, a conclusion is provided to supply the reader with a brief, yet concise summary of the present invention.
Before describing the specific details of the present invention, a glossary is provided in which various terms used herein and in the claims are defined. The glossary is intended to provide the reader with a general understanding of the intended meaning of the terms, but is not intended to convey the entire scope of each term. Rather, the glossary is intended to supplement the rest of the specification in more accurately explaining the terms used. The definitions for kurtosis, skewness, and Renyi entropy were provided by “Wikipedia, The Free Encyclopedia.” Wikipedia can be found at http://www.wikipedia.org.
Effective Decision Boundary Feature Matrix (EDBFM)—The term “EDBFM” as used with respect to this invention is a matrix that is obtained by integrating the outer product of the unit normal vector at a feature point x with its transpose, weighted by the probability density function of x, over the effective decision boundary. EDBFM is defined as:

Σ_EDBFM = ∫_{S′} N(x) N′(x) p(x) dx,

where N(x) is the unit normal vector at x, N′(x) is the transpose of N(x), p(x) is a probability density function, and S′ is the effective decision boundary, which is defined as: {x | h(x) = t, x ∈ R1 or R2}, where R1 is the smallest region that contains a certain portion Pthreshold of class ω1, and R2 is the smallest region that contains a certain portion Pthreshold of class ω2.
Information Bound—The term “information bound” as used with respect to this invention is the minimum set of features that are mutually uncorrelated, or is the minimum set of features for which dI>0 where dI is the change in information gain.
Instruction Means—The term “instruction means” as used with respect to this invention generally indicates a set of operations to be performed on a computer, and may represent pieces of a whole program or individual, separable, software modules. Non-limiting examples of “instruction means” include computer program code (source or object code) and “hard-coded” electronics (i.e., computer operations coded into a computer chip). The “instruction means” may be stored in the memory of a computer or on a computer-readable medium such as a floppy disk, a CD-ROM, and a flash drive.
Kurtosis—The term “kurtosis” as used with respect to this invention is a measure of the “peakedness” of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly-sized deviations.
Renyi entropy—The term “Renyi entropy,” an extension of Shannon entropy, refers to a means of quantifying the entropy or information content of a system. Renyi entropy characterizes how much information, on average, is gained when the value of a random variable is learned. Alternatively, entropy characterizes the uncertainty about the value of a random variable before learning it; it should not be confused with thermodynamic entropy. Renyi entropy is defined as:

Hα = (1/(1−α)) log2 (Σi pi^α),

where the pi are probabilities and α > 0, α ≠ 1. As α approaches 1, Hα converges to the Shannon entropy. For α and α′ where α ≦ α′, Renyi entropy guarantees that Hα′ ≦ Hα.
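As a brief worked example of this definition, for a uniform distribution over N outcomes (pi = 1/N for all i), Σi pi^α = N^(1−α), so Hα = (1/(1−α)) log2 N^(1−α) = log2 N for every admissible α; in the uniform case, the Renyi entropy of every order coincides with the Shannon entropy.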
Skewness—The term “skewness” as used with respect to this invention is a measure of the asymmetry of the probability distribution of a real-valued random variable. Roughly speaking, a distribution has positive skew (right-skewed) if the higher tail is longer and negative skew (left-skewed) if the lower tail is longer.
The present invention has three “principal” aspects. The first is a signal representation and classification system. The signal representation and classification system is typically in the form of a computer system operating software or in the form of a “hard-coded” instruction set. This system may be incorporated into a wide variety of devices that provide different functionalities. The second principal aspect is a method, typically in the form of software, operated using a data processing system (computer). The third principal aspect is a computer program product. The computer program product generally represents computer-readable code stored on a computer-readable medium such as an optical storage device, e.g., a compact disc (CD) or digital versatile disc (DVD), or a magnetic storage device such as a floppy disk or magnetic tape. Other, non-limiting examples of computer-readable media include hard disks, read-only memory (ROM), and flash-type memories. These aspects will be described in more detail below.
A block diagram depicting the components of a signal representation and classification system of the present invention is provided in
An illustrative diagram of a computer program product embodying the present invention is depicted in
Generally, a set of signal-specific features, such as energy and frequency change, is used for the representation and classification of signals of interest. However, for robust representation and classification, features that are less signal-specific, such as a measure of information content (e.g., Renyi entropy) and measures of statistical properties of signals (e.g., kurtosis and skewness), are needed. The present invention derives such features. Further, the present invention describes an information bound-based measure to find the minimum-dimension of the feature set that is needed for an optimum signal representation. Similarly, the present invention also describes a decision boundary-based intrinsic discriminant dimension of a feature set that can be used in optimum classification.
An advantage of the present invention is that the computed features do not correspond to signal-specific features, but instead correspond to overall trends of signals; hence, the approach is robust. Another advantage is the ability to find the minimum-dimension discriminant features that provide the optimum signal representation and classification. The minimum-dimension indicates that adding other features does not improve the classification and/or representation accuracy. Therefore, the minimum-dimension reduces the computational burden of extracting features that are not going to help in the improvement of accuracy.
The present invention is useful in classification algorithms. Accordingly, the present invention can be utilized in many commercial applications, such as signal confirmation, interference identification, and spectrum management, where optimum classification is very important.
In all these applications, first a signal is represented in terms of certain features, and then those features are used in classifying the signal. In most applications, the background environment is complex and dynamically changing, creating an environment-corrupted signal. The features that are extracted from the environment-corrupted signal should represent a desired signal's information content and its overall properties, and not the environment that is imprinted on the desired signal. Then the question becomes, what are these features? Further, what is the minimum number or dimension of these features one would need to optimally classify signals? Optimality here means that adding more features will not improve the classification accuracy. As such, the present invention describes several exemplary robust features that have been developed which correspond to a signal's information content and overall statistical properties. The robust features are described in section 4.1. The present invention also describes how, by using a measure based on the information bound, an optimum set of features that robustly represent signals can be found. The information bound-based intrinsic dimension of features for signal representation is described in section 4.2. Additionally, a decision boundary-based technique is described to find the minimum-dimension of the feature set that provides the optimum classification accuracy. This approach is described in section 4.3.
As described above, the present invention relates to a system for determining the minimum-dimension of a feature set that is needed for optimal signal representation. The system and its operations are described in further detail below.
(4.1) Robust Features
As mentioned before, the desired signals are almost always corrupted by noise associated with a dynamically changing environment. As such, the signals that need to be represented and classified have to be characterized by features that are independent of noise characteristics. The present invention is designed to identify such features. Through a simulation, three features have been identified that can be used to satisfy this requirement. While these features are shown for illustration purposes, the invention is not intended to be limited thereto.
One such feature is the desired signal's information content. This can be measured using an entropy function. This measure can be extended from probability theory to the frequency plane or time-frequency plane by treating the spectrum or time-frequency distribution as a density function. In the frequency or time-frequency plane, Renyi entropy is a more appropriate measure for signal information content. So, one of the features that can be used is Renyi entropy. Since the signals are corrupted by noise, they are random signals. Therefore, statistical features that discriminate signal from noise, such as the higher-order moments skewness and kurtosis, need to be used. These features are further defined below.
(4.1.1) Renyi Entropy
The Fourier spectrum of the signal x(t) can be used to compute the Renyi entropy. Specifically, it is computed as:

px(i) = |y(i)|² / Σi |y(i)|², with y = F_x(ω) = FFT(x(t)),   (1)

Hα(y) = (1/(1−α)) log2 (Σi px(i)^α),   (2)

with α > 0 and α ≠ 1. In the above equations, px(i) denotes the probability, Hα(y) is the Renyi entropy, which is a generalized version of the Shannon entropy, FFT is the fast Fourier transform of the signal x(t), and y = F_x(ω) is the resulting spectrum. The Renyi entropy is more robust than the Shannon entropy and has one more degree of freedom; when α is equal to one, the two entropies are equal.
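By way of a non-limiting illustration, a minimal computational sketch of equations (1) and (2) is given below in Python (assuming the numpy library); the function name renyi_entropy and the default order α = 3 are illustrative assumptions rather than part of the invention:

    import numpy as np

    def renyi_entropy(x, alpha=3.0):
        # Renyi entropy of the Fourier spectrum of x, per equations (1) and (2).
        y = np.fft.fft(x)                  # y = F_x(w), the FFT of the signal x(t)
        power = np.abs(y) ** 2
        p = power / power.sum()            # equation (1): spectrum as a probability mass
        p = p[p > 0]                       # guard against taking the log of empty bins
        return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)   # equation (2)

    # Usage: a pure tone concentrates its spectrum and yields a lower
    # entropy than broadband noise.
    t = np.arange(1024) / 1024.0
    print(renyi_entropy(np.sin(2 * np.pi * 50 * t)))
    print(renyi_entropy(np.random.randn(1024)))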
(4.1.2) Kurtosis
The kurtosis is a measure of excess, or how much energy is in the tails of a distribution function. Since noise generally has a Gaussian distribution, its kurtosis will be close to zero. Therefore, this measure helps in distinguishing between noise and a signal of interest. The kurtosis of a random variable y is defined as:

kurt(y) = E[(y − my)⁴] / σy⁴ − 3,   (3)

where my denotes the mean, E denotes expectation, and σy denotes the standard deviation.
(4.1.3) Skewness
The skewness is a measure of the non-symmetry of a distribution. In general, the spectrum of a signal is symmetric while the spectrum of noise tends to be non-symmetric. Therefore, skewness can be used as a feature to distinguish signals from noise. The skewness of a random variable y is defined as:

skew(y) = E[(y − my)³] / σy³.   (4)

In equations (3) and (4) above, my = E[y] is the mean and σy² = E[(y − my)(y − my)*] is the variance. Additionally, * denotes conjugation.
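Continuing the illustrative Python sketch, equations (3) and (4) can be computed for a real-valued sample vector y as follows (the function names and the random test data are assumptions made only for illustration):

    import numpy as np

    def kurtosis(y):
        # Excess kurtosis per equation (3); near zero for Gaussian data.
        m, s = np.mean(y), np.std(y)
        return np.mean((y - m) ** 4) / s ** 4 - 3.0

    def skewness(y):
        # Skewness per equation (4); near zero for a symmetric distribution.
        m, s = np.mean(y), np.std(y)
        return np.mean((y - m) ** 3) / s ** 3

    # Usage: Gaussian noise gives values near zero for both measures,
    # while a one-sided exponential sample is strongly right-skewed.
    rng = np.random.default_rng(0)
    print(kurtosis(rng.normal(size=10000)), skewness(rng.normal(size=10000)))
    print(skewness(rng.exponential(size=10000)))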
(4.2) Information Bound-Based Intrinsic Dimension of Features for Signal Representation
Consider a set of N features F = {F1, F2, …, FN}. The mutual information between two features Fi and Fj is defined as: I(Fi, Fj) = H(Fi) − H(Fi|Fj), where H(Fi) is the entropy and H(Fi|Fj) is the conditional entropy. The change in information gain dI > 0 if Fi and Fj are mutually uncorrelated. In other words, there is information gain by including Fj with Fi. If there is no gain, then dI = 0, which implies that Fj is a redundant or non-discriminant feature. Based on this, the information bound is defined as the minimum set of features which are mutually uncorrelated, or as the minimum set of features for which dI > 0. This minimum set is defined as the intrinsic dimension of the features for optimal signal representation.
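One plausible numerical rendering of this selection rule is sketched below in Python; the histogram-based mutual information estimator, the greedy scan order, and the redundancy threshold are all illustrative assumptions, since the specification does not prescribe a particular estimator:

    import numpy as np

    def entropy_bits(labels):
        # Shannon entropy (in bits) of a discrete label sequence.
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def mutual_information(fi, fj, bins=16):
        # I(Fi, Fj) = H(Fi) + H(Fj) - H(Fi, Fj) = H(Fi) - H(Fi | Fj),
        # estimated by discretizing each feature into histogram bins.
        di = np.digitize(fi, np.histogram_bin_edges(fi, bins))
        dj = np.digitize(fj, np.histogram_bin_edges(fj, bins))
        joint = di * (dj.max() + 1) + dj   # unique label per (di, dj) pair
        return entropy_bits(di) + entropy_bits(dj) - entropy_bits(joint)

    def information_bound(features, threshold=0.2):
        # Greedily keep a feature only when it is nearly uncorrelated (low
        # mutual information, i.e., dI > 0) with every feature already kept.
        kept = [0]
        for k in range(1, len(features)):
            if all(mutual_information(features[k], features[j]) < threshold
                   for j in kept):
                kept.append(k)
        return kept   # indices of the minimum feature set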
(4.3) Decision Boundary-Based Intrinsic Dimension of Features for Classification
In the context of classification, the intrinsic dimension is defined as the smallest subset of features that provides the same classification accuracy as can be obtained from the original set. This dimension can be found based on the effective decision boundary feature matrix (EDBFM), which is defined as:

Σ_EDBFM = ∫_{S′} N(x) N′(x) p(x) dx,   (5)

where N(x) is the unit normal vector at x, N′(x) is the transpose of N(x), p(x) is a probability density function, and S′ is the effective decision boundary, which is defined as: {x | h(x) = t, x ∈ R1 or R2}, where R1 is the smallest region that contains a certain portion Pthreshold of class ω1, and R2 is the smallest region that contains a certain portion Pthreshold of class ω2. The integral in equation (5) is performed over the effective decision boundary. However, if the integral in equation (5) is performed over the entire decision boundary, a decision boundary feature matrix (DBFM) is obtained. It can be shown that the rank of the DBFM of a pattern classification problem is the intrinsic discriminant dimension of the feature set. The rank corresponds to the number of eigenvectors associated with the non-zero eigenvalues of the DBFM.
The numerical procedure to find the EDBFM for a two-class problem is as follows:
Note that the chi-square test in act (a) will eliminate the outliers. The chi-square test with respect to the other class in act (b) is needed to concentrate on the effective decision boundary. For a multi-class problem, the same acts as above are performed. However, the EDBFM is computed using the following equation:

Σ_EDBFM = Σ_{i=1}^{M−1} Σ_{j=i+1}^{M} p(ωi) p(ωj) Σ_DBFMij,   (6)

where M is the number of classes, Σ_DBFMij is the DBFM between classes ωi and ωj, and p(ωi) is the prior probability of class ωi. Then the eigenvalues and eigenvectors of this matrix are computed. The rank of the matrix, determined from the non-zero eigenvalues, indicates the intrinsic discriminant dimension of the feature set.
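Because the individual acts of the numerical procedure are recited in the claims rather than reproduced here, the following Python sketch is only one plausible rendering of equations (5) and (6): it locates decision-boundary points by bisecting between paired samples of the two classes under a decision function h, estimates unit normals from finite-difference gradients of h, and averages their outer products; every function name, the bisection scheme, and the finite-difference step are illustrative assumptions:

    import numpy as np

    def boundary_point(h, a, b, t=0.0, iters=40):
        # Bisect the segment a-b for a point with h(x) = t, assuming
        # h(a) - t and h(b) - t have opposite signs.
        lo, hi = np.asarray(a, float), np.asarray(b, float)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if (h(mid) - t) * (h(lo) - t) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def unit_normal(h, x, step=1e-4):
        # The gradient of the decision function h is normal to the decision
        # boundary h(x) = t; estimate it by central differences.
        g = np.array([(h(x + step * e) - h(x - step * e)) / (2 * step)
                      for e in np.eye(len(x))])
        return g / np.linalg.norm(g)

    def edbfm_two_class(h, class1, class2, t=0.0):
        # Monte-Carlo estimate of equation (5) from paired samples.
        d = class1.shape[1]
        m = np.zeros((d, d))
        for a, b in zip(class1, class2):
            n = unit_normal(h, boundary_point(h, a, b, t))
            m += np.outer(n, n)            # N(x) N'(x)
        return m / len(class1)

    def edbfm_multiclass(pairwise, priors):
        # Equation (6): prior-weighted sum of pairwise DBFMs, where
        # pairwise[(i, j)] is the DBFM between classes i and j.
        return sum(priors[i] * priors[j] * m for (i, j), m in pairwise.items())

    def intrinsic_discriminant_dimension(edbfm, tol=1e-6):
        # Rank determined from the non-zero eigenvalues of the (E)DBFM.
        w = np.linalg.eigvalsh(edbfm)
        return int(np.sum(w > tol * w.max()))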
Several types of communication and radar signals were considered for the verification of the above-disclosed algorithm and features. In the case of radar, real signals were also considered. Examples of waveforms of these signals are plotted in
Both the signal-specific features, such as energy ratio and frequency change, and the non-signal-specific robust features mentioned above (i.e., in section 4.1) were extracted for all of these signals. The mutual correlation, or mutual information, between them was computed. In all these cases, the information bound was reached with the features Renyi entropy, skewness, and kurtosis.
Next, for the features (i.e., Renyi entropy, skewness, and kurtosis) that were extracted from all the signal types mentioned above, the intrinsic discriminant dimension using the decision boundary (i.e., as described in section 4.3) was determined. From the eigenvalues and eigenvectors of the EDBFM of three types of signals—communication signals (shown in
More specifically,
Finally,
From the tables presented in
The present invention describes a method for obtaining the robust minimum features that can optimally represent and classify signals. The minimum-dimension of the feature set that is needed for representation is derived from the disclosed information bound, whereas the minimum-dimension of the feature set that is needed for classification is derived from the decision boundary. The described concepts were verified by performing a simulation using different types of signals. Through the simulation, it was shown that, at least for the types of signals considered, Renyi entropy, kurtosis, and skewness appear to be universal features that provide the optimum representation. For classification, it appears that a subset of these features, namely Renyi entropy and skewness, comprises the optimum features. One skilled in the art can appreciate that the present invention is not limited to the above features and signals and can be used with any signal to determine the minimum-dimension of the feature set that is needed for representation and classification.