The present invention relates to methods and apparatus for analysing an experimental data-set to estimate properties of the distribution (“model”). In particular, it relates to methods and apparatus in which a model of known functional form is estimated from the experimental data-set.
Many data-sets can be regarded as made up of (i) data points obtained from and representative of a model (“inliers”) and (ii) data points which contain no information about the model and which therefore should be neglected when parameter(s) of the model are to be estimated (“outliers”).
Existing outlier removal methods operate by using all the data points to generate one or more statistical measures of the entire data-set (e.g. its mean, median or standard deviation), and then using these measures to identify outliers. For example, the “robust standard deviation algorithm” (employed in [1]) computes a median and a statistical deviation from a number of data values and then discards as outliers all data points which are further than 3 standard deviations from the median. The “least median of squares algorithm” (employed in [2] and [3]) is applicable to data-sets composed of points in a two-dimensional space, and calculates the narrowest strip bounded by two parallel lines which contains the majority of the data points; again, once this strip has been determined using the entire data-set, the outliers are discarded. The “least trimmed squares algorithm” (employed in [4]) consists of minimising a cost function formed from all the data points, and then discarding outliers determined using the results of the minimisation. All three of these methods have the problem that they fail to work if the proportion of the outliers is greater than 50% of the data-set, because in this case the statistical measure of the entire data-set will be largely determined by the outliers, so that the points discarded as “outliers” will in fact include an approximately equal proportion of inliers.
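For illustration, the median-plus-3-standard-deviations rule can be sketched as follows. This is a minimal sketch, not code from reference [1]; the robust standard deviation here is estimated as 1.4826 times the median absolute deviation, a common but assumed choice:

```python
import statistics

def robust_std_outliers(values, k=3.0):
    """Flag points more than k robust standard deviations from the median."""
    med = statistics.median(values)
    # Median absolute deviation, scaled to estimate the standard deviation.
    mad = statistics.median(abs(v - med) for v in values)
    robust_std = 1.4826 * mad
    return [v for v in values if abs(v - med) > k * robust_std]
```

As the text notes, if more than half of the data are outliers the median itself is dominated by them, and this procedure then misclassifies points.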
Mathematical methods are used in the digital signal processing field to characterise signals and the processes that generate them. In this field, outlier data are commonly described as noisy signal components. A primary use of analogue and digital signal processing is to reduce noise and other undesirable components in acquired data.
In psychology, researchers typically seek a basis for predicting behaviour by studying a particular phenomenon and individuals' reactions to it. Outliers are individuals with abnormal reactions. To generate a model for the majority, objects with uncommon responses, that is, the outliers, must be eliminated. Researchers in this domain use the full range of statistical methods: regression, correlation, factor analysis and cluster analysis. To exclude abnormal individuals, various general and domain-specific approaches are applied in psychology, among them threshold values, confidence intervals, normal-distribution assumptions, clustering and pattern-based methods.
In the pharmaceutical field, researchers confront many outliers and aberrant observations. Least-squares procedures are most commonly applied, and other methods such as the Q test or Dixon's test are also used for outlier removal [6].
Outlier removal is especially important in medical imaging, where outliers generally correspond to abnormalities or pathologies of subjects being imaged. An efficient way to remove outliers is desirable to enhance the capability of dealing with both normal and abnormal images.
The present invention aims to address the above problem. In particular, the invention makes it possible to judge which data points are outliers by applying criteria different from statistical measures determined by the whole data-set.
In general terms, the present invention proposes that multiple subsets of the data points are each used to estimate the parameters of the model, that the resulting estimates are plotted in the parameter space to identify peak parameters, and that the outliers are identified as data points which are not well described by the peak parameters. In the original data space the data scatter for various reasons; when the input data are converted into the parameter space, parameters corresponding to correlated features tend to form dense clusters. This is why the parameter space is preferred for removing outliers.
Generally, for a model determined by K parameters, each subset should contain at least K′ data points to enable the K parameters to be estimated, where K′ is the number of data points, arbitrarily picked from the N input data points, that uniquely determines the K parameters.
Note that the subsets comprising only inliers will most likely form one cluster—being correlated with each other in the parameter space—whereas the subsets containing one or more outliers will tend to be less correlated. This result is true irrespective of the proportion of outliers in the data-set, and thus the present invention may make it possible to accurately discard a number of outliers which is more (even much more) than half of the data points. As explained below, some embodiments of the method are typically able to remove (N−K′−3) outliers from an input data-set with N data points.
Preferred features of the invention will now be described, for the sake of illustration only, with reference to the following figures in which:
Suppose the experimental data-set comprises N input data points. Each input data point is any quantity or vector, denoted X; for example, X can be a vector of coordinates, or of grey-level-related quantities if the data originate from images. X is called the feature vector of the input data point.
In the embodiment, the model has K independent parameters pj(j=1, . . . ,K) and is usually a function of X. The model is denoted as mod(X) given by:
mod(X)=p1.base1+p2.base2+ . . . +pK.baseK (1)
where basej (j=1, . . . K) are known functions of the feature vector, X and the symbol “.” represents multiplication. A determination of the model is thus equivalent to the task of identifying the K parameters p1, . . . , pK using the experimental data-set.
For each data point with feature vector Xi, a corresponding model value mod(Xi) can be calculated, where i=1, . . . , N. For inlier data points, Xi and mod(Xi) are related by equation (1), possibly with noise, whereas for outlier data points Xi and mod(Xi) are not related by equation (1).
The method proceeds by the steps shown in
In step 1 a number of subsets of the input data-set is generated. Each subset is composed of at least K′ of the N input data points (K′ being the number of data points by which the K parameters are uniquely determined). The number of subsets with K′ data points which can be formed in this way is CNK′=N!/(K′!.(N−K′)!)=(N.(N−1). . . . .(N−K′+1))/(K′.(K′−1). . . . .1). Note that in some applications all of these subsets may be generated, while in other applications only a portion of the total number of subsets may be generated. Denote the total number of subsets actually formed as M.
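For concreteness, the subset count is the standard binomial coefficient; the values N = 16 and K′ = 2 below are only illustrative, chosen to match the fissure-line example later in the text:

```python
from math import comb

N, K_prime = 16, 2           # illustrative values
M = comb(N, K_prime)         # C(N, K') = N! / (K'! (N - K')!)
print(M)                     # 120 subsets of 2 points from 16
```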
In step 2, for each of the subsets the parameters {p1, . . . , pK} are estimated, either by least-squares estimation or by solving the K′ linear equations. Thus each subset yields a respective point in the K-dimensional parameter space. Hence in the K-dimensional parameter space, M parameter points are obtained from the estimation, each denoted Pi=(p1(i), p2(i), . . . , pK(i))T, where T stands for transpose. Each subset of input data points thus has a corresponding parameter point in the parameter space.
In step 3, count the number of occurrences of each parameter point (a histogram), and plot the histogram in the parameter space to show, for each of the M parameter points, the number of subsets of input data points whose parameters lie close to that parameter point. For some applications the parameters may first need to be digitised by any digitisation method (for example, orientations of both 1.0° and 1.02° may be digitised to 1.0°).

As the parameters derived from each subset of input data points are distributed in the K-dimensional parameter space, a preferable way to obtain the histogram is to specify a neighborhood size for each coordinate of the parameter space. The neighborhood sizes can be specified by users or by any other means; one way to calculate them is as follows. For the j-th (j=1, 2, . . . , K) coordinate of all M parameter points, arrange the values in ascending order and, for simplicity of notation, still denote them pj(1), pj(2), . . . , pj(M). The difference between pj(t+1) and pj(t) (t=1, 2, . . . , M−1) is denoted dif(pj, t). The neighborhood size for the j-th coordinate can then be the median of dif(pj, t) over all t from 1 to M−1, or the average of dif(pj, t), or any percentile of the distribution of dif(pj, t) (the 100th percentile corresponds to the maximum of dif(pj, t), the 0th percentile to 0, and the 10th percentile to the neighborhood size such that the number of differences dif(pj, t) smaller than it is no more than 0.1*(M−1)).

Having decided the neighborhood size for each coordinate of the parameters, namely, the j-th coordinate's neighborhood size being Δj, the count for a given parameter point Pi (i=1, 2, . . . , M) in the parameter space is the number of parameter points P=(p1, p2, . . . , pK)T falling in the neighborhood
|p1−p1(i)|≦Δ1, |p2−p2(i)|≦Δ2, . . . , |pK−pK(i)|≦ΔK
This number of points is also called the number of occurrences of the subsets of input data with the parameters specified by the parameter point Pi.
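The neighborhood-size calculation just described can be sketched as follows; this is a minimal sketch using the median of the successive differences, one of the options mentioned (the function name is illustrative):

```python
import statistics

def neighborhood_sizes(param_points):
    """One neighborhood size per coordinate of the K-dimensional parameter space.

    param_points: M parameter points, each a tuple of K coordinates.
    For coordinate j, sort the M values, form the successive differences
    dif(pj, t), and take their median as the neighborhood size.
    """
    K = len(param_points[0])
    sizes = []
    for j in range(K):
        coords = sorted(p[j] for p in param_points)
        difs = [coords[t + 1] - coords[t] for t in range(len(coords) - 1)]
        sizes.append(statistics.median(difs))
    return sizes
```

Replacing the median with the average or with another percentile of the differences gives the other variants described above.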
In step 4, we find the peak of the histogram obtained in step 3. The K parameters corresponding to the peak of the histogram are called candidate peak parameters. If the number of occurrences at the histogram peak is greater than a predetermined threshold, e.g. 3, and there is only one peak, then we may take the peak as a good estimate of the true parameters of the model, and the candidate peak parameters are called peak parameters. Note that such a peak will generally be found when at least 3 of the subsets consist exclusively of inlier data points. This is bound to occur when there are at least K′+3 inliers (so that at least 3 subsets composed entirely of inliers exist), and thus the present method can cope even in the case that there are N−K′−3 outliers. If multiple peaks are exhibited in the histogram then, depending on the nature of the original problem, one way is to take the candidate peak parameters with the maximum number of occurrences as the peak parameters; alternatively, one can take the candidate peak parameters with the maximum integrated histogram count as the peak parameters.
In step 5 we determine which input data points are such that they follow equation (1) with parameters equal to or very close to the peak parameters. Such input points are judged to be inlier input data points. All other input points are judged to be outlier input points.
In step 6 we determine a best estimate for the parameters using only the inliers. This can be done by a conventional method, such as a least square fit of the inliers.
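Steps 1 to 6 can be sketched end to end for the simplest case, a one-parameter model mod(X) = p1.X with K = K′ = 1, so that each subset is a single data point. All names and the data are illustrative, not from the source:

```python
def peak_parameter_fit(data, delta=0.05, threshold=3):
    """Estimate p1 in y = p1 * x, removing outliers via the parameter-space peak."""
    # Steps 1-2: each subset (here, a single point) yields one parameter estimate.
    params = [y / x for x, y in data]
    # Step 3: histogram in parameter space - for each estimate, count the
    # estimates falling within the neighborhood size delta.
    counts = [sum(1 for q in params if abs(q - p) <= delta) for p in params]
    # Step 4: the peak parameter is the estimate with the highest count.
    if max(counts) < threshold:
        return None              # no peak above the predetermined threshold
    peak = params[counts.index(max(counts))]
    # Step 5: inliers are the data points consistent with the peak parameter.
    inliers = [(x, y) for x, y in data if abs(y / x - peak) <= delta]
    # Step 6: least-squares refit of p1 using only the inliers.
    return sum(x * y for x, y in inliers) / sum(x * x for x, _ in inliers)
```

With data = [(1, 2), (2, 4), (3, 6), (4, 8), (1, 9), (2, 1), (3, 20), (5, 0.5), (2, 11)], five of the nine points are outliers (more than half), yet the peak at p1 = 2 is found and the refit returns 2.0, illustrating the claim that more than 50% outliers can be tolerated.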
We now consider one specific example of the method, namely to derive the midsagittal plane (MSP) from magnetic resonance (MR) brain images. Determination of midsagittal plane of the human brain is
According to the patent application [5] entitled “Method and apparatus for determining symmetry in 2D and 3D images” (International application number PCT/SG 02/00006), around 16 fissure line segments are extracted from 16 parallel planes of the volume (axial slices). Due to pathology or the ubiquitous asymmetry present in axial slices, some of the extracted fissure line segments deviate greatly from the expected fissure and should be removed in order to obtain a precise plane equation of the MSP. There are two kinds of outliers to remove: orientation outliers and plane outliers. As all extracted fissure line segments come from different parallel axial slices and are supposed to form a plane (the MSP), they should have the same orientation. Those extracted fissure line segments deviating from the expected orientation are taken as orientation outliers and the rest as orientation inliers. Among the orientation inliers, some segments may deviate from an expected plane; these are judged as plane outliers and the rest as plane inliers. The plane equation of the MSP is calculated by a least-square-error fit of all the plane inliers. Both the expected orientation and the expected plane are derived from the invention described below.
For orientation outlier removal, the model is a constant, i.e.,
mod(X)=1
Reference [5] includes a detailed description of the orientation outlier removal, but reference [5] handles orientation outlier removal only by empirical trial rather than within a systematic framework, whereas the current invention aims to provide a solution for outlier removal for all kinds of models. For removal of plane outliers, the model is a three-dimensional plane, i.e.,
mod(X)=p1.x+p2.y+p3.z+p4
where (x, y, z) are the coordinates in the three-dimensional image volume. In order to facilitate histogramming, it is supposed that
p1²+p2²+p3²=1, p4>=0.
There are 3 independent parameters for the model. Each subset of data will contain two orientation inliers (i.e. 4 three-dimensional points in the three-dimensional image volume). Suppose there are N′ (N′<=16) orientation inliers. Refer to
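As an illustration of estimating the plane parameters from one subset of points under the constraint p1²+p2²+p3²=1, p4>=0, the following sketch uses an SVD-based least-squares fit; the SVD approach is an assumption for illustration, not the method prescribed by the source:

```python
import numpy as np

def plane_from_points(points):
    """Fit p1*x + p2*y + p3*z + p4 = 0 to 3-D points, normalised so that
    p1^2 + p2^2 + p3^2 = 1 and p4 >= 0."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the direction of least variance of the points,
    # i.e. the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                    # already a unit vector
    p4 = -normal @ centroid
    if p4 < 0:                         # enforce the sign convention p4 >= 0
        normal, p4 = -normal, -p4
    return (*normal, p4)
```

For the four points (0,0,1), (1,0,1), (0,1,1), (1,1,1), corresponding to two line segments lying in the plane z = 1, the fit returns (0, 0, -1, 1).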
Efficient outlier removal is a key factor in dealing with both normal and pathological images in medical imaging. In the case of extraction of the MSP, the method proposed by Liu et al [1] uses the robust standard deviation, but the inliers it retains may still have scattered orientations rather than the dominant orientation, which corresponds to the maximum peak of the histogram; the next example will illustrate this. The method proposed by Prima et al [4] uses least trimmed squares estimation, which can tackle at most 50% outliers, whereas the embodiment can yield an outlier removal rate of 81% (13 plane outliers removed out of 16 data points, leaving 3 plane inliers).
Note that in this example at least 3 strongly correlated subsets are available whenever at least K′+3 (K′=1) inliers are present, so that the occurrence count of the peak orientation is no less than 3. In other words, the present method can function satisfactorily even when there are N−K′−3 outliers.
In the next example, the difference between the embodiment of this invention and the result based on robust standard deviation as used by Liu et al [1] is illustrated.
Suppose the orientations of 11 extracted fissure line segments are 50°, 35°, 30°, 23°, 17°, 13°, 11°, 11°, 11°, 11°, 9° respectively. The median of the angles is 13°, and the robust standard deviation is 4.45°. According to [1], only three angles (50°, 35°, 30°) will be judged as outliers. The weighted estimation of orientation will be 15.8°, and the average of the inlier orientations is 13.25°. By the method disclosed in this invention, the peak parameter of the orientation is 11°, obtained by specifying the neighborhood size as 1°; this is the dominant orientation. Note that the number of outliers (7 out of 11 data points) is beyond the limit of existing outlier removal methods, so it is understandable that those methods are not able to remove all the outliers. The embodiment takes the 11° segments as the inliers from the histogram, giving 7 outliers.
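The comparison above can be checked numerically; a sketch, with the robust standard deviation of 4.45° taken as quoted in the text rather than recomputed:

```python
import statistics

angles = [50, 35, 30, 23, 17, 13, 11, 11, 11, 11, 9]

# Robust-standard-deviation approach [1]: discard points further than
# 3 sigma from the median.
med = statistics.median(angles)          # 13
sigma = 4.45                             # value quoted in the text
rsd_outliers = [a for a in angles if abs(a - med) > 3 * sigma]
print(rsd_outliers)                      # [50, 35, 30]
rsd_inlier_mean = statistics.mean(a for a in angles if a not in rsd_outliers)
print(rsd_inlier_mean)                   # 13.25

# Parameter-space peak approach: count occurrences within a
# neighborhood size of 1 degree.
counts = {a: sum(1 for b in angles if abs(b - a) <= 1) for a in angles}
peak = max(counts, key=counts.get)
inliers = [a for a in angles if abs(a - peak) <= 1]
print(peak, len(angles) - len(inliers))  # 11 7  (peak 11 degrees, 7 outliers)
```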
The disclosure of the following references is incorporated herein in its entirety:
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/SG02/00231 | 10/11/2002 | WO | | 5/2/2006