The present invention may be further understood from the following description of non-limitative examples, with reference to the accompanying drawings, in which:
a) is a prior art N×M data matrix of standardized expression levels, from a set of microarray samples;
b) is a prior art N×M visual matrix corresponding to the data matrix of
3(a) to 3(f) are portions of data matrices exemplifying several different types of known biclusters;
5(a) to 5(f) are representations of some geometries in 3D data space corresponding to certain bicluster patterns;
12(a) to 12(d) are synthetic dataset visual matrices used in a first performance analysis;
13(a) to 13(d) are synthetic dataset visual matrices used in a second performance analysis;
14(a) to 14(e) are synthetic dataset visual matrices used in a third performance analysis;
f) is a graph of multiplicative coefficients of the bicluster of
g) is a visual matrix of an additional bicluster extracted from the data set of
The present invention provides a new approach to dealing with data for analysis as well as, inter alia, a method, apparatus and computer software, etc. for analyzing data, to determine, represent and/or extract biclusters from within such data.
The embodiments below are exemplified using gene expression data, for which the present invention is well suited. However, it is also well-suited for representing and/or extracting biclusters from other data expressed in a matrix or data array, whether two dimensional or higher, such as in collaborative filtering, document indexing, information retrieval from large databases, and analysis of electoral data, consumer spending, foreign exchange rates, and stock market data, where there is a need to classify the data in terms of subsets of features or attributes. A higher dimensional data array can be converted to a 2D array before biclustering. For example, a 3D data matrix with rows as genes, columns as experiment conditions and depths as sub-conditions of each experiment condition can be flattened to become a 2D matrix with columns given by concatenation of depths.
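As a minimal sketch of this flattening (illustrative only; the array shape and variable names are hypothetical):

```python
import numpy as np

# Hypothetical 3D expression array: 100 genes x 6 conditions x 4 sub-conditions.
data_3d = np.random.default_rng(0).random((100, 6, 4))

# Flatten to 2D by concatenating the sub-conditions (depths) of each condition,
# giving a 100 x 24 matrix suitable for biclustering.
data_2d = data_3d.reshape(data_3d.shape[0], -1)
print(data_2d.shape)  # (100, 24)
```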
Although the six patterns referred to above look very different, they correspond to simple geometric objects; taking the four samples as coordinates [x, y, z, w] in the data space, the patterns correspond, respectively:
a) to a cluster at a single point with coordinate
[x, y, z, w]=[1.2, 1.2, 1.2, 1.2];
b) to a cluster defined by the line x=y=z=w;
c) to a cluster at a single point with coordinate
[x, y, z, w]=[1.2, 2.0, 1.5, 3.0];
d) to a cluster defined by the line x=y−1=z+1=w−2;
e) to a cluster defined by the line x=0.5y=2z=(2/3)w; and
f) to a cluster defined by the line x=0.5(y−0.1)=2(z−0.1)=(2/3)(w−0.2).
When only the samples selected by a bicluster are considered, each gene in the bicluster corresponds to a point lying on the relevant point or line above.
When the bicluster is embedded in a larger data matrix, the points or lines defined by the bicluster sweep out a hyperplane in a high dimensional data space. This can be visualized geometrically as follows. Assume a three-sample experiment with the samples denoted as x, y, z. If a bicluster covers samples x and z, then there exists a plane on which all data points in the bicluster would lie. The plane is defined by:
β1+β2x+β4z=0 (1)
where βi (i=1, 2, 4) are constants and there is no β3y term since β3=0 (the bicluster covers samples x and z for any value of y).
For gene expression data, the coordinates that would appear in Eq. (1) denote the experimental samples the bicluster covers, and the points on the plane denote the genes in the bicluster.
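This geometric view can be checked numerically. The sketch below is only an illustration (the additive pattern and coefficient values are hypothetical): it embeds a small bicluster covering samples x and z into a three-sample data space and confirms that every gene in it satisfies a plane equation of the form of Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 genes, 3 samples (x, y, z). The bicluster covers samples x and z with an
# additive pattern z = x + 2; sample y is unconstrained background.
x = rng.uniform(-5, 5, 20)
y = rng.uniform(-5, 5, 20)           # not part of the bicluster
z = x + 2.0
points = np.column_stack([x, y, z])  # each gene is a point in 3D data space

# Plane of Eq. (1): beta1 + beta2*x + beta4*z = 0 with beta3 = 0.
# For z = x + 2 one valid choice is beta1 = 2, beta2 = 1, beta4 = -1.
beta1, beta2, beta4 = 2.0, 1.0, -1.0
residual = beta1 + beta2 * points[:, 0] + beta4 * points[:, 2]
print(np.allclose(residual, 0.0))    # True: all genes lie on the plane
```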
In general, the different patterns of biclusters discussed above can be uniquely defined by specific geometry (lines or planes) in high dimensional data space. Using the above example of a three-sample experiment with the samples denoted as x, y, z and a bicluster covering samples x and z, three-dimensional (3D) geometric views can be generated for the different patterns, as follows:
a) shows a bicluster with constant values: represented by one of the lines that are parallel to the y-axis and lie in the plane x=z (the T-plane);
b) shows a bicluster with constant rows: represented by the T-plane;
c) shows a bicluster with constant columns: represented by one of the lines parallel to the y-axis;
d) shows a bicluster with coherent values with additive model: represented by one of the planes parallel to the T-plane;
e) shows a bicluster with coherent values with multiplicative model: represented by one of the planes that include the y-axis; and
f) shows a bicluster with coherent values on columns with linear model: represented by one of the planes that are parallel to the y-axis.
The gene expression data is transformed into geometric data in observed data space (feature space) by plotting the data for each row (gene) as a point, where each column (sample) represents an axis.
The Hough Transform (HT) is a powerful technique for line detection in noisy 2-D images and for plane detection in noisy 3-D signals and is widely used in pattern recognition. The HT is a mapping from observed data space into a parameter space. Each feature point in the data space generates “votes” for a set of parameter space points. An area in the parameter space containing many mapped points is suggestive of a hyperplane in the observed data space.
The HT is insensitive to noise for line detection in poor quality images. The present invention exploits this robustness of the HT against noise, which is especially useful in microarray data analysis since microarray data are often heavily corrupted by noise.
In a microarray experiment, the relative expression levels of N genes of a model organism are probed simultaneously by a single microarray. A series of M arrays probes the expression levels in M different samples (whether under M different experimental conditions or at M time points in the case of measuring temporal changes in expression, etc.).
{F1, F2, . . . , FM} denotes the coordinates of the M arrays (samples). For each gene j {j=1, 2, . . . , N}, the expression vector is given as [F1(j), F2(j), . . . , FM(j)].
In 2-D space, a line can be described by
y=mx+c (2)
where (m, c) are the slope and the intercept of the line with the y-axis, respectively. However, a problem with the (m, c) parameterization of lines is its inability to describe vertical lines, i.e., m→∞. Therefore Eq. (2) is only used for lines with |m|≦1. A second equation, which swaps the roles of the x and y axes,
x=m′y+c′ (3)
is used for lines with |m|>1.
With |m|≦1 and |m′|<1, Eqs. (2) and (3) describe all lines in 2-D space without overlap. A similar method can be used to describe hyperplanes in higher dimensional space. In the following description, this parameterization method is used for the hyperplane detection algorithm.
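This two-case parameterization might be sketched as follows (an illustration only; the function name is hypothetical):

```python
def line_parameters(p1, p2):
    """Return ('xy', m, c) for y = m*x + c when |m| <= 1 (Eq. (2)),
    or ('yx', m_, c_) for x = m_*y + c_ when |m| > 1 (Eq. (3))."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    if abs(dy) <= abs(dx):           # |m| <= 1: use Eq. (2)
        m = dy / dx
        return ('xy', m, y1 - m * x1)
    m_ = dx / dy                     # |m'| < 1: use Eq. (3); handles vertical lines
    return ('yx', m_, x1 - m_ * y1)

print(line_parameters((0, 0), (4, 1)))   # ('xy', 0.25, 0.0)
print(line_parameters((0, 0), (1, 4)))   # ('yx', 0.25, 0.0)
```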
Suppose that among all the observed data [F1(j), F2(j), . . . , FM(j)], {j=1, 2, . . . , N}, there exists a target hyperplane with equation
β1F1+β2F2+ . . . +βM−1FM−1+βM=FM (4)
where {F1, F2, . . . , FM} are coordinates of points in observed data space and {β1, β2, . . . , βM} are M parameters. A set Ω is defined to contain the indices of all genes which lie on this hyperplane. For each jεΩ,
β1F1(j)+β2F2(j)+ . . . +βM−1FM−1(j)+βM=FM(j).
The inversion of Eq. (4) indicates that all these points on the target surface satisfy
βM=FM(j)−β1F1(j)−β2F2(j)− . . . −βM−1FM−1(j) (5)
The initial ranges of the values {β1, β2, . . . , βM} are taken to be centered at {P1, P2, . . . , PM} and with half-lengths {L1, L2, . . . , LM}, i.e., βi ε [Pi−Li, Pi+Li]. These ranges can be divided into very small "array accumulators", so that each array accumulator determines a unique array of values {β1, β2, . . . , βM} to within an acceptable error.
According to the parameterization method of Equations (2) and (3) above, |βi|≦1, {i=1, . . . , M−1}. That means P1=P2= . . . =PM−1=0 and L1=L2= . . . =LM−1=1. By defining ψ=min(FM)−max(|F1|)− . . . −max(|FM−1|) and ξ=max(FM)+max(|F1|)+ . . . +max(|FM−1|), it is easy to see that ψ≦βM≦ξ. This then gives PM=(ψ+ξ)/2 and LM=(ξ−ψ)/2.
In the above case, the whole observed data space is searched for the hyperplane, so the computational cost is heavy. However, if there is some a priori knowledge about the target hyperplanes, or only part of the hyperplanes in the data are of interest, a narrower interval can be used for {β1, β2, . . . , βM}, such as 0≦β1, β2, . . . , βM−1≦1. This can reduce the computational cost.
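One way the initial centers {P1, . . . , PM} and half-lengths {L1, . . . , LM} could be set up from a data matrix is sketched below (illustrative; the function and variable names are assumptions):

```python
import numpy as np

def initial_parameter_ranges(F):
    """F: N x M expression matrix (rows are genes, columns are samples).
    Returns centers P and half-lengths L so that beta_i lies in [P_i - L_i, P_i + L_i]."""
    N, M = F.shape
    P = np.zeros(M)        # P_1 = ... = P_{M-1} = 0
    L = np.ones(M)         # L_1 = ... = L_{M-1} = 1
    # Bounds psi and xi for beta_M, as derived above.
    abs_max = np.abs(F[:, :-1]).max(axis=0).sum()
    psi = F[:, -1].min() - abs_max
    xi = F[:, -1].max() + abs_max
    P[-1] = 0.5 * (psi + xi)
    L[-1] = 0.5 * (xi - psi)
    return P, L

F = np.random.default_rng(1).uniform(-5, 5, size=(100, 6))
P, L = initial_parameter_ranges(F)
print(P[-1], L[-1])
```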
According to Eq. (5), one feature point in the observed data space is mapped into many points (e.g., a hyperplane) in parameter space. An accumulator in the parameter space containing many mapped points (e.g., the intersection of many hyperplanes) reveals a potential feature of interest.
A suitable HT-based plane detection method includes three steps, described below.
The HT does not use Eq. (5) directly.
Taking the initial ranges of the values {β1, β2, . . . , βM} to be centered at {P1, P2, . . . , PM} with half-lengths {L1, L2, . . . , LM} allows Eq. (5) to be rewritten as
W(j)(L1F1(j)X1+L2F2(j)X2+ . . . +LM−1FM−1(j)XM−1+LMXM−FM(j))=0 (6)
where W(j) is a weighting scale used to ensure that Σi=1Mai2(j)=1 [see below for the definition of ai(j)] (normalization).
Let Xi=βi/Li (i=1, . . . , M), ai(j)=W(j)LiFi(j) (i=1, . . . , M−1), aM(j)=W(j)LM and a0(j)=W(j)FM(j).
Eq. (6) can then be rewritten as
a1(j)X1+a2(j)X2+ . . . +aM(j)XM=a0(j) (7)
In fact, it is not necessary for the dimension of the parameter space X to be equal to the dimension of the observed signal, M. k can be used to replace M for the more general expression
a1(j)X1+a2(j)X2+ . . . +ak(j)Xk=a0(j) (8)
where Xi is the i-th dimension of the parameter space. Each ai(j) is a function of the observed feature points and is normalized such that Σi=1kai2(j)=1. The initial range of each Xi is the interval [Pi/Li−1, Pi/Li+1] centered at Pi/Li. Together, these ranges define a hypercube in the parameter space (X1, . . . , Xk).
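The coefficients ai(j) and a0(j) might be computed for every gene at once as sketched below, following Eqs. (6) to (8) as set out above (the helper name and exact interface are assumptions, not part of the original method):

```python
import numpy as np

def hyperplane_coefficients(F, L):
    """For each gene j (row of the N x M matrix F), return the coefficients
    a_1(j), ..., a_M(j) and the constant a_0(j) of the parameter-space hyperplane
    a_1*X_1 + ... + a_M*X_M = a_0, normalized so that sum_i a_i^2 = 1."""
    N, M = F.shape
    # Unnormalized coefficients: L_i*F_i(j) for i < M, and L_M for i = M.
    A = np.hstack([L[:-1] * F[:, :-1], np.full((N, 1), L[-1])])
    a0 = F[:, -1].astype(float)
    W = 1.0 / np.linalg.norm(A, axis=1)       # weighting scale W(j)
    return A * W[:, None], a0 * W

F = np.random.default_rng(2).uniform(-5, 5, size=(100, 6))
L = np.ones(6); L[-1] = 35.0                  # e.g. taken from the initial ranges
A, a0 = hyperplane_coefficients(F, L)
print(np.allclose((A ** 2).sum(axis=1), 1.0)) # True: normalization holds
```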
Once the parameter space has been formulated, it is divided up evenly into equal-size blocks or hypercubes in each direction, with each hypercube having an associated accumulator. The number of hypercubes depends on the required resolution, as discussed below.
For each accumulator, every point in the observed signals needs to vote. Each accumulator corresponds to a set of ranges of values (X1, X2, . . . , Xk). For each point j in the observed data, if Eq. (8),
a1(j)X1+a2(j)X2+ . . . +ak(j)Xk=a0(j),
can be satisfied for values (X1, X2, . . . , Xk) lying in this accumulator, the point gives a vote to this accumulator. An accumulator receiving more votes than a threshold reveals a corresponding hyperplane in the observed data space.
So, to determine whether an accumulator receives a vote from a point j in the observed signals, it is only necessary to determine whether the hypercube (accumulator) intersects with the particular hyperplane given by Eq. (8).
To speed up this test relative to the standard HT, a simpler conservative test can be used to check whether the hyperplane intersects the hypercube's circumscribing hypersphere. Assume the center of the accumulator is at [C1, . . . , Ck] and that r is the radius of the hypersphere; the testing criterion is then whether
|a1(j)C1+a2(j)C2+ . . . +ak(j)Ck−a0(j)|≦r (9)
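A short sketch of this conservative test of Eq. (9) (illustrative only):

```python
import numpy as np

def hyperplane_hits_hypersphere(a, a0, center, radius):
    """Test of Eq. (9): does the hyperplane a_1*X_1 + ... + a_k*X_k = a_0
    (with sum_i a_i^2 = 1) pass within `radius` of `center`?  Because the
    normal vector has unit length, |a . center - a0| is the distance from
    the center to the hyperplane."""
    return abs(np.dot(a, center) - a0) <= radius

a, a0 = np.array([0.6, 0.8]), 1.0
print(hyperplane_hits_hypersphere(a, a0, np.array([0.0, 0.0]), 0.5))  # False (distance 1.0)
print(hyperplane_hits_hypersphere(a, a0, np.array([1.0, 1.0]), 0.5))  # True (distance 0.4)
```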
If a vote count threshold is set, those accumulators having a vote count higher than the threshold are investigated further to determine if they represent hyperplanes in the observed data space. This is described later.
However, whilst the standard HT can be used in the present invention to detect planes, it has large computational complexity and storage requirements for more than 3 dimensions, making it less useful in many circumstances. As such, the Fast Hough Transform (FHT) (described in Li, H., Lavin, M. A., and Le Master, R. J. (1986) Fast Hough Transform: a hierarchical approach. Computer Vision, Graphics, and Image Processing, 36, 139-161) may be preferred. This algorithm is used on the gene expression data as a plane detection algorithm. It gives considerable speedup and requires less storage than the conventional HT. The FHT has a very simple and efficient high-dimensional extension. Furthermore, the FHT uses a coarse-to-fine mechanism which provides good resistance to noise.
The three-step HT-based plane detection method described above is now elaborated using the FHT.
K-Tree Representation
The HT-based plane detection method uses a k-tree representation of the parameter space, as follows.
For the FHT, the parameter space is represented as a nested hierarchy of hypercubes. A k-tree is associated with this representation. The root node of the tree corresponds to a hypercube covering the whole parameter space of the formulated hyperplane (step 110), centered at vector C0 with side-length S0. Each node of the tree has 2^k children, arising when that node's hypercube is halved along each of its k dimensions. Each child has a child index, a vector b=[b1, . . . , bk], where each bi is −1 or 1.
The child index is interpreted as follows: if a node at level l of the tree has center Cl, then the center of its child node with index [b1, . . . , bk] is
Cl+1=Cl+(Sl+1/2)[b1, . . . , bk] (10)
where Sl+1 is the side length of the child at level l+1 and Sl+1=Sl/2.
Since this embodiment uses a coarse-to-fine mechanism, for each accumulator at the different levels a test is made using Eq. (9). For an accumulator at level l, the radius r of its circumscribing hypersphere equals (√k)Sl/2.
Based on the K-tree structure, an incremental formula can be used to calculate the left part of Eq. (9). Dividing the left part of Eq. (9) by Sl gives a normalized distance Rl(j), which can be computed incrementally for a child node at level l+1 with child index [b1, . . . , bk] as follows:
Rl+1(j)=2Rl(j)+(1/2)(b1a1(j)+b2a2(j)+ . . . +bkak(j)) (11)
At the top level,
R0(j)=(a1(j)C1,0+a2(j)C2,0+ . . . +ak(j)Ck,0−a0(j))/S0 (12)
where [C1,0, . . . , Ck,0]=C0 is the center of the root hypercube.
Test (9) can now be expressed as:
for the gene j and a child node with child index [b1, . . . , bk] at level l, if
|Rl(j)|≦(√k)/2, (13)
(i.e., the hyperplane intersects the hypercube's circumscribing hypersphere), then gene j will generate a vote for this child node.
According to the above analysis, the FHT is a mapping from an observed data space into a parameter space. Each feature point in the data space generates "votes" for a set of sub-areas (hypercubes) in the parameter space. A sub-area in the parameter space that receives many votes reveals a feature of interest. The FHT algorithm recursively divides the parameter space into hypercubes from low to high resolutions, performing the subdivision and the subsequent "vote counting" only in hypercubes whose votes exceed a selected threshold. A hypercube with acceptable resolution and with votes exceeding the threshold indicates a detected hyperplane in the observed data.
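The coarse-to-fine vote-and-subdivide loop can be pictured with the following sketch. It is an illustrative implementation written against Eqs. (10) to (13) as set out above; the function name, its parameters and the handling of accepted nodes are assumptions rather than part of the original method, and the direct enumeration of 2^k children means it is only practical for small k.

```python
import numpy as np
from itertools import product

def fht_detect(A, a0, k, S0, C0, T=4, q=5):
    """Recursive Fast Hough Transform sketch.
    A: N x k matrix of normalized coefficients a_i(j) (each row has unit norm).
    a0: length-N vector of constants a_0(j).
    C0, S0: center and side length of the root hypercube in parameter space.
    T: minimum vote count; q: finest subdivision level.
    Returns a list of (center, side, gene_indices) for accepted leaf nodes."""
    A, a0, C0 = np.asarray(A, float), np.asarray(a0, float), np.asarray(C0, float)
    R0 = (A @ C0 - a0) / S0                       # normalized distances, Eq. (12)
    half = np.sqrt(k) / 2.0
    children = [np.array(b, float) for b in product((-1.0, 1.0), repeat=k)]
    results = []

    def recurse(level, center, side, R, genes):
        votes = np.abs(R) <= half                 # vote test, Eq. (13)
        hit = genes[votes]
        if hit.size < T:
            return                                # too few votes: prune this node
        if level == q:
            results.append((center, side, hit))   # accepted node at finest resolution
            return
        for b in children:
            child_side = side / 2.0
            child_center = center + b * child_side / 2.0        # Eq. (10)
            R_child = 2.0 * R[votes] + 0.5 * (A[hit] @ b)        # Eq. (11)
            recurse(level + 1, child_center, child_side, R_child, hit)

    recurse(0, C0, float(S0), R0, np.arange(A.shape[0]))
    return results
```

Combined with the coefficient sketch above, a call such as fht_detect(A, a0, k=M, S0=2.0, C0=P/L, T=50, q=4) would return the candidate accumulators, since the initial range of each Xi is an interval of length 2 centered at Pi/Li (the parameter values here are only examples).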
Step 202—Determine Parameters, Including:
The minimum vote count “T” denotes the minimum number of genes in a bicluster. It depends on the experiment objective and may be selected by the user. For example, the minimum may be 4.
The desired finest resolution "q" depends on the significance of the noise in the data. For a perfect bicluster (a perfect constant bicluster means that all values are equal), "q" can be arbitrarily large, that is, an arbitrarily fine resolution can be used. However, in a real case the perfect bicluster is often useless, and "q" reflects how much noise is permitted in the detected biclusters. If the detected biclusters are to be closer to the perfect bicluster, q should be larger.
In many cases, one has no knowledge about the noise in the data. It is acceptable to use a trial-and-error method to select q. The FHT uses a coarse-to-fine mechanism. At a coarse resolution, there are few accumulator cells and the number of hyperplanes detected is small. At a finer resolution, there are more accumulator cells. However, at the finer resolution the accumulator cells are smaller and it is more difficult for a feature point to generate a hit; thus many accumulators cannot gain enough votes (exceeding the threshold) to confirm the existence of the corresponding hyperplane. So, if too large a value is selected for q, fewer hyperplanes will be detected. Hence, q can be tested starting from a small value and increased gradually until a sufficient number of hyperplanes is detected.
The initial bound of each Xi can readily be determined, since it is already known that {β1, β2, . . . , βM} are taken to be centered at {P1, P2, . . . , PM} with half-lengths {L1, L2, . . . , LM}, and that Xi=βi/Li (i=1, . . . , M) (the hyperplane variables).
Step 204 Transform gene expression data into the parameter space using the transformation derived in part (b) of step 202.
Step 206 Perform the voting procedure for each node, having first computed the normalized distance from the hyperplane to each node. Initially, the only node is the root node. For each gene, for each node, if Eq. (13) is satisfied, add one to the vote count of the relevant node.
Step 208 Determine if the vote count for any current level node is larger than the threshold T and the resolution is coarser than q.
If the determination at step 208 is positive, then for each relevant node:
Step 210 Subdivide the node into the K-tree child nodes and revert to step 206.
This vote-and-subdivide mechanism is performed for each new node until no new node appears, at which point the determination at step 208 is negative and the method proceeds to:
Step 212 Determine if there is more than 1 node with the finest resolution q and a vote count larger than T.
If the determination at step 212 is negative, then proceed with the node with the best resolution, and with the highest vote count if there is more than one node with the same best resolution:
Step 214 Record that node. This is the most probable solution for the existence of a hyperplane and hence a bicluster in the observed data.
If the determination at step 212 is positive:
Step 216 When there are several nodes with resolution equal to q and vote counts larger than T, collect the hyperplanes in the observed data space associated with these nodes and which have the same genes into bundles. In the FHT, a node (accumulator) denotes a unique hyperplane in the observed data space.
Step 218 For each bundle of hyperplanes in the observed data space, check for biclusters by checking the common samples (variables) and comparing the hyperplanes with the models corresponding to the different types of biclusters. Hyperplanes in a bundle that are not consistent with any of the bicluster patterns described above are discarded. (A code sketch of the bundling of step 216 is given after step 220 below.)
Step 220 Output, as biclusters, those bundles for which biclusters are found and not discarded.
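A minimal sketch of the bundling of step 216 follows (illustrative only; the node format matches the FHT sketch given earlier, and the model comparison of step 218 is indicated only by a comment):

```python
from collections import defaultdict

def bundle_nodes(nodes):
    """nodes: iterable of (center, side, gene_indices) accepted by the FHT step.
    Nodes (hyperplanes) supported by exactly the same set of genes are collected
    into one bundle, as in step 216."""
    bundles = defaultdict(list)
    for center, side, genes in nodes:
        bundles[frozenset(int(g) for g in genes)].append((center, side))
    return dict(bundles)

# Hypothetical nodes: the first two share the same genes, so they form one bundle.
nodes = [([0.1, 0.2], 0.25, [3, 7, 9, 11]),
         ([0.4, 0.1], 0.25, [3, 7, 9, 11]),
         ([0.9, 0.9], 0.25, [1, 2, 5, 8])]
for genes, planes in bundle_nodes(nodes).items():
    # Step 218 would compare the planes in each bundle against the bicluster
    # models (constant, additive, multiplicative, ...) and discard mismatches.
    print(sorted(genes), len(planes))
```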
This specific embodiment uses an approach based on the method described above.
Microarray data sets can now contain thousands of genes and hundreds of samples. Although the biclustering algorithm above can process moderate-size data sets, extra effort is necessary for very large data sets.
A divide-and-stitch mechanism may be used, with the following steps (a code sketch follows the list):
Step 302 pre-cluster the samples;
Step 304 divide the samples into several non-overlapping blocks, each block including data from all genes but for different samples;
Step 306 perform bicluster extraction for each block, for instance as described above with reference to
Step 308 for a detected bicluster in a block, examine whether samples from other blocks can be incorporated into it;
Step 310 delete any duplicated biclusters; and
Step 312 output biclusters from the whole data set.
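The list of steps above might be sketched as follows. This is an illustration under stated assumptions: extract_biclusters stands in for the bicluster extraction of the earlier steps and is supplied by the caller, and the consistency test of step 308 is only a placeholder.

```python
import numpy as np

def samples_consistent(F, genes, samples, candidate):
    """Placeholder for step 308: would adding column `candidate` keep the
    sub-matrix F[genes][:, samples + [candidate]] consistent with the detected
    bicluster pattern?  A real test would re-check the hyperplane model."""
    return False

def divide_and_stitch(F, n_blocks, extract_biclusters):
    """Illustrative driver for steps 304 to 312.
    F: N x M expression matrix.  extract_biclusters(sub_matrix) is assumed to
    return a list of (gene_indices, local_sample_indices) pairs."""
    N, M = F.shape
    # Step 304: divide the samples into non-overlapping blocks.  Step 302
    # (pre-clustering the samples) could reorder the columns before this split.
    blocks = np.array_split(np.arange(M), n_blocks)
    found = set()
    for cols in blocks:
        for genes, local_cols in extract_biclusters(F[:, cols]):        # step 306
            samples = [int(c) for c in cols[np.asarray(local_cols)]]
            for s in range(M):                                          # step 308
                if s not in samples and samples_consistent(F, genes, samples, s):
                    samples.append(s)
            found.add((frozenset(int(g) for g in genes), frozenset(samples)))
    return found                                           # steps 310 and 312

# Usage with a trivial stand-in extractor (which finds nothing):
F = np.random.default_rng(4).uniform(-5, 5, size=(100, 30))
print(divide_and_stitch(F, n_blocks=2, extract_biclusters=lambda sub: []))
```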
In this divide-and-stitch mechanism, if all samples in a possible bicluster are assigned to the same block, the performance of this divide-and-stitch method is the same as extracting the biclusters from the whole data set globally. However, the act of dividing the data into different blocks can actually destroy some of the biclusters (by putting different samples from the same bicluster into different blocks). This is a possibility for those biclusters containing only (2×the number of blocks) samples or fewer; with any more than that number, at least one block will contain the minimum of two samples needed to be recognized as a bicluster. Step 302 is therefore a useful option for reducing even this possibility. This preprocessing step clusters the samples in the data set using a hierarchical clustering method before the division step (step 304) (for instance the Eisen et al. method mentioned in the earlier part of this specification). Since only the samples are clustered into different blocks, such preprocessing can be very simple and fast.
An alternative to such pre-processing is to conduct bicluster extraction several times, each time the blocks containing different combinations of samples, then compare the results. Alternatively, or additionally, the data can be divided up into overlapping blocks.
To evaluate the effectiveness of the divide-and-stitch mechanism, it was tested using simulated data. A synthetic data set of size 100 by 30 with different types of biclusters embedded was provided. The bicluster extraction method described above was then applied to this data set using the divide-and-stitch mechanism.
In a real data set, the situation is more complex. It is inevitable that this divide-and-stitch mechanism will degrade the performance of the algorithm. To reduce this problem, it is desirable to be able to analyze larger data sets without having to resort to divide and stitch. One solution is to use more powerful and faster algorithms for high-dimensional hyperplane detection, or to use other variants of the HT, such as the probabilistic Hough Transform (Kiryati, N., Eldar, Y. and Bruckstein, A. M. (1991) A Probabilistic Hough Transform, Pattern Recognition, 24, 303-316) and the randomized Hough Transform (Xu, L. and Oja, E. (1993) Randomized Hough Transform (RHT): Basic Mechanisms, Algorithms, and Computational Complexities, CVGIP: Image Understanding, 57, 131-154).
The present invention can also use special hardware or parallel computer algorithms. Since the HT is a very popular algorithm and has been widely used in astronomy, medical imaging and pattern recognition, there are many implementations available. Further, a parallel algorithm is also discussed in Li, H., Lavin, M. A., and Le Master, R. J. (1986) Fast Hough Transform: a hierarchical approach. Computer Vision, Graphics, and Image Processing, 36, 139-161.
Results are presented below, based on performing the above-described method in various experiments.
Synthetic Data—Bicluster Corrupted by Noise of Same Variance for All Samples
A bicluster pattern of 50 rows by 8 columns was embedded into a data set of size 100 by 30. The background of the data set, that is the whole data set except where the bicluster was embedded, was generated from the uniform distribution U(−5, 5). The embedded bicluster pattern contained constant columns and the value of each column was random and produced from a uniform distribution U(−5, 5). Gaussian noise with variance from 0.1 to 1 was generated to degrade the bicluster.
In all these experiments, all the columns (samples) where the embedded pattern was actually located were correctly found. The results of the performance test, in terms of the number of correctly detected genes as a function of the noise variance, are given in
The results of a specific experiment with the bicluster data corrupted by Gaussian noise with variance equal to 0.8 are shown in
The algorithm missed some of the genes, probably due to performance limitations. For a dataset degraded by noise, it is possible that at some points (especially points where the magnitude of the original signal is small), the magnitude of the noise is large compared to the magnitude of the original signal.
To evaluate the robustness of the algorithm, the algorithm was tested on data sets sampled randomly, rather than on the whole data set.
In a first experiment, 50 gene expression profiles (rows) were randomly sampled from the data set in
In the second experiment, 15 samples were randomly selected from the data set in
These results show that the invention, particularly as exemplified, works well for noisy data.
Synthetic Data—Bicluster Corrupted by Noise of Different Variance Among Samples
For the above experiment, the noise in the bicluster had the same variance for all samples. In practical applications, there may be different variances for different samples. Fortunately, the exemplified algorithm can deal with this easily. In the derivation of the algorithm described above, there is a normalization, ai(j), related to the values of {L1, L2, . . . , LM}. By selecting different values for these variables, the noise can be rescaled to the same variance. For example, suppose the variance of the noise in {F1, F3, . . . , FM} is 1 while the variance of the noise in F2 is 2. Then using (√2)L2=L1=L3= . . . =LM−1 actually detects the biclusters in {F1, F2/√2, F3, . . . , FM}, where the noise variance is equal for all samples.
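With hypothetical noise variances, this rescaling could be written as follows (a sketch only):

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.uniform(-5, 5, size=(100, 6))
noise_std = np.array([1.0, np.sqrt(2.0), 1.0, 1.0, 1.0, 1.0])  # sample 2 is noisier
F_noisy = F + rng.normal(0.0, noise_std, size=F.shape)

# Equivalent to choosing sqrt(2)*L2 = L1 = L3 = ... : dividing each column by its
# noise standard deviation makes the noise variance the same for every sample
# before the hyperplane detection is run.
F_rescaled = F_noisy / noise_std
```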
A bicluster pattern of 50 rows by 8 columns was embedded into a data set of size 100 by 30. The background was again generated from the uniform distribution U(−5, 5). The pattern contained constant columns and the value of each column was random and produced from a uniform distribution U(−5, 5). Gaussian noise with variance 0.6 was generated to degrade the first seven samples in the bicluster and Gaussian noise with variance 0.9 was generated to degrade the remaining samples. The experiment was performed on the data set as a whole with T=50. The results of the experiment are shown in
a) is a 100 by 30 data matrix.
Synthetic Data with Multiple Overlapping Biclusters
To show that the algorithm can successfully detect biclusters of different types and to examine the ability of the algorithm to find multiple biclusters simultaneously, especially when overlap between biclusters is present, four biclusters were embedded into a noisy background generated by a uniform distribution U(−5, 5). Gaussian noise with variance equal to 0.3 was generated to degrade the bicluster data.
The four embedded biclusters had the following sizes: 40 by 7 for Bicluster 1, 25 by 10 for Bicluster 2, 35 by 8 for Bicluster 3, and 40 by 8 for Bicluster 4.
In this experiment, bicluster 1 was found with all 7 samples and 40 rows, bicluster 2 was found with 10 samples and 25 rows, bicluster 3 was found with 8 samples and 35 rows, and bicluster 4 was found with 8 samples and 40 rows. All four biclusters were perfectly detected. An unexpected outcome was that a new bicluster with 3 samples and 60 rows was also detected.
In this experiment, the algorithm was applied to the lymphoma data set (Alizadeh, A. A. et al. (2000) Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature, 403, 503-511). This data set is characterized by well defined expression patterns differentiating three types of lymphoma: diffuse large B-cell lymphoma (DLBCL), chronic lymphocytic leukaemia (CLL) and follicular lymphoma (FL). The data set consists of expression data from 128 lymphochip microarrays for 4026 genes in 96 normal and malignant lymphocyte samples. The missing values in the data set are imputed using a project-onto-convex-sets algorithm (Gan, X., Liew, A. W. C., and Yan, H. (2006) Microarray Missing Data Imputation based on a Set Theoretic Framework and Biological Knowledge. Nucleic Acids Res., 34(5): 1608-1619). No other preprocessing was necessary.
The results of the experiment are shown in
For this data set, existing hierarchical clustering, for example by the Eisen et al. method, results in the germinal center tissues being buried within the DLBCL class (Alizadeh et al. 2000). However, using the present biclustering algorithm, the germinal center (GC) tissues are biclustered with both the DLBCL and FL tissues, as shown in
The genes in these two biclusters can be annotated using the Duke Integrated Genomics Database (Delong, M. et al., (2005) DIG—a system for gene annotation and functional discovery. Bioinformatics, 21(13), 2957-2959.). Gene Ontology (GO) terms (The Gene Ontology Consortium. (2000) Gene ontology tool for the unification of biology. Nat. Genet., 25, 25-29) for cellular components may be identified and the distribution of genes is obtained by counting the number of genes in each bicluster for each GO category. The 10 main relevant categories are: cell, cellular component unknown, envelope, extracellular matrix, extracellular region, membrane-enclosed lumen, organelle, protein complex, synapse, and virion. In bicluster 1, 48.32% of the genes are not GO annotated, 49.11% of the genes have GO related to cell, 33.22% of the genes have GO related to organelle. In bicluster 2, 53.86% of the genes are not GO annotated, 41.39% of genes have GO related to cell, 19.21% of the genes have GO related to organelle. Note that some genes can belong to more than one cellular component. Hence, the percentage numbers in a bicluster can sum up to more than 100%. Moreover, 3.96% of the genes in bicluster 1 have GO related to extracellular matrix while none of the genes in bicluster 2 are related to this component; and 4.36% of the genes in bicluster 2 have GO related to extracellular region while none of the genes in bicluster 1 are related to this component. In addition, the 27 genes with GO related to protein complex exist only in bicluster 1 and in the intersection of the two biclusters.
In this experiment, the algorithm was applied to the breast cancer data set (van 't Veer, L. J. et al. (2002) Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415, 530-536.). This is a rather complicated data set. The original data set includes ˜20000 genes and 98 breast tumors. Van 't Veer et al. (2002) identified ˜5000 genes as significantly regulated across the group of samples, and these genes were used for subsequent clustering analysis. Using existing hierarchical methods, the 98 breast tumors can be classified into two clusters: in the first cluster, 34% of the sporadic patients were from the group who developed distant metastases within 5 years; in the second cluster, 70% of the sporadic patients had a progressive disease.
The genes in these two biclusters are then also annotated by using the Duke Integrated Genomics Database. The 11 main relevant GO categories are: biological process unknown, cellular process, development, growth, interaction between organisms, physiological process, pigmentation, regulation of biological process, reproduction, response to stimulus, and viral life cycle. In bicluster 1, 41.66% of the genes are not GO annotated, 51.14% of the genes have GO related to cellular process, 49.26% of the genes have GO related to physiological process, and 13.03% of the genes have GO related to regulation of biological process. In bicluster 2, 56.54% of the genes are not GO annotated, 37.69% of the genes have GO related to cellular process, 35.05% of the genes have GO related to physiological process, and 11.06% of the genes have GO related to regulation of biological process. In addition, a major difference between the two biclusters is that 12.24% of the genes (124 genes) in bicluster 1 have GO related to response to stimulus while only 0.93% (6 genes) of the genes in bicluster 2 are related to this process. From these results, it can be seen that biclustering can produce biologically relevant groupings of genes.
Although the above discussion is exemplified by way of gene expression data, the invention is not limited thereto, as has been mentioned before. For example, an array of reaction products which vary according to pressure and temperature levels might show subsets of temperatures that behave in a pattern for certain pressures. Another example would be an array of the levels of various, apparently free-floating, currencies against the US dollar, measured over a period of time (for instance on the first day of every month for 10 years), which might lead to a subset of those currencies that have consistent up-and-down changes in a subset of months.
Embodiments of the invention may be implemented, inter alia, using just hardware, with circuits dedicated for the required processing, by software or by a combination of hardware and software modules, even if the hardware is just that of a conventional computer system.
A module, and in particular the module's functionality, can be implemented in either hardware or software. In the software sense, a module is a process, program, or portion thereof, that usually performs a particular function or related functions. In the hardware sense, a module is a functional hardware unit designed for use with other components or modules. For example, a module may be implemented using discrete electronic components, or it can form a portion of an entire electronic circuit such as an application specific integrated circuit (ASIC). Numerous other possibilities exist.
The computer software involves a set of programmed logic instructions that are able to be interpreted by a processor, such as a CPU, for instructing the computer system 400 to perform predetermined functions specified by those instructions. The computer software can be an expression recorded in any language, code or notation, comprising a set of instructions intended to cause a compatible information processing system to perform particular functions, either directly or after conversion to another language, code or notation.
The computer software is programmed by a computer program comprising statements in an appropriate computer language. The computer program is processed using a compiler into computer software that has a binary format suitable for execution by the operating system. The computer software is programmed in a manner that involves various software components, or code means, that perform particular steps in the process of the described techniques.
The components of the computer system 400 include: the computer 402; input and output devices such as a keyboard 404, a mouse 406 and an external memory device 408 (e.g. one or more of a floppy disc drive, a CD drive, a DVD drive and a USB flash-memory drive, etc.); a display 410; network connections for connecting to a network such as the Internet 412 and/or a LAN; and possibly a microarray analyzer 414 (or other mechanism for inputting gene expression data, or other data to be analyzed, standardized or otherwise processed). The computer 402 includes: a processor 422, a first memory such as a ROM 424, a second memory such as a RAM 426, a network interface 428 for connecting to external networks, an input/output (I/O) interface 430 for connecting to the input and output devices, a video interface 432 for connecting to the display, a storage device such as a hard disc 434, and a bus 436.
The processor 422 executes the operating system and the computer software executing under the operating system. The random access memory (RAM) 426, the read-only memory (ROM) 424 and the hard disc 434 are used under the direction of the processor 422. The video interface 432 is connected to the display 410 and provides video signals for display on the display 410. User input to operate the computer 402 is provided from the keyboard 404 and the mouse 406. The internal storage device is exemplified here by the hard disc 434 but can include any other suitable storage medium.
Each of the components of the computer 402 is connected to the bus 436 that includes data, address, and control buses, to allow these components to communicate with each other.
The computer system 400 can be connected to one or more other similar computers via the Internet, LANs or other networks.
The computer software program may be provided as a computer program product. During normal use, the program may be stored on the hard disc 434. However, the computer software program may be provided recorded on a portable storage medium, e.g. a CD-ROM or a flash memory read by the external memory device 408. Alternatively, the computer software can be accessed directly from the network 412. As such, the software may not even be stored on the computer itself.
In either case, a user can interact with the computer system 400 using the keyboard 404 and the mouse 406 to operate the programmed computer software executing on the computer 402. Alternatively, the process can be carried out automatically on the input of appropriate data.
The computer system 400 can analyze the microarray data from microarrays directly, or analyze non-standardized expression level data or standardized expression level data, as described above, to determine, represent and/or extract biclusters from such data. The means for inputting the data can be the microarray analyzer 414, the network interface 428, the external memory device 408, the keyboard 404 or any other suitable approach.
The computer system 400 is described for illustrative purposes: other configurations or types of computer systems can be equally well used to implement the described techniques. The foregoing is only an example of a particular type of computer system suitable for implementing the described techniques.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. Furthermore, certain terminology has been used for the purposes of descriptive clarity, and not to limit the present invention. The embodiments and preferred features described above should be considered exemplary, with the invention being defined by the appended claims and/or as described.