The field of the invention relates to methods and computer program products for the control of assays for the analysis of nucleic acid within DNA samples.
A fundamental goal of genomic research is the application of basic research into the sequence and functioning of the genome to improve healthcare and disease management. The application of novel disease or disease treatment markers to clinical and/or diagnostic settings often requires the adaptation of suitable research techniques to large scale high throughput formats. Such techniques include the use of large scale sequencing, mRNA analysis and in particular nucleic acid microarrays. DNA microarrays are one of the most popular technologies in molecular biology today. They are routinely used for the parallel observation of the mRNA expression of thousands of genes and have enabled the development of novel means of marker identification, tissue classification, and discovery of new tissue subtypes. Recently it has been shown that microarrays can also be used to detect DNA methylation and that results are comparable to mRNA expression analysis, see for example P. Adorján et al. Tumour class prediction and discovery by microarray-based DNA methylation analysis. Nucleic Acids Research, 30(5), 2002, and T. Golub et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286:531-537, 1999.
Despite the popularity of microarray technology, there remain serious problems regarding measurement accuracy and reproducibility. Considerable effort has been put into the understanding and correction of effects such as background noise, signal noise on a slide and different dye efficiencies, see for example C. S. Brown et al. Image metrics in the statistical analysis of DNA microarray data. Proc Natl Acad Sci USA, 98(16):8944-8949, July 2001, and G. C. Tseng et al. Issues in cDNA microarray analysis: Quality filtering, channel normalization, models of variations and assessment of gene effects. Nucleic Acids Research, 29(12):2549-2557, 2001. However, with the exception of overall intensity normalization (A. Zien et al. Centralization: A new method for the normalization of gene expression data. Proc. ISMB '01/Bioinformatics, 17(6):323-331, 2001), it is not clear how to handle variations between single slides and systematic alterations between slide batches. Between-slide variations are particularly problematic because it is difficult to explicitly model the numerous different process factors which may distort the measurements. Some examples are the concentration and amount of spotted probe during array fabrication, the amount of labeled target added to the slide and the general conditions during hybridization. Other common but often neglected problems are handling errors such as the accidental exchange of different probes during array fabrication. These effects can randomly affect single slides or whole slide batches. The latter is especially dangerous because it introduces a systematic error and can lead to false biological conclusions.
There are several ways to reduce between-slide variance and systematic errors. Removing obvious outlier chips based on visual inspection is an easy and effective way to increase experimental robustness. A more costly alternative is to perform repeated chip experiments for every single biological sample and obtain a robust estimate for the average signal. With or without chip repetitions, a randomized block design can further increase the certainty of biological findings. Unfortunately, there are several problems with this approach. Outliers cannot always be detected visually and it is not feasible to make enough chip repetitions to obtain a fully randomized block design for all potentially important process parameters. However, when experiments are standardized enough, process dependent alterations are relatively rare events. Therefore, instead of reducing these effects by repetitions, one should rather detect problematic slides or slide batches and repeat only those. This can only be achieved by controlling process stability.
Maintaining and controlling data quality is a key problem in high throughput analysis systems. The data quality is often hampered by experiment-to-experiment variability introduced by environmental conditions that may be difficult to control.
Examples of such variables include variability in sample preparation and uncontrollable reaction conditions. For example, in the case of microarray analysis, systematic changes in experimental conditions across multiple chips can seriously affect quality and even lead to false biological conclusions. Traditionally the influence of these effects has been minimized by expensive repeated measurements, because a detailed understanding of all process relevant parameters appears to be an unreasonable burden.
Process stability control is well known in many areas of industrial production where multivariate statistical process control (MVSPC) is used routinely to detect significant deviations from normal working conditions. The major tool of MVSPC is the T2 control chart, which is a multivariate generalization of the popular univariate Shewhart control procedure.
See for example U.S. Pat. No. 5,693,440. In this application Hotelling's T2 in combination with a simple PCA was used as a means of process verification in photographic processes. Although this application demonstrates the use of simple principal component analysis, the benefits of this are not obvious as the data set was not of a high dimensionality as is often encountered in biotechnological assays such as sequencing and microarray analysis. Furthermore, this application recommends the application of PCA on the “cleared” reference data set, which may hide variations caused by the data set to be monitored.
The application of MVSPC for statistical quality control of microarray and high throughput sequencing experiments is not straightforward. This is because most of the relevant process parameters of a microarray experiment cannot be measured routinely in a high throughput environment.
5-methylcytosine is the most frequent covalent base modification of the DNA of eukaryotic cells. Cytosine methylation only occurs in the context of CpG dinucleotides. It plays a role, for example, in the regulation of transcription, in genetic imprinting, and in tumorigenesis. Methylation is a particularly relevant layer of genomic information because it plays an important role in expression regulation (K. D. Robertson et al. DNA methylation in health and disease. Nature Reviews Genetics, 1:11-19, 2000). Methylation analysis therefore has the same potential applications as mRNA expression analysis or proteomics. In particular, DNA methylation appears to play a key role in imprinting associated disease and cancer (see for example, Zeschnigk M, Schmitz B, Dittrich B, Buiting K, Horsthemke B, Doerfler W. “Imprinted segments in the human genome: different DNA methylation patterns in the Prader-Willi/Angelman syndrome region as determined by the genomic sequencing method” Hum Mol Genet 1997 March; 6(3):387-95 and Peter A. Jones “Cancer. Death and methylation”. Nature. 2001 Jan. 11; 409(6817):141, 143-4). The link between cytosine methylation and cancer has already been established and it appears that cytosine methylation has the potential to be a significant and useful clinical diagnostic marker.
The application of molecular biological techniques in the field of methylation analysis has heretofore been limited to research applications; to date methylation is not a commercially utilised clinical marker. The application of methylation disease markers to a large scale analysis format suitable for clinical, diagnostic and research purposes requires the implementation and adaptation of high throughput techniques in the field of molecular biology to the constraints and demands specific to methylation analysis. Preferred techniques for such analyses include the analysis of bisulfite treated sample DNA by means of microarray technologies, and real time PCR based methods such as MethyLight and HeavyMethyl.
The described invention provides a novel method and computer program products for the process control of assays for the analysis of nucleic acid within DNA samples. The method enables the estimation of the quality of an individual assay based on the distribution of the measurements of variables associated with said assay in comparison to a reference data set. As these measurements are extremely high dimensional and contain outliers, the application of standard MVSPC methods is precluded. In a particularly preferred embodiment of the method a robust version of principal component analysis is used to detect outliers and reduce data dimensionality. This step enables the improved application of multivariate statistical process control techniques. In a particularly preferred embodiment of the method, the T2 control chart is utilised to monitor process relevant parameters. This can be used to improve the assay process itself, limit necessary repetitions to affected samples only and thereby maintain quality in a cost effective way.
In the following application the term ‘statistical distance’ is taken to mean a distance between datasets or a single measurement vector and a data set that is calculated with respect to the statistical distribution of one or both data sets. In the following the term ‘robust’ when used to describe a statistic or statistical method is taken to mean a statistic or statistical method that retains its usefulness even when one or more of its assumptions (e.g. normality, lack of gross errors) is violated.
The method and computer program products according to the disclosed invention provide novel means for the verification and controlling of biological assays. Said method and computer program products may be applied to any means of detecting nucleic acid variations wherein a large number of variables are analysed, and/or for controlling experiments wherein a large number of variables influence the quality of the experimental data. Said method is therefore applicable to a large number of commonly used assays for the analysis of nucleic acid variations including, but not limited to, microarray analysis and sequencing, for example in the fields of mRNA expression analysis, single nucleotide polymorphism detection and epigenetic analysis.
To date, the automated analysis of nucleic acid variations has been limited by experiment to experiment variation. Errors or fluctuations in process variables of the environment within which the assays are carried out can lead to decreased quality of assays which may ultimately lead to false interpretations of the experimental results. Furthermore, certain constraints of assay design, most notably nucleic acid sequence (which affects factors such as cross hybridisation, background and noise in microarray analysis), may be subject to experiment to experiment variation, further complicating standard means of assay result analysis and data interpretation.
One of the factors that complicates the controlling of such high throughput assays within predetermined parameters is the high dimensionality of the datasets which are required to be monitored. Therefore, multiple repetitions of each assay are often carried out in order to minimize the effects of process artefacts in the interpretation of complex nucleic acid assays. There is therefore a pronounced need in the art for improved methods of ensuring the quality of high throughput genomic assays.
In one embodiment, the method and computer program products according to the invention provide a means for the improved detection of assay results which are unsuitable for data interpretation. The disclosed method provides a means of identifying said unsuitable experiments, or batches of experiments, said identified experiments thereupon being excluded from subsequent data analysis. In an alternative embodiment said identified experiments may be further analysed to identify specific operating parameters of the process used to carry out the assay. Said parameters may then be monitored to bring the quality of subsequent experiments within predetermined quality limits. The method and computer program products according to the invention thereby decrease the requirement for repetition of assays as a standard means of quality control. The method according to the invention further provides a means of increasing the accuracy of data interpretation by identifying experiments unsuitable for data analysis.
In the following it is particularly preferred that all herein described elements of the method are implemented by means of a computer.
The aim of the invention is achieved by means of a method of verifying and controlling nucleic acid analysis assays using statistical process control and/or computer program products used for said purpose. The statistical process control may be either multivariate statistical process control or univariate statistical process control. The suitability of each method will be apparent to one skilled in the art. The method according to the invention is characterized in that variables of each experiment are monitored, for each experiment the statistical distance of said variables from a reference data set (also herein referred to as a historical data set) is calculated, and wherein a deviation is beyond a pre-determined limit said experiment is indicated as unsuitable for further interpretation. It is particularly preferred that the method according to the invention is implemented by means of a computer.
In a preferred embodiment this method is used for the controlling and verification of assays used for the determination of cytosine methylation patterns within nucleic acids. In a particularly preferred embodiment the method is applied to those assays suitable for a high throughput format, for example but not limited to, sequencing and microarray analysis of bisulphite treated nucleic acids.
In one embodiment, the method according to the invention comprises four steps. In the first step a reference data set (also herein referred to as a historical data set) is defined, said data set consisting of all the variables that are to be monitored and controlled. In the second step a test data set is defined. Said test data set consists of the experiment or experiments that are to be controlled, and wherein each experiment is defined according to the values of the variables to be analysed.
In the third step of the method the statistical distance between the reference and test data sets or elements or subsets thereof are determined. In the fourth step of the method individual elements or subsets of the test dataset which have a statistical distance larger than that of a predetermined value are identified.
In a particularly preferred embodiment of the method, subsequent to the definition of the reference and test data sets the method comprises a further step, hereinafter referred to as step 2ii). Said step comprises reducing the data dimensionality of the reference and test data set by means of robust embedding of the values into a lower dimensional representation. The embedding space may be calculated by using one or both of the reference and the test data set. It is particularly preferred that the data dimensionality reduction is carried out by means of principal component analysis. In one embodiment of the method step 2ii) comprises the following steps. In the first step the data set is projected by means of robust principal component analysis. In the second step outliers are removed from the data set according to their statistical distances calculated by means of one or more methods taken from the group consisting of: Hotelling's T2 distance; percentiles of the empirical distribution of the reference data set; percentiles of a kernel density estimate of the distribution of the reference data set; and distance from the hyperplane of a nu-SVM (see Schölkopf, Bernhard and Smola, Alex J. and Williamson, Robert C. and Bartlett, Peter L., New Support Vector Algorithms. Neural Computation, Vol. 12, 2000) estimating the support of the distribution of the reference data set. In the third step the embedding projection is calculated by means of standard principal component analysis and the cleared or the complete data set is projected onto this basis vector system.
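By way of illustration only, the following Python sketch shows one way step 2ii) could be realised. It assumes a component-wise median/MAD screen as a simple stand-in for the robust PCA projection and a chi-square cut-off on the resulting robust distances; the function name, the cut-off and the use of the scikit-learn PCA routine are illustrative assumptions rather than features prescribed by the text.

```python
# Minimal sketch of step 2ii): robust screening of outliers followed by a
# standard PCA embedding estimated on the cleared data (illustrative only).
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

def reduce_dimensionality(reference, test, d=5, alpha=0.01):
    """reference, test: (n_experiments, n_variables) arrays of assay variables."""
    data = np.vstack([reference, test])
    # Robust centering and scaling (a simple stand-in for a full rPCA projection).
    center = np.median(data, axis=0)
    scale = 1.4826 * np.median(np.abs(data - center), axis=0) + 1e-12
    z = (data - center) / scale
    # Remove gross outliers via their squared robust distance.
    dist2 = (z ** 2).sum(axis=1)
    cleared = data[dist2 <= chi2.ppf(1 - alpha, df=data.shape[1])]
    # Standard PCA basis estimated on the cleared data, then project everything.
    pca = PCA(n_components=d).fit(cleared)
    return pca.transform(reference), pca.transform(test)
```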
In one embodiment of the method at least one of the variables measured in the first and second steps is determined according to the methylation state of the nucleic acids.
In a further preferred embodiment of the method at least one of the variables measured in the first and second steps is determined by the environment used to conduct the assay. Wherein the assay is a microarray analysis it is further preferred that these variables are independent of the arrangement of the oligonucleotides on the array. In a particularly preferred embodiment said variables are selected from the group comprising mean background/baseline values; scatter of the background/baseline values; scatter of the foreground values; geometrical properties of the array; percentiles of background values of each spot; and positive and negative assay control measures.
In a particularly preferred embodiment wherein the assay is a microarray based assay said variables are selected from the group comprising mean background/baseline intensity values; scatter of the background/baseline intensity values; coefficient of variation for background spot intensities; statistical characterisation of the distribution of the background/baseline intensity values (1%, 5%, 10%, 25%, 50%, 75%, 90%, 95%, 99% percentiles, skewness, kurtosis); scatter of the foreground intensity values; coefficient of variation for foreground spot intensities; statistical characterisation of the distribution of the foreground intensity values (1%, 5%, 10%, 25%, 50%, 75%, 90%, 95%, 99% percentiles, skewness, kurtosis); saturation of the foreground intensity values; ratio of mean to median foreground intensity values; geometrical properties of the array, such as the gradient of background intensity values calculated across a set of consecutive rows or columns along a given direction; mean spot diameter values; scatter of spot diameter values; percentiles of the spot diameter value distribution across the microarray; and positive and negative assay control measures.
When selecting appropriate variables for the analysis an important criterion is that the statistical distribution of these variables does not change significantly between different series of experiments (wherein each series of experiments is defined as a large series of measurements carried out within one time period and with the same assay design). This allows the utilisation of measurements from previous studies as reference data sets.
Wherein the assay is a microarray based assay it is preferred that the variables to be analysed include at least one variable that refers to each of the foreground, background, geometrical properties and saturation of the microarray. A particularly preferred set of variables is as follows:
For each variable or group thereof the further steps of the method are carried out according to the described method. Therefore, in one embodiment of the method, the statistical distance of each variable from the reference dataset is first calculated. It is preferred that the reference data set is composed of a large set of previous measurements obtained under similar experimental conditions. The variables within each category are then combined, either by embedding into a 1-dimensional space or by averaging single values.
Preferably, both the statistical distance calculation and the embedding are carried out in a robust way.
In a further preferred embodiment, to calculate the quality of the experiment a lower dimensional embedding of both the reference and the test data set is first calculated. It is preferred that the reference data set that is used is composed of a large set of previous measurements obtained under similar experimental conditions. Secondly, the statistical distance is calculated in this reduced dimensional space. This statistical distance is used as the quality score.
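The per-variable calculation and category-wise combination described above might, for example, be sketched as follows, assuming a robust z-score (median/MAD) as the univariate statistical distance and simple averaging within a category; the function and variable names are hypothetical.

```python
# Illustrative sketch: robust univariate distance per variable, averaged per category.
import numpy as np

def variable_distance(reference_values, test_value):
    """Robust distance of a single test value from the reference distribution."""
    center = np.median(reference_values)
    mad = 1.4826 * np.median(np.abs(reference_values - center)) + 1e-12
    return abs(test_value - center) / mad

def category_score(reference, test_experiment, variable_indices):
    """Combine the distances of all variables in one category by averaging."""
    return np.mean([
        variable_distance(reference[:, j], test_experiment[j])
        for j in variable_indices
    ])
```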
It will be obvious to one skilled in the art that it is not necessary that the second step of the method is temporally subsequent to the first step of the method. The reference data set may be defined subsequent to the test data set; alternatively it may be defined concurrently with the test data set. In one embodiment of the method the reference data set may consist of all experiments run in a series wherein said series is user defined. To give one example, where a microarray assay is applied to a series of tissue samples the measured variables of all the samples may be included in said reference data set, however analyses of the same tissue set using an alternative array may not. Accordingly the test data set may be a subset of or identical to the reference data set. In another embodiment of the method the reference data set consists of experiments that were carried out independently or separately from those of the test data set. The two data sets may be differentiated by factors such as, but not limited to, time of production, operator (human or machine) and the environment used to carry out the experiment (for example, but not limited to, temperature, reagents used and concentrations thereof, temporal factors and nucleic acid sequence variations).
In a further embodiment of the method the reference data set is derived from a set of experiments wherein the value of each analysed variable of each experiment is either within predetermined limits or, alternatively, said variables are controlled in an optimal manner.
In step 4 of the method the statistical distance may be calculated by means of one or more methods taken from the group consisting of: the Hotelling's T2 distance between a single test measurement vector and the reference data set; the Hotelling's T2 distance between a subset of the test data set and the reference data set; the distance between the covariance matrices of a subset of the test data set and the covariance matrix of the reference set; percentiles of the empirical distribution of the reference data set; percentiles of a kernel density estimate of the distribution of the reference data set; and distance from the hyperplane of a nu-SVM (see Schölkopf, Bernhard and Smola, Alex J. and Williamson, Robert C. and Bartlett, Peter L., New Support Vector Algorithms. Neural Computation, Vol. 12, 2000) estimating the support of the distribution of the reference data set. Wherein Hotelling's T2 distance between a single test measurement vector and the reference data set is measured, it is preferred that the T2 distance is calculated by using the sample estimate for mean and variance or any robust estimate for location, including trimmed mean, median, Tukey's biweight, L1-median, Oja-median, minimum volume ellipsoid estimator and S-estimator (see Hendrik P. Lopuhaä and Peter J. Rousseeuw: Breakdown points of affine equivariant estimators of multivariate location and covariance matrices), and any robust estimate for scale, including median absolute deviation, interquartile range, Qn-estimator, minimum volume ellipsoid estimator and S-estimator.
In a particularly preferred embodiment this is defined as:
$T^2(i) = (m_i - \mu)' S^{-1} (m_i - \mu)$
wherein the reference set mean is
$\mu = \frac{1}{N_C} \sum_{i=1}^{N_C} m_i$
and the reference set sample covariance matrix is
$S = \frac{1}{N_C - 1} \sum_{i=1}^{N_C} (m_i - \mu)(m_i - \mu)'$
wherein $N_C$ is the number of experiments in the reference set and $m_i$ is the i-th measurement vector of the reference or test data set.
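For illustration, a minimal sketch of this T2 computation using the plain sample estimates follows; any of the robust location and scale estimators listed above could be substituted, and the sketch assumes the reference set contains more experiments than variables so that S is invertible.

```python
# Sketch of T2(i) = (m_i - mu)' S^{-1} (m_i - mu) against the reference data set.
import numpy as np

def hotelling_t2(reference, m_i):
    """reference: (N_C, p) array; m_i: measurement vector of length p."""
    mu = reference.mean(axis=0)
    S = np.cov(reference, rowvar=False)   # sample covariance; robust estimates possible
    diff = m_i - mu
    return float(diff @ np.linalg.solve(S, diff))
```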
Wherein the Hotelling's T2 distance is calculated between a subset of the test data set and the reference data set, it is preferred that the T2 is calculated by using the sample estimate for mean and variance or any robust estimate for location, including trimmed mean, median, Tukey's biweight, L1-median and Oja-median, and any robust estimate for scale, including median absolute deviation, interquartile range, Qn-estimator, minimum volume ellipsoid estimator and S-estimator. In a particularly preferred embodiment this is defined as:
$T_w^2(i) = (\mu_{HDS} - \mu_{CDS})^T \bar{S}^{-1} (\mu_{HDS} - \mu_{CDS})$
Wherein ‘HDS’ refers to the historical data set, also referred to herein as the reference data set, and ‘CDS’ refers to the current data set, also referred to herein as the test data set. Furthermore, $\bar{S}$ is calculated from the sample covariance matrices $S_{HDS}$ and $S_{CDS}$.
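The text does not reproduce the formula for the combined covariance $\bar{S}$; purely for illustration, the sketch below assumes the standard pooled estimate, which is a common but not the only possible choice.

```python
# Sketch of the Tw2 distance between the reference (HDS) and a test subset (CDS).
import numpy as np

def tw2(hds, cds):
    """hds, cds: (n, p) arrays of measurement vectors in the same variable space."""
    mu_h, mu_c = hds.mean(axis=0), cds.mean(axis=0)
    n_h, n_c = len(hds), len(cds)
    S_h = np.cov(hds, rowvar=False)
    S_c = np.cov(cds, rowvar=False)
    # Pooled covariance: an assumption, not quoted from the text.
    S_bar = ((n_h - 1) * S_h + (n_c - 1) * S_c) / (n_h + n_c - 2)
    diff = mu_h - mu_c
    return float(diff @ np.linalg.solve(S_bar, diff))
```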
Wherein the statistical distance is calculated as the distance between the covariance matrices of a subset of the test data set and the covariance matrix of the reference set, it is preferred that the test statistic of the likelihood ratio test for different covariance matrices is included. See for example Hartung J. and Elpelt B: Multivariate Statistik. R. Oldenbourg, München, Wien, 1995. In a particularly preferred embodiment this is defined as:
In a further embodiment of the method, subsequent to steps 1 to 4, the method may further comprise a fifth step. In a first embodiment of the method said identified experiments or batches thereof are further interrogated to identify specific operating parameters of the process used to carry out the assay that may be required to be monitored to bring the quality of the assays within predetermined quality limits. In one embodiment of the method this is enabled by means of verifying the influence of each individual variable by computing its univariate T2 distances between reference and test data set. In a further embodiment one may analyse the orthogonalized T2 distance by computing the PCA embedding of step 2ii) based on the reference data set. The principal component responsible for the largest part of the T2 distance of an out of control test data point may then be identified. Responsible individual variables can be identified by their weights in this principal component. In a further embodiment variables responsible for the out of control situation can be identified by backward selection. A subset of variables or single variables can be excluded from the statistical distance calculation and one can observe whether the computed distance becomes significantly smaller. Wherein the computed statistical distance significantly decreases one can conclude that the excluded variables were at least partially responsible for the observed out of control situation.
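As an illustrative sketch of the backward selection idea, one might drop one variable at a time, recompute the statistical distance, and rank variables by how much their removal reduces it; the distance function could be, for example, the hotelling_t2 sketch given earlier. The function names are hypothetical.

```python
# Backward-selection sketch: large drop in distance after removing variable j
# marks variable j as a candidate cause of the out-of-control situation.
import numpy as np

def responsible_variables(hds, test_vector, distance_fn):
    """hds: (n, p) reference array; test_vector: length-p vector; distance_fn(ref, x)."""
    base = distance_fn(hds, test_vector)
    contributions = {}
    for j in range(hds.shape[1]):
        keep = [k for k in range(hds.shape[1]) if k != j]
        reduced = distance_fn(hds[:, keep], test_vector[keep])
        contributions[j] = base - reduced   # large value -> variable j is suspect
    return sorted(contributions, key=contributions.get, reverse=True)
```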
In a further embodiment, said identified assays are designated as unsuitable for data interpretation, the experiment(s) are excluded from data interpretation, and are preferably repeated until identified as having a statistical distance within the predetermined limit.
In a particularly preferred embodiment, the method further comprises the generation of a document comprising said elements or subsets of the test data determined to be outliers. In a further embodiment said document further comprises the contribution of individual variables to the determined statistical distance. It is preferred that said document be generated in a readable manner, either to the user of the computer program or by means of a computer, and wherein said computer readable document further comprises a graphical user interface.
Said document may be generated by any means standard in the art, however, it is particularly preferred that the document is automatically generated by computer implemented means, and that the document is accessible on a computer readable format (e.g. HTML, portable document format (pdf), postscript (ps)) and variants thereof. It is further preferred that the document be made available on a server enabling simultaneous access by multiple individuals. In another aspect of the invention computer program products are provided. An exemplary computer program product comprises:
It is further preferred that said computer program product comprises a computer code for the reduction of the data dimensionality of the reference and test data set by means of robust embedding of the values into a lower dimensional representation.
In a preferred embodiment the computer program product further comprises a computer code that reduces the data dimensionality of the reference and test data set by means of robust embedding of the values into a lower dimensional representation. In this embodiment of the invention the embedding space may be calculated using one or both of the reference and the test data sets. In one particularly preferred embodiment the computer code carries out the data dimensionality reduction step by means of a method comprising the following steps:
In a further preferred embodiment the computer program product further comprises a computer code that generates a document comprising said elements or subsets of the test data identified by the computer code of step d). It is preferred that said document be generated in a readable manner, either to the user of the computer program or by means of a computer, and wherein said computer readable document further comprises a graphical user interface.
In this example the method according to the invention is used to control the analysis of methylation patterns by means of nucleic acid microarrays.
In order to measure the methylation state of different CpG dinucleotides by hybridization, sample DNA is bisulphite treated to convert all unmethylated cytosines to uracil; this treatment does not affect methylated cytosines, which are consequently conserved. Genes are then amplified by PCR using fluorescently labelled primers; in the amplificate nucleic acids unmethylated CpG dinucleotides are represented as TG dinucleotides and methylated CpG sites are conserved as CG dinucleotides. Pairs of PCR primers are multiplexed and designed to hybridise to DNA segments containing no CpG dinucleotides. This allows unbiased amplification of multiple alleles in a single reaction. All PCR products from each individual sample are then mixed and hybridized to glass slides carrying a pair of immobilised oligonucleotides for each CpG position to be analysed. Each of these detection oligonucleotides is designed to hybridize to the bisulphite converted sequence around a specific CpG site which is either originally unmethylated (TG) or methylated (CG). Hybridization conditions are selected to allow the detection of the single nucleotide differences between the TG and CG variants.
In the following, $N_{CpG}$ is the number of measured CpG positions per slide, $N_S$ is the number of biological samples in the study and $N_C$ is the number of hybridized chips in the study. For a specific CpG position $k \in \{1, \ldots, N_{CpG}\}$, the frequency of methylated alleles in sample $j \in \{1, \ldots, N_S\}$, hybridized onto chip $i \in \{1, \ldots, N_C\}$, can then be quantified as (Equation 1)
$m_{ik} = \log(CG_{ik} / TG_{ik})$
where CGik and TGik are the corresponding hybridization intensities. This ratio is invariant to the overall intensity of the particular hybridization experiment and therefore gives a natural normalization of our data.
Here we will refer to a single hybridization experiment i as an experiment or chip. The resulting set of measurement values is the methylation profile $m_i = (m_{i1}, \ldots, m_{iN_{CpG}})'$. We usually have several repeated hybridization experiments i for every single sample j. The methylation profile for a sample j is estimated from its set of repetitions $R_j$ by the L1-median as $m_j = \arg\min_x \sum_{i \in R_j} \| m_i - x \|_2$. In contrast to the simple component wise median this gives a robust estimate of the methylation profile that is invariant to orthogonal linear transformations such as PCA.
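A minimal sketch of these two quantities follows, assuming the log(CG/TG) reading of Equation 1 given above and computing the L1-median with a simple Weiszfeld-style iteration; in practice a robustly implemented library routine would normally be preferred.

```python
# Sketch: per-CpG log ratio and L1-median over the chip repetitions of one sample.
import numpy as np

def methylation_profile(cg, tg):
    """Element-wise log ratio of CG and TG hybridisation intensities."""
    return np.log(cg / tg)

def l1_median(profiles, n_iter=100, eps=1e-9):
    """Geometric (L1) median of the repeated profiles of one sample."""
    x = np.median(profiles, axis=0)            # start from the component-wise median
    for _ in range(n_iter):
        d = np.linalg.norm(profiles - x, axis=1) + eps
        x_new = (profiles / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x
```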
Data Sets
In our analysis we used data from three microarray studies. In each study the methylation status of about 200 different CpG dinucleotide positions from promoters, intronic and coding sequences of 64 genes was measured.
Temperature Control: Our first set of 207 chips came from a control experiment where PCR amplificates of DNA from the peripheral blood of 15 patients diagnosed with ALL or AML were hybridized at 4 different temperatures (38° C., 42° C., 44° C., 46° C.). We will use this data set to prove that our method can reliably detect shifts in experimental conditions.
Lymphoma: The second data set with an overall number of 647 chips came from a study where the methylation status of different subtypes of non-Hodgkin lymphomas from 68 patients was analyzed. All chips underwent a visual quality control, resulting in quality classification as “good” (proper spots and low background), “acceptable” (no obvious defects but uneven spots, high background or weak hybridization signals) and “unacceptable” (obvious defects). We will use this data set to identify different types of outliers and show how our methods detect them.
In addition we simulated an accidental exchange of oligo probes during slide fabrication in order to demonstrate that such an effect can be detected by our method. The exchange was simulated in silico by permuting 12 randomly selected CpG positions on 200 of the chips (corresponding to an accidental rotation of a 24-well oligo supply plate during preparation for spotting).
ALL/AML: Finally we show data from a second study on ALL and AML, containing 468 chips from 74 different patients. During the course of this study 46 oligomers had to be re-synthesized, some of which showed a significant change in hybridization behavior due to synthesis quality problems. We will demonstrate how our algorithm successfully detected this systematic change in experimental conditions.
Typical Artefacts
Typical artefacts in microarray based methylation analysis are shown in
With a high number of replications for each biological sample and the corresponding average m being reliably estimated, outlier chips can be relatively easily detected by their strong deviation from the robust sample average. In the following, we will discuss some typical outlier situations, using data from the Lymphoma experiment. In this case the hybridization of each sample was repeated at a very high redundancy of 9 chips.
After identifying possible error sources the question remains how to reliably detect them, in particular if they cannot be avoided with absolute certainty. One aim of the invention is therefore to exclude single outlier chips from the analysis and to detect systematic changes in experimental conditions as early as possible in order to facilitate a fast recalibration of the production process.
Detecting Outlier Chips with Robust PCA
Methods
As a first step we want to detect single outlier chips. In contrast to standard statistical approaches based on image features of single slides we will use the overall distribution of the whole experimental series. This is motivated by the fact that although image analysis algorithms will successfully detect bad hybridization signals, they will usually fail in cases of unspecific hybridization. The aim is to identify the region in measurement space where most of the chips mi, i=1 . . . Nc, are located. The region will be defined by its center and an upper limit for the distance between a single chip and the region center. Chips with deviations higher than the upper limit will be regarded as outliers.
A simple approach is to independently define for every CpG position k the deviation from the center $\mu_k$ as $t_k = |m_{ik} - \mu_k| / s_k$ (hereinafter referred to as Equation 3), where $\mu_k = \frac{1}{N} \sum_i m_{ik}$ is the mean and $s_k^2 = \frac{1}{N-1} \sum_i (m_{ik} - \mu_k)^2$ is the sample variance over all chips. Assuming that the $m_{ik}$ are normally distributed, $t_k$ multiplied by a constant follows a t-distribution with N−1 degrees of freedom. This can be used to define the upper limit of the admissible region for a given significance level α.
However, a separate treatment of the different CpG positions is only optimal when their measurement values are independent. As
Assuming that the $m_i$ are multivariate normally distributed, T2 multiplied by a constant follows an F-distribution with $N_{CpG}$ and $N_C - N_{CpG}$ degrees of freedom. This can be used to define the upper limit of the admissible region for a given significance level α.
Two problems arise when we want to use the T2-distance for microarray data: firstly, the number of CpG positions $N_{CpG}$ typically exceeds the number of chips $N_C$, so that the sample covariance matrix cannot be reliably estimated and inverted; secondly, the sample estimates for mean and covariance are not robust against outliers.
The first problem can be addressed by using principal component analysis (PCA) to reduce the dimensionality of our measurement space. This is done by projecting all methylation profiles $m_i$ onto the first d eigenvectors with the highest variance. As a result we get the d-dimensional centered vectors $\tilde{m}_i = P_{PCA}(m_i - \mu)$ in eigenvector space. After the projection, the covariance matrix $\tilde{S} = \mathrm{diag}(\tilde{s}_1, \ldots, \tilde{s}_d)$ of the reduced space is a diagonal matrix and the T2-distance of Equation 4 is approximated by the T2-distance in the reduced space, $\tilde{T}^2(i) = \sum_{j=1}^{d} \tilde{m}_{ij}^2 / \tilde{s}_j$.
Under the assumption that the true variance is equal to $\tilde{s}_j$, $\tilde{T}^2$ follows a χ2 distribution with d degrees of freedom. This can be used to define the upper limit of the admissible region for a given significance level α. However the problem remains that the estimated eigenvectors and variances $\tilde{s}_j$ are not robust against outliers.
We propose to solve the problem of outlier sensitivity together with the dimension reduction step by using robust principal component analysis (rPCA). rPCA finds the first d directions with the largest scale in data space, robustly approximating the first d eigenvectors. The algorithm starts with centering the data with a robust location estimator. Here we will use the L1-median according to Equation 6:
$\mu_{L1} = \arg\min_x \sum_{i=1}^{N_C} \| m_i - x \|_2$
In contrast to the simple component-wise median, this gives a robust estimate of the distribution center that is invariant to orthogonal linear transformations such as PCA. Then all centered observations are projected onto a finite subset of all possible directions in measurement space. The direction with maximum robust scale is chosen as an approximation of the largest eigenvector (e.g. by using the Qn estimator). After projecting the data into the orthogonal subspace of the selected “eigenvector” the procedure searches for an approximation of the next eigenvector. Here the finite set of possible directions is simply chosen as the set of centered observations themselves.
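The projection-pursuit procedure described in the two preceding paragraphs might be sketched as follows. The MAD is used here as a simple robust scale in place of the Qn estimator mentioned in the text, and l1_median_fn stands for any L1-median routine (such as the Weiszfeld-style sketch given earlier); all names are illustrative.

```python
# Sketch of projection-pursuit rPCA: candidate directions are the centered
# observations; the direction with maximal robust scale approximates an eigenvector.
import numpy as np

def robust_pca(data, d, l1_median_fn):
    center = l1_median_fn(data)                       # robust location (Equation 6)
    residual = data - center
    components = []
    for _ in range(d):
        norms = np.linalg.norm(residual, axis=1)
        mask = norms > 1e-12
        candidates = residual[mask] / norms[mask, None]
        scores = candidates @ residual.T              # projections onto each candidate
        scales = np.median(
            np.abs(scores - np.median(scores, axis=1, keepdims=True)), axis=1
        )                                             # MAD of each projection (assumption)
        v = candidates[np.argmax(scales)]
        components.append(v)
        residual = residual - np.outer(residual @ v, v)   # deflate into orthogonal subspace
    return center, np.array(components)
```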
After obtaining the robust projection of our data into a d dimensional subspace we can compute the upper limit of the admissible region $\tilde{T}^2_{UCL}$, also referred to as the upper control limit (UCL). For a given significance level α it is computed as Equation 7:
$\tilde{T}^2_{UCL} = \chi^2_{d, 1-\alpha}$
Every observation $m_i$ with $\tilde{T}^2(i) > \tilde{T}^2_{UCL}$ is regarded as an outlier.
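A short sketch of this decision rule follows; the robust per-component variances are estimated here with the MAD, which is an assumption (the text leaves the robust scale estimator open), and the chi-square control limit of Equation 7 is used.

```python
# Flag chips whose T2 distance in the robust d-dimensional subspace exceeds the UCL.
import numpy as np
from scipy.stats import chi2

def flag_outliers(data, center, components, alpha=0.01):
    """components: (d, p) array of robust directions, e.g. from robust_pca above."""
    scores = (data - center) @ components.T                         # (N_C, d) projections
    scale = 1.4826 * np.median(np.abs(scores - np.median(scores, axis=0)), axis=0)
    t2 = ((scores / scale) ** 2).sum(axis=1)                        # approx. chi2 with d df
    ucl = chi2.ppf(1 - alpha, df=components.shape[0])               # Equation 7
    return t2 > ucl
```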
Results
In order to test how the rPCA algorithm works on microarray data we applied it to the Lymphoma dataset and compared its performance to classical PCA. The results are shown in
The rPCA algorithm detected 97% of the chips with “unacceptable” quality, whereas classical PCA only detected 29%. 10% of the “acceptable” chips were detected as outliers by rPCA, whereas PCA detected 3%. rPCA detected 21 chips as outliers which were classified as “good”. These chips have all been confirmed to show saturated hybridization signals, not identified by visual inspection. This means rPCA is able to detect nearly all cases of outlier chips identified by visual inspection. Additionally rPCA detects microarrays which have inconspicuous image quality but show an unusual hybridization pattern.
An obvious concern with this use of rPCA for outlier detection is that it relies on the assumption of a normal distribution of the data. If the distribution of the biological data is highly multi-modal, biological subclasses may be wrongly classified as outliers. To quantify this effect we simulated a very strong cluster structure in the Lymphoma data by shifting one of the smaller subclasses by a multiple of the standard deviation. Only when the measurements of all 174 CpG positions of the subclass were shifted by more than 2 standard deviations was a considerable part of the biological samples wrongly classified as outliers. In order to avoid such a misclassification, we tolerate at most 50% of the repeated measurements of a single biological sample being classified as outliers. However, we never reached this threshold in practice.
Statistical Process Control
Methods
In the last section we have seen how outliers can be detected solely on the basis of the overall data distribution. Statistical process control expands this approach by introducing the concept of time. The aim is to observe the variables of a process for some time under perfect working conditions. The data collected during this period form the so called historical data set (HDS), also referred to above as the ‘reference data set’. Under the assumption that all variables are normally distributed, the mean μHDS and the sample covariance matrix SHDS of the historical data set fully describe the statistical behavior of the process.
Given the historical data set it becomes possible to check at any time point i how far the current state of the process has deviated from the perfect state by computing the T2-distance between the ideal process mean $\mu_{HDS}$ and the current observation $m_i$. This corresponds to Equation 4 with the overall sample estimates μ and S replaced by their reference counterparts $\mu_{HDS}$ and $S_{HDS}$. Any change in the process will cause observations with greater T2-distances. To decide whether an observation shows a significant deviation from the HDS we compute the upper control limit as in Equation 8:
$T^2_{UCL} = \frac{p(n+1)(n-1)}{n(n-p)} F_{1-\alpha}(p, n-p)$
where p is the number of observed variables, n is the number of observations in the HDS, α is the significance level and F is the F-distribution with p and n−p degrees of freedom. Whenever $T^2 > T^2_{UCL}$ is observed the process has to be regarded as significantly out of control.
In our case the process to control is a microarray experiment and the only process variables we have observed are the log ratios of the actual hybridization intensities. A single observation is then a chip $m_i$ and the HDS of size $N_{HDS}$ is defined as $\{m_1, \ldots, m_{N_{HDS}}\}$. We have to be aware of a few important issues in this interpretation of statistical process control. First, our data has a multi-modal distribution which results from a mixture of different biological samples and classes. Therefore the assumption of normality is only a rough approximation and $T^2_{UCL}$ from Equation 8 should be regarded with caution. Secondly, as we have seen in the last sections, microarray experiments produce outliers, resulting in transgression of the UCL. This means sporadic violations of the UCL are normal and do not indicate that the process is out of control. The third issue is that we have to use the assumption that a microarray study will not systematically change its data generating distribution over time. Therefore the experimental design has to be randomized or block randomized, otherwise a systematic change in the correctly measured biological data will be interpreted as an out of control situation (e.g. when all patients with the same disease subtype are measured in one block). Finally, the question remains of what time means in the context of a microarray experiment. Beside the biological variation in the data, there are a multitude of different parameters which can systematically alter the final hybridization intensities. The experimental series should stay constant with regard to all of them. In our experience the best initial choice is to order the chips by their date of hybridization, which shows a very high correlation to most parameters of interest.
Although it is certainly interesting to look at how single hybridization experiments $m_i$ compare to the HDS, we are more interested in how the general behavior of the chip process changes over time. Therefore we define the current data set (CDS) (also referred to above as the test data set) as $\{m_{i-N_{CDS}/2}, \ldots, m_i, \ldots, m_{i+N_{CDS}/2}\}$, where i is the time of interest. This allows us to look at the data distribution in a time interval of size $N_{CDS}$ around i. In analogy to the classical setting in statistical process control we can define the T2-distance between the HDS and the CDS as in Equation 9:
$T_w^2(i) = (\mu_{HDS} - \mu_{CDS})^T \tilde{S}^{-1} (\mu_{HDS} - \mu_{CDS})$
where $\tilde{S}$ is calculated from the sample covariance matrices $S_{HDS}$ and $S_{CDS}$.
Although it is possible to use the $T_w^2$-distance between the historical and current data set to test for $\mu_{HDS} = \mu_{CDS}$, this information is relatively meaningless. The hypothesis that the means of HDS and CDS are equal would almost always be rejected, due to the high power of the test. What is of more interest is $T_w$ itself, which is the amount by which the two sample means differ in relation to the standard deviation of the data.
In order to see whether an observed change of the $T_w^2$-distance comes from a simple translation it is also interesting to compare the two sample covariances $S_{HDS}$ and $S_{CDS}$. A translation in log(CG/TG) space means that the hybridization intensities of HDS and CDS differ only by a constant factor (e.g. a change in probe concentration). This situation can be detected by looking at the quantity L,
which is the test statistic of the likelihood ratio test for different covariance matrices. It gives a distance measure between the two covariance matrices (i.e. L=0 means equal covariances).
Before we can apply the described methods to a real microarray data set we again have to solve the problem that we need a non-singular and outlier resistant estimate of $S_{HDS}$ and $S_{CDS}$. What makes the problem even harder is that we cannot know a priori how a change in experimental conditions will affect our data. In contrast to the last section, the simple approximation of $S_{HDS}$ by its first principal components will not work here. The reason is that changes in the experimental conditions outside the HDS will not necessarily be represented in the first principal components of $S_{HDS}$.
The solution is to first embed all the experimental data into a lower dimensional space by PCA. This works because any significant change in the experimental conditions will be captured by one of the first principal components. $S_{HDS}$ and $S_{CDS}$ can then be reliably computed in the lower dimensional embedding. The problem of robustness is simply solved by first using robust PCA to remove outliers before performing the actual embedding and before computing the sample covariances. A summary of our algorithm is:
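Since the summary list of the algorithm is not reproduced here, the following condensed sketch illustrates the pipeline as described in the surrounding text: a robust outlier screen, a PCA embedding estimated on the cleared data, the T2 of each chip against the HDS, and a windowed Tw2 for a sliding current data set. The MAD-based screen, the pooled covariance for Tw2 and the exact upper control limit constant are standard choices assumed for illustration, not quoted from the text.

```python
# Condensed, illustrative sketch of the T2 control chart computation over time.
import numpy as np
from scipy.stats import f as f_dist
from sklearn.decomposition import PCA

def t2_control_chart(chips, n_hds, d=5, window=20, alpha=0.01):
    """chips: (N_C, N_CpG) methylation profiles ordered by hybridisation date."""
    # 1) Crude robust outlier screen before estimating the embedding (assumption).
    med = np.median(chips, axis=0)
    mad = 1.4826 * np.median(np.abs(chips - med), axis=0) + 1e-12
    robust_d2 = (((chips - med) / mad) ** 2).sum(axis=1)
    ok = robust_d2 < np.percentile(robust_d2, 99)
    # 2) PCA embedding estimated on the cleared data, all chips projected.
    pca = PCA(n_components=d).fit(chips[ok])
    z = pca.transform(chips)
    hds, n = z[:n_hds], n_hds                      # requires n > d
    mu, S = hds.mean(axis=0), np.cov(hds, rowvar=False)
    # Standard control limit for individual observations (Equation 8, assumed form).
    ucl = d * (n + 1) * (n - 1) / (n * (n - d)) * f_dist.ppf(1 - alpha, d, n - d)
    # 3) T2 of every single chip against the HDS.
    diff = z - mu
    t2 = np.einsum('ij,ij->i', diff @ np.linalg.inv(S), diff)
    # 4) Windowed Tw2 between the HDS and a sliding current data set.
    tw2 = np.full(len(z), np.nan)
    for i in range(n_hds + window, len(z)):
        cds = z[i - window:i]
        S_bar = ((n - 1) * S + (window - 1) * np.cov(cds, rowvar=False)) / (n + window - 2)
        dmu = mu - cds.mean(axis=0)
        tw2[i] = dmu @ np.linalg.solve(S_bar, dmu)
    return t2, tw2, ucl
```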
With the computed values for $T^2$, $T_w^2$ and L we can generate a plot that visualizes the quality development of the chip process over time, a so-called T2 control chart.
Results
The first example is shown in
Finally,
The major practical problem is now to identify the reasons for the changes. In this regard the most valuable information from the T2 control chart is the time point of process change. It can be cross-checked with the laboratory protocol and the process parameters which have changed at the same time can be identified. In our case the two process shifts corresponded to the time of replacement of re-synthesized probe oligos for slide production, which were obviously delivered at a wrong concentration. After exclusion of the affected CpG positions from the analysis the T2 chart showed normal behavior and the overall noise level of the data set was significantly reduced.
Discussion
Taken together, we have shown that robust principal component analysis and techniques of statistical process control can be used to detect flaws in microarray experiments. Robust PCA has proven to be able to automatically detect nearly all cases of outlier chips identified by visual inspection, as well as microarrays with inconspicuous image quality but saturated hybridization signals. With the T2 control chart we introduced a tool that facilitates the detection and assessment of even minor systematic changes in large scale microarray studies.
A major advantage of both methods is that they do not rely on an explicit modeling of the microarray process, as they are solely based on the distribution of the actual measurements. Having successfully applied our methods to the example of DNA methylation data, we assume that the same results can be achieved with other types of microarray platforms. The sensitivity of the methods improves with increasing study sizes, due to their multivariate nature. This makes them particularly suitable for medium to large scale experiments in a high throughput environment.
The retrospective analysis of a study with our methods can greatly improve results and avoid misleading biological interpretations. When the T2 control chart is monitored in real time a given quality level can be maintained in a very cost effective way. On the one hand, this allows for an immediate correction of process parameters. On the other hand, this makes it possible to specifically repeat only those slides affected by a process artefact. This guarantees high quality while minimizing the number of repetitions.
A general shortcoming of T2 control charts is that they only indicate that something went wrong, but not what exactly the source was. Therefore we have used the time at which a significant change happened in order to identify the responsible process parameter. We have shown how a quantification of the change in covariance structure provides additional information and permits discrimination between different problems such as changes in probe concentration and accidental handling errors.
In one aspect, the method according to the disclosed invention provides a means for automatically generating a concise report based on the disclosed methods for quality monitoring of laboratory process performance. In the disclosed embodiment this report is structured in sections, starting with a summary table (see Table 1) of the performance grades for several evaluation categories of the individual experiment units, followed by a section detailing each evaluation category in turn in a table of grades for this category, the corresponding performance variables the grades are based on, and a set of graphical displays implemented as a panel of box plots (see
Table 1 shows the summary table of category grades for each experimental unit. From left to right, the columns represent the identifier of the experimental unit, the human expert visual grade, the distance of the experimental unit from the estimate of the robust mean location of the set of experiments, the background category grade, the spot characteristic category grade, the geometry characteristic grade and the intensity saturation category grade. Three grade levels are used: good, dubious and bad, based on the grades calculated for each category in turn.
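Purely as an illustration of how such a summary report might be produced automatically, the following sketch writes the category grades of Table 1 to an HTML document served from a shared location; the column names, file name and grading values are hypothetical.

```python
# Sketch of automatic generation of the summary-report table in HTML.
import pandas as pd

def write_summary_report(grades, path="qc_report.html"):
    """grades: list of dicts, one per experimental unit, keyed by the columns below."""
    columns = ["chip_id", "visual_grade", "robust_distance",
               "background", "spot", "geometry", "saturation"]
    table = pd.DataFrame(grades, columns=columns)
    table.to_html(path, index=False)   # place on a server for simultaneous access
    return table
```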
Table 2 shows the complete summary table of all chips analysed in study ‘1’ according to
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/EP03/03288 | 3/28/2003 | WO | | 6/2/2005

Number | Date | Country
---|---|---
60368452 | Mar 2002 | US