The present application claims priority from Australian Provisional Application No 2010900948 filed on 8 Mar. 2010, the content of which is incorporated herein by reference. The present application is related to a corresponding international application entitled “Annotation of a Biological Sequence”, which also claims priority from Australian Provisional Application No 2010900948. The content of the corresponding international application is incorporated herein by reference.
This disclosure generally concerns bioinformatics and more particularly, a computer-implemented method, computer system and computer program for evaluating performance of a classifier. For example, the classifier may be trained for, but is not limited to, annotation of a biological sequence.
A genome project generally includes two phases, the first being to assign and map sequences of the genome of a given species (or a phenotype group). The second phase, which is the annotation of the genome, assigns a role to defined portions of the genome. Genome annotation is important to transform a sequence of adenine (a), guanine (g), thymine (t) and cytosine (c) into commodities or new modalities of human health management.
Most structural annotation to date involves identification of genomic elements such as coding regions, exons, introns and open reading frames (ORFs). Less emphasis has been placed on annotation of regulatory regions, which is more difficult to achieve and could reside anywhere relative to the structural annotation listed above. Functional annotation includes attaching biological information such as biochemical function, biological function, gene expression, regulation and interactions to the genomic elements.
Recent advances in microarray technologies, such as tiling arrays and single nucleotide polymorphism (SNP) arrays, and, more recently, high throughput next generation sequencing (NGS), have opened the field of genome-wide association analysis (GWAS). In general terms, GWAS is an analysis of the genomes of different individuals of a particular species to identify genetic associations with observable traits or disease. Such analysis puts pressure on the development of data analysis techniques capable of coping with the large volumes of data and extracting the relevant knowledge reliably.
According to a first aspect, there is provided a computer-implemented method for evaluating performance of a classifier, the method comprising: (a) comparing labels determined by the classifier with corresponding known labels; and (b) based on the comparison, estimating a probability of observing an equal or better precision at a given recall with random ordering of the labels determined by the classifier.
The estimated probability, which represents the statistical significance of an observed precision at a given recall, is a metric for evaluating performance of the classifier and for comparing multiple classifiers. In one example, a negative logarithmic transformation of the estimated probability will be referred to as the calibrated precision of the classifier. In this case, the larger the calibrated precision, the better the performance of the classifier. The metric is important for a number of reasons. Firstly, the degree of linkage between the determined label and the local content of the segment can be estimated. Secondly, the internal consistency of the labelling, and thus of the classifier, can be measured. Further, calibrated precision allows an objective evaluation of different classifiers trained using different methods. Calibrated precision also provides insight into classifiers whose performance is inadequately captured by precision-recall curves, especially when the dataset has an extremely imbalanced ratio between classes, such as 1:10,000.
Step (a) may comprise calculating a number of correctly determined positive labels or a number of incorrectly determined positive labels, or both. The recall may be a ratio between the number of correctly determined positive labels and a total number of positive known labels. The precision may be a ratio between the number of correctly determined positive labels, and a total number of correctly or incorrectly determined positive labels.
The probability in step (b) may be estimated by calculating the probability of observing a predetermined number of incorrectly determined positive labels given the number of correctly determined positive labels. In this case, the probability of observing the predetermined number of incorrectly determined positive labels may be the maximum probability over a range of possible predetermined numbers of incorrectly determined positive labels given the number of correctly determined positive labels. Step (b) may further comprise improving the estimated probability using approximation error correction.
Step (a) may further comprise determining whether each determined positive label is correct or incorrect based on the corresponding known label and a decision threshold. In this case, step (a) may further comprise ranking the labels determined by the classifier according to their value, and determining the decision threshold based on the ranked labels.
In this example, the decision threshold is not a predetermined number, but rather a threshold that is calculated based on the ranked labels. For example, the decision threshold may be set to control precision and recall. Increasing the decision threshold may result in fewer labels meeting the threshold, which usually increases the precision but decreases the recall. Conversely, decreasing the decision threshold generally decreases precision but increases recall. Advantageously, this allows evaluation of a large set of results (determined labels).
The method may further comprise determining an area under a curve of the estimated probability in step (b) against recall. In this case, the area, also referred to as the area under a calibrated precision-recall curve, provides a measure of overall performance that is independent of any particular decision threshold. If a uniform distribution is assumed on the feature space, the area represents the expected value of the random variable calibrated precision on the space of positive labels.
The method may further comprise maximising the estimated probability in step (b) with respect to recall. In this case, the maximised probability, also referred to as the maximum calibrated precision, represents the maximal rise of the calibrated precision-recall curve. Since the maximised probability is a single number, it facilitates easy comparison of classifier performance.
The classifier may be a support vector machine classifier.
In one example, the labels may each be determined by the classifier in step (a) for a first segment of a first biological sequence of a first species.
In this case, the estimated probability may be used to evaluate the performance of classifiers used for genome wide analysis. Compared with receiver operating characteristics (ROC) curves, area under ROC and enrichment scores, the metric is more suitable for genome wide analysis because it is able to discriminate efficiently between performance of classifiers in the regions of high precision settings and when datasets have highly imbalanced sizes of elements in two label classes, such as 1:10,000.
The classifier may be trained for annotation of second segments of a second biological sequence of a second species that is different to, or a variant of, the first species. In this case, the determined label in step (a) may be calculated by the classifier based on an estimated relationship between the second segments and known labels of the second segments. Advantageously, the method facilitates translation of problems and solutions from one species to another, generalising beyond the apparent scope of the initial annotation. For example, the method allows a classifier trained on a mouse dataset to be used for annotation of human biological sequences.
The first or second biological sequence may be a genome and the first or second segments are genome segments. In this case, the label of each segment may represent whether the segment is a transcription start site (TSS).
Alternatively, the first or second biological sequence may be an RNA sequence and the first or second segments are RNA segments.
In both cases of genome and RNA segments, the label of each segment may represent one of the following:
According to a second aspect, there is provided a computer program to implement the method according to the first aspect. The computer program may be embodied in a computer-readable medium such that, when the code of the computer program is executed, it causes a computer system to implement the method.
According to a third aspect, there is provided a computer system for evaluating performance of a classifier, the computer system comprising a processing unit operable to: (a) compare labels determined by the classifier with corresponding known labels; and (b) based on the comparison, estimate a probability of observing an equal or better precision at a given recall with random ordering of the labels determined by the classifier.
Non-limiting example(s) of the method and system will now be described with reference to the accompanying drawings, in which:
a) is a plot of receiver operating characteristic (ROC) curves.
b) is a plot of precision-recall curves (PRC).
c) is a plot of precision-enrichment-recall curves (PERC).
d) is a plot of enrichment-score-recall curves.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated-precision-recall curves (CPRC),
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision-enrichment-recall curves (PERC), and
e) is a plot of enrichment-score-recall curves.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated precision (normal log scale) against recall (CPRC),
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision against number of top hits, and
e) is a plot of calibrated precision (double log scale) against number of top hits.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated precision (normal log scale) against recall (CPRC),
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision against number of top hits, and
e) is a plot of calibrated precision (double log scale) against number of top hits.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated precision (normal log scale) against recall (CPRC),
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision against number of top hits, and
e) is a plot of calibrated precision (double log scale) against number of top hits.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated precision (normal log scale) against recall (CPRC),
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision against the number of top hits, and
e) is a plot of calibrated precision (double log scale) against number of top hits.
a) is a plot of precision against number of top hits, and
b) is a plot of calibrated precision (double log scale) against number of top hits.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated precision (normal log scale) against recall,
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision against the number of top hits, and
e) is a plot of calibrated precision (double log scale) against number of top hits.
a) is a plot of precision-recall curves (PRC),
b) is a plot of calibrated precision (normal log scale) against recall,
c) is a plot of receiver operating characteristic (ROC) curves,
d) is a plot of precision against the number of top hits, and
e) is a plot of calibrated precision (double log scale) against number of top hits.
Referring first to
A local computing device 140 controlled by a user (not shown for simplicity) can be used to operate the processing unit 110. The local computing device 140 is capable of receiving input data from a data entry device 144, and displaying output data using a display screen 142. Alternatively or in addition, the method can be offered as a web-based tool accessible by remote computing devices 150, 160 each having a display screen 152, 162 and data entry device 154, 164. In this case, the remote computing devices 150, 160 are capable of exchanging data with the processing unit 110 via a wide area communications network 130 such as the Internet and where applicable, a wireless communications network comprising a wireless base station 132.
Referring now to
$\vec{s} \in \{a, g, t, c\}^n,$
where n is the length of the sequence and each nucleotide in the sequence is either adenine (a), guanine (g), thymine (t) or cytosine (c). In this example, the biological sequence $\vec{s}$ relates to a “genome”, a term which is understood in the art to represent hereditary information present in an organism.
The sequence may be retrieved from the local 120 or remote 170 data store, or received from a local computing device 140 or a remote computing device 150, 160 via the communications network 130. In this case, the remote data store 170 may be a genetic sequence database such as GenBank with an annotated collection of DNA nucleotide sequences.
The processing unit 110 then divides the sequence $\vec{s}$ into multiple potentially overlapping segments or tiles; see step 210. Each segment $\vec{x}_i$ comprises some of the nucleotides in the sequence $\vec{s}$:
$\vec{x}_i \in \{a, c, g, t\}^w,$
where $\vec{x}_i$ is the ith segment, w < n is the window size or length of the segment and each nucleotide in the sequence is either adenine (a), guanine (g), thymine (t) or cytosine (c).
Overlapping segments $\vec{x}_i$ of window size w = 500 bp (base pairs), shifted every 250 bp, may be used; see Examples 1 and 2 and
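By way of illustration only, the tiling of step 210 may be sketched in Python as follows; the function name and the lowercase a/c/g/t alphabet are illustrative assumptions rather than part of the method:

    def tile_sequence(s, window=500, shift=250):
        """Divide a nucleotide sequence s into overlapping segments (tiles).

        The window size and shift follow the w = 500 bp / 250 bp setting of
        Examples 1 and 2; both are parameters here.
        """
        return [s[i:i + window] for i in range(0, len(s) - window + 1, shift)]

    # Example: a toy sequence of 1,252 bases yields four overlapping tiles.
    tiles = tile_sequence("acgt" * 313, window=500, shift=250)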
Each segment $\vec{x}_i$ is associated with a known binary label $y_i = \pm 1$. The binary label $y_i$ represents a known outcome or classification of the segment $\vec{x}_i$. The $(\vec{x}_i, y_i)$ pairs form a training dataset for training the classifier 112 that labels each segment $\vec{x}_i$ into one of two classes: +1 or −1. Although two classes are considered here, it should be appreciated that there may be more than two classes of known labels in other applications.
Depending on the application, the label $y_i$ may represent whether the segment $\vec{x}_i$ is a transcription start site (TSS), to predict the location of genes which encode proteins in a genome segment.
In other applications, the label $y_i$ may represent one of the following:
The volume of datasets available for the whole genome analysis is generally very large. For example in Table 1, the Pol II and RefGene datasets contain 10.72 M different segments each, while the RefGeneEx dataset contains only 0.96 M segments. Training using the whole dataset is therefore a resource-intensive and expensive exercise.
To cope with the large volume of data, the processing unit 110 then forms a training set using only a subset of the segments $\vec{x}_i$; see step 215. In the example above, in the case of training on the reduced dataset RefGeneEx of 0.96M segments, only 13K segments were actually used during training. Testing, as will be explained further below, is performed on the whole datasets available, including Pol II and RefGene with 11M segments each.
The processing unit 110 then extracts one or more features from each segment $\vec{x}_i$ in the training set; see step 220 in
For some classification tasks that are not strand specific, the frequencies for forward and reverse complement pairs are summed together. For modeling strand specific phenomena, the compression of forward and reverse complements can be omitted. If k=4 is used for classification and learning, and for notational convenience a constant feature of value 1 is added, the feature vector:
$\vec{\varphi}(\vec{x}_i) \in \mathbb{R}^{137}$
maps each segment $\vec{x}_i$ into a 137-dimensional feature space.
In the following examples, k=4 is chosen based on some initial experimentation with different values of k in (Bedo et al., 2009). However, it should be understood that other values of k may be more suitable for different applications. It should also be understood that, additionally or alternatively, other types of features may be used, such as a position weight matrix (PWM) score histogram of the segment; empirical data or estimation of the binding affinity of a transcription factor in the segment; a non-linear transformation of a set or a subset of features; and occurrence of a base pair such as c-g in the segment.
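By way of illustration only, the k-mer frequency features for k=4 with forward/reverse-complement compression may be sketched as follows. Collapsing the 4^4=256 possible 4-mers into canonical forward/reverse-complement pairs yields 136 features (16 4-mers are their own reverse complement), and appending the constant feature of value 1 gives the 137 dimensions noted above. The function names and the lowercase alphabet are illustrative assumptions:

    from itertools import product
    import numpy as np

    COMPLEMENT = str.maketrans("acgt", "tgca")

    def revcomp(kmer):
        # Reverse complement of a k-mer over the alphabet {a, c, g, t}.
        return kmer.translate(COMPLEMENT)[::-1]

    def kmer_index(k=4):
        # Map every k-mer to the index of the canonical (lexicographically
        # smaller) member of its forward/reverse-complement pair.
        index = {}
        for kmer in ("".join(p) for p in product("acgt", repeat=k)):
            index.setdefault(min(kmer, revcomp(kmer)), len(index))
        return index  # 136 entries for k = 4

    def features(segment, k=4):
        # k-mer frequency vector with a constant feature of value 1 appended,
        # giving 137 dimensions for k = 4.
        index = kmer_index(k)
        phi = np.zeros(len(index) + 1)
        for i in range(len(segment) - k + 1):
            kmer = segment[i:i + k]
            phi[index[min(kmer, revcomp(kmer))]] += 1
        phi /= max(1, len(segment) - k + 1)  # frequencies rather than counts
        phi[-1] = 1.0  # constant feature
        return phi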
As shown in step 220 in
In this example, the classifier 112 is in the form of a support vector machine (SVM) and the relationship is represented using a set of weights in a weight vector $\vec{\beta}$. The classifier 112 is defined by a linear prediction function:
$f(\vec{x}_i) := \langle \vec{\varphi}(\vec{x}_i), \vec{\beta} \rangle,$
where $\vec{x}_i$ is the ith segment, $\vec{\varphi}(\vec{x}_i)$ is a feature vector and $\vec{\beta} \in \mathbb{R}^{137}$ is a weight or coefficient vector with weights corresponding to each feature in the feature vector. The classifier 112 is also associated with an objective function $\Xi(\vec{\beta})$, which the processing unit 110 minimises to compute the weight vector $\vec{\beta} = [\beta_i]$:
$\Xi(\vec{\beta}) := \sum_i [[1 - y_i \langle \vec{\varphi}(\vec{x}_i), \vec{\beta} \rangle \ge 0]]\,(1 - y_i \langle \vec{\varphi}(\vec{x}_i), \vec{\beta} \rangle)^2 + \lambda \lVert \vec{\beta} \rVert^2,$
where λ is the regularisation hyperparameter.
Let X denote a matrix whose ith row is the sample $\vec{\varphi}(\vec{x}_i)$ in feature space and let $\vec{y}$ denote the vector $[y_i]$; then Ξ can be written in matrix form as
$\Xi(\vec{\beta}) = (X\vec{\beta} - \vec{y})^T I (X\vec{\beta} - \vec{y}) + \lambda \lVert \vec{\beta} \rVert^2,$
where $I := I(\vec{\beta})$ is a diagonal matrix with entries:
$I_{ii} = [[1 - y_i \langle \vec{\varphi}(\vec{x}_i), \vec{\beta} \rangle \ge 0]]$
and [[•]] denotes the Iverson bracket (indicator function).
Minimisation of the objective function $\Xi(\vec{\beta})$ can be done for small k in the primal domain. This comprises iterating the weights:
$\vec{\beta}_{t+1} \leftarrow (X^T I_t X + \Lambda)^{-1} X^T I_t \vec{y},$
where Λ is a diagonal matrix with entries $\Lambda_{ii} := \lambda$. This is a variant of the well-known ridge-regression solution (Hastie et al., 2001) with the additional $I_t := I(\vec{\beta}_t)$ matrix. It effectively implements a descent along the subgradient of Ξ. For large k, Ξ can still be minimised using a large-scale SVM learning algorithm such as the Pegasos algorithm (Shalev-Shwartz et al., 2007).
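A minimal sketch of this primal iteration, assuming a dense feature matrix X whose rows are $\vec{\varphi}(\vec{x}_i)$ and a label vector y over {−1, +1}; the function name, iteration cap and convergence test are illustrative:

    import numpy as np

    def train_rsv_weights(X, y, lam=1.0, iters=50):
        # Minimise the squared-hinge SVM objective in the primal by iterating
        # beta <- (X^T I_t X + Lambda)^(-1) X^T I_t y, where the diagonal 0/1
        # matrix I_t selects the examples currently violating the margin.
        n, d = X.shape
        Lam = lam * np.eye(d)
        beta = np.zeros(d)
        for _ in range(iters):
            active = (1.0 - y * (X @ beta)) >= 0.0  # Iverson bracket I_ii
            Xa, ya = X[active], y[active]
            beta_new = np.linalg.solve(Xa.T @ Xa + Lam, Xa.T @ ya)
            if np.allclose(beta_new, beta):
                break
            beta = beta_new
        return beta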
To reduce the number of features used in the model, the processing unit 110 uses a recursive support vector (RSV) method where the SVM is combined with recursive feature elimination (Guyon et al., 2002). Referring also to
To accelerate the process, 10% of the worst features were discarded while the model size was above 100 features, and features were discarded individually when below. To optimise the model size and regularisation parameter λ, 3-fold cross-validation on the training data (Hastie et al., 2001) was used with a grid search for λ, and the model with the greatest average area under the precision-recall curve (auPRC) was chosen.
This process is then repeated recursively until a classifier 112 with a desired number of features is obtained; see steps 320 and 325. The trained classifier 112 will be referred to as an “RSV classifier” or trained model in the rest of the specification. However, it will be appreciated that the classifier 112 does not have to be an SVM or RSV classifier and any other suitable classifier can be used.
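A sketch of the recursive elimination loop described above, reusing train_rsv_weights from the previous sketch; the target model size is an illustrative parameter, and the cross-validated grid search for λ is omitted for brevity:

    import numpy as np

    def recursive_feature_elimination(X, y, lam=1.0, target_features=10):
        # Train, drop the features with the smallest |beta| (10% at a time
        # above 100 features, one at a time below), and retrain recursively.
        surviving = np.arange(X.shape[1])
        while len(surviving) > target_features:
            beta = train_rsv_weights(X[:, surviving], y, lam)
            order = np.argsort(np.abs(beta))  # weakest features first
            n_drop = len(surviving) // 10 if len(surviving) > 100 else 1
            n_drop = max(1, min(n_drop, len(surviving) - target_features))
            surviving = np.delete(surviving, order[:n_drop])
        return surviving, train_rsv_weights(X[:, surviving], y, lam)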
As examples of other suitable classifiers, the Naive Bayes (NB) algorithm and the Centroid algorithm (Bedo, J., Sanderson, C. and Kowalczyk, A., 2006) may be used. Unlike the RSV classifier, these algorithms do not require any iterative procedure to create their predictive models. Accordingly, their development is rapid and, in the current setting where the number of training examples significantly exceeds the number of features, they may be robust alternatives to the RSV classifier. The NB algorithm assumes that all measurements represent independent variables and estimates the probability of the class given the evidence using the product of frequencies of variables in classes in the training data. The centroid classifier, on the other hand, builds a 2-class discrimination model by weighting each variable by the difference of its means in the two classes (or phenotypes).
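For illustration, the centroid model described above reduces to a one-line weight computation under the same feature-matrix convention as the sketches above:

    import numpy as np

    def centroid_weights(X, y):
        # Weight each feature by the difference of its means in the two
        # classes; scores are then f(x) = <phi(x), beta>, as for the SVM.
        return X[y == +1].mean(axis=0) - X[y == -1].mean(axis=0)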
The processing unit 110 then applies the trained classifier 112 to annotate or determine a label for segments that are not in the training set; see step 230 in
More specifically, the trained classifier 112 can be applied on the following:
It should be understood that, in an evolutionary context, the second species (of the training set) may be a different species from the first species (of the testing set). The first species may be human and the second species non-human, or vice versa. For example, a classifier trained using mouse tags can be tested using human tags to assess its performance on the latter. This allows translation of problems and solutions from one organism to another and better use of model organisms for research or treatment of human conditions.
In a micro-evolutionary context, the second species may be a variant of the first species. For example, the first species is a healthy cell of an organism, and the second species may be an unhealthy or cancer cell that has diverged from its original germline sequence present in the first species, and is thus a variant of the first species. In this case, the divergence may exceed an acceptable threshold that would otherwise classify the second species as the same as the first. The first species may also be a diseased tissue sample of a first patient, and the second species a diseased tissue sample of a second patient who is distinct from the first patient in clinical presentation.
The processing unit 110 is operable to evaluate the performance of the classifier 112; see step 240 in
Consider a predictive model (hypothesis) $f: \mathcal{X} \to \mathbb{R}$. As the decision threshold $\theta \in \mathbb{R}$ is varied, we denote:
$n_r^+ = n_r^+(\theta) := |\{\vec{x}_i \mid f(\vec{x}_i) \ge \theta \;\&\; y_i = +1\}|, \quad (1)$
$n_r^- = n_r^-(\theta) := |\{\vec{x}_i \mid f(\vec{x}_i) \ge \theta \;\&\; y_i = -1\}|, \quad (2)$
where $n_r^+$ is the number of true positive labels and $n_r^-$ is the number of false positive (i.e. negative) labels or examples recalled with scores not less than the threshold θ. In other words, true positive labels are positive labels that are correctly determined by the classifier 112 and have a corresponding known positive label. Also, false positive labels are positive labels that are incorrectly determined by the classifier 112 and have a corresponding known negative label.
Referring also to the flowchart in
The performance of the classifier 112 can then be evaluated by calculating the following metrics: calibrated precision in step 410, area under a calibrated precision-recall curve (auCPRC) in step 415 and maximum calibrated precision in step 420 in
The recall metric ρ(θ) is defined as the sensitivity or true positive rate (TPR):
$\rho(\theta) = \mathrm{sen}(\theta) = \mathrm{TPR}(\theta) := n_r^+ / n^+, \quad (3)$
where $n_r^+$ is the number of true positive examples and $n^+$ is the total number of positive examples. Recall ρ(θ) provides a measure of completeness as a ratio between the number of true positive examples “recalled” and the total number of examples that are actually positive. In other words, recall is a ratio between the number of correctly determined positive labels ($n_r^+$) and the total number of positive known labels ($n^+$).
The precision metric p(θ) is defined as:
$p(\theta) := n_r^+ / n_r, \quad (4)$
where $n_r^+$ is the number of true positive examples and $n_r := n_r^+ + n_r^-$ is the total number of true positive and negative examples. Precision p generally provides a measure of exactness. In other words, precision is a ratio between the number of correctly determined positive labels ($n_r^+$) and the total number of (correctly or incorrectly) determined positive labels ($n_r := n_r^+ + n_r^-$).
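For illustration, the quantities of Eqs. (1) to (4) can be computed at a given decision threshold as follows (a sketch; the function name is illustrative):

    import numpy as np

    def precision_recall_at(scores, labels, theta):
        # scores: f(x_i) for each test segment; labels: known y_i in {-1, +1}.
        recalled = scores >= theta
        nr_pos = int(np.sum(recalled & (labels == +1)))  # n_r^+, Eq. (1)
        nr_neg = int(np.sum(recalled & (labels == -1)))  # n_r^-, Eq. (2)
        n_pos = int(np.sum(labels == +1))                # n^+
        recall = nr_pos / n_pos                          # Eq. (3)
        precision = nr_pos / max(1, nr_pos + nr_neg)     # Eq. (4)
        return precision, recall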
The area under PRC (auPRC) is the area under a plot of precision p(θ) versus recall ρ(θ). The plot is known as the precision-recall curve (PR curve or PRC) and auPRC is used as a general measure of the performance across all thresholds in Sonnenburg et al., 2006 and Abeel et al., 2009.
The receiver operating characteristic (ROC) curve is the plot of the specificity versus the recall (or sensitivity or true positive rate). Specificity spec(θ) is defined as:
$\mathrm{spec}(\theta) = 1 - \mathrm{FPR}(\theta) := 1 - n_r^- / n^-, \quad (5)$
where FPR(θ) is the false positive rate obtained by dividing the total number of negative recalled examples $n_r^-$ by the total number of negative examples $n^-$.
The area under ROC (auROC) is the area under an ROC curve, that is, a plot of specificity spec(θ) versus recall ρ(θ); see
Both metrics, auROC and auPRC, are used in machine learning as almost equivalent concepts (see the comment by Abeel et al. (2009) in section 2.4), though in the area of information retrieval the PRC is preferred. However, in the context of whole genome analysis they can provide dramatically different results, with the PRC and auPRC being the metrics of choice, as the ROC and auROC are generally unreliable and possibly completely uninformative. This corroborates the observations in Sonnenburg et al. (2006); however, in their case they still choose to optimise auROC during model training while we optimise auPRC.
The PRC and ROC curve are typically used for comparing performance of classifiers on a fixed benchmark. However, when one evaluates a ChIP-Seq experiment, such as the Pol-II benchmark, there is no other classifier or dataset to compare performance against. Thus, a form of “calibration” is needed to evaluate the classifier performance in isolation. Consider two test datasets with radically different prior probability of positive examples:
If a uniformly random classifier is used, its expected precision at any recall level will be 5% in case A and 95% in case B.
Now, consider two non-random classifiers: $f_A$ with precision p=10% on set A and $f_B$ with precision p=99% on set B, both at recall ρ=20%. The question of which of them performs better is not straightforward to resolve. On one hand, the classifier $f_A$ performs two times better than random guessing, while $f_B$ performs only 1.04 times better than random guessing. Thus, in terms of the ratio to the expected performance of a random classifier, $f_A$ performs far better than $f_B$. However, in case A the perfect classifier is capable of 100% precision, that is, 10 times better than random guessing and 5 times better than $f_A$. In case B, the perfect classifier is capable of only 1.05 times better than random guessing. This is approximately what $f_B$ is capable of, so $f_B$ now seems stronger than $f_A$!
To resolve this problem, rather than analysing ratios, we can ask a different question: what is the probability of observing an equal or better precision at a given recall with random ordering of the data? The smaller such a probability, the better the performance of the classifier; hence it is convenient to consider $-\log_{10}$ of those probabilities. We call this metric the calibrated precision (CP), where better classifiers will result in higher values of CP. The plot of CP as a function of recall is referred to as the calibrated precision-recall curve (CPRC).
Calibrated precision CP(p, ρ) is defined as follows:
$\mathrm{CP}(p, \rho) := -\log_{10} \mathbb{P}_{n_r^+}[\mathrm{prec} \ge p], \quad (6)$
where precision $p = n_r^+/(n_r^+ + n_r^-)$ and recall $\rho = n_r^+/n^+$. This is $-\log_{10}$ of the probability that, for a uniform random ordering of the test examples, the $n_r^+$th positive example is preceded by $\le n_r^-$ negative examples. Calibrated precision may also be interpreted as a negative logarithmic transformation of the probability of observing an equal or better precision at a given recall with random ordering of the labels determined by the classifier 112.
As it is more convenient to convert the CP curve into a single number for easy comparison, the maximum calibrated precision is defined as:
$\max(\mathrm{CP}) := \max_\rho \mathrm{CP}(p(\rho), \rho).$
To derive Eq. (6), the significance of an observed precision p(ρ) for a given recall ρ is compared with $p_{\mathrm{NULL}}(\rho)$, which is the precision for random sampling of the mixture of $n^+$ positive and $n^-$ negative examples without replacement, until $n_r^+ \ge n^+\rho$ successes (positive labels) are drawn. The latter random variable has a hypergeometric distribution, although in a slightly non-standard form, as it is usually given for drawing a fixed number of $n_r$ samples.
The scores allocated by a predictive model sort the test set of $n = n^+ + n^-$ elements in a particular sequence. There are n! possible such sequences altogether, of which
have exactly the same composition of $n_r^+$ positive and $n_r^-$ negative elements amongst the top $n_r$ samples, assuming the $n_r$th sample is fixed and has a positive label. The product of the first three factors above is the number of different $(n_r - 1)$-sequences with the required positive/negative split, the fourth is the number of choices of the $n_r$th element (out of $n^+$ elements) and the fifth factor is the number of arrangements for the remaining $n - n_r$ elements.
Dividing the above number by the total n! of permutations of n elements gives the following expression for the probability
$\mathbb{P}_{n_r^+}[N_r^- = n_r^-] = f(n_r^-),$
where, following the usual convention, $N_r^-$ denotes the random variable with instantiation $n_r^-$ and, for $x = 0, 1, \ldots, n^-$,
For the observed recall $\rho = n_r^+/n^+$ (see Definition 1), the probability of observing the precision $p := n_r^+/n_r$ (see Definition 2) or higher is:
$\mathrm{Pval} := \mathbb{P}_{n_r^+}[\mathrm{prec} \ge p] = \sum_{x=0}^{n_r^-} f(x),$
where prec is an observed precision. Note that Pval is precisely the p-value of interest to us, leading to the formula for the calibrated precision (CP) in Eq. (6) as follows:
as the total number of choices n will not exceed the size of the genome, and hence is $\le 10^{10}$ in the cases of the human or mouse genomes. Evaluation of $-\log_{10} f(x_\bullet) - \varepsilon$ avoids the computation of the sum in Eq. (6), which can have millions of terms. The approximation error ε is negligible in practical situations encountered in this research, where CP is of the order of tens of thousands; hence practically $\varepsilon/\mathrm{CP} \le 0.1\%$.
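A minimal sketch of the calibrated precision computation: the event that the $n_r^+$th positive example is preceded by at most $n_r^-$ negatives is the event that the top $n_r = n_r^+ + n_r^-$ ranked examples contain at least $n_r^+$ positives, i.e. an upper tail of a standard hypergeometric distribution. All work is done in log10 space via scipy.special.gammaln to avoid the under/over-flows discussed below; the full tail sum is used here rather than the largest-term approximation:

    import numpy as np
    from scipy.special import gammaln

    def log10_binom(a, b):
        # log10 of the binomial coefficient C(a, b).
        return (gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)) / np.log(10)

    def calibrated_precision(nr_pos, nr_neg, n_pos, n_total):
        # CP = -log10 P[equal or better precision at this recall | random order].
        nr = nr_pos + nr_neg
        ks = np.arange(nr_pos, min(nr, n_pos) + 1)
        log10_terms = (log10_binom(n_pos, ks)
                       + log10_binom(n_total - n_pos, nr - ks)
                       - log10_binom(n_total, nr))
        m = log10_terms.max()  # log-sum-exp in base 10
        return -(m + np.log10(np.sum(10.0 ** (log10_terms - m))))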
The maximum
can be computed as follows. Consider the more general problem of finding
Consider the inequality
This inequality is equivalent to the bound
This implies that φ(x)≧1 for
hence,
and
$x_\bullet = \min(n_r^-, x'_\bullet).$
A more constructive form can then be obtained:
As already mentioned, this approximation is generally very accurate in practice, with the relative error between 0 and −0.1%. In one implementation, the binomial coefficient $\binom{n}{x}$, or “n choose x”, can be approximated using Stirling's approximation, where $\log n! \approx n \log n - n + 0.5 \log 2\pi n$.
However, if necessary, a more precise approximation as follows can be used:
The sums above could contain tens of thousands or even millions of positive terms $\le 1$, each of which can easily be evaluated recurrently. Those terms are monotonically decreasing, so the summation can be terminated once a term's value is sufficiently low. For instance, if the summation is stopped when a term has a value below
then we know that the resulting approximation will have an error between 0 and −δ.
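For illustration, the early-terminated recurrent summation described above may be sketched as follows, reusing log10_binom from the previous sketch; the relative stopping criterion δ is an assumption, as the exact threshold is not reproduced here, and for genome-scale significance levels the running sum would be kept in log space throughout:

    def tail_pval(nr_pos, nr_neg, n_pos, n_total, delta=1e-12):
        # Sum the hypergeometric tail term by term using the ratio of
        # consecutive terms, stopping once a term becomes negligible.
        nr, n_neg = nr_pos + nr_neg, n_total - n_pos
        term = 10.0 ** (log10_binom(n_pos, nr_pos)          # P[K = nr_pos]
                        + log10_binom(n_neg, nr - nr_pos)
                        - log10_binom(n_total, nr))
        total, k = term, nr_pos
        while k < min(nr, n_pos):
            # Ratio P[K = k + 1] / P[K = k] of consecutive terms.
            term *= (n_pos - k) * (nr - k) / ((k + 1) * (n_neg - nr + k + 1))
            total += term
            k += 1
            if term < delta * total:  # terms are monotonically decreasing here
                break
        return total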
In one implementation, the numerical computation of the sums and products above requires care, as the numbers involved are large in practice, e.g. $n_r^+ \sim 10^5$ and $n \sim 10^7$; hence a naive direct implementation might cause numerical under/over-flows. Indeed, the most significant Pval computed in
The above p-values, Pval, are used in three different ways. Firstly, they are used for stratification of the precision-recall planes in
Secondly, the calibrated precision-recall curve is computed as
$\mathrm{CP}(p(\rho), \rho) := -\log_{10} \mathbb{P}_{\rho n^+}[\mathrm{prec} \ge p(\rho)],$
where the right-hand side is computed using the Pval function defined above and assuming that the product $\rho n^+$ is rounded to the nearest integer. Thirdly, for each curve we also compute the area under it and list the areas in Table 3 below under the heading auCPRC. The area can be viewed as a measure of overall performance that is independent of any particular decision threshold.
Eq. (6) depends on the values of $n^+$ and $n^-$; thus different results are expected for different values of those numbers even if their ratio is preserved. Indeed, if it is assumed that $n = n^+ + n^- = 10^3$, then the respective values of the calibrated precision are $\mathrm{CP}_A = 3.74$ and $\mathrm{CP}_B = 4.85$. For $n = 10^6$, the results are $\mathrm{CP}_A = 904.3$ and $\mathrm{CP}_B = 2069.2$. The results are what one should expect intuitively, considering that dealing with datasets with hundreds of elements is far easier than dealing with datasets with millions. More formally, in the latter case, although we have the same proportion of correct guesses as in the former case (that is, the same precision at the same recall level), the absolute number of correct guesses is proportionally higher. This is much harder to achieve by chance: by the central limit theorem of statistics, the average of a larger number of repeated samplings has a stronger tendency to converge to the mean, resulting in a variance inversely proportional to the number of trials.
Thus, for the larger datasets, the same size of deviation from the mean must result in a far smaller probability of occurrence. The above simple example vividly illustrates this principle, which is also clearly visible in the real-life test results explained further below with reference to
The area under CPRC (auCPRC) is defined as:
$\mathrm{auCPRC} := \frac{1}{n^+} \sum_{\vec{x} \in X^+} \mathrm{CP}(\vec{x}), \quad (7)$
where $n^+$ is the total number of positive examples, $\mathrm{CP}(\vec{x}) := \mathrm{CP}(p(\vec{x}), \rho(\vec{x}))$ is the calibrated precision based on the precision $p(\vec{x})$ and recall $\rho(\vec{x})$, and $X^+ := \{\vec{x}_i \mid y_i = +1\}$ is the subset of all ($n^+$) positive examples.
The area under CPRC can be interpreted as the expected value of the random variable calibrated precision $\mathrm{CP}(\vec{x})$ on the space of positively labelled test examples. More precisely, consider a predictive model $f: \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} = \{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n\} \subset \mathbb{R}^m$ is the set of all feature vectors with labels $y_1, y_2, \ldots, y_n \in \{-1, +1\}$. Let $\mathrm{rank}: \mathcal{X} \to \{1, 2, \ldots, n\}$ be a (bijective) ranking of all n test examples in agreement with the scoring function f, i.e. if $f(\vec{x}_i) > f(\vec{x}_j)$, then $\mathrm{rank}(\vec{x}_i) < \mathrm{rank}(\vec{x}_j)$. We assume here that rank is defined even in the case of draws with respect to the score f. Let $X^+ := \{\vec{x}_i \mid y_i = +1\}$ be the subset of all ($n^+$) positive examples.
For any $\vec{x} \in X^+$, the numbers of positive and negative examples are defined as:
$n^+(\vec{x}) := |\{j \mid \mathrm{rank}(\vec{x}_j) \le \mathrm{rank}(\vec{x}) \;\&\; y_j = +1\}|,$
$n^-(\vec{x}) := |\{j \mid \mathrm{rank}(\vec{x}_j) \le \mathrm{rank}(\vec{x}) \;\&\; y_j = -1\}|,$
and then:
If we assume a uniform distribution on the finite space $X^+$, then the area under CPRC can be defined using the expectation in Eq. (7) above.
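A sketch of auCPRC per Eq. (7), as the mean calibrated precision over all positive test examples; it reuses calibrated_precision from the earlier sketch, with draws with respect to the score broken arbitrarily by the sort:

    import numpy as np

    def area_under_cprc(scores, labels):
        order = np.argsort(-scores)      # rank 1 = best score first
        y = labels[order]
        n_total, n_pos = len(y), int(np.sum(y == +1))
        cum_pos = np.cumsum(y == +1)     # n^+(x) at each rank
        cum_neg = np.cumsum(y == -1)     # n^-(x) at each rank
        cp = [calibrated_precision(cum_pos[r], cum_neg[r], n_pos, n_total)
              for r in np.flatnonzero(y == +1)]
        return float(np.mean(cp))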
Whole genome scanning using NGS opens a new machine learning paradigm of learning and evaluating on extremely unbalanced datasets. Here, we are dealing with binary classification where the minority (target) class often has a size of less than 1% of the majority class. This requires careful use of evaluation metrics, in particular the PRC and ROC curves and the areas under them, auPRC and auROC.
Referring now to
Ratio 1:400 roughly corresponds to a whole genome scan for TSS, while ratio 1:4 corresponds to the classical machine learning regime. Six corresponding PRCs are shown in
Therefore, auPRC discriminates between the model of Class A, with critical high specificity, and the poorer models of Class B, while auROC does not. However, the auPRC for model A3, with reasonably balanced classes, is higher than for the NGS-type case A1, with significantly unbalanced classes, which is thus much “harder” to predict; see
From the above example, PRC analysis is, in general, more suitable than ROC analysis for the evaluation of datasets with highly unbalanced class sizes. However, the PRC is inversely dependent on the scale of the imbalance between the minority and majority classes. This is a drawback if one intends to compare results involving different class size ratios, which may arise when comparing different experiments or methods.
The source of such discrepancies can be deduced from the definition of precision as follows (see Definition 4):
If $n_r^+ \ll n_r^-$, then $p(\theta) \approx n_r^+ / n_r^-$. Thus, if the number of minority class examples is increased uniformly by a factor κ, then for the same recall threshold θ = θ(ρ) we expect κ times more positive samples and approximately the same number of negative sample recalls. Hence, the precision will increase by the factor κ. A heuristic solution to this unwelcome increase (scaling) is to take the ratio of precision to the prior probability of the minority class.
Precision enrichment pe(ρ) is defined as:
$pe(\rho) := \frac{p(\rho)}{n^+/n} = \frac{F_r^+(\rho)}{F_r(\rho)},$
where $F_r^+(\rho) := n_r^+/n^+$ and $F_r(\rho) := n_r/n$ denote the cumulative distributions of recalls of the positive examples and of the mixture, respectively. See
Note that $n^+/n$ is also the expected value of the conditional distribution of precision $[p \mid \rho] := [p \mid \mathrm{recall} = \rho]$ for a given recall 0 < ρ < 1 under a uniform random sampling of the mixture of positive and negative examples. Indeed, under this assumption a randomly selected n × ρ sample is expected to contain $n^+ \times \rho$ positive samples. Thus,
Another argument can be based on the observation that the right-hand-side fraction above is the maximum likelihood estimator of the expectation of p|ρ with the distribution characterised above. In summary, the precision enrichment has an appealing interpretation as the gain in precision with respect to the expectation of the precision for a uniform random sampling of the mixture. Alternatively, it can be interpreted as the ratio of cumulative distributions and is thus linked to gene set enrichment analysis. It accounts for the ratio $n^+/n$ but still not for the values of $n^+$ or n.
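For illustration, the precision enrichment reduces to the ratio of the observed precision to the class prior (a minimal sketch):

    def precision_enrichment(nr_pos, nr_neg, n_pos, n_total):
        # pe(rho) = p(rho) / (n^+ / n): observed precision divided by the
        # expected precision under uniform random sampling of the mixture.
        return (nr_pos / (nr_pos + nr_neg)) / (n_pos / n_total)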
The enrichment score for a given recall ρ is defined as (Subramanian et al., 2005):
$ES(\rho) := F_r^+(\rho) - F_r^-(\rho),$
where $F_r^-$ and $F_r^+$, respectively, denote the cumulative distributions of the negative class and the positive class; and the Kolmogorov-Smirnov statistic
If the negative class size is much larger than the positive one, then $F_r^- \approx F_r$ and $pe(\rho) \approx F_r^+/F_r^-$ is just the ratio rather than the difference of the two cumulative distributions. However, in the case of high class imbalance, the ES and KS-statistic are uninformative in terms of capturing performance under high precision settings. In this case,
Thus, if $n^+ \ll n^-$, then both $F_r^-$ and $F_r$ are ≈0 whenever the precision $p \gg n^+/n \approx 0$. Hence $ES(\rho) \approx \rho$ is monotonically increasing until the precision drops significantly, to the level of $p \approx n^+/n$.
For a further illustration, see
An additional issue is the determination of statistical significance, which for the KS test is accomplished via a permutation test (Subramanian et al., 2005; Ackermann and Strimmer, 2009). Such a test is a computational challenge for NGS analysis as the datasets involved are ≈2 orders of magnitude larger than in the case of microarrays. Thus, a proper permutation test should involve two orders of magnitude more permutations, each followed by independent development of the predictive model, which is clearly infeasible.
However, it is feasible to associate with values of ES the significance in terms of p-values capturing the probability of observing larger values under a uniform random sampling of the mixture, i.e. along the lines developed for CPRC in Definition 6. However, we do not develop this here because such a p-value function on the (ρ, ES) plane is “unstable.” Namely, log Pval diverges to ≈−∞ along the diagonal ES=ρ. This diagonal is practically the locus of ES values for the critical initial segment (ρ<0.5) in
In this example, the processing unit 110 evaluates the performance of the classifier 112 trained according to step 255 in
The best-of-class in-silico TSS classifier ARTS serves as a specific baseline for accuracy assessment. Compared to ARTS, the following results demonstrate that the RSV classifier 112 requires simpler training and is more universal, as it uses only a handful of features (k-mer frequencies) in a linear fashion.
In this example, different datasets are used for training and testing the classifiers, including whole genome scans, datasets similar to the benchmark tests used by Abeel et al. (2009) and Sonnenburg et al. (2006), and independent benchmark sets embedded in the software of Abeel et al. (2009).
(i) Pol II Dataset
This dataset is used as the main benchmark. The ChIP-Seq experimental data of Rozowsky et al. (2009) provides a list of 24,738 DNA ranges of putative Pol-II binding sites for the HeLa cell line. These ranges are defined by their start and end nucleotides; the lengths vary between 1 and 74,668, with a median width of 925 bp. Every 500 bp segment was given the label +1 if overlapping a range of a ChIP-Seq peak and −1 otherwise. This provides ≈160K positive and ≈11M negative labels.
(ii) RefGene Dataset
For this dataset, hg18 is used with RefGene annotations for transcribed DNA available through the UCSC Genome Browser (http://genome.ucsc.edu). It annotates ≈32K TSSs, including alternative gene transcriptions. More specifically, if a 500 bp segment overlapped the first base of the first exon, it was labelled +1; otherwise it was labelled −1. This creates $n^+$=43K positive examples and $n^-$=11M negative examples.
(iii) RefGeneEx Dataset
This is an adaptation of the previous dataset to the methodology proposed by Sonnenburg et al. (2006) and adopted by Abeel et al. (2009) in an attempt to generate more reliable negative labels. The difference is that all negative examples that do not overlap with at least one gene exon are discarded from the RefGene dataset. This gives n+=43K positive examples and n−=0.55M negative examples.
The predictions for ARTS were downloaded from a website (see http://www.fml.tuebingen.mpg.de/raetsch/suppl/arts) published by the authors of the algorithm (Sonnenburg et al., 2006). These predictions contain scores for every 50 bp segment aligned against hg17. The liftOver tool was used to shift the scores to hg18 (see http://hgdownload.cse.ucsc.edu/goldenPath/hg17/liftOver/). For the results shown in
The training datasets for RSV classifiers are summarised in Table 1. They are overlaps of the respective label sets with chromosome 22 only. In contrast, ARTS used carefully selected RefGene-annotated regions for hg16. This resulted in n+=8.5K and n−=85K examples for training, which contain roughly 2.5 to 8 times more positive examples than used to train the RSV models. Additionally, the negative examples for ARTS training were carefully chosen, while we have chosen all non-positive examples on Chromosome 22 for RSV training, believing that the statistical noise will be mitigated by the robustness of the training algorithm.
Three RSV classifiers RSVPo2, RSVRfG and RSVEx are compared against 17 dedicated promoter prediction algorithms evaluated by Abeel et al. (2009) using the software provided by the authors. This software by Abeel et al. (2009) implements four different protocols:
The results are summarised in Table 2 where the RSV classifier is compared with a subset of top performers reported by (Abeel et al., 2009, Table 2). Only one of the 17 dedicated algorithms evaluated in (Abeel et al., 2009), that is the supervised learning based ARTS, performs better than any of the three RSV classifiers in terms of overall promoter prediction program (PPP) score. The PPP score is the harmonic mean of four individual scores for tests 1A-2B introduced in (Abeel et al., 2009).
Also, only three additional algorithms out of 17 predictors evaluated by (Abeel et al., 2009) have shown performance better or equal to the RSV classifier on any individual test. The results demonstrate that, although the RSV classifier only uses raw DNA sequence and a small subset of the whole genome for RSV training, better or comparable results can be achieved. This is unexpected because the dedicated algorithms in (Abeel et al., 2009) use a lot of special information other than local raw DNA sequence and are developed using carefully selected positive and negative examples covering the whole genome.
Referring to
The PRC curves on each subplot are very close to each other, meaning that RSVPo2, RSVRfG, RSVEx and ARTS show very similar performance on all benchmarks despite being trained on different datasets. However, there are significant differences in those curves across different test datasets, with the curves for subplot C in
The background shading shows calibrated precision CP(p, ρ) values, with the values in
It is observed that curves in
Note also that the most significant loci are different from the loci with the highest precision. In terms of
To further quantify the impact of the test data (that is, the differences between genome-wide analysis and restriction to the exonic regions), the different benchmark sets and the three metrics PRC, ROC and CPRC are plotted in
In
Some of those differences are also captured numerically in Table 3, where metrics auPRC, auCPRC and auROC denote areas under the PRC, CPRC and ROC curves in
The most significant values are shown in boldface. The performance of RSV and ARTS is remarkably close, with ARTS slightly prevailing on the smallest test set, RefGeneEx, which is the closest to the set used for ARTS training, while the RSV classifiers are better on the two genome-wide benchmarks. However, those differences are minor; the most striking observation is that all those classifiers perform so well in spite of all the differences in their development. This should be viewed as a success of supervised learning, which could robustly capture information hidden in the data (in a tiny fraction, 1/60th of the genome, in the case of RSV).
It is observed that max(CP) is achieved by RSVPo2 for precision p=25% and recall ρ=38% of the $n^+$=160K positive samples. This corresponds to compressing $n_r^+$=61K of the target patterns into the top-scored $n_r$=23.4K samples out of n=10.7M. In comparison, the top CP results for ARTS on RefGeneEx data resulted in compression of $n_r^+$=25.3K positive samples into the top $n_r$=47K out of a total of n=0.59M. Note that in the test on the RefGene dataset, the results are more impressive than for RefGeneEx. In this case, roughly the same number of positive samples, $n_r^+$=23.4K, was pushed into the top $n_r$=123K out of n=10.6M, that is, out of a dataset ≈20 times larger.
Note that
Based on the above, it is demonstrated that the lack of information from empirical ChIP-Seq data, such as the directionality of the strands, does not prevent the development of accurate classifiers on par with dedicated tools such as ARTS. The classifiers in the RSV method are created by a generic algorithm and not by a TSS-prediction-tuned procedure with customised problem-specific input features.
Compared with one or more embodiments of the method, ARTS is too specialised and overly complex. ARTS uses five different sophisticated kernels, i.e., custom developed techniques for feature extraction from the DNA neighbourhood of ±1000 bp around the site of interest. This includes two spectrum kernels comparing the k-mer composition of DNA upstream (the promoter region) and downstream of the TSS, the complicated weighted degree kernel to evaluate the neighbouring DNA composition, and two kernels capturing the spatial DNA configuration (twisting angles and stacking energies). Disadvantageously, ARTS is very costly to train and run: it takes ≈350 CPU hours (Sonnenburg et al., 2006) to train and scan the whole genome. Furthermore, for training, the labels are very carefully chosen and cross-checked in order to avoid misleading clues (Sonnenburg et al., 2006).
By contrast, the RSV method according to
The performance of the exemplary RSV method is surprising and one may hypothesise about the reasons:
One curious point of note is the sharp decline in precision that can be observed as recall ρ→0 in
One of the most intriguing outcomes is the very good performance of the RSVPo2 classifier in the tests on the RefGene and RefGeneEx datasets and also on the benchmark of Abeel et al. (2009). After all, the RSVPo2 classifier was trained on data derived from broad ChIP-Seq peak ranges on chromosome 22 only. This ChIP-Seq data (Rozowsky et al., 2009) was derived from HeLa S3 cells (an immortalised cervical cancer-derived cell line), which differ from normal human cells. Those peaks should cover most of the TSS regions but, presumably, are also subject to other confounding phenomena (e.g., Pol-II stalling sites (Gilchrist et al., 2008)). In spite of such confounding information, the training algorithm was capable of creating models distilling the positions of the carefully curated and reasonably localised TSS sites in RefGene.
As a proof of feasibility, it has been shown that the generic supervised learning method (RSV) is capable of learning and generalising from small subsets of the genome (chromosome 22). It has also been shown that the RSV method successfully competes with, and often outperforms, the baseline established by the in-silico ARTS TSS classifier on several datasets, including a recent Pol-II ENCODE ChIP-Seq dataset (Rozowsky et al., 2009). Moreover, using the benchmark protocols of Abeel et al. (2009), it has been shown that the RSV classifier outperforms 16 other dedicated algorithms for TSS prediction.
For analysis and performance evaluation of the highly class-imbalanced data typically encountered in genome-wide analysis, plain (PRC) and calibrated (CPRC) precision-recall curves can be used. Each can be converted to a single number summarising overall performance by computing the area under the curve. The popular ROC curves, the area under ROC (auROC), enrichment scores (ES) and KS-statistics are generally uninformative for whole genome analysis, as they are unable to discriminate between classifiers under the critical high precision settings.
It will be appreciated that, unlike a method tailored for a specific application, a generic supervised learning algorithm is more flexible and adaptable, and thereby more suitable for generic annotation extension and self-validation of ChIP-Seq datasets. It will also be appreciated that the idea of self-validation and the developed metrics can be applied to any learning method apart from RSV, provided it is able to capture generic relationships between the sequence and the phenomenon of interest.
In this experiment, five different genome-wide annotation datasets, described as follows and summarised in Table 4, are used. The first part of Table 4 shows the number of positively marked segments and the total number of segments for the training sets of human (chromosome 22) and mouse (chromosome 18). The second part shows the corresponding numbers for the whole genome and their ratio.
(i) Pol-II (pol2H)
This is used as the main benchmark and is the same as the Pol II dataset of Example 1. Recent ChIP-Seq experimental data of Rozowsky et al. (2009) provides a list of 24,738 DNA ranges of putative Pol-II binding sites for the HeLa cell line. These ranges are defined by their start and end nucleotides; the lengths vary between 1 and 74,668, with a median width of 925 bp. Every 500 bp segment was given the label +1 if overlapping a range of a ChIP-Seq peak and −1 otherwise. This provided 160K positive and ≈11M negative labels.
(ii) RefGene Human (rfgH)
For this dataset, the same as the RefGene dataset of Example 1, we have used hg18 with RefGene annotations for transcribed DNA available through the UCSC browser. It annotates ≈32K transcription start sites for genes, including alternative gene transcriptions. More specifically, if a 500 bp segment overlapped the first base of the first exon, it was labelled +1, and −1 otherwise. This created $n^+$=43K positive and $n^-$=11M negative examples.
(iii) CAGE Human (cagH)
The CAGE tags were extracted from the GFF files which are available through the FANTOM 4 project (Kawaji et al., 2009) website. A segment which contains at least one tag with a score higher than zero was labelled +1, and −1 otherwise. Thus 1,988,630 tags were extracted out of 2,651,801, which gave 2.6M positive and 8.9M negative labels.
(iv) RefGene Mouse (rfgM)
This dataset was generated using the mm9 build with RefGene annotations which can be downloaded from the UCSC browser. The labelling was done the same way as its human equivalent. This created n+=43K positive and n−=11M negative examples.
(v) CAGE Mouse (cagM)
Using the FANTOM CAGE tags in the same way as for human generates 922K positively labelled segments from 698K tags with a score greater than zero. This gives 9.3M negative examples.
The processing unit 110 trains three different RSV classifiers on human DNA data, RSVPo2H, RSVRfgH, and RSVcagH using the methods described with reference to
The results are shown in
In this example, the five different classifiers or predictive models are trained and applied on five different test sets as discussed above. This subsection discusses the results of two tests: one against CAGE human genome annotations and one against RefGene human genome annotations. The global performance of the classifiers is discussed below, while the local analysis of the most significant peak regions is deferred to section 2(d). The performance curves for each of the five classifiers tested on human CAGE data in
Let us analyse the precision-recall curves (PRC) in
The second observation is that the messages from the PRC and ROC plots contradict each other. The PRCs for CAGE in
This discrepancy is due to the differences in prior probabilities of positive examples, i.e. the proportion $n^+/n$, which according to Table 4 is over 22% for CAGE (cagH) and 55 times smaller for RefGene (rfgH). However, this explanation points to the major drawback of those two “classical” metrics: one needs to take into account the context in which the metrics are considered, i.e. additional information in the form of the $n^+$ and n values, in order to sensibly interpret or calibrate them. This is especially important if they are used for comparing vastly different test sets with sizes in the millions, when direct inspection and contemplation of individual cases is vastly inadequate.
The above discussion of drawbacks of the PRC and ROC curves creates the right background for analysis in terms of the method of evaluating or quantifying the performance of classifiers in genome-wide analysis according to
Note the differences in the height of the plots, especially the curves for the RSVcagH in
Although this RSVcagH classifier is a clear winner on the global scale, the other models are very impressive as well. Referring also to the tests on CAGE Mouse cagM in
For instance, in the test on cagH, RSVrfgH and RSVrfgM achieved precisions of 65% and 66% at recalls of 5.4% and 4.2%, respectively, while both RSVPo2H and RSVcagM, trained on “unrefined” empirical data, obtained equivalent performance of 52% precision at 10% recall. Note that the 10% recall still corresponds to the huge number of 510K loci. This is far beyond the capabilities of rigorous wet-lab verification other than high-throughput techniques, and still far from an investigation of the peaks at the start of the PRC curves in
a) to (c) and
The results for very low recall will now be analysed. To facilitate such an analysis, we have prepared different versions of the plots in
Compared with
In
e) to (d) and
Higher resolution transcriptome profiling has raised doubts about the capacity of in-silico prediction of functional control elements in the genome, such as TSS (Cheng et al., 2005; Cloonan et al., 2008). However, we show here that the most up-to-date empirical annotations of TSS, RNA Pol-II ChIP-Seq and CAGE, can effectively be substituted by improvements to the prediction algorithms. While this exercise is redundant in cases where empirical evidence for a TSS already exists, we do find many sites in the genome that are predicted but lack evidence in the empirical measurements. While these could be false positive hits, it is more likely that these are real TSS elements, active in specialised, uncharacterised cell types or conditions. Recording such annotations may become valuable to geneticists who find allelic variations or epigenetic signals in intergenic regions. Indeed, a lot of the top hits are intergenic.
The evidence in support of these elements representing true TSS activity comes from the vast coverage of the existing annotations in our predicted TSS pool. Namely:
To further improve the algorithm, we will swap the order between training and test datasets, using RNA Pol-II ChIP-Seq data to build predictive models for CAGE tags on par with refined RefGene annotation. There are a few potential future uses of the data:
Looking at the biological annotation of the group of genes neighbouring these potential TSS may provide insight as to which conditions or cell types are currently not represented in genome annotation. Further high resolution testing of the coincidence of the regions our algorithm predicted as TSS with disease-associated SNPs is ongoing. In conclusion, at least one embodiment of the RSV method provides a good baseline in-silico tool for extending the empirical data obtained during phase I of the Encyclopedia of DNA Elements (ENCODE) project through to the rest of the genome, further to the TSS task explored here. Furthermore, our predicted TSS annotations merit consideration by the human ENCODE Genome Annotation Assessment Project (EGASP) (Guigo et al., 2006), and could improve the annotation of functional elements in the context of the interpretation of genetic studies, such as genome-wide disease-allelic associations.
The results described in Examples 1 and 2 were for a window of width w=500 bp. In this example we show results for the classifiers RSVcagH, RSVcagM, RSVPo2H, RSVRfgH and RSVRfgM applied to human CAGE data, for a window 10 times smaller, namely w=50 bp. The plots are shown in the accompanying figures.
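By way of illustration, a minimal sketch of cutting a chromosome sequence into fixed-width tiles is given below; it assumes non-overlapping windows, which may differ from the exact tiling used in the experiments.

```python
def tile(sequence: str, w: int):
    """Yield (start, tile) pairs of fixed-width, non-overlapping windows.
    Shrinking w from 500 bp to 50 bp yields ten times as many segments
    to classify, at correspondingly higher resolution."""
    for start in range(0, len(sequence) - w + 1, w):
        yield start, sequence[start:start + w]

# e.g. tiles_500 = list(tile(chrom_seq, 500)); tiles_50 = list(tile(chrom_seq, 50))
```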
In this example, the classifier 112 is trained for annotation of transcription factor binding sites. In our experiments we have focused on an important oncogene, Myc (c-Myc), which encodes a transcription factor that is believed to regulate the expression of 15% of all genes (Gearhart et al., 2007) by binding to Enhancer Box sequences (E-boxes) and recruiting histone acetyltransferases (HATs). This means that in addition to its role as a classical transcription factor, Myc also regulates global chromatin structure by regulating histone acetylation, both in gene-rich regions and at sites far from any known gene (Cotterman et al., 2008).
A mutated version of Myc, found in many cancers, causes Myc to be persistently expressed. This leads to the unregulated expression of many genes, some of which are involved in cell proliferation, and results in the formation of cancer. A common translocation involving Myc, t(8;14), is implicated in the development of Burkitt's lymphoma. A recent study demonstrated that temporary inhibition of Myc selectively kills mouse lung cancer cells, making it a potential cancer drug target (Soucek et al., 2008).
ChIP-Seq datasets for the following four human cell lines were downloaded from http://hgdownload.cse.ucsc.edu/goldenPath/hg18/encodeDCC/wgEncodeYaleChIPseq/:
For a more complete set of binding sites, the four datasets above are merged into a single dataset (a sketch of the merging step follows):
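A minimal sketch of such a merge is given below; it assumes each dataset is a list of (chrom, start, end) peak intervals and that overlapping peaks are collapsed into one, which is one plausible reading of the merging step rather than the actual implementation.

```python
def merge_peak_sets(*datasets):
    """Union of several peak lists, collapsing overlapping intervals.
    Each dataset is a list of (chrom, start, end) tuples."""
    peaks = sorted(p for ds in datasets for p in ds)
    merged = []
    for chrom, start, end in peaks:
        if merged and merged[-1][0] == chrom and start <= merged[-1][2]:
            # Overlaps (or abuts) the previous peak: extend it.
            merged[-1] = (chrom, merged[-1][1], max(merged[-1][2], end))
        else:
            merged.append((chrom, start, end))
    return merged
```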
For c-Myc mouse, the ChIP-Seq experiment available at http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSM288356 is used:
We have used four models trained on the four label datasets described above, namely (i) cMycH_MergedCellLine; (ii) cMycH_Helas3 Cmyc; (iii) cMycH_K562 CmycV2; and (iv) cMycM. For the first three human cell lines we trained on chromosome 22; for the last, mouse, model we trained on chromosome 18 (a sketch of this chromosome-held-out protocol follows).
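By way of illustration, the chromosome-held-out protocol can be sketched as follows; the record layout and names are assumptions made for the sketch, not the actual implementation.

```python
def split_by_chromosome(segments, train_chrom):
    """Train on one chromosome, test genome-wide on all the others.
    `segments` is a list of (chromosome, features, label) records."""
    train = [(f, y) for c, f, y in segments if c == train_chrom]
    test = [(f, y) for c, f, y in segments if c != train_chrom]
    return train, test

# e.g. human models: split_by_chromosome(segments, "chr22")
#      mouse model:  split_by_chromosome(segments, "chr18")
```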
We focus here on the results for the window w=50 bp in order to reinforce the message that the methodology of this invention is applicable to high-resolution annotation. The results are shown in the accompanying figures; the reported values include:
93%, 93%, 93%, 93%, 60%, 60%, 59%, 56%.
Note the results of the test on human cMyc data shown in the accompanying figures.
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
For example, the method described above with reference to the accompanying figures may also be used for annotation of an RNA sequence, in which case

$$\vec{s} \in \{a, g, u, c\}^n \quad \text{and} \quad \vec{x}_i \in \{a, g, u, c\}^w,$$
where $n$ is the length of the sequence $\vec{s}$, $w < n$ is the window size or length of the segment, and each nucleotide in the sequence $\vec{s}$ or tile $\vec{x}_i$ is either adenine (a), guanine (g), uracil (u) or cytosine (c). Similarly, one or more features $\vec{\phi}(\vec{x}_i)$ can be extracted from the RNA tile $\vec{x}_i$ to train a classifier with the following predictive function:
$$f(\vec{x}_i) := \langle \vec{\phi}(\vec{x}_i), \vec{\beta} \rangle,$$
where $\vec{x}_i$ is the $i$th segment, $\vec{\phi}(\vec{x}_i)$ is a feature vector and $\vec{\beta}$ is a weight or coefficient vector with a weight corresponding to each feature in the feature vector. In this case, the classifier 112 may be trained to annotate 5′ untranslated regions (UTRs), 3′ UTRs and intronic sequences, which control processes such as transcription elongation, alternative splicing, RNA export, sub-cellular localisation, RNA degradation and translation efficiency. An example of such a regulatory mechanism is micro-RNAs, which bind to 3′ UTRs.
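By way of illustration only, a minimal sketch of this predictive function is given below; it assumes, hypothetically, k-mer counts over the RNA alphabet as the feature map $\vec{\phi}$, whereas the actual features used may differ.

```python
from itertools import product

RNA = "agcu"

def kmer_features(tile: str, k: int = 3):
    """Feature map phi: counts of each length-k word over {a, g, c, u},
    in a fixed order, for one RNA tile."""
    kmers = ["".join(p) for p in product(RNA, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(tile) - k + 1):
        word = tile[i:i + k]
        if word in counts:
            counts[word] += 1
    return [counts[km] for km in kmers]

def predict(tile: str, beta, k: int = 3) -> float:
    """f(x) = <phi(x), beta>: inner product of features and weights."""
    phi = kmer_features(tile, k)
    return sum(p * b for p, b in zip(phi, beta))
```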
It should also be understood that, unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving”, “processing”, “retrieving”, “selecting”, “calculating”, “determining”, “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The processing unit 110 and classifier 112 can be implemented as hardware, software or a combination of both.
It should also be understood that the methods and systems described might be implemented on many different types of processing devices by computer program or program code comprising program instructions that are executable by one or more processors. The software program instructions may include source code, object code, machine code or any other stored data that is operable to cause a processing system to perform the methods described. The methods and systems may be provided on any suitable computer readable media. Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media (e.g. copper wire, coaxial cable, fibre optic media). Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
It should also be understood that computer components, processing units, engines, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow any data flow required for their operations. It is also noted that software instructions or modules can be implemented using a variety of methods. For example, a subroutine unit of code, a software function, an object in an object-oriented programming environment, an applet, a computer script, computer code or firmware can be used. The software components and/or functionality may be located on a single device or distributed over multiple devices depending on the application.
Reference in the specification to “one embodiment” or “an embodiment” of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” appearing in various places throughout the specification are not necessarily all referring to the same embodiment. Unless the context clearly requires otherwise, words using singular or plural number also include the plural or singular number respectively.
Number | Date | Country | Kind
---|---|---|---
2010900948 | Mar 2010 | AU | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/AU2011/000259 | 3/8/2011 | WO | 00 | 2/1/2013