This disclosure relates to methods, systems, computer programs and computer-readable media for the multidimensional analysis of real-time amplification data.
Since its inception, the real-time polymerase chain reaction (qPCR) has become a routine technique in molecular biology for detecting and quantifying nucleic acids. This is predominantly due to its large dynamic range (7-8 orders of magnitude), high sensitivity (5-10 molecules) and reproducible quantification results. New methods to improve the analysis of qPCR data are invaluable to a number of analytical fields, including environmental monitoring and clinical diagnostics. Absolute quantification of nucleic acids in real-time PCR using standard curves is undoubtedly important in various fields of biomedicine, although research in this area has saturated in recent years.
The current “gold standard” for absolute quantification of a specific target sequence is the cycle-threshold (Ct) method. The Ct value is a feature of the amplification curve defined as the number of cycles in the exponential region at which there is a detectable increase in fluorescence. Since this method was first proposed, several alternative methods have been developed in the hope of improving absolute quantification in terms of accuracy, precision and robustness. The focus of existing research has been the computation of single features, such as Cy and −log10(F0), that are linearly related to initial concentration. This provides a simple approach for absolute quantification; however, data analysis based on such single features has been limited. Thus, research into improving methods for absolute quantification of nucleic acids using standard curves has plateaued, yielding only incremental improvements.
Rutledge et al. 2004 proposed the Sigmoidal curve-fitting (SCF) for quantification based on three kinetic parameters (Fc, Fmax and F0). Sisti et al. 2010 developed the “shape-based outlier detection” method, which is not based on amplification efficiency and uses a non-linear fitting to parameterize PCR amplification profiles. The shape-based outlier detection method takes a multidimensional approach in order to define a similarity measure between amplification curves, but relies on using a specific model for amplification, namely the 5-parameter sigmoid, and is not a general method. Furthermore, the shape-based outlier detection method is typically used as an add-on, and only uses a multidimensional approach for outlier detection, such that quantification is only considered using a unidimensional approach. Guescini et al. 2013 proposed the Cy0 method, which is similar to the Ct method but takes into account the kinetic parameters of the amplification curve and may compensate for small variations among the samples being compared. Bar et al. 2013 proposed a method (KOD) based on amplification efficiency calculation for the early detection of non-optimal assay conditions.
The present disclosure aims to at least partially overcome the problems inherent in existing techniques.
The invention is defined by the appended claims. The supporting disclosure herein presents a framework that shows that the benefits of standard curves extend beyond absolute quantification when observed in a multidimensional environment. The focus of existing research has been on the computation of a single value, referred to herein as a “feature”, that is linearly related to target concentration, and thus there has been a gap in existing approaches in terms of taking advantage of multiple features. It has now been realised that the benefits of combining linear features are non-trivial. Previous methods have been restricted to the simplicity of conventional standard curves such as the gold standard cycle-threshold (Ct) method. This new methodology enables enhanced quantification of nucleic acids, single-channel multiplexing, outlier detection, characteristic patterns in the multidimensional space related to amplification kinetics and increased robustness for sample identification and quantification.
Drawing on the field of Machine Learning, the presently disclosed method takes a multidimensional view, combining multiple features (e.g. linear features) in order to take advantage of, and improve on, information and principles behind existing methods to analyze real-time amplification data. The disclosed method involves two new concepts: the multidimensional standard curve and its ‘home’, the feature space. Together they expand the capabilities of standard curves, allowing for simultaneous absolute quantification, outlier detection and providing insights into amplification kinetics. This disclosure describes a general method which, for the first time, presents a multi-dimensional standard curve, increasing the degrees of freedom in data analysis and thereby being capable of uncovering trends and patterns in real-time amplification data obtained by existing qPCR instruments (such as the LightCycler 96 System from Roche Life Science). It is believed that this disclosure redefines the foundations of analysing real-time nucleic acid amplification data and enables new applications in the field of nucleic acid research.
In a first aspect of the disclosure there is provided a method for use in quantifying a sample comprising a target nucleic acid, the method comprising: obtaining a set of first real-time amplification data for each of a plurality of target concentrations; extracting a plurality of N features from the set of first data, wherein each feature relates the set of first data to the concentration of the target; and fitting a line to a plurality of points defined in an N-dimensional space by the features, each point relating to one of the plurality of target concentrations, wherein the line defines a multidimensional standard curve specific to the nucleic acid target which can be used for quantification of target concentration.
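By way of non-limiting illustration only, the following minimal Python sketch shows how the training points of such a multidimensional standard curve may be assembled. The extract_features() helper is a hypothetical stand-in for the chosen N feature extractors (e.g. Ct, Cy and −log10(F0)); it is not the claimed implementation.

```python
import numpy as np

def extract_features(curve):
    """Hypothetical stand-in for the N feature extractors (e.g. Ct, Cy,
    -log10(F0)): a crude threshold-crossing cycle plus two curve
    summaries, purely for illustration."""
    curve = np.asarray(curve, dtype=float)
    crossed = np.where(curve >= 0.2 * curve.max())[0]
    ct_like = float(crossed[0]) if crossed.size else float(len(curve))
    return np.array([ct_like, curve.sum(), curve.max()])

def training_points(curves, log10_concs):
    """curves: one fluorescence-vs-cycle array per reaction;
    log10_concs: the known log10 target concentration of each curve."""
    X = np.array([extract_features(c) for c in curves])  # (m, N) points
    y = np.asarray(log10_concs, dtype=float)
    return X, y  # a line fitted through the rows of X defines the
                 # multidimensional standard curve
```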
Optionally the method further comprises: obtaining second real-time amplification data relating to an unknown sample; extracting a corresponding plurality of N features from the second data; and calculating a distance measure between the line in N-dimensional space and a point defined in N-dimensional space by the corresponding plurality of N features. Optionally, the method further comprises computing a similarity measure between amplification curves from the distance measure, which can optionally be used to identify outliers or classify targets.
Optionally each feature is different to each of the other features, and optionally wherein each feature is linearly related to the concentration of the target, and optionally wherein one or more of the features comprises one of Ct, Cy and −log10(F0).
Optionally the method further comprises mapping the line in N-dimensional space to a unidimensional function, M0, which is related to target concentration, and optionally wherein the unidimensional function is linearly related to target concentration, and/or optionally wherein the unidimensional function defines a standard curve for quantifying target concentration. Optionally, the mapping is performed using a dimensionality reduction technique, and optionally wherein the dimensionality reduction technique comprises at least one of: principal component analysis; random sample consensus; partial-least squares regression; and projecting onto a single feature. Optionally, the mapping comprises applying a respective scalar feature weight to each of the features, and optionally wherein the respective feature weights are determined by an optimization algorithm which optimizes an objective function, and optionally wherein the objective function is arranged for optimization of quantification performance.
Optionally, calculating the distance measure comprises projecting the point in N-dimensional space onto a plane which is normal to the line in N-dimensional space, and optionally wherein calculating the distance measure further comprises calculating, based on the projected point, a Euclidean distance and/or a Mahalanobis distance. Optionally, the method further comprises calculating a similarity measure based on the distance measure, and optionally wherein calculating a similarity measure comprises applying a threshold to the similarity measure. Optionally, the method further comprises determining whether the point in N-dimensional space is an inlier or an outlier based on the similarity measure. Optionally, the method further comprises: if the point in N-dimensional space is determined to be an outlier then excluding the point from training data upon which the step of fitting a line to a plurality of points defined in N-dimensional space is based, and if the point in N-dimensional space is not determined to be an outlier then re-fitting the line in N-dimensional space based additionally on the point in N-dimensional space.
Optionally, the method further comprises determining a target concentration based on the multidimensional standard curve, and optionally further based on the distance measure, and optionally based on the unidimensional function which defines the standard curve. Optionally, the method further includes displaying the target concentration on a display.
Optionally, the method further comprises a step of fitting a curve to the set of first data, wherein the feature extraction is based on the curve-fitted first data, and optionally wherein the curve fitting is performed using one or more of a 5-parameter sigmoid, an exponential model, and linear interpolation. Optionally, the set of first data is pre-processed, and the curve fitting is carried out on the processed set of first data, and optionally wherein the pre-processing comprises one or more of: subtracting a baseline; and normalization.
Optionally, the real-time amplification data is derived from one or more physical measurements taken during amplification, and optionally wherein the one or more physical measurements comprise fluorescence readings.
In a second aspect there is provided a system comprising at least one processor and/or at least one integrated circuit, the system arranged to carry out a method according to the first aspect.
In a third aspect there is provided a computer program comprising instructions which, when executed by one or more processors, cause the one or more processors to perform a method according to the first aspect.
In a fourth aspect there is provided a computer-readable medium storing instructions which when executed by at least one processor, cause the at least one processor to carry out a method according to the first aspect.
In a fifth aspect there is provided a method according to the first aspect, used for detection of genomic material, and optionally wherein the genomic material comprises one or more pathogens, and optionally wherein the pathogens comprise one or more carbapenemase-producing enterobacteria, and optionally wherein the pathogens comprise one or more carbapenemase genes from the set comprising blaOXA-48, blaVIM, blaNDM and blaKPC.
In a sixth aspect there is provided a method for diagnosis of an infection by detection of one or more pathogens according to the method of the first aspect, and optionally wherein the pathogens comprise one or more carbapenemase-producing enterobacteria, and optionally wherein the pathogens comprise one or more carbapenemase genes from the set comprising blaOXA-48, blaVIM, blaNDM and blaKPC.
In a seventh aspect there is provided a method for point-of-care diagnosis of an infectious disease by detection of one or more pathogens according to the method of the first aspect, and optionally wherein the pathogens comprise one or more carbapenemase-producing enterobacteria, and optionally wherein the pathogens comprise one or more carbapenemase genes from the set comprising blaOXA-48, blaVIM, blaNDM and blaKPC.
The methods disclosed herein, if used for diagnosis, can be performed in vitro or ex vivo. Embodiments can be used for single-channel multiplexing without post-PCR manipulations.
It will be appreciated in the light of the present disclosure that certain features of certain aspects and/or embodiments described herein can be advantageously combined with those of other aspects and/or embodiments. The following description of specific embodiments should not therefore be interpreted as indicating that all of the described steps and/or features are essential. Instead, it will be understood that certain steps and/or features are optional by virtue of their function or purpose, even where those steps or features are not explicitly described as being optional. The above aspects are thus not intended to limit the invention, and instead the invention is defined by the appended claims.
In order that the disclosure may be understood, preferred embodiments are described below, by way of example, with reference to the Figures in which like features are provided with like reference numerals. Figures are not necessarily drawn to scale.
The structure of the disclosure is as follows. In order to understand the proposed framework, it is useful to have an overall picture of the conventional approach, described in the same language. First, the conventional approach and then the proposed multidimensional framework are presented. For easier comprehension, the theory and benefits of the disclosed method are explained and discussed. Finally, by way of example, an instance of this new method is given, with a set of real-time data using lambda DNA as a template, and specific applications of the disclosed methods are explored.
Conventional Approach
In a conventional method, raw amplification data for several known concentrations of the target is typically pre-processed and fitted with an appropriate curve. A single feature such as the cycle threshold, Ct, is extracted from each curve. A line is fitted to the feature vs concentration such that unknown sample concentrations can be extrapolated. Here, two terms, namely training and testing (as used in the field of Machine Learning), are used to describe the construction of a standard curve 110 and the quantification of unknown samples respectively. Within the conventional approach for quantification, training using a first set of amplification data from samples having known concentrations is achieved through 4 stages: pre-processing 101, curve fitting 102, single linear feature extraction 103 and line fitting 104, as illustrated in the upper branch of the accompanying figure.
Pre-processing 101 can be optionally performed to reduce factors such as background noise such that a more accurate comparison amongst samples can be achieved.
Curve fitting 102 (e.g. using a 5-parameter sigmoid, an exponential model, and/or linear interpolation) is optional, and beneficial given that amplification curves are discrete in time/temperature and most techniques require fluorescence readings that are not explicitly measured at a given time/temperature instance.
Feature extraction 103 involves selecting and determining a feature (or “characteristic”, e.g. Ct, Cy, −log10(F0), FDM, SDM) of the target data.
Line (or curve) fitting 104 involves fitting a line (or curve) 110 to the determined feature data versus target concentration.
Examples of pre-processing 101 include baseline subtraction and normalization. Examples of curve fitting 102 include using a 5-parameter sigmoid, an exponential model, and linear interpolation. Examples of features extracted in the feature extraction 103 step include Ct, Cy or −log10(F0). Examples of line fitting 104 techniques include principal component analysis, and random sample consensus (RANSAC).
Testing of unknown samples (i.e. quantifying target concentration in unknown samples, based on second amplification data relating to a target comprised in the unknown sample) is accomplished by using the same first 3 blocks (pre-processing 101, curve fitting 102, linear feature extraction 103) as training, and using the line 110 generated from the final line fitting 104 step during training in order to quantify the samples.
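For contrast, the conventional workflow can be sketched as follows (a minimal example, with assumed illustrative Ct values for a 10-fold dilution series, not measured data): training fits a line to Ct versus log10 concentration, and testing inverts that line.

```python
import numpy as np

def train_ct_curve(ct_values, log10_concs):
    # Fit Ct = slope * log10(concentration) + intercept over the standards.
    slope, intercept = np.polyfit(log10_concs, ct_values, 1)
    return slope, intercept

def quantify_from_ct(ct, slope, intercept):
    # Invert the fitted line to estimate copies per reaction.
    return 10.0 ** ((ct - intercept) / slope)

# Usage with assumed example values:
slope, intercept = train_ct_curve([34.1, 30.8, 27.4, 24.0], [2, 3, 4, 5])
print(quantify_from_ct(26.5, slope, intercept))  # roughly 1.7e4 copies
```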
Proposed Method
The proposed method builds on the conventional techniques described above, by increasing the dimensionality of the standard curve (against which data is compared in the testing phase) in order to explore, and take advantage of, using multiple features together. This new framework is presented in the lower branch of the accompanying figure.
For training, in this example embodiment there are 6 stages: pre-processing 101, curve fitting 102, multi-feature extraction 113, high dimensional line fitting 114, multidimensional analysis 115, and dimensionality reduction 116. Testing follows a similar process: pre-processing 101, curve fitting 102, multi-feature extraction 113, multidimensional analysis 115, and dimensionality reduction 116. As for the conventional approach, pre-processing 101 and curve fitting 102 are optional, and with suitable multidimensional analysis techniques an explicit step of dimensionality reduction may also be rendered optional.
Again, examples of pre-processing 101 include baseline subtraction and normalization, and examples of curve fitting 102 include using a 5-parameter sigmoid, an exponential model, and linear interpolation. Examples of features extracted in the multi-feature extraction 113 step include Ct, Cy, −log10(F0), FDM, SDM. Examples of high-dimensional line fitting 114 techniques include principal component analysis, and random sample consensus (RANSAC). Examples of multidimensional analysis 115 techniques include calculating a Euclidean distance, calculating confidence bounds, weighting features using scalars αi, as further described below. Examples of dimensionality reduction 116 techniques include principal component regression, calculating partial least-squares, and projecting onto original features, as further described below.
Once training is complete, at least one further (e.g. unknown) sample can then be analyzed (e.g. quantified and/or classified) through testing as follows. Similar to training, processed amplification data is reduced to a point in the feature space by multi-feature extraction 113, and that point is then compared against the multidimensional standard curve through multidimensional analysis 115 and dimensionality reduction 116 in order to quantify and/or classify the sample.
Given that this higher dimensional space has not previously been disclosed, it is instructive to highlight the degrees of freedom within this new framework that were non-existent when observing the quantification process through the conventional lens. The following advantages arise:
Advantage 1. The weight of each extracted feature can be controlled by the scalars, α1, . . . , αn. There are two main observations of this degree of freedom. The first observation is that features that have poor quantification performance can be suppressed by setting the associated α to a small value. This introduces a very useful property of the framework which is referred to as the separation principle. The separation principle means that including features to enhance multidimensional analyses does not have a negative impact on quantification performance if the α's are chosen appropriately. Optimization algorithms can be used to set the α's based on an objective function. Therefore, the performance of the quantification using the proposed framework is lower bounded by the performance of the best single feature for a given objective. The second observation is that no upper bound exists on the performance of using several scaled features. Thus, there is a potential to outperform single features, as shown herein.
Advantage 2. The versatility of this multidimensional way of thinking means that there are multiple dimensionality reduction techniques (DRTs) available, such as: principal component regression, partial-least squares regression, and even projecting onto a single feature (e.g. using the standard curve 110 used in conventional methods). Given that DRTs can be nonlinear and take advantage of multiple features, predictive performance may be improved.
Advantage 3. Training and testing data points do not necessarily lie perfectly on a straight line as they did in the conventional technique. This property is the backbone behind why there is more information in higher dimensions. For example, the closer two points are in the feature space, the more likely it is that their amplification curves are similar (resembling a reproducing kernel Hilbert space). Therefore, a distance measure in the feature space can provide a means of computing a similarity measure between amplification curves. It is important to understand that the distance measure is not necessarily, and in reality unlikely to be, linearly related to the similarity measure. For example, it is not necessarily true that a point twice as far from the multidimensional standard curve is twice as unlikely to occur. This relationship can be approximated using the training data itself. In the case of training, a similarity measure is useful to identify and remove outliers that may skew quantification performance. As for testing, the similarity measure can give a probability that the unknown data is an outlier of the standard curve, i.e. non-specific or due to a qPCR artefact, without the need for post-PCR analyses such as melting curves or agarose gels.
Advantage 4. The effect of changes in reaction conditions, such as annealing temperature or primer mix concentration, can be captured by patterns in the feature space. Uncovering these trends and patterns can be very insightful in understanding the data. This is also possible in the conventional case, e.g. observing how Ct varies with temperature; however, since reaction conditions affect different features differently, in the proposed multidimensional technique conclusions can be drawn with higher confidence, e.g. if a pattern is observed in multidimensional space. For example, consider the following: a change in temperature, ΔT, causes a different change for different features, e.g. ΔX, ΔY and ΔZ. Therefore, if (as in the conventional technique) only a single feature, X, is used and a variation ΔX is observed, then the source of the variation, i.e. ΔT, is unlikely to be captured with high confidence. Whereas, considering multiple features (as in the proposed multidimensional technique) and observing ΔX, ΔY and ΔZ simultaneously can provide more confidence that the source is due to ΔT.
An extension of advantage 4 is related to the effect of variations in target concentration. Clearly, the pattern for varying target concentration is known: along the axis of the multidimensional standard curve 130. Therefore, the data itself is sufficient to suggest if a particular sample is at a different concentration than another. This is significant, since it allows variations amongst replicates (which are possible due to experimental errors such as dilution and mixing) to be identified and potentially compensated for. This is of particular importance for low concentrations, wherein such errors are typically more significant. It is interesting to observe that if multiple features are used, and the DRT is chosen such that the multidimensional curve is projected onto a single feature, e.g. Ct, then the quantification performance is similar to that of the conventional process (e.g. a special instance of the proposed framework, wherein only a single feature is used), yet the opportunities and insights obtained as a result of employing a multidimensional space still remain.
Example Method
It has been established that each step in the proposed method, as seen in the lower branch of the accompanying figure, can be implemented in a number of ways. An example instance of each step is described below.
Pre-Processing 101
The only pre-processing 101 performed in this example is background subtraction. This is accomplished using baseline subtraction: removing the mean of the first 5 fluorescence readings from every amplification curve. In other embodiments, however, pre-processing can be omitted, or other or additional pre-processing steps such as normalization can be carried out, and more advanced pre-processing steps can optionally be carried out to improve performance and/or accuracy.
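A minimal sketch of this baseline subtraction, assuming the raw data is arranged as one row of fluorescence readings per reaction:

```python
import numpy as np

def subtract_baseline(raw_curves, n_baseline=5):
    """Remove the mean of the first n_baseline fluorescence readings
    from every amplification curve (rows of raw_curves)."""
    curves = np.asarray(raw_curves, dtype=float)
    baseline = curves[:, :n_baseline].mean(axis=1, keepdims=True)
    return curves - baseline
```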
Curve Fitting 102
An example model for curve fitting is the 5-parameter sigmoid (Richards curve) given by:

F(x) = F_b + F_max / (1 + e^(−(x−c)/b))^d   (1)
where x is the cycle number, F(x) is the fluorescence at cycle x, Fb is the background fluorescence, Fmax is the maximum fluorescence, c is the fractional cycle of the inflection point, b is related to the slope of the curve, and d allows for an asymmetric shape (Richards coefficient).
An example optimization algorithm used to fit the curve to the data is the trust-region method and is based on the interior reflective Newton method. Here, the trust-region method is chosen over the Levenberg-Marquardt algorithm since bounds for the 5 parameters can be chosen in order to encourage a unique and realistic solution. Example lower and upper bounds for the 5 parameters, [Fb, Fmax, c, b, d], are given as: [−0.5, −0.5, 0, 0, 0.7] and [0.5, 0.5, 50, 100, 10] respectively.
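Such a bounded fit can be sketched with SciPy, whose curve_fit routine switches to the Trust Region Reflective algorithm when bounds are supplied; the sigmoid parameterization below and the initial guess are assumptions consistent with the parameter definitions given above.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid5(x, Fb, Fmax, c, b, d):
    # 5-parameter sigmoid (Richards curve), per equation (1).
    return Fb + Fmax / (1.0 + np.exp(-(x - c) / b)) ** d

def fit_amplification_curve(fluorescence):
    cycles = np.arange(1, len(fluorescence) + 1)
    lower = [-0.5, -0.5, 0, 0, 0.7]    # example bounds from the text
    upper = [0.5, 0.5, 50, 100, 10]
    p0 = [0.0, 0.25, 25.0, 5.0, 1.0]   # assumed initial guess within bounds
    params, _ = curve_fit(sigmoid5, cycles, fluorescence, p0=p0,
                          bounds=(lower, upper), method='trf')
    return params                      # [Fb, Fmax, c, b, d]
```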
Multi Feature Extraction 113
The number of features, n, that can be extracted is arbitrary; however, 3 features have been chosen in this example, for ease of explanation, in order to enhance visualization of each step of the framework: Ct, Cy and −log10(F0). As a result, in this example, each point in the feature space is a vector in 3-dimensional space,
e.g. p=[Ct,Cy,−log10(F0)]T
where [·]T denotes the transpose operator.
Note that by convention, vectors are columns and are bold lowercase letters. Matrices are bold uppercase. The details of these features are not the focus of this disclosure, and so will not be described further herein, it being assumed that the reader is familiar with said details.
High-Dimensional Line Fitting 114
When constructing a multidimensional standard curve, a line must be fitted in n-dimensional space. This can be achieved in multiple ways such as using the first principal component in principal component analysis (PCA) or techniques robust to outliers such as random sample consensus (RANSAC) if there is sufficient data. This example uses the former (PCA) since a relatively small number of training points are used to construct the standard curve.
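A minimal sketch of the PCA-based line fit follows; the fitted line is returned as two distinct points q1 and q2, the form used by the distance formulas below. RANSAC could be substituted where enough data is available.

```python
import numpy as np

def fit_line_pca(X):
    """X: (m, N) matrix of training points in the feature space."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    direction = vt[0] / np.linalg.norm(vt[0])  # first principal component
    q1, q2 = mean, mean + direction            # two points on the line
    return q1, q2
```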
Distance and Similarity Measure (Multi-Dimensional Analysis 115)
There are two distance measures given as examples in this disclosure: Euclidean and Mahalanobis distance, although it will be appreciated that other distance measures can be used.
The Euclidean distance between a point, p, and the multidimensional standard curve can be calculated by orthogonally projecting the point onto the multidimensional standard curve 130 and then using simple geometry to calculate the Euclidean distance, e:

Φ(p) = q_1 + P (q_2 − q_1), with P = (p − q_1)^T (q_2 − q_1) / ||q_2 − q_1||^2   (2)

e = ||p − Φ(p)||   (3)
where Φ computes the projection of the point p ∈ R^n onto the multidimensional standard curve, P is the scalar coordinate of the projection along the curve, the points q1, q2 ∈ R^n are any two distinct points that lie on the standard curve, and ||·|| denotes the Euclidean norm.
The Mahalanobis distance is defined as the distance between a point, p, and a distribution, D, in multidimensional space. Similar to the Euclidean distance, a point is first projected onto the multidimensional standard curve 130 and the following formula is applied to compute the Mahalanobis distance, d:
d = √( (p − Φ(p))^T Σ^(−1) (p − Φ(p)) )   (4)
where p, P, q1 and q2 are given in equation (2), and Σ is the co-variance matrix of the training data used to approximate the distribution D.
In order to convert the distance measure into a similarity measure, it can be shown that if the data is approximately normally distributed then the squared Mahalanobis distance, d², follows a χ²-distribution. Therefore, a χ²-distribution table can be used to translate a specific p-value into a distance threshold. For instance, for a χ²-distribution with 2 degrees of freedom, p-values of 0.05 and 0.01 correspond to squared Mahalanobis distances of 5.991 and 9.210 respectively.
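These computations can be sketched as follows; Σ is assumed to be estimated from the training residuals about the fitted line, and a pseudo-inverse is used since those residuals lie in an (N−1)-dimensional plane.

```python
import numpy as np
from scipy.stats import chi2

def project(p, q1, q2):
    """Orthogonal projection of p onto the line through q1 and q2,
    returning the projected point and the scalar coordinate P of eq. (2)."""
    u = q2 - q1
    P = float(np.dot(p - q1, u) / np.dot(u, u))
    return q1 + P * u, P

def distance_measures(p, q1, q2, Sigma):
    phi_p, _ = project(p, q1, q2)
    r = p - phi_p
    e = float(np.linalg.norm(r))               # Euclidean distance, eq. (3)
    d2 = float(r @ np.linalg.pinv(Sigma) @ r)  # squared Mahalanobis, eq. (4)
    p_value = chi2.sf(d2, df=len(p) - 1)       # e.g. df = 2 for N = 3
    return e, np.sqrt(d2), p_value             # small p-value => outlier
```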
Feature weights.
As mentioned previously, different weights, α, can be assigned to each feature. In order to accomplish this, a simple optimization algorithm can be implemented. Equivalently, an error measure can be minimized.
Dimensionality Reduction 116
In this example, principal component regression is used, e.g. M0=P from equation (2), and it is compared with projecting the standard curve onto all three dimensions, i.e. Ct, Cy and −log10(F0).
Evaluating Standard Curves
Consistent with the existing literature on evaluating standard curves, relative error (RE) and average coefficient of variation (CV) can, by way of example, be used to measure accuracy and precision respectively. The CV for each concentration can be calculated after normalizing the standard curves such that a fair comparison across standard curves is achieved. The formulas for the two measures are given by:

RE = (1/n) Σ_{i=1..n} | (x̂_i − x_i) / x_i |   (5)
where n is the number of training points, i is the index of a given training point, x_i is the true concentration of the i-th training data, and x̂_i is the estimate of x_i using the standard curve.

CV = (1/m) Σ_{j=1..m} std(x̂_j) / mean(x̂_j)   (6)
where m is the number of concentrations, j is the index of a given concentration and x̂_j is a vector of estimated concentrations for the concentration indexed by j. The functions std(·) and mean(·) compute the standard deviation and mean of their vector arguments respectively.
Referring to the field of Statistics, this example also uses the “leave-one-out cross validation” (LOOCV) error as a measure of stability and overall predictive performance. Stability refers to the predictive performance when training points are removed. The equation for calculating the LOOCV is given as:

LOOCV = (1/n) Σ_{i=1..n} || ẑ_i − z_i ||   (7)
where n is the number of training points, i is the index of a given training point, z_i is a vector of the true concentrations for all training points except the i-th training point, and ẑ_i is the estimate of z_i generated by the standard curve without the i-th training point.
In order for the optimization algorithm for computing α to simultaneously minimize the three aforementioned measures, it is convenient to introduce a figure of merit, Q, to capture all of the desired properties. Therefore, Q is defined as the product of all three errors and can be used to heuristically compare the performance across quantification methods.
Q=RE×CV×LOOCV (8)
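A hedged sketch of these measures and of the weight optimization follows; quantification via M0 (PCA projection plus a 1-D regression) matches the example above, while details such as working in log10 concentration are illustrative assumptions rather than prescribed choices.

```python
import numpy as np
from scipy.optimize import minimize

def m0_fit(X, y):
    # PCA line through the (weighted) feature points, then a 1-D
    # regression of the scalar projection M0 against log10 concentration.
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    u = vt[0]
    coeffs = np.polyfit((X - mean) @ u, y, 1)
    return mean, u, coeffs

def m0_predict(X, model):
    mean, u, coeffs = model
    return np.polyval(coeffs, (X - mean) @ u)     # log10 estimates

def figure_of_merit(alpha, X, y):
    Xw = X * alpha                                 # weight each feature
    est = m0_predict(Xw, m0_fit(Xw, y))
    re = np.mean(np.abs(10 ** est - 10 ** y) / 10 ** y)            # eq. (5)
    cv = np.mean([np.std(10 ** est[y == c]) / np.mean(10 ** est[y == c])
                  for c in np.unique(y)])                           # eq. (6)
    loo = np.mean([abs(m0_predict(Xw[i:i + 1],
                       m0_fit(np.delete(Xw, i, 0), np.delete(y, i)))[0]
                       - y[i]) for i in range(len(y))])             # eq. (7)
    return re * cv * loo                                            # eq. (8)

# Usage sketch: weights initialized to unity, 20 Nelder-Mead iterations.
# alpha = minimize(figure_of_merit, np.ones(X.shape[1]), args=(X, y),
#                  method='Nelder-Mead', options={'maxiter': 20}).x
```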
Several DNA targets were used for qPCR amplification by way of example:
(i) Synthetic double-stranded DNA (gBlocks Gene Fragments, Integrated DNA Technologies) containing the phage lambda DNA sequence was used to construct and evaluate the standard curves (DNA concentration ranging from 10² to 10⁸ copies per reaction). See Appendix A.
(ii) Genomic DNA isolated from pure cultures of carbapenem-resistant (A) Klebsiella pneumoniae carrying blaOXA-48, (B) Escherichia coli carrying blaNDM and (C) Klebsiella pneumoniae carrying blaKPC were used for the outlier detection experiments. See Appendix B.
(iii) Phage lambda DNA (New England Biolabs, Catalog #N3011S) was used for the primer variation experiment (final primer concentration ranging from 25 nM each to 850 nM each) and the temperature variation experiments (annealing temperature ranging from 52° C. to 72° C.).
All oligonucleotides used in this example were synthesised by IDT (Integrated DNA Technologies, Germany) and are shown in Table 1. The specific PCR primers for lambda phage were designed in-house using Primer3 (http://biotools.umassmed.edu/bioapps/primer3_www.cgi), whereas the primer pairs used for the specific detection of carbapenem resistance genes were taken from Monteiro et al. 2012. Real-time PCR amplifications were conducted using FastStart Essential DNA Green Master (Roche) according to the manufacturer's instructions, with variable primer concentration and a variable amount of DNA in a 5 μL final reaction volume. Thermocycling was performed using a LightCycler 96 (Roche) initiated by a 10 min incubation at 95° C., followed by 40 cycles: 95° C. for 20 sec; 62° C. (for lambda) or 68° C. (for carbapenem resistance genes) for 45 sec; and 72° C. for 30 sec, with a single fluorescent reading taken at the end of each cycle. Each reaction combination, starting DNA and specific PCR amplification mix, was conducted in octuplicate. All the runs were completed with a melting curve analysis to confirm the specificity of amplification and the absence of primer dimers. The concentrations of all DNA solutions were determined using a Qubit 3.0 fluorometer (Life Technologies). Appropriate negative controls were included in each experiment.
Results
The following example results illustrate the aforementioned advantages of the proposed framework using an example instance of the method as described above. Given that there is a separation principle between quantification performance and insights in the feature space, this section is split into two parts: quantification performance and multidimensional analysis. The first part shows the results that arose from the two degrees of freedom introduced in advantages 1 and 2, and the latter explores advantages 3 and 4 regarding interesting observations in multidimensional space.
Quantification Performance
In this example, synthetic double-stranded DNA was used to construct a multidimensional standard curve 130 and evaluate its quantification performance relative to single-feature methods. The resulting multidimensional standard curve 130, constructed using the features Ct, Cy and −log10(F0), is visualized in the accompanying figure.
In this example, the optimal feature weights, α, to control the contribution of each feature to quantification, after 20 iterations of the optimization algorithm, converged to α=[1.6807, 1.0474, 0.0134], where the weights correspond to Ct, Cy and −log10(F0) respectively. This result is readily interpretable and suggests that −log10(F0) exhibits the poorest quantification performance amongst the three features, consistent with existing knowledge. It is important to stress again that although the weight of −log10(F0) is suppressed relative to the other features to improve quantification, there is still a lot of value in keeping it, as it can uncover trends in multidimensional space, as will become apparent later.
The performance measures and figure of merit, Q, for this particular instance of the proposed framework against the conventional instance is given in Table 2. A breakdown of each calculated error grouped by concentration is provided in Appendix D. It can be observed that Ct offers the smallest RE, i.e. accuracy, whereas M0 outperforms the other methods in CV and LOOCV, i.e. precision and overall prediction. In terms of the figure of merit, combining all of the errors, this arbitrary realisation of the framework enhanced quantification by 6.8%, 25.6% and 99.3% compared to Ct, Cy and −log10(F0) respectively.
Multidimensional Analysis
Given that the feature space is a new concept, there is room to explore what can be achieved. In this section the concept of distance in the feature space is explored and is demonstrated through an example of outlier detection. Furthermore, it is shown that in this example a pattern exists in the feature space when altering reaction conditions.
In this example, genomic DNA carrying carbapenemase genes, namely blaOXA, blaNDM and blaKPC, are used as deliberate outliers for the multidimensional standard curve 130.
In order to fully capture the position of the outliers in the feature space, it is convenient to view the feature space along the axis of the multidimensional standard curve 130. This is possible by projecting data points in the feature space onto the plane perpendicular to the multidimensional standard curve 130, as illustrated in the accompanying figure.
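A minimal sketch of this projection is given below; the particular orthonormal basis of the orthogonal plane is an implementation choice, not something prescribed by the disclosure.

```python
import numpy as np

def plane_coordinates(X, q1, q2):
    """Coordinates of the points X in the hyperplane orthogonal to the
    standard-curve direction, for visualization and clustering."""
    u = (q2 - q1) / np.linalg.norm(q2 - q1)
    # Columns spanning the orthogonal complement of u (rank N-1 projector).
    basis = np.linalg.svd(np.eye(len(u)) - np.outer(u, u))[0][:, :-1]
    return (np.asarray(X) - q1) @ basis          # shape (m, N-1)
```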
It can be observed that all three outliers 601, 602, 603 can be clustered and clearly distinguished from the training data 610. Furthermore, in this example, the Euclidean distance, e, from the multidimensional standard curve 130 to the mean of the outliers is given by eOXA=1.16, eNDM=0.77 and eKPC=1.41. Given that in this example the furthest training point from the multidimensional standard curve 130 in terms of Euclidean distance is 0.22, the ratio between eOXA, eNDM, eKPC and 0.22 is given by 5.27, 3.5 and 6.41 respectively. Therefore, this ratio can be used as a similarity measure and the three clusters could be classified as outliers. However, this similarity measure has two implicit assumptions: (i) the data follows a uniform probability distribution, that is, a point twice as far is twice as likely to be an outlier (this assumption is typically made when there is not enough information to infer a distribution); and (ii) distances in different directions (e.g. along different axes) are equally likely. The latter is intuitively untrue in the feature space, because a change along one direction, e.g. Ct, does not impact the amplification curve as much as a change in another direction, e.g. −log10(F0). It is important to emphasise that directions in the feature space contain information regarding how much amplification kinetics change, and therefore direct comparisons between amplification reactions should be made along the same direction. This information is not captured in the aforementioned previous (unidimensional) data analysis.
In order to tackle the two aforementioned assumptions, the Mahalanobis distance, d, can be used. By observing the distribution of the training data in the feature space, it is clear that distances in different directions are not equally probable; the Mahalanobis distance accounts for this by weighting each direction according to the covariance of the training data.
A useful property of the Mahalanobis distance is that its squared value follows a χ2-distribution if the data is approximately normally distributed. Therefore, the distance can be converted into a probability in order to capture the non-uniform distribution.
A second example multidimensional analysis (as shown in the accompanying figures) demonstrates that changes in reaction conditions give rise to characteristic patterns in the feature space.
In the illustrated example, annealing temperature and primer mix concentration have been chosen to illustrate the idea. Specificity of the qPCR is not affected, as shown with melting curve analyses (see Appendix F and the accompanying figures).
Based on this finding, the previous (unidimensional) way of proceeding would indicate the use of Ct or Cy for subsequent experiments. However, it has been realised that this implies a loss of information contained in patterns generated by −log10(F0). Therefore, the proposed multidimensional approach combines features that are beneficial for quantification performance and pattern recognition: preserving all information without compromising quantification performance.
Finally, a further interesting observation is that for low concentrations of nucleic acids, there is a variation of training data points along the axis of the multidimensional standard curve 130, as seen in the accompanying figure.
Although the disclosed framework has been described as considering features that are linearly related to initial target concentration, that design choice was made to reduce the complexity of the analysis; other features, such as non-linearly related features, can optionally be used.
Additionally, it will be noted that if two unrelated PCR reactions exhibit a perfectly symmetric sigmoidal amplification curve, their respective standard curves may potentially overlap, and thus a question arises as to whether sufficient information might be captured between amplification curves in order to distinguish them in the feature space. However, such an effect can be mitigated from a molecular perspective by tuning the chemistry in order to sufficiently change amplification curves without compromising the performance of the reaction (e.g. speed, sensitivity, specificity etc).
In conclusion, this disclosure presents a versatile method, multidimensional standard curve and feature space, which enable techniques and advantages that were not previously realisable. It has been illustrated that an advantage of using multiple features is improved reliability of quantification. Furthermore, instead of trusting a single feature, e.g. Ct, other features such as Cy and −log10(F0) can be used to check if a quantification result is similar. The previous unidimensional way of thinking failed to consider multiple degrees of freedom and the resulting advantages that the versatile framework disclosed herein enables. There are thus four main capabilities that are enabled by the disclosed method:
(i) the ability to select multiple features and weight them based on quantification performance.
(ii) the flexibility of choosing an optimal mathematical method that maps multiple features into a single value representing target concentration. The first two capabilities lead to a separation principle which lower-bounds the quantification performance of the framework by that of the best single feature, whilst the insights and multidimensional analyses from the multiple features still remain. It is interesting to observe that, for the example dataset used in this proposed approach, the gold standard Ct method outperformed the other single features. This is an example of why there is a technical prejudice against using other features, since the outcome is data dependent. The disclosed framework offers a method of absolute quantification without the need to select a specific feature, with a guaranteed quantification performance. This disclosure shows that by using multiple features it is in fact possible to increase the quantification performance compared with the use of only single features.
(iii) enablement of applications such as outlier detection through the information gain captured by the elements of the feature space (e.g. distance measure, direction, distribution of data) that are typically meaningless or not considered in the previous unidimensional approach.
(iv) the ability to observe specific perturbations in reaction conditions as characteristic patterns in the feature space.
Example Application of the Disclosed Method
Absolute quantification of nucleic acids and multiplexing the detection of several targets in a single reaction both have, in their own right, significant and extensive use in biomedical related fields, especially in point-of-care applications. With previous approaches, the effort required to detect several targets using qPCR scales linearly with the number of targets, making this an expensive and time-consuming feat. In the present disclosure, a method is presented based on multidimensional standard curves that extends the use of real-time PCR data obtained by common qPCR instruments. By applying the method disclosed herein, simultaneous single-channel multiplexing and robust quantification of multiple targets in a single well is achieved using only real-time amplification data (that is, using bacterial isolates from clinical samples in a single reaction without the need for post-PCR operations such as fluorescent probes, agarose gels, melting curve analysis, or sequencing analysis). Given the importance of, and demand for, tackling challenges in antimicrobial resistance, the proposed method is shown in this example to simultaneously quantify and multiplex four different carbapenemase genes: blaOXA-48, blaNDM, blaVIM and blaKPC, which account for 97% of the UK's reported carbapenemase-producing Enterobacteriaceae.
Quantitative detection of nucleic acids (DNA and RNA) is used for many applications in the biomedical field, including gene expression analysis, genetic disease predisposition, mutation detection and clinical diagnostics. One such application is in the screening of antibiotic resistance genes in bacteria: the emergence and spread of carbapenemase-producing enterobacteria (CPE) represents one of the most imminent threats to public health worldwide. Invasive infections with carbapenem-resistant strains are associated with high mortality rates (up to 40-50%) and represent a major public health concern worldwide. Rapid and accurate screening for carriage of carbapenemase-producing Enterobacteriaceae (CPE) is essential for successful infection prevention and control strategies as well as bed management. However, routine laboratory detection of CPE based on carbapenem susceptibility is challenging: (i) culture-based methods are convenient due to their ready availability and low cost, but their limited sensitivity and long turnaround time may not always be optimal for infection control practices; (ii) nucleic acid amplification techniques (NAATs), such as qPCR, provide fast results and added sensitivity and specificity compared with culture-based methods, however these methodologies are often too expensive and require sophisticated equipment to be used as a screening tool in healthcare systems; and (iii) multiplexed NAATs have significant sensitivity, cost and turnaround time advantages, increasing the throughput and reliability of results, but the biotechnology industry has been struggling to meet the increasing demand for high-level multiplexing using available technologies. There is thus an unmet clinical need for new molecular tools that can be successfully adopted within existing healthcare settings.
Currently, qPCR is the gold standard for rapid detection of CPE and other bacterial infections. This technique is based on fluorescence detection, allowing the kinetics of PCR amplification to be monitored in real-time. Different methodologies are used to analyze qPCR data, with the cycle-threshold (Ct) method being the preferred approach for determining the absolute concentration of a specific target sequence. The Ct method assumes that the compared samples have similar PCR efficiency, and Ct is defined as the number of cycles in the log-linear region of the amplification at which there is a significant detectable increase in fluorescence. Alternative methods have been developed to quantify template nucleic acids, including the standard curve methods, linear regression and non-linear regression models, but none of them allows simultaneous target discrimination. Multiplex analytical systems allow the detection of multiple nucleic acid targets in one assay and can provide the required speed for sample characterisation while still saving cost and resources. However, in a practical context, multiplex quantitative real-time PCR (qPCR) is limited by the number of detection channels of the real-time thermocycler and commonly relies on melting curve analysis, agarose gels or sequencing for target confirmation. These post-PCR processes increase diagnostic time, limit high-throughput application and can lead to amplicon contamination of laboratory environments. Therefore, there is an urgent need to develop simplified molecular tools which are sensitive, accurate and low-cost.
The disclosed method allows existing technologies to obtain the benefits of multiplex PCR whilst reducing the complexity of CPE screening, resulting in cost reduction. This is due to the fact that the proposed method: (i) enables multi-parameter imaging with a single fluorescent channel; (ii) is compatible with unmodified oligonucleotides; and (iii) does not require post-PCR processing. This is enabled through the use of multidimensional standard curves, which in this example are constructed using Ct, Cy and −log10(F0) features extracted from amplification curves. In this example, we show that the described methodology can be successfully applied to CPE screening. This provides a proof-of-concept that several nucleic acid targets can be multiplexed in a single channel using only real-time amplification data. It will be appreciated nevertheless that the disclosed method can be applied to detection of any nucleic acid, and to detection of any pathogenic or non-pathogenic genomic material.
This example application of the disclosed method is described below with reference to the accompanying figures.
In the example conventional uni-dimensional analysis shown at (A), a single feature, such as Ct, is extracted from each amplification curve and plotted against known concentrations to generate a conventional standard curve for each target, with target identity confirmed by post-PCR analyses such as melting curves or agarose gels.
In the multidimensional analysis (B) disclosed herein, multidimensional standard curves and the feature space are used to simultaneously quantify and discriminate a target of interest solely based on the amplification curve: eliminating the need for expensive and time-consuming post-PCR manipulations. Similar to conventional standard curves, multidimensional standard curves are generated by using standard solutions with known concentrations under uniform experimental conditions. In this example, multiple features, α, β and γ, are extracted from each amplification curve and plotted against each other. Because each amplification curve has been reduced to three values, it can be represented as a single point in a 3D space (a greater or lesser number of dimensions can be used in embodiments). In this example, amplification curves from each concentration for a given target will thus generate three-dimensional clusters, which can be connected by high dimensional line fitting to generate the target-specific multidimensional standard curves 130. The multidimensional space where all the data points are contained is referred to as the feature space, and those data points can be projected onto an arbitrary hyperplane orthogonal to the standard curves for target classification and outlier detection. Unknown samples can be confidently classified through the use of clustering techniques, and enhanced quantification can be achieved by combining all the features into a unified feature called M0. It is important to stress that any number of targets and features could have been chosen; a three-plex assay and three features have been selected in this example to illustrate the concept in a comprehensible manner.
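Under these assumptions, single-channel multiplex classification can be sketched as follows; `standards` is a hypothetical dictionary of per-target models (two points on each multidimensional standard curve plus the covariance of its training residuals), and the 0.01 p-value threshold matches the example described later.

```python
import numpy as np
from scipy.stats import chi2

def classify(p, standards, p_threshold=0.01):
    """standards: {target_name: (q1, q2, Sigma)}; returns the best-matching
    target, or None if p is an outlier of every standard curve."""
    best_name, best_pv = None, p_threshold
    for name, (q1, q2, Sigma) in standards.items():
        u = q2 - q1
        phi = q1 + (np.dot(p - q1, u) / np.dot(u, u)) * u
        r = p - phi
        d2 = float(r @ np.linalg.pinv(Sigma) @ r)   # squared Mahalanobis
        pv = chi2.sf(d2, df=len(p) - 1)
        if pv > best_pv:                            # inlier of this curve
            best_name, best_pv = name, pv
    return best_name
```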
Example Primers and Amplification Reaction Conditions
All oligonucleotides were synthesised by Integrated DNA Technologies (The Netherlands) with no additional purification. Primer names and sequences are shown in Table 3. Each amplification reaction was performed in 5 μL of final volume with 2.5 μL FastStart Essential DNA Green Master 2× concentrated (Roche Diagnostics, Germany), 1 μL PCR Grade water, 0.5 μL of 10× multiplex PCR primer mixture containing the four primer sets (5 μM each primer) and 1 μL of different concentrations of synthetic DNA or bacterial genomic DNA. PCR amplifications consisted of 10 min at 95.0° C. followed by 45 cycles at 95.0° C. for 20 sec, 68.0° C. for 45 sec and 72.0° C. for 30 sec. One melting cycle was performed at 95.0° C. for 10 sec, 65.0° C. for 60 sec and 97.0° C. for 1 sec (continuous reading from 65.0° C. to 97.0° C.) for validation of the specificity of the products. Each experimental condition was run 5 to 8 times, loading the reactions into LightCycler 480 Multiwell Plates 96 (Roche Diagnostics, Germany) utilising a LightCycler 96 Real-Time PCR System (Roche Diagnostics, Germany).
Sequences are given in the 5′ to 3′ direction. Size denotes PCR amplification products.
Synthetic and Genomic DNA Samples
Four gBlock® Gene Fragments were purchased from Integrated DNA Technologies (The Netherlands) and resuspended in TE buffer to 10 ng/μL stock solutions (stored at −20° C.). The synthetic templates contained the DNA sequence from the blaOXA, blaNDM, blaVIM and blaKPC genes required for the multiplex qPCR assay. Eleven pure cultures from clinical isolates were obtained (Table 4). One loop of colonies from each pure culture was suspended in 50 μL digestion buffer (Tris-HCl 10 mmol/L, EDTA 1 mmol/L, pH 8.0, containing 5 U/μL lysozyme) and incubated at 37.0° C. for 30 min in a dry bath. 0.75 μL proteinase K at 20 μg/μL (Sigma) was subsequently added, and the solution was incubated at 56.0° C. for 30 min. After boiling for 10 min, the samples were centrifuged at 10,000×g for 5 min and the supernatant was transferred to a new tube and stored at −80.0° C. before use. Bacterial isolates included non-CPE-producing Klebsiella pneumoniae and Escherichia coli as control strains.
The eleven isolates (Table 4) comprised: Klebsiella pneumoniae, Escherichia coli, Citrobacter freundii, Escherichia coli, Klebsiella pneumoniae, Klebsiella pneumoniae, Pseudomonas aeruginosa, Klebsiella pneumoniae, Klebsiella pneumoniae, Klebsiella pneumoniae and Escherichia coli.
Example of the Disclosed Method
The data analysis for simultaneous quantification and multiplexing is achieved using the method previously described herein. Therefore, there are the following stages in data analysis: pre-processing 101, curve fitting 102, multi-feature extraction 113, high-dimensional line fitting 114, similarity measure (multidimensional analysis) 115 and dimensionality reduction 116.
Pre-processing 101: (optional) Background subtraction via baseline correction, in this example. This is accomplished by removing the mean of the first 5 fluorescent readings from each raw amplification curve.
Curve fitting 102: (optional) The 5-parameter sigmoid (Richards curve) of equation (1) is fitted, in this example, to model the amplification curves:

F(x) = F_b + F_max / (1 + e^(−(x−c)/b))^d
where x is the cycle number, F(x) is the fluorescence at cycle x, Fb is the background fluorescence, Fmax is the maximum fluorescence, c is the fractional cycle of the inflection point, b is related to the slope of the curve and d allows for an asymmetric shape (Richard's coefficient). The optimization algorithm used in this example to fit the curve to the data is the trust-region method and is based on the interior reflective Newton method. The lower and upper bounds for the 5 parameters, [Fb, Fmax, c, b, d], are given in this example as: [−0.5, −0.5, 0, 0, 0.7] and [0.5, 0.5, 50, 100, 10] respectively.
Feature extraction 113: Three features are chosen in this example to construct the multidimensional standard curve: Ct, Cy and −log10(F0). The details of these features are not the focus of this disclosure. It will be appreciated that fewer, or a greater number of, features could be used in other examples.
Line fitting 114: The method of least squares is used for line fitting in this example, i.e. the first principal component in principal component analysis (PCA).
Similarity measure (multidimensional analysis) 115: The similarity measure used in this example is the Mahalanobis distance, d:
d = √( (p − Φ(p))^T Σ^(−1) (p − Φ(p)) )
where p, P, q1 and q2 are given in equation (2), and Σ is the co-variance matrix of the training data used to approximate the distribution D.
Feature weights: In order to maximize quantification performance, different weights, α, can be assigned to each feature. In order to accomplish this, a simple optimization algorithm can be implemented. Equivalently, an error measure can be minimized. In this example, the error measure to minimize is the figure of merit described in the following subsection. The optimization algorithm is the Nelder-Mead simplex algorithm (32,33) with weights initialized to unity, i.e. beginning with no assumption on how good features are for quantification. This is a basic algorithm and only 20 iterations are used to find the weights, so that there is little computational overhead.
Dimensionality reduction 116: Four dimensionality reduction techniques were used in order to compare their performance. The first 3 are simple projections onto each of the individual features, i.e. Ct, Cy and −log10(F0). The final method uses principal component regression to compute a feature termed M0 using a vector
p=[Ct,Cy,−log10(F0)]T
The general form for calculating M0 for an arbitrary number of features, as shown in equation (2), is given as:

Φ(p) = q_1 + M_0 (q_2 − q_1), with M_0 = (p − q_1)^T (q_2 − q_1) / ||q_2 − q_1||^2

where Φ computes the projection of the point p ∈ R^n onto the multidimensional standard curve 130, and the points q1, q2 ∈ R^n are any two distinct points that lie on the standard curve.
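Equivalently, a minimal sketch in code:

```python
import numpy as np

def compute_m0(p, q1, q2):
    """Scalar coordinate of the projection of p onto the standard curve,
    per equation (2); it is then mapped to concentration via a 1-D fit."""
    u = q2 - q1
    return float(np.dot(p - q1, u) / np.dot(u, u))
```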
Evaluation of the standard curves is performed as described in the general disclosure above.
Results
In this example, it is shown that simultaneous robust quantification and multiplexed detection of the blaOXA-48, blaNDM, blaVIM and blaKPC β-lactamase genes in bacterial isolates can be achieved through analysing the fluorescent amplification curves in qPCR by using multidimensional standard curves. This section is broken into two parts: multiplexing and robust quantification. First, it is proven that single-channel multiplexing can be achieved, which is non-trivial and highly advantageous.
Target Discrimination Using Multidimensional Analysis
The multidimensional method disclosed herein shows that considering multiple features gives sufficient information gain in order to discriminate outliers from a specific target using a multidimensional standard curve 130. Taking advantage of this property, several multidimensional standard curves can be built in order to discriminate multiple specific targets.
In order to prove this, the 11 samples given in Table 4 were tested against the multidimensional standard curves 1301, 1302, 1303, 1304. The similarity measure used to classify the unknown samples is the Mahalanobis distance, using a p-value of 0.01 as the threshold. In order to fully capture the position of the outliers in the feature space, it is convenient to view the feature space along the axis of the multidimensional standard curves 1301, 1302, 1303, 1304. Melting curves are provided in the Appendices.
The second observation is that the means of the test samples (bacterial isolates) which have a single resistance (samples 1-8) fall within the correct cluster (p-value<0.01) of training points. Melting curve analysis was used to validate the results, as provided in the Appendices. The results from testing can be succinctly captured within a bar chart, as shown in the accompanying figure.
It can be observed that, using appropriate clustering techniques in each transformed space, it can be distinguished whether a point belongs to the target or not. Furthermore, if a probability is assigned to each data point, then samples can be classified reliably against a given standard whilst simultaneously being quantified. Given that the training data follow approximately a multivariate normal distribution, the squared Mahalanobis distance can provide a measure of probability.
Robust Quantification
Given that multiplexing has been established, quantification can be obtained using any conventional method, such as the gold standard cycle threshold, Ct. However, as shown in the general method disclosed herein, enhanced quantification can be achieved using a feature, M0, that combines all of the features for optimal absolute quantification. The measure of optimality in this study is a figure of merit that combines accuracy, precision, robustness and overall predictive power, as shown in equation X. Table 5 shows the figure of merit for the three chosen features (Ct, Cy and −log10(F0)) and for M0, together with the percentage improvement. It can be observed that quantification using M0 always improves on the best single feature: the improvement is 30.69%, 14.39%, 2.12% and 35.00% for blaOXA-48, blaNDM, blaVIM and blaKPC respectively. This is a result of the multidimensional framework. It is further interesting to observe that, amongst the conventional methods, no single method performs best for all targets. M0 is thus the most robust method, in the sense that it consistently outperformed the best single feature for every target in this study.
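By way of illustration only, the following Python sketch shows the final quantification step under the conventional assumption of a linear relationship between M0 and log10 concentration; the standard-curve values are illustrative rather than taken from Table 5.

import numpy as np

# Illustrative training standard: M0 values at known log10 concentrations.
m0_train = np.array([34.2, 30.9, 27.6, 24.3, 21.0, 17.7])
log_conc_train = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])

slope, intercept = np.polyfit(m0_train, log_conc_train, 1)

def quantify(m0_unknown):
    # Read the unknown sample's concentration off the fitted standard curve.
    return 10.0 ** (slope * m0_unknown + intercept)

print(quantify(25.4))   # estimated copies per reaction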
[Table values: 2.45e+07, 2.12e+09, 8.88e+07, 1.31e+09, 1.47e+09, 7.60e+07, 2.40e+07, 8.53e+08; table headers and units not preserved.]
Nucleotide sequence for synthetic double-stranded DNA ordered from Integrated DNA Technologies containing the lambda phage DNA target.
Forward lambda PCR primer in bold and reverse lambda primer in italics.
TAATGAGGTGCTTTATGACTCTGCCGCCGTCATAAAATGGT
ATGCCGAAAGGGATGCTGAAATTGAGAACGAAAAGCTGCGC
Template preparation from bacterial isolates for real-time PCR assays.
One loop of colonies from the pure culture was suspended in 50 μL digestion buffer (Tris-HCl 10 mmol/L, EDTA 1 mmol/L, pH 8.0, containing 5 U/μL lysozyme) and incubated at 37° C. for 30 min in a dry bath. 0.75 μL of proteinase K at 20 μg/μL (Sigma) was subsequently added, and the solution was incubated at 56° C. for 30 min. After boiling for 10 min, the samples were centrifuged at 10,000×g for 5 min, and the supernatant was transferred to a new tube and stored at −80° C. before use.
Experimental values for construction of lambda DNA standard.
A 242 bp double-stranded DNA molecule (gBlock gene fragment, IDT) containing the desired lambda phage target sequence was used to build the standard curves. Each condition was run in octuplicate.
Experimental values for outlier detection experiment.
Genomic DNA extracted from pure bacterial cultures. All targets at 1.00E+05 gDNA copies per reaction. Each condition run in octuplicate.
Melting curve analysis for lambda DNA standard experiment, as shown in
Melting curve analysis for outlier detection experiment, as shown in
Melting curve analysis for primer concentration variation experiment, as shown in
Melting curve analysis for temperature variation experiment, as shown in
Experimental values for temperature variation experiment.
Lambda DNA as target (NEB, Catalog #N3011S), 10⁶ genomic copies per reaction. Temperature in Celsius. Each experimental condition run in octuplicate.
Experimental values for primer concentration variation experiment.
Lambda DNA as target (NEB, Catalog #N3011S), 10⁶ genomic copies per reaction. Primer concentration in nanomolar (nM), ranging from 25 to 850 nM for each primer. Each experimental condition run in octuplicate.
Advantages and technical effects of aspects and embodiments, including those mentioned above, will be apparent to a skilled person from the foregoing description and from the Figures.
It will be appreciated that the described methods can be carried out by one or more computers under control of one or more computer programs arranged to carry out said methods, said computer programs being stored in one or more memories and/or other kinds of computer-readable media.
Referring to
The computer-readable storage medium may be any form of non-volatile and/or non-transitory data storage device such as a magnetic disk (such as a hard drive or a floppy disc) or optical disk (such as a CD-ROM, a DVD-ROM or a Blu-ray disc), or a memory device (e.g. a ROM, RAM, EEPROM, EPROM, Flash memory or portable/removable memory device) etc., and may store data, application program instructions according to one or more embodiments of the disclosure herein, and/or an operating system. The storage medium may be local to the processor, or may be accessed via a computer network or bus.
The processor may be any apparatus capable of carrying out method steps according to embodiments, and may for example comprise a single data processing unit or multiple data processing units operating in parallel or in cooperation with each other, or may be implemented as a programmable logic array, graphics processor, or digital signal processor, or a combination thereof.
The input interface is arranged to receive input from a user and provide it to the processor, and may comprise, for example, a mouse (or other pointing device), a keyboard and/or a touchscreen device.
The output interface optionally provides a visual, tactile and/or audible output to a user of the system, under control of the processor.
Finally, the network interface provides for the computer to send/receive data over one or more data communication networks.
Embodiments may be carried out on any suitable computing or data processing device, such as a server computer, personal computer, mobile smartphone, set top box, smart television, etc. Such a computing device may contain a suitable operating system such as UNIX, Windows® or Linux, for example.
It will be appreciated that the above-described partitioning of functionality can be altered without affecting the functionality of the methods and systems, or their advantages/technical effects. The above-described functional partitioning is presented as an example in order that the invention can be understood, and is thus conceptual rather than limiting, the invention being defined by the appended claims. The skilled person will also appreciate that the described method steps may be combined or carried out in a different order without affecting the advantages and technical effects resulting from the invention as defined in the claims.
It will be further appreciated that the described functionality can be implemented as hardware (for example, using field programmable gate arrays, ASICs or other hardware logic), firmware and/or software modules, or as a mixture of those modules. It will also be appreciated that a computer-readable storage medium and/or a transmission medium (such as a communications signal, data broadcast, communications link between two or more computers, etc.), carrying a computer program arranged to implement one or more aspects of the invention, may embody aspects of the invention. The term “computer program,” as used herein, refers to a sequence of instructions designed for execution on a computer system, and may include source or object code, one or more functions, modules, executable applications, applets, servlets, libraries, and/or other instructions that are executable by a computer processor.
It will be further appreciated that the set of first data (training data) and second data (unknown sample data) can be obtained via the above-mentioned networked computer system components, such as by being retrieved from storage or being input by a user via an input device. Results data, such as inlier/outlier determinations and determined sample concentrations, can also be stored using the aforementioned storage elements, and/or output to a display or other output device. The multidimensional standard curve 130 and/or the standard curve defined by the unidimensional function can also be stored using such storage elements. The aforementioned processor can process such stored and inputted data, as described herein, and store/output the results accordingly.
As will be appreciated by the skilled person, details of the above embodiment may be varied without departing from the scope of the present invention as defined by the appended claims. Many combinations, modifications, or alterations to the features of the above embodiments will be readily apparent to the skilled person and are intended to form part of the disclosure. Any of the features described specifically relating to one embodiment or example may be used in any other embodiment by making appropriate changes as apparent to the skilled person in the light of the above disclosure.
Number | Date | Country | Kind |
---|---|---|---|
1809418.5 | Jun 2018 | GB | national |
The present application is a National Phase entry of PCT Application No. PCT/EP2019/065039, filed Jun. 7, 2019, which claims priority from Great Britain Application No. 1809418.5 filed Jun. 8, 2018, all of these disclosures being hereby incorporated by reference in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2019/065039 | 6/7/2019 | WO | 00 |