Technique for quantitating biological markers using quantum resonance interferometry

Information

  • Patent Grant
  • Patent Number
    6,704,662
  • Date Filed
    Tuesday, July 2, 2002
  • Date Issued
    Tuesday, March 9, 2004
  • Inventors
  • Original Assignees
  • Examiners
    • Horlick; Kenneth R.
    • Kim; Young
  • Agents
    • Kukkonen, III; Carl A.
Abstract
A technique is described for quantitating biological indicators, such as viral load, using interferometric interactions such as quantum resonance interferometry. A biological sample is applied to an arrayed information structure that has a plurality of elements that emit data indicative of viral load. A digitized output pattern of the arrayed information structure is interferometrically enhanced by generating interference between the output pattern and a reference wave. The interferometrically enhanced output pattern is then analyzed to identify emitted data indicative of viral load, which in turn is used to determine the viral load.
Description




FIELD OF THE INVENTION




The invention generally relates to techniques for monitoring the effectiveness of medical therapies and dosage formulations, and in particular to techniques for monitoring therapy effectiveness using viral load measurements.




BACKGROUND OF THE INVENTION




It is often desirable to determine the effectiveness of therapies, such as those directed against viral infections, including therapies involving individual drugs, combinations of drugs, or other related therapies. One conventional technique for monitoring the effectiveness of a viral infection therapy is to measure and track a viral load associated with the viral infection, wherein the viral load is a measurement of the number of copies of the virus within a given quantity of blood, such as per milliliter of blood. The therapy is deemed effective if the viral load is decreased as a result of the therapy. A determination of whether any particular therapy is effective is helpful in determining the appropriate therapy for a particular patient and also for determining whether a particular therapy is effective for an entire class of patients. The latter is typically necessary in order to obtain FDA approval of any new drug or medical device therapy. Viral load monitoring is also useful for research purposes, such as for assessing the effectiveness of new antiviral compounds to determine, for example, whether it is useful to continue developing particular antiviral compounds or to attempt to gain FDA market approval.




A test to determine the viral load can be performed using blood samples, such as T-cell fractions or other standard sources. The viral load is typically reported either as an absolute number, i.e., the number of virus particles per milliliter of blood, or on a logarithmic scale. Likewise, decreases in viral load are reported in absolute numbers, on logarithmic scales, or as percentages.
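As a concrete illustration of these reporting conventions, the short Python calculation below converts a pair of hypothetical measurements between the absolute, logarithmic and percentage forms (the copy counts are invented for illustration and are not drawn from the patent):

import math

# Hypothetical viral load measurements in copies per milliliter of blood.
baseline_copies = 20000.0   # viral load before therapy
followup_copies = 5000.0    # viral load after a course of therapy

# Logarithmic representations of the same measurements.
log_baseline = math.log10(baseline_copies)   # ~4.3 log copies/mL
log_followup = math.log10(followup_copies)   # ~3.7 log copies/mL

# The decrease reported three ways: absolute, log10 and percentage.
absolute_drop = baseline_copies - followup_copies
log_drop = log_baseline - log_followup       # ~0.6 log decrease
percent_drop = 100.0 * absolute_drop / baseline_copies

print(f"absolute drop: {absolute_drop:.0f} copies/mL")
print(f"log10 drop:    {log_drop:.2f} log")
print(f"percent drop:  {percent_drop:.1f} %")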




It should be noted, though, that a viral load captures only a fraction of the total virus in the body of the patient, i.e., it tracks only the quantity of circulating virus. However, viral load is an important clinical marker because the quantity of circulating virus is the most important factor in determining disease outcome, and changes in the viral load occur prior to changes in other detectable factors, such as CD4 levels. Indeed, measurement of the viral load is rapidly becoming the accepted method for predicting the clinical progression of certain diseases such as HIV.




Insofar as HIV is concerned, HIV-progression studies have indicated a significant correlation between the risk of acquiring AIDS and an initial HIV baseline viral load level. In addition to predicting the risk of disease progression, viral load testing is useful in predicting the risk of transmission. In this regard, infected individuals with higher viral load are more likely to transmit the virus than others.




Currently, there are several different systems for monitoring viral load, including quantitative polymerase chain reaction (PCR) and nucleic acid hybridization. Herein, the term viral load refers to any virological measurement using RNA, DNA, or p24 antigen in plasma. Note that viral RNA is a more sensitive marker than p24 antigen. p24 antigen has been shown to be detectable in less than 50% of asymptomatic individuals. Moreover, levels of viral RNA rise and fall more rapidly than levels of CD4+ lymphocytes. Hence, changes in infection can be detected more quickly using viral load studies based upon viral RNA than using CD4 studies. Moreover, viral load values have to date proven to be an earlier and better predictor of long term patient outcome than CD4-cell counts. Thus, viral load determinations are rapidly becoming an important decision aid for anti-retroviral therapy and disease management. Viral load studies, however, have not yet completely replaced CD4+ analysis, in part because viral load only monitors the progress of the virus during infection whereas CD4+ analysis monitors the immune system directly. Nevertheless, even where CD4+ analysis is effective, viral load measurements can supplement the information provided by the CD4 counts. For example, an individual undergoing long term treatment may appear stable based upon the observation of clinical parameters and CD4 counts. However, the viral load of the patient may nevertheless be increasing. Hence, a measurement of the viral load can potentially assist a physician in determining whether to change therapy despite the appearance of long term stability based upon CD4 counts.




Thus, viral load measurements are very useful. However, there remains considerable room for improvement. One problem with current viral load measurements is that the threshold level for detection, i.e., the nadir of detection, is about 400-500 copies per milliliter. Hence, currently, if the viral load is below 400-500 copies per milliliter, the virus is undetectable. The virus may nevertheless remain within the body. Indeed, considerable quantities of the virus may remain within the lymph system. Accordingly, it would be desirable to provide an improved method for measuring viral load which permits viral load levels of less than 400-500 copies per milliliter to be reliably detected.




Another problem with current viral load measurement techniques is that the techniques are typically only effective for detecting exponential changes in viral loads. In other words, current techniques will only reliably detect circumstances wherein the viral load increases or decreases by an order of magnitude, such as by a factor of 10. In other cases, viral load measurements only detect a difference between undetectable levels of the virus and detectable levels of the virus. As can be appreciated, it would be highly desirable to provide an improved method for tracking changes in viral load which does not require an exponential change in the viral load for detection or which does not require a change from an undetectable level to a detectable level. Indeed, with current techniques, an exponential or sub-exponential change in the viral load results only in a linear change in the parameters used to measure the viral load. It would instead be highly desirable to provide a method for monitoring the viral load which converts a linear change in the viral load into an exponential change within the parameters being measured, to thereby permit very slight variations in viral load to be reliably detected. In other words, current viral load detection techniques are useful only as a qualitative estimator, rather than as a quantitative estimator.




One reason that current viral load measurements do not reliably track small scale fluctuations in the actual number of viruses is that a significant uncertainty in the measurements often occurs. As a result, individual viral load measurements have little statistical significance and a relatively large number of measurements must be made before any statistically significant conclusions can be drawn. As can be appreciated, it would be desirable to provide a viral load detection technique which can measure the viral load such that the statistical error associated with a single viral load measurement is relatively low, to permit individual viral load measurements to be more effectively exploited.




Moreover, because individual viral load measurements are not particularly significant when using current methods, treatment decisions for individual patients based upon the viral load measurements must be based only upon long term changes or trends in the viral load resulting in a delay in any decision to change therapy. It would be highly desirable to provide an improved method for measuring and tracking viral load such that treatment decisions can be made much more quickly based upon short term trends of measured viral load.




As noted above, the current nadir of viral load detectability is at 400-500 copies of the virus per milliliter. Anything below that level is deemed to be undetectable. Currently the most successful and potent multi-drug therapies are able to suppress viral load below that level of detection in about 80-90 percent of patients. Thereafter, viral load is no longer an effective indicator of therapy. By providing a viral load monitoring technique which reduces the nadir of detectability significantly, the relative effectiveness of different multi-drug therapies can be more effectively compared. Indeed, new FDA guidelines for providing accelerated approval of a new drug-containing regimen require that the regimen suppress the viral load below the current nadir of detection in about 80 to 85 percent of cases. If the new regimen suppresses the viral load to undetectable levels in less than 80 to 85 percent of the cases, the new drug will gain accelerated approval only if it has other redeeming qualities such as a preferable dosing regimen (such as only once or twice per day), a favorable side effect profile, or a favorable resistance or cross-resistance profile. Thus, the ability of a regimen to suppress the viral load below the level of detection is an important factor in FDA approval. However, because the level of detectability remains relatively high, full approval is currently not granted by the FDA solely based upon the ability of the regimen to suppress the viral load below the minimum level of detection. Rather, for full approval, the FDA may require a further demonstration of the durability of the regimen, i.e., a demonstration that the drug regimen suppresses the viral load below the level of detectability and keeps it below the level of detectability for some period of time.




As can be appreciated, if a new viral load measurement and tracking technique were developed which could reliably detect viral load at levels much lower than the current nadir of detectability, the FDA may be able, using the new technique, to much more precisely determine the effectiveness of a drug regimen for the purposes of granting approval, such that a demonstration of the redeeming qualities would no longer be necessary.




For all of these reasons, it would be highly desirable to provide an improved technique for measuring and tracking viral load capable of providing much more precise and reliable estimates of the viral load and in particular capable of reducing the nadir of detectability significantly. The present invention is directed to this end.




SUMMARY OF THE INVENTION




In accordance with a first aspect of the invention, a method is provided for determining the effectiveness of a therapy, such as an anti-viral therapy, by analyzing biochip output patterns generated from biological samples taken at different sampling times from a patient undergoing the therapy. In accordance with the method, a viral diffusion curve associated with a therapy of interest is generated and each of the output patterns representative of hybridization activity is then mapped to coordinates on the viral diffusion curve using fractal filtering. A degree of convergence between the mapped coordinates on the viral diffusion curve is determined. Then, a determination is made as to whether the therapy of interest has been effective based upon the degree of convergence.




In an exemplary embodiment, the viral diffusion curve is spatially parameterized such that samples map to coordinates near the curve maxima, if the viral load is increasing (i.e., therapy or dosage is ineffective). In this manner, any correlation between rate and extent of convergence across different patient samples is exploited to provide a quantitative and qualitative estimate of therapy effectiveness.




Also in the exemplary embodiment, the biological sample is a DNA sample. The output pattern of the biochip is quantized as a dot spectrogram. The viral diffusion curve is generated by inputting parameters representative of viral load studies for the therapy of interest, generating a preliminary viral diffusion curve based upon the viral load studies; and then calibrating a degree of directional causality in the preliminary viral diffusion curve to yield the viral diffusion curve. The parameters representative of the viral load studies include one or more of baseline viral load (BVL) set point measurements at which detection is achieved, BVL at which therapy is recommended and viral load markers at which dosage therapy is recommended. The step of generating the preliminary viral diffusion curve is performed by selecting a canonical equation representative of the viral diffusion curve, determining expectation and mean response parameters for use in parameterizing the equation selected to represent the viral diffusion curve and parameterizing the equation selected to represent the viral diffusion curve to yield the preliminary viral diffusion curve.




Also, in the exemplary embodiment, each dot spectrogram is mapped to the viral diffusion curve using fractal filtering by generating a partitioned iterated fractal system (IFS) model representative of the dot spectrogram, determining affine parameters for the IFS model, and then mapping the dot spectrogram onto the viral diffusion curve using the IFS. Before the dot spectrograms are mapped to the viral diffusion curve, the dot spectrograms are interferometrically enhanced. After the mapping, any uncertainty in the mapped coordinates is compensated for using non-linear information filtering.




In accordance with a second aspect of the invention, a method is provided for determining the viral load within a biological sample by analyzing an output pattern of a biochip to which the sample is applied. In accordance with the method, a viral diffusion curve associated with a therapy of interest is generated and then calibrated using at least two viral load measurements. Then the output pattern for the sample is mapped to coordinates on the calibrated viral diffusion curve using fractal filtering. The viral load is determined from the calibrated viral diffusion curve by interpreting the coordinates of the viral diffusion curve.




Apparatus embodiments are also provided. By exploiting aspects of the invention, disease management decisions related to disease progression, therapy and dosage effectiveness may be made by tracking the coordinates on the viral diffusion curve as successive DNA-/RNA-based microarray samples are collected and analyzed.











BRIEF DESCRIPTION OF THE DRAWINGS




The features, objects, and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout, and wherein:





FIG. 1 is a flow chart illustrating an exemplary method for determining the effectiveness of a viral therapy in accordance with the invention.

FIG. 2 is a flow chart illustrating an exemplary method for generating Viral Diffusion Curves for use with the method of FIG. 1.

FIG. 3 is a flow chart illustrating an exemplary method for mapping dot spectrograms onto Viral Diffusion Curves using fractal filtering for use with the method of FIG. 1.

FIG. 4 is a block diagram illustrating the effect of the fractal filtering of FIG. 3.

FIG. 5 is a flow chart illustrating an exemplary method for compensating for uncertainty using non-linear information filtering for use with the method of FIG. 1.











DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS




With reference to the figures, exemplary method embodiments of the invention will now be described. The exemplary method will be described primarily with respect to the determination of changes in viral loads based upon the output patterns of a hybridized biochip microarray using DNA samples, but principles of the invention may be applied to protein-based samples or to other types of output patterns as well.




With reference to FIG. 1, steps will be described for generating viral diffusion curves (VDC's) for use in processing DNA biomicroarray output patterns to determine the effectiveness of therapies imposed upon a patient providing the samples for which the outputs are generated. Then, steps will be described for processing the specific output patterns using the VDC's.




An underlying clinical hypothesis of the exemplary method is that antiviral treatment should inhibit viral replication and lower an individual's viral load from baseline or suppress rising values. A stationary or rising viral load after the introduction of antiviral therapy indicates a lack of response to the drug(s) or the development of drug resistance. The VDC exploits the underlying hypothesis in part by correlating the rate of disease progression to a sample point value such that a change in sample point indicates progression.




The Method




At step 100, parameters representative of viral load studies for the therapy of interest are input. A preliminary viral diffusion curve is generated, at step 102, based upon the viral load studies. The parameters representative of the viral load studies include baseline viral load (BVL) set point measurements at which detection is achieved, BVL at which therapy is recommended and viral load markers at which dosage therapy is recommended. At step 104, a degree of directional causality in the preliminary viral diffusion curve is calibrated to yield the final viral diffusion curve.




Steps 100-104 are performed off-line for setting up the VDC's. These steps need be performed only once for a given therapy and for a given set of baseline viral load measurements. Thereafter, any number of DNA biomicroarray output patterns may be processed using the VDC's to determine the effectiveness of the therapy. Preferably, VDC's are generated for an entire set of therapies that may be of interest such that, for any new DNA biomicroarray output pattern, the effectiveness of any of the therapies can be quickly determined using the set of VDC's. In general, the aforementioned steps need be repeated only to update the VDC's based upon new and different baseline viral load studies or if new therapies of interest need to be considered.




In the following, steps will be summarized for processing DNA biomicroarray output patterns using the VDC's to determine whether any therapies of interest represented by the VDC's are effective. To determine the effectiveness of a therapy, at least two samples of DNA to be analyzed are collected from a patient, preferably taken some time apart, and biomicroarray patterns are generated therefrom. In other cases, though, the different samples are collected from different patients.




The output patterns for the DNA biomicroarray are referred to herein as dot spectrograms. A dot spectrogram is generated for each sample from an N by M DNA biomicroarray. An element of the array is an "oxel": o(i,j). An element of the dot spectrogram is a hixel: h(i,j). The dot spectrogram is represented by cell amplitudes given by Φ(i,j) for i: 1 to N, and j: 1 to M.
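The relationship between oxels, hixels and the amplitudes Φ(i,j) can be pictured with a small Python data-structure sketch; the array dimensions and amplitude values below are hypothetical stand-ins for a measured readout:

import numpy as np

# Hypothetical N x M biomicroarray: each site o(i, j) is an "oxel" carrying
# oligonucleotides, and the corresponding element of the readout image is a
# hixel h(i, j) with amplitude Phi(i, j).
N, M = 8, 8

# Simulated hybridization readout: cell amplitudes Phi(i, j), i: 1..N, j: 1..M.
rng = np.random.default_rng(0)
dot_spectrogram = rng.random((N, M))     # stands in for a measured pattern

# Accessing the amplitude of the hixel at array position (i, j).
i, j = 3, 5
phi_ij = dot_spectrogram[i - 1, j - 1]   # 1-based indices as in the text
print(f"Phi({i},{j}) = {phi_ij:.3f}")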




Dot spectrograms are generated from the samples taken at different times using a prefabricated DNA biomicroarray at step 106. The dot spectrograms are interferometrically enhanced at step 108. Each dot spectrogram is then mapped to coordinates on the viral diffusion curves using fractal filtering at step 110. After the mapping, any uncertainty in the mapped coordinates is compensated for at step 112 using non-linear information filtering.




VDC coordinates are initialized at step 114, then updated in accordance with the filtered dot spectrograms at step 116. A degree of convergence between the mapped coordinates on the viral diffusion curves is then determined at step 118 and a determination is made as to whether the therapy of interest has been effective. The determination is based upon whether the degree of convergence increases from one DNA sample to another. An increase in the degree of convergence is representative of a lack of effectiveness of the therapy of interest. Hence, if the degree of convergence decreases, then execution proceeds to step 120, wherein a signal is output indicating that the therapy is effective. If the degree of convergence increases, then execution proceeds to step 122, wherein VDC temporal scale matching is performed. Then a determination is made at step 124 whether an effectiveness time scale has been exceeded. If exceeded, then a conclusion is drawn that the effectiveness of the viral therapy cannot be established even if more samples are analyzed. If not exceeded, then execution returns to step 106, wherein another sample taken from the same patient at a later time is analyzed by repeating steps 106 through 118.
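The control flow of steps 106 through 124 can be summarized in the high-level Python driver sketch below. It is only an illustration of the loop described above, not the patented implementation; every callable named here (enhance, fractal_map, nif_compensate, convergence_increased) is a hypothetical placeholder for the operations detailed in later sections.

def assess_therapy(samples, vdc, effectiveness_time_scale,
                   enhance, fractal_map, nif_compensate, convergence_increased):
    """Skeleton of steps 106-124; returns 'effective', 'ineffective' or 'undetermined'.

    `samples` yields (sample_time, dot_spectrogram) pairs.  The four callables are
    hypothetical placeholders for interferometric enhancement (step 108), fractal
    filtering (step 110), non-linear information filtering (step 112) and the
    convergence test (step 118) described in the sections that follow.
    """
    coordinates = []                                      # VDC coordinates per sample
    for sample_time, spectrogram in samples:              # step 106: next sample
        enhanced = enhance(spectrogram)                   # step 108
        coordinate = nif_compensate(fractal_map(enhanced, vdc), vdc)   # steps 110/112
        coordinates.append((sample_time, coordinate))     # steps 114/116

        if len(coordinates) < 2:
            continue                                      # need two samples to compare
        if not convergence_increased(coordinates, vdc):   # step 118
            return "effective"                            # step 120
        # Steps 122/124: temporal scale matching against the effectiveness window.
        elapsed = coordinates[-1][0] - coordinates[0][0]
        if elapsed > effectiveness_time_scale:
            return "undetermined"                         # cannot be established
    return "ineffective"                                  # convergence kept increasing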




Viral Load Studies




Viral load studies for therapies of interest are parameterized at step 100 as follows. The therapy of interest is selected from a predetermined list of therapies for which viral load studies have been performed. Measurements from the viral load studies are input for the therapy of interest. As noted, the viral load measurements include one or more of: Baseline Viral Load (BVL) set point measurements at which detection is achieved; BVL at which therapy is recommended; and VL markers at which a dosage change is recommended.




Data for the viral load measurements are obtained, for example, from drug qualification studies and at a minimum include dosages, viral limits, and the time cycles within which an anti-viral drug is deemed effective. The data are typically qualified with age/weight outliers and patient history. The attribute relevant to this claim is γ_1, or BVL_LOW, which corresponds to the lowest detection limit at which a therapy has been shown to be effective using conventional assays or any other diagnostic means. BVL_NP-LOW denotes the lowest threshold at which viral load detection is achieved using a nucleotide probe. Using the interferometric enhancement technique, BVL_NP-LOW << BVL_LOW.




Generation of Viral Diffusion Curves




Referring now to FIG. 2, the viral diffusion curves are generated as follows. An equation is selected for representing the VDC at step 200. Expectation (μ) and mean response parameters are determined at step 202 for use in parameterizing the selected equation. Then the equation selected to represent the VDC is parameterized at step 204 to yield a numerical representation of the VDC. These steps will now be described in greater detail.




These steps populate a canonical machine representation, denoted as VDC(i, Γ, λ, κ) (which is a special case of the Fokker-Planck equation), to calibrate responses from a viral load detection DNA-array based hybridization biomicroarray, wherein:

i is the index for a diagnostic condition/therapy of interest,

Γ denotes the parameter vector characterizing the VDC,

λ denotes the clinical endpoints vector that indicates detectability thresholds for a specific DNA-hybridization array implementation, and

κ corresponds to the uncertainty interval estimates.




An example of an equation selected for representing the VDC is:

  ∂ρ/∂t = div(∇Ψ(x) ρ) + Δρ/β,   ρ(x, 0) = ρ_0(x)

The potential Ψ(x): R^n → [0, ∞) is a smooth function, β > 0 is a selected constant, and ρ_0(x) is a probability density on R^n.




Preferably, the diffusion potential of the equation and the BVL data are such that:

  Ψ(x) < c [BVL_NP-LOW | BVL_LOW]

The constant c is generally set to

  c = [number of amplitude discretization levels] / [log(PCR amplification factor) × avg(oligonucleotides/oxel) × tagging efficiency × binding efficiency]

Binding efficiency is difficult to quantify analytically for a biomicroarray device technology. Hence, for use in the above equation, an estimate of the binding efficiency is preferably employed. A binding efficiency of 30% (0.3) is appropriate, though other values may alternatively be used. Depending upon the specific biomicroarray used, the constant c typically ranges between 0.0001 and 0.5.
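A short numerical sketch of the constant c follows. The device parameters are hypothetical, the 30% binding efficiency is the estimate suggested above, and a base-10 logarithm is assumed because the text does not state the base:

import math

# Hypothetical biomicroarray device parameters (illustrative values only).
amplitude_levels = 256             # number of amplitude discretization levels
pcr_amplification_factor = 1.0e6   # overall PCR amplification factor
oligos_per_oxel = 1000.0           # average oligonucleotides per oxel (hypothetical)
tagging_efficiency = 0.8           # fraction of strands successfully tagged
binding_efficiency = 0.3           # the 30% estimate suggested in the text

# c = [amplitude discretization levels] /
#     [log(PCR factor) * avg(oligos/oxel) * tagging efficiency * binding efficiency]
c = amplitude_levels / (math.log10(pcr_amplification_factor)
                        * oligos_per_oxel
                        * tagging_efficiency
                        * binding_efficiency)

print(f"c = {c:.4f}")   # ~0.18 for these inputs, inside the quoted 0.0001-0.5 range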




Expectation and Mean Response Parameters




The expectation (μ) and mean response parameters are then determined at step 202 for use in parameterizing the equation selected to represent the VDC. The expectation and mean response values are determined by: 1) performing conventional PCR amplification; 2) obtaining calibrated viral counts from the PCR amplification; 3) determining enhanced and normalized hybridization amplitude mean and variance values corresponding to the calibrated viral counts; and 4) matching the enhanced and normalized hybridization amplitude mean and variance values.




Two synthetic amplification techniques (in addition to PCR and any designer tagging) are used to achieve VL estimation above the BVL limit set for the exemplary embodiment of the method, namely (a) readout pre-conditioning, and (b) nonlinear interferometric enhancement. Moreover, the expectation match condition implies that:

  Expectation[Log(interferometrically enhanced image)]_{expression set of interest} / Expectation[preconditioned image amplitude]_{expression set of interest} ≥ 1
Variance matching is done similarly with respect to the biomicroarray readout. The lower bound of the mean response value can be given by:

  Variance[Log(interferometrically enhanced image)]_{expression set of interest} / Variance[preconditioned image amplitude]_{expression set of interest} ≥ 1
Using the above expression, a conservative lower bound for interferometric enhancement is estimated for each nucleotide expression of interest. Since the array fabrication device is assumed to have (i) an identical oligonucleotide density per oxel and (ii) equal length oligonucleotides, the same mean response amplitude can be assumed. If these two assumptions are not met then bounds need to be individually calculated and averaged using the above formula. Another assumption is that the binding efficiency is statistically independent of the actual oligonucleotide sequence. If this assumption does not hold for the specific device technology then the binding efficiency should be provided as well for each expressed sequence of interest. So the computational analysis method uses the analytically derived lower bounds, as computed using the above equation. This is a one-time calculation only and is done offline at design time.
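The matching of the enhanced and preconditioned readouts can be sketched numerically as follows. The two image arrays are simulated and the ratio-of-expectations and ratio-of-variances simply follow the reconstruction of the match conditions given above, so the sketch is assumption-laden rather than definitive:

import numpy as np

rng = np.random.default_rng(1)

# Simulated readouts over the expression set of interest (hypothetical data):
# a preconditioned image amplitude and an interferometrically enhanced image.
preconditioned = 1.0 + 0.1 * rng.standard_normal(500)     # baseline amplitudes
enhanced = np.exp(3.0 + 0.4 * rng.standard_normal(500))   # enhanced amplitudes

# Expectation match: mean of Log(enhanced image) over the expression set,
# compared with the mean of the preconditioned image amplitude.
expectation_ratio = np.mean(np.log(enhanced)) / np.mean(preconditioned)

# Variance matching, done similarly with respect to the biomicroarray readout.
variance_ratio = np.var(np.log(enhanced)) / np.var(preconditioned)

# A conservative lower bound for the interferometric enhancement is taken here
# as the smaller of the two ratios (one value per expression of interest).
lower_bound = min(expectation_ratio, variance_ratio)
print(f"expectation ratio: {expectation_ratio:.2f}")
print(f"variance ratio:    {variance_ratio:.2f}")
print(f"conservative lower bound: {lower_bound:.2f}")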




Parameterization of the VDC




The equation selected to represent the VDC is then parameterized using the expectation and mean response values at step 204 to yield a numerical representation of the VDC using

  y(x) = β_0 + β_1 x + β_2 x²

subject to constraints

  ẋ = γ sin^k[(ω/α)(β_0 + β_1 x + β_2 x²)] sin(ωt) + ε(t)

with constants α, β, γ and ε << 1.




The utility of this parameterization is established as follows. The VDC canonical representation is based on a variational formulation of the Fokker-Planck equation. The Fokker-Planck (FP) equation, or forward Kolmogorov equation, describes the evolution of the probability density for a stochastic process associated with an Ito stochastic differential equation. The exemplary method exploits the VDC to model physical time-dependent phenomena in which randomness plays a major role. The specific variant used herein is one for which the drift term is given by the gradient of a potential. For a broad class of potentials (that correspond to statistical variability in therapy response), a time-discrete, iterative variational scheme is constructed whose solutions converge to the solution of the Fokker-Planck equation. The time-step is governed by the Wasserstein metric on probability measures. In this formulation the dynamics may be regarded as a gradient flux, or a steepest descent for the free energy with respect to the Wasserstein metric. This parameterization draws from the theory of stochastic differential equations, wherein a (normalized) solution to a given Fokker-Planck equation represents the probability density for the position (or velocity) of a particle whose motion is described by a corresponding Ito stochastic differential equation (or Langevin equation). The drift coefficient is a gradient. The method exploits "designer conditions" on the drift and diffusion coefficients so that the stationary solution of a Fokker-Planck equation satisfies a variational principle: it minimizes a certain convex free energy functional over an appropriate admissible class of probability densities.




A physical analogy is to an optimal control problem which is related to the heating of a probe in a kiln. The goal is to control the heating process in such a way that the temperature inside the probe follows a certain desired temperature profile. The biomolecular analogy is to seek a certain property in the parameterized VDC—namely, an exponential jump in the VDC coordinate position for “small linear changes in the viral count”.




This method is in contradistinction to conventional calibration strategies which obtain a linear or superlinear shift in quantization parameter for an exponential shift in actual viral count.




As noted, the form of the FP equation chosen is

  ∂ρ/∂t = div(∇Ψ(x) ρ) + Δρ/β,   ρ(x, 0) = ρ_0(x)

where the potential Ψ(x): R^n → [0, ∞) is a smooth function, β > 0 is a selected constant, and ρ_0(x) is a probability density on R^n. The solution ρ(t,x) is a probability density on R^n for almost every fixed time t. So the distribution ρ(t,x) ≥ 0 for almost every (t,x) ∈ (0,∞) × R^n, and

  ∫_{R^n} ρ(t,x) dx = 1

for almost every t ∈ (0,∞).




It is reasonably assumed that hybridization array device physics for the DNA biomicroarray (i.e., corresponding to the potential Ψ) has an approximately linear response to the nucleotide concentration and the response is monotonic with bounded drift. So,

  ρ_s(x) = (1/Z) exp(−βΨ(x))
where the partition function Z is given by

  Z = ∫_{R^n} exp(−βΨ(x)) dx
In this model the basis for device physics design is that the potential needs to be modulated such that it grows rapidly enough for Z to be finite. This is not achieved by conventional methods. However, a technique which does achieve this result is described in co-pending U.S. patent application Ser. No. 09/253,789, now U.S. Pat. No. 6,136,541 filed contemporaneously herewith, entitled “Method and Apparatus for Analyzing Hybridized DNA Microarray Patterns Using Resonant Interactions Employing Quantum Expressor Functions”, which is incorporated by reference herein.




The probability measure ρ_s(x)dx is the unique invariant measure for the Markov random field (MRF) fit to the empirical viral load data.




The method exploits a special dynamical effect to design ρ. The method restricts the FP equation form above to a more specific case: a random walk emulating motion between critical equilibrium points.




To aid in understanding this aspect of the invention, consider the diffusion form

  ∂ρ(x,t)/∂t = (1/2) D² ∂²ρ(x,t)/∂x²

where D² = πα² and α is a constant.
A specific VDC shape is parameterized by:

  y(x) = β_0 + β_1 x + β_2 x²

subject to constraints

  ẋ = γ sin^k[(ω/α)(β_0 + β_1 x + β_2 x²)] sin(ωt) + ε(t)

with constants α, β, γ and ε << 1.
These constants are set based upon the dynamic range expected for the viral load. Thus, if the viral load is expected to vary only within a factor of 10, the constants are set accordingly. If the viral load is expected to vary within a greater range, different constants are employed. The actual values of the constants also depend upon the particular disease.
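A minimal numerical sketch of the parameterization follows, integrating the constrained dynamics for x(t); the values of α, β_0, β_1, β_2, γ, ω, k and ε are illustrative only and would in practice be set from the expected dynamic range and the particular disease as just described:

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical VDC constants (illustrative only; chosen for a modest dynamic range).
beta0, beta1, beta2 = 0.0, 1.0, 0.2     # quadratic VDC shape y(x)
alpha, gamma, omega = 2.0, 0.5, 1.0     # constraint-dynamics constants
k_exp = 3                               # exponent k in sin^k[...]
eps = 1e-3                              # small epsilon, eps << 1

def y(x):
    """Quadratic VDC shape y(x) = beta0 + beta1*x + beta2*x^2."""
    return beta0 + beta1 * x + beta2 * x ** 2

def x_dot(t, x):
    """dx/dt = gamma * sin^k[(omega/alpha) y(x)] * sin(omega t) + eps."""
    return gamma * np.sin((omega / alpha) * y(x)) ** k_exp * np.sin(omega * t) + eps

# Integrate the constrained coordinate over several periods.
solution = solve_ivp(x_dot, t_span=(0.0, 20.0), y0=[0.1],
                     t_eval=np.linspace(0.0, 20.0, 201))

# The numerical representation of the VDC: pairs (x(t), y(x(t))).
vdc_x = solution.y[0]
vdc_y = y(vdc_x)
print(f"x range: [{vdc_x.min():.3f}, {vdc_x.max():.3f}]")
print(f"y range: [{vdc_y.min():.3f}, {vdc_y.max():.3f}]")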




Where the following conditions are met

  Expectation match:  E(x) = ∫_{−∞}^{∞} x f(x) dx = μ

  Variance:  σ² = ∫_{−∞}^{∞} (x − μ)² f(x) dx

and

  ∫_{−∞}^{∞} f(x) dx = 1
The expectation and mean response parameters for use in these equations are derived, as described above, from matching the enhanced and normalized hybridization amplitude mean and variance that correspond to calibrated viral counts (via classical PCR amplification).




A distribution represented by the above equations then satisfies the following form with a prescribed probability distribution

  ẋ = γ sin^k[(ω/α) y(x)] sin(ωt) + ε(t)

Assuming:

  y′ = ∂y/∂x > β > 0,   β = constant

and ε(t) = ε_0 a y, such that

  ȧ = a^{1/3}(y − 1)(y + 1) − ε_0 a

and the distribution controlling equation is

  f(x) = 0.5 |y′(x)|

such that y(−1) < x < y(1).
The characteristic timescale of response for this system is given by

  T* = (1/ω) arccos[1 − B(1/3, 1/3)^{2/3} α/(ωγ)]

The successive sample points must show motion on this characteristic timescale. The VDC is designed such that the sampling time falls well within the characteristic time.
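Under the reconstruction of T* given above, the characteristic timescale can be evaluated directly. The sketch below uses the Euler Beta function from SciPy and hypothetical constants, and simply checks that a planned sampling interval falls within T*:

import numpy as np
from scipy.special import beta

# Hypothetical VDC constants (chosen so the arccos argument stays within [-1, 1]).
alpha, omega, gamma = 0.1, 1.0, 0.5

# Characteristic timescale, following the reconstructed expression
# T* = (1/omega) * arccos[1 - B(1/3, 1/3)^(2/3) * alpha / (omega * gamma)].
b = beta(1.0 / 3.0, 1.0 / 3.0)
argument = 1.0 - (b ** (2.0 / 3.0)) * alpha / (omega * gamma)
t_star = np.arccos(argument) / omega

# Sampling-time check: the VDC is designed so that the interval between
# successive biomicroarray samples falls well within T*.
sampling_interval = 0.25 * t_star        # example: sample at a quarter of T*
print(f"T* = {t_star:.3f}")
print(f"sampling interval {sampling_interval:.3f} "
      f"{'is' if sampling_interval < t_star else 'is NOT'} within T*")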




As noted, the actual information used for populating the above parameters is available from the following: baseline viral load (BVL) set point measurements at which detection is achieved; BVL at which therapy is recommended; and VL markers at which a dosage change is recommended.




The following provides an example of preclinical data that is available to assist in parameterizing the VDC.




NOTIONAL VIRAL LOAD MANAGEMENT EXAMPLE




This is a synthetic example to illustrate how data from clinical studies may be used to calibrate the VDC.




Viral load analysis studies in HIV progression, using conventional assays, have shown that neither gender, age, HCV co-infection, past history of symptomatic HIV-1 infection, duration of HIV-1 infection nor risk group is associated with a higher risk of the baseline viral load (BVL) increasing to the virologic end-point. However, patients with a high BVL, between 4,000-6,000 copies, had a 10-fold higher risk of an increasing viral load level compared with patients with a BVL below 1,500 copies/mL. Thus, baseline viral load set point measurements provide an important indicator for the onset of disease.




Initiation of antiretroviral therapy is generally recommended when the CD4+ T-cell count is <600 cells/mL and the viral load level is >6,000 copies/mL. When the viral load is >28,000 copies/mL, initiation of therapy is recommended regardless of other laboratory markers and clinical status.




Effective antiretroviral treatment may be measured by changes in plasma HIV RNA levels. The ideal end point for effective antiviral therapy is to achieve undetectable levels of virus (<400 copies/mL). A decrease in HIV RNA levels of at least 0.5 log suggests effective treatment, while a return to pretreatment values (±0.5 log) suggests failure of drug treatment.




When HIV RNA levels decline initially but return to pretreatment levels, the loss of therapy effectiveness has been associated with the presence of drug-resistant HIV strains.




The therapy-specific preclinical viral load markers (such as low and high limits in the above example) are used to establish actual BVL boundaries for the VDC associated with a particular therapy. In this regard, the determination of the BVL parameters is disease specific. For example, in HIV methods such as RT-PCR, bDNA or NASBA are used. Other diseases use other assays. Typically, once the parameters of the VDC equation have been set (i.e. constants α, β, γ and ε), only two viral load markers are needed to complete the parameterization of the VDC. This is in contrast to previous techniques whereby expensive and laborious techniques are required to determine the shape of a viral diffusion curve. The present invention succeeds in using only two viral load markers in most cases by exploiting the canonical VDC described above which has predefined properties and which is predetermined based upon the particular biochip being used.




Calibration of Viral Diffusion Curves




Referring again to FIG. 1, the directional causality of the VDC is calibrated at step 104 in the context of an NIF, discussed in greater detail below. At least three arbitrarily selected sample points are used to execute the NIF calibration computation. The resulting polynomial is used to extract qualitative coherence properties of the system.




The spectral [Θ] and temporal coherence are incrementally estimated and computed for each mutation/oligonucleotide of interest by an NIF forward estimation computation (described further below). The two estimates are normalized and convolved to yield a cross-correlation function over time. The shape index (i.e., curvature) of the minima is used as a measure of directional causality. Absence of curvature divergence is used to detect high directional causality in the system.
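One way to picture this calibration step is sketched below: two coherence estimates are normalized, cross-correlated over time, and the curvature at the minimum of the correlation serves as the directional-causality indicator. The signals and the curvature threshold are hypothetical; the actual spectral and temporal coherence estimates come from the NIF forward estimation described later.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spectral and temporal coherence estimates for one oligonucleotide
# of interest, sampled over time (stand-ins for the NIF forward estimates).
t = np.linspace(0.0, 10.0, 200)
spectral = np.cos(0.8 * t) + 0.05 * rng.standard_normal(t.size)
temporal = np.cos(0.8 * t - 0.4) + 0.05 * rng.standard_normal(t.size)

# Normalize the two estimates and form their cross-correlation over time.
spectral = (spectral - spectral.mean()) / spectral.std()
temporal = (temporal - temporal.mean()) / temporal.std()
xcorr = np.correlate(spectral, temporal, mode="full") / t.size

# Shape index of the minimum: curvature estimated from the second difference.
i_min = int(np.clip(np.argmin(xcorr), 1, xcorr.size - 2))
curvature = xcorr[i_min - 1] - 2.0 * xcorr[i_min] + xcorr[i_min + 1]

# Absence of curvature divergence (a finite, modest curvature) is read here as
# high directional causality; the threshold of 1.0 is purely illustrative.
print(f"minimum at lag index {i_min}, curvature = {curvature:.4f}")
print("high directional causality" if abs(curvature) < 1.0 else "low directional causality")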




Sample Collection




Samples are collected at step 106 subject to a sample point collection separation amount. The separation amount for two samples is preferably within half a "drug effectiveness mean time" covering a 2σ population level, wherein σ denotes the standard deviation of the period before which effectiveness for a particular drug is indicated. The following are some general guidelines for sample preparation for use with the exemplary method:




It is important that assay specimen requirements be strictly followed to avoid degradation of viral RNA.




A baseline should be established for each patient with two specimens drawn two to four weeks apart.




Patients should be monitored periodically, every three to four months or more frequently if therapy is changed. A viral load level that remains at baseline or a rising level indicates a need for change in therapy.




Too much significance should not be given to any one viral load result. Only sustained increases or decreases of 0.001-0.01 log [conventional methods typically require a 0.3-0.5 log change] or more should be considered significant. Biological and technical variation of up to 0.01 log [typical conventional limit: 0.3 log] is possible. Also, recent immunization, opportunistic infections and other conditions may cause transient increases in viral load levels.




A new baseline for each patient should be established when changing laboratories or methods.




Recommendations for frequency of testing are as follows:




establish baseline: 2 measurements, 2 to 4 weeks apart




every 3 to 4 months or in conjunction with CD4


+


T-cell counts




3 to 4 weeks after initiating or changing antiretroviral therapy




shorter intervals as critical decisions are made.




measurements 2-3 weeks apart to determine a baseline measurement.




repeat every 3-6 months thereafter in conjunction with CD4 counts to monitor viral load and T-cell count.




avoid viral load measurements for 3-4 weeks following an immunization or within one month of an infection.




a new baseline for each patient should be established when changing laboratories or methods.




The samples are applied to a prefabricated DNA biomicroarray to generate one or more dot spectrograms each denoted Φ(i,j) for i:1 to N, and j:1 to M. The first sample is referred to herein as the k=1 sample, the second as the k=2 sample, and so on.




Interferometric Enhancement of the Dot Spectrogram




Each dot spectrogram provided by the DNA biomicroarray is filtered at step 108 to yield enhanced dot spectrograms Φ(κ), either by performing a conventional nucleic acid assay amplification or by applying preconditioning and normalization steps as described in the co-pending patent application having Ser. No. 09/253,789, now U.S. Pat. No. 6,136,541, entitled "Method And Apparatus For Analyzing Hybridized Biochip Patterns Using Resonance Interactions Employing Quantum Expressor Functions". The application is incorporated by reference herein, particularly insofar as the descriptions of the use of preconditioning and normalization curves are concerned.




Fractal Filtering




Each enhanced dot spectrogram is then mapped to the VDC using fractal filtering at step 110, as shown in FIG. 3, by generating a partitioned iterated fractal system (step 302), determining affine parameters for the IFS (step 304), and then mapping the enhanced dot spectrogram onto the VDC using the IFS (step 306).




The VDC representation models a stochastic process given by

  W(f)(x, y) = γ_i · f( (1/σ_i)(x − x_{D_i}, y − y_{D_i}) + (x_{R_i}, y_{R_i}) ) + τ(x − x_{D_i}, y − y_{D_i}) + β_i,   if (x, y) ∈ μ_i^{−1}(1) for some 1 ≤ i ≤ m;

  W(f)(x, y) = 0,   otherwise;

for any (x,y) ∈ R² and f ∈ p(R²).
An exemplary partitioned iterated fractal system (IFS) model for the system is

  W = {Φ_i = (μ_i, T_i)},   i = 1, 2, . . . , m
where the affine parameters for the IFS transformation are given by

  T_i = ( (x_{D_i}, y_{D_i}), (x_{R_i}, y_{R_i}), σ_i = [ s_00^i  s_01^i ; s_10^i  s_11^i ], τ_i = (t_0^i, t_1^i), γ_i, β_i )
where the D-origin is given by (x_{D_i}, y_{D_i}), the R-origin is given by (x_{R_i}, y_{R_i}), the spatial transformation matrix is given by σ_i, the intensity tilting vector is given by τ_i, the contrast adjustment is given by γ_i, the brightness adjustment is given by β_i, and wherein Φ represents the enhanced dot spectrogram and μ represents the calculated expectation match values.
This IFS model maps the dot spectrogram to a point on the VDC, wherein each VDC coordinate is denoted by VDC(t,Θ), such that

  W[Φ, k] → VDC(k, Θ)

wherein k represents a sample. In the above equation, Θ represents the parameters of the IFS map.
Thus the output of step 110 is a set of VDC coordinates, identified as VDC(k, Θ), with one set of coordinates for each enhanced dot spectrogram k = 1, 2, . . . , n.
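A compact sketch of the fractal-filtering step follows. It applies a single affine IFS transformation of the form given above to an enhanced dot spectrogram and reduces the result to a scalar index used to look up a VDC coordinate. The transformation parameters, the spectrogram and the lookup rule are hypothetical and only illustrate the shape of the mapping W[Φ, k] → VDC(k, Θ).

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical enhanced dot spectrogram Phi (N x M hixel amplitudes).
phi = rng.random((16, 16))

# One affine IFS transformation T_i: domain origin, range origin, 2x2 spatial
# matrix sigma_i, intensity tilt tau_i, contrast gamma_i and brightness beta_i.
x_d, y_d = 2, 2                   # D-origin
x_r, y_r = 1, 1                   # R-origin
sigma = np.array([[2.0, 0.0],     # spatial transformation matrix
                  [0.0, 2.0]])
tau = np.array([0.01, 0.02])      # intensity tilting vector
gamma_i, beta_i = 0.9, 0.05       # contrast and brightness adjustments

def ifs_apply(f, x, y):
    """One map: contrast * f(spatially transformed point) + intensity tilt + brightness."""
    u = np.linalg.solve(sigma, np.array([x - x_d, y - y_d])) + np.array([x_r, y_r])
    ui, uj = int(round(u[0])) % f.shape[0], int(round(u[1])) % f.shape[1]
    return gamma_i * f[ui, uj] + tau @ np.array([x - x_d, y - y_d]) + beta_i

# Apply the map over the whole spectrogram and collapse it to a scalar feature.
w_phi = np.array([[ifs_apply(phi, i, j) for j in range(phi.shape[1])]
                  for i in range(phi.shape[0])])
feature = w_phi.mean()

# Hypothetical lookup: map the scalar feature onto a coordinate of a tabulated VDC.
vdc_x = np.linspace(-1.0, 1.0, 101)
vdc_coordinate = vdc_x[int(np.clip(feature, 0.0, 1.0) * (vdc_x.size - 1))]
print(f"W[Phi] feature = {feature:.3f}, mapped VDC coordinate = {vdc_coordinate:.2f}")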




The effect of the steps of FIG. 3 is illustrated in FIG. 4, which shows a set of dot spectrograms 450, 451 and 452 and a VDC 454. As illustrated, each dot spectrogram is mapped to a point on the VDC. Convergence toward a single point on the VDC implies ineffectiveness of the viral therapy. A convergence test is described below.




Uncertainty Compensation




With reference to FIG. 5, any uncertainty in the coordinates VDC(k, Θ) is compensated using non-linear information filtering as follows. Biomicroarray dispersion coefficients, hybridization process variability values and empirical variance are determined at step 402. The biomicroarray dispersion coefficients, hybridization process variability values and empirical variance are then converted at step 404 to parameters for use in the NIF. The NIF is then applied at step 406 to the VDC coordinates generated at step 106 of FIG. 1.




The nonlinear information filter (NIF) is a nonlinear variant of the Extended Kalman Filter. A nonlinear system is considered. By linearizing the state and observation equations, a linear estimator which keeps track of total state estimates is provided. The linearized parameters and filter equations are expressed in information space. This gives a filter that predicts and estimates information about nonlinear state parameters given nonlinear observations and nonlinear system dynamics.




The Information Filter (IF) is essentially a Kalman Filter expressed in terms of measures of the amount of information about the parameter of interest instead of tracking the states themselves; i.e., it tracks the inverse covariance form of the Kalman filter. Information here is in the Fisher sense, i.e., a measure of the amount of information about a parameter present in the observations.




Uncertainty bars are estimated using the NIF algorithm. The parameters depend on the biomicroarray dispersion coefficients, hybridization process variability and empirical variance indicated in the trial studies.




One particular advantage of the method of the invention is that it can also be used to capture the dispersion from individual to individual, therapy to therapy, etc. It is extremely useful and enabling to the method in that it can be a priori analytically set to a prechosen value and can be used to control the quality of the biomicroarray output mapping to VDC coordinates.




The biomicroarray dispersion coefficients, hybridization process variability values and empirical variance are determined as follows. Palm generator functions are used to capture stochastic variability in hybridization binding efficacy. This method draws upon results in stochastic integral geometry and geometric probability theory.




Geometric measures are constructed to estimate and bound the amplitude wanderings to facilitate detection. In particular we seek a measure for each mutation-recognizer centered (MRC-) hixel that is invariant to local degradation, a measure which can be expressed by multiple integrals of the form

  m(Z) = ∫_Z f(z) dz
where Z denotes the set of mutations of interest. In other words, we determine the function f(z) under the condition that m(z) should be invariant with respect to all dispersions ξ. Also, up to a constant factor, this measure is the only one which is invariant under a group of motions in a plane. In principle, we derive deterministic analytical transformations on each MRC-hixel that map an error-elliptic dispersion bound defined on R² (the two-dimensional Euclidean space, i.e., the oxel layout) onto measures defined on R. The dispersion bound is given by

  Log_4((i,j) | z).
Such a representation of uniqueness facilitates the rapid decimation of the search space. It is implemented using a filter constructed using measure-theoretic arguments. The transformation under consideration has its theoretical basis in the Palm Distribution Theory for point processes in Euclidean spaces, as well as in a new treatment of the problem of probabilistic description of MRC-hixel dispersion generated by geometrical processes. The latter is reduced to a calculation of intensities of point processes. Recall that a point process in some product space E × F is a collection of random realizations of that space represented as {(e_i, f_i) | e_i ∈ E, f_i ∈ F}.




The Palm distribution Π of a translation (T_n) invariant, finite intensity point process in R^n is defined to be the conditional distribution of the process. Its importance is rooted in the fact that it provides a complete probabilistic description of a geometrical process.




In the general form, the Palm distribution can be expressed in terms of a Lebesgue factorization of the form

  E_P N* = Λ L_N

where Π and Λ completely and uniquely determine the source distribution P of the translation invariant point process. Also, E_P N* denotes the first moment measure of the point process and L_N is a probability measure.
Thus a determination of Π and Λ is needed which can uniquely encode the dispersion and amplitude wandering associated with the MRC-hixel. This is achieved by solving a set of equations involving the Palm Distribution for each hybridization (i.e., mutation of interest). Each hybridization is treated as a manifestation of a stochastic point process in R².




In order to determine Π and Λ we have implemented the following measure-theoretic filter:




Determination of Λ




Λ is determined using integral formulae constructed from the marginal density functions for the point spread associated with MRC-hixel(i,j).




The oligonucleotide density per oxel ρ_{m(i,j)}, PCR amplification protocol (σ_m), fluorescence binding efficiency (η_m) and imaging performance (ω̃_m) provide the continuous probability density function for amplitude wandering in the m-th MRC-hixel of interest. Let this distribution be given by p(ρ_{m(i,j)}, σ_m, η_m, ω̃_m).




The method requires a preset binding dispersion limit to be provided to compute Λ. The second moment of the function p(ρ_{m(i,j)}, σ_m, η_m, ω̃_m) at the SNR = 0 condition is used to provide the bound.




Determination of Π.




Π is obtained by solving the inverse problem

  Π = Θ * P
where

  P = ∫_{τ_1}^{τ_2} p(ρ_{m(i,j)}, σ_m, η_m, ω̃_m) dτ

where τ_1 and τ_2 represent the normalized hybridization dispersion limits.
The numbers are empirically plugged in. The values of 0.1 and 0.7 are appropriate, respectively signifying a loss of 10%-70% hybridization. Also, Θ denotes the distribution of a known point process. The form 1/(1+exp(p( . . . ))) is employed herein to represent Θ.
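A rough numerical sketch of this step is given below. The amplitude-wandering density p(·) is a stand-in Gaussian, the dispersion limits 0.1 and 0.7 are the values quoted above, Θ is represented by the logistic form 1/(1 + exp(p(·))), and the product Θ·P is used only as an illustrative stand-in for the inverse problem Π = Θ * P:

import numpy as np

# Stand-in continuous density p(.) for amplitude wandering in one MRC-hixel,
# folding together oligo density, PCR protocol, binding efficiency and imaging.
# The Gaussian shape and its parameters are hypothetical.
def p(tau):
    return np.exp(-0.5 * ((tau - 0.4) / 0.15) ** 2) / (0.15 * np.sqrt(2.0 * np.pi))

# Normalized hybridization dispersion limits quoted in the text:
# 0.1 and 0.7, signifying a loss of 10%-70% hybridization.
tau1, tau2 = 0.1, 0.7
tau = np.linspace(tau1, tau2, 601)
density = p(tau)

# P = integral of p(.) between the dispersion limits (simple trapezoid rule).
P = float(np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(tau)))

# Theta represented by the logistic form 1/(1 + exp(p(.))); the pointwise
# product with P stands in for Pi = Theta * P.
theta = 1.0 / (1.0 + np.exp(density))
Pi = theta * P

print(f"P = {P:.3f}")
print(f"Pi over the dispersion limits: [{Pi.min():.3f}, {Pi.max():.3f}]")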




The biomicroarray dispersion coefficients, hybridization process variability values and empirical variance are then converted to parameters at step 304 for use in the NIF as follows.




The NIF is represented by:




Predicted State=f(current state, observation model, information uncertainty, information model)




Detailed equations are given below.




In the biomicroarray context, the NIF is an information-theoretic filter that predicts and estimates information about nonlinear state parameters (quality of the observable) given nonlinear observations (e.g., post-hybridization imaging) and nonlinear system dynamics (spatio-temporal hybridization degradation). The NIF is expressed in terms of measures of the amount of information about the observable (i.e., the parameter of interest) instead of tracking the states themselves. It has been defined as the inverse covariance form of the Kalman filter, where the information is in the Fisher sense, i.e., a measure of the amount of information about o_l present in the observations Z_k, where the Fisher information matrix is the covariance of the score function.




In a classical sense the biomicroarray output samples can be described by the nonlinear discrete-time state transition equation of the form:

  VDC(k) = f(VDC(k−1), Φ(k−1), k) + v(k)

where VDC(k−1) is the state at time instant (k−1),

Φ(k−1) is the input vector (embodied by dosage and/or therapy),

v(k) is some additive noise, corresponding to the biomicroarray dispersion as computed by the Palm generator functions above,

VDC(k) is the state at time k,

and f(k, ., .) is the nonlinear state transition function mapping the previous state and current input to the current state. In this case it is the fractal mapping that provides the VDC coordinate at time k.
The observations of the state of the system are made according to a non-linear observation equation of the form

  z(k) = h(VDC(k)) + w(k)
where z(k) is the observation made at time k




VDC(k) is the state at time k,




w(k) is some additive observation noise




and h(.,k) is the current non-linear observation model mapping current state to observations, i.e., sequence-by-hybridization made at time k,




v(k) and w(k) are temporally uncorrelated and zero-mean. This holds for the biomicroarray because protocol uncertainties, binding dynamics and hybridization degradation are unrelated and additive. The process and observation noises are uncorrelated:

  E[v(i) w^T(j)] = 0, ∀ i, j.
The dispersion coefficients together define the nonlinear observation model.




The nonlinear information prediction equations are given by

  ŷ(k|k−1) = Y(k|k−1) f(k, V̂DC(k−1|k−1), u(k−1))

  Y(k|k−1) = [∇f_x(k) Y^{−1}(k−1|k−1) ∇f_x^T(k) + Q(k)]^{−1}
The nonlinear estimation equations are given by

  ŷ(k|k) = ŷ(k|k−1) + i(k)

  Y(k|k) = Y(k|k−1) + I(k)

where

  I(k) = ∇h_x^T(k) R^{−1}(k) ∇h_x(k)

  i(k) = ∇h_x^T(k) R^{−1}(k) [v(k) + ∇h_x(k) V̂DC(k|k−1)]

where

  v(k) = z(k) − h(x̂(k|k−1)).
In this method the NIF helps to bound the variability in the VDC coordinate mapping from sample to sample so that dosage and therapy effectiveness can be accurately tracked.
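A self-contained sketch of the prediction and estimation equations above, in information form for a one-dimensional VDC state, is given below. The state-transition function f, observation function h and noise levels are hypothetical; only the structure of the Y, ŷ propagation and the I(k), i(k) terms follows the equations.

import numpy as np

# Hypothetical one-dimensional nonlinear models for the VDC state.
def f(x):            # state transition (fractal-mapping stand-in)
    return 0.95 * x + 0.05 * np.sin(x)

def f_jac(x):        # gradient of f with respect to the state
    return 0.95 + 0.05 * np.cos(x)

def h(x):            # observation model (post-hybridization readout stand-in)
    return x ** 2

def h_jac(x):        # gradient of h with respect to the state
    return 2.0 * x

Q, R = 0.01, 0.04    # process and observation noise variances (hypothetical)

def nif_step(y_info, Y_info, z, x_est):
    """One NIF cycle: information-form prediction followed by estimation."""
    # Prediction: Y(k|k-1) = [grad_f Y^-1 grad_f^T + Q]^-1, yhat = Y(k|k-1) f(x_est).
    F = f_jac(x_est)
    Y_pred = 1.0 / (F * (1.0 / Y_info) * F + Q)
    x_pred = f(x_est)
    y_pred = Y_pred * x_pred

    # Estimation: I(k) = H^T R^-1 H, i(k) = H^T R^-1 [v(k) + H x_pred],
    # with innovation v(k) = z(k) - h(x_pred).
    H = h_jac(x_pred)
    v = z - h(x_pred)
    I_k = H * (1.0 / R) * H
    i_k = H * (1.0 / R) * (v + H * x_pred)

    Y_new = Y_pred + I_k
    y_new = y_pred + i_k
    return y_new, Y_new, y_new / Y_new   # information vector, matrix, state estimate

# Example: track a slowly varying state from noisy squared observations.
rng = np.random.default_rng(4)
x_true, x_est = 1.0, 0.8
Y_info = 1.0 / 0.5                       # initial information (inverse variance)
y_info = x_est * Y_info                  # initial information vector
for _ in range(10):
    x_true = f(x_true) + rng.normal(0.0, np.sqrt(Q))
    z = h(x_true) + rng.normal(0.0, np.sqrt(R))
    y_info, Y_info, x_est = nif_step(y_info, Y_info, z, x_est)
print(f"true state {x_true:.3f}, NIF estimate {x_est:.3f}")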




The NIF is then applied to the enhanced, fractal-filtered dot spectrogram at step 306 as follows. The states being tracked correspond to the post-hybridization dot spectrogram in this method. The NIF computation as described above specifies the order interval estimate associated with a VDC point. It will explain and bound the variability in viral load estimations for the same patient from laboratory to laboratory.




The NIF also specifies how accurate each VDC coordinate is given the observation model and nucleotide set being analyzed.




Convergence Testing




Referring again to FIG. 1, once any uncertainty is compensated, the VDC coordinates are renormalized at step 114. The renormalized VDC coordinates are patient specific and therapy specific. Alternatively, the coordinates could be virus/nucleotide marker specific. The NIF-compensated VDC coordinates are renormalized to the first diagnostic sample point obtained using the biomicroarray. Thus a patient can be referenced to any point on the VDC.




This renormalization step ensures that VDC properties are maintained, notwithstanding information uncertainties as indicated by the NIF correction terms. The approach is drawn from the "renormalization-group" approach used for dealing with problems with many scales. In general the purpose of renormalization is to eliminate an energy scale, length scale or any other term that could produce an effective interaction with arbitrary coupling constants. The strategy is to tackle the problem in steps, one step for every length scale. In this method the renormalization methodology is abstracted and applied during a posteriori regularization to incorporate information uncertainty and sample-to-sample variations.




This is in contradistinction to current viral load measurement calibration methods that either generate samples with the same protocol and the same assumptions of uncertainty or use some constant correction term. Both existing approaches skew the viral load readout so that measurements are actually accurate only in a limited "information" and "observability" context. This explains the large variations in readings from different laboratories and technicians for the same patient sample.




Specifically, the dynamic NIF correction function is included in the gradient of the VDC at the sample point, normalized in a manner such that the correction term vanishes when the information uncertainty is null. As discussed in the above steps, the NIF correction term is actually derived from the noise statistics of the microarray sample.






$$\langle VDC'(k, \Theta)\rangle = VDC(k, \Theta) + \bigl[\nabla NIF(Y, I)_{k}\bigr]$$






where ∇NIF(Y, I)_k denotes the gradient of the nonlinear information prediction function. Under a perfect observation model this term vanishes.
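A minimal Python sketch of this correction and of the renormalization to the first diagnostic sample point is shown below. The function names are hypothetical; the offset against the first coordinate is only one plausible reading of the renormalization step, and the gradient term is assumed to have been computed from the NIF recursion above.

import numpy as np

def corrected_vdc(vdc_k, nif_grad_k):
    # <VDC'(k, Theta)> = VDC(k, Theta) + [grad NIF(Y, I)_k].
    # Under a perfect observation model the gradient term is zero, so the
    # corrected coordinate reduces to the uncorrected VDC value.
    return np.asarray(vdc_k) + np.asarray(nif_grad_k)

def renormalize_to_first_sample(vdc_series):
    # Renormalize NIF-compensated VDC coordinates to the first diagnostic
    # sample point, shown here simply as an offset (one possible reading).
    vdc_series = np.asarray(vdc_series, dtype=float)
    return vdc_series - vdc_series[0]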




Once initialized, the VDC coordinates are then updated at step 116 by applying the IFS filter W[ ] to the (k+1)th sample:

VDC(k+1, Θ) ← W[Biomicroarray Output, k+1];






A direction convergence test is next performed at step 118 to determine whether the selected therapy has been effective. If convergence establishes that the viral load for the patient is moving in a direction representative of a lower viral load, then the therapy is deemed effective. The system is deemed to be converging toward a lower viral load if and only if:







$$\left\lVert \frac{VDC(t_{k}) - VDC(t_{k-j})}{VDC(t_{k-1}) - VDC(t_{k-j})} \right\rVert > 1
\quad\text{and}\quad
\left\lVert \frac{VDC_{peak} - VDC(t_{k})}{VDC_{peak} - VDC(t_{k-1})} \right\rVert < 1
\quad\text{for } k > 2 \text{ and } j > 0.$$










The above relationships need to be monotonically persistent for at least two combinations of k and j.




Also, date[k] − date[j] < κ · ť, where ť is the characteristic time (in days) and κ captures the population variability. Typically, κ < 1.2.




The peak VDC value is determined from the VDC itself. The peak amplitude is an artifact of the specific parameterization of the Fokker-Planck equation used in deriving the VDC, and is almost always derived independently of the specific sample.
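A sketch of this direction convergence test in Python is given below, assuming zero-based indexing of the VDC samples, a sample-date array aligned with the VDC values, and the interpretation that the date constraint compares the two samples entering each ratio; these assumptions are illustrative only and the function names are placeholders.

import numpy as np

def converging_to_lower_viral_load(vdc, dates, vdc_peak, t_char, kappa=1.2):
    # vdc      : sequence of NIF-corrected VDC coordinates, oldest first
    # dates    : sample dates in days, aligned with vdc
    # vdc_peak : peak VDC value from the Fokker-Planck parameterization
    # t_char   : characteristic time (days); kappa captures population variability
    hits = 0
    for k in range(2, len(vdc)):
        for j in range(1, k):
            # Only compare samples taken within kappa * characteristic time.
            if dates[k] - dates[k - j] >= kappa * t_char:
                continue
            den1 = abs(vdc[k - 1] - vdc[k - j])
            den2 = abs(vdc_peak - vdc[k - 1])
            if den1 == 0 or den2 == 0:
                continue
            ratio1 = abs(vdc[k] - vdc[k - j]) / den1
            ratio2 = abs(vdc_peak - vdc[k]) / den2
            if ratio1 > 1 and ratio2 < 1:
                hits += 1
    # The relationships must persist for at least two (k, j) combinations.
    return hits >= 2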




In connection with step 118, a VDC shift factor Δ may be specified at which a dosage effectiveness decision and/or disease progression decision can be made. The VDC shift factor is applied to estimate the VDC curvature traversed between two measurements.




If the system is deemed to be converging toward a lower viral load, an output signal is generated at step 120 indicating that the therapy of interest is effective. If not, execution proceeds to step 122, wherein VDC scale matching is performed. A key assumption underlying this method is that movement along the VDC is significant if and only if the sample points are within a constant multiple of the temporal scale characterizing the VDC. This does not in any way preclude the pharmacological relevance associated with the data points, but a complete pharmacological interpretation of the sample points is outside the scope of this method. The process is assumed to be cyclostationary at a large time scale, and two or more sample points are assumed to have been mapped to VDC coordinates. The coordinates are then plugged into an analytic expression to estimate the empirical cycle time (ť). This is implemented as described in the following sections.




Again, the empirical cycle time (ť) is used to establish decision convergence.




Scale Matching




Select a forcing function of the form:






$$\Psi = k\, p^{m} \cos(\omega t)$$

where k is a constant and m is a small odd integer (m < 7).




The phase space for this dynamical system is represented by:







$$\dot{x} = \gamma\, \sin\!\left[\frac{\omega}{\alpha}\, \mathrm{erf}_{m}\!\left(\frac{x}{\sigma\sqrt{2}}\right)\right]^{\frac{k}{k+2}} \sin(\omega t)$$




where




$$\mathrm{erf}_{m}(x) = \begin{cases} -1 & \text{if } x < -N \\ \mathrm{erf}(x) & \text{if } \lvert x \rvert \le N \\ 1 & \text{if } x > N \end{cases}$$

and erf_m(·) denotes the error function.




k is set to 1, where 0 < α, γ, ω < 1 are constants.




Let τ_emp denote the cycle time scale for this empirical system.




If log_e(τ_emp/T) > 1 (in step 10), then it is concluded that the time scales do not match.
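The scale-matching test can be sketched as follows in Python, with scipy's standard error function standing in for erf(·), and with the empirical cycle time τ_emp assumed to have been estimated elsewhere (for example, by integrating the phase-space system above). The forcing-function parameter values shown are placeholders, not values prescribed by the method.

import numpy as np
from scipy.special import erf

def forcing(p, t, k=1.0, m=3, omega=0.5):
    # Forcing function Psi = k * p**m * cos(omega * t); m is a small odd integer (< 7).
    return k * p**m * np.cos(omega * t)

def erf_m(x, N):
    # Clipped error function: -1 below -N, erf(x) on [-N, N], +1 above N.
    x = np.asarray(x, dtype=float)
    return np.where(x < -N, -1.0, np.where(x > N, 1.0, erf(x)))

def timescales_mismatch(tau_emp, T):
    # Time scales are declared mismatched when log_e(tau_emp / T) > 1.
    return np.log(tau_emp / T) > 1.0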




Time Scale Testing




Next, a determination is made as to whether an effectiveness timescale has been exceeded at step 124 by:




checking whether the time step between successive samplings has exceeded T, that is, determining whether Time_{k+1} − Time_k > T such that VDC(k+1, Θ) − VDC(k, Θ) < ζ, where ζ is set to 0.0001 and wherein T is given by







$$T^{*} = \frac{1}{\omega} \arccos\!\left[\, 1 - B\!\left(\tfrac{1}{3}, \tfrac{1}{3}\right)^{\frac{2}{3}} \frac{\alpha\,\omega}{\gamma} \,\right]$$












B(1/3, 1/3) represents the Beta function evaluated at the coordinates (1/3, 1/3). In fact, any B(1/(2i+1), 1/(2i+1)) with 1 < i < 7 can be used.
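The following sketch computes T* with scipy's Beta function, assuming the grouping of α, ω, and γ shown in the expression above (an assumption, since the original layout is ambiguous), and then applies the step 124 check on the sampling interval. The function names are illustrative placeholders.

import numpy as np
from scipy.special import beta

def effectiveness_timescale(alpha, omega, gamma, i=1):
    # T* = (1/omega) * arccos[1 - B(1/(2i+1), 1/(2i+1))**(2/3) * alpha*omega/gamma];
    # i = 1 gives B(1/3, 1/3). The argument is clipped to arccos's domain.
    b = beta(1.0 / (2 * i + 1), 1.0 / (2 * i + 1))
    arg = 1.0 - b ** (2.0 / 3.0) * alpha * omega / gamma
    return np.arccos(np.clip(arg, -1.0, 1.0)) / omega

def effectiveness_timescale_exceeded(time_k, time_k_plus_1, T):
    # Step 124 check: has the time between successive samplings exceeded T?
    return (time_k_plus_1 - time_k) > T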




If Time_{k+1} − Time_k > T, then an output signal is generated at step 126 indicating that either

no change in viral load is concluded, OR

the therapy is deemed ineffective, OR

the dosage is deemed suboptimal.




If Time_{k+1} − Time_k < T, then another sample is processed by repeating all steps beginning with Step 4, wherein a dot spectrogram is generated for a new sample.




If the effectiveness time scale has been exceeded, then a signal is output indicating that no determination can be made as to whether the therapy of interest is effective. If the time scale is not exceeded, then execution returns to step 106 for processing another sample, if available, and the processing steps are repeated.




Alternative Implementations




Details regarding a related implementation may be found in co-pending U.S. patent application Ser. No. 09/253,792, now U.S. Pat. No. 6,142,681, filed contemporaneously herewith, entitled "Method and Apparatus for Interpreting Hybridized Bioelectronic DNA Microarray Patterns Using Self Scaling Convergent Reverberant Dynamics", which is incorporated by reference herein.




The exemplary embodiments have been primarily described with reference to flow charts illustrating pertinent features of the embodiments. Each method step also represents a hardware or software component for performing the corresponding step. These components are also referred to herein as a “means for” performing the step. It should be appreciated that not all components of a complete implementation of a practical system are necessarily illustrated or described in detail. Rather, only those components necessary for a thorough understanding of the invention have been illustrated and described in detail. Actual implementations may contain more components or, depending upon the implementation, may contain fewer components.




The description of the exemplary embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art and the generic principles defined herein may be applied to other embodiments without the use of the inventive faculty. Thus, the invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.



Claims
  • 1. A technique for determining viral load within a patient sample applied to an arrayed information structure, where the arrayed information structure has a plurality of elements that emit data indicative of viral load, based on digitized output patterns from the arrayed information structure, comprising the steps of:interferometrically enhancing the output patterns to cause interference between the output patterns and a reference wave; and analyzing the interferometrically enhanced output patterns to identify data indicative of viral load to determine viral load.
  • 2. The technique of claim 1 wherein the interferometric enhancement comprises the step of: inducing resonances based on interference between an expressor function and spectral characteristics of the output patterns; and wherein the analysis step comprises the step of detecting the resonances, if any, at each element in the interferometrically enhanced output patterns from the arrayed information structure.
  • 3. The technique of claim 2 wherein the interferometric enhancement further includes the step of tessellating the interferometrically enhanced output patterns prior to the induction of resonances.
  • 4. The technique of claim 2 wherein the expressor function is chosen from a group comprising: stochastic expressor functions, quantum expressor functions.
  • 5. The technique of claim 2 wherein the spectral characteristics are selected from a group comprising: noise, signal, and noise coupled to signal.
  • 6. The technique of claim 2 further including the initial steps of generating a set of nonlinear expressor functions by: calculating values representative of a pre-selected Hamiltonian function; calculating harmonic amplitudes for the Hamiltonian function; generating an order function from the Hamiltonian function; measuring entrainment states of the order function; and modulating the order function using the entrainment states to yield the expressor function.
  • 7. The technique of claim 2 wherein the arrayed information structure is a microarray.
  • 8. The technique of claim 1 wherein the arrayed information structure embodies measurements selected from a group comprising: intensity, amplitude, and phase.
  • 9. The technique of claim 1 wherein the analysis step comprises the step of mapping the interferometrically enhanced output pattern to coordinates on a viral diffusion curve.
  • 10. The technique of claim 9 wherein the mapped coordinates are non-linearly filtered.
  • 11. A system for determining viral load within a digitized image of a biological sample measurement taken from a microarray comprising:an interferometric unit configured to generate an interference between the digitized image and a reference wave to enhance the digitized image; and an analysis unit for analyzing the interferometrically enhanced digitized image to determine viral load.
  • 12. The system of claim 11, wherein said interferometric unit is configured to induce resonances based on interference between an expressor function and spectral characteristics of the digitized image.
  • 13. The system of claim 12, wherein said analysis unit is configured to detect resonances, if any, within the interferometrically enhanced digitized image that are associated with viral load.
  • 14. The system of claim 12 wherein the expressor function is chosen from a group comprising: stochastic expressor functions, quantum expressor functions.
  • 15. The system of claim 11 wherein said interferometric unit tessellates the digitized image of the output pattern prior to generating an interference between the digitized image and a reference wave.
  • 16. The system of claim 11 wherein the digitized image embodies measurements from a group comprising: intensity, amplitude, phase.
  • 17. The system of claim 11 wherein the analysis unit maps the interferometrically enhanced digitized image to a viral diffusion curve.
  • 18. The system of claim 11 wherein the analysis unit maps the interferometrically enhanced digitized image to a viral diffusion curve using fractal filtering.
  • 19. A system for determining the level of a biological indicator within a patient sample applied to an arrayed information structure, where the arrayed information structure emits data indicative of the biological indicator, based on digitized images of the arrayed information structure, comprising:signal processing means for generating interference between the digitized image and a reference wave to enhance the digitized image; and analysis means for analyzing the enhanced digitized image to determine the level of the biological indicator.
  • 20. A system for determining the level of a biological indicator within a patient sample applied to an arrayed information structure, where the arrayed information structure emits data indicative of the biological indicator, based on digitized output patterns from the arrayed information structure, comprising:an interferometric unit configured to generate an interference between the digitized image and a reference wave to enhance the digitized output pattern; and an analysis unit for analyzing the interferometrically enhanced digitized output pattern to determine the level of the biological indicator.
  • 21. A system for determining the level of a biological indicator within a patient sample applied to an arrayed information structure, where the arrayed information structure emits data indicative of the biological indicator, based on digitized output patterns from the arrayed information structure, comprising:an interferometric unit configured to generate an interference between the digitized output pattern and a reference wave to enhance the digitized output pattern; and an analysis unit for analyzing the interferometrically enhanced digitized output pattern to determine the level of the biological indicator.
RELATED APPLICATION

This application is a Continuation of U.S. patent application Ser. No. 09/523,539 entitled “Method and System for Quantitation of Viral Load Using Microarrays” filed Mar. 10, 2000, which is a Continuation of U.S. patent application Ser. No. 09/253,791, now U.S. Pat. No. 6,245,511, entitled “Method and Apparatus for Exponentially Convergent Therapy Effectiveness Monitoring Using DNA Microarray Based Viral Load Measurements” filed Feb. 22, 1999.

US Referenced Citations (26)
Number Name Date Kind
5631734 Stern May 1997 A
5733729 Lipshutz Mar 1998 A
5858659 Sapolsky Jan 1999 A
5925525 Fodor Jul 1999 A
5968740 Fodor Oct 1999 A
5974164 Chee Oct 1999 A
6025601 Trulson Feb 2000 A
6066454 Lipshutz May 2000 A
6090555 Fiekowsky Jul 2000 A
6171793 Phillips Jan 2001 B1
6185561 Balaban Feb 2001 B1
6223127 Berno Apr 2001 B1
6225625 Pirrung May 2001 B1
6228593 Lipshutz May 2001 B1
6229911 Balaban May 2001 B1
6242180 Chee Jun 2001 B1
6294327 Walton Sep 2001 B1
6308170 Balaban Oct 2001 B1
6334316 Maeda et al. Jan 2002 B1
6342355 Hacia Jan 2002 B1
6361937 Stryer Mar 2002 B1
6368799 Chee Apr 2002 B1
6391550 Lockhart May 2002 B1
6420108 Mack Jul 2002 B2
6468744 Cronin Oct 2002 B1
6490533 Weiner Dec 2002 B2
Non-Patent Literature Citations (29)
Entry
Lin et al., “A Porous Silicon-Based Optical Interferometric Biosensor,” Science, Oct. 31, 1997, vol. 278, pp. 840-843.*
Merigan, T., “Individualization of therapy using viral markers,” Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology, 1995, vol. 10, suppl. 1, pp. S41-46.*
McNamara et al., Theory of Stochastic Resonance American Physical Society, May 1, 1989 pp. 4854-4869.
Lofstedt et al., Quantum Stochastic Resonance, American Physical Society, Mar. 28, 1994 pp. 1947-1950.
Simonotto et al., “Visual Perception of Stochastic Resonance”American Physical Society, Feb. 10, 1997, pp. 1186-1189.
Daido, “Multibranch Entrainment and Scaling in Large Populations of Coupled Oscillators”, American Physical Society, pp. 1406-1409.
Goychuk et al., “Quantum Stochastic Resonance in Parallel”, New Journal of Physics, Aug. 27, 1999, pp. 14.1-14.14.
Gammaitoni et al., “Extraction of Periodic Signals from a Noise Background”, Physics Letters A, Dec. 4, 1989, pp. 59-62.
Ando et al., “Stochastic Resonance Theory and Applications”, 2000, pp. 11-91, Kluwer Academic Publishers, Boston, USA.
Grifoni et al., “Decoherence and Preparation Effects in the Dissipative Two-State System”, unknown publication date.
Nishiyama, “Numerical Analysis of the Dissipative Two-State System With the Density-Matrix Hilbert-space-reduction Algorithm”, The European Physical Journal B, Spring 1999, pp. 547-554.
Brody et al., “Geometry of Quantum Statistical Inference”, Physical Review Letter, Sep. 30, 1999, vol. 77, No. 14., pp. 2851-2854.
Lindner et al., “Array Enhanced Stochastic Resonance and Spatiotemporal Synchronization”, Physical Review Letters, Jul. 3, 1995, vol. 75, No. 1. pp. 3-6.
Kilin et al., “Complex Quantum Structure of Nonclassical Superposition States and Quantum Instability in Resonance Fluorescence”, Feb. 12, 1996, Physical Review Letters, vol. 76, No. 7. pp. 1051-1054.
Wornell et al., “Estimation of Fractal Signals From Noisy Measurements Using Wavelets”, IEEE Transactions on Signal Processing, Mar. 1992, pp. 611-623, vol. 40, No. 3.
Vitali et al., “Quantum Stochastic Resonance in the Dissipative Two-State System”, Jul. 1995, Il Nuovo Cimento, pp. 959-967, vol. 17D, No. 7-8.
Locher et al., “Spatiotemporal Stochastic Resonance in a System of Coupled Diode Resonators”, Physical Review Letters, Dec. 2, 1996, pp. 4698-4701, vol. 77., No. 23.
Gammaitoni et al., “Stochastic Resonance”, Review of Modern Physics, Jan. 1998, pp. 223-287, vol. 70, No. 1.
Somaroo et al., “Expressing the Operations of Quantum Computing in Multiparticle Geometric Algebra”, Jan. 3, 1998, pp. 1-10.
Darling et al., “Adiabatic Analysis of Quantum Dynamics”, Mar. 3, 1997, Physical Review Letters, pp. 1731-1734, vol. 78, No. 9.
Stinchcombe et al., “Application of Operator Algebras to Stochastic Dynamics and the Heisenberg Chain”, Physical Review Letters, Jul. 3, 1995, pp. 140-143, vol. 75, No. 1.
Lemm et al., “A Bayesian Approach to Inverse Quantum Statistics”, Physics Review Letters,, Sep. 12, 2000, vol. 84.
Jordan et al., “The Variational Formulation of the Fokker-Planck Equation”, Siam. J. Math. Anal., pp. 1-17, Jan. 1998, vol. 29, No. 1.
Van Leeuwen, “Causality and Symmetry in Time-Dependent Density-Functional Theory”, Physical Review Letters, pp. 1280-1283, Feb. 9, 1998, vol. 80, No. 6.
Leggett et al., “Dynamics of the Dissipative Two-State System”, Reviews of Modern Physics, Jan. 1987, vol. 59, No. 1, pp. 1-85.
Li et al., “Model-Based Analysis of Oligo-Nucleotide Arrays: Expression Index Computation and Outlier Detection”, Proc. Natl. Acad. Sci., Jan. 2, 2001, vol. 98, No. 1, pp. 31-36.
Maybeck, “Stochastic Models, Estimation, and Control, vol. 1”, Academic Press, 1979, pp. 1-16.
Imafuku et al., “Quantum Stochastic Resonance in Driven Spin-Boson System with Stochastic Limit Approximation”, arXiv:quant-ph/9910025 v. 1, Oct. 6, 1999, pp. 1-9.
Mitaim et al., “Adaptive Stochastic Resonance”, Proceedings of the IEEE, vol. 86, No. 11, Nov. 1998, pp. 2152-2183.
Continuations (2)
Number Date Country
Parent 09/523539 Mar 2000 US
Child 10/189885 US
Parent 09/253791 Feb 1999 US
Child 09/523539 US